Merge remote-tracking branch 'upstream/main' into dev-1.30

Vyom-Yadav committed 2024-04-10 23:04:57 +05:30
commit 37b0b3ed72
59 changed files with 4219 additions and 109 deletions

View File

@ -177,9 +177,11 @@ aliases:
- truongnh1992
sig-docs-ru-owners: # Admins for Russian content
- Arhell
- kirkonru
- shurup
sig-docs-ru-reviews: # PR reviews for Russian content
- Arhell
- kirkonru
- shurup
sig-docs-pl-owners: # Admins for Polish content
- mfilocha

View File

@ -29,7 +29,7 @@ Falls eine Nachricht ähnlich wie die Folgende zu sehen ist, ist kubectl nicht k
The connection to the server <server-name:port> was refused - did you specify the right host or port?
```
For example, if you are trying to start a Kubernetes cluster locally on a laptop, a tool such as Minikube must be installed first. After that, the commands mentioned above can be run again.
For example, if you are trying to start a Kubernetes cluster locally on a laptop, a tool such as [Minikube](https://minikube.sigs.k8s.io/docs/start/) must be installed first. After that, the commands mentioned above can be run again.
If kubectl cluster-info returns a URL but you cannot access the cluster, check whether kubectl was configured correctly:

View File

@ -150,7 +150,7 @@ channel. You can also explore the [Windows Operational Readiness test suite](htt
and make contributions directly to the GitHub repository.
Special thanks to Kulwant Singh (AWS), Pramita Gautam Rana (VMWare), Xinqi Li
(Google) for their help in making notable contributions to the specification. Additionally,
(Google) and Marcio Morales (AWS) for their help in making notable contributions to the specification. Additionally,
appreciation goes to James Sturtevant (Microsoft), Mark Rossetti (Microsoft),
Claudiu Belu (Cloudbase Solutions) and Aravindh Puthiyaparambil
(Softdrive Technologies Group Inc.) from the SIG Windows team for their guidance and support.

View File

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 29 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 51 KiB

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 56 KiB

View File

@ -0,0 +1,251 @@
---
layout: blog
title: "DIY: Create Your Own Cloud with Kubernetes (Part 1)"
slug: diy-create-your-own-cloud-with-kubernetes-part-1
date: 2024-04-05T07:30:00+00:00
---
**Author**: Andrei Kvapil (Ænix)
At Ænix, we have a deep affection for Kubernetes and dream that all modern technologies will soon
start utilizing its remarkable patterns.
Have you ever thought about building your own cloud? I bet you have. But is it possible to do this
using only modern technologies and approaches, without leaving the cozy Kubernetes ecosystem?
Our experience in developing Cozystack required us to delve deeply into it.
You might argue that Kubernetes is not intended for this purpose, and that you could simply use
OpenStack for bare metal servers and run Kubernetes inside it as intended. But by doing so, you
would simply shift the responsibility from your hands to the hands of OpenStack administrators.
This would add at least one more huge and complex system to your ecosystem.
Why complicate things? After all, Kubernetes already has everything needed to run tenant
Kubernetes clusters at this point.
I want to share with you our experience in developing a cloud platform based on Kubernetes,
highlighting the open-source projects that we use ourselves and believe deserve your attention.
In this series of articles, I will tell you our story about how we prepare managed Kubernetes
from bare metal using only open-source technologies: starting from the basic level of data
center preparation, through running virtual machines, isolating networks, and setting up
fault-tolerant storage, to provisioning full-featured Kubernetes clusters with dynamic volume
provisioning, load balancers, and autoscaling.
With this article, I start a series consisting of several parts:
- **Part 1**: Preparing the groundwork for your cloud. Challenges faced during the preparation
and operation of Kubernetes on bare metal and a ready-made recipe for provisioning infrastructure.
- **Part 2**: Networking, storage, and virtualization. How to turn Kubernetes into a tool for
launching virtual machines and what is needed for this.
- **Part 3**: Cluster API and how to start provisioning Kubernetes clusters at the push of a
button. How autoscaling works, dynamic provisioning of volumes, and load balancers.
I will try to describe various technologies as independently as possible, but at the same time,
I will share our experience and why we came to one solution or another.
To begin with, let's understand the main advantage of Kubernetes and how it has changed the
approach to using cloud resources.
It is important to understand that the use of Kubernetes in the cloud and on bare metal differs.
## Kubernetes in the cloud
When you operate Kubernetes in the cloud, you don't worry about persistent volumes,
cloud load balancers, or the process of provisioning nodes. All of this is handled by your cloud
provider, who accepts your requests in the form of Kubernetes objects. In other words, the server
side is completely hidden from you, and you don't really want to know how exactly the cloud
provider implements it, as it's not in your area of responsibility.
{{< figure src="cloud.svg" alt="A diagram showing cloud Kubernetes, with load balancing and storage done outside the cluster" caption="A diagram showing cloud Kubernetes, with load balancing and storage done outside the cluster" >}}
Kubernetes offers convenient abstractions that work the same everywhere, allowing you to deploy
your application on any Kubernetes in any cloud.
In the cloud, you commonly have several separate entities: the Kubernetes control plane,
virtual machines, persistent volumes, and load balancers. Using these entities, you can create highly dynamic environments.
Thanks to Kubernetes, virtual machines are now only seen as a utility entity for utilizing
cloud resources. You no longer store data inside virtual machines. You can delete all your virtual
machines at any moment and recreate them without breaking your application. The Kubernetes control
plane will continue to hold information about what should run in your cluster. The load balancer
will keep sending traffic to your workload, simply changing the endpoint to send traffic to a new
node. And your data will be safely stored in external persistent volumes provided by the cloud.
This approach is fundamental when using Kubernetes in the cloud. The reason for it is quite obvious:
the simpler the system, the more stable it is, and this simplicity is what you are buying when you
get Kubernetes in the cloud.
## Kubernetes on bare metal
Using Kubernetes in the cloud is really simple and convenient, which cannot be said about bare
metal installations. In the bare metal world, Kubernetes, on the contrary, becomes unbearably
complex. Firstly, because the entire network, backend storage, cloud load balancers, etc. are
usually run not outside, but inside your cluster. As a result, such a system is much more
difficult to update and maintain.
{{< figure src="baremetal.svg" alt="A diagram showing bare metal Kubernetes, with load balancing and storage done inside the cluster" caption="A diagram showing bare metal Kubernetes, with load balancing and storage done inside the cluster" >}}
Judge for yourself: in the cloud, to update a node, you typically delete the virtual machine
(or even use `kubectl delete node`) and you let your node management tooling create a new
one, based on an immutable image. The new node will join the cluster and "just work" as a node,
following a very simple and commonly used pattern in the Kubernetes world.
Many clusters order new virtual machines every few minutes, simply because they can use
cheaper spot instances. However, when you have a physical server, you can't just delete and
recreate it: it often runs some cluster services and stores data, so its update process
is significantly more complicated.
There are different approaches to solving this problem, ranging from in-place updates, as done by
kubeadm, kubespray, and k3s, to full automation of provisioning physical nodes through Cluster API
and Metal3.
I like the hybrid approach offered by Talos Linux, where your entire system is described in a
single configuration file. Most parameters of this file can be applied without rebooting or
recreating the node, including the versions of the Kubernetes control-plane components. At the
same time, it keeps the declarative nature of Kubernetes as much as possible.
This approach minimizes unnecessary impact on cluster services when updating bare metal nodes.
In most cases, you won't need to migrate your virtual machines or rebuild the cluster filesystem
for minor updates.
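To give a feel for this approach, here is a minimal, hedged sketch of a Talos machine configuration (the disk, endpoint, and image tags are illustrative, not taken from the article); most of it can be re-applied to a running node with `talosctl apply-config`.
```yaml
# A minimal, illustrative Talos machine configuration (values are examples).
version: v1alpha1
machine:
  type: controlplane
  install:
    disk: /dev/sda                # disk the immutable OS image is installed to
  kubelet:
    image: ghcr.io/siderolabs/kubelet:v1.29.3
cluster:
  controlPlane:
    endpoint: https://192.168.100.10:6443
  network:
    cni:
      name: none                  # the CNI plugin is delivered separately (e.g. via Flux CD)
```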
## Preparing a base for your future cloud
So, suppose you've decided to build your own cloud. To start somewhere, you need a base layer.
You need to think not only about how you will install Kubernetes on your servers but also about how
you will update and maintain it. Consider the fact that you will have to think about things like
updating the kernel, installing necessary modules, as well as packages and security patches.
There is now much more that you have to take care of yourself, things you don't have to worry
about when using a ready-made Kubernetes in the cloud.
Of course, you can use standard distributions like Ubuntu or Debian, or you can consider specialized
ones like Flatcar Container Linux, Fedora CoreOS, and Talos Linux. Each has its advantages and
disadvantages.
What about us? At Ænix, we use quite a few specific kernel modules like ZFS, DRBD, and OpenvSwitch,
so we decided to go the route of forming a system image with all the necessary modules in advance.
In this case, Talos Linux turned out to be the most convenient for us.
For example, such a config is enough to build a system image with all the necessary kernel modules:
```yaml
arch: amd64
platform: metal
secureboot: false
version: v1.6.4
input:
  kernel:
    path: /usr/install/amd64/vmlinuz
  initramfs:
    path: /usr/install/amd64/initramfs.xz
  baseInstaller:
    imageRef: ghcr.io/siderolabs/installer:v1.6.4
  systemExtensions:
    - imageRef: ghcr.io/siderolabs/amd-ucode:20240115
    - imageRef: ghcr.io/siderolabs/amdgpu-firmware:20240115
    - imageRef: ghcr.io/siderolabs/bnx2-bnx2x:20240115
    - imageRef: ghcr.io/siderolabs/i915-ucode:20240115
    - imageRef: ghcr.io/siderolabs/intel-ice-firmware:20240115
    - imageRef: ghcr.io/siderolabs/intel-ucode:20231114
    - imageRef: ghcr.io/siderolabs/qlogic-firmware:20240115
    - imageRef: ghcr.io/siderolabs/drbd:9.2.6-v1.6.4
    - imageRef: ghcr.io/siderolabs/zfs:2.1.14-v1.6.4
output:
  kind: installer
  outFormat: raw
```
Then we use the `docker` command line tool to build an OS image:
```
cat config.yaml | docker run --rm -i -v /dev:/dev --privileged "ghcr.io/siderolabs/imager:v1.6.4" -
```
And as a result, we get a Docker container image with everything we need, which we can use to
install Talos Linux on our servers. You can do the same; this image will contain all the necessary
firmware and kernel modules.
But the question arises: how do you deliver the freshly formed image to your nodes?
I have been contemplating the idea of PXE booting for quite some time. For example, the
**Kubefarm** project that I wrote an
[article](/blog/2021/12/22/kubernetes-in-kubernetes-and-pxe-bootable-server-farm/) about
two years ago was entirely built using this approach. But unfortunately, it does not help you
deploy your very first parent cluster that will hold the others. So now we have prepared a
solution that helps you do the same using the PXE approach.
Essentially, all you need to do is [run temporary](https://cozystack.io/docs/get-started/)
**DHCP** and **PXE** servers inside containers. Then your nodes will boot from your
image, and you can use a simple Debian-flavored script to help you bootstrap your nodes.
[![asciicast](asciicast.svg)](https://asciinema.org/a/627123)
The [source](https://github.com/aenix-io/talos-bootstrap/) for that `talos-bootstrap` script is
available on GitHub.
This script allows you to deploy Kubernetes on bare metal in five minutes and obtain a kubeconfig
for accessing it. However, many unresolved issues still lie ahead.
## Delivering system components
At this stage, you already have a Kubernetes cluster capable of running various workloads. However,
it is not fully functional yet. In other words, you need to set up networking and storage, and
install necessary cluster extensions such as KubeVirt for running virtual machines, as well as
the monitoring stack and other system-wide components.
Traditionally, this is solved by installing **Helm charts** into your cluster. You can do this by
running `helm install` commands locally, but this approach becomes inconvenient when you want to
track updates, or when you have multiple clusters and want to keep them uniform. In fact, there
are plenty of ways to do this declaratively. To solve this, I recommend following GitOps best
practices, using tools like ArgoCD and FluxCD.
ArgoCD is more convenient for development purposes with its graphical interface and a central
control plane, while FluxCD is better suited for creating Kubernetes distributions. With FluxCD,
you can specify which charts should be installed with which parameters and describe dependencies.
Then, FluxCD will take care of everything for you.
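For illustration, here is a hedged sketch of what such a declarative description can look like with FluxCD (the chart source, version, and values are illustrative and not the actual Cozystack manifests):
```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cilium
  namespace: cozy-cilium
spec:
  interval: 5m
  # dependencies are expressed declaratively; Flux reconciles them in order
  dependsOn:
    - name: cert-manager
      namespace: cozy-cert-manager
  chart:
    spec:
      chart: cilium
      version: "1.14.x"
      sourceRef:
        kind: HelmRepository
        name: cilium
        namespace: flux-system
  values:
    kubeProxyReplacement: true   # illustrative value
```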
It is suggested to perform a one-time installation of FluxCD in your newly created cluster and
provide it with the configuration. FluxCD will then automatically deploy everything necessary,
bringing the cluster to the expected state and allowing it to keep upgrading itself towards the
desired state. For example, after installing our platform you'll see the following pre-configured
Helm charts with system components:
```
NAMESPACE                        NAME                        AGE    READY   STATUS
cozy-cert-manager                cert-manager                4m1s   True    Release reconciliation succeeded
cozy-cert-manager                cert-manager-issuers        4m1s   True    Release reconciliation succeeded
cozy-cilium                      cilium                      4m1s   True    Release reconciliation succeeded
cozy-cluster-api                 capi-operator               4m1s   True    Release reconciliation succeeded
cozy-cluster-api                 capi-providers              4m1s   True    Release reconciliation succeeded
cozy-dashboard                   dashboard                   4m1s   True    Release reconciliation succeeded
cozy-fluxcd                      cozy-fluxcd                 4m1s   True    Release reconciliation succeeded
cozy-grafana-operator            grafana-operator            4m1s   True    Release reconciliation succeeded
cozy-kamaji                      kamaji                      4m1s   True    Release reconciliation succeeded
cozy-kubeovn                     kubeovn                     4m1s   True    Release reconciliation succeeded
cozy-kubevirt-cdi                kubevirt-cdi                4m1s   True    Release reconciliation succeeded
cozy-kubevirt-cdi                kubevirt-cdi-operator       4m1s   True    Release reconciliation succeeded
cozy-kubevirt                    kubevirt                    4m1s   True    Release reconciliation succeeded
cozy-kubevirt                    kubevirt-operator           4m1s   True    Release reconciliation succeeded
cozy-linstor                     linstor                     4m1s   True    Release reconciliation succeeded
cozy-linstor                     piraeus-operator            4m1s   True    Release reconciliation succeeded
cozy-mariadb-operator            mariadb-operator            4m1s   True    Release reconciliation succeeded
cozy-metallb                     metallb                     4m1s   True    Release reconciliation succeeded
cozy-monitoring                  monitoring                  4m1s   True    Release reconciliation succeeded
cozy-postgres-operator           postgres-operator           4m1s   True    Release reconciliation succeeded
cozy-rabbitmq-operator           rabbitmq-operator           4m1s   True    Release reconciliation succeeded
cozy-redis-operator              redis-operator              4m1s   True    Release reconciliation succeeded
cozy-telepresence                telepresence                4m1s   True    Release reconciliation succeeded
cozy-victoria-metrics-operator   victoria-metrics-operator   4m1s   True    Release reconciliation succeeded
```
## Conclusion
As a result, you achieve a highly repeatable environment that you can provide to anyone, knowing
that it operates exactly as intended.
This is actually what the [Cozystack](https://github.com/aenix-io/cozystack) project does, which
you can try out for yourself absolutely free.
In the following articles, I will discuss
[how to prepare Kubernetes for running virtual machines](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2/)
and [how to run Kubernetes clusters with the click of a button](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/).
Stay tuned, it'll be fun!

View File

@ -0,0 +1,260 @@
---
layout: blog
title: "DIY: Create Your Own Cloud with Kubernetes (Part 2)"
slug: diy-create-your-own-cloud-with-kubernetes-part-2
date: 2024-04-05T07:35:00+00:00
---
**Author**: Andrei Kvapil (Ænix)
Continuing our series of posts on how to build your own cloud using just the Kubernetes ecosystem.
In the [previous article](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/), we
explained how we prepare a basic Kubernetes distribution based on Talos Linux and Flux CD.
In this article, we'll show you a few different virtualization technologies in Kubernetes and prepare
everything needed to run virtual machines in Kubernetes, primarily storage and networking.
We will talk about technologies such as KubeVirt, LINSTOR, and Kube-OVN.
But first, let's explain what virtual machines are needed for, and why you can't just use Docker
containers for building your cloud.
The reason is that containers do not provide a sufficient level of isolation.
Although the situation improves year by year, we often encounter vulnerabilities that allow
escaping the container sandbox and elevating privileges in the system.
On the other hand, Kubernetes was not originally designed to be a multi-tenant system, meaning
the basic usage pattern involves creating a separate Kubernetes cluster for every independent
project and development team.
Virtual machines are the primary means of isolating tenants from each other in a cloud environment.
In virtual machines, users can execute code and programs with administrative privilege, but this
doesn't affect other tenants or the environment itself. In other words, virtual machines allow you
to achieve [hard multi-tenancy isolation](/docs/concepts/security/multi-tenancy/#isolation), and to
run in environments where tenants do not trust each other.
## Virtualization technologies in Kubernetes
There are several different technologies that bring virtualization into the Kubernetes world:
[KubeVirt](https://kubevirt.io/) and [Kata Containers](https://katacontainers.io/)
are the most popular ones. But you should know that they work differently.
**Kata Containers** implements the CRI (Container Runtime Interface) and provides an additional
level of isolation for standard containers by running them in virtual machines.
But they still work within the same, single Kubernetes cluster.
{{< figure src="kata-containers.svg" caption="A diagram showing how container isolation is ensured by running containers in virtual machines with Kata Containers" alt="A diagram showing how container isolation is ensured by running containers in virtual machines with Kata Containers" >}}
**KubeVirt** allows running traditional virtual machines using the Kubernetes API. KubeVirt virtual
machines are run as regular Linux processes in containers. In other words, in KubeVirt, a container
is used as a sandbox for running virtual machine (QEMU) processes.
This can be clearly seen in the figure below, by looking at how live migration of virtual machines
is implemented in KubeVirt. When migration is needed, the virtual machine moves from one container
to another.
{{< figure src="kubevirt-migration.svg" caption="A diagram showing live migration of a virtual machine from one container to another in KubeVirt" alt="A diagram showing live migration of a virtual machine from one container to another in KubeVirt" >}}
There is also an alternative project - [Virtink](https://github.com/smartxworks/virtink), which
implements lightweight virtualization using
[Cloud-Hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor) and is initially focused
on running virtual Kubernetes clusters using the Cluster API.
Considering our goals, we decided to use KubeVirt as the most popular project in this area.
Besides, we have extensive expertise with it and have already made a lot of contributions to KubeVirt.
KubeVirt is [easy to install](https://kubevirt.io/user-guide/operations/installation/) and allows
you to run virtual machines out-of-the-box using the
[containerDisk](https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#containerdisk)
feature - this allows you to store and distribute VM images directly as OCI images from a container
image registry.
Virtual machines with containerDisk are well suited for creating Kubernetes worker nodes and other
VMs that do not require state persistence.
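As a hedged example (the image reference and resource sizes are illustrative), a minimal KubeVirt VirtualMachine booting from a containerDisk looks roughly like this:
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: worker-1
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: root
              disk:
                bus: virtio
      volumes:
        - name: root
          # the VM image is pulled from a container registry as an OCI image
          containerDisk:
            image: quay.io/containerdisks/fedora:39
```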
For managing persistent data, KubeVirt offers a separate tool, Containerized Data Importer (CDI).
It allows for cloning PVCs and populating them with data from base images. The CDI is necessary
if you want to automatically provision persistent volumes for your virtual machines, and it is
also required for the KubeVirt CSI Driver, which is used to handle persistent volume claims
from tenant Kubernetes clusters.
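For instance, a hedged sketch of a CDI DataVolume that creates a PVC and populates it from a base image over HTTP (the URL and size are illustrative):
```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-root
spec:
  source:
    http:
      url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```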
But first, you have to decide where and how you will store this data.
## Storage for Kubernetes VMs
With the introduction of the CSI (Container Storage Interface), a wide range of technologies that
integrate with Kubernetes has become available.
In fact, KubeVirt fully utilizes the CSI interface, aligning the choice of storage for
virtualization closely with the choice of storage for Kubernetes itself.
However, there are nuances which you need to consider. Unlike containers, which typically use a
standard filesystem, block devices are more efficient for virtual machines.
Although the CSI interface in Kubernetes allows the request of both types of volumes: filesystems
and block devices, it's important to verify that your storage backend supports this.
Using block devices for virtual machines eliminates the need for an additional abstraction layer,
such as a filesystem, which makes them more performant and in most cases enables the use of the
_ReadWriteMany_ mode. This mode allows concurrent access to the volume from multiple nodes, which
is a critical feature for enabling the live migration of virtual machines in KubeVirt.
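In practice, this means requesting volumes like in the following hedged sketch (the storage class name is illustrative and depends on your backend):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk
spec:
  accessModes:
    - ReadWriteMany      # required for live migration of the VM
  volumeMode: Block      # raw block device, no filesystem layer in between
  resources:
    requests:
      storage: 20Gi
  storageClassName: replicated-block   # illustrative; must support RWX block volumes
```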
The storage system can be external or internal (in the case of hyper-converged infrastructure).
Using external storage in many cases makes the whole system more stable, as your data is stored
separately from compute nodes.
{{< figure src="storage-external.svg" caption="A diagram showing external data storage communication with the compute nodes" alt="A diagram showing external data storage communication with the compute nodes" >}}
External storage solutions are often popular in enterprise systems because such storage is
frequently provided by an external vendor that takes care of its operation. The integration with
Kubernetes involves only a small component installed in the cluster - the CSI driver. This driver
is responsible for provisioning volumes in this storage and attaching them to pods run by Kubernetes.
However, such storage solutions can also be implemented using purely open-source technologies.
One of the popular solutions is [TrueNAS](https://www.truenas.com/) powered by
[democratic-csi](https://github.com/democratic-csi/democratic-csi) driver.
{{< figure src="storage-local.svg" caption="A diagram showing local data storage running on the compute nodes" alt="A diagram showing local data storage running on the compute nodes" >}}
On the other hand, hyper-converged systems are often implemented using local storage (when you do
not need replication) and software-defined storage, often installed directly in Kubernetes,
such as [Rook/Ceph](https://rook.io/), [OpenEBS](https://openebs.io/),
[Longhorn](https://longhorn.io/), [LINSTOR](https://linbit.com/linstor/), and others.
{{< figure src="storage-clustered.svg" caption="A diagram showing clustered data storage running on the compute nodes" alt="A diagram showing clustered data storage running on the compute nodes" >}}
A hyper-converged system has its advantages, for example data locality: when your data is stored
locally, access to it is faster. But there are also disadvantages, as such a system is usually
more difficult to manage and maintain.
At Ænix, we wanted to provide a ready-to-use solution that could be used without the need to
purchase and set up additional external storage, and that was optimal in terms of speed and
resource utilization. LINSTOR became that solution.
Time-tested and industry-popular technologies such as LVM and ZFS as the storage backend give
confidence that data is stored securely. DRBD-based replication is incredibly fast and consumes a
small amount of computing resources.
For installing LINSTOR in Kubernetes, there is the Piraeus project, which already provides
ready-made block storage to use with KubeVirt.
{{< note >}}
In case you are using Talos Linux, as we described in the
[previous article](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/), you will
need to enable the necessary kernel modules in advance, and configure Piraeus as described in the
[instructions](https://github.com/piraeusdatastore/piraeus-operator/blob/v2/docs/how-to/talos.md).
{{< /note >}}
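In that case, the relevant part of the Talos machine config looks roughly like this (a hedged sketch based on the Piraeus guide; check the linked instructions for the exact module list and parameters):
```yaml
machine:
  kernel:
    modules:
      - name: drbd
        parameters:
          - usermode_helper=disabled
      - name: drbd_transport_tcp
```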
## Networking for Kubernetes VMs
Despite having a similar interface (CNI), the network architecture in Kubernetes is actually more
complex and typically consists of many independent components that are not directly connected to
each other. In fact, you can split Kubernetes networking into four layers, which are described below.
### Node Network (Data Center Network)
The network through which nodes are interconnected with each other. This network is usually not
managed by Kubernetes, but it is an important one because, without it, nothing would work.
In practice, bare metal infrastructure usually has more than one such network, e.g.
one for node-to-node communication, a second for storage replication, a third for external access, etc.
{{< figure src="net-nodes.svg" caption="A diagram showing the role of the node network (data center network) on the Kubernetes networking scheme" alt="A diagram showing the role of the node network (data center network) on the Kubernetes networking scheme" >}}
Configuring the physical network interaction between nodes goes beyond the scope of this article,
as in most situations, Kubernetes utilizes already existing network infrastructure.
### Pod Network
This is the network provided by your CNI plugin. The task of the CNI plugin is to ensure transparent
connectivity between all containers and nodes in the cluster. Most CNI plugins implement a flat
network from which separate blocks of IP addresses are allocated for use on each node.
{{< figure src="net-pods.svg" caption="A diagram showing the role of the pod network (CNI-plugin) on the Kubernetes network scheme" alt="A diagram showing the role of the pod network (CNI-plugin) on the Kubernetes network scheme" >}}
In practice, your cluster can have several CNI plugins managed by
[Multus](https://github.com/k8snetworkplumbingwg/multus-cni). This approach is often used in
virtualization solutions based on KubeVirt - [Rancher](https://www.rancher.com/) and
[OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift/virtualization).
The primary CNI plugin is used for integration with Kubernetes services, while additional CNI
plugins are used to implement private networks (VPC) and integration with the physical networks
of your data center.
The [default CNI-plugins](https://github.com/containernetworking/plugins/tree/main/plugins) can
be used to connect bridges or physical interfaces. Additionally, there are specialized plugins
such as [macvtap-cni](https://github.com/kubevirt/macvtap-cni) which are designed to provide
better performance.
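With Multus, such an additional interface is described by a NetworkAttachmentDefinition; a hedged sketch using the standard bridge plugin (the bridge name is illustrative) might look like this:
```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: external-bridge
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br-ex",
      "ipam": {}
    }
```
A pod or virtual machine then references it via the `k8s.v1.cni.cncf.io/networks` annotation.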
One additional aspect to keep in mind when running virtual machines in Kubernetes is the need for
IPAM (IP Address Management), especially for secondary interfaces provided by Multus. This is
commonly managed by a DHCP server operating within your infrastructure. Additionally, the allocation
of MAC addresses for virtual machines can be managed by
[Kubemacpool](https://github.com/k8snetworkplumbingwg/kubemacpool).
On our platform, however, we decided to go another way and rely fully on
[Kube-OVN](https://www.kube-ovn.io/). This CNI plugin is based on OVN (Open Virtual Network), which
was originally developed for OpenStack. It provides a complete network solution for virtual
machines in Kubernetes: Custom Resources for managing IPs and MAC addresses, live migration with
IP addresses preserved between nodes, and the creation of VPCs for physical network separation
between tenants.
In Kube-OVN you can assign separate subnets to an entire namespace or connect them as additional
network interfaces using Multus.
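As a rough illustration, a dedicated Kube-OVN subnet bound to a namespace might be declared like
this. The CIDR, gateway, and namespace name are placeholders for the example.

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: tenant-a
spec:
  # Every pod (and VM) in the tenant-a namespace gets an address from this range
  cidrBlock: 10.66.0.0/16
  gateway: 10.66.0.1
  namespaces:
    - tenant-a
```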
### Services Network
In addition to the CNI plugin, Kubernetes also has a services network, which is primarily needed
for service discovery.
Unlike traditional virtual machines, Kubernetes was originally designed to run pods with random
addresses, and the services network provides a convenient abstraction (stable IP addresses and DNS
names) that always directs traffic to the correct pod.
The same approach is also commonly used with virtual machines in clouds, even though their IPs are
usually static.
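For instance, a Service exposing SSH access to a KubeVirt virtual machine could look roughly like
the sketch below; the label used in the selector is an assumption and depends on how the virtual
machine's pod is labelled in your setup.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-vm-ssh
spec:
  selector:
    # Assumed label on the VM's virt-launcher pod; verify the labels in your cluster
    kubevirt.io/domain: my-vm
  ports:
    - name: ssh
      port: 22
      targetPort: 22
```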
{{< figure src="net-services.svg" caption="A diagram showing the role of the services network (services network plugin) on the Kubernetes network scheme" alt="A diagram showing the role of the services network (services network plugin) on the Kubernetes network scheme" >}}
The implementation of the services network in Kubernetes is handled by the services network plugin.
The standard implementation is called **kube-proxy** and is used in most clusters.
But nowadays, this functionality might be provided as part of the CNI plugin. The most advanced
implementation is offered by the [Cilium](https://cilium.io/) project, which can be run in kube-proxy replacement mode.
Cilium is based on eBPF technology, which allows for efficient offloading of the Linux
networking stack, thereby improving performance and security compared to traditional methods based
on iptables.
In practice, Cilium and Kube-OVN can be easily
[integrated](https://kube-ovn.readthedocs.io/zh-cn/stable/en/advance/with-cilium/) to provide a
unified solution that offers seamless, multi-tenant networking for virtual machines, as well as
advanced network policies and combined services network functionality.
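A minimal sketch of Helm values for running Cilium in kube-proxy replacement mode alongside
Kube-OVN might look as follows. The exact value names and chaining configuration vary between
Cilium and Kube-OVN versions, so treat this only as an orientation and follow the integration
guide linked above.

```yaml
# values.yaml for the Cilium Helm chart (illustrative; verify against your versions)
kubeProxyReplacement: true
k8sServiceHost: 10.0.0.1      # address of the Kubernetes API server (placeholder)
k8sServicePort: 6443
cni:
  chainingMode: generic-veth  # keep Kube-OVN as the primary CNI and chain Cilium after it
  customConf: true
  configMap: cni-configuration
```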
### External Traffic Load Balancer
At this stage, you already have everything needed to run virtual machines in Kubernetes.
But there is actually one more thing: you still need to access your services from outside your
cluster, and an external load balancer will help you organize this.
For bare metal Kubernetes clusters, there are several load balancers available:
[MetalLB](https://metallb.universe.tf/), [kube-vip](https://kube-vip.io/), and
[LoxiLB](https://www.loxilb.io/); [Cilium](https://docs.cilium.io/en/latest/network/lb-ipam/) and
[Kube-OVN](https://kube-ovn.readthedocs.io/zh-cn/latest/en/guide/loadbalancer-service/)
also provide built-in implementations.
The role of an external load balancer is to provide a stable, externally reachable address and direct
external traffic to the services network.
The services network plugin will then direct it to your pods and virtual machines as usual.
{{< figure src="net-loadbalancer.svg" caption="A diagram showing the role of the external load balancer on the Kubernetes network scheme" alt="The role of the external load balancer on the Kubernetes network scheme" >}}
In most cases, setting up a load balancer on bare metal is achieved by creating a floating IP address
on the nodes within the cluster and announcing it externally using ARP/NDP or BGP.
After exploring various options, we decided that MetalLB is the simplest and most reliable solution,
although we do not strictly enforce its use.
Another benefit is that in L2 mode, MetalLB speakers continuously check their neighbours' state by
performing liveness checks using the memberlist protocol.
This enables failover that works independently of the Kubernetes control plane.
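As a rough example, an address pool announced in L2 mode can be configured with two small resources.
The address range below is a placeholder; use addresses that are reachable in your data center network.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external
  namespace: metallb-system
spec:
  addresses:
    - 192.168.100.200-192.168.100.250   # externally reachable range (placeholder)
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external
  namespace: metallb-system
spec:
  ipAddressPools:
    - external
```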
## Conclusion
This concludes our overview of virtualization, storage, and networking in Kubernetes.
The technologies mentioned here are available and already pre-configured on the
[Cozystack](https://github.com/aenix-io/cozystack) platform, where you can try them with no limitations.
In the [next article](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-3/),
I'll detail how, on top of this, you can implement the provisioning of fully functional Kubernetes
clusters with just the click of a button.

View File

@ -0,0 +1,267 @@
---
layout: blog
title: "DIY: Create Your Own Cloud with Kubernetes (Part 3)"
slug: diy-create-your-own-cloud-with-kubernetes-part-3
date: 2024-04-05T07:40:00+00:00
---
**Author**: Andrei Kvapil (Ænix)
Approaching the most interesting phase, this article delves into running Kubernetes within
Kubernetes. Technologies such as Kamaji and Cluster API are highlighted, along with their
integration with KubeVirt.
Previous discussions have covered
[preparing Kubernetes on bare metal](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/)
and
[how to turn Kubernetes into a virtual machine management system](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-2).
This article concludes the series by explaining how, using all of the above, you can build a
full-fledged managed Kubernetes and run virtual Kubernetes clusters with just a click.
First up, let's dive into the Cluster API.
## Cluster API
Cluster API is an extension for Kubernetes that allows the management of Kubernetes clusters as
custom resources within another Kubernetes cluster.
The main goal of the Cluster API is to provide a unified interface for describing the basic
entities of a Kubernetes cluster and managing their lifecycle. This enables the automation of
processes for creating, updating, and deleting clusters, simplifying scaling, and infrastructure
management.
Within the context of Cluster API, there are two terms: **management cluster** and
**tenant clusters**.
- **Management cluster** is a Kubernetes cluster used to deploy and manage other clusters.
This cluster contains all the necessary Cluster API components and is responsible for describing,
creating, and updating tenant clusters. It is often used just for this purpose.
- **Tenant clusters** are the user clusters or clusters deployed using the Cluster API. They are
created by describing the relevant resources in the management cluster. They are then used for
deploying applications and services by end-users.
It's important to understand that, physically, tenant clusters do not necessarily have to run on
the same infrastructure as the management cluster; more often, they run elsewhere.
{{< figure src="clusterapi1.svg" caption="A diagram showing interaction of management Kubernetes cluster and tenant Kubernetes clusters using Cluster API" alt="A diagram showing interaction of management Kubernetes cluster and tenant Kubernetes clusters using Cluster API" >}}
For its operation, Cluster API utilizes the concept of _providers_ which are separate controllers
responsible for specific components of the cluster being created. Within Cluster API, there are
several types of providers. The major ones are:
- **Infrastructure Provider**, which is responsible for providing the computing infrastructure, such as virtual machines or physical servers.
- **Control Plane Provider**, which provides the Kubernetes control plane, namely the components kube-apiserver, kube-scheduler, and kube-controller-manager.
- **Bootstrap Provider**, which is used for generating cloud-init configuration for the virtual machines and servers being created.
To get started, you will need to install the Cluster API itself and one provider of each type.
You can find a complete list of supported providers in the project's
[documentation](https://cluster-api.sigs.k8s.io/reference/providers.html).
For installation, you can use the `clusterctl` utility, or
[Cluster API Operator](https://github.com/kubernetes-sigs/cluster-api-operator)
as the more declarative method.
## Choosing providers
### Infrastructure provider
To run Kubernetes clusters using KubeVirt, the
[KubeVirt Infrastructure Provider](https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt)
must be installed.
It enables the deployment of virtual machines for worker nodes in the same management cluster, where
the Cluster API operates.
### Control plane provider
The [Kamaji](https://github.com/clastix/kamaji) project offers a ready solution for running the
Kubernetes control plane for tenant clusters as containers within the management cluster.
This approach has several significant advantages:
- **Cost-effectiveness**: Running the control plane in containers avoids the use of separate control
plane nodes for each cluster, thereby significantly reducing infrastructure costs.
- **Stability**: Simplifying architecture by eliminating complex multi-layered deployment schemes.
Instead of sequentially launching a virtual machine and then installing etcd and Kubernetes components
inside it, there's a simple control plane that is deployed and run as a regular application inside
Kubernetes and managed by an operator.
- **Security**: The cluster's control plane is hidden from the end user, reducing the possibility
of its components being compromised and also eliminating user access to the cluster's certificate
store. This approach to organizing a control plane invisible to the user is often used by cloud providers.
### Bootstrap provider
[Kubeadm](https://github.com/kubernetes-sigs/cluster-api/tree/main/bootstrap) as the Bootstrap
Provider is the standard method for preparing clusters in Cluster API. This provider is developed
as part of the Cluster API itself. It requires only a prepared system image with kubelet and kubeadm
installed and allows generating configs in the cloud-init and Ignition formats.
It's worth noting that Talos Linux also supports provisioning via the Cluster API and
[has](https://github.com/siderolabs/cluster-api-bootstrap-provider-talos)
[providers](https://github.com/siderolabs/cluster-api-bootstrap-provider-talos) for this.
Although [previous articles](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/)
discussed using Talos Linux to set up a management cluster on bare-metal nodes, to provision tenant
clusters the Kamaji+Kubeadm approach has more advantages.
It facilitates the deployment of Kubernetes control planes in containers, thus removing the need for
separate virtual machines for control plane instances. This simplifies management and reduces costs.
## How it works
The primary object in Cluster API is the Cluster resource, which acts as the parent for all the others.
Typically, this resource references two others: a resource describing the **control plane** and a
resource describing the **infrastructure**, each managed by a separate provider.
Unlike the Cluster, these two resources are not standardized, and their kind depends on the specific
provider you are using:
{{< figure src="clusterapi2.svg" caption="A diagram showing the relationship of a Cluster resource and the resources it links to in Cluster API" alt="A diagram showing the relationship of a Cluster resource and the resources it links to in Cluster API" >}}
Within Cluster API, there is also a resource named MachineDeployment, which describes a group of nodes,
whether they are physical servers or virtual machines. This resource functions similarly to standard
Kubernetes resources such as Deployment, ReplicaSet, and Pod, providing a mechanism for the
declarative description of a group of nodes and automatic scaling.
In other words, the MachineDeployment resource allows you to declaratively describe nodes for your
cluster, automating their creation, deletion, and updating according to specified parameters and
the requested number of replicas.
{{< figure src="machinedeploymentres.svg" caption="A diagram showing the relationship of a MachineDeployment resource and its children in Cluster API" alt="A diagram showing the relationship of a Cluster resource and its children in Cluster API" >}}
To create machines, MachineDeployment refers to a template for generating the machine itself and a
template for generating its cloud-init config:
{{< figure src="clusterapi3.svg" caption="A diagram showing the relationship of a MachineDeployment resource and the resources it links to in Cluster API" alt="A diagram showing the relationship of a Cluster resource and the resources it links to in Cluster API" >}}
To deploy a new Kubernetes cluster using Cluster API, you will need to prepare the following set of resources:
- A general Cluster resource
- A KamajiControlPlane resource, responsible for the control plane operated by Kamaji
- A KubevirtCluster resource, describing the cluster configuration in KubeVirt
- A KubevirtMachineTemplate resource, responsible for the virtual machine template
- A KubeadmConfigTemplate resource, responsible for generating tokens and cloud-init
- At least one MachineDeployment to create some workers
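For orientation, the top-level Cluster object tying the Kamaji control plane and KubeVirt
infrastructure together might look like this sketch. The apiVersions, names, and CIDR ranges are
assumptions that depend on the provider versions you install.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tenant-1
  namespace: tenant-1
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.243.0.0/16"]        # pod CIDR for the tenant cluster (placeholder)
    services:
      cidrBlocks: ["10.95.0.0/16"]         # service CIDR for the tenant cluster (placeholder)
  controlPlaneRef:                          # resource describing the control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
    kind: KamajiControlPlane
    name: tenant-1
  infrastructureRef:                        # resource describing the infrastructure
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: KubevirtCluster
    name: tenant-1
```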
## Polishing the cluster
In most cases, this is sufficient, but depending on the providers used, you may need other resources
as well. You can find examples of the resources created for each type of provider in the
[Kamaji project documentation](https://github.com/clastix/cluster-api-control-plane-provider-kamaji?tab=readme-ov-file#-supported-capi-infrastructure-providers).
At this stage, you already have a ready tenant Kubernetes cluster, but so far it contains nothing
but the API and worker nodes, along with a few core plugins that are included by default in the
installation of any Kubernetes cluster: **kube-proxy** and **CoreDNS**. For full integration, you
will need to install several more components:
To install additional components, you can use a separate
[Cluster API Add-on Provider for Helm](https://github.com/kubernetes-sigs/cluster-api-addon-provider-helm),
or the same [FluxCD](https://fluxcd.io/) discussed in
[previous articles](/blog/2024/04/05/diy-create-your-own-cloud-with-kubernetes-part-1/).
When creating resources in FluxCD, it's possible to specify the target cluster by referring to the
kubeconfig generated by Cluster API. Then, the installation will be performed directly into it.
Thus, FluxCD becomes a universal tool for managing resources both in the management cluster and
in the user tenant clusters.
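A minimal sketch of such a HelmRelease targeting a tenant cluster is shown below. The Secret name
follows the `<cluster>-kubeconfig` convention used by Cluster API, and the chart source is assumed
to be defined elsewhere in the same namespace.

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cilium
  namespace: tenant-1
spec:
  interval: 1h
  kubeConfig:
    secretRef:
      name: tenant-1-kubeconfig   # kubeconfig Secret generated by Cluster API
  targetNamespace: kube-system
  chart:
    spec:
      chart: cilium
      sourceRef:
        kind: HelmRepository
        name: cilium              # assumed HelmRepository defined alongside this release
```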
{{< figure src="fluxcd.svg" caption="A diagram showing the interaction scheme of fluxcd, which can install components in both management and tenant Kubernetes clusters" alt="A diagram showing the interaction scheme of fluxcd, which can install components in both management and tenant Kubernetes clusters" >}}
What components are being discussed here? Generally, the set includes the following:
### CNI Plugin
To ensure communication between pods in a tenant Kubernetes cluster, it's necessary to deploy a
CNI plugin. This plugin creates a virtual network that allows pods to interact with each other
and is traditionally deployed as a Daemonset on the cluster's worker nodes. You can choose and
install any CNI plugin that you find suitable.
{{< figure src="components1.svg" caption="A diagram showing a CNI plugin installed inside the tenant Kubernetes cluster on a scheme of nested Kubernetes clusters" alt="A diagram showing a CNI plugin installed inside the tenant Kubernetes cluster on a scheme of nested Kubernetes clusters" >}}
### Cloud Controller Manager
The main task of the Cloud Controller Manager (CCM) is to integrate Kubernetes with the cloud
infrastructure provider's environment (in your case, it is the management Kubernetes cluster
in which all workers of the tenant Kubernetes cluster are provisioned). Here are some tasks it performs:
1. When a service of type LoadBalancer is created, the CCM initiates the process of creating a cloud load balancer, which directs traffic to your Kubernetes cluster.
1. If a node is removed from the cloud infrastructure, the CCM ensures its removal from your cluster as well, maintaining the cluster's current state.
1. When using the CCM, nodes are added to the cluster with a special taint, `node.cloudprovider.kubernetes.io/uninitialized`,
which allows for the processing of additional business logic if necessary. After successful initialization, this taint is removed from the node.
Depending on the cloud provider, the CCM can operate both inside and outside the tenant cluster.
[The KubeVirt Cloud Provider](https://github.com/kubevirt/cloud-provider-kubevirt) is designed
to be installed in the external parent management cluster. Thus, creating services of type
LoadBalancer in the tenant cluster initiates the creation of LoadBalancer services in the parent
cluster, which direct traffic into the tenant cluster.
{{< figure src="components2.svg" caption="A diagram showing a Cloud Controller Manager installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of services it manages from the parent to the child Kubernetes cluster" alt="A diagram showing a Cloud Controller Manager installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of services it manages from the parent to the child Kubernetes cluster" >}}
### CSI Driver
The Container Storage Interface (CSI) is divided into two main parts for interacting with storage
in Kubernetes:
- **csi-controller**: This component is responsible for interacting with the cloud provider's API
to create, delete, attach, detach, and resize volumes.
- **csi-node**: This component runs on each node and facilitates the mounting of volumes to pods
as requested by kubelet.
In the context of using the [KubeVirt CSI Driver](https://github.com/kubevirt/csi-driver), a unique
opportunity arises. Since virtual machines in KubeVirt run within the management Kubernetes cluster,
where a full-fledged Kubernetes API is available, this opens the path for running the csi-controller
outside of the user's tenant cluster. This approach is popular in the KubeVirt community and offers
several key advantages:
- **Security**: This method hides the internal cloud API from the end-user, providing access to
resources exclusively through the Kubernetes interface. Thus, it reduces the risk of direct access
to the management cluster from user clusters.
- **Simplicity and Convenience**: Users don't need to manage additional controllers in their clusters,
simplifying the architecture and reducing the management burden.
However, the csi-node component must run inside the tenant cluster, as it directly interacts with
kubelet on each node. This component is responsible for mounting and unmounting volumes into pods,
requiring close integration with processes occurring directly on the cluster nodes.
The KubeVirt CSI Driver acts as a proxy for ordering volumes. When a PVC is created inside the tenant
cluster, a PVC is created in the management cluster, and then the created PV is connected to the
virtual machine.
{{< figure src="components3.svg" caption="A diagram showing a CSI plugin components installed on both inside and outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of persistent volumes it manages from the parent to the child Kubernetes cluster" alt="A diagram showing a CSI plugin components installed on both inside and outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters and the mapping of persistent volumes it manages from the parent to the child Kubernetes cluster" >}}
### Cluster Autoscaler
The [Cluster Autoscaler](https://github.com/kubernetes/autoscaler) is a versatile component that
can work with various cloud APIs, and its integration with Cluster-API is just one of the available
functions. For proper configuration, it requires access to two clusters: the tenant cluster, to
track pods and determine the need for adding new nodes, and the management Kubernetes cluster,
where it interacts with the MachineDeployment resource and adjusts the number of replicas.
Although Cluster Autoscaler usually runs inside the tenant Kubernetes cluster, in this situation,
it is suggested to install it outside for the same reasons described before. This approach is
simpler to maintain and more secure as it prevents users of tenant clusters from accessing the
management API of the management cluster.
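With the Cluster Autoscaler's Cluster API provider, the scaling bounds are typically declared as
annotations on the MachineDeployment, roughly as in the sketch below; the values are placeholders.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: tenant-1-md-0
  namespace: tenant-1
  annotations:
    # Scaling bounds read by the Cluster Autoscaler's Cluster API provider
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
```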
{{< figure src="components4.svg" caption="A diagram showing a Cluster Autoscaler installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters" alt="A diagram showing a Cloud Controller Manager installed outside of a tenant Kubernetes cluster on a scheme of nested Kubernetes clusters" >}}
### Konnectivity
There's one more component I'd like to mention:
[Konnectivity](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/).
You will likely need it later on to get webhooks and the API aggregation layer working in your
tenant Kubernetes cluster. This topic is covered in detail in one of my
[previous articles](/blog/2021/12/22/kubernetes-in-kubernetes-and-pxe-bootable-server-farm/#webhooks-and-api-aggregation-layer).
Unlike the components presented above, Kamaji allows you to easily enable Konnectivity and manage
it as one of the core components of your tenant cluster, alongside kube-proxy and CoreDNS.
## Conclusion
Now you have a fully functional Kubernetes cluster with the capability for dynamic scaling, automatic
provisioning of volumes, and load balancers.
Going forward, you might consider metrics and logs collection from your tenant clusters, but that
goes beyond the scope of this article.
Of course, all the components necessary for deploying a Kubernetes cluster can be packaged into a
single Helm chart and deployed as a unified application. This is precisely how we organize the
deployment of managed Kubernetes clusters with the click of a button on our open PaaS platform,
[Cozystack](https://cozystack.io/), where you can try all the technologies described in the article
for free.

View File

@ -0,0 +1,140 @@
---
layout: blog
title: "Spotlight on SIG Architecture: Code Organization"
slug: sig-architecture-code-spotlight-2024
canonicalUrl: https://www.kubernetes.dev/blog/2024/04/11/sig-architecture-code-spotlight-2024
date: 2024-04-11
---
**Author: Frederico Muñoz (SAS Institute)**
_This is the third interview of a SIG Architecture Spotlight series that will cover the different
subprojects. We will cover [SIG Architecture: Code Organization](https://github.com/kubernetes/community/blob/e44c2c9d0d3023e7111d8b01ac93d54c8624ee91/sig-architecture/README.md#code-organization)._
In this SIG Architecture spotlight I talked with [Madhav Jivrajan](https://github.com/MadhavJivrajani)
(VMware), a member of the Code Organization subproject.
## Introducing the Code Organization subproject
**Frederico (FSM)**: Hello Madhav, thank you for your availability. Could you start by telling us a
bit about yourself, your role and how you got involved in Kubernetes?
**Madhav Jivrajani (MJ)**: Hello! My name is Madhav Jivrajani, I serve as a technical lead for SIG
Contributor Experience and a GitHub Admin for the Kubernetes project. Apart from that I also
contribute to SIG API Machinery and SIG Etcd, but more recently, Ive been helping out with the work
that is needed to help Kubernetes [stay on supported versions of
Go](https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions),
and it is through this that I am involved with the Code Organization subproject of SIG Architecture.
**FSM**: A project the size of Kubernetes must have unique challenges in terms of code organization
-- is this a fair assumption? If so, what would you pick as some of the main challenges that are
specific to Kubernetes?
**MJ**: Thats a fair assumption! The first interesting challenge comes from the sheer size of the
Kubernetes codebase. We have ≅2.2 million lines of Go code (which is steadily decreasing thanks to
[dims](https://github.com/dims) and other folks in this sub-project!), and a little over 240
dependencies that we rely on either directly or indirectly, which is why having a sub-project
dedicated to helping out with dependency management is crucial: we need to know what dependencies
were pulling in, what versions these dependencies are at, and tooling to help make sure we are
managing these dependencies across different parts of the codebase in a consistent manner.
Another interesting challenge with Kubernetes is that we publish a lot of Go modules as part of the
Kubernetes release cycles, one example of this is
[`client-go`](https://github.com/kubernetes/client-go). However, we as a project would also like the
benefits of having everything in one repository to get the advantages of using a monorepo, like
atomic commits... so, because of this, code organization works with other SIGs (like SIG Release) to
automate the process of publishing code from the monorepo to downstream individual repositories
which are much easier to consume, and this way you wont have to import the entire Kubernetes
codebase!
## Code organization and Kubernetes
**FSM**: For someone just starting contributing to Kubernetes code-wise, what are the main things
they should consider in terms of code organization? How would you sum up the key concepts?
**MJ**: I think one of the key things to keep in mind at least as youre starting off is the concept
of staging directories. In the [`kubernetes/kubernetes`](https://github.com/kubernetes/kubernetes)
repository, you will come across a directory called
[`staging/`](https://github.com/kubernetes/kubernetes/tree/master/staging). The sub-folders in this
directory serve as a bunch of pseudo-repositories. For example, the
[`kubernetes/client-go`](https://github.com/kubernetes/client-go) repository that publishes releases
for `client-go` is actually a [staging
repo](https://github.com/kubernetes/kubernetes/tree/master/staging/src/k8s.io/client-go).
**FSM**: So the concept of staging directories fundamentally impact contributions?
**MJ**: Precisely, because if youd like to contribute to any of the staging repos, you will need to
send in a PR to its corresponding staging directory in `kubernetes/kubernetes`. Once the code merges
there, we have a bot called the [`publishing-bot`](https://github.com/kubernetes/publishing-bot)
that will sync the merged commits to the required staging repositories (like
`kubernetes/client-go`). This way we get the benefits of a monorepo but we also can modularly
publish code for downstream consumption. PS: The `publishing-bot` needs more folks to help out!
For more information on staging repositories, please see the [contributor
documentation](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/staging.md).
**FSM**: Speaking of contributions, the very high number of contributors, both individuals and
companies, must also be a challenge: how does the subproject operate in terms of making sure that
standards are being followed?
**MJ**: When it comes to dependency management in the project, there is a [dedicated
team](https://github.com/kubernetes/org/blob/a106af09b8c345c301d072bfb7106b309c0ad8e9/config/kubernetes/org.yaml#L1329)
that helps review and approve dependency changes. These are folks who have helped lay the foundation
of much of the
[tooling](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/vendor.md)
that Kubernetes uses today for dependency management. This tooling helps ensure there is a
consistent way that contributors can make changes to dependencies. The project has also worked on
additional tooling to report statistics on dependencies that are being added or removed:
[`depstat`](https://github.com/kubernetes-sigs/depstat)
Apart from dependency management, another crucial task that the project does is management of the
staging repositories. The tooling for achieving this (`publishing-bot`) is completely transparent to
contributors and helps ensure that the staging repos get a consistent view of contributions that are
submitted to `kubernetes/kubernetes`.
Code Organization also works towards making sure that Kubernetes [stays on supported versions of
Go](https://github.com/kubernetes/enhancements/tree/cf6ee34e37f00d838872d368ec66d7a0b40ee4e6/keps/sig-release/3744-stay-on-supported-go-versions). The
linked KEP provides more context on why we need to do this. We collaborate with SIG Release to
ensure that we are testing Kubernetes as rigorously and as early as we can on Go releases and
working on changes that break our CI as a part of this. An example of how we track this process can
be found [here](https://github.com/kubernetes/release/issues/3076).
## Release cycle and current priorities
**FSM**: Is there anything that changes during the release cycle?
**MJ**: During the release cycle, specifically before code freeze, there are often changes that go in
that add/update/delete dependencies, or fix code that needs fixing as part of our effort to stay on
supported versions of Go.
Furthermore, some of these changes are also candidates for
[backporting](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md)
to our supported release branches.
**FSM**: Is there any major project or theme the subproject is working on right now that you would
like to highlight?
**MJ**: I think one very interesting and immensely useful change that
has been recently added (and I take the opportunity to specifically
highlight the work of [Tim Hockin](https://github.com/thockin) on
this) is the introduction of [Go workspaces to the Kubernetes
repo](/blog/2024/03/19/go-workspaces-in-kubernetes/). A lot of our
current tooling for dependency management and code publishing, as well
as the experience of editing code in the Kubernetes repo, can be
significantly improved by this change.
## Wrapping up
**FSM**: How would someone interested in the topic start helping the subproject?
**MJ**: The first step, as is the first step with any project in Kubernetes, is to join our slack:
[slack.k8s.io](https://slack.k8s.io), and after that join the `#k8s-code-organization` channel. There is also a
[code-organization office
hours](https://github.com/kubernetes/community/tree/master/sig-architecture#meetings) that takes
place that you can choose to attend. Timezones are hard, so feel free to also look at the recordings
or meeting notes and follow up on slack!
**FSM**: Excellent, thank you! Any final comments you would like to share?
**MJ**: The Code Organization subproject always needs help! Especially areas like the publishing
bot, so dont hesitate to get involved in the `#k8s-code-organization` Slack channel.

View File

@ -43,7 +43,7 @@ The two supported mechanisms are as follows:
detail specific schema for the resources. For reference about resource schemas,
please refer to the OpenAPI document.
- The [Kubernetes OpenAPI Document](#openapi-specification) provides (full)
- The [Kubernetes OpenAPI Document](#openapi-interface-definition) provides (full)
[OpenAPI v2.0 and 3.0 schemas](https://www.openapis.org/) for all Kubernetes API
endpoints.
The OpenAPI v3 is the preferred method for accessing OpenAPI as it

View File

@ -13,7 +13,7 @@ weight: 45
<!-- overview -->
In Kubernetes, _namespaces_ provides a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced {{< glossary_tooltip text="objects" term_id="object" >}} _(e.g. Deployments, Services, etc)_ and not for cluster-wide objects _(e.g. StorageClass, Nodes, PersistentVolumes, etc)_.
In Kubernetes, _namespaces_ provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced {{< glossary_tooltip text="objects" term_id="object" >}} _(e.g. Deployments, Services, etc.)_ and not for cluster-wide objects _(e.g. StorageClass, Nodes, PersistentVolumes, etc.)_.
<!-- body -->

View File

@ -36,9 +36,12 @@ You need to make sure a `RuntimeClass` is utilized which defines the `overhead`
To work with Pod overhead, you need a RuntimeClass that defines the `overhead` field. As
an example, you could use the following RuntimeClass definition with a virtualization container
runtime that uses around 120MiB per Pod for the virtual machine and the guest OS:
runtime (in this example, Kata Containers combined with the Firecracker virtual machine monitor)
that uses around 120MiB per Pod for the virtual machine and the guest OS:
```yaml
# You need to change this example to match the actual runtime name, and per-Pod
# resource overhead, that the container runtime is adding in your cluster.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:

View File

@ -2,6 +2,9 @@
title: Service Accounts
description: >
Learn about ServiceAccount objects in Kubernetes.
api_metadata:
- apiVersion: "v1"
kind: "ServiceAccount"
content_type: concept
weight: 25
---

View File

@ -865,9 +865,19 @@ You can use a headless Service to interface with other service discovery mechani
without being tied to Kubernetes' implementation.
For headless Services, a cluster IP is not allocated, kube-proxy does not handle
these Services, and there is no load balancing or proxying done by the platform
for them. How DNS is automatically configured depends on whether the Service has
selectors defined:
these Services, and there is no load balancing or proxying done by the platform for them.
A headless Service allows a client to connect to whichever Pod it prefers, directly. Services that are headless don't
configure routes and packet forwarding using
[virtual IP addresses and proxies](/docs/reference/networking/virtual-ips/); instead, headless Services report the
endpoint IP addresses of the individual pods via internal DNS records, served through the cluster's
[DNS service](/docs/concepts/services-networking/dns-pod-service/).
To define a headless Service, you make a Service with `.spec.type` set to ClusterIP (which is also the default for `type`),
and you additionally set `.spec.clusterIP` to None.
The string value None is a special case and is not the same as leaving the `.spec.clusterIP` field unset.
How DNS is automatically configured depends on whether the Service has selectors defined:
### With selectors

View File

@ -331,7 +331,7 @@ You can restrict the use of `gitRepo` volumes in your cluster using
[ValidatingAdmissionPolicy](/docs/reference/access-authn-authz/validating-admission-policy/).
You can use the following Common Expression Language (CEL) expression as
part of a policy to reject use of `gitRepo` volumes:
`!has(object.spec.volumes) || !object.spec.volumes.exists(v, has(v.gitRepo))`.
`has(object.spec.volumes) || !object.spec.volumes.exists(v, has(v.gitRepo))`.
{{< /warning >}}

View File

@ -393,6 +393,10 @@ jwt:
# location than the issuer (such as locally in the cluster).
# discoveryURL must be different from url if specified and must be unique across all authenticators.
discoveryURL: https://discovery.example.com/.well-known/openid-configuration
# PEM encoded CA certificates used to validate the connection when fetching
# discovery information. If not set, the system verifier will be used.
# Same value as the content of the file referenced by the --oidc-ca-file flag.
certificateAuthority: <PEM encoded CA certificates>
# audiences is the set of acceptable audiences the JWT must be issued to.
# At least one of the entries must match the "aud" claim in presented JWTs.
audiences:

View File

@ -2,10 +2,9 @@
title: Init Container
id: init-container
date: 2018-04-12
full_link:
full_link: /docs/concepts/workloads/pods/init-containers/
short_description: >
One or more initialization containers that must run to completion before any app containers run.
full_link: /docs/concepts/workloads/pods/init-containers/
aka:
tags:
- fundamental

View File

@ -1001,9 +1001,10 @@ Type: Label
Example: `service.kubernetes.io/headless: ""`
Used on: Service
Used on: Endpoints
The control plane adds this label to an Endpoints object when the owning Service is headless.
To learn more, read [Headless Services](/docs/concepts/services-networking/service/#headless-services).
### service.kubernetes.io/topology-aware-hints (deprecated) {#servicekubernetesiotopology-aware-hints}

View File

@ -34,10 +34,18 @@ Dashboard also provides information on the state of Kubernetes resources in your
## Deploying the Dashboard UI
{{< note >}}
Kubernetes Dashboard supports only Helm-based installation currently as it is faster
and gives us better control over all dependencies required by Dashboard to run.
{{< /note >}}
The Dashboard UI is not deployed by default. To deploy it, run the following command:
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```shell
# Add kubernetes-dashboard repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# Deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
```
## Accessing the Dashboard UI

View File

@ -139,6 +139,7 @@ The following methods exist for installing kubectl on Linux:
# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring
```
{{< note >}}
@ -151,6 +152,7 @@ In releases older than Debian 12 and Ubuntu 22.04, folder `/etc/apt/keyrings` do
```shell
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list # helps tools such as command-not-found to work correctly
```
{{< note >}}

View File

@ -0,0 +1,192 @@
---
reviewers:
- freehan
title: EndpointSlices
content_type: concept
weight: 60
description: >-
La API de EndpointSlice es el mecanismo que Kubernetes utiliza para permitir que tu Servicio
escale para manejar un gran número de backends, y permite que el clúster actualice tu lista de
backends saludables eficientemente.
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.21" state="stable" >}}
La API de _EndpointSlice_ de Kubernetes proporciona una forma de rastrear los endpoints de red
dentro de un clúster Kubernetes. EndpointSlices ofrece una alternativa más escalable
y extensible a [Endpoints](/docs/concepts/services-networking/service/#endpoints).
<!-- body -->
## EndpointSlice API {#recurso-endpointslice}
En Kubernetes, un EndpointSlice contiene referencias a un conjunto de endpoints de red. El plano de control crea automáticamente EndpointSlices para cualquier Servicio de Kubernetes que tenga especificado un {{< glossary_tooltip text="selector" term_id="selector" >}}. Estos EndpointSlices incluyen referencias a todos los Pods que coinciden con el selector de Servicio. Los EndpointSlices agrupan los endpoints de la red mediante combinaciones únicas de protocolo, número de puerto y nombre de Servicio.
El nombre de un objeto EndpointSlice debe ser un
[nombre de subdominio DNS](/docs/concepts/overview/working-with-objects/names#dns-subdomain-names)
válido.
A modo de ejemplo, a continuación se muestra un objeto EndpointSlice de ejemplo, propiedad del Servicio `example`
de Kubernetes.
```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: example-abc
labels:
kubernetes.io/service-name: example
addressType: IPv4
ports:
- name: http
protocol: TCP
port: 80
endpoints:
- addresses:
- "10.1.2.3"
conditions:
ready: true
hostname: pod-1
nodeName: node-1
zone: us-west2-a
```
Por defecto, el plano de control crea y gestiona EndpointSlices para que no tengan más de 100 endpoints cada una. Puedes configurar esto con la bandera de funcionalidad
`--max-endpoints-per-slice`
{{< glossary_tooltip text="kube-controller-manager" term_id="kube-controller-manager" >}}
hasta un máximo de 1000.
EndpointSlices puede actuar como la fuente de verdad
{{< glossary_tooltip term_id="kube-proxy" text="kube-proxy" >}} sobre cómo enrutar el tráfico interno.
### Tipos de dirección
EndpointSlices admite tres tipos de direcciones:
* IPv4
* IPv6
* FQDN (Fully Qualified Domain Name)
Cada objeto `EndpointSlice` representa un tipo de dirección IP específico. Si tienes un servicio disponible a través de IPv4 e IPv6, habrá al menos dos objetos `EndpointSlice` (uno para IPv4 y otro para IPv6).
### Condiciones
La API EndpointSlice almacena condiciones sobre los endpoints que pueden ser útiles para los consumidores.
Las tres condiciones son `ready`, `serving` y `terminating`.
#### Ready
`ready` es una condición que corresponde a la condición `Ready` de un Pod. Un Pod en ejecución con la condición `Ready` establecida a `True` debería tener esta condición EndpointSlice también establecida a `true`. Por razones de compatibilidad, `ready` NUNCA es `true` cuando un Pod está terminando. Los consumidores deben referirse a la condición `serving` para inspeccionar la disponibilidad de los Pods que están terminando. La única excepción a esta regla son los servicios con `spec.publishNotReadyAddresses` a `true`. Los endpoints de estos servicios siempre tendrán la condición `ready` a `true`.
#### Serving
{{< feature-state for_k8s_version="v1.26" state="stable" >}}
La condición `serving` es casi idéntica a la condición `ready`. La diferencia es que los consumidores de la API EndpointSlice deben comprobar la condición `serving` si se preocupan por la disponibilidad del pod mientras el pod también está terminando.
{{< note >}}
Aunque `serving` es casi idéntico a `ready`, se añadió para evitar romper el significado existente de `ready`. Podría ser inesperado para los clientes existentes si `ready` pudiera ser `true` para los endpoints de terminación, ya que históricamente los endpoints de terminación nunca se incluyeron en la API Endpoints o EndpointSlice para empezar. Por esta razón, `ready` es _siempre_ `false` para los Endpoints que terminan, y se ha añadido una nueva condición `serving` en la v1.20 para que los clientes puedan realizar un seguimiento de la disponibilidad de los pods que terminan independientemente de la semántica existente para `ready`.
{{< /note >}}
#### Terminating
{{< feature-state for_k8s_version="v1.22" state="beta" >}}
`Terminating` es una condición que indica si un endpoint está terminando. En el caso de los pods, se trata de cualquier pod que tenga establecida una marca de tiempo de borrado.
### Información sobre topología {#topology}
Cada endpoint dentro de un EndpointSlice puede contener información topológica relevante. La información de topología incluye la ubicación del endpoint e información sobre el Nodo y la zona correspondientes. Estos están disponibles en los siguientes campos por endpoint en EndpointSlices:
* `nodeName` - El nombre del Nodo en el que se encuentra este endpoint.
* `zone` - La zona en la que se encuentra este endpoint.
{{< note >}}
En la API v1, el endpoint `topology` se eliminó en favor de los campos dedicados `nodeName` y `zone`.
La configuración de campos de topología arbitrarios en el campo `endpoint` de un recurso `EndpointSlice` ha quedado obsoleta y no se admite en la API v1. En su lugar, la API v1 permite establecer campos individuales `nodeName` y `zone`. Estos campos se traducen automáticamente entre versiones de la API. Por ejemplo, el valor de la clave "topology.kubernetes.io/zone" en el campo `topology` de la API v1beta1 es accesible como campo `zone` en la API v1.
{{< /note >}}
### Administración
En la mayoría de los casos, el plano de control (concretamente, el endpoint slice {{< glossary_tooltip text="controller" term_id="controller" >}}) crea y gestiona objetos EndpointSlice. Existe una variedad de otros casos de uso para EndpointSlices, como implementaciones de servicios Mesh, que podrían dar lugar a que otras entidades o controladores gestionen conjuntos adicionales de EndpointSlices.
Para garantizar que varias entidades puedan gestionar EndpointSlices sin interferir unas con otras, Kubernetes define el parámetro
{{< glossary_tooltip term_id="label" text="label" >}}
`endpointslice.kubernetes.io/managed-by`, que indica la entidad que gestiona un EndpointSlice.
El controlador de endpoint slice establece `endpointslice-controller.k8s.io` como valor para esta etiqueta en todos los EndpointSlices que gestiona. Otras entidades que gestionen EndpointSlices también deben establecer un valor único para esta etiqueta.
### Propiedad
En la mayoría de los casos de uso, los EndpointSlices son propiedad del Servicio para el que el objeto EndpointSlices rastree los endpoints. Esta propiedad se indica mediante una referencia de propietario en cada EndpointSlice, así como una etiqueta `kubernetes.io/service-name` que permite búsquedas sencillas de todos los EndpointSlices que pertenecen a un Servicio.
### Replicación de EndpointSlice
En algunos casos, las aplicaciones crean recursos Endpoints personalizados. Para garantizar que estas aplicaciones no tengan que escribir simultáneamente en recursos Endpoints y EndpointSlice, el plano de control del clúster refleja la mayoría de los recursos Endpoints en los EndpointSlices correspondientes.
El plano de control refleja los recursos de los Endpoints a menos que:
* El recurso Endpoints tenga una etiqueta `endpointslice.kubernetes.io/skip-mirror` con el valor en `true`.
* El recurso Endpoints tenga una anotación `control-plane.alpha.kubernetes.io/leader`.
* El recurso Service correspondiente no exista.
* El recurso Service correspondiente tiene un selector no nulo.
Los recursos Endpoints individuales pueden traducirse en múltiples EndpointSlices. Esto ocurrirá si un recurso Endpoints tiene
múltiples subconjuntos o incluye endpoints con múltiples familias IP (IPv4 e IPv6). Se reflejará un máximo de 1000 direcciones
por subconjunto en EndpointSlices.
### Distribución de EndpointSlices
Cada EndpointSlice tiene un conjunto de puertos que se aplica a todos los endpoints dentro del recurso. Cuando se utilizan puertos con nombre para un Servicio, los Pods pueden terminar con diferentes números de puerto de destino para el mismo puerto con nombre, requiriendo diferentes EndpointSlices. Esto es similar a la lógica detrás de cómo se agrupan los subconjuntos con Endpoints.
El plano de control intenta llenar los EndpointSlices tanto como sea posible, pero no los reequilibra activamente. La lógica es bastante sencilla:
1. Iterar a través de los EndpointSlices existentes, eliminar los endpoints que ya no se deseen y actualizar los endpoints coincidentes que hayan cambiado.
2. Recorrer los EndpointSlices que han sido modificados en el primer paso y rellenarlos con los nuevos endpoints necesarios.
3. Si aún quedan nuevos endpoints por añadir, intente encajarlos en un slice que no se haya modificado previamente y/o cree otros nuevos.
Es importante destacar que el tercer paso prioriza limitar las actualizaciones de EndpointSlice sobre una distribución perfectamente completa de EndpointSlices. Por ejemplo, si hay 10 nuevos endpoints que añadir y 2 EndpointSlices con espacio para 5 endpoints más cada uno, este enfoque creará un nuevo EndpointSlice en lugar de llenar los 2 EndpointSlices existentes. En otras palabras, es preferible una única creación de EndpointSlice que múltiples actualizaciones de EndpointSlice.
Con kube-proxy ejecutándose en cada Nodo y vigilando los EndpointSlices, cada cambio en un EndpointSlice se vuelve relativamente caro ya que será transmitido a cada Nodo del clúster. Este enfoque pretende limitar el número de cambios que necesitan ser enviados a cada Nodo, incluso si puede resultar con múltiples EndpointSlices que no están llenos.
En la práctica, esta distribución menos que ideal debería ser poco frecuente. La mayoría de los cambios procesados por el controlador EndpointSlice serán lo suficientemente pequeños como para caber en un EndpointSlice existente, y si no, es probable que pronto sea necesario un nuevo EndpointSlice de todos modos. Las actualizaciones continuas de los Deployments también proporcionan un reempaquetado natural de los EndpointSlices con todos los Pods y sus correspondientes endpoints siendo reemplazados.
### Endpoints duplicados
Debido a la naturaleza de los cambios de EndpointSlice, los endpoints pueden estar representados en más de un EndpointSlice al mismo tiempo. Esto ocurre de forma natural, ya que los cambios en diferentes objetos EndpointSlice pueden llegar a la vigilancia / caché del cliente de Kubernetes en diferentes momentos.
{{< note >}}
Los clientes de la API EndpointSlice deben iterar a través de todos los EndpointSlices existentes asociados a un Servicio y construir una lista completa de endpoints de red únicos. Es importante mencionar que los endpoints pueden estar duplicados en diferentes EndpointSlices.
Puedes encontrar una implementación de referencia sobre cómo realizar esta agregación y deduplicación de endpoints como parte del código `EndpointSliceCache` dentro de `kube-proxy`.
{{< /note >}}
## Comparación con endpoints {#motivación}
La API Endpoints original proporcionaba una forma simple y directa de rastrear los endpoints de red en Kubernetes. A medida que los clústeres de Kubernetes y los {{< glossary_tooltip text="Services" term_id="service" >}} crecían para manejar más tráfico y enviar más tráfico a más Pods backend, las limitaciones de la API original se hicieron más visibles.
Más notablemente, estos incluyen desafíos con la ampliación a un mayor número de endpoints de red.
Dado que todos los endpoints de red para un Servicio se almacenaban en un único objeto Endpoint, esos objetos Endpoints podían llegar a ser bastante grandes. Para los Services que permanecían estables (el mismo conjunto de endpoints durante un largo período de tiempo), el impacto era menos notable; incluso entonces, algunos casos de uso de Kubernetes no estaban bien servidos.
Cuando un Service tenía muchos Endpoints de backend y la carga de trabajo se escalaba con frecuencia o se introducían nuevos cambios con frecuencia, cada actualización del objeto Endpoint para ese Service suponía mucho tráfico entre los componentes del clúster de Kubernetes (dentro del plano de control y también entre los nodos y el servidor de API). Este tráfico adicional también tenía un coste en términos de uso de la CPU.
Con EndpointSlices, la adición o eliminación de un único Pod desencadena el mismo _número_ de actualizaciones a los clientes que están pendientes de los cambios, pero el tamaño de esos mensajes de actualización es mucho menor a gran escala.
EndpointSlices también ha permitido innovar en torno a nuevas funciones, como las redes de doble pila y el enrutamiento con conocimiento de la topología.
## {{% heading "whatsnext" %}}
* Sigue las instrucciones del tutorial [Conexión de aplicaciones con servicios](/docs/tutorials/services/connect-applications-service/)
* Lee la [Referencia API](/docs/reference/kubernetes-api/service-resources/endpoint-slice-v1/) para la API EndpointSlice
* Lee la [Referencia API](/docs/reference/kubernetes-api/service-resources/endpoints-v1/) para la API Endpoints

View File

@ -0,0 +1,93 @@
---
title: サイドカーコンテナ
content_type: concept
weight: 50
---
<!-- overview -->
{{< feature-state for_k8s_version="v1.29" state="beta" >}}
サイドカーコンテナは、メインのアプリケーションコンテナと同じ{{< glossary_tooltip text="Pod" term_id="pod" >}}内で実行されるセカンダリーコンテナです。
これらのコンテナは、主要なアプリケーションコードを直接変更することなく、ロギング、モニタリング、セキュリティ、データの同期などの追加サービスや機能を提供することにより、アプリケーションコンテナの機能を強化または拡張するために使用されます。
<!-- body -->
## サイドカーコンテナの有効化
Kubernetes 1.29でデフォルトで有効化された`SidecarContainers`という名前の [フィーチャーゲート](/docs/reference/command-line-tools-reference/feature-gates/)により、
Podの`initContainers`フィールドに記載されているコンテナの`restartPolicy`を指定することができます。
これらの再起動可能な _サイドカー_ コンテナは、同じポッド内の他の[initコンテナ](/docs/concepts/workloads/pods/init-containers/)やメインのアプリケーションコンテナとは独立しています。
これらは、メインアプリケーションコンテナや他のinitコンテナに影響を与えることなく、開始、停止、または再起動することができます。
## サイドカーコンテナとPodのライフサイクル
もしinitコンテナが`restartPolicy`を`Always`に設定して作成された場合、それはPodのライフサイクル全体にわたって起動し続けます。
これは、メインアプリケーションコンテナから分離されたサポートサービスを実行するのに役立ちます。
このinitコンテナに`readinessProbe`が指定されている場合、その結果はPodの`ready`状態を決定するために使用されます。
これらのコンテナはinitコンテナとして定義されているため、他のinitコンテナと同様に順序に関する保証を受けることができ、複雑なPodの初期化フローに他のinitコンテナと混在させることができます。
通常のinitコンテナと比較して、`initContainers`内で定義されたサイドカーは、開始した後も実行を続けます。
これは、`.spec.initContainers`にPod用の複数のエントリーがある場合に重要です。
サイドカースタイルのinitコンテナが実行中になった後(kubeletがそのinitコンテナの`started`ステータスをtrueに設定した後)、kubeletは順序付けられた`.spec.initContainers`リストから次のinitコンテナを開始します。
そのステータスは、コンテナ内でプロセスが実行されておりStartup Probeが定義されていない場合、あるいはその`startupProbe`が成功するとtrueになります。
以下は、サイドカーを含む2つのコンテナを持つDeploymentの例です:
{{% code_sample language="yaml" file="application/deployment-sidecar.yaml" %}}
この機能は、サイドカーコンテナがメインコンテナが終了した後もジョブが完了するのを妨げないため、サイドカーを持つジョブを実行するのにも役立ちます。
以下は、サイドカーを含む2つのコンテナを持つJobの例です:
{{% code_sample language="yaml" file="application/job/job-sidecar.yaml" %}}
## 通常のコンテナとの違い
サイドカーコンテナは、同じPod内の通常のコンテナと並行して実行されます。
しかし、主要なアプリケーションロジックを実行するわけではなく、メインのアプリケーションにサポート機能を提供します。
サイドカーコンテナは独自の独立したライフサイクルを持っています。
通常のコンテナとは独立して開始、停止、再起動することができます。
これは、メインアプリケーションに影響を与えることなく、サイドカーコンテナを更新、スケール、メンテナンスできることを意味します。
サイドカーコンテナは、メインのコンテナと同じネットワークおよびストレージの名前空間を共有します。
このような配置により、密接に相互作用し、リソースを共有することができます。
## initコンテナとの違い
サイドカーコンテナは、メインのコンテナと並行して動作し、その機能を拡張し、追加サービスを提供します。
サイドカーコンテナは、メインアプリケーションコンテナと並行して実行されます。
Podのライフサイクル全体を通じてアクティブであり、メインコンテナとは独立して開始および停止することができます。
[Initコンテナ](/docs/concepts/workloads/pods/init-containers/)とは異なり、サイドカーコンテナはライフサイクルを制御するための[Probe](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe)をサポートしています。
これらのコンテナは、メインアプリケーションコンテナと直接相互作用することができ、同じネットワーク名前空間、ファイルシステム、環境変数を共有します。追加の機能を提供するために緊密に連携して動作します。
## コンテナ内のリソース共有
{{< comment >}}
このセクションは[Initコンテナ](/docs/concepts/workloads/pods/init-containers/)ページにも存在します。
このセクションを編集する場合は、その両方を変更してください。
{{< /comment >}}
Initコンテナ、サイドカーコンテナ、アプリケーションコンテナの順序と実行を考えるとき、リソースの使用に関して下記のルールが適用されます。
* 全てのInitコンテナの中で定義された最も高いリソースリクエストとリソースリミットが、*有効なinitリクエストリミット* になります。いずれかのリソースでリミットが設定されていない場合、これが最上級のリミットとみなされます。
* Podのリソースの*有効なリクエスト/リミット* は、[Podのオーバーヘッド](/ja/docs/concepts/scheduling-eviction/pod-overhead/)と次のうち大きい方の合計になります。
* リソースに対する全てのアプリケーションコンテナとサイドカーコンテナのリクエスト/リミットの合計
* リソースに対する有効なinitリクエストリミット
* スケジューリングは有効なリクエストリミットに基づいて実行されます。つまり、InitコンテナはPodの生存中には使用されない初期化用のリソースを確保することができます。
* Podの*有効なQoS(quality of service)ティアー* は、Initコンテナ、サイドカーコンテナ、アプリケーションコンテナで同様です。
クォータとリミットは有効なPodリクエストとリミットに基づいて適用されます。
Podレベルのコントロールグループ(cgroups)は、スケジューラーと同様に、有効なPodリクエストとリミットに基づいています。
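以下は、この計算を確認するための最小限のスケッチです。リソース量とコンテナ名は説明のための仮の値です:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-example   # 説明のための仮の名前
spec:
  initContainers:
    - name: init-setup                   # 通常のinitコンテナ(完了するまで実行される)
      image: alpine:latest
      command: ['sh', '-c', 'echo setup']
      resources:
        requests:
          cpu: 200m
    - name: logshipper                   # サイドカーコンテナ(restartPolicy: Always)
      image: alpine:latest
      restartPolicy: Always
      command: ['sh', '-c', 'tail -f /dev/null']
      resources:
        requests:
          cpu: 100m
  containers:
    - name: myapp
      image: alpine:latest
      command: ['sh', '-c', 'sleep 3600']
      resources:
        requests:
          cpu: 300m
# アプリケーションコンテナとサイドカーコンテナのリクエストの合計: 300m + 100m = 400m
# 有効なinitリクエストリミット: 200m
# Podの有効なCPUリクエスト: 大きい方の400m(これにPodのオーバーヘッドが加算されます)
```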
## {{% heading "whatsnext" %}}
* [ネイティブサイドカーコンテナ](/blog/2023/08/25/native-sidecar-containers/)に関するブログ投稿を読む。
* [Initコンテナを持つPodを作成する方法](/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container)について読む。
* [Probeの種類](/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe)について学ぶ: Liveness, Readiness, Startup Probe。
* [Podのオーバーヘッド](/docs/concepts/scheduling-eviction/pod-overhead/)について学ぶ。

View File

@ -14,19 +14,19 @@ card:
<!-- body -->
## ドキュメントを日本語に翻訳するまでの流れ
## ドキュメントを日本語に翻訳するまでの流れ {#translate-flow}
翻訳を行うための基本的な流れについて説明します。不明点がある場合は[Kubernetes公式Slack](http://slack.kubernetes.io/)の`#kubernetes-docs-ja`チャンネルにてお気軽にご質問ください。
### 前提知識
### 前提知識 {#prerequisite}
翻訳作業は全て[GitHubのIssue](https://github.com/kubernetes/website/issues?q=is%3Aissue+is%3Aopen+label%3Alanguage%2Fja)によって管理されています。翻訳作業を行いたい場合は、Issueの一覧をまず最初にご確認ください。
また、Kubernetes傘下のリポジトリでは`CLA`と呼ばれる同意書に署名しないと、Pull Requestをマージすることができません。詳しくは[英語のドキュメント](https://github.com/kubernetes/community/blob/master/CLA.md)や、[Qiitaに有志の方が書いてくださった日本語のまとめ](https://qiita.com/jlandowner/items/d14d9bc8797a62b65e67)をご覧ください。
### 翻訳を始めるまで
### 翻訳を始めるまで {#start-translation}
#### 翻訳を希望するページのIssueが存在しない場合
#### 翻訳を希望するページのIssueが存在しない場合 {#no-issue}
1. [こちらのサンプル](https://github.com/kubernetes/website/issues/22340)に従う形でIssueを作成する
2. 自分自身を翻訳作業に割り当てたい場合は、Issueのメッセージまたはコメントに`/assign`と書く
@ -34,12 +34,12 @@ card:
**不明点がある場合は[Kubernetes公式Slack](http://slack.kubernetes.io/)の`#kubernetes-docs-ja`チャンネルにてお気軽にご質問ください。**
#### 翻訳を希望するページのIssueが存在する場合
#### 翻訳を希望するページのIssueが存在する場合 {#exist-issue}
1. 自分自身を翻訳作業に割り当てるために、Issueのコメントに`/assign`と書く
2. [新規ページを翻訳する場合](#translate-new-page)のステップに進む
### Pull Requestを送るまで
### Pull Requestを送るまで {#create-pull-request}
#### 新規ページを翻訳する場合の手順 {#translate-new-page}
@ -48,95 +48,82 @@ card:
3. `content/en`のディレクトリから必要なファイルを`content/ja`にコピーし、翻訳する
4. `main`ブランチに向けてPull Requestを作成する
#### 既存のページの誤字脱字や古い記述を修正する場合の手順
#### 既存のページの誤字脱字や古い記述を修正する場合の手順 {#fix-existing-page}
1. `kubernetes/website`リポジトリをフォークする
2. `main`から任意の名前でブランチを作成する
3. `content/ja`のディレクトリから必要なファイルを編集する
4. `main`ブランチに向けてPull Requestを作成する
## 翻訳スタイルガイド
## 翻訳スタイルガイド {#style-guide}
### 基本方針
### 基本方針 {#basic-policy}
- 本文を、敬体(ですます調)で統一
- 特に、「〜になります」「〜となります」という表現は「〜です」の方が適切な場合が多いため注意
- 句読点は「、」と「。」を使用
- 漢字、ひらがな、カタカナは全角で表記
- 数字とアルファベットは半角で表記
- スペースと括弧 `()` 、コロン `:` は半角、それ以外の記号類は全角で表記
- 記号類は感嘆符「!」と疑問符「?」のみ全角、それ以外は半角で表記
- 英単語と日本語の間に半角スペースは不要
- 日本語文では、文章の途中で改行を行わない。句点「。」で改行する
- メタデータの`reviewer`の項目は削除する
- すでに日本語訳が存在するページにリンクを張る場合は、`/ja/`を含めたURLを使用する
- 例: `/path/to/page/`ではなく、`/ja/path/to/page/`を使用する
### 頻出単語
### 用語の表記 {#terminology}
英語 | 日本語
--------- | ---------
Addon/Add-on|アドオン
Aggregation Layer | アグリゲーションレイヤー
architecture | アーキテクチャ
binary | バイナリ
cluster|クラスター
community | コミュニティ
container | コンテナ
controller | コントローラー
Deployment/Deploy|KubernetesリソースとしてのDeploymentはママ表記、一般的な用語としてのdeployの場合は、デプロイ
directory | ディレクトリ
For more information|さらなる情報(一時的)
GitHub | GitHub (ママ表記)
Issue | Issue (ママ表記)
operator | オペレーター
orchestrate(動詞)|オーケストレーションする
Persistent Volume|KubernetesリソースとしてのPersistentVolumeはママ表記、一般的な用語としての場合は、永続ボリューム
prefix | プレフィックス
Pull Request | Pull Request (ママ表記)
Quota|クォータ
registry | レジストリ
secure | セキュア
a set of ~ | ~の集合
stacked | 積層(例: stacked etcd clusterは積層etcdクラスター)
Kubernetesのリソース名や技術用語などは、原則としてそのままの表記を使用します。
例えば、PodやService、Deploymentなどは翻訳せずにそのまま表記してください。
### 備考
ただし、ノード(Node)に関しては明確にKubernetesとしてのNodeリソース(例: `kind: Node`や`kubectl get nodes`)を指していないのであれば、「ノード」と表記してください。
ServiceやDeploymentなどのKubernetesのAPIオブジェクトや技術仕様的な固有名詞は、無理に日本語訳せずそのまま書いてください。
またこれらの単語は、複数形ではなく単数形を用います。
例えば、原文に"pods"と表記されている場合でも、日本語訳では"Pod"と表記してください。
また、日本語では名詞を複数形にする意味はあまりないので、英語の名詞を利用する場合は原則として単数形で表現してください。
例:
- Kubernetes Service
- Node
- Pod
外部サイトへの参照の記事タイトルは翻訳しましょう。(一時的)
### 頻出表記(日本語)
### 頻出表記(日本語) {#frequent-phrases}
よくある表記 | あるべき形
--------- | ---------
〜ので、〜から、〜だから| 〜のため 、〜ため
(あいうえお。)| (あいうえお)
,,|〇、〇、〇(※今回列挙はすべて読点で統一)
(あいうえお。)| (あいうえお)。
,,|〇、〇、〇(※列挙はすべて読点で統一)
### 単語末尾に長音記号(「ー」)を付けるかどうか
### 長音の有無 {#long-vowel}
「サーバー」「ユーザー」など英単語をカタカナに訳すときに、末尾の「ー」を付けるかどうか
カタカナ語に長音を付与するかどうかは、以下の原則に従ってください。
- 「r」「re」「y」などで終わる単語については、原則付ける
- 上の頻出語のように、別途まとめたものは例外とする
- -er、-or、-ar、-cy、-gyで終わる単語は長音を付与する
- 例: 「クラスター」「セレクター」「サイドカー」「ポリシー」「トポロジー」
- -ear、-eer、-re、-ty、-dy、-ryで終わる単語は長音を付与しない
- 例: 「クリア」「エンジニア」「アーキテクチャ」「セキュリティ」「スタディ」「ディレクトリ」
参考: https://kubernetes.slack.com/archives/CAG2M83S8/p1554096635015200 辺りのやりとり
ただし、「コンテナ」は例外的に長音を付与しないこととします。
### cron jobの訳し方に関して
この原則を作成するにあたって、[mozilla-japan/translation Editorial Guideline#カタカナ語の表記](https://github.com/mozilla-japan/translation/wiki/Editorial-Guideline#カタカナ語の表記)を参考にしました。
混同を避けるため、cron jobはcronジョブと訳し、CronJobはリソース名としてのままにする。
cron「の」ジョブは、「の」が続く事による解釈の難から基本的にはつけないものとする。
### その他の表記 {#other-notation}
### その他基本方針など
その他の表記については、以下の表を参考にしてください。
英語 | 日本語
--------- | ---------
interface | インターフェース
proxy | プロキシ
quota|クォータ
stacked | 積層
### cron jobの訳し方に関して {#cron-job}
混同を避けるため、cron jobはcronジョブと訳し、CronJobはリソース名としてそのまま表記します。
cron「の」ジョブという表記は、「の」が続くことによる解釈の難しさから、基本的には使わないものとします。
### その他基本方針など {#other-basic-policy}
- 意訳と直訳で迷った場合は「直訳」で訳す
- 訳で難しい・わからないと感じたらSlackの#kubernetes-docs-jaでみんなに聞く
- 訳で難しい・わからないと感じたらSlackの`#kubernetes-docs-ja`で相談する
- できることを挙手制で、できないときは早めに報告
## アップストリームのコントリビューター
## アップストリームのコントリビューター {#upstream-contributor}
SIG Docsでは、英語のソースに対する[アップストリームへのコントリビュートや誤りの訂正](/docs/contribute/intermediate#localize-content)を歓迎しています。

View File

@ -0,0 +1,34 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
replicas: 1
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: alpine:latest
command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done']
volumeMounts:
- name: data
mountPath: /opt
initContainers:
- name: logshipper
image: alpine:latest
restartPolicy: Always
command: ['sh', '-c', 'tail -F /opt/logs.txt']
volumeMounts:
- name: data
mountPath: /opt
volumes:
- name: data
emptyDir: {}

View File

@ -0,0 +1,27 @@
apiVersion: batch/v1
kind: Job
metadata:
name: myjob
spec:
template:
spec:
containers:
- name: myjob
image: alpine:latest
command: ['sh', '-c', 'echo "logging" > /opt/logs.txt']
volumeMounts:
- name: data
mountPath: /opt
initContainers:
- name: logshipper
image: alpine:latest
restartPolicy: Always
command: ['sh', '-c', 'tail -F /opt/logs.txt']
volumeMounts:
- name: data
mountPath: /opt
restartPolicy: Never
volumes:
- name: data
emptyDir: {}

View File

@ -275,7 +275,7 @@ description: |-
<b><code>echo Nome do Pod: $POD_NAME</code></b></p>
<p>Você pode acessar o Pod através da API encaminhada, rodando o comando:</p>
<p><b><code>curl
http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME/</code></b>
http://localhost:8001/api/v1/namespaces/default/pods/$POD_NAME:8080/proxy/</code></b>
</p>
<p>
Para que o novo Deployment esteja acessível sem utilizar o proxy, um

View File

@ -1,17 +1,17 @@
---
title: Иполняемая среда контейнеров
title: Иcполняемая среда контейнеров
id: container-runtime
date: 2019-06-05
full_link: /docs/setup/production-environment/container-runtimes
short_description: >
Иполняемая среда контейнеров — это программа, предназначенная для запуска контейнеров.
Иcполняемая среда контейнеров — это программа, предназначенная для запуска контейнеров.
aka:
tags:
- fundamental
- workload
---
Иполняемая среда контейнера — это программа, предназначенная для запуска контейнера в Kubernetes.
Иcполняемая среда контейнера — это программа, предназначенная для запуска контейнера в Kubernetes.
<!--more-->

View File

@ -206,14 +206,14 @@ are enabled, kubelets are only authorized to create/modify their own Node resour
<!--
As mentioned in the [Node name uniqueness](#node-name-uniqueness) section,
when Node configuration needs to be updated, it is a good practice to re-register
the node with the API server. For example, if the kubelet being restarted with
the new set of `--node-labels`, but the same Node name is used, the change will
not take an effect, as labels are being set on the Node registration.
the node with the API server. For example, if the kubelet is being restarted with
a new set of `--node-labels`, but the same Node name is used, the change will
not take effect, as labels are only set (or modified) upon Node registration with the API server.
-->
正如[节点名称唯一性](#node-name-uniqueness)一节所述,当 Node 的配置需要被更新时,
一种好的做法是重新向 API 服务器注册该节点。例如,如果 kubelet 重启时其 `--node-labels`
是新的值集,但同一个 Node 名称已经被使用,则所作变更不会起作用,
因为节点标签是在 Node 注册时完成的。
因为节点标签是在 Node 注册到 API 服务器时完成(或修改)的。
<!--
Pods already scheduled on the Node may misbehave or cause issues if the Node
@ -1065,6 +1065,8 @@ Learn more about the following:
* [Components](/docs/concepts/overview/components/#node-components) that make up a node.
* [API definition for Node](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#node-v1-core).
* [Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node) section of the architecture design document.
* [Cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/) to
manage the number and size of nodes in your cluster.
* [Taints and Tolerations](/docs/concepts/scheduling-eviction/taint-and-toleration/).
* [Node Resource Managers](/docs/concepts/policy/node-resource-managers/).
* [Resource Management for Windows nodes](/docs/concepts/configuration/windows-resource-management/).
@ -1076,6 +1078,8 @@ Learn more about the following:
* 架构设计文档中有关
[Node](https://git.k8s.io/design-proposals-archive/architecture/architecture.md#the-kubernetes-node)
的章节。
* [集群自动扩缩](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/)
  以管理集群中节点的数量和规模。
* [污点和容忍度](/zh-cn/docs/concepts/scheduling-eviction/taint-and-toleration/)。
* [节点资源管理器](/zh-cn/docs/concepts/policy/node-resource-managers/)。
* [Windows 节点的资源管理](/zh-cn/docs/concepts/configuration/windows-resource-management/)。

View File

@ -98,12 +98,14 @@ Before choosing a guide, here are some considerations:
## Managing a cluster
* Learn how to [manage nodes](/docs/concepts/architecture/nodes/).
* Read about [cluster autoscaling](/docs/concepts/cluster-administration/cluster-autoscaling/).
* Learn how to set up and manage the [resource quota](/docs/concepts/policy/resource-quotas/) for shared clusters.
-->
## 管理集群 {#managing-a-cluster}
* 学习如何[管理节点](/zh-cn/docs/concepts/architecture/nodes/)。
* 阅读[集群自动扩缩](/zh-cn/docs/concepts/cluster-administration/cluster-autoscaling/)。
* 学习如何设定和管理集群共享的[资源配额](/zh-cn/docs/concepts/policy/resource-quotas/)。

View File

@ -209,6 +209,15 @@ Add-on 扩展了 Kubernetes 的功能。
并将系统问题报告为[事件](/zh-cn/docs/reference/kubernetes-api/cluster-resources/event-v1/)
或[节点状况](/zh-cn/docs/concepts/architecture/nodes/#condition)。
<!--
## Instrumentation
* [kube-state-metrics](/docs/concepts/cluster-administration/kube-state-metrics)
-->
## 插桩 {#instrumentation}
* [kube-state-metrics](/zh-cn/docs/concepts/cluster-administration/kube-state-metrics)
<!--
## Legacy Add-ons

View File

@ -1,5 +1,8 @@
---
title: 定制资源
api_metadata:
- apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
content_type: concept
weight: 10
---
@ -8,6 +11,9 @@ title: Custom Resources
reviewers:
- enisoc
- deads2k
api_metadata:
- apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
content_type: concept
weight: 10
-->

View File

@ -1,5 +1,8 @@
---
title: 资源配额
api_metadata:
- apiVersion: "v1"
kind: "ResourceQuota"
content_type: concept
weight: 20
---
@ -8,6 +11,9 @@ weight: 20
reviewers:
- derekwaynecarr
title: Resource Quotas
api_metadata:
- apiVersion: "v1"
kind: "ResourceQuota"
content_type: concept
weight: 20
-->

View File

@ -1,6 +1,9 @@
---
title: 网络策略
content_type: concept
api_metadata:
- apiVersion: "networking.k8s.io/v1"
kind: "NetworkPolicy"
weight: 70
description: >-
如果你希望在 IP 地址或端口层面(OSI 第 3 层或第 4 层)控制网络流量,
@ -15,6 +18,9 @@ reviewers:
- danwinship
title: Network Policies
content_type: concept
api_metadata:
- apiVersion: "networking.k8s.io/v1"
kind: "NetworkPolicy"
weight: 70
description: >-
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4),

View File

@ -34,7 +34,7 @@ Enabled by default with Kubernetes 1.29, a
Pod's `initContainers` field. These restartable _sidecar_ containers are independent with
other [init containers](/docs/concepts/workloads/pods/init-containers/) and main
application container within the same pod. These can be started, stopped, or restarted
without effecting the main application container and other init containers.
without affecting the main application container and other init containers.
-->
## 启用边车容器 {#enabling-sidecar-containers}

View File

@ -2,10 +2,9 @@
title: Init 容器Init Container
id: init-container
date: 2018-04-12
full_link:
full_link: /zh-cn/docs/concepts/workloads/pods/init-containers/
short_description: >
应用容器运行前必须先运行完成的一个或多个 Init 容器Init Container
应用容器运行前必须先运行完成的一个或多个 Init 容器Init Container
aka:
tags:
- fundamental
@ -15,10 +14,9 @@ tags:
title: Init Container
id: init-container
date: 2018-04-12
full_link:
full_link: /docs/concepts/workloads/pods/init-containers/
short_description: >
One or more initialization containers that must run to completion before any app containers run.
aka:
tags:
- fundamental
@ -36,3 +34,12 @@ Initialization (init) containers are like regular app containers, with one diffe
-->
Init 容器像常规应用容器一样,只有一点不同:Init 容器必须在应用容器启动前运行完成。
Init 容器的运行顺序:一个 Init 容器必须在下一个 Init 容器开始前运行完成。
<!--
Unlike {{< glossary_tooltip text="sidecar containers" term_id="sidecar-container" >}}, init containers do not remain running after Pod startup.
For more information, read [init containers](/docs/concepts/workloads/pods/init-containers/).
-->
与{{< glossary_tooltip text="边车容器" term_id="sidecar-container" >}}不同Init 容器在 Pod 启动后不会继续运行。
有关更多信息,请阅读 [Init 容器](/zh-cn/docs/concepts/workloads/pods/init-containers/)。

View File

@ -76,9 +76,9 @@ Path to a kubeadm configuration file.
<tr>
<td></td><td style="line-height: 130%; word-wrap: break-word;">
<!--
help for admin.conf
help for super-admin.conf
-->
<p>admin.conf 的帮助信息。</p></td>
<p>super-admin.conf 的帮助信息。</p></td>
</tr>
<tr>

View File

@ -429,7 +429,7 @@ These pods are tracked via `.status.currentHealthy` field in the PDB status.
-->
## 不健康的 Pod 驱逐策略 {#unhealthy-pod-eviction-policy}
{{< feature-state for_k8s_version="v1.26" state="beta" >}}
{{< feature-state for_k8s_version="v1.27" state="beta" >}}
{{< note >}}
<!--

View File

@ -57,12 +57,12 @@ The connection to the server <server-name:port> was refused - did you specify th
<!--
For example, if you are intending to run a Kubernetes cluster on your laptop (locally),
you will need a tool like Minikube to be installed first and then re-run the commands stated above.
you will need a tool like [Minikube](https://minikube.sigs.k8s.io/docs/start/) to be installed first and then re-run the commands stated above.
If kubectl cluster-info returns the url response but you can't access your cluster,
to check whether it is configured properly, use:
-->
例如,如果你想在自己的笔记本上(本地)运行 Kubernetes 集群,你需要先安装一个 Minikube
例如,如果你想在自己的笔记本上(本地)运行 Kubernetes 集群,你需要先安装一个 [Minikube](https://minikube.sigs.k8s.io/docs/start/)
这样的工具,然后再重新运行上面的命令。
如果命令 `kubectl cluster-info` 返回了 URL但你还不能访问集群那可以用以下命令来检查配置是否妥当

View File

@ -98,12 +98,12 @@ The following methods exist for installing kubectl on Linux:
下载 kubectl 校验和文件:
{{< tabs name="download_checksum_linux" >}}
{{< tab name="x86-64" codelang="bash" >}}
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
{{< /tab >}}
{{< tab name="ARM64" codelang="bash" >}}
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl.sha256"
{{< /tab >}}
{{< /tabs >}}
@ -225,6 +225,7 @@ Or use this for detailed view of version:
# 如果 `/etc/apt/keyrings` 目录不存在,则应在 curl 命令之前创建它,请阅读下面的注释。
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring
```
{{< note >}}
@ -246,23 +247,22 @@ In releases older than Debian 12 and Ubuntu 22.04, folder `/etc/apt/keyrings` do
```shell
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list # helps tools such as command-not-found to work correctly
```
-->
```shell
# 这会覆盖 /etc/apt/sources.list.d/kubernetes.list 中的所有现存配置
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/{{< param "version" >}}/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list # 有助于让诸如 command-not-found 等工具正常工作
```
{{< note >}}
<!--
To upgrade kubectl to another minor release, you'll need to bump the version in
`/etc/apt/sources.list.d/kubernetes.list` before running `apt-get update` and
`apt-get upgrade`. This procedure is described in more detail in
[Changing The Kubernetes Package Repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/).
-->
要升级 kubectl 到别的次要版本,你需要先升级 `/etc/apt/sources.list.d/kubernetes.list` 中的版本,
再运行 `apt-get update``apt-get upgrade`
更详细的步骤可以在[更改 Kubernetes 软件包仓库](/zh-cn/docs/tasks/administer-cluster/kubeadm/change-package-repository/)中找到。
<!--
To upgrade kubectl to another minor release, you'll need to bump the version in `/etc/apt/sources.list.d/kubernetes.list` before running `apt-get update` and `apt-get upgrade`. This procedure is described in more detail in [Changing The Kubernetes Package Repository](/docs/tasks/administer-cluster/kubeadm/change-package-repository/).
-->
要将 kubectl 升级到别的次要版本,你需要先升级 `/etc/apt/sources.list.d/kubernetes.list` 中的版本,
再运行 `apt-get update``apt-get upgrade` 命令。
更详细的步骤可以在[更改 Kubernetes 软件包存储库](/zh-cn/docs/tasks/administer-cluster/kubeadm/change-package-repository/)中找到。
{{< /note >}}
<!--

View File

@ -338,7 +338,8 @@ kubectl 为 Bash、Zsh、Fish 和 PowerShell 提供自动补全功能,可以
5. 安装插件后,清理安装文件:
```powershell
del kubectl-convert.exe kubectl-convert.exe.sha256
del kubectl-convert.exe
del kubectl-convert.exe.sha256
```
## {{% heading "whatsnext" %}}