{"id":1365,"date":"2021-06-13T22:01:57","date_gmt":"2021-06-13T20:01:57","guid":{"rendered":"https:\/\/wchmurze.cloud\/?p=1365"},"modified":"2021-06-13T22:12:23","modified_gmt":"2021-06-13T20:12:23","slug":"certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4","status":"publish","type":"post","link":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/","title":{"rendered":"Certified Kubernetes Administrator (CKA) krok po kroku &#8211; cz\u0119\u015b\u0107 4"},"content":{"rendered":"<h3>Pody statyczne (static pods), kube-scheduler i kontrola rozmieszczenia obiekt\u00f3w pod na w\u0119z\u0142ach klastra.<\/h3>\n<p>&nbsp;<\/p>\n<h3>Architektura<\/h3>\n<p>&nbsp;<\/p>\n<p>Czy istnieje mo\u017cliwo\u015b\u0107 uruchamiania obiekt\u00f3w pod tylko na jedynym w\u0119\u017ale klastra ? Czy mo\u017cna wyznaczy\u0107, na kt\u00f3rym w\u0119\u017ale zostanie uruchomiona nasza mikrous\u0142uga ? A mo\u017ce chcemy, by us\u0142ugi by\u0142y od siebie odseparowane albo by\u0142y uruchamiane jak najbli\u017cej siebie ?<\/p>\n<p>Co jest odpowiedzialne za kontrol\u0119 takiej konfiguracji. Wszelkie klastry Kubernetes z jakimi mamy do czynienia na egzaminie s\u0105 instalowane za pomoc\u0105 narz\u0119dzia <strong>kubeadm<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<p>Na pocz\u0105tku warto przypomnie\u0107 sobie architektur\u0119 Kubernetes z lotu ptaka<\/p>\n<p>&nbsp;<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/dwmkerr.com\/tips-for-cka\/images\/k8s-architecture.png\" alt=\"Kubernetes Architecture\" \/><\/p>\n<p>Po lewej stronie mamy wyszczeg\u00f3lnione komponenty <strong>control-plane<\/strong> (dawniej master) . <strong>Etcd, kube-controller-manager i kube-api-server i kube-scheduler<\/strong> s\u0105 dostarczane w postaci kontener\u00f3w. Po prawej stronie mamy komponenty, kt\u00f3re s\u0105 sk\u0142adow\u0105 w\u0119z\u0142\u00f3w typu <strong>worker<\/strong>.\u00a0 Wszelkie komponenty <strong>control-plane<\/strong>, opr\u00f3cz <strong>kubelet<\/strong>, kt\u00f3ry jest plikiem binarnym odpowiedzialnym za tworzenie i kontrol\u0119 kontener\u00f3w na danym w\u0119\u017ale, s\u0105 w tym rozwi\u0105zaniu dostarczane w postaci skonteneryzowanej.<\/p>\n<p>Patrz\u0105c literalnie na powy\u017cszy obraz mo\u017cna doj\u015b\u0107 do wniosku ze <strong>control-plane<\/strong> nie zawiera <strong>kubelet<\/strong> i <strong>kube-prox<\/strong>y. Nic bardziej b\u0142\u0119dnego. W\u0119ze\u0142 kontrolny mo\u017ce pe\u0142ni\u0107 rol\u0119 w\u0119z\u0142a typu <strong>worker<\/strong>. Tak dzia\u0142a np. popularny <strong>minikube<\/strong>. Ale taka konfiguracja nie jest stosowana produkcyjnie, gdzie raczej separujemy cz\u0119\u015bci systemowe od aplikacji biznesowych.<\/p>\n<p>&nbsp;<\/p>\n<p>Klaster Kubernetes komunikuje si\u0119 ze \u015bwiatem zewn\u0119trznym przez komponent <strong>API-Server<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<p>Przyk\u0142ad wdro\u017cenia nowego obiektu pod na klastrze, pochodzi on z bloga <a href=\"https:\/\/medium.com\/@heptio\" target=\"_blank\" rel=\"noopener\">heptio<\/a> i doskonale pokazuje interakcj\u0119 mi\u0119dzy komponentami klastra.<\/p>\n<p>&nbsp;<\/p>\n<p><img decoding=\"async\" class=\"kg-image js-zoomable medium-zoom-image\" src=\"https:\/\/i.imgur.com\/MaONXzt.png\" alt=\"\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>Scheduler w Kubernetes jest niezale\u017cnym komponentem, co wi\u0119cej mo\u017ce by\u0107 ich wi\u0119cej ni\u017c jeden w danym klastrze. 
W manife\u015bcie obiektu mo\u017cna poda\u0107 nazw\u0119 schedulera, kt\u00f3ry ma przeprowadzi\u0107 ca\u0142y proces wyznaczenia w\u0142a\u015bciwego w\u0119z\u0142a.<\/p>\n<p>Opiszmy po kolei, w du\u017cym uproszczeniu, jak ten proces przebiega:<\/p>\n<p>Kiedy tworzymy nowy obiekt typu pod, jego manifest jest wysy\u0142any do <strong>API-Server<\/strong> i zapisywany przez niego w bazie <strong>etcd<\/strong>.<\/p>\n<p><strong>Scheduler<\/strong> odpytuje cyklicznie <strong>API-Server<\/strong>, czy istniej\u0105 obiekty pod, dla kt\u00f3rych nie okre\u015blono nazwy w\u0119z\u0142a. Obiekty takie trafiaj\u0105 do kolejki.<\/p>\n<p><strong>Scheduler<\/strong> pobiera definicj\u0119 takiego poda i rozpoczyna proces wyznaczenia w\u0119z\u0142a, na kt\u00f3ry ma on trafi\u0107.<\/p>\n<p>Proces taki sk\u0142ada si\u0119 z dw\u00f3ch g\u0142\u00f3wnych krok\u00f3w:<\/p>\n<p>a) filtrowania (<strong>filter<\/strong>). W tym miejscu wywo\u0142ywana jest grupa funkcji przyjmuj\u0105cych jako argumenty identyfikator obiektu pod i identyfikator w\u0119z\u0142a. Funkcje takie zwracaj\u0105 dla danej pary tylko warto\u015bci prawdy lub fa\u0142szu.<\/p>\n<p>b) punktacji (<strong>scoring<\/strong>). W tym miejscu wywo\u0142ywana jest grupa funkcji przyjmuj\u0105cych jako argumenty identyfikator obiektu pod i identyfikator w\u0119z\u0142a. Funkcje takie zwracaj\u0105 ranking dla danej pary w postaci liczby.<\/p>\n<p>Na ko\u0144cu wybierany jest w\u0119ze\u0142, dla kt\u00f3rego uda\u0142o si\u0119 ustali\u0107 dopasowanie z najwi\u0119ksz\u0105 punktacj\u0105.<\/p>\n<p>Do <strong>API-Server<\/strong> wysy\u0142any jest komunikat z informacj\u0105, kt\u00f3ry w\u0119ze\u0142 zosta\u0142 wybrany. <strong>API-Server<\/strong> zleca ustawienie w <strong>etcd<\/strong> pola <code>pod.Spec.NodeName<\/code>.<\/p>\n<p><strong>Kubelet<\/strong> danego w\u0119z\u0142a zabiera si\u0119 do pracy, tworzy i kontroluje mikrous\u0142ug\u0119 i raportuje zwrotnie do <strong>API-Server<\/strong> o statusie swojej pracy.<\/p>\n<p><strong>API-Server<\/strong> zleca zapis konfiguracji do bazy <strong>etcd<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<p>To, co nale\u017cy zapami\u0119ta\u0107:<\/p>\n<ul>\n<li>Jedynym komponentem klastra komunikuj\u0105cym si\u0119 z <strong>etcd<\/strong> jest <strong>API-Server.<\/strong><\/li>\n<li>Jedynym komponentem systemowym klastra, kt\u00f3ry przechowuje stan, jest baza danych <strong>etcd<\/strong>.<\/li>\n<li>Jedynym komponentem klastra, kt\u00f3ry nie mo\u017ce by\u0107 skonteneryzowany, jest <strong>kubelet.
<\/strong><\/li>\n<\/ul>\n<p>Ten ostatni jest odpowiedzialny za komunikacj\u0119 z silnikiem konteneryzacji, kt\u00f3rym nie musi by\u0107 <strong>docker<\/strong>, ale np <strong>containerd<\/strong> lub <strong>cri-o<\/strong>.\u00a0 Kubelet kontroluje stan mikrous\u0142ug, kt\u00f3re utworzy\u0142, weryfikuje na bie\u017c\u0105co wykorzystanie zasob\u00f3w i w razie potrzeby restartuje je.<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/www.tutorialworks.com\/difference-docker-containerd-runc-crio-oci\" target=\"_blank\" rel=\"noopener\">https:\/\/www.tutorialworks.com\/difference-docker-containerd-runc-crio-oci<\/a>\/<\/p>\n<p>&nbsp;<\/p>\n<p>Nie wszystkie obiekty wdra\u017cane na klastrze Kubernetes korzystaj\u0105 z kube-scheduler. Pom\u00f3wmy o dw\u00f3ch przypadkach<\/p>\n<p>&nbsp;<\/p>\n<ul>\n<li>Obiekt kontroluj\u0105cy obiety pod\u00a0 typu daemonset<\/li>\n<li>Pody kontrolowane przez kubelet danego w\u0119z\u0142a, czyli pody statyczne (static pods)<\/li>\n<\/ul>\n<p>Obiekt daemonset stara si\u0119 wdro\u017cy\u0107 na klastrze stan, w kt\u00f3rym na ka\u017cdym w\u0119\u017ale pojawi si\u0119 dok\u0142adnie jedna instancja obiektu pod.<\/p>\n<p>Je\u017celi chcemy wdro\u017cy\u0107 te obiekty r\u00f3wnie\u017c na w\u0119z\u0142ach control-plane, to nale\u017cy doda\u0107 w manife\u015bcie tolerancj\u0119<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">spec:\r\n  template:\r\n    spec:\r\n      tolerations:\r\n      # this toleration is to have the daemonset runnable on master nodes\r\n      # remove it if your masters can't run pods\r\n      - key: node-role.kubernetes.io\/master\r\n        operator: Exists\r\n        effect: NoSchedule<\/pre>\n<p>Om\u00f3wimy to dok\u0142adniej w cz\u0119\u015bci praktycznej<\/p>\n<p>&nbsp;<\/p>\n<h3>Kubelet<\/h3>\n<p>Zobaczmy stan us\u0142ugi <strong>kubelet<\/strong> na dowolnym w\u0119\u017ale (control-plane czy worker). Podczas egzaminu mamy do czynienia z dystrybucj\u0105 Ubuntu i klastrami Kubernetes zainstalowanymi za pomoc\u0105 narz\u0119dzie <strong>kubeadm<\/strong>. Takie klastry maj\u0105 zainstalowane komponenty systemowe w postaci static pods.<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">systemctl status kubelet<\/pre>\n<p>Przyk\u0142adowy rezultat<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">\u25cf kubelet.service - kubelet: The Kubernetes Node Agent\r\nLoaded: loaded (\/lib\/systemd\/system\/kubelet.service; enabled; vendor preset: enabled)\r\nDrop-In: \/etc\/systemd\/system\/kubelet.service.d\r\n\u2514\u250010-kubeadm.conf\r\nActive: active (running) since Fri 2021-06-04 10:52:09 UTC; 1min 25s ago\r\nDocs: https:\/\/kubernetes.io\/docs\/home\/\r\nMain PID: 6842 (kubelet)\r\nTasks: 15 (limit: 2336)<\/pre>\n<p>Je\u015bli stan nie jest <strong>active<\/strong> to mamy problem i taki w\u0119ze\u0142 jest traktowany przez klaster jako notReady. 
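<\/p>\n<p>Z poziomu klastra (o ile <strong>API-Server<\/strong> odpowiada) taki w\u0119ze\u0142 \u0142atwo wychwyci\u0107 poleceniem <strong>kubectl get nodes<\/strong>. Pogl\u0105dowy szkic, warto\u015bci przyk\u0142adowe:<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get nodes\r\n\r\nNAME           STATUS     ROLES    AGE   VERSION\r\ncontrolplane   Ready      master   21m   v1.19.0\r\nnode01         NotReady   &lt;none&gt;   20m   v1.19.0<\/pre>\n<p>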
To tyle tytu\u0142em wst\u0119pu do troubleshooting.<\/p>\n<p>Je\u015bli chcemy si\u0119 wi\u0119cej dowiedzie\u0107 o problemach z kubelet, mo\u017cemy skorzysta\u0107 <strong>journactl<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">journalctl -u kubelet\r\n<\/pre>\n<p>Przyk\u0142adowe logi<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">-- Logs begin at Fri 2021-06-04 10:50:32 UTC, end at Fri 2021-06-04 10:54:42 UTC. --\r\nJun 04 10:50:32 controlplane systemd[1]: Started kubelet: The Kubernetes Node Agent.\r\nJun 04 10:50:33 controlplane kubelet[396]: F0604 10:50:33.828046     396 server.go:199] failed to load Kubelet c\r\nJun 04 10:50:33 controlplane systemd[1]: kubelet.service: Main process exited, code=exited, status=255\/n\/a\r\nJun 04 10:50:33 controlplane systemd[1]: kubelet.service: Failed with result 'exit-code'.\r\nJun 04 10:50:35 controlplane systemd[1]: Stopped kubelet: The Kubernetes Node Agent.\r\nJun 04 10:50:38 controlplane systemd[1]: Started kubelet: The Kubernetes Node Agent.<\/pre>\n<p>&nbsp;<\/p>\n<p>Gdzie <strong>kubelet<\/strong> trzyma konfiguracj\u0119 ?<\/p>\n<p>&nbsp;<\/p>\n<p>Sp\u00f3jrzmy na definicje pliku <strong>\/etc\/systemd\/system\/kubelet.service.d\/10-kubeadm.conf<\/strong><\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">[Service]\r\nEnvironment=\"KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=\/etc\/kubernetes\/bootstrap-kubelet.conf --kubeconfig=\/etc\/kubernetes\/kubelet.conf\"\r\nEnvironment=\"KUBELET_CONFIG_ARGS=--config=\/var\/lib\/kubelet\/config.yaml\"\r\n# This is a file that \"kubeadm init\" and \"kubeadm join\" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically\r\nEnvironmentFile=-\/var\/lib\/kubelet\/kubeadm-flags.env\r\n# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use\r\n# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. 
KUBELET_EXTRA_ARGS should be sourced from this file.\r\nEnvironmentFile=-\/etc\/default\/kubelet\r\nExecStart=\r\nExecStart=\/usr\/bin\/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS<\/pre>\n<p>To co nas interesuje, to plik konfiguracyjny <strong>\/var\/lib\/kubelet\/config.yaml<\/strong><\/p>\n<p>Zobaczmy jego zawarto\u015b\u0107<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: kubelet.config.k8s.io\/v1beta1\r\nauthentication:\r\n  anonymous:\r\n    enabled: false\r\n  webhook:\r\n    cacheTTL: 0s\r\n    enabled: true\r\n  x509:\r\n    clientCAFile: \/etc\/kubernetes\/pki\/ca.crt\r\nauthorization:\r\n  mode: Webhook\r\n  webhook:\r\n    cacheAuthorizedTTL: 0s\r\n    cacheUnauthorizedTTL: 0s\r\nclusterDNS:\r\n- 10.96.0.10\r\nclusterDomain: cluster.local\r\ncpuManagerReconcilePeriod: 0s\r\nevictionPressureTransitionPeriod: 0s\r\nfileCheckFrequency: 0s\r\nhealthzBindAddress: 127.0.0.1\r\nhealthzPort: 10248\r\nhttpCheckFrequency: 0s\r\nimageMinimumGCAge: 0s\r\nkind: KubeletConfiguration\r\nnodeStatusReportFrequency: 0s\r\nnodeStatusUpdateFrequency: 0s\r\nrotateCertificates: true\r\nruntimeRequestTimeout: 0s\r\nstaticPodPath: \/etc\/kubernetes\/manifests\r\nstreamingConnectionIdleTimeout: 0s\r\nsyncFrequency: 0s\r\nvolumeStatsAggPeriod: 0s<\/pre>\n<p>Jak wida\u0107 konfiguracja jest w postaci pliku YAML<\/p>\n<p>W kontek\u015bcie obiekt\u00f3w static pod nale\u017cy zwr\u00f3ci\u0107 uwag\u0119 na zmienn\u0105 <strong>staticPodPath<\/strong><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">staticPodPath: \/etc\/kubernetes\/manifests<\/pre>\n<p>Oczywi\u015bcie mo\u017cemy zapami\u0119ta\u0107 ten\u00a0 adres, z tym, \u017ce mo\u017cna czasem trafi\u0107 na zadanie, gdzie albo nie ma jej skonfigurowanej, albo wskazuje na inny i nieoczywisty katalog.<\/p>\n<p>&nbsp;<\/p>\n<p>Jak w tym schemacie\u00a0 mamy\u00a0 rozumie\u0107, czym w\u0142a\u015bciwie s\u0105 obiekty static pod.<\/p>\n<p>&nbsp;<\/p>\n<h3>Konserwacja<\/h3>\n<p>&nbsp;<\/p>\n<p>Wyobra\u017amy sobie sytuacj\u0119, gdy jeden z w\u0119z\u0142\u00f3w ulega awarii lub chcemy go podda\u0107 konserwacji (instalacja nowych wersji pakiet\u00f3w, niwelacja luk bezpiecze\u0144stwa. itp&#8230;).<\/p>\n<p>Symulacj\u0119 takiej awarii mo\u017cna przeprowadzi\u0107 za pomoc\u0105 polecenia <strong>kubectl drain node-name<\/strong>.<\/p>\n<p>Obiekty pod, kt\u00f3re zosta\u0142y pouk\u0142adane na w\u0119\u017ale przez scheduler zostan\u0105 z takiego w\u0119z\u0142a eksmitowane (evicted) , ale nie dotyczy to static pods i nie dotyczy obiekt\u00f3w kontrolowanych przez DaemonSet. 
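<\/p>\n<p>Przy pr\u00f3bie opr\u00f3\u017cnienia w\u0119z\u0142a, na kt\u00f3rym dzia\u0142aj\u0105 obiekty pod kontrolowane przez DaemonSet, samo <strong>kubectl drain<\/strong> zako\u0144czy si\u0119 b\u0142\u0119dem. Pogl\u0105dowy szkic (nazwy pod\u00f3w i dok\u0142adna tre\u015b\u0107 komunikatu zale\u017c\u0105 od klastra i wersji kubectl):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl drain node01\r\n\r\nnode\/node01 cordoned\r\nerror: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system\/kube-flannel-ds-amd64-t5shx, kube-system\/kube-proxy-pjxzf<\/pre>\n<p>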
Je\u015bli na danym w\u0119\u017ale mamy obiekty pod w takiej konfiguracji to jawnie musimy poda\u0107, \u017ce nale\u017cy je zignorowa\u0107<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">kubectl drain node-name --ignore-daemonsets<\/pre>\n<p>&nbsp;<\/p>\n<p>O tym postaram si\u0119 dok\u0142adniej napisa\u0107 podczas \u0107wicze\u0144 dotycz\u0105cych instalacji i upgrade klastra za pomoc\u0105 narz\u0119dzie kubeadm.<\/p>\n<p>&nbsp;<\/p>\n<p>Zobaczmy jak wygl\u0105da plik manifestu <strong>kube-scheduler.yaml<\/strong><\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">cat \/etc\/kubernetes\/manifests\/kube-scheduler.yaml\r\n\r\n<\/pre>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    component: kube-scheduler\r\n    tier: control-plane\r\n  name: kube-scheduler\r\n  namespace: kube-system\r\nspec:\r\n  containers:\r\n  - command:\r\n    - kube-scheduler\r\n    - --authentication-kubeconfig=\/etc\/kubernetes\/scheduler.conf\r\n    - --authorization-kubeconfig=\/etc\/kubernetes\/scheduler.conf\r\n    - --bind-address=127.0.0.1\r\n    - --kubeconfig=\/etc\/kubernetes\/scheduler.conf\r\n    - --leader-elect=true\r\n    image: k8s.gcr.io\/kube-scheduler:v1.18.0\r\n    imagePullPolicy: IfNotPresent\r\n    livenessProbe:\r\n      failureThreshold: 8\r\n      httpGet:\r\n        host: 127.0.0.1\r\n        path: \/healthz\r\n        port: 10259\r\n        scheme: HTTPS\r\n      initialDelaySeconds: 15\r\n      timeoutSeconds: 15\r\n    name: kube-scheduler\r\n    resources:\r\n      requests:\r\n        cpu: 100m\r\n    volumeMounts:\r\n    - mountPath: \/etc\/kubernetes\/scheduler.conf\r\n      name: kubeconfig\r\n      readOnly: true\r\n  hostNetwork: true\r\n  priorityClassName: system-cluster-critical\r\n  volumes:\r\n  - hostPath:\r\n      path: \/etc\/kubernetes\/scheduler.conf\r\n      type: FileOrCreate\r\n    name: kubeconfig\r\nstatus: {}<\/pre>\n<p>&nbsp;<\/p>\n<p>To na co nale\u017cy zwr\u00f3ci\u0107 uwag\u0119 to nazwa obiektu w obszarze metadata. <strong>name: kube-scheduler, <\/strong>obiekt jest wdro\u017cony w przestrzeni nazw <strong>kube-system <\/strong>i nie ma wype\u0142nionego znacznika <strong>nodeName<\/strong>. 
Jest wykorzystana sie\u0107 hosta <strong>hostNetwork: true<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<p>To co widzimy z poziomu API-Server w przestrzeni nazw <strong>kube-scheduler<\/strong><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">kubectl get pod\u00a0 -n kube-system<\/pre>\n<p>Przyk\u0142adowa lista obiekt\u00f3w:<strong><br \/>\n<\/strong><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">NAME                                       READY   STATUS      RESTARTS   AGE\r\ncoredns-66bff467f8-bmwfk                   1\/1     Running     0          15m\r\ncoredns-66bff467f8-vq78s                   1\/1     Running     0          15m\r\netcd-controlplane                          1\/1     Running     0          14m\r\nkube-apiserver-controlplane                1\/1     Running     0          14m\r\nkube-controller-manager-controlplane       1\/1     Running     0          14m\r\nkube-flannel-ds-amd64-55njh                1\/1     Running     0          15m\r\nkube-flannel-ds-amd64-t5shx                1\/1     Running     0          15m\r\nkube-keepalived-vip-fplqp                  1\/1     Running     0          14m\r\nkube-proxy-886tw                           1\/1     Running     0          15m\r\nkube-proxy-pjxzf                           1\/1     Running     0          15m\r\nkube-scheduler-controlplane                1\/1     Running     0          15m\r\nmetrics-server-85f8dd85fd-msnhf            1\/1     Running     0          15m<\/pre>\n<p>Pod <strong>kube-scheduler<\/strong> ma zmienion\u0105 nazw\u0119 i jest tak zwany mirror pod o nazwie kube-scheduler_<strong>node_name<\/strong><\/p>\n<p>Jak wygl\u0105da jego manifest, po usuni\u0119ciu cz\u0119\u015bci danych kt\u00f3re s\u0105\u00a0 wstrzykni\u0119te przez klaster ?<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  annotations:\r\n    kubernetes.io\/config.hash: 74e2b561ea40b4f834f9854608d559d4\r\n    kubernetes.io\/config.mirror: 74e2b561ea40b4f834f9854608d559d4\r\n    kubernetes.io\/config.seen: \"2021-06-04T11:01:30.654339598Z\"\r\n    kubernetes.io\/config.source: file\r\n  labels:\r\n    component: kube-scheduler\r\n    tier: control-plane\r\n  name: kube-scheduler-controlplane\r\n  namespace: kube-system\r\nspec:\r\n  containers:\r\n  - command:\r\n    - kube-scheduler\r\n    - --authentication-kubeconfig=\/etc\/kubernetes\/scheduler.conf\r\n    - --authorization-kubeconfig=\/etc\/kubernetes\/scheduler.conf\r\n    - --bind-address=127.0.0.1\r\n    - --kubeconfig=\/etc\/kubernetes\/scheduler.conf\r\n    - --leader-elect=true\r\n    image: k8s.gcr.io\/kube-scheduler:v1.18.0\r\n    livenessProbe:\r\n      failureThreshold: 8\r\n      httpGet:\r\n        host: 127.0.0.1\r\n        path: \/healthz\r\n        port: 10259\r\n        scheme: HTTPS\r\n      initialDelaySeconds: 15\r\n      timeoutSeconds: 15\r\n    name: kube-scheduler\r\n    resources:\r\n      requests:\r\n        cpu: 100m\r\n    volumeMounts:\r\n    - mountPath: \/etc\/kubernetes\/scheduler.conf\r\n      name: kubeconfig\r\n      readOnly: true\r\n  hostNetwork: true\r\n  nodeName: controlplane\r\n  priority: 2000000000\r\n  priorityClassName: system-cluster-critical\r\n  tolerations:\r\n  - effect: NoExecute\r\n    operator: Exists\r\n  volumes:\r\n  - hostPath:\r\n      path: \/etc\/kubernetes\/scheduler.conf\r\n      type: FileOrCreate\r\n    name: kubeconfig<\/pre>\n<p>I tutaj widzimy dodane <strong>nodeName: controlplane<\/strong> i 
zmieniona nazw\u0119 name: <strong>kube-scheduler-controlplane<\/strong>. To co nale\u017cy zapami\u0119ta\u0107, to brak mo\u017cliwo\u015bci tworzenia, usuwania i modyfikacji takich obiekt\u00f3w z poziomu <strong>kubectl<\/strong>, czyli przez <strong>API-Server<\/strong>. Takie obiekty s\u0105 zmieniane jedynie przez wrzucenie nowego pliku, jego modyfikacj\u0119 lub usuni\u0119cie z katalogu, w kt\u00f3rym <strong>kubelet<\/strong> danego w\u0119z\u0142a kontroluje zawarto\u015b\u0107 zgodnie ze \u015bcie\u017ck\u0105 <strong>staticPodPath<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<p>Jak w praktyce odr\u00f3\u017cni\u0107 static pod bez logowania si\u0119 na ka\u017cdy w\u0119ze\u0142 i ogl\u0105dania konfiguracji <strong>kubelet<\/strong>?<\/p>\n<p>&nbsp;<\/p>\n<p>Zacznijmy od listy w\u0119z\u0142\u00f3w<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get nodes<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">NAME STATUS ROLES AGE VERSION\r\ncontrolplane Ready master 21m v1.19.0\r\nnode01 Ready &lt;none&gt; 20m v1.19.0<\/pre>\n<p>Potem obejrzyjmy list\u0119 obiekt\u00f3w <strong>pod<\/strong> w przestrzeni nazw <strong>kube-system<\/strong><\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get pod -n kube-system<\/pre>\n<p>Lista obiekt\u00f3w pod w przestrzeni nazw kube-system<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">NAME                                       READY   STATUS      RESTARTS   AGE\r\ncoredns-66bff467f8-bmwfk                   1\/1     Running     0          39m\r\ncoredns-66bff467f8-vq78s                   1\/1     Running     0          39m\r\netcd-controlplane                          1\/1     Running     0          38m\r\nkube-apiserver-controlplane                1\/1     Running     0          38m\r\nkube-controller-manager-controlplane       1\/1     Running     0          38m\r\nkube-flannel-ds-amd64-55njh                1\/1     Running     0          39m\r\nkube-flannel-ds-amd64-t5shx                1\/1     Running     0          39m\r\nkube-keepalived-vip-fplqp                  1\/1     Running     0          38m\r\nkube-proxy-886tw                           1\/1     Running     0          39m\r\nkube-proxy-pjxzf                           1\/1     Running     0          39m\r\nkube-scheduler-controlplane                1\/1     Running     0          29m\r\nmetrics-server-85f8dd85fd-msnhf            1\/1     Running     0          39m<\/pre>\n<p>Naj\u0142atwiej jest zauwa\u017cy\u0107, \u017ce sufiksem nazwy obiektu jest nazwa w\u0119z\u0142a (w naszym przypadku jest to <strong>controlplane<\/strong>).<\/p>\n<p>Ale je\u015bli mamy klaster, kt\u00f3ry ma wiele w\u0119z\u0142\u00f3w, ten spos\u00f3b mo\u017ce si\u0119 okaza\u0107 ma\u0142o efektywny.<\/p>\n<p>&nbsp;<\/p>\n<p>Innym sposobem jest weryfikacja obiektu kontroluj\u0105cego dany obiekt pod i mo\u017cna wykorzysta\u0107 pole, kt\u00f3re wskazuje jaki obiekt nadrz\u0119dny jest tym kt\u00f3ry kontroluje jego stan.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl describe pod -n kube-system | grep \"Controlled By:\"<\/pre>\n<p>Przyk\u0142adowy rezultat<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" 
data-enlighter-linenumbers=\"false\">Controlled By:  ReplicaSet\/coredns-66bff467f8\r\nControlled By:  ReplicaSet\/coredns-66bff467f8\r\nControlled By:  Node\/controlplane\r\nControlled By:  ReplicaSet\/katacoda-cloud-provider-6bc7d5d9ff\r\nControlled By:  Node\/controlplane\r\nControlled By:  Node\/controlplane\r\nControlled By:  DaemonSet\/kube-flannel-ds-amd64\r\nControlled By:  DaemonSet\/kube-flannel-ds-amd64\r\nControlled By:  DaemonSet\/kube-keepalived-vip\r\nControlled By:  DaemonSet\/kube-proxy\r\nControlled By:  DaemonSet\/kube-proxy\r\nControlled By:  Node\/controlplane\r\nControlled By:  ReplicaSet\/metrics-server-85f8dd85fd\r\n<\/pre>\n<p>\u0141atwo zauwa\u017cy\u0107, \u017ce mamy obiekty kontrolowane przez DaemonSet,ReplicaSet (i w konsekwencji przez obiekt Deployment), ale pojawiaj\u0105 si\u0119 te\u017c obiekty kontrolowane przez Node\/nazwa-w\u0119z\u0142a)<\/p>\n<p>&nbsp;<\/p>\n<p>Dane takie mo\u017cna te\u017c wy\u015bwietli\u0107 w bardziej eleganckiej formie wykorzystuj\u0105c, <strong>kubectl get (&#8230;) -o custom-columns<\/strong>.<\/p>\n<p>Wykorzystaniu dok\u0142adniej <strong>custom-columns<\/strong> i <strong>jsonpath<\/strong> b\u0119dzie dedykowany inny odcinek, tu mamy tylko ma\u0142y przedsmak wykorzystania<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/stackoverflow.com\/questions\/43225591\/kubernetes-custom-columns-select-element-from-array\" target=\"_blank\" rel=\"noopener\">https:\/\/stackoverflow.com\/questions\/43225591\/kubernetes-custom-columns-select-element-from-array<\/a><\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/reference\/kubectl\/jsonpath\/\" target=\"_blank\" rel=\"noopener\">https:\/\/kubernetes.io\/docs\/reference\/kubectl\/jsonpath\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,CONTROLLER:.metadata.ownerReferences[].kind,NAMESPACE:.metadata.namespace\r\n<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">NAME                                       CONTROLLER   NAMESPACE\r\ncoredns-66bff467f8-bmwfk                   ReplicaSet   kube-system\r\ncoredns-66bff467f8-vq78s                   ReplicaSet   kube-system\r\netcd-controlplane                          Node         kube-system\r\nkatacoda-cloud-provider-6bc7d5d9ff-t4zr2   ReplicaSet   kube-system\r\nkube-apiserver-controlplane                Node         kube-system\r\nkube-controller-manager-controlplane       Node         kube-system\r\nkube-flannel-ds-amd64-55njh                DaemonSet    kube-system\r\nkube-flannel-ds-amd64-t5shx                DaemonSet    kube-system\r\nkube-keepalived-vip-fplqp                  DaemonSet    kube-system\r\nkube-proxy-886tw                           DaemonSet    kube-system\r\nkube-proxy-pjxzf                           DaemonSet    kube-system\r\nkube-scheduler-controlplane                Node         kube-system\r\nmetrics-server-85f8dd85fd-msnhf            ReplicaSet   kube-system\r\n<\/pre>\n<p>I dodatkowo mo\u017cna odfiltrowa\u0107 zwracany wynik, tylko dla obiekt\u00f3w static pod.<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\" data-enlighter-theme=\"godzilla\" data-enlighter-linenumbers=\"false\">kubectl get pods --all-namespaces -o custom-columns=NAME:.metadata.name,CONTROLLER:.metadata.ownerReferences[].kind,NAMESPACE:.metadata.namespace |grep 
Node\r\n<\/pre>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"shell\" data-enlighter-linenumbers=\"false\">etcd-controlplane Node kube-system\r\nkube-apiserver-controlplane Node kube-system\r\nkube-controller-manager-controlplane Node kube-system\r\nkube-scheduler-controlplane Node kube-system<\/pre>\n<p>&nbsp;<\/p>\n<p>Dosy\u0107 teorii, zabierzmy si\u0119 za cz\u0119\u015b\u0107 praktyczn\u0105<\/p>\n<p>&nbsp;<\/p>\n<h3>\u0106wiczenia<\/h3>\n<p>&nbsp;<\/p>\n<h4>Zadanie pierwsze<\/h4>\n<p>&nbsp;<\/p>\n<p>Utw\u00f3rz obiekt typu pod o nazwie\u00a0 <strong>static-nginx-node01<\/strong> zawieraj\u0105cy obraz <strong>nginx<\/strong> pracuj\u0105cy na porcie <strong>80<\/strong>. Umie\u015b\u0107 go w przestrzeni nazw <strong>gamma<\/strong> i na w\u0119\u017ale o nazwie <strong>node01<\/strong>. Obiekt powinien by\u0107 kontrolowany przez API-Server. Je\u017celi przestrze\u0144 nazw nie istnieje nale\u017cy j\u0105 utworzy\u0107.<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl run static-nginx-node01 --image=nginx --port=80 -o yaml --dry-run=client &gt; 01.pod.static-nginx-node01.yaml<\/pre>\n<p>Zawarto\u015b\u0107 naszego manifestu<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    run: static-nginx-node01\r\n  name: static-nginx-node01\r\nspec:\r\n  containers:\r\n  - image: nginx\r\n    name: static-nginx-node01\r\n    ports:\r\n    - containerPort: 80\r\n    resources: {}\r\n  dnsPolicy: ClusterFirst\r\n  restartPolicy: Always\r\nstatus: {}<\/pre>\n<p>&nbsp;<\/p>\n<p>W jaki spos\u00f3b umie\u015bci\u0107 pod na w\u0119\u017ale <strong>node01<\/strong> ?<\/p>\n<p>&nbsp;<\/p>\n<p>Zacznijmy od najprostszego rozwi\u0105\u017cania, nie mo\u017ce by\u0107 to static pod, gdy\u017c ma by\u0107 kontrolowany przez API-Server, ale mo\u017cemy wyr\u0119czy\u0107 <strong>kube-scheduler<\/strong> przez podanie jawnie nazwy w\u0119z\u0142a<strong> nodeName: node01<\/strong>.<\/p>\n<p>Po dodaniu odpowiedniej przestrzeni nazw (tutaj gamma) nasz manifest powinien wygl\u0105da\u0107 tak:<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    run: static-nginx-node01\r\n  name: static-nginx-node01\r\n  namespace: gamma\r\nspec:\r\n  containers:\r\n  - image: nginx\r\n    name: static-nginx-node01\r\n    ports:\r\n    - containerPort: 80\r\n    resources: {}\r\n  dnsPolicy: ClusterFirst\r\n  restartPolicy: Always\r\n  nodeName: node01\r\nstatus: {}<\/pre>\n<p>Wdra\u017camy nasz obiekt na klaster<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl apply -f 01.pod.static-nginx-node01.yaml<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">pod\/static-nginx-node01 created<\/pre>\n<p>Zobaczmy, czy zosta\u0142 umieszczony na odpowiednim w\u0119\u017ale, w tym celu mo\u017cemy wykorzysta\u0107 jako parametr polecenia <strong>kubectl get pod -o wide<\/strong><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get pod static-nginx-node01 -n gamma -o wide<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">NAME READY STATUS RESTARTS AGE IP 
NODE NOMINATED NODE READINESS GATES\r\nstatic-nginx-node01 1\/1 Running 0 3m4s 10.244.2.13 node01 &lt;none&gt; &lt;none&gt;<\/pre>\n<p>Wszystko wygl\u0105da poprawie, ale czy jest to jedyne rozwi\u0105zanie ?<\/p>\n<p>Co je\u015bli nie wype\u0142nimy nodeName ? Czy obiekt zostanie wdro\u017cony na inny w\u0119ze\u0142 ni\u017c <strong>node01<\/strong> ? W naszym przypadku mamy do czynienia z ma\u0142ym dwuw\u0119z\u0142owym klastrem.<\/p>\n<p>Bez podania nazwy wez\u0142a do akcji wkroczy <strong>kube-scheduler<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<p>Uwa\u017cna osoba zapyta si\u0119 dlaczego kaza\u0142em nazwa\u0107 obiekt z prefiksem static, a w zasadzie to nie jest to static pod. No wla\u015bnie dlatego, aby by\u0142o to mniej oczywiste i podnios\u0142o poziom \u0107wiczenia.<\/p>\n<p>&nbsp;<\/p>\n<h4>Zadanie drugie<\/h4>\n<p>&nbsp;<\/p>\n<p>Utw\u00f3rz obiekt typu pod o nazwie\u00a0 <strong>static-nginx-controlplane<\/strong> zawieraj\u0105cy obraz <strong>nginx<\/strong> pracuj\u0105cy na porcie <strong>80<\/strong> . Umie\u015b\u0107 go w przestrzeni nazw <strong>gamma<\/strong> i na w\u0119\u017ale o nazwie <strong>controlplane<\/strong>. Obiekt\u00a0 powinien by\u0107 kontrolowany przez API-Server. Je\u017celi przestrze\u0144 nazw nie istnieje nale\u017cy j\u0105 utworzy\u0107. <strong>Kube-scheduler<\/strong> nie powinien bra\u0107 udzia\u0142u w ustaleniu docelowego w\u0119z\u0142a.<\/p>\n<p>&nbsp;<\/p>\n<p>To zadanie jest praktycznie tym samym co zadanie pierwsze, ale warto przetestowa\u0107 jak dzia\u0142a nodeName dla\u00a0 w\u0119z\u0142a controlplane<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl run static-nginx-controlplane --image=nginx --port=80 -o yaml --dry-run=client &gt; 02.pod.static-nginx-controlplane.yaml<\/pre>\n<p>Po dodaniu odpowiedniej przestrzeni nazw nasz manifest powinien wygl\u0105da\u0107 tak<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    run: static-nginx-controlplane\r\n  name: static-nginx-controlplane\r\n  namespace: gamma\r\nspec:\r\n  containers:\r\n  - image: nginx\r\n    name: static-nginx-controlplane\r\n    ports:\r\n    - containerPort: 80\r\n    resources: {}\r\n  dnsPolicy: ClusterFirst\r\n  restartPolicy: Always\r\n  nodeName: controlplane\r\nstatus: {}<\/pre>\n<p>Wdra\u017camy nasz obiekt na klaster<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl apply -f 02.pod.static-nginx-controlplane.yaml<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">pod\/static-nginx-controlplane created<\/pre>\n<p>Zobaczmy, czy zosta\u0142 umieszczony na odpowiednim w\u0119\u017ale, w tym celu mo\u017cemy wykorzysta\u0107 jako parametr polecenia kubectl -o wide<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get pod  static-nginx-controlplane -n gamma -o wide<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\r\nstatic-nginx-controlplane 1\/1 Running 0 3m4s 10.244.2.13 controlplane &lt;none&gt; &lt;none&gt;<\/pre>\n<p>&nbsp;<\/p>\n<p>Jak wida\u0107 z powy\u017cszych przyk\u0142ad\u00f3w <strong>nodeName<\/strong> jest 
opcj\u0105 atomow\u0105, z tym \u017ce ma jedn\u0105 wad\u0119. Musimy zna\u0107 nazwy w\u0119z\u0142\u00f3w, a z tym bywa r\u00f3\u017cnie w zale\u017cno\u015bci od konfiguracji klastra.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h4>Zadanie trzecie<\/h4>\n<p>&nbsp;<\/p>\n<p>Utw\u00f3rz obiekt typu pod o nazwie\u00a0 <strong>static-nginx<\/strong> zawieraj\u0105cy obraz <strong>nginx<\/strong> pracuj\u0105cy na porcie <strong>80<\/strong> . Umie\u015b\u0107 go w przestrzeni nazw <strong>gamma<\/strong> i na w\u0119\u017ale o nazwie <strong>controlplane<\/strong>. Obiekt\u00a0 powinien by\u0107 kontrolowany przez <strong>API-Server<\/strong>. Je\u017celi przestrze\u0144 nazw nie istnieje nale\u017cy j\u0105 utworzy\u0107. <strong>Kube-scheduler<\/strong> powinien bra\u0107 udzia\u0142 w ustaleniu docelowego w\u0119z\u0142a. Nale\u017cy wykorzysta\u0107 mechanizm <strong>taints<\/strong> and <strong>tolerations<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<p>Zacznijmy od mechanizmu <strong>taints<\/strong> and <strong>tolerations<\/strong><\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl describe nodes | egrep \"Name:|Taints:\"\r\n<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">Name: controlplane\r\nTaints: node-role.kubernetes.io\/master:NoSchedule\r\nName: node01\r\nTaints: &lt;none&gt;<\/pre>\n<p>Standardowo, w\u0119z\u0142y kontroluj\u0105ce klaster maja ustawiony zapach (<strong>taint<\/strong>), kt\u00f3ry uniemo\u017cliwia utworzenia na nich obiekt\u00f3w typu pod. Istnieje taka mo\u017cliwo\u015b\u0107, ale wtedy nale\u017cy w definicji obiektu uwzgl\u0119dni\u0107 tolerancj\u0119 (<strong>toleration<\/strong>) na zapach danego w\u0119z\u0142a.<\/p>\n<p>&nbsp;<\/p>\n<p>W jaki spos\u00f3b mo\u017cna doda\u0107 zapach dla danego w\u0119z\u0142a ?<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl taint nodes controlplane key1=value1:NoSchedule<\/pre>\n<p>&nbsp;<\/p>\n<p>W jaki spos\u00f3b doda\u0107 toleracj\u0119 do obiektu pod?<\/p>\n<p>&nbsp;<\/p>\n<p>na poziomie spec.tolerations<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">tolerations:\r\n- key: \"key1\"\r\n  operator: \"Equal\"\r\n  value: \"value1\"\r\n  effect: \"NoSchedule\"<\/pre>\n<p>Gotowy manifest z dokumentacji Kubernetes wygl\u0105da tak<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nginx\r\n  labels:\r\n    env: test\r\nspec:\r\n  containers:\r\n  - name: nginx\r\n    image: nginx\r\n    imagePullPolicy: IfNotPresent\r\n  tolerations:\r\n  - key: \"example-key\"\r\n    operator: \"Exists\"\r\n    effect: \"NoSchedule\"<\/pre>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/\" target=\"_blank\" rel=\"noopener\">https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/taint-and-toleration\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl run nginx-pod-master-tolerations -n gamma --image=nginx:1.18.0 --port=80 -o yaml --dry-run=client &gt; 03.pod.nginx-pod-master-tolerations.yaml<\/pre>\n<p>Dodajemy do pliku z manifestem toleracj\u0119<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" 
data-enlighter-language=\"yaml\">tolerations:\r\n- key: \"node-role.kubernetes.io\/master\"\r\n  effect: \"NoSchedule\"\r\nnodeSelector:\r\n  whereareyou: master<\/pre>\n<p>&nbsp;<\/p>\n<p>Opr\u00f3cz dodania tolerancji niespodziewanie pojawi\u0142 si\u0119 <strong>nodeSelector<\/strong>. Dlaczego tak? To, \u017ce dany obiekt jest odporny na zapach w\u0119z\u0142a <strong>controlplane<\/strong>, nie oznacza, \u017ce musi by\u0107 wdro\u017cony na tym w\u0119\u017ale.<\/p>\n<p>&nbsp;<\/p>\n<p>Opcja <strong>nodeSelector<\/strong> pozwala wyj\u015b\u0107 poza ograniczenie <strong>nodeName<\/strong>, jakim jest konieczno\u015b\u0107 podania nazwy konkretnego hosta, i filtrowa\u0107 w\u0119z\u0142y po odpowiednich etykietach.<\/p>\n<p>&nbsp;<\/p>\n<p>Przyk\u0142adowy manifest z dokumentacji Kubernetes<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  name: nginx\r\n  labels:\r\n    env: test\r\nspec:\r\n  containers:\r\n  - name: nginx\r\n    image: nginx\r\n    imagePullPolicy: IfNotPresent\r\n  nodeSelector:\r\n    disktype: ssd<\/pre>\n<p>&nbsp;<\/p>\n<p>W naszym przyk\u0142adzie <strong>filtrujemy<\/strong> mo\u017cliwo\u015b\u0107 wdro\u017cenia obiektu pod tylko do w\u0119z\u0142\u00f3w, kt\u00f3re maj\u0105 ustawion\u0105 etykiet\u0119 o nazwie <strong>disktype<\/strong> i warto\u015bci <strong>ssd<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-10040948 size-large\" src=\"https:\/\/cdn.thenewstack.io\/media\/2020\/01\/6f288b5f-np-aff-1024x447.png\" alt=\"\" width=\"640\" height=\"279\" data-id=\"10040948\" \/><\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/\" target=\"_blank\" rel=\"noopener\">https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p>A co si\u0119 stanie, je\u015bli \u017caden w\u0119ze\u0142 nie b\u0119dzie mia\u0142 etykiety pasuj\u0105cej do <strong>nodeSelector<\/strong>? 
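Pod pozostanie wtedy w stanie <strong>Pending<\/strong>, a w zdarzeniach zobaczymy <strong>FailedScheduling<\/strong>. Pogl\u0105dowy szkic (nazwa poda przyk\u0142adowa, pokazany tylko fragment zdarze\u0144, dok\u0142adna tre\u015b\u0107 komunikatu zale\u017cy od wersji klastra):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl describe pod nginx\r\n\r\nEvents:\r\n  Type     Reason            Age   From               Message\r\n  ----     ------            ----  ----               -------\r\n  Warning  FailedScheduling  10s   default-scheduler  0\/2 nodes are available: 2 node(s) didn't match node selector.<\/pre>\n<p>&nbsp;<\/p>\n<p>Wracamy do naszego zadania. 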
Manifest powinien wygl\u0105da\u0107 tak<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    run: nginx-pod-master-tolerations\r\n  name: nginx-pod-master-tolerations\r\n  namespace: gamma\r\nspec:\r\n  containers:\r\n  - image: nginx:1.18.0\r\n    name: nginx-pod-master-tolerations\r\n    ports:\r\n    - containerPort: 80\r\n  dnsPolicy: ClusterFirst\r\n  restartPolicy: Always\r\n  tolerations:\r\n  - key: \"node-role.kubernetes.io\/master\"\r\n    effect: \"NoSchedule\"\r\n  nodeSelector:\r\n    whereareyou: master<\/pre>\n<p>Wdra\u017camy nasz obiekt na klaster<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl apply -f \u00a003.pod.nginx-pod-master-tolerations.yaml<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">pod\/static-nginx-controlplane created<\/pre>\n<p>Zobaczmy, czy zosta\u0142 umieszczony na odpowiednim w\u0119\u017ale, w tym celu mo\u017cemy wykorzysta\u0107 jako parametr polecenia kubectl -o wide<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get pod  static-nginx-controlplane -n gamma -o wide<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\r\nstatic-nginx-controlplane 1\/1 Running 0 3m4s 10.244.2.13 controlplane &lt;none&gt; &lt;none&gt;<\/pre>\n<p>&nbsp;<\/p>\n<p>A gdyby\u015bmy jednak skorzystali z tolerancji i nie uzy\u0142i nodeSelector ?<\/p>\n<p>&nbsp;<\/p>\n<div>\n<div>Sprawdzmy etykiety na w\u0119z\u0142ach<\/div>\n<\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl\u00a0get\u00a0nodes\u00a0--show-labels<\/pre>\n<p>Mamy dwa w\u0119z\u0142y: <strong>controlplane<\/strong> i <strong>node01<\/strong>, a interesuje nas spos\u00f3b wyboru w\u0119z\u0142a bez podawania wprost jego nazwy.<\/p>\n<\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">NAME\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0STATUS\u00a0\u00a0\u00a0ROLES\u00a0\u00a0\u00a0\u00a0AGE\u00a0\u00a0\u00a0VERSION\u00a0\u00a0\u00a0LABELS\r\ncontrolplane\u00a0\u00a0\u00a0Ready\u00a0\u00a0\u00a0\u00a0master\u00a0\u00a0\u00a032m\u00a0\u00a0\u00a0v1.19.0\u00a0\u00a0\u00a0beta.kubernetes.io\/arch=amd64,beta.kubernetes.io\/os=linux,kubernetes.io\/arch=amd64,kubernetes.io\/hostname=controlplane,kubernetes.io\/os=linux,node-role.kubernetes.io\/master=,whereareyou=master\r\nnode01\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Ready\u00a0\u00a0\u00a0\u00a0&lt;none&gt;\u00a0\u00a0\u00a032m\u00a0\u00a0\u00a0v1.19.0\u00a0\u00a0\u00a0beta.kubernetes.io\/arch=amd64,beta.kubernetes.io\/os=linux,kubernetes.io\/arch=amd64,kubernetes.io\/hostname=node01,kubernetes.io\/os=linux,whereareyou=worker<\/pre>\n<p>Znajdzmy etykiety, kt\u00f3rymi si\u0119 r\u00f3\u017cn\u0105 oba w\u0119z\u0142y. 
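<\/p>\n<p>Przy d\u0142ugiej li\u015bcie etykiet wygodniej por\u00f3wna\u0107 je w formie kolumn. Szkic z u\u017cyciem <strong>custom-columns<\/strong> (nazwa etykiety i wynik przyk\u0142adowe):<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get nodes -o custom-columns=NAME:.metadata.name,WHEREAREYOU:.metadata.labels.whereareyou\r\n\r\nNAME           WHEREAREYOU\r\ncontrolplane   master\r\nnode01         worker<\/pre>\n<p>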
To co nas interesuje to etykiety <strong>whereareyou=master<\/strong> dla w\u0119z\u0142a <strong>controlplane<\/strong> i <strong>whereareyou=worker<\/strong> dla w\u0119z\u0142a <strong>node01<\/strong><\/p>\n<\/div>\n<div><\/div>\n<div>Je\u015bli nie mamy ustawionych etykiet na naszym klastrze mo\u017cna skorzysta\u0107 z polecenia <strong>kubectl label node<\/strong><\/div>\n<div><\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl label node controlplane whereareyou=master --overwrite\r\nkubectl label node node01 whereareyou=worker --overwrite<\/pre>\n<p>Opcja <strong>&#8211;overwrite<\/strong> nie zglosi bl\u0119du w przypadku, gdy dana etykieta jest juz nadana.<\/p>\n<\/div>\n<p>&nbsp;<\/p>\n<p>Bez ustawionego nodeSelector nasz manifest wygl\u0105da\u0142by tak<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    run: nginx-pod-master-test\r\n  name: nginx-pod-master-test\r\n  namespace: gamma\r\nspec:\r\n  containers:\r\n  - image: nginx:1.18.0\r\n    name: nginx-pod-master-test\r\n    ports:\r\n    - containerPort: 80\r\n  dnsPolicy: ClusterFirst\r\n  restartPolicy: Always\r\n   tolerations: \r\n   - key: \"node-role.kubernetes.io\/master\" \r\n     effect: \"NoSchedule\"\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>Sprobujmy wdro\u017cy\u0107 nasz manifest nieco inaczej, korzystaj\u0105\u0107 z <strong>kubectl -f &#8211;<\/strong><\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">cat &lt;&lt;EOF | kubectl apply -f -\r\n\r\n# Zawarto\u015b\u0107 manifestu YAML\r\n\r\nEOF\r\n<\/pre>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">cat &lt;&lt;EOF | kubectl apply -f -\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    run: nginx-pod-master-test\r\n  name: nginx-pod-master-test\r\n  namespace: gamma\r\nspec:\r\n  containers:\r\n  - image: nginx:1.18.0\r\n    name: nginx-pod-master-test\r\n    ports:\r\n    - containerPort: 80\r\n  dnsPolicy: ClusterFirst\r\n  restartPolicy: Always\r\n  tolerations:\r\n  - key: \"node-role.kubernetes.io\/master\"\r\n  effect: \"NoSchedule\"\r\nEOF<\/pre>\n<p>&nbsp;<\/p>\n<p>Zobaczmy, kt\u00f3rym w\u0119\u017ale wyl\u0105duje nasz obiekt<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl describe pod nginx-pod-master-test -n gamma<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">Events:\r\nType Reason Age From Message\r\n---- ------ ---- ---- -------\r\nNormal Scheduled 50s Successfully assigned gamma\/nginx-pod-master-test to node01\r\nNormal Pulling 48s kubelet, node01 Pulling image \"nginx:1.18.0\"\r\nNormal Pulled 42s kubelet, node01 Successfully pulled image \"nginx:1.18.0\" in 5.867211246s\r\nNormal Created 42s kubelet, node01 Created container nginx-pod-master-test\r\nNormal Started 41s kubelet, node01 Started container nginx-pod-master-test<\/pre>\n<p>Jak wida\u0107 dodanie samej tolerancji nie gwarantuje, \u017ce obiekt znajdzie si\u0119 na w\u0119\u017ale, na kt\u00f3rego zapach jest odporny, w naszym przypadku obiekt pod zosta\u0142 umieszczony na w\u0119\u017ale <strong>node01<\/strong>.<\/p>\n<p>&nbsp;<\/p>\n<h4>Zadanie czwarte<\/h4>\n<p>&nbsp;<\/p>\n<p>Utw\u00f3rz obiekt typu pod o nazwie\u00a0 
<strong>static-nginx<\/strong> zawieraj\u0105cy obraz <strong>nginx<\/strong> pracuj\u0105cy na porcie <strong>80<\/strong> . Umie\u015b\u0107 go w przestrzeni nazw <strong>gamma<\/strong> i na w\u0119\u017ale o nazwie <strong>controlplane<\/strong>. Obiekt nie powinien by\u0107 kontrolowany przez <strong>API-Server<\/strong>. Je\u017celi przestrze\u0144 nazw nie istnieje nale\u017cy j\u0105 utworzy\u0107.<\/p>\n<p>&nbsp;<\/p>\n<p>Poniewa\u017c obiekt ma nie byc kontrolowany przez API-Server bedziemy musili zbudowa\u0107 static pod<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl run static-nginx --image=nginx --port=80 -o yaml --dry-run=client &gt; 04.pod.static-nginx.yaml<\/pre>\n<p>Dodajemy brakuj\u0105c\u0105 przestzre\u0144 nazw gamma<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n  creationTimestamp: null\r\n  labels:\r\n    run: static-nginx\r\n  name: static-nginx\r\n  namespace: gamma\r\nspec:\r\n  containers:\r\n  - image: nginx\r\n    name: static-nginx\r\n    ports:\r\n    - containerPort: 80\r\n    resources: {}\r\n  dnsPolicy: ClusterFirst\r\n  restartPolicy: Always\r\nstatus: {}<\/pre>\n<p>&nbsp;<\/p>\n<p>Wdro\u017cenie polega na przekopiowaniu pliku do odpowiedniej \u015bcie\u017cki kontrolowanej przez kubelet<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">cp 04.pod.static-nginx.yaml \/etc\/kubernetes\/manifests\/<\/pre>\n<p>&nbsp;<\/p>\n<p>Po chwili widzimy ju\u017c nasz mirror pod<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl get pod static-nginx-controlplane -n gamma -o wide<\/pre>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\r\nstatic-nginx-controlplane 1\/1 Running 0 2m41s 10.244.0.4 controlplane &lt;none&gt; &lt;none&gt;<\/pre>\n<p>&nbsp;<\/p>\n<h4>Zadanie pi\u0105te<\/h4>\n<p>&nbsp;<\/p>\n<p>Utw\u00f3rz obiekt typu pod o nazwie\u00a0 <strong>static-nginx<\/strong> zawieraj\u0105cy obraz <strong>nginx:1.18.0<\/strong> pracuj\u0105cy na porcie <strong>80<\/strong>. Umie\u015b\u0107 go w przestrzeni nazw <strong>gamma<\/strong> i na w\u0119\u017ale o nazwie <strong>node01<\/strong>. Obiekt nie powinien by\u0107 kontrolowany przez <strong>API-Server<\/strong>. Je\u017celi przestrze\u0144 nazw nie istnieje nale\u017cy j\u0105 utworzy\u0107.<\/p>\n<p>&nbsp;<\/p>\n<p>Poniewa\u017c obiekt ma nie by\u0107 kontrolowany przez API-Server bedziemy musieli zbudowa\u0107 static pod, tym razem na w\u0119zle <strong>node01<\/strong>. Jest tu pewna pu\u0142apka. Zmian\u0119 w\u0119z\u0142\u00f3w na klastrze podczas egzaminu dokonujemy przez <strong>ssh student@nazwa_wezla<\/strong> a potem pracujemy na prawach uzytkownika <strong>root<\/strong> za pomoc\u0105 <strong>sudo -i<\/strong><\/p>\n<p>Po zalogowaniu si\u0119 na w\u0119ze\u0142 <strong>node01<\/strong> oka\u017ce si\u0119, ze nie ma tam pliku binarnego kubectl. Co mo\u017cna zrobi\u0107 ?<\/p>\n<p>Wykona\u0107 poni\u017cszy kod na w\u0119\u017ale <strong>controlplane<\/strong> (tu jest <strong>kubectl<\/strong>) i wklei\u0107 manifest do schowka (ten schowek po to jest mi\u0119dzy innymi). 
Nast\u0119pnie uruchamiamy edytor vim\u00a0 i wklejamy zawarto\u015b\u0107.<\/p>\n<p>&nbsp;<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl run static-nginx --image=nginx --port=80 -o yaml --dry-run=client &gt; 05.pod.static-nginx.yaml<\/pre>\n<p>Pozostaje nam tylko poprawa manifestu i wrzucenie pliku do odpowiedniego katalogu na w\u0119\u017cle.<\/p>\n<p>&nbsp;<\/p>\n<h4>Zadanie sz\u00f3ste<\/h4>\n<p>&nbsp;<\/p>\n<div>\n<div>Utw\u00f3rz obiekt typu pod o nazwie <strong>nginx-pod-master-selector<\/strong> zawieraj\u0105cy obraz <strong>nginx:1.18.0<\/strong> pracuj\u0105cy na porcie <strong>80. <\/strong>Umie\u015b\u0107 go w przestrzeni nazw <strong>gamma <\/strong>i na w\u0119\u017ale o nazwie <strong>node01<\/strong>, nie mo\u017cesz uzy\u0107 taints i tolerations, wykorzystaj <strong>nodeSelector<\/strong> i etykiet\u0119 <strong>wheareyou<\/strong>. Obiekt powinien by\u0107 kontrolowany przez <strong>API-Server<\/strong>. Je\u017celi przestrze\u0144 nazw nie istnieje nale\u017cy j\u0105 utworzy\u0107.<\/div>\n<div><\/div>\n<\/div>\n<div>\n<p>&nbsp;<\/p>\n<\/div>\n<div>Zacznijmy od wygenerowania bazowego manifestu<\/div>\n<div><\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl run nginx-pod-worker-selector -n gamma --image=nginx:1.18.0 --port=80 -o yaml --dry-run=client &gt;06.pod.nginx-pod-worker-selector.yaml<\/pre>\n<\/div>\n<p>&nbsp;<\/p>\n<p>Przypomnijmy sobie jeszcze raz etykiery w\u0119z\u0142\u00f3w naszego klastra<\/p>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl\u00a0get\u00a0nodes\u00a0--show-labels<\/pre>\n<p>&nbsp;<\/p>\n<\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">NAME\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0STATUS\u00a0\u00a0\u00a0ROLES\u00a0\u00a0\u00a0\u00a0AGE\u00a0\u00a0\u00a0VERSION\u00a0\u00a0\u00a0LABELS\r\ncontrolplane\u00a0\u00a0\u00a0Ready\u00a0\u00a0\u00a0\u00a0master\u00a0\u00a0\u00a032m\u00a0\u00a0\u00a0v1.19.0\u00a0\u00a0\u00a0beta.kubernetes.io\/arch=amd64,beta.kubernetes.io\/os=linux,kubernetes.io\/arch=amd64,kubernetes.io\/hostname=controlplane,kubernetes.io\/os=linux,node-role.kubernetes.io\/master=,whereareyou=master\r\nnode01\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Ready\u00a0\u00a0\u00a0\u00a0&lt;none&gt;\u00a0\u00a0\u00a032m\u00a0\u00a0\u00a0v1.19.0\u00a0\u00a0\u00a0beta.kubernetes.io\/arch=amd64,beta.kubernetes.io\/os=linux,kubernetes.io\/arch=amd64,kubernetes.io\/hostname=node01,kubernetes.io\/os=linux,whereareyou=worker<\/pre>\n<\/div>\n<p>Jak wida\u0107 w\u0119ze\u0142 <strong>node01<\/strong> ma ustawion\u0105 etykiet\u0119 <strong>whereareyou<\/strong> o warto\u015bci <strong>worker<\/strong>.<\/p>\n<p>Podczas edycji pliku nale\u017cy doda\u0107<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">\u00a0\u00a0nodeSelector:\r\n\u00a0\u00a0\u00a0\u00a0whereareyou:\u00a0worker<\/pre>\n<div>i uzupe\u0142ni\u0107 brakuj\u0105cy ewentualnie namespace: <strong>gamma<\/strong>.<\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\" 
data-enlighter-theme=\"eclipse\">apiVersion:\u00a0v1\r\nkind:\u00a0Pod\r\nmetadata:\r\n\u00a0\u00a0creationTimestamp:\u00a0null\r\n\u00a0\u00a0labels:\r\n\u00a0\u00a0\u00a0\u00a0run:\u00a0nginx-pod-worker-selector\r\n\u00a0\u00a0name:\u00a0nginx-pod-worker-selector\r\n\u00a0\u00a0namespace: gamma\r\nspec:\r\n\u00a0\u00a0containers:\r\n\u00a0\u00a0-\u00a0image:\u00a0nginx:1.18.0\r\n\u00a0\u00a0\u00a0\u00a0name:\u00a0nginx-pod-worker-selector\r\n\u00a0\u00a0\u00a0\u00a0ports:\r\n\u00a0\u00a0\u00a0\u00a0-\u00a0containerPort:\u00a080\r\n\u00a0\u00a0\u00a0\u00a0resources:\u00a0{}\r\n\u00a0\u00a0dnsPolicy:\u00a0ClusterFirst\r\n\u00a0\u00a0restartPolicy:\u00a0Always\r\n\u00a0\u00a0nodeSelector:\r\n\u00a0\u00a0\u00a0\u00a0whereareyou:\u00a0worker\r\nstatus:\u00a0{}<\/pre>\n<p>&nbsp;<\/p>\n<\/div>\n<div>Zapisujemy plik i wdra\u017camy na klaster<\/div>\n<div><\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl\u00a0apply\u00a0-f\u00a006.pod.nginx-pod-worker-selector.yaml<\/pre>\n<\/div>\n<p>&nbsp;<\/p>\n<p>\u0141atwo zweryfikowa\u0107, czy wdro\u017cenie zosta\u0142o poprawnie przeprowadzone<\/p>\n<div><\/div>\n<div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl describe pod nginx-pod-worker-selector -n gamma\r\n<\/pre>\n<p>Odfiltrowana cz\u0119\u015b\u0107 zwrotna, pokazuje, \u017ce wykorzystano nodeSelector i jaki oraz, kt\u00f3ry w\u0119ze\u0142 zosta\u0142 wybrany jako w\u0142a\u015bciwy<\/p>\n<\/div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">\u00a0\u00a0...\r\n\u00a0\u00a0Node-Selectors:\u00a0\u00a0whereareyou=worker\r\n\u00a0\u00a0...\r\nEvents:\r\n\u00a0\u00a0Type\u00a0\u00a0\u00a0\u00a0Reason\u00a0\u00a0\u00a0\u00a0\u00a0Age\u00a0\u00a0\u00a0From\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Message\r\n\u00a0\u00a0----\u00a0\u00a0\u00a0\u00a0------\u00a0\u00a0\u00a0\u00a0\u00a0----\u00a0\u00a0----\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0-------\r\n\u00a0\u00a0Normal\u00a0\u00a0Scheduled\u00a0\u00a010s\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Successfully assigned\u00a0 gamma\/nginx-pod-worker-selector to node01\r\n\u00a0\u00a0...<\/pre>\n<p>&nbsp;<\/p>\n<div>\n<h4>Zadanie si\u00f3dme<\/h4>\n<\/div>\n<\/div>\n<div><\/div>\n<div>Utw\u00f3rz obiekt typu pod o nazwie\u00a0<strong>nginx-pod-master-tolerations<\/strong> zawieraj\u0105cy obraz <strong>nginx:1.18.0<\/strong> pracuj\u0105cy na porcie <strong>80. <\/strong>Umie\u015b\u0107 go w przestrzeni nazw <strong>gamma <\/strong>i na w\u0119\u017ale o nazwie <strong>controlplane<\/strong>, mo\u017cesz uzy\u0107 taints i tolerations, wykorzystaj <strong>nodeSelector<\/strong>. Wykorzystaj etykiet\u0119 <strong>whereareyou<\/strong> .Obiekt powinien by\u0107 kontrolowany przez API-Server. 
Je\u017celi przestrze\u0144 nazw nie istnieje nale\u017cy j\u0105 utworzy\u0107.<\/div>\n<div>\n<div><\/div>\n<\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl run nginx-pod-master-tolerations -n gamma--image=nginx:1.18.0 --port=80 -o yaml --dry-run=client &gt; 07.pod.nginx-pod-master-tolerations.yaml<\/pre>\n<\/div>\n<p>Do tak wygenerowanego manifestu nale\u017cy doda\u0107 tolerancj\u0119, na zapach w\u0119z\u0142a <strong>controlplane<\/strong>, ale dodatkowo r\u00f3wnie\u017c wyb\u00f3r w\u0119z\u0142a za pomoc\u0105 etykiety. W\u0119ze\u0142 <strong>controlplane<\/strong> ma ustawion\u0105 etykiet\u0119 <strong>whereareyou<\/strong> o warto\u015bci <strong>master<\/strong>.<\/p>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"yaml\">\u00a0\u00a0tolerations:\r\n\u00a0\u00a0-\u00a0key:\u00a0\"node-role.kubernetes.io\/master\"\r\n\u00a0\u00a0\u00a0\u00a0effect:\u00a0\"NoSchedule\"\r\n\u00a0\u00a0nodeSelector:\r\n\u00a0\u00a0\u00a0\u00a0whereareyou:\u00a0master<\/pre>\n<div><\/div>\n<div>\n<div><\/div>\n<div>Po zapisaniu pliku i wdro\u017ceniu go na klaster<\/div>\n<\/div>\n<div><\/div>\n<div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl apply -f\u00a0\u00a007.pod.nginx-pod-master-tolerations.yaml<\/pre>\n<p>mo\u017cemy zweryfikowa\u0107, stan naszego zasobu.<\/p>\n<\/div>\n<\/div>\n<div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl describe pod nginx-pod-master-tolerations -n gamma | grep \"Tolerations\"\r\nkubectl describe pod nginx-pod-master-tolerations -n gamma | grep \"Node-Selectors\"<\/pre>\n<p>&nbsp;<\/p>\n<\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">Tolerations:\u00a0\u00a0\u00a0\u00a0\u00a0node-role.kubernetes.io\/master:NoSchedule\r\nNode-Selectors:\u00a0\u00a0whereareyou=master\r\nEvents:\r\n\u00a0\u00a0Type\u00a0\u00a0\u00a0\u00a0Reason\u00a0\u00a0\u00a0\u00a0\u00a0Age\u00a0\u00a0\u00a0From\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Message\r\n\u00a0\u00a0----\u00a0\u00a0\u00a0\u00a0------\u00a0\u00a0\u00a0\u00a0\u00a0----\u00a0\u00a0----\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0-------\r\n\u00a0\u00a0Normal\u00a0\u00a0Scheduled\u00a0\u00a03m1s\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Successfully assigned gamma\/nginx-pod-master-tolerations to controlplane\r\n\u00a0\u00a0Normal\u00a0\u00a0Pulled\u00a0\u00a0\u00a0\u00a0\u00a03m1s\u00a0\u00a0kubelet,\u00a0controlplane\u00a0\u00a0Container\u00a0image\u00a0\"nginx:1.18.0\"\u00a0already\u00a0present\u00a0on\u00a0machine\r\n\u00a0\u00a0Normal\u00a0\u00a0Created\u00a0\u00a0\u00a0\u00a03m\u00a0\u00a0\u00a0\u00a0kubelet,\u00a0controlplane\u00a0\u00a0Created\u00a0container\u00a0nginx-pod-master-tolerations\r\n\u00a0\u00a0Normal\u00a0\u00a0Started\u00a0\u00a0\u00a0\u00a03m\u00a0\u00a0\u00a0\u00a0kubelet,\u00a0controlplane\u00a0\u00a0Started\u00a0container\u00a0nginx-pod-master-tolerations<\/pre>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div>\n<div>Tak powinien wygl\u0105da\u0107 nasz manifest.<\/div>\n<div><\/div>\n<\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" 
data-enlighter-language=\"yaml\">apiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n\u00a0\u00a0creationTimestamp: null\r\n\u00a0\u00a0labels:\r\n\u00a0\u00a0\u00a0\u00a0run: nginx-pod-master-tolerations\r\n\u00a0\u00a0name: nginx-pod-master-tolerations\r\n\u00a0\u00a0namespace: gamma\r\nspec:\r\n\u00a0\u00a0containers:\r\n\u00a0\u00a0- image: nginx:1.18.0\r\n\u00a0\u00a0\u00a0\u00a0name: nginx-pod-master-tolerations\r\n\u00a0\u00a0\u00a0\u00a0ports:\r\n\u00a0\u00a0\u00a0\u00a0- containerPort: 80\r\n\u00a0\u00a0\u00a0\u00a0resources: {}\r\n\u00a0\u00a0dnsPolicy: ClusterFirst\r\n\u00a0\u00a0restartPolicy: Always\r\n\u00a0\u00a0tolerations:\r\n\u00a0\u00a0- key: \"node-role.kubernetes.io\/master\"\r\n\u00a0\u00a0\u00a0\u00a0effect: \"NoSchedule\"\r\n\u00a0\u00a0nodeSelector:\r\n\u00a0\u00a0\u00a0\u00a0whereareyou: master\r\nstatus: {}<\/pre>\n<p>Wdra\u017camy nasz obiekt z pliku i przechodzimy do kolejnego wyzwania<\/p>\n<\/div>\n<div><\/div>\n<div>\n<h4>Zadanie \u00f3sme<\/h4>\n<\/div>\n<div><\/div>\n<div>Utw\u00f3rz drugi scheduler na klastrze kubernetes o nazwie <strong>my-scheduler<\/strong>, kt\u00f3ry b\u0119dzie wykorzystywa\u0142 port <strong>54321<\/strong> w wersji niebezpiecznej (insecure) i port <strong>54322<\/strong> w wersji bezpiecznej (secure). Obiekt nale\u017cy umie\u015bci\u0107 w przestrzeni nazw <strong>kube-system<\/strong> i na w\u0119\u017ale o nazwie controlplane. Wykorzystaj mechanizm static pod.<\/div>\n<div>\n<div><\/div>\n<div><\/div>\n<div>Na pocz\u0105tku zobaczmy jak wygl\u0105da aktualny obiekt kube-scheduler.<\/div>\n<\/div>\n<div><\/div>\n<div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">kubectl\u00a0get\u00a0pod\u00a0-n\u00a0kube-system\u00a0-l\u00a0component=kube-scheduler<\/pre>\n<p>Tutaj wykorzystalem etykiet\u0119 o nazwie <strong>component<\/strong> i warto\u015bci <strong>kube-scheduler<\/strong>.<\/p>\n<\/div>\n<\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\">NAME\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0READY\u00a0\u00a0\u00a0STATUS\u00a0\u00a0\u00a0\u00a0RESTARTS\u00a0\u00a0\u00a0AGE\r\nkube-scheduler-controlplane\u00a0\u00a0\u00a01\/1\u00a0\u00a0\u00a0\u00a0\u00a0Running\u00a0\u00a0\u00a00\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a033m<\/pre>\n<p>Mamy do czynienia z obiektem typu static pod (suffiks z nazw\u0105 w\u0119z\u0142a)<\/p>\n<\/div>\n<div><\/div>\n<div>Wykorzystamy istniej\u0105cy plik manifestu i skopiujmy jego zawarto\u015b\u0107 do nowego pliku<\/div>\n<div><\/div>\n<div>\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-linenumbers=\"false\">cp\u00a0\/etc\/kubernetes\/manifests\/kube-scheduler.yaml\u00a0\/tmp\/my-scheduler.yaml<\/pre>\n<\/div>\n<p>W nowym pliku nale\u017cy dokona\u0107 kilku modyfikacji. Po piewsze nie mog\u0105 istnie\u0107 dwa obiekty o tej samej nazwie w tej samej przestrzeni nazw. 
<p>In the new file we need to make a few modifications. First, two objects with the same name cannot exist in the same namespace. Second, <strong>hostNetwork: true</strong> means that the object uses the node's network directly, which in practice means we cannot run another object in the same mode serving the same port and protocol.</p>
<p>We change the object's name to <strong>my-scheduler</strong> and add</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">- --leader-elect=false
- --port=54321
- --secure-port=54322
- --scheduler-name=my-scheduler</pre>
<p>Leader election is a mechanism that guarantees that only one scheduler instance actively makes scheduling decisions, while the remaining instances stay idle but are ready to take over when the leader stops working.</p>
<p>At the same time, remember to change the port numbers used by the readiness, liveness and startup probes.</p>
<p>Our manifest should look like this:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --port=54321
    - --secure-port=54322
    - --scheduler-name=my-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.19.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 54322
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 54322
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}</pre>
<p>We copy the file to the directory managed by the kubelet of the <strong>controlplane</strong> node</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">cp /tmp/my-scheduler.yaml /etc/kubernetes/manifests/</pre>
<p>After a moment our object should appear as a mirror pod</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl get pod -n kube-system -l component=kube-scheduler</pre>
<p>We now have two running objects:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic">NAME                          READY   STATUS    RESTARTS   AGE
kube-scheduler-controlplane   1/1     Running   0          26m
my-scheduler-controlplane     1/1     Running   0          100s</pre>
<p>For a while the <strong>my-scheduler</strong> object may show <strong>READY 0/1</strong>. Why does this happen? We are dealing with a system component of the cluster that uses a probe: until the microservice responds on the configured port, it is considered not ready to serve traffic. That is very good practice.</p>
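<p>The <strong>-controlplane</strong> suffix tells us this is a mirror pod: a read-only representation in the API server of a pod that is actually managed by the kubelet from a file. On reasonably recent clusters one way to confirm this (a sketch; the exact pod name depends on the node name) is to look at the owner reference, which for mirror pods points at the Node object:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false"># for a mirror pod the owner is the Node, not a ReplicaSet or other controller
kubectl get pod my-scheduler-controlplane -n kube-system \
  -o jsonpath='{.metadata.ownerReferences[0].kind}{"\n"}'</pre>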
<p>After the deployment it is worth looking at the logs</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl logs my-scheduler-controlplane -n kube-system</pre>
<p>Sample log output</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">I0611 19:16:55.006677       1 registry.go:173] Registering SelectorSpread plugin
I0611 19:16:55.006762       1 registry.go:173] Registering SelectorSpread plugin
I0611 19:16:55.509219       1 serving.go:331] Generated self-signed cert in-memory
I0611 19:16:56.190034       1 registry.go:173] Registering SelectorSpread plugin
I0611 19:16:56.190064       1 registry.go:173] Registering SelectorSpread plugin
W0611 19:16:56.192607       1 authorization.go:47] Authorization is disabled
W0611 19:16:56.192625       1 authentication.go:40] Authentication is disabled
I0611 19:16:56.192633       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:54321
I0611 19:16:56.196239       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0611 19:16:56.196258       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0611 19:16:56.196282       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0611 19:16:56.196295       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0611 19:16:56.196318       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0611 19:16:56.196322       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0611 19:16:56.197259       1 secure_serving.go:197] Serving securely on 127.0.0.1:54322
I0611 19:16:56.197739       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0611 19:16:56.296585       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0611 19:16:56.296979       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I0611 19:16:56.296995       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file</pre>
<p>The lines "Serving healthz insecurely on [::]:54321" and "Serving securely on 127.0.0.1:54322" confirm that the ports we configured are in use.</p>
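<p>Because we started my-scheduler with <strong>--leader-elect=false</strong>, it does not compete with the default scheduler for the leader lock. For comparison, on most recent clusters you can see who currently holds the lock for the default scheduler; a sketch, and note that the exact lock resource depends on the cluster version and its <strong>--leader-elect-resource-lock</strong> setting:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false"># the default kube-scheduler usually records its leadership in a Lease object
kubectl get lease kube-scheduler -n kube-system \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'</pre>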
<h4>Task nine</h4>
<p>Create a deployment object named <strong>nginx-deployment-my-scheduler</strong> containing the image <strong>nginx:1.18.0</strong> listening on port <strong>80</strong>. Place it in the <strong>gamma</strong> namespace. If the namespace does not exist, create it. The object should use the scheduler deployed in the previous task, named <strong>my-scheduler</strong>.</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl create deployment nginx-deployment-my-scheduler --image=nginx:1.18.0 --namespace=gamma --port=80 -o yaml --dry-run=client > 09.deploy.deploy-nginx-my-scheduler.yaml</pre>
<p>We edit the file</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic">vim 09.deploy.deploy-nginx-my-scheduler.yaml</pre>
<p>and add the scheduler name in the pod template's spec (spec.template.spec)</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-linenumbers="false">spec:
  schedulerName: my-scheduler</pre>
<p>Our manifest file should look like this</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deployment-my-scheduler
  name: nginx-deployment-my-scheduler
  namespace: gamma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment-my-scheduler
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deployment-my-scheduler
    spec:
      containers:
      - image: nginx:1.18.0
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
      schedulerName: my-scheduler</pre>
<p>We deploy the manifest to the cluster</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl apply -f 09.deploy.deploy-nginx-my-scheduler.yaml</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">deployment.apps/nginx-deployment-my-scheduler created</pre>
<p>Let us see what our deployment looks like</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic">kubectl get all -n gamma -l app=nginx-deployment-my-scheduler</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic">NAME                                                 READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-my-scheduler-7d5d5f5dcf-brrp9   1/1     Running   0          110s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment-my-scheduler   1/1     1            1           110s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-my-scheduler-7d5d5f5dcf   1         1         1       110s</pre>
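<p>It is worth double-checking that the pod really was placed by <strong>my-scheduler</strong> and not by the default scheduler. The Scheduled event records which component made the decision; a quick sketch (the column layout of the event listing may differ slightly between versions):</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false"># with -o wide the SOURCE column names the scheduler that emitted the event
kubectl get events -n gamma --field-selector reason=Scheduled -o wide

# or straight from describe: the "From" field of the Scheduled event
kubectl describe pod -n gamma -l app=nginx-deployment-my-scheduler | grep -i scheduled</pre>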
<p>It is worth reading the relevant part of the official documentation on running multiple schedulers on a cluster; here I have once again only scratched the surface.</p>
<p><a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" target="_blank" rel="noopener">https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/</a></p>
<h4>Task ten</h4>
<p>Create a <strong>service</strong> object named <strong>nginx-service-deployment-my-scheduler</strong> that exposes the deployment named <strong>nginx-deployment-my-scheduler</strong> as type <strong>ClusterIP</strong> on port <strong>80</strong>. Place the object in the <strong>gamma</strong> namespace. If the namespace does not exist, create it.</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl expose deploy/nginx-deployment-my-scheduler -n gamma --name nginx-service-deployment-my-scheduler -o yaml \
  --dry-run=client > 10.service.nginx-service-deployment-my-scheduler.yaml</pre>
<p>We edit the file</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">vim 10.service.nginx-service-deployment-my-scheduler.yaml</pre>
<p>and add the missing <strong>gamma</strong> namespace.</p>
<p>Our manifest should look like this:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deployment-my-scheduler
  name: nginx-service-deployment-my-scheduler
  namespace: gamma
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deployment-my-scheduler
status:
  loadBalancer: {}</pre>
<p>We deploy the manifest to the cluster</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl apply -f 10.service.nginx-service-deployment-my-scheduler.yaml</pre>
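<p>Because <strong>kubectl expose</strong> copied the selector from the deployment, the service should immediately pick up the pod created in task nine. A quick way to confirm that the selector matched (a sketch):</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false"># the ENDPOINTS column should list the pod IP(s) behind the ClusterIP
kubectl get service -n gamma nginx-service-deployment-my-scheduler
kubectl get endpoints -n gamma nginx-service-deployment-my-scheduler</pre>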
<p>We move on to the next challenge.</p>
<h4>Task eleven</h4>
<p>Create a deployment object named <strong>nginx-deployment-all-nodes</strong> containing the image <strong>nginx:1.18.0</strong> listening on port <strong>80</strong>. Place it in the <strong>gamma</strong> namespace. If the namespace does not exist, create it. Set the number of replicas to five (5); it matters that this exceeds the number of worker nodes in our cluster. The object should use the scheduler deployed in the previous task, named <strong>my-scheduler</strong>. Use the <strong>podAntiAffinity</strong> mechanism and set the label <strong>app: nginx-deployment-all-nodes</strong>.</p>
<p>Let us start by explaining what this mechanism is.</p>
<p>Imagine that our application consists of two pod objects, <strong>podA</strong> and <strong>podB</strong>, and we want to deploy it in such a way that <strong>podA</strong> is spread evenly across every worker node, while <strong>podB</strong> ends up as close as possible to <strong>podA</strong>.</p>
<p>The labels are <strong>app: podA</strong> for podA and <strong>app: podB</strong> for podB.</p>
<p>The two mechanisms we can use here are:</p>
<ul>
<li>podAffinity for the <strong>podB</strong> object</li>
</ul>
<p>A sample manifest fragment under <strong>spec</strong>:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app
          operator: In
          values:
          - podA
      topologyKey: "kubernetes.io/hostname"</pre>
<ul>
<li>podAntiAffinity for the <strong>podA</strong> object</li>
</ul>
<p>A sample manifest fragment:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - podA
        topologyKey: "kubernetes.io/hostname"</pre>
<p>or in a slightly different variant</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - podA
          topologyKey: "kubernetes.io/hostname"</pre>
<p>The constraint can be applied in two modes:</p>
<p>hard: <strong>requiredDuringSchedulingIgnoredDuringExecution</strong>, where the scheduler refuses to place the pod on a node that would violate the rule;</p>
<p>soft: <strong>preferredDuringSchedulingIgnoredDuringExecution</strong>, where the scheduler prefers nodes that satisfy the rule but will still place the pod elsewhere if it has no other choice.</p>
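<p>In both variants <strong>topologyKey</strong> names a node label; nodes sharing the same value of that label are treated as one placement domain. <strong>kubernetes.io/hostname</strong> is a well-known label present on every node, so each node is its own domain. A quick way to see it on the cluster (a sketch):</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false"># show the hostname label that the affinity rules above key on
kubectl get nodes -L kubernetes.io/hostname</pre>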
<p>The operator can take the values <code>In</code>, <code>NotIn</code>, <code>Exists</code> or <code>DoesNotExist</code>. For example, <code>In</code> can be used to check whether a pod carrying the given label is already running on a node (hence the <strong>topologyKey</strong> pointing at the node name).</p>
<p>In the soft (preferred) variant a <strong>weight</strong> between 1 and 100 must also be provided; the higher the value, the more the rule counts.</p>
<p>There is no need to memorise this; what matters is knowing where to find sample manifests in the documentation and how to adapt them to the requirements of a given task.</p>
<p>Both mechanisms can also be used at the same time.</p>
<p>An example based on the documentation: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" target="_blank" rel="noopener">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: "kubernetes.io/hostname"
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: "kubernetes.io/hostname"
  containers:
  - name: with-pod-affinity
    image: k8s.gcr.io/pause:2.0</pre>
<p><a href="https://github.com/infracloudio/kubernetes-scheduling-examples/blob/master/podAffinity/README.md" target="_blank" rel="noopener">https://github.com/infracloudio/kubernetes-scheduling-examples/blob/master/podAffinity/README.md</a></p>
<p>For the soft mode each rule gets its own weight. More than one rule can be defined, because we may for example be dealing with several kinds of pod objects (podA, podB, podC, ...), and the weights are then a way of saying which of those rules matter more.</p>
<p>Coming back to the practical part: we prepare our manifest, add the <strong>podAntiAffinity</strong> mechanism and indicate which scheduler is to carry out the process of choosing the target node.</p>
<p>Our file looks like this</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deployment-all-nodes
  name: nginx-deployment-all-nodes
  namespace: gamma
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-deployment-all-nodes
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deployment-all-nodes
    spec:
      containers:
      - image: nginx:1.18.0
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx-deployment-all-nodes
            topologyKey: "kubernetes.io/hostname"
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
      schedulerName: my-scheduler</pre>
<p>We deploy it to our two-node cluster (controlplane + node01) and check the result</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl get all -n gamma -l app=nginx-deployment-all-nodes</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic">NAME                                              READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-all-nodes-594c6c5777-b257x   0/1     Pending   0          37s
pod/nginx-deployment-all-nodes-594c6c5777-g7dqc   0/1     Pending   0          37s
pod/nginx-deployment-all-nodes-594c6c5777-mlgmc   0/1     Pending   0          37s
pod/nginx-deployment-all-nodes-594c6c5777-vtc8k   1/1     Running   0          37s
pod/nginx-deployment-all-nodes-594c6c5777-x8t8w   1/1     Running   0          37s

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment-all-nodes   2/5     5            2           37s

NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-all-nodes-594c6c5777   5         5         2       37s</pre>
<p>Notice that only two pod objects are in the <strong>Running</strong> state; the remaining ones are <strong>Pending</strong>.</p>
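<p>The stuck replicas can also be listed directly, since pods support a field selector on their phase; a quick sketch:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false"># only the replicas the scheduler could not place
kubectl get pod -n gamma -l app=nginx-deployment-all-nodes \
  --field-selector=status.phase=Pending</pre>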
<p>Let us check on which nodes the running ones landed.</p>
<p>Because the pod objects and the deployment are labelled the same way, with <strong>app: nginx-deployment-all-nodes</strong>, it is easy to filter out exactly the objects we are interested in</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl get pod -n gamma -l app=nginx-deployment-all-nodes -o wide</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">NAME                                          READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
nginx-deployment-all-nodes-594c6c5777-b257x   0/1     Pending   0          81s   <none>          <none>         <none>           <none>
nginx-deployment-all-nodes-594c6c5777-g7dqc   0/1     Pending   0          81s   <none>          <none>         <none>           <none>
nginx-deployment-all-nodes-594c6c5777-mlgmc   0/1     Pending   0          81s   <none>          <none>         <none>           <none>
nginx-deployment-all-nodes-594c6c5777-vtc8k   1/1     Running   0          81s   192.168.49.65   controlplane   <none>           <none>
nginx-deployment-all-nodes-594c6c5777-x8t8w   1/1     Running   0          81s   10.244.1.9      node01         <none>           <none></pre>
<p>Why did this happen? The remaining three instances cannot be placed on either <strong>controlplane</strong> or <strong>node01</strong>, because each of those nodes already hosts an instance carrying the <strong>app</strong> label with the value <strong>nginx-deployment-all-nodes</strong>. The <strong>requiredDuringSchedulingIgnoredDuringExecution</strong> rule works in the hard mode and blocks the choice of a node.</p>
<p>This is clearly visible in the events of the namespace.</p>
<p>While we are at it, it is worth remembering how to sort events properly with <strong>--sort-by</strong></p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic">kubectl get events -n gamma --sort-by=.metadata.creationTimestamp | grep anti</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false"><unknown> Warning FailedScheduling pod/nginx-deployment-all-nodes-594c6c5777-b257x 0/2 nodes are available: 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
<unknown> Warning FailedScheduling pod/nginx-deployment-all-nodes-594c6c5777-g7dqc 0/2 nodes are available: 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
<unknown> Warning FailedScheduling pod/nginx-deployment-all-nodes-594c6c5777-mlgmc 0/2 nodes are available: 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.</pre>
<p>How can we solve this problem?</p>
<p>By changing the hard variant to the soft one, that is, by using <strong>preferredDuringSchedulingIgnoredDuringExecution</strong> instead of <strong>requiredDuringSchedulingIgnoredDuringExecution</strong>.</p>
<pre class="EnlighterJSRAW" data-enlighter-language="yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deployment-all-nodes
  name: nginx-deployment-all-nodes
  namespace: gamma
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx-deployment-all-nodes
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deployment-all-nodes
    spec:
      containers:
      - image: nginx:1.18.0
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx-deployment-all-nodes
              topologyKey: "kubernetes.io/hostname"
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
      schedulerName: my-scheduler</pre>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">kubectl get pod -n gamma -l app=nginx-deployment-all-nodes -o wide</pre>
<p>As the output below shows, two objects end up on one node and three on the other.</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false">NAME                                          READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
nginx-deployment-all-nodes-865f8cb865-5tnxj   1/1     Running   0          6s    10.244.1.15     node01         <none>           <none>
nginx-deployment-all-nodes-865f8cb865-7ftpz   1/1     Running   0          6s    192.168.49.67   controlplane   <none>           <none>
nginx-deployment-all-nodes-865f8cb865-h5f78   1/1     Running   0          6s    10.244.1.16     node01         <none>           <none>
nginx-deployment-all-nodes-865f8cb865-nvjc7   1/1     Running   0          6s    10.244.1.17     node01         <none>           <none>
nginx-deployment-all-nodes-865f8cb865-zv5nd   1/1     Running   0          6s    192.168.49.66   controlplane   <none>           <none></pre>
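<p>A compact way to see the spread per node is a custom-columns view (the jsonpath and custom-columns links in the literature below go into more detail); a quick sketch:</p>
<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-linenumbers="false"># one line per replica: pod name and the node it landed on
kubectl get pod -n gamma -l app=nginx-deployment-all-nodes \
  -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName</pre>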
<p>That wraps up today's episode, talk to you again soon.</p>
<p>Previous parts</p>
<p><a href="https://wchmurze.cloud/index.php/2021/05/07/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-1/" target="_blank" rel="noopener">Certified Kubernetes Administrator (CKA) krok po kroku &#8211; część 1</a></p>
<p><a href="https://wchmurze.cloud/index.php/2021/05/24/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-2/" target="_blank" rel="noopener">Certified Kubernetes Administrator (CKA) krok po kroku &#8211; część 2</a></p>
<p><a href="https://wchmurze.cloud/index.php/2021/06/07/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-3/" target="_blank" rel="noopener">Certified Kubernetes Administrator (CKA) krok po kroku &#8211; część 3</a></p>
<p>Next parts</p>
<p>TODO</p>
<p>Literature:</p>
<p><a href="https://www.alibabacloud.com/blog/a-brief-analysis-on-the-implementation-of-the-kubernetes-scheduler_595083" target="_blank" rel="noopener">https://www.alibabacloud.com/blog/a-brief-analysis-on-the-implementation-of-the-kubernetes-scheduler_595083</a></p>
<p><a href="https://blog.mayadata.io/openebs/static-pods-in-kubernetes" target="_blank" rel="noopener">https://blog.mayadata.io/openebs/static-pods-in-kubernetes</a></p>
<p><a href="https://dev.to/chuck_ha/reading-kubernetes-logs-315k" target="_blank" rel="noopener">https://dev.to/chuck_ha/reading-kubernetes-logs-315k</a></p>
<p><a href="https://stackoverflow.com/questions/43225591/kubernetes-custom-columns-select-element-from-array" target="_blank" rel="noopener">https://stackoverflow.com/questions/43225591/kubernetes-custom-columns-select-element-from-array</a></p>
<p><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" target="_blank" rel="noopener">https://kubernetes.io/docs/reference/kubectl/jsonpath/</a></p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" target="_blank" rel="noopener">https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/</a></p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/" target="_blank" rel="noopener">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/</a></p>
<p><a href="https://www.tutorialworks.com/difference-docker-containerd-runc-crio-oci/" target="_blank" rel="noopener">https://www.tutorialworks.com/difference-docker-containerd-runc-crio-oci/</a></p>
<p><a href="https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/" target="_blank" rel="noopener">https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/</a></p>
rel=\"noopener\">https:\/\/kubernetes.io\/docs\/tasks\/extend-kubernetes\/configure-multiple-schedulers\/<\/a><\/p>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Pody statyczne (static pods), kube-scheduler i kontrola rozmieszczenia obiekt\u00f3w pod na w\u0119z\u0142ach klastra. &nbsp; Architektura &nbsp; Czy istnieje mo\u017cliwo\u015b\u0107 uruchamiania obiekt\u00f3w pod tylko na jedynym w\u0119\u017ale klastra ? Czy mo\u017cna wyznaczy\u0107, na kt\u00f3rym w\u0119\u017ale zostanie uruchomiona nasza mikrous\u0142uga ? A &hellip; <a href=\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/\">Continued<\/a><\/p>\n","protected":false},"author":1,"featured_media":1060,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[22,27,23,26],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v19.13 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Certified Kubernetes Administrator (CKA) krok po kroku - cz\u0119\u015b\u0107 4 - W chmurze o chmurze i nie tylko<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/\" \/>\n<meta property=\"og:locale\" content=\"pl_PL\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Certified Kubernetes Administrator (CKA) krok po kroku - cz\u0119\u015b\u0107 4 - W chmurze o chmurze i nie tylko\" \/>\n<meta property=\"og:description\" content=\"Pody statyczne (static pods), kube-scheduler i kontrola rozmieszczenia obiekt\u00f3w pod na w\u0119z\u0142ach klastra. &nbsp; Architektura &nbsp; Czy istnieje mo\u017cliwo\u015b\u0107 uruchamiania obiekt\u00f3w pod tylko na jedynym w\u0119\u017ale klastra ? Czy mo\u017cna wyznaczy\u0107, na kt\u00f3rym w\u0119\u017ale zostanie uruchomiona nasza mikrous\u0142uga ? 
A &hellip; Continued\" \/>\n<meta property=\"og:url\" content=\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/\" \/>\n<meta property=\"og:site_name\" content=\"W chmurze o chmurze i nie tylko\" \/>\n<meta property=\"article:published_time\" content=\"2021-06-13T20:01:57+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-06-13T20:12:23+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/wchmurze.cloud\/wp-content\/uploads\/2019\/08\/Kubernetes_New.png\" \/>\n\t<meta property=\"og:image:width\" content=\"730\" \/>\n\t<meta property=\"og:image:height\" content=\"389\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"djkormo\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Napisane przez\" \/>\n\t<meta name=\"twitter:data1\" content=\"djkormo\" \/>\n\t<meta name=\"twitter:label2\" content=\"Szacowany czas czytania\" \/>\n\t<meta name=\"twitter:data2\" content=\"26 minut\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/\"},\"author\":{\"name\":\"djkormo\",\"@id\":\"https:\/\/wchmurze.cloud\/#\/schema\/person\/9832cc6f86f99f541d983d2b8d60f323\"},\"headline\":\"Certified Kubernetes Administrator (CKA) krok po kroku &#8211; cz\u0119\u015b\u0107 4\",\"datePublished\":\"2021-06-13T20:01:57+00:00\",\"dateModified\":\"2021-06-13T20:12:23+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/\"},\"wordCount\":4094,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/wchmurze.cloud\/#\/schema\/person\/9832cc6f86f99f541d983d2b8d60f323\"},\"articleSection\":[\"Certyfikacja\",\"CKA\",\"konteneryzacja\",\"Kubernetes\"],\"inLanguage\":\"pl-PL\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/\",\"url\":\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/\",\"name\":\"Certified Kubernetes Administrator (CKA) krok po kroku - cz\u0119\u015b\u0107 4 - W chmurze o chmurze i nie 
tylko\",\"isPartOf\":{\"@id\":\"https:\/\/wchmurze.cloud\/#website\"},\"datePublished\":\"2021-06-13T20:01:57+00:00\",\"dateModified\":\"2021-06-13T20:12:23+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/#breadcrumb\"},\"inLanguage\":\"pl-PL\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Strona g\u0142\u00f3wna\",\"item\":\"https:\/\/wchmurze.cloud\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Certified Kubernetes Administrator (CKA) krok po kroku &#8211; cz\u0119\u015b\u0107 4\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/wchmurze.cloud\/#website\",\"url\":\"https:\/\/wchmurze.cloud\/\",\"name\":\"W chmurze o chmurze i nie tylko\",\"description\":\"W chmurze o chmurze i nie tylko\",\"publisher\":{\"@id\":\"https:\/\/wchmurze.cloud\/#\/schema\/person\/9832cc6f86f99f541d983d2b8d60f323\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/wchmurze.cloud\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"pl-PL\"},{\"@type\":[\"Person\",\"Organization\"],\"@id\":\"https:\/\/wchmurze.cloud\/#\/schema\/person\/9832cc6f86f99f541d983d2b8d60f323\",\"name\":\"djkormo\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"pl-PL\",\"@id\":\"https:\/\/wchmurze.cloud\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/14a901b808871fa98086ae259c45d646?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/14a901b808871fa98086ae259c45d646?s=96&d=mm&r=g\",\"caption\":\"djkormo\"},\"logo\":{\"@id\":\"https:\/\/wchmurze.cloud\/#\/schema\/person\/image\/\"},\"url\":\"https:\/\/wchmurze.cloud\/index.php\/author\/djkormo\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Certified Kubernetes Administrator (CKA) krok po kroku - cz\u0119\u015b\u0107 4 - W chmurze o chmurze i nie tylko","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/","og_locale":"pl_PL","og_type":"article","og_title":"Certified Kubernetes Administrator (CKA) krok po kroku - cz\u0119\u015b\u0107 4 - W chmurze o chmurze i nie tylko","og_description":"Pody statyczne (static pods), kube-scheduler i kontrola rozmieszczenia obiekt\u00f3w pod na w\u0119z\u0142ach klastra. &nbsp; Architektura &nbsp; Czy istnieje mo\u017cliwo\u015b\u0107 uruchamiania obiekt\u00f3w pod tylko na jedynym w\u0119\u017ale klastra ? Czy mo\u017cna wyznaczy\u0107, na kt\u00f3rym w\u0119\u017ale zostanie uruchomiona nasza mikrous\u0142uga ? 
A &hellip; Continued","og_url":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/","og_site_name":"W chmurze o chmurze i nie tylko","article_published_time":"2021-06-13T20:01:57+00:00","article_modified_time":"2021-06-13T20:12:23+00:00","og_image":[{"width":730,"height":389,"url":"https:\/\/wchmurze.cloud\/wp-content\/uploads\/2019\/08\/Kubernetes_New.png","type":"image\/png"}],"author":"djkormo","twitter_card":"summary_large_image","twitter_misc":{"Napisane przez":"djkormo","Szacowany czas czytania":"26 minut"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/#article","isPartOf":{"@id":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/"},"author":{"name":"djkormo","@id":"https:\/\/wchmurze.cloud\/#\/schema\/person\/9832cc6f86f99f541d983d2b8d60f323"},"headline":"Certified Kubernetes Administrator (CKA) krok po kroku &#8211; cz\u0119\u015b\u0107 4","datePublished":"2021-06-13T20:01:57+00:00","dateModified":"2021-06-13T20:12:23+00:00","mainEntityOfPage":{"@id":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/"},"wordCount":4094,"commentCount":0,"publisher":{"@id":"https:\/\/wchmurze.cloud\/#\/schema\/person\/9832cc6f86f99f541d983d2b8d60f323"},"articleSection":["Certyfikacja","CKA","konteneryzacja","Kubernetes"],"inLanguage":"pl-PL","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/","url":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/","name":"Certified Kubernetes Administrator (CKA) krok po kroku - cz\u0119\u015b\u0107 4 - W chmurze o chmurze i nie tylko","isPartOf":{"@id":"https:\/\/wchmurze.cloud\/#website"},"datePublished":"2021-06-13T20:01:57+00:00","dateModified":"2021-06-13T20:12:23+00:00","breadcrumb":{"@id":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/#breadcrumb"},"inLanguage":"pl-PL","potentialAction":[{"@type":"ReadAction","target":["https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/wchmurze.cloud\/index.php\/2021\/06\/13\/certified-kubernetes-administrator-cka-krok-po-kroku-czesc-4\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Strona g\u0142\u00f3wna","item":"https:\/\/wchmurze.cloud\/"},{"@type":"ListItem","position":2,"name":"Certified Kubernetes Administrator (CKA) krok po kroku &#8211; cz\u0119\u015b\u0107 4"}]},{"@type":"WebSite","@id":"https:\/\/wchmurze.cloud\/#website","url":"https:\/\/wchmurze.cloud\/","name":"W chmurze o chmurze i nie tylko","description":"W chmurze o chmurze i nie tylko","publisher":{"@id":"https:\/\/wchmurze.cloud\/#\/schema\/person\/9832cc6f86f99f541d983d2b8d60f323"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/wchmurze.cloud\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"pl-PL"},{"@type":["Person","Organization"],"@id":"https:\/\/wchmurze.cloud\/#\/schema\/person\/9832cc6f86f99f541d983d2b8d60f323","name":"djkormo","image":{"@type":"ImageObject","inLanguage":"pl-PL","@id":"https:\/\/wchmurze.cloud\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/14a901b808871fa98086ae259c45d646?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/14a901b808871fa98086ae259c45d646?s=96&d=mm&r=g","caption":"djkormo"},"logo":{"@id":"https:\/\/wchmurze.cloud\/#\/schema\/person\/image\/"},"url":"https:\/\/wchmurze.cloud\/index.php\/author\/djkormo\/"}]}},"_links":{"self":[{"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/posts\/1365"}],"collection":[{"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/comments?post=1365"}],"version-history":[{"count":35,"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/posts\/1365\/revisions"}],"predecessor-version":[{"id":1523,"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/posts\/1365\/revisions\/1523"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/media\/1060"}],"wp:attachment":[{"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/media?parent=1365"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/categories?post=1365"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wchmurze.cloud\/index.php\/wp-json\/wp\/v2\/tags?post=1365"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}