diff --git "a/k8s\346\234\215\345\212\241/3-\351\203\250\347\275\262harbor\344\273\223\345\272\223.md" "b/k8s\346\234\215\345\212\241/3-\351\203\250\347\275\262harbor\344\273\223\345\272\223.md" index c99e75e770b2469b1ec84447e2c4822fc6814b78..f9386b0d50291c1d364458f58a33c22d30382a96 100644 --- "a/k8s\346\234\215\345\212\241/3-\351\203\250\347\275\262harbor\344\273\223\345\272\223.md" +++ "b/k8s\346\234\215\345\212\241/3-\351\203\250\347\275\262harbor\344\273\223\345\272\223.md" @@ -332,7 +332,7 @@ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES # **k8s集群节点访问harbor仓库配置**(购买域名证书可忽略) -K8S集群访问harbor仓库需要配置,否则是访问不了的,有几种方式可以配置,可以看下述 +K8S集群访问harbor仓库需要配置,否则是访问不了的,有几种方式可以配置,可以看下述,建议使用方式一 ## 方式一: diff --git "a/k8s\346\234\215\345\212\241/4-k8s\350\265\204\346\272\220\344\273\213\347\273\215.md" "b/k8s\346\234\215\345\212\241/4-k8s\350\265\204\346\272\220\344\273\213\347\273\215.md" new file mode 100644 index 0000000000000000000000000000000000000000..61cb15a8ced9a49276e895ead6f408c13b8fe2b1 --- /dev/null +++ "b/k8s\346\234\215\345\212\241/4-k8s\350\265\204\346\272\220\344\273\213\347\273\215.md" @@ -0,0 +1,332 @@ +# K8S资源介绍 + +**什么是资源?** + +在kubernetes中,所有的内容都抽象为资源,用户需要通过操作资源来管理kubernetes。 + +kubernetes的本质上就是一个集群系统,用户可以在集群中部署各种服务,所谓的部署服务,其实就是在kubernetes集群中运行一个个的容器,并将指定的程序跑在容器中。kubernetes的最小管理单元是pod而不是容器,所以只能将容器放在Pod中,而kubernetes一般也不会直接管理Pod,而是通过Pod控制器来管理Pod的。Pod可以提供服务之后,就要考虑如何访问Pod中服务,kubernetes提供了Service资源实现这个功能。当然,如果Pod中程序的数据需要持久化,kubernetes还提供了各种存储系统。 + +**简单点说,k8s中一切皆是资源** + +## 如何查看K8S中的所有资源? 
+ +```shell +#通过kubectl api-resources命令可查看K8S中所有的资源 +[root@master ~]# kubectl api-resources +#资源名称 #简写名称 #api版本号 #是否有名称空间 #资源类型 +NAME SHORTNAMES APIVERSION NAMESPACED KIND +bindings v1 true Binding +componentstatuses cs v1 false ComponentStatus +configmaps cm v1 true ConfigMap +endpoints ep v1 true Endpoints +events ev v1 true Event +limitranges limits v1 true LimitRange +namespaces ns v1 false Namespace +nodes no v1 false Node +persistentvolumeclaims pvc v1 true PersistentVolumeClaim +persistentvolumes pv v1 false PersistentVolume +pods po v1 true Pod +podtemplates v1 true PodTemplate +replicationcontrollers rc v1 true ReplicationController +resourcequotas quota v1 true ResourceQuota +secrets v1 true Secret +serviceaccounts sa v1 true ServiceAccount +services svc v1 true Service +mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration +validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration +customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition +apiservices apiregistration.k8s.io/v1 false APIService +controllerrevisions apps/v1 true ControllerRevision +daemonsets ds apps/v1 true DaemonSet +deployments deploy apps/v1 true Deployment +replicasets rs apps/v1 true ReplicaSet +statefulsets sts apps/v1 true StatefulSet +tokenreviews authentication.k8s.io/v1 false TokenReview +localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview +selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview +selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview +subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview +horizontalpodautoscalers hpa autoscaling/v2 true HorizontalPodAutoscaler +cronjobs cj batch/v1 true CronJob +jobs batch/v1 true Job +certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest +leases coordination.k8s.io/v1 true Lease 
+endpointslices discovery.k8s.io/v1 true EndpointSlice +events ev events.k8s.io/v1 true Event +flowschemas flowcontrol.apiserver.k8s.io/v1beta2 false FlowSchema +prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta2 false PriorityLevelConfiguration +ingressclasses networking.k8s.io/v1 false IngressClass +ingresses ing networking.k8s.io/v1 true Ingress +networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy +runtimeclasses node.k8s.io/v1 false RuntimeClass +poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget +podsecuritypolicies psp policy/v1beta1 false PodSecurityPolicy +clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding +clusterroles rbac.authorization.k8s.io/v1 false ClusterRole +rolebindings rbac.authorization.k8s.io/v1 true RoleBinding +roles rbac.authorization.k8s.io/v1 true Role +priorityclasses pc scheduling.k8s.io/v1 false PriorityClass +csidrivers storage.k8s.io/v1 false CSIDriver +csinodes storage.k8s.io/v1 false CSINode +csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity +storageclasses sc storage.k8s.io/v1 false StorageClass +volumeattachments storage.k8s.io/v1 false VolumeAttachment + +``` + +## 查看K8S中指定的资源 + +#### 查看master组件状态 + +```shell +[root@master ~]# kubectl get componentstatuses +Warning: v1 ComponentStatus is deprecated in v1.19+ +NAME STATUS MESSAGE ERROR +controller-manager Healthy ok +etcd-0 Healthy {"health":"true","reason":""} +scheduler Healthy ok + +#使用简写的方式 +[root@master ~]# kubectl get cs +Warning: v1 ComponentStatus is deprecated in v1.19+ +NAME STATUS MESSAGE ERROR +scheduler Healthy ok +controller-manager Healthy ok +etcd-0 Healthy {"health":"true","reason":""} + +``` + +#### 查看集群节点状态 + +```shell +[root@master ~]# kubectl get nodes +NAME STATUS ROLES AGE VERSION +master Ready control-plane,master 5d3h v1.23.17 +worker31 Ready 5d2h v1.23.17 +worker32 Ready 5d2h v1.23.17 + +#简写方式 +[root@master ~]# kubectl get no +NAME STATUS ROLES AGE VERSION +master Ready 
control-plane,master 5d3h v1.23.17 +worker31 Ready 5d2h v1.23.17 +worker32 Ready 5d2h v1.23.17 +[root@master ~]# +``` + +#### 组合查看master组件状态、集群节点状态 + +```shell +[root@master ~]# kubectl get cs,no +Warning: v1 ComponentStatus is deprecated in v1.19+ +NAME STATUS MESSAGE ERROR +componentstatus/controller-manager Healthy ok +componentstatus/scheduler Healthy ok +componentstatus/etcd-0 Healthy {"health":"true","reason":""} + +NAME STATUS ROLES AGE VERSION +node/master Ready control-plane,master 5d3h v1.23.17 +node/worker31 Ready 5d3h v1.23.17 +node/worker32 Ready 5d3h v1.23.17 +[root@master ~]# +``` + +# K8S资源管理方式 + +K8S资源管理方式分为几种,一种以命令为主,一种以yaml文件(资源清单)为主,这里建议后面以yaml文件学习 + +- 命令式对象管理:直接使用命令去操作kubernetes资源 + +``` +kubectl run nginx-pod --image=nginx:1.17.1 --port=80 +``` + +- 命令式对象配置:通过命令配置和配置文件去操作kubernetes资源 + +``` +kubectl create/patch -f nginx-pod.yaml +``` + +- 声明式对象配置:通过apply命令和配置文件去操作kubernetes资源 + +``` +kubectl apply -f nginx-pod.yaml +``` + +### 什么是资源清单? + +简单点说就是:用于描述定义K8S集群资源的配置文件。 + +k8s 集群中对资源管理和资源对象编排部署都可以通过声明样式(YAML)文件来解决,也就是可以把需要对资源对象操作编辑到YAML 格式文件中,我们把这种文件叫做资源清单文件,通过kubectl 命令直接使用资源清单文件就可以实现对大量的资源对象进行编排部署了。一般在我们开发的时候,都是通过配置YAML文件来部署集群的。 + +YAML文件:就是资源清单文件,用于资源编排,后文我们统称为资源清单 + +不知道yaml文件是什么的,请查看该链接:https://www.runoob.com/w3cnote/yaml-intro.html + +### 资源清单的组成部分 + +- apiVersion + +指定api的版本号,基本上每个K8S的型号固定,不能乱写 + +- kind + +资源的类型,每个资源的类型也是固定的 + +- metadata: + +资源的元数据,比如资源的名称,资源的标签,所属于的名称空间以及资源注解等信息 + +- spec: + +用户期望资源,定义,镜像名称,容器的名称,调度策略,镜像拉取策略,重启策略,环境变量等相关的配置。 + +说白了就是用户期望容器如何运行 + +- status + +在k8s集群中资源实际的运行状态,由k8s组件维护,自动更新 + +### 查看资源清单的组成及详细信息文档 + +```shell +#kubectl explain <资源类型> +#查看pod +[root@master ~]# kubectl explain pods +KIND: Pod +VERSION: v1 + +DESCRIPTION: + Pod is a collection of containers that can run on a host. This resource is + created by clients and scheduled onto hosts. + +FIELDS: + apiVersion + APIVersion defines the versioned schema of this representation of an + object. 
Servers should convert recognized schemas to the latest internal + value, and may reject unrecognized values. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources + + kind + Kind is a string value representing the REST resource this object + represents. Servers may infer this from the endpoint the client submits + requests to. Cannot be updated. In CamelCase. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds + + metadata + Standard object's metadata. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata + + spec + Specification of the desired behavior of the pod. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + + status + Most recently observed status of the pod. This data may not be up to date. + Populated by the system. Read-only. More info: + https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + +[root@master ~]# + +``` + +### 查看指定字段的文档信息 + +```shell +[root@master ~]# kubectl explain po.spec.containers.imagePullPolicy +KIND: Pod +VERSION: v1 + +FIELD: imagePullPolicy + +DESCRIPTION: + Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always + if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. 
  More info:
  https://kubernetes.io/docs/concepts/containers/images#updating-images

```

### 较完整的资源清单模板

```yaml
apiVersion: v1 #必选,版本号,例如v1
kind: Pod   #必选,资源类型,例如 Pod
metadata:   #必选,元数据
  name: string #必选,Pod名称
  namespace: string #Pod所属的命名空间,默认为"default"
  labels:    #自定义标签列表(键值对形式)
    name: string
spec: #必选,Pod中容器的详细定义
  containers: #必选,Pod中容器列表
  - name: string #必选,容器名称
    image: string #必选,容器的镜像名称
    imagePullPolicy: [ Always|Never|IfNotPresent ] #获取镜像的策略
    command: [string] #容器的启动命令列表,如不指定,使用打包时使用的启动命令
    args: [string] #容器的启动命令参数列表
    workingDir: string #容器的工作目录
    volumeMounts: #挂载到容器内部的存储卷配置
    - name: string #引用pod定义的共享存储卷的名称,需用volumes[]部分定义的卷名
      mountPath: string #存储卷在容器内mount的绝对路径,应少于512字符
      readOnly: boolean #是否为只读模式
    ports: #需要暴露的端口号列表
    - name: string #端口的名称
      containerPort: int #容器需要监听的端口号
      hostPort: int #容器所在主机需要监听的端口号,默认与containerPort相同
      protocol: string #端口协议,支持TCP和UDP,默认TCP
    env: #容器运行前需设置的环境变量列表
    - name: string #环境变量名称
      value: string #环境变量的值
    resources: #资源限制和请求的设置
      limits: #资源限制的设置
        cpu: string #CPU的限制,单位为core数,将用于docker run --cpu-shares参数
        memory: string #内存限制,单位可以为MiB/GiB,将用于docker run --memory参数
      requests: #资源请求的设置
        cpu: string #CPU请求,容器启动的初始可用数量
        memory: string #内存请求,容器启动的初始可用数量
    lifecycle: #生命周期钩子
      postStart: #容器启动后立即执行此钩子,如果执行失败,会根据重启策略进行重启
      preStop: #容器终止前执行此钩子,无论结果如何,容器都会终止
    livenessProbe: #对Pod内各容器健康检查的设置,当探测无响应几次后将自动重启该容器
      exec:   #将Pod容器内健康检查方式设置为exec方式
        command: [string] #exec方式需要指定的命令或脚本
      httpGet: #将Pod内各容器健康检查方式设置为HttpGet方式,需要指定path、port
        path: string
        port: number
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket: #将Pod内各容器健康检查方式设置为tcpSocket方式
        port: number
      initialDelaySeconds: 0 #容器启动完成后首次探测的时间,单位为秒
      timeoutSeconds: 0    #对容器健康检查探测等待响应的超时时间,单位秒,默认1秒
      periodSeconds: 0    #对容器监控检查的定期探测时间设置,单位秒,默认10秒一次
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure] #Pod的重启策略
  nodeName:
    #设置NodeName表示将该Pod调度到指定名称的node节点上
  nodeSelector: object #设置NodeSelector表示将该Pod调度到包含这些label的node上
  imagePullSecrets: #Pull镜像时使用的secret名称,以key:secretkey格式指定
  - name: string
  hostNetwork: false #是否使用主机网络模式,默认为false,如果设置为true,表示使用宿主机网络
  volumes: #在该pod上定义共享存储卷列表
  - name: string #共享存储卷名称 (volumes类型有很多种)
    emptyDir: {} #类型为emptyDir的存储卷,与Pod同生命周期的一个临时目录,为空值
    hostPath: #类型为hostPath的存储卷,表示挂载Pod所在宿主机的目录
      path: string    #Pod所在宿主机的目录,将被用于容器中mount的目录
    secret:    #类型为secret的存储卷,挂载集群预定义的secret对象到容器内部
      secretName: string
      items:
      - key: string
        path: string
    configMap: #类型为configMap的存储卷,挂载预定义的configMap对象到容器内部
      name: string
      items:
      - key: string
        path: string
```

diff --git "a/k8s\346\234\215\345\212\241/5-k8s\350\265\204\346\272\220\344\271\213Pod\345\237\272\347\241\200.md" "b/k8s\346\234\215\345\212\241/5-k8s\350\265\204\346\272\220\344\271\213Pod\345\237\272\347\241\200.md"
new file mode 100644
index 0000000000000000000000000000000000000000..697b09b8cd9131a6bf56d5c7dc4390bc82b281aa
--- /dev/null
+++ "b/k8s\346\234\215\345\212\241/5-k8s\350\265\204\346\272\220\344\271\213Pod\345\237\272\347\241\200.md"
@@ -0,0 +1,1279 @@

# K8S核心资源之Pod基础

## 什么是Pod?
+ +Pod是K8S系统中可以创建和管理的最小单元,是资源对象模型中由用户创建或部署的最小资源对象模型,也是在K8S上运行容器化应用的资源对象,其它的资源对象都是用来支撑或者扩展Pod对象功能的,比如控制器对象是用来管控Pod对象的,Service或者Ingress资源对象是用来暴露Pod引用对象的,PersistentVolume资源对象是用来为Pod提供存储等等,K8S不会直接处理容器,而是Pod,Pod是由一个或多个container组成。 + +Pod是Kubernetes的最重要概念,每一个Pod都有一个特殊的被称为 “根容器”的Pause容器。Pause容器对应的镜像属于Kubernetes平台的一部分,除了Pause容器,每个Pod还包含一个或多个紧密相关的用户业务容器。 + +![image-20240919113345994](./images/image-20240919113345994.png) + +## Pod的基本概念 + +- k8s中最小部署的单元 +- Pod里面是由一个或多个容器组成【一组容器的集合】 +- 一个pod中的容器是共享网络命名空间,基于pause容器,也就是根容器 +- Pod是短暂的 +- 每个Pod包含一个或多个紧密相关的用户业务容器 + +## Pod存在的意义 + +- 创建容器使用docker,一个docker对应一个容器,一个容器运行一个应用进程 +- Pod是多进程设计,运用多个应用程序,也就是一个Pod里面有多个容器,而一个容器里面运行一个应用程序 + +![image-20240919113615210](./images/image-20240919113615210.png) + +- Pod的存在是为了亲密性应用 + - 两个应用之间进行交互 + - 网络之间的调用【通过127.0.0.1 或 socket】 + - 两个应用之间需要频繁调用 + +``` +这儿可以理解两个微服务之间的互相调用,避免了网络波动 +``` + +Pod是在K8S集群中运行部署应用或服务的最小单元,它是可以支持多容器的。Pod的设计理念是支持多个容器在一个Pod中共享网络地址和文件系统,可以通过进程间通信和文件共享这种简单高效的方式组合完成服务。同时Pod对多容器的支持是K8S中最基础的设计理念。在生产环境中,通常是由不同的团队各自开发构建自己的容器镜像,在部署的时候组合成一个微服务对外提供服务。 + +Pod是K8S集群中所有业务类型的基础,可以把Pod看作运行在K8S集群上的小机器人,不同类型的业务就需要不同类型的小机器人去执行。目前K8S的业务主要可以分为以下几种 + +- 长期伺服型:long-running +- 批处理型:batch +- 节点后台支撑型:node-daemon +- 有状态应用型:stateful application + +上述的几种类型,分别对应的小机器人控制器为:Deployment、Job、DaemonSet 和 StatefulSet (后面将介绍控制器) + +## 命令式管理第一个Pod + +学习过程中体验一下即可,不用记住,主要记住的是资源清单的方式 + +- 命令式创建Pod + +```shell +[root@master ~]# kubectl run nginx-test-pod --image=nginx:1.17.1 --port=80 +pod/nginx-test-pod created +``` + +- 查看pod + +pod的STATUS这个时候查看可能是ContainerCreating,说明容器正在拉取镜像中,这儿取决你的网速 + +```shell +[root@master ~]# kubectl get pod +NAME READY STATUS RESTARTS AGE +nginx-test-pod 0/1 ContainerCreating 0 9s +``` + +如果创建成功了则是如下 + +```shell +[root@master ~]# kubectl get pod +NAME READY STATUS RESTARTS AGE +nginx-test-pod 1/1 Running 0 2m39s +``` + +- 删除Pod + +```shell +[root@master ~]# kubectl delete pod nginx-test-pod +pod "nginx-test-pod" deleted +[root@master ~]# kubectl get pod +No resources found in default namespace. 
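#补充(假设性示例,生产环境慎用):若Pod长时间卡在Terminating状态,可跳过优雅终止强制删除
#kubectl delete pod nginx-test-pod --grace-period=0 --force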
+``` + +## 资源清单管理Pod + +### Pod单容器 + +vim /data/k8s/study/pod/1-pod.yaml + +```yaml +#指定api版本号 +apiVersion: v1 +#指定资源类型 +kind: Pod +#指定元数据 +metadata: + #Pod的名称 + name: pod-nginx-test1 +spec: + containers: + #指定运行容器的名字 + - name: nginx + #指定镜像 + image: nginx:1.17.1 +``` + +- 创建pod + +```shell +[root@master /data/k8s/study/pod]# kubectl apply -f 1-pod.yaml +pod/pod-nginx-test1 created +``` + +- 查看pod + +```shell +[root@master /data/k8s/study/pod]# kubectl get pod -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +pod-nginx-test1 1/1 Running 0 8s 10.100.2.5 worker32 + +#NAME:表示Pod的名称 +#READY:表示Pod内部几个容器,就绪了几个 +#STATUS:表示Pod当前的状态,Running表示就绪,上文中的ContainerCreating表示容器正在创建中,后面学习中我们还会接触到其它的值 +#RESTARTS:表示重启次数 +#AGE:表示该Pod存在了多长时间 +#IP:分配的IP地址,此地址和cni插件有关系 +#NODE:Pod分配的节点 +``` + +- 删除pod + +```shell +[root@master /data/k8s/study/pod]# kubectl delete -f 1-pod.yaml +pod "pod-nginx-test1" deleted +``` + +- 进入指定的pod容器内部 + +```shell +[root@master /data/k8s/study/pod]# kubectl exec pod-nginx-test1 -it -- bash +root@pod-nginx-test1:/# +``` + +- 查看创建Pod的详细信息 + +```shell +[root@master /data/k8s/study/pod]# kubectl describe pod pod-nginx-test1 +Name: pod-nginx-test1 +Namespace: default +Priority: 0 +Node: worker32/10.0.0.32 +Start Time: Thu, 19 Sep 2024 15:29:23 +0800 +Labels: +Annotations: +Status: Running +IP: 10.100.2.6 +IPs: + IP: 10.100.2.6 +Containers: + nginx: + Container ID: docker://8d37a1059ad258502786e1811a5c40d32baa8c59178a738e34999f117b434d75 + Image: nginx:1.17.1 + Image ID: docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb + Port: + Host Port: + State: Running + Started: Thu, 19 Sep 2024 15:29:24 +0800 + Ready: True + Restart Count: 0 + Environment: + Mounts: + /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mrwvr (ro) +Conditions: + Type Status + Initialized True + Ready True + ContainersReady True + PodScheduled True +Volumes: + kube-api-access-mrwvr: + Type: Projected (a 
volume that contains injected data from multiple sources) + TokenExpirationSeconds: 3607 + ConfigMapName: kube-root-ca.crt + ConfigMapOptional: + DownwardAPI: true +QoS Class: BestEffort +Node-Selectors: +Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s + node.kubernetes.io/unreachable:NoExecute op=Exists for 300s +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 8s default-scheduler Successfully assigned default/pod-nginx-test1 to worker32 + Normal Pulled 8s kubelet Container image "nginx:1.17.1" already present on machine + Normal Created 7s kubelet Created container nginx + Normal Started 7s kubelet Started container nginx + +``` + +### Pod多容器 + +上面的案例只是一个Pod中创建了一个容器,那我一个Pod中想要创建多个容器,应该怎么做呢?在containers标签下指定就好了 + +vim /data/k8s/study/pod/2-pod.yaml + +```shell +#指定api版本号 +apiVersion: v1 +#指定资源类型 +kind: Pod +#指定元数据 +metadata: + #Pod的名称 + name: pod-nginx-tomcat +spec: + containers: + #容器1 + - name: nginx + image: nginx:1.17.1 + #容器二 + - name: tomcat + image: tomcat:latest +``` + +- 创建后查看pod + +```shell +#创建pod +[root@master /data/k8s/study/pod]# kubectl apply -f 2-pod.yaml +pod/pod-nginx-tomcat created + +#查看pod +[root@master /data/k8s/study/pod]# kubectl get po -o wide +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +pod-nginx-tomcat 2/2 Running 0 3m28s 10.100.1.3 worker31 +``` + +- 进入容器内部 + +```shell +#进入nginx容器内部 +[root@master ~]# kubectl exec pod-nginx-tomcat -it -c nginx -- bash +root@pod-nginx-tomcat:/# exit +exit + +#进入tomcat容器内部 +[root@master ~]# kubectl exec pod-nginx-tomcat -it -c tomcat -- bash +root@pod-nginx-tomcat:/usr/local/tomcat# exit +exit +[root@master ~]# + +``` + +## Pod的访问 + +Pod默认只能在集群内部进行访问,并且默认只能使用集群内部分配的IP地址进行访问,不能被外界所进行访问的,这是为什么呢? 
- 安全性考虑

Kubernetes设计时遵循最小权限原则,即组件仅获得完成其任务所需的最少权限。直接暴露Pod给外部网络可能会引入安全隐患,比如让攻击者更容易定位和攻击运行在Pod内的服务。通过限制Pod的直接访问,Kubernetes鼓励使用更安全的服务暴露机制。

- 可管理性和弹性

Kubernetes设计鼓励使用Service来抽象Pod的访问。Service为一组具有相同功能的Pod提供一个稳定的服务访问入口,并且可以实现负载均衡。即使Pod因为故障重建或扩展,Service依然能够透明地路由流量到新的或现有的Pod实例,从而保证服务的高可用性和弹性。

### 测试Pod的访问

我们查看一下上述创建的容器分配的IP地址

```
[root@master ~]# kubectl get po -o wide
NAME               READY   STATUS    RESTARTS      AGE   IP           NODE       NOMINATED NODE   READINESS GATES
pod-nginx-test1    1/1     Running   1 (11m ago)   22m   10.100.2.7   worker32
pod-nginx-tomcat   2/2     Running   2 (11m ago)   23m   10.100.1.5   worker31
```

分配的IP地址是10.100.2.7和10.100.1.5,我们在master节点或者worker节点curl一下看看能否连通

```shell
#master节点
[root@master ~]# curl 10.100.1.5:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

#worker节点
[root@worker31 ~]# curl 10.100.1.5:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

经过测试,Pod在集群内部可以访问,那我们换一台集群外的机器访问试试,比如在harbor仓库所在的机器上访问

```shell
[root@harbor /data/harbor]# curl 10.100.1.5:80
curl: (7) Failed to connect to 10.100.1.5 port 80 after 0 ms: Connection refused

```

经过测试,发现Pod默认在集群外部是无法访问的,那么问题来了,部署到K8S集群的服务,如何才能被外界访问到呢?

### 容器被外部访问的几种方案

#### 方案一:hostNetwork

直接使用宿主机网络的方式,介绍:

```shell
#hostNetwork 是否使用主机网络模式,默认为false,如果设置为true,表示使用宿主机网络
[root@master ~]# kubectl explain po.spec.hostNetwork
KIND: Pod
VERSION: v1

FIELD: hostNetwork <boolean>

DESCRIPTION:
  Host networking requested for this pod. Use the host's network namespace.
  If this option is set, the ports that will be used must be specified.
  Default to false.
```

- ##### 案例:

  vim 3-hostNetwork-pod.yaml

```yaml
#指定api版本号
apiVersion: v1
#指定资源类型
kind: Pod
#指定元数据
metadata:
  #Pod的名称
  name: pod-nginx-hostnetwork
spec:
  #指定使用宿主机网络
  hostNetwork: true
  containers:
  #指定运行容器的名字
  - name: nginx
    #指定镜像
    image: nginx:1.17.1
```

检查并访问

```shell
[root@master /data/k8s/study/pod]# kubectl get po -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP          NODE       NOMINATED NODE   READINESS GATES
#这条数据的IP是宿主机的IP
pod-nginx-hostnetwork   1/1     Running   0          2m32s   10.0.0.32   worker32
```

访问http://10.0.0.32/

![image-20240923112534516](./images/image-20240923112534516.png)

#### 方案二:ports

将容器的端口映射到宿主机端口中

相关的资源文档

```shell
[root@master /data/k8s/study/pod]# kubectl explain pod.spec.containers.ports
KIND: Pod
VERSION: v1

RESOURCE: ports <[]Object>

DESCRIPTION:
  List of ports to expose from the container. Exposing a port here gives the
  system additional information about the network connections a container
  uses, but is primarily informational. Not specifying a port here DOES NOT
  prevent that port from being exposed. Any port which is listening on the
  default "0.0.0.0" address inside a container will be accessible from the
  network. Cannot be updated.

  ContainerPort represents a network port in a single container.

FIELDS:
  containerPort <integer> -required-
    Number of port to expose on the pod's IP address. This must be a valid port
    number, 0 < x < 65536.

  hostIP <string>
    What host IP to bind the external port to.

  hostPort <integer>
    Number of port to expose on the host. If specified, this must be a valid
    port number, 0 < x < 65536. If HostNetwork is specified, this must match
    ContainerPort. Most containers do not need this.

  name <string>
    If specified, this must be an IANA_SVC_NAME and unique within the pod. Each
    named port in a pod must have a unique name. Name for the port that can be
    referred to by services.

  protocol <string>
    Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP".

```

解释:

```
ports: #需要暴露的端口号列表
  - name: string #声明端口的名称,要求在Pod内唯一,将来可以基于该名称来识别containerPort所对应的端口
    containerPort: int #声明容器需要监听的端口号
    hostPort: int #声明宿主机上需要监听的端口号,早期会真实监听端口,后来只是添加映射,k8s v1.23.17+ 不会有端口监听但是会生成iptables规则
    protocol: string #端口协议,支持TCP和UDP,默认TCP
```

- ##### 案例

vim 4-ports-pod.yaml

```yaml
#指定api版本号
apiVersion: v1
#指定资源类型
kind: Pod
#指定元数据
metadata:
  #Pod的名称
  name: pod-tomcat-ports
spec:
  containers:
  #指定运行容器的名字
  - name: tomcat
    #指定镜像
    image: tomcat:latest
    #指定端口映射
    ports:
    - containerPort: 8080
      hostPort: 8081
      protocol: TCP
      name: tomcat-8081
```

检查后发现,IP是k8s集群分配的IP,可以访问worker31节点的8081端口测试一下

```shell
[root@master /data/k8s/study/pod]# kubectl get po -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
pod-tomcat-ports   1/1     Running   0          15s   10.100.1.6   worker31
```

访问http://10.0.0.31:8081/

![image-20240923113650795](./images/image-20240923113650795.png)

#### 方案三:port-forward

kubectl port-forward 是 Kubernetes 命令行工具 (kubectl) 提供的一个功能,用于在本地计算机和集群内运行的Pod之间建立临时的网络连接,实现端口转发。

**该方法通常用于开发或者测试,生产环境慎用**

- ##### 命令基本语法:

```
kubectl port-forward POD [LOCAL_PORT:]REMOTE_PORT [-n NAMESPACE] --address IP地址 --pod-running-timeout 300

参数解析:
POD: 需要转发端口的Pod的名称或标签选择器。
LOCAL_PORT:
本地计算机上的端口号,可选。如果不指定,Kubernetes会随机选择一个可用的本地端口。
REMOTE_PORT: Pod中需要转发的端口号。
-n NAMESPACE: 指定Pod所在的命名空间,如果不在当前上下文中,则需要指定。
--address:指定外部访问的地址
--pod-running-timeout: 参数设置等待Pod运行的最长时间,超过这个时间命令会退出
```

- ##### 案例:

以pod-nginx-test1为例,

```shell
[root@master /data/k8s/study/pod]# kubectl port-forward pod-nginx-test1 81:80 --address 0.0.0.0
Forwarding from 0.0.0.0:81 -> 80
```

访问时只能通过执行了port-forward命令的那台机器访问,比如在master节点上执行了port-forward命令,那么访问时只能使用master节点的IP

http://10.0.0.30:81/

![image-20240923114428795](./images/image-20240923114428795.png)

#### 方案四:Service(后面讲解)

#### 方案五:ingress(后面讲解)

## Pod中重启策略

现在有一个问题:当Pod内部的容器启动失败时,我们需要控制Pod如何重启,这个时候就需要使用Pod的重启策略了。可以看下面的资源文档信息

```shell
[root@master /data/k8s/study/pod]# kubectl explain pod.spec.restartPolicy
KIND: Pod
VERSION: v1

FIELD: restartPolicy <string>

DESCRIPTION:
  Restart policy for all containers within the pod. One of Always, OnFailure,
  Never. Default to Always. More info:
  https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
```

根据文档信息可以看出,Pod的重启策略有三个,分别是:Always,OnFailure,Never,下面是详细介绍

### Always

无论容器因何种原因终止,Pod 都会被重启。这也是默认值

### OnFailure

当容器正常退出时不会重启Pod,当容器异常退出时,会重启Pod。**也就是容器终止运行且退出码不为0时重启**

### Never

无论容器以什么状态退出、是否失败,都不会重启该容器

重启策略适用于pod对象中的所有容器,首次需要重启的容器,将在其需要时立即进行重启,随后再次需要重启的操作将由kubelet延迟一段时间后进行,且反复的重启操作的延迟时长依次为10s、20s、40s、80s、160s和300s,300s是最大延迟时长。

- ### 案例

```yaml
#指定api版本号
apiVersion: v1
#指定资源类型
kind: Pod
#指定元数据
metadata:
  #Pod的名称
  name: pod-java-restart
spec:
  restartPolicy: Always
  containers:
  #指定运行容器的名字
  - name: java-8
    #指定镜像
    image: openjdk:8u102-jdk
```

验证

```shell
#第一次重启
[root@master /data/k8s/study/pod]# kubectl get pod
NAME               READY   STATUS             RESTARTS      AGE
pod-java-restart   0/1     CrashLoopBackOff   1 (3s ago)    5s
#第二次重启正在加载
[root@master /data/k8s/study/pod]# kubectl get pod
NAME               READY   STATUS      RESTARTS      AGE
pod-java-restart   0/1     Completed   2 (14s ago)   16s
[root@master /data/k8s/study/pod]#
kubectl get pod
NAME               READY   STATUS             RESTARTS      AGE
pod-java-restart   0/1     CrashLoopBackOff   2 (24s ago)   39s
#第N次重启
[root@master /data/k8s/study/pod]# kubectl get pod
NAME               READY   STATUS             RESTARTS      AGE
pod-java-restart   0/1     CrashLoopBackOff   5 (16s ago)   3m19s
```

上述问题中,为什么openjdk:8u102-jdk这个镜像会一直重启呢?学过Java的同学应该都知道,JDK镜像本身只提供运行环境,容器启动后如果没有一个前台进程把容器"阻塞"住,主进程就会立即退出。在docker中是使用CMD或者ENTRYPOINT指令来指定这个前台进程的,那么在k8s中有对应的实现吗?答案是有的!

## 容器中command和args参数

command和args参数是属于container层面的,**command类似docker中的ENTRYPOINT指令,args类似docker中的CMD指令**

下面是它俩和docker中CMD和ENTRYPOINT的具体区别

- **如果command和args均没有写,那么用Dockerfile的配置。**
- **如果command写了,但args没有写,那么Dockerfile默认的配置会被忽略,执行输入的command**
- **如果command没写,但args写了,那么Dockerfile中配置的ENTRYPOINT的命令会被执行,使用当前args的参数**
- **如果command和args都写了,那么Dockerfile的配置被忽略,执行command并追加上args参数**

command和args如何使用呢?下面是详细的资源文档

```shell
[root@master /data/k8s/study/pod]# kubectl explain pod.spec.containers.command
KIND: Pod
VERSION: v1

FIELD: command <[]string>

DESCRIPTION:
  Entrypoint array. Not executed within a shell. The docker image's
  ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME)
  are expanded using the container's environment. If a variable cannot be
  resolved, the reference in the input string will be unchanged. Double $$
  are reduced to a single $, which allows for escaping the $(VAR_NAME)
  syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)".
  Escaped references will never be expanded, regardless of whether the
  variable exists or not. Cannot be updated. More info:
  https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

#args的文档
[root@master /data/k8s/study/pod]# kubectl explain pod.spec.containers.args
KIND: Pod
VERSION: v1

FIELD: args <[]string>

DESCRIPTION:
  Arguments to the entrypoint. The docker image's CMD is used if this is not
  provided.
  Variable references $(VAR_NAME) are expanded using the
  container's environment. If a variable cannot be resolved, the reference in
  the input string will be unchanged. Double $$ are reduced to a single $,
  which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
  produce the string literal "$(VAR_NAME)". Escaped references will never be
  expanded, regardless of whether the variable exists or not. Cannot be
  updated. More info:
  https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell

```

通过文档可以发现它俩的参数都是string类型的数组,我们来写个案例

- ### 案例

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-command-args
spec:
  containers:
  #指定运行容器的名字
  - name: java-8
    #指定镜像
    image: openjdk:8u102-jdk
    # 用于替换Dockerfile的ENTRYPOINT指令
    # command: ["sleep","3600"]
    # 用于替换Dockerfile的CMD指令
    # args: ["tail","-f","/etc/hosts"]
    # 当command和args一起使用时,和Dockerfile中的用法类似,args将作为参数传递给command
    args:
    - -f
    - /etc/hosts
    command:
    - tail
```

我们检查pod的运行状态可以发现是Running,而之前创建的却一直在重启

```shell
[root@master /data/k8s/study/pod]# kubectl get pod
NAME               READY   STATUS             RESTARTS       AGE
pod-command-args   1/1     Running            0              3s
pod-java-restart   0/1     CrashLoopBackOff   9 (112s ago)   22m
```

## 容器镜像的拉取策略

除了Pod的重启策略之外,还有容器镜像的拉取策略,主要作用就是根据配置项来决定是否需要去远程仓库拉取镜像。拉取策略的详细资源文档如下:

```shell
[root@master /data/k8s/study/pod]# kubectl explain pod.spec.containers.imagePullPolicy
KIND: Pod
VERSION: v1

FIELD: imagePullPolicy <string>

DESCRIPTION:
  Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
  if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
  More info:
  https://kubernetes.io/docs/concepts/containers/images#updating-images
```

根据文档我们可以发现拉取策略主要有三种:

### Never

永不拉取,如果本地有镜像则尝试启动,若本地没有镜像则不去远程仓库拉取镜像。

### IfNotPresent

如果本地有镜像就使用本地的,如果没有就去远程仓库拉取

### Always

如果本地有镜像,则对比本地镜像和远程仓库的摘要信息,若相同则使用本地缓存,若不同则重新拉取镜像。

如果本地没有镜像,则无需对比摘要信息,直接拉取镜像。

可以理解成Always就是获取最新的镜像

默认值为:**如果资源清单中没有指定镜像的版本号或者版本号为latest,那么拉取策略就是Always,否则就是IfNotPresent**

**案例**

vim 7-imagepullpolicy-pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-mysql-imagepolicy
spec:
  containers:
  #指定运行容器的名字
  - name: mysql-8
    #指定镜像
    image: mysql:8.0.12
    #指定镜像拉取策略
    imagePullPolicy: Never
```

检查

```shell
[root@master /data/k8s/study/pod]# kubectl get po
NAME                    READY   STATUS              RESTARTS   AGE
pod-mysql-imagepolicy   0/1     ErrImageNeverPull   0          18s

#查看pod内部的详细信息
[root@master /data/k8s/study/pod]# kubectl describe pod pod-mysql-imagepolicy
#此处省略一部分信息...
Events:
  Type     Reason             Age                From               Message
  ----     ------             ----               ----               -------
  Normal   Scheduled          37s                default-scheduler  Successfully assigned default/pod-mysql-imagepolicy to worker31
  Normal   SandboxChanged     35s                kubelet            Pod sandbox changed, it will be killed and re-created.
  Warning  ErrImageNeverPull  10s (x6 over 36s)  kubelet            Container image "mysql:8.0.12" is not present with pull policy of Never
  Warning  Failed             10s (x6 over 36s)  kubelet            Error: ErrImageNeverPull

```

我们将imagePullPolicy的值改为Always测试一下

vim 7-imagepullpolicy-pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-mysql-imagepolicy
spec:
  containers:
  #指定运行容器的名字
  - name: mysql-8
    #指定镜像
    image: mysql:8.0.12
    #指定镜像拉取策略
    #imagePullPolicy: Never
    imagePullPolicy: Always
```

检查发现镜像拉取下来了,但是启动失败了,pod状态为Error且一直在重启,为什么呢?

通过查看pod的日志,我们可以发现原来是数据库的环境变量我们并没有进行配置,如果没有配置MySQL环境变量,MySQL是启动不起来的,那在K8S集群中我们怎么才能设置容器的环境变量呢?
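顺带一提,上面"容器镜像的拉取策略"一节中默认值的推导规则,可以用下面一小段shell函数来示意(default_pull_policy是本文虚构的演示函数,并非kubelet的真实实现,也未考虑镜像digest等情况,仅根据tag是否为latest或缺省来判断):

```shell
#!/bin/sh
# 示意:根据镜像tag推导默认的imagePullPolicy(演示用,非kubelet实现)
default_pull_policy() {
    # 去掉registry/仓库路径前缀,只保留最后一段,如 nginx:1.17.1
    name="${1##*/}"
    case "$name" in
        *:*) tag="${name##*:}" ;;   # 显式指定了tag
        *)   tag="latest"      ;;   # 未指定tag,等同于latest
    esac
    if [ "$tag" = "latest" ]; then
        echo "Always"
    else
        echo "IfNotPresent"
    fi
}

default_pull_policy "nginx"            # 未指定tag -> Always
default_pull_policy "mysql:8.0.12"     # 指定了具体tag -> IfNotPresent
default_pull_policy "tomcat:latest"    # latest -> Always
```

可以看到,是否显式指定了非latest的tag,直接决定了默认策略,这也是生产环境建议固定镜像tag的原因之一。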
+ +```shell +[root@master /data/k8s/study/pod]# kubectl get pod +NAME READY STATUS RESTARTS AGE +pod-mysql-imagepolicy 0/1 Error 3 (33s ago) 88s + +#查看详细信息 +[root@master /data/k8s/study/pod]# kubectl describe pod pod-mysql-imagepolicy +#...此处省略一部分内容 +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Scheduled 104s default-scheduler Successfully assigned default/pod-mysql-imagepolicy to worker31 + Normal Pulled 70s kubelet Successfully pulled image "mysql:8.0.12" in 33.086602666s (33.086609897s including waiting) + Normal Pulled 66s kubelet Successfully pulled image "mysql:8.0.12" in 3.076153559s (3.07615908s including waiting) + Normal Pulled 50s kubelet Successfully pulled image "mysql:8.0.12" in 2.824497743s (2.824505464s including waiting) + Normal Pulling 25s (x4 over 104s) kubelet Pulling image "mysql:8.0.12" + Normal Created 22s (x4 over 70s) kubelet Created container mysql-8 + Normal Started 22s (x4 over 70s) kubelet Started container mysql-8 + Normal Pulled 22s kubelet Successfully pulled image "mysql:8.0.12" in 2.860211903s (2.860220434s including waiting) + Warning BackOff 5s (x6 over 65s) kubelet Back-off restarting failed container + +#查看pod的日志 +[root@master /data/k8s/study/pod]# kubectl logs pod-mysql-imagepolicy mysql-8 +error: database is uninitialized and password option is not specified + You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD + +``` + +## 容器中的环境变量 + +我们可以先查看一下资源文档 + +```shell +[root@master /data/k8s/study/pod]# kubectl explain pod.spec.containers.env +KIND: Pod +VERSION: v1 + +RESOURCE: env <[]Object> + +DESCRIPTION: + List of environment variables to set in the container. Cannot be updated. + + EnvVar represents an environment variable present in a Container. + +FIELDS: + name -required- + Name of the environment variable. Must be a C_IDENTIFIER. 
+
+   value
+     Variable references $(VAR_NAME) are expanded using the previously defined
+     environment variables in the container and any service environment
+     variables. If a variable cannot be resolved, the reference in the input
+     string will be unchanged. Double $$ are reduced to a single $, which allows
+     for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the
+     string literal "$(VAR_NAME)". Escaped references will never be expanded,
+     regardless of whether the variable exists or not. Defaults to "".
+
+   valueFrom
+     Source for the environment variable's value. Cannot be used if value is not
+     empty.
+
+```
+
+From the documentation above we can see that name is the environment variable's key and is required, while value is the corresponding value; valueFrom (which references the value from another source) is not important here.
+
+Let's modify the earlier example so that our MySQL can start
+
+**Example**
+
+vim 8-mysql-pod.yaml
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-mysql-env
+spec:
+  containers:
+  #name of the container to run
+  - name: mysql-8
+    #the image to use
+    image: mysql:8.0.12
+    #the image pull policy
+    #imagePullPolicy: Never
+    imagePullPolicy: Always
+    env:
+    - name: "MYSQL_ROOT_PASSWORD"
+      value: "123"
+```
+
+After starting it, check: the pod status is Running:
+
+```shell
+[root@master /data/k8s/study/pod]# kubectl get pod
+NAME            READY   STATUS    RESTARTS   AGE
+pod-mysql-env   1/1     Running   0          6s
+```
+
+We have now covered the basics of Pods. Before moving on to something new, keep this question in mind: when several Pods share the same characteristics, how do we manage them? Can we attach some kind of marker to them? The answer is yes.
+
+# Labels
+
+## Concept
+
+Labels are an important concept in the kubernetes system. Their purpose is to **attach identifiers to resources so they can be distinguished and selected**.
+
+## Characteristics of Labels:
+
+- A Label is attached to objects such as Nodes, Pods and Services in the form of a key/value pair
+- A resource object can define any number of Labels, and the same Label can be attached to any number of resource objects
+- Labels are usually set when a resource object is defined, but they can also be added or removed dynamically after the object is created
+
+With Labels we can group resources along multiple dimensions, making resource allocation, scheduling, configuration and deployment flexible and convenient.
+
+Some common Label examples:
+
+- Version labels: "version":"release", "version":"stable"......
+- Environment labels: "environment":"dev","environment":"test","environment":"pro"
+- Tier labels: "tier":"frontend","tier":"backend"
+
+Once labels are defined, we also need a way to select by them, which is where the Label Selector comes in. In short: a Label attaches an identifier to a resource object, and a Label Selector filters resource objects by those identifiers
+
+## Label Selector
+
+The label selector's main purpose is as follows:
+
+**to query and filter resource objects that carry certain labels**
+
+There are currently two kinds of Label Selector:
+
+- Equality-based Label Selector
+
+  - name = slave: selects all objects whose Labels contain key="name" with value="slave"
+
+  - env != production: selects all objects whose Labels contain key="env" with a value not equal to "production"
+
+- Set-based Label Selector
+
+  - name in (master, slave): selects all objects whose Labels contain key="name" with value "master" or "slave"
+
+  - name not in (frontend): selects all objects whose Labels contain key="name" with a value not equal to "frontend"
+
+Multiple selection conditions can be used at the same time by combining several Label Selectors, separated by a comma ",". For example:
+
+- name=slave,env!=production
+- name not in (frontend),env!=production
+
+Below we focus on Labels themselves; Label Selectors are covered later
+
+## Create Pods with Labels attached
+
+vim 9-label-pod.yaml
+
+```yaml
+apiVersion: v1
+kind: Pod
+#metadata
+metadata:
+  name: pod-nginx-label-1
+  #attach labels to the pod
+  labels:
+    name: "nginx"
+    version: "1.0"
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.17.1
+---
+apiVersion: v1
+kind: Pod
+#metadata
+metadata:
+  name: pod-nginx-label-2
+  #attach labels to the pod
+  labels:
+    name: "nginx"
+    version: "2.0"
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.17.1
+---
+apiVersion: v1
+kind: Pod
+#metadata
+metadata:
+  name: pod-nginx-label-3
+  #attach labels to the pod
+  labels:
+    name: "nginx"
+    version: "3.0"
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.17.1
+```
+
+## Commands for working with resource labels:
+
+### View labels:
+
+```shell
+#view the labels of all Pods
+[root@master /data/k8s/study/pod]# kubectl get po --show-labels
+NAME                READY   STATUS    RESTARTS   AGE   LABELS
+pod-nginx-label-1   1/1     Running   0          74s   name=nginx,version=1.0
+pod-nginx-label-2   1/1     Running   0          74s   name=nginx,version=2.0
+pod-nginx-label-3   1/1     Running   0          74s   name=nginx,version=3.0
+
+#view the labels of a specific Pod
+[root@master /data/k8s/study/pod]# kubectl get pod pod-nginx-label-1 --show-labels
+NAME                READY   STATUS    RESTARTS   AGE     LABELS
+pod-nginx-label-1   1/1     Running   0          2m47s   name=nginx,version=1.0
+
+#view Pods with a specific label
+[root@master 
/data/k8s/study/pod]# kubectl get pod -l version=2.0 --show-labels
+NAME                READY   STATUS    RESTARTS   AGE     LABELS
+pod-nginx-label-2   1/1     Running   0          3m37s   name=nginx,version=2.0
+
+[root@master /data/k8s/study/pod]# kubectl get pod -l name=nginx --show-labels
+NAME                READY   STATUS    RESTARTS   AGE     LABELS
+pod-nginx-label-1   1/1     Running   0          4m13s   name=nginx,version=1.0
+pod-nginx-label-2   1/1     Running   0          4m13s   name=nginx,version=2.0
+pod-nginx-label-3   1/1     Running   0          4m13s   name=nginx,version=3.0
+
+#filter out Pods whose given label != xxx
+[root@master /data/k8s/study/pod]# kubectl get pod -l version!=1.0 --show-labels
+NAME                READY   STATUS    RESTARTS   AGE     LABELS
+pod-nginx-label-2   1/1     Running   0          4m59s   name=nginx,version=2.0
+pod-nginx-label-3   1/1     Running   0          4m59s   name=nginx,version=3.0
+[root@master /data/k8s/study/pod]#
+```
+
+### Modify a pod's labels
+
+```shell
+#modify the Pod's label
+[root@master /data/k8s/study/pod]# kubectl label pod pod-nginx-label-1 version=1.1 --overwrite
+pod/pod-nginx-label-1 labeled
+
+#view the Pod
+[root@master /data/k8s/study/pod]# kubectl get pod -l version=1.1 --show-labels
+NAME                READY   STATUS    RESTARTS   AGE     LABELS
+pod-nginx-label-1   1/1     Running   0          7m23s   name=nginx,version=1.1
+
+```
+
+### Delete a label
+
+```shell
+#delete a label; syntax: kubectl label pod <pod-name> <label-key>-
+[root@master /data/k8s/study/pod]# kubectl label pod pod-nginx-label-1 name-
+pod/pod-nginx-label-1 unlabeled
+
+[root@master /data/k8s/study/pod]# kubectl get pod pod-nginx-label-1 --show-labels
+NAME                READY   STATUS    RESTARTS   AGE   LABELS
+pod-nginx-label-1   1/1     Running   0          12m   version=1.1
+[root@master /data/k8s/study/pod]#
+```
+
+### Delete pods by label
+
+```shell
+#delete the pods
+[root@master /data/k8s/study/pod]# kubectl delete pod -l name=nginx
+pod "pod-nginx-label-2" deleted
+pod "pod-nginx-label-3" deleted
+
+#view the pods
+[root@master /data/k8s/study/pod]# kubectl get pod
+NAME                READY   STATUS    RESTARTS   AGE
+pod-nginx-label-1   1/1     Running   0          13m
+[root@master /data/k8s/study/pod]#
+
+```
+
+Besides the labels above, there is also the namespace, which can likewise be used to manage k8s resources, for example to separate environments, tenants and so on
+
+# Core K8S resource: Namespace
+
+## What is a namespace?
+
+Namespace is a very important resource in the kubernetes system. Its main purpose is to provide **resource isolation between multiple environments** or **resource isolation between multiple tenants**.
+
+By default, all Pods in a kubernetes cluster can access each other. In practice, you may not want two Pods to be able to access each other, and in that case you can place the two Pods in different namespaces. By assigning the cluster's internal resources to different Namespaces, kubernetes forms logical "groups", so that resources in different groups can be used and managed in isolation.
+
+Through kubernetes' authorization mechanism, different namespaces can be handed over to different tenants to manage, which achieves multi-tenant resource isolation. On top of that, kubernetes' resource quota mechanism can limit the resources each tenant may consume, such as CPU usage and memory usage, to manage the resources available to each tenant.
+
+![image-20240924151229418](./images/image-20240924151229418.png)
+
+Not all resources support namespaces: those that do not are called global (cluster-scoped) resources, while those that do are called local (namespaced) resources.
+
+## Check which resources support namespaces
+
+Resources with NAMESPACED set to true support namespaces; those with false do not
+
+```shell
+[root@master /data/k8s/study/pod]# kubectl api-resources
+NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
+bindings                                       v1                                     true         Binding
+componentstatuses                 cs           v1                                     false        ComponentStatus
+configmaps                        cm           v1                                     true         ConfigMap
+endpoints                         ep           v1                                     true         Endpoints
+events                            ev           v1                                     true         Event
+limitranges                       limits       v1                                     true         LimitRange
+namespaces                        ns           v1                                     false        Namespace
+nodes                             no           v1                                     false        Node
+persistentvolumeclaims            pvc          v1                                     true         PersistentVolumeClaim
+persistentvolumes                 pv           v1                                     false        PersistentVolume
+pods                              po           v1                                     true         Pod
+podtemplates                                   v1                                     true         PodTemplate
+replicationcontrollers            rc           v1                                     true         ReplicationController
+resourcequotas                    quota        v1                                     true         ResourceQuota
+secrets                                        v1                                     true         Secret
+serviceaccounts                   sa           v1                                     true         ServiceAccount
+services                          svc          v1                                     true         Service
+mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
+validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration
+customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1                false        CustomResourceDefinition
+apiservices                                    apiregistration.k8s.io/v1              false        APIService
+controllerrevisions                            apps/v1                                true         ControllerRevision
+daemonsets                        ds           apps/v1                                true         DaemonSet
+deployments                       deploy       apps/v1                                true         Deployment
+replicasets                       rs           apps/v1                                true         ReplicaSet
+statefulsets                      sts          apps/v1                                true         StatefulSet
+tokenreviews                                   authentication.k8s.io/v1               false        TokenReview
+localsubjectaccessreviews                      authorization.k8s.io/v1                true         LocalSubjectAccessReview
+selfsubjectaccessreviews                       authorization.k8s.io/v1                false        SelfSubjectAccessReview
+selfsubjectrulesreviews                        authorization.k8s.io/v1                false        SelfSubjectRulesReview
+subjectaccessreviews                           authorization.k8s.io/v1                false        SubjectAccessReview
+horizontalpodautoscalers          hpa          autoscaling/v2                         true         HorizontalPodAutoscaler
+cronjobs                          cj           batch/v1                               true         CronJob
+jobs                                           batch/v1                               true         Job
+certificatesigningrequests        csr          certificates.k8s.io/v1                 false        CertificateSigningRequest
+leases                                         coordination.k8s.io/v1                 true         Lease
+endpointslices                                 discovery.k8s.io/v1                    true         EndpointSlice
+events                            ev           events.k8s.io/v1                       true         Event
+flowschemas                                    flowcontrol.apiserver.k8s.io/v1beta2   false        FlowSchema
+prioritylevelconfigurations                    flowcontrol.apiserver.k8s.io/v1beta2   false        PriorityLevelConfiguration
+ingressclasses                                 networking.k8s.io/v1                   false        IngressClass
+ingresses                         ing          networking.k8s.io/v1                   true         Ingress
+networkpolicies                   netpol       networking.k8s.io/v1                   true         NetworkPolicy
+runtimeclasses                                 node.k8s.io/v1                         false        RuntimeClass
+poddisruptionbudgets              pdb          policy/v1                              true         PodDisruptionBudget
+podsecuritypolicies               psp          policy/v1beta1                         false        PodSecurityPolicy
+clusterrolebindings                            rbac.authorization.k8s.io/v1           false        ClusterRoleBinding
+clusterroles                                   rbac.authorization.k8s.io/v1           false        ClusterRole
+rolebindings                                   rbac.authorization.k8s.io/v1           true         RoleBinding
+roles                                          rbac.authorization.k8s.io/v1           true         Role
+priorityclasses                   pc           scheduling.k8s.io/v1                   false        PriorityClass
+csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
+csinodes                                       storage.k8s.io/v1                      false        CSINode
+csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
+storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
+volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
+```
+
+## Namespaces created by default when the kubernetes cluster starts
+
+When a pod is created without specifying a namespace, the default is default; the pods created in all of our earlier examples belong to the default namespace
+
+```shell
+[root@master /data/k8s/study/pod]# kubectl get ns
+NAME              STATUS   AGE
+default           Active   10d
+kube-flannel      Active   10d
+kube-node-lease   Active   10d
+kube-public       Active   10d
+kube-system       Active   10d
+```
+
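+Incidentally, if you find yourself adding `-n <namespace>` to every command, you can change which namespace the current kubectl context uses by default. A small sketch (standard kubectl usage rather than a command from the text above; the `kube-public` namespace is just an example):
+
+```shell
+#point the current context at a different default namespace
+kubectl config set-context --current --namespace=kube-public
+
+#check which namespace the current context now uses
+kubectl config view --minify -o jsonpath='{..namespace}'
+```
+
+After this, a plain `kubectl get pod` lists pods from that namespace without any `-n` flag.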
+## Namespace commands
+
+### Create a namespace
+
+```shell
+#syntax: kubectl create ns <namespace>
+[root@master /data/k8s/study/pod]# kubectl create ns dev-test
+namespace/dev-test created
+```
+
+### View namespaces
+
+```shell
+#view all namespaces
+[root@master /data/k8s/study/pod]# kubectl get ns
+NAME              STATUS   AGE
+default           Active   10d
+dev-test          Active   65s
+kube-flannel      Active   10d
+kube-node-lease   Active   10d
+kube-public       Active   10d
+kube-system       Active   10d
+
+#view a specific namespace
+[root@master /data/k8s/study/pod]# kubectl get ns dev-test -o wide
+NAME       STATUS   AGE
+dev-test   Active   72s
+
+#print the detailed information of a specific namespace
+[root@master /data/k8s/study/pod]# kubectl describe ns dev-test
+Name:         dev-test
+Labels:       kubernetes.io/metadata.name=dev-test
+Annotations:  
+Status:       Active
+
+No resource quota.
+
+No LimitRange resource.
+```
+
+### Delete a namespace
+
+```shell
+#delete; syntax: kubectl delete ns <namespace>
+[root@master /data/k8s/study/pod]# kubectl delete ns dev-test
+namespace "dev-test" deleted
+
+#check that it was deleted
+[root@master /data/k8s/study/pod]# kubectl get ns
+NAME              STATUS   AGE
+default           Active   10d
+kube-flannel      Active   10d
+kube-node-lease   Active   10d
+kube-public       Active   10d
+kube-system       Active   10d
+[root@master /data/k8s/study/pod]#
+
+```
+
+## Associate a Pod with a namespace
+
+vim 10-ns-pod.yaml
+
+```yaml
+#create the namespace
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: dev
+---
+#create the pod
+apiVersion: v1
+kind: Pod
+metadata:
+  name: ns-pod
+  #specify the namespace
+  namespace: dev
+spec:
+  containers:
+  - name: nginx
+    image: nginx:1.17.1
+
+```
+
+### View Pods in a specific namespace
+
+```shell
+#view pods in the dev namespace
+[root@master /data/k8s/study/pod]# kubectl get po --namespace=dev
+NAME     READY   STATUS    RESTARTS   AGE
+ns-pod   1/1     Running   0          24s
+
+#view all pods
+[root@master /data/k8s/study/pod]# kubectl get po -A
+NAMESPACE      NAME                             READY   STATUS    RESTARTS        AGE
+dev            ns-pod                           1/1     Running   0               61s
+kube-flannel   kube-flannel-ds-9hz9c            1/1     Running   1 (4d23h ago)   10d
+kube-flannel   kube-flannel-ds-cx98g            1/1     Running   2 (4d23h ago)   10d
+kube-flannel   kube-flannel-ds-pjwhv            1/1     Running   1 (4d23h ago)   10d
+kube-system    
coredns-6d8c4cb4d-gpjjg          1/1     Running   18 (4d23h ago)   10d
+kube-system    coredns-6d8c4cb4d-wpw8f          1/1     Running   65 (4d23h ago)   10d
+kube-system    etcd-master                      1/1     Running   1 (4d23h ago)    10d
+kube-system    kube-apiserver-master            1/1     Running   1 (4d23h ago)    10d
+kube-system    kube-controller-manager-master   1/1     Running   1 (4d23h ago)    10d
+kube-system    kube-proxy-9k8v6                 1/1     Running   1 (4d23h ago)    10d
+kube-system    kube-proxy-pvx69                 1/1     Running   1 (4d23h ago)    10d
+kube-system    kube-proxy-sh24v                 1/1     Running   1 (4d23h ago)    10d
+kube-system    kube-scheduler-master            1/1     Running   1 (4d23h ago)    10d
+```
diff --git "a/k8s\346\234\215\345\212\241/6-k8s\350\265\204\346\272\220\344\271\213Pod\345\255\230\345\202\250.md" "b/k8s\346\234\215\345\212\241/6-k8s\350\265\204\346\272\220\344\271\213Pod\345\255\230\345\202\250.md"
new file mode 100644
index 0000000000000000000000000000000000000000..0fc8c6f034999cf0881224e9995877f56ab97c88
--- /dev/null
+++ "b/k8s\346\234\215\345\212\241/6-k8s\350\265\204\346\272\220\344\271\213Pod\345\255\230\345\202\250.md"
@@ -0,0 +1,338 @@
+# Pod storage
+
+With the prerequisite knowledge covered, let's now learn about storage. Take a database as an example: when the Pod disappears, everything it stored naturally disappears with it. What we really want is for the important data inside to survive even after the Pod is gone. How do we achieve that?
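+As a preview, every volume setup in a Pod manifest has two halves: `spec.volumes` declares the volume at the Pod level, and each container's `volumeMounts` mounts it at a path. A minimal sketch (the names and the `emptyDir` type here are placeholders; the concrete volume types are covered below):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: volume-sketch
+spec:
+  #1. declare the volume on the Pod
+  volumes:
+  - name: data
+    emptyDir: {}
+  containers:
+  - name: app
+    image: nginx:1.17.1
+    volumeMounts:
+    #2. mount it inside the container; the name must match the volume above
+    - name: data
+      mountPath: /data
+```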
+
+In what follows we use a wordpress site as the running example; feel free to read up on wordpress yourself
+
+docker repository: https://hub.docker.com/_/wordpress
+
+## Deploy wordpress without a volume
+
+vim 1-wp-db.yaml
+
+```yaml
+#deploy the database
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mysql-pod
+spec:
+  containers:
+  #name of the container to run
+  - name: mysql-8
+    #the image to use
+    image: mysql:8.0.12
+    env:
+    - name: "MYSQL_ROOT_PASSWORD"
+      value: "1"
+    - name: "MYSQL_DATABASE"
+      value: "wp"
+    - name: "MYSQL_USER"
+      value: "wp"
+    - name: "MYSQL_PASSWORD"
+      value: "1"
+```
+
+vim 2-wp.yaml
+
+```yaml
+#deploy wordpress
+apiVersion: v1
+kind: Pod
+metadata:
+  name: wp-pod
+spec:
+  containers:
+  - name: wp
+    image: wordpress:php8.3
+    ports:
+    - containerPort: 80
+      hostPort: 80
+      protocol: TCP
+    env:
+    - name: "WORDPRESS_DB_HOST"
+      #this IP must match the IP of the database pod deployed in step one
+      value: "10.100.1.12"
+    - name: "WORDPRESS_DB_USER"
+      value: "wp"
+    - name: "WORDPRESS_DB_PASSWORD"
+      value: "1"
+    - name: "WORDPRESS_DB_NAME"
+      value: "wp"
+    - name: "WORDPRESS_TABLE_PREFIX"
+      value: "wp"
+```
+
+Once deployed, open the site. In my case the pod runs on the worker32 node, so I visit http://10.0.0.32, then write a post and publish it
+
+![image-20240924161228101](./images/image-20240924161228101.png)
+
+Now let's delete the pods and recreate them. What will happen?
+
+```shell
+#delete
+[root@master /data/k8s/study/wp-volume]# kubectl delete -f .
+pod "mysql-pod" deleted
+pod "wp-pod" deleted
+
+#recreate; before recreating, update the database IP in 2-wp.yaml
+[root@master /data/k8s/study/wp-volume]# kubectl apply -f .
+pod/mysql-pod created
+pod/wp-pod created
+
+#view
+[root@master /data/k8s/study/wp-volume]# kubectl get pod -o wide
+NAME        READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
+mysql-pod   1/1     Running   0          14s   10.100.1.13   worker31   
+wp-pod      1/1     Running   0          14s   10.100.2.17   worker32   
+[root@master /data/k8s/study/wp-volume]#
+
+```
+
+After recreating the pods, we find that we have to set up the site users and so on all over again, which means the previous data is gone. How do we solve this? This is where storage comes in
+
+## Introduction to K8S storage
+
+As mentioned earlier, a container's lifetime can be very short: containers are created and destroyed frequently, and when a container is destroyed, the data saved inside it is cleared as well. In some situations that is not a result users are happy to see. To persist container data, kubernetes introduces the concept of the Volume.
+
+A Volume is a shared directory in a Pod that can be accessed by multiple containers. It is defined on the Pod and then mounted by the Pod's containers at specific paths. Through Volumes, kubernetes implements data sharing between different containers of the same Pod as well as persistent data storage. A Volume's lifecycle is not tied to any single container in the Pod: when a container terminates or restarts, the data in the Volume is not lost.
+
+kubernetes Volumes support many types; the most common are:
+
+- Basic storage: EmptyDir, HostPath, NFS
+- Advanced storage: PV, PVC
+- Configuration storage: ConfigMap, Secret
+
+## NFS storage hands-on example
+
+Here we first use NFS to rework the example above
+
+### Install NFS
+
+On the master30 node
+
+```shell
+#install the nfs service
+apt -y install nfs-kernel-server
+
+#create the shared storage directory
+mkdir -p /data/nfs/wordpress
+
+#add the export to the nfs configuration file
+echo "/data/nfs/wordpress *(rw,no_root_squash)" >> /etc/exports
+
+#enable and start the nfs service
+systemctl enable --now nfs-server
+systemctl reload nfs-server.service
+#check the nfs exports
+exportfs
+
+#create the storage directory for wp
+mkdir -p /data/nfs/wordpress/wp
+#create the storage directory for db
+mkdir -p /data/nfs/wordpress/db
+#adjust permissions
+chmod 777 -R /data/nfs/wordpress
+```
+
+On every worker node
+
+```shell
+#note: a pure NFS client only needs nfs-common; installing nfs-kernel-server pulls it in as a dependency
+apt -y install nfs-kernel-server
+
+#test mounting
+root@worker32:~# mount -t nfs 10.0.0.30:/data/nfs/wordpress/wp ~
+#check
+root@worker32:~# df -h
+Filesystem                         Size  Used Avail Use% Mounted on
+...
+10.0.0.30:/data/nfs/wordpress/wp    24G  9.6G   13G  43% /root
+
+#unmount
+root@worker32:~# umount /root
+```
+
+### Rework the wordpress storage
+
+**Documentation:**
+
+Use `kubectl explain pod.spec.volumes` and `kubectl explain pod.spec.containers.volumeMounts` to see the full details of the documentation
+
+#### Rework the database storage
+
+```yaml
+#deploy the database
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mysql-pod
+spec:
+  #define the volume
+  volumes:
+  - name: mysql-nfs
+    nfs:
+      path: /data/nfs/wordpress/db
+      server: master30
+      readOnly: false
+  containers:
+  - name: mysql-8
+    image: mysql:8.0.12
+    env:
+    - name: "MYSQL_ROOT_PASSWORD"
+      value: "1"
+    - name: "MYSQL_DATABASE"
+      value: "wp"
+    - name: "MYSQL_USER"
+      value: "wp"
+    - name: "MYSQL_PASSWORD"
+      value: "1"
+    #mount the volume
+    volumeMounts:
+    #name of the volume defined above
+    - name: mysql-nfs
+      #mount path, i.e. the storage path inside the container
+      mountPath: /var/lib/mysql/
+```
+
+Check the shared storage: the shared directory already contains the database files
+
+```shell
+[root@master /data/k8s/study/wp-volume]# ll /data/nfs/wordpress/db/
+total 179212
+drwxrwxrwx 6 lxd  docker     4096 Sep 24 16:43 ./
+drwxrwxrwx 4 root root       4096 Sep 24 16:27 ../
+-rw-r----- 1 lxd  docker       56 Sep 24 16:43 auto.cnf
+-rw-r----- 1 lxd  docker  3072527 Sep 24 16:43 binlog.000001
+-rw-r----- 1 lxd  docker      155 Sep 24 16:43 binlog.000002
+-rw-r----- 1 lxd  docker       32 Sep 24 16:43 binlog.index
+-rw------- 1 lxd  docker     1676 Sep 24 16:43 ca-key.pem
+-rw-r--r-- 1 lxd  docker     1112 Sep 24 16:43 ca.pem
+-rw-r--r-- 1 lxd  docker     1112 Sep 24 16:43 client-cert.pem
+-rw------- 1 lxd  docker     1680 Sep 24 16:43 client-key.pem
+-rw-r----- 1 lxd  docker     6994 Sep 24 16:43 ib_buffer_pool
+-rw-r----- 1 lxd  docker 12582912 Sep 24 16:43 ibdata1
+-rw-r----- 1 lxd  docker 50331648 Sep 24 16:43 ib_logfile0
+-rw-r----- 1 lxd  docker 50331648 Sep 24 16:42 ib_logfile1
+-rw-r----- 1 lxd  docker 12582912 Sep 24 16:43 ibtmp1
+drwxr-x--- 2 lxd  docker     4096 Sep 24 16:43 mysql/
+-rw-r----- 1 lxd  docker 31457280 Sep 24 16:43 mysql.ibd
+drwxr-x--- 2 lxd  docker     4096 Sep 24 16:43 performance_schema/
+-rw------- 1 lxd  docker     1676 Sep 24 16:43 private_key.pem
+-rw-r--r-- 1 lxd  docker      452 Sep 24 16:43 
public_key.pem
+-rw-r--r-- 1 lxd  docker     1112 Sep 24 16:43 server-cert.pem
+-rw------- 1 lxd  docker     1680 Sep 24 16:43 server-key.pem
+drwxr-x--- 2 lxd  docker     4096 Sep 24 16:43 sys/
+-rw-r----- 1 lxd  docker 12582912 Sep 24 16:43 undo_001
+-rw-r----- 1 lxd  docker 10485760 Sep 24 16:43 undo_002
+drwxr-x--- 2 lxd  docker     4096 Sep 24 16:43 wp/
+```
+
+#### Rework wordpress
+
+```yaml
+#deploy wordpress
+apiVersion: v1
+kind: Pod
+metadata:
+  name: wp-pod
+spec:
+  volumes:
+  - name: wp-nfs
+    nfs:
+      path: /data/nfs/wordpress/wp
+      server: master30
+      readOnly: false
+  containers:
+  - name: wp
+    image: wordpress:php8.3
+    ports:
+    - containerPort: 80
+      hostPort: 80
+      protocol: TCP
+    env:
+    - name: "WORDPRESS_DB_HOST"
+      #this IP must match the IP of the database pod deployed in step one
+      value: "10.100.1.14"
+    - name: "WORDPRESS_DB_USER"
+      value: "wp"
+    - name: "WORDPRESS_DB_PASSWORD"
+      value: "1"
+    - name: "WORDPRESS_DB_NAME"
+      value: "wp"
+    - name: "WORDPRESS_TABLE_PREFIX"
+      value: "wp"
+    volumeMounts:
+    - name: wp-nfs
+      mountPath: /var/www/html
+```
+
+You can test for yourself whether the data survives deleting and recreating the pods
+
+## emptyDir hands-on example
+
+View the documentation (note that volumes sit at the Pod spec level, not under containers):
+
+kubectl explain po.spec.volumes.emptyDir
+
+### What is emptyDir?
+
+An EmptyDir is created when the Pod is assigned to a Node. It starts out empty, and there is no need to specify a corresponding directory or file on the host, because kubernetes allocates a directory automatically. When the Pod is destroyed, the data in the EmptyDir is permanently deleted as well
+
+![image-20240924170448280](./images/image-20240924170448280.png)
+
+### Collecting nginx logs with filebeat
+
+The example below will not run in your environment (the images live in a private registry); the point is just to understand the idea, it is not important
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir
+  labels:
+    name: emptydir-v1
+spec:
+  volumes:
+  - name: emptydir-volumes
+    emptyDir: {}
+  containers:
+  #container names must be valid DNS-1123 labels, so "nginx-alpine" rather than "nginx_alpine"
+  - name: nginx-alpine
+    image: harbor.huangsir.com/middle/nginx_alpine:v1
+    volumeMounts:
+    - name: emptydir-volumes
+      #mount the log directory so filebeat can read the files written into it
+      mountPath: /var/log/nginx
+  - name: filebeat
+    image: harbor.huangsir.com/middle/filebeat:7.17.21
+    volumeMounts:
+    - name: emptydir-volumes
+      mountPath: /data/log
+
+```
+
+## hostPath hands-on example
+
+Data in an EmptyDir is not persisted; it is destroyed when the Pod ends. If you want a simple way to persist data onto the host, you can choose HostPath.
+
+HostPath mounts an actual directory on the Node host into the Pod for the containers to use. With this design, even if the Pod is destroyed, the data can still remain on the Node host.
+
+### Sharing the nginx web root
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: nginx-alpine-v1
+spec:
+  volumes:
+  - name: path
+    #volume type: hostPath
+    hostPath:
+      path: /data/nginx
+      #create the directory on the node if it does not already exist
+      type: DirectoryOrCreate
+  containers:
+  - name: nginx-test1
+    image: nginx:latest
+    volumeMounts:
+    - name: path
+      #nginx serves static files from /usr/share/nginx/html
+      mountPath: /usr/share/nginx/html
+```
+
diff --git "a/k8s\346\234\215\345\212\241/images/image-20240919113345994.png" "b/k8s\346\234\215\345\212\241/images/image-20240919113345994.png"
new file mode 100644
index 0000000000000000000000000000000000000000..b1fe568e577e2bc995f7da5f573dacfffbf1de53
Binary files /dev/null and "b/k8s\346\234\215\345\212\241/images/image-20240919113345994.png" differ
diff --git "a/k8s\346\234\215\345\212\241/images/image-20240919113615210.png" "b/k8s\346\234\215\345\212\241/images/image-20240919113615210.png"
new file mode 100644
index 0000000000000000000000000000000000000000..b57b15a32595b8f0f22fdbacc14882c907a2990f
Binary files /dev/null and "b/k8s\346\234\215\345\212\241/images/image-20240919113615210.png" differ
diff --git "a/k8s\346\234\215\345\212\241/images/image-20240923112532398.png" 
"b/k8s\346\234\215\345\212\241/images/image-20240923112532398.png" new file mode 100644 index 0000000000000000000000000000000000000000..3be7dd73aece036e1459e29b9cdff7a830a53cf5 Binary files /dev/null and "b/k8s\346\234\215\345\212\241/images/image-20240923112532398.png" differ diff --git "a/k8s\346\234\215\345\212\241/images/image-20240923112534516.png" "b/k8s\346\234\215\345\212\241/images/image-20240923112534516.png" new file mode 100644 index 0000000000000000000000000000000000000000..3be7dd73aece036e1459e29b9cdff7a830a53cf5 Binary files /dev/null and "b/k8s\346\234\215\345\212\241/images/image-20240923112534516.png" differ diff --git "a/k8s\346\234\215\345\212\241/images/image-20240923113650795.png" "b/k8s\346\234\215\345\212\241/images/image-20240923113650795.png" new file mode 100644 index 0000000000000000000000000000000000000000..d63d9ccdbd355f9e4db610b8954fb8f4f454a4f0 Binary files /dev/null and "b/k8s\346\234\215\345\212\241/images/image-20240923113650795.png" differ diff --git "a/k8s\346\234\215\345\212\241/images/image-20240923114428795.png" "b/k8s\346\234\215\345\212\241/images/image-20240923114428795.png" new file mode 100644 index 0000000000000000000000000000000000000000..7b14d81555090b3049b957286d9687fb9636f503 Binary files /dev/null and "b/k8s\346\234\215\345\212\241/images/image-20240923114428795.png" differ diff --git "a/k8s\346\234\215\345\212\241/images/image-20240924151229418.png" "b/k8s\346\234\215\345\212\241/images/image-20240924151229418.png" new file mode 100644 index 0000000000000000000000000000000000000000..46d8f7bb3bf48cae795cb8acc995ca6055e54b67 Binary files /dev/null and "b/k8s\346\234\215\345\212\241/images/image-20240924151229418.png" differ diff --git "a/k8s\346\234\215\345\212\241/images/image-20240924161228101.png" "b/k8s\346\234\215\345\212\241/images/image-20240924161228101.png" new file mode 100644 index 0000000000000000000000000000000000000000..805aee34ba71a52fb8dd417457ef0f8137acd3ef Binary files /dev/null and 
"b/k8s\346\234\215\345\212\241/images/image-20240924161228101.png" differ diff --git "a/k8s\346\234\215\345\212\241/images/image-20240924170448280.png" "b/k8s\346\234\215\345\212\241/images/image-20240924170448280.png" new file mode 100644 index 0000000000000000000000000000000000000000..ca4515c84ee4f2f9ecceb2a0d2ed4267abc0f1dc Binary files /dev/null and "b/k8s\346\234\215\345\212\241/images/image-20240924170448280.png" differ