diff --git a/docs/en/docs/Kubernetes/Kubernetes.md b/docs/en/docs/Kubernetes/Kubernetes.md
new file mode 100644
index 0000000000000000000000000000000000000000..2667a3307744c66de7cb7f4d2d20548bc0604ab2
--- /dev/null
+++ b/docs/en/docs/Kubernetes/Kubernetes.md
@@ -0,0 +1 @@
+# Kubernetes Cluster Deployment Guide
**Statement: This document applies only to experiment and learning environments and is not intended for commercial environments.**
This document describes how to deploy a Kubernetes cluster in binary mode on openEuler.
Note: All operations in this document are performed with `root` permissions.
## Cluster Status
The cluster used in this document is as follows:
- Cluster structure: six VMs running `openEuler 20.03 LTS SP2`, with three master nodes and three worker nodes.
- Physical machine: an `x86/ARM` server running `openEuler 20.03 LTS SP2`.
diff --git a/docs/en/docs/Kubernetes/deploying-a-Kubernetes-cluster.md b/docs/en/docs/Kubernetes/deploying-a-Kubernetes-cluster.md
new file mode 100644
index 0000000000000000000000000000000000000000..6a15d10e217bea6fcaaa6ea05e0d8aed46b54d6e
--- /dev/null
+++ b/docs/en/docs/Kubernetes/deploying-a-Kubernetes-cluster.md
@@ -0,0 +1 @@
+# Deploying a Kubernetes Cluster
This section describes how to deploy a Kubernetes cluster.
## Environment
Based on the preceding VM installation and deployment, the following VM list is obtained:
| HostName | MAC | IPv4 |
| ---------- | ----------------- | ------------------ |
| k8smaster0 | 52:54:00:00:00:80 | 192.168.122.154/24 |
| k8smaster1 | 52:54:00:00:00:81 | 192.168.122.155/24 |
| k8smaster2 | 52:54:00:00:00:82 | 192.168.122.156/24 |
| k8snode1 | 52:54:00:00:00:83 | 192.168.122.157/24 |
| k8snode2 | 52:54:00:00:00:84 | 192.168.122.158/24 |
| k8snode3 | 52:54:00:00:00:85 | 192.168.122.159/24 |
\ No newline at end of file
diff --git a/docs/en/docs/Kubernetes/deploying-a-node-component.md b/docs/en/docs/Kubernetes/deploying-a-node-component.md
new file mode 100644
index 0000000000000000000000000000000000000000..a8de522f08aa006e3ecaca0759d8abc917832cab
--- /dev/null
+++ b/docs/en/docs/Kubernetes/deploying-a-node-component.md
@@ -0,0 +1 @@
+# Deploying a Node Component
This section uses the `k8snode1` node as an example.
## Environment Preparation
```bash
# A proxy needs to be configured for the intranet.
$ dnf install -y docker iSulad conntrack-tools socat containernetworking-plugins
$ swapoff -a
$ mkdir -p /etc/kubernetes/pki/
$ mkdir -p /etc/cni/net.d
$ mkdir -p /opt/cni
# Delete the default kubeconfig file.
$ rm /etc/kubernetes/kubelet.kubeconfig
## Use iSulad as the runtime ########
# Configure iSulad.
cat /etc/isulad/daemon.json
{
    "registry-mirrors": [
        "docker.io"
    ],
    "insecure-registries": [
        "k8s.gcr.io",
        "quay.io"
    ],
    "pod-sandbox-image": "k8s.gcr.io/pause:3.2", # pause image
    "network-plugin": "cni", # If this parameter is left blank, the CNI network plug-in is disabled and the two paths below take no effect. Restart iSulad after installing the plug-in.
    "cni-bin-dir": "/usr/libexec/cni/",
    "cni-conf-dir": "/etc/cni/net.d",
}
# Add the proxy to the iSulad environment variable and download the image.
cat /usr/lib/systemd/system/isulad.service
[Service]
Type=notify
Environment="HTTP_PROXY=http://name:password@proxy:8080"
Environment="HTTPS_PROXY=http://name:password@proxy:8080"
# Restart the iSulad and set it to start automatically upon power-on.
systemctl daemon-reload
systemctl restart isulad
## If Docker is used as the runtime, run the following command: ########
$ dnf install -y docker
# If a proxy environment is required, configure a proxy for Docker, add the configuration file http-proxy.conf, and edit the following content. Replace name, password, and proxy-addr with the actual values.
$ cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://name:password@proxy-addr:8080"
$ systemctl daemon-reload
$ systemctl restart docker
```
## Creating kubeconfig Configuration Files
Perform the following operations on each node to create a configuration file:
```bash
$ kubectl config set-cluster openeuler-k8s \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.122.154:6443 \
    --kubeconfig=k8snode1.kubeconfig

$ kubectl config set-credentials system:node:k8snode1 \
    --client-certificate=/etc/kubernetes/pki/k8snode1.pem \
    --client-key=/etc/kubernetes/pki/k8snode1-key.pem \
    --embed-certs=true \
    --kubeconfig=k8snode1.kubeconfig

$ kubectl config set-context default \
    --cluster=openeuler-k8s \
    --user=system:node:k8snode1 \
    --kubeconfig=k8snode1.kubeconfig

$ kubectl config use-context default --kubeconfig=k8snode1.kubeconfig
```
**Note: Change k8snode1 to the corresponding node name.**
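These commands differ across nodes only in the node name, so one hedged way to avoid editing them by hand is to derive the name from `hostname`. The sketch below assumes each node's host name matches its certificate file names:
```bash
# Minimal sketch: generate the kubeconfig for the current node.
NODE=$(hostname)   # for example, k8snode1
kubectl config set-cluster openeuler-k8s \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://192.168.122.154:6443 \
    --kubeconfig="${NODE}.kubeconfig"
kubectl config set-credentials "system:node:${NODE}" \
    --client-certificate="/etc/kubernetes/pki/${NODE}.pem" \
    --client-key="/etc/kubernetes/pki/${NODE}-key.pem" \
    --embed-certs=true \
    --kubeconfig="${NODE}.kubeconfig"
kubectl config set-context default \
    --cluster=openeuler-k8s \
    --user="system:node:${NODE}" \
    --kubeconfig="${NODE}.kubeconfig"
kubectl config use-context default --kubeconfig="${NODE}.kubeconfig"
```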
## Copying the Certificate
Similar to the control plane, all certificates, keys, and related configurations are stored in the `/etc/kubernetes/pki/` directory.
```bash
$ ls /etc/kubernetes/pki/
ca.pem k8snode1.kubeconfig kubelet_config.yaml kube-proxy-key.pem kube-proxy.pem
k8snode1-key.pem k8snode1.pem kube_proxy_config.yaml kube-proxy.kubeconfig
```
## CNI Network Configuration
containernetworking-plugins is used as the CNI plug-in for kubelet. Plug-ins such as Calico and Flannel can be introduced later to enhance the network capabilities of the cluster.
```bash
# Bridge Network Configuration
$ cat /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "gateway": "10.244.0.1"
    },
    "dns": {
        "nameservers": [
            "10.244.0.1"
        ]
    }
}
# Loopback Network Configuration
$ cat /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
}
```
## Deploying the kubelet Service
### Configuration File on Which Kubelet Depends
```bash
$ cat /etc/kubernetes/pki/kubelet_config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
clusterDNS:
- 10.32.0.10
clusterDomain: cluster.local
runtimeRequestTimeout: "15m"
tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem"
tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem"
```
**Note: The cluster DNS IP address is 10.32.0.10. It must fall within the service-cluster-ip-range configured for kube-apiserver and match the cluster IP of the kube-dns service created later.**
### Writing the systemd Configuration File
```bash
$ cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet \
    --config=/etc/kubernetes/pki/kubelet_config.yaml \
    --network-plugin=cni \
    --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
    --kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
    --register-node=true \
    --hostname-override=k8snode1 \
    --cni-bin-dir="/usr/libexec/cni/" \
    --v=2

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
```
**Note: If iSulad is used as the runtime, add the following configuration:**
```bash
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/isulad.sock \
```
## Deploying kube-proxy
### Configuration File on Which kube-proxy Depends
```bash
cat /etc/kubernetes/pki/kube_proxy_config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16
mode: "iptables"
```
### Writing the systemd Configuration File
```bash
$ cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    --config=/etc/kubernetes/pki/kube_proxy_config.yaml \
    --hostname-override=k8snode1 \
    $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
## Starting a Component Service
```bash
$ systemctl enable kubelet kube-proxy
$ systemctl start kubelet kube-proxy
```
Deploy the other nodes in the same way.
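If the current node has password-free SSH access to the other nodes, a loop similar to the following can distribute the per-node files and start the services. This is only a sketch; the file list and paths are assumptions based on the preceding steps:
```bash
# Sketch: copy per-node certificates and kubeconfig files, then start the services.
for node in k8snode2 k8snode3; do
    scp "${node}.kubeconfig" "${node}.pem" "${node}-key.pem" root@"${node}":/etc/kubernetes/pki/
    ssh root@"${node}" "systemctl enable --now kubelet kube-proxy"
done
```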
## Verifying the Cluster Status
Wait for several minutes and run the following command to check the node status:
```bash
$ kubectl get nodes --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
NAME       STATUS   ROLES    AGE   VERSION
k8snode1   Ready    <none>   17h   v1.20.2
k8snode2   Ready    <none>   19m   v1.20.2
k8snode3   Ready    <none>   12m   v1.20.2
```
## Deploying coredns
coredns can be deployed on a node or master node. In this document, coredns is deployed on the `k8snode1` node.
### Writing the coredns Configuration File
```bash
$ cat /etc/kubernetes/pki/dns/Corefile
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        endpoint https://192.168.122.154:6443
        tls /etc/kubernetes/pki/ca.pem /etc/kubernetes/pki/admin-key.pem /etc/kubernetes/pki/admin.pem
        kubeconfig /etc/kubernetes/pki/admin.kubeconfig default
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}
```
Note:
- Listen to port 53.
- Configure the Kubernetes plug-in, including the certificate and the URL of kube api.
### Preparing the systemd Service File
```bash
cat /usr/lib/systemd/system/coredns.service
[Unit]
Description=Kubernetes Core DNS server
Documentation=https://github.com/coredns/coredns
After=network.target

[Service]
ExecStart=bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile"

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
### Starting the Service
```bash
$ systemctl enable coredns
$ systemctl start coredns
```
### Creating the Service Object of coredns
```bash
$ cat coredns_server.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
```
### Creating the Endpoint Object of coredns
```bash
$ cat coredns_ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-dns
  namespace: kube-system
subsets:
  - addresses:
      - ip: 192.168.122.157
    ports:
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: dns
        port: 53
        protocol: UDP
      - name: metrics
        port: 9153
        protocol: TCP
```
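Both manifests still need to be submitted to the API server. Assuming the `admin.kubeconfig` file generated earlier is used, the commands would be:
```bash
$ kubectl apply -f coredns_server.yaml --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
$ kubectl apply -f coredns_ep.yaml --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
```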
### Confirming the coredns Service
```bash
# View the service object.
$ kubectl get service -n kube-system kube-dns
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.32.0.10   <none>        53/UDP,53/TCP,9153/TCP   51m
# View the endpoint object.
$ kubectl get endpoints -n kube-system kube-dns
NAME ENDPOINTS AGE
kube-dns 192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153 52m
```
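As an extra check, you can query CoreDNS directly on the node where it runs. The following sketch assumes the `dig` tool (from the bind-utils package) is available on the host:
```bash
$ dnf install -y bind-utils
$ dig +short kubernetes.default.svc.cluster.local @192.168.122.157
```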
\ No newline at end of file
diff --git a/docs/en/docs/Kubernetes/deploying-control-plane-components.md b/docs/en/docs/Kubernetes/deploying-control-plane-components.md
new file mode 100644
index 0000000000000000000000000000000000000000..a9b9bb2faff7c208fe6fb3fb1f02616d5c2f7f18
--- /dev/null
+++ b/docs/en/docs/Kubernetes/deploying-control-plane-components.md
@@ -0,0 +1,357 @@
+# Deploying Components on the Control Plane
+
+## Preparing the kubeconfig File for All Components
+
+### kube-proxy
+
+```bash
+kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.122.154:6443 --kubeconfig=kube-proxy.kubeconfig
+kubectl config set-credentials system:kube-proxy --client-certificate=/etc/kubernetes/pki/kube-proxy.pem --client-key=/etc/kubernetes/pki/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
+kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-proxy --kubeconfig=kube-proxy.kubeconfig
+kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
+```
+
+### kube-controller-manager
+
+```bash
+kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-controller-manager.kubeconfig
+kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem --client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
+kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
+kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
+```
+
+### kube-scheduler
+
+```bash
+kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-scheduler.kubeconfig
+kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/kube-scheduler.pem --client-key=/etc/kubernetes/pki/kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
+kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
+kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
+```
+
+### admin
+
+```bash
+kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=admin.kubeconfig
+kubectl config set-credentials admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=admin.kubeconfig
+kubectl config set-context default --cluster=openeuler-k8s --user=admin --kubeconfig=admin.kubeconfig
+kubectl config use-context default --kubeconfig=admin.kubeconfig
+```
+
+### Obtaining the kubeconfig Configuration Files
+
+```bash
+admin.kubeconfig kube-proxy.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig
+```
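+
+To spot-check any of these files, print the parsed content (the embedded certificate data is omitted from the output), for example:
+
+```bash
+kubectl config view --kubeconfig=admin.kubeconfig
+```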
+
+## Generating the Key Provider Configuration
+
+When kube-apiserver starts, an encryption configuration must be provided through `--encryption-provider-config=/etc/kubernetes/pki/encryption-config.yaml`. In this document, the encryption key is generated from /dev/urandom as follows:
+
+```bash
+$ cat generate.bash
+#!/bin/bash
+
+ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
+
+cat > encryption-config.yaml <<EOF
+kind: EncryptionConfig
+apiVersion: v1
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: ${ENCRYPTION_KEY}
+      - identity: {}
+EOF
+```
diff --git a/docs/en/docs/Kubernetes/preparing-VMs.md b/docs/en/docs/Kubernetes/preparing-VMs.md
new file mode 100644
--- /dev/null
+++ b/docs/en/docs/Kubernetes/preparing-VMs.md
+# Preparing VMs
+
+This chapter describes how to prepare the VMs for the cluster. If your VMs are ready, skip this chapter.
+
+## Preparing the VM Configuration File
+
+A VM configuration file is required to create a VM. Assume that the configuration file is master.xml and the host name of the VM is k8smaster0. The reference configuration is as follows:
+
+```bash
+$ cat master.xml
+<domain type='kvm'>
+  <name>k8smaster0</name>
+  <memory unit='GiB'>8</memory>
+  <vcpu>8</vcpu>
+  <os>
+    <type arch='aarch64' machine='virt'>hvm</type>
+    <loader readonly='yes' type='pflash'>/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw</loader>
+    <nvram>/var/lib/libvirt/qemu/nvram/k8smaster0.fd</nvram>
+  </os>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>restart</on_crash>
+  <devices>
+    <emulator>/usr/libexec/qemu-kvm</emulator>
+    <disk type='file' device='disk'>
+      <source file='/mnt/vm/images/master0.img'/>
+      <target dev='vda' bus='virtio'/>
+    </disk>
+    <interface type='network'>
+      <mac address='52:54:00:00:00:80'/>
+      <source network='default'/>
+    </interface>
+    <!-- Other device elements (console, graphics, and so on) are omitted here. -->
+  </devices>
+</domain>
+```
+
+The VM configuration must be unique. Therefore, modify the following items for each new VM to ensure its uniqueness:
+
+- name: host name of the VM. You are advised to use lowercase letters. In this example, the value is `k8smaster0`.
+- nvram: handle file path of the NVRAM, which must be globally unique. In this example, the value is `/var/lib/libvirt/qemu/nvram/k8smaster0.fd`.
+- disk source file: VM disk file path. In this example, the value is `/mnt/vm/images/master0.img`.
+- mac address of the interface: MAC address of the interface. In this example, the value is `52:54:00:00:00:80`.
+
+## Installing a VM
+
+1. Create and start a VM.
+
+ ```shell
+ virsh define master.xml
+ virsh start k8smaster0
+ ```
+
+2. Obtain the VNC port number of the VM.
+
+ ```shell
+ virsh vncdisplay k8smaster0
+ ```
+
+3. Use a VM connection tool, such as VNC Viewer, to remotely connect to the VM and complete the OS installation as prompted.
+
+4. Set the host name of the VM, for example, k8smaster0.
+
+ ```shell
+ hostnamectl set-hostname k8smaster0
+ ```
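+
+5. (Optional) To let the nodes resolve one another by host name, append the cluster hosts to `/etc/hosts` on each VM. The entries below are an assumption based on the VM list used in this guide:
+
+    ```shell
+    # Append to /etc/hosts on every node (based on the VM list in this guide):
+    192.168.122.154 k8smaster0
+    192.168.122.155 k8smaster1
+    192.168.122.156 k8smaster2
+    192.168.122.157 k8snode1
+    192.168.122.158 k8snode2
+    192.168.122.159 k8snode3
+    ```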
diff --git a/docs/en/docs/Kubernetes/preparing-certificates.md b/docs/en/docs/Kubernetes/preparing-certificates.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb5d459e5ce2308b00a39647daa41d2d77834d68
--- /dev/null
+++ b/docs/en/docs/Kubernetes/preparing-certificates.md
@@ -0,0 +1,413 @@
+
+# Preparing Certificates
+
+**Statement: The certificate used in this document is self-signed and cannot be used in a commercial environment.**
+
+Before deploying a cluster, you need to generate the certificates required for communication between cluster components. This document uses the open-source tool CFSSL to illustrate the certificate configuration and the relationships between the certificates of cluster components. You can select another tool, such as OpenSSL, based on the site requirements.
+
+## Building and Installing CFSSL
+
+The following commands for building and installing CFSSL are for reference (network access to the CFSSL repository is required; if you are behind a proxy, configure it first):
+
+```bash
+wget --no-check-certificate https://github.com/cloudflare/cfssl/archive/v1.5.0.tar.gz
+tar -zxf v1.5.0.tar.gz
+cd cfssl-1.5.0/
+make -j6
+cp bin/* /usr/local/bin/
+```
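+
+A quick way to confirm that the binary is installed and on the PATH:
+
+```bash
+cfssl version
+```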
+
+## Generating a Root Certificate
+
+Write the CA configuration file, for example, ca-config.json:
+
+```bash
+$ cat ca-config.json | jq
+{
+ "signing": {
+ "default": {
+ "expiry": "8760h"
+ },
+ "profiles": {
+ "kubernetes": {
+ "usages": [
+ "signing",
+ "key encipherment",
+ "server auth",
+ "client auth"
+ ],
+ "expiry": "8760h"
+ }
+ }
+ }
+}
+```
+
+Write a CA CSR file, for example, ca-csr.json:
+
+```bash
+$ cat ca-csr.json | jq
+{
+ "CN": "Kubernetes",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "openEuler",
+ "OU": "WWW",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+Generate the CA certificate and key:
+
+```bash
+cfssl gencert -initca ca-csr.json | cfssljson -bare ca
+```
+
+The following certificates are obtained:
+
+```bash
+ca.csr ca-key.pem ca.pem
+```
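+
+Optionally, inspect the generated CA certificate to confirm its subject and validity period (a standard OpenSSL invocation, not specific to this guide):
+
+```bash
+openssl x509 -in ca.pem -noout -subject -dates
+```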
+
+## Generating the admin Account Certificate
+
+admin is an account used by Kubernetes for system management. Write the CSR configuration of the admin account, for example, admin-csr.json:
+
+```bash
+cat admin-csr.json | jq
+{
+ "CN": "admin",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "system:masters",
+ "OU": "Containerum",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+Generate a certificate:
+
+```bash
+cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
+```
+
+The result is as follows:
+
+```bash
+admin.csr admin-key.pem admin.pem
+```
+
+## Generating a service-account Certificate
+
+Write the CSR configuration file of the service-account account, for example, service-account-csr.json:
+
+```bash
+cat service-account-csr.json | jq
+{
+ "CN": "service-accounts",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "Kubernetes",
+ "OU": "openEuler k8s install",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+Generate a certificate:
+
+```bash
+cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes service-account-csr.json | cfssljson -bare service-account
+```
+
+The result is as follows:
+
+```bash
+service-account.csr service-account-key.pem service-account.pem
+```
+
+## Generating the kube-controller-manager Certificate
+
+Write the CSR configuration of kube-controller-manager:
+
+```bash
+{
+ "CN": "system:kube-controller-manager",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "system:kube-controller-manager",
+ "OU": "openEuler k8s kcm",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+Generate a certificate:
+
+```bash
+cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
+```
+
+The result is as follows:
+
+```bash
+kube-controller-manager.csr kube-controller-manager-key.pem kube-controller-manager.pem
+```
+
+## Generating the kube-proxy Certificate
+
+Write the CSR configuration of kube-proxy:
+
+```bash
+{
+ "CN": "system:kube-proxy",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "system:node-proxier",
+ "OU": "openEuler k8s kube proxy",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+Generate a certificate:
+
+```bash
+cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
+```
+
+The result is as follows:
+
+```bash
+kube-proxy.csr kube-proxy-key.pem kube-proxy.pem
+```
+
+## Generating the kube-scheduler Certificate
+
+Write the CSR configuration of kube-scheduler:
+
+```bash
+{
+ "CN": "system:kube-scheduler",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "system:kube-scheduler",
+ "OU": "openEuler k8s kube scheduler",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+Generate a certificate:
+
+```bash
+cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
+```
+
+The result is as follows:
+
+```bash
+kube-scheduler.csr kube-scheduler-key.pem kube-scheduler.pem
+```
+
+## Generating the kubelet Certificate
+
+The certificates involve the host name and IP address of the server where each kubelet runs, so the configuration differs from node to node. A script is therefore used to generate them:
+
+```bash
+$ cat node_csr_gen.bash
+
+#!/bin/bash
+
+nodes=(k8snode1 k8snode2 k8snode3)
+IPs=("192.168.122.157" "192.168.122.158" "192.168.122.159")
+
+for i in "${!nodes[@]}"; do
+
+cat > "${nodes[$i]}-csr.json" <}})
- [Using JDK for Compilation]({{< relref "./docs/ApplicationDev/using-jdk-for-compilation.md" >}})
- [Building an RPM Package]({{< relref "./docs/ApplicationDev/building-an-rpm-package.md" >}})
+- [Kubernetes Cluster Deployment Guide]({{< relref "./docs/Kubernetes/Kubernetes.md" >}})
+  - [Preparing VMs]({{< relref "./docs/Kubernetes/preparing-VMs.md" >}})
+  - [Deploying a Kubernetes Cluster]({{< relref "./docs/Kubernetes/deploying-a-Kubernetes-cluster.md" >}})
+  - [Installing the Kubernetes Software Package]({{< relref "./docs/Kubernetes/installing-the-Kubernetes-software-package.md" >}})
+  - [Preparing Certificates]({{< relref "./docs/Kubernetes/preparing-certificates.md" >}})
+  - [Installing ETCD]({{< relref "./docs/Kubernetes/installing-etcd.md" >}})
+  - [Deploying Components on the Control Plane]({{< relref "./docs/Kubernetes/deploying-control-plane-components.md" >}})
+  - [Deploying a Node Component]({{< relref "./docs/Kubernetes/deploying-a-node-component.md" >}})
+  - [Running the Test Pod]({{< relref "./docs/Kubernetes/running-the-test-pod.md" >}})
- [Third-Party Software Porting Guide]({{< relref "./docs/thirdparty_migration/thidrparty.md" >}})
- [Guide to Porting OpenStack-Train to openEuler]({{< relref "./docs/thirdparty_migration/openstack-train.md" >}})
- [Guide to Porting Kubernetes to openEuler]({{< relref "./docs/thirdparty_migration/k8sinstall.md" >}})
@@ -147,4 +156,4 @@ headless: true
- [DDE]({{< relref "./docs/desktop/dde.md" >}})
- [install-DDE]({{< relref "./docs/desktop/install-DDE.md" >}})
- [DDE User Guide]({{< relref "./docs/desktop/DDE-User-Manual.md" >}})
-- [HA User Guide]({{< relref "./docs/desktop/HAuserguide.md" >}})
\ No newline at end of file
+- [HA User Guide]({{< relref "./docs/desktop/HAuserguide.md" >}})
diff --git a/docs/zh/docs/Kubernetes/Kubernetes.md b/docs/zh/docs/Kubernetes/Kubernetes.md
new file mode 100644
index 0000000000000000000000000000000000000000..330dc1c1a79f34866d294040209c2366309e905d
--- /dev/null
+++ b/docs/zh/docs/Kubernetes/Kubernetes.md
@@ -0,0 +1,14 @@
+# Kubernetes 集群部署指南
+
+**声明:kubernetes软件包目前收入在openEuler的EPOL仓,本文档仅适用于实验和学习环境,不适用于商用环境**
+
+本文档介绍在 openEuler 操作系统上,通过二进制部署 K8S 集群的一个参考方法。
+
+说明:本文所有操作均使用 `root`权限执行。
+
+## 集群状态
+
+本文所使用的集群状态如下:
+
+- 集群结构:6 个 `openEuler 20.03 LTS SP2`系统的虚拟机,3 个 master 和 3 个 node 节点
+- 物理机:`openEuler 20.03 LTS SP2`的 `x86/ARM`服务器
diff --git "a/docs/zh/docs/Kubernetes/\345\207\206\345\244\207\350\231\232\346\213\237\346\234\272.md" "b/docs/zh/docs/Kubernetes/\345\207\206\345\244\207\350\231\232\346\213\237\346\234\272.md"
new file mode 100644
index 0000000000000000000000000000000000000000..f88cf5020d0efd5dcdb9e48578a6c4119a758c41
--- /dev/null
+++ "b/docs/zh/docs/Kubernetes/\345\207\206\345\244\207\350\231\232\346\213\237\346\234\272.md"
@@ -0,0 +1,157 @@
+# 准备虚拟机
+
+
+本章介绍使用 virt manager 安装虚拟机的方法,如果您已经准备好虚拟机,可以跳过本章节。
+
+## 安装依赖工具
+
+安装虚拟机,会依赖相关工具,安装依赖并使能 libvirtd 服务的参考命令如下(如果需要代理,请先配置代理):
+
+```bash
+$ dnf install virt-install virt-manager libvirt-daemon-qemu edk2-aarch64.noarch virt-viewer
+$ systemctl start libvirtd
+$ systemctl enable libvirtd
+```
+
+## 准备虚拟机磁盘文件
+
+```bash
+$ dnf install -y qemu-img
+$ virsh pool-define-as vmPool --type dir --target /mnt/vm/images/
+$ virsh pool-build vmPool
+$ virsh pool-start vmPool
+$ virsh pool-autostart vmPool
+$ virsh vol-create-as --pool vmPool --name master0.img --capacity 200G --allocation 1G --format qcow2
+$ virsh vol-create-as --pool vmPool --name master1.img --capacity 200G --allocation 1G --format qcow2
+$ virsh vol-create-as --pool vmPool --name master2.img --capacity 200G --allocation 1G --format qcow2
+$ virsh vol-create-as --pool vmPool --name node1.img --capacity 300G --allocation 1G --format qcow2
+$ virsh vol-create-as --pool vmPool --name node2.img --capacity 300G --allocation 1G --format qcow2
+$ virsh vol-create-as --pool vmPool --name node3.img --capacity 300G --allocation 1G --format qcow2
+```
+
+## 打开 VNC 防火墙端口
+
+**方法一**
+
+1. 查询端口
+
+ ```shell
+ $ netstat -lntup | grep qemu-kvm
+ ```
+
+2. 打开 VNC 的防火墙端口。假设端口从 5900 开始,参考命令如下:
+
+ ```shell
+ $ firewall-cmd --zone=public --add-port=5900/tcp
+ $ firewall-cmd --zone=public --add-port=5901/tcp
+ $ firewall-cmd --zone=public --add-port=5902/tcp
+ $ firewall-cmd --zone=public --add-port=5903/tcp
+ $ firewall-cmd --zone=public --add-port=5904/tcp
+ $ firewall-cmd --zone=public --add-port=5905/tcp
+ ```
+
+
+
+**方法二**
+
+直接关闭防火墙
+
+```shell
+$ systemctl stop firewalld
+```
+
+
+
+## 准备虚拟机配置文件
+
+创建虚拟机需要虚拟机配置文件。假设配置文件为 master.xml ,以虚拟机 hostname 为 k8smaster0 的节点为例,参考配置如下:
+
+```bash
+$ cat master.xml
+<domain type='kvm'>
+  <name>k8smaster0</name>
+  <memory unit='GiB'>8</memory>
+  <vcpu>8</vcpu>
+  <os>
+    <type arch='aarch64' machine='virt'>hvm</type>
+    <loader readonly='yes' type='pflash'>/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw</loader>
+    <nvram>/var/lib/libvirt/qemu/nvram/k8smaster0.fd</nvram>
+  </os>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>restart</on_crash>
+  <devices>
+    <emulator>/usr/libexec/qemu-kvm</emulator>
+    <disk type='file' device='disk'>
+      <source file='/mnt/vm/images/master0.img'/>
+      <target dev='vda' bus='virtio'/>
+    </disk>
+    <interface type='network'>
+      <mac address='52:54:00:00:00:80'/>
+      <source network='default'/>
+    </interface>
+    <!-- console、graphics 等其余 device 配置此处省略 -->
+  </devices>
+</domain>
+```
+
+由于虚拟机相关配置必须唯一,新增虚拟机需要适配修改如下内容,保证虚拟机的唯一性:
+
+- name:虚拟机 hostname,建议尽量小写。例中为 `k8smaster0`
+- nvram:nvram的句柄文件路径,需要全局唯一。例中为 `/var/lib/libvirt/qemu/nvram/k8smaster0.fd`
+- disk 的 source file:虚拟机磁盘文件路径。例中为 `/mnt/vm/images/master0.img`
+- interface 的 mac address:interface 的 mac 地址。例中为 `52:54:00:00:00:80`
+
+
+
+## 安装虚拟机
+
+1. 创建并启动虚拟机
+
+ ```shell
+ $ virsh define master.xml
+ $ virsh start k8smaster0
+ ```
+
+2. 获取虚拟机的 VNC 端口号
+
+ ```shell
+ $ virsh vncdisplay k8smaster0
+ ```
+
+3. 使用虚拟机链接工具,例如 VNC Viewer 远程链接虚拟机,并根据提示依次选择配置,完成系统安装
+
+4. 设置虚拟机 hostname,例如设置为 k8smaster0
+
+ ```shell
+ $ hostnamectl set-hostname k8smaster0
+ ```
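+
+5. (可选)如需各节点间通过主机名互相访问,可在每台虚拟机的 `/etc/hosts` 中追加如下映射(基于本文的虚拟机列表,仅供参考):
+
+    ```shell
+    # 追加到每台节点的 /etc/hosts:
+    192.168.122.154 k8smaster0
+    192.168.122.155 k8smaster1
+    192.168.122.156 k8smaster2
+    192.168.122.157 k8snode1
+    192.168.122.158 k8snode2
+    192.168.122.159 k8snode3
+    ```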
diff --git "a/docs/zh/docs/Kubernetes/\345\207\206\345\244\207\350\257\201\344\271\246.md" "b/docs/zh/docs/Kubernetes/\345\207\206\345\244\207\350\257\201\344\271\246.md"
new file mode 100644
index 0000000000000000000000000000000000000000..9ac080a5e893eb1a69eeddd6529835d344ab1e3c
--- /dev/null
+++ "b/docs/zh/docs/Kubernetes/\345\207\206\345\244\207\350\257\201\344\271\246.md"
@@ -0,0 +1,388 @@
+
+# 准备证书
+
+
+**声明:本文使用的证书为自签名,不能用于商用环境**
+
+部署集群前,需要生成集群各组件之间通信所需的证书。本文使用开源 CFSSL 作为验证部署工具,以便用户了解证书的配置和集群组件之间证书的关联关系。用户可以根据实际情况选择合适的工具,例如 OpenSSL 。
+
+## 编译安装 CFSSL
+
+编译安装 CFSSL 的参考命令如下(需要互联网下载权限,需要配置代理的请先完成配置):
+
+```bash
+$ wget --no-check-certificate https://github.com/cloudflare/cfssl/archive/v1.5.0.tar.gz
+$ tar -zxf v1.5.0.tar.gz
+$ cd cfssl-1.5.0/
+$ make -j6
+$ cp bin/* /usr/local/bin/
+```
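+
+安装完成后,可通过如下命令确认 cfssl 已在 PATH 中:
+
+```bash
+$ cfssl version
+```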
+
+## 生成根证书
+
+编写 CA 配置文件,例如 ca-config.json:
+
+```bash
+$ cat ca-config.json | jq
+{
+ "signing": {
+ "default": {
+ "expiry": "8760h"
+ },
+ "profiles": {
+ "kubernetes": {
+ "usages": [
+ "signing",
+ "key encipherment",
+ "server auth",
+ "client auth"
+ ],
+ "expiry": "8760h"
+ }
+ }
+ }
+}
+```
+
+编写 CA CSR 文件,例如 ca-csr.json:
+
+```bash
+$ cat ca-csr.json | jq
+{
+ "CN": "Kubernetes",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "openEuler",
+ "OU": "WWW",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+生成 CA 证书和密钥:
+```bash
+$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
+```
+
+得到如下证书:
+
+```bash
+ca.csr ca-key.pem ca.pem
+```
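+
+可选地,可以使用 OpenSSL 查看生成的 CA 证书的主题和有效期(通用的 OpenSSL 用法,仅作参考):
+
+```bash
+$ openssl x509 -in ca.pem -noout -subject -dates
+```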
+
+## 生成 admin 账户证书
+
+admin 是 K8S 用于系统管理的一个账户,编写 admin 账户的 CSR 配置,例如 admin-csr.json:
+```bash
+cat admin-csr.json | jq
+{
+ "CN": "admin",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "system:masters",
+ "OU": "Containerum",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+生成证书:
+```bash
+$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
+```
+
+结果如下:
+```bash
+admin.csr admin-key.pem admin.pem
+```
+
+## 生成 service-account 账户证书
+
+编写 service-account 账户的 CSR 配置文件,例如 service-account-csr.json:
+```bash
+cat service-account-csr.json | jq
+{
+ "CN": "service-accounts",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "Kubernetes",
+ "OU": "openEuler k8s install",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+生成证书:
+```bash
+$ cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes service-account-csr.json | cfssljson -bare service-account
+```
+
+结果如下:
+```bash
+service-account.csr service-account-key.pem service-account.pem
+```
+
+## 生成 kube-controller-manager 组件证书
+
+编写 kube-controller-manager 的 CSR 配置:
+```bash
+{
+ "CN": "system:kube-controller-manager",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "system:kube-controller-manager",
+ "OU": "openEuler k8s kcm",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+生成证书:
+```bash
+$ cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
+```
+
+结果如下:
+```bash
+kube-controller-manager.csr kube-controller-manager-key.pem kube-controller-manager.pem
+```
+
+## 生成 kube-proxy 证书
+
+编写 kube-proxy 的 CSR 配置:
+```bash
+{
+ "CN": "system:kube-proxy",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "system:node-proxier",
+ "OU": "openEuler k8s kube proxy",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+生成证书:
+```bash
+$ cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
+```
+
+结果如下:
+```bash
+kube-proxy.csr kube-proxy-key.pem kube-proxy.pem
+```
+
+## 生成 kube-scheduler 证书
+
+编写 kube-scheduler 的 CSR 配置:
+```bash
+{
+ "CN": "system:kube-scheduler",
+ "key": {
+ "algo": "rsa",
+ "size": 2048
+ },
+ "names": [
+ {
+ "C": "CN",
+ "L": "HangZhou",
+ "O": "system:kube-scheduler",
+ "OU": "openEuler k8s kube scheduler",
+ "ST": "BinJiang"
+ }
+ ]
+}
+```
+
+生成证书:
+```bash
+$ cfssl gencert -ca=../ca/ca.pem -ca-key=../ca/ca-key.pem -config=../ca/ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
+```
+
+结果如下:
+```bash
+kube-scheduler.csr kube-scheduler-key.pem kube-scheduler.pem
+```
+
+## 生成 kubelet 证书
+
+由于证书涉及到 kubelet 所在机器的 hostname 和 IP 地址信息,因此每个 node 节点配置不尽相同,所以编写脚本完成,生成脚本如下:
+```bash
+$ cat node_csr_gen.bash
+
+#!/bin/bash
+
+nodes=(k8snode1 k8snode2 k8snode3)
+IPs=("192.168.122.157" "192.168.122.158" "192.168.122.159")
+
+for i in "${!nodes[@]}"; do
+
+cat > "${nodes[$i]}-csr.json" <24 |
+| k8smaster1 | 52:54:00:00:00:81 | 192.168.122.155/24 |
+| k8smaster2 | 52:54:00:00:00:82 | 192.168.122.156/24 |
+| k8snode1 | 52:54:00:00:00:83 | 192.168.122.157/24 |
+| k8snode2 | 52:54:00:00:00:84 | 192.168.122.158/24 |
+| k8snode3 | 52:54:00:00:00:85 | 192.168.122.159/24 |
+
+
diff --git "a/docs/zh/docs/Kubernetes/\351\203\250\347\275\262Node\350\212\202\347\202\271\347\273\204\344\273\266.md" "b/docs/zh/docs/Kubernetes/\351\203\250\347\275\262Node\350\212\202\347\202\271\347\273\204\344\273\266.md"
new file mode 100644
index 0000000000000000000000000000000000000000..c747dd3a9cef55a6305a09f393f40c348a0ad2cc
--- /dev/null
+++ "b/docs/zh/docs/Kubernetes/\351\203\250\347\275\262Node\350\212\202\347\202\271\347\273\204\344\273\266.md"
@@ -0,0 +1,383 @@
+# 部署 Node 节点组件
+
+
+
+本章节仅以`k8snode1`节点为例。
+
+## 环境准备
+
+```bash
+# 内网需要配置代理
+$ dnf install -y docker iSulad conntrack-tools socat containernetworking-plugins
+$ swapoff -a
+$ mkdir -p /etc/kubernetes/pki/
+$ mkdir -p /etc/cni/net.d
+$ mkdir -p /opt/cni
+# 删除默认kubeconfig
+$ rm /etc/kubernetes/kubelet.kubeconfig
+
+## 使用isulad作为运行时 ########
+# 配置iSulad
+cat /etc/isulad/daemon.json
+{
+ "registry-mirrors": [
+ "docker.io"
+ ],
+ "insecure-registries": [
+ "k8s.gcr.io",
+ "quay.io"
+ ],
+ "pod-sandbox-image": "k8s.gcr.io/pause:3.2",# pause类型
+ "network-plugin": "cni", # 置空表示禁用cni网络插件则下面两个路径失效, 安装插件后重启isulad即可
+ "cni-bin-dir": "/usr/libexec/cni/",
+ "cni-conf-dir": "/etc/cni/net.d",
+}
+
+# 在iSulad环境变量中添加代理,下载镜像
+cat /usr/lib/systemd/system/isulad.service
+[Service]
+Type=notify
+Environment="HTTP_PROXY=http://name:password@proxy:8080"
+Environment="HTTPS_PROXY=http://name:password@proxy:8080"
+
+# 重启iSulad并设置为开机自启
+systemctl daemon-reload
+systemctl restart isulad
+
+
+
+
+## 如果使用docker作为运行时 ########
+$ dnf install -y docker
+# 如果需要代理的环境,可以给docker配置代理,新增配置文件http-proxy.conf,并编写如下内容,替换name,password和proxy-addr为实际的配置。
+$ cat /etc/systemd/system/docker.service.d/http-proxy.conf
+[Service]
+Environment="HTTP_PROXY=http://name:password@proxy-addr:8080"
+$ systemctl daemon-reload
+$ systemctl restart docker
+```
+
+## 创建 kubeconfig 配置文件
+
+对各节点依次如下操作创建配置文件:
+
+```bash
+$ kubectl config set-cluster openeuler-k8s \
+ --certificate-authority=/etc/kubernetes/pki/ca.pem \
+ --embed-certs=true \
+ --server=https://192.168.122.154:6443 \
+ --kubeconfig=k8snode1.kubeconfig
+
+$ kubectl config set-credentials system:node:k8snode1 \
+ --client-certificate=/etc/kubernetes/pki/k8snode1.pem \
+ --client-key=/etc/kubernetes/pki/k8snode1-key.pem \
+ --embed-certs=true \
+ --kubeconfig=k8snode1.kubeconfig
+
+$ kubectl config set-context default \
+ --cluster=openeuler-k8s \
+ --user=system:node:k8snode1 \
+ --kubeconfig=k8snode1.kubeconfig
+
+$ kubectl config use-context default --kubeconfig=k8snode1.kubeconfig
+```
+
+**注:修改k8snode1为对应节点名**
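+
+这组命令在各节点上仅节点名不同,这里给出一个基于 `hostname` 推导节点名的参考写法(假设节点主机名与证书文件名一致,仅供参考):
+
+```bash
+NODE=$(hostname)   # 例如 k8snode1
+kubectl config set-cluster openeuler-k8s \
+    --certificate-authority=/etc/kubernetes/pki/ca.pem \
+    --embed-certs=true \
+    --server=https://192.168.122.154:6443 \
+    --kubeconfig="${NODE}.kubeconfig"
+kubectl config set-credentials "system:node:${NODE}" \
+    --client-certificate="/etc/kubernetes/pki/${NODE}.pem" \
+    --client-key="/etc/kubernetes/pki/${NODE}-key.pem" \
+    --embed-certs=true \
+    --kubeconfig="${NODE}.kubeconfig"
+kubectl config set-context default \
+    --cluster=openeuler-k8s \
+    --user="system:node:${NODE}" \
+    --kubeconfig="${NODE}.kubeconfig"
+kubectl config use-context default --kubeconfig="${NODE}.kubeconfig"
+```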
+
+## 拷贝证书
+
+和控制面一样,所有证书、密钥和相关配置都放到`/etc/kubernetes/pki/`目录。
+
+```bash
+$ ls /etc/kubernetes/pki/
+ca.pem k8snode1.kubeconfig kubelet_config.yaml kube-proxy-key.pem kube-proxy.pem
+k8snode1-key.pem k8snode1.pem kube_proxy_config.yaml kube-proxy.kubeconfig
+```
+
+## CNI 网络配置
+
+先通过 containernetworking-plugins 作为 kubelet 使用的 cni 插件,后续可以引入 calico,flannel 等插件,增强集群的网络能力。
+
+```bash
+# 桥网络配置
+$ cat /etc/cni/net.d/10-bridge.conf
+{
+ "cniVersion": "0.3.1",
+ "name": "bridge",
+ "type": "bridge",
+ "bridge": "cnio0",
+ "isGateway": true,
+ "ipMasq": true,
+ "ipam": {
+ "type": "host-local",
+ "subnet": "10.244.0.0/16",
+ "gateway": "10.244.0.1"
+ },
+ "dns": {
+ "nameservers": [
+ "10.244.0.1"
+ ]
+ }
+}
+
+# 回环网络配置
+$ cat /etc/cni/net.d/99-loopback.conf
+{
+ "cniVersion": "0.3.1",
+ "name": "lo",
+ "type": "loopback"
+}
+```
+
+## 部署 kubelet 服务
+
+### kubelet 依赖的配置文件
+
+```bash
+$ cat /etc/kubernetes/pki/kubelet_config.yaml
+kind: KubeletConfiguration
+apiVersion: kubelet.config.k8s.io/v1beta1
+authentication:
+ anonymous:
+ enabled: false
+ webhook:
+ enabled: true
+ x509:
+ clientCAFile: /etc/kubernetes/pki/ca.pem
+authorization:
+ mode: Webhook
+clusterDNS:
+- 10.32.0.10
+clusterDomain: cluster.local
+runtimeRequestTimeout: "15m"
+tlsCertFile: "/etc/kubernetes/pki/k8snode1.pem"
+tlsPrivateKeyFile: "/etc/kubernetes/pki/k8snode1-key.pem"
+```
+
+**注意:clusterDNS 的地址为:10.32.0.10,必须和之前设置的 service-cluster-ip-range 一致**
+
+### 编写 systemd 配置文件
+
+```bash
+$ cat /usr/lib/systemd/system/kubelet.service
+[Unit]
+Description=kubelet: The Kubernetes Node Agent
+Documentation=https://kubernetes.io/docs/
+Wants=network-online.target
+After=network-online.target
+
+[Service]
+ExecStart=/usr/bin/kubelet \
+ --config=/etc/kubernetes/pki/kubelet_config.yaml \
+ --network-plugin=cni \
+ --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
+ --kubeconfig=/etc/kubernetes/pki/k8snode1.kubeconfig \
+ --register-node=true \
+ --hostname-override=k8snode1 \
+ --cni-bin-dir="/usr/libexec/cni/" \
+ --v=2
+
+Restart=always
+StartLimitInterval=0
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+**注意:如果使用isulad作为runtime,需要增加如下配置**
+
+```bash
+--container-runtime=remote \
+--container-runtime-endpoint=unix:///var/run/isulad.sock \
+```
+
+## 部署 kube-proxy
+
+### kube-proxy 依赖的配置文件
+
+```bash
+cat /etc/kubernetes/pki/kube_proxy_config.yaml
+kind: KubeProxyConfiguration
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+clientConnection:
+ kubeconfig: /etc/kubernetes/pki/kube-proxy.kubeconfig
+clusterCIDR: 10.244.0.0/16
+mode: "iptables"
+```
+
+### 编写 systemd 配置文件
+
+```bash
+$ cat /usr/lib/systemd/system/kube-proxy.service
+[Unit]
+Description=Kubernetes Kube-Proxy Server
+Documentation=https://kubernetes.io/docs/reference/generated/kube-proxy/
+After=network.target
+
+[Service]
+EnvironmentFile=-/etc/kubernetes/config
+EnvironmentFile=-/etc/kubernetes/proxy
+ExecStart=/usr/bin/kube-proxy \
+ $KUBE_LOGTOSTDERR \
+ $KUBE_LOG_LEVEL \
+ --config=/etc/kubernetes/pki/kube_proxy_config.yaml \
+ --hostname-override=k8snode1 \
+ $KUBE_PROXY_ARGS
+Restart=on-failure
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+## 启动组件服务
+
+```bash
+$ systemctl enable kubelet kube-proxy
+$ systemctl start kubelet kube-proxy
+```
+
+其他节点依次部署即可。
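+
+如果当前节点可以免密 SSH 登录其他节点,也可以用类似下面的脚本分发各节点文件并启动服务(文件列表和路径为基于上文步骤的假设,仅供参考):
+
+```bash
+for node in k8snode2 k8snode3; do
+    scp "${node}.kubeconfig" "${node}.pem" "${node}-key.pem" root@"${node}":/etc/kubernetes/pki/
+    ssh root@"${node}" "systemctl enable --now kubelet kube-proxy"
+done
+```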
+
+## 验证集群状态
+
+等待几分钟,使用如下命令查看node状态:
+
+```bash
+$ kubectl get nodes --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
+NAME       STATUS   ROLES    AGE   VERSION
+k8snode1   Ready    <none>   17h   v1.20.2
+k8snode2   Ready    <none>   19m   v1.20.2
+k8snode3   Ready    <none>   12m   v1.20.2
+```
+
+## 部署 coredns
+
+coredns可以部署到node节点或者master节点,本文这里部署到节点`k8snode1`。
+
+### 编写 coredns 配置文件
+
+```bash
+$ cat /etc/kubernetes/pki/dns/Corefile
+.:53 {
+ errors
+ health {
+ lameduck 5s
+ }
+ ready
+ kubernetes cluster.local in-addr.arpa ip6.arpa {
+ pods insecure
+ endpoint https://192.168.122.154:6443
+ tls /etc/kubernetes/pki/ca.pem /etc/kubernetes/pki/admin-key.pem /etc/kubernetes/pki/admin.pem
+ kubeconfig /etc/kubernetes/pki/admin.kubeconfig default
+ fallthrough in-addr.arpa ip6.arpa
+ }
+ prometheus :9153
+ forward . /etc/resolv.conf {
+ max_concurrent 1000
+ }
+ cache 30
+ loop
+ reload
+ loadbalance
+}
+```
+
+说明:
+
+- 监听53端口;
+- 设置kubernetes插件配置:证书、kube api的URL;
+
+### 准备 systemd 的 service 文件
+
+```bash
+cat /usr/lib/systemd/system/coredns.service
+[Unit]
+Description=Kubernetes Core DNS server
+Documentation=https://github.com/coredns/coredns
+After=network.target
+
+[Service]
+ExecStart=bash -c "KUBE_DNS_SERVICE_HOST=10.32.0.10 coredns -conf /etc/kubernetes/pki/dns/Corefile"
+
+Restart=on-failure
+LimitNOFILE=65536
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### 启动服务
+
+```bash
+$ systemctl enable coredns
+$ systemctl start coredns
+```
+
+### 创建 coredns 的 Service 对象
+
+```bash
+$ cat coredns_server.yaml
+apiVersion: v1
+kind: Service
+metadata:
+ name: kube-dns
+ namespace: kube-system
+ annotations:
+ prometheus.io/port: "9153"
+ prometheus.io/scrape: "true"
+ labels:
+ k8s-app: kube-dns
+ kubernetes.io/cluster-service: "true"
+ kubernetes.io/name: "CoreDNS"
+spec:
+ clusterIP: 10.32.0.10
+ ports:
+ - name: dns
+ port: 53
+ protocol: UDP
+ - name: dns-tcp
+ port: 53
+ protocol: TCP
+ - name: metrics
+ port: 9153
+ protocol: TCP
+```
+
+### 创建 coredns 的 endpoint 对象
+
+```bash
+$ cat coredns_ep.yaml
+apiVersion: v1
+kind: Endpoints
+metadata:
+ name: kube-dns
+ namespace: kube-system
+subsets:
+ - addresses:
+ - ip: 192.168.122.157
+ ports:
+ - name: dns-tcp
+ port: 53
+ protocol: TCP
+ - name: dns
+ port: 53
+ protocol: UDP
+ - name: metrics
+ port: 9153
+ protocol: TCP
+```
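+
+编写完成后,还需要将上述两个对象提交到 API Server。假设使用前文生成的 admin.kubeconfig,参考命令如下:
+
+```bash
+$ kubectl apply -f coredns_server.yaml --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
+$ kubectl apply -f coredns_ep.yaml --kubeconfig /etc/kubernetes/pki/admin.kubeconfig
+```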
+
+### 确认 coredns 服务
+
+```bash
+# 查看service对象
+$ kubectl get service -n kube-system kube-dns
+NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
+kube-dns   ClusterIP   10.32.0.10   <none>        53/UDP,53/TCP,9153/TCP   51m
+# 查看endpoint对象
+$ kubectl get endpoints -n kube-system kube-dns
+NAME ENDPOINTS AGE
+kube-dns 192.168.122.157:53,192.168.122.157:53,192.168.122.157:9153 52m
+```
\ No newline at end of file
diff --git "a/docs/zh/docs/Kubernetes/\351\203\250\347\275\262\346\216\247\345\210\266\351\235\242\347\273\204\344\273\266.md" "b/docs/zh/docs/Kubernetes/\351\203\250\347\275\262\346\216\247\345\210\266\351\235\242\347\273\204\344\273\266.md"
new file mode 100644
index 0000000000000000000000000000000000000000..410f35a191b4f62c13d3e86be3919891af5de791
--- /dev/null
+++ "b/docs/zh/docs/Kubernetes/\351\203\250\347\275\262\346\216\247\345\210\266\351\235\242\347\273\204\344\273\266.md"
@@ -0,0 +1,353 @@
+# 部署控制面组件
+
+
+## 准备所有组件的 kubeconfig
+
+### kube-proxy
+
+```bash
+$ kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.122.154:6443 --kubeconfig=kube-proxy.kubeconfig
+$ kubectl config set-credentials system:kube-proxy --client-certificate=/etc/kubernetes/pki/kube-proxy.pem --client-key=/etc/kubernetes/pki/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
+$ kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-proxy --kubeconfig=kube-proxy.kubeconfig
+$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
+```
+
+### kube-controller-manager
+
+```bash
+$ kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-controller-manager.kubeconfig
+$ kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem --client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
+$ kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
+$ kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
+```
+
+### kube-scheduler
+
+```bash
+$ kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=kube-scheduler.kubeconfig
+$ kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/kube-scheduler.pem --client-key=/etc/kubernetes/pki/kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
+$ kubectl config set-context default --cluster=openeuler-k8s --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
+$ kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
+```
+
+### admin
+
+```bash
+$ kubectl config set-cluster openeuler-k8s --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://127.0.0.1:6443 --kubeconfig=admin.kubeconfig
+$ kubectl config set-credentials admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=admin.kubeconfig
+$ kubectl config set-context default --cluster=openeuler-k8s --user=admin --kubeconfig=admin.kubeconfig
+$ kubectl config use-context default --kubeconfig=admin.kubeconfig
+```
+
+### 获得相关 kubeconfig 配置文件
+
+```bash
+admin.kubeconfig kube-proxy.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig
+```
+
+## 生成密钥提供者的配置
+
+api-server 启动时需要通过 `--encryption-provider-config=/etc/kubernetes/pki/encryption-config.yaml` 提供一份加密配置,本文通过 urandom 生成其中的密钥:
+
+```bash
+$ cat generate.bash
+#!/bin/bash
+
+ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
+
+cat > encryption-config.yaml <<EOF
+kind: EncryptionConfig
+apiVersion: v1
+resources:
+  - resources:
+      - secrets
+    providers:
+      - aescbc:
+          keys:
+            - name: key1
+              secret: ${ENCRYPTION_KEY}
+      - identity: {}
+EOF
+```
diff --git a/docs/zh/menu/index.md b/docs/zh/menu/index.md
- [使用JDK编译]({{< relref "./docs/ApplicationDev/使用JDK编译.md" >}})
- [构建RPM包]({{< relref "./docs/ApplicationDev/构建RPM包.md" >}})
+- [Kubernetes集群部署指南]({{< relref "./docs/Kubernetes/Kubernetes.md" >}})
+ - [准备虚拟机]({{< relref "./docs/Kubernetes/准备虚拟机.md" >}})
+ - [部署Kubernetes集群]({{< relref "./docs/Kubernetes/部署Kubernetes集群.md" >}})
+ - [安装Kubernetes软件包]({{< relref "./docs/Kubernetes/安装Kubernetes软件包.md" >}})
+ - [准备证书]({{< relref "./docs/Kubernetes/准备证书.md" >}})
+ - [安装etcd]({{< relref "./docs/Kubernetes/安装etcd.md" >}})
+ - [部署控制面组件]({{< relref "./docs/Kubernetes/部署控制面组件.md" >}})
+ - [部署Node节点组件]({{< relref "./docs/Kubernetes/部署Node节点组件.md" >}})
+ - [运行测试pod]({{< relref "./docs/Kubernetes/运行测试pod.md" >}})
- [第三方软件安装指南]({{< relref "./docs/thirdparty_migration/thidrparty.md" >}})
- [OpenStack-train迁移至openEuler指导]({{< relref "./docs/thirdparty_migration/openstack-train.md" >}})
- [K8S迁移至openEuler指导]({{< relref "./docs/thirdparty_migration/k8sinstall.md" >}})