diff --git a/docs/en/docs/Container/specifying-rootfs-to-create-a-container.md b/docs/en/docs/Container/specifying-rootfs-to-create-a-container.md
index 5ff1952020ec65cc7cbec947baeec1447caf6975..6a2dbfae7a81e4ef6dafbb5ce1a951e067c93d5d 100644
--- a/docs/en/docs/Container/specifying-rootfs-to-create-a-container.md
+++ b/docs/en/docs/Container/specifying-rootfs-to-create-a-container.md
@@ -1,11 +1,8 @@
# Specifying Rootfs to Create a Container
-- [Specifying Rootfs to Create a Container](#specifying-rootfs-to-create-a-container)
-
-
## Function Description
-Different from a common container that needs to be started by specifying a container image, a system container is started by specifying a local root file system \(rootfs\) through the **--external-rootfs** parameter. Rootfs contains the operating system environment on which the container depends during running.
+Different from a common container that needs to be started by specifying a container image, a system container is started by specifying a local root file system \(rootfs\) using the **--external-rootfs** parameter. The rootfs contains the operating system environment on which the container depends during running.
## Parameter Description
@@ -22,7 +19,7 @@ Different from a common container that needs to be started by specifying a conta
--external-rootfs
|
-- Variable of the string type.
- Absolute path in the root file system of the container, that is, the path of rootfs.
+ | - Variable of the string type.
- Absolute path in the root file system of the container, that is, the path of the rootfs.
|
@@ -30,20 +27,20 @@ Different from a common container that needs to be started by specifying a conta
## Constraints
-- The rootfs directory specified by the **--external-rootfs** parameter must be an absolute path.
-- The rootfs directory specified by the **--external-rootfs** parameter must be a complete OS environment including **systemd** package. Otherwise, the container fails to be started.
-- When a container is deleted, the rootfs directory specified by **--external-rootfs** is not deleted.
-- Containers based on ARM rootfs cannot run on x86 servers. Containers based on x86 rootfs cannot run on ARM servers.
-- You are not advised to start multiple container instances by using the same rootfs. That is, one rootfs is used only by container instances in the same lifecycle.
+- The rootfs directory specified using the **--external-rootfs** parameter must be an absolute path.
+- The rootfs directory specified using the **--external-rootfs** parameter must be a complete OS environment that includes the **systemd** package. Otherwise, the container fails to start.
+- When a container is deleted, the rootfs directory specified using **--external-rootfs** is not deleted.
+- Containers based on an ARM rootfs cannot run in the x86 environment. Containers based on an x86 rootfs cannot run in the ARM environment.
+- You are advised not to start multiple container instances using the same rootfs. That is, one rootfs is used by only one container instance throughout its lifecycle.
## Example
-If the local rootfs path is **/root/myrootfs**, run the following command to start a system container:
+Assuming the local rootfs path is **/root/myrootfs**, run the following command to start a system container:
```
# isula run -tid --system-container --external-rootfs /root/myrootfs none init
```
> **NOTE:**
->Rootfs is a user-defined file system. Prepare it by yourself. For example, a rootfs is generated after the TAR package of container images is decompressed.
+>The rootfs is a user-defined file system. Prepare it by yourself. For example, a rootfs is generated after the TAR package of a container image is decompressed.
diff --git a/docs/en/docs/Embedded/embedded.md b/docs/en/docs/Embedded/embedded.md
new file mode 100644
index 0000000000000000000000000000000000000000..b4aa274ed85f434bfce4e6391645e98d19cc33bd
--- /dev/null
+++ b/docs/en/docs/Embedded/embedded.md
@@ -0,0 +1,185 @@
+# openEuler Embedded Usage Guide
+
+openEuler Embedded is a Linux version for embedded scenarios based on the openEuler community version. Embedded applications are constrained by multiple factors, such as resources, power consumption, and hardware diversity, so server-oriented Linux versions and their build systems can hardly satisfy the requirements of embedded scenarios. [Yocto](https://www.yoctoproject.org/) is widely used to customize and build embedded Linux. Currently, openEuler Embedded is also built using Yocto, but shares the same source code with other openEuler versions. For details about the build method, see the related code repositories in [SIG-Yocto](https://gitee.com/openeuler/community/tree/master/sig/sig-Yocto).
+
+This document describes how to obtain pre-built images, run the images, and develop basic embedded Linux applications based on the images.
+
+## Obtaining the Image
+The released pre-built images support only the ARM and AArch64 architectures, and are compatible only with the ARM virt-4.0 platform of QEMU. You can obtain the images through the following links:
+
+- [qemu_arm](https://repo.openeuler.org/openEuler-21.09/embedded_img/qemu-arm) for the 32-bit ARM architecture (ARM Cortex-A15 processor).
+- [qemu_aarch64](https://repo.openeuler.org/openEuler-21.09/embedded_img/qemu-aarch64) for the 64-bit AArch64 architecture (ARM Cortex-A57 processor).
+
+You can deploy an openEuler Embedded image on a physical bare-metal server, cloud server, container, or VM as long as the environment supports QEMU emulator version 5.0 or later.
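+
+Before running an image, you can check whether the QEMU in your environment meets the version requirement. The following is a sketch for the AArch64 emulator (use **qemu-system-arm** for the 32-bit ARM images):
+
+```shell
+# Print the QEMU version; it should be 5.0 or later.
+qemu-system-aarch64 --version
+```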
+
+## Image Content
+
+The downloaded image consists of the following parts:
+
+- Kernel image **zImage**, which is built based on Linux 5.10 of the openEuler community. You can obtain the kernel configurations through the following links:
+  - [arm(cortex a15)](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-21.09/config/arm/defconfig-kernel) for the ARM architecture.
+  - [arm(cortex a57)](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-21.09/config/arm64/defconfig-kernel) for the AArch64 architecture. In addition, the kernel image supports self-decompression. For details, see the corresponding [patch](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-21.09/patches/arm64/0001-arm64-add-zImage-support-for-arm64.patch).
+
+- Root file system image (select one of the following as required):
+ - **initrd_tiny**, which is the image of the simplified root file system with only basic functions. It contains the BusyBox and basic glibc library. The image is simple and consumes little memory. It is suitable for exploring Linux kernel functions.
+  - **initrd**, which is the image of the standard root file system. In addition to the content of the simplified image, the standard image receives necessary security hardening and includes software packages such as audit, cracklib, OpenSSH, Linux PAM, shadow, and the iSula container engine. This image is suitable for exploring more functions.
+
+## Executing the Image
+
+You can run the image to experience the functions of openEuler Embedded, and develop basic embedded Linux applications.
+
+---
+**_Note_**
+
+- You are advised to use QEMU 5.0 or later to run the image. Some additional functions (the network and shared file system) depend on the virtio-net and virtio-fs features of QEMU. If these features are not enabled in QEMU, errors may occur during image running. In this case, you may need to recompile QEMU from the source code.
+
+- When running the image, you are advised to place the kernel image and root file system image in the same directory. The following uses the standard root file system image (initrd) as an example.
+
+---
+
+
+### Simplified Running Scenario
+
+In this scenario, the network and shared file system are not enabled in QEMU. You can use this scenario to experience the functions.
+
+For ARM architecture (ARM Cortex A15), run the following command:
+```shell
+qemu-system-arm -M virt-4.0 -cpu cortex-a15 -nographic -kernel zImage -initrd initrd
+```
+For AArch64 architecture (ARM Cortex A57), run the following command:
+```shell
+qemu-system-aarch64 -M virt-4.0 -cpu cortex-a57 -nographic -kernel zImage -initrd initrd
+```
+
+The standard root file system image is securely hardened, so you must set a password for the root user during the first startup. The password must contain at least eight characters, including digits, letters, and special characters, for example, openEuler@2021. For systems running the simplified root file system image, you can log in without a user name or password.
+
+After you successfully run QEMU and log in to the system, the Shell of openEuler Embedded is displayed.
+
+### Shared File System Enabled Scenario
+
+The shared file system allows the host machine of the QEMU emulator to share files with openEuler Embedded. In this way, programs that are cross-compiled on the host machine can run on openEuler Embedded after being copied to the shared directory.
+
+Assume that the `/tmp` directory of the host machine is used as the shared directory, and a `hello_openeuler.txt` file is created in the directory in advance. To enable the shared file system function, perform the following steps:
+
+1. **Start QEMU.**
+
+For ARM architecture (ARM Cortex A15), run the following command:
+```sh
+qemu-system-arm -M virt-4.0 -cpu cortex-a15 -nographic -kernel zImage -initrd initrd -device virtio-9p-device,fsdev=fs1,mount_tag=host -fsdev local,security_model=passthrough,id=fs1,path=/tmp
+```
+For AArch64 architecture (ARM Cortex A57), run the following command:
+```sh
+qemu-system-aarch64 -M virt-4.0 -cpu cortex-a57 -nographic -kernel zImage -initrd initrd -device virtio-9p-device,fsdev=fs1,mount_tag=host -fsdev local,security_model=passthrough,id=fs1,path=/tmp
+```
+
+2. **Mount the file system.**
+
+After you start and log in to openEuler Embedded, run the following commands to mount the shared file system:
+```shell
+cd /tmp
+mkdir host
+mount -t 9p -o trans=virtio,version=9p2000.L host /tmp/host
+```
+The shared file system is mounted to the `/tmp/host` directory of openEuler Embedded.
+
+3. **Check whether the file system is shared successfully.**
+
+In openEuler Embedded, run the following commands:
+
+```shell
+cd /tmp/host
+ls
+```
+If `hello_openeuler.txt` is discovered, the file system is shared successfully.
+
+### Network Enabled Scenario
+
+The virtio-net of QEMU and the virtual NIC of the host machine allow for the network communication between the host machine and openEuler Embedded.
+1. **Start QEMU.**
+
+For ARM architecture (ARM Cortex A15), run the following command:
+```shell
+qemu-system-arm -M virt-4.0 -cpu cortex-a15 -nographic -kernel zImage -initrd initrd -device virtio-net-device,netdev=tap0 -netdev tap,id=tap0,script=/etc/qemu-ifup
+```
+For AArch64 architecture (ARM Cortex A57), run the following command:
+```shell
+qemu-system-aarch64 -M virt-4.0 -cpu cortex-a57 -nographic -kernel zImage -initrd initrd -device virtio-net-device,netdev=tap0 -netdev tap,id=tap0,script=/etc/qemu-ifup
+```
+2. **Create the vNIC on the host machine.**
+
+You can use the `/etc/qemu-ifup` script to create a **tap0** vNIC on the host machine. **root** permission is required for running the script. The script details are as follows:
+
+```sh
+#!/bin/bash
+ifconfig $1 192.168.10.1 up
+```
+After the script runs, a **tap0** vNIC with the IP address 192.168.10.1 is created on the host machine.
+
+3. **Configure the NIC of openEuler Embedded.**
+
+Log in to openEuler Embedded and run the following command:
+```shell
+ifconfig eth0 192.168.10.2
+```
+
+4. **Check whether the network connection is normal.**
+
+In openEuler Embedded, run the following command:
+```shell
+ping 192.168.10.1
+```
+
+If the IP address can be pinged, the network connection between the host machine and openEuler Embedded is normal.
+
+---
+**_Note_**
+
+If you need openEuler Embedded to access the Internet through the host machine, create a bridge on the host machine. For details, see the related documents.
+
+---
+
+## User-Mode Application Development Based on openEuler Embedded
+
+In addition to the basic functions of openEuler Embedded, you can also use the released images for the basic development of user-mode applications, that is, running your own programs on openEuler Embedded.
+
+
+1. **Prepare the environment.**
+
+The current images are built using the Linaro ARM/AArch64 GCC 7.3.1 toolchains. You are advised to use the same toolchains for application development. You can obtain the toolchains from the following links:
+- [linaro arm](https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/arm-linux-gnueabi/gcc-linaro-7.3.1-2018.05-x86_64_arm-linux-gnueabi.tar.xz)
+- [linaro arm sysroot](https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/arm-linux-gnueabi/sysroot-glibc-linaro-2.25-2018.05-arm-linux-gnueabi.tar.xz)
+- [linaro aarch64](https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz)
+- [linaro aarch64 sysroot](https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/sysroot-glibc-linaro-2.25-2018.05-aarch64-linux-gnu.tar.xz)
+
+Download and decompress the required packages to a specified directory, for example, `/opt/openEuler_toolchain`.
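+
+For example, assuming the AArch64 toolchain and the `/opt/openEuler_toolchain` directory mentioned above are used, the download and decompression steps can be sketched as follows:
+
+```shell
+# Download the AArch64 toolchain and its sysroot, then decompress them.
+mkdir -p /opt/openEuler_toolchain
+cd /opt/openEuler_toolchain
+wget https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz
+wget https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/sysroot-glibc-linaro-2.25-2018.05-aarch64-linux-gnu.tar.xz
+tar -xf gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz
+tar -xf sysroot-glibc-linaro-2.25-2018.05-aarch64-linux-gnu.tar.xz
+```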
+
+2. **Create and compile a user-mode program.**
+
+The following uses a `hello openEuler` program as an example to describe how to build a program that runs on the AArch64 standard root file system image.
+
+Create a `hello.c` file on the host machine. The source code is as follows:
+```c
+#include <stdio.h>
+
+int main(void)
+{
+    printf("hello openEuler\r\n");
+    return 0;
+}
+```
+
+On the host machine, run the following commands to compile using the corresponding toolchain:
+```shell
+export PATH=$PATH:/opt/openEuler_toolchain/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin
+aarch64-linux-gnu-gcc --sysroot=<path to the decompressed sysroot> hello.c -o hello
+mv hello /tmp
+```
+Copy the cross-compiled hello program to the `/tmp` directory, and enable openEuler Embedded to access the directory on the host machine by referring to the description in **Shared File System Enabled Scenario**.
+
+3. **Run the user-mode program.**
+
+In openEuler Embedded, run the following commands to run the hello program:
+```shell
+cd /tmp/host
+./hello
+```
+If the program runs successfully, **hello openEuler** is displayed in the Shell of openEuler Embedded.
diff --git a/docs/en/docs/KubeEdge/kubeedge-deployment-guide.md b/docs/en/docs/KubeEdge/kubeedge-deployment-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..ac273dca0b2be42936f2d376a4a1e21475d5bf00
--- /dev/null
+++ b/docs/en/docs/KubeEdge/kubeedge-deployment-guide.md
@@ -0,0 +1,1392 @@
+# KubeEdge Deployment Guide
+
+## Description
+
+### KubeEdge
+
+KubeEdge is an open source system dedicated to solving problems in edge scenarios. It extends the capabilities of containerized application orchestration and device management to edge devices. Based on Kubernetes, KubeEdge provides core infrastructure support for networks, application deployment, and metadata synchronization between the cloud and the edge. KubeEdge supports MQTT and allows for custom logic to enable communication for the resource-constrained devices at the edge. KubeEdge consists of components deployed on the cloud and edge nodes. The components are now open source.
+
+> https://kubeedge.io/
+
+### iSulad
+
+iSulad is a lightweight container runtime daemon designed for IoT and cloud infrastructure. It is lightweight, fast, and is not restricted by hardware specifications or architectures. It is suitable for wide application in various scenarios, such as cloud, IoT, and edge computing.
+
+> https://gitee.com/openeuler/iSulad
+
+## Preparations
+
+### Component Versions
+
+| Component| Version|
+| ---------- | --------------------------------- |
+| OS | openEuler 21.09 |
+| Kubernetes | 1.20.2-4 |
+| iSulad | 2.0.9-20210625.165022.git5a088d9c |
+| KubeEdge | v1.8.0 |
+
+### Node Planning Example
+
+| Node| Location| Components|
+| ------------ | ------------- | -------------------------------- |
+| 9.63.252.224 | Cloud| Kubernetes (Master), iSulad, CloudCore|
+| 9.63.252.227 | Edge| iSulad, EdgeCore|
+
+### Environment Configurations
+
+Configure the following settings on the cloud and edge nodes:
+
+```bash
+# Disable the firewall.
+$ systemctl stop firewalld
+$ systemctl disable firewalld
+
+# Disable SELinux.
+$ setenforce 0
+
+# Configure the network, and enable the required forwarding mechanism.
+$ cat >> /etc/sysctl.d/k8s.conf << EOF
+net.bridge.bridge-nf-call-ip6tables = 1
+net.bridge.bridge-nf-call-iptables = 1
+net.ipv4.ip_forward = 1
+EOF
+$ sysctl --system
+
+# Add the host name mappings to the hosts file.
+$ cat >> /etc/hosts << EOF
+9.63.252.224 cloud.kubeedge
+9.63.252.227 edge.kubeedge
+EOF
+
+# Choose an accessible NTP server to synchronize the clock.
+$ ntpdate cn.pool.ntp.org
+
+# Install cri-tools, the command-line tools for the Container Runtime Interface (CRI).
+$ wget --no-check-certificate https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-linux-amd64.tar.gz
+$ tar zxvf crictl-v1.20.0-linux-amd64.tar.gz -C /usr/local/bin
+
+# Install the CNI network plugins.
+$ wget --no-check-certificate https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-amd64-v0.9.0.tgz
+$ mkdir -p /opt/cni/bin
+$ tar -zxvf cni-plugins-linux-amd64-v0.9.0.tgz -C /opt/cni/bin
+```
+
+### Configuring iSulad
+
+Configure the following settings on the cloud and edge nodes:
+
+```bash
+# Configure iSulad (only the items to be modified are listed).
+$ cat /etc/isulad/daemon.json
+{
+ "registry-mirrors": [
+ "docker.io"
+ ],
+ "insecure-registries": [
+ "k8s.gcr.io",
+ "quay.io",
+ "hub.oepkgs.net"
+ ],
+ "pod-sandbox-image": "k8s.gcr.io/pause:3.2",
+ "network-plugin": "cni",
+ "cni-bin-dir": "/opt/cni/bin",
+    "cni-conf-dir": "/etc/cni/net.d"
+}
+
+# Configure the proxy if the node cannot directly access the Internet. Otherwise, skip this section.
+$ cat /usr/lib/systemd/system/isulad.service
+[Service]
+Type=notify
+Environment="HTTP_PROXY=http://..."
+Environment="HTTPS_PROXY=http://..."
+
+# Restart iSulad and set it to start upon system startup.
+$ systemctl daemon-reload && systemctl enable isulad && systemctl restart isulad
+```
+
+### Deploying the Kubernetes Components
+
+Install and deploy the Kubernetes components on the cloud node only.
+
+```bash
+# Install the Kubernetes tools.
+$ yum install kubernetes-master kubernetes-kubeadm kubernetes-client kubernetes-kubelet
+# Set kubelet to start upon system startup.
+$ systemctl enable kubelet --now
+
+# Note that the system proxy must be canceled before using the kubeadm init command.
+$ unset `env | grep -iE "tps?_proxy" | cut -d= -f1`
+$ env | grep proxy
+
+# Run the kubeadm init command.
+$ kubeadm init --kubernetes-version v1.20.2 --pod-network-cidr=10.244.0.0/16 --upload-certs --cri-socket=/var/run/isulad.sock
+# The default Kubernetes component image repository is k8s.gcr.io. You can add the --image-repository=xxx option to configure a custom image repository address (to test your own Kubernetes image).
+
+# Note that the network segment specified by pod-network-cidr cannot overlap the network segment of the host machine. Otherwise, the network is inaccessible.
+# You are advised to run the init command before configuring the network.
+Your Kubernetes control-plane has initialized successfully!
+
+To start using your cluster, you need to run the following as a regular user:
+
+ mkdir -p $HOME/.kube
+ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+ sudo chown $(id -u):$(id -g) $HOME/.kube/config
+
+You should now deploy a pod network to the cluster.
+Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
+ https://kubernetes.io/docs/concepts/cluster-administration/addons/
+
+Then you can join any number of worker nodes by running the following on each as root:
+...
+
+# Run the commands as prompted.
+mkdir -p $HOME/.kube
+cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+chown $(id -u):$(id -g) $HOME/.kube/config
+# The commands make a copy of admin.conf, which is a kubectl configuration file initialized by the kubeadm command.
+# The file contains important configurations, such as the authentication information.
+
+# You can reset the configurations if an error occurs when running the init command.
+$ kubeadm reset
+
+# If "Unable to read config path "/etc/kubernetes/manifests"" is displayed, run the following command:
+$ mkdir -p /etc/kubernetes/manifests
+```
+
+### Configuring the Network
+
+Because **the Calico network plugin cannot run on edge nodes**, `flannel` is used instead. The [issue](https://github.com/kubeedge/kubeedge/issues/2788#issuecomment-907627687) has been submitted to the KubeEdge community.
+
+The cloud and edge nodes are in different network environments and require different **node affinity** settings. Therefore, two flannel configuration files are required.
+
+```bash
+# Download the flannel network plugin.
+$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
+
+# Prepare for network configuration for the cloud node.
+$ cp kube-flannel.yml kube-flannel-cloud.yml
+
+# Modify the network configuration for the cloud node so that the flannel DaemonSet
+# is scheduled only to cloud nodes. The complete modified file is provided in the
+# appendix as kube-flannel-cloud.yml, and the corresponding configuration for the
+# edge node is provided as kube-flannel-edge.yml.
+$ vi kube-flannel-cloud.yml
+
+# Apply the network configurations.
+$ kubectl apply -f kube-flannel-cloud.yml
+$ kubectl apply -f kube-flannel-edge.yml
+```
+
+### Deploying CloudCore Using Keadm
+
+Deploy CloudCore on the cloud node using `keadm init`, and then check the CloudCore logs:
+
+```bash
+$ journalctl -u cloudcore.service -f
+Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.959661 424578 signcerts.go:100] Succeed to creating token
+Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.959716 424578 server.go:44] start unix domain socket server
+Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.959973 424578 uds.go:71] listening on: //var/lib/kubeedge/kubeedge.sock
+Aug 29 10:50:15 cloud.kubeedge cloudcore[424578]: I0829 10:50:15.966693 424578 server.go:64] Starting cloudhub websocket server
+Aug 29 10:50:17 cloud.kubeedge cloudcore[424578]: I0829 10:50:17.847150 424578 upstream.go:63] Start upstream devicecontroller
+```
+
+CloudCore has been deployed on the cloud node. Then, deploy EdgeCore on the edge node.
+
+### Managing the Edge Node
+
+Run the `keadm join` command on the edge node to connect it to the cloud.
+
+#### Modifying iSulad Configurations
+
+File path: `/etc/isulad/daemon.json`
+
+```bash
+{
+    # Set pod-sandbox-image.
+    "pod-sandbox-image": "kubeedge/pause:3.1",
+    # The listening port 10350 of EdgeCore conflicts with the WebSocket port of iSulad,
+    # which prevents EdgeCore from starting. To resolve the conflict, change the value
+    # of websocket-server-listening-port to 10351.
+    "websocket-server-listening-port": 10351
+}
+```
+
+After the configuration file is modified, run the `systemctl restart isulad` command to restart iSulad.
+
+
+#### Connecting EdgeCore to CloudCore
+
+```bash
+# Obtain a token on the cloud node.
+$ keadm gettoken
+28c25d3b137593f5bbfb776cf5b19866ab9727cab9e97964dd503f87cd52cbde.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzAyOTE4MTV9.aGUyCi9gdysVtMu0DQzrD5TcV_DcXob647YeqcOxKDA
+
+# Run the keadm join command to connect EdgeCore to CloudCore.
+# --cloudcore-ipport is a mandatory parameter. 10000 is the default port of CloudCore.
+$ keadm join --cloudcore-ipport=9.63.252.224:10000 --kubeedge-version=1.8.0 --token=28c25d3b137593f5bbfb776cf5b19866ab9727cab9e97964dd503f87cd52cbde.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzAyOTE4MTV9.aGUyCi9gdysVtMu0DQzrD5TcV_DcXob647YeqcOxKDA
+Host has mosquit+ already installed and running. Hence skipping the installation steps !!!
+kubeedge-v1.8.0-linux-amd64.tar.gz checksum:
+checksum_kubeedge-v1.8.0-linux-amd64.tar.gz.txt content:
+[Run as service] start to download service file for edgecore
+[Run as service] success to download service file for edgecore
+kubeedge-v1.8.0-linux-amd64/
+kubeedge-v1.8.0-linux-amd64/edge/
+kubeedge-v1.8.0-linux-amd64/edge/edgecore
+kubeedge-v1.8.0-linux-amd64/cloud/
+kubeedge-v1.8.0-linux-amd64/cloud/csidriver/
+kubeedge-v1.8.0-linux-amd64/cloud/csidriver/csidriver
+kubeedge-v1.8.0-linux-amd64/cloud/admission/
+kubeedge-v1.8.0-linux-amd64/cloud/admission/admission
+kubeedge-v1.8.0-linux-amd64/cloud/cloudcore/
+kubeedge-v1.8.0-linux-amd64/cloud/cloudcore/cloudcore
+kubeedge-v1.8.0-linux-amd64/version
+
+KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -b
+```
+
+At this time, the deployment of EdgeCore is not complete because Docker is used in the default settings. You need to modify the configuration file to connect EdgeCore to iSulad.
+
+#### Modifying EdgeCore Configurations
+
+```bash
+# Enable systemd to manage EdgeCore.
+# Copy edgecore.service to /usr/lib/systemd/system.
+$ cp /etc/kubeedge/edgecore.service /usr/lib/systemd/system
+
+# Modify the EdgeCore configurations to connect EdgeCore to iSulad.
+$ cd /etc/kubeedge/config
+# In edgecore.yaml, switch the container runtime of the edged module to iSulad:
+# set runtimeType to remote and point remoteRuntimeEndpoint and remoteImageEndpoint
+# to unix:///var/run/isulad.sock.
+$ vi edgecore.yaml
+
+# Restart EdgeCore to apply the configurations.
+$ systemctl daemon-reload && systemctl restart edgecore
+
+# On the cloud node, deploy an Nginx test application and check the pod.
+$ kubectl run nginx --image=nginx
+$ kubectl get pod -owide
+# You can see that Nginx has been successfully deployed on the edge node.
+```
+
+#### Testing the Function
+
+```bash
+# Test whether the function is running properly.
+# Run the curl command on the edge node to access the IP address of Nginx, which is 10.244.2.4.
+$ curl 10.244.2.4:80
+<!DOCTYPE html>
+<html>
+<head>
+<title>Welcome to nginx!</title>
+</head>
+<body>
+<h1>Welcome to nginx!</h1>
+<p>If you see this page, the nginx web server is successfully installed and
+working. Further configuration is required.</p>
+
+<p>For online documentation and support please refer to
+<a href="http://nginx.org/">nginx.org</a>.<br/>
+Commercial support is available at
+<a href="http://nginx.com/">nginx.com</a>.</p>
+
+<p><em>Thank you for using nginx.</em></p>
+</body>
+</html>
+```
+
+The deployment of KubeEdge and iSulad is complete.
+
+## Deploying the KubeEdge Cluster Using Binary Files
+
+You can also deploy the KubeEdge cluster using binary files. Only two RPM packages are required: `cloudcore` (on the cloud node) and `edgecore` (on the edge node).
+> The KubeEdge deployment using binary files is for testing only. Do not use this method in the production environment.
+
+### Deploying CloudCore on the Cloud Node
+
+> Log in to the cloud host.
+
+#### Installing the `cloudcore` RPM Package
+
+```bash
+$ yum install kubeedge-cloudcore
+```
+
+#### Creating CRDs
+
+```bash
+$ kubectl apply -f /etc/kubeedge/crds/devices/
+$ kubectl apply -f /etc/kubeedge/crds/reliablesyncs/
+$ kubectl apply -f /etc/kubeedge/crds/router/
+```
+
+#### Preparing the Configuration File
+
+```bash
+$ cloudcore --defaultconfig > /etc/kubeedge/config/cloudcore.yaml
+```
+
+Modify CloudCore configurations by referring to **Deploying CloudCore Using Keadm**.
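+
+As a reference sketch of the CloudCore configuration (the field names follow the default `cloudcore.yaml` generated above; verify them against your file), the key item is the address that CloudCore advertises to edge nodes:
+
+```yaml
+modules:
+  cloudHub:
+    advertiseAddress:
+      # IP address of the cloud node that edge nodes can reach
+      - 9.63.252.224
+```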
+
+#### Running CloudCore
+
+```bash
+$ pkill cloudcore
+$ systemctl start cloudcore
+```
+
+### Deploying EdgeCore on the Edge Node
+
+> Log in to the edge host.
+
+#### Installing the `edgecore` RPM Package
+
+```bash
+$ yum install kubeedge-edgecore
+```
+
+#### Preparing the Configuration File
+
+```bash
+$ edgecore --defaultconfig > /etc/kubeedge/config/edgecore.yaml
+```
+
+Modify EdgeCore configurations by referring to **Deploying EdgeCore Using Keadm**.
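+
+As a reference sketch of the EdgeCore configuration (the field names follow the default `edgecore.yaml` generated above; verify them against your file), the key items switch the container runtime to iSulad:
+
+```yaml
+modules:
+  edged:
+    # Use a remote CRI runtime instead of the built-in Docker support.
+    runtimeType: remote
+    remoteRuntimeEndpoint: unix:///var/run/isulad.sock
+    remoteImageEndpoint: unix:///var/run/isulad.sock
+    podSandboxImage: kubeedge/pause:3.1
+```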
+
+#### Configuring the Token
+
+```bash
+# Obtain the token on the cloud node.
+$ kubectl get secret -nkubeedge tokensecret -o=jsonpath='{.data.tokendata}' | base64 -d
+1c4ff11289a14c59f2cbdbab726d1857262d5bda778ddf0de34dd59d125d3f69.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MzE0ODM3MzN9.JY77nMVDHIKD9ipo03Y0mSbxief9qOvJ4yMNx1yZpp0
+
+# On the edge node, add the obtained token to the configuration file.
+# Set the token variable to the value obtained in the previous step.
+$ token=<token obtained on the cloud node>
+$ sed -i -e "s|token: .*|token: ${token}|g" /etc/kubeedge/config/edgecore.yaml
+```
+
+#### Running EdgeCore
+
+```bash
+$ pkill edgecore
+$ systemctl start edgecore
+```
+
+## Appendix
+
+### kube-flannel-cloud.yml
+
+```bash
+# Application scenario: cloud node
+---
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: psp.flannel.unprivileged
+ annotations:
+ seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
+ seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
+ apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
+ apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
+spec:
+ privileged: false
+ volumes:
+ - configMap
+ - secret
+ - emptyDir
+ - hostPath
+ allowedHostPaths:
+ - pathPrefix: "/etc/cni/net.d"
+ - pathPrefix: "/etc/kube-flannel"
+ - pathPrefix: "/run/flannel"
+ readOnlyRootFilesystem: false
+ # Users and groups
+ runAsUser:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ fsGroup:
+ rule: RunAsAny
+ # Privilege Escalation
+ allowPrivilegeEscalation: false
+ defaultAllowPrivilegeEscalation: false
+ # Capabilities
+ allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
+ defaultAddCapabilities: []
+ requiredDropCapabilities: []
+ # Host namespaces
+ hostPID: false
+ hostIPC: false
+ hostNetwork: true
+ hostPorts:
+ - min: 0
+ max: 65535
+ # SELinux
+ seLinux:
+ # SELinux is unused in CaaSP
+ rule: 'RunAsAny'
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: flannel
+rules:
+- apiGroups: ['extensions']
+ resources: ['podsecuritypolicies']
+ verbs: ['use']
+ resourceNames: ['psp.flannel.unprivileged']
+- apiGroups:
+ - ""
+ resources:
+ - pods
+ verbs:
+ - get
+- apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - list
+ - watch
+- apiGroups:
+ - ""
+ resources:
+ - nodes/status
+ verbs:
+ - patch
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: flannel
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: flannel
+subjects:
+- kind: ServiceAccount
+ name: flannel
+ namespace: kube-system
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: flannel
+ namespace: kube-system
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: kube-flannel-cfg
+ namespace: kube-system
+ labels:
+ tier: node
+ app: flannel
+data:
+ cni-conf.json: |
+ {
+ "name": "cbr0",
+ "cniVersion": "0.3.1",
+ "plugins": [
+ {
+ "type": "flannel",
+ "delegate": {
+ "hairpinMode": true,
+ "isDefaultGateway": true
+ }
+ },
+ {
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+ }
+ ]
+ }
+ net-conf.json: |
+ {
+ "Network": "10.244.0.0/16",
+ "Backend": {
+ "Type": "vxlan"
+ }
+ }
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: kube-flannel-cloud-ds
+ namespace: kube-system
+ labels:
+ tier: node
+ app: flannel
+spec:
+ selector:
+ matchLabels:
+ app: flannel
+ template:
+ metadata:
+ labels:
+ tier: node
+ app: flannel
+ spec:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: kubernetes.io/os
+ operator: In
+ values:
+ - linux
+ - key: node-role.kubernetes.io/agent
+ operator: DoesNotExist
+ hostNetwork: true
+ priorityClassName: system-node-critical
+ tolerations:
+ - operator: Exists
+ effect: NoSchedule
+ serviceAccountName: flannel
+ initContainers:
+ - name: install-cni
+ image: quay.io/coreos/flannel:v0.14.0
+ command:
+ - cp
+ args:
+ - -f
+ - /etc/kube-flannel/cni-conf.json
+ - /etc/cni/net.d/10-flannel.conflist
+ volumeMounts:
+ - name: cni
+ mountPath: /etc/cni/net.d
+ - name: flannel-cfg
+ mountPath: /etc/kube-flannel/
+ containers:
+ - name: kube-flannel
+ image: quay.io/coreos/flannel:v0.14.0
+ command:
+ - /opt/bin/flanneld
+ args:
+ - --ip-masq
+ - --kube-subnet-mgr
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "50Mi"
+ limits:
+ cpu: "100m"
+ memory: "50Mi"
+ securityContext:
+ privileged: false
+ capabilities:
+ add: ["NET_ADMIN", "NET_RAW"]
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ volumeMounts:
+ - name: run
+ mountPath: /run/flannel
+ - name: flannel-cfg
+ mountPath: /etc/kube-flannel/
+ volumes:
+ - name: run
+ hostPath:
+ path: /run/flannel
+ - name: cni
+ hostPath:
+ path: /etc/cni/net.d
+ - name: flannel-cfg
+ configMap:
+ name: kube-flannel-cfg
+
+```
+
+### kube-flannel-edge.yml
+
+```bash
+# Application scenario: edge node
+---
+apiVersion: policy/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: psp.flannel.unprivileged
+ annotations:
+ seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
+ seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
+ apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
+ apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
+spec:
+ privileged: false
+ volumes:
+ - configMap
+ - secret
+ - emptyDir
+ - hostPath
+ allowedHostPaths:
+ - pathPrefix: "/etc/cni/net.d"
+ - pathPrefix: "/etc/kube-flannel"
+ - pathPrefix: "/run/flannel"
+ readOnlyRootFilesystem: false
+ # Users and groups
+ runAsUser:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ fsGroup:
+ rule: RunAsAny
+ # Privilege Escalation
+ allowPrivilegeEscalation: false
+ defaultAllowPrivilegeEscalation: false
+ # Capabilities
+ allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
+ defaultAddCapabilities: []
+ requiredDropCapabilities: []
+ # Host namespaces
+ hostPID: false
+ hostIPC: false
+ hostNetwork: true
+ hostPorts:
+ - min: 0
+ max: 65535
+ # SELinux
+ seLinux:
+ # SELinux is unused in CaaSP
+ rule: 'RunAsAny'
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: flannel
+rules:
+- apiGroups: ['extensions']
+ resources: ['podsecuritypolicies']
+ verbs: ['use']
+ resourceNames: ['psp.flannel.unprivileged']
+- apiGroups:
+ - ""
+ resources:
+ - pods
+ verbs:
+ - get
+- apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - list
+ - watch
+- apiGroups:
+ - ""
+ resources:
+ - nodes/status
+ verbs:
+ - patch
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: flannel
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: flannel
+subjects:
+- kind: ServiceAccount
+ name: flannel
+ namespace: kube-system
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: flannel
+ namespace: kube-system
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ name: kube-flannel-cfg
+ namespace: kube-system
+ labels:
+ tier: node
+ app: flannel
+data:
+ cni-conf.json: |
+ {
+ "name": "cbr0",
+ "cniVersion": "0.3.1",
+ "plugins": [
+ {
+ "type": "flannel",
+ "delegate": {
+ "hairpinMode": true,
+ "isDefaultGateway": true
+ }
+ },
+ {
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+ }
+ ]
+ }
+ net-conf.json: |
+ {
+ "Network": "10.244.0.0/16",
+ "Backend": {
+ "Type": "vxlan"
+ }
+ }
+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: kube-flannel-edge-ds
+ namespace: kube-system
+ labels:
+ tier: node
+ app: flannel
+spec:
+ selector:
+ matchLabels:
+ app: flannel
+ template:
+ metadata:
+ labels:
+ tier: node
+ app: flannel
+ spec:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: kubernetes.io/os
+ operator: In
+ values:
+ - linux
+ - key: node-role.kubernetes.io/agent
+ operator: Exists
+ hostNetwork: true
+ priorityClassName: system-node-critical
+ tolerations:
+ - operator: Exists
+ effect: NoSchedule
+ serviceAccountName: flannel
+ initContainers:
+ - name: install-cni
+ image: quay.io/coreos/flannel:v0.14.0
+ command:
+ - cp
+ args:
+ - -f
+ - /etc/kube-flannel/cni-conf.json
+ - /etc/cni/net.d/10-flannel.conflist
+ volumeMounts:
+ - name: cni
+ mountPath: /etc/cni/net.d
+ - name: flannel-cfg
+ mountPath: /etc/kube-flannel/
+ containers:
+ - name: kube-flannel
+ image: quay.io/coreos/flannel:v0.14.0
+ command:
+ - /opt/bin/flanneld
+ args:
+ - --ip-masq
+ - --kube-subnet-mgr
+ - --kube-api-url=http://127.0.0.1:10550
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "50Mi"
+ limits:
+ cpu: "100m"
+ memory: "50Mi"
+ securityContext:
+ privileged: false
+ capabilities:
+ add: ["NET_ADMIN", "NET_RAW"]
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ volumeMounts:
+ - name: run
+ mountPath: /run/flannel
+ - name: flannel-cfg
+ mountPath: /etc/kube-flannel/
+ volumes:
+ - name: run
+ hostPath:
+ path: /run/flannel
+ - name: cni
+ hostPath:
+ path: /etc/cni/net.d
+ - name: flannel-cfg
+ configMap:
+ name: kube-flannel-cfg
+
+```
+
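Compared with the cloud DaemonSet (`kube-flannel-cloud-ds`), the edge DaemonSet flips the `node-role.kubernetes.io/agent` affinity from `DoesNotExist` to `Exists` and passes one extra flag, `--kube-api-url=http://127.0.0.1:10550`, so flanneld on the edge talks to edgecore's local MetaServer rather than the remote API server. A quick offline sketch of the argument difference, with the two lists copied from the manifests above:

```shell
# Offline sketch: the flanneld argument lists copied from the two
# DaemonSets above, compared with diff.
cat > cloud-args.txt <<'EOF'
--ip-masq
--kube-subnet-mgr
EOF
cat > edge-args.txt <<'EOF'
--ip-masq
--kube-subnet-mgr
--kube-api-url=http://127.0.0.1:10550
EOF
# diff exits non-zero when the files differ; only the MetaServer flag differs.
diff cloud-args.txt edge-args.txt || true
```
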
+### cloudcore.service
+
+```bash
+# Application scenario: cloud node
+# File path: /usr/lib/systemd/system/cloudcore.service
+[Unit]
+Description=cloudcore.service
+
+[Service]
+Type=simple
+ExecStart=/usr/local/bin/cloudcore
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### cloudcore.yaml
+
+```bash
+# Application scenario: cloud node
+# File path: /etc/kubeedge/config/cloudcore.yaml
+apiVersion: cloudcore.config.kubeedge.io/v1alpha1
+commonConfig:
+ tunnelPort: 10351
+kind: CloudCore
+kubeAPIConfig:
+ burst: 200
+ contentType: application/vnd.kubernetes.protobuf
+ kubeConfig: /root/.kube/config
+ master: ""
+ qps: 100
+modules:
+ cloudHub:
+ advertiseAddress:
+ - 9.63.252.224
+ dnsNames:
+ - ""
+ edgeCertSigningDuration: 365
+ enable: true
+ https:
+ address: 0.0.0.0
+ enable: true
+ port: 10002
+ keepaliveInterval: 30
+ nodeLimit: 1000
+ quic:
+ address: 0.0.0.0
+ enable: false
+ maxIncomingStreams: 10000
+ port: 10001
+ tlsCAFile: /etc/kubeedge/ca/rootCA.crt
+ tlsCAKeyFile: /etc/kubeedge/ca/rootCA.key
+ tlsCertFile: /etc/kubeedge/certs/server.crt
+ tlsPrivateKeyFile: /etc/kubeedge/certs/server.key
+ tokenRefreshDuration: 12
+ unixsocket:
+ address: unix:///var/lib/kubeedge/kubeedge.sock
+ enable: true
+ websocket:
+ address: 0.0.0.0
+ enable: true
+ port: 10000
+ writeTimeout: 30
+ cloudStream:
+ enable: false
+ streamPort: 10003
+ tlsStreamCAFile: /etc/kubeedge/ca/streamCA.crt
+ tlsStreamCertFile: /etc/kubeedge/certs/stream.crt
+ tlsStreamPrivateKeyFile: /etc/kubeedge/certs/stream.key
+ tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
+ tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
+ tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
+ tunnelPort: 10004
+ deviceController:
+ buffer:
+ deviceEvent: 1
+ deviceModelEvent: 1
+ updateDeviceStatus: 1024
+ context:
+ receiveModule: devicecontroller
+ responseModule: cloudhub
+ sendModule: cloudhub
+ enable: true
+ load:
+ updateDeviceStatusWorkers: 1
+ dynamicController:
+ enable: true
+ edgeController:
+ buffer:
+ configMapEvent: 1
+ deletePod: 1024
+ endpointsEvent: 1
+ podEvent: 1
+ queryConfigMap: 1024
+ queryEndpoints: 1024
+ queryNode: 1024
+ queryPersistentVolume: 1024
+ queryPersistentVolumeClaim: 1024
+ querySecret: 1024
+ queryService: 1024
+ queryVolumeAttachment: 1024
+ ruleEndpointsEvent: 1
+ rulesEvent: 1
+ secretEvent: 1
+ serviceAccountToken: 1024
+ serviceEvent: 1
+ updateNode: 1024
+ updateNodeStatus: 1024
+ updatePodStatus: 1024
+ context:
+ receiveModule: edgecontroller
+ responseModule: cloudhub
+ sendModule: cloudhub
+ sendRouterModule: router
+ enable: true
+ load:
+ ServiceAccountTokenWorkers: 4
+ UpdateRuleStatusWorkers: 4
+ deletePodWorkers: 4
+ queryConfigMapWorkers: 4
+ queryEndpointsWorkers: 4
+ queryNodeWorkers: 4
+ queryPersistentVolumeClaimWorkers: 4
+ queryPersistentVolumeWorkers: 4
+ querySecretWorkers: 4
+ queryServiceWorkers: 4
+ queryVolumeAttachmentWorkers: 4
+ updateNodeStatusWorkers: 1
+ updateNodeWorkers: 4
+ updatePodStatusWorkers: 1
+ nodeUpdateFrequency: 10
+ router:
+ address: 0.0.0.0
+ enable: false
+ port: 9443
+ restTimeout: 60
+ syncController:
+ enable: true
+
+```
+
+### edgecore.service
+
+```bash
+# Application scenario: edge node
+# File path: /etc/systemd/system/edgecore.service
+[Unit]
+Description=edgecore.service
+
+[Service]
+Type=simple
+ExecStart=/usr/local/bin/edgecore
+Restart=always
+RestartSec=10
+
+[Install]
+WantedBy=multi-user.target
+```
+
+### edgecore.yaml
+
+```bash
+# Application scenario: edge node
+# File path: /etc/kubeedge/config/edgecore.yaml
+apiVersion: edgecore.config.kubeedge.io/v1alpha1
+database:
+ aliasName: default
+ dataSource: /var/lib/kubeedge/edgecore.db
+ driverName: sqlite3
+kind: EdgeCore
+modules:
+ dbTest:
+ enable: false
+ deviceTwin:
+ enable: true
+ edgeHub:
+ enable: true
+ heartbeat: 15
+ httpServer: https://9.63.252.224:10002
+ projectID: e632aba927ea4ac2b575ec1603d56f10
+ quic:
+ enable: false
+ handshakeTimeout: 30
+ readDeadline: 15
+ server: 9.63.252.227:10001
+ writeDeadline: 15
+ rotateCertificates: true
+ tlsCaFile: /etc/kubeedge/ca/rootCA.crt
+ tlsCertFile: /etc/kubeedge/certs/server.crt
+ tlsPrivateKeyFile: /etc/kubeedge/certs/server.key
+    token: "" # Enter the token obtained from the cloud side.
+ websocket:
+ enable: true
+ handshakeTimeout: 30
+ readDeadline: 15
+ server: 9.63.252.224:10000
+ writeDeadline: 15
+ edgeMesh:
+ enable: false
+ lbStrategy: RoundRobin
+ listenInterface: docker0
+ listenPort: 40001
+ subNet: 9.251.0.0/16
+ edgeStream:
+ enable: false
+ handshakeTimeout: 30
+ readDeadline: 15
+ server: 9.63.252.224:10004
+ tlsTunnelCAFile: /etc/kubeedge/ca/rootCA.crt
+ tlsTunnelCertFile: /etc/kubeedge/certs/server.crt
+ tlsTunnelPrivateKeyFile: /etc/kubeedge/certs/server.key
+ writeDeadline: 15
+ edged:
+ cgroupDriver: cgroupfs
+ cgroupRoot: ""
+ cgroupsPerQOS: true
+ clusterDNS: ""
+ clusterDomain: ""
+ cniBinDir: /opt/cni/bin
+ cniCacheDirs: /var/lib/cni/cache
+ cniConfDir: /etc/cni/net.d
+ concurrentConsumers: 5
+ devicePluginEnabled: false
+ dockerAddress: unix:///var/run/docker.sock
+ edgedMemoryCapacity: 7852396000
+ enable: true
+ enableMetrics: true
+ gpuPluginEnabled: false
+ hostnameOverride: edge.kubeedge
+ imageGCHighThreshold: 80
+ imageGCLowThreshold: 40
+ imagePullProgressDeadline: 60
+ maximumDeadContainersPerPod: 1
+ networkPluginMTU: 1500
+ nodeIP: 9.63.252.227
+ nodeStatusUpdateFrequency: 10
+ podSandboxImage: kubeedge/pause:3.1
+ registerNode: true
+ registerNodeNamespace: default
+ remoteImageEndpoint: unix:///var/run/isulad.sock
+ remoteRuntimeEndpoint: unix:///var/run/isulad.sock
+ runtimeRequestTimeout: 2
+ runtimeType: remote
+ volumeStatsAggPeriod: 60000000000
+ eventBus:
+ enable: true
+ eventBusTLS:
+ enable: false
+ tlsMqttCAFile: /etc/kubeedge/ca/rootCA.crt
+ tlsMqttCertFile: /etc/kubeedge/certs/server.crt
+ tlsMqttPrivateKeyFile: /etc/kubeedge/certs/server.key
+ mqttMode: 2
+ mqttQOS: 0
+ mqttRetain: false
+ mqttServerExternal: tcp://127.0.0.1:1883
+ mqttServerInternal: tcp://127.0.0.1:1884
+ mqttSessionQueueSize: 100
+ metaManager:
+ contextSendGroup: hub
+ contextSendModule: websocket
+ enable: true
+ metaServer:
+ debug: false
+ enable: true
+ podStatusSyncInterval: 60
+ remoteQueryTimeout: 60
+ serviceBus:
+ enable: false
+
+```
+
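The empty `token` field above must be filled with the token generated on the cloud side. Upstream KubeEdge stores it base64-encoded in the `tokensecret` Secret of the `kubeedge` namespace; the decode step is sketched below with a stand-in value, since no cluster is available here:

```shell
# On the cloud node of a real cluster (command per upstream KubeEdge docs):
#   kubectl get secret -n kubeedge tokensecret -o jsonpath='{.data.tokendata}' | base64 -d
# Offline stand-in for the decode step:
encoded=$(printf 'sample-token-value' | base64 | tr -d '\n')
printf '%s' "$encoded" | base64 -d
```

Paste the decoded string into the `token` field and restart edgecore for it to take effect.
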
+### daemon.json
+
+```bash
+# Application scenarios: cloud and edge nodes
+# File path: /etc/isulad/daemon.json
+{
+ "group": "isula",
+ "default-runtime": "lcr",
+ "graph": "/var/lib/isulad",
+ "state": "/var/run/isulad",
+ "engine": "lcr",
+ "log-level": "ERROR",
+ "pidfile": "/var/run/isulad.pid",
+ "log-opts": {
+ "log-file-mode": "0600",
+ "log-path": "/var/lib/isulad",
+ "max-file": "1",
+ "max-size": "30KB"
+ },
+ "log-driver": "stdout",
+ "container-log": {
+ "driver": "json-file"
+ },
+ "hook-spec": "/etc/default/isulad/hooks/default.json",
+ "start-timeout": "2m",
+ "storage-driver": "overlay2",
+ "storage-opts": [
+ "overlay2.override_kernel_check=true"
+ ],
+ "registry-mirrors": [
+ "docker.io"
+ ],
+ "insecure-registries": [
+ "k8s.gcr.io",
+ "quay.io",
+ "hub.oepkgs.net"
+ ],
+ "pod-sandbox-image": "k8s.gcr.io/pause:3.2", # Set this parameter to kubeedge/pause:3.1 for edge nodes.
+ "websocket-server-listening-port": 10351,
+ "native.umask": "secure",
+ "network-plugin": "cni",
+ "cni-bin-dir": "/opt/cni/bin",
+ "cni-conf-dir": "/etc/cni/net.d",
+ "image-layer-check": false,
+ "use-decrypted-key": true,
+ "insecure-skip-verify-enforce": false
+}
+```
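Note that strict JSON does not allow comments, so the `# Set this parameter...` note above must be removed from the real `/etc/isulad/daemon.json` before iSulad reads it. A naive sketch of stripping such a trailing note and validating the result, using a local sample file as an assumption:

```shell
# Sample carrying the same style of inline note as the listing above.
cat > daemon-sample.json <<'EOF'
{
  "pod-sandbox-image": "k8s.gcr.io/pause:3.2", # Set to kubeedge/pause:3.1 on edge nodes.
  "network-plugin": "cni"
}
EOF
# Strip everything from " #" to end of line (naive: assumes no " #" inside strings),
# then validate with the Python stdlib JSON tool.
sed 's/ #.*$//' daemon-sample.json > daemon-clean.json
python3 -m json.tool daemon-clean.json >/dev/null && echo "valid JSON"
```
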
diff --git a/docs/en/docs/KubeEdge/kubeedge-usage-guide.md b/docs/en/docs/KubeEdge/kubeedge-usage-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..b36dacfb21c78c7d5c70ea218233ebc42c527b37
--- /dev/null
+++ b/docs/en/docs/KubeEdge/kubeedge-usage-guide.md
@@ -0,0 +1,216 @@
+# KubeEdge Usage Guide
+
+KubeEdge extends the capabilities of Kubernetes to edge scenarios and provides infrastructure support for the network, application deployment, and metadata synchronization between the cloud and the edge. The usage of KubeEdge is the same as that of Kubernetes. In addition, KubeEdge supports the management and control of edge devices. The following example describes how to use KubeEdge to implement edge-cloud synergy.
+
+## 1. Preparations
+
+**Example: KubeEdge Counter Demo**
+
+The counter is a pseudo device. You can run this demo without any additional physical devices. The counter runs on the edge side. You can use the web interface on the cloud side to control the counter and get the counter value. Click the link below to view the schematic diagram.
+
+For details, see https://github.com/kubeedge/examples/tree/master/kubeedge-counter-demo.
+
+**1) This demo requires KubeEdge v1.2.1 or later. In this example, KubeEdge v1.8.0 is used.**
+
+```
+[root@ke-cloud ~]# kubectl get node
+NAME STATUS ROLES AGE VERSION
+ke-cloud Ready master 13h v1.20.2
+ke-edge1 Ready agent,edge 64s v1.19.3-kubeedge-v1.8.0
+
+Note: In this document, the edge node ke-edge1 is used for verification. If you perform verification by referring to this document, you need to change the edge node name based on your actual deployment.
+```
+
+**2) Ensure that the following configuration items are enabled for the Kubernetes API server:**
+
+```
+--insecure-port=8080
+--insecure-bind-address=0.0.0.0
+```
+You can modify the `/etc/kubernetes/manifests/kube-apiserver.yaml` file, and then restart the Pod of the Kubernetes API server component to make the modifications take effect.
+
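Because kube-apiserver runs as a static Pod, the kubelet re-creates it automatically once its manifest changes; before relying on that, it helps to confirm both flags are actually present. A minimal sketch, with a sample fragment standing in for the real `/etc/kubernetes/manifests/kube-apiserver.yaml`:

```shell
# Sample fragment standing in for the real kube-apiserver manifest.
cat > kube-apiserver-sample.yaml <<'EOF'
    - command:
        - kube-apiserver
        - --insecure-port=8080
        - --insecure-bind-address=0.0.0.0
EOF
# Verify both flags before (or after) the static Pod restarts.
for flag in --insecure-port=8080 --insecure-bind-address=0.0.0.0; do
  grep -q -- "$flag" kube-apiserver-sample.yaml && echo "found $flag"
done
```
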
+**3) Download the sample code:**
+
+```
+[root@ke-cloud ~]# git clone https://github.com/kubeedge/examples.git $GOPATH/src/github.com/kubeedge/examples
+```
+
+## 2. Creating the Device Model and Device
+
+**1) Create the device model.**
+```
+[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds
+[root@ke-cloud crds~]# kubectl create -f kubeedge-counter-model.yaml
+```
+
+**2) Create the device.**
+
+Modify **matchExpressions** as required.
+
+```
+[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds
+[root@ke-cloud crds~]# vim kubeedge-counter-instance.yaml
+apiVersion: devices.kubeedge.io/v1alpha1
+kind: Device
+metadata:
+ name: counter
+ labels:
+ description: 'counter'
+ manufacturer: 'test'
+spec:
+ deviceModelRef:
+ name: counter-model
+ nodeSelector:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: 'kubernetes.io/hostname'
+ operator: In
+ values:
+ - ke-edge1
+
+status:
+ twins:
+ - propertyName: status
+ desired:
+ metadata:
+ type: string
+ value: 'OFF'
+ reported:
+ metadata:
+ type: string
+ value: '0'
+
+[root@ke-cloud crds~]# kubectl create -f kubeedge-counter-instance.yaml
+```
+
+## 3. Deploying the Cloud Application
+
+**1) Modify the code.**
+
+The cloud application **web-controller-app** controls the edge application **pi-counter-app**. The default listening port of the cloud application is 80. Change the port number to 8089.
+```
+[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/web-controller-app
+[root@ke-cloud web-controller-app~]# vim main.go
+package main
+
+import (
+ "github.com/astaxie/beego"
+ "github.com/kubeedge/examples/kubeedge-counter-demo/web-controller-app/controller"
+)
+
+func main() {
+ beego.Router("/", new(controllers.TrackController), "get:Index")
+ beego.Router("/track/control/:trackId", new(controllers.TrackController), "get,post:ControlTrack")
+
+ beego.Run(":8089")
+}
+```
+
+**2) Build the image.**
+
+Note: When building the image, copy the source code to the path specified by **GOPATH**. Disable Go modules if they are enabled.
+
+```
+[root@ke-cloud web-controller-app~]# make all
+[root@ke-cloud web-controller-app~]# make docker
+```
+
+**3) Deploy web-controller-app.**
+
+```
+[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds
+[root@ke-cloud crds~]# kubectl apply -f kubeedge-web-controller-app.yaml
+```
+
+## 4. Deploying the Edge Application
+
+The **pi-counter-app** application on the edge is controlled by the cloud application. The edge application communicates with the MQTT server to perform simple counting.
+
+**1) Modify the code and build the image.**
+
+Change the value of **GOARCH** to **amd64** in `Makefile` to run the container.
+
+```
+[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/counter-mapper
+[root@ke-cloud counter-mapper~]# vim Makefile
+.PHONY: all pi-execute-app docker clean
+all: pi-execute-app
+
+pi-execute-app:
+ GOARCH=amd64 go build -o pi-counter-app main.go
+
+docker:
+ docker build . -t kubeedge/kubeedge-pi-counter:v1.0.0
+
+clean:
+ rm -f pi-counter-app
+
+[root@ke-cloud counter-mapper~]# make all
+[root@ke-cloud counter-mapper~]# make docker
+```
+
+**2) Deploy pi-counter-app.**
+
+```
+[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds
+[root@ke-cloud crds~]# kubectl apply -f kubeedge-pi-counter-app.yaml
+
+Note: To prevent the Pod deployment from being stuck in the `ContainerCreating` state, run the docker save, scp, and docker load commands to distribute the image to the edge node.
+
+[root@ke-cloud ~]# docker save -o kubeedge-pi-counter.tar kubeedge/kubeedge-pi-counter:v1.0.0
+[root@ke-cloud ~]# scp kubeedge-pi-counter.tar root@192.168.1.56:/root
+[root@ke-edge1 ~]# docker load -i kubeedge-pi-counter.tar
+```
+
+## 5. Trying the Demo
+
+Now, the KubeEdge Demo is deployed on the cloud and edge as follows:
+
+```
+[root@ke-cloud ~]# kubectl get pods -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+kubeedge-counter-app-758b9b4ffd-f8qjj 1/1 Running 0 26m 192.168.1.66 ke-cloud
+kubeedge-pi-counter-c69698d6-rb4xz 1/1 Running 0 2m 192.168.1.56 ke-edge1
+```
+
+Let's test the running effect of the Demo.
+
+**1) Execute the ON command.**
+
+On the web page, select **ON** and click **Execute**. You can run the following command on the edge node to view the execution result:
+```
+[root@ke-edge1 ~]# docker logs -f counter-container-id
+```
+
+**2) Check the counter's STATUS.**
+
+On the web page, select **STATUS** and click **Execute**. The current counter status is displayed on the web page.
+
+**3) Execute the OFF command.**
+
+On the web page, select **OFF** and click **Execute**. You can run the following command on the edge node to view the execution result:
+```
+[root@ke-edge1 ~]# docker logs -f counter-container-id
+```
+
+## 6. Others
+
+**1) For more official KubeEdge examples, visit https://github.com/kubeedge/examples.**
+
+| Name | Description |
+|---|---|
+| [LED-RaspBerry-Pi](led-raspberrypi/README.md) | Controlling an LED light with a Raspberry Pi using the KubeEdge platform. |
+| [Data Analysis @ Edge](apache-beam-analysis/README.md) | Analyzing data at the edge using Apache Beam and KubeEdge. |
+| [Security@Edge](security-demo/README.md) | Security at the edge using SPIRE for identity management. |
+| [Bluetooth-CC2650-demo](bluetooth-CC2650-demo/README.md) | Controlling a CC2650 SensorTag Bluetooth device using the KubeEdge platform. |
+| [Play Music @Edge through WeChat](wechat-demo/README.md) | Playing music at the edge based on WeChat and KubeEdge. |
+| [Play Music @Edge through Web](web-demo/README.md) | Playing music at the edge based on the web and KubeEdge. |
+| [Collecting temperature @Edge](temperature-demo/README.md) | Collecting temperature at the edge based on KubeEdge. |
+| [Control pseudo device counter and collect data](kubeedge-counter-demo/README.md) | Controlling a pseudo device counter and collecting data based on KubeEdge. |
+| [Play Music @Edge through Twitter](ke-twitter-demo/README.md) | Playing music at the edge based on Twitter and KubeEdge. |
+| [Control Zigbee @Edge through cloud](kubeedge-edge-ai-application/README.md) | Face detection on the cloud using OpenCV, used to control Zigbee at the edge via KubeEdge. |
+
+**2) Use EdgeMesh to discover edge services.**
+
+https://github.com/kubeedge/edgemesh
+
+**3) Customize the cloud-edge message route.**
+
+https://kubeedge.io/en/docs/developer/custom_message_deliver/
diff --git a/docs/en/docs/KubeEdge/overview.md b/docs/en/docs/KubeEdge/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b81038bda1ac5fb4618c68678351f99e1b3c63b
--- /dev/null
+++ b/docs/en/docs/KubeEdge/overview.md
@@ -0,0 +1,3 @@
+# KubeEdge User Guide
+
+This document describes how to deploy and use the KubeEdge edge computing platform for users and administrators.
\ No newline at end of file
diff --git a/docs/en/docs/Kubernetes/eggo-automatic-deployment.md b/docs/en/docs/Kubernetes/eggo-automatic-deployment.md
index dd217b259e19855853ea9b88a09d3372887f7893..afd915506d05010ee49497bf948e60d70702453f 100644
--- a/docs/en/docs/Kubernetes/eggo-automatic-deployment.md
+++ b/docs/en/docs/Kubernetes/eggo-automatic-deployment.md
@@ -1,4 +1,4 @@
-# Automatic deployment
+# Automatic Deployment
Manual deployment of Kubernetes clusters requires manually deploying various components. This is both time- and labor-consuming, especially during large scale Kubernetes cluster deployment, as low efficiency and errors are likely to surface. To solve the problem, openEuler launched the Kubernetes cluster deployment tool in version 21.09. This highly flexible tool provides functions such as automatic deployment and deployment process tracking of large scale Kubernetes clusters.
diff --git a/docs/en/docs/Kubernetes/eggo-dismantling-a-cluster.md b/docs/en/docs/Kubernetes/eggo-dismantling-a-cluster.md
index cb5c90b580c7ad72d28cf62fe6e6169c6fc5bc17..f99f6b10593b52fa524466aa950200883e0102ac 100644
--- a/docs/en/docs/Kubernetes/eggo-dismantling-a-cluster.md
+++ b/docs/en/docs/Kubernetes/eggo-dismantling-a-cluster.md
@@ -1,4 +1,4 @@
-# Dismantling a cluster.
+# Dismantling a Cluster
When service requirements decrease and the existing number of nodes is not required, you can delete nodes from the cluster to save system resources and reduce costs. Or, when the service does not require a cluster, you can delete the entire cluster.
diff --git a/docs/en/docs/StratoVirt/Interconnect_isula.md b/docs/en/docs/StratoVirt/interconnect_isula.md
similarity index 96%
rename from docs/en/docs/StratoVirt/Interconnect_isula.md
rename to docs/en/docs/StratoVirt/interconnect_isula.md
index 426cb014ac02a76105e7f2ba322ef654a513c211..d113fb8aaaa57b65da09ed00c9c26d163f13dadf 100644
--- a/docs/en/docs/StratoVirt/Interconnect_isula.md
+++ b/docs/en/docs/StratoVirt/interconnect_isula.md
@@ -1,10 +1,10 @@
-# Connecting to the iSula Security Container
+# Connecting to the iSula Secure Container
## Overview
To provide a better isolation environment for containers and improve system security, you can use the iSula secure container, that is, connect StratoVirt to the iSula secure container.
-## Connecting to the iSula Security Container
+## Connecting to the iSula Secure Container
### **Prerequisites**
@@ -21,7 +21,7 @@ The following describes how to install and configure iSulad and kata-containers.
2. Create and configure the storage:
- You need to plan the drive, for example, /dev/sdx, which will be formatted.
+ You need to plan the drive, for example, /dev/sdxx, which will be formatted.
```shell
# pvcreate /dev/sdxx
diff --git a/docs/en/docs/desktop/DDE-User-Manual.md b/docs/en/docs/desktop/DDE-user-guide.md
old mode 100755
new mode 100644
similarity index 99%
rename from docs/en/docs/desktop/DDE-User-Manual.md
rename to docs/en/docs/desktop/DDE-user-guide.md
index 1472942cbcd2445a4731108094f6dadd8304db8d..cc128fda58f1e04eaaf80579b07dfdd3a420a268
--- a/docs/en/docs/desktop/DDE-User-Manual.md
+++ b/docs/en/docs/desktop/DDE-user-guide.md
@@ -1,6 +1,6 @@
-# DDE Desktop Environment User Manual
+# DDE Desktop Environment
## Overview
DDE desktop environment is an elegant, secure, reliable and easy to use GUI comprised of the desktop, dock, launcher and control center. Acting as the key basis for our operating system, its main interface is shown as below.
@@ -17,7 +17,7 @@ When you enter DDE for the very first time, a welcome program will automatically
## Desktop
-Desktop is the main screen you see after logging in. On the desktop, you can create a new file/folder, sort files, open in terminal, set wallpaper and screensaver and etc. You can also add shortcuts for applications on desktop by using [Send to desktop](#Set App Shortcut).
+Desktop is the main screen you see after logging in. On the desktop, you can create a new file/folder, sort files, open in terminal, set wallpaper and screensaver and etc. You can also add shortcuts for applications on desktop by using [Send to desktop](#set-app-shortcut).

diff --git a/docs/en/docs/desktop/Xfce_userguide.md b/docs/en/docs/desktop/Xfce_userguide.md
index ea6396320d845f3c7f44ac8d2bdfb95a7ff6d4a0..fb567c913cf3425f407864989936a3a28851fada 100644
--- a/docs/en/docs/desktop/Xfce_userguide.md
+++ b/docs/en/docs/desktop/Xfce_userguide.md
@@ -32,7 +32,7 @@
* [4.1.5 Application Finder](#4.1.5 Application Finder)
* [4.1.6 User Home Directory](#4.1. 6 User Home Directory)
-# Xfce User Guide
+# Xfce Desktop Environment
## 1\. Overview
diff --git a/docs/en/docs/desktop/dde.md b/docs/en/docs/desktop/dde.md
index 96d37a7b4d8c7a4544454fce5ca5368845a560a7..0d90e6cd0e367734bfe7070e320e029284e149ae 100644
--- a/docs/en/docs/desktop/dde.md
+++ b/docs/en/docs/desktop/dde.md
@@ -1,3 +1,25 @@
# DDE User Guide
-This section describes how to install and use the Deepin Desktop Environment (DDE).
\ No newline at end of file
+This section describes how to install and use the Deepin Desktop Environment (DDE).
+
+## FAQs
+
+### 1. After the DDE is installed, why are the computer and recycle bin icons not displayed on the desktop when I log in as the **root** user?
+
+* Issue
+
+  After the DDE is installed, the computer and recycle bin icons are not displayed on the desktop when a user logs in as the **root** user.
+
+* Cause
+
+ The **root** user is created before the DDE is installed. During the installation, the DDE does not add desktop icons for existing users. This issue does not occur if the user is created after the DDE is installed.
+
+* Solution
+
+ Right-click the icon in the launcher and choose **Send to Desktop**. The icon functions the same as the one added by DDE.
+
+ 
+
+
diff --git a/docs/en/docs/desktop/figures/dde-1.png b/docs/en/docs/desktop/figures/dde-1.png
new file mode 100644
index 0000000000000000000000000000000000000000..fb1d5177c39262ed182f10a57fdae850d007eeb1
Binary files /dev/null and b/docs/en/docs/desktop/figures/dde-1.png differ
diff --git a/docs/en/docs/desktop/figures/dde-2.png b/docs/en/docs/desktop/figures/dde-2.png
new file mode 100644
index 0000000000000000000000000000000000000000..be5d296937bd17b9646b32c80934aa76738027af
Binary files /dev/null and b/docs/en/docs/desktop/figures/dde-2.png differ
diff --git a/docs/en/docs/desktop/Install_XFCE.md b/docs/en/docs/desktop/installing-Xfce.md
similarity index 98%
rename from docs/en/docs/desktop/Install_XFCE.md
rename to docs/en/docs/desktop/installing-Xfce.md
index a1cc296893e10ff28b794a74fcc8505866b6bea7..d0cf26e6afedb49a36bbe05e7bc56b53a6e4370d 100644
--- a/docs/en/docs/desktop/Install_XFCE.md
+++ b/docs/en/docs/desktop/installing-Xfce.md
@@ -1,4 +1,4 @@
-# Xfce Installation Guide
+# Xfce Installation
Xfce is a lightweight Linux desktop. In the current version, all components have been updated from GTK2 to GTK3 and from D-Dbus Glib to GDBus. Most components support GObject Introspection (GI), which is used to generate and parse the API meta information of the C program library, so that the dynamic language (or managed language) can be bound to the program library based on C + GObject. In the current version, user experience is optimized, new features are added, and a large number of bugs are fixed. Xfce occupies fewer memory and CPU resources than other UIs (GNOME and KDE), providing smoother and more efficient user experience.
diff --git a/docs/en/docs/desktop/install-deploy-HA.md b/docs/en/docs/desktop/installing-and-deploying-HA.md
similarity index 82%
rename from docs/en/docs/desktop/install-deploy-HA.md
rename to docs/en/docs/desktop/installing-and-deploying-HA.md
index 14f816b222d6a570902c8515309cfa8649468701..cd7d20790bf271a9850d449b0242147851e5da59 100644
--- a/docs/en/docs/desktop/install-deploy-HA.md
+++ b/docs/en/docs/desktop/installing-and-deploying-HA.md
@@ -3,20 +3,20 @@
This chapter describes how to install and deploy an HA cluster.
-- [Installing and Deploying HA](#Installing and Deploying HA)
- - [Installation and Deployment](#Installation and Deployment)
- - [Modifying the Host Name and the /etc/hosts File](#Modifying the Host Name and the etchosts File)
- - [Configuring the Yum Source](# Configure the Yum Source)
- - [Installing the HA Software Package Components](#Installing the HA Software Package Components)
- - [Setting the hacluster User Password](#Setting the hacluster User Password)
- - [Modifying the `/etc/corosync/corosync.conf` File](#Modify the etccorosynccorosyncconf File)
- - [Managing the Services](#Managing the Services)
- - [Disabling the Firewall](#Disabling the Firewall)
- - [Managing the pcs Service](#Managing the pcs Service)
- - [Managing the Pacemaker Service](#Managing the Pacemaker Service)
- - [Managing the Corosync Service](#Managing the Corosync Service)
- - [Performing Node Authentication](#Performing Node Authentication)
- - [Accessing the Front-End Management Platform](#Accessing the Front-End Management Platform)
+- [Installing and Deploying HA](#installing-and-deploying-ha)
+ - [Installation and Deployment](#installation-and-deployment)
+ - [Modifying the Host Name and the /etc/hosts File](#modifying-the-host-name-and-the-etchosts-file)
+    - [Configuring the Yum Source](#configure-the-yum-source)
+ - [Installing the HA Software Package Components](#installing-the-ha-software-package-components)
+ - [Setting the hacluster User Password](#setting-the-hacluster-user-password)
+ - [Modifying the `/etc/corosync/corosync.conf` File](#modify-the-etccorosynccorosyncconf-file)
+ - [Managing the Services](#managing-the-services)
+ - [Disabling the Firewall](#disabling-the-firewall)
+ - [Managing the pcs Service](#managing-the-pcs-service)
+ - [Managing the Pacemaker Service](#managing-the-pacemaker-service)
+ - [Managing the Corosync Service](#managing-the-corosync-service)
+ - [Performing Node Authentication](#performing-node-authentication)
+ - [Accessing the Front-End Management Platform](#accessing-the-front-end-management-platform)
## Installation and Deployment
diff --git a/docs/en/docs/desktop/install-DDE.md b/docs/en/docs/desktop/installling-DDE.md
similarity index 94%
rename from docs/en/docs/desktop/install-DDE.md
rename to docs/en/docs/desktop/installling-DDE.md
index 05a05125a3867f2b12897901f08bcddad9a24274..f96f3e29edf8ac3747c67fe7653701e1100f3930 100644
--- a/docs/en/docs/desktop/install-DDE.md
+++ b/docs/en/docs/desktop/installling-DDE.md
@@ -1,31 +1,31 @@
-# DDE installation
-#### Introduction
-
-DDE is a powerful desktop environment developed by UnionTech Team. Contains dozens of powerful desktop applications, which are truly self-developed desktop products.
-
-#### installation method
-
-1. [download](https://openeuler.org/zh/download/) openEuler ISO and install the OS.
-2. update the software source
-```bash
-sudo dnf update
-```
-3. install DDE
-```bash
-sudo dnf install dde
-```
-4. set to start with a graphical interface
-```bash
-sudo systemctl set-default graphical.target
-```
-5. reboot
-```bash
-sudo reboot
-```
-6. After the restart is complete, use the user created during the installation process or the openeuler user to log in to the desktop.
-
- > dde cannot log in with root account
- > dde has built-in openeuler user, the password of this user is openeuler
-
-Now you can use dde.
-
+# DDE Installation
+#### Introduction
+
+DDE is a powerful desktop environment developed by the UnionTech team. It contains dozens of powerful desktop applications, which are truly self-developed products.
+
+#### Installation Method
+
+1. [Download](https://openeuler.org/zh/download/) the openEuler ISO and install the OS.
+2. Update the software source:
+```bash
+sudo dnf update
+```
+3. Install DDE:
+```bash
+sudo dnf install dde
+```
+4. Set the system to start with the graphical interface:
+```bash
+sudo systemctl set-default graphical.target
+```
+5. Reboot:
+```bash
+sudo reboot
+```
+6. After the restart is complete, log in to the desktop as the user created during installation or as the built-in openeuler user.
+
+   > DDE does not support login as the root user.
+   > The password of the built-in openeuler user is openeuler.
+
+Now you can use DDE.
+
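Before rebooting in step 5 above, it can be useful to confirm that the default boot target was actually changed. The following is a minimal sketch, assuming a systemd-based system such as openEuler (on systems without `systemctl`, the check falls back to reporting "unknown"):

```shell
# Check which target the system will boot into (sketch; assumes systemd).
target=$(systemctl get-default 2>/dev/null || echo "unknown")
if [ "$target" = "graphical.target" ]; then
    echo "graphical boot enabled"
else
    echo "default target is $target"
fi
```

If the output is not `graphical boot enabled`, rerun `sudo systemctl set-default graphical.target` before rebooting.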
diff --git a/docs/en/docs/desktop/install-UKUI.md b/docs/en/docs/desktop/installing-UKUI.md
old mode 100755
new mode 100644
similarity index 97%
rename from docs/en/docs/desktop/install-UKUI.md
rename to docs/en/docs/desktop/installing-UKUI.md
index 6fcb824248c13f0811e04664f5ff905e55640521..7dac5664d2525659aa87afb26666aa7c79470b75
--- a/docs/en/docs/desktop/install-UKUI.md
+++ b/docs/en/docs/desktop/installing-UKUI.md
@@ -1,21 +1,21 @@
-# UKUI installation
-UKUI is a Linux desktop built by the KylinSoft software team over the years, primarily based on GTK and QT. Compared to other UI interfaces, UKUI is easy to use. The components of UKUI are small and low coupling, can run alone without relying on other suites. It can provide user a friendly and efficient experience.
-
-UKUI supports both x86_64 and aarch64 architectures.
-
-We recommend you create a new administrator user before install UKUI.
-
-1.download openEuler 21.09 and update the software source.
-```
-sudo dnf update
-```
-2.install UKUI.
-```
-sudo dnf install ukui
-```
-3.If you want to start with graphical interface after confirming the installation, please run this code and reboot(`reboot`).
-```
-systemctl set-default graphical.target
-```
-At present, UKUI version is still constantly updated. Please check the latest installation method :
-[https://gitee.com/openkylin/ukui-issues](https://gitee.com/openkylin/ukui-issues)
+# UKUI Installation
+UKUI is a Linux desktop environment that the KylinSoft team has built over many years, based primarily on GTK and Qt. Compared with other desktop interfaces, UKUI is easy to use. Its components are small and loosely coupled, can run on their own without depending on other suites, and provide a friendly and efficient user experience.
+
+UKUI supports both the x86_64 and AArch64 architectures.
+
+We recommend that you create a new administrator user before installing UKUI.
+
+1. Download openEuler 21.09 and update the software source:
+```
+sudo dnf update
+```
+2. Install UKUI:
+```
+sudo dnf install ukui
+```
+3. To start with the graphical interface after the installation, run the following command and then reboot (`reboot`):
+```
+sudo systemctl set-default graphical.target
+```
+UKUI is still under active development. For the latest installation method, see
+[https://gitee.com/openkylin/ukui-issues](https://gitee.com/openkylin/ukui-issues)
diff --git a/docs/en/docs/desktop/xfce.md b/docs/en/docs/desktop/xfce.md
index f7563d6532f9c442c2a62b0e71cf8d0d22076d01..1c8485f2d64a9553483bfb6bb5b9e2bd60147e7b 100644
--- a/docs/en/docs/desktop/xfce.md
+++ b/docs/en/docs/desktop/xfce.md
@@ -1,3 +1,3 @@
# Xfce User Guide
-This section describes how to install and use theXfce.
\ No newline at end of file
+This section describes how to install and use Xfce.
\ No newline at end of file
diff --git a/docs/en/menu/index.md b/docs/en/menu/index.md
index 2171b9ce9b2790f596ce499f2861e6160fb46364..45a09dcf9fd5fcbd4ef3278379066ab135ab8d4c 100644
--- a/docs/en/menu/index.md
+++ b/docs/en/menu/index.md
@@ -74,8 +74,7 @@ headless: true
- [Preparing the Environment]({{< relref "./docs/StratoVirt/Prepare_env.md" >}})
- [Configuring a VM]({{< relref "./docs/StratoVirt/VM_configuration.md" >}})
- [VM Management]({{< relref "./docs/StratoVirt/VM_management.md" >}})
- - [Interconnecting with the iSula Secure Container]({{< relref "./docs/StratoVirt/Interconnect_isula.md" >}})
-- [Container User Guide]({{< relref "./docs/Container/container.md" >}})
+ - [Connecting to the iSula Secure Container]({{< relref "./docs/StratoVirt/connecting-to-the-isula-secure-container.md" >}})
- [iSulad Container Engine]({{< relref "./docs/Container/isulad-container-engine.md" >}})
- [Installation, Upgrade and Uninstallation]({{< relref "./docs/Container/installation-upgrade-Uninstallation.md" >}})
- [Installation and Configuration]({{< relref "./docs/Container/installation-configuration.md" >}})
@@ -160,19 +159,23 @@ headless: true
    - [Installing etcd]({{< relref "./docs/Kubernetes/installing-etcd.md" >}})
- [Deploying Components on the Control Plane]({{< relref "./docs/Kubernetes/deploying-control-plane-components.md" >}})
- [Deploying a Node Component]({{< relref "./docs/Kubernetes/deploying-a-node-component.md" >}})
+ - [Automatic Cluster Deployment]({{< relref "./docs/Kubernetes/eggo-automatic-deployment.md" >}})
+ - [Tool Introduction]({{< relref "./docs/Kubernetes/eggo-tool-introduction.md" >}})
+ - [Deploying a Cluster]({{< relref "./docs/Kubernetes/eggo-deploying-a-cluster.md" >}})
+ - [Dismantling a Cluster]({{< relref "./docs/Kubernetes/eggo-dismantling-a-cluster.md" >}})
- [Running the Test Pod]({{< relref "./docs/Kubernetes/running-the-test-pod.md" >}})
- [Third-Party Software Deployment Guide]({{< relref "./docs/thirdparty_migration/thidrparty.md" >}})
- [OpenStack Victoria Deployment Guide]({{< relref "./docs/thirdparty_migration/OpenStack-victoria.md" >}})
- [Installing and Deploying an HA Cluster]({{< relref "./docs/thirdparty_migration/installha.md" >}})
- [Desktop Environment User Guide]({{< relref "./docs/desktop/desktop.md" >}})
- [UKUI]({{< relref "./docs/desktop/ukui.md" >}})
- - [Installation UKUI]({{< relref "./docs/desktop/install-UKUI.md" >}})
+ - [Installing UKUI]({{< relref "./docs/desktop/installing-UKUI.md" >}})
- [UKUI User Guide]({{< relref "./docs/desktop/UKUI-user-guide.md" >}})
- [DDE]({{< relref "./docs/desktop/dde.md" >}})
- - [install-DDE]({{< relref "./docs/desktop/install-DDE.md" >}})
- - [DDE User Guide]({{< relref "./docs/desktop/DDE-User-Manual.md" >}})
+ - [Installing DDE]({{< relref "./docs/desktop/installing-DDE.md" >}})
+ - [DDE User Guide]({{< relref "./docs/desktop/DDE-user-guide.md" >}})
- [XFCE]({{< relref "./docs/desktop/xfce.md" >}})
- - [Xfce Installation Guide]({{< relref "./docs/desktop/Install_XFCE.md" >}})
+ - [Xfce Installation Guide]({{< relref "./docs/desktop/installing-Xfce.md" >}})
- [Xfce User Guide]({{< relref "./docs/desktop/Xfce_userguide.md" >}})
- [Toolset User Guide]({{< relref "./docs/userguide/overview.md" >}})
- [patch-tracking]({{< relref "./docs/userguide/patch-tracking.md" >}})