even_deny_root
diff --git a/docs/en/docs/StratoVirt/Install_StratoVirt.md b/docs/en/docs/StratoVirt/Install_StratoVirt.md
index b9a9fd8a19c52b9d938e248ee42d75d7af938541..4751e1131703fe67f0b865e685a5e202193e85d8 100644
--- a/docs/en/docs/StratoVirt/Install_StratoVirt.md
+++ b/docs/en/docs/StratoVirt/Install_StratoVirt.md
@@ -31,7 +31,7 @@ To use StratoVirt virtualization, it is necessary to install StratoVirt. Before
```
$ stratovirt -version
- StratoVirt 2.0.0
+ StratoVirt 2.1.0
```
diff --git a/docs/en/docs/StratoVirt/Prepare_env.md b/docs/en/docs/StratoVirt/Prepare_env.md
index 55689ef71286e37fc7fa38c433ee94d42742a8c1..54f80ed274b0a0e438c5decdfcfebdaed29abded 100644
--- a/docs/en/docs/StratoVirt/Prepare_env.md
+++ b/docs/en/docs/StratoVirt/Prepare_env.md
@@ -4,7 +4,7 @@
## Usage
- StratoVirt can run on VMs with the x86_64 or AArch64 processor architecture.
-- You are advised to compile, debug, and deploy StratoVirt on openEuler 21.09.
+- You are advised to compile, debug, and deploy StratoVirt on openEuler 22.03 LTS.
- StratoVirt can run with non-root permissions.
## Environment Requirements
@@ -50,17 +50,17 @@ StratoVirt of the current version supports only the PE kernel image of the x86_6
1. Run the following commands to obtain the kernel source code of openEuler:
```
- $ git clone https://gitee.com/openeuler/kernel
+ $ git clone https://gitee.com/openeuler/kernel.git
$ cd kernel
```
-2. Run the following command to check and switch to the kernel version 5.10:
+2. Run the following command to check out the openEuler-22.03-LTS kernel branch:
```
- $ git checkout kernel-5.10
+ $ git checkout openEuler-22.03-LTS
```
-3. Configure and compile the Linux kernel. It is better to use the recommended configuration file ([Obtain configuration file](https://gitee.com/openeuler/stratovirt/tree/master/docs/kernel_config)). Copy it to the kernel directory, and rename it as **.config**. You can also run the following command to configure the kernel as prompted:
+3. Configure and compile the Linux kernel. You are advised to use the recommended configuration file ([obtain the configuration file](https://gitee.com/openeuler/stratovirt/tree/master/docs/kernel_config)). Copy it to the kernel directory and rename it **.config**, then run `make olddefconfig` to update it to the latest default configuration (otherwise, some options may need to be selected manually during the subsequent compilation). You can also run the following command to configure the kernel as prompted:
```
$ make menuconfig
@@ -107,21 +107,21 @@ The rootfs image is a file system image. When StratoVirt is started, the ext4 im
4. Obtain the latest alpine-mini rootfs of the corresponding processor architecture.
- - If the AArch64 processor architecture is used, run the following command:
+   - If the AArch64 processor architecture is used, you can obtain the latest rootfs from the [Alpine](http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/) website, for example, **alpine-minirootfs-3.16.0-aarch64.tar.gz**. The reference commands are as follows:
```
- $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/aarch64/alpine-minirootfs-3.12.0-aarch64.tar.gz
- $ tar -zxvf alpine-minirootfs-3.12.0-aarch64.tar.gz
- $ rm alpine-minirootfs-3.12.0-aarch64.tar.gz
+ $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/aarch64/alpine-minirootfs-3.16.0-aarch64.tar.gz
+ $ tar -zxvf alpine-minirootfs-3.16.0-aarch64.tar.gz
+ $ rm alpine-minirootfs-3.16.0-aarch64.tar.gz
```
- - If the x86_64 processor architecture is used, run the following command:
+   - If the x86_64 processor architecture is used, you can obtain the latest rootfs from the [Alpine](http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/) website, for example, **alpine-minirootfs-3.16.0-x86_64.tar.gz**. The reference commands are as follows:
```
- $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/alpine-minirootfs-3.12.0-x86_64.tar.gz
- $ tar -zxvf alpine-minirootfs-3.12.0-x86_64.tar.gz
- $ rm alpine-minirootfs-3.12.0-x86_64.tar.gz
+ $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/alpine-minirootfs-3.16.0-x86_64.tar.gz
+ $ tar -zxvf alpine-minirootfs-3.16.0-x86_64.tar.gz
+ $ rm alpine-minirootfs-3.16.0-x86_64.tar.gz
```
diff --git a/docs/en/docs/StratoVirt/VM_management.md b/docs/en/docs/StratoVirt/VM_management.md
index c58a74382c74a42883b19100627ab73d38cffeb6..7796925b1433d96d8259ec5eedb96337caba55b8 100644
--- a/docs/en/docs/StratoVirt/VM_management.md
+++ b/docs/en/docs/StratoVirt/VM_management.md
@@ -165,25 +165,51 @@ QMP provides the quit command to exit a VM, that is, to exit the StratoVirt proc
StratoVirt allows you to adjust the number of disks when a VM is running. That is, you can add or delete VM disks without interrupting services.
-#### Hot Plugged-in Disks
+**Note**
+
+* For a standard VM, the **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel.
+
+* For a standard VM, devices can be hot added to the root port. The root port device must be configured before the VM is started (a reference command line is sketched after this note).
+
+* You are not advised to hot swap a device while the VM is starting, stopping, or under heavy internal load. Otherwise, the VM may become abnormal because the drivers in the VM cannot respond in a timely manner.
+
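+The root port mentioned above is declared on the StratoVirt command line when the standard VM is launched. The following is only a reference sketch: the option names follow the **pcie-root-port** device convention, the values of **port** and **addr** and the ID **pcie.1** (which matches the **bus** value used in the examples below) are illustrative, and the VM configuration guide should be consulted for the authoritative syntax.
+
+```
+-device pcie-root-port,port=0x0,addr=0x1,bus=pcie.0,id=pcie.1
+```
+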
+#### Hot Adding Disks
**Usage:**
+Lightweight VM:
+
```
{"execute": "blockdev-add", "arguments": {"node-name": "drive-0", "file": {"driver": "file", "filename": "/path/to/block"}, "cache": {"direct": true}, "read-only": false}}
{"execute": "device_add", "arguments": {"id": "drive-0", "driver": "virtio-blk-mmio", "addr": "0x1"}}
```
+Standard VM:
+
+```
+{"execute": "blockdev-add", "arguments": {"node-name": "drive-0", "file": {"driver": "file", "filename": "/path/to/block"}, "cache": {"direct": true}, "read-only": false}}
+{"execute":"device_add", "arguments":{"id":"drive-0", "driver":"virtio-blk-pci", "drive": "drive-0", "addr":"0x0", "bus": "pcie.1"}}
+```
+
**Parameters:**
-- The value of **node-name** in **blockdev-add** must be the same as the value of **id** in **device_add**. For example, both values are **drive-0** in the preceding example.
-- **/path/to/block** is the image path of the hot plugged-in disks. It cannot be the path of the disk image that boots the rootfs.
-- For **addr**, **0x0** is mapped to **vda** of the VM, **0x1** is mapped to **vdb**, and so on. To be compatible with the QMP protocol, **addr** can be replaced by **lun**, but **lun=0** is mapped to the **vdb** of the client.
-- StratoVirt supports a maximum of six virtio-blk disks. Note this when hot plugging in disks.
+- For a lightweight VM, the value of **node-name** in **blockdev-add** must be the same as that of **id** in **device_add**. For example, the values of **node-name** and **id** are both **drive-0** as shown above.
+
+- For a standard VM, the value of **drive** must be the same as that of **node-name** in **blockdev-add**.
+
+- **/path/to/block** is the image path of the hot added disks. It cannot be the path of the disk image that boots the rootfs.
+
+- For a lightweight VM, the value of **addr**, starting from **0x0**, is mapped to a virtio device on the VM. **0x0** is mapped to **vda**, **0x1** is mapped to **vdb**, and so on. To be compatible with the QMP protocol, **addr** can be replaced by **lun**, but **lun=0** is mapped to the **vdb** of the guest machine. For a standard VM, the value of **addr** must be **0x0**.
+
+- For a standard VM, **bus** indicates the name of the bus to mount the device. Currently, the device can be hot added only to the root port device. The value of **bus** must be the ID of the root port device.
+
+- For a lightweight VM, StratoVirt supports a maximum of six virtio-blk disks. Note this when hot adding disks. For a standard VM, the maximum number of hot added disks depends on the number of root port devices.
**Example:**
+Lightweight VM:
+
```
<- {"execute": "blockdev-add", "arguments": {"node-name": "drive-0", "file": {"driver": "file", "filename": "/path/to/block"}, "cache": {"direct": true}, "read-only": false}}
-> {"return": {}}
@@ -191,29 +217,70 @@ StratoVirt allows you to adjust the number of disks when a VM is running. That i
-> {"return": {}}
```
-#### Hot Plugged-out Disks
+Standard VM:
+
+```
+<- {"execute": "blockdev-add", "arguments": {"node-name": "drive-0", "file": {"driver": "file", "filename": "/path/to/block"}, "cache": {"direct": true}, "read-only": false}}
+-> {"return": {}}
+<- {"execute":"device_add", "arguments":{"id":"drive-0", "driver":"virtio-blk-pci", "drive": "drive-0", "addr":"0x0", "bus": "pcie.1"}}
+-> {"return": {}}
+```
+#### Hot Removing Disks
**Usage:**
-**{"execute": "device_del", "arguments": {"id":"drive-0"}}**
+Lightweight VM:
+
+```
+{"execute": "device_del", "arguments": {"id":"drive-0"}}
+```
+
+Standard VM:
+
+```
+{"execute": "device_del", "arguments": {"id":"drive-0"}}
+{"execute": "blockdev-del", "arguments": {"node-name": "drive-0"}}
+```
**Parameters:**
-**id** indicates the ID of the hot plugged-out disk.
+- **id** indicates the ID of the disk to be hot removed.
+- **node-name** indicates the backend name of the disk.
**Example:**
+Lightweight VM:
+
```
<- {"execute": "device_del", "arguments": {"id": "drive-0"}}
-> {"event":"DEVICE_DELETED","data":{"device":"drive-0","path":"drive-0"},"timestamp":{"seconds":1598513162,"microseconds":367129}}
-> {"return": {}}
```
+Standard VM:
+
+```
+<- {"execute": "device_del", "arguments": {"id":"drive-0"}}
+-> {"return": {}}
+-> {"event":"DEVICE_DELETED","data":{"device":"drive-0","path":"drive-0"},"timestamp":{"seconds":1598513162,"microseconds":367129}}
+<- {"execute": "blockdev-del", "arguments": {"node-name": "drive-0"}}
+-> {"return": {}}
+```
+
+A **DEVICE_DELETED** event indicates that the device is removed from StratoVirt.
### Hot-Pluggable NICs
StratoVirt allows you to adjust the number of NICs when a VM is running. That is, you can add or delete VM NICs without interrupting services.
-#### Hot Plugged-in NICs
+**Note**
+
+* For a standard VM, the **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel.
+
+* For a standard VM, devices can be hot added to the root port. The root port device must be configured before the VM is started.
+
+* You are not advised to hot swap a device while the VM is starting, stopping, or under heavy internal load. Otherwise, the VM may become abnormal because the drivers in the VM cannot respond in a timely manner.
+
+#### Hot Adding NICs
**Preparations (Requiring the root Permission)**
@@ -239,21 +306,36 @@ StratoVirt allows you to adjust the number of NICs when a VM is running. That is
**Usage:**
+Lightweight VM:
+
```
{"execute":"netdev_add", "arguments":{"id":"net-0", "ifname":"tap0"}}
{"execute":"device_add", "arguments":{"id":"net-0", "driver":"virtio-net-mmio", "addr":"0x0"}}
```
+Standard VM:
+
+```
+{"execute":"netdev_add", "arguments":{"id":"net-0", "ifname":"tap0"}}
+{"execute":"device_add", "arguments":{"id":"net-0", "driver":"virtio-net-pci", "addr":"0x0", "netdev": "net-0", "bus": "pcie.1"}}
+```
+
**Parameters:**
-- **id** in **netdev_add** must be the same as that in **device_add**. **ifname** is the name of the backend tap device.
+- For a lightweight VM, **id** in **netdev_add** must be the same as that in **device_add**. **ifname** is the name of the backend tap device.
+
+- For a standard VM, the value of **netdev** must be the value of **id** in **netdev_add**.
-- For **addr**, **0x0** is mapped to **eth0** of the VM, **0x1** is mapped to **eth1**, and so on.
+- For a lightweight VM, the value of **addr**, starting from **0x0**, is mapped to an NIC on the VM: **0x0** is mapped to **eth0**, **0x1** is mapped to **eth1**, and so on. For a standard VM, the value of **addr** must be **0x0**.
-- StratoVirt supports a maximum of two virtio-net NICs. Therefore, pay attention to the specification restrictions when hot plugging in NICs.
+- For a standard VM, **bus** indicates the name of the bus to mount the device. Currently, the device can be hot added only to the root port device. The value of **bus** must be the ID of the root port device.
+
+- For a lightweight VM, StratoVirt supports a maximum of two virtio-net NICs. Therefore, pay attention to the specification restrictions when hot adding NICs. For a standard VM, the maximum number of hot added NICs depends on the number of root port devices.
**Example:**
+Lightweight VM:
+
```
<- {"execute":"netdev_add", "arguments":{"id":"net-0", "ifname":"tap0"}}
-> {"return": {}}
@@ -263,24 +345,120 @@ StratoVirt allows you to adjust the number of NICs when a VM is running. That is
**addr:0x0** corresponds to **eth0** in the VM.
-#### Hot Plugged-out NICs
+Standard VM:
+
+```
+<- {"execute":"netdev_add", "arguments":{"id":"net-0", "ifname":"tap0"}}
+-> {"return": {}}
+<- {"execute":"device_add", "arguments":{"id":"net-0", "driver":"virtio-net-pci", "addr":"0x0", "netdev": "net-0", "bus": "pcie.1"}}
+-> {"return": {}}
+```
+
+#### Hot Removing NICs
**Usage:**
-**{"execute": "device_del", "arguments": {"id": "net-0"}}**
+Lightweight VM:
+
+```
+{"execute": "device_del", "arguments": {"id": "net-0"}}
+```
+
+Standard VM:
+
+```
+{"execute": "device_del", "arguments": {"id":"net-0"}}
+{"execute": "netdev_del", "arguments": {"id": "net-0"}}
+```
+
**Parameters:**
**id**: NIC ID, for example, **net-0**.
+- **id** in **netdev_del** indicates the backend name of the NIC.
+
**Example:**
+Lightweight VM:
+
```
<- {"execute": "device_del", "arguments": {"id": "net-0"}}
-> {"event":"DEVICE_DELETED","data":{"device":"net-0","path":"net-0"},"timestamp":{"seconds":1598513339,"microseconds":97310}}
-> {"return": {}}
```
+Standard VM:
+
+```
+<- {"execute": "device_del", "arguments": {"id":"net-0"}}
+-> {"return": {}}
+-> {"event":"DEVICE_DELETED","data":{"device":"net-0","path":"net-0"},"timestamp":{"seconds":1598513339,"microseconds":97310}}
+<- {"execute": "netdev_del", "arguments": {"id": "net-0"}}
+-> {"return": {}}
+```
+
+A **DEVICE_DELETED** event indicates that the device is removed from StratoVirt.
+
+### Hot-Swappable Pass-through Devices
+
+You can add or delete pass-through devices of a StratoVirt standard VM while it is running.
+
+**Note**
+
+* The **CONFIG_HOTPLUG_PCI_PCIE=y** configuration must be enabled for the VM kernel.
+
+* Devices can be hot added to the root port. The root port device must be configured before the VM is started.
+
+* You are not advised to hot swap a device while the VM is starting, stopping, or under heavy internal load. Otherwise, the VM may become abnormal because the drivers in the VM cannot respond in a timely manner.
+
+#### Hot Adding Pass-through Devices
+
+**Usage:**
+
+```
+{"execute":"device_add", "arguments":{"id":"vfio-0", "driver":"vfio-pci", "bus": "pcie.1", "addr":"0x0", "host": "0000:1a:00.3"}}
+```
+
+**Parameters:**
+
+- **id** indicates the ID of the hot added device.
+
+- **bus** indicates the name of the bus to mount the device.
+
+- **addr** indicates the slot and function numbers to mount the device. Currently, **addr** must be set to **0x0**.
+
+- **host** indicates the domain number, bus number, slot number, and function number of the pass-through device on the host machine.
+
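+The value of **host** can be queried on the host machine before the hot add. A brief sketch (`lspci -D` prints each PCI device with its domain prefix, in the same `domain:bus:slot.function` form; the device line below is a placeholder):
+
+```
+# lspci -D | grep 1a:00.3
+0000:1a:00.3 Ethernet controller: <device name>
+```
+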
+**Example:**
+
+```
+<- {"execute":"device_add", "arguments":{"id":"vfio-0", "driver":"vfio-pci", "bus": "pcie.1", "addr":"0x0", "host": "0000:1a:00.3"}}
+-> {"return": {}}
+```
+
+#### Hot Removing Pass-through Devices
+
+**Usage:**
+
+```
+{"execute": "device_del", "arguments": {"id": "vfio-0"}}
+```
+
+**Parameters:**
+
+- **id** indicates the ID of the device to be hot removed, which is specified when the device is hot added.
+
+**Example:**
+
+```
+<- {"execute": "device_del", "arguments": {"id": "vfio-0"}}
+-> {"return": {}}
+-> {"event":"DEVICE_DELETED","data":{"device":"vfio-0","path":"vfio-0"},"timestamp":{"seconds":1614310541,"microseconds":554250}}
+```
+
+A **DEVICE_DELETED** event indicates that the device is removed from StratoVirt.
+
## Using Balloon Devices
The balloon device is used to reclaim idle memory from a VM. It is invoked by running a QMP command.
diff --git a/docs/en/docs/StratoVirt/interconnect_isula.md b/docs/en/docs/StratoVirt/interconnect_isula.md
index 02ee4d5cb3629a2e396738b0a6e426e66ba99903..baa77be73278f76cbd80117a3228f012bf3fdb39 100644
--- a/docs/en/docs/StratoVirt/interconnect_isula.md
+++ b/docs/en/docs/StratoVirt/interconnect_isula.md
@@ -51,7 +51,7 @@ The following describes how to install and configure iSulad and kata-containers.
],
```
-3. Restart isulad.
+3. Restart **isulad**.
```shell
# systemctl daemon-reload
@@ -76,15 +76,18 @@ The following describes how to install and configure iSulad and kata-containers.
This section describes how to interconnect StratoVirt with kata-containers to access the iSula container ecosystem.
-1. Modify the kata configuration file. Its default path is **/usr/share/defaults/kata-containers/configuration.toml**. You can also configure the file by referring to **configuration-stratovirt.toml** in the same directory. Set the Hypervisor type of the secure container to **stratovirt**, kernel to the absolute path of the kernel image of kata-containers, and initrd to the **initrd** image file of kata-containers. (If you use Yum to install kata-containers, the two image files are downloaded and stored in the **/var/lib/kata/** directory by default. You can also use other images during the configuration.)
+#### Connecting to a Lightweight VM
- The configurations are as follows:
+1. Modify the kata configuration file. Its default path is **/usr/share/defaults/kata-containers/configuration.toml**. You can also configure the file by referring to **configuration-stratovirt.toml** in the same directory. Modify the **hypervisor** type of the secure container to **stratovirt**, **kernel** to the absolute path of the kernel image of kata-containers, and **initrd** to the **initrd** image file of kata-containers. (If you use Yum to install kata-containers, the two image files are downloaded and stored in the **/var/lib/kata/** directory by default. You can also use other images during the configuration.)
+
+ The modified configurations are as follows:
```shell
[hypervisor.stratovirt]
path = "/usr/bin/stratovirt"
kernel = "/var/lib/kata/kernel"
initrd = "/var/lib/kata/kata-containers-initrd.img"
+ machine_type = "microvm"
block_device_driver = "virtio-mmio"
use_vsock = true
enable_netmon = true
@@ -95,7 +98,7 @@ This section describes how to interconnect StratoVirt with kata-containers to ac
disable_vhost_net = true
```
-2. Run the `isula` command **root** permissions to start the BusyBox secure container and interconnect StratoVirt with it.
+2. Run the `isula` command with **root** permissions to start the BusyBox secure container and interconnect StratoVirt with it.
```shell
# isula run -tid --runtime "io.containerd.kata.v2" --net=none --name test busybox:latest sh
@@ -103,7 +106,7 @@ This section describes how to interconnect StratoVirt with kata-containers to ac
3. Run the `isula ps` command to check whether the secure container **test** is running properly. Then run the following command to access the container:
- ```
+ ```shell
# isula exec –ti test sh
```
@@ -141,3 +144,80 @@ This section describes how to interconnect StratoVirt with kata-containers to ac
```
You can now run container commands in the **test** container.
+
+#### Connecting to a Standard VM
+
+To use a StratoVirt standard VM as the sandbox of a secure container, you need to modify some other configurations.
+
+1. The configurations are as follows:
+
+ ```shell
+ [hypervisor.stratovirt]
+ path = "/usr/bin/stratovirt"
+ kernel = "/var/lib/kata/kernel"
+ initrd = "/var/lib/kata/kata-containers-initrd.img"
+ # x86_64 architecture
+ machine_type = "q35"
+ # AArch64 architecture
+ machine_type = "virt"
+ block_device_driver = "virtio-blk"
+ pcie_root_port = 2
+ use_vsock = true
+ enable_netmon = true
+ internetworking_model = "tcfilter"
+ sandbox_cgroup_with_emulator = false
+ disable_new_netns = false
+ disable_block_device_use = false
+ disable_vhost_net = true
+ ```
+
+ In the configurations above, modify the VM type according to the architecture of the host machine. Change the value of **block_device_driver** to **virtio-blk**. StratoVirt supports only devices hot-plugged to the root port. Set a proper value of **pcie_root_port** based on the number of devices to be hot-plugged.
+
+2. Install the firmware required for starting a standard VM.
+
+ x86_64 architecture:
+
+ ```shell
+ # yum install -y edk2-ovmf
+ ```
+
+ AArch64 architecture:
+
+ ```shell
+ # yum install -y edk2-aarch64
+ ```
+
+3. Build and replace the binary file of kata-containers 2.x.
+
+   Currently, a StratoVirt standard VM can only be used as the sandbox of a kata-containers 2.x container (corresponding to the openEuler-21.09 branch in the kata-containers repository). You need to download and compile the kata-containers source code and replace the **containerd-shim-kata-v2** binary file in the **/usr/bin** directory.
+
+ ```shell
+ # mkdir -p /root/go/src/github.com/
+ # cd /root/go/src/github.com/
+ # git clone https://gitee.com/src-openeuler/kata-containers.git
+ # cd kata-containers
+ # git checkout openEuler-21.09
+ # ./apply-patches
+ # cd src/runtime
+ # make
+ ```
+
+ Back up the kata binary file in the **/usr/bin/** directory and replace it with the compiled binary file **containerd-shim-kata-v2**.
+
+ ```shell
+ # cp /usr/bin/containerd-shim-kata-v2 /usr/bin/containerd-shim-kata-v2.bk
+ # cp containerd-shim-kata-v2 /usr/bin/containerd-shim-kata-v2
+ ```
+
+4. Run the `isula` command with **root** permissions to start the BusyBox secure container and interconnect StratoVirt with it.
+
+ ```shell
+ # isula run -tid --runtime "io.containerd.kata.v2" --net=none --name test busybox:latest sh
+ ```
+
+5. Run the `isula ps` command to check whether the secure container **test** is running properly. Then run the following command to access the container:
+
+ ```shell
+ # isula exec -ti test sh
+ ```
+
diff --git a/docs/en/docs/TailorCustom/figures/flowchart.png b/docs/en/docs/TailorCustom/figures/flowchart.png
new file mode 100644
index 0000000000000000000000000000000000000000..d3a71e8bfdb886222151cea3b2a3c0e8d8eae64a
Binary files /dev/null and b/docs/en/docs/TailorCustom/figures/flowchart.png differ
diff --git a/docs/en/docs/TailorCustom/imageTailor-user-guide.md b/docs/en/docs/TailorCustom/imageTailor-user-guide.md
new file mode 100644
index 0000000000000000000000000000000000000000..2ad4ae70147104cf945e4eeeedfa07e587a552e0
--- /dev/null
+++ b/docs/en/docs/TailorCustom/imageTailor-user-guide.md
@@ -0,0 +1,928 @@
+# ImageTailor User Guide
+
+ - [Introduction](#introduction)
+ - [Installation](#installation)
+ - [Software and Hardware Requirements](#software-and-hardware-requirements)
+ - [Obtaining the Installation Package](#obtaining-the-installation-package)
+ - [Installing imageTailor](#installing-imagetailor)
+ - [Directory Description](#directory-description)
+ - [Image Customization](#image-customization)
+ - [Overall Process](#overall-process)
+ - [Customizing Service Packages](#customizing-service-packages)
+ - [Setting a Local Repo Source](#setting-a-local-repo-source)
+ - [Adding Files](#adding-files)
+ - [Adding RPM Packages](#adding-rpm-packages)
+ - [Adding Hook Scripts](#adding-hook-scripts)
+ - [Configuring System Parameters](#configuring-system-parameters)
+ - [Configuring Host Parameters](#configuring-host-parameters)
+ - [Configuring Initial Passwords](#configuring-initial-passwords)
+ - [Configuring Partitions](#configuring-partitions)
+ - [Configuring the Network](#configuring-the-network)
+ - [Configuring Kernel Parameters](#configuring-kernel-parameters)
+ - [Creating an Image](#creating-an-image)
+ - [Command Description](#command-description)
+ - [Image Creation Guide](#image-creation-guide)
+ - [Tailoring Time Zones](#tailoring-time-zones)
+ - [Customization Example](#customization-example)
+
+
+
+## Introduction
+
+In addition to the kernel, an operating system contains various peripheral packages. These peripheral packages provide functions of a general-purpose operating system but also cause the following problems:
+
+- A large number of resources (such as memory, disks, and CPUs) are occupied, resulting in low system performance.
+- Unnecessary functions increase the development and maintenance costs.
+
+To address these problems, openEuler provides the imageTailor tool for tailoring and customizing images. You can remove unnecessary peripheral packages from the OS image or add service packages or files as required. The tool provides the following functions:
+
+- System package tailoring: Tailors system commands, libraries, and drivers based on the list of RPM packages to be installed.
+- System configuration modification: Configures the host name, startup services, time zone, network, partitions, drivers to be loaded, and kernel version.
+- Software package addition: Adds custom RPM packages or files to the system.
+
+
+
+## Installation
+
+This section uses openEuler 22.03 LTS in the AArch64 architecture as an example to describe the installation method.
+
+### Software and Hardware Requirements
+
+The software and hardware requirements of imageTailor are as follows:
+
+- The architecture is x86_64 or AArch64.
+
+- The OS is openEuler 22.03 LTS (the kernel version is 5.10 and the Python version is 3.9, which meet the tool requirements).
+
+- The root directory **/** of the host running the tool must have at least 40 GB of free space.
+
+- The Python version is 3.9 or later.
+
+- The kernel version is 5.10 or later.
+
+- The SELinux service is disabled.
+
+ ```shell
+ $ sudo setenforce 0
+ $ getenforce
+ Permissive
+ ```
+
+
+
+### Obtaining the Installation Package
+
+Download the openEuler release package to install and use imageTailor.
+
+1. Obtain the ISO image file and the corresponding verification file.
+
+   The image must be an everything image. Assume that the image is to be stored in the **/root/temp** directory. Run the following commands:
+
+ ```shell
+ $ cd /root/temp
+ $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso
+ $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum
+ ```
+
+2. Obtain the verification value in the sha256sum verification file.
+
+ ```shell
+ $ cat openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum
+ ```
+
+3. Calculate the verification value of the ISO image file.
+
+ ```shell
+ $ sha256sum openEuler-22.03-LTS-everything-aarch64-dvd.iso
+ ```
+
+4. Compare the verification value in the sha256sum file with that of the ISO image. If they are the same, the file integrity is verified. Otherwise, the file is damaged and you need to obtain it again.
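+
+   Alternatively, the comparison can be done in one step with the check mode of `sha256sum` (a quick sketch, assuming the ISO and the sha256sum file are in the current directory):
+
+   ```shell
+   $ sha256sum -c openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum
+   openEuler-22.03-LTS-everything-aarch64-dvd.iso: OK
+   ```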
+
+### Installing imageTailor
+
+The following uses openEuler 22.03 LTS in AArch64 architecture as an example to describe how to install imageTailor.
+
+1. Ensure that openEuler 22.03 LTS (or a running environment that meets the requirements of imageTailor) has been installed on the host.
+
+ ```shell
+ $ cat /etc/openEuler-release
+ openEuler release 22.03 LTS
+ ```
+
+2. Create a **/etc/yum.repos.d/local.repo** file to configure the Yum source. The following is an example of the configuration file. **baseurl** indicates the directory for mounting the ISO image.
+
+ ```shell
+ [local]
+ name=local
+ baseurl=file:///root/imageTailor_mount
+ gpgcheck=0
+ enabled=1
+ ```
+
+3. Run the following commands as the **root** user to mount the image to the **/root/imageTailor_mount** directory as the Yum source (ensure that the value of **baseurl** is the same as that configured in the repo file and the disk space of the directory is greater than 20 GB):
+
+ ```shell
+ $ mkdir /root/imageTailor_mount
+ $ sudo mount -o loop /root/temp/openEuler-22.03-LTS-everything-aarch64-dvd.iso /root/imageTailor_mount/
+ ```
+
+4. Make the Yum source take effect.
+
+ ```shell
+ $ yum clean all
+ $ yum makecache
+ ```
+
+5. Install the imageTailor tool as the **root** user.
+
+ ```shell
+ $ sudo yum install -y imageTailor
+ ```
+
+6. Run the following command as the **root** user to verify that the tool has been installed successfully:
+
+ ```shell
+ $ cd /opt/imageTailor/
+ $ sudo ./mkdliso -h
+ -------------------------------------------------------------------------------------------------------------
+ Usage: mkdliso -p product_name -c configpath [--minios yes|no|force] [-h] [--sec]
+ Options:
+ -p,--product Specify the product to make, check custom/cfg_yourProduct.
+ -c,--cfg-path Specify the configuration file path, the form should be consistent with custom/cfg_xxx
+ --minios Make minios: yes|no|force
+ --sec Perform security hardening
+ -h,--help Display help information
+
+ Example:
+ command:
+ ./mkdliso -p openEuler -c custom/cfg_openEuler --sec
+
+ help:
+ ./mkdliso -h
+ -------------------------------------------------------------------------------------------------------------
+ ```
+
+### Directory Description
+
+After imageTailor is installed, the directory structure of the tool package is as follows:
+
+```shell
+[imageTailor]
+ |-[custom]
+ |-[cfg_openEuler]
+ |-[usr_file] // Stores files to be added.
+ |-[usr_install] //Stores hook scripts to be added.
+ |-[all]
+ |-[conf]
+ |-[hook]
+ |-[cmd.conf] // Configures the default commands and libraries used by an ISO image.
+ |-[rpm.conf] // Configures the list of RPM packages and drivers installed by default for an ISO image.
+ |-[security_s.conf] // Configures security hardening policies.
+ |-[sys.conf] // Configures ISO image system parameters.
+ |-[kiwi] // Basic configurations of imageTailor.
+ |-[repos] //RPM sources for obtaining the RPM packages required for creating an ISO image.
+ |-[security-tool] // Security hardening tool.
+ |-mkdliso // Executable script for creating an ISO image.
+```
+
+## Image Customization
+
+This section describes how to use the imageTailor tool to package the service RPM packages, custom files, drivers, commands, and libraries to the target ISO image.
+
+### Overall Process
+
+The following figure shows the process of using imageTailor to customize an image.
+
+![Image customization process](./figures/flowchart.png)
+
+The steps are described as follows:
+
+- Check software and hardware environment: Ensure that the host for creating the ISO image meets the software and hardware requirements.
+
+- Customize service packages: Add RPM packages (including service RPM packages, commands, drivers, and library files) and files (including custom files, commands, drivers, and library files).
+
+ - Adding service RPM packages: Add RPM packages to the ISO image as required. For details, see [Installation](#installation).
+ - Adding custom files: If you want to perform custom operations such as hardware check, system configuration check, and driver installation when the target ISO system is installed or started, you can compile custom files and package them to the ISO image.
+ - Adding drivers, commands, and library files: If the RPM package source of openEuler does not contain the required drivers, commands, or library files, you can use imageTailor to package the corresponding drivers, commands, or library files into the ISO image.
+
+- Configure system parameters:
+
+ - Configuring host parameters: To ensure that the ISO image is successfully installed and started, you need to configure host parameters.
+ - Configuring partitions: You can configure service partitions based on the service plan and adjust system partitions.
+ - Configuring the network: You can set system network parameters as required, such as the NIC name, IP address, and subnet mask.
+ - Configuring the initial password: To ensure that the ISO image is successfully installed and started, you need to configure the initial passwords of the **root** user and GRUB.
+ - Configuring kernel parameters: You can configure the command line parameters of the kernel as required.
+
+- Configure security hardening policies.
+
+ ImageTailor provides default security hardening policies. You can modify **security_s.conf** (in the ISO image customization phase) to perform secondary security hardening on the system based on service requirements. For details, see the [Security Hardening Guide](https://docs.openeuler.org/en/docs/22.03_LTS/docs/SecHarden/secHarden.html).
+
+- Create an ISO image:
+
+ Use the imageTailor tool to create an ISO image.
+
+### Customizing Service Packages
+
+You can pack service RPM packages, custom files, drivers, commands, and library files into the target ISO image as required.
+
+#### Setting a Local Repo Source
+
+To customize an ISO image, you must set a repo source in the **/opt/imageTailor/repos/euler_base/** directory. This section describes how to set a local repo source.
+
+1. Download the ISO file released by openEuler. (The RPM packages of the everything image released by openEuler must be used.)
+ ```shell
+ $ cd /opt
+ $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso
+ ```
+
+2. Create a mount directory **/opt/openEuler_repo** and mount the ISO file to the directory.
+ ```shell
+ $ sudo mkdir -p /opt/openEuler_repo
+ $ sudo mount openEuler-22.03-LTS-everything-aarch64-dvd.iso /opt/openEuler_repo
+ mount: /opt/openEuler_repo: WARNING: source write-protected, mounted read-only.
+ ```
+
+3. Copy the RPM packages in the ISO file to the **/opt/imageTailor/repos/euler_base/** directory.
+ ```shell
+ $ sudo rm -rf /opt/imageTailor/repos/euler_base && sudo mkdir -p /opt/imageTailor/repos/euler_base
+ $ sudo cp -ar /opt/openEuler_repo/Packages/* /opt/imageTailor/repos/euler_base
+ $ sudo chmod -R 644 /opt/imageTailor/repos/euler_base
+ $ sudo ls /opt/imageTailor/repos/euler_base|wc -l
+ 2577
+ $ sudo umount /opt/openEuler_repo && sudo rm -rf /opt/openEuler_repo
+ $ cd /opt/imageTailor
+ ```
+
+#### Adding Files
+
+You can add files to an ISO image as required. The file types include custom files, drivers, commands, and library files. Store the files in the **/opt/imageTailor/custom/cfg_openEuler/usr_file** directory.
+
+##### Precautions
+
+- The commands to be packed must be executable. Otherwise, imageTailor will fail to pack the commands into the ISO.
+
+- Files stored in the **/opt/imageTailor/custom/cfg_openEuler/usr_file** directory are generated in the root directory of the ISO. Therefore, each file must be placed in a directory structure that forms a complete path starting from the root directory so that imageTailor can put the file in the correct location.
+
+  For example, if you want **file1** to be in the **/opt** directory of the ISO, create an **opt** directory under **usr_file** and copy **file1** into it:
+
+ ```shell
+ $ pwd
+ /opt/imageTailor/custom/cfg_openEuler/usr_file
+
+ $ tree
+ .
+ ├── etc
+ │ ├── default
+ │ │ └── grub
+ │ └── profile.d
+ │ └── csh.precmd
+ └── opt
+ └── file1
+
+ 4 directories, 3 files
+ ```
+
+- The paths in **/opt/imageTailor/custom/cfg_openEuler/usr_file** must be real paths, that is, they must not contain soft links. You can run the `realpath` or `readlink -f` command to query the real path of a file.
+
+- If you need to invoke a custom script in the system startup or installation phase, that is, a hook script, store the file in the **hook** directory.
+
+#### Adding RPM Packages
+
+##### Procedure
+
+To add RPM packages (drivers, commands, or library files) to an ISO image, perform the following steps:
+
+> **NOTE:**
+>
+>- The **rpm.conf** and **cmd.conf** files are stored in the **/opt/imageTailor/custom/cfg_openEuler/** directory.
+>- The RPM package tailoring granularity below refers to **sys_cut='no'**. For details about the tailoring granularities, see [Configuring Host Parameters](#configuring-host-parameters).
+>- If no local repo source is configured, configure a local repo source by referring to [Setting a Local Repo Source](#setting-a-local-repo-source).
+>
+
+1. Check whether the **/opt/imageTailor/repos/euler_base/** directory contains the RPM package to be added.
+
+ - If yes, go to step 2.
+ - If no, go to step 3.
+2. Configure the RPM package information in the **\** section in the **rpm.conf** file.
+ - For the RPM package tailoring granularity, no further action is required.
+ - For other tailoring granularities, go to step 4.
+3. Obtain the RPM package and store it in the **/opt/imageTailor/custom/cfg_openEuler/usr_rpm** directory. If the RPM package depends on other RPM packages, store the dependency packages in this directory as well, because the added RPM package and the RPM packages it depends on must be packed into the ISO image at the same time.
+ - For the RPM package tailoring granularity, go to step 4.
+ - For other tailoring granularities, no further action is required.
+4. Configure the drivers, commands, and library files to be retained in the RPM package in the **rpm.conf** and **cmd.conf** files. If there are common files to be tailored, configure them in the **\\** section in the **cmd.conf** file.
+
+
+##### Configuration File Description
+
+| Operation | Configuration File| Section |
+| :----------- | :----------- | :----------------------------------------------------------- |
+| Adding drivers | rpm.conf | \ \ \ Note: The **driver_name** is the relative path of **/lib/modules/{kernel_version_number}/kernel/**.|
+| Adding commands | cmd.conf | \ \ \ |
+| Adding library files | cmd.conf | \ \ \ |
+| Deleting other files| cmd.conf | \ \ \ Note: The file name must be an absolute path.|
+
+**Example**
+
+- Adding drivers
+
+ ```shell
+
+
+
+
+ ......
+
+ ```
+
+- Adding commands
+
+ ```shell
+
+
+
+
+ ......
+
+ ```
+
+- Adding library files
+
+ ```shell
+
+
+
+
+
+ ```
+
+- Deleting other files
+
+ ```shell
+
+
+
+
+
+ ```
+
+#### Adding Hook Scripts
+
+A hook script is invoked by the OS during startup and installation to execute the actions defined in the script. The hook scripts of imageTailor are stored in the **custom/cfg_openEuler/usr_install/hook** directory, which contains several subdirectories. Each subdirectory represents an OS startup or installation phase. Store the scripts according to the phases in which they are invoked.
+
+##### Script Naming Rule
+
+The script name must start with **S+number** (the number must be at least two digits). The number indicates the execution sequence of the hook script. Example: **S01xxx.sh**
+
+> **NOTE:**
+>
+>The scripts in the **hook** directory are executed using the `source` command. Therefore, exercise caution when using the `exit` command in the scripts because the entire installation script exits after the `exit` command is executed.
+
+
+
+##### Description of hook Subdirectories
+
+| Subdirectory | Script Example | Time for Execution | Description |
+| :-------------------- | :---------------------| :------------------------------- | :----------------------------------------------------------- |
+| insmod_drv_hook | N/A | After OS drivers are loaded | N/A |
+| custom_install_hook | S01custom_install.sh | After the drivers are loaded, that is, after **insmod_drv_hook** is executed| You can customize the OS installation process by using a custom script.|
+| env_check_hook | S01check_hw.sh | Before the OS installation initialization | The script is used to check hardware specifications and types before initialization.|
+| set_install_ip_hook | S01set_install_ip.sh | When network configuration is being performed during OS installation initialization. | You can customize the network configuration by using a custom script.|
+| before_partition_hook | S01checkpart.sh | Before partitioning | You can check correctness of the partition configuration file by using a custom script.|
+| before_setup_os_hook | N/A | Before the repo file is decompressed | You can customize partition mounting. If the decompression path of the installation package is not the root partition specified in the partition configuration, customize partition mounting and assign the decompression path to the input global variable.|
+| before_mkinitrd_hook | S01install_drv.sh | Before the `mkinitrd` command is run | The hook script executed before running the `mkinitrd` command when **initrd** is saved to the disk. You can add and update driver files in **initrd**.|
+| after_setup_os_hook | N/A | After OS installation | After the installation is complete, you can perform custom operations on the system files, such as modifying **grub.cfg**.|
+| install_succ_hook      | N/A                    | When the OS is successfully installed     | The scripts in this subdirectory are used to parse the installation information and send information about whether the installation succeeds. **install_succ_hook** cannot be set to **install_break**.|
+| install_fail_hook      | N/A                    | When the OS installation fails            | The scripts in this subdirectory are used to parse the installation information and send information about whether the installation succeeds. **install_fail_hook** cannot be set to **install_break**.|
+
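+The following is a minimal sketch of a hypothetical **env_check_hook** script, for example **S01check_hw.sh**. It only prints a warning instead of calling `exit`, because hook scripts are executed with the `source` command and an `exit` would abort the entire installation script:
+
+```shell
+#!/bin/bash
+# Hypothetical example: warn if the host has less than 4 GB of memory before installation.
+mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
+if [ "${mem_kb}" -lt 4194304 ]; then
+    echo "WARNING: less than 4 GB of memory detected"
+fi
+```
+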
+### Configuring System Parameters
+
+Before creating an ISO image, you need to configure system parameters, including host parameters, initial passwords, partitions, network, compilation parameters, and system command line parameters.
+
+#### Configuring Host Parameters
+
+ The **\ \** section in the **/opt/imageTailor/custom/cfg_openEuler/sys.conf** file is used to configure common system parameters, such as the host name and kernel boot parameters.
+
+The default configuration provided by openEuler is as follows. You can modify the configuration as required.
+
+```shell
+
+ sys_service_enable='ipcc'
+ sys_service_disable='cloud-config cloud-final cloud-init-local cloud-init'
+ sys_utc='yes'
+ sys_timezone=''
+ sys_cut='no'
+ sys_usrrpm_cut='no'
+ sys_hostname='Euler'
+ sys_usermodules_autoload=''
+ sys_gconv='GBK'
+
+```
+
+The parameters are described as follows:
+
+- sys_service_enable
+
+  (Optional) Services enabled by default in the OS. Separate multiple services with spaces. If you do not need to add a system service, use the default value **ipcc**. Pay attention to the following during the configuration:
+
+ - Default system services cannot be deleted.
+  - You can configure services required by your workload, but the repo source must contain the corresponding RPM packages.
+  - By default, only the services configured in this parameter are enabled. If a service depends on other services, you also need to configure the services it depends on in this parameter.
+
+- sys_service_disable
+
+  (Optional) Services that are not allowed to start automatically upon system startup. Separate multiple services with spaces. If no system service needs to be disabled, leave this parameter blank.
+
+- sys_utc
+
+ (Mandatory) Indicates whether to use coordinated universal time (UTC) time. The value can be **yes** or **no**. The default value is **yes**.
+
+- sys_timezone
+
+  (Optional) Time zone of the system. The value can be any time zone supported by openEuler, which can be queried in the **/usr/share/zoneinfo/zone.tab** file.
+
+- sys_cut
+
+  (Mandatory) Indicates whether to tailor the RPM packages. The value can be **yes**, **no**, or **debug**. **yes** indicates that the RPM packages are tailored. **no** indicates that the RPM packages are not tailored (only the RPM packages in the **rpm.conf** file are installed). **debug** indicates that the RPM packages are tailored but the `rpm` command is retained for customization after installation. The default value is **no**.
+
+ > NOTE:
+ >
+ > - imageTailor installs the RPM package added by the user, deletes the files configured in the **\** section of the **cmd.conf** file, and then deletes the commands, libraries, and drivers that are not configured in **cmd.conf** or **rpm.conf**.
+ > - When **sys_cut='yes'** is configured, imageTailor does not support the installation of the `rpm` command. Even if the `rpm` command is configured in the **rpm.conf** file, the configuration does not take effect.
+
+- sys_usrrpm_cut
+
+ (Mandatory) Indicates whether to tailor the RPM packages added by users to the **/opt/imageTailor/custom/cfg_openEuler/usr_rpm** directory. The value can be **yes** or **no**. The default value is **no**.
+
+ - **sys_usrrpm_cut='yes'**: imageTailor installs the RPM packages added by the user, deletes the file configured in the **\** section in the **cmd.conf** file, and then deletes the commands, libraries, and drivers that are not configured in **cmd.conf** or **rpm.conf**.
+
+ - **sys_usrrpm_cut='no'**: imageTailor installs the RPM packages added by the user but does not delete the files in the RPM packages.
+
+- sys_hostname
+
+  (Mandatory) Host name. If the OS is deployed on multiple nodes in batches, you are advised to change the host name of each node afterwards so that every host name is unique.
+
+ The host name must be a combination of letters, digits, and hyphens (-) and must start with a letter or digit. Letters are case sensitive. The value contains a maximum of 63 characters. The default value is **Euler**.
+
+- sys_usermodules_autoload
+
+  (Optional) Drivers to be loaded during system startup. When configuring this parameter, do not include the file extension **.ko**. If there are multiple drivers, separate them with spaces. By default, this parameter is left blank, indicating that no additional driver is loaded.
+
+- sys_gconv
+
+ (Optional) This parameter is used to tailor **/usr/lib/gconv** and **/usr/lib64/gconv**. The options are as follows:
+
+ - **null**/**NULL**: indicates that this parameter is not configured. If **sys_cut='yes'** is configured, **/usr/lib/gconv** and **/usr/lib64/gconv** will be deleted.
+ - **all**/**ALL**: keeps **/usr/lib/gconv** and **/usr/lib64/gconv**.
+ - **xxx,xxx**: keeps the corresponding files in the **/usr/lib/gconv** and **/usr/lib64/gconv** directories. If multiple files need to be kept, use commas (,) to separate them.
+
+- sys_man_cut
+
+ (Optional) Indicates whether to tailor the man pages. The value can be **yes** or **no**. The default value is **yes**.
+
+
+
+> NOTE:
+>
+> If both **sys_cut** and **sys_usrrpm_cut** are configured, **sys_cut** is used. The following rules apply:
+>
+> - sys_cut='no'
+>
+>   No matter whether **sys_usrrpm_cut** is set to **yes** or **no**, the system RPM package tailoring granularity is used. That is, imageTailor installs the RPM packages in the repo source and the RPM packages in the **usr_rpm** directory, but does not delete files in these RPM packages, even if some of the files are not required.
+>
+> - sys_cut='yes'
+>
+> - sys_usrrpm_cut='no'
+>
+> System RPM package tailoring granularity: imageTailor deletes files in the RPM packages in the repo sources as configured.
+>
+> - sys_usrrpm_cut='yes'
+>
+> System and user RPM package tailoring granularity: imageTailor deletes files in the RPM packages in the repo sources and the **usr_rpm** directory as configured.
+>
+
+
+
+#### Configuring Initial Passwords
+
+The **root** and GRUB passwords must be configured during OS installation. Otherwise, you cannot log in to the OS as the **root** user after the OS is installed using the tailored ISO image. This section describes how to configure the initial passwords.
+
+> **NOTE:**
+>
+> You must configure the initial **root** and GRUB passwords manually.
+
+##### Configuring the Initial Password of the root User
+
+###### Introduction
+
+The initial password of the **root** user is stored in the **/opt/imageTailor/custom/cfg_openEuler/rpm.conf** file. You can modify this file to set the initial password of the **root** user.
+
+> **NOTE:**
+>
+>- If the `--minios yes/force` parameter is required when you run the `mkdliso` command to create an ISO image, you need to enter the corresponding information in the **/opt/imageTailor/kiwi/minios/cfg_minios/rpm.conf** file.
+
+The default configuration of the initial password of the **root** user in the **/opt/imageTailor/custom/cfg_openEuler/rpm.conf** file is as follows. Add a password of your choice.
+
+```
+
+
+
+```
+
+The parameters are described as follows:
+
+- **group**: group to which the user belongs.
+- **pwd**: ciphertext of the initial password. The encryption algorithm is SHA-512. Replace **${pwd}** with the actual ciphertext.
+- **home**: home directory of the user.
+- **name**: name of the user to be configured.
+
+###### Modification Method
+
+Before creating an ISO image, you need to change the initial password of the **root** user. The following describes how to set the initial password of the **root** user (**root** permissions are required):
+
+1. Add a user for generating a password, for example, **testUser**.
+
+ ```shell
+ $ sudo useradd testUser
+ ```
+
+2. Set the password of **testUser**. Run the following command and set the password as prompted:
+
+ ```shell
+ $ sudo passwd testUser
+ Changing password for user testUser.
+ New password:
+ Retype new password:
+ passwd: all authentication tokens updated successfully.
+ ```
+
+3. View the **/etc/shadow** file. The content following **testUser** (string between two colons) is the ciphertext of the password.
+
+   ```shell
+ $ sudo cat /etc/shadow | grep testUser
+ testUser:$6$YkX5uFDGVO1VWbab$jvbwkZ2Kt0MzZXmPWy.7bJsgmkN0U2gEqhm9KqT1jwQBlwBGsF3Z59heEXyh8QKm3Qhc5C3jqg2N1ktv25xdP0:19052:0:90:7:35::
+ ```
+
+4. Copy and paste the ciphertext to the **pwd** field in the **/opt/imageTailor/custom/cfg_openEuler/rpm.conf** file.
+   ```shell
+
+
+
+ ```
+
+5. If the `--minios yes/force` parameter is required when you run the `mkdliso` command to create an ISO image, configure the **pwd** field of the corresponding user in **/opt/imageTailor/kiwi/minios/cfg_minios/rpm.conf**.
+
+   ```shell
+
+
+
+ ```
+
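+Instead of creating a temporary user (steps 1 to 3), the SHA-512 ciphertext can also be generated directly. This is only a sketch and assumes that OpenSSL 1.1.1 or later is installed; the last line stands for the generated ciphertext, which is copied to the **pwd** field:
+
+```shell
+$ openssl passwd -6
+Password:
+Verifying - Password:
+$6$<salt>$<hashed_password>
+```
+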
+##### Configuring the Initial GRUB Password
+
+The initial GRUB password is stored in the **/opt/imageTailor/custom/cfg_openEuler/usr_file/etc/default/grub** file. Modify this file to configure the initial GRUB password. If the initial GRUB password is not configured, the ISO image will fail to be created.
+
+> **NOTE:**
+>
+> - The **root** permissions are required for configuring the initial GRUB password.
+> - The default user corresponding to the GRUB password is **root**.
+>
+> - The `grub2-set-password` command must exist in the system. If the command does not exist, install it in advance.
+
+1. Run the following command and set the GRUB password as prompted:
+
+ ```shell
+ $ sudo grub2-set-password -o ./
+ Enter password:
+ Confirm password:
+ grep: .//grub.cfg: No such file or directory
+ WARNING: The current configuration lacks password support!
+ Update your configuration with grub2-mkconfig to support this feature.
+ ```
+
+2. After the command is executed, the **user.cfg** file is generated in the current directory. The content starting with **grub.pbkdf2.sha512** is the encrypted GRUB password.
+
+ ```shell
+ $ sudo cat user.cfg
+ GRUB2_PASSWORD=grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1
+ B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02
+ 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615
+ ```
+
+3. Copy the preceding ciphertext and add the following configuration to the **/opt/imageTailor/custom/cfg_openEuler/usr_file/etc/default/grub** file:
+
+ ```shell
+ GRUB_PASSWORD="grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1
+ B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02
+ 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615"
+ ```
+
+
+#### Configuring Partitions
+
+If you want to adjust system partitions or service partitions, modify the **\** section in the **/opt/imageTailor/custom/cfg_openEuler/sys.conf** file.
+
+> **NOTE:**
+>
+>- System partition: partition for storing the OS.
+>- Service partition: partition for service data.
+>- The type of a partition is determined by the content it stores, not the size, mount path, or file system.
+>- Partition configuration is optional. You can manually configure partitions after OS installation.
+
+ The format of **\** is as follows:
+
+disk_ID mount_path partition_size partition_type file_system [Secondary formatting flag]
+
+The default configuration is as follows:
+
+```shell
+
+hd0 /boot 512M primary ext4 yes
+hd0 /boot/efi 200M primary vfat yes
+hd0 / 30G primary ext4
+hd0 - - extended -
+hd0 /var 1536M logical ext4
+hd0 /home max logical ext4
+
+```
+
+The parameters are described as follows:
+
+- disk_ID:
+ ID of a disk. Set this parameter in the format of **hd***x*, where *x* indicates the *x*th disk.
+
+ > **NOTE:**
+ >
+ >Partition configuration takes effect only when the disk can be recognized.
+
+- mount_path:
+ Mount path to a specified partition. You can configure service partitions and adjust the default system partition. If you do not mount partitions, set this parameter to **-**.
+
+ > **NOTE:**
+ >
+ >- You must configure the mount path to **/**. You can adjust mount paths to other partitions according to your needs.
+ >- When the UEFI boot mode is used, the partition configuration in the x86_64 architecture must contain the mount path **/boot**, and the partition configuration in the AArch64 architecture must contain the mount path **/boot/efi**.
+
+- partition_size:
+ The value types are as follows:
+
+ - G/g: The unit of a partition size is GB, for example, 2G.
+ - M/m: The unit of a partition size is MB, for example, 300M.
+ - T/t: The unit of a partition size is TB, for example, 1T.
+ - MAX/max: The rest space of a hard disk is used to create a partition. This value can only be assigned to the last partition.
+
+ > **NOTE:**
+    >
+ >- A partition size value cannot contain decimal numbers. If there are decimal numbers, change the unit of the value to make the value an integer. For example, 1.5 GB should be changed to 1536 MB.
+ >- When the partition size is set to **MAX**/**max**, the size of the remaining partition cannot exceed the limit of the supported file system type (the default file system type is **ext4**, and the maximum size is **16T**).
+
+- partition_type:
+ The values of partition types are as follows:
+
+ - primary: primary partitions
+ - extended: extended partition (configure only *disk_ID* for this partition)
+ - logical: logical partitions
+
+- file_system:
+ Currently, **ext4** and **vfat** file systems are supported.
+
+- [Secondary formatting flag]:
+ Indicates whether to format the disk during secondary installation. This parameter is optional.
+
+ - The value can be **yes** or **no**. The default value is **no**.
+
+ > **NOTE:**
+ >
+ >Secondary formatting indicates that openEuler has been installed on the disk before this installation. If the partition table configuration (partition size, mount point, and file type) used in the previous installation is the same as that used in the current installation, this flag can be used to configure whether to format the previous partitions, except the **/boot** and **/** partitions. If the target host is installed for the first time, this flag does not take effect, and all partitions with specified file systems are formatted.
+
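+For illustration, a dedicated service data partition can be added by extending the default layout. The following is only a hypothetical sketch; it keeps the rule that **max** may be assigned only to the last partition:
+
+```shell
+hd0 /boot 512M primary ext4 yes
+hd0 /boot/efi 200M primary vfat yes
+hd0 / 30G primary ext4
+hd0 - - extended -
+hd0 /var 1536M logical ext4
+hd0 /home 50G logical ext4
+hd0 /data max logical ext4
+```
+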
+#### Configuring the Network
+
+The system network parameters are stored in **/opt/imageTailor/custom/cfg_openEuler/sys.conf**. You can modify the network parameters of the target ISO image, such as the NIC name, IP address, and subnet mask, by configuring **\\** in this file.
+
+The default network configuration in the **sys.conf** file is as follows. **netconfig-0** indicates the **eth0** NIC. If you need to configure an additional NIC, for example, **eth1**, add **\\** to the configuration file and set the parameters of **eth1**.
+
+```shell
+
+BOOTPROTO="dhcp"
+DEVICE="eth0"
+IPADDR=""
+NETMASK=""
+STARTMODE="auto"
+
+```
+
+The following table describes the parameters.
+
+| Parameter | Mandatory or Not | Value | Description |
+ | :-------- | -------- | :------------------------------------------------ | :----------------------------------------------------------- |
+ | BOOTPROTO | Yes | none / static / dhcp | **none**: No protocol is used for boot, and no IP address is assigned. **static**: An IP address is statically assigned. **dhcp**: An IP address is dynamically obtained using the dynamic host configuration protocol (DHCP).|
+ | DEVICE | Yes | Example: **eth1** | NIC name. |
+ | IPADDR | Yes | Example: **192.168.11.100** | IP address. This parameter must be configured only when the value of **BOOTPROTO** is **static**.|
+ | NETMASK | Yes | - | Subnet mask. This parameter must be configured only when the value of **BOOTPROTO** is **static**.|
+  | STARTMODE | Yes | manual / auto / hotplug / ifplugd / nfsroot / off | NIC start mode. **manual**: A user runs the `ifup` command on a terminal to start an NIC. **auto**/**hotplug**/**ifplugd**/**nfsroot**: An NIC is started when the OS identifies it. **off**: An NIC cannot be started in any situation. For details about the parameters, run the `man ifcfg` command on the host that is used to create the ISO image.|
+
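+For example, a static configuration for a second NIC **eth1** might look as follows (the IP address and subnet mask below are placeholders; adjust them to your own network plan):
+
+```shell
+<netconfig-1>
+BOOTPROTO="static"
+DEVICE="eth1"
+IPADDR="192.168.11.100"
+NETMASK="255.255.255.0"
+STARTMODE="auto"
+</netconfig-1>
+```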
+
+#### Configuring Kernel Parameters
+
+To ensure stable and efficient running of the system, you can modify kernel command line parameters as required. For an OS image created by imageTailor, you can modify the **GRUB_CMDLINE_LINUX** configuration in the **/opt/imageTailor/custom/cfg_openEuler/usr_file/etc/default/grub** file to modify the kernel command line parameters. The default settings of the kernel command line parameters in **GRUB_CMDLINE_LINUX** are as follows:
+
+```shell
+GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 crashkernel=512M oops=panic softlockup_panic=1 reserve_kbox_mem=16M crash_kexec_post_notifiers panic=3 console=tty0"
+```
+
+The meanings of the configurations are as follows (for details about other common kernel command line parameters, see related kernel documents):
+
+- net.ifnames=0 biosdevname=0
+
+ Name the NIC in traditional mode.
+
+- crashkernel=512M
+
+ The memory space reserved for kdump is 512 MB.
+
+- oops=panic panic=3
+
+ The kernel panics when an oops error occurs, and the system restarts 3 seconds later.
+
+- softlockup_panic=1
+
+ The kernel panics when a soft-lockup is detected.
+
+- reserve_kbox_mem=16M
+
+ The memory space reserved for Kbox is 16 MB.
+
+- console=tty0
+
+ Specifies **tty0** as the output device of the first virtual console.
+
+- crash_kexec_post_notifiers
+
+ After the system crashes, the function registered with the panic notification chain is called first, and then kdump is executed.
+
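+For example, to reserve more memory for kdump, you could change **crashkernel=512M** to **crashkernel=1024M** in the GRUB configuration file of the customized image (a hypothetical edit; pick a value that matches your memory plan):
+
+```shell
+$ sudo vi /opt/imageTailor/custom/cfg_openEuler/usr_file/etc/default/grub
+GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 crashkernel=1024M oops=panic softlockup_panic=1 reserve_kbox_mem=16M crash_kexec_post_notifiers panic=3 console=tty0"
+```
+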
+### Creating an Image
+
+After customizing the operating system, you can use the `mkdliso` script to create the OS image file. The OS image created using imageTailor is an ISO image file.
+
+#### Command Description
+
+##### Syntax
+
+**mkdliso -p openEuler -c custom/cfg_openEuler [--minios yes|no|force] [--sec] [-h]**
+
+##### Parameter Description
+
+| Parameter| Mandatory| Description | Value Range |
+| -------- | -------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| -p | Yes | Specifies the product name. | **openEuler** |
+| -c | Yes | Specifies the relative path of the configuration file. | **custom/cfg_openEuler** |
+| --minios | No | Specifies whether to create the **initrd** file that is used to boot the system during system installation. | The default value is **yes**. **yes**: The **initrd** file will be created when the command is executed for the first time. When a subsequent `mkdliso` is executed, the system checks whether the **initrd** file exists in the **usr_install/boot** directory using sha256 verification. If the **initrd** file exists, it is not created again. Otherwise, it is created. **no**: The **initrd** file is not created. The **initrd** file used for system boot and running is the same. **force**: The **initrd** file will be created forcibly, regardless of whether it exists in the **usr_install/boot** directory or not.|
+| --sec | No | Specifies whether to perform security hardening on the generated ISO file. If this parameter is not specified, the user must accept the resulting security risks. | N/A |
+| -h | No | Obtains help information. | N/A |
+
+#### Image Creation Guide
+
+To create an ISO image using `mkdliso`, perform the following steps:
+
+> NOTE:
+>
+> - The absolute path to `mkdliso` must not contain spaces. Otherwise, the ISO image creation will fail.
+> - In the environment for creating the ISO image, the value of **umask** must be set to **0022**.
+
+1. Run the `mkdliso` command as the **root** user to generate the ISO image file. The following command is used for reference:
+
+ ```shell
+ # sudo /opt/imageTailor/mkdliso -p openEuler -c custom/cfg_openEuler --sec
+ ```
+
+ After the command is executed, the created files are stored in the **/opt/imageTailor/result/{date}** directory, including **openEuler-aarch64.iso** and **openEuler-aarch64.iso.sha256**.
+
+2. Verify the integrity of the ISO image file. Assume that the date and time is **2022-03-21-14-48**.
+
+ ```shell
+ $ cd /opt/imageTailor/result/2022-03-21-14-48/
+ $ sha256sum -c openEuler-aarch64.iso.sha256
+ ```
+
+ If the following information is displayed, the ISO image creation is complete.
+
+ ```
+ openEuler-aarch64.iso: OK
+ ```
+
+ If the following information is displayed, the image is incomplete. The ISO image file is damaged and needs to be created again.
+
+ ```shell
+ openEuler-aarch64.iso: FAILED
+ sha256sum: WARNING: 1 computed checksum did NOT match
+ ```
+
+3. View the logs.
+
+   After an image is created, you can view logs as required (for example, when an error occurs during image creation). When an image is created for the first time, the corresponding log file and security hardening log file are compressed into a TAR package (the log file is named in the format of **sys_custom_log_{Date}.tar.gz**) and stored in the **result/log** directory. Only the latest 50 compressed log packages are stored in this directory. If the number of compressed log packages exceeds 50, the earliest files will be overwritten.
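+
+   For example, a minimal sketch of checking and extracting the latest log package (the timestamp in the file name is hypothetical):
+
+   ```shell
+   $ ls /opt/imageTailor/result/log/
+   $ tar -zxvf /opt/imageTailor/result/log/sys_custom_log_20220321144823.tar.gz -C /tmp/
+   ```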
+
+
+
+### Tailoring Time Zones
+
+After the customized ISO image is installed, you can tailor the time zones supported by the openEuler system as required. This section describes how to tailor the time zones.
+
+The information about time zones supported by openEuler is stored in the time zone folder **/usr/share/zoneinfo**. You can run the following command to view the time zone information:
+
+```shell
+$ ls /usr/share/zoneinfo/
+Africa/ America/ Asia/ Atlantic/ Australia/ Etc/ Europe/
+Pacific/ zone.tab
+```
+
+Each subfolder represents an area. The current areas include continents, oceans, and **Etc**. Each area folder contains the locations that belong to it. Generally, a location is a city or an island.
+
+All time zones are in the format of *area/location*. For example, if China Standard Time is used in southern China, the time zone is Asia/Shanghai (the location is not necessarily the capital). The corresponding time zone file is:
+
+```
+/usr/share/zoneinfo/Asia/Shanghai
+```
+
+If you want to tailor some time zones, delete the corresponding time zone files.
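+
+For example, a hypothetical tailoring step that removes the **Pacific** area and a single unused location (make sure the system time zone you use does not depend on the deleted files):
+
+```shell
+$ sudo rm -rf /usr/share/zoneinfo/Pacific
+$ sudo rm -f /usr/share/zoneinfo/Atlantic/Bermuda
+```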
+
+### Customization Example
+
+This section describes how to use imageTailor to create an ISO image.
+
+1. Check whether the environment used to create the ISO meets the requirements.
+
+ ``` shell
+ $ cat /etc/openEuler-release
+ openEuler release 22.03 LTS
+ ```
+
+2. Ensure that the root directory has at least 40 GB free space.
+
+ ```shell
+ $ df -h
+ Filesystem Size Used Avail Use% Mounted on
+ ......
+ /dev/vdb 196G 28K 186G 1% /
+ ```
+
+3. Install the imageTailor tailoring tool. For details, see [Installation](#installation).
+
+ ```shell
+ $ sudo yum install -y imageTailor
+ $ ll /opt/imageTailor/
+ total 88K
+ drwxr-xr-x. 3 root root 4.0K Mar 3 08:00 custom
+ drwxr-xr-x. 10 root root 4.0K Mar 3 08:00 kiwi
+ -r-x------. 1 root root 69K Mar 3 08:00 mkdliso
+ drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 repos
+ drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 security-tool
+ ```
+
+4. Configure a local repo source.
+
+ ```shell
+ $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso
+ $ sudo mkdir -p /opt/openEuler_repo
+ $ sudo mount openEuler-22.03-LTS-everything-aarch64-dvd.iso /opt/openEuler_repo
+ mount: /opt/openEuler_repo: WARNING: source write-protected, mounted read-only.
+ $ sudo rm -rf /opt/imageTailor/repos/euler_base && sudo mkdir -p /opt/imageTailor/repos/euler_base
+ $ sudo cp -ar /opt/openEuler_repo/Packages/* /opt/imageTailor/repos/euler_base
+ $ sudo chmod -R 644 /opt/imageTailor/repos/euler_base
+ $ sudo ls /opt/imageTailor/repos/euler_base|wc -l
+ 2577
+ $ sudo umount /opt/openEuler_repo && sudo rm -rf /opt/openEuler_repo
+ $ cd /opt/imageTailor
+ ```
+
+5. Change the **root** and GRUB passwords.
+
+ Replace **\${pwd}** with the encrypted password by referring to [Configuring Initial Passwords](#configuring-initial-passwords).
+
+ ```shell
+ $ cd /opt/imageTailor/
+ $ sudo vi custom/cfg_openEuler/usr_file/etc/default/grub
+ GRUB_PASSWORD="${pwd1}"
+ $
+ $ sudo vi kiwi/minios/cfg_minios/rpm.conf
+
+
+
+ $
+ $ sudo vi custom/cfg_openEuler/rpm.conf
+
+
+
+ ```
+
+6. Run the tailoring command.
+
+ ```shell
+ $ sudo rm -rf /opt/imageTailor/result
+ $ sudo ./mkdliso -p openEuler -c custom/cfg_openEuler --minios force
+ ......
+ Complete release iso file at: result/2022-03-09-15-31/openEuler-aarch64.iso
+ move all mkdliso log file to result/log/sys_custom_log_20220309153231.tar.gz
+ $ ll result/2022-03-09-15-31/
+ total 889M
+ -rw-r--r--. 1 root root 889M Mar 9 15:32 openEuler-aarch64.iso
+ -rw-r--r--. 1 root root 87 Mar 9 15:32 openEuler-aarch64.iso.sha256
+ ```
diff --git a/docs/en/docs/Virtualization/LibcarePlus.md b/docs/en/docs/Virtualization/LibcarePlus.md
new file mode 100644
index 0000000000000000000000000000000000000000..d554c3fc59c4cbd702a3ec10c7dcc74285fcc7f7
--- /dev/null
+++ b/docs/en/docs/Virtualization/LibcarePlus.md
@@ -0,0 +1,410 @@
+# LibcarePlus
+
+
+
+- [LibcarePlus](#libcareplus)
+ - [Overview](#overview)
+ - [Hardware and Software Requirements](#hardware-and-software-requirements)
+ - [Precautions and Constraints](#precautions-and-constraints)
+ - [Installing LibcarePlus](#installing-libcareplus)
+ - [Software Installation Dependencies](#software-installation-dependencies)
+ - [Installing LibcarePlus](#installing-libcareplus-1)
+ - [Creating LibcarePlus Hot Patches](#creating-libcareplus-hot-patches)
+ - [Introduction](#introduction)
+ - [Manual Creation](#manual-creation)
+ - [Creation Through a Script](#creation-through-a-script)
+ - [Applying the LibcarePlus Hot Patch](#applying-the-libcareplus-hot-patch)
+ - [Preparation](#preparation)
+ - [Loading the Hot Patch](#loading-the-hot-patch)
+ - [Querying a Hot Patch](#querying-a-hot-patch)
+ - [Uninstalling the Hot Patch](#uninstalling-the-hot-patch)
+
+
+
+## Overview
+
+LibcarePlus is a hot patch framework for user-mode processes. It can perform hot patch operations on target processes running on the Linux system without restarting the processes. Hot patches can be used to fix CVEs and urgent bugs without interrupting application services.
+
+## Hardware and Software Requirements
+
+The following software and hardware requirements must be met to use LibcarePlus on openEuler:
+
+- Currently, the x86 and ARM64 architectures are supported.
+
+- LibcarePlus can run on any Linux distribution that supports **libunwind**, **elfutils**, and **binutils**.
+- LibcarePlus uses the **ptrace()** system call, which requires the corresponding kernel configuration option to be enabled in the Linux distribution.
+- LibcarePlus needs the symbol table of the original executable file when creating a hot patch. Do not strip the symbol table too early.
+- On a Linux OS where SELinux is enabled, manually adapt the SELinux policies (a quick check for these last two points is sketched after this list).
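+
+The following is a minimal sketch for checking the last two points, assuming the Yama LSM and the SELinux user-space tools are available on your distribution (the outputs shown are examples):
+
+``` shell
+# Check the ptrace restriction level (0 allows tracing processes owned by the same user).
+# cat /proc/sys/kernel/yama/ptrace_scope
+0
+# Check the current SELinux mode.
+# getenforce
+Permissive
+```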
+
+## Precautions and Constraints
+
+When using LibcarePlus, comply with the following hot patch specifications and constraints:
+
+- Only the code written in the C language is supported. The assembly language is not supported.
+- Only user-mode programs are supported. Dynamic library patches are not supported.
+- The code file name must comply with the C language identifier naming specifications. That is, the code file name consists of letters (A-Z and a-z), digits (0-9), and underscores (_) but the first character cannot be a digit. Special characters such as hyphens (-) and dollar signs ($) are not allowed.
+- Incremental patches are supported. Multiple patches can be installed on a process. However, you need to design the patch installation and uninstallation management. Generally, the installation and uninstallation comply with the first-in, last-out (FILO) rule.
+- Automatic patch loading is not natively supported. You can design an automatic patch loading method for a specific process.
+- Patch query is supported.
+- The static function patch is restricted by the symbol table that can find the function in the system.
+- Hot patches are process-specific. That is, a hot patch of a dynamic library can be applied only to processes that invoke the dynamic library.
+- The number of patches that can be applied to a process is limited by the range of the jump instruction and the size of the hole in the virtual memory address space. Generally, up to 512 patches can be applied to a process.
+- Thread local storage (TLS) variables of the initial executable (IE) model can be modified.
+- Symbols defined in a patch cannot be used in subsequent patches.
+- Hot patches are not supported in the following scenarios:
+ - Infinite loop function, non-exit function, inline function, initialization function, and non-maskable interrupt (NMI) function
+ - Replacing global variables
+ - Functions less than 5 bytes
+ - Modifying the header file
+ - Adding or deleting the input and output parameters of the target function
+ - Changing (adding, deleting, or modifying) data structure members
+  - Modifying the C files that contain GCC macros such as `__LINE__` and `__FILE__`
+ - Modifying the Intel vector assembly instruction
+
+## Installing LibcarePlus
+
+### Software Installation Dependencies
+
+LibcarePlus depends on **libunwind**, **elfutils**, and **binutils** at run time. On an openEuler system configured with a Yum repository, you can run the following command to install the software that LibcarePlus depends on:
+
+``` shell
+# yum install -y binutils elfutils elfutils-libelf-devel libunwind-devel
+```
+
+### Installing LibcarePlus
+
+```shell
+# yum install libcareplus -y
+```
+
+Check whether LibcarePlus is installed.
+
+``` shell
+# libcare-ctl -h
+usage: libcare-ctl [options] [args]
+
+Options:
+ -v - verbose mode
+ -h - this message
+
+Commands:
+ patch - apply patch to a user-space process
+ unpatch- unapply patch from a user-space process
+ info - show info on applied patches
+```
+
+## Creating LibcarePlus Hot Patches
+
+### Introduction
+
+LibcarePlus hot patch creation methods:
+
+- Manual creation
+- Creation through a script
+
+The process of manually creating a hot patch is complex. For a project with a large amount of code, for example, QEMU, it is extremely difficult to manually create a hot patch. You are advised to use the script provided by LibcarePlus to generate a hot patch file with one click.
+
+#### Manual Creation
+
+The following takes the original file **foo.c** and the patch file **bar.c** as examples to describe how to manually create a hot patch.
+
+1. Prepare the original file and patch file written in the C language. For example, **foo.c** and **bar.c**.
+
+ ``` c
+ // foo.c
+    #include <stdio.h>
+    #include <unistd.h>
+
+ void print_hello(void)
+ {
+ printf("Hello world!\n");
+ }
+
+ int main(void)
+ {
+ while (1) {
+ print_hello();
+ sleep(1);
+ }
+ }
+ ```
+
+ ``` c
+ // bar.c
+    #include <stdio.h>
+    #include <unistd.h>
+
+ void print_hello(void)
+ {
+ printf("Hello world %s!\n", "being patched");
+ }
+
+ int main(void)
+ {
+ while (1) {
+ print_hello();
+ sleep(1);
+ }
+ }
+ ```
+
+
+
+
+2. Build the original file and patch file to obtain the assembly files **foo.s** and **bar.s**.
+
+ ``` shell
+ # gcc -S foo.c
+ # gcc -S bar.c
+ # ls
+ bar.c bar.s foo.c foo.s
+ ```
+
+3. Run `kpatch_gensrc` to compare **foo.s** and **bar.s** and generate the **foobar.s** file that contains the assembly content of the original file and the differences.
+
+ ``` shell
+ # sed -i 's/bar.c/foo.c/' bar.s
+ # kpatch_gensrc --os=rhel6 -i foo.s -i bar.s -o foobar.s --force-global
+ ```
+
+    By default, `kpatch_gensrc` compares assembly files generated from the same C source file. Therefore, before the comparison, run the `sed` command to change the file name **bar.c** in the patch assembly file **bar.s** to the original file name **foo.c**. Then call `kpatch_gensrc` with the input files **foo.s** and **bar.s** and the output file **foobar.s**.
+
+4. Compile the original assembly file **foo.s** and the generated assembly file **foobar.s** into the executable files **foo** and **foobar**.
+
+ ``` shell
+ # gcc -o foo foo.s
+ # gcc -o foobar foobar.s -Wl,-q
+ ```
+
+    The **-Wl,-q** linker option preserves the relocation sections in **foobar**.
+
+5. Use `kpatch_strip` to remove the duplicate content from the executables **foo** and **foobar** and reserve the content required for creating hot patches.
+
+ ``` shell
+ # kpatch_strip --strip foobar foobar.stripped
+ # kpatch_strip --rel-fixup foo foobar.stripped
+ # strip --strip-unneeded foobar.stripped
+ # kpatch_strip --undo-link foo foobar.stripped
+ ```
+
+ The options in the preceding command are described as follows:
+
+ - **--strip** removes useless sections for patch creation from **foobar**.
+ - **--rel-fixup** repairs the address of the variables and functions accessed in the patch.
+ - **strip --strip-unneeded** removes the useless symbol information for hot patch relocation.
+ - **--undo-link** changes the symbol address in a patch from absolute to relative.
+
+6. Create a hot patch file.
+
+    After the preceding operations, the content required for creating the hot patch is ready. Pass the Build ID of the original executable file and **foobar.stripped** (the output file of `kpatch_strip`) to the `kpatch_make` command to generate a hot patch file.
+
+ ``` shell
+ # str=$(readelf -n foo | grep 'Build ID')
+ # substr=${str##* }
+ # kpatch_make -b $substr -i 0001 foobar.stripped -o foo.kpatch
+ # ls
+ bar.c bar.s foo foobar foobar.s foobar.stripped foo.c foo.kpatch foo.s
+ ```
+
+ The final hot patch file **foo.kpatch** whose patch ID is **0001** is obtained.
+
+#### Creation Through a Script
+
+This section describes how to use the LibcarePlus built-in **libcare-patch-make** script to create a hot patch file. The original file **foo.c** and the patch file **bar.c** are used as an example.
+
+1. Run the `diff` command to generate the comparison file of **foo.c** and **bar.c**.
+
+ ``` shell
+ # diff -up foo.c bar.c > foo.patch
+ ```
+
+ The content of the **foo.patch** file is as follows:
+
+ ``` diff
+ --- foo.c 2020-12-09 15:39:51.159632075 +0800
+ +++ bar.c 2020-12-09 15:40:03.818632220 +0800
+ @@ -1,10 +1,10 @@
+ -// foo.c
+ +// bar.c
+     #include <stdio.h>
+     #include <unistd.h>
+
+ void print_hello(void)
+ {
+ - printf("Hello world!\n");
+ + printf("Hello world %s!\n", "being patched");
+ }
+
+ int main(void)
+ ```
+
+
+
+
+
+2. Write the **makefile** for building **foo.c** as follows:
+
+ ``` makefile
+ all: foo
+
+ foo: foo.c
+ $(CC) -o $@ $<
+
+ clean:
+ rm -f foo
+
+ install: foo
+ mkdir $$DESTDIR || :
+ cp foo $$DESTDIR
+ ```
+
+
+
+
+
+3. After the **makefile** is done, directly call `libcare-patch-make`. If `libcare-patch-make` asks you which file to apply the patch to, enter the original file name, as shown in the following:
+
+ ``` shell
+ # libcare-patch-make --clean -i 0001 foo.patch
+ rm -f foo
+ BUILDING ORIGINAL CODE
+ /usr/local/bin/libcare-cc -o foo foo.c
+ INSTALLING ORIGINAL OBJECTS INTO /libcareplus/test/lpmake
+ mkdir $DESTDIR || :
+ cp foo $DESTDIR
+ applying foo.patch...
+ can't find file to patch at input line 3
+ Perhaps you used the wrong -p or --strip option?
+ The text leading up to this was:
+ --------------------------
+ |--- foo.c 2020-12-10 09:43:04.445375845 +0800
+ |+++ bar.c 2020-12-10 09:48:36.778379648 +0800
+ --------------------------
+ File to patch: foo.c
+ patching file foo.c
+ BUILDING PATCHED CODE
+ /usr/local/bin/libcare-cc -o foo foo.c
+ INSTALLING PATCHED OBJECTS INTO /libcareplus/test/.lpmaketmp/patched
+ mkdir $DESTDIR || :
+ cp foo $DESTDIR
+ MAKING PATCHES
+ Fixing up relocation printf@@GLIBC_2.2.5+fffffffffffffffc
+ Fixing up relocation print_hello+0
+ patch for /libcareplus/test/lpmake/foo is in /libcareplus/test/patchroot/700297b7bc56a11e1d5a6fb564c2a5bc5b282082.kpatch
+ ```
+
+ After the command is executed, the output indicates that the hot patch file is in the **patchroot** directory of the current directory, and the executable file is in the **lpmake** directory. By default, the Build ID is used to name a hot patch file generated by a script.
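+
+    For example, you can then list the generated artifacts as follows (the Build ID in the patch file name comes from this particular build and will differ in yours):
+
+    ``` shell
+    # ls lpmake/
+    foo
+    # ls patchroot/
+    700297b7bc56a11e1d5a6fb564c2a5bc5b282082.kpatch
+    ```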
+
+
+
+## Applying the LibcarePlus Hot Patch
+
+The following uses the original file **foo.c** and the patch file **bar.c** as an example to describe how to use the LibcarePlus hot patch.
+
+### Preparation
+
+Before using the LibcarePlus hot patch, prepare the original executable file **foo** and hot patch file **foo.kpatch**.
+
+### Loading the Hot Patch
+
+The procedure for applying the LibcarePlus hot patch is as follows:
+
+1. In the first shell window, run the executable file to be patched:
+
+ ``` shell
+ # ./lpmake/foo
+ Hello world!
+ Hello world!
+ Hello world!
+ ```
+
+2. In the second shell window, run the `libcare-ctl` command to apply the hot patch:
+
+ ``` shell
+ # libcare-ctl -v patch -p $(pidof foo) ./foo.kpatch
+ ```
+
+ If the hot patch is applied successfully, the following information is displayed in the second shell window:
+
+ ``` shell
+ 1 patch hunk(s) have been successfully applied to PID '10999'
+ ```
+
+ The following information is displayed for the target process running in the first shell window:
+
+ ``` shell
+ Hello world!
+ Hello world!
+ Hello world being patched!
+ Hello world being patched!
+ ```
+
+
+### Querying a Hot Patch
+
+The procedure for querying a LibcarePlus hot patch is as follows:
+
+1. Run the following command in the second shell window:
+
+ ```shell
+ # libcare-ctl info -p $(pidof foo)
+
+ ```
+
+ If a hot patch is installed, the following information is displayed in the second shell window:
+
+ ```shell
+ Pid: 551763
+ Target: foo
+ Build id: df05a25bdadd282812d3ee5f0a460e69038575de
+ Applied patch number: 1
+ Patch id: 0001
+ ```
+
+
+
+
+### Uninstalling the Hot Patch
+
+The procedure for uninstalling the LibcarePlus hot patch is as follows:
+
+1. Run the following command in the second shell window:
+
+ ``` shell
+ # libcare-ctl unpatch -p $(pidof foo)
+ ```
+
+ If the hot patch is uninstalled successfully, the following information is displayed in the second shell window:
+
+ ``` shell
+ 1 patch hunk(s) were successfully cancelled from PID '10999'
+ ```
+
+2. The following information is displayed for the target process running in the first shell window:
+
+ ``` shell
+ Hello world being patched!
+ Hello world being patched!
+ Hello world!
+ Hello world!
+ ```
diff --git a/docs/en/docs/Virtualization/tool-guide.md b/docs/en/docs/Virtualization/tool-guide.md
index 06bbb1650c8cb6a6ff1a132afc346de9b42ea321..565ebecf35494d5cc45e3a4f8f2a4973841fa0dd 100644
--- a/docs/en/docs/Virtualization/tool-guide.md
+++ b/docs/en/docs/Virtualization/tool-guide.md
@@ -1,196 +1 @@
-# Tool Guide
-
-- [vmtop](#vmtop)
-
-## vmtop
-
-### Overview
-
-vmtop is a user-mode tool running on the host machine. You can use the vmtop tool to dynamically view the usage of VM resources in real time, such as CPU usage, memory usage, and the number of vCPU traps. Therefore, the vmtop tool can be used to locate virtualization problems and optimize performance.
-
-#### Multi-Architecture Support
-
-Currently, the vmtop supports the AArch64 and x86_64 processor architectures.
-
-#### Display Item Description
-
-The vmtop display items vary according to the processor architecture. This document describes the meaning of each display item and whether it is displayed in the corresponding architecture.
-Note: The following sampling difference refers to the difference between two times of data obtained in a specified interval.
-
-##### **Display Items of the AArch64 and x86_64 Architectures**
-
-- VM/task-name: VM/Process name
-- DID: VM ID
-- PID: PID of the qemu process of the VM
-- %CPU: CPU usage of a process
-- EXTsum: Total number of KVM exits (sampling difference)
-- S: Process status
-- P: ID of the physical CPU occupied by a process
-- %ST: Ratio of the preemption time to the CPU running time
-- %GUE: Ratio of the VM internal occupation time to the CPU running time
-- %HYP: Virtualization overhead ratio
-
-##### Display Items Only for the Aarch64 Architecture
-
-- EXThvc: Number of hvc-exits (sampling difference)
-- EXTwfe: Number of wfe-exits (sampling difference)
-- EXTwfi: Number of wfi-exits (sampling difference)
-- EXTmmioU: Number of mmioU-exits (sampling difference)
-- EXTmmioK: Number of mmioK-exits (sampling difference)
-- EXTfp: Number of fp-exits (sampling difference)
-- EXTirq: Number of irq-exits (sampling difference)
-- EXTsys64: Number of sys64 exits (sampling difference)
-- EXTmabt: Number of mem abort exits (sampling difference)
-
-##### Display Items Only for the x86_64 Architecture
-
-- PFfix: Number of page faults (sampling difference)
-- PFgu: Number of times that page faults are injected to the guest OS (sampling difference)
-- INvlpg: Number of times that a TLB item is flushed (one of the TLB items, which is not fixed)
-- EXTio: Number of io VM-exit times (sampling difference)
-- EXTmmio: Number of mmio VM-exit times (sampling difference)
-- EXThalt: Number of halt VM-exit times (sampling difference)
-- EXTsig: Number of VM-exits caused by signal processing (sampling difference)
-- EXTirq: Number of VM-exits caused by interrupts (sampling difference)
-- EXTnmiW: Number of VM-exit times caused by non-maskable interrupts (sampling difference)
-- EXTirqW: Interruptwindow mechanism. When the interrupt function is enabled, exit is used to inject interrupts (sampling difference)
-- IrqIn: Number of times that IRQ interrupts are injected (sampling difference)
-- NmiIn: Number of times that NMI interrupts are injected (sampling difference)
-- TLBfl: Number of times that the entire TLB is flushed (sampling difference)
-- HostReL: Number of times that the host status is overloaded (sampling difference)
-- Hyperv: Number of times that the guest OS is simulated to call hypercal in virtualization-assistant mode (sampling difference)
-- EXTcr: Number of times that the access to the CR register exits (sampling difference)
-- EXTrmsr: Number of times that the read MSR exits (sampling difference)
-- EXTwmsr: Number of times that the write MSR exits (sampling difference)
-- EXTapic: Number of APIC write times (sampling difference)
-- EXTeptv: Ept page fault exit times (sampling difference)
-- EXTeptm: Number of Ept error exits (sampling difference)
-- EXTpau: Number of times that the VCPU pauses and exits (sampling difference)
-
-### Usage
-
-vmtop is a command line tool. You can directly run the vmtop in command line mode.
-In addition, the vmtop tool provides different options for querying different information.
-
-#### Syntax
-
-```sh
-vmtop [option]
-```
-
-#### Option Description
-
-- -d: sets the refresh interval, in seconds.
-- -H: displays the VM thread information.
-- -n: sets the number of refresh times and exits after the refresh is complete.
-- -b: displays Batch mode, which can be used to redirect to a file.
-- -h: displays help information.
-- -v: displays versions.
-- -p: monitors the VM with a specified ID.
-
-#### Keyboard Shortcut
-
-Shortcut key used when the vmtop is running.
-
-- H: displays or stops the VM thread information. The information is displayed by default.
-- up/down: moves the VM list upwards or downwards.
-- left/right: moves the cursor leftwards or rightwards to display the columns that are hidden due to the screen width.
-- f: enters the editing mode of a monitoring item and selects the monitoring item to be enabled.
-- q: exits the vmtop process.
-
-### Example
-
-Run the vmtop command on the host.
-
-```sh
-vmtop
-```
-
-The command output is as follows:
-
-```sh
-vmtop - 2020-09-14 09:54:48 - 1.0
-Domains: 1 running
-
- DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P %ST %GUE %HYP
- 2 example 4054916 13.0 0 0 1206 10 0 144 62 174 0 1452 S 106 0.0 99.7 16.0
-```
-
-As shown in the output, there is only one VM named "example" on the host. The ID is 2. The CPU usage is 13.0%. The total number of traps within one second is 1452. The physical CPU occupied by the VM process is CPU 106. The ratio of the VM internal occupation time to the CPU running time is 99.7%.
-
-1. Display VM thread information.
-Press H to display the thread information.
-
-```sh
-vmtop - 2020-09-14 10:11:27 - 1.0
-Domains: 1 running
-
- DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P %ST %GUE %HYP
- 2 example 4054916 13.0 0 0 1191 17 4 120 76 147 0 1435 S 119 0.0 123.7 4.0
- |_ qemu-kvm 4054916 0.0 0 0 0 0 0 0 0 0 0 0 S 119 0.0 0.0 0.0
- |_ qemu-kvm 4054928 0.0 0 0 0 0 0 0 0 0 0 0 S 119 0.0 0.0 0.0
- |_ signalfd_com 4054929 0.0 0 0 0 0 0 0 0 0 0 0 S 120 0.0 0.0 0.0
- |_ IO mon_iothr 4054932 0.0 0 0 0 0 0 0 0 0 0 0 S 117 0.0 0.0 0.0
- |_ CPU 0/KVM 4054933 3.0 0 0 280 6 4 28 19 41 0 350 S 105 0.0 27.9 0.0
- |_ CPU 1/KVM 4054934 3.0 0 0 260 0 0 16 12 36 0 308 S 31 0.0 20.0 0.0
- |_ CPU 2/KVM 4054935 3.0 0 0 341 0 0 44 20 26 0 387 R 108 0.0 27.9 4.0
- |_ CPU 3/KVM 4054936 5.0 0 0 310 11 0 32 25 44 0 390 S 103 0.0 47.9 0.0
- |_ memory_lock 4054940 0.0 0 0 0 0 0 0 0 0 0 0 S 126 0.0 0.0 0.0
- |_ vnc_worker 4054944 0.0 0 0 0 0 0 0 0 0 0 0 S 118 0.0 0.0 0.0
- |_ worker 4143738 0.0 0 0 0 0 0 0 0 0 0 0 S 120 0.0 0.0 0.0
-```
-
-The example VM has 11 threads, including the vCPU thread, vnc_worker, and IO mon_iotreads. Each thread also displays detailed CPU usage and trap information.
-
-2. Select the monitoring item.
-Enter f to edit the monitoring item.
-
-```sh
-field filter - select which field to be showed
-Use up/down to navigate, use space to set whether chosen filed to be showed
-'q' to quit to normal display
-
- * DID
- * VM/task-name
- * PID
- * %CPU
- * EXThvc
- * EXTwfe
- * EXTwfi
- * EXTmmioU
- * EXTmmioK
- * EXTfp
- * EXTirq
- * EXTsys64
- * EXTmabt
- * EXTsum
- * S
- * P
- * %ST
- * %GUE
- * %HYP
-```
-
-All monitoring items are displayed by default. You can press the up or down key to select a monitoring item, press the space key to set whether to display or hide the monitoring item, and press the q key to exit.
-After %ST, %GUE, and %HYP are hidden, the following information is displayed:
-
-```sh
-vmtop - 2020-09-14 10:23:25 - 1.0
-Domains: 1 running
-
- DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P
- 2 example 4054916 12.0 0 0 1213 14 1 144 68 168 0 1464 S 125
- |_ qemu-kvm 4054916 0.0 0 0 0 0 0 0 0 0 0 0 S 125
- |_ qemu-kvm 4054928 0.0 0 0 0 0 0 0 0 0 0 0 S 119
- |_ signalfd_com 4054929 0.0 0 0 0 0 0 0 0 0 0 0 S 120
- |_ IO mon_iothr 4054932 0.0 0 0 0 0 0 0 0 0 0 0 S 117
- |_ CPU 0/KVM 4054933 2.0 0 0 303 6 0 29 10 35 0 354 S 98
- |_ CPU 1/KVM 4054934 4.0 0 0 279 0 0 39 17 49 0 345 S 1
- |_ CPU 2/KVM 4054935 3.0 0 0 283 0 0 33 20 40 0 343 S 122
- |_ CPU 3/KVM 4054936 3.0 0 0 348 8 1 43 21 44 0 422 S 110
- |_ memory_lock 4054940 0.0 0 0 0 0 0 0 0 0 0 0 S 126
- |_ vnc_worker 4054944 0.0 0 0 0 0 0 0 0 0 0 0 S 118
- |_ worker 1794 0.0 0 0 0 0 0 0 0 0 0 0 S 126
-```
-
-%ST, %GUE, and %HYP will not be displayed on the screen.
+To help users better use virtualization, openEuler provides a set of tools, including vmtop and LibcarePlus. This section describes how to install and use these tools.
\ No newline at end of file
diff --git a/docs/en/docs/Virtualization/installation-to-virtualization.md b/docs/en/docs/Virtualization/virtualization-installation.md
similarity index 100%
rename from docs/en/docs/Virtualization/installation-to-virtualization.md
rename to docs/en/docs/Virtualization/virtualization-installation.md
diff --git a/docs/en/docs/Virtualization/vmtop.md b/docs/en/docs/Virtualization/vmtop.md
new file mode 100644
index 0000000000000000000000000000000000000000..346cc01c219e3ca7d349c36ef39650cbb20a86df
--- /dev/null
+++ b/docs/en/docs/Virtualization/vmtop.md
@@ -0,0 +1,196 @@
+# Tool Guide
+
+- [vmtop](#vmtop)
+
+## vmtop
+
+### Overview
+
+vmtop is a user-mode tool running on the host machine. You can use the vmtop tool to dynamically view the usage of VM resources in real time, such as CPU usage, memory usage, and the number of vCPU traps. Therefore, the vmtop tool can be used to locate virtualization problems and optimize performance.
+
+#### Multi-Architecture Support
+
+Currently, the vmtop supports the AArch64 and x86_64 processor architectures.
+
+#### Display Item Description
+
+The vmtop display items vary according to the processor architecture. This document describes the meaning of each display item and whether it is displayed in the corresponding architecture.
+Note: The following sampling difference refers to the difference between two times of data obtained in a specified interval.
+
+##### **Display Items of the AArch64 and x86_64 Architectures**
+
+- VM/task-name: VM/Process name
+- DID: VM ID
+- PID: PID of the qemu process of the VM
+- %CPU: CPU usage of a process
+- EXTsum: Total number of KVM exits (sampling difference)
+- S: Process status
+- P: ID of the physical CPU occupied by a process
+- %ST: Ratio of the preemption time to the CPU running time
+- %GUE: Ratio of the VM internal occupation time to the CPU running time
+- %HYP: Virtualization overhead ratio
+
+##### Display Items Only for the AArch64 Architecture
+
+- EXThvc: Number of hvc-exits (sampling difference)
+- EXTwfe: Number of wfe-exits (sampling difference)
+- EXTwfi: Number of wfi-exits (sampling difference)
+- EXTmmioU: Number of mmioU-exits (sampling difference)
+- EXTmmioK: Number of mmioK-exits (sampling difference)
+- EXTfp: Number of fp-exits (sampling difference)
+- EXTirq: Number of irq-exits (sampling difference)
+- EXTsys64: Number of sys64 exits (sampling difference)
+- EXTmabt: Number of mem abort exits (sampling difference)
+
+##### Display Items Only for the x86_64 Architecture
+
+- PFfix: Number of page faults (sampling difference)
+- PFgu: Number of times that page faults are injected to the guest OS (sampling difference)
+- INvlpg: Number of times that a single TLB entry is flushed (the flushed entry is not a fixed one)
+- EXTio: Number of io VM-exit times (sampling difference)
+- EXTmmio: Number of mmio VM-exit times (sampling difference)
+- EXThalt: Number of halt VM-exit times (sampling difference)
+- EXTsig: Number of VM-exits caused by signal processing (sampling difference)
+- EXTirq: Number of VM-exits caused by interrupts (sampling difference)
+- EXTnmiW: Number of VM-exit times caused by non-maskable interrupts (sampling difference)
+- EXTirqW: Number of interrupt-window exits, which occur so that pending interrupts can be injected once the guest enables interrupts (sampling difference)
+- IrqIn: Number of times that IRQ interrupts are injected (sampling difference)
+- NmiIn: Number of times that NMI interrupts are injected (sampling difference)
+- TLBfl: Number of times that the entire TLB is flushed (sampling difference)
+- HostReL: Number of times that the host state is reloaded (sampling difference)
+- Hyperv: Number of times that the guest OS is simulated to call hypercall in virtualization-assistant mode (sampling difference)
+- EXTcr: Number of times that the access to the CR register exits (sampling difference)
+- EXTrmsr: Number of times that the read MSR exits (sampling difference)
+- EXTwmsr: Number of times that the write MSR exits (sampling difference)
+- EXTapic: Number of APIC write times (sampling difference)
+- EXTeptv: Ept page fault exit times (sampling difference)
+- EXTeptm: Number of Ept error exits (sampling difference)
+- EXTpau: Number of times that the VCPU pauses and exits (sampling difference)
+
+### Usage
+
+vmtop is a command line tool. You can directly run the vmtop in command line mode.
+In addition, the vmtop tool provides different options for querying different information.
+
+#### Syntax
+
+```sh
+vmtop [option]
+```
+
+#### Option Description
+
+- -d: sets the refresh interval, in seconds.
+- -H: displays the VM thread information.
+- -n: sets the number of refresh times and exits after the refresh is complete.
+- -b: batch mode, which can be used to redirect output to a file.
+- -h: displays help information.
+- -v: displays versions.
+- -p: monitors the VM with a specified ID.
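+
+For example, a hypothetical invocation that refreshes every 2 seconds, samples 5 times in batch mode, and redirects the output to a file:
+
+```sh
+vmtop -d 2 -n 5 -b > vmtop.log
+```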
+
+#### Keyboard Shortcut
+
+The following shortcut keys are available while vmtop is running.
+
+- H: shows or hides the VM thread information. The information is displayed by default.
+- up/down: moves the VM list upwards or downwards.
+- left/right: moves the cursor leftwards or rightwards to display the columns that are hidden due to the screen width.
+- f: enters the editing mode of a monitoring item and selects the monitoring item to be enabled.
+- q: exits the vmtop process.
+
+### Example
+
+Run the vmtop command on the host.
+
+```sh
+vmtop
+```
+
+The command output is as follows:
+
+```sh
+vmtop - 2020-09-14 09:54:48 - 1.0
+Domains: 1 running
+
+ DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P %ST %GUE %HYP
+ 2 example 4054916 13.0 0 0 1206 10 0 144 62 174 0 1452 S 106 0.0 99.7 16.0
+```
+
+As shown in the output, there is only one VM named "example" on the host. The ID is 2. The CPU usage is 13.0%. The total number of traps within one second is 1452. The physical CPU occupied by the VM process is CPU 106. The ratio of the VM internal occupation time to the CPU running time is 99.7%.
+
+1. Display VM thread information.
+Press H to display the thread information.
+
+```sh
+vmtop - 2020-09-14 10:11:27 - 1.0
+Domains: 1 running
+
+ DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P %ST %GUE %HYP
+ 2 example 4054916 13.0 0 0 1191 17 4 120 76 147 0 1435 S 119 0.0 123.7 4.0
+ |_ qemu-kvm 4054916 0.0 0 0 0 0 0 0 0 0 0 0 S 119 0.0 0.0 0.0
+ |_ qemu-kvm 4054928 0.0 0 0 0 0 0 0 0 0 0 0 S 119 0.0 0.0 0.0
+ |_ signalfd_com 4054929 0.0 0 0 0 0 0 0 0 0 0 0 S 120 0.0 0.0 0.0
+ |_ IO mon_iothr 4054932 0.0 0 0 0 0 0 0 0 0 0 0 S 117 0.0 0.0 0.0
+ |_ CPU 0/KVM 4054933 3.0 0 0 280 6 4 28 19 41 0 350 S 105 0.0 27.9 0.0
+ |_ CPU 1/KVM 4054934 3.0 0 0 260 0 0 16 12 36 0 308 S 31 0.0 20.0 0.0
+ |_ CPU 2/KVM 4054935 3.0 0 0 341 0 0 44 20 26 0 387 R 108 0.0 27.9 4.0
+ |_ CPU 3/KVM 4054936 5.0 0 0 310 11 0 32 25 44 0 390 S 103 0.0 47.9 0.0
+ |_ memory_lock 4054940 0.0 0 0 0 0 0 0 0 0 0 0 S 126 0.0 0.0 0.0
+ |_ vnc_worker 4054944 0.0 0 0 0 0 0 0 0 0 0 0 S 118 0.0 0.0 0.0
+ |_ worker 4143738 0.0 0 0 0 0 0 0 0 0 0 0 S 120 0.0 0.0 0.0
+```
+
+The example VM has 11 threads, including the vCPU threads, vnc_worker, and IO mon_iothr. Each thread also displays detailed CPU usage and trap information.
+
+2. Select the monitoring item.
+Enter f to edit the monitoring item.
+
+```sh
+field filter - select which field to be showed
+Use up/down to navigate, use space to set whether chosen filed to be showed
+'q' to quit to normal display
+
+ * DID
+ * VM/task-name
+ * PID
+ * %CPU
+ * EXThvc
+ * EXTwfe
+ * EXTwfi
+ * EXTmmioU
+ * EXTmmioK
+ * EXTfp
+ * EXTirq
+ * EXTsys64
+ * EXTmabt
+ * EXTsum
+ * S
+ * P
+ * %ST
+ * %GUE
+ * %HYP
+```
+
+All monitoring items are displayed by default. You can press the up or down key to select a monitoring item, press the space key to set whether to display or hide the monitoring item, and press the q key to exit.
+After %ST, %GUE, and %HYP are hidden, the following information is displayed:
+
+```sh
+vmtop - 2020-09-14 10:23:25 - 1.0
+Domains: 1 running
+
+ DID VM/task-name PID %CPU EXThvc EXTwfe EXTwfi EXTmmioU EXTmmioK EXTfp EXTirq EXTsys64 EXTmabt EXTsum S P
+ 2 example 4054916 12.0 0 0 1213 14 1 144 68 168 0 1464 S 125
+ |_ qemu-kvm 4054916 0.0 0 0 0 0 0 0 0 0 0 0 S 125
+ |_ qemu-kvm 4054928 0.0 0 0 0 0 0 0 0 0 0 0 S 119
+ |_ signalfd_com 4054929 0.0 0 0 0 0 0 0 0 0 0 0 S 120
+ |_ IO mon_iothr 4054932 0.0 0 0 0 0 0 0 0 0 0 0 S 117
+ |_ CPU 0/KVM 4054933 2.0 0 0 303 6 0 29 10 35 0 354 S 98
+ |_ CPU 1/KVM 4054934 4.0 0 0 279 0 0 39 17 49 0 345 S 1
+ |_ CPU 2/KVM 4054935 3.0 0 0 283 0 0 33 20 40 0 343 S 122
+ |_ CPU 3/KVM 4054936 3.0 0 0 348 8 1 43 21 44 0 422 S 110
+ |_ memory_lock 4054940 0.0 0 0 0 0 0 0 0 0 0 0 S 126
+ |_ vnc_worker 4054944 0.0 0 0 0 0 0 0 0 0 0 0 S 118
+ |_ worker 1794 0.0 0 0 0 0 0 0 0 0 0 0 S 126
+```
+
+%ST, %GUE, and %HYP will not be displayed on the screen.
diff --git a/docs/en/docs/desktop/kubesphere.md b/docs/en/docs/desktop/kubesphere.md
index 32baa2eb1941917d254052a63ff8a2bcc8b90b6f..40a47a96f1ac6e5c02d50301952e3078df4d3ddc 100644
--- a/docs/en/docs/desktop/kubesphere.md
+++ b/docs/en/docs/desktop/kubesphere.md
@@ -1,4 +1,4 @@
-# KubeSphere Installation Guide
+# KubeSphere Deployment Guide
This document describes how to install and deploy Kubernetes and KubeSphere clusters on openEuler 21.09.
diff --git a/docs/en/docs/rubik/example-of-isolation-for-hybrid-deployed-services.md b/docs/en/docs/rubik/example-of-isolation-for-hybrid-deployed-services.md
new file mode 100644
index 0000000000000000000000000000000000000000..d224456156c4a78a4e1e6f778084e7e0c8d380f2
--- /dev/null
+++ b/docs/en/docs/rubik/example-of-isolation-for-hybrid-deployed-services.md
@@ -0,0 +1,233 @@
+## Example of Isolation for Hybrid Deployed Services
+
+### Environment Preparation
+
+Check whether the kernel supports isolation of hybrid deployed services.
+
+```bash
+# Check whether isolation of hybrid deployed services is enabled in the **/boot/config-** system configuration.
+# If **CONFIG_QOS_SCHED=y**, the function is enabled. Example:
+cat /boot/config-5.10.0-60.18.0.50.oe2203.x86_64 | grep CONFIG_QOS
+CONFIG_QOS_SCHED=y
+```
+
+Install the Docker engine.
+
+```bash
+yum install -y docker-engine
+docker version
+# The following shows the output of docker version.
+Client:
+ Version: 18.09.0
+ EulerVersion: 18.09.0.300
+ API version: 1.39
+ Go version: go1.17.3
+ Git commit: aa1eee8
+ Built: Wed Mar 30 05:07:38 2022
+ OS/Arch: linux/amd64
+ Experimental: false
+
+Server:
+ Engine:
+ Version: 18.09.0
+ EulerVersion: 18.09.0.300
+ API version: 1.39 (minimum version 1.12)
+ Go version: go1.17.3
+ Git commit: aa1eee8
+ Built: Tue Mar 22 00:00:00 2022
+ OS/Arch: linux/amd64
+ Experimental: false
+```
+
+### Hybrid Deployed Services
+
+**Online Service ClickHouse**
+
+Use the clickhouse-benchmark tool to test the performance and collect statistics on performance metrics such as QPS, P50, P90, and P99. For details, see https://clickhouse.com/docs/en/operations/utilities/clickhouse-benchmark/.
+
+**Offline Service Stress**
+
+Stress is a CPU-intensive test tool. You can specify the **--cpu** parameter to start multiple concurrent CPU-intensive tasks to increase the stress on the system.
+
+### Usage
+
+1) Start a ClickHouse container (online service).
+
+2) Access the container and run the **clickhouse-benchmark** command. Set the number of concurrent threads to 10, the number of query times to 10000, and the total query time to 30s.
+
+3) Start a Stress container (offline service) at the same time and concurrently execute 10 CPU-intensive tasks to increase the stress on the environment.
+
+4) After the **clickhouse-benchmark** command is executed, a performance test report is generated.
+
+The **test_demo.sh** script for the isolation test for hybrid deployed services is as follows:
+
+```bash
+#!/bin/bash
+
+with_offline=${1:-no_offline}
+enable_isolation=${2:-no_isolation}
+stress_num=${3:-10}
+concurrency=10
+timeout=30
+output=/tmp/result.json
+online_container=
+offline_container=
+
+exec_sql="echo \"SELECT * FROM system.numbers LIMIT 10000000 OFFSET 10000000\" | clickhouse-benchmark -i 10000 -c $concurrency -t $timeout"
+
+function prepare()
+{
+ echo "Launch clickhouse container."
+ online_container=$(docker run -itd \
+ -v /tmp:/tmp:rw \
+ --ulimit nofile=262144:262144 \
+ -p 34424:34424 \
+ yandex/clickhouse-server)
+
+ sleep 3
+    echo "Clickhouse container launched."
+}
+
+function clickhouse()
+{
+ echo "Start clickhouse benchmark test."
+ docker exec $online_container bash -c "$exec_sql --json $output"
+ echo "Clickhouse benchmark test done."
+}
+
+function stress()
+{
+ echo "Launch stress container."
+ offline_container=$(docker run -itd joedval/stress --cpu $stress_num)
+ echo "Stress container launched."
+
+ if [ $enable_isolation == "enable_isolation" ]; then
+ echo "Set stress container qos level to -1."
+ echo -1 > /sys/fs/cgroup/cpu/docker/$offline_container/cpu.qos_level
+ fi
+}
+
+function benchmark()
+{
+ if [ $with_offline == "with_offline" ]; then
+ stress
+ sleep 3
+ fi
+ clickhouse
+ echo "Remove test containers."
+ docker rm -f $online_container
+ docker rm -f $offline_container
+ echo "Finish benchmark test for clickhouse(online) and stress(offline) colocation."
+ echo "===============================clickhouse benchmark=================================================="
+ cat $output
+ echo "===============================clickhouse benchmark=================================================="
+}
+
+prepare
+benchmark
+```
+
+### Test Results
+
+Independently execute the online service ClickHouse.
+
+```bash
+sh test_demo.sh no_offline no_isolation
+```
+
+The QoS **baseline data** (QPS/P50/P90/P99) of the online service is as follows:
+
+```json
+{
+"localhost:9000": {
+"statistics": {
+"QPS": 1.8853412284364512,
+......
+},
+"query_time_percentiles": {
+......
+"50": 0.484905256,
+"60": 0.519641313,
+"70": 0.570876148,
+"80": 0.632544937,
+"90": 0.728295525,
+"95": 0.808700418,
+"99": 0.873945121,
+......
+}
+}
+}
+```
+
+Start the offline service Stress, disable isolation of hybrid deployed services, and execute the **test_demo.sh** script.
+
+```bash
+# **with_offline** indicates that the offline service Stress is enabled.
+# **no_isolation** indicates that isolation of hybrid deployed services is disabled.
+sh test_demo.sh with_offline no_isolation
+```
+
+**When isolation of hybrid deployed services is disabled**, the QoS data (QPS/P50/P90/P99) of the ClickHouse service is as follows:
+
+```json
+{
+"localhost:9000": {
+"statistics": {
+"QPS": 0.9424028693636205,
+......
+},
+"query_time_percentiles": {
+......
+"50": 0.840476774,
+"60": 1.304607373,
+"70": 1.393591017,
+"80": 1.41277543,
+"90": 1.430316688,
+"95": 1.457534764,
+"99": 1.555646855,
+......
+}
+}
+}
+```
+
+Start the offline service Stress, enable isolation of hybrid deployed services, and execute the **test_demo.sh** script.
+
+```bash
+# **with_offline** indicates that the offline service Stress is enabled.
+# **enable_isolation** indicates that isolation of hybrid deployed services is enabled.
+sh test_demo.sh with_offline enable_isolation
+```
+
+**When isolation of hybrid deployed services is enabled**, the QoS data (QPS/P50/P90/P99) of the ClickHouse service is as follows:
+
+```json
+{
+"localhost:9000": {
+"statistics": {
+"QPS": 1.8825798759270718,
+......
+},
+"query_time_percentiles": {
+......
+"50": 0.485725185,
+"60": 0.512629901,
+"70": 0.55656488,
+"80": 0.636395956,
+"90": 0.734695906,
+"95": 0.804118275,
+"99": 0.887807409,
+......
+}
+}
+}
+```
+
+The following table lists the test results.
+
+| Service Deployment Mode | QPS | P50 | P90 | P99 |
+| -------------------------------------- | ------------- | ------------- | ------------- | ------------- |
+| ClickHouse (baseline) | 1.885 | 0.485 | 0.728 | 0.874 |
+| ClickHouse + Stress (isolation disabled)| 0.942 (-50%) | 0.840 (-42%) | 1.430 (-49%) | 1.556 (-44%) |
+| ClickHouse + Stress (isolation enabled) | 1.883 (-0.11%) | 0.486 (-0.21%) | 0.735 (-0.96%) | 0.888 (-1.58%) |
+
+When isolation of hybrid deployed services is disabled, the QPS of ClickHouse decreases from approximately 1.9 to 0.9, the service response delay (P90) increases from approximately 0.7s to 1.4s, and the QoS decreases by about 50%. When isolation of hybrid deployed services is enabled, the QPS and response delay (P50/P90/P99) of ClickHouse decrease by less than 2% compared with the baseline, and the QoS remains unchanged.
diff --git a/docs/en/docs/rubik/http-apis.md b/docs/en/docs/rubik/http-apis.md
new file mode 100644
index 0000000000000000000000000000000000000000..e5d315a0ad6a82ec9a14000f190ec24d29e6c07b
--- /dev/null
+++ b/docs/en/docs/rubik/http-apis.md
@@ -0,0 +1,67 @@
+# HTTP APIs
+
+## Overview
+
+The open APIs of Rubik are all HTTP APIs, including the API for setting or updating the pod priority, API for detecting the Rubik availability, and API for querying the Rubik version.
+
+## APIs
+
+### API for Setting or Updating the Pod Priority
+
+Rubik provides the function of setting or updating the pod priority. External systems can call this API to send pod information. Rubik sets the priority based on the received pod information to isolate resources. The API calling format is as follows:
+
+```bash
+HTTP POST /run/rubik/rubik.sock
+{
+ "Pods": {
+ "podaaa": {
+ "CgroupPath": "kubepods/burstable/podaaa",
+ "QosLevel": 0
+ },
+ "podbbb": {
+ "CgroupPath": "kubepods/burstable/podbbb",
+ "QosLevel": -1
+ }
+ }
+}
+```
+
+In the **Pods** settings, specify information about the pods whose priorities need to be set or updated. At least one pod must be specified for each HTTP request, and **CgroupPath** and **QosLevel** must be specified for each pod. The meanings of **CgroupPath** and **QosLevel** are as follows:
+
+| Item | Value Type| Value Range| Description |
+| ---------- | ---------- | ------------ | ------------------------------------------------------- |
+| QosLevel | Integer | 0, -1 | pod priority. The value **0** indicates that the service is an online service, and the value **-1** indicates that the service is an offline service. |
+| CgroupPath | String | Relative path | cgroup subpath of the pod (relative path in the cgroup subsystem)|
+
+The following is an example of calling the API:
+
+```sh
+curl -v -H "Accept: application/json" -H "Content-type: application/json" -X POST --data '{"Pods": {"podaaa": {"CgroupPath": "kubepods/burstable/podaaa","QosLevel": 0},"podbbb": {"CgroupPath": "kubepods/burstable/podbbb","QosLevel": -1}}}' --unix-socket /run/rubik/rubik.sock http://localhost/
+```
+
+### API for Detecting Availability
+
+As an HTTP service, Rubik provides an API for detecting whether Rubik is running.
+
+API format: HTTP/GET /ping
+
+The following is an example of calling the API:
+
+```sh
+curl -XGET --unix-socket /run/rubik/rubik.sock http://localhost/ping
+```
+
+If **ok** is returned, the Rubik service is running.
+
+### API for Querying Version Information
+
+Rubik allows you to query the current Rubik version number through an HTTP request.
+
+API format: HTTP/GET /version
+
+The following is an example of calling the API:
+
+```sh
+curl -XGET --unix-socket /run/rubik/rubik.sock http://localhost/version
+{"Version":"0.0.1","Release":"1","Commit":"29910e6","BuildTime":"2021-05-12"}
+```
diff --git a/docs/en/docs/rubik/installation-and-deployment.md b/docs/en/docs/rubik/installation-and-deployment.md
new file mode 100644
index 0000000000000000000000000000000000000000..6a713ec22549a757db709aed54149ed749d78b80
--- /dev/null
+++ b/docs/en/docs/rubik/installation-and-deployment.md
@@ -0,0 +1,199 @@
+# Installation and Deployment
+
+## Overview
+
+This document describes how to install and deploy the Rubik component.
+
+## Software and Hardware Requirements
+
+### Hardware
+
+* Architectures: x86 and AArch64
+* Drive: 1 GB or more
+* Memory: 100 MB or more
+
+### Software
+
+* OS: openEuler 22.03-LTS
+* Kernel: openEuler 22.03-LTS kernel
+
+### Environment Preparation
+
+* Install the openEuler OS. For details, see the _openEuler 22.03-LTS Installation Guide_.
+* Install and deploy Kubernetes. For details, see the _Kubernetes Cluster Deployment Guide_.
+* Install the Docker or iSulad container engine. If the iSulad container engine is used, you need to install the isula-build container image building tool.
+
+## Installing Rubik
+
+Rubik is deployed on each Kubernetes node as a DaemonSet. Therefore, you need to perform the following steps to install the Rubik RPM package on each node.
+
+1. Configure the Yum repositories openEuler 22.03-LTS and openEuler 22.03-LTS:EPOL (the Rubik component is available only in the EPOL repository).
+
+ ```
+    # openEuler 22.03-LTS officially released repository
+    [openEuler22.03]
+ name=openEuler22.03
+ baseurl=https://repo.openeuler.org/openEuler-22.03-LTS/everything/$basearch/
+ enabled=1
+ gpgcheck=1
+ gpgkey=https://repo.openeuler.org/openEuler-22.03-LTS/everything/RPM-GPG-KEY-openEuler
+ ```
+
+ ```
+    # openEuler 22.03-LTS:EPOL officially released repository
+    [Epol]
+ name=Epol
+ baseurl=https://repo.openeuler.org/openEuler-22.03-LTS/EPOL/$basearch/
+ enabled=1
+ gpgcheck=0
+ ```
+
+2. Install Rubik as the **root** user.
+
+ ```shell
+ sudo yum install -y rubik
+ ```
+
+
+> **Note**:
+>
+> Files related to the Rubik tool are installed in the **/var/lib/rubik** directory.
+
+## Deploying Rubik
+
+Rubik runs as a container in a Kubernetes cluster in hybrid deployment scenarios. It is used to isolate and restrict resources for services with different priorities to prevent offline services from interfering with online services, improving the overall resource utilization and ensuring the service quality of online services. Currently, Rubik supports isolation and restriction of CPU and memory resources, which must be used together with the openEuler 22.03-LTS kernel. To enable the memory priority feature (that is, to implement memory resource tiering for services with different priorities), you need to set **/proc/sys/vm/memcg_qos_enable**. The value can be **0** or **1**. The default value **0** indicates that the feature is disabled, and the value **1** indicates that the feature is enabled.
+
+```bash
+echo 1 | sudo tee /proc/sys/vm/memcg_qos_enable
+```
+
+### Deploying Rubik DaemonSet
+
+1. Use the Docker or isula-build engine to build Rubik images. Because Rubik is deployed as a DaemonSet, each node requires a Rubik image. After building an image on a node, use the `docker save/load` command to load the Rubik image to each node of Kubernetes. Alternatively, build a Rubik image on each node. The following uses isula-build as an example. The command is as follows:
+
+```sh
+isula-build ctr-img build -f /var/lib/rubik/Dockerfile --tag rubik:0.1.0 .
+```
+
+2. On the Kubernetes master node, change the Rubik image name in the `/var/lib/rubik/rubik-daemonset.yaml` file to the name of the image built in the previous step.
+
+```yaml
+...
+containers:
+- name: rubik-agent
+ image: rubik:0.1.0 # The image name must be the same as the Rubik image name built in the previous step.
+ imagePullPolicy: IfNotPresent
+...
+```
+
+3. On the Kubernetes master node, run the `kubectl` command to deploy the Rubik DaemonSet. Rubik is automatically deployed on all Kubernetes nodes.
+
+```sh
+kubectl apply -f /var/lib/rubik/rubik-daemonset.yaml
+```
+
+4. Run the `kubectl get pods -A` command to check whether Rubik has been deployed on each node in the cluster. (The number of rubik-agents is the same as the number of nodes and all rubik-agents are in the Running state.)
+
+```sh
+[root@localhost rubik]# kubectl get pods -A
+NAMESPACE NAME READY STATUS RESTARTS AGE
+...
+kube-system rubik-agent-76ft6 1/1 Running 0 4s
+...
+```
+
+## Common Configuration Description
+
+The Rubik deployed using the preceding method is started with the default configuration. You can modify the Rubik configuration as required by modifying the **config.json** section in the **rubik-daemonset.yaml** file and then redeploy the Rubik DaemonSet.
+
+This section describes common configurations of **config.json**.
+
+### Configuration Item Description
+
+```yaml
+# The configuration items are in the **config.json** section of the **rubik-daemonset.yaml** file.
+{
+ "autoConfig": true,
+ "autoCheck": false,
+ "logDriver": "stdio",
+ "logDir": "/var/log/rubik",
+ "logSize": 1024,
+ "logLevel": "info",
+ "cgroupRoot": "/sys/fs/cgroup"
+}
+```
+
+| Item | Value Type| Value Range | Description |
+| ---------- | ---------- | ------------------ | ------------------------------------------------------------ |
+| autoConfig | Boolean | **true** or **false** | **true**: enables automatic pod awareness. **false**: disables automatic pod awareness.|
+| autoCheck | Boolean | **true** or **false** | **true**: enables pod priority check. **false**: disables pod priority check.|
+| logDriver | String | **stdio** or **file** | **stdio**: prints logs to the standard output. The scheduling platform collects and dumps logs. **file**: prints files to the log directory specified by **logDir**.|
+| logDir | String | Absolute path | Directory for storing logs. |
+| logSize | Integer | [10,1048576] | Total size of logs, in MB. If the total size of logs reaches the upper limit, the earliest logs will be discarded.|
+| logLevel | String | **error**, **info**, or **debug**| Log level. |
+| cgroupRoot | String | Absolute path | cgroup mount point. |
+
+### Automatic Configuration of Pod Priorities
+
+If **autoConfig** is set to **true** in the Rubik configuration to enable automatic pod awareness, you only need to specify the priority in **annotation** in the YAML file when deploying the service pod. After the deployment, Rubik automatically detects the creation and update of the pods on the current node, and sets the pod priorities based on the configured priorities.
+
+### Pod Priority Configuration Depending on kubelet
+
+The automatic configuration of pod priorities depends on pod creation event notifications from the api-server, which arrive with a certain delay. As a result, the priority cannot be configured before the container process is started, and service performance may fluctuate. To avoid this, you can disable automatic priority configuration and modify the kubelet source code so that, after the container cgroup is created and before the container process is started, kubelet calls the Rubik HTTP API to configure the pod priority. For details about how to use the HTTP API, see [HTTP APIs](./http-apis.md).
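+
+The following is a minimal sketch of reaching the API through the local socket. The socket path **/run/rubik/rubik.sock** is an assumption derived from the socket directory mentioned in the restrictions below; adjust it to your deployment.
+
+```sh
+# Hypothetical probe of the Rubik HTTP API over its local socket (path is an assumption).
+curl --unix-socket /run/rubik/rubik.sock http://localhost/ping
+```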
+
+### Automatic Verification of Pod Priorities
+
+Rubik supports consistency check on the pod QoS priority configurations of the current node during startup. It checks whether the configuration in the Kubernetes cluster is consistent with the pod priority configuration of Rubik. This function is disabled by default. You can enable or disable it using the **autoCheck** option. If this function is enabled, Rubik automatically verifies and corrects the pod priority configuration of the current node when it is started or restarted.
+
+## Example of Configuring Rubik for Offline Services
+
+After Rubik is successfully deployed, you can modify the YAML file of a service based on the following configuration example to specify the offline service. Then Rubik can configure the priority of the service after the service is deployed to isolate resources.
+
+The following is an example of deploying the online service Nginx:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: nginx
+ namespace: qosexample
+ annotations:
+ volcano.sh/preemptable: "false" # If **volcano.sh/preemptable** is set to **true**, the service is an offline service. If it is set to **false**, the service is an online service. The default value is **false**.
+spec:
+ containers:
+ - name: nginx
+ image: nginx
+ resources:
+ limits:
+ memory: "200Mi"
+ cpu: "1"
+ requests:
+ memory: "200Mi"
+ cpu: "1"
+```
+
+## Restrictions
+
+- Rubik can handle a maximum of 1,000 HTTP requests per second (QPS). If this limit is exceeded, an error is reported.
+
+- The maximum number of pods in a single request received by Rubik is 100. If the number of pods exceeds the threshold, an error is reported.
+
+- Only one Rubik can be deployed on each Kubernetes node. Multiple Rubiks may conflict with each other.
+
+- Rubik does not provide port access and can communicate only through sockets.
+
+- Rubik accepts only valid HTTP request paths and network protocols: http://localhost/ (POST), http://localhost/ping (GET), and http://localhost/version (GET). For details about the functions of HTTP requests, see [HTTP APIs](./http-apis.md).
+
+- Rubik drive requirement: 1 GB or more.
+
+- Rubik memory requirement: 100 MB or more.
+
+- Services cannot be switched from a low priority (offline services) to a high priority (online services). For example, if service A is set to an offline service and then to an online service, Rubik reports an error.
+
+- When a container is mounted to a directory, the service side must ensure that the minimum permission on the Rubik local socket directory **/run/Rubik** is 700.
+
+- When the Rubik server is available, the timeout interval of a single request is 120s. If the Rubik process enters the T (suspended or traced) or D (uninterruptible sleep) state, the server becomes unavailable and does not respond to any request. To avoid waiting indefinitely, set a timeout interval on the client.
+
+- If hybrid deployment is used, the original **cgroup cpu share** function has the following restrictions:
+
+ If both online and offline tasks are running on the CPU, the CPU share configuration of offline tasks cannot take effect.
+
+ If the current CPU has only online or offline tasks, the CPU share configuration takes effect.
diff --git a/docs/en/docs/rubik/overview.md b/docs/en/docs/rubik/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..bcf79fc622ecefba547bf0c2f62272d9751b3496
--- /dev/null
+++ b/docs/en/docs/rubik/overview.md
@@ -0,0 +1,17 @@
+# Rubik User Guide
+
+## Overview
+
+Low server resource utilization has always been a recognized challenge in the industry. With the development of cloud native technologies, hybrid deployment of online (high-priority) and offline (low-priority) services becomes an effective means to improve resource utilization.
+
+In hybrid service deployment scenarios, Rubik container scheduling properly schedules resources based on QoS levels to greatly improve resource utilization while ensuring the quality of online services.
+
+Rubik supports the following features:
+
+- Pod CPU priority configuration
+- Pod memory priority configuration
+
+This document is intended for community developers, open source enthusiasts, and partners who use the openEuler system and want to learn and use Rubik. Users must:
+
+- Know basic Linux operations.
+- Be familiar with basic operations of Kubernetes and Docker/iSulad.
diff --git a/docs/en/docs/secGear/api-description.md b/docs/en/docs/secGear/api-description.md
index 6e48a8c839f582fe2e2902059de17a20f20ef4d1..ec94310aafe1a80053827506f69fdcfffff2ef65 100644
--- a/docs/en/docs/secGear/api-description.md
+++ b/docs/en/docs/secGear/api-description.md
@@ -10,6 +10,9 @@ Creates an enclave API.
Initialization API. The function calls different TEE creation functions based on the type to initialize the enclave context in different TEE solutions. This API is called by the REE.
+> **Note:**
+> Due to Intel SGX restrictions, memory mapping contention exists when multiple threads invoke cc_enclave_create concurrently. As a result, creating the enclave may fail. Avoid concurrent invocations of cc_enclave_create in your code.
+
**Function Declarations:**
cc_enclave_result_t cc_enclave_create(const char* path, enclave_type_t type, uint32_t version,uint32_t flags,const enclave_features_t* features,uint32_t features_count,
diff --git a/docs/en/docs/thirdparty_migration/OpenStack-train.md b/docs/en/docs/thirdparty_migration/OpenStack-train.md
new file mode 100644
index 0000000000000000000000000000000000000000..0816483f03cfe1641f1822bceb98e84f0e273b58
--- /dev/null
+++ b/docs/en/docs/thirdparty_migration/OpenStack-train.md
@@ -0,0 +1,2961 @@
+# OpenStack-Train Deployment Guide
+
+
+
+- [OpenStack-Train Deployment Guide](#openstack-train-deployment-guide)
+ - [OpenStack](#openstack)
+ - [Conventions](#conventions)
+ - [Preparing the Environment](#preparing-the-environment)
+ - [Environment Configuration](#environment-configuration)
+ - [Installing the SQL Database](#installing-the-sql-database)
+ - [Installing RabbitMQ](#installing-rabbitmq)
+ - [Installing Memcached](#installing-memcached)
+ - [OpenStack Installation](#openstack-installation)
+ - [Installing Keystone](#installing-keystone)
+ - [Installing Glance](#installing-glance)
+ - [Installing Placement](#installing-placement)
+ - [Installing Nova](#installing-nova)
+ - [Installing Neutron](#installing-neutron)
+ - [Installing Cinder](#installing-cinder)
+ - [Installing Horizon](#installing-horizon)
+ - [Installing Tempest](#installing-tempest)
+ - [Installing Ironic](#installing-ironic)
+ - [Installing Kolla](#installing-kolla)
+ - [Installing Trove](#installing-trove)
+ - [Installing Swift](#installing-swift)
+ - [Installing Cyborg](#installing-cyborg)
+ - [Installing Aodh](#installing-aodh)
+ - [Installing Gnocchi](#installing-gnocchi)
+ - [Installing Ceilometer](#installing-ceilometer)
+ - [Installing Heat](#installing-heat)
+ - [OpenStack Quick Installation](#openstack-quick-installation)
+
+
+## OpenStack
+
+OpenStack is an open source cloud computing infrastructure software project developed by the community. It provides an operating platform or tool set for deploying the cloud, offering scalable and flexible cloud computing for organizations.
+
+As an open source cloud computing management platform, OpenStack consists of several major components, such as Nova, Cinder, Neutron, Glance, Keystone, and Horizon. OpenStack supports almost all cloud environments. The project aims to provide a cloud computing management platform that is easy-to-use, scalable, unified, and standardized. OpenStack provides an infrastructure as a service (IaaS) solution that combines complementary services, each of which provides an API for integration.
+
+The official source of openEuler 22.03 LTS now supports OpenStack Train. You can configure the Yum source and then deploy OpenStack by following the instructions in this document.
+
+## Conventions
+
+OpenStack supports multiple deployment modes. This document includes two deployment modes: **All in One** and **Distributed**. The conventions are as follows:
+
+**All in One** mode:
+
+```text
+Ignore all possible suffixes.
+```
+
+**Distributed** mode:
+
+```text
+A suffix of (CTL) indicates that the configuration or command applies only to the control node.
+A suffix of (CPT) indicates that the configuration or command applies only to the compute node.
+A suffix of (STG) indicates that the configuration or command applies only to the storage node.
+In other cases, the configuration or command applies to both the control node and compute node.
+```
+
+***Note***
+
+The services involved in the preceding conventions are as follows:
+
+- Cinder
+- Nova
+- Neutron
+
+## Preparing the Environment
+
+### Environment Configuration
+
+1. Enable the OpenStack Train Yum source.
+
+ ```shell
+ yum update
+ yum install openstack-release-train
+ yum clean all && yum makecache
+ ```
+
+ **Note**: Enable the EPOL repository for the Yum source if it is not enabled already.
+
+ ```shell
+ vi /etc/yum.repos.d/openEuler.repo
+
+ [EPOL]
+ name=EPOL
+ baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/
+ enabled=1
+ gpgcheck=1
+ gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
+ ```
+
+2. Change the host name and mapping.
+
+ Set the host name of each node:
+
+ ```shell
+ hostnamectl set-hostname controller (CTL)
+ hostnamectl set-hostname compute (CPT)
+ ```
+
+ Assuming the IP address of the controller node is **10.0.0.11** and the IP address of the compute node (if any) is **10.0.0.12**, add the following information to the **/etc/hosts** file:
+
+ ```shell
+ 10.0.0.11 controller
+ 10.0.0.12 compute
+ ```
+
+### Installing the SQL Database
+
+1. Run the following command to install the software package:
+
+ ```shell
+ yum install mariadb mariadb-server python3-PyMySQL
+ ```
+
+2. Run the following command to create and edit the **/etc/my.cnf.d/openstack.cnf** file:
+
+ ```shell
+ vim /etc/my.cnf.d/openstack.cnf
+
+ [mysqld]
+ bind-address = 10.0.0.11
+ default-storage-engine = innodb
+ innodb_file_per_table = on
+ max_connections = 4096
+ collation-server = utf8_general_ci
+ character-set-server = utf8
+ ```
+
+ ***Note***
+
+ **`bind-address` is set to the management IP address of the controller node.**
+
+3. Run the following commands to start the database service and configure it to automatically start upon system boot:
+
+ ```shell
+ systemctl enable mariadb.service
+ systemctl start mariadb.service
+ ```
+
+4. (Optional) Configure the default database password:
+
+ ```shell
+ mysql_secure_installation
+ ```
+
+ ***Note***
+
+ **Perform operations as prompted.**
+
+### Installing RabbitMQ
+
+1. Run the following command to install the software package:
+
+ ```shell
+ yum install rabbitmq-server
+ ```
+
+2. Start the RabbitMQ service and configure it to automatically start upon system boot:
+
+ ```shell
+ systemctl enable rabbitmq-server.service
+ systemctl start rabbitmq-server.service
+ ```
+
+3. Add the OpenStack user:
+
+ ```shell
+ rabbitmqctl add_user openstack RABBIT_PASS
+ ```
+
+ ***Note***
+
+ **Replace *RABBIT_PASS* to set the password for the openstack user.**
+
+4. Run the following command to set the permission of the **openstack** user to allow the user to perform configuration, write, and read operations:
+
+ ```shell
+ rabbitmqctl set_permissions openstack ".*" ".*" ".*"
+ ```
+
+### Installing Memcached
+
+1. Run the following command to install the dependency package:
+
+ ```shell
+ yum install memcached python3-memcached
+ ```
+
+2. Edit the **/etc/sysconfig/memcached** file to configure the listening addresses.
+
+ ```shell
+ vim /etc/sysconfig/memcached
+
+ OPTIONS="-l 127.0.0.1,::1,controller"
+ ```
+
+3. Run the following command to start the Memcached service and configure it to automatically start upon system boot:
+
+ ```shell
+ systemctl enable memcached.service
+ systemctl start memcached.service
+ ```
+
+ ***Note***
+
+ **After the service is started, you can run `memcached-tool controller stats` to ensure that the service is started properly and available. You can replace `controller` with the management IP address of the controller node.**
+
+## OpenStack Installation
+
+### Installing Keystone
+
+1. Create the **keystone** database and grant permissions:
+
+ ``` sql
+ mysql -u root -p
+
+ MariaDB [(none)]> CREATE DATABASE keystone;
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
+ IDENTIFIED BY 'KEYSTONE_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
+ IDENTIFIED BY 'KEYSTONE_DBPASS';
+ MariaDB [(none)]> exit
+ ```
+
+ ***Note***
+
+ **Replace *KEYSTONE_DBPASS* to set the password for the keystone database.**
+
+2. Install the software package:
+
+ ```shell
+ yum install openstack-keystone httpd mod_wsgi
+ ```
+
+3. Configure Keystone:
+
+ ```shell
+ vim /etc/keystone/keystone.conf
+
+ [database]
+ connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
+
+ [token]
+ provider = fernet
+ ```
+
+ ***Description***
+
+    In the **[database]** section, configure the database entry.
+
+    In the **[token]** section, configure the token provider.
+
+ ***Note:***
+
+ **Replace *KEYSTONE_DBPASS* with the password of the keystone database.**
+
+4. Synchronize the database:
+
+ ```shell
+ su -s /bin/sh -c "keystone-manage db_sync" keystone
+ ```
+
+5. Initialize the Fernet keystore:
+
+ ```shell
+ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
+ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
+ ```
+
+6. Start the service:
+
+ ```shell
+ keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
+ --bootstrap-admin-url http://controller:5000/v3/ \
+ --bootstrap-internal-url http://controller:5000/v3/ \
+ --bootstrap-public-url http://controller:5000/v3/ \
+ --bootstrap-region-id RegionOne
+ ```
+
+ ***Note***
+
+ **Replace *ADMIN_PASS* to set the password for the admin user.**
+
+7. Configure the Apache HTTP server:
+
+ ```shell
+ vim /etc/httpd/conf/httpd.conf
+
+ ServerName controller
+ ```
+
+ ```shell
+ ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
+ ```
+
+ ***Description***
+
+ Configure **ServerName** to use the control node.
+
+ ***Note***
+    **If the ServerName item does not exist, create it.**
+
+8. Start the Apache HTTP service:
+
+ ```shell
+ systemctl enable httpd.service
+ systemctl start httpd.service
+ ```
+
+9. Create environment variables:
+
+ ```shell
+ cat << EOF >> ~/.admin-openrc
+ export OS_PROJECT_DOMAIN_NAME=Default
+ export OS_USER_DOMAIN_NAME=Default
+ export OS_PROJECT_NAME=admin
+ export OS_USERNAME=admin
+ export OS_PASSWORD=ADMIN_PASS
+ export OS_AUTH_URL=http://controller:5000/v3
+ export OS_IDENTITY_API_VERSION=3
+ export OS_IMAGE_API_VERSION=2
+ EOF
+ ```
+
+ ***Note***
+
+ **Replace *ADMIN_PASS* with the password of the admin user.**
+
+10. Create domains, projects, users, and roles in sequence. python3-openstackclient must be installed first:
+
+ ```shell
+ yum install python3-openstackclient
+ ```
+
+ Import the environment variables:
+
+ ```shell
+ source ~/.admin-openrc
+ ```
+
+    The domain **default** has been created during keystone-manage bootstrap. The following commands create an example domain and the project **service**:
+
+ ```shell
+ openstack domain create --description "An Example Domain" example
+ ```
+
+ ```shell
+ openstack project create --domain default --description "Service Project" service
+ ```
+
+ Create the (non-admin) project **myproject**, user **myuser**, and role **myrole**, and add the role **myrole** to **myproject** and **myuser**.
+
+ ```shell
+ openstack project create --domain default --description "Demo Project" myproject
+ openstack user create --domain default --password-prompt myuser
+ openstack role create myrole
+ openstack role add --project myproject --user myuser myrole
+ ```
+
+11. Perform the verification.
+
+ Cancel the temporary environment variables **OS_AUTH_URL** and **OS_PASSWORD**.
+
+ ```shell
+ source ~/.admin-openrc
+ unset OS_AUTH_URL OS_PASSWORD
+ ```
+
+ Request a token for the **admin** user:
+
+ ```shell
+ openstack --os-auth-url http://controller:5000/v3 \
+ --os-project-domain-name Default --os-user-domain-name Default \
+ --os-project-name admin --os-username admin token issue
+ ```
+
+ Request a token for user **myuser**:
+
+ ```shell
+ openstack --os-auth-url http://controller:5000/v3 \
+ --os-project-domain-name Default --os-user-domain-name Default \
+ --os-project-name myproject --os-username myuser token issue
+ ```
+
+### Installing Glance
+
+1. Create the database, service credentials, and the API endpoints.
+
+ Create the database:
+
+ ```sql
+ mysql -u root -p
+
+ MariaDB [(none)]> CREATE DATABASE glance;
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
+ IDENTIFIED BY 'GLANCE_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
+ IDENTIFIED BY 'GLANCE_DBPASS';
+ MariaDB [(none)]> exit
+ ```
+
+ ***Note:***
+
+ **Replace *GLANCE_DBPASS* to set the password for the glance database.**
+
+ Create the service credential:
+
+ ```shell
+ source ~/.admin-openrc
+
+ openstack user create --domain default --password-prompt glance
+ openstack role add --project service --user glance admin
+ openstack service create --name glance --description "OpenStack Image" image
+ ```
+
+ Create the API endpoints for the image service:
+
+ ```shell
+ openstack endpoint create --region RegionOne image public http://controller:9292
+ openstack endpoint create --region RegionOne image internal http://controller:9292
+ openstack endpoint create --region RegionOne image admin http://controller:9292
+ ```
+
+2. Install the software package:
+
+ ```shell
+ yum install openstack-glance
+ ```
+
+3. Configure Glance:
+
+ ```shell
+ vim /etc/glance/glance-api.conf
+
+ [database]
+ connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000
+ auth_url = http://controller:5000
+ memcached_servers = controller:11211
+ auth_type = password
+ project_domain_name = Default
+ user_domain_name = Default
+ project_name = service
+ username = glance
+ password = GLANCE_PASS
+
+ [paste_deploy]
+ flavor = keystone
+
+ [glance_store]
+ stores = file,http
+ default_store = file
+ filesystem_store_datadir = /var/lib/glance/images/
+ ```
+
+ ***Description:***
+
+ In the **[database]** section, configure the database entry.
+
+ In the **[keystone_authtoken]** and **[paste_deploy]** sections, configure the identity authentication service entry.
+
+ In the **[glance_store]** section, configure the local file system storage and the location of image files.
+
+ ***Note***
+
+ **Replace *GLANCE_DBPASS* with the password of the glance database.**
+
+ **Replace *GLANCE_PASS* with the password of user glance.**
+
+4. Synchronize the database:
+
+ ```shell
+ su -s /bin/sh -c "glance-manage db_sync" glance
+ ```
+
+5. Start the service:
+
+ ```shell
+ systemctl enable openstack-glance-api.service
+ systemctl start openstack-glance-api.service
+ ```
+
+6. Perform the verification.
+
+ Download the image:
+
+ ```shell
+ source ~/.admin-openrc
+
+ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
+ ```
+
+ ***Note***
+
+    **If the Kunpeng architecture is used in your environment, download the image of the AArch64 version. The cirros-0.5.2-aarch64-disk.img image file has been tested.**
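+
+    For example, the AArch64 image can be obtained in the same way (the exact file name may vary with the CirrOS release):
+
+    ```shell
+    wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-aarch64-disk.img
+    ```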
+
+ Upload the image to the image service:
+
+ ```shell
+ openstack image create --disk-format qcow2 --container-format bare \
+ --file cirros-0.4.0-x86_64-disk.img --public cirros
+ ```
+
+ Confirm the image upload and verify the attributes:
+
+ ```shell
+ openstack image list
+ ```
+
+### Installing Placement
+
+1. Create a database, service credentials, and API endpoints.
+
+ Create a database.
+
+ Access the database as the **root** user. Create the **placement** database, and grant permissions.
+
+ ```shell
+ mysql -u root -p
+ MariaDB [(none)]> CREATE DATABASE placement;
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
+ IDENTIFIED BY 'PLACEMENT_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
+ IDENTIFIED BY 'PLACEMENT_DBPASS';
+ MariaDB [(none)]> exit
+ ```
+
+ **Note**:
+
+ **Replace *PLACEMENT_DBPASS* to set the password for the placement database.**
+
+ ```shell
+    source ~/.admin-openrc
+ ```
+
+    Run the following commands to create the Placement service credentials: create the **placement** user, add the **admin** role to the user, and create the Placement API service.
+
+ ```shell
+ openstack user create --domain default --password-prompt placement
+ openstack role add --project service --user placement admin
+ openstack service create --name placement --description "Placement API" placement
+ ```
+
+ Create API endpoints of the **placement** service.
+
+ ```shell
+ openstack endpoint create --region RegionOne placement public http://controller:8778
+ openstack endpoint create --region RegionOne placement internal http://controller:8778
+ openstack endpoint create --region RegionOne placement admin http://controller:8778
+ ```
+
+2. Perform the installation and configuration.
+
+ Install the software package:
+
+ ```shell
+ yum install openstack-placement-api
+ ```
+
+ Configure Placement:
+
+ Edit the **/etc/placement/placement.conf** file:
+
+ In the **[placement_database]** section, configure the database entry.
+
+ In **[api]** and **[keystone_authtoken]** sections, configure the identity authentication service entry.
+
+ ```shell
+ # vim /etc/placement/placement.conf
+ [placement_database]
+ # ...
+ connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
+ [api]
+ # ...
+ auth_strategy = keystone
+ [keystone_authtoken]
+ # ...
+ auth_url = http://controller:5000/v3
+ memcached_servers = controller:11211
+ auth_type = password
+ project_domain_name = Default
+ user_domain_name = Default
+ project_name = service
+ username = placement
+ password = PLACEMENT_PASS
+ ```
+
+ Replace **PLACEMENT_DBPASS** with the password of the **placement** database, and replace **PLACEMENT_PASS** with the password of the **placement** user.
+
+ Synchronize the database:
+
+ ```shell
+ su -s /bin/sh -c "placement-manage db sync" placement
+ ```
+
+ Start the httpd service.
+
+ ```shell
+ systemctl restart httpd
+ ```
+
+3. Perform the verification.
+
+ Run the following command to check the status:
+
+ ```shell
+    source ~/.admin-openrc
+ placement-status upgrade check
+ ```
+
+ Run the following command to install osc-placement and list the available resource types and features:
+
+ ```shell
+ yum install python3-osc-placement
+ openstack --os-placement-api-version 1.2 resource class list --sort-column name
+ openstack --os-placement-api-version 1.6 trait list --sort-column name
+ ```
+
+### Installing Nova
+
+1. Create a database, service credentials, and API endpoints.
+
+ Create a database.
+
+ ```sql
+ mysql -u root -p (CTL)
+
+ MariaDB [(none)]> CREATE DATABASE nova_api;
+ MariaDB [(none)]> CREATE DATABASE nova;
+ MariaDB [(none)]> CREATE DATABASE nova_cell0;
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
+ IDENTIFIED BY 'NOVA_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
+ IDENTIFIED BY 'NOVA_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
+ IDENTIFIED BY 'NOVA_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
+ IDENTIFIED BY 'NOVA_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
+ IDENTIFIED BY 'NOVA_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
+ IDENTIFIED BY 'NOVA_DBPASS';
+ MariaDB [(none)]> exit
+ ```
+
+ **Note**:
+
+ **Replace *NOVA_DBPASS* to set the password for the nova database.**
+
+ ```shell
+ source ~/.admin-openrc (CTL)
+ ```
+
+ Run the following command to create the Nova service certificate:
+
+ ```shell
+ openstack user create --domain default --password-prompt nova (CTL)
+ openstack role add --project service --user nova admin (CTL)
+ openstack service create --name nova --description "OpenStack Compute" compute (CTL)
+ ```
+
+ Create a Nova API endpoint.
+
+ ```shell
+ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1 (CTL)
+ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1 (CTL)
+ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1 (CTL)
+ ```
+
+2. Install the software packages:
+
+ ```shell
+ yum install openstack-nova-api openstack-nova-conductor \ (CTL)
+ openstack-nova-novncproxy openstack-nova-scheduler
+
+ yum install openstack-nova-compute (CPT)
+ ```
+
+ **Note**:
+
+ **If the ARM64 architecture is used, you also need to run the following command:**
+
+ ```shell
+ yum install edk2-aarch64 (CPT)
+ ```
+
+3. Configure Nova:
+
+ ```shell
+ vim /etc/nova/nova.conf
+
+ [DEFAULT]
+ enabled_apis = osapi_compute,metadata
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
+    my_ip = 10.0.0.11
+ use_neutron = true
+ firewall_driver = nova.virt.firewall.NoopFirewallDriver
+ compute_driver=libvirt.LibvirtDriver (CPT)
+ instances_path = /var/lib/nova/instances/ (CPT)
+ lock_path = /var/lib/nova/tmp (CPT)
+
+ [api_database]
+ connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api (CTL)
+
+ [database]
+ connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova (CTL)
+
+ [api]
+ auth_strategy = keystone
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ memcached_servers = controller:11211
+ auth_type = password
+ project_domain_name = Default
+ user_domain_name = Default
+ project_name = service
+ username = nova
+ password = NOVA_PASS
+
+ [vnc]
+ enabled = true
+ server_listen = $my_ip
+ server_proxyclient_address = $my_ip
+ novncproxy_base_url = http://controller:6080/vnc_auto.html (CPT)
+
+ [glance]
+ api_servers = http://controller:9292
+
+ [oslo_concurrency]
+ lock_path = /var/lib/nova/tmp (CTL)
+
+ [placement]
+ region_name = RegionOne
+ project_domain_name = Default
+ project_name = service
+ auth_type = password
+ user_domain_name = Default
+ auth_url = http://controller:5000/v3
+ username = placement
+ password = PLACEMENT_PASS
+
+ [neutron]
+ auth_url = http://controller:5000
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ region_name = RegionOne
+ project_name = service
+ username = neutron
+ password = NEUTRON_PASS
+ service_metadata_proxy = true (CTL)
+ metadata_proxy_shared_secret = METADATA_SECRET (CTL)
+ ```
+
+ Description
+
+ In the **[default]** section, enable the compute and metadata APIs, configure the RabbitMQ message queue entry, configure **my_ip**, and enable the network service **neutron**.
+
+ In the **[api_database]** and **[database]** sections, configure the database entry.
+
+ In the **[api]** and **[keystone_authtoken]** sections, configure the identity service entry.
+
+ In the **[vnc]** section, enable and configure the entry for the remote console.
+
+ In the **[glance]** section, configure the API address for the image service.
+
+ In the **[oslo_concurrency]** section, configure the lock path.
+
+ In the **[placement]** section, configure the entry of the Placement service.
+
+ **Note**:
+
+ **Replace *RABBIT_PASS* with the password of the openstack user in RabbitMQ.**
+
+ **Set *my_ip* to the management IP address of the controller node.**
+
+ **Replace *NOVA_DBPASS* with the password of the nova database.**
+
+ **Replace *NOVA_PASS* with the password of the nova user.**
+
+ **Replace *PLACEMENT_PASS* with the password of the placement user.**
+
+ **Replace *NEUTRON_PASS* with the password of the neutron user.**
+
+ **Replace *METADATA_SECRET* with a proper metadata agent secret.**
+
+ Others
+
+ Check whether VM hardware acceleration (x86 architecture) is supported:
+
+ ```shell
+ egrep -c '(vmx|svm)' /proc/cpuinfo (CPT)
+ ```
+
+ If the returned value is **0**, hardware acceleration is not supported. You need to configure libvirt to use QEMU instead of KVM.
+
+ ```shell
+ vim /etc/nova/nova.conf (CPT)
+
+ [libvirt]
+ virt_type = qemu
+ ```
+
+ If the returned value is **1** or a larger value, hardware acceleration is supported. You can set the value of **virt_type** to **kvm**.
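+
+    For example, the corresponding setting on a compute node that supports hardware acceleration would be:
+
+    ```shell
+    vim /etc/nova/nova.conf (CPT)
+
+    [libvirt]
+    virt_type = kvm
+    ```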
+
+ **Note**:
+
+ **If the ARM64 architecture is used, you also need to run the following command on the compute node:**
+
+ ```shell
+
+ mkdir -p /usr/share/AAVMF
+ chown nova:nova /usr/share/AAVMF
+
+ ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
+ /usr/share/AAVMF/AAVMF_CODE.fd
+ ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
+ /usr/share/AAVMF/AAVMF_VARS.fd
+
+ vim /etc/libvirt/qemu.conf
+
+ nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd: \
+ /usr/share/AAVMF/AAVMF_VARS.fd", \
+ "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw: \
+ /usr/share/edk2/aarch64/vars-template-pflash.raw"]
+ ```
+    In addition, if the ARM deployment environment uses nested virtualization, configure the **[libvirt]** section as follows:
+
+ ```shell
+ [libvirt]
+ virt_type = qemu
+ cpu_mode = custom
+ cpu_model = cortex-a72
+ ```
+
+4. Synchronize the database.
+
+ Run the following command to synchronize the **nova-api** database:
+
+ ```shell
+ su -s /bin/sh -c "nova-manage api_db sync" nova (CTL)
+ ```
+
+ Run the following command to register the **cell0** database:
+
+ ```shell
+ su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova (CTL)
+ ```
+
+ Create the **cell1** cell:
+
+ ```shell
+ su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova (CTL)
+ ```
+
+ Synchronize the **nova** database:
+
+ ```shell
+ su -s /bin/sh -c "nova-manage db sync" nova (CTL)
+ ```
+
+ Verify whether **cell0** and **cell1** are correctly registered:
+
+ ```shell
+ su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova (CTL)
+ ```
+
+    Add the compute node to the OpenStack cluster:
+
+ ```shell
+ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (CPT)
+ ```
+
+5. Start the services:
+
+ ```shell
+ systemctl enable \ (CTL)
+ openstack-nova-api.service \
+ openstack-nova-scheduler.service \
+ openstack-nova-conductor.service \
+ openstack-nova-novncproxy.service
+
+ systemctl start \ (CTL)
+ openstack-nova-api.service \
+ openstack-nova-scheduler.service \
+ openstack-nova-conductor.service \
+ openstack-nova-novncproxy.service
+ ```
+
+ ```shell
+ systemctl enable libvirtd.service openstack-nova-compute.service (CPT)
+ systemctl start libvirtd.service openstack-nova-compute.service (CPT)
+ ```
+
+6. Perform the verification.
+
+ ```shell
+ source ~/.admin-openrc (CTL)
+ ```
+
+ List the service components to verify that each process is successfully started and registered:
+
+ ```shell
+ openstack compute service list (CTL)
+ ```
+
+ List the API endpoints in the identity service to verify the connection to the identity service:
+
+ ```shell
+ openstack catalog list (CTL)
+ ```
+
+ List the images in the image service to verify the connections:
+
+ ```shell
+ openstack image list (CTL)
+ ```
+
+ Check whether the cells are running properly and whether other prerequisites are met.
+
+ ```shell
+ nova-status upgrade check (CTL)
+ ```
+
+### Installing Neutron
+
+1. Create the database, service credentials, and API endpoints.
+
+ Create the database:
+
+ ```sql
+ mysql -u root -p (CTL)
+
+ MariaDB [(none)]> CREATE DATABASE neutron;
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
+ IDENTIFIED BY 'NEUTRON_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
+ IDENTIFIED BY 'NEUTRON_DBPASS';
+ MariaDB [(none)]> exit
+ ```
+
+ ***Note***
+
+ **Replace *NEUTRON_DBPASS* to set the password for the neutron database.**
+
+ ```shell
+ source ~/.admin-openrc (CTL)
+ ```
+
+ Create the **neutron** service credential:
+
+ ```shell
+ openstack user create --domain default --password-prompt neutron (CTL)
+ openstack role add --project service --user neutron admin (CTL)
+ openstack service create --name neutron --description "OpenStack Networking" network (CTL)
+ ```
+
+ Create the API endpoints of the Neutron service:
+
+ ```shell
+ openstack endpoint create --region RegionOne network public http://controller:9696 (CTL)
+ openstack endpoint create --region RegionOne network internal http://controller:9696 (CTL)
+ openstack endpoint create --region RegionOne network admin http://controller:9696 (CTL)
+ ```
+
+2. Install the software packages:
+
+ ```shell
+ yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset \ (CTL)
+ openstack-neutron-ml2
+ ```
+
+ ```shell
+ yum install openstack-neutron-linuxbridge ebtables ipset (CPT)
+ ```
+
+3. Configure Neutron.
+
+ Set the main configuration items:
+
+ ```shell
+ vim /etc/neutron/neutron.conf
+
+ [database]
+ connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron (CTL)
+
+ [DEFAULT]
+ core_plugin = ml2 (CTL)
+ service_plugins = router (CTL)
+ allow_overlapping_ips = true (CTL)
+ transport_url = rabbit://openstack:RABBIT_PASS@controller
+ auth_strategy = keystone
+ notify_nova_on_port_status_changes = true (CTL)
+ notify_nova_on_port_data_changes = true (CTL)
+ api_workers = 3 (CTL)
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000
+ auth_url = http://controller:5000
+ memcached_servers = controller:11211
+ auth_type = password
+ project_domain_name = Default
+ user_domain_name = Default
+ project_name = service
+ username = neutron
+ password = NEUTRON_PASS
+
+ [nova]
+ auth_url = http://controller:5000 (CTL)
+ auth_type = password (CTL)
+ project_domain_name = Default (CTL)
+ user_domain_name = Default (CTL)
+ region_name = RegionOne (CTL)
+ project_name = service (CTL)
+ username = nova (CTL)
+ password = NOVA_PASS (CTL)
+
+ [oslo_concurrency]
+ lock_path = /var/lib/neutron/tmp
+ ```
+
+ ***Description***
+
+ Configure the database entry in the **[database]** section.
+
+ Enable the ML2 and router plugins, allow IP address overlapping, and configure the RabbitMQ message queue entry in the **[default]** section.
+
+    Configure the identity authentication service entry in the **[default]** and **[keystone_authtoken]** sections.
+
+ Enable the network to notify the change of the compute network topology in the **[default]** and **[nova]** sections.
+
+ Configure the lock path in the **[oslo_concurrency]** section.
+
+ ***Note***
+
+ **Replace *NEUTRON_DBPASS* with the password of the neutron database.**
+
+ **Replace *RABBIT_PASS* with the password of the openstack user in RabbitMQ.**
+
+ **Replace *NEUTRON_PASS* with the password of the neutron user.**
+
+ **Replace *NOVA_PASS* with the password of the nova user.**
+
+ Configure the ML2 plugin:
+
+ ```shell
+ vim /etc/neutron/plugins/ml2/ml2_conf.ini
+
+ [ml2]
+ type_drivers = flat,vlan,vxlan
+ tenant_network_types = vxlan
+ mechanism_drivers = linuxbridge,l2population
+ extension_drivers = port_security
+
+ [ml2_type_flat]
+ flat_networks = provider
+
+ [ml2_type_vxlan]
+ vni_ranges = 1:1000
+
+ [securitygroup]
+ enable_ipset = true
+ ```
+
+ Create the symbolic link for /etc/neutron/plugin.ini.
+
+ ```shell
+ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
+ ```
+
+ **Note**
+
+ **Enable flat, vlan, and vxlan networks, enable the linuxbridge and l2population mechanisms, and enable the port security extension driver in the [ml2] section.**
+
+ **Configure the flat network as the provider virtual network in the [ml2_type_flat] section.**
+
+ **Configure the range of the VXLAN network identifier in the [ml2_type_vxlan] section.**
+
+ **Set ipset enabled in the [securitygroup] section.**
+
+ **Remarks**
+
+    **The actual L2 configuration can be modified as required. In this example, the provider network with linuxbridge is used.**
+
+ Configure the Linux bridge agent:
+
+ ```shell
+ vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
+
+ [linux_bridge]
+ physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
+
+ [vxlan]
+ enable_vxlan = true
+ local_ip = OVERLAY_INTERFACE_IP_ADDRESS
+ l2_population = true
+
+ [securitygroup]
+ enable_security_group = true
+ firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
+ ```
+
+ ***Description***
+
+ Map the provider virtual network to the physical network interface in the **[linux_bridge]** section.
+
+ Enable the VXLAN overlay network, configure the IP address of the physical network interface that processes the overlay network, and enable layer-2 population in the **[vxlan]** section.
+
+ Enable the security group and configure the linux bridge iptables firewall driver in the **[securitygroup]** section.
+
+ ***Note***
+
+ **Replace *PROVIDER_INTERFACE_NAME* with the physical network interface.**
+
+ **Replace *OVERLAY_INTERFACE_IP_ADDRESS* with the management IP address of the controller node.**
+
+ Configure the Layer-3 agent:
+
+ ```shell
+ vim /etc/neutron/l3_agent.ini (CTL)
+
+ [DEFAULT]
+ interface_driver = linuxbridge
+ ```
+
+ ***Description***
+
+ Set the interface driver to linuxbridge in the **[default]** section.
+
+ Configure the DHCP agent:
+
+ ```shell
+ vim /etc/neutron/dhcp_agent.ini (CTL)
+
+ [DEFAULT]
+ interface_driver = linuxbridge
+ dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
+ enable_isolated_metadata = true
+ ```
+
+ ***Description***
+
+ In the **[default]** section, configure the linuxbridge interface driver and Dnsmasq DHCP driver, and enable the isolated metadata.
+
+ Configure the metadata agent:
+
+ ```shell
+ vim /etc/neutron/metadata_agent.ini (CTL)
+
+ [DEFAULT]
+ nova_metadata_host = controller
+ metadata_proxy_shared_secret = METADATA_SECRET
+ ```
+
+ ***Description***
+
+    In the **[default]** section, configure the metadata host and the shared secret.
+
+ ***Note***
+
+ **Replace *METADATA_SECRET* with a proper metadata agent secret.**
+
+4. Configure Nova:
+
+ ```shell
+ vim /etc/nova/nova.conf
+
+ [neutron]
+ auth_url = http://controller:5000
+ auth_type = password
+ project_domain_name = Default
+ user_domain_name = Default
+ region_name = RegionOne
+ project_name = service
+ username = neutron
+ password = NEUTRON_PASS
+ service_metadata_proxy = true (CTL)
+ metadata_proxy_shared_secret = METADATA_SECRET (CTL)
+ ```
+
+ ***Description***
+
+ In the **[neutron]** section, configure the access parameters, enable the metadata agent, and configure the secret.
+
+ ***Note***
+
+ **Replace *NEUTRON_PASS* with the password of the neutron user.**
+
+ **Replace *METADATA_SECRET* with a proper metadata agent secret.**
+
+5. Synchronize the database:
+
+ ```shell
+ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
+ --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
+ ```
+
+6. Run the following command to restart the compute API service:
+
+ ```shell
+ systemctl restart openstack-nova-api.service
+ ```
+
+7. Start the network service:
+
+ ```shell
+ systemctl enable neutron-server.service neutron-linuxbridge-agent.service \ (CTL)
+ neutron-dhcp-agent.service neutron-metadata-agent.service \
+ neutron-l3-agent.service
+
+ systemctl restart neutron-server.service neutron-linuxbridge-agent.service \ (CTL)
+ neutron-dhcp-agent.service neutron-metadata-agent.service \
+ neutron-l3-agent.service
+
+ systemctl enable neutron-linuxbridge-agent.service (CPT)
+ systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service (CPT)
+ ```
+
+8. Perform the verification.
+
+ Run the following command to verify whether the Neutron agent is started successfully:
+
+ ```shell
+ openstack network agent list
+ ```
+
+### Installing Cinder
+
+1. Create the database, service credentials, and API endpoints.
+
+ Create the database:
+
+ ```sql
+ mysql -u root -p
+
+ MariaDB [(none)]> CREATE DATABASE cinder;
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
+ IDENTIFIED BY 'CINDER_DBPASS';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
+ IDENTIFIED BY 'CINDER_DBPASS';
+ MariaDB [(none)]> exit
+ ```
+
+ ***Note***
+
+ **Replace *CINDER_DBPASS* to set the password for the cinder database.**
+
+ ```shell
+ source ~/.admin-openrc
+ ```
+
+ Create the Cinder service credentials:
+
+ ```shell
+ openstack user create --domain default --password-prompt cinder
+ openstack role add --project service --user cinder admin
+ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
+ ```
+
+ Create the API endpoints for the block storage service:
+
+ ```shell
+ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
+ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
+ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
+ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
+ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
+ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
+ ```
+
+2. Install the software packages:
+
+ ```shell
+ yum install openstack-cinder-api openstack-cinder-scheduler (CTL)
+ ```
+
+ ```shell
+ yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils \ (STG)
+ openstack-cinder-volume openstack-cinder-backup
+ ```
+
+3. Prepare the storage devices. The following is an example:
+
+ ```shell
+ pvcreate /dev/vdb
+ vgcreate cinder-volumes /dev/vdb
+
+ vim /etc/lvm/lvm.conf
+
+
+ devices {
+ ...
+ filter = [ "a/vdb/", "r/.*/"]
+ ```
+
+ ***Description***
+
+ In the **devices** section, add filters to allow the **/dev/vdb** devices and reject other devices.
+
+4. Prepare the NFS:
+
+ ```shell
+ mkdir -p /root/cinder/backup
+
+    cat << EOF >> /etc/exports
+ /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
+ EOF
+
+ ```
+
+5. Configure Cinder:
+
+ ```shell
+ vim /etc/cinder/cinder.conf
+
+ [DEFAULT]
+ transport_url = rabbit://openstack:RABBIT_PASS@controller
+ auth_strategy = keystone
+ my_ip = 10.0.0.11
+ enabled_backends = lvm (STG)
+ backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver (STG)
+ backup_share=HOST:PATH (STG)
+
+ [database]
+ connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000
+ auth_url = http://controller:5000
+ memcached_servers = controller:11211
+ auth_type = password
+ project_domain_name = Default
+ user_domain_name = Default
+ project_name = service
+ username = cinder
+ password = CINDER_PASS
+
+ [oslo_concurrency]
+ lock_path = /var/lib/cinder/tmp
+
+ [lvm]
+ volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver (STG)
+ volume_group = cinder-volumes (STG)
+ iscsi_protocol = iscsi (STG)
+ iscsi_helper = tgtadm (STG)
+ ```
+
+ ***Description***
+
+ In the **[database]** section, configure the database entry.
+
+ In the **[DEFAULT]** section, configure the RabbitMQ message queue entry and **my_ip**.
+
+ In the **[DEFAULT]** and **[keystone_authtoken]** sections, configure the identity authentication service entry.
+
+ In the **[oslo_concurrency]** section, configure the lock path.
+
+ ***Note***
+
+ **Replace *CINDER_DBPASS* with the password of the cinder database.**
+
+ **Replace *RABBIT_PASS* with the password of the openstack user in RabbitMQ.**
+
+ **Set *my_ip* to the management IP address of the controller node.**
+
+ **Replace *CINDER_PASS* with the password of the cinder user.**
+
+ **Replace *HOST:PATH* with the host IP address and the shared path of the NFS.**
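+
+    For example, if the NFS directory prepared earlier is exported by a storage host whose IP address is 192.168.1.10 (an illustrative address on the subnet used above), the setting would be:
+
+    ```shell
+    backup_share = 192.168.1.10:/root/cinder/backup
+    ```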
+
+6. Synchronize the database:
+
+ ```shell
+ su -s /bin/sh -c "cinder-manage db sync" cinder (CTL)
+ ```
+
+7. Configure Nova:
+
+ ```shell
+ vim /etc/nova/nova.conf (CTL)
+
+ [cinder]
+ os_region_name = RegionOne
+ ```
+
+8. Restart the compute API service:
+
+ ```shell
+ systemctl restart openstack-nova-api.service
+ ```
+
+9. Start the Cinder service:
+
+ ```shell
+ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
+ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service (CTL)
+ ```
+
+ ```shell
+ systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
+ openstack-cinder-volume.service \
+ openstack-cinder-backup.service
+ systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \ (STG)
+ openstack-cinder-volume.service \
+ openstack-cinder-backup.service
+ ```
+
+ ***Note***
+
+ If the Cinder volumes are mounted using tgtadm, modify the **/etc/tgt/tgtd.conf** file as follows to ensure that tgtd can discover the iscsi target of cinder-volume.
+
+ ```shell
+ include /var/lib/cinder/volumes/*
+ ```
+
+10. Perform the verification:
+
+ ```shell
+ source ~/.admin-openrc
+ openstack volume service list
+ ```
+
+### Installing Horizon
+
+1. Install the software package:
+
+ ```shell
+ yum install openstack-dashboard
+ ```
+
+2. Modify the file.
+
+ Modify the variables:
+
+ ```text
+ vim /etc/openstack-dashboard/local_settings
+
+ OPENSTACK_HOST = "controller"
+ ALLOWED_HOSTS = ['*', ]
+
+ SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
+
+ CACHES = {
+ 'default': {
+ 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
+ 'LOCATION': 'controller:11211',
+ }
+ }
+
+ OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
+ OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
+ OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
+ OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
+
+ OPENSTACK_API_VERSIONS = {
+ "identity": 3,
+ "image": 2,
+ "volume": 3,
+ }
+ ```
+
+3. Restart the httpd service:
+
+ ```shell
+ systemctl restart httpd.service memcached.service
+ ```
+
+4. Perform the verification.
+    Open the browser, enter **http://HOSTIP/dashboard** in the address bar, and log in to Horizon.
+
+ ***Note***
+
+ **Replace *HOSTIP* with the management plane IP address of the controller node.**
+
+### Installing Tempest
+
+Tempest is the integrated test service of OpenStack. If you need to run a fully automatic test of the functions of the installed OpenStack environment, you are advised to use Tempest. Otherwise, you can choose not to install it.
+
+1. Install Tempest:
+
+ ```shell
+ yum install openstack-tempest
+ ```
+
+2. Initialize the directory:
+
+ ```shell
+ tempest init mytest
+ ```
+
+3. Modify the configuration file:
+
+ ```shell
+ cd mytest
+ vi etc/tempest.conf
+ ```
+
+ Configure the current OpenStack environment information in **tempest.conf**. For details, see the [official example](https://docs.openstack.org/tempest/latest/sampleconf.html).
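+
+    As an illustrative sketch only, the kind of options to fill in might look as follows; the values are placeholders for this guide's environment, and the official sample lists all available options:
+
+    ```shell
+    [auth]
+    admin_username = admin
+    admin_password = ADMIN_PASS
+    admin_project_name = admin
+    admin_domain_name = Default
+
+    [identity]
+    uri_v3 = http://controller:5000/v3
+    ```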
+
+4. Perform the test:
+
+ ```shell
+ tempest run
+ ```
+
+5. (Optional) Install the tempest extensions.
+
+    The OpenStack services provide some tempest test packages. You can install these packages to enrich the tempest test content. In Train, extension tests for Cinder, Glance, Keystone, Ironic, and Trove are provided. You can run the following command to install and use them:
+
+    ```shell
+    yum install python3-cinder-tempest-plugin python3-glance-tempest-plugin python3-ironic-tempest-plugin python3-keystone-tempest-plugin python3-trove-tempest-plugin
+    ```
+
+### Installing Ironic
+
+Ironic is the bare metal service of OpenStack. If you need to deploy bare metal machines, Ironic is recommended. Otherwise, you can choose not to install it.
+
+1. Set the database.
+
+    The bare metal service stores information in the database. Create an **ironic** database that can be accessed by the **ironic** user and replace **IRONIC_DBPASSWORD** with a proper password.
+
+ ```sql
+ mysql -u root -p
+
+ MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
+ IDENTIFIED BY 'IRONIC_DBPASSWORD';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
+ IDENTIFIED BY 'IRONIC_DBPASSWORD';
+ ```
+
+2. Install the software packages.
+
+ ```shell
+ yum install openstack-ironic-api openstack-ironic-conductor python3-ironicclient
+ ```
+
+ Start the services.
+
+ ```shell
+ systemctl enable openstack-ironic-api openstack-ironic-conductor
+ systemctl start openstack-ironic-api openstack-ironic-conductor
+ ```
+
+3. Create service user authentication.
+
+ 1. Create the bare metal service user:
+
+ ```shell
+ openstack user create --password IRONIC_PASSWORD \
+ --email ironic@example.com ironic
+ openstack role add --project service --user ironic admin
+ openstack service create --name ironic \
+ --description "Ironic baremetal provisioning service" baremetal
+ ```
+
+ 1. Create the bare metal service access entries:
+
+ ```shell
+ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
+ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
+ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
+ ```
+
+4. Configure the ironic-api service.
+
+ Configuration file path: **/etc/ironic/ironic.conf**
+
+ 1. Use **connection** to configure the location of the database as follows. Replace **IRONIC_DBPASSWORD** with the password of user **ironic** and replace **DB_IP** with the IP address of the database server.
+
+ ```shell
+ [database]
+
+ # The SQLAlchemy connection string used to connect to the
+ # database (string value)
+
+ connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+ ```
+
+ 1. Configure the ironic-api service to use the RabbitMQ message broker. Replace **RPC_\*** with the detailed address and the credential of RabbitMQ.
+
+ ```shell
+ [DEFAULT]
+
+ # A URL representing the messaging driver to use and its full
+ # configuration. (string value)
+
+ transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+ ```
+
+ You can also use json-rpc instead of RabbitMQ.
+
+ 1. Configure the ironic-api service to use the credential of the identity authentication service. Replace **PUBLIC_IDENTITY_IP** with the public IP address of the identity authentication server and **PRIVATE_IDENTITY_IP** with the private IP address of the identity authentication server, replace **IRONIC_PASSWORD** with the password of the **ironic** user in the identity authentication service.
+
+ ```shell
+ [DEFAULT]
+
+ # Authentication strategy used by ironic-api: one of
+ # "keystone" or "noauth". "noauth" should not be used in a
+ # production environment because all authentication will be
+ # disabled. (string value)
+
+ auth_strategy=keystone
+
+ [keystone_authtoken]
+ # Authentication type to load (string value)
+ auth_type=password
+ # Complete public Identity API endpoint (string value)
+ www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
+ # Complete admin Identity API endpoint. (string value)
+ auth_url=http://PRIVATE_IDENTITY_IP:5000
+ # Service username. (string value)
+ username=ironic
+ # Service account password. (string value)
+ password=IRONIC_PASSWORD
+ # Service tenant name. (string value)
+ project_name=service
+ # Domain name containing project (string value)
+ project_domain_name=Default
+ # User's domain name (string value)
+ user_domain_name=Default
+
+ ```
+
+ 1. Create the bare metal service database table:
+
+ ```shell
+ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
+ ```
+
+ 1. Restart the ironic-api service:
+
+ ```shell
+ sudo systemctl restart openstack-ironic-api
+ ```
+
+5. Configure the ironic-conductor service.
+
+ 1. Replace **HOST_IP** with the IP address of the conductor host.
+
+ ```shell
+ [DEFAULT]
+
+ # IP address of this host. If unset, will determine the IP
+ # programmatically. If unable to do so, will use "127.0.0.1".
+ # (string value)
+
+ my_ip=HOST_IP
+ ```
+
+    1. Specify the location of the database. ironic-conductor must use the same configuration as ironic-api. Replace **IRONIC_DBPASSWORD** with the password of user **ironic** and replace **DB_IP** with the IP address of the database server.
+
+ ```shell
+ [database]
+
+ # The SQLAlchemy connection string to use to connect to the
+ # database. (string value)
+
+ connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
+ ```
+
+    1. Configure the ironic-conductor service to use the RabbitMQ message broker. It must use the same configuration as ironic-api. Replace **RPC_\*** with the detailed address and the credential of RabbitMQ.
+
+ ```shell
+ [DEFAULT]
+
+ # A URL representing the messaging driver to use and its full
+ # configuration. (string value)
+
+ transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
+ ```
+
+ You can also use json-rpc instead of RabbitMQ.
+
+ 1. Configure the credentials to access other OpenStack services.
+
+        To communicate with other OpenStack services, the bare metal service needs to use the service users to get authenticated by the OpenStack Identity service when requesting other services. The credentials of these users must be configured in each configuration section associated with the corresponding service.
+
+ ```shell
+ [neutron] - Accessing the OpenStack network services.
+ [glance] - Accessing the OpenStack image service.
+ [swift] - Accessing the OpenStack object storage service.
+ [cinder] - Accessing the OpenStack block storage service.
+        [inspector] - Accessing the OpenStack bare metal introspection service.
+ [service_catalog] - A special item to store the credential used by the bare metal service. The credential is used to discover the API URL endpoint registered in the OpenStack identity authentication service catalog by the bare metal service.
+ ```
+
+ For simplicity, you can use one service user for all services. For backward compatibility, the user name must be the same as that configured in [keystone_authtoken] of the ironic-api service. However, this is not mandatory. You can also create and configure a different service user for each service.
+
+ In the following example, the authentication information for the user to access the OpenStack network service is configured as follows:
+
+        - The network service is deployed in the identity authentication service domain named RegionOne. Only the public endpoint interface is registered in the service catalog.
+        - A specific CA SSL certificate is used for HTTPS connections when sending a request.
+        - The same service user as that configured for ironic-api is used.
+        - The dynamic password authentication plugin discovers a proper identity authentication service API version based on other options.
+
+ ```shell
+ [neutron]
+
+ # Authentication type to load (string value)
+ auth_type = password
+ # Authentication URL (string value)
+ auth_url=https://IDENTITY_IP:5000/
+ # Username (string value)
+ username=ironic
+ # User's password (string value)
+ password=IRONIC_PASSWORD
+ # Project name to scope to (string value)
+ project_name=service
+ # Domain ID containing project (string value)
+ project_domain_id=default
+ # User's domain id (string value)
+ user_domain_id=default
+ # PEM encoded Certificate Authority to use when verifying
+ # HTTPs connections. (string value)
+ cafile=/opt/stack/data/ca-bundle.pem
+ # The default region_name for endpoint URL discovery. (string
+ # value)
+ region_name = RegionOne
+ # List of interfaces, in order of preference, for endpoint
+ # URL. (list value)
+ valid_interfaces=public
+ ```
+
+ By default, to communicate with other services, the bare metal service attempts to discover a proper endpoint of the service through the service catalog of the identity authentication service. If you want to use a different endpoint for a specific service, specify the endpoint_override option in the bare metal service configuration file.
+
+ ```shell
+       [neutron]
+       ...
+       endpoint_override =
+ ```
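+
+       For example, to point Ironic at a specific Neutron API endpoint (the address below is a placeholder; 9696 is the default Neutron port):
+
+       ```shell
+       [neutron]
+       endpoint_override = http://NEUTRON_API_IP:9696
+       ```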
+
+ 1. Configure the allowed drivers and hardware types.
+
+ Set enabled_hardware_types to specify the hardware types that can be used by ironic-conductor:
+
+ ```shell
+       [DEFAULT]
+       enabled_hardware_types = ipmi
+ ```
+
+ Configure hardware interfaces:
+
+ ```shell
+       enabled_boot_interfaces = pxe
+       enabled_deploy_interfaces = direct,iscsi
+       enabled_inspect_interfaces = inspector
+       enabled_management_interfaces = ipmitool
+       enabled_power_interfaces = ipmitool
+ ```
+
+       Configure the default values of the interfaces:
+
+ ```shell
+       [DEFAULT]
+       default_deploy_interface = direct
+       default_network_interface = neutron
+ ```
+
+ If any driver that uses Direct Deploy is enabled, you must install and configure the Swift backend of the image service. The Ceph object gateway (RADOS gateway) can also be used as the backend of the image service.
+
+ 1. Restart the ironic-conductor service:
+
+ ```shell
+ sudo systemctl restart openstack-ironic-conductor
+ ```
+
+6. Configure the httpd service.
+
+ 1. Create the root directory of the httpd used by Ironic, and set the owner and owner group. The directory path must be the same as the path specified by the **http_root** configuration item in the **[deploy]** group in **/etc/ironic/ironic.conf**.
+
+ ```
+    mkdir -p /var/lib/ironic/httproot
+    chown ironic.ironic /var/lib/ironic/httproot
+ ```
+
+    2. Install and configure the httpd service.
+
+ 1. Install the httpd service. If the httpd service is already installed, skip this step.
+
+ ```
+ yum install httpd -y
+ ```
+ 2. Create the **/etc/httpd/conf.d/openstack-ironic-httpd.conf** file. The file content is as follows:
+
+ ```
+          Listen 8080
+
+          <VirtualHost *:8080>
+              ServerName ironic.openeuler.com
+
+              ErrorLog "/var/log/httpd/openstack-ironic-httpd-error_log"
+              CustomLog "/var/log/httpd/openstack-ironic-httpd-access_log" "%h %l %u %t \"%r\" %>s %b"
+
+              DocumentRoot "/var/lib/ironic/httproot"
+              <Directory "/var/lib/ironic/httproot">
+                  Options Indexes FollowSymLinks
+                  Require all granted
+              </Directory>
+
+              LogLevel warn
+              AddDefaultCharset UTF-8
+              EnableSendfile on
+          </VirtualHost>
+ ```
+
+ The listening port must be the same as the port specified by **http_url** in the **[deploy]** section of **/etc/ironic/ironic.conf**.
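+
+          For reference, a minimal matching **[deploy]** section might look as follows (the IP address is a placeholder for the host running this httpd instance):
+
+          ```shell
+          [deploy]
+          http_url = http://IRONIC_HTTP_IP:8080
+          http_root = /var/lib/ironic/httproot
+          ```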
+
+ 3. Restart the httpd service:
+
+ ```
+ systemctl restart httpd
+ ```
+
+
+
+7. Create the deploy ramdisk image.
+
+    The ramdisk image for Train can be created using the ironic-python-agent service or the diskimage-builder tool. You can also use the latest ironic-python-agent-builder provided by the community, or other tools of your choice.
+    To use the native Train tools, install the corresponding software package:
+
+ ```shell
+ yum install openstack-ironic-python-agent
+    # or
+ yum install diskimage-builder
+ ```
+
+ For details, see the [official document](https://docs.openstack.org/ironic/queens/install/deploy-ramdisk.html).
+
+ The following describes how to use the ironic-python-agent-builder to build the deploy image used by ironic.
+
+ 1. Install ironic-python-agent-builder.
+
+ 1. Install the tool:
+
+ ```shell
+ pip install ironic-python-agent-builder
+ ```
+
+ 2. Modify the python interpreter in the following files:
+
+ ```shell
+          /usr/bin/yum
+          /usr/libexec/urlgrabber-ext-down
+ ```
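+
+          For example, if the goal is to point both scripts at an explicit Python 2 interpreter, a sketch is shown below (the target interpreter path is an assumption; adjust it to your environment):
+
+          ```shell
+          # Rewrite the shebang (first line) of both scripts; /usr/bin/python2 is an assumed target.
+          sed -i '1s|^#!.*python.*|#!/usr/bin/python2|' /usr/bin/yum /usr/libexec/urlgrabber-ext-down
+          ```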
+
+ 3. Install the other necessary tools:
+
+ ```shell
+ yum install git
+ ```
+
+ **DIB** depends on the `semanage` command. Therefore, check whether the `semanage --help` command is available before creating an image. If the system displays a message indicating that the command is unavailable, install the command:
+
+ ```shell
+ # Check which package needs to be installed.
+ [root@localhost ~]# yum provides /usr/sbin/semanage
+ Loaded plug-in: fastestmirror
+ Loading mirror speeds from cached hostfile
+ * base: mirror.vcu.edu
+ * extras: mirror.vcu.edu
+ * updates: mirror.math.princeton.edu
+ policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
+ Source: base
+ Matching source:
+ File name: /usr/sbin/semanage
+ # Install.
+ [root@localhost ~]# yum install policycoreutils-python
+ ```
+
+ 2. Create the image.
+
+       For the ARM architecture, export the following environment variable first:
+ ```shell
+ export ARCH=aarch64
+ ```
+
+ Basic usage:
+
+ ```shell
+ usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
+ [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
+ distribution
+
+ positional arguments:
+ distribution Distribution to use
+
+ optional arguments:
+ -h, --help show this help message and exit
+ -r RELEASE, --release RELEASE
+ Distribution release to use
+ -o OUTPUT, --output OUTPUT
+ Output base file name
+ -e ELEMENT, --element ELEMENT
+ Additional DIB element to use
+ -b BRANCH, --branch BRANCH
+ If set, override the branch that is used for ironic-
+ python-agent and requirements
+ -v, --verbose Enable verbose logging in diskimage-builder
+ --extra-args EXTRA_ARGS
+ Extra arguments to pass to diskimage-builder
+ ```
+
+ Example:
+
+ ```shell
+ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
+ ```
+
+ 3. Allow SSH login.
+
+ Initialize the environment variables and create the image:
+
+ ```shell
+       export DIB_DEV_USER_USERNAME=ipa
+       export DIB_DEV_USER_PWDLESS_SUDO=yes
+       export DIB_DEV_USER_PASSWORD='123'
+ ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
+ ```
+
+ 4. Specify the code repository.
+
+ Initialize the corresponding environment variables and create the image:
+
+ ```shell
+ # Specify the address and version of the repository.
+ DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
+ DIB_REPOREF_ironic_python_agent=origin/develop
+
+ # Clone code from Gerrit.
+ DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
+ DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
+ ```
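+
+       A minimal sketch of how these variables are consumed: export them in the shell that runs the builder, then build as usual (the values reuse the Gerrit example above).
+
+       ```shell
+       export DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
+       export DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
+       ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh
+       ```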
+
+ Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).
+
+       The specified repository address and version have been verified to work.
+
+    5. Notes
+
+The PXE configuration file template of native OpenStack does not support the ARM64 architecture. You need to modify the native OpenStack code.
+
+In Train, Ironic provided by the community does not support booting from ARM 64-bit UEFI PXE. As a result, the format of the generated grub.cfg file (generally in /tftpboot/) is incorrect, causing PXE boot failures. You need to modify the code logic for generating the grub.cfg file.
+
+A TLS error may be reported when Ironic sends a request to IPA to query the command execution status. By default, both IPA and Ironic of Train have TLS authentication enabled for requests to each other. Disable TLS authentication as described on the official website:
+
+1. Add **ipa-insecure=1** to the following configuration in the Ironic configuration file (**/etc/ironic/ironic.conf**):
+
+```
+[agent]
+verify_ca = False
+
+[pxe]
+pxe_append_params = nofb nomodeset vga=normal coreos.autologin ipa-insecure=1
+```
+
+2. Add the IPA configuration file **/etc/ironic_python_agent/ironic_python_agent.conf** to the ramdisk image and configure the TLS as follows:
+
+**/etc/ironic_python_agent/ironic_python_agent.conf** (The **/etc/ironic_python_agent** directory must be created in advance.)
+
+```
+[DEFAULT]
+enable_auto_tls = False
+```
+
+Set the permission:
+
+```
+chown -R ipa.ipa /etc/ironic_python_agent/
+```
+
+3. Modify the startup file of the IPA service and add the configuration file option.
+
+   vim /usr/lib/systemd/system/ironic-python-agent.service
+
+ ```
+ [Unit]
+ Description=Ironic Python Agent
+ After=network-online.target
+
+ [Service]
+ ExecStartPre=/sbin/modprobe vfat
+ ExecStart=/usr/local/bin/ironic-python-agent --config-file /etc/ironic_python_agent/ironic_python_agent.conf
+ Restart=always
+ RestartSec=30s
+
+ [Install]
+ WantedBy=multi-user.target
+ ```
+
+
+Other services such as ironic-inspector are also provided for OpenStack Train. Install the services based on site requirements.
+
+### Installing Kolla
+
+Kolla provides production-ready, container-based deployment for OpenStack services.
+
+The installation of Kolla is simple. You only need to install the corresponding RPM packages:
+
+```
+yum install openstack-kolla openstack-kolla-ansible
+```
+
+After the installation is complete, you can run commands such as `kolla-ansible`, `kolla-build`, `kolla-genpwd`, `kolla-mergepwd` to create an image or deploy a container environment.
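+
+For example, a minimal sketch of generating the passwords used by `kolla-ansible` (the path below is the common default; adjust it to your deployment):
+
+```shell
+# Fill the empty password fields in the Kolla passwords file with random values.
+kolla-genpwd -p /etc/kolla/passwords.yml
+```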
+
+### Installing Trove
+Trove is the database service of OpenStack. If you need to use the database service provided by OpenStack, Trove is recommended. Otherwise, you can choose not to install it.
+
+1. Set the database.
+
+ The database service stores information in the database. Create a **trove** database that can be accessed by the **trove** user and replace **TROVE_DBPASSWORD** with a proper password.
+
+ ```sql
+ mysql -u root -p
+
+ MariaDB [(none)]> CREATE DATABASE trove CHARACTER SET utf8;
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'localhost' \
+ IDENTIFIED BY 'TROVE_DBPASSWORD';
+ MariaDB [(none)]> GRANT ALL PRIVILEGES ON trove.* TO 'trove'@'%' \
+ IDENTIFIED BY 'TROVE_DBPASSWORD';
+ ```
+
+2. Create service user authentication.
+
+ 1. Create the **Trove** service user.
+
+ ```shell
+ openstack user create --domain default --password-prompt trove
+ openstack role add --project service --user trove admin
+ openstack service create --name trove --description "Database" database
+ ```
+ **Description:** Replace *TROVE_PASSWORD* with the password of the **trove** user.
+
+ 1. Create the **Database** service access entry
+
+ ```shell
+ openstack endpoint create --region RegionOne database public http://controller:8779/v1.0/%\(tenant_id\)s
+ openstack endpoint create --region RegionOne database internal http://controller:8779/v1.0/%\(tenant_id\)s
+ openstack endpoint create --region RegionOne database admin http://controller:8779/v1.0/%\(tenant_id\)s
+ ```
+
+3. Install and configure the **Trove** components.
+
+ 1. Install the **Trove** package:
+ ```shell script
+ yum install openstack-trove python3-troveclient
+ ```
+
+ 2. Configure **trove.conf**:
+ ```shell script
+ vim /etc/trove/trove.conf
+
+ [DEFAULT]
+ log_dir = /var/log/trove
+ trove_auth_url = http://controller:5000/
+ nova_compute_url = http://controller:8774/v2
+ cinder_url = http://controller:8776/v1
+ swift_url = http://controller:8080/v1/AUTH_
+ rpc_backend = rabbit
+ transport_url = rabbit://openstack:RABBIT_PASS@controller:5672
+ auth_strategy = keystone
+ add_addresses = True
+ api_paste_config = /etc/trove/api-paste.ini
+ nova_proxy_admin_user = admin
+ nova_proxy_admin_pass = ADMIN_PASSWORD
+ nova_proxy_admin_tenant_name = service
+ taskmanager_manager = trove.taskmanager.manager.Manager
+ use_nova_server_config_drive = True
+ # Set these if using Neutron Networking
+ network_driver = trove.network.neutron.NeutronDriver
+ network_label_regex = .*
+
+ [database]
+ connection = mysql+pymysql://trove:TROVE_DBPASSWORD@controller/trove
+
+ [keystone_authtoken]
+ www_authenticate_uri = http://controller:5000/
+ auth_url = http://controller:5000/
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ project_name = service
+ username = trove
+ password = TROVE_PASSWORD
+ ```
+ **Description:**
+ - In the **[Default]** section, **nova_compute_url** and **cinder_url** are endpoints created by Nova and Cinder in Keystone.
+ - **nova_proxy_XXX** is a user who can access the Nova service. In the preceding example, the **admin** user is used.
+ - **transport_url** is the **RabbitMQ** connection information, and **RABBIT_PASS** is the RabbitMQ password.
+ - In the **[database]** section, **connection** is the information of the database created for Trove in MySQL.
+ - Replace **TROVE_PASSWORD** in the Trove user information with the password of the **trove** user.
+
+ 3. Configure **trove-guestagent.conf**:
+ ```shell script
+ vim /etc/trove/trove-guestagent.conf
+
+ rabbit_host = controller
+ rabbit_password = RABBIT_PASS
+ trove_auth_url = http://controller:5000/
+ ```
+ **Description:** **guestagent** is an independent component in Trove and needs to be pre-built into the virtual machine image created by Trove using Nova.
+   After the database instance is created, the guestagent process is started to report heartbeat messages to Trove through the message queue (RabbitMQ).
+ Therefore, you need to configure the user name and password of the RabbitMQ.
+ **Since Victoria, Trove uses a unified image to run different types of databases. The database service runs in the Docker container of the Guest VM.**
+ - Replace **RABBIT_PASS** with the RabbitMQ password.
+
+ 4. Generate the **Trove** database table.
+ ```shell script
+ su -s /bin/sh -c "trove-manage db_sync" trove
+ ```
+
+4. Complete the installation and configuration.
+ 1. Configure the **Trove** service to automatically start:
+ ```shell script
+ systemctl enable openstack-trove-api.service \
+ openstack-trove-taskmanager.service \
+ openstack-trove-conductor.service
+ ```
+ 2. Start the services:
+ ```shell script
+ systemctl start openstack-trove-api.service \
+ openstack-trove-taskmanager.service \
+ openstack-trove-conductor.service
+ ```
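+
+   3. (Optional) Verify that the service responds. A minimal check, assuming the admin credentials are loaded and python3-troveclient is installed:
+
+      ```shell
+      openstack database instance list
+      ```
+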
+### Installing Swift
+
+Swift provides a scalable and highly available distributed object storage service, which is suitable for storing large amounts of unstructured data.
+
+1. Create the service credentials and API endpoints.
+
+ Create the service credential:
+
+ ``` shell
+ # Create the swift user.
+ openstack user create --domain default --password-prompt swift
+ # Add the admin role for the swift user.
+ openstack role add --project service --user swift admin
+ # Create the swift service entity.
+ openstack service create --name swift --description "OpenStack Object Storage" object-store
+ ```
+
+ Create the Swift API endpoints.
+
+ ```shell
+ openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(project_id\)s
+ openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(project_id\)s
+ openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1
+ ```
+
+
+2. Install the software packages. (CTL)
+
+   ```shell
+   yum install openstack-swift-proxy python3-swiftclient python3-keystoneclient python3-keystonemiddleware memcached
+ ```
+
+3. Configure the proxy-server.
+
+ The Swift RPM package contains a **proxy-server.conf** file which is basically ready to use. You only need to change the values of **ip** and swift **password** in the file.
+
+ ***Note***
+
+ **Replace password with the password you set for the swift user in the identity service.**
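+
+   An illustrative excerpt of the values that typically need editing is shown below (placeholder values; verify the option names against the file shipped in your release):
+
+   ```shell
+   [DEFAULT]
+   bind_ip = PROXY_NODE_MANAGEMENT_IP
+   bind_port = 8080
+
+   [filter:authtoken]
+   # ...
+   username = swift
+   password = SWIFT_PASS
+   ```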
+
+4. Install and configure the storage node. (STG)
+
+ Install the supported program packages:
+ ```shell
+ yum install xfsprogs rsync
+ ```
+
+ Format the /dev/vdb and /dev/vdc devices into XFS:
+
+ ```shell
+ mkfs.xfs /dev/vdb
+ mkfs.xfs /dev/vdc
+ ```
+
+ Create the mount point directory structure:
+
+ ```shell
+ mkdir -p /srv/node/vdb
+ mkdir -p /srv/node/vdc
+ ```
+
+ Find the UUID of the new partition:
+
+ ```shell
+ blkid
+ ```
+
+ Add the following to the **/etc/fstab** file:
+
+ ```shell
+ UUID="" /srv/node/vdb xfs noatime 0 2
+ UUID="" /srv/node/vdc xfs noatime 0 2
+ ```
+
+ Mount the devices:
+
+ ```shell
+ mount /srv/node/vdb
+ mount /srv/node/vdc
+ ```
+ ***Note***
+
+ **If the disaster recovery function is not required, you only need to create one device and skip the following rsync configuration.**
+
+ (Optional) Create or edit the **/etc/rsyncd.conf** file to include the following content:
+
+ ```shell
+ [DEFAULT]
+ uid = swift
+ gid = swift
+ log file = /var/log/rsyncd.log
+ pid file = /var/run/rsyncd.pid
+ address = MANAGEMENT_INTERFACE_IP_ADDRESS
+
+ [account]
+ max connections = 2
+ path = /srv/node/
+ read only = False
+ lock file = /var/lock/account.lock
+
+ [container]
+ max connections = 2
+ path = /srv/node/
+ read only = False
+ lock file = /var/lock/container.lock
+
+ [object]
+ max connections = 2
+ path = /srv/node/
+ read only = False
+ lock file = /var/lock/object.lock
+ ```
+ **Replace *MANAGEMENT_INTERFACE_IP_ADDRESS* with the management network IP address of the storage node.**
+
+ Start the rsyncd service and configure it to start upon system startup.
+
+ ```shell
+ systemctl enable rsyncd.service
+ systemctl start rsyncd.service
+ ```
+
+5. Install and configure the components on storage nodes. (STG)
+
+ Install the software packages:
+
+ ```shell
+ yum install openstack-swift-account openstack-swift-container openstack-swift-object
+ ```
+
+   Edit **account-server.conf**, **container-server.conf**, and **object-server.conf** in the **/etc/swift** directory and replace **bind_ip** with the management network IP address of the storage node.
+
+ Ensure the proper ownership of the mount point directory structure.
+
+ ```shell
+ chown -R swift:swift /srv/node
+ ```
+
+ Create the recon directory and ensure that it has the correct ownership.
+
+ ```shell
+ mkdir -p /var/cache/swift
+ chown -R root:swift /var/cache/swift
+ chmod -R 775 /var/cache/swift
+ ```
+
+6. Create the account ring. (CTL)
+
+ Switch to the **/etc/swift** directory:
+
+ ```shell
+ cd /etc/swift
+ ```
+
+ Create the basic **account.builder** file:
+
+ ```shell
+ swift-ring-builder account.builder create 10 1 1
+ ```
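+
+   The three positional arguments are the partition power (2^10 partitions), the number of replicas, and the minimum number of hours before a partition can be moved again, that is:
+
+   ```shell
+   # part_power=10  replicas=1  min_part_hours=1
+   swift-ring-builder account.builder create 10 1 1
+   ```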
+
+ Add each storage node to the ring:
+
+ ```shell
+ swift-ring-builder account.builder add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 --device DEVICE_NAME --weight DEVICE_WEIGHT
+ ```
+
+ **Replace *STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS* with the management network IP address of the storage node. Replace *DEVICE_NAME* with the name of the storage device on the same storage node.**
+
+ ***Note***
+   **Repeat this command for each storage device on each storage node.**
+
+ Verify the ring contents:
+
+ ```shell
+ swift-ring-builder account.builder
+ ```
+
+ Rebalance the ring:
+
+ ```shell
+ swift-ring-builder account.builder rebalance
+ ```
+
+7. Create the container ring. (CTL)
+
+   Switch to the **/etc/swift** directory.
+
+ Create the basic **container.builder** file:
+
+ ```shell
+ swift-ring-builder container.builder create 10 1 1
+ ```
+
+ Add each storage node to the ring:
+
+ ```shell
+ swift-ring-builder container.builder \
+ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
+ --device DEVICE_NAME --weight 100
+
+ ```
+
+ **Replace *STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS* with the management network IP address of the storage node. Replace *DEVICE_NAME* with the name of the storage device on the same storage node.**
+
+ ***Note***
+   **Repeat this command for every storage device on every storage node.**
+
+ Verify the ring contents:
+
+ ```shell
+ swift-ring-builder container.builder
+ ```
+
+ Rebalance the ring:
+
+ ```shell
+ swift-ring-builder container.builder rebalance
+ ```
+
+8. Create the object ring. (CTL)
+
+   Switch to the **/etc/swift** directory.
+
+ Create the basic **object.builder** file:
+
+ ```shell
+ swift-ring-builder object.builder create 10 1 1
+ ```
+
+ Add each storage node to the ring:
+
+ ```shell
+ swift-ring-builder object.builder \
+ add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
+ --device DEVICE_NAME --weight 100
+ ```
+
+ **Replace *STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS* with the management network IP address of the storage node. Replace *DEVICE_NAME* with the name of the storage device on the same storage node.**
+
+ ***Note***
+   **Repeat this command for every storage device on every storage node.**
+
+ Verify the ring contents:
+
+ ```shell
+ swift-ring-builder object.builder
+ ```
+
+ Rebalance the ring:
+
+ ```shell
+ swift-ring-builder object.builder rebalance
+ ```
+
+ Distribute ring configuration files:
+
+ Copy **account.ring.gz**, **container.ring.gz**, and **object.ring.gz** to the **/etc/swift** directory on each storage node and any additional nodes running the proxy service.
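+
+   For example, assuming a reachable storage node address (placeholder) and SSH access as root:
+
+   ```shell
+   scp account.ring.gz container.ring.gz object.ring.gz root@STORAGE_NODE_IP:/etc/swift/
+   ```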
+
+
+
+9. Complete the installation.
+
+ Edit the **/etc/swift/swift.conf** file:
+
+ ``` shell
+ [swift-hash]
+ swift_hash_path_suffix = test-hash
+ swift_hash_path_prefix = test-hash
+
+ [storage-policy:0]
+ name = Policy-0
+ default = yes
+ ```
+
+ **Replace test-hash with a unique value.**
+
+ Copy the **swift.conf** file to the **/etc/swift** directory on each storage node and any additional nodes running the proxy service.
+
+ Ensure correct ownership of the configuration directory on all nodes:
+
+ ```shell
+ chown -R root:swift /etc/swift
+ ```
+
+ On the controller node and any additional nodes running the proxy service, start the object storage proxy service and its dependencies, and configure them to start upon system startup.
+
+ ```shell
+ systemctl enable openstack-swift-proxy.service memcached.service
+ systemctl start openstack-swift-proxy.service memcached.service
+ ```
+
+ On the storage node, start the object storage services and configure them to start upon system startup.
+
+ ```shell
+ systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+ systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
+
+ systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+ systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
+
+ systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+
+ systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
+ ```
+### Installing Cyborg
+
+Cyborg provides acceleration device support for OpenStack, for example, GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODPs, DPDKs, and SPDKs.
+
+1. Initialize the databases.
+
+```
+CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+```
+
+2. Create Keystone resource objects.
+
+```
+$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+ accelerator public http://:6666/v1
+$ openstack endpoint create --region RegionOne \
+ accelerator internal http://:6666/v1
+$ openstack endpoint create --region RegionOne \
+ accelerator admin http://:6666/v1
+```
+
+3. Install Cyborg
+
+```
+yum install openstack-cyborg
+```
+
+4. Configure Cyborg
+
+Modify **/etc/cyborg/cyborg.conf**.
+
+```
+[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+```
+
+Set the user names, passwords, and IP addresses as required.
+
+5. Synchronize the database table.
+
+```
+cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+```
+
+6. Start the Cyborg services.
+
+```
+systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+```
+
+### Installing Aodh
+
+1. Create the database.
+
+```
+CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+```
+
+2. Create Keystone resource objects.
+
+```
+openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+```
+
+3. Install Aodh.
+
+```
+yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+```
+
+4. Modify the **/etc/aodh/aodh.conf** configuration file.
+
+```
+[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+```
+
+5. Initialize the database.
+
+```
+aodh-dbsync
+```
+
+6. Start the Aodh services.
+
+```
+systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+```
+
+### Installing Gnocchi
+
+1. Create the database.
+
+```
+CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+```
+
+2. Create Keystone resource objects.
+
+```
+openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+```
+
+3. Install Gnocchi.
+
+```
+yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+```
+
+4. Modify the **/etc/gnocchi/gnocchi.conf** configuration file.
+
+```
+[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+```
+
+5. Initialize the database.
+
+```
+gnocchi-upgrade
+```
+
+6. Start the Gnocchi services.
+
+```
+systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+```
+
+### Installing Ceilometer
+
+1. Create Keystone resource objects.
+
+```
+openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+```
+
+2. Install Ceilometer.
+
+```
+yum install openstack-ceilometer-notification openstack-ceilometer-central
+```
+
+3. Modify the **/etc/ceilometer/pipeline.yaml** configuration file.
+
+```
+publishers:
+ # set address of Gnocchi
+ # + filter out Gnocchi-related activity meters (Swift driver)
+ # + set default archive policy
+ - gnocchi://?filter_project=service&archive_policy=low
+```
+
+4. Modify the **/etc/ceilometer/ceilometer.conf** configuration file.
+
+```
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+```
+
+5. Initialize the database.
+
+```
+ceilometer-upgrade
+```
+
+6. Start the Ceilometer services.
+
+```
+systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+```
+
+### Installing Heat
+
+1. Create the **heat** database and grant proper privileges to it. Replace **HEAT_DBPASS** with a proper password.
+
+```
+CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+```
+
+2. Create a service credential. Create the **heat** user and add the **admin** role to it.
+
+```
+openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+```
+
+3. Create the **heat** and **heat-cfn** services and their API endpoints.
+
+```
+openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration" cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+```
+
+4. Create additional OpenStack management information, including the **heat** domain and its administrator **heat_domain_admin**, the **heat_stack_owner** role, and the **heat_stack_user** role.
+
+```
+openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+```
+
+5. Install the software packages.
+
+```
+yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+```
+
+6. Modify the configuration file **/etc/heat/heat.conf**.
+
+```
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+```
+
+7. Initialize the **heat** database table.
+
+```
+su -s /bin/sh -c "heat-manage db_sync" heat
+```
+
+8. Start the services.
+
+```
+systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+```
+
+## OpenStack Quick Installation
+
+The OpenStack SIG provides the Ansible script for one-click deployment of OpenStack in All in One or Distributed modes. Users can use the script to quickly deploy an OpenStack environment based on openEuler RPM packages. The following uses the All in One mode installation as an example.
+
+1. Install the OpenStack SIG Tool.
+
+ ```shell
+ pip install openstack-sig-tool
+ ```
+
+2. Configure the OpenStack Yum source.
+
+ ```shell
+ yum install openstack-release-train
+ ```
+
+ **Note**: Enable the EPOL repository for the Yum source if it is not enabled already.
+
+ ```shell
+ vi /etc/yum.repos.d/openEuler.repo
+
+ [EPOL]
+ name=EPOL
+ baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/
+ enabled=1
+ gpgcheck=1
+ gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
+   ```
+
+3. Update the Ansible configurations.
+
+   Open the **/usr/local/etc/inventory/all_in_one.yaml** file and modify the configuration based on your environment and requirements, as follows:
+
+ ```shell
+ all:
+ hosts:
+ controller:
+ ansible_host:
+ ansible_ssh_private_key_file:
+ ansible_ssh_user: root
+ vars:
+ mysql_root_password: root
+ mysql_project_password: root
+ rabbitmq_password: root
+ project_identity_password: root
+ enabled_service:
+ - keystone
+ - neutron
+ - cinder
+ - placement
+ - nova
+ - glance
+ - horizon
+ - aodh
+ - ceilometer
+ - cyborg
+ - gnocchi
+ - kolla
+ - heat
+ - swift
+ - trove
+ - tempest
+ neutron_provider_interface_name: br-ex
+ default_ext_subnet_range: 10.100.100.0/24
+ default_ext_subnet_gateway: 10.100.100.1
+ neutron_dataplane_interface_name: eth1
+ cinder_block_device: vdb
+ swift_storage_devices:
+ - vdc
+ swift_hash_path_suffix: ash
+ swift_hash_path_prefix: has
+ children:
+ compute:
+ hosts: controller
+ storage:
+ hosts: controller
+ network:
+ hosts: controller
+ vars:
+ test-key: test-value
+ dashboard:
+ hosts: controller
+ vars:
+ allowed_host: '*'
+ kolla:
+ hosts: controller
+ vars:
+ # We add openEuler OS support for kolla in OpenStack Queens/Rocky release
+ # Set this var to true if you want to use it in Q/R
+ openeuler_plugin: false
+ ```
+
+ Key Configurations
+
+ | Item | Description|
+ |---|---|
+ | ansible_host | IP address of the all-in-one node.|
+ | ansible_ssh_private_key_file | Key used by the Ansible script for logging in to the all-in-one node.|
+ | ansible_ssh_user | User used by the Ansible script for logging in to the all-in-one node.|
+ | enabled_service | List of services to be installed. You can delete services as required.|
+ | neutron_provider_interface_name | Neutron L3 bridge name. |
+ | default_ext_subnet_range | Neutron private network IP address range. |
+ | default_ext_subnet_gateway | Neutron private network gateway. |
+   | neutron_dataplane_interface_name | NIC used by Neutron. You are advised to use a new NIC to avoid conflicts with existing NICs, which could disconnect the all-in-one node. |
+ | cinder_block_device | Name of the block device used by Cinder.|
+ | swift_storage_devices | Name of the block device used by Swift. |
+
+4. Run the installation command.
+
+ ```shell
+ oos env setup all_in_one
+ ```
+
+ After the command is executed, the OpenStack environment of the All in One mode is successfully deployed.
+
+ The environment variable file **.admin-openrc** is stored in the home directory of the current user.
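+
+   For example, to load the credentials and check that the deployed services are registered (a minimal smoke check):
+
+   ```shell
+   source ~/.admin-openrc
+   openstack service list
+   ```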
+
+5. Initialize the Tempest environment.
+
+ If you want to perform the Tempest test in the environment, run the `oos env init all_in_one` command to create the OpenStack resources required by Tempest.
+
+ After the command is executed successfully, a **mytest** directory is generated in the home directory of the user. You can run the `tempest run` command in the directory.
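+
+   For example (the `--smoke` filter is optional and only illustrative):
+
+   ```shell
+   cd ~/mytest
+   tempest run --smoke
+   ```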
diff --git a/docs/en/docs/thirdparty_migration/OpenStack-victoria.md b/docs/en/docs/thirdparty_migration/OpenStack-victoria.md
deleted file mode 100644
index 28e921a4cb469e89b7948d05150436ae8ba37b41..0000000000000000000000000000000000000000
--- a/docs/en/docs/thirdparty_migration/OpenStack-victoria.md
+++ /dev/null
@@ -1,1832 +0,0 @@
-# OpenStack Victoria Deployment Guide
-
-## OpenStack
-
-OpenStack is an open source cloud computing infrastructure software project developed by the community. It provides an operating platform or tool set for deploying the cloud, offering scalable and flexible cloud computing for organizations.
-
-As an open source cloud computing management platform, OpenStack consists of several major components, such as Nova, Cinder, Neutron, Glance, Keystone, and Horizon. OpenStack supports almost all cloud environments. The project aims to provide a cloud computing management platform that is easy-to-use, scalable, unified, and standardized. OpenStack provides an infrastructure as a service (IaaS) solution that combines complementary services, each of which provides an API for integration.
-
-The official Yum source of openEuler 21.09 supports the Openstack Victoria version. You can configure the official Yum source and then deploy OpenStack by following the instructions of this document.
-
-## Preparing the Environment
-### Environment Configuration
-
-Add controller in the `/etc/hosts` file, for example, for node IP `10.0.0.11`, add the following information:
-
-```shell
-10.0.0.11 controller
-```
-
-### Installing the SQL Database
-
-1. Run the following command to install the software package:
-
- ```plain
- # yum install mariadb mariadb-server python-PyMySQL
- ```
-
-2. Run the following command to create and edit the `/etc/my.cnf.d/openstack.cnf` file:
-
- ```
- vim /etc/my.cnf.d/openstack.cnf
- ```
-
- Copy the following content to the file (set **bind-address** to the management IP address of the controller node):
-
- ```
- [mysqld]
- bind-address = 10.0.0.11
- default-storage-engine = innodb
- innodb_file_per_table = on
- max_connections = 4096
- collation-server = utf8_general_ci
- character-set-server = utf8
- ```
-
-3. Run the following command to start the database service and enable it to automatically start upon system boot:
-
- ```
- # systemctl enable mariadb.service
- # systemctl start mariadb.service
- ```
-
-### Installing RabbitMQ
-
-1. Run the following command to install the software package:
-
- ```
- #yum install rabbitmq-server
- ```
-
-2. Start the RabbitMQ service and enable it to automatically start upon system boot.
-
- ```
- #systemctl enable rabbitmq-server.service
- #systemctl start rabbitmq-server.service
- ```
-
-3. Add an OpenStack user.
-
- ```
- #rabbitmqctl add_user openstack RABBIT_PASS
- ```
-
-4. Replace **RABBIT\_PASS** with the password of the OpenStack user.
-
-5. Run the following command to set the permission of the **openstack** user so that the user can perform configuration, write, and read operations:
-
- ```
- #rabbitmqctl set_permissions openstack ".*" ".*" ".*"
- ```
-
-### Installing Memcached
-
-1. Run the following command to install the target software package:
-
- ```
- #yum install memcached python3-memcached
- ```
-
-2. Run the following command to edit the `/etc/sysconfig/memcached` file:
-
- ```
- #vim /etc/sysconfig/memcached
- OPTIONS="-l 127.0.0.1,::1,controller"
- ```
-
- Change the value of **OPTIONS** to the actual management IP address of the controller node.
-
-3. Run the following command to start the Memcached service and enable it to automatically start upon system boot:
-
- ```
- # systemctl enable memcached.service
- # systemctl start memcached.service
- ```
-
-## Installing OpenStack
-
-### Installing Keystone
-
-1. Log in to the database as the **root** user. Create the **keystone** database, and grant permissions to the user.
-
- ```
- # mysql -u root -p
- MariaDB [(none)]> CREATE DATABASE keystone;
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
- IDENTIFIED BY 'KEYSTONE_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
- IDENTIFIED BY 'KEYSTONE_DBPASS';
- MariaDB [(none)]> exit
- ```
-
- Replace **KEYSTONE\_DBPASS** with the password of the **keystone** database.
-
-2. Run the following command to install the software package:
-
- ```
- #yum install openstack-keystone httpd mod_wsgi
- ```
-
-3. Edit the `/etc/keystone/keystone.conf` file to configure the **keystone** database. In the **\[database]** section, configure the database entry. In the **\[token]** section, configure the token provider.
-
- ```
- # vim /etc/keystone/keystone.conf
- [database]
- connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
- [token]
- provider = fernet
- ```
-
- Replace **KEYSTONE\_DBPASS** with the password of the **keystone** database.
-
-4. Run the following command to synchronize the database.
-
- ```
- su -s /bin/sh -c "keystone-manage db_sync" keystone
- ```
-
-5. Run the following command to initialize the Fernet keystore:
-
- ```
- # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
- # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
- ```
-
-6. Run the following commands to enable the identity service:
-
- ```
- # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
- --bootstrap-admin-url http://controller:5000/v3/ \
- --bootstrap-internal-url http://controller:5000/v3/ \
- --bootstrap-public-url http://controller:5000/v3/ \
- --bootstrap-region-id RegionOne
- ```
-
- Replace **ADMIN\_PASS** with the password of the **admin** user.
-
-7. Edit the `/etc/httpd/conf/httpd.conf` file and configure the Apache HTTP server.
-
- ```
- #vim /etc/httpd/conf/httpd.conf
- ```
-
- Enable **ServerName** to reference the controller node:
-
- ```
- ServerName controller
- ```
-
- If **ServerName** does not exist, create it.
-
-8. Run the following command to create a link for the `/usr/share/keystone/wsgi-keystone.conf` file:
-
- ```
- #ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
-
- #vim /etc/httpd/conf.d/wsgi-keystone.conf
- ```
-
-9. After the installation is complete, run the following command to start the Apache HTTP service:
-
- ```
- # systemctl enable httpd.service
- # systemctl start httpd.service
- ```
-
-10. Run the following command to set environment variables:
-
- ```
- $ export OS_USERNAME=admin
- $ export OS_PASSWORD=ADMIN_PASS
- $ export OS_PROJECT_NAME=admin
- $ export OS_USER_DOMAIN_NAME=Default
- $ export OS_PROJECT_DOMAIN_NAME=Default
- $ export OS_AUTH_URL=http://controller:5000/v3
- $ export OS_IDENTITY_API_VERSION=3
- ```
-
- Replace **ADMIN\_PASS** with the password set in the **keystone-manage bootstrap** command.
-
-11. Run the following commands to create the domain, project, user, and role:
-
- Create a domain named **example**.
-
- ```
- $ openstack domain create --description "An Example Domain" example
- ```
-
- Note: The domain **default** has been created in **keystone-manage bootstrap**.
-
- Create a project named **service**.
-
- ```
- $ openstack project create --domain default --description "Service Project" service
- ```
-
- Create a non-admin project named **myproject**, a user named **myuser**, and a role named **myrole**. Add the **myrole** role to **myproject** and **myuser**.
-
- ```
- $ openstack project create --domain default --description "Demo Project" myproject
- $ openstack user create --domain default --password-prompt myuser
- $ openstack role create myrole
- $ openstack role add --project myproject --user myuser myrole
- ```
-
-12. Perform the verification.
-
- Cancel the temporary environment variables **OS\_AUTH\_URL** and **OS\_PASSWORD**.
-
- ```
- $ unset OS_AUTH_URL OS_PASSWORD
- ```
-
- Request a token for the **admin** user:
-
- ```
- $ openstack --os-auth-url http://controller:5000/v3 \
- --os-project-domain-name Default --os-user-domain-name Default \
- --os-project-name admin --os-username admin token issue
- ```
-
- Request a token for the **myuser** user:
-
- ```
- $ openstack --os-auth-url http://controller:5000/v3 \
- --os-project-domain-name Default --os-user-domain-name Default \
- --os-project-name myproject --os-username myuser token issue
- ```
-
-13. Create the environment script for the OpenStack client.
-
- Create environment variable scripts for the **admin** and **demo** users.
-
- ```
- # vim admin-openrc
- export OS_PROJECT_DOMAIN_NAME=Default
- export OS_USER_DOMAIN_NAME=Default
- export OS_PROJECT_NAME=admin
- export OS_USERNAME=admin
- export OS_PASSWORD=ADMIN_PASS
- export OS_AUTH_URL=http://controller:5000/v3
- export OS_IDENTITY_API_VERSION=3
- export OS_IMAGE_API_VERSION=2
- #
- ```
-
- ```
- # vim demo-openrc
- export OS_PROJECT_DOMAIN_NAME=Default
- export OS_USER_DOMAIN_NAME=Default
- export OS_PROJECT_NAME=myproject
- export OS_USERNAME=myuser
- export OS_PASSWORD=DEMO_PASS
- export OS_AUTH_URL=http://controller:5000/v3
- export OS_IDENTITY_API_VERSION=3
- export OS_IMAGE_API_VERSION=2
- ```
-
- Replace **ADMIN\_PASS** with the password of the **admin** user.
-
- Replace **DEMO\_PASS** with the password of the **myuser** user.
-
- Run the following script to load environment variables:
-
- ```
- $ source admin-openrc
- ```
-
-### Installing Glance
-
-1. Create a database, service credentials, and API endpoints.
-
- Create a database.
-
- Log in to the database as the **root** user. Create the **glance** database, and grant permissions to the database.
-
- ```
- $ mysql -u root -p
- MariaDB [(none)]> CREATE DATABASE glance;
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
- IDENTIFIED BY 'GLANCE_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
- IDENTIFIED BY 'GLANCE_DBPASS';
- MariaDB [(none)]> exit
- ```
-
- Replace **GLANCE\_DBPASS** with the password of the **glance** database.
-
- ```
- $ source admin-openrc
- ```
-
- Run the following commands to create the **glance** service credential, create the **glance** user, and add the **admin** role to the **glance** user:
-
- ```
- $ openstack user create --domain default --password-prompt glance
- $ openstack role add --project service --user glance admin
- $ openstack service create --name glance --description "OpenStack Image" image
- ```
-
- Create API endpoints for the image service.
-
- ```
- $ openstack endpoint create --region RegionOne image public http://controller:9292
- $ openstack endpoint create --region RegionOne image internal http://controller:9292
- $ openstack endpoint create --region RegionOne image admin http://controller:9292
- ```
-
-2. Perform the installation and configuration.
-
- Install the software package:
-
- ```
- #yum install openstack-glance openstack-glance-api
- ```
-
- Configure Glance:
-
- Edit the **/etc/glance/glance-api.conf** file:
-
- In the **\[database]** section, configure the database entry.
-
- In the **\[keystone\_authtoken]** and **\[paste\_deploy]** sections, configure the identity authentication service entry.
-
- In the **\[glance\_store]** section, configure the local file system storage and the location where image files are stored.
-
- ```
- # vim /etc/glance/glance-api.conf
- [database]
- # ...
- connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
- [keystone_authtoken]
- # ...
- www_authenticate_uri = http://controller:5000
- auth_url = http://controller:5000
- memcached_servers = controller:11211
- auth_type = password
- project_domain_name = Default
- user_domain_name = Default
- project_name = service
- username = glance
- password = GLANCE_PASS
- [paste_deploy]
- # ...
- flavor = keystone
- [glance_store]
- # ...
- stores = file,http
- default_store = file
- filesystem_store_datadir = /var/lib/glance/images/
- ```
-
- In the preceding command, replace **GLANCE\_DBPASS** with the password of the **glance** database, and replace **GLANCE\_PASS** with the password of the **glance** user.
-
- Synchronize the database:
-
- ```
- su -s /bin/sh -c "glance-manage db_sync" glance
- ```
-
- Run the following command to start the image service:
-
- ```
- # systemctl enable openstack-glance-api.service
- # systemctl start openstack-glance-api.service
- ```
-
-3. Perform the verification.
-
- Download the image.
-
- ```
- $ source admin-openrc
- # Note: If the Kunpeng architecture is used in your environment, download the ARM64 image.
- $ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
- ```
-
- Upload the image to the image service.
-
- ```
- $ glance image-create --name "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public
- ```
-
- Confirm the image upload and verify the attributes.
-
- ```
- $ glance image-list
- ```
-
-### Installing Placement
-
-1. Create a database, service credentials, and API endpoints.
-
- Create a database.
-
- Access the database as the **root** user. Create the **placement** database, and grant permissions.
-
- ```
- $ mysql -u root -p
- MariaDB [(none)]> CREATE DATABASE placement;
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
- IDENTIFIED BY 'PLACEMENT_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
- IDENTIFIED BY 'PLACEMENT_DBPASS';
- MariaDB [(none)]> exit
- ```
-
- Replace **PLACEMENT\_DBPASS** with the password of the **placement** database.
-
- ```
- $ source admin-openrc
- ```
-
- Run the following commands to create the placement service credentials, create the **placement** user, and add the **admin** role to the **placement** user:
-
- Create the Placement API Service.
-
- ```
- $ openstack user create --domain default --password-prompt placement
- $ openstack role add --project service --user placement admin
- $ openstack service create --name placement --description "Placement API" placement
- ```
-
- Create API endpoints of the Placement service.
-
- ```
- $ openstack endpoint create --region RegionOne placement public http://controller:8778
- $ openstack endpoint create --region RegionOne placement internal http://controller:8778
- $ openstack endpoint create --region RegionOne placement admin http://controller:8778
- ```
-
-2. Perform the installation and configuration.
-
- Install the software package:
-
- ```
- yum install openstack-placement-api
- ```
-
- Configure Placement:
-
- Edit the **/etc/placement/placement.conf** file:
-
- In the **\[placement\_database]** section, configure the database entry.
-
- In **\[api]** and **\[keystone\_authtoken]** sections, configure the identity authentication service entry.
-
- ```
- # vim /etc/placement/placement.conf
- [placement_database]
- # ...
- connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
- [api]
- # ...
- auth_strategy = keystone
- [keystone_authtoken]
- # ...
- auth_url = http://controller:5000/v3
- memcached_servers = controller:11211
- auth_type = password
- project_domain_name = Default
- user_domain_name = Default
- project_name = service
- username = placement
- password = PLACEMENT_PASS
- ```
-
- Replace **PLACEMENT\_DBPASS** with the password of the **placement** database, and replace **PLACEMENT\_PASS** with the password of the **placement** user.
-
- Synchronize the database:
-
- ```
- #su -s /bin/sh -c "placement-manage db sync" placement
- ```
-
- Start the httpd service.
-
- ```
- #systemctl restart httpd
- ```
-
-3. Perform the verification.
-
- Run the following command to check the status:
-
- ```
- $ . admin-openrc
- $ placement-status upgrade check
- ```
-
- Run the following command to install **osc-placement** and list the available resource types and features:
-
- ```
- $ yum install python3-osc-placement
- $ openstack --os-placement-api-version 1.2 resource class list --sort-column name
- $ openstack --os-placement-api-version 1.6 trait list --sort-column name
- ```
-
-### Installing Nova
-
-1. Create a database, service credentials, and API endpoints.
-
- Create a database.
-
- Access the database as the **root** user. Create the **nova**, **nova\_api**, and **nova\_cell0** databases and grant permissions.
-
- ```
- $ mysql -u root -p
- MariaDB [(none)]> CREATE DATABASE nova_api;
- MariaDB [(none)]> CREATE DATABASE nova;
- MariaDB [(none)]> CREATE DATABASE nova_cell0;
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
- IDENTIFIED BY 'NOVA_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
- IDENTIFIED BY 'NOVA_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
- IDENTIFIED BY 'NOVA_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
- IDENTIFIED BY 'NOVA_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
- IDENTIFIED BY 'NOVA_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
- IDENTIFIED BY 'NOVA_DBPASS';
- MariaDB [(none)]> exit
- ```
-
- Replace **NOVA\_DBPASS** with the password of the **nova** database.
-
- Run the following commands to create Nova service credentials, create a **nova** user, and add the **admin** role to the **nova** user:
-
- ```
- $ . admin-openrc
- $ openstack user create --domain default --password-prompt nova
- $ openstack role add --project service --user nova admin
- $ openstack service create --name nova --description "OpenStack Compute" compute
- ```
-
- Create API endpoints for the computing service.
-
- ```
- $ openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
- $ openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
- $ openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
- ```
-
-2. Perform the installation and configuration.
-
- Install the software package:
-
- ```
- # yum install openstack-nova-api openstack-nova-conductor \
- openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-compute
- ```
-
- Configure Nova:
-
- Edit the **/etc/nova/nova.conf** file.
-
- In the **\[default]** section, enable the computing and metadata APIs, configure the RabbitMQ message queue entry, and set **my\_ip**.
-
- In the **\[api\_database]** and **\[database]** sections, configure the database entry.
-
- In the **\[api]** and **\[keystone\_authtoken]** sections, configure the identity service entry.
-
- In the **\[vnc]** section, enable and configure the entry for the remote console.
-
- In the **\[glance]** section, configure the API address for the image service.
-
- In the **\[oslo\_concurrency]** section, configure the lock path.
-
- In the **\[placement]** section, configure the entry of the Placement service.
-
- ```
- # vim /etc/nova/nova.conf
- [DEFAULT]
- # ...
- enabled_apis = osapi_compute,metadata
- transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
- my_ip = 10.0.0.11
- [api_database]
- # ...
- connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
- [database]
- # ...
- connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
- [api]
- # ...
- auth_strategy = keystone
- [keystone_authtoken]
- # ...
- www_authenticate_uri = http://controller:5000/
- auth_url = http://controller:5000/
- memcached_servers = controller:11211
- auth_type = password
- project_domain_name = Default
- user_domain_name = Default
- project_name = service
- username = nova
- password = NOVA_PASS
- [vnc]
- enabled = true
- # ...
- server_listen = $my_ip
- server_proxyclient_address = $my_ip
- novncproxy_base_url = http://controller:6080/vnc_auto.html
- [glance]
- # ...
- api_servers = http://controller:9292
- [oslo_concurrency]
- # ...
- lock_path = /var/lib/nova/tmp
- [placement]
- # ...
- region_name = RegionOne
- project_domain_name = Default
- project_name = service
- auth_type = password
- user_domain_name = Default
- auth_url = http://controller:5000/v3
- username = placement
- password = PLACEMENT_PASS
- [neutron]
- # ...
- auth_url = http://controller:5000
- auth_type = password
- project_domain_name = default
- user_domain_name = default
- region_name = RegionOne
- project_name = service
- username = neutron
- password = NEUTRON_PASS
- ```
-
- Replace **RABBIT\_PASS** with the password of the **openstack** user in RabbitMQ.
-
- Set **my\_ip** to the management IP address of the controller node.
-
- Replace **NOVA\_DBPASS** with the password of the **nova** database.
-
- Replace **NOVA\_PASS** with the password of the **nova** user.
-
- Replace **PLACEMENT\_PASS** with the password of the **placement** user.
-
- Replace **NEUTRON\_PASS** with the password of the **neutron** user.
-
- Run the following command to synchronize the **nova-api** database:
-
- ```
- su -s /bin/sh -c "nova-manage api_db sync" nova
- ```
-
- Run the following command to register the **cell0** database:
-
- ```
- su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
- ```
-
- Create the **cell1** cell:
-
- ```
- su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
- ```
-
- Synchronize the **nova** database:
-
- ```
- su -s /bin/sh -c "nova-manage db sync" nova
- ```
-
- Verify whether **cell0** and **cell1** are correctly registered:
-
- ```
- su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
- ```
-
- Check whether VM hardware acceleration (x86 architecture) is supported:
-
- ```
- $ egrep -c '(vmx|svm)' /proc/cpuinfo
- ```
-
- If the returned value is **0**, hardware acceleration is not supported. You need to configure libvirt to use QEMU instead of KVM.
-
- ```
- # vim /etc/nova/nova.conf
- [libvirt]
- # ...
- virt_type = qemu
- ```
-
- If the returned value is **1** or a larger value, hardware acceleration is supported, and no extra configuration is required.
-
- Start the computing service and its dependencies, and enable the service to start automatically upon system boot.
-
- ```
- # systemctl enable \
- openstack-nova-api.service \
- openstack-nova-scheduler.service \
- openstack-nova-conductor.service \
- openstack-nova-novncproxy.service
- # systemctl start \
- openstack-nova-api.service \
- openstack-nova-scheduler.service \
- openstack-nova-conductor.service \
- openstack-nova-novncproxy.service
- ```
-
- ```
- # systemctl enable libvirtd.service openstack-nova-compute.service
- # systemctl start libvirtd.service openstack-nova-compute.service
- ```
-
- Add the compute nodes to the **cell** database:
-
- Check whether the compute node exists:
-
- ```
- $ . admin-openrc
- $ openstack compute service list --service nova-compute
- ```
-
- Register a compute node:
-
- ```
- #su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
- ```
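-
- If you add compute nodes later, run the discovery command again. Alternatively, you can set a discovery interval in **/etc/nova/nova.conf** so that new hosts are registered automatically (a hedged sketch; the 300-second interval is an example value, not a requirement):
-
- ```
- [scheduler]
- discover_hosts_in_cells_interval = 300
- ```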
-
-3. Perform the verification.
-
- ```
- $ . admin-openrc
- ```
-
- List service components to verify that each process is successfully started and registered.
-
- ```
- $ openstack compute service list
- ```
-
- List the API endpoints in the identity service and verify the connection to the identity service.
-
- ```
- $ openstack catalog list
- ```
-
- List the images in the image service and verify the connections:
-
- ```
- $ openstack image list
- ```
-
- Check whether the cells and placement APIs are running properly and whether other prerequisites are met.
-
- ```
- #nova-status upgrade check
- ```
-
-### Installing Neutron
-
-1. Create a database, service credentials, and API endpoints.
-
- Create a database.
-
- Access the database as the **root** user, create the **neutron** database, and grant permissions.
-
- ```
- $ mysql -u root -p
- MariaDB [(none)]> CREATE DATABASE neutron;
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
- IDENTIFIED BY 'NEUTRON_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
- IDENTIFIED BY 'NEUTRON_DBPASS';
- MariaDB [(none)]> exit
- ```
-
- Replace **NEUTRON\_DBPASS** with the password of the **neutron** database.
-
- ```
- $ . admin-openrc
- ```
-
- Run the following commands to create the **neutron** service credential, create the **neutron** user, and add the **admin** role to the **neutron** user:
-
- Create the **neutron** service credential.
-
- ```
- $ openstack user create --domain default --password-prompt neutron
- $ openstack role add --project service --user neutron admin
- $ openstack service create --name neutron --description "OpenStack Networking" network
- ```
-
- Create API endpoints of the network services.
-
- ```
- $ openstack endpoint create --region RegionOne network public http://controller:9696
- $ openstack endpoint create --region RegionOne network internal http://controller:9696
- $ openstack endpoint create --region RegionOne network admin http://controller:9696
- ```
-
-2. Install and configure the self-service network.
-
- Install the software package:
-
- ```
- # yum install openstack-neutron openstack-neutron-ml2 \
- openstack-neutron-linuxbridge ebtables ipset
- ```
-
- Configure Neutron:
-
- Edit the **/etc/neutron/neutron.conf** file:
-
- In the **\[database]** section, configure the database entry.
-
- In the **\[DEFAULT]** section, enable the ML2 and router plug-ins, allow overlapping IP addresses, and configure the RabbitMQ message queue entry.
-
- In the **\[DEFAULT]** and **\[keystone\_authtoken]** sections, configure the identity authentication service entry.
-
- In the **\[DEFAULT]** and **\[nova]** sections, configure Networking to notify Compute of network topology changes.
-
- In the **\[oslo\_concurrency]** section, configure the lock path.
-
- ```
- # vim /etc/neutron/neutron.conf
- [database]
- # ...
- connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
- [DEFAULT]
- # ...
- core_plugin = ml2
- service_plugins = router
- allow_overlapping_ips = true
- transport_url = rabbit://openstack:RABBIT_PASS@controller
- auth_strategy = keystone
- notify_nova_on_port_status_changes = true
- notify_nova_on_port_data_changes = true
- [keystone_authtoken]
- # ...
- www_authenticate_uri = http://controller:5000
- auth_url = http://controller:5000
- memcached_servers = controller:11211
- auth_type = password
- project_domain_name = default
- user_domain_name = default
- project_name = service
- username = neutron
- password = NEUTRON_PASS
- [nova]
- # ...
- auth_url = http://controller:5000
- auth_type = password
- project_domain_name = default
- user_domain_name = default
- region_name = RegionOne
- project_name = service
- username = nova
- password = NOVA_PASS
- [oslo_concurrency]
- # ...
- lock_path = /var/lib/neutron/tmp
- ```
-
- Replace **NEUTRON\_DBPASS** with the password of the **neutron** database.
-
- Replace **RABBIT\_PASS** with the password of the **openstack** user in RabbitMQ.
-
- Replace **NEUTRON\_PASS** with the password of the **neutron** user.
-
- Replace **NOVA\_PASS** with the password of the **nova** user.
-
- Configure the ML2 plug-in.
-
- Edit the **/etc/neutron/plugins/ml2/ml2\_conf.ini** file.
-
- In the **\[ml2]** section, enable the flat, VLAN, and VXLAN network types, enable the Linux bridge and layer-2 population mechanism drivers, and enable the port security extension driver.
-
- In the **\[ml2\_type\_flat]** section, configure the flat network as the provider virtual network.
-
- In the **\[ml2\_type\_vxlan]** section, configure the VXLAN network identifier range.
-
- In the **\[securitygroup]** section, set **ipset**.
-
- ```
- # vim /etc/neutron/plugins/ml2/ml2_conf.ini
- [ml2]
- # ...
- type_drivers = flat,vlan,vxlan
- tenant_network_types = vxlan
- mechanism_drivers = linuxbridge,l2population
- extension_drivers = port_security
- [ml2_type_flat]
- # ...
- flat_networks = provider
- [ml2_type_vxlan]
- # ...
- vni_ranges = 1:1000
- [securitygroup]
- # ...
- enable_ipset = true
- ```
-
- Configure the Linux bridge agent:
-
- Edit the **/etc/neutron/plugins/ml2/linuxbridge\_agent.ini** file:
-
- In the **\[linux\_bridge]** section, map the provider virtual network to the physical network interface.
-
- In the **\[vxlan]** section, enable the VXLAN network, configure the IP address of the physical network interface that handles the overlay network, and enable layer-2 population.
-
- In the **\[securitygroup]** section, enable the security group and configure the **linux bridge iptables** firewall driver.
-
- ```
- # vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
- [linux_bridge]
- physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
- [vxlan]
- enable_vxlan = true
- local_ip = OVERLAY_INTERFACE_IP_ADDRESS
- l2_population = true
- [securitygroup]
- # ...
- enable_security_group = true
- firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- ```
-
- Replace **PROVIDER\_INTERFACE\_NAME** with the name of the underlying physical network interface.
-
- Replace **OVERLAY\_INTERFACE\_IP\_ADDRESS** with the management IP address of the controller node.
-
- Configure the Layer 3 (L3) agent.
-
- Edit the **/etc/neutron/l3\_agent.ini** file:
-
- In the **\[DEFAULT]** section, set the interface driver to **linuxbridge**.
-
- ```
- # vim /etc/neutron/l3_agent.ini
- [DEFAULT]
- # ...
- interface_driver = linuxbridge
- ```
-
- Configure the DHCP agent:
-
- Edit the **/etc/neutron/dhcp\_agent.ini** file.
-
- In the **\[DEFAULT]** section, configure the linuxbridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata.
-
- ```
- # vim /etc/neutron/dhcp_agent.ini
- [DEFAULT]
- # ...
- interface_driver = linuxbridge
- dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
- enable_isolated_metadata = true
- ```
-
- Configure the metadata proxy.
-
- Edit the **/etc/neutron/metadata\_agent.ini** file.
-
- In the **\[DEFAULT]** section, configure the metadata host and the shared secret.
-
- ```
- # vim /etc/neutron/metadata_agent.ini
- [DEFAULT]
- # ...
- nova_metadata_host = controller
- metadata_proxy_shared_secret = METADATA_SECRET
- ```
-
- Replace **METADATA\_SECRET** with a proper metadata agent secret.
-
-3. Configure the computing service.
-
- Edit the **/etc/nova/nova.conf** file.
-
- In the **\[neutron]** section, configure access parameters, enable the metadata proxy, and configure secret.
-
- ```
- # vim /etc/nova/nova.conf
- [neutron]
- # ...
- auth_url = http://controller:5000
- auth_type = password
- project_domain_name = default
- user_domain_name = default
- region_name = RegionOne
- project_name = service
- username = neutron
- password = NEUTRON_PASS
- service_metadata_proxy = true
- metadata_proxy_shared_secret = METADATA_SECRET
- ```
-
- Replace **NEUTRON\_PASS** with the password of the **neutron** user.
-
- Replace **METADATA\_SECRET** with a proper metadata agent secret.
-
-4. Complete the installation.
-
- Add a link:
-
- ```
- #ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- ```
-
- Synchronize the database:
-
- ```
- # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
- --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
- ```
-
- Run the following command to restart the computing API service:
-
- ```
- #systemctl restart openstack-nova-api.service
- ```
-
- Start the network service and enable the service to start automatically upon system boot.
-
- ```
- # systemctl enable neutron-server.service \
- neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
- neutron-metadata-agent.service
- # systemctl start neutron-server.service \
- neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
- neutron-metadata-agent.service
- # systemctl enable neutron-l3-agent.service
- # systemctl start neutron-l3-agent.service
- ```
-
-5. Perform the verification.
-
- Run the following command to list the neutron agents:
-
- ```
- $ openstack network agent list
- ```
-
-### Installing Cinder
-
-1. Create a database, service credentials, and API endpoints.
-
- Create a database.
-
- Access the database as the **root** user. Create the **cinder** database, and grant permissions.
-
- ```
- $ mysql -u root -p
- MariaDB [(none)]> CREATE DATABASE cinder;
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
- IDENTIFIED BY 'CINDER_DBPASS';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
- IDENTIFIED BY 'CINDER_DBPASS';
- MariaDB [(none)]> exit
- ```
-
- Replace **CINDER\_DBPASS** with the password for the **cinder** database.
-
- ```
- $ source admin-openrc
- ```
-
- Create Cinder service credentials:
-
- Create the **cinder** user.
-
- Add the **admin** role to the **cinder** user.
-
- Create the **cinderv2** and **cinderv3** services.
-
- ```
- $ openstack user create --domain default --password-prompt cinder
- $ openstack role add --project service --user cinder admin
- $ openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
- $ openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
- ```
-
- Create API endpoints for the block storage service.
-
- ```
- $ openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%s
- $ openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%s
- $ openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%s
- $ openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%s
- $ openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%s
- $ openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%s
- ```
-
-2. Install and configure the controller node.
-
- Install the software package:
-
- ```
- #yum install openstack-cinder
- ```
-
- Configure Cinder:
-
- Edit the **/etc/cinder/cinder.conf** file.
-
- In the **\[database]** section, configure the database entry.
-
- In the **\[DEFAULT]** section, configure the RabbitMQ message queue entry and **my\_ip**.
-
- In the **\[DEFAULT]** and **\[keystone\_authtoken]** sections, configure the identity authentication service entry.
-
- In the **\[oslo\_concurrency]** section, configure the lock path.
-
- ```
- # vim /etc/cinder/cinder.conf
- [database]
- # ...
- connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
- [DEFAULT]
- # ...
- transport_url = rabbit://openstack:RABBIT_PASS@controller
- auth_strategy = keystone
- my_ip = 10.0.0.11
- [keystone_authtoken]
- # ...
- www_authenticate_uri = http://controller:5000
- auth_url = http://controller:5000
- memcached_servers = controller:11211
- auth_type = password
- project_domain_name = default
- user_domain_name = default
- project_name = service
- username = cinder
- password = CINDER_PASS
- [oslo_concurrency]
- # ...
- lock_path = /var/lib/cinder/tmp
- ```
-
- Replace **CINDER\_DBPASS** with the password of the **cinder** database.
-
- Replace **RABBIT\_PASS** with the password of the **openstack** user in RabbitMQ.
-
- Set **my\_ip** to the management IP address of the controller node.
-
- Replace **CINDER\_PASS** with the password of the **cinder** user.
-
- Synchronize the database:
-
- ```
- su -s /bin/sh -c "cinder-manage db sync" cinder
- ```
-
- Configure the block storage for the compute nodes.
-
- Edit the **/etc/nova/nova.conf** file.
-
- ```
- # vim /etc/nova/nova.conf
- [cinder]
- os_region_name = RegionOne
- ```
-
- Complete the installation.
-
- Restart the computing API service.
-
- ```
- systemctl restart openstack-nova-api.service
- ```
-
- Start the block storage service.
-
- ```
- # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
- # systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
- ```
-
-3. Install and configure the storage node.
-
- Install the software package:
-
- ```
- yum install lvm2 device-mapper-persistent-data targetcli python3-keystone
- ```
-
- Start the service:
-
- ```
- # systemctl enable lvm2-lvmetad.service
- # systemctl start lvm2-lvmetad.service
- ```
-
- Create the LVM physical volume **/dev/sdb**.
-
- ```
- pvcreate /dev/sdb
- ```
-
- Create the LVM volume group **cinder-volumes**.
-
- ```
- vgcreate cinder-volumes /dev/sdb
- ```
-
- Edit the **/etc/lvm/lvm.conf** file.
-
- In the **devices** section, add a filter that accepts the **/dev/sdb** device and rejects all other devices:
-
-     devices {
-     ...
-     filter = [ "a/sdb/", "r/.*/"]
-     }
-
- Edit the **/etc/cinder/cinder.conf** file.
-
- In the **\[lvm]** section, configure the LVM backend using the LVM driver, cinder-volumes volume group, iSCSI protocol, and appropriate iSCSI services.
-
- In the **\[DEFAULT]** section, enable the LVM backend and configure the location of the API of the image service.
-
- ```
- [lvm]
- volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
- volume_group = cinder-volumes
- target_protocol = iscsi
- target_helper = lioadm
- [DEFAULT]
- # ...
- enabled_backends = lvm
- glance_api_servers = http://controller:9292
- ```
-
- Complete the installation.
-
- ```
- # systemctl enable openstack-cinder-volume.service target.service
- # systemctl start openstack-cinder-volume.service target.service
- ```
-
-4. Install and configure the backup service.
-
- Edit the **/etc/cinder/cinder.conf** file.
-
- In the **\[DEFAULT]** section, configure the backup options.
-
- ```
- [DEFAULT]
- # ...
- # Note: openEuler 21.09 does not provide the OpenStack Swift software package. You need to install it manually. Alternatively, you can use another backup backend, for example, NFS. The NFS has been tested and verified and can be used properly.
- backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
- backup_swift_url = SWIFT_URL
- ```
-
- Replace **SWIFT\_URL** with the URL of the object storage service. The URL can be found through the object storage API endpoint.
-
- ```
- $ openstack catalog show object-store
- ```
-
- Complete the installation.
-
- ```
- # systemctl enable openstack-cinder-backup.service
- # systemctl start openstack-cinder-backup.service
- ```
-
-5. Perform the verification.
-
- List service components and verify that each step is successful.
-
- ```
- $ source admin-openrc
- $ openstack volume service list
- ```
-
- Note: Currently, the Swift component is not supported. If possible, you can configure the interconnection with Ceph.
-
-### Installing Horizon
-
-1. Install the software package:
-
- ```plain
- yum install openstack-horizon
- ```
-
-2. Modify the `/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py` file.
-
- Modify the variables.
-
- ```plain
- ALLOWED_HOSTS = ['*', ]
- OPENSTACK_HOST = "controller"
- OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
- ```
-
- Add variables.
-
- ```plain
- OPENSTACK_API_VERSIONS = {
- "identity": 3,
- "image": 2,
- "volume": 3,
- }
- WEBROOT = "/dashboard/"
- COMPRESS_OFFLINE = True
- OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
- OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
- LOGIN_URL = '/dashboard/auth/login/'
- LOGOUT_URL = '/dashboard/auth/logout/'
- ```
-
-3. Run the following command in the **/usr/share/openstack-dashboard** directory:
-
- ```plain
- ./manage.py compress
- ```
-
-4. Restart the httpd service.
-
- ```plain
- systemctl restart httpd
- ```
-
-5. Perform the verification.
-
- Open a browser and enter the address of the Horizon dashboard, for example **http://*HostIP*/dashboard**, in the address box to log in to Horizon.
-
-### Installing Tempest
-
-Tempest is the integrated test service of OpenStack. If you need to run a fully automatic test of the functions of the installed OpenStack environment, you are advised to use Tempest. Otherwise, you can choose not to install it.
-
-1. Install Tempest:
- ```
- yum install openstack-tempest
- ```
-2. Initialize the directory:
-
- ```
- tempest init mytest
- ```
-3. Modify the configuration file:
-
- ```
- cd mytest
- vi etc/tempest.conf
- ```
- Configure the current OpenStack environment information in **tempest.conf**, as outlined in the sketch below. For details, see the [official example](https://docs.openstack.org/tempest/latest/sampleconf.html).
-
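- The following minimal sketch shows the kind of options that are typically required; every uppercase value is a placeholder for your own environment rather than a default:
-
- ```
- [auth]
- admin_username = admin
- admin_password = ADMIN_PASS
- admin_project_name = admin
- admin_domain_name = Default
- [identity]
- uri_v3 = http://controller:5000/v3
- [compute]
- image_ref = IMAGE_UUID
- flavor_ref = FLAVOR_ID
- [network]
- public_network_id = PUBLIC_NETWORK_UUID
- ```
-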
-4. Perform the test:
-
- ```
- tempest run
- ```
-
-### Installing Ironic
-
-Ironic is the bare metal service of OpenStack. If you need to deploy bare metal machines, you are advised to use Ironic. Otherwise, you can choose not to install it.
-
-1. Set the database.
-
- The bare metal service stores information in the database. Create an **ironic** database that can be accessed by the **ironic** user, and replace **IRONIC_DBPASSWORD** with a proper password.
-
- ```
- # mysql -u root -p
-
- MariaDB [(none)]> CREATE DATABASE ironic CHARACTER SET utf8;
-
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' \
- IDENTIFIED BY 'IRONIC_DBPASSWORD';
-
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' \
- IDENTIFIED BY 'IRONIC_DBPASSWORD';
- ```
-
-2. Install and configure the components.
-
- ##### Creating Service User Authentication
-
- 1. Create the bare metal service user:
-
- ```
- $ openstack user create --password IRONIC_PASSWORD \
- --email ironic@example.com ironic
- $ openstack role add --project service --user ironic admin
- $ openstack service create --name ironic --description \
- "Ironic baremetal provisioning service" baremetal
-
- $ openstack service create --name ironic-inspector --description "Ironic inspector baremetal provisioning service" baremetal-introspection
- $ openstack user create --password IRONIC_INSPECTOR_PASSWORD --email ironic_inspector@example.com ironic_inspector
- $ openstack role add --project service --user ironic_inspector admin
- ```
-
- 2. Create the bare metal service access entries:
-
- ```
- $ openstack endpoint create --region RegionOne baremetal admin http://$IRONIC_NODE:6385
- $ openstack endpoint create --region RegionOne baremetal public http://$IRONIC_NODE:6385
- $ openstack endpoint create --region RegionOne baremetal internal http://$IRONIC_NODE:6385
- $ openstack endpoint create --region RegionOne baremetal-introspection internal http://172.20.19.13:5050/v1
- $ openstack endpoint create --region RegionOne baremetal-introspection public http://172.20.19.13:5050/v1
- $ openstack endpoint create --region RegionOne baremetal-introspection admin http://172.20.19.13:5050/v1
- ```
-
- ##### Configuring the ironic-api Service
-
- Configuration file path: **/etc/ironic/ironic.conf**.
-
- 1. Use **connection** to configure the location of the database as follows. Replace **IRONIC_DBPASSWORD** with the password of user **ironic** and replace **DB_IP** with the IP address of the database server.
-
- ```
- [database]
-
- # The SQLAlchemy connection string used to connect to the
- # database (string value)
-
- connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
- ```
-
- 2. Configure the ironic-api service to use the RabbitMQ message broker. Replace **RPC_\*** with the detailed address and the credential of RabbitMQ.
-
- ```
- [DEFAULT]
-
- # A URL representing the messaging driver to use and its full
- # configuration. (string value)
-
- transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
- ```
-
- You can also use json-rpc instead of RabbitMQ.
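-
- A hedged sketch of the json-rpc alternative (the listening address and port below are example values, and ironic-api and ironic-conductor must then both be configured the same way):
-
- ```
- [DEFAULT]
- rpc_transport = json-rpc
-
- [json_rpc]
- host_ip = 127.0.0.1
- port = 8089
- ```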
-
- 3. Configure the ironic-api service to use the credentials of the identity authentication service. Replace **PUBLIC_IDENTITY_IP** with the public IP address of the identity authentication server, **PRIVATE_IDENTITY_IP** with its private IP address, and **IRONIC_PASSWORD** with the password of the **ironic** user in the identity authentication service.
-
- ```
- [DEFAULT]
-
- # Authentication strategy used by ironic-api: one of
- # "keystone" or "noauth". "noauth" should not be used in a
- # production environment because all authentication will be
- # disabled. (string value)
-
- auth_strategy=keystone
-
- [keystone_authtoken]
- # Authentication type to load (string value)
- auth_type=password
- # Complete public Identity API endpoint (string value)
- www_authenticate_uri=http://PUBLIC_IDENTITY_IP:5000
- # Complete admin Identity API endpoint. (string value)
- auth_url=http://PRIVATE_IDENTITY_IP:5000
- # Service username. (string value)
- username=ironic
- # Service account password. (string value)
- password=IRONIC_PASSWORD
- # Service tenant name. (string value)
- project_name=service
- # Domain name containing project (string value)
- project_domain_name=Default
- # User's domain name (string value)
- user_domain_name=Default
- ```
-
- 4. Create the bare metal service database table:
-
- ```
- $ ironic-dbsync --config-file /etc/ironic/ironic.conf create_schema
- ```
-
- 5. Restart the ironic-api service:
-
- ```
- sudo systemctl restart openstack-ironic-api
- ```
-
- ##### Configuring the ironic-conductor Service
-
- 1. Replace **HOST_IP** with the IP address of the conductor host.
-
- ```
- [DEFAULT]
-
- # IP address of this host. If unset, will determine the IP
- # programmatically. If unable to do so, will use "127.0.0.1".
- # (string value)
-
- my_ip=HOST_IP
- ```
-
- 2. Specify the location of the database. ironic-conductor must use the same configuration as ironic-api. Replace **IRONIC_DBPASSWORD** with the password of user **ironic** and **DB_IP** with the IP address of the database server.
-
- ```
- [database]
-
- # The SQLAlchemy connection string to use to connect to the
- # database. (string value)
-
- connection = mysql+pymysql://ironic:IRONIC_DBPASSWORD@DB_IP/ironic
- ```
-
- 3. Configure the ironic-conductor service to use the RabbitMQ message broker, with the same configuration as ironic-api. Replace **RPC_\*** with the detailed address and the credentials of RabbitMQ.
-
- ```
- [DEFAULT]
-
- # A URL representing the messaging driver to use and its full
- # configuration. (string value)
-
- transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
- ```
-
- You can also use json-rpc instead of RabbitMQ.
-
- 4. Configure the credentials to access other OpenStack services.
-
- To communicate with other OpenStack services, the bare metal service needs to use the service users to get authenticated by the OpenStack Identity service when requesting other services. The credentials of these users must be configured in each configuration file associated to the corresponding service.
-
- ```
- [neutron] - Accessing the OpenStack network services.
- [glance] - Accessing the OpenStack image service.
- [swift] - Accessing the OpenStack object storage service.
- [cinder] - Accessing the OpenStack block storage service.
- [inspector] - Accessing the OpenStack bare metal introspection service.
- [service_catalog] - A special item to store the credential used by the bare metal service. The credential is used to discover the API URL endpoint registered in the OpenStack identity authentication service catalog by the bare metal service.
- ```
-
- For simplicity, you can use one service user for all services. For backward compatibility, the user name must be the same as that configured in **[keystone_authtoken]** of the ironic-api service. However, this is not mandatory. You can also create and configure a different service user for each service.
-
- In the following example, the user that accesses the OpenStack network service is authenticated with the following assumptions:
-
- - The network service is deployed in the identity service region named RegionOne, and only the public endpoint interface is registered in the service catalog.
-
- - A specific CA SSL certificate is used for HTTPS connections when sending requests.
-
- - The same service user as that configured for ironic-api is used.
-
- - The password authentication plugin discovers a proper identity authentication service API version based on the other options.
-
- ```
- [neutron]
-
- # Authentication type to load (string value)
- auth_type = password
- # Authentication URL (string value)
- auth_url=https://IDENTITY_IP:5000/
- # Username (string value)
- username=ironic
- # User's password (string value)
- password=IRONIC_PASSWORD
- # Project name to scope to (string value)
- project_name=service
- # Domain ID containing project (string value)
- project_domain_id=default
- # User's domain id (string value)
- user_domain_id=default
- # PEM encoded Certificate Authority to use when verifying
- # HTTPs connections. (string value)
- cafile=/opt/stack/data/ca-bundle.pem
- # The default region_name for endpoint URL discovery. (string
- # value)
- region_name = RegionOne
- # List of interfaces, in order of preference, for endpoint
- # URL. (list value)
- valid_interfaces=public
- ```
-
- By default, to communicate with other services, the bare metal service attempts to discover a proper endpoint of the service through the service catalog of the identity authentication service. If you want to use a different endpoint for a specific service, specify the endpoint_override option in the bare metal service configuration file.
-
- ```
- [neutron]
- ...
- endpoint_override =
- ```
-
- 5. Configure the allowed drivers and hardware types.
-
- Set enabled_hardware_types to specify the hardware types that can be used by ironic-conductor:
-
- ```
- [DEFAULT]
- enabled_hardware_types = ipmi
- ```
-
- Configure hardware interfaces:
-
- ```
- enabled_boot_interfaces = pxe
- enabled_deploy_interfaces = direct,iscsi
- enabled_inspect_interfaces = inspector
- enabled_management_interfaces = ipmitool
- enabled_power_interfaces = ipmitool
- ```
-
- Configure the default value of the interface:
-
- ```
- [DEFAULT]
- default_deploy_interface = direct
- default_network_interface = neutron
- ```
-
- If any driver that uses Direct Deploy is enabled, you must install and configure the Swift backend of the image service. The Ceph object gateway (RADOS gateway) can also be used as the backend of the image service.
-
- 6. Restart the ironic-conductor service.
-
- ```
- sudo systemctl restart openstack-ironic-conductor
- ```
-
- ##### Configuring the ironic-inspector Service
-
- Configuration file path: **/etc/ironic-inspector/inspector.conf**
-
- 1. Create the database:
-
- ```
- # mysql -u root -p
-
- MariaDB [(none)]> CREATE DATABASE ironic_inspector CHARACTER SET utf8;
-
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'localhost' \
- IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
- MariaDB [(none)]> GRANT ALL PRIVILEGES ON ironic_inspector.* TO 'ironic_inspector'@'%' \
- IDENTIFIED BY 'IRONIC_INSPECTOR_DBPASSWORD';
- ```
-
- 2. Use **connection** to configure the location of the database as follows. Replace **IRONIC_INSPECTOR_DBPASSWORD** with the password of user **ironic_inspector** and replace **DB_IP** with the IP address of the database server.
-
- ```
- [database]
- backend = sqlalchemy
- connection = mysql+pymysql://ironic_inspector:IRONIC_INSPECTOR_DBPASSWORD@DB_IP/ironic_inspector
- ```
-
- 3. Configure the communication address of the message queue:
-
- ```
- [DEFAULT]
- transport_url = rabbit://RPC_USER:RPC_PASSWORD@RPC_HOST:RPC_PORT/
- ```
-
- 4. Configure the Keystone authentication:
-
- ```
- [DEFAULT]
-
- auth_strategy = keystone
-
- [ironic]
-
- api_endpoint = http://IRONIC_API_HOST_ADDRESS:6385
- auth_type = password
- auth_url = http://PUBLIC_IDENTITY_IP:5000
- auth_strategy = keystone
- ironic_url = http://IRONIC_API_HOST_ADDRESS:6385
- os_region = RegionOne
- project_name = service
- project_domain_name = default
- user_domain_name = default
- username = IRONIC_SERVICE_USER_NAME
- password = IRONIC_SERVICE_USER_PASSWORD
- ```
-
- 5. Configure the ironic inspector dnsmasq service:
-
- ```
- # Configuration file path: /etc/ironic-inspector/dnsmasq.conf
- port=0
- interface=enp3s0 #Replace with the actual listening network interface.
- dhcp-range=172.20.19.100,172.20.19.110 #Replace with the actual DHCP IP address range.
- bind-interfaces
- enable-tftp
-
- dhcp-match=set:efi,option:client-arch,7
- dhcp-match=set:efi,option:client-arch,9
- dhcp-match=set:aarch64,option:client-arch,11
- dhcp-boot=tag:aarch64,grubaa64.efi
- dhcp-boot=tag:!aarch64,tag:efi,grubx64.efi
- dhcp-boot=tag:!aarch64,tag:!efi,pxelinux.0
-
- tftp-root=/tftpboot #Replace with the actual tftpboot directory.
- log-facility=/var/log/dnsmasq.log
- ```
-
- 6. Start the services:
-
- ```
- $ systemctl enable --now openstack-ironic-inspector.service
- $ systemctl enable --now openstack-ironic-inspector-dnsmasq.service
- ```
-
-3. Create the deploy ramdisk image.
-
- Currently, you can use ironic-python-agent-builder to build the ramdisk image. The following describes how to use this tool to build the deploy image used by ironic.
-
- ##### Installing ironic-python-agent-builder
-
- 1. Install Python 3 on the local host and switch the default Python to Python 3, then resolve any problems caused by the switch (for example, Yum may stop working because it depends on the previous Python version).
-
- ```
- yum install python36
- ```
-
- 2. Install the tool:
-
- ```
- pip install ironic-python-agent-builder
- ```
-
- 3. Modify the python interpreter in the following files:
-
- ```
- /usr/bin/yum
- /usr/libexec/urlgrabber-ext-down
- ```
-
- 4. Install the other necessary tools:
-
- ```
- yum install git
- ```
-
- `DIB` depends on the `semanage` command. Therefore, check whether the `semanage --help` command is available before creating an image. If the system displays a message indicating that the command is unavailable, install the command:
-
- ```
- # Check which package needs to be installed.
- [root@localhost ~]# yum provides /usr/sbin/semanage
- Loaded plug-in: fastestmirror
- Loading mirror speeds from cached hostfile
- * base: mirror.vcu.edu
- * extras: mirror.vcu.edu
- * updates: mirror.math.princeton.edu
- policycoreutils-python-2.5-34.el7.aarch64 : SELinux policy core python utilities
- Source: base
- Matching source:
- File name: /usr/sbin/semanage
- # Install.
- [root@localhost ~]# yum install policycoreutils-python
- ```
-
- ##### Creating the Image
-
- Based on test results, only CentOS 8 is supported. In addition, the centos8-minimal image lacks some NIC drivers, so all NICs stay down after a Dell physical machine boots. Therefore, the standard CentOS 8 image is used in this example. Add the following environment variables:
-
- ```
- export DIB_PYTHON_VERSION=3
- export DIB_RELEASE=8
- export DIB_YUM_MINIMAL_CREATE_INTERFACES
- ```
-
- For `arm` architecture, add the following information in addition:
-
- ```
- export ARCH=aarch64
- ```
-
- ###### Common Image
-
- Basic usage:
-
- ```
- usage: ironic-python-agent-builder [-h] [-r RELEASE] [-o OUTPUT] [-e ELEMENT]
- [-b BRANCH] [-v] [--extra-args EXTRA_ARGS]
- distribution
-
- positional arguments:
- distribution Distribution to use
-
- optional arguments:
- -h, --help show this help message and exit
- -r RELEASE, --release RELEASE
- Distribution release to use
- -o OUTPUT, --output OUTPUT
- Output base file name
- -e ELEMENT, --element ELEMENT
- Additional DIB element to use
- -b BRANCH, --branch BRANCH
- If set, override the branch that is used for ironic-
- python-agent and requirements
- -v, --verbose Enable verbose logging in diskimage-builder
- --extra-args EXTRA_ARGS
- Extra arguments to pass to diskimage-builder
- ```
-
- Example:
-
- ```
- ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky
- ```
-
- ###### Allowing SSH login
-
- Initialize the environment variables and create the image:
-
- ```
- export DIB_DEV_USER_USERNAME=ipa
- export DIB_DEV_USER_PWDLESS_SUDO=yes
- export DIB_DEV_USER_PASSWORD='123'
- ironic-python-agent-builder centos -o /mnt/ironic-agent-ssh -b origin/stable/rocky -e selinux-permissive -e devuser
- ```
-
- ###### Specifying the Code Repository
-
- Initialize the corresponding environment variables and create the image:
-
- ```
- # Specify the address and version of the repository.
- DIB_REPOLOCATION_ironic_python_agent=git@172.20.2.149:liuzz/ironic-python-agent.git
- DIB_REPOREF_ironic_python_agent=origin/develop
-
- # Clone code from Gerrit.
- DIB_REPOLOCATION_ironic_python_agent=https://review.opendev.org/openstack/ironic-python-agent
- DIB_REPOREF_ironic_python_agent=refs/changes/43/701043/1
- ```
-
- Reference: [source-repositories](https://docs.openstack.org/diskimage-builder/latest/elements/source-repositories/README.html).
-
- Using a specified repository address and version in this way has been verified to work.
diff --git a/docs/en/docs/thirdparty_migration/OpenStack-wallaby.md b/docs/en/docs/thirdparty_migration/OpenStack-wallaby.md
index ea61416534de5d9c149b0f3680f5d26ad0db93f5..486d1856d5d70faa55066435483d203939059cf4 100644
--- a/docs/en/docs/thirdparty_migration/OpenStack-wallaby.md
+++ b/docs/en/docs/thirdparty_migration/OpenStack-wallaby.md
@@ -31,7 +31,7 @@ OpenStack is an open source cloud computing infrastructure software project deve
As an open source cloud computing management platform, OpenStack consists of several major components, such as Nova, Cinder, Neutron, Glance, Keystone, and Horizon. OpenStack supports almost all cloud environments. The project aims to provide a cloud computing management platform that is easy-to-use, scalable, unified, and standardized. OpenStack provides an infrastructure as a service (IaaS) solution that combines complementary services, each of which provides an API for integration.
-The official source of openEuler 21.09 now supports OpenStack Wallaby. You can configure the Yum source then deploy OpenStack by following the instructions of this document.
+The official source of openEuler 22.03 LTS now supports OpenStack Wallaby. You can configure the Yum source then deploy OpenStack by following the instructions of this document.
## Conventions
@@ -64,33 +64,26 @@ The services involved in the preceding conventions are as follows:
### Environment Configuration
-1. Configure the openEuler 21.09 official Yum source. Enable the EPOL software repository to support OpenStack.
+1. Configure the openEuler 22.03 LTS official Yum source. Enable the EPOL software repository to support OpenStack.
```shell
- cat << EOF >> /etc/yum.repos.d/21.09-OpenStack_Wallaby.repo
- [OS]
- name=OS
- baseurl=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/
- enabled=1
- gpgcheck=1
- gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler
+ yum update
+ yum install openstack-release-wallaby
+ yum clean all && yum makecache
+ ```
- [everything]
- name=everything
- baseurl=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/
- enabled=1
- gpgcheck=1
- gpgkey=http://repo.openeuler.org/openEuler-21.09/everything/$basearch/RPM-GPG-KEY-openEuler
+ **Note**: Enable the EPOL repository for the Yum source if it is not enabled already.
+
+ ```shell
+ vi /etc/yum.repos.d/openEuler.repo
[EPOL]
name=EPOL
- baseurl=http://repo.openeuler.org/openEuler-21.09/EPOL/main/$basearch/
+ baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/
enabled=1
gpgcheck=1
- gpgkey=http://repo.openeuler.org/openEuler-21.09/OS/$basearch/RPM-GPG-KEY-openEuler
+ gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
EOF
-
- yum clean all && yum makecache
```
2. Change the host name and mapping.
@@ -102,7 +95,7 @@ The services involved in the preceding conventions are as follows:
hostnamectl set-hostname compute (CPT)
```
- Assuming the IP address of the controller node is `10.0.0.11` and the IP address of the compute node (if any) is `10.0.0.12`, add the following information to the `/etc/hosts` file:
+ Assuming the IP address of the controller node is **10.0.0.11** and the IP address of the compute node (if any) is **10.0.0.12**, add the following information to the **/etc/hosts** file:
```shell
10.0.0.11 controller
@@ -2211,7 +2204,7 @@ chown -R ipa.ipa /etc/ironic_python_agent/
### Installing Kolla
-Kolla provides the OpenStack service with the container-based deployment function that is ready for the production environment. The Kolla and Kolla-ansible services are introduced in openEuler in version 21.09.
+Kolla provides the OpenStack service with the container-based deployment function that is ready for the production environment. The Kolla and Kolla-ansible services are introduced in openEuler in version 22.03 LTS.
The installation of Kolla is simple. You only need to install the corresponding RPM packages:
@@ -2596,7 +2589,7 @@ Swift provides a scalable and highly available distributed object storage servic
Rebalance the ring:
```shell
- swift-ring-builder account.builder rebalance
+ swift-ring-builder container.builder rebalance
```
8. Create the object ring. (CTL)
@@ -2631,7 +2624,7 @@ Swift provides a scalable and highly available distributed object storage servic
Rebalance the ring:
```shell
- swift-ring-builder account.builder rebalance
+ swift-ring-builder object.builder rebalance
```
Distribute ring configuration files:
@@ -2686,3 +2679,530 @@ Swift provides a scalable and highly available distributed object storage servic
systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
```
+
+### Installing Cyborg
+
+Cyborg provides acceleration device support for OpenStack, for example, GPUs, FPGAs, ASICs, NPs, SoCs, NVMe/NOF SSDs, ODPs, DPDKs, and SPDKs.
+
+1. Create the database.
+
+```
+CREATE DATABASE cyborg;
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'localhost' IDENTIFIED BY 'CYBORG_DBPASS';
+GRANT ALL PRIVILEGES ON cyborg.* TO 'cyborg'@'%' IDENTIFIED BY 'CYBORG_DBPASS';
+```
+
+2. Create Keystone resource objects.
+
+```
+$ openstack user create --domain default --password-prompt cyborg
+$ openstack role add --project service --user cyborg admin
+$ openstack service create --name cyborg --description "Acceleration Service" accelerator
+
+$ openstack endpoint create --region RegionOne \
+ accelerator public http://:6666/v1
+$ openstack endpoint create --region RegionOne \
+ accelerator internal http://:6666/v1
+$ openstack endpoint create --region RegionOne \
+ accelerator admin http://:6666/v1
+```
+
+3. Install Cyborg.
+
+```
+yum install openstack-cyborg
+```
+
+4. Configure Cyborg.
+
+Modify **/etc/cyborg/cyborg.conf**.
+
+```
+[DEFAULT]
+transport_url = rabbit://%RABBITMQ_USER%:%RABBITMQ_PASSWORD%@%OPENSTACK_HOST_IP%:5672/
+use_syslog = False
+state_path = /var/lib/cyborg
+debug = True
+
+[database]
+connection = mysql+pymysql://%DATABASE_USER%:%DATABASE_PASSWORD%@%OPENSTACK_HOST_IP%/cyborg
+
+[service_catalog]
+project_domain_id = default
+user_domain_id = default
+project_name = service
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[placement]
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = placement
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+
+[keystone_authtoken]
+memcached_servers = localhost:11211
+project_domain_name = Default
+project_name = service
+user_domain_name = Default
+password = PASSWORD
+username = cyborg
+auth_url = http://%OPENSTACK_HOST_IP%/identity
+auth_type = password
+```
+
+Set the user names, passwords, and IP addresses as required.
+
+5. Synchronize the database table.
+
+```
+cyborg-dbsync --config-file /etc/cyborg/cyborg.conf upgrade
+```
+
+6. Start the Cyborg services.
+
+```
+systemctl enable openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+systemctl start openstack-cyborg-api openstack-cyborg-conductor openstack-cyborg-agent
+```
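+
+7. (Optional) Verify the services.
+
+The Cyborg command-line client is not installed by the steps above. If it is available (for example, from a python3-cyborgclient package; the exact package name may differ), you can list the discovered acceleration devices with admin credentials loaded. An empty list returned without errors indicates that cyborg-api is reachable.
+
+```
+openstack accelerator device list
+```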
+
+### Installing Aodh
+
+1. Create the database.
+
+```
+CREATE DATABASE aodh;
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'localhost' IDENTIFIED BY 'AODH_DBPASS';
+
+GRANT ALL PRIVILEGES ON aodh.* TO 'aodh'@'%' IDENTIFIED BY 'AODH_DBPASS';
+```
+
+2. Create Keystone resource objects.
+
+```
+openstack user create --domain default --password-prompt aodh
+
+openstack role add --project service --user aodh admin
+
+openstack service create --name aodh --description "Telemetry" alarming
+
+openstack endpoint create --region RegionOne alarming public http://controller:8042
+
+openstack endpoint create --region RegionOne alarming internal http://controller:8042
+
+openstack endpoint create --region RegionOne alarming admin http://controller:8042
+```
+
+3. Install Aodh.
+
+```
+yum install openstack-aodh-api openstack-aodh-evaluator openstack-aodh-notifier openstack-aodh-listener openstack-aodh-expirer python3-aodhclient
+```
+
+4. Modify the **/etc/aodh/aodh.conf** configuration file.
+
+```
+[database]
+connection = mysql+pymysql://aodh:AODH_DBPASS@controller/aodh
+
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+auth_strategy = keystone
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = aodh
+password = AODH_PASS
+interface = internalURL
+region_name = RegionOne
+```
+
+5. Initialize the database.
+
+```
+aodh-dbsync
+```
+
+6. Start the Aodh services.
+
+```
+systemctl enable openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+
+systemctl start openstack-aodh-api.service openstack-aodh-evaluator.service openstack-aodh-notifier.service openstack-aodh-listener.service
+```
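+
+7. Verify the services.
+
+A brief, hedged check: the client installed above (python3-aodhclient) can query the alarm list with admin credentials loaded; an empty list returned without errors indicates that aodh-api is working.
+
+```
+openstack alarm list
+```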
+
+### Installing Gnocchi
+
+1. Create the database.
+
+```
+CREATE DATABASE gnocchi;
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'localhost' IDENTIFIED BY 'GNOCCHI_DBPASS';
+
+GRANT ALL PRIVILEGES ON gnocchi.* TO 'gnocchi'@'%' IDENTIFIED BY 'GNOCCHI_DBPASS';
+```
+
+2. Create Keystone resource objects.
+
+```
+openstack user create --domain default --password-prompt gnocchi
+
+openstack role add --project service --user gnocchi admin
+
+openstack service create --name gnocchi --description "Metric Service" metric
+
+openstack endpoint create --region RegionOne metric public http://controller:8041
+
+openstack endpoint create --region RegionOne metric internal http://controller:8041
+
+openstack endpoint create --region RegionOne metric admin http://controller:8041
+```
+
+3. Install Gnocchi.
+
+```
+yum install openstack-gnocchi-api openstack-gnocchi-metricd python3-gnocchiclient
+```
+
+4. Modify the **/etc/gnocchi/gnocchi.conf** configuration file.
+
+```
+[api]
+auth_mode = keystone
+port = 8041
+uwsgi_mode = http-socket
+
+[keystone_authtoken]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_name = Default
+user_domain_name = Default
+project_name = service
+username = gnocchi
+password = GNOCCHI_PASS
+interface = internalURL
+region_name = RegionOne
+
+[indexer]
+url = mysql+pymysql://gnocchi:GNOCCHI_DBPASS@controller/gnocchi
+
+[storage]
+# coordination_url is not required but specifying one will improve
+# performance with better workload division across workers.
+coordination_url = redis://controller:6379
+file_basepath = /var/lib/gnocchi
+driver = file
+```
+
+5. Initialize the database.
+
+```
+gnocchi-upgrade
+```
+
+6. Start the Gnocchi services.
+
+```
+systemctl enable openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+
+systemctl start openstack-gnocchi-api.service openstack-gnocchi-metricd.service
+```
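+
+7. Verify the services.
+
+A brief, hedged check: the client installed above (python3-gnocchiclient) can query the metric list with admin credentials loaded; an empty list returned without errors indicates that gnocchi-api is reachable.
+
+```
+openstack metric list
+```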
+
+### Installing Ceilometer
+
+1. Create Keystone resource objects.
+
+```
+openstack user create --domain default --password-prompt ceilometer
+
+openstack role add --project service --user ceilometer admin
+
+openstack service create --name ceilometer --description "Telemetry" metering
+```
+
+2. Install Ceilometer.
+
+```
+yum install openstack-ceilometer-notification openstack-ceilometer-central
+```
+
+3. Modify the **/etc/ceilometer/pipeline.yaml** configuration file.
+
+```
+publishers:
+ # set address of Gnocchi
+ # + filter out Gnocchi-related activity meters (Swift driver)
+ # + set default archive policy
+ - gnocchi://?filter_project=service&archive_policy=low
+```
+
+4. Modify the **/etc/ceilometer/ceilometer.conf** configuration file.
+
+```
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+
+[service_credentials]
+auth_type = password
+auth_url = http://controller:5000/v3
+project_domain_id = default
+user_domain_id = default
+project_name = service
+username = ceilometer
+password = CEILOMETER_PASS
+interface = internalURL
+region_name = RegionOne
+```
+
+5. Initialize the database.
+
+```
+ceilometer-upgrade
+```
+
+6. Start the Ceilometer services.
+
+```
+systemctl enable openstack-ceilometer-notification.service openstack-ceilometer-central.service
+
+systemctl start openstack-ceilometer-notification.service openstack-ceilometer-central.service
+```
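+
+7. Verify the data flow.
+
+Ceilometer has no query API of its own; it publishes samples to Gnocchi as configured in **pipeline.yaml**. A hedged check: after the services have been running for a while, Gnocchi should list resources reported by Ceilometer (this assumes python3-gnocchiclient is installed and admin credentials are loaded).
+
+```
+openstack metric resource list
+```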
+
+### Installing Heat
+
+1. Create the **heat** database and grant proper privileges to it. Replace **HEAT_DBPASS** with a proper password.
+
+```
+CREATE DATABASE heat;
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'HEAT_DBPASS';
+GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'HEAT_DBPASS';
+```
+
+2. Create a service credential. Create the **heat** user and add the **admin** role to it.
+
+```
+openstack user create --domain default --password-prompt heat
+openstack role add --project service --user heat admin
+```
+
+3. Create the **heat** and **heat-cfn** services and their API endpoints.
+
+```
+openstack service create --name heat --description "Orchestration" orchestration
+openstack service create --name heat-cfn --description "Orchestration" cloudformation
+openstack endpoint create --region RegionOne orchestration public http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration internal http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne orchestration admin http://controller:8004/v1/%\(tenant_id\)s
+openstack endpoint create --region RegionOne cloudformation public http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation internal http://controller:8000/v1
+openstack endpoint create --region RegionOne cloudformation admin http://controller:8000/v1
+```
+
+4. Create additional OpenStack management information, including the **heat** domain and its administrator **heat_domain_admin**, the **heat_stack_owner** role, and the **heat_stack_user** role.
+
+```
+openstack user create --domain heat --password-prompt heat_domain_admin
+openstack role add --domain heat --user-domain heat --user heat_domain_admin admin
+openstack role create heat_stack_owner
+openstack role create heat_stack_user
+```
+
+5. Install the software packages.
+
+```
+yum install openstack-heat-api openstack-heat-api-cfn openstack-heat-engine
+```
+
+6. Modify the configuration file **/etc/heat/heat.conf**.
+
+```
+[DEFAULT]
+transport_url = rabbit://openstack:RABBIT_PASS@controller
+heat_metadata_server_url = http://controller:8000
+heat_waitcondition_server_url = http://controller:8000/v1/waitcondition
+stack_domain_admin = heat_domain_admin
+stack_domain_admin_password = HEAT_DOMAIN_PASS
+stack_user_domain_name = heat
+
+[database]
+connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat
+
+[keystone_authtoken]
+www_authenticate_uri = http://controller:5000
+auth_url = http://controller:5000
+memcached_servers = controller:11211
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = service
+username = heat
+password = HEAT_PASS
+
+[trustee]
+auth_type = password
+auth_url = http://controller:5000
+username = heat
+password = HEAT_PASS
+user_domain_name = default
+
+[clients_keystone]
+auth_uri = http://controller:5000
+```
+
+7. Initialize the **heat** database table.
+
+```
+su -s /bin/sh -c "heat-manage db_sync" heat
+```
+
+8. Start the services.
+
+```
+systemctl enable openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+systemctl start openstack-heat-api.service openstack-heat-api-cfn.service openstack-heat-engine.service
+```
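+
+9. (Optional) Verify the deployment with a minimal stack.
+
+The following is a hedged sketch rather than a required step. It assumes the Heat command-line client is available (for example, from a python3-heatclient package; the exact name may differ) and that the user has the **heat_stack_owner** role created above. The template creates only a Neutron network, which is enough to confirm that heat-engine can drive other services.
+
+```
+cat > test-stack.yaml << EOF
+heat_template_version: 2018-08-31
+description: Minimal stack used to verify the Heat installation
+resources:
+  verify_net:
+    type: OS::Neutron::Net
+    properties:
+      name: heat-verify-net
+EOF
+openstack stack create -t test-stack.yaml verify-heat
+openstack stack list
+```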
+
+## OpenStack Quick Installation
+
+The OpenStack SIG provides the Ansible script for one-click deployment of OpenStack in All in One or Distributed modes. Users can use the script to quickly deploy an OpenStack environment based on openEuler RPM packages. The following uses the All in One mode installation as an example.
+
+1. Install the OpenStack SIG Tool.
+
+ ```shell
+ pip install openstack-sig-tool
+ ```
+
+2. Configure the OpenStack Yum source.
+
+ ```shell
+ yum install openstack-release-wallaby
+ ```
+
+ **Note**: Enable the EPOL repository for the Yum source if it is not enabled already.
+
+ ```shell
+ vi /etc/yum.repos.d/openEuler.repo
+
+ [EPOL]
+ name=EPOL
+ baseurl=http://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/$basearch/
+ enabled=1
+ gpgcheck=1
+ gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS/OS/$basearch/RPM-GPG-KEY-openEuler
+   ```
+
+3. Update the Ansible configurations.
+
+ Open the **/usr/local/etc/inventory/all_in_one.yaml** file and modify the configuration based on the environment and requirements. Modify the file as follows:
+
+ ```shell
+ all:
+ hosts:
+ controller:
+ ansible_host:
+ ansible_ssh_private_key_file:
+ ansible_ssh_user: root
+ vars:
+ mysql_root_password: root
+ mysql_project_password: root
+ rabbitmq_password: root
+ project_identity_password: root
+ enabled_service:
+ - keystone
+ - neutron
+ - cinder
+ - placement
+ - nova
+ - glance
+ - horizon
+ - aodh
+ - ceilometer
+ - cyborg
+ - gnocchi
+ - kolla
+ - heat
+ - swift
+ - trove
+ - tempest
+ neutron_provider_interface_name: br-ex
+ default_ext_subnet_range: 10.100.100.0/24
+ default_ext_subnet_gateway: 10.100.100.1
+ neutron_dataplane_interface_name: eth1
+ cinder_block_device: vdb
+ swift_storage_devices:
+ - vdc
+ swift_hash_path_suffix: ash
+ swift_hash_path_prefix: has
+ children:
+ compute:
+ hosts: controller
+ storage:
+ hosts: controller
+ network:
+ hosts: controller
+ vars:
+ test-key: test-value
+ dashboard:
+ hosts: controller
+ vars:
+ allowed_host: '*'
+ kolla:
+ hosts: controller
+ vars:
+ # We add openEuler OS support for kolla in OpenStack Queens/Rocky release
+ # Set this var to true if you want to use it in Q/R
+ openeuler_plugin: false
+ ```
+
+ Key Configurations
+
+ | Item | Description|
+ |---|---|
+ | ansible_host | IP address of the all-in-one node.|
+ | ansible_ssh_private_key_file | Key used by the Ansible script for logging in to the all-in-one node.|
+ | ansible_ssh_user | User used by the Ansible script for logging in to the all-in-one node.|
+ | enabled_service | List of services to be installed. You can delete services as required.|
+ | neutron_provider_interface_name | Neutron L3 bridge name. |
+ | default_ext_subnet_range | Neutron private network IP address range. |
+ | default_ext_subnet_gateway | Neutron private network gateway. |
+ | neutron_dataplane_interface_name | NIC used by Neutron. You are advised to use a new NIC to avoid conflicts with existing NICs causing disconnection of the all-in-one node. |
+ | cinder_block_device | Name of the block device used by Cinder.|
+ | swift_storage_devices | Name of the block device used by Swift. |
+
+4. Run the installation command.
+
+ ```shell
+ oos env setup all_in_one
+ ```
+
+ After the command is executed, the OpenStack environment of the All in One mode is successfully deployed.
+
+ The environment variable file **.admin-openrc** is stored in the home directory of the current user.
+
+5. Initialize the Tempest environment.
+
+ If you want to perform the Tempest test in the environment, run the `oos env init all_in_one` command to create the OpenStack resources required by Tempest.
+
+ After the command is executed successfully, a **mytest** directory is generated in the home directory of the user. You can run the `tempest run` command in the directory.
\ No newline at end of file
diff --git a/docs/en/docs/userguide/patch-tracking.md b/docs/en/docs/userguide/patch-tracking.md
index 1f427a5d9f65e5d3ab41e023a57ad91d867071cf..67586ab4d770d79716daa72eff2dad599842b591 100644
--- a/docs/en/docs/userguide/patch-tracking.md
+++ b/docs/en/docs/userguide/patch-tracking.md
@@ -335,4 +335,4 @@ During the operation of the patch-tracking, the following error message may occu
#### Cause Analysis
-The preceding problem is caused by the unstable network access between the patch-tracking and GitHub API. Ensure that the patch-tracking is operating in a stable network environment (for example, HUAWEI CLOUD Hong Kong).
\ No newline at end of file
+The preceding problem is caused by the unstable network access between the patch-tracking and GitHub API. Ensure that the patch-tracking is operating in a stable network environment (for example, Huawei Cloud Hong Kong).
\ No newline at end of file
diff --git a/docs/en/menu/index.md b/docs/en/menu/index.md
index 16cd18d03bcba167e5c95b9fef2dcc6df4bfd1f3..16989988896db89673cf23704125f00088a507bb 100644
--- a/docs/en/menu/index.md
+++ b/docs/en/menu/index.md
@@ -5,7 +5,7 @@ headless: true
- [Release Notes]({{< relref "./docs/Releasenotes/release_notes.md" >}})
- [User Notice]({{< relref "./docs/Releasenotes/user-notice.md" >}})
- [Introduction]({{< relref "./docs/Releasenotes/introduction.md" >}})
- - [Installing the OS]({{< relref "./docs/Releasenotes/installing-the-os.md" >}})
+ - [OS Installation]({{< relref "./docs/Releasenotes/installing-the-os.md" >}})
- [Key Features]({{< relref "./docs/Releasenotes/key-features.md" >}})
- [Known Issues]({{< relref "./docs/Releasenotes/known-issues.md" >}})
- [Resolved Issues]({{< relref "./docs/Releasenotes/resolved-issues.md" >}})
@@ -17,7 +17,7 @@ headless: true
- [Installation Guide]({{< relref "./docs/Installation/Installation.md" >}})
- [Installation on Servers]({{< relref "./docs/Installation/install-server.md" >}})
- [Installation Preparations]({{< relref "./docs/Installation/installation-preparations.md" >}})
- - [Installation Mode]({{< relref "./docs/Installation/installation-mode.md" >}})
+ - [Installation Modes]({{< relref "./docs/Installation/Installation-Modes1.md" >}})
- [Installation Guideline]({{< relref "./docs/Installation/installation-guideline.md" >}})
- [Using Kickstart for Automatic Installation]({{< relref "./docs/Installation/using-kickstart-for-automatic-installation.md" >}})
- [FAQs]({{< relref "./docs/Installation/faqs.md" >}})
@@ -57,7 +57,7 @@ headless: true
- [Appendix]({{< relref "./docs/SecHarden/appendix.md" >}})
- [Virtualization User Guide]({{< relref "./docs/Virtualization/virtualization.md" >}})
- [Introduction to Virtualization]({{< relref "./docs/Virtualization/introduction-to-virtualization.md" >}})
- - [Installation to Virtualization]({{< relref "./docs/Virtualization/installation-to-virtualization.md" >}})
+ - [Installing Virtualization]({{< relref "./docs/Virtualization/virtualization-installation.md" >}})
- [Environment Preparation]({{< relref "./docs/Virtualization/environment-preparation.md" >}})
- [VM Configuration]({{< relref "./docs/Virtualization/vm-configuration.md" >}})
- [Managing VMs]({{< relref "./docs/Virtualization/managing-vms.md" >}})
@@ -67,14 +67,19 @@ headless: true
- [VM Maintainability Management]({{< relref "./docs/Virtualization/vm-maintainability-management.md" >}})
- [Best Practices]({{< relref "./docs/Virtualization/best-practices.md" >}})
- [Tool Guide]({{< relref "./docs/Virtualization/tool-guide.md" >}})
+ - [vmtop]({{< relref "./docs/Virtualization/vmtop.md" >}})
+ - [LibcarePlus]({{< relref "./docs/Virtualization/LibcarePlus.md" >}})
- [Appendix]({{< relref "./docs/Virtualization/appendix.md" >}})
- [StratoVirt Virtualization User Guide]({{< relref "./docs/StratoVirt/StratoVrit_guidence.md" >}})
- - [Introduction to StratoVirt]({{< relref "./docs/StratoVirt/StratoVirt_intoduction.md" >}})
+ - [Introduction to StratoVirt]({{< relref "./docs/StratoVirt/StratoVirt_introduction.md" >}})
- [Installing StratoVirt]({{< relref "./docs/StratoVirt/Install_StratoVirt.md" >}})
- [Preparing the Environment]({{< relref "./docs/StratoVirt/Prepare_env.md" >}})
- [Configuring a VM]({{< relref "./docs/StratoVirt/VM_configuration.md" >}})
- [VM Management]({{< relref "./docs/StratoVirt/VM_management.md" >}})
- - [Connecting to the iSula Secure Container]({{< relref "./docs/StratoVirt/connecting-to-the-isula-secure-container.md" >}})
+ - [Connecting to the iSula Secure Container]({{< relref "./docs/StratoVirt/interconnect_isula.md" >}})
+ - [Interconnecting with libvirt]({{< relref "./docs/StratoVirt/Interconnect_libvirt.md" >}})
+ - [StratoVirt VFIO Instructions]({{< relref "./docs/StratoVirt/StratoVirt_VFIO_instructions.md" >}})
+- [Container User Guide]({{< relref "./docs/Container/container.md" >}})
- [iSulad Container Engine]({{< relref "./docs/Container/isulad-container-engine.md" >}})
- [Installation, Upgrade and Uninstallation]({{< relref "./docs/Container/installation-upgrade-Uninstallation.md" >}})
- [Installation and Configuration]({{< relref "./docs/Container/installation-configuration.md" >}})
@@ -83,8 +88,6 @@ headless: true
- [Application Scenarios]({{< relref "./docs/Container/application-scenarios.md" >}})
- [Container Management]({{< relref "./docs/Container/container-management.md" >}})
- [Interconnection with the CNI Network]({{< relref "./docs/Container/interconnection-with-the-cni-network.md" >}})
-})
- - [Container Resource Management]({{< relref "./docs/Container/container-resource-management.md" >}})
- [Privileged Container]({{< relref "./docs/Container/privileged-container.md" >}})
- [CRI]({{< relref "./docs/Container/cri.md" >}})
- [Image Management]({{< relref "./docs/Container/image-management.md" >}})
@@ -134,17 +137,24 @@ headless: true
- [Application Scenarios]({{< relref "./docs/A-Tune/application-scenarios.md" >}})
- [FAQs]({{< relref "./docs/A-Tune/faqs.md" >}})
- [Appendixes]({{< relref "./docs/A-Tune/appendixes.md" >}})
+- [openEuler Embedded User Guide]({{< relref "./docs/Embedded/embedded.md" >}})
+ - [Installation and Running]({{< relref "./docs/Embedded/installation-and-running.md" >}})
+ - [Build Guide]({{< relref "./docs/Embedded/building-guide.md" >}})
+ - [Quick Build Guide]({{< relref "./docs/Embedded/quick-build-guide.md" >}})
+ - [Container Build Guide]({{< relref "./docs/Embedded/container-build-guide.md" >}})
+ - [Application Development Using openEuler Embedded SDK]({{< relref "./docs/Embedded/application-development-using-sdk.md" >}})
+- [Kernel Live Upgrade Guide]({{< relref "./docs/KernelLiveUpgrade/KernelLiveUpgrade.md" >}})
+ - [Installation and Deployment]({{< relref "./docs/KernelLiveUpgrade/installation-and-deployment.md" >}})
+ - [How to Run]({{< relref "./docs/KernelLiveUpgrade/how-to-run.md" >}})
+ - [Common Problems and Solutions]({{< relref "./docs/KernelLiveUpgrade/common-problems-and-solutions.md" >}})
- [Application Development Guide]({{< relref "./docs/ApplicationDev/application-development.md" >}})
- - [Preparation]({{< relref "./docs/ApplicationDev/preparation.md" >}})
+ - [Preparation]({{< relref "./docs/ApplicationDev/preparations-for-development-environment.md" >}})
- [Using GCC for Compilation]({{< relref "./docs/ApplicationDev/using-gcc-for-compilation.md" >}})
- [Using Make for Compilation]({{< relref "./docs/ApplicationDev/using-make-for-compilation.md" >}})
- [Using JDK for Compilation]({{< relref "./docs/ApplicationDev/using-jdk-for-compilation.md" >}})
- [Building an RPM Package]({{< relref "./docs/ApplicationDev/building-an-rpm-package.md" >}})
- [FAQ]({{< relref "./docs/ApplicationDev/FAQ.md" >}})
-- [Kernel Hot Upgrade Guide]({{< relref "./docs/KernelLiveUpgrade/KernelLiveUpgrade.md" >}})
- - [Installation and Deployment]({{< relref "./docs/KernelLiveUpgrade/installation-and-deployment.md" >}})
- - [How to Run]({{< relref "./docs/KernelLiveUpgrade/how-to-run.md" >}})
- - [Common Problems and Solutions]({{< relref "./docs/KernelLiveUpgrade/common-problems-and-solutions.md" >}})
- [secGear Development Guide]({{< relref "./docs/secGear/secGear.md" >}})
- [Introduction to secGear]({{< relref "./docs/secGear/introduction-to-secGear.md" >}})
- [Installing secGear]({{< relref "./docs/secGear/installing-secGear.md" >}})
@@ -164,10 +174,15 @@ headless: true
- [Deploying a Cluster]({{< relref "./docs/Kubernetes/eggo-deploying-a-cluster.md" >}})
- [Dismantling a Cluster]({{< relref "./docs/Kubernetes/eggo-dismantling-a-cluster.md" >}})
- [Running the Test Pod]({{< relref "./docs/Kubernetes/running-the-test-pod.md" >}})
+- [KubeEdge User Guide]({{< relref "./docs/KubeEdge/overview.md" >}})
+ - [KubeEdge Usage Guide]({{< relref "./docs/KubeEdge/kubeedge-usage-guide.md" >}})
+ - [KubeEdge Deployment Guide]({{< relref "./docs/KubeEdge/kubeedge-deployment-guide.md" >}})
- [Third-Party Software Deployment Guide]({{< relref "./docs/thirdparty_migration/thidrparty.md" >}})
- - [OpenStack Victoria Deployment Guide]({{< relref "./docs/thirdparty_migration/OpenStack-victoria.md" >}})
- - [Installing and Deploying an HA Cluster]({{< relref "./docs/thirdparty_migration/installha.md" >}})
- - [KubeSphere Installation Guide]({{< relref "./docs/thirdparty_migration/kubesphere.md" >}})
+ - [OpenStack-Wallaby Deployment Guide]({{< relref "./docs/thirdparty_migration/OpenStack-wallaby.md" >}})
+ - [HA User Guide]({{< relref "./docs/desktop/ha.md" >}})
+ - [Deploying an HA Cluster]({{< relref "./docs/thirdparty_migration/installha.md" >}})
+ - [HA Usage Example]({{< relref "./docs/desktop/HA_usage_example.md" >}})
+ - [KubeSphere Deployment Guide]({{< relref "./docs/desktop/kubesphere.md" >}})
- [Desktop Environment User Guide]({{< relref "./docs/desktop/desktop.md" >}})
- [UKUI]({{< relref "./docs/desktop/ukui.md" >}})
- [UKUI Installation]({{< relref "./docs/desktop/installing-UKUI.md" >}})
@@ -196,5 +211,15 @@ headless: true
- [About KubeOS]({{< relref "./docs/KubeOS/about-kubeos.md" >}})
- [Installation and Deployment]({{< relref "./docs/KubeOS/installation-and-deployment.md" >}})
- [Usage Instructions]({{< relref "./docs/KubeOS/usage-instructions.md" >}})
+- [Rubik User Guide]({{< relref "./docs/rubik/overview.md" >}})
+ - [Installation and Deployment]({{< relref "./docs/rubik/installation-and-deployment.md" >}})
+ - [HTTP APIs]({{< relref "./docs/rubik/http-apis.md" >}})
+ - [Example of Isolation for Hybrid Deployed Services]({{< relref "./docs/rubik/example-of-isolation-for-hybrid-deployed-services.md" >}})
- [Image Tailoring and Customization Tool]({{< relref "./docs/TailorCustom/overview.md" >}})
- - [isocut Usage Guide]({{< relref "./docs/TailorCustom/isocut-usage-guide.md" >}})
\ No newline at end of file
+ - [isocut Usage Guide]({{< relref "./docs/TailorCustom/isocut-usage-guide.md" >}})
+ - [ImageTailor User Guide]({{< relref "./docs/TailorCustom/imageTailor-user-guide.md" >}})
+- [Gazelle User Guide]({{< relref "./docs/Gazelle/Gazelle.md" >}})
+- [NestOS User Guide]({{< relref "./docs/NestOS/overview.md" >}})
+ - [Installation and Deployment]({{< relref "./docs/NestOS/installation-and-deployment.md" >}})
+ - [Setting Up Kubernetes and iSulad]({{< relref "./docs/NestOS/usage.md" >}})
+ - [Feature Description]({{< relref "./docs/NestOS/feature-description.md" >}})
\ No newline at end of file
diff --git "a/docs/zh/docs/ApplicationDev/\344\275\277\347\224\250GCC\347\274\226\350\257\221.md" "b/docs/zh/docs/ApplicationDev/\344\275\277\347\224\250GCC\347\274\226\350\257\221.md"
index 57c425e982aa613c94e36b4cb49ec066d5027e85..8d4e70ee93191070950bc1d4d1762224bf0f996d 100644
--- "a/docs/zh/docs/ApplicationDev/\344\275\277\347\224\250GCC\347\274\226\350\257\221.md"
+++ "b/docs/zh/docs/ApplicationDev/\344\275\277\347\224\250GCC\347\274\226\350\257\221.md"
@@ -126,7 +126,7 @@ _options_ :编译选项。
_filenames_ :文件名称。
-GCC是一个功能强大的编译器,其 _options_ 参数取值很多,但有些大部分并不常用,常用的 _options_ 取值如[表2](#table1342946175212)所示。
+GCC是一个功能强大的编译器,其 _options_ 参数取值很多,但大部分并不常用,常用的 _options_ 取值如[表2](#table1342946175212)所示。
**表 2** GCC常用的编译选项
@@ -214,7 +214,7 @@ GCC是一个功能强大的编译器,其 _options_ 参数取值很多,但有
-shared
|
默认选项,可省略。
-- 可以生成动态库文件。
- 进行动态编译,优先链接动态库,只有没有动态库是才会链接同名的静态库。
+- 可以生成动态库文件。
- 进行动态编译,优先链接动态库,只有没有动态库时才会链接同名的静态库。
|
-
|
@@ -243,7 +243,7 @@ GCC是一个功能强大的编译器,其 _options_ 参数取值很多,但有
- 分别编译各个源文件,之后对编译后输出的目标文件链接。编译时只重新编译修改的文件,未修改的文件不用重新编译。
- 示例:分别编译test1.c,test2.c,在将二者的目标文件test1.o,test2.o链接成test可执行文件。
+ 示例:分别编译test1.c,test2.c,再将二者的目标文件test1.o,test2.o链接成test可执行文件。
```
$ gcc -c test1.c
@@ -262,7 +262,7 @@ GCC是一个功能强大的编译器,其 _options_ 参数取值很多,但有
- 资源利用不一样。
- 静态库为生成的可执行文件的一部分,而动态库为单独的文件。所以使用静态库和和动态库的可执行文件大小和占用的磁盘空间大小不一样,导致资源利用不一样。
+ 静态库为生成的可执行文件的一部分,而动态库为单独的文件。所以使用静态库和动态库的可执行文件大小和占用的磁盘空间大小不一样,导致资源利用不一样。
- 扩展性与兼容性不一样
diff --git "a/docs/zh/docs/ApplicationDev/\345\274\200\345\217\221\347\216\257\345\242\203\345\207\206\345\244\207.md" "b/docs/zh/docs/ApplicationDev/\345\274\200\345\217\221\347\216\257\345\242\203\345\207\206\345\244\207.md"
index 52cc0e383673cb8a2f8dc20ffd0cea2e4d62d7b3..5e5266e4cbb8bafaf142ea64c1e0f7ef00d43e76 100644
--- "a/docs/zh/docs/ApplicationDev/\345\274\200\345\217\221\347\216\257\345\242\203\345\207\206\345\244\207.md"
+++ "b/docs/zh/docs/ApplicationDev/\345\274\200\345\217\221\347\216\257\345\242\203\345\207\206\345\244\207.md"
@@ -426,7 +426,7 @@ $ export PATH=$JAVA_HOME/bin:$PATH
$ dnf list installed | grep gtk
```
-如果显示gtk2或者gtk3,则表示您已安装该库,可以直接跳过进入下一步,否则在root权限西下运行如下命令自动下载安装gtk库。
+如果显示gtk2或者gtk3,则表示您已安装该库,可以直接跳过进入下一步,否则在root权限下运行如下命令自动下载安装gtk库。
```
# dnf -y install gtk2 libXtst libXrender xauth
@@ -448,7 +448,7 @@ $ mkdir ~/.ssh
然后在.ssh目录下编辑config文件并保存:
-1. 使用vim打卡config文件
+1. 使用vim打开config文件
```
$ vim config
diff --git "a/docs/zh/docs/Embedded/SDK\345\272\224\347\224\250\345\274\200\345\217\221.md" "b/docs/zh/docs/Embedded/SDK\345\272\224\347\224\250\345\274\200\345\217\221.md"
new file mode 100644
index 0000000000000000000000000000000000000000..4fb31398f8d49862122892e7a1cc3cacf0f2cfd0
--- /dev/null
+++ "b/docs/zh/docs/Embedded/SDK\345\272\224\347\224\250\345\274\200\345\217\221.md"
@@ -0,0 +1,175 @@
+# 基于openEuler Embedded的SDK应用开发
+
+当前发布的镜像除了体验openEuler Embedded的基本功能外,还可以进行基本的应用开发,也即在openEuler Embedded上运行用户自己的程序。本章主要介绍如何基于openEuler Embedded的SDK进行应用开发。
+
+
+
+- [安装SDK](#安装SDK)
+- [使用SDK编译hello world样例](#使用SDK编译hello world样例)
+- [使用SDK编译内核模块样例](#使用SDK编译内核模块样例)
+
+
+
+### 安装SDK
+
+1. **执行SDK自解压安装脚本**
+
+运行如下命令:
+
+```
+sh openeuler-glibc-x86_64-openeuler-image-aarch64-qemu-aarch64-toolchain-22.03.sh
+```
+
+根据提示输入工具链的安装路径,默认路径是`/opt/openeuler/<版本号>/`(例如`/opt/openeuler/22.03`);若不设置,则按默认路径安装;也可以配置相对路径或绝对路径。
+
+举例如下:
+
+```
+sh ./openeuler-glibc-x86_64-openeuler-image-armv7a-qemu-arm-toolchain-22.03.sh
+openEuler embedded(openEuler Embedded Reference Distro) SDK installer version 22.03
+================================================================
+Enter target directory for SDK (default: /opt/openeuler/22.03): sdk
+You are about to install the SDK to "/usr1/openeuler/sdk". Proceed [Y/n]? y
+Extracting SDK...............................................done
+Setting it up...SDK has been successfully set up and is ready to be used.
+Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
+$ . /usr1/openeuler/sdk/environment-setup-armv7a-openeuler-linux-gnueabi
+```
+
+2. **设置SDK环境变量**
+
+运行source命令。上例中前一步执行结束最后已打印source命令,运行即可。
+
+```
+. /usr1/openeuler/myfiles/sdk/environment-setup-armv7a-openeuler-linux-gnueabi
+```
+
+3. **查看是否安装成功**
+
+运行如下命令,查看是否安装成功、环境设置成功。
+
+```
+arm-openeuler-linux-gnueabi-gcc -v
+```
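+
+也可以进一步交叉编译一个简单程序验证环境(示例仅供参考,假设SDK环境脚本已按Yocto SDK惯例设置好CC等变量):
+
+```
+echo 'int main(void){return 0;}' > test.c
+$CC test.c -o test
+file test
+```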
+
+### 使用SDK编译hello world样例
+
+1. **准备代码**
+
+以构建一个hello world程序为例,运行在openEuler Embedded根文件系统镜像中。
+
+创建一个hello.c文件,源码如下:
+
+``` c
+#include <stdio.h>
+
+int main(void)
+{
+ printf("hello world\n");
+}
+```
+
+编写CMakeLists.txt,和hello.c文件放在同一个目录。
+
+```
+project(hello C)
+
+add_executable(hello hello.c)
+```
+
+2. **编译生成二进制**
+
+进入hello.c文件所在目录,使用工具链编译,命令如下:
+
+```
+cmake .
+make
+```
+
+把编译好的hello程序拷贝到openEuler Embedded系统的/tmp/某个目录下(例如/tmp/myfiles/)。如何拷贝可以参考前文所述[使能共享文件系统场景](./安装与运行.html#使能共享文件系统场景)。
+
+3. **运行用户态程序**
+
+在openEuler Embedded系统中运行hello程序。
+
+```
+cd /tmp/myfiles/
+./hello
+```
+
+如运行成功,则会输出\"hello world\"。
+
+### 使用SDK编译内核模块样例
+
+1. **准备环境**
+
+在设置好SDK环境的基础之上,编译内核模块还需准备相应环境,但只需要准备一次即可。运行如下命令会创建相应的内核模块编译环境:
+
+```
+cd <SDK安装路径>/sysroots/<目标架构>-openeuler-linux/usr/src/kernel
+make module_prepare
+```
+
+2. **准备代码**
+
+以编译一个内核模块为例,运行在openEuler Embedded内核中。
+
+创建一个hello.c文件,源码如下:
+
+```c
+#include <linux/init.h>
+#include <linux/module.h>
+
+static int hello_init(void)
+{
+ printk("Hello, openEuler Embedded!\r\n");
+ return 0;
+}
+
+static void hello_exit(void)
+{
+ printk("Byebye!");
+}
+
+module_init(hello_init);
+module_exit(hello_exit);
+
+MODULE_LICENSE("GPL");
+```
+
+编写Makefile,和hello.c文件放在同一个目录:
+
+```
+ KERNELDIR := ${KERNEL_SRC_DIR}
+ CURRENT_PATH := $(shell pwd)
+
+ target := hello
+ obj-m := $(target).o
+
+ build := kernel_modules
+
+ kernel_modules:
+ $(MAKE) -C $(KERNELDIR) M=$(CURRENT_PATH) modules
+ clean:
+ $(MAKE) -C $(KERNELDIR) M=$(CURRENT_PATH) clean
+```
+
+`KERNEL_SRC_DIR` 为SDK中内核源码树的目录,该变量在安装SDK后会被自动设置。
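+
+可以先确认该变量已被正确设置(仅作检查示例):
+
+```
+echo ${KERNEL_SRC_DIR}
+```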
+
+3. **编译生成内核模块**
+
+进入hello.c文件所在目录,使用工具链编译,命令如下:
+
+ make
+
+将编译好的hello.ko拷贝到openEuler Embedded系统的目录下。
+
+如何拷贝可以参考前文所述[使能共享文件系统场景](./安装与运行.html#使能共享文件系统场景)。
+
+4. **插入内核模块**
+
+在openEuler Embedded系统中插入内核模块:
+
+ insmod hello.ko
+
+如运行成功,则会在内核日志中出现"Hello, openEuler Embedded!"。
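+
+可通过查看内核日志确认模块已加载(dmesg为通用命令,仅作示例):
+
+    dmesg | tail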
\ No newline at end of file
diff --git a/docs/zh/docs/Embedded/embedded.md b/docs/zh/docs/Embedded/embedded.md
index e63829588a6eee4a48b041d73159f4a895572fc2..44412fc68e8ae992af80ef44cb9abc591949b5c7 100755
--- a/docs/zh/docs/Embedded/embedded.md
+++ b/docs/zh/docs/Embedded/embedded.md
@@ -1,5 +1,5 @@
# openEuler Embedded 用户指南
-openEuler Embedded是基于openEuler社区面向嵌入式场景的Linux版本。由于嵌入式系统应用受到多个因素的约束,如资源、功耗、多样性等,使得面向服务器领域的Linux及相应的构建系统很难满足嵌入式场景的要求,因此业界广泛采用[Yocto](https://www.yoctoproject.org/)来定制化构建嵌入式Linux。openEuler Embedded当前也采用的Yocto构建,但实现了与openEuler其他版本代码同源,具体的构建方法请参考[SIG-Yocto](https://gitee.com/openeuler/community/tree/master/sig/sig-Yocto)下相关代码仓中的内容。
+openEuler Embedded是基于openEuler社区面向嵌入式场景的Linux版本,旨在成为一个高质量的以Linux为中心的嵌入式系统软件平台。openEuler Embedded在内核版本、软件包版本等代码层面会与openEuler其他场景的Linux保持一致,共同演进,不同之处在于针对嵌入式场景的内核配置、软件包的组合与配置以及代码特性补丁。
-本文档主要用于介绍如何获取预先构建好的镜像,如何运行镜像,以及如何基于镜像完成基本的嵌入式Linux应用开发。使用人员需要具备基本的Linux操作系统知识。
+本文档主要用于介绍如何获取预先构建好的镜像,如何运行镜像,如何基于镜像完成基本的嵌入式Linux应用开发以及如何构建openEuler Embedded。使用人员需要具备基本的Linux操作系统知识。
diff --git "a/docs/zh/docs/Embedded/openEuler Embedded 22.03\345\217\221\350\241\214\350\257\264\346\230\216.md" "b/docs/zh/docs/Embedded/openEuler Embedded 22.03\345\217\221\350\241\214\350\257\264\346\230\216.md"
new file mode 100644
index 0000000000000000000000000000000000000000..915644291ba1b4365d91eec3376e2a8d49c7fbb2
--- /dev/null
+++ "b/docs/zh/docs/Embedded/openEuler Embedded 22.03\345\217\221\350\241\214\350\257\264\346\230\216.md"
@@ -0,0 +1,32 @@
+# openEuler Embedded 22.03发行说明
+
+openEuler Embedded 22.03是openEuler Embedded第一次正式发布,包含的内容大概如下:
+
+## 内核
+
+- 内核升级到 5.10.0-60.17.0
+
+- 内核支持Preempt-RT补丁
+
+- 内核支持树莓派4B相关补丁
+
+## 软件包
+
+- 支持80+软件包,详见[当前所支持的软件包](https://openeuler.gitee.io/yocto-meta-openeuler/features/software_package_description.html)
+
+## 亮点特性
+
+- 多OS混合部署框架的初步实现,支持openEuler Embedded和Zephyr的混合部署,详见[多OS混合部署框架](https://openeuler.gitee.io/yocto-meta-openeuler/features/mcs.html)
+- 分布式软总线的初步集成,详见[分布式软总线](https://openeuler.gitee.io/yocto-meta-openeuler/features/distributed_soft_bus.html)
+
+- 安全加固指导,详见[安全加固说明](https://openeuler.gitee.io/yocto-meta-openeuler/security_hardening/index.html)
+- 基于Preempt-RT的软实时,详见[软实时系统介绍](https://openeuler.gitee.io/yocto-meta-openeuler/features/preempt_rt.html)
+
+## 南向生态
+
+- 新增树莓派4B支持,详见[树莓派4B的支持](https://openeuler.gitee.io/yocto-meta-openeuler/features/raspberrypi.html)
+
+## 构建系统
+
+- 初步的openEuler Embedded构建体系, 详见[快速构建指导](./快速构建指导.html)
+- 容器化构建,详见[容器构建指导](./容器构建指导.html)
\ No newline at end of file
diff --git "a/docs/zh/docs/Embedded/openEuler Embedded\346\236\204\345\273\272\346\214\207\345\257\274.markdown" "b/docs/zh/docs/Embedded/openEuler Embedded\346\236\204\345\273\272\346\214\207\345\257\274.markdown"
new file mode 100644
index 0000000000000000000000000000000000000000..eb0f6a5302d2f502ebf770c6954beea3bb3c3955
--- /dev/null
+++ "b/docs/zh/docs/Embedded/openEuler Embedded\346\236\204\345\273\272\346\214\207\345\257\274.markdown"
@@ -0,0 +1,7 @@
+openEuler Embedded构建指导
+=====================
+
+本文档主要提供了两种方式准备构建环境,并描述了详细的构建流程。用户根据需要选择一种方式即可。
+
+- 初步的openEuler Embedded构建体系, 详见[快速构建指导](./快速构建指导.html)
+- 容器化构建,详见[容器构建指导](./容器构建指导.html)
\ No newline at end of file
diff --git a/docs/zh/docs/Embedded/public_sys-resources/hosttools.png b/docs/zh/docs/Embedded/public_sys-resources/hosttools.png
new file mode 100644
index 0000000000000000000000000000000000000000..83bb076a7763d522dde36168b1f713e524141531
Binary files /dev/null and b/docs/zh/docs/Embedded/public_sys-resources/hosttools.png differ
diff --git "a/docs/zh/docs/Embedded/\344\275\277\347\224\250\346\226\271\346\263\225.md" "b/docs/zh/docs/Embedded/\344\275\277\347\224\250\346\226\271\346\263\225.md"
deleted file mode 100644
index 20c649cd8cfda1bf9eb14710545db39fc94a0f6b..0000000000000000000000000000000000000000
--- "a/docs/zh/docs/Embedded/\344\275\277\347\224\250\346\226\271\346\263\225.md"
+++ /dev/null
@@ -1,45 +0,0 @@
-# 使用方法
-
-当前发布的镜像除了体验openEuler Embedded的基本功能外,还可以进行基本的用户态应用开发,也即在openEuler embedded上运行用户自己的程序。本章主要介绍如何基于镜像完成基本的嵌入式Linux应用开发。
-
-
-1. **环境准备**
-
-由于当前镜像采用了linaro arm/aarch64 gcc 7.3.1工具链构建,因此建议应用开发也使用相同的工具链接进行,可以从如下链接中获取相应工具链:
-- [linaro arm](https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/arm-linux-gnueabi/gcc-linaro-7.3.1-2018.05-x86_64_arm-linux-gnueabi.tar.xz)
-- [linrao arm sysroot](https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/arm-linux-gnueabi/sysroot-glibc-linaro-2.25-2018.05-arm-linux-gnueabi.tar.xz)
-- [linaro aarch64](https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz)
-- [linrao aarch64 sysroot](https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/sysroot-glibc-linaro-2.25-2018.05-aarch64-linux-gnu.tar.xz)
-
-下载并解压到指定的目录中,例如/opt/openEuler_toolchain。
-
-2. **创建并编译用户态程序**
-
-以构建一个hello openEuler程序为例,运行在aarch64的标准根文件系统镜像中。
-
-在宿主机中,创建一个hello.c文件,源码如下:
-```c
-#include
-
-int main(void)
-{
- printf("hello openEuler\r\n");
-}
-```
-
-然后在宿主机上使用对应的工具链编译, 相应命令如下:
-```shell
-export PATH=$PATH:/opt/openEuler_toolchain/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin
-aarch64-linux-gnu-gcc --sysroot= hello.c -o hello
-mv hello /temp
-```
-把交叉编译好的hello程序拷贝到/tmp目录下,然后参照使能共享文件系统中的描述,使得openEuler embedded可以访问宿主机的目录。
-
-3. **运行用户态程序**
-
-在openEuler embedded中运行hello程序。
-```shell
-cd /tmp/host
-./hello
-```
-如运行成功,openEuler Embedded的shell中就会输出hello openEuler。
diff --git "a/docs/zh/docs/Embedded/\345\256\211\350\243\205\344\270\216\350\277\220\350\241\214.md" "b/docs/zh/docs/Embedded/\345\256\211\350\243\205\344\270\216\350\277\220\350\241\214.md"
index f20caf19133d09982ed3266bfcd6e78196e0a862..e7cd05ef7a4c32eced095a4a6a5c3fde63dd682b 100644
--- "a/docs/zh/docs/Embedded/\345\256\211\350\243\205\344\270\216\350\277\220\350\241\214.md"
+++ "b/docs/zh/docs/Embedded/\345\256\211\350\243\205\344\270\216\350\277\220\350\241\214.md"
@@ -11,13 +11,14 @@
- [极简运行场景](#极简运行场景)
- [使能共享文件系统场景](#使能共享文件系统场景)
- [使能网络场景](#使能网络场景)
-
+
## 获取镜像
+
当前发布的已构建好的镜像,只支持arm和aarch64两种架构,且只支持qemu中ARM virt-4.0平台,您可以通过如下链接获得相应的镜像:
-- [qemu_arm](https://repo.openeuler.org/openEuler-21.09/embedded_img/qemu-arm):32位arm架构,ARM Cortex A15处理器
-- [qemu_aarch64](https://repo.openeuler.org/openEuler-21.09/embedded_img/qemu-aarch64):64位aarch64架构,ARM Cortex A57处理器
+- [qemu_arm](https://repo.openeuler.org/openEuler-22.03-LTS/embedded_img/arm32/arm-std/):32位arm架构,ARM Cortex A15处理器
+- [qemu_aarch64](https://repo.openeuler.org/openEuler-22.03-LTS/embedded_img/arm64/aarch64-std/):64位aarch64架构,ARM Cortex A57处理器
只要相应环境支持qemu仿真器(版本5.0以上),您可以将提供的openEuler Embedded镜像部署在物理裸机、云环境、容器或虚拟机上。
@@ -26,129 +27,161 @@
所下载的镜像,由以下几部分组成:
- 内核镜像**zImage**: 基于openEuler社区Linux 5.10代码构建得到。相应的内核配置可通过如下链接获取:
- - [arm(cortex a15)](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-21.09/config/arm/defconfig-kernel)
- - [arm(cortex a57)](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-21.09/config/arm64/defconfig-kernel), 针对aarch64架构,额外增加了镜像自解压功能,可以参见相应的[patch](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-21.09/patches/arm64/0001-arm64-add-zImage-support-for-arm64.patch)
+ - [arm(cortex a15)](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-22.03-LTS/config/arm/defconfig-kernel)
+ - [arm(cortex a57)](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-22.03-LTS/config/arm64/defconfig-kernel),针对aarch64架构,额外增加了镜像自解压功能,可以参见相应的[patch](https://gitee.com/openeuler/yocto-embedded-tools/blob/openEuler-22.03-LTS/patches/arm64/0001-arm64-add-zImage-support-for-arm64.patch)
+
+- 根文件系统镜像
+
+ - **openeuler-image-qemu-xxx.cpio.gz**:标准根文件系统镜像,进行了必要安全加固,增加了audit、cracklib、OpenSSH、Linux PAM、shadow、iSula容器所支持的软件包。
-- 根文件系统镜像(依据具体需求,以下二选一)
- - **initrd_tiny**:极简根文件系统镜像,只包含基本功能。包含 busybox 和基本的 glibc 库。该镜像功能简单,但内存消耗很小,适合探索 Linux内核相关功能。
- - **initrd**:标准根文件系统镜像,在极简根文件系统镜像的基础上,进行了必要安全加固,增加了audit、cracklib、OpenSSH、Linux PAM、shadow、iSula容器等软件包。该镜像适合进行更加丰富的功能探索。
+- SDK(Software Development Kit)工具
+
+ - **openeuler-glibc-x86_64-xxxxx.sh**:openEuler Embedded SDK自解压安装包,SDK包含了进行开发(用户态程序、内核模块等)所必需的工具、库和头文件等。
## 运行镜像
通过运行镜像,一方面您可以体验openEuler Embedded的功能,一方面也可以完成基本的嵌入式Linux开发。
---
+
**注意事项**
-- 建议使用QEMU5.0以上版本运行镜像,由于一些额外功能(网络、共享文件系统)需要依赖QEMU的virtio-net, virtio-fs等特性,如未在QEMU中使能,则运行时可能会产生错误,此时可能需要从源码重新编译QEMU。
+- 建议使用QEMU 5.0以上版本运行镜像,由于一些额外功能(网络、共享文件系统)需要依赖QEMU的virtio-net, virtio-fs等特性,如未在QEMU中使能,则运行时可能会产生错误,此时可能需要从源码重新编译QEMU。
- 运行镜像时,建议把内核镜像和根文件系统镜像放在同一目录下。
-> **说明:**
- >后续说明以标准根文件系统为例(initrd)。
-
---
+QEMU的下载与安装可以参考[QEMU官方网站](https://www.qemu.org/download/#linux) , 或者下载[源码](https://www.qemu.org/download/#source)单独编译安装。安装好后可以运行如下命令确认:
+
+```
+qemu-system-aarch64 --version
+```
### 极简运行场景
-该场景下,qemu未使能网络和共享文件系统,适合快速的功能体验。
+该场景下,QEMU未使能网络和共享文件系统,适合快速的功能体验。
-1. **启动qemu**
+1. **启动QEMU**
针对arm(ARM Cortex A15),运行如下命令:
-```shell
+
+```
qemu-system-arm -M virt-4.0 -cpu cortex-a15 -nographic -kernel zImage -initrd initrd
```
+
针对aarch64(ARM Cortex A57),运行如下命令:
-```shell
+
+```
qemu-system-aarch64 -M virt-4.0 -cpu cortex-a57 -nographic -kernel zImage -initrd initrd
```
> **说明:**
- >- 由于标准根文件系统镜像进行了安全加固,因此第一次启动时,需要为登录用户名root设置密码,且密码强度有相应要求, 需要数字、字母、特殊字符组合最少8位,例如openEuler@2021。
- >- 当使用极简根文件系统镜像时,系统会自动登录, 无需输入用户名和密码。
+>
+>由于标准根文件系统镜像进行了安全加固,因此第一次启动时,需要为登录用户名root设置密码,且密码强度有相应要求, 需要数字、字母、特殊字符组合最少8位,例如openEuler@2021。
2. **检查运行是否成功**
-qemu运行成功并登录后,将会呈现openEuler Embedded的Shell。
+QEMU运行成功并登录后,将会呈现openEuler Embedded的Shell。
### 使能共享文件系统场景
-通过共享文件系统,可以使得运行qemu仿真器的宿主机和openEuler Embedded共享文件,这样在宿主机上交叉编译的程序,拷贝到共享目录中,即可在openEuler Embedded上运行。
+通过共享文件系统,可以使得运行QEMU仿真器的宿主机和openEuler Embedded共享文件,这样在宿主机上交叉编译的程序,拷贝到共享目录中,即可在openEuler Embedded上运行。
假设将宿主机的/tmp目录作为共享目录,并事先在其中创建了名为hello_openeuler.txt的文件,使能共享文件系统功能的操作指导如下:
-1. **启动qemu**
+1. **启动QEMU**
针对arm(ARM Cortex A15),运行如下命令:
-```sh
+
+```
qemu-system-arm -M virt-4.0 -cpu cortex-a15 -nographic -kernel zImage -initrd initrd -device virtio-9p-device,fsdev=fs1,mount_tag=host -fsdev local,security_model=passthrough,id=fs1,path=/tmp
```
+
针对aarch64(ARM Cortex A57),运行如下命令:
-```sh
+
+```
qemu-system-aarch64 -M virt-4.0 -cpu cortex-a57 -nographic -kernel zImage -initrd initrd -device virtio-9p-device,fsdev=fs1,mount_tag=host -fsdev local,security_model=passthrough,id=fs1,path=/tmp
```
2. **映射文件系统**
在openEuler Embedded启动并登录之后,需要运行如下命令,映射(mount)共享文件系统
-```shell
+
+```
cd /tmp
mkdir host
mount -t 9p -o trans=virtio,version=9p2000.L host /tmp/host
```
+
即把共享文件系统映射到openEuler Embedded的/tmp/host目录下。
3. **检查共享是否成功**
在openEuler Embedded中,运行如下命令:
-```shell
+```
cd /tmp/host
ls
```
+
如能发现hello_openeuler.txt,则共享成功。
### 使能网络场景
-通过qemu的virtio-net和宿主机上的虚拟网卡,可以实现宿主机和openEuler embedded之间的网络通信。
-1. **启动qemu**
+通过QEMU的virtio-net和宿主机上的虚拟网卡,可以实现宿主机和openEuler Embedded之间的网络通信。除了通过virtio-fs实现文件共享外,还可以通过网络的方式,例如 **scp** 命令,实现宿主机和 openEuler Embedded传输文件。
+
+1. **启动QEMU**
针对arm(ARM Cortex A15),运行如下命令:
-```shell
+
+```
qemu-system-arm -M virt-4.0 -cpu cortex-a15 -nographic -kernel zImage -initrd initrd -device virtio-net-device,netdev=tap0 -netdev tap,id=tap0,script=/etc/qemu-ifup
```
+
针对aarch64(ARM Cortex A57),运行如下命令:
-```shell
+
+```
qemu-system-aarch64 -M virt-4.0 -cpu cortex-a57 -nographic -kernel zImage -initrd initrd -device virtio-net-device,netdev=tap0 -netdev tap,id=tap0,script=/etc/qemu-ifup
```
+
2. **宿主上建立虚拟网卡**
-在宿主机上需要建立名为tap0的虚拟网卡,可以借助/etc/qemu-ifup脚本实现,其执行需要root权限,具体内容如下:
+在宿主机上需要建立名为tap0的虚拟网卡,可以借助脚本实现,创建qemu-ifup脚本,放在/etc/下,具体内容如下:
-```sh
+```
#!/bin/bash
ifconfig $1 192.168.10.1 up
```
+
+其执行需要root权限:
+
+```
+chmod a+x qemu-ifup
+```
+
通过qemu-ifup脚本,宿主机上将创建名为tap0的虚拟网卡,地址为192.168.10.1。
-3. **配置openEuler embedded网卡**
+3. **配置openEuler Embedded网卡**
-openEuler Embedded登陆后,执行如下命令:
-```shell
+openEuler Embedded登录后,执行如下命令:
+
+```
ifconfig eth0 192.168.10.2
```
4. **确认网络连通**
在openEuler Embedded中,执行如下命令:
-```shell
+
+```
ping 192.168.10.1
```
如能ping通,则宿主机和openEuler Embedded之间的网络是连通的。
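+
+网络连通后,也可以借助 **scp** 在宿主机与openEuler Embedded之间传输文件,例如(假设以root用户登录,目标目录为/tmp,仅作示例):
+
+```
+scp hello root@192.168.10.2:/tmp/
+```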
---
+
> **说明:**
- >如需openEuler embedded借助宿主机访问互联网,则需要在宿主机上建立网桥,此处不详述,如有需要,请自行查阅相关资料。
+>
+>如需openEuler Embedded借助宿主机访问互联网,则需要在宿主机上建立网桥,此处不详述,如有需要,请自行查阅相关资料。
\ No newline at end of file
diff --git "a/docs/zh/docs/Embedded/\345\256\271\345\231\250\346\236\204\345\273\272\346\214\207\345\257\274.markdown" "b/docs/zh/docs/Embedded/\345\256\271\345\231\250\346\236\204\345\273\272\346\214\207\345\257\274.markdown"
new file mode 100644
index 0000000000000000000000000000000000000000..7d08267df88947b0b322bb0b0714e2accf0ab31b
--- /dev/null
+++ "b/docs/zh/docs/Embedded/\345\256\271\345\231\250\346\236\204\345\273\272\346\214\207\345\257\274.markdown"
@@ -0,0 +1,151 @@
+容器构建指导
+==============================
+
+由于openEuler Embedded构建过程需要基于openEuler操作系统,且需要安装较多系统工具和构建工具。为方便开发人员快速搭建构建环境,我们将构建过程所依赖的操作系统和工具封装到一个容器中,这就使得开发人员可以快速搭建一个构建环境,进而投入到代码开发中去,避免在准备环境阶段消耗大量时间。
+
+
+
+- [环境准备](#环境准备)
+ - [安装docker](#安装docker)
+ - [获取容器镜像](#获取容器镜像)
+ - [准备容器构建环境](#准备容器构建环境)
+- [版本构建](#版本构建)
+ - [下载源码](#下载源码)
+ - [编译构建](#编译构建)
+ - [构建结果说明](#构建结果说明)
+
+
+## 环境准备
+
+需要使用docker创建容器环境,为了确保docker成功安装,需满足以下软件硬件要求:
+
+- 操作系统: 推荐使用Ubuntu、Debian和RHEL(Centos、Fedora等)
+- 内核: 推荐3.8及以上的内核
+- 驱动: 内核必须支持一种合适的存储驱动,例如: Device Mapper、AUFS、vfs、btrfs、ZFS
+- 架构: 运行64位架构的计算机(x86\_64和amd64)
+
+### 安装docker
+
+-------------
+
+1. 检查当前环境是否已安装docker工具
+
+运行如下命令,可以看到当前docker版本信息,则说明当前环境已安装docker,无需再次安装。
+
+```
+docker version
+```
+
+2. 如果没有安装,可参考官方链接安装
+
+官网地址: ,openEuler环境可参考Centos安装Docker。
+
+例如openEuler环境docker安装命令如下:
+
+```
+sudo yum install docker
+```
+
+### 获取容器镜像
+
+---------------
+
+通过`docker pull`命令拉取华为云中的镜像到宿主机。命令如下:
+
+```
+docker pull swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container
+```
+
+### 准备容器构建环境
+
+-------------------
+
+#### 1.启动容器
+
+可通过`docker run`命令启动容器,为了保证容器启动后可以在后台运行,且可以正常访问网络,建议使用如下命令启动:
+
+```
+docker run -idt --network host swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container bash
+```
+
+参数说明:
+
+- -i 让容器的标准输入保持打开
+- -d 让 Docker 容器在后台以守护态(Daemonized)形式运行
+- -t 选项让Docker分配一个伪终端(pseudo-tty)并绑定到容器的标准输入上
+- --network 将容器连接到(host)网络
+- swr.cn-north-4.myhuaweicloud.com/openeuler-embedded/openeuler-container 指定镜像名称
+- bash 进入容器的方式
+
+#### 2.查看已启动的容器id
+
+```
+docker ps
+```
+
+#### 3.进入容器
+
+```
+docker exec -it 容器id bash
+```
+
+构建环境已准备完成,下面就可以在容器中进行构建了。
+
+## 版本构建
+
+### 下载源码
+
+1. 获取源码下载脚本
+
+```
+git clone https://gitee.com/openeuler/yocto-meta-openeuler.git -b openEuler-22.03-LTS -v /usr1/openeuler/src/yocto-meta-openeuler
+```
+
+2. 通过脚本下载源码
+
+```
+cd /usr1/openeuler/src/yocto-meta-openeuler/scripts
+sh download_code.sh /usr1/openeuler/src
+```
+
+### 编译构建
+
+- 编译架构: aarch64-std、aarch64-pro、arm-std、raspberrypi4-64
+- 构建目录: /usr1/build
+- 源码目录: /usr1/openeuler/src
+- 编译器所在路径: /usr1/openeuler/gcc/openeuler\_gcc\_arm64le
+
+> **说明:**
+>- 不同的编译架构使用不同的编译器,aarch64-std、aarch64-pro、raspberrypi4-64使用openeuler\_gcc\_arm64le编译器,arm-std使用openeuler\_gcc\_arm32le编译器。
+>- 下面以aarch64-std目标架构编译为例。
+
+1. 将/usr1目录所属群组改为openeuler,否则切换至openeuler用户构建会存在权限问题。
+
+```
+chown -R openeuler:users /usr1
+```
+
+2. 切换至openeuler用户。
+
+```
+su openeuler
+```
+
+3. 进入构建脚本所在路径,运行编译脚本。
+
+```
+cd /usr1/openeuler/src/yocto-meta-openeuler/scripts
+source compile.sh aarch64-std /usr1/build /usr1/openeuler/gcc/openeuler_gcc_arm64le
+bitbake openeuler-image
+```
+
+### 构建结果说明
+
+结果件默认生成在构建目录下的output目录下,例如上面aarch64-std的构建结果件生成在/usr1/build/output目录下,如下表:
+
+| filename | description |
+| ---------------------------------------------------------- | ----------------------------------- |
+| Image-\* | openEuler Embedded image |
+| openeuler-glibc-x86\_64-openeuler-image-\*-toolchain-\*.sh | openEuler Embedded sdk toolchain |
+| openeuler-image-qemu-aarch64-\*.rootfs.cpio.gz | openEuler Embedded file system |
+| zImage | openEuler Embedded compressed image |
\ No newline at end of file
diff --git "a/docs/zh/docs/Embedded/\345\277\253\351\200\237\346\236\204\345\273\272\346\214\207\345\257\274.markdown" "b/docs/zh/docs/Embedded/\345\277\253\351\200\237\346\236\204\345\273\272\346\214\207\345\257\274.markdown"
new file mode 100644
index 0000000000000000000000000000000000000000..d5d41a8bce0218b1398c7ab25e137d86511352e9
--- /dev/null
+++ "b/docs/zh/docs/Embedded/\345\277\253\351\200\237\346\236\204\345\273\272\346\214\207\345\257\274.markdown"
@@ -0,0 +1,145 @@
+快速构建指导
+=====================
+
+本章主要介绍如何构建openEuler Embedded。
+
+
+
+- [环境准备](#环境准备)
+ - [Yocto中主机端命令使用](#Yocto中主机端命令使用)
+ - [openEuler Embedded所需构建工具](#openEuler Embedded所需构建工具)
+ - [已安装好工具的构建容器](#已安装好工具的构建容器)
+- [版本构建](#版本构建)
+ - [构建代码下载](#构建代码下载)
+ - [编译构建](#编译构建)
+ - [构建结果说明](#构建结果说明)
+
+
+环境准备
+--------------
+
+### Yocto中主机端命令使用
+
+Yocto或者说Bitbake本质上是一组python程序,其最小运行环境要求如下:
+
+- Python3 \> 3.6.0
+- Git \> 1.8.3.1
+- Tar \> 1.28
+
+在构建过程中所需要的其他工具,Yocto都可以根据相应的软件包配方自行构建出来,从而达到自包含的效果。在这个过程中,Yocto还会依据自身需要,对相应的工具打上yocto专属补丁(如dnf,rpm等)。这些主机工具会在第一次的构建中从源码开始构建,因此Yocto第一次构建比较费时。
+
+为了加速构建特别是第一次构建,openEuler Embedded采取了"能用原生工具就用原生工具,能不构建就不构建"的策略,尽可能使用主机上预编译的原生的工具。这就需要依赖主机上软件包管理工具(apt, dnf, yum, zypper等)实现安装好。
+
+Yocto是通过HOSTTOOLS变量来实现主机工具的引入,会为每个在HOSTTOOLS中列出的工具建立相应的软链接。为了避免来自主机对构建环境的污染,Yocto会重新准备不同于主机的环境,例如PATH变量等,因此如果新增依赖主机上的某个命令,需显式在Yocto的HOSTTOOLS变量中增加,否则即使主机上存在,Yocto构建时也会报错找不到相应的工具。相应流程如下图所示:
+
+
+
+当前openEuler Embedded所需要主机工具已经默认在local.conf.sample中的HOSTTOOLS定义,主要工具描述如下:
+
+| 工具名 | 用途 |
+| ------ | ------------- |
+| cmake | cmake构建工具 |
+| ninja  | ninja构建系统 |
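+
+例如,若构建过程还需依赖主机上的其他命令(此处以 rsync 为假设的新增工具,仅作示例),可在 local.conf 中按如下方式显式追加:
+
+```
+HOSTTOOLS += "rsync"
+```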
+
+### openEuler Embedded所需构建工具
+
+- 构建OS
+
+ [操作系统:openEuler-20.03-LTS-SP2](https://repo.openeuler.org/openEuler-20.03-LTS-SP2/docker_img/x86_64/openEuler-docker.x86_64.tar.xz)
+
+- 安装系统额外工具
+
+```
+yum -y install tar cmake gperf sqlite-devel chrpath gcc-c++ patch rpm-build flex autoconf automake m4 bison bc libtool gettext-devel createrepo\_c rpcgen texinfo hostname python meson dosfstools mtools parted ninja-build autoconf-archive libmpc-devel gmp-devel
+```
+
+- 预编译的交叉工具链和库
+
+ Yocto可以构建出交叉编译所需的交叉工具链和C库,但整个流程复杂且耗时,不亚于内核乃至镜像的构建,而且除了第一次构建,后面很少会再涉及。同时,绝大部分开发者都不会直接与工具链和C库构建打交道。所以为了简化该流程,openEuler Embedded采取的策略是采用预编译的交叉工具链和库,会专门维护和发布相应的带有C库的工具链。
+
+ 目前我们提供了对arm32位和aarch64位两种架构的工具链支持,通过如下方式可以获得:
+
+ - 下载rpm包: `wget https://repo.openeuler.org/openEuler-22.03-LTS/EPOL/main/x86_64/Packages/gcc-cross-1.0-0.oe2203.x86_64.rpm`
+ - 解压rpm包: `rpm2cpio gcc-cross-1.0-0.oe2203.x86_64.rpm | cpio -id`
+ - 解压后可以看到当前路径下会有tmp目录,编译链存放于该目录下
+ - ARM 32位工具链: openeuler_gcc_arm32le.tar.xz
+ - ARM 64位工具链: openeuler_gcc_arm64le.tar.xz
+
+### 已安装好工具的构建容器
+
+openEuler Embedded的构建过程中会使用到大量的各式各样的主机工具。如前文所述,为了加速构建,openEuler Embedded依赖主机事先安装好相应的工具,但这也会带来一不同主机环境会有不同的工具版本的问题,例如构建需要cmake高于1.9版本,但主机上最高只有cmake 1.8。为了解决这一问题,openEuler Embedded提供了专门的构建容器,提供统一的构建环境。
+
+使用者可以通过如下链接获得容器镜像直接用于编译:
+
+ [openEuler Embedded构建容器的基础镜像](https://repo.openeuler.org/openEuler-21.03/docker_img/x86_64/openEuler-docker.x86_64.tar.xz)
+
+具体构建指导请参考[容器构建指导](./容器构建指导.html)。
+
+## 版本构建
+
+### 构建代码下载
+
+openEuler Embedded整个构建工程的文件布局如下,假设openeuler\_embedded为顶层目录:
+
+><顶层目录openeuler_embedded>
+>├── src 源代码目录,包含所有软件包代码、内核代码和Yocto构建代码
+>├── build openEuler Embedded的构建目录,生成的各种镜像放在此目录下
+
+1. 获取源码下载脚本
+
+ 将脚本下载到指定目录,例如下载到src/yocto-meta-openeuler目录下:
+
+ ```
+ git clone https://gitee.com/openeuler/yocto-meta-openeuler.git -b openEuler-22.03-LTS -v src/yocto-meta-openeuler
+ ```
+
+ 脚本为src/yocto-meta-openeuler/scripts/download\_code.sh,此脚本有3个参数:
+
+ - 参数1:下载的源码路径,默认相对脚本位置下载,例如前面样例,代码仓会下到src/下
+ - 参数2:下载的分支,默认值见脚本,不同分支按版本确定
+ - 参数3:下代码的xml文件,标准manifest格式,按xml配置下代码
+
+2. 通过脚本下载源码
+
+ - 下载最新代码:
+
+ ```
+ sh src/yocto-meta-openeuler/scripts/download_code.sh
+ ```
+
+ - 下载指定版本代码:
+
+ ```
+ sh src/yocto-meta-openeuler/scripts/download_code.sh "" "" "manifest.xml"
+ ```
+
+ 指定openEuler Embedded版本的代码的manifest.xml文件从openEuler Embedded发布件目录embedded\_img/source-list/下获取。
+
+### 编译构建
+
+一键式构建脚本:src/yocto-meta-openeuler/scripts/compile.sh, 具体细节可以参考该脚本。
+
+编译脚本的主要流程说明:
+
+1. 设置PATH增加额外工具路径
+2. TEMPLATECONF指定local.conf.sample等配置文件路径
+3. 调用poky仓的oe-init-build-env进行初始化配置
+4. 在编译目录的conf/local.conf中配置MACHINE,按需增加额外新增的层
+5. 在编译目录执行bitbake openeuler-image编译openEuler Embedded的image和sdk
+6. 执行完发布件在编译目录的output目录下
+
+运行编译脚本,以编译标准arm架构为例,编译方法如下:
+
+ source src/yocto-meta-openeuler/scripts/compile.sh arm-std
+ bitbake openeuler-image #执行第一条source后,会提示出bitbake命令
+
+### 构建结果说明
+
+结果件默认生成在构建目录下的output目录下,例如上面arm的构建结果件生成在/usr1/build/output目录下,如下表:
+
+| filename | description |
+| ---------------------------------------------------------- | ----------------------------------- |
+| Image-\* | openEuler Embedded image |
+| openeuler-glibc-x86\_64-openeuler-image-\*-toolchain-\*.sh | openEuler Embedded sdk toolchain |
+| openeuler-image-qemu-aarch64-\*.rootfs.cpio.gz | openEuler Embedded file system |
+| zImage | openEuler Embedded compressed image |
\ No newline at end of file
diff --git a/docs/zh/docs/gazelle/gazelle.md b/docs/zh/docs/Gazelle/Gazelle.md
similarity index 42%
rename from docs/zh/docs/gazelle/gazelle.md
rename to docs/zh/docs/Gazelle/Gazelle.md
index f2a933c70afd734443ab359e63f7c2805b133638..61f298c062ab38d308e39817799d15aeb6ab1d46 100644
--- a/docs/zh/docs/gazelle/gazelle.md
+++ b/docs/zh/docs/Gazelle/Gazelle.md
@@ -1,90 +1,75 @@
-
+# 用户态协议栈Gazelle用户指南
-# gazelle
+## 简介
-## Introduction
-gazelle是高性能的用户态协议栈,通过dpdk在用户态直接读写网卡报文,共享大页内存传递报文,并使用轻量级lwip协议栈。能够大幅提高应用的网络IO吞吐能力.
+Gazelle是一款高性能用户态协议栈。它基于DPDK在用户态直接读写网卡报文,共享大页内存传递报文,使用轻量级LwIP协议栈。能够大幅提高应用的网络I/O吞吐能力。专注于数据库网络性能加速,如MySQL、redis等。
+- 高性能
+报文零拷贝,无锁,灵活scale-out,自适应调度。
+- 通用性
+完全兼容POSIX,零修改,适用不同类型的应用。
-## Compile
-- 编译依赖软件包
-cmake gcc-c++ lwip dpdk-devel(>=21.11-2)
-numactl-devel libpcap-devel libconfig-devel libboundscheck rpm-build
-- 编译
-``` sh
-#创建目录
-mkdir -p ~/rpmbuild/SPECS
-mkdir -p ~/rpmbuild/SOURCES
-
-#创建压缩包
-mkdir gazelle-1.0.0
-mv build gazelle-1.0.0
-mv src gazelle-1.0.0
-tar zcvf gazelle-1.0.0.tar.gz gazelle-1.0.0/
-
-#编包
-mv gazelle-1.0.0.tar.gz ~/rpmbuild/SPECS
-cp gazelle.spec ~/rpmbuild/SPECS
-cd ~/rpmbuild/SPECS
-rpmbuild -bb gazelle.spec
-
-#编出的包
-ls ~/rpmbuild/RPMS
-```
+单进程且网卡支持多队列时,只需使用liblstack.so,报文路径更短。其余场景使用ltran进程分发报文到各个线程。
-## Install
-``` sh
+## 安装
+配置openEuler的yum源,直接使用yum命令安装
+```sh
#dpdk >= 21.11-2
yum install dpdk
yum install libconfig
-yum install numacttl
+yum install numactl
yum install libboundscheck
yum install libpcap
yum install gazelle
-
```
-## Use
-### 1. 安装ko模块
+## 使用方法
+配置运行环境,使用Gazelle加速应用程序步骤如下:
+### 1. 使用root权限安装ko
+根据实际情况选择使用ko,提供虚拟网口、绑定网卡到用户态功能。
+若使用虚拟网口功能,则使用rte_kni.ko
``` sh
-modprobe uio
-insmod /usr/lib/modules/5.10.0-54.0.0.27.oe1.x86_64/extra/dpdk/igb_uio.ko
-insmod /usr/lib/modules/5.10.0-54.0.0.27.oe1.x86_64/extra/dpdk/rte_kni.ko carrier="on"
+modprobe rte_kni carrier="on"
+```
+网卡从内核驱动绑为用户态驱动的ko,根据实际情况选择一种
+``` sh
+#若IOMMU能使用
+modprobe vfio-pci
+
+#若IOMMU不能使用,且VFIO支持noiommu
+modprobe vfio enable_unsafe_noiommu_mode=1
+modprobe vfio-pci
+
+#其它情况
+modprobe igb_uio
```
+
### 2. dpdk绑定网卡
-- 对于虚拟网卡或一般物理网卡,绑定到驱动igb_uio
+将网卡绑定到步骤1选择的驱动。为用户态网卡驱动提供网卡资源访问接口。
``` sh
+#使用vfio-pci
+dpdk-devbind -b vfio-pci enp3s0
+
+#使用igb_uio
dpdk-devbind -b igb_uio enp3s0
```
-- 1822网卡绑定到驱动vfio-pci(由kernel提供)
+
+### 3. 大页内存配置
+Gazelle使用大页内存提高效率。使用root权限配置系统预留大页内存,可选用任意页大小。因每页内存都需要一个fd,使用内存较大时,建议使用1G的大页,避免占用过多fd。
+根据实际情况,选择一种页大小,配置足够的大页内存即可。配置大页操作如下:
``` sh
-modprobe vfio-pci
-dpdk-devbind -b vfio-pci enp3s0
-```
+#配置2M大页内存:在node0上配置 2M * 1024 = 2G
+echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
-### 3. 大页内存配置
-dpdk提供了高效的大页内存管理和共享机制,gazelle的报文数据、无锁队列等都使用了大页内存。大页内存需要root用户配置。2M或1G大页按实际需要配置,推荐使用2M大页内存,该内存是本机上ltran和所有lstack可以使用的总内存,具体方法如下:
-- 2M大页配置
- - 配置系统大页数量
- ``` sh
- #示例:在node0上配置2M * 2000 = 4000M
- echo 2000 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
- echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
- echo 0 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
- echo 0 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
- # 查看配置结果
- grep Huge /proc/meminfo
- ```
-- 1G大页配置
-1G大页配置方法与2M类似
- - 配置系统大页数量
- ``` sh
- #示例:在node0上配置1G * 5 = 5G
- echo 5 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
- ```
+#配置1G大页内存:在node0上配置1G * 5 = 5G
+echo 5 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
+
+#查看配置结果
+grep Huge /proc/meminfo
+```
### 4. 挂载大页内存
-创建两个目录,分别给lstack的进程、ltran进程使用。操作步骤如下:
+创建两个目录,分别给lstack的进程、ltran进程访问大页内存使用。操作步骤如下:
``` sh
mkdir -p /mnt/hugepages
mkdir -p /mnt/hugepages-2M
@@ -94,34 +79,35 @@ mount -t hugetlbfs nodev /mnt/hugepages
mount -t hugetlbfs nodev /mnt/hugepages-2M
```
-### 5. 应用程序从内核协议栈切换至用户态协议栈
-+ 一种方式:重新编译程序
-修改应用的makefile文件,使其链接liblstack.so。示例如下:
-``` makefile
-#在makefile中添加
-ifdef USE_GAZELLE
- -include /etc/gazelle/lstack.Makefile
-endif
-gcc test.c -o test $(LSTACK_LIBS)
+### 5. 应用程序使用Gazelle
+有两种使能Gazelle方法,根据需要选择其一
+- 重新编译应用程序,链接Gazelle的库
+修改应用makefile文件链接liblstack.so,示例如下:
```
+#makefile中添加Gazelle的Makefile
+-include /etc/gazelle/lstack.Makefile
-+ 另一个方式:使用LD_PRELOAD
+#编译添加LSTACK_LIBS变量
+gcc test.c -o test ${LSTACK_LIBS}
```
-GAZELLE_BIND_PROCNAME=test(具体进程名) LD_PRELOAD=/usr/lib64/liblstack.so ./test
+
+- 使用LD_PRELOAD加载Gazelle的库
+GAZELLE_BIND_PROCNAME环境变量指定进程名,LD_PRELOAD指定Gazelle库路径
+```
+GAZELLE_BIND_PROCNAME=test LD_PRELOAD=/usr/lib64/liblstack.so ./test
```
### 6. 配置文件
-- lstack.conf用于指定lstack的启动参数,Gazelle发布件会包括ltran.conf供用户参考,路径为/etc/gazelle/lstack.conf, 配置文件参数如下
+- lstack.conf用于指定lstack的启动参数,默认路径为/etc/gazelle/lstack.conf, 配置文件参数如下
|选项|参数格式|说明|
|:---|:---|:---|
|dpdk_args|--socket-mem(必需) --huge-dir(必需) --proc-type(必需) --legacy-mem --map-perfect 等|dpdk初始化参数,参考dpdk说明|
|use_ltran| 0/1 | 是否使用ltran |
-|num_cpus|"0,2,4 ..."|lstack线程绑定的cpu编号,编号的数量为lstack线程个数(小于等于网卡多队列数量),仅在use_ltran=0时生效,如果机器不支持网卡多队列,lstack线程数量应该为1|
-|num_weakup|"1,3,5 ..."|weakup线程绑定的cpu编号,编号的数量为weakup线程个数,与lstack线程的数量保持一致|
-|numa_bind|0/1|是否支持将用户线程绑定到与某lstack线程相同numa内|
+|num_cpus|"0,2,4 ..."|lstack线程绑定的cpu编号,编号的数量为lstack线程个数(小于等于网卡多队列数量)。可按NUMA选择cpu|
+|num_wakeup|"1,3,5 ..."|wakeup线程绑定的cpu编号,编号的数量为wakeup线程个数,与lstack线程的数量保持一致。与numcpus选择对应NUMA的cpu。不配置则为不使用唤醒线程|
|low_power_mode|0/1|是否开启低功耗模式,暂不支持|
-|kni_swith|0/1|rte_kni开关,默认为0|
+|kni_swith|0/1|rte_kni开关,默认为0。只有不使用ltran时才能开启|
|host_addr|"192.168.xx.xx"|协议栈的IP地址,必须和redis-server配置 文件里的“bind”字段保存一致。|
|mask_addr|"255.255.xx.xx"|掩码地址|
|gateway_addr|"192.168.xx.1"|网关地址|
@@ -137,10 +123,8 @@ kni_switch=0
low_power_mode=0
-num_cpus="2"
-num_weakup="3"
-
-numa_bind=1
+num_cpus="2,22"
+num_wakeup="3,23"
host_addr="192.168.1.10"
mask_addr="255.255.255.0"
@@ -148,7 +132,7 @@ gateway_addr="192.168.1.1"
devices="aa:bb:cc:dd:ee:ff"
```
-- ltran.conf用于指定ltran启动的参数,Gazelle发布件会包括ltran.conf供用户参考,路径为/etc/gazelle/ltran.conf,仅在lstack.conf内配置use_ltran=1时生效,配置文件格式如下
+- ltran.conf用于指定ltran启动的参数,默认路径为/etc/gazelle/ltran.conf。使用ltran时,lstack.conf内配置use_ltran=1,配置参数如下:
|功能分类|选项|参数格式|说明|
|:---|:---|:---|:---|
@@ -182,24 +166,26 @@ bond_macs="aa:bb:cc:dd:ee:ff"
bond_ports="0x1"
tcp_conn_scan_interval=10
-```
-### 7. 启动
-- 不使用ltran模式(use_ltran=0)时,不需要启动ltran
-- 启动ltran,如果不指定--config-file,则使用默认路径/etc/gazelle/ltran.conf
+```
+### 7. 启动应用程序
+- 启动ltran进程
+单进程且网卡支持多队列,则直接使用网卡多队列分发报文到各线程,不启动ltran进程,lstack.conf的use_ltran配置为0.
+启动ltran时不使用-config-file指定配置文件,则使用默认路径/etc/gazelle/ltran.conf
``` sh
ltran --config-file ./ltran.conf
```
-- 启动redis,如果不指定环境变量LSTACK_CONF_PATH,则使用默认路径/etc/gazelle/lstack.conf
+- 启动应用程序
+启动应用程序前不使用环境变量LSTACK_CONF_PATH指定配置文件,则使用默认路径/etc/gazelle/lstack.conf
``` sh
export LSTACK_CONF_PATH=./lstack.conf
-redis-server redis.conf
+LD_PRELOAD=/usr/lib64/liblstack.so GAZELLE_BIND_PROCNAME=redis-server redis-server redis.conf
```
### 8. API
-liblstack.so编译进应用程序后wrap网络编程标准接口,应用程序无需修改代码。
+Gazelle wrap应用程序POSIX接口,应用程序无需修改代码。
-### 9. gazellectl
-- 不使用ltran模式时不支持gazellectl ltran xxx 命令
+### 9. 调测命令
+- 不使用ltran模式时不支持gazellectl ltran xxx命令,以及lstack -r命令
```
Usage: gazellectl [-h | help]
or: gazellectl ltran {quit | show} [LTRAN_OPTIONS] [time]
@@ -228,54 +214,50 @@ Usage: gazellectl [-h | help]
#### 1. dpdk配置文件的位置
如果是root用户,dpdk启动后的配置文件将会放到/var/run/dpdk目录下;
如果是非root用户,dpdk配置文件的路径将由环境变量XDG_RUNTIME_DIR决定;
-+ 如果XDG_RUNTIME_DIR为空,dpdk配置文件放到/tmp/dpdk目录下;
-+ 如果XDG_RUNTIME_DIR不为空,dpdk配置文件放到变量XDG_RUNTIME_DIR下;
-+ 注意有些机器会默认设置XDG_RUNTIME_DIR
+- 如果XDG_RUNTIME_DIR为空,dpdk配置文件放到/tmp/dpdk目录下;
+- 如果XDG_RUNTIME_DIR不为空,dpdk配置文件放到变量XDG_RUNTIME_DIR下;
+- 注意有些机器会默认设置XDG_RUNTIME_DIR
+
+## 约束限制
-## Constraints
-- 提供的命令行、配置文件以及配置大页内存需要root权限执行或修改。非root用户使用,需先提权以及修改文件权限。
-- 若要把用户态网卡绑回内核驱动,必须先将Gazelle退出。
+使用 Gazelle 存在一些约束限制:
+#### 功能约束
- 不支持accept阻塞模式或者connect阻塞模式。
-- 最多只支持20000个链接(需要保证进程内,非网络连接的fd个数小于2000个)。
-- 协议栈当前只支持tcp、icmp、arp、ipv4。
-- 大页内存不支持在挂载点里创建子目录重新挂载。
-- 在对端ping时,要求指定报文长度小于等于14000。
+- 最多支持1500个TCP连接。
+- 当前仅支持TCP、ICMP、ARP、IPv4 协议。
+- 在对端ping Gazelle时,要求指定报文长度小于等于14000B。
- 不支持使用透明大页。
-- 需要保证ltran的可用大页内存 >=1G
-- 需要保证应用实例协议栈线程的可用大页内存 >=800M
-- 不支持32位系统使用。
- ltran不支持使用多种类型的网卡混合组bond。
- ltran的bond1主备模式,只支持链路层故障主备切换(例如网线断开),不支持物理层故障主备切换(例如网卡下电、拔网卡)。
-- 构建X86版本使用-march=native选项,基于构建环境的CPU(Intel® Xeon® Gold 5118 CPU @ 2.30GHz)指令集进行优化。要求运行环境CPU支持SSE4.2、AVX、AVX2、AVX-512指令集。
-- 最大IP分片数为10(ping最大包长14790),TCP协议不使用IP分片。
-- sysctl配置网卡rp_filter参数为1,否则可能使用内核协议栈
-- 虚拟机网卡不支持多队列。
-- 不使用ltran模式,kni网口只支持本地通讯使用,且需要启动前配置NetworkManager不管理kni网卡
-- 虚拟kni网口的ip及mac地址,需要与lstack配置文件保持一致
-
-## Security risk note
-gazelle有如下安全风险,用户需要评估使用场景风险
-1. 共享内存
+- 虚拟机网卡不支持多队列。
+#### 操作约束
+- 提供的命令行、配置文件默认root权限。非root用户使用,需先提权以及修改文件所有者。
+- 将用户态网卡绑回到内核驱动,必须先退出Gazelle。
+- 大页内存不支持在挂载点里创建子目录重新挂载。
+- ltran需要最低大页内存为1GB。
+- 每个应用实例协议栈线程最低大页内存为800MB 。
+- 仅支持64位系统。
+- 构建x86版本的Gazelle使用了-march=native选项,基于构建环境的CPU(Intel® Xeon® Gold 5118 CPU @ 2.30GHz)指令集进行优化。要求运行环境CPU支持 SSE4.2、AVX、AVX2、AVX-512 指令集。
+- 最大IP分片数为10(ping 最大包长14790B),TCP协议不使用IP分片。
+- sysctl配置网卡rp_filter参数为1,否则可能不按预期使用Gazelle协议栈,而是依然使用内核协议栈。
+- 不使用ltran模式,KNI网口不可配置只支持本地通讯使用,且需要启动前配置NetworkManager不管理KNI网卡。
+- 虚拟KNI网口的IP及mac地址,需要与lstack.conf配置文件保持一致 。
+
+## 风险提示
+Gazelle可能存在如下安全风险,用户需要根据使用场景评估风险。
+
+**共享内存**
- 现状
-大页内存mount至/mnt/hugepages-2M目录,链接liblstack.so的进程初始化时在/mnt/hugepages-2M目录下创建文件,每个文件对应2M大页内存,并mmap这些文件。ltran在收到lstask的注册信息后,根据大页内存配置信息也mmap目录下文件,实现大页内存共享。
-ltran在/mnt/hugepages目录的大页内存同理。
-- 当前消减措施
-大页文件权限600,只有OWNER用户才能访问文件,默认root用户,支持配置成其它用户;
-大页文件有dpdk文件锁,不能直接写或者mmap。
-- 风险点
-属于同一用户的恶意进程模仿DPDK实现逻辑,通过大页文件共享大页内存,写破坏大页内存,导致gazelle程序crash。建议用户下的进程属于同一信任域。
-2. 流量限制
-- 风险点
-gazelle没有做流量限制,用户有能力发送最大网卡线速流量的报文到网络。
-3. 进程仿冒
-- 风险点
-合法注册到ltran的两个lstack进程,进程A可仿冒进程B发送仿冒消息给ltran,修改ltran的转发控制信息,造成进程B通讯异常,进程B报文转发给进程A等问题。建议lstack进程都为可信任进程。
-
-## How to Contribute
-We are happy to provide guidance for the new contributors.
-Please sign the CLA before contributing.
-
-## Licensing
-gazelle is licensed under the Mulan PSL v2.
-
-
+ 大页内存 mount 至 /mnt/hugepages-2M 目录,链接 liblstack.so 的进程初始化时在 /mnt/hugepages-2M 目录下创建文件,每个文件对应 2M 大页内存,并 mmap 这些文件。ltran 在收到 lstask 的注册信息后,根据大页内存配置信息也 mmap 目录下文件,实现大页内存共享。
+ ltran 在 /mnt/hugepages 目录的大页内存同理。
+- 当前消减措施
+ 大页文件权限 600,只有 OWNER 用户才能访问文件,默认 root 用户,支持配置成其它用户;
+ 大页文件有 DPDK 文件锁,不能直接写或者映射。
+- 风险点
+ 属于同一用户的恶意进程模仿DPDK实现逻辑,通过大页文件共享大页内存,写破坏大页内存,导致Gazelle程序crash。建议用户下的进程属于同一信任域。
+
+**流量限制**
+Gazelle没有做流量限制,用户有能力发送最大网卡线速流量的报文到网络,可能导致网络流量拥塞。
+
+**进程仿冒**
+合法注册到ltran的两个lstack进程,进程A可仿冒进程B发送仿冒消息给ltran,修改ltran的转发控制信息,造成进程B通讯异常,进程B报文转发给进程A信息泄露等问题。建议lstack进程都为可信任进程。
diff --git "a/docs/zh/docs/KubeEdge/KubeEdge\351\203\250\347\275\262\346\214\207\345\215\227.md" "b/docs/zh/docs/KubeEdge/KubeEdge\351\203\250\347\275\262\346\214\207\345\215\227.md"
index ec91c0ddd4988d4dc162b9a9761c310d30116601..12babcdc63ffa6f877843392451b0ce427a7b97e 100644
--- "a/docs/zh/docs/KubeEdge/KubeEdge\351\203\250\347\275\262\346\214\207\345\215\227.md"
+++ "b/docs/zh/docs/KubeEdge/KubeEdge\351\203\250\347\275\262\346\214\207\345\215\227.md"
@@ -20,9 +20,9 @@ iSulad 是一个轻量级容器 runtime 守护程序,专为 IOT 和 Cloud 基
| 组件 | 版本 |
| ---------- | --------------------------------- |
-| OS | openEuler 21.09 |
+| OS | openEuler 22.03 |
| Kubernetes | 1.20.2-4 |
-| iSulad | 2.0.9-20210625.165022.git5a088d9c |
+| iSulad | 2.0.11 |
| KubeEdge | v1.8.0 |
### 节点规划(示例)
@@ -64,7 +64,7 @@ $ ./setup-cloud.sh
> 提示:在云侧节点可以访问外网的条件下建议优先选用 `kubeadm` 工具部署 k8s,示例:
```bash
-$ kubeadm init --apiserver-advertise-address=云侧IP --kubernetes-version v1.20.11 --pod-network-cidr=10.244.0.0/16 --upload-certs --cri-socket=/var/run/isulad.sock
+$ kubeadm init --apiserver-advertise-address=云侧IP --kubernetes-version v1.20.15 --pod-network-cidr=10.244.0.0/16 --upload-certs --cri-socket=/var/run/isulad.sock
...
Your Kubernetes control-plane has initialized successfully!
...
diff --git a/docs/zh/docs/NestOS/overview.md b/docs/zh/docs/NestOS/overview.md
new file mode 100644
index 0000000000000000000000000000000000000000..808961df722a8cf8acff12bf86210b2daaab2b60
--- /dev/null
+++ b/docs/zh/docs/NestOS/overview.md
@@ -0,0 +1,3 @@
+# NestOS用户指南
+
+本文介绍云底座操作系统NestOS的安装部署与各个特性说明和使用方法,使用户能够快速了解并使用NestOS。NestOS搭载iSulad、docker、podman等主流容器基础平台,克服了由于用户修改系统内容、用户服务对系统组件依赖,以及系统软件包升级时不稳定中间态等种种导致升级过程不可靠的因素,最终以一种轻量级、定制化的操作系统呈现出来,并且具备十分便捷的集群组建能力。镜像下载地址详见[NestOS仓库](https://gitee.com/openeuler/NestOS)。
\ No newline at end of file
diff --git "a/docs/zh/docs/NestOS/\344\275\277\347\224\250\346\226\271\346\263\225.md" "b/docs/zh/docs/NestOS/\344\275\277\347\224\250\346\226\271\346\263\225.md"
new file mode 100644
index 0000000000000000000000000000000000000000..24148750aca8fbee383faa0774d12dbea6054ee5
--- /dev/null
+++ "b/docs/zh/docs/NestOS/\344\275\277\347\224\250\346\226\271\346\263\225.md"
@@ -0,0 +1,493 @@
+# K8S+iSulad 搭建
+
+**以下步骤在master节点和node节点均需执行**,本教程以master为例
+
+## 开始之前
+
+需准备如下内容:
+
+1. NestOS-22.03-date.x86_64.iso
+2. 一台主机用作master,一台主机用作node
+
+## 组件下载
+
+编辑源文件,添加k8s的阿里云源
+
+```
+vi /etc/yum.repos.d/openEuler.repo
+```
+
+添加如下内容
+
+```
+[kubernetes]
+name=Kubernetes
+baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
+enabled=1
+gpgcheck=1
+repo_gpgcheck=1
+gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
+```
+
+下载k8s组件以及同步系统时间所用组件
+
+```
+rpm-ostree install kubelet kubeadm kubectl ntp ntpdate wget
+```
+
+重启生效
+
+```
+systemctl reboot
+```
+
+选择最新的版本分支进入系统
+
+## 配置环境
+
+### 修改主机名,以master为例
+
+```
+hostnamectl set-hostname k8s-master
+sudo -i
+```
+
+编辑/etc/hosts
+
+```
+vi /etc/hosts
+```
+
+添加如下内容,ip为主机ip
+
+```
+192.168.237.133 k8s-master
+192.168.237.135 k8s-node01
+```
+
+### 同步系统时间
+
+```
+ntpdate time.windows.com
+systemctl enable ntpd
+```
+
+### 关闭swap分区、防火墙、selinux
+
+NestOS默认无swap分区,默认关闭防火墙
+关闭selinux如下
+
+```
+vi /etc/sysconfig/selinux
+修改为SELINUX=disabled
+```
+
+### 网络配置,开启相应的转发机制
+
+创建配置文件
+
+```
+vi /etc/sysctl.d/k8s.conf
+```
+
+添加如下内容
+
+```
+net.bridge.bridge-nf-call-iptables=1
+net.bridge.bridge-nf-call-ip6tables=1
+net.ipv4.ip_forward=1
+```
+
+使配置生效
+
+```
+modprobe br_netfilter
+sysctl -p /etc/sysctl.d/k8s.conf
+```
+
+## 配置iSula
+
+查看k8s需要的系统镜像,需注意pause的版本号
+
+```
+kubeadm config images list
+```
+
+修改daemon配置文件
+
+```
+vi /etc/isulad/daemon.json
+```
+
+```
+##关于添加项的解释说明##
+registry-mirrors 设置为"docker.io"
+insecure-registries 设置为"rnd-dockerhub.huawei.com"
+pod-sandbox-image 设置为"registry.aliyuncs.com/google_containers/pause:3.5"(使用阿里云,pause版本可在上一步查看)
+network-plugin 设置为"cni"。
+cni-bin-dir 设置为"/opt/cni/bin";
+cni-conf-dir 设置为"/etc/cni/net.d"
+```
+
+修改后的完整文件如下
+
+```
+{"group": "isula",
+"default-runtime": "lcr",
+"graph": "/var/lib/isulad",
+"state": "/var/run/isulad",
+"engine": "lcr",
+"log-level": "ERROR",
+"pidfile": "/var/run/isulad.pid",
+"log-opts": {
+"log-file-mode": "0600",
+"log-path": "/var/lib/isulad",
+"max-file": "1",
+"max-size": "30KB"
+},
+"log-driver": "stdout",
+"container-log": {
+"driver": "json-file"
+},
+"hook-spec": "/etc/default/isulad/hooks/default.json",
+"start-timeout": "2m",
+"storage-driver": "overlay2",
+"storage-opts": [
+"overlay2.override_kernel_check=true"
+],
+"registry-mirrors": [
+"docker.io"
+],
+"insecure-registries": [
+"rnd-dockerhub.huawei.com"
+],
+"pod-sandbox-image": "registry.aliyuncs.com/google_containers/pause:3.5",
+"native.umask": "secure",
+"network-plugin": "cni",
+"cni-bin-dir": "/opt/cni/bin",
+"cni-conf-dir": "/etc/cni/net.d",
+"image-layer-check": false,
+"use-decrypted-key": true,
+"insecure-skip-verify-enforce": false
+}
+```
+
+启动相关服务
+
+```
+systemctl restart isulad
+systemctl enable isulad
+systemctl enable kubelet
+```
+
+**以上为master,node节点均需执行的操作。**
+
+## master节点初始化
+
+**该部分仅master节点执行。**
+初始化,在这一步会拉取镜像,需等待一小段时间。也可在该步骤之前手动拉取镜像。
+
+```
+kubeadm init --kubernetes-version=1.22.2 \
+  --apiserver-advertise-address=192.168.237.133 \
+  --cri-socket=/var/run/isulad.sock \
+  --image-repository registry.aliyuncs.com/google_containers \
+  --service-cidr=10.10.0.0/16 \
+  --pod-network-cidr=10.122.0.0/16
+```
+
+```
+##关于初始化参数的解释说明##
+kubernetes-version 为当前安装的版本
+apiserver-advertise-address 为master节点ip
+cri-socket 指定引擎为isulad
+image-repository 指定镜像源为阿里云,可省去修改tag的步骤
+service-cidr 指定service分配的ip段
+pod-network-cidr 指定pod分配的ip段
+```
+
+初始化成功后,复制最后两行内容方便后续node节点加入使用
+
+```
+kubeadm join 192.168.237.133:6443 --token j7kufw.yl1gte0v9qgxjzjw \
+  --discovery-token-ca-cert-hash sha256:73d337f5edd79dd4db997d98d329bd98020b712f8d7833c33a85d8fe44d0a4f5 \
+  --cri-socket=/var/run/isulad.sock
+```
+
+**注意**:添加--cri-socket=/var/run/isulad.sock以使用isulad为容器引擎
+查看下载好的镜像
+
+```
+isula images
+```
+
+按照初始化成功所提示,配置集群
+
+```
+mkdir -p $HOME/.kube
+cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
+chown $(id -u):$(id -g) $HOME/.kube/config
+export KUBECONFIG=/etc/kubernetes/admin.conf
+source /etc/profile
+```
+
+查看健康状态
+
+```
+kubectl get cs
+```
+
+可能存在controller-manager,scheduler状态为unhealthy的情况,解决方法如下:
+编辑相关配置文件
+
+```
+vi /etc/kubernetes/manifests/kube-controller-manager.yaml
+```
+
+```
+注释如下内容:
+--port=0
+修改hostpath:
+将所有/usr/libexec/kubernetes/kubelet-plugins/volume/exec 修改为/opt/libexec/...
+```
+
+```
+vi /etc/kubernetes/manifests/kube-scheduler.yaml
+```
+
+```
+注释如下内容:
+--port=0
+```
+
+修改完成后,再次查看健康状态
+
+## 配置网络插件
+
+仅需要在master节点配置网络插件,但是要在**所有节点**提前拉取镜像,拉取镜像指令如下。
+
+```
+isula pull calico/node:v3.19.3
+isula pull calico/cni:v3.19.3
+isula pull calico/kube-controllers:v3.19.3
+isula pull calico/pod2daemon-flexvol:v3.19.3
+```
+
+**以下步骤仅在master节点执行**
+获取配置文件
+
+```
+wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml
+```
+
+编辑calico.yaml 修改所有/usr/libexec/... 为 /opt/libexec/...
+然后执行如下命令完成calico的安装:
+
+```
+kubectl apply -f calico.yaml
+```
+
+通过如下命令查看calico是否安装成功,并确认kube-system命名空间下所有pod状态均为Running:
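+
+```
+kubectl get pod -n kube-system
+```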
+
+## node节点加入集群
+
+在node节点执行如下指令,将node节点加入集群
+
+```
+kubeadm join 192.168.237.133:6443 --token j7kufw.yl1gte0v9qgxjzjw \
+  --discovery-token-ca-cert-hash sha256:73d337f5edd79dd4db997d98d329bd98020b712f8d7833c33a85d8fe44d0a4f5 \
+  --cri-socket=/var/run/isulad.sock
+```
+
+通过如下命令查看master、node节点状态是否均为Ready:
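+
+```
+kubectl get node
+```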
+
+至此,k8s部署成功。
+
+# rpm-ostree使用
+
+## rpm-ostree安装软件包
+
+安装wget
+
+```
+rpm-ostree install wget
+```
+
+重启系统,可在启动时通过键盘上下按键选择rpm包安装完成后或安装前的系统状态,其中【ostree:0】为安装之后的版本。
+
+```
+systemctl reboot
+```
+
+查看wget是否安装成功
+
+```
+rpm -qa | grep wget
+```
+
+## rpm-ostree 手动更新升级 NestOS
+
+在NestOS中执行命令可查看当前rpm-ostree状态,可看到当前版本号
+
+```
+rpm-ostree status
+```
+
+执行检查命令查看是否有升级可用,发现存在新版本
+
+```
+rpm-ostree upgrade --check
+```
+
+预览版本的差异
+
+```
+rpm-ostree upgrade --preview
+```
+
+在最新版本中,我们将nano包做了引入。
+执行如下指令会下载最新的ostree和RPM数据,不需要进行部署
+
+```
+rpm-ostree upgrade --download-only
+```
+
+重启NestOS,重启后可看到系统的新旧版本两个状态,选择最新版本的分支进入
+
+```
+rpm-ostree upgrade --reboot
+```
+
+## 比较NestOS版本差别
+
+检查状态,确认此时ostree有两个版本,分别为LTS.20210927.dev.0和LTS.20210928.dev.0
+
+```
+rpm-ostree status
+```
+
+根据commit号比较2个ostree的差别
+
+```
+rpm-ostree db diff 55eed9bfc5ec fe2408e34148
+```
+
+## 系统回滚
+
+当一个系统更新完成,之前的NestOS部署仍然在磁盘上,如果更新导致了系统出现问题,可以使用之前的部署回滚系统。
+
+### 临时回滚
+
+要临时回滚到之前的OS部署,在系统启动过程中按住shift键,当引导加载菜单出现时,在菜单中选择相关的分支。
+
+### 永久回滚
+
+要永久回滚到之前的操作系统部署,登录到目标节点,运行rpm-ostree rollback,此操作将使用之前的系统部署作为默认部署,并重新启动到其中。
+执行命令,回滚到前面更新前的系统。
+
+```
+rpm-ostree rollback
+```
+
+重启后失效。
+
+## 切换版本
+
+在上一步将NestOS回滚到了旧版本,可以通过命令切换当前 NestOS 使用的rpm-ostree版本,将旧版本切换为新版本。
+
+```
+rpm-ostree deploy -r 22.03.20220325.dev.0
+```
+
+重启后确认目前NestOS已经使用的是新版本的ostree了。
+
+
+# zincati自动更新使用
+
+zincati负责NestOS的自动更新,zincati通过cincinnati提供的后端来检查当前是否有可更新版本,若检测到新版本,会通过rpm-ostree进行下载。
+
+目前系统默认关闭zincati自动更新服务,可通过修改配置文件设置为开机自动启动自动更新服务。
+
+```
+vi /etc/zincati/config.d/95-disable-on-dev.toml
+```
+
+将 updates.enabled 设置为 true,参考片段如下。
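+
+以下为示意片段,实际请以该文件中已有的小节和注释为准:
+
+```
+[updates]
+enabled = true
+```
+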
+同时增加配置文件,修改cincinnati后端地址
+
+```
+vi /etc/zincati/config.d/update-cincinnati.toml
+```
+
+添加如下内容
+
+```
+[cincinnati]
+base_url="http://nestos.org.cn:8080"
+```
+
+重新启动zincati服务
+
+```
+systemctl restart zincati.service
+```
+
+当有新版本时,zincati会自动检测到可更新版本,此时查看rpm-ostree状态,可以看到状态是“busy”,说明系统正在升级中。
+
+一段时间后NestOS将自动重启,此时再次登录NestOS,可以再次确认rpm-ostree的状态,其中状态转为"idle",而且当前版本已经是“20220325”,这说明rpm-ostree版本已经升级了。
+
+查看zincati服务的日志,确认升级的过程和重启系统的日志。另外日志显示的"auto-updates logic enabled"也说明更新是自动的。
+
+# 定制NestOS
+
+我们可以使用nestos-installer 工具对原始的NestOS ISO文件进行加工,将Ignition文件打包进去从而生成定制的 NestOS ISO文件。使用定制的NestOS ISO文件可以在系统启动完成后自动执行NestOS的安装,因此NestOS的安装会更加简单。
+
+在开始定制NestOS之前,需要做如下准备工作:
+
+- 下载 NestOS ISO
+- 准备 config.ign文件
+
+## 生成定制NestOS ISO文件
+
+### 设置参数变量
+
+```
+$ export COREOS_ISO_ORIGIN_FILE=nestos-22.03.20220324.x86_64.iso
+$ export COREOS_ISO_CUSTOMIZED_FILE=my-nestos.iso
+$ export IGN_FILE=config.ign
+```
+
+### ISO文件检查
+
+确认原始的 NestOS ISO 文件中没有包含 Ignition 配置。
+
+```
+$ nestos-installer iso ignition show $COREOS_ISO_ORIGIN_FILE
+
+Error: No embedded Ignition config.
+```
+
+### 生成定制NestOS ISO文件
+
+将Ignition文件和原始NestOS ISO文件打包生成定制的NestOS ISO文件。
+
+```
+$ nestos-installer iso ignition embed --ignition-file $IGN_FILE --output $COREOS_ISO_CUSTOMIZED_FILE $COREOS_ISO_ORIGIN_FILE
+```
+
+### ISO文件检查
+
+确认定制NestOS ISO 文件中已经包含Ignition配置了
+
+```
+$ nestos-installer iso ignition show $COREOS_ISO_CUSTOMIZED_FILE
+```
+
+执行命令,将会显示Ignition配置内容
+
+## 安装定制NestOS ISO文件
+
+使用定制的 NestOS ISO 文件可以直接引导安装,并根据Ignition自动完成NestOS的安装。在完成安装后,我们可以直接在虚拟机的控制台上用nest/password登录NestOS。
\ No newline at end of file
diff --git "a/docs/zh/docs/NestOS/\345\212\237\350\203\275\347\211\271\346\200\247\346\217\217\350\277\260.md" "b/docs/zh/docs/NestOS/\345\212\237\350\203\275\347\211\271\346\200\247\346\217\217\350\277\260.md"
new file mode 100644
index 0000000000000000000000000000000000000000..3bcae9e309daebe65d13476442d18846ef708445
--- /dev/null
+++ "b/docs/zh/docs/NestOS/\345\212\237\350\203\275\347\211\271\346\200\247\346\217\217\350\277\260.md"
@@ -0,0 +1,103 @@
+# 功能特性描述
+
+## 容器技术
+
+NestOS通过容器化 (containerized) 的运算环境向应用程序提供运算资源,应用程序之间共享系统内核和资源,但是彼此之间又互不可见。这意味着应用程序将不会再被直接安装到操作系统中,而是通过 Docker 运行在容器中,大大降低了操作系统、应用程序及运行环境之间的耦合度。相对于传统的应用程序部署方式而言,在NestOS 集群中部署应用程序更加灵活便捷,应用程序运行环境之间的干扰更少,而且操作系统自身的维护也更加容易。
+
+## rpm-ostree
+
+### 系统更新
+
+rpm-ostree是一种镜像/包混合系统,可以看成是rpm和ostree的合体。一方面它提供了基于rpm的软件包安装管理方式,另一方面它提供了基于ostree的操作系统更新升级。rpm-ostree将这两种操作都视为对操作系统的更新,每次对系统的更新都像rpm-ostree在提交“Transaction-事务”,从而确保更新全部成功或全部失败,允许在更新系统后回滚到更新前的状态。
+
+rpm-ostree在更新操作系统的时候会有2个bootable区域,分别为更新前和更新后的,对系统的更新升级只有在重启操作系统后才生效。如果软件安装或升级出现问题,通过rpm-ostree回滚会使nestos系统返回到先前的状态。我们可以查看nestos的“/ostree/”和“/boot/”目录,它们是ostree Repository环境并且可以观察到boot使用哪个ostree。
+
+### 文件系统
+
+在rpm-ostree的文件系统布局中,只有/etc和/var是可写的目录,/var中的任何数据不会被触及,而是在升级过程中共享。在系统升级的过程中采用新的默认值/etc,并将更改添加到顶部。这意味着升级将会接收/etc中新的默认文件,这是一个非常关键的特性。
+
+Ostree旨在可以并行安装多个独立操作系统的版本,ostree依赖于一个新的ostree 目录,该目录实际上可以并行安装在现有的操作系统或者是占据物理/root目录的发行版本中。每台客户机和每组部署上都存储在 /ostree/deploy/STATEROOT/CHECKSUM 上,而且还有一个ostree存储库存储在 /ostree/repo 中。每个部署主要由一组指向存储库的硬链接组成,这意味着每个版本都进行了重复数据的删除并且升级过程中只消耗了与新文件成比例的磁盘空间加上一些恒定的开销。
+
+Ostree模型强调的是OS只读内容保存在 /usr 中,它附带了用于创建Linux只读安装以防止无意损坏的代码,对于给定的操作系统,每个部署之间都有一个 /var 共享的可供读写的目录。Ostree核心代码不触及该目录的内容,如何管理和升级状态取决于每个操作系统中的代码。
+
+### 系统扩展
+
+出于安全性和可维护性的考虑,NestOS让基础镜像尽可能保持小巧和精简。但是在某些情况下,需要向基本操作系统本身添加软件,例如驱动软件、VPN等,因为它们比较难容器化。这些包扩展了操作系统的功能,为此,rpm-ostree将这些包视为扩展,而不是仅仅在用户运行时提供。也就是说,目前NestOS对于实际安装哪些包没有限制,默认情况下,软件包是从openEuler仓库下载的。
+
+要对软件包进行分层,需要编写一个 systemd 单元来执行 rpm-ostree 命令安装所需要的包,所做的更改应用于新部署,重新启动才能生效,可参考下方的示意单元文件。
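+
+下面是一个最小示意(单元文件路径、单元名与要安装的 wget 包均为举例,并非固定做法):
+
+```
+# /etc/systemd/system/layer-wget.service(示例)
+[Unit]
+Description=Layer wget with rpm-ostree
+Wants=network-online.target
+After=network-online.target
+# 已存在 wget 时不再重复执行
+ConditionPathExists=!/usr/bin/wget
+
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStart=/usr/bin/rpm-ostree install wget
+
+[Install]
+WantedBy=multi-user.target
+```
+
+按照上文说明,该更改应用于新部署,重启后才会生效。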
+
+## nestos-installer
+
+nestos-installer是一个帮助安装NestOS的程序,它可以执行以下操作:
+
+(1)安装操作系统到一个目标磁盘,可使用Ignition和首次引导内核参数对其进行自定义(nestos-installer install)
+
+(2)下载并验证各种云平台、虚拟化或者裸机平台的操作系统映像(nestos-installer download)
+
+(3)列出可供下载的nestos镜像(nestos-installer list-stream)
+
+(4)在ISO中嵌入一个Ignition配置,以自定义地从中启动操作系统(nestos-installer iso ignition)
+
+(5)将Ignition配置包装在initd映像中,该映像可以被加入到PXE initramfs中以自定义从中启动的操作系统(nestos-installer pxe ignition)
+
+## zincati
+
+Zincati是nestos自动更新的代理,它作为Cincinnati和rpm-ostree的客户端,负责自动更新/重启机器。Zincati有如下特点:
+
+(1)支持自动更新代理,支持分阶段推出
+
+(2)通过toml配置文件支持运行时自定义,用户自定义配置文件可覆盖默认配置
+
+(3)多种更新策略
+
+(4)通过维护窗口每周在特定时间范围内进行更新的策略
+
+(5)收集和导出本地运行的zincati内部指标,可提供给Prometheus以减轻跨大量节点的监控任务
+
+(6)具有可配置优先级的日志记录
+
+(7)通过Cincinnati协议支持复杂的更新图
+
+(8)通过外部锁管理器支持集群范围的重启编排
+
+## 系统初始化(Ignition)
+
+Ignition 是一个与发行版无关的配置实用程序,不仅用于安装,还会读取配置文件(JSON 格式)并根据该配置对 NestOS 系统进行配置。可配置的组件包括存储和文件系统、systemd单元和用户。
+
+Ignition仅在系统第一次引导期间运行一次(在initramfs中)。因为 Ignition 在启动过程的早期运行,所以它可以在用户空间开始启动之前重新分区磁盘、格式化文件系统、创建用户和写入文件。 因此,systemd 服务在 systemd 启动时已经写入磁盘,从而加快了启动速度。
+
+(1)Ignition 仅在第一次启动时运行
+
+Ignition 旨在用作配置工具,而不是配置管理工具。 Ignition 鼓励不可变的基础设施,其中机器修改要求用户丢弃旧节点并重新配置机器。
+
+(2)Ignition不是在任何情况下都可以完成配置
+
+Ignition 执行它需要的操作,使系统与 Ignition 配置中描述的状态相匹配。 如果由于任何原因 Ignition 无法提供配置要求的确切机器,Ignition 会阻止机器成功启动。例如,如果用户想要获取托管在 https://example.com/foo.conf 的文档并将其写入磁盘,如果无法解析给定的 URL,Ignition 将阻止机器启动。
+
+(3)Ignition只是声明性配置
+
+Ignition配置只描述了系统的状态,没有列出 Ignition 应该采取的一系列步骤。
+
+Ignition 配置不允许用户提供任意逻辑(包括 Ignition 运行的脚本)。用户只需描述哪些文件系统必须存在、哪些文件必须创建、哪些用户必须存在等等。任何进一步的定制都必须使用由 Ignition 创建的 systemd 服务。
+
+(4)Ignition配置不应手写
+
+Ignition 配置被设计为人类可读但难以编写,是为了阻止用户尝试手动编写配置。可以使用Butane或类似工具生成 Ignition 配置。
+
+## Afterburn
+
+Afterburn 是一个面向类云平台的一次性代理,可以用于与特定提供商的元数据端点进行交互,通常和Ignition结合使用。
+
+Afterburn包含了很多可以在实例生命周期中不同时间段运行的模块。根据具体平台,以下功能可能会在第一次启动时在initramfs中运行:
+
+(1)设置本地主机名
+
+(2)加入网络命令行参数
+
+以下功能在某些平台上有条件地以 systemd 服务单元的形式提供:
+
+(1)为本地系统用户安装公共SSH密钥
+
+(2)从实例元数据中检索属性
+
+(3)向提供商签入,以报告启动成功或实例供应完成
\ No newline at end of file
diff --git "a/docs/zh/docs/NestOS/\345\256\211\350\243\205\344\270\216\351\203\250\347\275\262.md" "b/docs/zh/docs/NestOS/\345\256\211\350\243\205\344\270\216\351\203\250\347\275\262.md"
new file mode 100644
index 0000000000000000000000000000000000000000..6eb2dca5495b9ce02d1aac47fbde14e36d776f92
--- /dev/null
+++ "b/docs/zh/docs/NestOS/\345\256\211\350\243\205\344\270\216\351\203\250\347\275\262.md"
@@ -0,0 +1,129 @@
+# 安装与部署
+
+## 在 VMware 上部署 NestOS
+
+本指南展示了如何在VMware虚拟机管理程序上配置最新的 NestOS。
+
+目前NestOS仅支持x86_64架构。
+
+### 开始之前
+
+ 在开始部署 NestOS 之前,需要做如下准备工作:
+
+- 下载 NestOS ISO
+- 准备 config.bu 文件
+- 配置 butane 工具(Linux环境/win10环境)
+- 安装有VMware的宿主机
+
+### 初步安装与启动
+
+#### 启动 NestOS
+
+初次启动 NestOS ,ignition 尚未安装,可根据系统提示使用 nestos-installer 组件进行ignition的安装。
+
+### 配置 ignition 文件
+
+#### 获取 Butane
+
+可以通过 Butane 将 bu 文件转化为 ignition 文件。ignition 配置文件被设计为可读但难以编写,目的是阻止用户尝试手动编写配置。
+Butane 提供了多种环境的支持,可以在 linux/windows 宿主机中或容器环境中进行配置。
+
+```
+docker pull quay.io/coreos/butane:release
+```
+
+#### 生成登录密码
+
+在宿主机执行如下命令,并输入你的密码。
+
+```
+# openssl passwd -1 -salt yoursalt
+Password:
+$1$yoursalt$1QskegeyhtMG2tdh0ldQN0
+```
+
+#### 生成ssh-key
+
+在宿主机执行如下命令,获取公钥和私钥以供后续 ssh 登录。
+
+```
+# ssh-keygen -N '' -f ./id_rsa
+Generating public/private rsa key pair.
+Your identification has been saved in ./id_rsa
+Your public key has been saved in ./id_rsa.pub
+The key fingerprint is:
+SHA256:4fFpDDyGHOYEd2fPaprKvvqst3T1xBQuk3mbdon+0Xs root@host-12-0-0-141
+```
+
+```
+The key's randomart image is:
++---[RSA 3072]----+
+| ..= . o . |
+| * = o * . |
+| + B = * |
+| o B O + . |
+| S O B o |
+| * = . . |
+| . +o . . |
+| +.o . .E |
+| o*Oo ... |
++----[SHA256]-----+
+```
+
+可以在当前目录查看id_rsa.pub公钥
+
+```
+# cat id_rsa.pub
+ssh-rsa
+AAAAB3NzaC1yc2...
+```
+
+#### 编写bu文件
+
+进行最简单的初始配置,如需更多详细的配置,参考后面的 ignition 详解。
+如下为最简单的 config.bu 文件
+
+```
+variant: fcos
+version: 1.1.0
+passwd:
+ users:
+ - name: nest
+ password_hash: "$1$yoursalt$1QskegeyhtMG2tdh0ldQN0"
+ ssh_authorized_keys:
+ - "ssh-rsa
+ AAAAB3NzaC1yc2EAAA..."
+```
+
+#### 生成ignition文件
+
+将 config.bu 通过 Butane 工具转换为 config.ign 文件,如下为在容器环境下进行转换。
+
+```
+# docker run --interactive --rm quay.io/coreos/butane:release \
+--pretty --strict < your_config.bu > transpiled_config.ign
+```
+
+### 安装 NestOS
+
+将宿主机生成的config.ign文件通过scp拷贝到前面初步启动的 NestOS 中,该OS目前运行在内存中,并没有安装到硬盘。
+
+```
+sudo -i
+scp root@your_ipAddress:/root/config.ign /root
+```
+
+根据系统所给提示,执行如下指令完成安装。
+
+```
+nestos-installer install /dev/sda --ignition-file config.ign
+```
+
+安装完成后重启 NestOS 。
+
+```
+systemctl reboot
+```
+
+至此,NestOS 安装完成。
diff --git "a/docs/zh/docs/Releasenotes/\345\205\263\351\224\256\347\211\271\346\200\247.md" "b/docs/zh/docs/Releasenotes/\345\205\263\351\224\256\347\211\271\346\200\247.md"
index f9dbd82912423d67b8f21338f6f3f26017037794..a2869c4bb55d7179bebc3d239f03d08d9a1f9f3a 100644
--- "a/docs/zh/docs/Releasenotes/\345\205\263\351\224\256\347\211\271\346\200\247.md"
+++ "b/docs/zh/docs/Releasenotes/\345\205\263\351\224\256\347\211\271\346\200\247.md"
@@ -68,20 +68,25 @@ eggo是openEuler云原生Sig组K8S集群部署管理项目,提供高效稳定
- **边缘轻量化**,内存占用少,可在资源受限情况下工作。
## 嵌入式镜像
-提供轻量化、安全和容器等基础能力,支持ARM32、ARM64芯片架构。
-- **轻量化**,提供OS镜像 < 5M, 运行底噪<15 M, 以及<5S快速启动等能力。
-- **安全加固**,对账户口令、文件权限等资源安全加固,OS默认安全使能
-- **轻量容器**,面向嵌入式场景的轻量容器运行时,支持标准容器镜像的部署运行。
-- **多架构支持**,支持ARM32和ARM64芯片架构。
-- **混合部署**,:支持soc内实时和非实时多平面混合部署。
-- **开放小型化裁剪**,开放基于Yocto构建包的小型化定制裁剪能力。ARM交叉编译工具
-- **支持ARM交叉编译工具**,基于社区10.3版本gcc提供ARM版本交叉编译工具链
-- **A支持分布式软总线**,集成鸿蒙的分布式软总线,实现欧拉嵌入式设备之间互联互通
+- **轻量化能力**,开放yocto小型化构建裁剪框架,支撑OS镜像轻量化定制,提供OS镜像 < 5M,以及<5S快速启动等能力。
+- **多硬件支持**,新增支持树莓派4B作为嵌入式场景通用硬件。
+- **软实时内核**,基于linux5.10内核提供软实时能力,软实时中断响应时延微秒级。
+- **混合关键性部署**,实现soc内实时和非实时多平面混合部署,并支持zephyr实时内核。
+- **分布式软总线基础能力**,集成鸿蒙的分布式软总线,实现欧拉嵌入式设备之间互联互通。
+- **嵌入式软件包支持**,新增80+嵌入式领域常用软件包的构建。
## secPaver
secPaver是一款SELinux安全策略开发工具,用于辅助开发人员为应用程序开发安全策略。
- **策略管理**,提供高阶配置语言,根据策略配置文件内容生成SELinux策略文件,降低SElinux使用门槛。
+## NestOS
+NestOS是一款在openEuler社区CloudNative sig组孵化的云底座操作系统,专注于提供最佳的容器主机,大规模下安全的运行容器化工作负载。
+- **开箱即用的容器平台**,搭载了iSulad、docker、podman 、cri-o等主流容器基础平台。
+- **简单易用的安装配置过程**,采用了Ignition技术,提供个性化配置。
+- **安全可靠的包管理方式**,使用rpm-ostree进行包管理。
+- **友好可控的自动更新代理**,采用zincati实现无感升级。
+- **紧密配合的双系统分区**,双系统分区设计确保系统安全。
+
## 更多的第三方应用支持
- **KubeSphere**,KubeSphere 是在 Kubernetes 之上构建的以应用为中心的容器平台,完全开源,由青云科技发起,并由 openEuler 社区 SIG-KubeSphere 提供支持和维护。
- **OpenStack Wallaby**,OpenStack版本更新到Wallaby。Wallaby是2021年4月份发布的最新稳定版本,包含nova、kolla、cyborg、tacker等核心项目的重要更新。
diff --git "a/docs/zh/docs/Releasenotes/\347\263\273\347\273\237\345\256\211\350\243\205.md" "b/docs/zh/docs/Releasenotes/\347\263\273\347\273\237\345\256\211\350\243\205.md"
index 3cf60381b59fd7336fbc7c804bf554011ed9c0c6..8436fea535c79081405866281a920274a6e44f26 100644
--- "a/docs/zh/docs/Releasenotes/\347\263\273\347\273\237\345\256\211\350\243\205.md"
+++ "b/docs/zh/docs/Releasenotes/\347\263\273\347\273\237\345\256\211\350\243\205.md"
@@ -2,11 +2,12 @@
## 发布件
-openEuler发布件包括[ISO发布包](http://repo.openeuler.org/openEuler-22.03-LTS/ISO/)、[虚拟机镜像](http://repo.openeuler.org/openEuler-22.03-LTS/virtual_machine_img/)、[容器镜像](http://repo.openeuler.org/openEuler-22.03-LTS/docker_img/)和[repo源](http://repo.openeuler.org/openEuler-22.03-LTS/)。ISO发布包请参见[表1](#table8396719144315)。容器清单参见[表3](#table1276911538154)。repo源方便在线使用,repo源目录请参见[表4](#table953512211576)。
+openEuler发布件包括[ISO发布包](http://repo.openeuler.org/openEuler-22.03-LTS/ISO/)、[虚拟机镜像](http://repo.openeuler.org/openEuler-22.03-LTS/virtual_machine_img/)、[容器镜像](http://repo.openeuler.org/openEuler-22.03-LTS/docker_img/)、[嵌入式镜像](http://repo.openeuler.org/openEuler-22.03-LTS/embedded_img/)和[repo源](http://repo.openeuler.org/openEuler-22.03-LTS/)。ISO发布包请参见[表1](#table8396719144315)。容器清单参见[表3](#table1276911538154)。repo源方便在线使用,repo源目录请参见[表5](#table953512211576)。
**表 1** 发布ISO列表
+
名称
|
描述
@@ -64,6 +65,7 @@ openEuler发布件包括[ISO发布包](http://repo.openeuler.org/openEuler-22.03
**表 2** 虚拟机镜像
+
名称
|
描述
@@ -108,9 +110,22 @@ openEuler发布件包括[ISO发布包](http://repo.openeuler.org/openEuler-22.03
|
---|
-**表 4** repo源列表
+**表 4** 嵌入式镜像列表
+
+| 名称 | 描述 |
+| -------------------------------------- | ------------------------------- |
+| arm64/aarch64-std/zImage | aarch64架构下支持qemu的内核镜像 |
+| arm64/aarch64-std/\*toolchain-22.03.sh | aarch64架构下对应的开发编译链 |
+| arm64/aarch64-std/\*rootfs.cpio.gz | aarch64架构下支持qemu的文件系统 |
+| arm32/arm-std/zImage | arm架构下支持qemu的内核镜像 |
+| arm32/arm-std/\*toolchain-22.03.sh | arm架构下对应的开发编译链 |
+| arm32/arm-std/\*rootfs.cpio.gz | arm架构下支持qemu的文件系统 |
+| source-list/manifest.xml | 构建使用的源码清单 |
+
+**表 5** repo源列表
+
目录
|
描述
@@ -142,6 +157,11 @@ openEuler发布件包括[ISO发布包](http://repo.openeuler.org/openEuler-22.03
| 存放虚拟机镜像
|
+embedded_img
+ |
+存放嵌入式镜像
+ |
+
everything
|
存放全量软件包源
@@ -170,11 +190,12 @@ openEuler发布件包括[ISO发布包](http://repo.openeuler.org/openEuler-22.03
|
+
## 最小硬件要求
-安装 openEuler 22.03-LTS 所需的最小硬件要求如[表5](#zh-cn_topic_0182825778_tff48b99c9bf24b84bb602c53229e2541)所示。
+安装 openEuler 22.03-LTS 所需的最小硬件要求如[表6](#zh-cn_topic_0182825778_tff48b99c9bf24b84bb602c53229e2541)所示。
-**表 5** 最小硬件要求
+**表 6** 最小硬件要求
部件名称
@@ -204,9 +225,9 @@ openEuler发布件包括[ISO发布包](http://repo.openeuler.org/openEuler-22.03
## 硬件兼容性
-openEuler已验证支持的服务器和各部件典型配置请参见[表6](#zh-cn_topic_0227922427_table39822012)。openEuler后续将逐步增加对其他服务器的支持,也欢迎广大合作伙伴/开发者参与贡献和验证。openEuler当前支持的服务器可见[兼容列表](https://www.openeuler.org/zh/compatibility/)。
+openEuler已验证支持的服务器和各部件典型配置请参见[表7](#zh-cn_topic_0227922427_table39822012)。openEuler后续将逐步增加对其他服务器的支持,也欢迎广大合作伙伴/开发者参与贡献和验证。openEuler当前支持的服务器可见[兼容列表](https://www.openeuler.org/zh/compatibility/)。
-**表 6** 支持的服务器及典型配置
+**表 7** 支持的服务器及典型配置
厂商
diff --git "a/docs/zh/docs/StratoVirt/StratoVirt\344\273\213\347\273\215.md" "b/docs/zh/docs/StratoVirt/StratoVirt\344\273\213\347\273\215.md"
index 26f48c6700fa4ff57709f780b9464f399ebddc29..9842e25a699a15057e001e585e46da8d4a05ef60 100644
--- "a/docs/zh/docs/StratoVirt/StratoVirt\344\273\213\347\273\215.md"
+++ "b/docs/zh/docs/StratoVirt/StratoVirt\344\273\213\347\273\215.md"
@@ -46,6 +46,6 @@ StratoVirt核心架构自顶向下分为三层:
#### 约束
-- 仅支持Linux操作系统,推荐内核版本为4.19;
-- 虚拟机操作系统仅支持Linux,内核版本建议为4.19;
+- 仅支持Linux操作系统,推荐内核版本为4.19, 5.10;
+- 虚拟机操作系统仅支持Linux,内核版本建议为4.19, 5.10;
- 最大支持254个CPU;
diff --git "a/docs/zh/docs/StratoVirt/\345\207\206\345\244\207\344\275\277\347\224\250\347\216\257\345\242\203.md" "b/docs/zh/docs/StratoVirt/\345\207\206\345\244\207\344\275\277\347\224\250\347\216\257\345\242\203.md"
index f9890b5e586d654b7d44fe3a36329c8b42ec54e0..5c17d7213dd925c78d18a73921501ad26119b99f 100644
--- "a/docs/zh/docs/StratoVirt/\345\207\206\345\244\207\344\275\277\347\224\250\347\216\257\345\242\203.md"
+++ "b/docs/zh/docs/StratoVirt/\345\207\206\345\244\207\344\275\277\347\224\250\347\216\257\345\242\203.md"
@@ -4,7 +4,7 @@
## 使用说明
- StratoVirt仅支持运行于x86_64和AArch64处理器架构下并启动相同架构的Linux虚拟机。
-- 建议在 openEuler 21.09 版本编译、调测和部署该版本 StratoVirt。
+- 建议在 openEuler 22.03 LTS 版本编译、调测和部署该版本 StratoVirt。
- StratoVirt支持以非root权限运行。
## 环境要求
@@ -50,17 +50,17 @@
1. 获取openEuler的kernel源代码,参考命令如下:
```
- $ git clone https://gitee.com/openeuler/kernel
+ $ git clone https://gitee.com/openeuler/kernel.git
$ cd kernel
```
-2. 查看并切换kernel的版本到5.10,参考命令如下:
+2. 查看并切换kernel的版本到openEuler-22.03-LTS,参考命令如下:
```
- $ git checkout kernel-5.10
+ $ git checkout openEuler-22.03-LTS
```
-3. 配置并编译Linux kernel。可使用推荐配置([获取配置文件](https://gitee.com/openeuler/stratovirt/tree/master/docs/kernel_config)),将其复制到kernel路径下并重命名为`.config`。也可通过以下命令进行交互,根据提示完成kernel配置。
+3. 配置并编译Linux kernel。目前有两种方式可以生成配置文件:1. 使用推荐配置([获取配置文件](https://gitee.com/openeuler/stratovirt/tree/master/docs/kernel_config)),将指定版本的推荐文件复制到kernel路径下并重命名为`.config`, 并执行命令`make olddefconfig`更新到最新的默认配置(否则后续编译可能有选项需要手动选择)。2. 通过以下命令进行交互,根据提示完成kernel配置,可能会提示缺少指定依赖,按照提示使用`yum install`命令进行安装。
```
$ make menuconfig
@@ -109,21 +109,21 @@ rootfs镜像是一种文件系统镜像,在StratoVirt启动时可以装载带
4. 获取对应处理器架构的最新alpine-mini rootfs。
- - 如果是AArch64处理器架构,参考命令如下:
+ - 对于AArch64处理器架构,从[alpine](http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/)网站获取最新alpine-mini rootfs,例如:alpine-minirootfs-3.16.0-aarch64.tar.gz ,参考命令如下:
```
- $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/aarch64/alpine-minirootfs-3.12.0-aarch64.tar.gz
- $ tar -zxvf alpine-minirootfs-3.12.0-aarch64.tar.gz
- $ rm alpine-minirootfs-3.12.0-aarch64.tar.gz
+ $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/aarch64/alpine-minirootfs-3.16.0-aarch64.tar.gz
+ $ tar -zxvf alpine-minirootfs-3.16.0-aarch64.tar.gz
+ $ rm alpine-minirootfs-3.16.0-aarch64.tar.gz
```
- - 如果是x86_64处理器架构,参考命令如下:
+ - 对于x86_64处理器架构,从[alpine](http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/)网站获取指定架构最新alpine-mini rootfs,例如:alpine-minirootfs-3.16.0-x86_64.tar.gz,参考命令如下:
```
- $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/alpine-minirootfs-3.12.0-x86_64.tar.gz
- $ tar -zxvf alpine-minirootfs-3.12.0-x86_64.tar.gz
- $ rm alpine-minirootfs-3.12.0-x86_64.tar.gz
+ $ wget http://dl-cdn.alpinelinux.org/alpine/latest-stable/releases/x86_64/alpine-minirootfs-3.16.0-x86_64.tar.gz
+ $ tar -zxvf alpine-minirootfs-3.16.0-x86_64.tar.gz
+ $ rm alpine-minirootfs-3.16.0-x86_64.tar.gz
```
diff --git "a/docs/zh/docs/StratoVirt/\345\256\211\350\243\205StratoVirt.md" "b/docs/zh/docs/StratoVirt/\345\256\211\350\243\205StratoVirt.md"
index 81a8d9d217989ec082e11e17e7f3e988bcd369e0..fdfc80929b29556253fc4375a4046b9e0c87b53f 100644
--- "a/docs/zh/docs/StratoVirt/\345\256\211\350\243\205StratoVirt.md"
+++ "b/docs/zh/docs/StratoVirt/\345\256\211\350\243\205StratoVirt.md"
@@ -33,7 +33,7 @@
```
$ stratovirt -version
- StratoVirt 2.0.0
+ StratoVirt 2.1.0
```
diff --git "a/docs/zh/docs/StratoVirt/\345\257\271\346\216\245libvirt.md" "b/docs/zh/docs/StratoVirt/\345\257\271\346\216\245libvirt.md"
index d655ecb9f388445fcb3f888f0abead1011416c6f..3b22a83845f1b1404c6e7384ffe11161c6453d3e 100644
--- "a/docs/zh/docs/StratoVirt/\345\257\271\346\216\245libvirt.md"
+++ "b/docs/zh/docs/StratoVirt/\345\257\271\346\216\245libvirt.md"
@@ -460,7 +460,7 @@ XML 中还有一些体系架构相关的配置,如 pflash、主板等。
/path/to/standard_vm_kernel
console=hvc0 root=/dev/vda reboot=k panic=1 rw
/path/to/pflash
- /path/to/OVMF_VARS
+ /path/to/OVMF_VARS
@@ -626,7 +626,7 @@ libvirt 使用 virsh 命令来管理虚拟机,当 StratoVirt 平台和 libvirt
### 管理虚拟机生命周期
-假设用户已经按照需要完成一个名为 stratovirt 的虚拟机配置文件 st.xml ,则对应生命周期管理的命令如下:
+假设用户已经按照需要完成一个名为 StratoVirt 的虚拟机配置文件 st.xml ,则对应生命周期管理的命令如下:
- 创建虚拟机
@@ -634,40 +634,40 @@ libvirt 使用 virsh 命令来管理虚拟机,当 StratoVirt 平台和 libvirt
virsh create st.xml
```
- 虚拟机创建完成后,可以通过 **virsh list** 命令查看,会存在一个名为 stratovirt 的虚拟机。
+ 虚拟机创建完成后,可以通过 **virsh list** 命令查看,会存在一个名为 StratoVirt 的虚拟机。
- 挂起虚拟机
```shell
- virsh suspend stratovirt
+ virsh suspend StratoVirt
```
- 虚拟机挂起后,虚拟机暂停运行。可以通过 **virsh list** 命令查看,虚拟机 stratovirt 的状态为 paused 。
+ 虚拟机挂起后,虚拟机暂停运行。可以通过 **virsh list** 命令查看,虚拟机 StratoVirt 的状态为 paused 。
- 恢复虚拟机
```
- virsh resume stratovirt
+ virsh resume StratoVirt
```
- 虚拟机恢复后,可以通过 **virsh list** 命令查看,虚拟机 stratovirt 的状态为 running 。
+ 虚拟机恢复后,可以通过 **virsh list** 命令查看,虚拟机 StratoVirt 的状态为 running 。
- 销毁虚拟机
```
- virsh destroy stratovirt
+ virsh destroy StratoVirt
```
- 虚拟机销毁后,使用 **virsh list** 查看虚拟机,发现虚拟机 stratovirt 不存在。
+ 虚拟机销毁后,使用 **virsh list** 查看虚拟机,发现虚拟机 StratoVirt 不存在。
### 登录虚拟机
-虚拟机创建完成后,可以通过 **virsh console** 登录虚拟机内部操作虚拟机。假设虚拟机名称为 stratovirt,参考命令如下:
+虚拟机创建完成后,可以通过 **virsh console** 登录虚拟机内部操作虚拟机。假设虚拟机名称为 StratoVirt,参考命令如下:
```
-virsh console stratovirt
+virsh console StratoVirt
```
diff --git "a/docs/zh/docs/StratoVirt/\350\231\232\346\213\237\346\234\272\347\256\241\347\220\206.md" "b/docs/zh/docs/StratoVirt/\350\231\232\346\213\237\346\234\272\347\256\241\347\220\206.md"
index ce06098110d1d07a323be4a5f4694893c829455c..17944d23d63c6785e54315fe6c09809079c0a84c 100644
--- "a/docs/zh/docs/StratoVirt/\350\231\232\346\213\237\346\234\272\347\256\241\347\220\206.md"
+++ "b/docs/zh/docs/StratoVirt/\350\231\232\346\213\237\346\234\272\347\256\241\347\220\206.md"
@@ -72,7 +72,7 @@ StratoVirt可以对虚拟机进行启动、暂停、恢复、退出等生命周
### 创建并启动虚拟机
-根据虚拟机配置可知,可以通过命令行参数或json文件指定虚拟机配置,并在主机通过stratovirt命令创建并启动虚拟机。
+通过命令行参数指定虚拟机配置,创建并启动虚拟机。
- 使用命令行参数给出虚拟机配置,创建并启动虚拟机的命令如下:
@@ -80,23 +80,9 @@ StratoVirt可以对虚拟机进行启动、暂停、恢复、退出等生命周
$ /path/to/stratovirt -[参数1] [参数选项] -[参数2] [参数选项] ...
```
-
-
-- 使用json文件给出虚拟机配置,创建并启动虚拟机的命令如下:
-
-```
-$ /path/to/stratovirt \
- -config /path/to/json \
- -qmp unix:/path/to/socket
-```
-
-其中,/path/to/json为json配置文件的路径。/path/to/socket为用户指定的socket文件(如/tmp/stratovirt.socket),使用上述命令会自动创建socket文件。为确保虚拟机能够正常启动,在创建socket文件前确保该文件不存在。
-
-
-
-> 
+> 说明:
>
-> 虚拟机启动后,内部会有eth0和eth1两张网卡。这两张网卡预留用于网卡热插拔。热插的第一张网卡是eth0,热插的第二张网卡是eth1,目前只支持热插2张virtio-net网卡。
+> 轻量虚拟启动后,内部会有eth0和eth1两张网卡。这两张网卡预留用于网卡热插拔。热插的第一张网卡是eth0,热插的第二张网卡是eth1,目前只支持热插两张virtio-net网卡。
@@ -120,7 +106,7 @@ StratoVirt当前采用QMP管理虚拟机,暂停、恢复、退出虚拟机等
-> 
+> 说明:
>
> QMP提供了stop、cont、quit和query-status等来管理和查询虚拟机状态。
>
diff --git a/docs/zh/docs/TailorCustom/figures/flowchart.png b/docs/zh/docs/TailorCustom/figures/flowchart.png
new file mode 100644
index 0000000000000000000000000000000000000000..e4fecb8b310f204d6cfd07449ccc3c93d1badd51
Binary files /dev/null and b/docs/zh/docs/TailorCustom/figures/flowchart.png differ
diff --git a/docs/zh/docs/TailorCustom/figures/lack_pack.png b/docs/zh/docs/TailorCustom/figures/lack_pack.png
new file mode 100644
index 0000000000000000000000000000000000000000..a4b7f1da15da70f63a86aae360e89017c2b98f2d
Binary files /dev/null and b/docs/zh/docs/TailorCustom/figures/lack_pack.png differ
diff --git "a/docs/zh/docs/TailorCustom/imageTailor \344\275\277\347\224\250\346\214\207\345\215\227.md" "b/docs/zh/docs/TailorCustom/imageTailor \344\275\277\347\224\250\346\214\207\345\215\227.md"
new file mode 100644
index 0000000000000000000000000000000000000000..10cfe8d67d4100cec6974db37e47c4eb0e3f5699
--- /dev/null
+++ "b/docs/zh/docs/TailorCustom/imageTailor \344\275\277\347\224\250\346\214\207\345\215\227.md"
@@ -0,0 +1,933 @@
+# imageTailor 使用指南
+
+- [简介](#简介)
+- [安装工具](#安装工具)
+ - [软硬件要求](#软硬件要求)
+ - [获取安装包](#获取安装包)
+ - [安装 imageTailor](#安装-imageTailor)
+ - [目录介绍](#目录介绍)
+- [定制系统](#定制系统)
+ - [总体流程](#总体流程)
+ - [定制业务包](#定制业务包)
+ - [配置本地 repo 源](#配置本地-repo-源)
+ - [添加文件](#添加文件)
+ - [添加 RPM 包](#添加-RPM-包)
+ - [添加 hook 脚本](#添加-hook-脚本)
+ - [配置系统参数](#配置系统参数)
+ - [配置主机参数](#配置主机参数)
+ - [配置初始密码](#配置初始密码)
+ - [配置分区](#配置分区)
+ - [配置网络](#配置网络)
+ - [配置内核参数](#配置内核参数)
+ - [制作系统](#制作系统)
+ - [命令介绍](#命令介绍)
+ - [制作指导](#制作指导)
+ - [裁剪时区](#裁剪时区)
+ - [定制示例](#定制示例)
+
+
+
+## 简介
+
+操作系统除内核外,还包含各种功能的外围包。通用操作系统包含较多外围包,提供了丰富的功能,但是这也带来了一些影响:
+
+- 占用资源(内存、磁盘、CPU 等)多,导致系统运行效率低
+- 很多功能用户不需要,增加了开发和维护成本
+
+因此,openEuler 提供了 imageTailor 镜像裁剪定制工具。用户可以根据需求裁剪操作系统镜像中不需要的外围包,或者添加所需的业务包或文件。该工具主要提供了以下功能:
+
+- 系统包裁剪定制:用户可以选择默认安装以及裁剪的rpm,也支持用户裁剪定制系统命令、库、驱动。
+- 系统配置定制:用户可以配置主机名、启动服务、时区、网络、分区、加载驱动、版本号等。
+- 用户文件定制:支持用户添加定制文件到系统镜像中。
+
+
+
+## 安装工具
+
+本节以 openEuler 22.03 LTS 版本 AArch64 架构为例,说明安装方法。
+
+### 软硬件要求
+
+安装和运行 imageTailor 需要满足以下软硬件要求:
+
+- 机器架构为 x86_64 或者 AArch64
+
+- 操作系统为 openEuler 22.03 LTS(该版本内核版本为 5.10,python 版本为 3.9,满足工具要求)
+
+- 运行工具的机器根目录 '/' 需要 40 GB 以上空间
+
+- python 版本 3.9 及以上
+
+- kernel 内核版本 5.10 及以上
+
+- 关闭 SElinux 服务
+
+ ```shell
+ $ sudo setenforce 0
+ $ getenforce
+ Permissive
+ ```
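+
+  以上命令仅在当前运行期间生效;若希望重启后仍保持关闭,可参考如下方式修改配置文件(示例,将模式改为 permissive):
+
+  ```shell
+  $ sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
+  ```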
+
+
+
+### 获取安装包
+
+安装和使用 imageTailor 工具,首先需要下载 openEuler 发布件。
+
+1. 获取 ISO 镜像文件和对应的校验文件。
+
+ 镜像必须为 everything 版本,此处假设存放在 root 目录,参考命令如下:
+
+ ```shell
+ $ cd /root/temp
+ $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso
+ $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum
+ ```
+
+2. 获取 sha256sum 校验文件中的校验值。
+
+ ```shell
+ $ cat openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum
+ ```
+
+3. 计算 ISO 镜像文件的校验值。
+
+ ```shell
+ $ sha256sum openEuler-22.03-LTS-everything-aarch64-dvd.iso
+ ```
+
+4. 对比上述 sha256sum 文件的校验值和 ISO 镜像的校验值,如果两者相同,说明文件完整性检验成功。否则说明文件完整性被破坏,需要重新获取文件。
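+
+   上述比对也可以借助 sha256sum 的 -c 选项自动完成(示例,输出 OK 即表示校验通过):
+
+   ```shell
+   $ sha256sum -c openEuler-22.03-LTS-everything-aarch64-dvd.iso.sha256sum
+   openEuler-22.03-LTS-everything-aarch64-dvd.iso: OK
+   ```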
+
+### 安装 imageTailor
+
+此处以 openEuler 22.03 LTS 版本的 AArch64 架构为例,介绍如何安装 imageTailor 工具。
+
+1. 确认机器已经安装操作系统 openEuler 22.03 LTS( imageTailor 工具的运行环境)。
+
+ ```shell
+ $ cat /etc/openEuler-release
+ openEuler release 22.03 LTS
+ ```
+
+2. 创建文件 /etc/yum.repos.d/local.repo,配置对应 yum 源。配置内容参考如下,其中 baseurl 是用于挂载 ISO 镜像的目录:
+
+ ```shell
+ [local]
+ name=local
+ baseurl=file:///root/imageTailor_mount
+ gpgcheck=0
+ enabled=1
+ ```
+
+3. 使用 root 权限,挂载光盘镜像到 /root/imageTailor_mount 目录(请与上述 repo 文件中配置的 baseurl 保持一致,且建议该目录的磁盘空间大于 20 GB)作为 yum 源,参考命令如下:
+
+ ```shell
+ $ mkdir /root/imageTailor_mount
+ $ sudo mount -o loop /root/temp/openEuler-22.03-LTS-everything-aarch64-dvd.iso /root/imageTailor_mount/
+ ```
+
+4. 使 yum 源生效:
+
+ ```shell
+ $ yum clean all
+ $ yum makecache
+ ```
+
+5. 使用 root 权限,安装 imageTailor 裁剪工具:
+
+ ```shell
+ $ sudo yum install -y imageTailor
+ ```
+
+6. 使用 root 权限,确认工具已安装成功。
+
+ ```shell
+ $ cd /opt/imageTailor/
+ $ sudo ./mkdliso -h
+ -------------------------------------------------------------------------------------------------------------
+ Usage: mkdliso -p product_name -c configpath [--minios yes|no|force] [-h] [--sec]
+ Options:
+ -p,--product Specify the product to make, check custom/cfg_yourProduct.
+ -c,--cfg-path Specify the configuration file path, the form should be consistent with custom/cfg_xxx
+ --minios Make minios: yes|no|force
+ --sec Perform security hardening
+ -h,--help Display help information
+
+ Example:
+ command:
+ ./mkdliso -p openEuler -c custom/cfg_openEuler --sec
+
+ help:
+ ./mkdliso -h
+ -------------------------------------------------------------------------------------------------------------
+ ```
+
+### 目录介绍
+
+imageTailor 工具安装完成后,工具包的目录结构如下:
+
+```shell
+[imageTailor]
+ |-[custom]
+ |-[cfg_openEuler]
+ |-[usr_file] // 存放用户添加的文件
+ |-[usr_install] // 存放用户的 hook 脚本
+ |-[all]
+ |-[conf]
+ |-[hook]
+ |-[cmd.conf] // 配置 ISO 镜像默认使用的命令和库
+ |-[rpm.conf] // 配置 ISO 镜像默认安装的 RPM 包和驱动列表
+ |-[security_s.conf] // 配置安全加固策略
+ |-[sys.conf] // 配置 ISO 镜像系统参数
+ |-[kiwi] // imageTailor 基础配置
+ |-[repos] // RPM 源,制作 ISO 镜像需要的 RPM 包
+ |-[security-tool] // 安全加固工具
+ |-mkdliso // 制作 ISO 镜像的可执行脚本
+```
+
+## 定制系统
+
+本章介绍使用 imageTailor 工具将业务 RPM 包、自定义文件、驱动、命令和文件打包至目标 ISO 镜像。
+
+### 总体流程
+
+使用 imageTailor 工具定制系统的流程请参见下图:
+
+![](./figures/flowchart.png)
+
+各流程含义如下:
+
+- 检查软硬件环境:确认制作 ISO 镜像的机器满足软硬件要求。
+
+- 定制业务包:包括添加 RPM 包(包括业务 RPM 包、命令、驱动、库文件)和添加文件(包括自定义文件、命令、驱动、库文件)
+
+  - 添加业务 RPM 包:用户可以根据需要,添加 RPM 包到 ISO 镜像。具体要求请参见 [安装工具](#安装工具) 章节。
+ - 添加自定义文件:若用户希望在目标 ISO 系统安装或启动时,能够进行自定义的硬件检查、系统配置检查、驱动安装等操作,可编写自定义文件,并打包到 ISO 镜像。
+ - 添加驱动、命令、库文件:当 openEuler 的 RPM 包源未包含用户需要的驱动、命令或库文件时,可以使用 imageTailor 工具将对应驱动、命令或库文件打包至 ISO 镜像。
+
+- 配置系统参数
+
+ - 配置主机参数:为了确保 ISO 镜像安装和启动成功,需要配置主机参数。
+ - 配置分区:用户可以根据业务规划配置业务分区,同时可以调整系统分区。
+ - 配置网络:用户可以根据需要配置系统网络参数,例如:网卡名称、IP 地址、子网掩码。
+ - 配置初始密码:为了确保 ISO 镜像安装和启动成功,需要配置 root 初始密码和 grub 初始密码。
+ - 配置内核参数:用户可以根据需求配置内核的命令行参数。
+
+- 配置安全加固策略
+
+  imageTailor 提供了默认的安全加固策略。用户可以根据业务需要,通过编辑 security_s.conf 对系统进行二次加固(仅在系统 ISO 镜像定制阶段),具体的操作方法请参见 《 [安全加固指南](https://docs.openeuler.org/zh/docs/22.03_LTS/docs/SecHarden/secHarden.html) 》。
+
+- 制作操作系统 ISO 镜像
+
+ 使用 imageTailor 工具制作操作系统 ISO 镜像。
+
+### 定制业务包
+
+用户可以根据业务需要,将业务 RPM 包、自定义文件、驱动、命令和库文件打包至目标 ISO 镜像。
+
+#### 配置本地 repo 源
+
+定制 ISO 操作系统镜像,必须在 /opt/imageTailor/repos/euler_base/ 目录配置 repo 源。本节主要介绍配置本地 repo 源的方法。
+
+1. 下载 openEuler 发布的 ISO(必须使用 openEuler 发布的 everything 版本镜像中的 RPM 包)。
+ ```shell
+ $ cd /opt
+ $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso
+ ```
+
+2. 创建挂载目录 /opt/openEuler_repo ,并挂载 ISO 到该目录 。
+ ```shell
+ $ sudo mkdir -p /opt/openEuler_repo
+ $ sudo mount openEuler-22.03-LTS-everything-aarch64-dvd.iso /opt/openEuler_repo
+ mount: /opt/openEuler_repo: WARNING: source write-protected, mounted read-only.
+ ```
+
+3. 拷贝 ISO 中的 RPM 包到 /opt/imageTailor/repos/euler_base/ 目录下。
+ ```shell
+ $ sudo rm -rf /opt/imageTailor/repos/euler_base && sudo mkdir -p /opt/imageTailor/repos/euler_base
+ $ sudo cp -ar /opt/openEuler_repo/Packages/* /opt/imageTailor/repos/euler_base
+ $ sudo chmod -R 644 /opt/imageTailor/repos/euler_base
+ $ sudo ls /opt/imageTailor/repos/euler_base|wc -l
+ 2577
+ $ sudo umount /opt/openEuler_repo && sudo rm -rf /opt/openEuler_repo
+ $ cd /opt/imageTailor
+ ```
+
+#### 添加文件
+
+用户可以根据需要添加文件到 ISO 镜像,此处的文件类型可以是用户自定义文件、驱动、命令、库文件。用户只需要将文件放至 /opt/imageTailor/custom/cfg_openEuler/usr_file 目录下即可。
+
+##### 注意事项
+
+- 命令必须具有可执行权限,否则 imageTailor 工具无法将该命令打包至 ISO 中。
+
+- 存放在 /opt/imageTailor/custom/cfg_openEuler/usr_file 目录下的文件,会生成在 ISO 根目录下,所以文件的目录结构必须是从根目录开始的完整路径,以便 imageTailor 工具能够将该文件放至正确的目录下。
+
+ 例如:假设希望文件 file1 在 ISO 的 /opt 目录下,则需要在 usr_file 目录下新建 opt 目录,再将 file1 文件拷贝至 opt 目录。如下:
+
+ ```shell
+ $ pwd
+ /opt/imageTailor/custom/cfg_openEuler/usr_file
+
+ $ tree
+ .
+ ├── etc
+ │ ├── default
+ │ │ └── grub
+ │ └── profile.d
+ │ └── csh.precmd
+ └── opt
+ └── file1
+
+ 4 directories, 3 files
+ ```
+
+- 存放在 /opt/imageTailor/custom/cfg_openEuler/usr_file 目录下的目录必须是真实路径(例如路径中不包含软链接。可在系统中使用 `realpath` 或 `readlink -f` 命令查询真实路径)。
+
+- 如果需要在系统启动或者安装阶段调用用户提供的脚本,即 hook 脚本,则需要将该文件放在 hook 目录下。
+
+#### 添加 RPM 包
+
+##### 操作流程
+
+用户可以添加 RPM 包(驱动、命令或库文件)到 ISO 镜像,操作步骤如下:
+
+> **说明:**
+>
+>- 下述 rpm.conf 和 cmd.conf 均在 /opt/imageTailor/custom/cfg_openEuler/ 目录下。
+>- 下述 RPM 包裁剪粒度是指 sys_cut='no' 。裁剪粒度详情请参见 [配置主机参数](#配置主机参数) 。
+>- 若没有配置本地 repo 源,请参见 [配置本地 repo 源](#配置本地-repo-源) 进行配置。
+>
+
+1. 确认 /opt/imageTailor/repos/euler_base/ 目录中是否包含需要添加的 RPM 包。
+
+ - 是,请执行步骤 2 。
+ - 否,请执行步骤 3 。
+2. 在 rpm.conf 的 \ 字段配置该 RPM 包信息。
+ - 若为 RPM 包裁剪粒度,则操作完成。
+ - 若为其他裁剪粒度,请执行步骤 4 。
+3. 用户自己提供 RPM 包,放至 /opt/imageTailor/custom/cfg_openEuler/usr_rpm 目录下。如果 RPM 包依赖于其他 RPM 包,也必须将依赖包放至该目录,因为新增 RPM 包需要和依赖 RPM 包同时打包至 ISO 镜像。
+ - 若为用户 RPM 包文件裁剪,则执行 4 。
+ - 其他裁剪粒度,则操作完成。
+4. 请在 rpm.conf 和 cmd.conf 中配置该 RPM 包中要保留的驱动、命令和库文件。如果有要裁剪的普通文件,需要在 cmd.conf 文件中的 \\ 区域配置。
+
+
+##### 配置文件说明
+
+| 对象 | 对应配置文件 | 填写区域 |
+| :----------- | :----------- | :----------------------------------------------------------- |
+| 添加驱动 | rpm.conf | \ \ \ 说明:其中驱动名称所在路径为 " /lib/modules/{内核版本号}/kernel/ " 的相对路径 |
+| 添加命令 | cmd.conf | \ \ \ |
+| 添加库文件 | cmd.conf | \ \ \ |
+| 删除其他文件 | cmd.conf | \ \ \ 说明:普通文件名称必须包含绝对路径 |
+
+**示例**
+
+- 添加驱动
+
+ ```shell
+
+
+
+
+ ......
+
+ ```
+
+- 添加命令
+
+ ```shell
+
+
+
+
+ ......
+
+ ```
+
+- 添加库文件
+
+ ```shell
+
+
+
+
+
+ ```
+
+- 删除其他文件
+
+ ```shell
+
+
+
+
+
+ ```
+
+#### 添加 hook 脚本
+
+hook 脚本由 OS 在启动和安装过程中调用,执行脚本中定义的动作。imageTailor 工具存放 hook 脚本的目录为 custom/cfg_openEuler/usr_install/hook,其下划分了多个子目录,每个子目录代表 OS 启动或安装的不同阶段。用户根据脚本需要被调用的阶段,将自定义脚本存放到对应子目录,OS 会在相应阶段调用该脚本。
+
+##### **脚本命名规则**
+
+用户可自定义脚本名称,但必须以 "S+数字(至少两位,个位数以0开头)" 开头,数字代表 hook 脚本的执行顺序。脚本名称示例:S01xxx.sh
+
+> **说明:**
+>
+>hook 目录下的脚本是通过 source 方式调用,所以脚本中需要谨慎使用 exit 命令,因为调用 exit 命令之后,整个安装的脚本程序也同步退出了。
+
+
+
+##### hook 子目录说明
+
+| hook 子目录 | hook 脚本举例 | hook 执行点 | 说明 |
+| :-------------------- | :---------------------| :------------------------------- | :----------------------------------------------------------- |
+| insmod_drv_hook | 无 | 加载 OS 驱动之后 | 无 |
+| custom_install_hook | S01custom_install.sh | 驱动加载完成后(即 insmod_drv_hook 执行后) | 用户可以自定义安装过程,不需要使用 OS 默认安装流程。 |
+| env_check_hook | S01check_hw.sh | 安装初始化之前 | 初始化之前检查硬件配置规格、获取硬件类型。 |
+| set_install_ip_hook | S01set_install_ip.sh | 安装初始化过程中,配置网络时 | 用户根据自身组网,自定义网络配置。 |
+| before_partition_hook | S01checkpart.sh | 在分区前调用 | 用户可以在分区之前检查分区配置文件是否正确。 |
+| before_setup_os_hook | 无 | 解压repo之前 | 用户可以进行自定义分区挂载操作。 如果安装包解压的路径不是分区配置中指定的根分区。则需要用户自定义分区挂载,并将解压路径赋值给传入的全局变量。 |
+| before_mkinitrd_hook | S01install_drv.sh | 执行 mkinitrd 操作之前 | initrd 放在硬盘的场景下,执行 mkinitrd 操作之前的 hook。用户可以进行添加、更新驱动文件等自定义操作。 |
+| after_setup_os_hook | 无 | 安装完系统之后 | 用户可以在安装完成之后进行系统文件的自定义操作,包括修改 grub.cfg 等 |
+| install_succ_hook | 无 | 系统安装流程成功结束 | 用户执行解析安装信息,回传安装是否成功等操作。install_succ_hook 不可以设置为 install_break。 |
+| install_fail_hook | 无 | 系统安装失败 | 用户执行解析安装信息,回传安装是否成功等操作。install_fail_hook 不可以设置为 install_break。 |
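+
+以下给出一个符合上述命名规则的 hook 脚本示意(脚本名、所在子目录与检查逻辑均为假设举例):
+
+```shell
+#!/bin/bash
+# env_check_hook/S01check_hw.sh:安装初始化之前做一个简单的架构检查(仅为示意)
+# 注意:hook 脚本以 source 方式被调用,避免直接使用 exit
+if [ "$(uname -m)" != "aarch64" ]; then
+    echo "unexpected architecture: $(uname -m)" >> /tmp/hook_check.log
+fi
+```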
+
+### 配置系统参数
+
+开始制作操作系统 ISO 镜像之前,需要配置系统参数,包括主机参数、初始密码、分区、网络、编译参数和系统命令行参数。
+
+#### 配置主机参数
+
+ /opt/imageTailor/custom/cfg_openEuler/sys.conf 文件的 \ \ 区域用于配置系统的常用参数,例如主机名、内核启动参数等。
+
+openEuler 提供的默认配置如下,用户可以根据需要进行修改:
+
+```shell
+
+ sys_service_enable='ipcc'
+ sys_service_disable='cloud-config cloud-final cloud-init-local cloud-init'
+ sys_utc='yes'
+ sys_timezone=''
+ sys_cut='no'
+ sys_usrrpm_cut='no'
+ sys_hostname='Euler'
+ sys_usermodules_autoload=''
+ sys_gconv='GBK'
+
+```
+
+配置中的各参数含义如下:
+
+- sys_service_enable
+
+ 可选配置。OS 默认启用的服务,多个服务请以空格分开。如果用户不需要新增系统服务,请保持默认值,默认值为 ipcc 。配置时请注意:
+
+ - 只能在默认配置的基础上增加系统服务,不能删减系统服务。
+ - 可以配置业务相关的服务,但是需要 repo 源中包含业务 RPM 包。
+ - 默认只开启该参数中配置的服务,如果服务依赖其他服务,需要将被依赖的服务也配置在该参数中。
+
+- sys_service_disable
+
+ 可选配置。禁止服务开机自启动的服务,多个服务请以空格分开。如果用户没有需要禁用的系统服务,请修改该参数为空。
+
+- sys_utc
+
+ 必选配置。是否采用 UTC 时间。yes 表示采用,no 表示不采用,默认值为 yes 。
+
+- sys_timezone
+
+ 可选配置。设置时区,即该单板所处的时区。可配置的范围为 openEuler 支持的时区,可通过 /usr/share/zoneinfo/zone.tab 文件查询。
+
+- sys_cut
+
+ 必选配置。是否裁剪 RPM 包。可配置为 yes、no 或者 debug 。yes 表示裁剪,no 表示不裁剪(仅安装 rpm.conf 中的 RPM 包),debug 表示裁剪但会保留 `rpm` 命令方便安装后定制。默认值为 no 。
+
+ > 说明:
+ >
+ > - imageTailor 工具会先安装用户添加的 RPM 包,再删除 cmd.conf
+ > 中 \ 区域的文件,最后删除
+ > cmd.conf 和 rpm.conf 中未配置的命令、库和驱动。
+ > - sys_cut='yes' 时,imageTailor 工具不支持 `rpm` 命令的安装,即使在 rpm.conf 中配置了也不生效。
+
+- sys_usrrpm_cut
+
+ 必选配置。是否裁剪用户添加到 /opt/imageTailor/custom/cfg_openEuler/usr_rpm 目录下的 RPM 包。yes 表示裁剪,no 表示不裁剪。默认值为 no 。
+
+ - sys_usrrpm_cut='yes' :imageTailor 工具会先安装用户添加的 RPM 包,然后删除 cmd.conf 中 \ 区域配置的文件,最后删除 cmd.conf 和 rpm.conf 中未配置的命令、库和驱动。
+
+ - sys_usrrpm_cut='no' :imageTailor 工具会安装用户添加的 RPM 包,不删除用户 RPM 包中的文件。
+
+- sys_hostname
+
+ 必选配置。主机名。大批量部署 OS 时,部署成功后,建议修改每个节点的主机名,确保各个节点的主机名不重复。
+
+ 主机名要求:字母、数字、"-" 的组合,首字母必须是字母或数字。字母支持大小写。字符个数不超过 63 。默认值为 Euler 。
+
+- sys_usermodules_autoload
+
+ 可选配置。系统启动阶段加载的驱动,配置该参数时,不需要填写后缀 .ko 。如果有多个驱动,请以空格分开。默认为空,不加载额外驱动。
+
+- sys_gconv
+
+ 可选配置。该参数用于定制 /usr/lib/gconv, /usr/lib64/gconv ,配置取值为:
+
+ - null/NULL:表示不配置。如果裁剪系统(sys_cut=“yes”),则/usr/lib/gconv 和 /usr/lib64/gconv 会被删除。
+ - all/ALL:不裁剪 /usr/lib/gconv 和 /usr/lib64/gconv 。
+ - xxx,xxx: 保留 /usr/lib/gconv 和 /usr/lib64/gconv 目录下对应的文件。若需要保留多个文件,可用 "," 分隔。
+
+- sys_man_cut
+
+ 可选配置。配置是否裁剪 man 文档。yes 表示裁剪,no 表示不裁剪。默认值为 yes 。
+
+
+
+> 说明:
+>
+> sys_cut 和 sys_usrrpm_cut 同时配置时,sys_cut 优先级更高,即遵循如下原则:
+>
+> - sys_cut='no'
+>
+> 无论 sys_usrrpm_cut='no' 还是 sys_usrrpm_cut='yes' ,都为系统 RPM 包裁剪粒度,即imageTailor 会安装 repo 源中的 RPM 包和 usr_rpm 目录下的 RPM 包,但不会裁剪 RPM 包中的文件。即使用户不需要这些 RPM 包中的部分文件,imageTailor 也不会进行裁剪。
+>
+> - sys_cut='yes'
+>
+> - sys_usrrpm_cut='no'
+>
+> 系统 RPM 包文件裁剪粒度:imageTailor 会根据用户配置,裁剪 repo 源中 RPM 包的文件。
+>
+> - sys_usrrpm_cut='yes'
+>
+> 系统和用户 RPM 包文件裁剪粒度:imageTailor 会根据用户的配置,裁剪 repo 源和 usr_rpm 目录中 RPM 包的文件。
+>
+
+
+
+#### 配置初始密码
+
+操作系统安装时,必须具有 root 初始密码和 grub 初始密码,否则裁剪得到的 ISO 在安装后无法使用 root 账号进行登录。本节介绍配置初始密码的方法。
+
+> 说明:
+>
+> root 初始密码和 grub 初始密码,必须由用户自行配置。
+
+##### 配置 root 初始密码
+
+###### 简介
+
+root 初始密码保存在 "/opt/imageTailor/custom/cfg_openEuler/rpm.conf" 中,用户通过修改该文件配置 root 初始密码。
+
+> **说明:**
+>
+>- 若使用 `mkdliso` 命令制作 ISO 镜像时需要使用 --minios yes/force 参数(制作在系统安装时进行系统引导的 initrd),则还需要在 /opt/imageTailor/kiwi/minios/cfg_minios/rpm.conf 中填写相应信息。
+
+/opt/imageTailor/custom/cfg_openEuler/rpm.conf 中 root 初始密码的默认配置如下,需要用户自行添加:
+
+```
+
+
+
+```
+
+各参数含义如下:
+
+- group:用户所属组。
+- pwd:用户初始密码的加密密文,加密算法为 SHA-512。${pwd} 需要替换成用户实际的加密密文。
+- home:用户的家目录。
+- name:需要配置用户的用户名。
+
+###### 修改方法
+
+用户在制作 ISO 镜像前需要修改 root 用户的初始密码,这里给出设置 root 初始密码的方法(需使用 root 权限):
+
+1. 添加用于生成密码的用户,此处假设 testUser。
+
+ ```shell
+ $ sudo useradd testUser
+ ```
+
+2. 设置 testUser 用户的密码。参考命令如下,根据提示设置密码:
+
+ ```shell
+ $ sudo passwd testUser
+ Changing password for user testUser.
+ New password:
+ Retype new password:
+ passwd: all authentication tokens updated successfully.
+ ```
+
+3. 查看 /etc/shadow 文件,testUser 后的内容(两个 : 间的字符串)即为加密后的密码。
+
+ ``` shell script
+ $ sudo cat /etc/shadow | grep testUser
+ testUser:$6$YkX5uFDGVO1VWbab$jvbwkZ2Kt0MzZXmPWy.7bJsgmkN0U2gEqhm9KqT1jwQBlwBGsF3Z59heEXyh8QKm3Qhc5C3jqg2N1ktv25xdP0:19052:0:90:7:35::
+ ```
+
+4. 拷贝上述加密密码替换 /opt/imageTailor/custom/cfg_openEuler/rpm.conf 中的 pwd 字段,如下所示:
+ ``` shell script
+
+
+
+ ```
+
+5. 若使用 `mkdliso` 命令制作 ISO 镜像时需要使用 --minios yes/force 参数,请修改 /opt/imageTailor/kiwi/minios/cfg_minios/rpm.conf 中对应用户的 pwd 字段。
+
+ ``` shell script
+
+
+
+ ```
+
+##### 配置 grub 初始密码
+
+grub 初始密码保存在 /opt/imageTailor/custom/cfg_openEuler/usr_file/etc/default/grub 中,用户通过修改该文件配置 grub 初始密码。如果未配置 grub 初始密码,制作 ISO 镜像会失败。
+
+> 说明:
+>
+> - 配置 grub 初始密码需要使用 root 权限。
+> - grub 密码对应的默认用户为 root 。
+>
+> - 系统中需有 grub2-set-password 命令,若不存在,请提前安装该命令。
+
+1. 执行如下命令,根据提示设置 grub 密码:
+
+ ```shell
+ $ sudo grub2-set-password -o ./
+ Enter password:
+ Confirm password:
+ grep: .//grub.cfg: No such file or directory
+ WARNING: The current configuration lacks password support!
+ Update your configuration with grub2-mkconfig to support this feature.
+ ```
+
+2. 命令执行完成后,会在当前目录生成 user.cfg 文件,grub.pbkdf2.sha512 开头的内容即 grub 加密密码。
+
+ ```shell
+ $ sudo cat user.cfg
+ GRUB2_PASSWORD=grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1
+ B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02
+ 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615
+ ```
+
+3. 复制上述密文,并在 /opt/imageTailor/custom/cfg_openEuler/usr_file/etc/default/grub 文件中增加如下配置:
+
+ ```shell
+ GRUB_PASSWORD="grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1
+ B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02
+ 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615"
+ ```
+
+
+#### 配置分区
+
+若用户想调整系统分区或业务分区,可以通过修改 /opt/imageTailor/custom/cfg_openEuler/sys.conf 文件中的 \ 实现。
+
+> **说明:**
+>
+>- 系统分区:存放操作系统的分区
+>- 业务分区:存放业务数据的分区
+>- 差别:在于存放的内容,而每个分区的大小、挂载路径和文件系统类型都不是区分业务分区和系统分区的依据。
+>- 配置分区为可选项,用户也可以在安装 OS 之后,手动配置分区
+
+ \ 的配置格式为:
+
+hd 磁盘号 挂载路径 分区大小 分区类型 文件系统类型 [二次格式化标志位]
+
+其默认配置如下:
+
+``` shell script
+
+hd0 /boot 512M primary ext4 yes
+hd0 /boot/efi 200M primary vfat yes
+hd0 / 30G primary ext4
+hd0 - - extended -
+hd0 /var 1536M logical ext4
+hd0 /home max logical ext4
+
+```
+
+各参数含义如下:
+
+- hd 磁盘号
+ 磁盘的编号。请按照 hdx 的格式填写,x 指第 x 块盘。
+
+ > **说明:**
+ >
+ >分区配置只在被安装机器的磁盘能被识别时才有效。
+
+- 挂载路径
+ 指定分区挂载的路径。用户既可以配置业务分区,也可以对默认配置中的系统分区进行调整。如果不挂载,则设置为 '-'。
+
+ > **说明:**
+ >
+ >- 分区配置中必须有 '/' 挂载路径。其他的请用户自行调整。
+ >- 采用 UEFI 引导时,在 x86_64 的分区配置中必须有 '/boot' 挂载路径,在 AArch64 的分区配置中必须有 '/boot/efi' 挂载路径。
+
+- 分区大小
+ 分区大小的取值有以下四种:
+
+ - G/g:指定以 GB 为单位的分区大小,例如:2G。
+ - M/m:指定以 MB 为单位的分区大小,例如:300M。
+ - T/t:指定以 TB 为单位的分区大小,例如:1T。
+ - MAX/max:指定将硬盘上剩余的空间全部用来创建一个分区。只能在最后一个分区配置该值。
+
+ > **说明:**
+>
+ >- 分区大小不支持小数,如果是小数,请换算成其他单位,调整为整数的数值。例如:不能填写 1.5G,应填写为 1536M。
+ >- 分区大小取 MAX/max 值时,剩余分区大小不能超过支持文件系统类型的限制(默认文件系统类型 ext4,限制大小 16T)。
+
+- 分区类型
+ 分区有以下三种:
+
+ - 主分区: primary
+ - 扩展分区:extended(该分区只需配置 hd 磁盘号即可)
+ - 逻辑分区:logical
+
+- 文件系统类型
+ 目前支持的文件系统类型有:ext4、vfat
+
+- 二次格式化标志位
+ 可选配置,表示二次安装时是否格式化:
+
+ - 是:yes
+ - 否:no 。不配置默认为 no 。
+
+ > **说明:**
+ >
+ >二次格式化是指本次安装之前,磁盘已安装过 openEuler 系统。当前一次安装跟本次安装的使用相同的分区表配置(分区大小,挂载点,文件类型)时,该标志位可以配置是否格式化之前的分区,'/boot' 和 '/' 分区除外,每次都会重新格式化。如果目标机器第一次安装,则该标志位不生效,所有指定了文件系统的分区都会进行格式化。
+
+#### 配置网络
+
+系统网络参数保存在 /opt/imageTailor/custom/cfg_openEuler/sys.conf 中,用户可以通过该文件的\\ 配置修改目标 ISO 镜像的网络参数,例如:网卡名称、IP地址、子网掩码。
+
+sys.conf 中默认的网络配置如下,其中 netconfig-0 代表网卡 eth0。如果需要配置多块网卡,例如eth1,请在配置文件中增加 \\,并在其中填写网卡 eth1 的各项参数。
+
+```shell
+
+BOOTPROTO="dhcp"
+DEVICE="eth0"
+IPADDR=""
+NETMASK=""
+STARTMODE="auto"
+
+```
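+
+例如,若要为网卡 eth1 配置静态地址,新增区域中的参数可参考如下填写(IP 地址与子网掩码仅为示例,外层区域标签的写法与默认配置中的 netconfig-0 保持一致):
+
+```shell
+BOOTPROTO="static"
+DEVICE="eth1"
+IPADDR="192.168.11.100"
+NETMASK="255.255.255.0"
+STARTMODE="auto"
+```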
+
+各参数含义请参见下表:
+
+| 参数名称 | 是否必配 | 参数值 | 说明 |
+ | :-------- | -------- | :------------------------------------------------ | :----------------------------------------------------------- |
+ | BOOTPROTO | 是 | none / static / dhcp | none:引导时不使用协议,不配地址 static:静态分配地址 dhcp:使用 DHCP 协议动态获取地址 |
+ | DEVICE | 是 | 如:eth1 | 网卡名称 |
+ | IPADDR | 是 | 如:192.168.11.100 | IP 地址 当 BOOTPROTO 参数为 static 时,该参数必配;其他情况下,该参数不用配置 |
+ | NETMASK | 是 | - | 子网掩码 当 BOOTPROTO 参数为 static 时,该参数必配;其他情况下,该参数不用配置 |
+ | STARTMODE | 是 | manual / auto / hotplug / ifplugd / nfsroot / off | 启用网卡的方法: manual:用户在终端执行 ifup 命令启用网卡。 auto \ hotplug \ ifplug \ nfsroot:当 OS 识别到该网卡时,便启用该网卡。 off:任何情况下,网卡都无法被启用。 各参数更具体的说明请在制作 ISO 镜像的机器上执行 `man ifcfg` 命令查看。 |
+
+
+#### 配置内核参数
+
+为了系统能够更稳定高效地运行,用户可以根据需要修改内核命令行参数。imageTailor 工具制作的 OS 镜像,可以通过修改 /opt/imageTailor/custom/cfg_openEuler/usr_file/etc/default/grub 中的 GRUB_CMDLINE_LINUX 配置实现内核命令行参数修改。 GRUB_CMDLINE_LINUX 中内核命令行参数的默认配置如下:
+
+```shell
+GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 crashkernel=512M oops=panic softlockup_panic=1 reserve_kbox_mem=16M crash_kexec_post_notifiers panic=3 console=tty0"
+```
+
+此处各配置含义如下(其余常见的内核命令行参数请查阅内核相关文档):
+
+- net.ifnames=0 biosdevname=0
+
+ 以传统方式命名网卡。
+
+- crashkernel=512M
+
+ 为 kdump 预留的内存空间大小为 512 MB。
+
+- oops=panic panic=3
+
+ 内核 oops 时直接 panic,并且 3 秒后重启系统。
+
+- softlockup_panic=1
+
+ 在检测到软死锁(soft-lockup)时让内核 panic。
+
+- reserve_kbox_mem=16M
+
+ 为 kbox 预留的内存空间大小为 16 MB。
+
+- console=tty0
+
+ 指定第一个虚拟控制台的输出设备为 tty0。
+
+- crash_kexec_post_notifiers
+
+ 系统 crash 后,先调用注册到 panic 通知链上的函数,再执行 kdump。
+
+### 制作系统
+
+操作系统定制完成后,可以通过 mkdliso 脚本制作系统镜像文件。 imageTailor 制作的 OS 为 ISO 格式的镜像文件。
+
+#### 命令介绍
+
+##### 命令格式
+
+**mkdliso -p openEuler -c custom/cfg_openEuler [--minios yes|no|force] [--sec] [-h]**
+
+##### 参数说明
+
+| 参数名称 | 是否必选 | 参数含义 | 取值范围 |
+| -------- | -------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| -p | 是 | 设置产品名称 | openEuler |
+| -c | 是 | 指定配置文件的相对路径 | custom/cfg_openEuler |
+| --minios | 否 | 制作在系统安装时进行系统引导的 initrd | 默认为 yes yes:第一次执行命令时会制作 initrd,之后执行命令会判断 'usr_install/boot' 目录下是否存在 initrd(sha256 校验)。如果存在,就不重新制作 initrd,否则制作 initrd 。 no:不制作 initrd,采用原有方式,系统引导和运行使用的 initrd 相同。 force:强制制作 initrd,不管 'usr_install/boot' 目录下是否存在 initrd。 |
+| --sec | 否 | 是否对生成的 ISO 进行安全加固 如果用户不输入该参数,则由此造成的安全风险由用户承担 | 无 |
+| -h | 否 | 获取帮助信息 | 无 |
+
+#### 制作指导
+
+使用 mkdliso 制作 ISO 镜像的操作步骤如下:
+
+> 说明:
+>
+> - mkdliso 所在的绝对路径中不能有空格,否则会导致制作 ISO 失败。
+> - 制作 ISO 的环境中,umask 的值必须设置为 0022。
+
+1. 使用 root 权限,执行 mkdliso 命令,生成 ISO 镜像文件。参考命令如下:
+
+ ```shell
+ # sudo /opt/imageTailor/mkdliso -p openEuler -c custom/cfg_openEuler --sec
+ ```
+
+ 命令执行完成后,制作出的新文件在 /opt/imageTailor/result/{日期} 目录下,包括 openEuler-aarch64.iso 和 openEuler-aarch64.iso.sha256 。
+
+2. 验证 ISO 镜像文件的完整性。此处假设日期为 2022-03-21-14-48 。
+
+ ```shell
+ $ cd /opt/imageTailor/result/2022-03-21-14-48/
+ $ sha256sum -c openEuler-aarch64.iso.sha256
+ ```
+
+ 回显如下,表示 ISO 镜像文件完整,ISO 制作完成。
+
+ ```
+ openEuler-aarch64.iso: OK
+ ```
+
+   若回显如下,说明 ISO 镜像文件完整性被破坏,需要重新制作。
+
+ ```shell
+ openEuler-aarch64.iso: FAILED
+ sha256sum: WARNING: 1 computed checksum did NOT match
+ ```
+
+3. 查看日志
+
+ 镜像制作完成后,可以根据需要(例如制作出错时)查看日志。第一次制作镜像时,对应的日志和安全加固日志被压缩为一个 tar 包(日志的命名格式为:sys_custom_log_{*日期* }.tar.gz),存放在 result/log 目录下。该目录只保留最近时间的 50 个日志压缩包,超过 50 个时会对旧文件进行覆盖。
+
+
+
+### 裁剪时区
+
+定制完成的 ISO 镜像安装后,用户可以根据需求裁剪 openEuler 系统支持的时区。本节介绍裁剪时区的方法。
+
+openEuler 操作系统支持的时区信息存放在时区文件夹 /usr/share/zoneinfo 下,可通过如下命令查看:
+
+```shell
+$ ls /usr/share/zoneinfo/
+Africa/ America/ Asia/ Atlantic/ Australia/ Etc/ Europe/
+Pacific/ zone.tab
+```
+
+其中每个子文件夹代表一个 Area ,当前 Area 包括:大陆、海洋以及 Etc 。每个 Area 文件夹内部则包含了隶属于其的 Location 。一个 Location 一般为一座城市或者一个岛屿。
+
+所有时区均以 Area/Location 的形式来表示,比如中国大陆南部使用北京时间,其时区为 Asia/Shanghai(Location 并不一定会使用首都)。对应的,其时区文件为:
+
+```
+/usr/share/zoneinfo/Asia/Shanghai
+```
+
+若用户希望裁剪某些时区,则只需将对应的时区文件删除即可。
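+
+例如,假设不需要 Pacific 区域下的时区(仅为示意),可参考如下命令删除对应目录:
+
+```shell
+$ sudo rm -rf /usr/share/zoneinfo/Pacific
+```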
+
+### 定制示例
+
+本节给出使用 imageTailor 工具定制一个 ISO 操作系统镜像的简易方案,方便用户了解制作的整体流程。
+
+1. 检查制作 ISO 所在环境是否满足要求。
+
+ ``` shell
+ $ cat /etc/openEuler-release
+ openEuler release 22.03 LTS
+ ```
+
+2. 确保根目录有 40 GB 以上空间。
+
+ ```shell
+ $ df -h
+ Filesystem Size Used Avail Use% Mounted on
+ ......
+ /dev/vdb 196G 28K 186G 1% /
+ ```
+
+3. 安装 imageTailor 裁剪工具。具体安装方法请参见 [安装工具](#安装工具) 章节。
+
+ ```shell
+ $ sudo yum install -y imageTailor
+ $ ll /opt/imageTailor/
+ total 88K
+ drwxr-xr-x. 3 root root 4.0K Mar 3 08:00 custom
+ drwxr-xr-x. 10 root root 4.0K Mar 3 08:00 kiwi
+ -r-x------. 1 root root 69K Mar 3 08:00 mkdliso
+ drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 repos
+ drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 security-tool
+ ```
+
+4. 配置本地 repo 源。
+
+ ```shell
+ $ wget https://repo.openeuler.org/openEuler-22.03-LTS/ISO/aarch64/openEuler-22.03-LTS-everything-aarch64-dvd.iso
+ $ sudo mkdir -p /opt/openEuler_repo
+ $ sudo mount openEuler-22.03-LTS-everything-aarch64-dvd.iso /opt/openEuler_repo
+ mount: /opt/openEuler_repo: WARNING: source write-protected, mounted read-only.
+ $ sudo rm -rf /opt/imageTailor/repos/euler_base && sudo mkdir -p /opt/imageTailor/repos/euler_base
+ $ sudo cp -ar /opt/openEuler_repo/Packages/* /opt/imageTailor/repos/euler_base
+ $ sudo chmod -R 644 /opt/imageTailor/repos/euler_base
+ $ sudo ls /opt/imageTailor/repos/euler_base|wc -l
+ 2577
+ $ sudo umount /opt/openEuler_repo && sudo rm -rf /opt/openEuler_repo
+ $ cd /opt/imageTailor
+ ```
+
+5. 修改 grub/root 密码
+
+ 以下 ${pwd} 的实际内容请参见 [配置初始密码](#配置初始密码) 章节生成并替换。
+
+ ```shell
+ $ cd /opt/imageTailor/
+ $ sudo vi custom/cfg_openEuler/usr_file/etc/default/grub
+ GRUB_PASSWORD="${pwd1}"
+ $
+ $ sudo vi kiwi/minios/cfg_minios/rpm.conf
+
+
+
+ $
+ $ sudo vi custom/cfg_openEuler/rpm.conf
+
+
+
+ ```
+
+6. 执行裁剪命令。
+
+ ```shell
+ $ sudo rm -rf /opt/imageTailor/result
+ $ sudo ./mkdliso -p openEuler -c custom/cfg_openEuler --minios force
+ ......
+ Complete release iso file at: result/2022-03-09-15-31/openEuler-aarch64.iso
+ move all mkdliso log file to result/log/sys_custom_log_20220309153231.tar.gz
+ $ ll result/2022-03-09-15-31/
+ total 889M
+ -rw-r--r--. 1 root root 889M Mar 9 15:32 openEuler-aarch64.iso
+ -rw-r--r--. 1 root root 87 Mar 9 15:32 openEuler-aarch64.iso.sha256
+ ```
+
+
diff --git "a/docs/zh/docs/TailorCustom/imageTailor\344\275\277\347\224\250\346\214\207\345\215\227.md" "b/docs/zh/docs/TailorCustom/imageTailor\344\275\277\347\224\250\346\214\207\345\215\227.md"
deleted file mode 100644
index c918c59d2fa87699c6f76be42b97d81ba2bba41c..0000000000000000000000000000000000000000
--- "a/docs/zh/docs/TailorCustom/imageTailor\344\275\277\347\224\250\346\214\207\345\215\227.md"
+++ /dev/null
@@ -1,884 +0,0 @@
-# imageTailor 使用指南
-
-- [1. 概述](#1、概述)
- - [1.1 背景介绍](#1.1、背景介绍)
- - [1.2 功能介绍](#1.2、功能介绍)
-- [2. 发布包介绍](#2、发布包介绍)
- - [2.1 获取发布包](#2.1、获取发布包)
- - [2.2 安装 imageTailor 工具](#2.2、安装imageTailor工具)
- - [2.3 imageTailor 包目录介绍](#2.3、imageTailor包目录介绍)
-- [3. 定制 OS](#3、定制OS)
- - [3.1 使用流程](#3.1、使用流程)
- - [3.2 检查软硬件环境](#3.2、检查软硬件环境)
- - [3.3 定制业务包](#3.3、定制业务包)
- - [3.4 配置系统参数](#3.4、配置系统参数)
- - [3.5 制作 OS](#3.5、制作OS)
- - [3.6 时区裁剪](#3.6、时区裁剪)
- - [3.7 裁剪流程使用实例](#3.7、裁剪流程使用实例)
-
-## 1、概述
-介绍 imageTailor 是操作系统镜像文件级裁剪定制工具,该工具是基于开源软件 kiwi 进行功能增强开发,实现了多种裁剪定制功能,
-包括系统包裁剪定制、系统配置裁剪定制、用户文件定制等,最后通过 cdrkit 生成 iso 格式的镜像文件。
-
-### 1.1、背景介绍
-操作系统除内核外,还包含大量提供各种功能的外围包。因此,通用的操作系统功能较多,占用资源多,这带来了两方面的影响:
-
-- 大量占用有限的内存、磁盘、CPU 等资源,导致系统运行低效。
-- 很多功能客户不会使用,增加开发和维护成本。
-
-OS 定制特性正是基于这样的现状提出。openEuler 提供 imageTailor 定制工具,根据客户的需求量身打造操作系统。用户可裁剪不需要的外围包,还可以添加自己的业务包。
-
-### 1.2、功能介绍
-
-#### 1.2.1、基本功能
-imageTailor 裁剪定制工具为用户提供以下功能:
-
-- 系统包裁剪定制:用户可选择默认安装的 RPM,定制裁剪系统命令、库、驱动。
-- 系统配置定制:可以配置主机名、启动服务、时区/UTC、网络、分区、加载驱动、版本号。
-- 用户软件包定制:将用户的 RPM 包安装到系统中,将用户文件定制到系统中。
-
-#### 1.2.2、基本概念
-
-- 驱动:硬件驱动,为内核模块(后缀名为 ko)。一般是硬件部门将自己开发的驱动添加进 OS 中。驱动存放于 /lib/modules/{内核版本号}/kernel/ 目录下。
-- 命令:可执行文件,用户定制的命令可以是标准 openEuler 自带的,也可以是用户自己开发的。只有当可执行文件位于 /bin、/sbin、/usr/bin 或 /usr/sbin 时,才会被 imageTailor 工具识别为命令,如果是放在其他目录下,则被视为普通的文件。
-- 库:库是一种可执行程序的二进制形式,可以被操作系统载入内存执行。它是别人已写好的、可复用的代码。库文件一般存放在 /lib、/lib64、/usr/lib/、/usr/lib64、/usr/X11R6/lib 或 /usr/X11R6/lib64 目录下,统一以 lib 开头,格式为 libxxx。
-- 自定义文件:这里可以是安装脚本、配置脚本或其他文件。imageTailor 工具对自定义文件不做任何限制,只要是用户自己提供的文件,imageTailor 都默认打包至 OS 中。
-- RPM:RPM 包可以简单的理解为 linux 环境中的软件包。具体说来,一个 RPM 包就是能够让应用软件运行的全部文件的集合,它记录了二进制软件的内容、安装的位置、软件包的描述信息、软件包之间的依赖关系等信息。linux 中安装和卸载 RPM 包都有对应的命令。
-
-
-## 2、发布包介绍
-
-### 2.1、获取发布包
-用户需下载 openEuler 发布包。
-
-1. 获取 openEuler 发布包
-
- AArch64 架构的镜像下载链接为:
- ```
- http://121.36.84.172/dailybuild/openEuler-22.03-LTS/openeuler-{时间版本}/ISO/aarch64/openEuler-22.03-LTS-aarch64-dvd.iso
-
- eg:
- http://121.36.84.172/dailybuild/openEuler-22.03-LTS/openeuler-2022-03-05-12-01-45/ISO/aarch64/openEuler-22.03-LTS-aarch64-dvd.iso
- ```
-
- > **说明:**
- > x86_64 架构的镜像下载链接为:
- >
- > http://121.36.84.172/dailybuild/openEuler-22.03-LTS/openeuler-{时间版本}/ISO/x86_64/openEuler-22.03-LTS-x86_64-dvd.iso
-
-2. 校验发布包完整性
-
- 1. 以 openEuler-22.03-LTS-aarch64-dvd.iso 为例。
-
- 2. 下载 iso 到制作 OS 的机器上,并下载与 iso 同名的 sha256sum 文件(此处为 openEuler-22.03-LTS-x86_64-dvd.iso.sha256sum)
- ```
- $ wget http://121.36.84.172/dailybuild/openEuler-22.03-LTS/openeuler-2022-03-05-12-01-45/ISO/x86_64/openEuler-22.03-LTS-x86_64-dvd.iso
- $ wget http://121.36.84.172/dailybuild/openEuler-22.03-LTS/openeuler-2022-03-05-12-01-45/ISO/x86_64/openEuler-22.03-LTS-x86_64-dvd.iso.sha256sum
- ```
-
- 3. 获取 sha256 文件中的校验值;计算 openEuler-22.03-LTS-aarch64-dvd.iso 的 sha256 值。
- ```
- $ cat openEuler-22.03-LTS-aarch64-dvd.iso.sha256sum
- d22787ac209e544bb43c08f7c44f54373cafb79430e7fcae3d488d197f73793a openEuler-22.03-LTS-aarch64-dvd.iso
- $ sha256sum openEuler-22.03-LTS-aarch64-dvd.iso
- d22787ac209e544bb43c08f7c44f54373cafb79430e7fcae3d488d197f73793a openEuler-22.03-LTS-aarch64-dvd.iso
- ```
-
- 4. 对比以上两个计算的校验值是否一致。不一致说明文件完整性已被破坏,需要重新获取。
-
-
-### 2.2、安装imageTailor工具
-
-此处以 openEuler 22.03 LTS 版本的 x86_64 架构为例,介绍 imageTailor 裁剪工具的安装操作
-
-1. 确认机器已经安装操作系统 openEuler 22.03 LTS(imageTailor 裁剪工具的运行环境)
-
- ```
- $ cat /etc/openEuler-release
- openEuler release 22.03 LTS
- ```
-
-2. 下载对应架构的 ISO 镜像(必须是 everything 版本),并存放在任一目录(建议该目录磁盘空间大于 20 GB),此处假设存放在 /root/imageTailor_mount 目录。
-
- x86_64 架构的镜像下载链接为:
- ```
- http://121.36.84.172/dailybuild/openEuler-22.03-LTS/openeuler-{时间版本}/ISO/x86_64/openEuler-22.03-LTS-everything-x86_64-dvd.iso
- ```
- 其他详细下载链接参考 2.1、获取发布包
-
-3. 创建文件 /etc/yum.repos.d/local.repo,配置对应 yum 源。配置内容参考如下,其中 baseurl 是用于挂载 ISO 镜像的目录:
-
- ```
- [local]
- name=local
- baseurl=file:///root/imageTailor_mount
- gpgcheck=0
- enabled=1
- ```
-
-4. 使用 root 权限,挂载光盘镜像到 /root/imageTailor_mount 目录(请与上述 repo 文件中配置的 baseurl 保持一致)作为 yum 源,参考命令如下:
-
- ```
- $ sudo mount -o loop /root/temp/openEuler-22.03-LTS-everything-x86_64-dvd.iso /root/imageTailor_mount/
- mount: /root/imageTailor_mount: WARNING: source write-protected, mounted read-only.
- ```
-
-5. 使 yum 源生效:
-
- ```
- yum clean all
- yum makecache
- ```
-
-6. 使用 root 权限,安装 imageTailor 裁剪工具:
-
- ```shell
- sudo yum install -y imageTailor
- ```
-
-7. 使用 root 权限,确认工具已安装成功。
-
- ```
- $ cd /opt/imageTailor/
- $ sudo ./mkdliso -h
- -------------------------------------------------------------------------------------------------------------
- Usage: mkdliso -p product_name -c configpath [--minios yes|no|force] [-h] [--sec]
- Options:
- -p,--product Specify the product to make, check custom/cfg_yourProduct.
- -c,--cfg-path Specify the configuration file path, the form should be consistent with custom/cfg_xxx
- --minios Make minios: yes|no|force
- --sec Perform security hardening
- -h,--help Display help information
-
- Example:
- command:
- ./mkdliso -p openEuler -c custom/cfg_openEuler --sec
-
- help:
- ./mkdliso -h
- -------------------------------------------------------------------------------------------------------------
- $
- ```
-
-### 2.3、imageTailor包目录介绍
-imageTailor 工具包含用户需要的 RPM 包源、imageTailor 定制工具和用户的配置文件。工具包部分目录结构如下:
-
- ```
- [imageTailor]
- |-[custom]
- |-[cfg_openEuler]
- |-[usr_file] // 存放用户添加的文件
- |-[usr_install] // 存放用户的 hook 脚本
- |-[all]
- |-[conf]
- |-[hook]
- |-[cmd.conf] // 配置 OS 默认使用的命令和库
- |-[rpm.conf] // 配置 OS 默认安装的 RPM 包和驱动列表
- |-[security_s.conf] // 配置安全加固策略
- |-[sys.conf] // 配置 OS 系统参数
- |-[kiwi] // imageTailor 基础配置
- |-[repos] // RPM 源,制作 OS 需要的 RPM 包
- |-[security-tool] // 安全加固工具
- |-mkdliso // 制作 OS 的可执行脚本
- ```
-
-## 3、定制OS
-使用 imageTailor 工具将业务 RPM 包、自定义文件、驱动、命令和文件打包至 OS 中,制作出 OS。
-
-### 3.1、使用流程
-
-定制工具的使用流程为:获取发布包、检查软硬件环境、定制业务包、配置系统参数和制作 OS。
-
-使用定制工具进行 OS 定制的流程请参见 图1
-- 图1 定制工具使用流程图
-
- 
-
-步骤说明如 表1
-- 表1 定制步骤说明
-
- | 步骤 | | 说明 |
- | :--- | :--- | :--- |
- | 检查软硬件环境 | |了解制作 OS 的机器需要满足的软硬件要求。 |
- | 定制业务包 | 添加业务 RPM 包 | 当用户希望同时部署业务和 OS,可以将业务 RPM 包打包至 OS 中。如果是用户自己提供的 RPM 包,请放在 custom/cfg_openEuler/usr_rpm 目录下。 |
- | | 添加自定义文件 | 当用户希望 OS 在安装或启动时,能够进行硬件检查、系统配置检查、安装驱动或其他操作,可编写自定义文件,并将该文件打包至 OS 中。 |
- | | 添加驱动 | 当 openEuler 的 RPM 包源未包含用户需要的驱动时,可以使用 imageTailor 工具将该驱动打包至 OS 中。 |
- | | 添加命令 | 当 openEuler 的 RPM 包源未包含用户需要的命令时,可以使用 imageTailor 工具将该命令打包至 OS 中。 |
- | 定制业务包 | 添加库文件 | 当 openEuler 的 RPM 包源未包含用户需要的库文件时,可以使用 imageTailor 工具将该库文件打包至 OS 中。 |
- | 配置系统参数 | 配置主机参数 | 为了确保 OS 安装和启动成功,需要配置主机参数。 |
- | | 配置分区 | 用户可以根据业务规划配置业务分区,同时可以调整系统分区。 |
- | | 配置网络 | 根据产品的组网情况,配置网络参数,例如:网卡名称、IP 地址、子网掩码。 |
- | 配置系统参数 | 配置编译参数 | 当用户希望 imageTailor 工具先编译代码,再将其添加至 OS 中,则需要配置编译参数。 |
- | 配置安全加固策略 | | openEuler 发布给产品时已进行安全加固,满足公司安全规定。产品如果需要根据业务情况进行二次加固,只能在定制阶段进行,编辑 security_s.conf 进行安全加固。具体操作请参见安全加固操作指南。 |
- | 制作OS | | 使用 imageTailor 工具提供的脚本制作 OS。 |
-
-### 3.2、检查软硬件环境
-
-为了确保 OS 定制成功,用于制作 OS 的机器需要满足一定的软硬件要求
-- 机器架构为 x86_64 或者 AArch64
-- 操作系统为 openEuler 22.03 LTS
-
- > **说明:**
- >
- >- openEuler 22.03 满足 python 3.9 与 kernel 5.10 要求
-
-- python 版本 3.9 以上
-- kernel 内核版本 5.10 以上
-- 用户裁剪机器根目录 '/' 需要 40 GB 以上空间
-- 关闭 SElinux 服务
- ```
- $ sudo setenforce 0
- $ getenforce
- Permissive
- ```
-
-### 3.3、定制业务包
-
-用户可以根据业务需要,将业务 RPM 包、自定义文件、驱动、命令和库文件打包至 OS 中。
-
-#### 3.3.1、操作指导
-
-用户要添加驱动、命令、库文件和脚本,可能是文件的形式,也可能是 RPM 包的形式。不同的形式有不同的定制方法。
-
-1. 添加文件
-
- 这里的文件可以是驱动、命令、库文件,也可以是脚本。用户只需将文件放至 custom/cfg_openEuler/usr_file 目录下即可。
-
- **注意事项**:
-
- - 命令必须具有可执行权限,否则 imageTailor 工具无法将该命令打包至 OS 中。
-
- - 在 custom/cfg_openEuler/usr_file 目录下放置文件时,文件的目录结构必须是从根目录开始的完整目录,以便 imageTailor 工具能够将该文件放至正确的目录下。
-
- 例如:用户希望文件 file1 最终能放至 /opt 目录下,则需要在 usr_file 目录下新建 opt 目录,再将 file1 文件拷贝至 opt 目录。如下:
-
- ```
- $ tree
- .
- ├── etc
- │ ├── default
- │ │ └── grub
- │ └── profile.d
- │ └── csh.precmd
- └── opt
- └── file1
-
- 4 directories, 3 files
- & pwd
- /opt/imageTailor/custom/cfg_openEuler/usr_file
- ```
-
- - 上述说的目录需要是系统中的真实路径(路径中不含软链接,可在系统中使用 realpath 或 readlink -f 命令查询真实路径)。
-
- - 在 usr_file 新建目录时,若是系统中已有的目录,新建的目录权限需要和系统中目录权限保持一致,不然会导致 usr_file 下目录权限覆盖系统中目录的权限,可能导致系统问题。在添加目录之前请确定系统中目录的权限。
-
- - 如果用户自己提供的脚本希望由 OS 在启动和安装阶段调用,即为 hook 脚本,需要将其放在 hook 目录下。
-
-2. 添加 RPM 包
-
- 1. 确认 repos 源中是否包含该 RPM 包。
- - 有,请执行 3
- - 无,请执行 2
- 2. 用户自己提供 RPM 包,放至 custom/cfg_openEuler/usr_rpm 目录下。如果 RPM 包依赖于其他 RPM 包,也必须将依赖包打包至 OS 中。
- 请用户根据采用哪种裁剪粒度,来执行后续的操作。
- - 用户 RPM 包文件裁剪,则执行 4
- - 其他裁剪粒度,则操作完成。
- 3. 在 rpm.conf 的 \ 字段配置该 RPM 包信息。请用户根据采用哪种裁剪粒度,来执行后续的操作。
- - RPM 包裁剪粒度,则操作完成
- - 其他裁剪粒度,则执行 4
- 4. 在 cmd.conf 和 rpm.conf 中配置该 RPM 包中要保留的驱动、命令和库文件。如果有要裁剪的普通文件,也需要在 cmd.conf 文件中配置。
-
-3. 表 配置文件填写说明
-
-- 表 配置文件填写说明
-
- | 对象 | 操作 | 需要填写的配置文件 | 填写的区域 | 填写方法 | 注意事项 | 填写示例 |
- | :--- | :--- | :--- |:--- | :--- | :--- | :--- |
- | 驱动 | 保留 | custom/cfg_openEuler/rpm.conf | \\ | 新增一行 \ | 填写驱动名称时请填写相对于 "/lib/modules/{内核版本号}/kernel/" 的相对路径。 | 如下 图1 |
- | 命令 | 保留 | custom/cfg_openEuler/cmd.conf | \\ | 新增一行 \ | 填写命令名称。 | 如下 图2 |
- | 库文件 | 保留 | custom/cfg_openEuler/cmd.conf | \\ | 新增一行 \ | 填写库文件名称。 | 如下 图3 |
- | 其他文件 | 删除 | custom/cfg_openEuler/cmd.conf | \\ | 新增一行 \ | 填写普通文件名称,请填写绝对路径。 | 如下 图4 |
-
-- 图1 drivers 填写示例
-
- 
-
-- 图2 tools 填写示例
-
- 
-
-- 图3 libs 填写示例
-
- 
-
-- 图4 delete 填写示例
-
- 
-
-
-#### 3.3.2、定制示例:添加驱动
-
-1. 用户场景
-
- 当 imageTailor 默认配置中会裁剪部分驱动,而用户又希望添加 repos 源中的某款硬件驱动至 OS 中。
-
-2. 操作步骤
-
- 由于该硬件驱动已经在 repos 源中,因此用户只需将其填写配置在 rpm.conf 中即可,避免被 imageTailor 裁剪掉。
-
- 1. 在 custom/cfg_openEuler/rpm.conf 中的 \\ 区域添加该驱动信息。
-
- 具体操作请参见 表 《配置文件填写说明》,填写时关注表中的注意事项一列。
-
- 2. (可选)当用户希望 OS 在启动时自动加载驱动,则需要配置 sys.conf 文件。
-
- 具体操作请参见 表 《sysconfig 系统参数说明》 中的 sys_usermodules_autoload 参数。
-
-#### 3.3.3、定制实例:添加命令
-
-1. 用户场景
-
- 用户由于业务需要,希望添加某个命令至 OS 中。但用户不知道该命令是否已包含在 repos 中。
-
-2. 操作步骤
-
- 1. 确认该命令属于的 RPM 包名称。如果命令中没有,则代表 repos 源中没有该命令,请用户自行获取该命令所属的 RPM 包。
- - 是,请执行 3
- - 否,请用户到标准 openEuler 的 ISO 文件中获取 RPM 包。获取后,执行 2。
- 2. 将包含命令的 RPM 包放至 usr_rpm 目录下。
- 3. 在 custom/cfg_openEuler/cmd.conf 文件的 \\ 区域添加命令信息区域添加该命令信息。
-
- 具体操作请参见 表 《配置文件填写说明》,填写时关注表中的注意事项一列。
-
-#### 3.3.4、定制实例:添加库文件
-
-1. 用户场景
-
- 用户在添加命令或驱动时,出现了缺少库文件的问题( imageTailor 会自动报错),则需要先添加库文件。
-
-2. 操作步骤
-
- 1. 确认该库文件属于的 RPM 包名称。如果库文件中没有,则代表 repos 源中没有该库文件,请用户自行获取该库文件。
- - 是,请执行 3
- - 否,请用户到标准 openEuler 的 ISO 文件中获取 RPM 包。获取后,执行 2。
- 2. 将包含库文件的 RPM 包放至 usr_rpm 目录下。
- 3. 在 custom/cfg_openEuler/cmd.conf 文件的 \\ 区域添加命令信息区域添加该库文件信息。
-
- 具体操作请参见 表 《配置文件填写说明》,填写时关注表中的注意事项一列。
-
-#### 3.3.5、参考:hook 目录说明
-
-hook 在 linux 中指钩子函数,由 OS 在启动和安装过程中调用,执行钩子函数中定义的动作。
-
-hook 目录为 custom/cfg_openEuler/usr_install/hook/,openEuler 在该目录下规划了子目录,
-每个子目录代表 OS 启动和安装的不同阶段,用户将脚本放在子目录下,OS 则会在相应的阶段调用该脚本。
-
-- 表 hook 目录说明
-
- | hook 目录 | 脚本命名规则 | hook 脚本形式 | hook 执行点 | 说明 |
- | :--- | :--- | :--- |:--- | :--- |
- | insmod_drv_hook | 如下 脚本命名规则 | 无 | 加载OS驱动之后 | 无 |
- | custom_install_hook | | eg: S01custom_install.sh | 驱动加载完成后 | 用户可以自定义安装过程,不需要使用 OS 默认安装流程。如果用户不添加脚本,openEuler 采用自己默认的安装流程。 |
- | env_check_hook | | eg: S01check_hw.sh S02get_hwtype.sh | 安装初始化之前 | 初始化之前检查硬件配置规格、获取硬件类型。如果用户不添加脚本,则 openEuler 不做任何动作。 |
- | set_install_ip_hook | | eg: S01set_install_ip.sh | 安装初始化过程中,配置网络时 | 用户根据自身组网,自定义网络配置。如果用户不添加脚本,则 openEuler 不做任何动作。 |
- | before_partition_hook | | eg: S01checkpart.sh $part_conf_file(分区配置文件路径) | 在分区前调用 | 用户可以在分区之前检查分区配置文件是否正确。如果用户不添加脚本,则 openEuler 不做任何动作。 |
- | before_setup_os_hook | | 无 | 解压repo之前 | 用户可以进行自定义分区挂载操作。 如果安装包解压的路径不是分区配置中指定的根分区。则需要用户自定义分区挂载,并将解压路径赋值给传入的全局变量。 使用方法:eval $1=待解压的路径 |
- | before_mkinitrd_hook | | eg: S01install_drv.sh | 执行 mkinitrd 操作之前 | initrd 放在硬盘的场景下,执行 mkinitrd 操作之前的 hook。用户可以进行添加、更新驱动文件等自定义操作。如果用户不添加脚本,则 openEuler 不做任何动作。 |
- | after_setup_os_hook | | 无 | 安装完系统之后 | OS 包解压到硬盘之后的 hook,用户可以进行系统文件的自定义操作,包括修改 grub.cfg 等 |
- | install_succ_hook | | 无 | 系统安装流程成功结束 | 用户执行解析安装信息,回传安装是否成功等操作。install_succ_hook 不可以设置为 install_break。 |
- | install_fail_hook | | 无 | 系统安装失败 | 用户执行解析安装信息,回传安装是否成功等操作。install_fail_hook 不可以设置为 install_break。 |
-
-- 脚本命名规则:用户可自定义脚本名称(支持多个),必须以 S+ 数字开头,一位数前面补 0,代表 hook 脚本执行顺序。脚本名称格式示例:S01xxx.sh
-
-> **说明:**
->
->- hook 目录下的钩子脚本是通过 source 方式调用,所以脚本中需要谨慎使用 exit 命令,因为调用 exit 命令之后,整个安装的脚本程序也同步退出了。
-
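-以下给出一个 hook 脚本的写法示意(脚本名 S01check_hw.sh、检查项均为假设,仅用于说明 source 调用方式下应使用 return 而非 exit 返回):
-
-``` shell script
-#!/bin/bash
-# 示意脚本:假设放置于 custom/cfg_openEuler/usr_install/hook/env_check_hook/S01check_hw.sh
-# hook 脚本以 source 方式被调用,因此这里用 return 返回,避免 exit 导致整个安装程序退出
-check_hw()
-{
-    # 假设的检查项:内存不少于 4GB
-    local mem_kb
-    mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
-    if [ "${mem_kb}" -lt $((4 * 1024 * 1024)) ]; then
-        echo "memory check failed"
-        return 1
-    fi
-    return 0
-}
-check_hw
-```
-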
-### 3.4、配置系统参数
-
-配置系统参数包含:主机参数、分区、网络和编译参数。
-
-系统参数都在 custom/cfg_openEuler/sys.conf 文件中。
-
-#### 3.4.1、配置主机参数
-
-为了确保 OS 安装成功,需要配置主机参数,例如:主机名、内核启动参数。
-
-在 sys.conf 文件的 \\ 区域配置系统常用参数。
-
-- openEuler 提供的默认配置如下:
-
- ```
-
- sys_service_enable='ipcc'
- sys_service_disable='cloud-config cloud-final cloud-init-local cloud-init'
- sys_utc='yes'
- sys_timezone=''
- sys_cut='no'
- sys_usrrpm_cut='no'
- sys_hostname='Euler'
- sys_usermodules_autoload=''
- sys_gconv='GBK'
-
- ```
- 参数说明请参见下表1
-
-- 表1 sysconfig 系统参数说明
-
- | 参数 | 说明 | 取值范围 | 默认值 | 是否必配 |
- | :--- | :--- | :--- |:--- | :--- |
-  | sys_service_enable | OS 默认启用的服务,如果有多个服务请以空格分开。请注意以下几点:只能在默认配置的基础上增加系统服务,不能删减系统服务;可以配置业务相关的服务,但需要先按照 3.3-定制业务包 中描述的方法将业务 RPM 包打包至 OS 中,然后才能在此处配置;默认只开启该配置文件中的服务,如果服务依赖其他服务,需要将依赖的服务也配置在该参数中。 | 无 | "ipcc" | 选配。如果用户没有新增的系统服务,请保持默认值。 |
-  | sys_service_disable | 禁止服务开机自启动,用法参考 sys_service_enable。 | 无 | 无 | 选配。如果用户没有禁用的系统服务,请保持默认值 |
- | sys_utc | 是否采用 UTC 时间。 | yes: 是 no: 否 | yes | 必配 |
- | sys_timezone | 设置时区,即该单板所处的时区。 | 查询 openEuler 支持的时区,请在制作 OS 的机器上查看 /usr/share/zoneinfo/zone.tab 文件。 | RPC: 中国 | 选配 |
-  | sys_cut | 是否裁剪。裁剪版的 OS 会先安装 rpm.conf 中的 RPM 包,然后卸载掉 cmd.conf 中不包含的命令和库文件,最后卸载掉 rpm.conf 中 \\ 区域的 RPM 包。**说明:** imageTailor 工具不支持 RPM 管理包本身的安装,即使用户在 rpm.conf 中配置了 RPM 包本身,imageTailor 工具也不会安装。不裁剪版的 OS 仅安装 rpm.conf 中的 RPM 包,不再做其他操作。用户根据业务需要决定是否裁剪。 | - yes:裁剪 - no:不裁剪 - debug:裁剪但是会保留 RPM 命令本身。该参数和 sys_usrrpm_cut 参数的具体含义请参见表后的 sys_cut 和 sys_usrrpm_cut 参数说明。 | 'no' | 必配 |
-  | sys_usrrpm_cut | 是否裁剪用户添加的 RPM 包中的指定文件。裁剪:imageTailor 工具会先安装用户添加的 RPM 包,再删除掉 cmd.conf 中 \ 区域的文件,最后删除掉 cmd.conf 和 rpm.conf 中未配置的命令、库和驱动。不裁剪:imageTailor 工具会安装用户添加的 RPM 包,不删除用户 RPM 包中的文件。**须知:** 在不裁剪的情况下,当用户自己制作的 RPM 包中有安装过程中生成的文件,需要在 cmd.conf 或 rpm.conf 中配置,避免 imageTailor 将其删除。用户根据业务需要决定是否裁剪。 | - yes:裁剪 - no:不裁剪。该参数和 sys_cut 参数的具体含义请参见表后的 sys_cut 和 sys_usrrpm_cut 参数说明。 | 'no' | 必配 |
-  | sys_hostname | 主机名。当用户大批量部署 OS 时,建议部署成功后,修改每个节点的主机名,确保不重复。 | 主机名请按照以下要求输入:可以是字母、数字、"-" 的组合。字符支持大小写。字符个数小于等于 63。首字符必须是字母或数字。 | 'Euler' | 必配 |
-  | sys_usermodules_autoload | 系统启动阶段加载的驱动,输入该参数的值时,请去掉后缀名 .ko。如果有多个驱动,请以空格分开。 | 无 | 无 | 选配 |
-  | sys_gconv | 该参数用于定制 /usr/lib/gconv、/usr/lib64/gconv。 | - null/NULL:表示不配置,为空,如果系统裁剪,/usr/lib/gconv、/usr/lib64/gconv 就会被删除。 - all/ALL:/usr/lib/gconv、/usr/lib64/gconv 不裁剪。 - xxx,xxx:保留 /usr/lib/gconv、/usr/lib64/gconv 目录下对应的文件,如果有多个,可用逗号 "," 分隔。 | 默认不配置,会被裁剪。 | 选配 |
- | sys_man_cut | 该参数用来指定是否裁剪 man 文档。 | yes:裁剪 man 文档 no:不裁剪man文档 | yes | 选配 |
-
-- sys_cut 和 sys_usrrpm_cut 参数说明
-
- 1. sys_cut="no":
-
- sys_usrrpm_cut="no" 或 sys_usrrpm_cut="yes"
-
- 系统 RPM 包裁剪粒度:imageTailor 会安装 repo 源中的 RPM 包和 usr_rpm 目录下的 RPM 包,但不会裁剪 RPM 包中的文件。
- 即使这些 RPM 包中的部分文件,用户不想要,imageTailor 也不会进行裁剪。
-
- **具体流程为:**
- - 安装 custom/cfg_openEuler/rpm.conf 中 bootstrap 字段的 RPM 包。
- - 安装 custom/cfg_openEuler/usr_rpm 目录下的 RPM 包。
- - 拷贝 custom/cfg_openEuler/usr_file 目录下的文件至相应目录。
-
- > **说明:**
- >
- >- 默认值为 no,请不要修改此参数,可能会导致裁剪出来的 ISO 安装失败。若需要配置成 yes/debug ,请与 openEuler 联系。
-
- 2. sys_cut="yes":
-
- 1. sys_usrrpm_cut="no"
-
- 系统 RPM 包文件裁剪粒度:imageTailor 会根据用户配置,裁剪 repo 源中 RPM 包的文件。
-
- **具体流程为:**
- - 安装 custom/cfg_openEuler/rpm.conf 中 bootstrap 字段的 RPM 包。
- - 安装 custom/cfg_openEuler/usr_rpm 目录下的 RPM 包。
- - 拷贝 custom/cfg_openEuler/usr_file 目录下的文件至相应目录。
- - 读取 custom/cfg_openEuler/rpm.conf 中的 \ 区域内容,裁剪掉未配置的驱动。
- - 读取 custom/cfg_openEuler/cmd.conf 中的 \ 内容,裁剪掉未配置的命令。
- - 读取 custom/cfg_openEuler/cmd.conf 中的 \ 内容,裁剪掉未配置的库文件。
- - 读取 custom/cfg_openEuler/cmd.conf 中的 \ 内容,裁剪掉配置的文件。
-
- > **说明:**
- >
- >- 即使用户 RPM 包中的驱动、命令和库文件没有配置在配置文件中,imageTailor 也不会裁剪,会保留 usr_rpm 目录中所有 RPM 包的完整性。
-
- 2. sys_usrrpm_cut="yes"
-
- 系统和用户 RPM 包文件裁剪粒度:imageTailor 会根据用户的配置,裁剪 repo 源和 usr_rpm 目录中 RPM 包的文件。
-
- **具体流程为:**
- - 安装 custom/cfg_openEuler/rpm.conf 中 bootstrap 字段的 RPM 包。
- - 安装 custom/cfg_openEuler/usr_rpm 目录下的 RPM 包。
- - 拷贝 custom/cfg_openEuler/usr_file 目录下的文件至相应目录。
- - 读取 custom/cfg_openEuler/rpm.conf 中的 \ 区域内容,裁剪掉未配置的驱动。
- - 读取 custom/cfg_openEuler/cmd.conf 中的 \ 内容,裁剪掉未配置的命令。
- - 读取 custom/cfg_openEuler/cmd.conf 中的 \ 内容,裁剪掉未配置的库文件。
- - 读取 custom/cfg_openEuler/cmd.conf 中的 \ 内容,裁剪掉配置的文件。
- - 读取 custom/cfg_openEuler/rpm.conf 中的 \ 内容,卸载掉配置的 RPM 包。
-
- > **说明:**
- >
- >- 如果用户 RPM 包中的驱动、命令和库文件没有配置在配置文件中,则 imageTailor 会进行裁剪。
-
-
-#### 3.4.2、配置初始密码
-
-##### 3.4.2.1、配置 root 初始密码
-
-用户需要在 custom/cfg_openEuler/rpm.conf 文件中配置 root 初始密码,否则裁剪得到的 ISO 在安装后将无法使用 root 账号登录。
-
-1. 默认配置
-
-    在 rpm.conf 中有以下配置项(示意,具体属性及取值以实际 rpm.conf 文件为准):
-    ```
-    <users>
-        <user group="root" pwd="${pwd}" home="/root" name="root"/>
-    </users>
-    ```
- 参数说明:
- - group 为用户所属组。
- - pwd 为用户初始密码的加密密文,加密算法为 SHA-512。${pwd} 需要替换成用户实际的加密密文,生成方式参见下文 修改初始密码 部分。
- - home 为用户的家目录。
- - name 为要配置用户的用户名。
-
-2. 修改初始密码
-
- 如果用户在做包时想修改 root 用户的初始密码,请按照如下步骤操作(需使用 root 权限):
-    1. 选择一台安装有 openEuler 的环境。
- 2. 添加用户 user。
- 3. 通过 passwd 命令设置 user 密码。
- 4. 查看 /etc/shadow 文件,拷贝其中的内容,替换 rpm.conf 中的 pwd 字段。
- 5. 脚本如下:
- ``` shell script
- $ sudo useradd testx
- $
- $ sudo passwd testx
- Changing password for user testx.
- New password:
- Retype new password:
- passwd: all authentication tokens updated successfully.
- $
- $ sudo cat /etc/shadow | grep testx
- testx:$6$YkX5uFDGVO1VWbab$jvbwkZ2Kt0MzZXmPWy.7bJsgmkN0U2gEqhm9KqT1jwQBlwBGsF3Z59heEXyh8QKm3Qhc5C3jqg2N1ktv25xdP0:19052:0:90:7:35::
- ```
-
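-    另外,若环境中的 openssl 支持 -6 选项(OpenSSL 1.1.1 及以上版本,此处为示意),也可以用如下方式直接生成 SHA-512 加密密文,用于替换 pwd 字段:
-    ``` shell script
-    $ openssl passwd -6
-    Password:
-    Verifying - Password:
-    $6$......(生成的加密密文)
-    ```
-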
-##### 3.4.2.2、配置 grub2 初始密码
-
-配置grub2初始密码需要在 custom/cfg_openEuler/usr_file/etc/default/grub 中填写相应信息:
-
-1. 配置初始密码
-
- 在 usr_file/etc/default/grub 文件中增加以下配置项
- ```
- GRUB_PASSWORD=${pwd}
- ```
-
- > **说明:**
- >
- >- 密码默认用户为 root。
- >- ${pwd} 需要替换成用户实际的加密密文,密文的首尾请勿使用引号,否则会造成系统安装后无法修改 grub2 密码。
- >- 若用户未配置 grub2 初始密码,会导致:裁剪后的 ISO 安装失败。
-
-2. 修改初始密码
-
- 如果用户在做包时想设置 grub2 的初始密码,请按照如下步骤操作(需使用 root 权限):
-    1. 选择一台安装有 grub2-set-password 命令的环境。
- 2. 执行 "grub2-set-password -o ./" 命令,设置密码。
-    3. 查看当前目录下生成的 user.cfg 文件,复制其中以 grub.pbkdf2.sha512 开头的密文,替换 grub 文件中的 ${pwd} 字段。
- 4. 脚本如下:
- ``` shell script
- $ sudo grub2-set-password -o ./
- Enter password:
- Confirm password:
- grep: .//grub.cfg: No such file or directory
- WARNING: The current configuration lacks password support!
- Update your configuration with grub2-mkconfig to support this feature.
- $
- $ ll
- total 4.0K
- -rw-------. 1 root root 298 Mar 1 20:54 user.cfg
- $
- $ sudo cat user.cfg
- GRUB2_PASSWORD=grub.pbkdf2.sha512.10000.CE285BE1DED0012F8B2FB3DEA38782A5B1040FEC1E49D5F602285FD6A972D60177C365F1
- B5D4CB9D648AD4C70CF9AA2CF9F4D7F793D4CE008D9A2A696A3AF96A.0AF86AB3954777F40D324816E45DD8F66CA1DE836DC7FBED053DB02
- 4456EE657350A27FF1E74429546AD9B87BE8D3A13C2E686DD7C71D4D4E85294B6B06E0615
- ```
-
-#### 3.4.3、配置分区
-
-用户可以根据业务规划配置业务分区,同时可以调整系统分区。
-
-> **说明:**
->
->- 系统分区:存放操作系统的分区
->- 业务分区:存放业务数据的分区
->- 差别:在于存放的内容,而每个分区的大小、挂载路径和文件系统类型都不是区分业务分区和系统分区的依据。
->- 配置分区为可选项,用户也可以在安装 OS 之后,手动配置分区
-
-1. 用户可以在 "custom/cfg_openEuler/sys.conf" 文件中的 \ 中配置分区
-
-2. 分区配置
-
- 配置格式:
-    hd磁盘号 挂载路径 分区大小 分区类型 文件系统类型 [二次格式化标志位]
-
- eg:
- ``` shell script
-
- hd0 /boot 512M primary ext4 yes
- hd0 /boot/efi 200M primary vfat yes
- hd0 / 30G primary ext4
- hd0 - - extended -
- hd0 /var 1536M logical ext4
- hd0 /home max logical ext4
-
- ```
-
-3. 参数说明
- - 磁盘号:
- 磁盘的编号。请按照 hdx 的格式填写,x 指第 x 块盘
-
- > **说明:**
- >
- >- 分区配置只在被安装机器的磁盘能被识别时才有效。
-
- - 挂载路径:
- 指定分区挂载的路径。用户既可以配置业务分区,也可以对默认配置中的系统分区进行调整。如果不挂载,则设置为 '-'。
-
- > **说明:**
- >
- >- 分区配置中必须有 '/' 挂载路径。其他的请用户自行调整。
- >- 采用 UEFI 引导时,在 X86_64 的分区配置中必须有 '/boot' 挂载路径,在 AArch64 的分区配置中必须有 '/boot/efi' 挂载路径。
-
- - 分区大小:
- 分区大小的取值有以下四种:
- - G/g:指定以 G 为单位的分区大小,例如:2G。
- - M/m:指定以 M 为单位的分区大小,例如:300M。
- - T/t:指定以 T 为单位的分区大小,例如:1T。
- - MAX/max:指定将硬盘上剩余的空间全部用来创建一个分区。只能在最后一个分区配置该值。
-
- extended 分区无需配置,其他分区类型必选。
-
- > **说明:**
- >
- >- 分区大小不支持小数,如果是小数,请换算成其他单位,调整为整数的数值。例如:不能填写 1.5G,应填写为 1536M。
- >- 分区大小取 MAX/max 值时,剩余分区大小不能超过支持文件系统类型的限制(默认文件系统类型 ext3、ext4,限制大小 16T)。
-
- - 分区类型:
- 分区有以下三种:
- - 主分区: primary
- - 扩展分区:extended
- - 逻辑分区:logical
-
- - 文件系统类型:
- 目前支持的文件系统类型有:
- - ext4、vfat
-
- - 二次格式化标志位:
- 二次安装时是否格式化。
- - 是:yes
- - 否:no
-
- > **说明:**
- >
- >- 二次格式化是指本次安装之前,磁盘已安装过 openEuler 系统。当先前一次安装跟本次安装的使用相同的分区表配置(分区大小,挂载点,文件类型)时,该标志位可以配置是否格式化之前的分区,'/boot' 和 '/' 分区除外,每次都会重新格式化。如果目标板第一次安装,则该标志位不生效,所有指定了文件系统的分区都会进行格式化。
-
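-以下为在默认配置基础上增加一个业务分区的配置示意(/data 分区及其大小均为假设,请按实际业务规划调整):
-
-``` shell script
-
-hd0 /boot 512M primary ext4 yes
-hd0 /boot/efi 200M primary vfat yes
-hd0 / 30G primary ext4
-hd0 - - extended -
-hd0 /var 1536M logical ext4
-hd0 /data 10G logical ext4
-hd0 /home max logical ext4
-
-```
-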
-#### 3.4.4、配置网络
-根据产品的组网情况,配置网络参数,例如:网卡名称、IP地址、子网掩码。
-
-1. 网络参数简介:
-
- 在 sys.conf 文件的 \\ 中配置网络参数,netconfig-0 代表网卡 eth0。
- 如果您需要配置多块网卡,例如eth1,请在配置文件中增加 \\,并在其中填写网卡 eth1 的各项参数。
-
- 参数如下:
-
- ```
-
- BOOTPROTO="dhcp"
- DEVICE="eth0" //网卡名称
- IPADDR="" //IP地址
- NETMASK="" //子网掩码
- STARTMODE="auto" //启用网卡的方法
-
- ```
-
-2. 网络参数说明:
-
-    | 参数名称 | 参数值 | 说明 | 是否必配 |
- | :----- | :---- | :---- | :----|
- | BOOTPROTO | none / static / dhcp | none:引导时不使用协议,不配地址 static:静态分配地址 dhcp:使用 DHCP 协议动态获取地址 | 必配 |
- | DEVICE | 如 eth1 | 网卡名称 | 必配 |
- | IPADDR | 如:192.168.11.100 | IP 地址 当 BOOTPROTO 参数为 static 时,该参数必配;其他情况下,该参数不用配置 | 必配 |
- | NETMASK | 如:255.255.255.0 | 子网掩码 当 BOOTPROTO 参数为 static 时,该参数必配;其他情况下,该参数不用配置 | 必配 |
- | STARTMODE | manual / auto / hotplug / ifplugd / nfsroot / off | 启用网卡的方法: - manual:用户在终端执行 ifup 命令启用网卡。 - auto \ hotplug \ ifplug \ nfsroot:当 OS 识别到该网卡时,便启用该网卡。 - off:任何情况下,网卡都无法被启用。 各参数更具体的说明请在制作 OS 的机器上执行 man ifcfg 命令查看。 | 必配 |
-
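-    若采用静态 IP,参考配置如下(示意,IP 地址、子网掩码等请按实际组网替换):
-
-    ```
-
-    BOOTPROTO="static"
-    DEVICE="eth0"
-    IPADDR="192.168.11.100"
-    NETMASK="255.255.255.0"
-    STARTMODE="auto"
-
-    ```
-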
-#### 3.4.5、配置系统命令行参数
-
-根据服务器规格( CPU 个数,内存大小等),用户需要修改系统命令行参数,使主机能够更加稳定、高效的运行。
-
-1. 配置说明
-
- 在 custom/cfg_openEuler/usr_file/etc/default/grub 中的 "GRUB_CMDLINE_LINUX" 中配置内核命令行参数。
-
- ```
- GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 crashkernel=512M oops=panic softlockup_panic=1 reserve_kbox_mem=16M crash_kexec_post_notifiers panic=3 console=tty0"
- ```
-
-2. 参数介绍
-
- - net.ifnames=0 biosdevname=0,以传统方式命名网卡,比如 eth0、eth1。
- - crashkernel=512M,为 kdump 预留的内存空间大小。
- - oops=panic panic=3,内核 oops 时直接 panic,并且 3 秒后重启系统。
- - softlockup_panic=1,在检测到软死锁(soft-lockup)的时候让内核 panic。
- - reserve_kbox_mem=16M,为 kbox 预留的内存空间大小。
- - console=tty0,指定第一个虚拟控制台 tty0 为输出设备。
- - crash_kexec_post_notifiers,系统 crash 后,先调用注册到 panic 通知链上的函数,再执行 kdump。
-
- 其余常见的系统命令行参数请查阅内核相关文档。
-
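-    系统安装完成后,可通过如下命令确认内核命令行参数是否生效(示意):
-
-    ```
-    $ cat /proc/cmdline
-    ```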
-
-### 3.5、制作OS
-
-当确认已完成定制后,执行 mkdliso 脚本制作 OS。imageTailor 制作出的 OS 为 ISO 镜像文件(iso 格式)。
-
-请在 mkdliso 所在的目录下执行该脚本(需使用 root 权限):
-
-```
-$ sudo ./mkdliso -p openEuler -c custom/cfg_openEuler [--minios yes|no|force] [--sec] [-h]
-```
-
-> **须知:**
->
->- mkdliso 所在的绝对路径中不能有空格,否则会导致制作 OS 失败。
->- 制作 OS 的环境上,umask 的值必须设置为 0022。
-
-1. 参数说明
-
- | 参数名称 | 是否必选 | 参数含义 | 取值范围 |
- | :----- | :---- | :---- | :---- |
- | -p | Y | 设置产品名称 | openEuler |
- | -c | Y | 指定配置文件的相对路径 | custom/cfg_openEuler |
- | --minios | N | 制作在系统安装时进行系统引导的 initrd | 默认为 yes - yes:第一次执行命令时会制作 initrd,之后执行命令会判断 'usr_install/boot' 目录下是否存在 initrd(sha256 校验)。如果存在,就不重新制作 initrd。 - no:不制作 initrd,采用原有方式,系统引导和运行使用的 initrd 相同。 - force:强制制作 initrd,不管 'usr_install/boot' 目录下是否存在 initrd。 |
-    | --sec | N | 表示是否进行安全加固。如果用户不输入该参数,则不进行安全加固,由此造成的安全风险由用户承担 | 无 |
- | -h | N | 获取帮助信息 | 无 |
-
-2. 示例
-
- ```
- sudo ./mkdliso -p openEuler -c custom/cfg_openEuler --sec
- ```
-
-3. 获取和验证 OS
-
- 制作出的 ISO 文件在 result/{日期} 目录下获取,
- 目录下有两个新生成的文件:openEuler-aarch64.iso 和 openEuler-aarch64.iso.sha256。
-
- sha256 文件用于验证 iso 文件是否完整。操作如下
- ```
- $ sha256sum -c openEuler-aarch64.iso.sha256
- openEuler-aarch64.iso: OK # 表示OS完整
- ```
- 当有如下显示时,表示 ISO 不完整:
- ```
- $ sha256sum -c openEuler-aarch64.iso.sha256
- openEuler-aarch64.iso: FAILED
- sha256sum: WARNING: 1 computed checksum did NOT match
- ```
-
- > **说明:**
- >
- >- sha256 文件只能验证 ISO 文件的完整性,如果校验成功说明文件完整性没有破坏,可以使用;如果校验失败则可以确认文件完整性已被破坏,需要重新获取。
-
-4. 查看日志
-
-每一次制作 OS 时,制作 OS 的日志和安全加固日志被压缩为一个 tar 包,存放在 result/log 目录下。
-
-日志命名格式为:sys_custom_log_{日期}.tar.gz。
-log 目录下的日志只保留 50 个,若超过 50 个则覆盖最旧的压缩包。
-
-### 3.6、时区裁剪
-
-openEuler 的用户可以根据自己的需求来对系统支持的时区进行裁剪。本节内容主要包括时区裁剪的方法以及当前 openEuler 支持的时区列表。
-
-#### 3.6.1、裁剪方法
-
-1. 系统中所有的时区信息均存放于时区文件夹:
-
- ```
- $ ls /usr/share/zoneinfo/
- Africa/ America/ Asia/ Atlantic/ Australia/ Etc/ Europe/
- Pacific/ zone.tab
- ```
- 其中每一个子文件夹代表了一个 "Area"。当前 "Area" 包括了三类:大陆、海洋以及 "Etc"。
- 每个 "Area" 文件夹内部则包含了隶属于其的 "Location"。一个 "Location" 一般为一座城市或者一个岛屿。
- 所有时区均以 "Area/Location" 的形式来表示,比如中国大陆南部使用北京时间,其时区为 "Asia/Shanghai"(Location 并不一定会使用首都),对应的,其时区文件为:
- ```
- /usr/share/zoneinfo/Asia/Shanghai
- ```
- 若用户希望裁剪某些时区,则只需将这些时区对应的时区文件删除即可。
-
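-    例如(示意,具体保留或删除哪些时区请以实际业务为准),删除整个 Pacific 大区或单个 Asia/Tokyo 时区的参考命令如下:
-
-    ```
-    $ sudo rm -rf /usr/share/zoneinfo/Pacific
-    $ sudo rm -f /usr/share/zoneinfo/Asia/Tokyo
-    ```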
-
-### 3.7、裁剪流程使用实例
-
-以 openEuler 22.03 LTS、x86_64 机器为例。命令中带 sudo 的部分表示需要 root 权限。
-
-1. 检查当前环境
-
- ```
- $ cat /etc/openEuler-release
- openEuler release 22.03 LTS
- ```
-
-2. 配置 yum 源,使 yum 源生效
-
- yum 源要求使用 everything 的源
- ```
- $ yum clean all && yum makecache
- ```
-
-3. 添加磁盘,保证 '/opt' 或 '/' 有 40 GB 以上空间
-
- 此处以 /dev/vdb 挂载 /opt 为例
- ```
- $ sudo mkfs.ext4 /dev/vdb
- $ mkdir -p /opt
- $ sudo mount /dev/vdb /opt
- $
- $ df -h
- Filesystem Size Used Avail Use% Mounted on
- ......
- /dev/vdb 196G 28K 186G 1% /opt
- ```
-
-4. 安装 imageTailor 裁剪工具
-
- ```
- $ sudo yum install -y imageTailor
- $ ll /opt/imageTailor/
- total 88K
- drwxr-xr-x. 3 root root 4.0K Mar 3 08:00 custom
- drwxr-xr-x. 10 root root 4.0K Mar 3 08:00 kiwi
- -r-x------. 1 root root 69K Mar 3 08:00 mkdliso
- drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 repos
- drwxr-xr-x. 2 root root 4.0K Mar 9 14:48 security-tool
- $
- ```
-
-5. 配置 RPM 包
-
- 以 openEuler 发布 iso 的 RPM 包为例。如果用户需要添加或删除 RPM 包,请参考 3.3、定制业务包 。
- ```
- $ wget http://121.36.84.172/dailybuild/openEuler-22.03-LTS/openeuler-2022-03-05-12-01-45/ISO/x86_64/openEuler-22.03-LTS-x86_64-dvd.iso
- $ sudo mkdir -p /opt/openEuler_repo
- $ sudo mount openEuler-22.03-LTS-x86_64-dvd.iso /opt/openEuler_repo
- mount: /opt/openEuler_repo: WARNING: source write-protected, mounted read-only.
- $ sudo rm -rf /opt/imageTailor/repos/euler_base && sudo mkdir -p /opt/imageTailor/repos/euler_base
- $ sudo cp -ar /opt/openEuler_repo/Packages/* /opt/imageTailor/repos/euler_base
- $ sudo chmod -R 644 /opt/imageTailor/repos/euler_base
- $ sudo ls /opt/imageTailor/repos/euler_base|wc -l
- 2577
- $ sudo umount /opt/openEuler_repo && sudo rm -rf /opt/openEuler_repo && sudo rm -rf openEuler-22.03-LTS-x86_64-dvd.iso
- $ cd /opt/imageTailor
- ```
-
-6. 修改 grub/root 密码
-
-    grub/root 密码的配置方法请参见 3.4.2、配置初始密码。
- ```
- $ sudo vi custom/cfg_openEuler/usr_file/etc/default/grub # 添加以下字符串在文末
- GRUB_PASSWORD="grub.pbkdf2.sha512.10000.D4D775602C4E9F76EF4A9A6E726486941C8AAFB4762227E6973690ED5A760D59247E7E6ECA72472FBEEBFD9DB60F8EE56A4078094542C790BF0967879BE2D60C.B2742F38995B4B716EA7B0E639D02BE6C4E649E30576E5F5505B85844172B831841DA80D264FD14B025F3C8804158E7FC082998664BD03A92663FB4CE293807B"
- $ sudo sed -i 's/user pwd=""/user pwd="$6$yJiM4Ips$eto\/4erJfqkYcAtYw87Ne5Gj1ERWG2iAQiiO4iLMaYgqZOTPn5k\/xM9EEdofQZXAuJxxuYE8G6Tmgl4LaSopq1"/g' custom/cfg_openEuler/rpm.conf
- $ sudo sed -i 's/user pwd=""/user pwd="$6$yJiM4Ips$eto\/4erJfqkYcAtYw87Ne5Gj1ERWG2iAQiiO4iLMaYgqZOTPn5k\/xM9EEdofQZXAuJxxuYE8G6Tmgl4LaSopq1"/g' kiwi/minios/cfg_minios/rpm.conf
- # 以下命令仅 x86_64 需要修改
- $ sudo sed -i 's/user pwd=""/user pwd="$6$yJiM4Ips$eto\/4erJfqkYcAtYw87Ne5Gj1ERWG2iAQiiO4iLMaYgqZOTPn5k\/xM9EEdofQZXAuJxxuYE8G6Tmgl4LaSopq1"/g' kiwi/minios/cfg_minios_efi/rpm.conf
- ```
-
-7. 执行裁剪命令
-
- ```
- $ sudo rm -rf /opt/imageTailor/result
- $ sudo ./mkdliso -p openEuler -c custom/cfg_openEuler --minios force
- ......
- Complete release iso file at: result/2022-03-09-15-31/openEuler-x86_64.iso
- move all mkdliso log file to result/log/sys_custom_log_20220309153231.tar.gz
- $ ll result/2022-03-09-15-31/
- total 889M
- -rw-r--r--. 1 root root 889M Mar 9 15:32 openEuler-x86_64.iso
- -rw-r--r--. 1 root root 87 Mar 9 15:32 openEuler-x86_64.iso.sha256
- ```
diff --git "a/docs/zh/docs/TailorCustom/isocut\344\275\277\347\224\250\346\214\207\345\215\227.md" "b/docs/zh/docs/TailorCustom/isocut\344\275\277\347\224\250\346\214\207\345\215\227.md"
index 27ad1109d36d81f834570b45b672d20aff5139c6..79d839523105d0e9415064c965a7f5b294ae2463 100644
--- "a/docs/zh/docs/TailorCustom/isocut\344\275\277\347\224\250\346\214\207\345\215\227.md"
+++ "b/docs/zh/docs/TailorCustom/isocut\344\275\277\347\224\250\346\214\207\345\215\227.md"
@@ -7,6 +7,8 @@
- [命令介绍](#命令介绍)
- [软件包来源](#软件包来源)
- [操作指导](#操作指导)
+- [FAQ](#FAQ)
+ - [默认 rpm 包列表安装系统失败](#默认-rpm-包列表安装系统失败)
## 简介
@@ -80,9 +82,9 @@ openEuler 光盘镜像较大,下载、传输镜像很耗时。另外,使用
```shell
$ sudo isocut -h
Checking input ...
- usage: isocut [-h] [-t temporary_path] [-r rpm_path] source_iso dest_iso
+ usage: isocut [-h] [-t temporary_path] [-r rpm_path] [-k file_path] source_iso dest_iso
- Cut EulerOS iso to small one
+ Cut openEuler iso to small one
positional arguments:
source_iso source iso image
@@ -92,6 +94,7 @@ openEuler 光盘镜像较大,下载、传输镜像很耗时。另外,使用
-h, --help show this help message and exit
-t temporary_path temporary path
-r rpm_path extern rpm packages path
+ -k file_path kickstart file
```
@@ -106,17 +109,18 @@ openEuler 光盘镜像较大,下载、传输镜像很耗时。另外,使用
镜像裁剪定制工具通过 isocut 命令执行功能。命令的使用格式为:
-**isocut** [ --help | -h ] [ -t <*temp_path*> ] [ -r <*rpm_path*> ] < *source_iso* > < *dest_iso* >
+**isocut** [ --help | -h ] [ -t <*temp_path*> ] [ -r <*rpm_path*> ] [ -k <*file_path*> ] < *source_iso* > < *dest_iso* >
#### 参数说明
-| 参数 | 是否必选 | 参数含义 |
-| ------------ | -------- | -------------------------------------------------------- |
-| --help \| -h | 否 | 查询命令的帮助信息。 |
-| -t <*temp_path*> | 否 | 指定工具运行的临时目录 *temp_path*,其中 *temp_path* 为绝对路径。默认为 /tmp 。 |
-| -r <*rpm_path*> | 否 | 用户需要额外添加到 ISO 镜像中的 RPM 包路径。 |
-| *source_iso* | 是 | 用于裁剪的 ISO 源镜像所在路径和名称。不指定路径时,默认当前路径。 |
-| *dest_iso* | 是 | 裁剪定制生成的 ISO 新镜像存放路径和名称。不指定路径时,默认当前路径。 |
+| 参数 | 是否必选 | 参数含义 |
+| ---------------- | -------- | ------------------------------------------------------------ |
+| --help \| -h | 否 | 查询命令的帮助信息。 |
+| -t <*temp_path*> | 否 | 指定工具运行的临时目录 *temp_path*,其中 *temp_path* 为绝对路径。默认为 /tmp 。 |
+| -r <*rpm_path*> | 否 | 用户需要额外添加到 ISO 镜像中的 RPM 包路径。 |
+| -k <*file_path*> | 否 | 用户需要使用 kickstart 自动安装,指定 kickstart 模板路径。 |
+| *source_iso* | 是 | 用于裁剪的 ISO 源镜像所在路径和名称。不指定路径时,默认当前路径。 |
+| *dest_iso* | 是 | 裁剪定制生成的 ISO 新镜像存放路径和名称。不指定路径时,默认当前路径。 |
@@ -126,7 +130,7 @@ openEuler 光盘镜像较大,下载、传输镜像很耗时。另外,使用
- 原有 ISO 镜像。该情况通过配置文件 /etc/isocut/rpmlist 指定需要安装的 RPM 软件包,配置格式为 "软件包名.对应架构",例如:kernel.aarch64 。
-- 额外指定。执行 **isocut** 时使用 -r 参数指定软件包所在路径。
+- 额外指定。执行 **isocut** 时使用 -r 参数指定软件包所在路径,并将添加的 RPM 包按上述格式添加到配置文件 /etc/isocut/rpmlist 中。
@@ -135,11 +139,136 @@ openEuler 光盘镜像较大,下载、传输镜像很耗时。另外,使用
>- 裁剪定制镜像时,若无法找到配置文件中指定的 RPM 包,则镜像中不会添加该 RPM 包。
>- 若 RPM 包的依赖有问题,则裁剪定制镜像时可能会报错。
-
-### 操作指导
+### kickstart 功能介绍
+
+用户需要实现镜像自动化安装,可以通过 kickstart 的方式。在执行 **isocut** 时使用 -k 参数指定 kickstart 文件。
+
+isocut 为用户提供了 kickstart 模板,路径是 /etc/isocut/anaconda-ks.cfg,用户可以基于该模板修改。
+
+#### 修改 kickstart 模板
+
+若用户需要使用 isocut 工具提供的 kickstart 模板,需要修改以下内容:
+
+- 必须在文件 /etc/isocut/anaconda-ks.cfg 中配置 root 和 grub2 的密码。否则镜像自动化安装会卡在设置密码的环节,等待用户手动输入密码。
+- 如果要添加额外 RPM 包,并使用 kickstart 自动安装,则在 /etc/isocut/rpmlist 和 kickstart 文件的 %packages 字段都要指定该 RPM 包。
+
+接下来介绍 kickstart 文件详细修改方法。
+
+##### 配置初始密码
+
+###### 配置 root 初始密码
+/etc/isocut/anaconda-ks.cfg 中 root 初始密码的默认配置如下,其中 ${pwd} 需要替换成用户实际的加密密文:
+
+```shell
+rootpw --iscrypted ${pwd}
+```
+
+这里给出设置 root 初始密码的方法(需使用 root 权限):
+
+1. 添加用于生成密码的用户,此处假设 testUser。
+
+ ``` shell script
+ $ sudo useradd testUser
+ ```
+
+2. 设置 testUser 用户的密码。参考命令如下,根据提示设置密码:
+
+ ``` shell script
+ $ sudo passwd testUser
+ Changing password for user testUser.
+ New password:
+ Retype new password:
+ passwd: all authentication tokens updated successfully.
+ ```
+3. 查看 /etc/shadow 文件,获取加密密码(testUser 之后、两个 : 之间的字符串,此处使用 *** 代替)。
+
+ ``` shell script
+ $ sudo cat /etc/shadow | grep testUser
+ testUser:***:19052:0:90:7:35::
+ ```
+
+4. 拷贝上述加密密码替换 /etc/isocut/anaconda-ks.cfg 中的 pwd 字段,如下所示(请用实际内容替换 *** ):
+ ``` shell script
+ rootpw --iscrypted ***
+ ```
+
+###### 配置 grub2 初始密码
+
+/etc/isocut/anaconda-ks.cfg 文件中添加以下配置,配置 grub2 初始密码。其中 ${pwd} 需要替换成用户实际的加密密文:
+
+```shell
+%addon com_huawei_grub_safe --iscrypted --password='${pwd}'
+%end
+```
+
+> 说明:
+>
+> - 配置 grub 初始密码需要使用 root 权限。
+> - grub 密码对应的默认用户为 root 。
+>
+> - 系统中需有 grub2-set-password 命令,若不存在,请提前安装该命令。
+
+1. 执行如下命令,根据提示设置 grub2 密码:
+
+ ```shell
+ $ sudo grub2-set-password -o ./
+ Enter password:
+ Confirm password:
+ grep: .//grub.cfg: No such file or directory
+ WARNING: The current configuration lacks password support!
+ Update your configuration with grub2-mkconfig to support this feature.
+ ```
+
+2. 命令执行完成后,会在当前目录生成 user.cfg 文件,grub.pbkdf2.sha512 开头的内容即 grub2 加密密码。
+
+ ```shell
+ $ sudo cat user.cfg
+ GRUB2_PASSWORD=grub.pbkdf2.sha512.***
+ ```
+
+3. 复制上述密文,并在 /etc/isocut/anaconda-ks.cfg 文件中增加如下配置:
+
+ ```shell
+ %addon com_huawei_grub_safe --iscrypted --password='grub.pbkdf2.sha512.***'
+ %end
+ ```
+
+##### 配置 %packages 字段
+
+如果需要添加额外 RPM 包,并使用 kickstart 自动安装,需要在 /etc/isocut/rpmlist 和 kickstart 文件的 %packages 字段都指定该 RPM 包。
+
+此处介绍在 /etc/isocut/anaconda-ks.cfg 文件中添加 RPM 包。
+
+/etc/isocut/anaconda-ks.cfg 文件的 %packages 默认配置如下:
+
+```shell
+%packages --multilib --ignoremissing
+acl.aarch64
+aide.aarch64
+......
+NetworkManager.aarch64
+%end
+```
+
+将额外指定的 RPM 软件包添加到 %packages 配置中,需要遵循如下配置格式:
+
+"软件包名.对应架构",例如:kernel.aarch64
+
+```shell
+%packages --multilib --ignoremissing
+acl.aarch64
+aide.aarch64
+......
+NetworkManager.aarch64
+kernel.aarch64
+%end
+```
+
+
+### 操作指导
> **说明:**
>
@@ -200,3 +329,56 @@ openEuler 光盘镜像较大,下载、传输镜像很耗时。另外,使用
```shell
sudo isocut -t /home/temp -r /home/rpms /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso
```
+
+ **场景三**:使用 kickstart 文件实现自动化安装,需要修改 /etc/isocut/anaconda-ks.cfg 文件
+ ```shell
+ sudo isocut -t /home/temp -k /etc/isocut/anaconda-ks.cfg /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso
+ ```
+
+
+## FAQ
+
+### 默认 rpm 包列表安装系统失败
+
+#### 背景描述
+
+用户使用 isocut 裁剪镜像时通过配置文件 /etc/isocut/rpmlist 指定需要安装的软件包。
+
+由于不同版本的软件包可能有所减少,裁剪镜像时可能出现缺包等问题,因此 /etc/isocut/rpmlist 中默认只包含 kernel 软件包,以保证使用默认配置裁剪镜像必定成功。
+
+#### 问题描述
+
+使用默认配置裁剪出来的 iso 镜像,能够裁剪成功,但是安装可能失败。
+
+安装时报错提示缺少 RPM 包,具体缺少的软件包以实际报错信息为准。
+
+
+
+#### 原因分析
+
+使用默认配置的 RPM 软件包列表,裁剪的 iso 镜像在安装时缺少必要的 RPM 包。
+缺少的 RPM 包以安装时的实际报错信息为准,并且在不同版本中,缺少的 RPM 包也可能不同。
+
+#### 解决方案
+
+1. 增加缺少的包
+
+ 1. 根据报错的提示整理缺少的 RPM 包列表
+ 2. 将上述 RPM 包列表添加到配置文件 /etc/isocut/rpmlist 中。
+ 3. 再次裁剪安装 iso 镜像
+
+ 以问题描述中的缺包情况为例,修改 rpmlist 配置文件如下:
+ ```shell
+ $ cat /etc/isocut/rpmlist
+ kernel.aarch64
+ lvm2.aarch64
+ chrony.aarch64
+ authselect.aarch64
+ shim.aarch64
+ efibootmgr.aarch64
+ grub2-efi-aa64.aarch64
+ dosfstools.aarch64
+ ```
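+
+    修改 rpmlist 后,重新执行裁剪命令生成新的 iso 即可,参考命令如下(示意,镜像与目录路径请按实际情况替换):
+    ```shell
+    sudo isocut -t /home/temp /home/isocut_iso/openEuler-20.03-LTS-SP3-aarch64-dvd.iso /home/result/new.iso
+    ```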
+
diff --git a/docs/zh/docs/TailorCustom/public_sys-resources/delete.PNG b/docs/zh/docs/TailorCustom/public_sys-resources/delete.PNG
deleted file mode 100644
index 2309b8f37bdbe58916f235f29422f7cb703789d1..0000000000000000000000000000000000000000
Binary files a/docs/zh/docs/TailorCustom/public_sys-resources/delete.PNG and /dev/null differ
diff --git a/docs/zh/docs/TailorCustom/public_sys-resources/drivers.png b/docs/zh/docs/TailorCustom/public_sys-resources/drivers.png
deleted file mode 100644
index 1bb43150cde548a5d6cf227aed5c354cbb31ec89..0000000000000000000000000000000000000000
Binary files a/docs/zh/docs/TailorCustom/public_sys-resources/drivers.png and /dev/null differ
diff --git a/docs/zh/docs/TailorCustom/public_sys-resources/flowchart.png b/docs/zh/docs/TailorCustom/public_sys-resources/flowchart.png
deleted file mode 100644
index e89a0863f29abd11f1cde8bac1deac130e37f8c9..0000000000000000000000000000000000000000
Binary files a/docs/zh/docs/TailorCustom/public_sys-resources/flowchart.png and /dev/null differ
diff --git a/docs/zh/docs/TailorCustom/public_sys-resources/libs.png b/docs/zh/docs/TailorCustom/public_sys-resources/libs.png
deleted file mode 100644
index 7d3d797c3bf55a0353b98644d9794320fe9a1318..0000000000000000000000000000000000000000
Binary files a/docs/zh/docs/TailorCustom/public_sys-resources/libs.png and /dev/null differ
diff --git a/docs/zh/docs/TailorCustom/public_sys-resources/tools.png b/docs/zh/docs/TailorCustom/public_sys-resources/tools.png
deleted file mode 100644
index b7bf100e26359588985a8703275832254cdb33e0..0000000000000000000000000000000000000000
Binary files a/docs/zh/docs/TailorCustom/public_sys-resources/tools.png and /dev/null differ
diff --git "a/docs/zh/docs/rubik/\346\267\267\351\203\250\351\232\224\347\246\273\347\244\272\344\276\213.md" "b/docs/zh/docs/rubik/\346\267\267\351\203\250\351\232\224\347\246\273\347\244\272\344\276\213.md"
new file mode 100644
index 0000000000000000000000000000000000000000..48895cccee6c323e519ea3eaeb2248fccdbedf6a
--- /dev/null
+++ "b/docs/zh/docs/rubik/\346\267\267\351\203\250\351\232\224\347\246\273\347\244\272\344\276\213.md"
@@ -0,0 +1,233 @@
+## 混部隔离示例
+
+### 环境准备
+
+查看内核是否支持混部隔离功能
+
+```bash
+# 查看 /boot/config-<内核版本> 配置文件是否开启混部隔离功能
+# 若CONFIG_QOS_SCHED=y则说明使能了混部隔离功能,例如:
+cat /boot/config-5.10.0-60.18.0.50.oe2203.x86_64 | grep CONFIG_QOS
+CONFIG_QOS_SCHED=y
+```
+
+安装docker容器引擎
+
+```bash
+yum install -y docker-engine
+docker version
+# 如下为docker version显示结果
+Client:
+ Version: 18.09.0
+ EulerVersion: 18.09.0.300
+ API version: 1.39
+ Go version: go1.17.3
+ Git commit: aa1eee8
+ Built: Wed Mar 30 05:07:38 2022
+ OS/Arch: linux/amd64
+ Experimental: false
+
+Server:
+ Engine:
+ Version: 18.09.0
+ EulerVersion: 18.09.0.300
+ API version: 1.39 (minimum version 1.12)
+ Go version: go1.17.3
+ Git commit: aa1eee8
+ Built: Tue Mar 22 00:00:00 2022
+ OS/Arch: linux/amd64
+ Experimental: false
+```
+
+### 混部业务
+
+**在线业务(clickhouse)**
+
+使用clickhouse-benchmark测试工具进行性能测试,统计出QPS/P50/P90/P99等相关性能指标,用法参考:https://clickhouse.com/docs/zh/operations/utilities/clickhouse-benchmark/
+
+**离线业务(stress)**
+
+stress是一个CPU密集型测试工具,可以通过指定--cpu参数启动多个并发CPU密集型任务给系统环境加压
+
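+例如(示意,假设环境中已通过 yum 安装 stress 工具),以下命令会启动 10 个 CPU 密集型任务并持续 60 秒:
+
+```bash
+stress --cpu 10 --timeout 60
+```
+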
+### 使用说明
+
+1)启动一个clickhouse容器(在线业务)。
+
+2)进入容器内执行clickhouse-benchmark命令,设置并发线程数为10个、查询10000次、查询总时间30s。
+
+3)同时启动一个stress容器(离线业务),并发执行10个CPU密集型任务对环境进行加压。
+
+4)clickhouse-benchmark执行完后输出一个性能测试报告。
+
+混部隔离测试脚本(**test_demo.sh**)如下:
+
+```bash
+#!/bin/bash
+
+with_offline=${1:-no_offline}
+enable_isolation=${2:-no_isolation}
+stress_num=${3:-10}
+concurrency=10
+timeout=30
+output=/tmp/result.json
+online_container=
+offline_container=
+
+exec_sql="echo \"SELECT * FROM system.numbers LIMIT 10000000 OFFSET 10000000\" | clickhouse-benchmark -i 10000 -c $concurrency -t $timeout"
+
+function prepare()
+{
+ echo "Launch clickhouse container."
+ online_container=$(docker run -itd \
+ -v /tmp:/tmp:rw \
+ --ulimit nofile=262144:262144 \
+ -p 34424:34424 \
+ yandex/clickhouse-server)
+
+ sleep 3
+    echo "Clickhouse container launched."
+}
+
+function clickhouse()
+{
+ echo "Start clickhouse benchmark test."
+ docker exec $online_container bash -c "$exec_sql --json $output"
+ echo "Clickhouse benchmark test done."
+}
+
+function stress()
+{
+ echo "Launch stress container."
+ offline_container=$(docker run -itd joedval/stress --cpu $stress_num)
+ echo "Stress container launched."
+
+ if [ $enable_isolation == "enable_isolation" ]; then
+ echo "Set stress container qos level to -1."
+ echo -1 > /sys/fs/cgroup/cpu/docker/$offline_container/cpu.qos_level
+ fi
+}
+
+function benchmark()
+{
+ if [ $with_offline == "with_offline" ]; then
+ stress
+ sleep 3
+ fi
+ clickhouse
+ echo "Remove test containers."
+ docker rm -f $online_container
+ docker rm -f $offline_container
+ echo "Finish benchmark test for clickhouse(online) and stress(offline) colocation."
+ echo "===============================clickhouse benchmark=================================================="
+ cat $output
+ echo "===============================clickhouse benchmark=================================================="
+}
+
+prepare
+benchmark
+```
+
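+脚本中通过向离线容器对应 cgroup 的 cpu.qos_level 写入 -1 实现在线/离线隔离。可参考以下命令确认设置是否生效(示意,<容器ID> 需替换为 stress 容器的实际 ID):
+
+```bash
+# 输出为 -1 表示该容器已被标记为离线业务
+cat /sys/fs/cgroup/cpu/docker/<容器ID>/cpu.qos_level
+```
+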
+### 测试结果
+
+单独执行clickhouse在线业务
+
+```bash
+sh test_demo.sh no_offline no_isolation
+```
+
+得到在线业务的QoS(QPS/P50/P90/P99等指标)**基线数据**如下:
+
+```json
+{
+"localhost:9000": {
+"statistics": {
+"QPS": 1.8853412284364512,
+......
+},
+"query_time_percentiles": {
+......
+"50": 0.484905256,
+"60": 0.519641313,
+"70": 0.570876148,
+"80": 0.632544937,
+"90": 0.728295525,
+"95": 0.808700418,
+"99": 0.873945121,
+......
+}
+}
+}
+```
+
+启用stress离线业务,未开启混部隔离功能下,执行test_demo.sh测试脚本
+
+```bash
+# with_offline参数表示启用stress离线业务
+# no_isolation参数表示未开启混部隔离功能
+sh test_demo.sh with_offline no_isolation
+```
+
+**未开启混部隔离的情况下**,clickhouse业务QoS数据(QPS/P50/P90/P99等指标)如下:
+
+```json
+{
+"localhost:9000": {
+"statistics": {
+"QPS": 0.9424028693636205,
+......
+},
+"query_time_percentiles": {
+......
+"50": 0.840476774,
+"60": 1.304607373,
+"70": 1.393591017,
+"80": 1.41277543,
+"90": 1.430316688,
+"95": 1.457534764,
+"99": 1.555646855,
+......
+}
+}
+}
+```
+
+启用stress离线业务,开启混部隔离功能下,执行test_demo.sh测试脚本
+
+```bash
+# with_offline参数表示启用stress离线业务
+# enable_isolation参数表示开启混部隔离功能
+sh test_demo.sh with_offline enable_isolation
+```
+
+**开启混部隔离功能的情况下**,clickhouse业务QoS数据(QPS/P50/P90/P99等指标)如下:
+
+```json
+{
+"localhost:9000": {
+"statistics": {
+"QPS": 1.8825798759270718,
+......
+},
+"query_time_percentiles": {
+......
+"50": 0.485725185,
+"60": 0.512629901,
+"70": 0.55656488,
+"80": 0.636395956,
+"90": 0.734695906,
+"95": 0.804118275,
+"99": 0.887807409,
+......
+}
+}
+}
+```
+
+从上面的测试结果整理出一个表格如下:
+
+| 业务部署方式 | QPS | P50 | P90 | P99 |
+| -------------------------------------- | ------------- | ------------- | ------------- | ------------- |
+| 单独运行clickhouse在线业务(基线) | 1.885 | 0.485 | 0.728 | 0.874 |
+| clickhouse+stress(未开启混部隔离功能) | 0.942(-50%) | 0.840(-42%) | 1.430(-49%) | 1.556(-44%) |
+| clickhouse+stress(开启混部隔离功能) | 1.883(-0.11%) | 0.486(-0.21%) | 0.735(-0.96%) | 0.888(-1.58%) |
+
+在未开启混部隔离功能的情况下,在线业务clickhouse的QPS从1.9下降到0.9,同时业务的响应时延(P90)也从0.7s增大到1.4s,在线业务QoS下降了50%左右;而在开启混部隔离功能的情况下,不管是在线业务的QPS还是响应时延(P50/P90/P99)相比于基线值下降不到2%,在线业务QoS基本没有变化。
diff --git "a/docs/zh/docs/secGear/\345\256\211\350\243\205secGear.md" "b/docs/zh/docs/secGear/\345\256\211\350\243\205secGear.md"
index 79b42ebe903f90da14f70629e24dc07e2dcb5d86..222b2a0abbe66b1bcdb7c810a2e1c8fcf2318f11 100644
--- "a/docs/zh/docs/secGear/\345\256\211\350\243\205secGear.md"
+++ "b/docs/zh/docs/secGear/\345\256\211\350\243\205secGear.md"
@@ -1,14 +1,41 @@
# 安装 secGear
-## 环境要求
+## 操作系统
-当前 secGear 仅支持以下软硬件,后续会逐步支持更多的软硬件。
+openEuler 21.03、openEuler 20.03 LTS SP2 或更高版本
-- 处理器:当前 secGear 仅支持 x86_64 处理器架构,且该处理器需要支持 Intel SGX (Intel Software Guard Extensions)功能
+## CPU 架构
-- 操作系统:openEuler 21.03 或更高版本
+### x86_64
+需要处理器支持 Intel SGX(Intel Software Guard Extensions)功能
+
+### AArch64
+
+ - 硬件要求
+
+ | 项目 | 版本 |
+ | ------ | --------------------------------------------- |
+ | 服务器 | TaiShan 200 服务器(型号 2280),仅限双路服务器 |
+ | 主板 | 鲲鹏主板 |
+ | BMC | 1711 单板(型号 BC82SMMAB) |
+ | CPU | 鲲鹏 920 处理器(型号 7260、5250、5220) |
+ | 机箱 | 不限,建议 8 盘或 12 盘 |
+
+ 要求运行的泰山服务器已经预置了 TrustZone 特性,即预装 iTrustee 安全 OS 以及配套的 BMC、BIOS 固件。
+
+ - 运行环境要求
+
+ CA(Client Application)应用需要 REE(Rich Execution Environment)侧 patch 才能实现与 TEE(Trusted Execution Environment)侧的 TA(Trusted Application)应用通信,在安装 secGear 前,请先搭建运行环境,操作如下:
+
+ 1. [搭建 TA/CA 应用运行环境](https://support.huaweicloud.com/dpmg-tz-kunpengcctrustzone/kunpengtrustzone_04_0007.html#:~:text=%E6%90%AD%E5%BB%BA%3E%20%E6%90%AD%E5%BB%BA%E6%AD%A5%E9%AA%A4-,%E6%90%AD%E5%BB%BA%E6%AD%A5%E9%AA%A4,-%E6%9B%B4%E6%96%B0%E6%97%B6%E9%97%B4%EF%BC%9A2022)
+
+ 2. 申请调测环境TA应用开发者证书
+
+ a. [生成 configs.xml 文件](https://support.huaweicloud.com/dpmg-tz-kunpengcctrustzone/kunpengtrustzone_04_0009.html#:~:text=%E5%88%86%E4%BA%AB-,%E7%94%9F%E6%88%90configs.xml%E6%96%87%E4%BB%B6,-%E6%A0%B9%E6%8D%AEmanifest.txt)
+
+ b. [申请 TA 开发者证书](https://support.huaweicloud.com/dpmg-tz-kunpengcctrustzone/kunpengtrustzone_04_0009.html#:~:text=%E5%86%85%E5%AE%B9%E4%BA%88%E4%BB%A5%E6%9B%BF%E6%8D%A2%E3%80%82-,TA%E5%BC%80%E5%8F%91%E8%80%85%E8%AF%81%E4%B9%A6%E7%94%B3%E8%AF%B7,-%E7%94%9F%E6%88%90%E6%9C%AC%E5%9C%B0%E5%AF%86)
## 安装指导
@@ -21,15 +48,9 @@
#yum install secGear-devel
```
-
-
-2. 查看安装是否成功。参考命令和回显如下表示安装成功。
+2. 查看安装是否成功。参考命令如下,若回显有对应软件包,表示安装成功:
```shell
#rpm -q secGear
- secGear-1.0-1.oe1.x86_64
#rpm -q secGear-devel
- secGear-devel-1.0-1.oe1.x86_64
- ```
-
-
+ ```
\ No newline at end of file
diff --git "a/docs/zh/docs/secGear/\345\274\200\345\217\221secGear\345\272\224\347\224\250\347\250\213\345\272\217.md" "b/docs/zh/docs/secGear/\345\274\200\345\217\221secGear\345\272\224\347\224\250\347\250\213\345\272\217.md"
index 2b3773d619e324698a39b41440eaf3a78d320239..f74da9feea5066cff569e07f4861a84c6ffedfef 100644
--- "a/docs/zh/docs/secGear/\345\274\200\345\217\221secGear\345\272\224\347\224\250\347\250\213\345\272\217.md"
+++ "b/docs/zh/docs/secGear/\345\274\200\345\217\221secGear\345\272\224\347\224\250\347\250\213\345\272\217.md"
@@ -419,7 +419,7 @@
将设备公钥文件 rsa_public_key_cloud.pem 复制到 enclave 目录。此处的设备公钥用于使用临时生成的 aes 密钥对安全区动态库进行加密。
- 说明:rsa_public_key_cloud.pem 文件下载路径:https://gitee.com/openeuler/secGear/blob/master/examples/helloworld/enclave/rsa_public_key_cloud.pem
+ 说明:rsa_public_key_cloud.pem 文件下载路径:https://gitee.com/openeuler/itrustee_sdk/blob/master/build/signtools/cloud/rsa_public_key_cloud.pem
diff --git a/docs/zh/menu/index.md b/docs/zh/menu/index.md
index c949493c2f2d73db52dc8750ca2af78e1c90e68b..0ba65f3444902f949557a474d5b4deda7bae1a6e 100644
--- a/docs/zh/menu/index.md
+++ b/docs/zh/menu/index.md
@@ -139,8 +139,12 @@ headless: true
- [常见问题与解决方法]({{< relref "./docs/A-Tune/常见问题与解决方法.md" >}})
- [附录]({{< relref "./docs/A-Tune/附录.md" >}})
- [Embedded用户指南]({{< relref "./docs/Embedded/embedded.md" >}})
+ - [发行说明]({{< relref "./docs/Embedded/openEuler Embedded 22.03发行说明.md" >}})
+ - [构建指导]({{< relref "./docs/Embedded/openEuler Embedded构建指导.md" >}})
+ - [快速构建指导]({{< relref "./docs/Embedded/快速构建指导.md" >}})
+ - [容器构建指导]({{< relref "./docs/Embedded/容器构建指导.md" >}})
- [安装与运行]({{< relref "./docs/Embedded/安装与运行.md" >}})
- - [使用方法]({{< relref "./docs/Embedded/使用方法.md" >}})
+ - [SDK应用开发]({{< relref "./docs/Embedded/SDK应用开发.md" >}})
- [内核热升级指南]({{< relref "./docs/KernelLiveUpgrade/KernelLiveUpgrade.md" >}})
- [安装与部署]({{< relref "./docs/KernelLiveUpgrade/安装与部署.md" >}})
- [使用方法]({{< relref "./docs/KernelLiveUpgrade/使用方法.md" >}})
@@ -209,6 +213,12 @@ headless: true
- [云原生混合部署rubik用户指南]({{< relref "./docs/rubik/overview.md" >}})
- [安装与部署]({{< relref "./docs/rubik/安装与部署.md" >}})
- [http接口文档]({{< relref "./docs/rubik/http接口文档.md" >}})
+ - [混部隔离示例]({{< relref "./docs/rubik/混部隔离示例.md" >}})
- [镜像裁剪定制工具使用指南]({{< relref "./docs/TailorCustom/overview.md" >}})
- [isocut 使用指南]({{< relref "./docs/TailorCustom/isocut使用指南.md" >}})
- - [imageTailor 使用指南]({{< relref "./docs/TailorCustom/imageTailor使用指南.md" >}})
+ - [imageTailor 使用指南]({{< relref "./docs/TailorCustom/imageTailor 使用指南.md" >}})
+- [Gazelle用户指南]({{< relref "./docs/Gazelle/Gazelle.md" >}})
+- [NestOS用户指南]({{< relref "./docs/NestOS/overview.md" >}})
+ - [安装与部署]({{< relref "./docs/NestOS/安装与部署.md" >}})
+ - [使用方法]({{< relref "./docs/NestOS/使用方法.md" >}})
+ - [功能特性描述]({{< relref "./docs/NestOS/功能特性描述.md" >}})
diff --git a/score_admins.yaml b/score_admins.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..b1a8b426ac1e462edcfe0acb3e6018967839f66d
--- /dev/null
+++ b/score_admins.yaml
@@ -0,0 +1,6 @@
+score_admins:
+ amy_mayun
+ zhangcuihong
+ lanlanbenming
+ rachel_123456
+ hebin03
\ No newline at end of file