diff --git a/docs/en/docs/A-Ops/exception-detection-service-manual.md b/docs/en/docs/A-Ops/exception-detection-service-manual.md
index a75cbbe350014b81541a42f8eaa9c98eb59c0bb7..293879a04d9b517ae73faf279d3fb0480159181c 100644
--- a/docs/en/docs/A-Ops/exception-detection-service-manual.md
+++ b/docs/en/docs/A-Ops/exception-detection-service-manual.md
@@ -693,6 +693,7 @@ When started, the adoctor-check-scheduler and adoctor-check-executor services be
- page: Current page number.
- per_page: Number of records on each page. The maximum value is 50.
- access_token: Access token.
+
4. For example, to obtain the exception detection results of check_item1 on host1 from 2021-9-8 11:24:10 to 2021-9-9 09:00:00 on the first page only, display 30 results on each page, and sort the results in ascending order by **check_item**, run the following command:
```shell
diff --git a/docs/en/docs/A-Tune/figures/picture1.png b/docs/en/docs/A-Tune/figures/picture1.png
new file mode 100644
index 0000000000000000000000000000000000000000..624d148b98bc9890befbecc53f29d6a4890d06af
Binary files /dev/null and b/docs/en/docs/A-Tune/figures/picture1.png differ
diff --git a/docs/en/docs/A-Tune/figures/picture4.png b/docs/en/docs/A-Tune/figures/picture4.png
new file mode 100644
index 0000000000000000000000000000000000000000..c576fd0369008e847e6943d6f99351caccf9f3e5
Binary files /dev/null and b/docs/en/docs/A-Tune/figures/picture4.png differ
diff --git a/docs/en/docs/A-Tune/installation-and-deployment.md b/docs/en/docs/A-Tune/installation-and-deployment.md
index 1b5cd8ceb3c0a67cf13fc2b188679a69de5c8f58..ba1b7c5139c591570a9345262c209ba572ae774a 100644
--- a/docs/en/docs/A-Tune/installation-and-deployment.md
+++ b/docs/en/docs/A-Tune/installation-and-deployment.md
@@ -4,18 +4,13 @@ This chapter describes how to install and deploy A-Tune.
- [Installation and Deployment](#installation-and-deployment)
- [Software and Hardware Requirements](#software-and-hardware-requirements)
- - [Hardware Requirement](#hardware-requirement)
- - [Software Requirement](#software-requirement)
- [Environment Preparation](#environment-preparation)
- [A-Tune Installation](#a-tune-installation)
- - [Installation Modes](#installation-modes)
- - [Installation Procedure](#installation-procedure)
- [A-Tune Deployment](#a-tune-deployment)
- - [Overview](#overview)
- - [Example](#example)
- - [Example](#example-1)
- [Starting A-Tune](#starting-a-tune)
- [Starting A-Tune Engine](#starting-a-tune-engine)
+ - [Distributed Deployment](#distributed-deployment)
+ - [Cluster Deployment](#cluster-deployment)
@@ -28,11 +23,11 @@ This chapter describes how to install and deploy A-Tune.
### Software Requirement
-- OS: openEuler 22.03 LTS
+- OS: openEuler 22.03
## Environment Preparation
-For details about installing an openEuler OS, see _openEuler 22.03 LTS Installation Guide_.
+For details about installing an openEuler OS, see _openEuler 22.03 Installation Guide_.
## A-Tune Installation
@@ -40,16 +35,19 @@ This section describes the installation modes and methods of the A-Tune.
### Installation Modes
-A-Tune can be installed in single-node or distributed mode.
+A-Tune can be installed in single-node, distributed, and cluster modes.
-- Single-node mode
+- Single-node mode
The client and server are installed on the same system.
-- Distributed mode
+- Distributed mode
The client and server are installed on different systems.
+- Cluster mode
+  A cluster consists of one client and multiple servers.
+
The installation modes are as follows:
@@ -61,13 +59,14 @@ The installation modes are as follows:
To install the A-Tune, perform the following steps:
-1. Mount an openEuler ISO file.
+1. Mount an openEuler ISO image.
```
- # mount openEuler-22.03-LTS-aarch64-dvd.iso /mnt
+ # mount openEuler-22.03-LTS-everything-x86_64-dvd.iso /mnt
```
+ > Use the **everything** ISO image.
-2. Configure the local yum source.
+2. Configure the local Yum source.
```
# vim /etc/yum.repos.d/local.repo
@@ -83,7 +82,7 @@ To install the A-Tune, perform the following steps:
enabled=1
```
-3. Import the GPG public key of the RPM digital signature to the system.
+3. Import the GPG public key of the RPM digital signature to the system.
```
# rpm --import /mnt/RPM-GPG-KEY-openEuler
@@ -100,13 +99,13 @@ To install the A-Tune, perform the following steps:
# yum install atune-engine -y
```
-5. For a distributed mode, install an A-Tune client on associated server.
+5. In distributed mode, install the A-Tune client on the associated server.
```
# yum install atune-client -y
```
-6. Check whether the installation is successful.
+6. Check whether the installation is successful.
```
# rpm -qa | grep atune
@@ -129,9 +128,7 @@ This section describes how to deploy A-Tune.
The configuration items in the A-Tune configuration file **/etc/atuned/atuned.cnf** are described as follows:
-- A-Tune service startup configuration
-
- You can modify the parameter value as required.
+- A-Tune service startup configuration (modify the parameter values as required).
- **protocol**: Protocol used by the gRPC service. The value can be **unix** or **tcp**. **unix** indicates the local socket communication mode, and **tcp** indicates the socket listening port mode. The default value is **unix**.
- **address**: Listening IP address of the gRPC service. The default value is **unix socket**. If the gRPC service is deployed in distributed mode, change the value to the listening IP address.
@@ -161,7 +158,7 @@ The configuration items in the A-Tune configuration file **/etc/atuned/atuned.c
- **tlsengineclientcertfile**: Client certificate path of the A-Tune engine service.
- **tlsengineclientkeyfile**: Client key path of the A-Tune engine service.
-- System information
+- System information
System is the parameter information required for system optimization. You must modify the parameter information according to the actual situation.
@@ -169,15 +166,15 @@ The configuration items in the A-Tune configuration file **/etc/atuned/atuned.c
- **network**: NIC information to be collected during the analysis process or specified NIC during NIC optimization.
- **user**: User name used for ulimit optimization. Currently, only the user **root** is supported.
-- Log information
+- Log information
Change the log level as required. The default log level is info. Log information is recorded in the **/var/log/messages** file.
-- Monitor information
+- Monitor information
Hardware information that is collected by default when the system is started.
-- Tuning information
+- Tuning information
Tuning is the parameter information required for offline tuning.
@@ -281,9 +278,7 @@ The configuration items in the A-Tune configuration file **/etc/atuned/atuned.c
The configuration items in the configuration file **/etc/atuned/engine.cnf** of the A-Tune engine are described as follows:
-- Startup configuration of the A-Tune engine service
-
- You can modify the parameter value as required.
+- Startup configuration of the A-Tune engine service (modify the parameter values as required).
- **engine_host**: Listening address of the A-Tune engine service. The default value is localhost.
- **engine_port**: Listening port of the A-Tune engine service. The value ranges from 0 to 65535. The default value is 3838.
@@ -323,7 +318,25 @@ The configuration items in the configuration file **/etc/atuned/engine.cnf** of
## Starting A-Tune
-After the A-Tune is installed, you need to start the A-Tune service.
+After A-Tune is installed, you need to configure the A-Tune service before starting it.
+- Configure the A-Tune service.
+ Modify the network adapter and drive information in the **atuned.cnf** configuration file.
+ > Note:
+ >
+ > If atuned is installed through `make install`, the network adapter and drive information in the configuration file is automatically updated to the default devices on the machine. To collect data from other devices, perform the following steps to configure atuned.
+
+ Run the following command to search for the network adapter that needs to be specified for optimization or data collection, and change the value of **network** in the **/etc/atuned/atuned.cnf** file to the specified network adapter:
+ ```
+ ip addr
+ ```
+  Run the following command to search for the drive that needs to be specified for optimization or data collection, and change the value of **disk** in the **/etc/atuned/atuned.cnf** file to the specified drive:
+ ```
+ fdisk -l | grep dev
+ ```
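+
+  For example, after the devices are identified, the corresponding entries in the **[system]** section of **/etc/atuned/atuned.cnf** might look like this (a minimal sketch; **eth0** and **sda** are placeholder device names, replace them with the devices found above):
+  ```
+  [system]
+  # ......
+  disk = sda
+  network = eth0
+  # ......
+  ```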
+- About the certificate:
+ The A-Tune engine and client use the gRPC communication protocol. Therefore, you need to configure a certificate to ensure system security. For information security purposes, A-Tune does not provide a certificate generation method. You need to configure a system certificate by yourself.
+  If security is not considered, set **rest_tls** and **engine_tls** in the **/etc/atuned/atuned.cnf** file to **false**, and set **engine_tls** in the **/etc/atuned/engine.cnf** file to **false**.
+ A-Tune is not liable for any consequences incurred if no security certificate is configured.
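+
+  For a test environment, the entries to change might look like this (a minimal sketch; production systems should configure proper certificates instead):
+  ```
+  # In /etc/atuned/atuned.cnf
+  rest_tls = false
+  engine_tls = false
+
+  # In /etc/atuned/engine.cnf
+  engine_tls = false
+  ```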
- Start the atuned service.
@@ -362,5 +375,132 @@ To use AI functions, you need to start the A-Tune engine service.
If the following command output is displayed, the service is started successfully:

+
+## Distributed Deployment
+
+### Purpose of Distributed Deployment
+A-Tune supports distributed deployment to implement a distributed architecture and on-demand deployment. The components of A-Tune can be deployed separately. Lightweight component deployment has little impact on services and avoids installing excessive dependencies, reducing the system load.
+
+This document describes only a common deployment mode: deploying the client and server on the same node and deploying the engine module on another node. For details about other deployment modes, contact A-Tune developers.
+
+**Deployment relationship**
+
+
+### Configuration File
+In distributed deployment mode, you need to write the IP address and port number of the engine in the configuration file so that other components can access the engine component through this IP address.
+
+1. Modify the **/etc/atuned/atuned.cnf** file on the server node.
+ - Change the values of **engine_host** and **engine_port** in line 34 to the IP address and port number of the engine node. For the deployment in the preceding figure, the values are **engine_host = 192.168.0.1 engine_port = 3838**.
+ - Change the values of **rest_tls** and **engine_tls** in lines 49 and 55 to **false**. Otherwise, you need to apply for and configure certificates. You do not need to configure SSL certificates in the test environment. However, you need to configure SSL certificates in the production environment to prevent security risks.
+2. Modify the **/etc/atuned/engine.cnf** file on the engine node.
+    - Change the values of **engine_host** and **engine_port** in lines 17 and 18 to the IP address and port number of the engine node. For the deployment in the preceding figure, the values are **engine_host = 192.168.0.1 engine_port = 3838**.
+ - Change the value of **engine_tls** in line 22 to **false**.
+3. After modifying the configuration file, restart the service for the modification to take effect.
+    - Run the `systemctl restart atuned` command on the server node.
+ - Run the `systemctl restart atune-engine` command on the engine node.
+4. (Optional) Run the `tuning` command in the **A-Tune/examples/tuning/compress** folder.
+ - For details, see **A-Tune/examples/tuning/compress/README**.
+ - Run the `atune-adm tuning --project compress --detail compress_client.yaml` command.
+ - This step is to check whether the distributed deployment is successful.
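+
+For example, after the configuration files are modified, the services can be restarted and verified as follows (a minimal sketch; the `systemctl status` checks are only an illustrative way to confirm that the services are active):
+
+```bash
+# On the server node: restart the atuned service and confirm that it is active.
+systemctl restart atuned
+systemctl status atuned
+
+# On the engine node: restart the atune-engine service and confirm that it is active.
+systemctl restart atune-engine
+systemctl status atune-engine
+```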
+
+### Precautions
+1. This document does not describe how to configure the authentication certificates. You can set **rest_tls** or **engine_tls** in the **atuned.cnf** and **engine.cnf** files to **false** if necessary.
+2. After modifying the configuration file, restart the service. Otherwise, the modification does not take effect.
+3. Do not enable the proxy when using A-Tune.
+4. The **disk** and **network** items of the **[system]** section in the **atuned.cnf** file need to be modified. For details about how to modify the items, see the [A-Tune User Guide](https://gitee.com/gaoruoshu/A-Tune/blob/master/Documentation/UserGuide/A-Tune%E7%94%A8%E6%88%B7%E6%8C%87%E5%8D%97.md).
+
+### Example
+#### atuned.cnf
+```bash
+# ......
+
+# the tuning optimizer host and port, start by engine.service
+# if engine_host is same as rest_host, two ports cannot be same
+# the port can be set between 0 to 65535 which not be used
+engine_host = 192.168.0.1
+engine_port = 3838
+
+# ......
+```
+#### engine.cnf
+```bash
+[server]
+# the tuning optimizer host and port, start by engine.service
+# if engine_host is same as rest_host, two ports cannot be same
+# the port can be set between 0 to 65535 which not be used
+engine_host = 192.168.0.1
+engine_port = 3838
+```
+## Cluster Deployment
+
+### Purpose of Cluster Deployment
+To support fast tuning in multi-node scenarios, A-Tune supports dynamic tuning of parameter settings on multiple nodes at the same time. In this way, you do not need to tune each node separately, improving tuning efficiency.
+Cluster deployment mode consists of one master node and several agent nodes. The client and server are deployed on the master node to receive commands and interact with the engine. Other nodes receive instructions from the master node and configure the parameters of the current node.
+
+**Deployment relationship**
+ 
+
+In the preceding figure, the client and server are deployed on the node whose IP address is 192.168.0.0. Project files are stored on this node. Other nodes do not contain project files.
+The master node communicates with the agent nodes through TCP. Therefore, you need to modify the configuration file.
+
+### Modifications to atuned.cnf
+1. Set the value of **protocol** to **tcp**.
+2. Set the value of **address** to the IP address of the current node.
+3. Set the value of **connect** to the IP addresses of all nodes. The first IP address is the IP address of the master node, and the subsequent IP addresses are the IP addresses of agent nodes. Use commas (,) to separate the IP addresses.
+4. During debugging, you can set **rest_tls** and **engine_tls** to **false**.
+5. Perform the same modification on the **atuned.cnf** files of all the master and agent nodes.
+
+### Precautions
+1. The values of **engine_host** and **engine_port** must be consistent in the **engine.cnf** file and the **atuned.cnf** file on the server.
+2. This document does not describe how to configure the authentication certificates. You can set **rest_tls** or **engine_tls** in the **atuned.cnf** and **engine.cnf** files to **false** if necessary.
+3. After modifying the configuration file, restart the service. Otherwise, the modification does not take effect.
+4. Do not enable the proxy when using A-Tune.
+
+### Example
+#### atuned.cnf
+```bash
+# ......
+
+[server]
+# the protocol grpc server running on
+# ranges: unix or tcp
+protocol = tcp
+
+# the address that the grpc server to bind to
+# default is unix socket /var/run/atuned/atuned.sock
+# ranges: /var/run/atuned/atuned.sock or ip address
+address = 192.168.0.0
+
+# the atune nodes in cluster mode, separated by commas
+# it is valid when protocol is tcp
+connect = 192.168.0.0,192.168.0.1,192.168.0.2,192.168.0.3
+
+# the atuned grpc listening port
+# the port can be set between 0 to 65535 which not be used
+port = 60001
+
+# the rest service listening port, default is 8383
+# the port can be set between 0 to 65535 which not be used
+rest_host = localhost
+rest_port = 8383
+
+# the tuning optimizer host and port, start by engine.service
+# if engine_host is same as rest_host, two ports cannot be same
+# the port can be set between 0 to 65535 which not be used
+engine_host = 192.168.1.1
+engine_port = 3838
+
+# ......
+```
+#### engine.cnf
+```bash
+[server]
+# the tuning optimizer host and port, start by engine.service
+# if engine_host is same as rest_host, two ports cannot be same
+# the port can be set between 0 to 65535 which not be used
+engine_host = 192.168.1.1
+engine_port = 3838
+```
+Note: For details about the **engine.cnf** file, see the configuration file for distributed deployment.
diff --git a/docs/en/docs/Administration/configuring-the-network.md b/docs/en/docs/Administration/configuring-the-network.md
index bbfd946365805129cfee91fc6c2b06f8884a5b5f..566df23c2a0c5fe5a86865635c67145586432ea5 100644
--- a/docs/en/docs/Administration/configuring-the-network.md
+++ b/docs/en/docs/Administration/configuring-the-network.md
@@ -457,7 +457,6 @@ PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.0.10
-GATEWAY=192.168.0.1
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
diff --git a/docs/en/docs/Administration/configuring-the-repo-server.md b/docs/en/docs/Administration/configuring-the-repo-server.md
index 429aed1118e8db24fb03852d78f8a352be3acdda..c7f8f1ad936f80344469a2b429818a63c4d652d7 100644
--- a/docs/en/docs/Administration/configuring-the-repo-server.md
+++ b/docs/en/docs/Administration/configuring-the-repo-server.md
@@ -1,7 +1,7 @@
# Configuring the Repo Server
> **NOTE:**
-> openEuler provides multiple repo sources for online usage. For details about the repo sources, see [System Installation](./../Releasenotes/installing-the-os.html). If you cannot obtain the openEuler repo source online, you can use the ISO release package provided by openEuler to create a local openEuler repo source. This section uses the **openEuler-22.03-LTS-aarch64-dvd.iso** file as an example. Modify the ISO file as required.
+> openEuler provides multiple repo sources for online usage. For details about the repo sources, see [System Installation](../Releasenotes/installing-the-os.md). If you cannot obtain the openEuler repo source online, you can use the ISO release package provided by openEuler to create a local openEuler repo source. This section uses the **openEuler-22.03-LTS-aarch64-dvd.iso** file as an example. Modify the ISO file as required.
diff --git a/docs/en/docs/Administration/faqs.md b/docs/en/docs/Administration/faqs.md
index c18ba56153b6a704b05e80c2d9a8393fdc8c35be..6d99cfd36e0b1ea2f4087fab582e6969270645a2 100644
--- a/docs/en/docs/Administration/faqs.md
+++ b/docs/en/docs/Administration/faqs.md
@@ -9,11 +9,11 @@
- [Installation Failure Caused by Software Package Conflict, File Conflict, or Missing Software Package](#installation-failure-caused-by-software-package-conflict-file-conflict-or-missing-software-package)
- [Failed to Downgrade libiscsi](#failed-to-downgrade-libiscsi)
- [Failed to Downgrade xfsprogs](#failed-to-downgrade-xfsprogs)
- - [CPython/Lib Detects CVE-2019-9674: Zip Bomb](#cpythonlib-detects-cve-2019-9674-zip-bomb)
+ - [CVE-2019-9674 Zip Bomb Detected in the CPython/Lib](#cve-2019-9674-zip-bomb-detected-in-the-cpythonlib)
- [ReDoS Attack Occurs Due to Improper Use of glibc Regular Expressions](#redos-attack-occurs-due-to-improper-use-of-glibc-regular-expressions)
- [An Error Is Reported When gdbm-devel Is Installed or Uninstalled During the Installation and Uninstallation of httpd-devel and apr-util-devel](#an-error-is-reported-when-gdbm-devel-is-installed-or-uninstalled-during-the-installation-and-uninstallation-of-httpd-devel-and-apr-util-devel)
- [An rpmdb Error Is Reported When Running the yum or dnf Command After the System Is Rebooted](#an-rpmdb-error-is-reported-when-running-the-yum-or-dnf-command-after-the-system-is-rebooted)
- - [Failed to Run `rpmrebuild -d /home/test filesystem` to Rebuild the **filesystem** Package](#failed-to-run-rpmrebuild--d-hometest-filesystem-to-rebuild-the-filesystem-package)
+ - [Failed to Run rpmrebuild -d /home/test filesystem to Rebuild the filesystem Package](#failed-to-run-rpmrebuild--d-hometest-filesystem-to-rebuild-the-filesystem-package)
@@ -241,7 +241,7 @@ Run the following command to uninstall the **xfsprogs-xfs_scrub** subpackage and
yum remove xfsprogs-xfs_scrub
```
-## CPython/Lib Detects CVE-2019-9674: Zip Bomb
+## CVE-2019-9674 Zip Bomb Detected in the CPython/Lib
### Symptom
@@ -333,7 +333,7 @@ Step 1 Run the `kill -9` command to terminate all running RPM-related commands.
Step 2 Delete all **/var/lib/rpm/__db.00*** files.
Step 3 Run the `rpmdb --rebuilddb` command to rebuild the RPM database.
-## Failed to Run `rpmrebuild -d /home/test filesystem` to Rebuild the **filesystem** Package
+## Failed to Run rpmrebuild -d /home/test filesystem to Rebuild the filesystem Package
### Symptom
diff --git a/docs/en/docs/Administration/setting-up-the-database-server.md b/docs/en/docs/Administration/setting-up-the-database-server.md
index 000f8f03a830276fcab208747b8f059e79915562..3401a5a64f441468e8a1587fccdc241a1f8de249 100644
--- a/docs/en/docs/Administration/setting-up-the-database-server.md
+++ b/docs/en/docs/Administration/setting-up-the-database-server.md
@@ -1,25 +1,24 @@
# Setting Up the Database Server
-
- [Setting Up the Database Server](#setting-up-the-database-server)
- - [PostgreSQL Server](#postgresql-server)
- - [Software Description](#software-description)
- - [Configuring the Environment](#configuring-the-environment)
- - [Installing, Running, and Uninstalling PostgreSQL](#installing-running-and-uninstalling-postgresql)
- - [Managing Database Roles](#managing-database-roles)
- - [Managing Databases](#managing-databases)
- - [MariaDB Server](#mariadb-server)
- - [Software Description](#software-description-1)
- - [Configuring the Environment](#configuring-the-environment-1)
- - [Installing, Running, and Uninstalling MariaDB Server](#installing-running-and-uninstalling-mariadb-server)
- - [Managing Database Users](#managing-database-users)
- - [Managing Databases](#managing-databases-1)
- - [MySQL Server](#mysql-server)
- - [Software Description](#software-description-2)
- - [Configuring the Environment](#configuring-the-environment-2)
- - [Installing, Running, and Uninstalling MySQL](#installing-running-and-uninstalling-mysql)
- - [Managing Database Users](#managing-database-users-1)
- - [Managing Databases](#managing-databases-2)
+ - [PostgreSQL Server](#postgresql-server)
+ - [Software Description](#software-description)
+ - [Configuring the Environment](#configuring-the-environment)
+ - [Installing, Running, and Uninstalling PostgreSQL](#installing-running-and-uninstalling-postgresql)
+ - [Managing Database Roles](#managing-database-roles)
+ - [Managing Databases](#managing-databases)
+ - [MariaDB Server](#mariadb-server)
+ - [Software Description](#software-description-1)
+ - [Configuring the Environment](#configuring-the-environment-1)
+ - [Installing, Running, and Uninstalling MariaDB Server](#installing-running-and-uninstalling-mariadb-server)
+ - [Managing Database Users](#managing-database-users)
+ - [Managing Databases](#managing-databases-1)
+ - [MySQL Server](#mysql-server)
+ - [Software Description](#software-description-2)
+ - [Configuring the Environment](#configuring-the-environment-2)
+ - [Installing, Running, and Uninstalling MySQL](#installing-running-and-uninstalling-mysql)
+ - [Managing Database Users](#managing-database-users-1)
+ - [Managing Databases](#managing-databases-2)
@@ -206,7 +205,7 @@
#### Installing PostgreSQL
-1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.html).
+1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md).
2. Clear the cache.
```
@@ -890,7 +889,7 @@ Each storage engine manages and stores data in different ways, and supports diff
>- In a non-performance test, run the following command as the **root** user to create a data directory. Then skip this section.
> \# mkdir /data
-##### Method 1: Using fdisk for Drive Management as the **root** user
+##### Method 1: Using fdisk for Drive Management as the root user
1. Create a partition, for example, **/dev/sdb**.
```
@@ -926,10 +925,10 @@ Each storage engine manages and stores data in different ways, and supports diff

-##### Method 2: Using LVM for Drive Management as the **root** user
+##### Method 2: Using LVM for Drive Management as the root user
> **NOTE:**
>Install the LVM2 package in the image as follows:
->1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.html). If the repository has been configured, skip this step.
+>1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md). If the repository has been configured, skip this step.
>2. Install LVM2.
> **\# yum install lvm2**
@@ -991,7 +990,7 @@ Each storage engine manages and stores data in different ways, and supports diff
#### Installing MariaDB
-1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.html).
+1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md).
2. Clear the cache.
```
@@ -1517,7 +1516,7 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard
>- In a non-performance test, run the following command as the **root** user to create a data directory. Then skip this section.
> \# mkdir /data
-##### Method 1: Using fdisk for Drive Management as the **root** user
+##### Method 1: Using fdisk for Drive Management as the root user
1. Create a partition, for example, **/dev/sdb**.
```
@@ -1553,10 +1552,10 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard

-##### Method 2: Using LVM for Drive Management as the **root** user
+##### Method 2: Using LVM for Drive Management as the root user
> **NOTE:**
>Install the LVM2 package in the image as follows:
->1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.html). If the repository has been configured, skip this step.
+>1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md). If the repository has been configured, skip this step.
>2. Install LVM2.
> **\# yum install lvm2**
@@ -1619,7 +1618,7 @@ The Structured Query Language \(SQL\) used by MySQL is the most common standard
#### Installing MySQL
-1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.html).
+1. Configure the local yum source. For details, see [Configuring the Repo Server](./configuring-the-repo-server.md).
2. Clear the cache.
```
diff --git a/docs/en/docs/ApplicationDev/installing-obs.md b/docs/en/docs/ApplicationDev/installing-obs.md
new file mode 100644
index 0000000000000000000000000000000000000000..ddeaf777144bb613a3bcdd506adba7bbb6fb3c1e
--- /dev/null
+++ b/docs/en/docs/ApplicationDev/installing-obs.md
@@ -0,0 +1,98 @@
+# Installing the OBS Tool
+
+
+ - [Description](#description)
+ - [Supported Architectures](#supported-architectures)
+ - [OBS Installation](#obs-installation)
+ - [OBS Deployment](#obs-deployment)
+ - [Usage Instructions](#usage-instructions)
+
+
+
+## Description
+Open Build Service (OBS) is a general tool for building source packages into RPM packages or Linux images.
+obs-server is the software package of OBS.
+
+## Supported Architectures
+OBS supports x86_64 and AArch64 architectures.
+
+
+## OBS Installation
+
+openEuler 22.03 LTS for the AArch64 architecture is used as an example to demonstrate how to install the multi-architecture obs-server packages.
+
+1. Check whether the OS is openEuler 22.03 LTS.
+
+ ``` shell script
+ $ cat /etc/openEuler-release
+ openEuler release 22.03 LTS
+ ```
+
+2. Configure the Yum source. The repo source for the multi-architecture obs-server must be placed before the **everything** repo source. An example Yum source configuration is as follows:
+
+ ``` shell script
+ [obs]
+ name=obs
+ baseurl=https://repo.openeuler.org/openEuler-22.03-LTS/EPOL/update/multi_version/obs-server/2.10.11/aarch64/
+ enabled=1
+ gpgcheck=0
+
+ [everything]
+ name=everything
+ baseurl=https://repo.openeuler.org/openEuler-22.03-LTS/everything/aarch64/
+ enabled=1
+ gpgcheck=0
+ ```
+   Run the following command to open the repo source file and add the preceding content:
+ ``` shell script
+ $ sudo vi /etc/yum.repos.d/xxx.repo
+ ```
+
+3. Enable the Yum source.
+
+ ``` shell script
+ $ sudo yum clean all
+ $ sudo yum makecache
+ ```
+
+4. Check whether OBS packages of other versions exist.
+
+ ``` shell script
+ $ sudo rpm -qa obs-server obs-common obs-api mod_passenger obs-api-deps obs-bundled-gems passenger ruby ruby-help ruby-irb rubygem-bundler rubygem-io-console rubygem-json rubygem-openssl rubygem-psych rubygem-rake rubygem-rdoc rubygems rubygem-bigdecimal rubygem-did_you_mean
+ ```
+
+5. (Optional) To prevent conflicts, uninstall OBS packages of other versions.
+
+ ``` shell script
+ $ sudo yum remove -y obs-server obs-common obs-api mod_passenger obs-api-deps obs-bundled-gems passenger ruby ruby-help ruby-irb rubygem-bundler rubygem-io-console rubygem-json rubygem-openssl rubygem-psych rubygem-rake rubygem-rdoc rubygems rubygem-bigdecimal rubygem-did_you_mean
+ ```
+
+ > **Note**
+ >
+ >- The example repo source is the multi-architecture version of obs-server released with openEuler 22.03 LTS.
+ >- Installation dependency packages of different versions may conflict, causing installation failure. You are advised to uninstall the preceding software packages before installation.
+
+6. Install obs-server packages.
+
+ ``` shell script
+ $ sudo yum install -y obs-api obs-server
+ ```
+
+7. Check whether obs-server packages are successfully installed.
+
+ ``` shell script
+ $ rpm -qa | grep obs-server
+ obs-server-2.10.11-6.oe2203.noarch
+ $ rpm -qa | grep obs-api
+ obs-api-2.10.11-6.oe2203.noarch
+ ```
+
+## OBS Deployment
+
+1. Obtain the deployment script at https://gitee.com/openeuler/infrastructure/tree/master/obs/tf/startup.
+
+2. Run the **restart_service.sh** script to deploy the OBS tool.
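+
+For example, the deployment script can be fetched and executed as follows (a sketch; the clone destination and the script path inside the repository are assumptions based on the URL above):
+
+``` shell script
+$ git clone https://gitee.com/openeuler/infrastructure.git
+$ cd infrastructure/obs/tf/startup
+$ sudo sh restart_service.sh
+```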
+
+## Usage Instructions
+
+You can build RPM packages using the OBS web UI or the osc CLI tool. For details, see [Building an RPM Package](./building-an-rpm-package.md).
\ No newline at end of file
diff --git a/docs/en/docs/Quickstart/quick-start.md b/docs/en/docs/Quickstart/quick-start.md
index 114370df952a195ab519c1633981f78148520b6d..9e83d9eb3aa1de3ac12caa560e76fbe0b8980295 100644
--- a/docs/en/docs/Quickstart/quick-start.md
+++ b/docs/en/docs/Quickstart/quick-start.md
@@ -68,7 +68,7 @@ This document uses openEuler 22.03_LTS installed on the TaiShan 200 server as an
|
- - 64-bit Arm architecture
- 64-bit Intel x86 architecture
+ | - 64-bit ARM architecture
- 64-bit Intel x86 architecture
|
CPU
diff --git a/docs/en/docs/Releasenotes/key-features.md b/docs/en/docs/Releasenotes/key-features.md
index 964190a3dcc9c13cfa0a5c232b86fcfedd73bff1..16cc8dd4d7f192cbf4aada7a20e0b940864ba9d6 100644
--- a/docs/en/docs/Releasenotes/key-features.md
+++ b/docs/en/docs/Releasenotes/key-features.md
@@ -16,7 +16,7 @@ In-depth optimizations of scheduling, I/O, and memory management have been perfo
- **Huge page vmalloc performance optimization**: When calling vmalloc() to allocate spaces that exceed the minimum size of huge pages, the huge page instead of the base page is used to map the memory, improving the TLB usage and reducing TLB misses.
-- **Uncorrectable error (UCE) fault tolerance**: When a UCE occurs in the copy_from_user scenario, the system kills the associated user-mode process instead of kernel panicking. This feature is disabled by default. You can enable it by adding **CONFIG_UCE_KERNEL_RECOVERY** to the kernel configuration, and configure it in the kernel boot parameter interface **/proc/cmdline** (add **uce_kernel_recovery=[0,4]**) and proc interface **/proc/sys/kernel/uce_kernel_recovery**.
+- **Uncorrectable error (UCE) tolerance**: By default, a hardware memory error triggered in kernel mode is handled as a kernel panic. However, analysis shows that in some scenarios only user-mode processes are affected: the affected processes can be killed and the faulty page isolated, without handling the memory error through a system panic. Based on this idea, solutions are provided for the uaccess (**copy_{from, to}_user**, **{get, put}_user**), COW, and core dump scenarios to prevent system reset and improve system reliability.
## File System for New Media
diff --git a/docs/en/docs/SecHarden/appendix.md b/docs/en/docs/SecHarden/appendix.md
index 2c47d84fc9055ad6390ee0eb7e63cd76f9b6eff3..40227cab43973692dfc1845bf653b4104365f6fe 100644
--- a/docs/en/docs/SecHarden/appendix.md
+++ b/docs/en/docs/SecHarden/appendix.md
@@ -2,7 +2,7 @@
This chapter describes the file permissions and **umask** values.
-- [Appendix](#appendix.md)
+- [Appendix](#appendix)
- [Permissions on Files and Directories](#permissions-on-files-and-directories)
- [umask Values](#umask-values)
diff --git a/docs/en/docs/SecHarden/authentication-and-authorization.md b/docs/en/docs/SecHarden/authentication-and-authorization.md
index a0f70e5a5122eb5c2cd074db591e238a125d88e0..35af3922102b69bb2ec6e19ee06d8c49ca2441ab 100644
--- a/docs/en/docs/SecHarden/authentication-and-authorization.md
+++ b/docs/en/docs/SecHarden/authentication-and-authorization.md
@@ -2,7 +2,7 @@
- [Authentication and Authorization](#authentication-and-authorization)
- [Setting a Warning for Remote Network Access](#setting-a-warning-for-remote-network-access)
- - [Forestalling Unauthorized System Restart by Holding Down Ctrl, Alt, and Delete](#forestalling-unauthorized-system-restart-by-holding-down-ctrl-alt-and-delete)
+  - [Forestalling Unauthorized System Restart by Pressing Ctrl+Alt+Delete](#forestalling-unauthorized-system-restart-by-pressing-ctrlaltdelete)
- [Setting an Automatic Exit Interval for Shell](#setting-an-automatic-exit-interval-for-shell)
- [Setting the Default umask Value for Users to 0077](#setting-the-default-umask-value-for-users-to-0077)
- [Setting the GRUB2 Encryption Password](#setting-the-grub2-encryption-password)
@@ -25,16 +25,16 @@ This setting can be implemented by modifying the **/etc/issue.net** file. Repl
Authorized users only. All activities may be monitored and reported.
```
-## Forestalling Unauthorized System Restart by Holding Down Ctrl, Alt, and Delete
+## Forestalling Unauthorized System Restart by Pressing Ctrl+Alt+Delete
### Description
-By default, you can restart the system by pressing **Ctrl**, **Alt**, and **Delete**. You are advised to disable this function to prevent data loss due to misoperations.
+By default, you can restart the system by pressing **Ctrl**+**Alt**+**Delete**. You are advised to disable this function to prevent data loss due to misoperations.
### Implementation
-To disable the feature of restarting the system by holding down **Ctrl**, **Alt**, and **Delete**, perform the following steps:
+To disable the feature of restarting the system by pressing **Ctrl**+**Alt**+**Delete**, perform the following steps:
1. Run the following commands to delete the two **ctrl-alt-del.target** files:
@@ -68,10 +68,10 @@ export TMOUT=300
### Description
-The **umask** value is used to set default permission on files and directories. A smaller **umask** value indicates that group users or other users have incorrect permission, which brings system security risks. Therefore, the default **umask** value must be set to **0077** for all users, that is, the default permission on user directories is **700** and the permission on user files is **600**. The **umask** value indicates the complement of a permission. For details about how to convert the **umask** value to a permission, see [umask Values](#umask-values.md).
+The **umask** value is used to set the default permission on files and directories. A smaller **umask** value indicates that group users or other users are granted broader permissions, which brings system security risks. Therefore, the default **umask** value must be set to **0077** for all users, that is, the default permission on user directories is **700** and the permission on user files is **600**. The **umask** value indicates the complement of a permission. For details about how to convert the **umask** value to a permission, see [umask Values](./appendix.md#umask-values).
> **NOTE:**
->By default, the **umask** value of the openEuler user is set to **0077**.
+>By default, the **umask** value of the openEuler user is set to **0022**.
### Implementation
diff --git a/docs/en/docs/StratoVirt/StratoVrit_guidence.md b/docs/en/docs/StratoVirt/StratoVirt_guidence.md
similarity index 100%
rename from docs/en/docs/StratoVirt/StratoVrit_guidence.md
rename to docs/en/docs/StratoVirt/StratoVirt_guidence.md
diff --git a/docs/en/docs/StratoVirt/VM_configuration.md b/docs/en/docs/StratoVirt/VM_configuration.md
index c303569a3300e1ae816c6dcb74576017e1fd992a..2c9e43dbf845be23e45ddc86304e812f6e701661 100644
--- a/docs/en/docs/StratoVirt/VM_configuration.md
+++ b/docs/en/docs/StratoVirt/VM_configuration.md
@@ -475,6 +475,43 @@ Standard VMs:
> If **deflate-on-oom** is set to **false**, when the guest memory is insufficient, the balloon device does not automatically release the memory. As a result, the guest OOM may occur, the processes may be killed, and even the VM cannot run properly.
+### RNG Configuration
+
+#### Introduction
+
+Virtio RNG is a paravirtualized random number generator that generates hardware random numbers for the guest.
+
+#### Configuration Methods
+
+Virtio RNG can be configured as the Virtio MMIO device or Virtio PCI device. To configure the Virtio RNG device as a Virtio MMIO device, run the following command:
+
+```
+-object rng-random,id=objrng0,filename=/path/to/random_file
+-device virtio-rng-device,rng=objrng0,max-bytes=1234,period=1000
+```
+
+To configure the Virtio RNG device as a Virtio PCI device, run the following command:
+
+```
+-object rng-random,id=objrng0,filename=/path/to/random_file
+-device virtio-rng-pci,rng=objrng0,max-bytes=1234,period=1000,bus=pcie.0,addr=0x1.0x0,id=rng-id[,multifunction=on]
+```
+
+Parameters:
+- **filename**: path of the character device used to generate random numbers on the host, for example, **/dev/random**.
+- **period**: period for limiting the read rate of random number characters, in milliseconds.
+- **max-bytes**: maximum number of bytes of a random number generated by a character device within a period.
+- **bus**: name of the bus to which the Virtio RNG device is mounted.
+- **addr**: address of the Virtio RNG device. The parameter format is **addr=***[slot].[function]*, where *slot* and *function* indicate the slot number and function number of the device respectively. The slot number and function number are hexadecimal numbers. The function number of the Virtio RNG device is **0x0**.
+
+#### Precautions
+
+- If **period** and **max-bytes** are not configured, the read rate of random number characters is not limited.
+- Otherwise, the value of **max-bytes/period\*1000** must be in the range [64, 1000000000]. It is recommended that the value not be too small, to prevent the rate of obtaining random number characters from being too low.
+- Only the average rate of random number characters can be limited; burst traffic cannot be limited.
+- If the guest needs to use the Virtio RNG device, the guest kernel requires the following configurations: **CONFIG_HW_RANDOM=y**, **CONFIG_HW_RANDOM_VIA=y**, and **CONFIG_HW_RANDOM_VIRTIO=y**.
+- When configuring the Virtio RNG device, check whether the entropy pool is sufficient to avoid VM freezing. For example, if the character device path is **/dev/random**, you can check **/proc/sys/kernel/random/entropy_avail** to view the current entropy pool size. When the entropy pool is full, the entropy pool size is **4096**. Generally, the value is greater than 1000.
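+
+For example, you can run the following command on the host to view the current entropy pool size (as noted above, a value greater than 1000 is generally sufficient):
+
+```
+cat /proc/sys/kernel/random/entropy_avail
+```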
+
## Configuration Examples
### Lightweight VMs
diff --git a/docs/en/docs/Virtualization/LibcarePlus.md b/docs/en/docs/Virtualization/LibcarePlus.md
index d2d9f66982d7ea6ebda11ed0074250a371c0e114..1a6fe540b61c631256a9612b1d451887a398dc7a 100644
--- a/docs/en/docs/Virtualization/LibcarePlus.md
+++ b/docs/en/docs/Virtualization/LibcarePlus.md
@@ -341,7 +341,7 @@ The procedure for applying the LibcarePlus hot patch is as follows:
2. In the second shell window, run the `libcare-ctl` command to apply the hot patch:
``` shell
- # libcare-ctl -v patch -p $(pidof foo) ./foo.kpatch
+ # libcare-ctl -v patch -p $(pidof foo) ./patchroot/BuildID.kpatch
```
If the hot patch is applied successfully, the following information is displayed in the second shell window:
@@ -391,7 +391,7 @@ The procedure for uninstalling the LibcarePlus hot patch is as follows:
1. Run the following command in the second shell window:
``` shell
- # libcare-ctl unpatch -p $(pidof foo)
+ # libcare-ctl unpatch -p $(pidof foo) -i 0001
```
If the hot patch is uninstalled successfully, the following information is displayed in the second shell window:
diff --git a/docs/en/docs/Virtualization/best-practices.md b/docs/en/docs/Virtualization/best-practices.md
index cea53d5bd45a268056aa0311b9647499e92c5929..8e47aec55b23a1cbce76474ec46eac08df05dd6f 100644
--- a/docs/en/docs/Virtualization/best-practices.md
+++ b/docs/en/docs/Virtualization/best-practices.md
@@ -608,7 +608,7 @@ Currently, openEuler 22.03 LTS provides the libtpms and swtpm sources. You can r
```
> **Note:**
- > Do not configure the ACPI feature for a VM running openEuler 20.09 in the AArch64 architecture, because the VM trusted boot does not support the ACPI feature. Otherwise, the VM cannot recognize the vTPM device after startup. For openEuler earlier than version 22.03 in the AArch64 architecture, set the value of **tpm model** to ****.
+ > Do not configure the ACPI feature for a VM running openEuler 20.09 in the AArch64 architecture, because the VM trusted boot does not support the ACPI feature. Otherwise, the VM cannot recognize the vTPM device after startup. For openEuler earlier than version 22.03 in the AArch64 architecture, set the value of **tpm model** to **\**.
2. Create the VM.
diff --git a/docs/en/docs/desktop/HA_usage_example.md b/docs/en/docs/desktop/HA_usage_example.md
index b48a0da88b82b42b648e65c2de779f35d99b7271..75ef1cca34707f0aeb1780243dfc8c7f1d3d7c26 100644
--- a/docs/en/docs/desktop/HA_usage_example.md
+++ b/docs/en/docs/desktop/HA_usage_example.md
@@ -1,6 +1,6 @@
# HA Usage Example
-This section describes how to get started with the HA cluster and add an instance. If you are not familiar with HA installation, see [Installing and Deploying HA](./install-deploy-HA.md\).
+This section describes how to get started with the HA cluster and add an instance. If you are not familiar with HA installation, see [Installing and Deploying HA](./installing-and-deploying-HA.md).
diff --git a/docs/en/docs/desktop/UKUI-user-guide.md b/docs/en/docs/desktop/UKUI-user-guide.md
index 8eb58636778d8d589fc931a6f3c3c5e9a1f31424..3dacb17e7e4384136bc8d797abc56781019b6883 100755
--- a/docs/en/docs/desktop/UKUI-user-guide.md
+++ b/docs/en/docs/desktop/UKUI-user-guide.md
@@ -340,15 +340,15 @@ The options are described in table below.
## FAQ
-### Can’t login to the system after locking the screen?
+### I Cannot Log In to the System After Locking the Screen
-- Switch to character terminal by < Ctrl + Alt + F2 >.
+- Switch to the character terminal by pressing **Ctrl + Alt + F2**.
- Input the user-name and passwd to login to the system.
- Do "sudo rm -rf ~/.Xauthority".
-- Switch to graphical interface by < Ctrl + Alt + F1 >, and input the passwd.
+- Switch to the graphical interface by pressing **Ctrl + Alt + F1**, and input the password.
@@ -356,7 +356,7 @@ The options are described in table below.
### Shortcut Key
|Shortcut Key|Function|
-| :------ | :-----
+| :------ | :-----|
| F5 | Refresh the desktop |
| F1 | Open the user-guide|
| Alt + Tab | Switch the window |
diff --git a/docs/en/docs/desktop/installing-GNOME.md b/docs/en/docs/desktop/installing-GNOME.md
index 65965b1e12f2369d76a68d457d0b9b091afcfccf..d9ed6d154910d8c261290b5f93fe793aa9e889ad 100644
--- a/docs/en/docs/desktop/installing-GNOME.md
+++ b/docs/en/docs/desktop/installing-GNOME.md
@@ -45,7 +45,7 @@ In this case, many unnecessary packages may be installed. You can run the follow
dconf-editor devhelp eog epiphany evince evolution-data-server file-roller folks \
gcab gcr gdk-pixbuf2 gdm gedit geocode-glib gfbgraph gjs glib2 glibmm24 \
glib-networking gmime30 gnome-autoar gnome-backgrounds gnome-bluetooth \
- gnome-boxes gnome-builder gnome-calculator gnome-calendar gnome-characters \
+ gnome-builder gnome-calculator gnome-calendar gnome-characters \
gnome-clocks gnome-color-manager gnome-contacts gnome-control-center \
gnome-desktop3 gnome-disk-utility gnome-font-viewer gnome-getting-started-docs \
gnome-initial-setup gnome-keyring gnome-logs gnome-menus gnome-music \
diff --git a/docs/en/menu/index.md b/docs/en/menu/index.md
index 91190d5fc49a0b82b96f2e1d6aac0e4f2e2ba313..8ffabf8e965e4a1afda4c274aa57bb92e0f21163 100644
--- a/docs/en/menu/index.md
+++ b/docs/en/menu/index.md
@@ -70,7 +70,7 @@ headless: true
- [vmtop]({{< relref "./docs/Virtualization/vmtop.md" >}})
- [LibcarePlus]({{< relref "./docs/Virtualization/LibcarePlus.md" >}})
- [Appendix]({{< relref "./docs/Virtualization/appendix.md" >}})
-- [StratoVirt Virtualization User Guide]({{< relref "./docs/StratoVirt/StratoVrit_guidence.md" >}})
+- [StratoVirt Virtualization User Guide]({{< relref "./docs/StratoVirt/StratoVirt_guidence.md" >}})
- [Introduction to StratoVirt]({{< relref "./docs/StratoVirt/StratoVirt_introduction.md" >}})
- [Installing StratoVirt]({{< relref "./docs/StratoVirt/Install_StratoVirt.md" >}})
- [Preparing the Environment]({{< relref "./docs/StratoVirt/Prepare_env.md" >}})
@@ -155,6 +155,7 @@ headless: true
- [Using JDK for Compilation]({{< relref "./docs/ApplicationDev/using-jdk-for-compilation.md" >}})
- [Building an RPM Package]({{< relref "./docs/ApplicationDev/building-an-rpm-package.md" >}})
- [FAQ]({{< relref "./docs/ApplicationDev/FAQ.md" >}})
+ - [Installing the OBS Tool]({{< relref "./docs/ApplicationDev/installing-obs.md" >}})
- [secGear Development Guide]({{< relref "./docs/secGear/secGear.md" >}})
- [Introduction to secGear]({{< relref "./docs/secGear/introduction-to-secGear.md" >}})
- [Installing secGear]({{< relref "./docs/secGear/installing-secGear.md" >}})
diff --git "a/docs/zh/docs/Administration/\345\237\272\347\241\200\351\205\215\347\275\256.md" "b/docs/zh/docs/Administration/\345\237\272\347\241\200\351\205\215\347\275\256.md"
index f863f5c20fd6a2627bbaddc58f3d794e0649d739..3b6ce032ebba13ed3e086befdf43bd686aff7ed4 100644
--- "a/docs/zh/docs/Administration/\345\237\272\347\241\200\351\205\215\347\275\256.md"
+++ "b/docs/zh/docs/Administration/\345\237\272\347\241\200\351\205\215\347\275\256.md"
@@ -1,4 +1,4 @@
-# 基础配置
+# 基础配置
- [基础配置](#基础配置)
@@ -386,7 +386,7 @@ $ date +"format"
#### 修改日期
-修改当前的日期,添加\-\-set或者-s参数。在root权限下执行如下命令,其中 _YYYY_ 代表年份,_MM_ 代表月份,_DD_ 代表某天,请根据实际情况修改:
+修改当前的日期,添加\-\-set或者-s参数。在root权限下执行如下命令,其中 _YYYY_ 代表年份,_MM_ 代表月份,_DD_ 代表某天,请根据实际情况修改:(注意,执行修改日期操作后,相应的时间会重置为00:00:00)
```
# date --set YYYY-MM-DD
diff --git "a/docs/zh/docs/Administration/\347\256\241\347\220\206\350\277\233\347\250\213.md" "b/docs/zh/docs/Administration/\347\256\241\347\220\206\350\277\233\347\250\213.md"
index a0df8539f2f0b639d27dd96b10312a129c8178a0..f986778339c035c786afb4b706624f9e0ac27dbf 100644
--- "a/docs/zh/docs/Administration/\347\256\241\347\220\206\350\277\233\347\250\213.md"
+++ "b/docs/zh/docs/Administration/\347\256\241\347\220\206\350\277\233\347\250\213.md"
@@ -236,9 +236,9 @@ crontab命令用于安装、删除或者显示用于驱动cron后台进程的表
crontab命令的常用方法如下:
- crontab -u //设置某个用户的cron服务,root用户在执行crontab时需要此参数。
-- crontab -l //列出某个用户cron服务的详细内容。
-- crontab -r //删除某个用户的cron服务。
-- crontab -e //编辑某个用户的cron服务。
+- crontab -l //列出当前用户cron服务的详细内容。
+- crontab -r //删除当前用户的cron服务。
+- crontab -e //编辑当前用户的cron服务。
例如root查看自己的cron设置。命令如下:
diff --git "a/docs/zh/docs/Administration/\351\205\215\347\275\256\347\275\221\347\273\234.md" "b/docs/zh/docs/Administration/\351\205\215\347\275\256\347\275\221\347\273\234.md"
index 2b68f5a214c347c0efae46fd30788c2b7c57cc9a..2d92318df4619510f178ddb1c9d22820533d8850 100644
--- "a/docs/zh/docs/Administration/\351\205\215\347\275\256\347\275\221\347\273\234.md"
+++ "b/docs/zh/docs/Administration/\351\205\215\347\275\256\347\275\221\347\273\234.md"
@@ -544,6 +544,7 @@ $ hostnamectl status
```
# hostnamectl set-hostname name
+# exec bash
```
#### 设定特定主机名
diff --git "a/docs/zh/docs/Driver_Development_Specifications/openEuler\351\251\261\345\212\250\345\274\200\345\217\221\350\247\204\350\214\203.md" "b/docs/zh/docs/Driver_Development_Specifications/openEuler\351\251\261\345\212\250\345\274\200\345\217\221\350\247\204\350\214\203.md"
new file mode 100644
index 0000000000000000000000000000000000000000..9ad5d9aebce949928fc772b6fdbc16c119e9d8e5
--- /dev/null
+++ "b/docs/zh/docs/Driver_Development_Specifications/openEuler\351\251\261\345\212\250\345\274\200\345\217\221\350\247\204\350\214\203.md"
@@ -0,0 +1,49 @@
+# openEuler驱动开发规范
+
+## 目的
+
+为规范和统一 openEuler 驱动开发的提交流程及方式,使驱动在 openEuler上使能,特制定 openEuler 驱动开发规范。
+
+## 适用范围
+
+openEuler 驱动开发规范适用于 openEuler 在开发以及已发布的所有版本的开发过程。
+
+包括但不限于以下版本:
+
+- openEuler 20.03 LTS SP3
+- openEuler 22.03 LTS
+- openEuler 22.03 LTS SP1
+
+## 对驱动的基本要求
+
+openEuler的目标:
+
+- 成为技术创新的平台,加速技术创新、成熟及落地应用。
+- 维护安全稳定、可靠、性能领先以及生态丰富的稳定内核,方便产业界快速应用。
+
+因此,满足以上原则的驱动可以提交至openEuler。
+
+### 签署贡献者协议 (CLA)
+
+贡献者贡献openEuler社区前,需签署贡献者协议[CLA](https://openeuler.org/zh/community/contribution/)。
+
+>**说明** :CLA签署后大约需要一周时间生效。
+
+### 驱动要求
+
+驱动需满足如下要求:
+
+1. 名称不能与系统已有名称发生冲突。
+2. 对照openEuler社区KABI检测。
+3. 正确的驱动版本信息。
+4. 驱动模块参数需要解释说明。
+5. 如有配套工具一并提供。
+6. 声明 license 信息。
+7. 建议增加驱动与操作系统发行版耦合方式的规范,如直接检查 /etc/openEuler-release 文件或者其它作为技术路线的判断,不再与具体的发行版信息耦合。
+8. 在社区建仓开发时同步提供驱动安装指导。
+
+## 参考资料
+
+- [Kernel SIG | openEuler Kernel 补丁合入规范](https://mp.weixin.qq.com/s/rSH79v7btJfsdivC2mki1w)
+- [如何参与 openEuler 内核开发](https://mp.weixin.qq.com/s/a42a5VfayFeJgWitqbI8Qw)
+
diff --git "a/docs/zh/docs/Installation/\344\275\277\347\224\250kickstart\350\207\252\345\212\250\345\214\226\345\256\211\350\243\205.md" "b/docs/zh/docs/Installation/\344\275\277\347\224\250kickstart\350\207\252\345\212\250\345\214\226\345\256\211\350\243\205.md"
index 1d3c08e3709ef5da84d546ea8174d09640997c45..c13734d003a2b0874b845190cf5d479166dad5fa 100644
--- "a/docs/zh/docs/Installation/\344\275\277\347\224\250kickstart\350\207\252\345\212\250\345\214\226\345\256\211\350\243\205.md"
+++ "b/docs/zh/docs/Installation/\344\275\277\347\224\250kickstart\350\207\252\345\212\250\345\214\226\345\256\211\350\243\205.md"
@@ -79,7 +79,7 @@ TFTP(Trivial File Transfer Protocol,简单文件传输协议),该协议
使用kickstart进行openEuler系统的半自动化安装的环境要求如下:
- 物理机/虚拟机(虚拟机创建可参考对应厂商的资料)。包括使用kickstart工具进行自动化安装的计算机和被安装的计算机。
-- httpd:存放kickstart文件。
+- httpd:部署kickstart文件和系统安装文件。
- ISO: openEuler-22.03_LTS-aarch64-dvd.iso
### 操作步骤
@@ -198,7 +198,7 @@ TFTP(Trivial File Transfer Protocol,简单文件传输协议),该协议
使用kickstart进行openEuler系统的全自动化安装的环境要求如下:
- 物理机/虚拟机(虚拟机创建可参考对应厂商的资料)。包括使用kickstart工具进行自动化安装的计算机和被安装的计算机。
-- httpd:存放kickstart文件。
+- httpd:部署kickstart文件和系统安装文件。
- tftp:提供vmlinuz和initrd文件。
- dhcpd/pxe:提供DHCP服务。
- ISO:openEuler-22.03_LTS-aarch64-dvd.iso。
diff --git "a/docs/zh/docs/SecHarden/\345\270\220\346\210\267\345\217\243\344\273\244.md" "b/docs/zh/docs/SecHarden/\345\270\220\346\210\267\345\217\243\344\273\244.md"
index 9081d360a4ba23cfcb6a2ee6bc11143ab9b678bc..62ab72fc3b15064e746625115e7a4f3e80c8f074 100644
--- "a/docs/zh/docs/SecHarden/\345\270\220\346\210\267\345\217\243\344\273\244.md"
+++ "b/docs/zh/docs/SecHarden/\345\270\220\346\210\267\345\217\243\344\273\244.md"
@@ -183,7 +183,7 @@ pam\_pwquality.so和pam\_pwhistory.so的配置项请分别参见[表2](#table201
### 实现
-口令有效期的设置通过修改/etc/login.defs文件实现,加固项如[表7](#zh-cn_topic_0152100281_t77b5d0753721450c81911c18b74e82eb)所示。表中所有的加固项都在文件/etc/login.defs中。表中字段直接通过修改配置文件完成。
+口令有效期的设置通过修改/etc/login.defs文件实现,加固项如[表4](#zh-cn_topic_0152100281_t77b5d0753721450c81911c18b74e82eb)所示。表中所有的加固项都在文件/etc/login.defs中。表中字段直接通过修改配置文件完成。
**表 4** login.defs配置项说明所示
diff --git "a/docs/zh/docs/StratoVirt/\350\231\232\346\213\237\346\234\272\351\205\215\347\275\256.md" "b/docs/zh/docs/StratoVirt/\350\231\232\346\213\237\346\234\272\351\205\215\347\275\256.md"
index 81ea81f9707004e40400becc7fc4abd0b5537746..bdba95f1fa470cefef483f2079ec5685fa64c6bc 100644
--- "a/docs/zh/docs/StratoVirt/\350\231\232\346\213\237\346\234\272\351\205\215\347\275\256.md"
+++ "b/docs/zh/docs/StratoVirt/\350\231\232\346\213\237\346\234\272\351\205\215\347\275\256.md"
@@ -73,7 +73,7 @@ StratoVirt 能够运行的最小配置为:
| ---------------- | ----------------------------------------------- | -------------------------------------------------------------------------- |
| -name | *VMname* | 配置虚拟机名称(字符长度:1-255字符) |
| -kernel | /path/to/vmlinux.bin | 配置内核镜像 |
-| -append | console=ttyS0 root=/dev/vda reboot=k panic=1 rw | 配置内核命令行参数,标准虚拟化AArch64平台使用console=ttyAMA0,而不是ttyS0. |
+| -append | console=ttyS0 root=/dev/vda reboot=k panic=1 rw | 配置内核命令行参数,标准虚拟化X86_64平台默认使用console=ttyS0,AArch64平台默认使用console=ttyAMA0。在配置了virtio-console设备但是没有配置serial串口设备时,需要配置为console=hvc0(与架构平台无关) |
| -initrd | /path/to/initrd.img | 配置initrd文件 |
| -smp | [cpus=]个数 | 配置cpu个数,范围[1, 254] |
| -m | 内存大小MiB、内存大小GiB,默认单位MiB | 配置内存大小,范围[256MiB, 512GiB] |
@@ -502,8 +502,8 @@ Virtio RNG配置为Virtio PCI设备时,命令行参数如下:
- filename:在host上用于生成随机数的字符设备路径,例如/dev/random;
- period:限制随机数字符速率的定时周期,单位为毫秒;
- max-bytes:在period时间内字符设备生成随机数的最大字节数;
-- bus:Vritio RNG设备挂载的总线名称;
-- addr:Vritio RNG设备地址,参数格式为addr=[slot].[function],分别表示设备的slot号和function号,均使用十六进制表示,其中Virtio RNG设备的function号为0x0。
+- bus:Virtio RNG设备挂载的总线名称;
+- addr:Virtio RNG设备地址,参数格式为addr=[slot].[function],分别表示设备的slot号和function号,均使用十六进制表示,其中Virtio RNG设备的function号为0x0。
#### 注意事项
diff --git "a/docs/zh/docs/thirdparty_migration/HA\347\232\204\344\275\277\347\224\250\345\256\236\344\276\213.md" "b/docs/zh/docs/thirdparty_migration/HA\347\232\204\344\275\277\347\224\250\345\256\236\344\276\213.md"
new file mode 100644
index 0000000000000000000000000000000000000000..aa45aee64fefe8cc27e36631cb36f5fc04ebb0fc
--- /dev/null
+++ "b/docs/zh/docs/thirdparty_migration/HA\347\232\204\344\275\277\347\224\250\345\256\236\344\276\213.md"
@@ -0,0 +1,512 @@
+# HA使用实例
+
+本章介绍如何快速使用HA高可用集群,以及添加一个实例。若不了解怎么安装,请参考[HA的安装与部署文档](https://docs.openeuler.org/zh/docs/22.09/docs/thirdparty_migration/HA%E7%9A%84%E5%AE%89%E8%A3%85%E4%B8%8E%E9%83%A8%E7%BD%B2.html)。
+
+
+ - [快速使用指南](#快速使用指南)
+ - [登陆页面](#登陆页面)
+ - [主页面](#主页面)
+ - [导航栏](#导航栏)
+ - [顶部操作区](#顶部操作区)
+ - [资源节点列表区](#资源节点列表区)
+ - [节点操作浮动区](#节点操作浮动区)
+ - [首选项配置](#首选项配置)
+ - [添加资源](#添加资源)
+ - [添加普通资源](#添加普通资源)
+ - [添加组资源](#添加组资源)
+ - [添加克隆资源](#添加克隆资源)
+ - [编辑资源](#编辑资源)
+ - [设置资源关系](#设置资源关系)
+ - [高可用mysql实例配置](#高可用mysql实例配置)
+ - [配置虚拟IP](#配置虚拟ip)
+ - [配置NFS存储](#配置nfs存储)
+ - [配置mysql](#配置mysql)
+ - [添加上述资源为组资源](#添加上述资源为组资源)
+ - [仲裁设备配置](#仲裁设备配置)
+ - [安装仲裁所需软件包](#安装仲裁所需软件包)
+ - [修改主机名称及/etc/hosts文件](#修改主机名称及/etc/hosts文件)
+ - [配置仲裁设备并添加到集群](#配置仲裁设备并添加到集群)
+ - [配置仲裁设备](#配置仲裁设备)
+ - [关闭防火墙](#关闭防火墙)
+ - [进行身份认证](#进行身份认证)
+ - [将仲裁设备添加到集群](#将仲裁设备添加到集群)
+ - [检查仲裁设备的配置状态](#检查仲裁设备的配置状态)
+ - [管理仲裁设备服务](#管理仲裁设备服务)
+ - [管理集群中的仲裁设备](#管理集群中的仲裁设备)
+ - [更改仲裁设备设置](#更改仲裁设备设置)
+ - [删除仲裁设备](#删除仲裁设备)
+ - [销毁仲裁设备](#销毁仲裁设备)
+
+## 快速使用指南
+
+- 以下操作均以社区新开发的管理平台为例。
+
+### 登陆页面
+用户名为`hacluster`,密码为该用户在主机上设置的密码。
+
+
+
+### 主页面
+登录系统后显示主页面,主页面由四部分组成:侧边导航栏、顶部操作区、资源节点列表区以及节点操作浮动区。
+
+以下将详细介绍这四部分的特点与使用方法。
+
+
+
+#### 导航栏
+侧边导航栏由两部分组成:高可用集群软件名称和 logo 以及系统导航。系统导航由三项组成:【系统】、【集群配置】和【工具】。【系统】是默认选项,也是主页面的对应项,主要展示系统中所有资源的相关信息以及操作入口;【集群配置】下设【首选项配置】和【心跳配置】两项;【工具】下设【日志下载】和【集群快捷操作】两项,点击后以弹出框的形式出现。
+
+#### 顶部操作区
+登录用户是静态显示,鼠标滑过用户图标,出现操作菜单项,包括【刷新设置】和【退出登录】两项。点击【刷新设置】,弹出【刷新设置】对话框,包含【刷新设置】选项,可以设置系统的自动刷新模式,包括【不自动刷新】、【每 5 秒刷新】和【每 10 秒刷新】三种选择,默认选择【不自动刷新】。点击【退出登录】即可注销本次登录,系统将自动跳到登录页面,此时,如果希望继续访问系统,则需要重新进行登录。
+
+
+
+#### 资源节点列表区
+资源节点列表集中展现系统中所有资源的【资源名】、【状态】、【资源类型】、【服务】、【运行节点】等资源信息,以及系统中所有的节点和节点的运行情况等节点信息。同时提供资源的【添加】、【编辑】、【启动】、【停止】、【清理】、【迁移】、【回迁】、【删除】和【关系】操作。
+
+#### 节点操作浮动区
+节点操作浮动区域默认是收起的状态,每当点击资源节点列表表头中的节点时,右侧会弹出节点操作扩展区域,如图所示,该区域由收起按钮、节点名称、停止和备用四个部分组成,提供节点的【停止】和【备用】操作。点击区域左上角的箭头,该区域收起。
+
+### 首选项配置
+以下操作均可用命令行配置,现只做简单示例,若想使用更多命令可以使用``pcs --help``进行查询。
+
+```
+# pcs property set stonith-enabled=false
+# pcs property set no-quorum-policy=ignore
+```
+``pcs property``查看全部设置
+
+
+
+- 点击侧边导航栏中的【首选项配置】按钮,弹出【首选项配置】对话框。将No Quorum Policy和Stonith Enabled由默认状态改为如下对应状态;修改完成后,点击【确定】按钮完成配置。
+
+
+
+#### 添加资源
+##### 添加普通资源
+单击【添加普通资源】,弹出【创建资源】对话框,其中资源的所有必填配置项均在【基本】页面内,选择【基本】页面内的【资源类型】后会进一步给出该类资源的其他必填配置项以及选填配置项。填写资源配置信息时,对话框右侧会出现灰色文字区域,对当前的配置项进行解释说明。全部必填项配置完毕后,单击【确定】按钮即可创建普通资源,单击【取消】按钮,取消本次添加动作。【实例属性】、【元属性】或者【操作属性】页面中的选填配置项为选填项,不配置不会影响资源的创建过程,可以根据场景需要可选择修改,否则将按照系统默认值处理。
+
+下面以apache为例,添加apache资源
+```
+# pcs resource create httpd ocf:heartbeat:apache
+```
+查看资源运行状态
+```
+# pcs status
+```
+
+
+
+- 添加apache资源
+
+
+- 若回显为如下,则资源添加成功
+
+
+- 资源创建成功并启动,运行于其中一个节点上,例如ha1;成功访问apache界面。
+
+
+
+##### 添加组资源
+添加组资源时,集群中需要至少存在一个普通资源。鼠标点击【添加组资源】,弹出【创建资源】对话框。【基本】页面内均为必填项,填写完毕后,点击【确定】按钮,即可完成资源的添加,点击【取消】按钮,取消本次添加动作。
+
+- **注:组资源的启动是按照子资源的顺序启动的,所以选择子资源时需要注意按照顺序选择。**
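+
+组资源也可以通过命令行创建。下面给出一个假设的示例,将前文已创建的httpd资源加入名为webgroup的资源组(组名仅为示意):
+```
+# pcs resource group add webgroup httpd
+```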
+
+
+
+若回显为如下,则资源添加成功
+
+
+
+##### 添加克隆资源
+鼠标点击【添加克隆资源】,弹出【创建资源】对话框。【基本】页面内填写克隆对象,资源名称会自动生成,填写完毕后,点击【确定】按钮,即可完成资源的添加,点击【取消】按钮,取消本次添加动作。
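+
+克隆资源对应的命令行操作可参考如下示例(以前文创建的httpd资源为例,仅作示意):
+```
+# pcs resource clone httpd
+```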
+
+
+
+若回显为如下,则资源添加成功
+
+
+#### 编辑资源
+- 启动资源:资源节点列表中选中一个目标资源,要求:该资源处于非运行状态。对该资源执行启动动作。
+- 停止资源:资源节点列表中选中一个目标资源,要求:该资源处于运行状态。对该资源执行停止操作。
+- 清理资源:资源节点列表中选中一个目标资源,对该资源执行清理操作。
+- 迁移资源:资源节点列表中选中一个目标资源,要求:该资源为处于运行状态的普通资源或者组资源,执行迁移操作可以将资源迁移到指定节点上运行。
+- 回迁资源:资源节点列表中选中一个目标资源,要求:该资源已经完成迁移动作,执行回迁操作,可以清除该资源的迁移设置,资源重新迁回到原来的节点上运行。点击按钮后,列表中该资源项的变化状态与启动资源时一致。
+- 删除资源:资源节点列表中选中一个目标资源,对该资源执行删除操作。
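+
+上述编辑操作也可以通过命令行完成,以下给出一组示例(假设资源名为httpd,目标节点为ha2,仅作参考):
+```
+# pcs resource enable httpd       # 启动资源
+# pcs resource disable httpd      # 停止资源
+# pcs resource cleanup httpd      # 清理资源的失败记录
+# pcs resource move httpd ha2     # 将资源迁移到ha2节点
+# pcs resource clear httpd        # 清除迁移设置,允许资源回迁
+# pcs resource delete httpd       # 删除资源
+```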
+
+#### 设置资源关系
+资源关系即为目标资源设定限制条件,资源的限制条件分为三种:资源位置、资源协同和资源顺序。
+- 资源位置:设置集群中的节点对于该资源的运行级别,由此确定启动或者切换时资源在哪个节点上运行,运行级别按照从高到低的顺序依次为:Primary Node、Standby 1。
+- 资源协同:设置目标资源与集群中的其他资源是否运行在同一节点上,同节点资源表示该资源与目标资源必须运行在相同节点上,互斥节点资源表示该资源与目标资源不能运行在相同的节点上。
+- 资源顺序:设置目标资源与集群中的其他资源启动时的先后顺序,前置资源是指目标资源运行之前,该资源必须已经运行;后置资源是指目标资源运行之后,该资源才能运行。
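+
+上述三类资源关系同样可以通过命令行设置,以下以假设的资源名vip、httpd和节点ha1为例(请按实际资源调整):
+```
+# pcs constraint location httpd prefers ha1=100          # 资源位置:httpd优先运行在ha1节点
+# pcs constraint colocation add vip with httpd INFINITY  # 资源协同:vip与httpd运行在同一节点
+# pcs constraint order vip then httpd                    # 资源顺序:先启动vip,再启动httpd
+```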
+
+## 高可用mysql实例配置
+- 先单独配置三个普通资源,待成功后添加为组资源。
+### 配置虚拟IP
+在首页中点击添加-->添加普通资源,并按如下进行配置。
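+
+若使用命令行创建虚拟IP资源,可参考如下示例(资源名vip、IP地址与掩码均为假设值,请按实际环境替换):
+```
+# pcs resource create vip ocf:heartbeat:IPaddr2 ip=10.1.167.200 cidr_netmask=24
+```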
+
+
+
+- 资源创建成功并启动,运行于其中一个节点上,例如ha1;可以ping通并连接,登录后可正常执行各种操作;资源切换到ha2运行;能够正常访问。
+- 若回显为如下,则资源添加成功
+
+
+
+### 配置NFS存储
+- 另外找一台机器作为nfs服务端进行配置
+
+安装软件包
+
+```
+# yum install -y nfs-utils rpcbind
+```
+关闭防火墙
+```
+# systemctl stop firewalld && systemctl disable firewalld
+```
+修改/etc/selinux/config文件中SELINUX状态为disabled
+```
+SELINUX=disabled
+```
+启动服务
+```
+# systemctl start rpcbind && systemctl enable rpcbind
+# systemctl start nfs-server && systemctl enable nfs-server
+```
+服务端创建一个共享目录
+```
+# mkdir -p /test
+```
+修改NFS配置文件,在/etc/exports中写入以下共享配置
+```
+# vim /etc/exports
+/test *(rw,no_root_squash)
+```
+重新加载服务
+```
+# systemctl reload nfs-server
+```
+
+客户端安装软件包。先安装mysql(mariadb-server),以便后续将NFS挂载到mysql的数据路径/var/lib/mysql下
+```
+# yum install -y nfs-utils mariadb-server
+```
+在首页中点击添加-->添加普通资源,并按如下配置NFS资源。
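+
+若使用命令行创建NFS文件系统资源,可参考如下示例(资源名nfs为假设值,192.168.1.100代表上文配置的NFS服务端地址,请按实际环境替换):
+```
+# pcs resource create nfs ocf:heartbeat:Filesystem device="192.168.1.100:/test" directory="/var/lib/mysql" fstype="nfs"
+```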
+
+
+
+- 资源创建成功并启动,运行于其中一个节点上,例如ha1;nfs成功挂载到/var/lib/mysql路径下。资源切换到ha2运行;nfs从ha1节点取消挂载,并自动在ha2节点上挂载成功。
+- 若回显为如下,则资源添加成功
+
+
+
+### 配置mysql
+在首页中点击添加-->添加普通资源,并按如下配置mysql资源。
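+
+若使用命令行创建mysql资源,可参考如下示例(资源名mysql为假设值,此处使用ocf:heartbeat:mysql资源代理,具体参数请按实际数据库配置补充):
+```
+# pcs resource create mysql ocf:heartbeat:mysql
+```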
+
+
+
+- 若回显为如下,则资源添加成功
+
+
+
+### 添加上述资源为组资源
+- 按资源启动顺序添加三个资源
+
+在首页中点击添加-->添加组资源,并按如下配置组资源。
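+
+若使用命令行创建组资源,可参考如下示例(组名mysql_group为假设值,三个子资源按启动顺序依次为vip、nfs、mysql):
+```
+# pcs resource group add mysql_group vip nfs mysql
+```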
+
+
+
+- 组资源创建成功并启动,若回显与上述三个普通资源成功现象一致,则资源添加成功
+
+
+
+- 将ha1节点设为备用后,组资源成功迁移到ha2节点,运行正常
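+
+将节点设为备用以及恢复节点的命令行示例如下(以本示例中的ha1节点为例):
+```
+# pcs node standby ha1
+# pcs node unstandby ha1
+```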
+
+
+
+## 仲裁设备配置
+选择一台新的机器作为仲裁设备。当前软件包不支持使用systemctl管理仲裁设备服务,启停服务需使用pcs命令操作,以下附上详细操作步骤。
+
+### 安装仲裁所需软件包
+- 在现有集群的节点上安装corosync-qdevice
+```
+[root@node1:~]# yum install corosync-qdevice
+[root@node2:~]# yum install corosync-qdevice
+```
+- 在仲裁设备主机上安装pcs和corosync-qnetd
+```
+[root@qdevice:~]# yum install pcs corosync-qnetd
+```
+- 在仲裁设备主机上启动pcsd服务并在系统启动时启用pcsd
+```
+[root@qdevice:~]# systemctl start pcsd.service
+[root@qdevice:~]# systemctl enable pcsd.service
+```
+
+### 修改主机名称及/etc/hosts文件
+**注:三台主机均需要进行以下操作,现以其中一台为例。**
+
+在使用仲裁功能前,需要修改主机名、将所有主机名写入/etc/hosts文件,并设置hacluster用户密码。
+- 修改主机名
+```
+hostnamectl set-hostname node1
+```
+- 编辑/etc/hosts文件,写入各主机的IP与主机名(以下IP仅为示例,请按实际环境填写)
+```
+10.1.167.104 ha1
+10.1.167.105 ha2
+10.1.167.106 qdevice
+```
+- 设置hacluster用户密码
+```
+passwd hacluster
+```
+
+### 配置仲裁设备并添加到集群
+以下过程为配置仲裁设备并将该仲裁设备添加到集群中。
+- 用于仲裁设备的节点是qdevice
+- 仲裁设备的model为net
+- 集群节点是node1和node2
+
+#### 配置仲裁设备
+在将用于托管仲裁设备的节点上,使用以下命令配置仲裁设备。此命令配置并启动model为net的仲裁设备,并将其配置为在引导时启动。
+```
+[root@qdevice:~]# pcs qdevice setup model net --enable --start
+Quorum device 'net' initialized
+quorum device enabled
+Starting quorum device...
+quorum device started
+```
+配置仲裁设备后,可以查看其状态。当前状态表明corosync-qnetd守护程序正在运行,此时没有客户端连接到它。使用--full选项可以展示详细的输出内容。
+```
+[root@qdevice:~]# pcs qdevice status net --full
+QNetd address: *:5403
+TLS: Supported (client certificate required)
+Connected clients: 0
+Connected clusters: 0
+Maximum send/receive size: 32768/32768 bytes
+```
+
+#### 关闭防火墙
+```
+systemctl stop firewalld && systemctl disable firewalld
+```
+- 修改 /etc/selinux/config 文件中SELINUX状态为disabled
+```
+SELINUX=disabled
+```
+
+#### 进行身份认证
+从现有集群中的一个节点,对托管仲裁设备的主机上的hacluster用户进行身份验证。此操作使集群节点上的pcs能够连接到仲裁设备主机上的pcs,但不会使仲裁设备主机上的pcs能够连接到集群节点上的pcs。
+```
+[root@node1:~] # pcs host auth qdevice
+Username: hacluster
+Password:
+qdevice: Authorized
+```
+
+#### 将仲裁设备添加到集群
+在添加仲裁设备之前,可以通过 pcs quorum config 命令查看仲裁设备的当前配置,以便之后进行比较。
+```
+[root@node1:~]# pcs quorum config
+Options:
+```
+通过 pcs quorum status 命令查看仲裁的当前状态,输出结果表明集群尚未使用仲裁设备,并且每个节点的Qdevice成员身份状态为NR(未注册)。
+```
+[root@node1:~]# pcs quorum status
+Quorum information
+------------------
+Date: Wed Jun 29 13:15:36 2016
+Quorum provider: corosync_votequorum
+Nodes: 2
+Node ID: 1
+Ring ID: 1/8272
+Quorate: Yes
+
+Votequorum information
+----------------------
+Expected votes: 2
+Highest expected: 2
+Total votes: 2
+Quorum: 1
+Flags: 2Node Quorate
+
+Membership information
+----------------------
+ Nodeid Votes Qdevice Name
+ 1 1 NR node1 (local)
+ 2 1 NR node2
+```
+以下命令将之前创建的仲裁设备添加到集群中。注意,不能在一个集群中同时使用多个仲裁设备;但是,一个仲裁设备可以同时被多个集群使用。此示例将仲裁设备的算法配置为ffsplit。
+```
+[root@node1:~]# pcs quorum device add model net host=qdevice algorithm=ffsplit
+Setting up qdevice certificates on nodes...
+node2: Succeeded
+node1: Succeeded
+Enabling corosync-qdevice...
+node1: corosync-qdevice enabled
+node2: corosync-qdevice enabled
+Sending updated corosync.conf to nodes...
+node1: Succeeded
+node2: Succeeded
+Corosync configuration reloaded
+Starting corosync-qdevice...
+node1: corosync-qdevice started
+node2: corosync-qdevice started
+```
+
+#### 检查仲裁设备的配置状态
+在集群端,执行以下命令来查看配置的变化情况。通过 pcs quorum config 命令显示已配置的仲裁设备信息。
+```
+[root@node1:~]# pcs quorum config
+Options:
+Device:
+ Model: net
+ algorithm: ffsplit
+ host: qdevice
+```
+pcs quorum status命令显示仲裁运行时状态,表明仲裁设备正在使用中。每个集群节点成员信息中Qdevice状态值的含义如下:
+- A/NA — 仲裁设备是否存活,表示qdevice与corosync之间是否有心跳。该值应始终表明仲裁设备处于活动状态。
+- V/NV — 当仲裁设备为某个节点投票时,该节点显示为V。在此示例中,两个节点均为V,因为它们可以相互通信。如果将集群拆分为两个单节点集群,则其中一个节点将为V,另一个节点将为NV。
+- MW/NMW — 内部仲裁设备标志已设置(MW)或未设置(NMW)。默认情况下未设置该标志,值为NMW。
+```
+[root@node1:~]# pcs quorum status
+Quorum information
+------------------
+Date: Wed Jun 29 13:17:02 2016
+Quorum provider: corosync_votequorum
+Nodes: 2
+Node ID: 1
+Ring ID: 1/8272
+Quorate: Yes
+
+Votequorum information
+----------------------
+Expected votes: 3
+Highest expected: 3
+Total votes: 3
+Quorum: 2
+Flags: Quorate Qdevice
+
+Membership information
+----------------------
+ Nodeid Votes Qdevice Name
+ 1 1 A,V,NMW node1 (local)
+ 2 1 A,V,NMW node2
+ 0 1 Qdevice
+```
+通过pcs quorum device status命令显示仲裁设备运行时状态。
+```
+[root@node1:~]# pcs quorum device status
+Qdevice information
+-------------------
+Model: Net
+Node ID: 1
+Configured node list:
+ 0 Node ID = 1
+ 1 Node ID = 2
+Membership node list: 1, 2
+
+Qdevice-net information
+----------------------
+Cluster name: mycluster
+QNetd host: qdevice:5403
+Algorithm: ffsplit
+Tie-breaker: Node with lowest node ID
+State: Connected
+```
+在仲裁设备端,执行以下命令,显示corosync-qnetd守护程序的状态。
+```
+[root@qdevice:~]# pcs qdevice status net --full
+QNetd address: *:5403
+TLS: Supported (client certificate required)
+Connected clients: 2
+Connected clusters: 1
+Maximum send/receive size: 32768/32768 bytes
+Cluster "mycluster":
+ Algorithm: ffsplit
+ Tie-breaker: Node with lowest node ID
+ Node ID 2:
+ Client address: ::ffff:192.168.122.122:50028
+ HB interval: 8000ms
+ Configured node list: 1, 2
+ Ring ID: 1.2050
+ Membership node list: 1, 2
+ TLS active: Yes (client certificate verified)
+ Vote: ACK (ACK)
+ Node ID 1:
+ Client address: ::ffff:192.168.122.121:48786
+ HB interval: 8000ms
+ Configured node list: 1, 2
+ Ring ID: 1.2050
+ Membership node list: 1, 2
+ TLS active: Yes (client certificate verified)
+ Vote: ACK (ACK)
+```
+
+### 管理仲裁设备服务
+PCS 提供了在本地主机上管理仲裁设备服务(corosync-qnetd)的能力,如以下示例命令所示。请注意,这些命令仅影响corosync-qnetd服务。
+```
+[root@qdevice:~]# pcs qdevice start net
+[root@qdevice:~]# pcs qdevice stop net
+[root@qdevice:~]# pcs qdevice enable net
+[root@qdevice:~]# pcs qdevice disable net
+[root@qdevice:~]# pcs qdevice kill net
+```
+执行 pcs qdevice stop net 命令后,再查询仲裁设备状态会由成功变为失败;再次执行 pcs qdevice start net 后,状态恢复为成功。
+
+
+### 管理集群中的仲裁设备
+可以使用多种pcs命令来更改集群中的仲裁设备设置、禁用仲裁设备和删除仲裁设备。
+
+#### 更改仲裁设备设置
+**注意:若要更改 quorum device model net 的 host 选项,请使用 pcs quorum device remove 和 pcs quorum device add 命令重新进行配置,除非旧主机和新主机是同一台机器。**
+
+- 以下命令将仲裁设备算法更改为lms
+```
+[root@node1:~]# pcs quorum device update model algorithm=lms
+Sending updated corosync.conf to nodes...
+node1: Succeeded
+node2: Succeeded
+Corosync configuration reloaded
+Reloading qdevice configuration on nodes...
+node1: corosync-qdevice stopped
+node2: corosync-qdevice stopped
+node1: corosync-qdevice started
+node2: corosync-qdevice started
+```
+
+#### 删除仲裁设备
+- 以下命令删除集群节点上配置的仲裁设备
+```
+[root@node1:~]# pcs quorum device remove
+Sending updated corosync.conf to nodes...
+node1: Succeeded
+node2: Succeeded
+Corosync configuration reloaded
+Disabling corosync-qdevice...
+node1: corosync-qdevice disabled
+node2: corosync-qdevice disabled
+Stopping corosync-qdevice...
+node1: corosync-qdevice stopped
+node2: corosync-qdevice stopped
+Removing qdevice certificates from nodes...
+node1: Succeeded
+node2: Succeeded
+```
+删除仲裁设备后,在显示仲裁设备状态时应该会看到以下错误消息。
+```
+[root@node1:~]# pcs quorum device status
+Error: Unable to get quorum status: corosync-qdevice-tool: Can't connect to QDevice socket (is QDevice running?): No such file or directory
+```
+
+#### 销毁仲裁设备
+- 以下命令禁用和停止仲裁设备主机上的仲裁设备并删除其所有配置文件
+```
+[root@qdevice:~]# pcs qdevice destroy net
+Stopping quorum device...
+quorum device stopped
+quorum device disabled
+Quorum device 'net' configuration files removed
+```
\ No newline at end of file
diff --git "a/docs/zh/docs/thirdparty_migration/HA\347\232\204\345\256\211\350\243\205\344\270\216\351\203\250\347\275\262.md" "b/docs/zh/docs/thirdparty_migration/HA\347\232\204\345\256\211\350\243\205\344\270\216\351\203\250\347\275\262.md"
new file mode 100644
index 0000000000000000000000000000000000000000..30bff920ad53457cef4bd4a9aebd62d168042076
--- /dev/null
+++ "b/docs/zh/docs/thirdparty_migration/HA\347\232\204\345\256\211\350\243\205\344\270\216\351\203\250\347\275\262.md"
@@ -0,0 +1,185 @@
+# HA的安装与部署
+
+本章介绍如何安装和部署HA高可用集群。
+
+
+- [HA的安装与部署](#ha的安装与部署)
+ - [安装与部署](#安装与部署)
+ - [修改主机名称及/etc/hosts文件](#修改主机名称及etchosts文件)
+ - [配置yum源](#配置yum源)
+ - [安装HA软件包组件](#安装ha软件包组件)
+ - [设置hacluster用户密码](#设置hacluster用户密码)
+ - [修改`/etc/corosync/corosync.conf`文件](#修改etccorosynccorosyncconf文件)
+ - [管理服务](#管理服务)
+ - [关闭防火墙](#关闭防火墙)
+ - [管理pcs服务](#管理pcs服务)
+ - [管理pacemaker服务](#管理pacemaker服务)
+ - [管理corosync服务](#管理corosync服务)
+ - [节点鉴权](#节点鉴权)
+ - [访问前端管理平台](#访问前端管理平台)
+
+
+## 安装与部署
+- 环境准备:需要至少两台安装了openEuler 20.03 LTS SP2的物理机/虚拟机(现以两台为例),安装方法参考《openEuler 20.03 LTS SP2 安装指南》。
+
+### 修改主机名称及/etc/hosts文件
+- **注:两台主机均需要进行以下操作,现以其中一台为例。**
+
+在使用HA软件之前,需要确认修改主机名并将所有主机名写入/etc/hosts文件中。
+- 修改主机名
+```
+# hostnamectl set-hostname ha1
+```
+
+- 编辑`/etc/hosts`文件并写入以下字段
+```
+172.30.30.65 ha1
+172.30.30.66 ha2
+```
+
+### 配置yum源
+成功安装系统后,会默认配置好yum源,配置文件位于`/etc/yum.repos.d/openEuler.repo`,HA软件包会用到以下源:
+```
+[OS]
+name=OS
+baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/
+enabled=1
+gpgcheck=1
+gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
+
+[everything]
+name=everything
+baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/everything/$basearch/
+enabled=1
+gpgcheck=1
+gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/everything/$basearch/RPM-GPG-KEY-openEuler
+
+[EPOL]
+name=EPOL
+baseurl=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/EPOL/$basearch/
+enabled=1
+gpgcheck=1
+gpgkey=http://repo.openeuler.org/openEuler-20.03-LTS-SP2/OS/$basearch/RPM-GPG-KEY-openEuler
+```
+
+### 安装HA软件包组件
+```
+# yum install -y corosync pacemaker pcs fence-agents fence-virt corosync-qdevice sbd drbd drbd-utils
+```
+
+### 设置hacluster用户密码
+```
+# passwd hacluster
+```
+
+### 修改`/etc/corosync/corosync.conf`文件
+```
+totem {
+ version: 2
+ cluster_name: hacluster
+ crypto_cipher: none
+ crypto_hash: none
+}
+logging {
+ fileline: off
+ to_stderr: yes
+ to_logfile: yes
+ logfile: /var/log/cluster/corosync.log
+ to_syslog: yes
+ debug: on
+ logger_subsys {
+ subsys: QUORUM
+ debug: on
+ }
+}
+quorum {
+ provider: corosync_votequorum
+ expected_votes: 2
+ two_node: 1
+ }
+nodelist {
+ node {
+ name: ha1
+ nodeid: 1
+ ring0_addr: 172.30.30.65
+ }
+ node {
+ name: ha2
+ nodeid: 2
+ ring0_addr: 172.30.30.66
+ }
+ }
+```
+### 管理服务
+#### 关闭防火墙
+```
+# systemctl stop firewalld
+```
+修改/etc/selinux/config文件中SELINUX状态为disabled
+```
+SELINUX=disabled
+```
+
+#### 管理pcs服务
+- 启动pcs服务:
+```
+# systemctl start pcsd
+```
+
+- 查询pcs服务状态:
+```
+# systemctl status pcsd
+```
+
+若回显为如下,则服务启动成功。
+
+
+
+#### 管理pacemaker服务
+- 启动pacemaker服务:
+```
+# systemctl start pacemaker
+```
+
+- 查询pacemaker服务状态:
+```
+# systemctl status pacemaker
+```
+
+若回显为如下,则服务启动成功。
+
+
+
+#### 管理corosync服务
+- 启动corosync服务:
+```
+# systemctl start corosync
+```
+
+- 查询corosync服务状态:
+```
+# systemctl status corosync
+```
+
+若回显为如下,则服务启动成功。
+
+
+
+### 节点鉴权
+- **注:一个节点上执行即可**
+```
+# pcs host auth ha1 ha2
+```
+
+### 访问前端管理平台
+上述服务启动成功后,打开浏览器(建议使用Chrome或Firefox),在浏览器地址栏中输入`https://localhost:2224`即可访问。
+- 此界面为原生管理平台
+
+
+
+若需安装社区新开发的管理平台,请参考文档`https://gitee.com/openeuler/ha-api/blob/master/docs/build.md`。
+- 下面为社区新开发的管理平台
+
+
+
+- 下一章将介绍如何快速使用HA高可用集群,以及添加一个实例。请参考HA的使用实例。
\ No newline at end of file
diff --git a/docs/zh/docs/thirdparty_migration/OpenStack-train.md b/docs/zh/docs/thirdparty_migration/OpenStack-train.md
index 7c24f41d45e5b641c0552a4ae85cb6b193f9ef51..79294fbd6fde049f7dfc9ae029198939b0a396be 100644
--- a/docs/zh/docs/thirdparty_migration/OpenStack-train.md
+++ b/docs/zh/docs/thirdparty_migration/OpenStack-train.md
@@ -70,7 +70,7 @@ OpenStack 支持多种形态部署,此文档支持`ALL in One`以及`Distribut
### 环境配置
-1. 启动OpenStack Train yum源
+1. 安装和启用OpenStack Train yum源
```shell
yum update
@@ -176,7 +176,7 @@ OpenStack 支持多种形态部署,此文档支持`ALL in One`以及`Distribut
**替换 `RABBIT_PASS`,为 OpenStack 用户设置密码**
-4. 设置openstack用户权限,允许进行配置、写、读:
+4. 设置openstack用户权限,允许进行配置、写、读。
```shell
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
diff --git a/docs/zh/docs/thirdparty_migration/OpenStack-wallaby.md b/docs/zh/docs/thirdparty_migration/OpenStack-wallaby.md
index 4a0a41dd021a30fba587e9868627459854fcfa2e..9e65331fefac12f767c7eeb208e9dbd02dce7d95 100644
--- a/docs/zh/docs/thirdparty_migration/OpenStack-wallaby.md
+++ b/docs/zh/docs/thirdparty_migration/OpenStack-wallaby.md
@@ -2212,7 +2212,7 @@ chown -R ipa.ipa /etc/ironic_python_agent/
Kolla为OpenStack服务提供生产环境可用的容器化部署的功能。openEuler 22.03 LTS中引入了Kolla和Kolla-ansible服务。
-Kolla的安装十分简单,只需要安装对应的RPM包即可
+Kolla的安装十分简单,只需要安装对应的RPM包即可。
```
yum install openstack-kolla openstack-kolla-ansible
diff --git a/docs/zh/docs/thirdparty_migration/ha.md b/docs/zh/docs/thirdparty_migration/ha.md
new file mode 100644
index 0000000000000000000000000000000000000000..9bd53cb9841fecb2cc3f1950d9180598fbc2964b
--- /dev/null
+++ b/docs/zh/docs/thirdparty_migration/ha.md
@@ -0,0 +1,3 @@
+# HA 用户指南
+
+本节主要描述HA的安装和使用。
\ No newline at end of file
diff --git a/docs/zh/menu/index.md b/docs/zh/menu/index.md
index 3d7f6b188bb39e0f12d537b58a9a88c3834a9c4d..27226a697c1474f7fa44a273facc6a5d8ef329ff 100644
--- a/docs/zh/menu/index.md
+++ b/docs/zh/menu/index.md
@@ -157,6 +157,7 @@ headless: true
- [构建RPM包]({{< relref "./docs/ApplicationDev/构建RPM包.md" >}})
- [FAQ]({{< relref "./docs/ApplicationDev/FAQ.md" >}})
- [安装obs工具]({{< relref "./docs/ApplicationDev/安装obs工具.md" >}})
+- [openEuler驱动开发规范]({{< relref "./docs/Driver_Development_Specifications/openEuler驱动开发规范.md" >}})
- [secGear开发指南]({{< relref "./docs/secGear/secGear.md" >}})
- [认识secGear]({{< relref "./docs/secGear/认识secGear.md" >}})
- [安装secGear]({{< relref "./docs/secGear/安装secGear.md" >}})
@@ -182,9 +183,9 @@ headless: true
- [第三方软件安装指南]({{< relref "./docs/thirdparty_migration/thidrparty.md" >}})
- [OpenStack-Wallaby部署指南]({{< relref "./docs/thirdparty_migration/OpenStack-wallaby.md" >}})
- [OpenStack-Train部署指南]({{< relref "./docs/thirdparty_migration/OpenStack-train.md" >}})
-- [HA 用户指南]({{< relref "./docs/desktop/ha.md" >}})
- - [部署 HA]({{< relref "./docs/desktop/installha.md" >}})
- - [HA 使用实例]({{< relref "./docs/desktop/usecase.md" >}})
+ - [HA 用户指南]({{< relref "./docs/thirdparty_migration/ha.md" >}})
+ - [部署 HA]({{< relref "./docs/thirdparty_migration/HA的安装与部署.md" >}})
+ - [HA 使用实例]({{< relref "./docs/thirdparty_migration/HA的使用实例.md" >}})
- [桌面环境用户指南]({{< relref "./docs/desktop/desktop.md" >}})
- [UKUI]({{< relref "./docs/desktop/ukui.md" >}})
- [安装 UKUI]({{< relref "./docs/desktop/安装UKUI.md" >}})
|