diff --git a/docs/en/PyTorch Installation Guide/PyTorch Installation Guide.md b/docs/en/PyTorch Installation Guide/PyTorch Installation Guide.md
index 28542efee4d9c8dda4afe59ab9c20562dc31699c..333f7c8924c59786f9e714bc027d73d1fbed8d28 100644
--- a/docs/en/PyTorch Installation Guide/PyTorch Installation Guide.md
+++ b/docs/en/PyTorch Installation Guide/PyTorch Installation Guide.md
@@ -8,7 +8,7 @@
- [References](#referencesmd)
- [Installing CMake](#installing-cmakemd)
- [How Do I Install GCC 7.3.0?](#how-do-i-install-gcc-7-3-0md)
- - [What Do I Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?](#what-do-i-do-if-torch-1-5-0xxxx-and-torchvision-do-not-match-when-torch--whl-is-installedmd)
+ - [What to Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?](#what-to-do-if-torch-1-5-0xxxx-and-torchvision-do-not-match-when-torch--whl-is-installedmd)
Overview
When setting up the environment for PyTorch model development and running, you can manually build and install the modules adapted to the PyTorch framework on a server.
@@ -33,10 +33,11 @@ When setting up the environment for PyTorch model development and running, you c
#### Prerequisites
-- The development or operating environment of CANN has been installed. For details, see the _CANN Software Installation Guide_.
+- The development or operating environment of CANN has been installed. For details, see the _CANN Software Installation Guide_.
- CMake 3.12.0 or later has been installed. For details about how to install CMake, see [Installing CMake](#installing-cmakemd).
- GCC 7.3.0 or later has been installed. For details about how to install and use GCC 7.3.0, see [How Do I Install GCC 7.3.0?](#how-do-i-install-gcc-7-3-0md).
-- Python 3.7.5 or 3.8 has been installed.
+- Python 3.7.5, 3.8, or 3.9 has been installed.
+- Note that PyTorch 1.5 does not support build and installation with Python 3.9; only PyTorch 1.8.1 does.
- The Patch and Git tools have been installed in the environment. To install the tools for Ubuntu and CentOS, run the following commands:
- Ubuntu
@@ -70,10 +71,13 @@ When setting up the environment for PyTorch model development and running, you c
3. Obtain the PyTorch source code.
- 1. Run the following command to obtain the PyTorch source code adapted to Ascend AI Processors:
+ 1. Run the following command to obtain the PyTorch source code adapted to Ascend AI Processors and switch to the required branch:
```
git clone https://gitee.com/ascend/pytorch.git
+ # By default, the master branch is used. If another branch is required, run the git checkout command to switch to it.
+ # git checkout -b 2.0.3.tr5 remotes/origin/2.0.3.tr5
+
```
The directory structure of the downloaded source code is as follows:
@@ -106,7 +110,7 @@ When setting up the environment for PyTorch model development and running, you c
git clone -b v1.8.1 --depth=1 https://github.com/pytorch/pytorch.git
```
- 3. Run the following commands to go to the native PyTorch code directory **pytorch** and obtain the PyTorch passive dependency code:
+ 3. Go to the native PyTorch code directory **pytorch** and obtain the PyTorch passive dependency code.
```
cd pytorch
@@ -143,6 +147,9 @@ When setting up the environment for PyTorch model development and running, you c
bash build.sh --python=3.7
or
bash build.sh --python=3.8
+ or
+ bash build.sh --python=3.9 # PyTorch 1.5 does not support build and installation using Python 3.9.
+
```
Specify the Python version in the environment for build. The generated binary package is stored in the current dist directory **pytorch/pytorch/dist**.
@@ -179,7 +186,7 @@ After the software packages are installed, configure environment variables to us
export HCCL_WHITELIST_DISABLE=1 # Disable the HCCL trustlist.
# Scenario 2: Multi-node scenario
export HCCL_WHITELIST_DISABLE=1 # Disable the HCCL trustlist.
- export HCCL_IF_IP="1.1.1.1" # 1.1.1.1 is the NIC IP address of the host. Change it based on the site requirements. Ensure that the NIC IP addresses in use can communicate with each other in the cluster.
+ export HCCL_IF_IP="1.1.1.1" # Replace 1.1.1.1 with the actual NIC IP address of the host. Ensure that the NIC IP addresses in use can communicate with each other in the cluster.
```
3. \(Optional\) Configure function or performance environment variables in the NPU scenario. The variables are disabled by default.
@@ -338,7 +345,7 @@ After the software packages are installed, configure environment variables to us
apex
│ ├─patch # Directory of the patch adapted to Ascend AI Processors
│ ├─npu.patch
- │ ├─scripts # Build and create a directory.
+ │ ├─scripts # Directory for build and generation scripts
│ ├─gen.sh
│ ├─src # Source code directory
│ ├─tests # Directory for storing test cases
@@ -358,7 +365,7 @@ After the software packages are installed, configure environment variables to us
│ ├─apex # Directory for storing the native Apex code
│ ├─patch # Directory of the patch adapted to Ascend AI Processors
│ ├─npu.patch
- │ ├─scripts # Build and create a directory.
+ │ ├─scripts # Directory for build and generation scripts
│ ├─gen.sh
│ ├─src # Source code directory
│ ├─tests # Directory for storing test cases
@@ -384,7 +391,7 @@ After the software packages are installed, configure environment variables to us
The full code adapted to Ascend AI Processors is generated in the **apex/apex** directory.
- 2. Go to the full code directory **apex/apex**, and compile and generate the binary installation package of Apex.
+ 2. Go to the full code directory **apex/apex**, and build the binary installation package of Apex.
```
cd ../apex
@@ -414,12 +421,12 @@ After the software packages are installed, configure environment variables to us
- **[How Do I Install GCC 7.3.0?](#how-do-i-install-gcc-7-3-0md)**
-- **[What Do I Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?](#what-do-i-do-if-torch-1-5-0xxxx-and-torchvision-do-not-match-when-torch--whl-is-installedmd)**
+- **[What to Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?](#what-to-do-if-torch-1-5-0xxxx-and-torchvision-do-not-match-when-torch--whl-is-installedmd)**
Installing CMake
-Procedure for upgrading CMake to 3.12.1
+The following describes how to install CMake 3.12.1.
1. Obtain the CMake software package.
@@ -447,8 +454,7 @@ Procedure for upgrading CMake to 3.12.1
ln -s /usr/local/cmake/bin/cmake /usr/bin/cmake
```
-5. Run the following command to check whether CMake has been installed:
-
+5. Check whether CMake has been installed.
```
cmake --version
```
@@ -525,7 +531,7 @@ Perform the following steps as the **root** user.
5. Set the environment variable.
- Training must be performed in the compilation environment with GCC upgraded. If you want to run training, configure the following environment variable in your training script:
+ Training must be performed in the compilation environment with GCC upgraded. Therefore, configure the following environment variable in your training script:
```
export LD_LIBRARY_PATH=${install_path}/lib64:${LD_LIBRARY_PATH}
@@ -537,11 +543,11 @@ Perform the following steps as the **root** user.
>Skip this step if you do not need to use the compilation environment with GCC upgraded.
-What Do I Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?
+What to Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?
#### Symptom
-During the installation of **torch-**_\*_**.whl**, the message "ERROR: torchvision 0.6.0 has requirement torch==1.5.0, but you'll have torch 1.5.0a0+1977093 which is incompatible" " is displayed.
+During the installation of **torch-**_\*_**.whl**, the message "ERROR: torchvision 0.6.0 has requirement torch==1.5.0, but you'll have torch 1.5.0a0+1977093 which is incompatible" is displayed.

diff --git a/docs/en/PyTorch Online Inference Guide/PyTorch Online Inference Guide.md b/docs/en/PyTorch Online Inference Guide/PyTorch Online Inference Guide.md
index fccfe6ab1811a7325772b63b6e640c99fc47e6ff..a3d28aff597a55f41c28d61e8e044dd7799e2275 100644
--- a/docs/en/PyTorch Online Inference Guide/PyTorch Online Inference Guide.md
+++ b/docs/en/PyTorch Online Inference Guide/PyTorch Online Inference Guide.md
@@ -57,10 +57,10 @@ The following are the environment variables required for starting the inference
export PATH=/usr/local/python3.7.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/python3.7.5/lib:$LD_LIBRARY_PATH
-# Sets the logical ID of a processor.
+# Set the logical ID of a processor.
export ASCEND_DEVICE_ID=0
-# Outputs log information. Replace it as required.
+# Output log information. Adjust the settings as required.
export ASCEND_SLOG_PRINT_TO_STDOUT=1
export ASCEND_GLOBAL_LOG_LEVEL=0
@@ -416,11 +416,11 @@ The following uses the ResNet-50 model as an example to describe how to perform
2. Edit the inference script.
- Create a model script file **resnet50\_infer\_for\_pytorch.py** and write code by referring to [Sample Code]().
+ Create a model script file **resnet50\_infer\_for\_pytorch.py**. For details about how to write the code, see [Sample Code]().
3. Run inference.
- Set environment variables by referring to [Environment Variable Configuration](#environment-variable-configurationmd) and then run the following command:
+ Set the environment variables (see [Environment Variable Configuration](#environment-variable-configurationmd)) and then run the following command:
```
python3 pytorch-resnet50-apex.py --data /data/imagenet \
@@ -491,13 +491,13 @@ However, the mixed precision training is limited by the precision range expresse
#### Initializing the Mixed Precision Model
-1. To use the mixed precision module Apex, you need to import the amp module from the Apex library as follows:
+1. To use the mixed precision module Apex, import the amp module from the Apex library.
```
from apex import amp
```
-2. After the amp module is imported, you need to initialize it so that it can modify the model, optimizer, and PyTorch internal functions. The initialization code is as follows:
+2. Initialize the amp module so that it can modify the model, optimizer, and PyTorch internal functions.
```
model, optimizer = amp.initialize(model, optimizer)
@@ -585,7 +585,7 @@ Perform the following steps as the **root** user.
5. Set the environment variable.
- The build environment after GCC upgrade is required for training. Therefore, you need to configure the following environment variable in the training script:
+ Training must be performed in the compilation environment with GCC upgraded. Therefore, configure the following environment variable in the training script:
```
export LD_LIBRARY_PATH=${install_path}/lib64:${LD_LIBRARY_PATH}
@@ -594,6 +594,6 @@ Perform the following steps as the **root** user.
**$\{install\_path\}** indicates the GCC 7.3.0 installation path configured in [3.](#en-us_topic_0000001146754749_en-us_topic_0000001072593337_l75d31a2874534a2092e80a5f865b46f0). In this example, the GCC 7.3.0 installation path is **/usr/local/linux\_gcc7.3.0/**.
> **NOTE:**
- >The environment variable needs to be configured only when you need to use the build environment after the GCC upgrade.
+ >Skip this step if you do not need to use the compilation environment with GCC upgraded.
diff --git a/docs/en/PyTorch Operator Development Guide/PyTorch Operator Development Guide.md b/docs/en/PyTorch Operator Development Guide/PyTorch Operator Development Guide.md
index 78351f5798567a8c2ffbcb5c6fe592b9d8635816..48ddd588986a17becfa7645d7a5c905a05cb490b 100644
--- a/docs/en/PyTorch Operator Development Guide/PyTorch Operator Development Guide.md
+++ b/docs/en/PyTorch Operator Development Guide/PyTorch Operator Development Guide.md
@@ -444,13 +444,13 @@ You can develop an operator adaptation plugin to convert the formats of the inpu
3. Define the main adaptation function of the operator.
- Determine the adaptation theme function for custom operators based on the dispatch function in the registered operator.
+ Determine the main adaptation function for custom operators based on the dispatch function in the registered operator.
4. Implement the main adaptation functions.
- Implement the operator adaptation theme function and construct the corresponding input, output, and attributes based on the TBE operator prototype.
+ Implement the operator's main adaptation function and construct the corresponding input, output, and attributes based on the TBE operator prototype.
-5. Use the **TORCH\_LIBRARY\_IMPL** macro to associate the operator description func in the **native\_functions.yaml** file generated during the operator registration. \(Only PyTorch 1.8.1 requires this step.\)
+5. \(Only PyTorch 1.8.1 requires this step.\) Use the **TORCH\_LIBRARY\_IMPL** macro to associate the operator description func in the **native\_functions.yaml** file generated during the operator registration.
**TORCH\_LIBRARY\_IMPL** is a macro provided by PyTorch for registered operator distribution. To use it, perform the following steps:
@@ -618,7 +618,7 @@ The following uses the torch.add\(\) operator as an example to describe how to a
}
```
-5. Use the **TORCH\_LIBRARY\_IMPL** macro to associate the registered operator. \(Only PyTorch 1.8.1 requires this step.\)
+5. \(Only PyTorch 1.8.1 requires this step.\) Use the **TORCH\_LIBRARY\_IMPL** macro to associate the registered operator.
```
TORCH_LIBRARY_IMPL(aten, NPU, m) {
@@ -827,7 +827,7 @@ pip3.7 install torchvision --no-deps
#### Symptom
-During the installation of **torch-**_\*_**.whl**, the message "ERROR: torchvision 0.6.0 has requirement torch==1.5.0, but you'll have torch 1.5.0a0+1977093 which is incompatible" " is displayed.
+During the installation of **torch-**_\*_**.whl**, the message "ERROR: torchvision 0.6.0 has requirement torch==1.5.0, but you'll have torch 1.5.0a0+1977093 which is incompatible" is displayed.

@@ -900,7 +900,7 @@ The custom TBE operator has been developed and adapted to PyTorch. However, the
There should be no error in this step. The log added in **add** should be displayed. If an error occurs, check the code to ensure that no newly developed code affects the test.
- 3. The newly developed custom TBE operator is combined into CANN. Logs are added to the operator entry as the running identifier.
+ 3. Integrate the newly developed custom TBE operator into CANN. Add logs to the operator entry as the running identifier.
4. After the compilation and installation of CANN are complete, call **python3.7.5 test\_add.py** to perform the test.
> **NOTE:**
@@ -1047,7 +1047,7 @@ The following describes how to upgrade CMake to 3.12.1.
ln -s /usr/local/cmake/bin/cmake /usr/bin/cmake
```
-5. Run the following command to check whether CMake has been installed:
+5. Check whether CMake has been installed.
```
cmake --version
diff --git "a/docs/zh/\346\224\257\346\214\201ONNX\347\256\227\345\255\220\346\270\205\345\215\225/\346\224\257\346\214\201ONNX\347\256\227\345\255\220\346\270\205\345\215\225.md" "b/docs/zh/\346\224\257\346\214\201ONNX\347\256\227\345\255\220\346\270\205\345\215\225/\346\224\257\346\214\201ONNX\347\256\227\345\255\220\346\270\205\345\215\225.md"
index 2c606fdd00673c1876bddcd3cc3e13de476e5a29..298c52f3bd607673ef2105d431b9d45706406139 100644
--- "a/docs/zh/\346\224\257\346\214\201ONNX\347\256\227\345\255\220\346\270\205\345\215\225/\346\224\257\346\214\201ONNX\347\256\227\345\255\220\346\270\205\345\215\225.md"
+++ "b/docs/zh/\346\224\257\346\214\201ONNX\347\256\227\345\255\220\346\270\205\345\215\225/\346\224\257\346\214\201ONNX\347\256\227\345\255\220\346\270\205\345\215\225.md"
@@ -220,68 +220,6 @@ y:一个tensor,数据类型和shape与输入一致
Opset v9/v10/v11/v12/v13
-AdaptiveAvgPool2D
-
-### 功能
-
-对输入进行2d自适应平均池化计算
-
-### 边界
-
-【输入】
-
-一个输入
-
-x:一个tensor,数据类型:float16、float32
-
-【属性】
-
-一个属性:
-
-output\_size:int型数组,指定输出的hw的shape大小
-
-【输出】
-
-一个输出
-
-y:一个tensor,数据类型:与x类型一致
-
-### 支持的ONNX版本
-
-自定义算子,无对应onnx版本
-
-AdaptiveMaxPool2D
-
-### 功能
-
-对输入进行2d自适应最大池化计算
-
-### 边界
-
-【输入】
-
-一个输入
-
-x:一个tensor,数据类型:float16、float32、float64
-
-【属性】
-
-一个属性:
-
-output\_size:int型数组,指定输出的hw的shape大小
-
-【输出】
-
-两个输出
-
-y:一个tensor,数据类型:与x类型一致
-
-argmax:一个tensor,数据类型:int32,int64
-
-### 支持的ONNX版本
-
-自定义算子,无对应onnx版本
-
Add
### 功能
@@ -306,36 +244,6 @@ C:一个张量,数据类型与A相同
Opset v8/v9/v10/v11/v12/v13
-Addcmul
-
-### 功能
-
-元素级计算\(x1 \* x2\) \* value + input\_data
-
-### 边界
-
-【输入】
-
-四个输入
-
-input\_data:一个tensor,数据类型:float16、float32、int32、int8、uint8
-
-x1: 一个tensor,类型与input\_data相同
-
-x2: 一个tensor,类型与input\_data相同
-
-value: 一个tensor,类型与input\_data相同
-
-【输出】
-
-一个输出
-
-y:一个tensor,数据类型:y与输入相同
-
-### 支持的ONNX版本
-
-自定义算子,无对应onnx版本
-
AffineGrid
### 功能
@@ -3569,38 +3477,6 @@ reshaped:一个张量
Opset v8/v9/v10/v11/v12/v13
-ReverseSequence
-
-### 功能
-
-根据指定长度对batch序列进行排序
-
-### 边界
-
-【输入】
-
-2个输入
-
-x:tensor,rank \>= 2,数据类型:float16、float32
-
-sequence\_lens:tensor,每个batch的指定长度,数据类型:int64
-
-【输出】
-
-一个输出
-
-y:一个张量,和输入x同样的type和shape
-
-【属性】
-
-batch\_axis:int,默认为1,含义:指定batch轴
-
-time\_axis:int,默认为1,含义:指定time轴
-
-### 支持的ONNX版本
-
-Opset v10/v11/v12/v13
-
### 功能