diff --git a/docs/en/PyTorch Installation Guide/PyTorch Installation Guide.md b/docs/en/PyTorch Installation Guide/PyTorch Installation Guide.md index 28542efee4d9c8dda4afe59ab9c20562dc31699c..333f7c8924c59786f9e714bc027d73d1fbed8d28 100644 --- a/docs/en/PyTorch Installation Guide/PyTorch Installation Guide.md +++ b/docs/en/PyTorch Installation Guide/PyTorch Installation Guide.md @@ -8,7 +8,7 @@ - [References](#referencesmd) - [Installing CMake](#installing-cmakemd) - [How Do I Install GCC 7.3.0?](#how-do-i-install-gcc-7-3-0md) - - [What Do I Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?](#what-do-i-do-if-torch-1-5-0xxxx-and-torchvision-do-not-match-when-torch--whl-is-installedmd) + - [What to Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?](#what-to-do-if-torch-1-5-0xxxx-and-torchvision-do-not-match-when-torch--whl-is-installedmd)

Overview

When setting up the environment for PyTorch model development and running, you can manually build and install the modules adapted to the PyTorch framework on a server. @@ -33,10 +33,11 @@ When setting up the environment for PyTorch model development and running, you c #### Prerequisites -- The development or operating environment of CANN has been installed. For details, see the _CANN Software Installation Guide_. +- The development or operating environment of CANN has been installed. For details, see the _CANN Software Installation Guide_. - CMake 3.12.0 or later has been installed. For details about how to install CMake, see [Installing CMake](#installing-cmakemd). - GCC 7.3.0 or later has been installed. For details about how to install and use GCC 7.3.0, see [How Do I Install GCC 7.3.0?](#how-do-i-install-gcc-7-3-0md). -- Python 3.7.5 or 3.8 has been installed. +- Python 3.7.5, 3.8, or 3.9 has been installed. +- Note that PyTorch 1.5 does not support build and installation with Python 3.9; only PyTorch 1.8.1 does. - The Patch and Git tools have been installed in the environment. To install the tools for Ubuntu and CentOS, run the following commands: - Ubuntu @@ -70,10 +71,13 @@ When setting up the environment for PyTorch model development and running, you c 3. Obtain the PyTorch source code. - 1. Run the following command to obtain the PyTorch source code adapted to Ascend AI Processors: + 1. Run the following command to obtain the PyTorch source code adapted to Ascend AI Processors and switch to the required branch: ``` git clone https://gitee.com/ascend/pytorch.git + # By default, the master branch is used. If another branch is required, run the git checkout command to switch to it. + # git checkout -b 2.0.3.tr5 remotes/origin/2.0.3.tr5 + ``` The directory structure of the downloaded source code is as follows: @@ -106,7 +110,7 @@ When setting up the environment for PyTorch model development and running, you c git clone -b v1.8.1 --depth=1 https://github.com/pytorch/pytorch.git ``` - 3. Run the following commands to go to the native PyTorch code directory **pytorch** and obtain the PyTorch passive dependency code: + 3. Go to the native PyTorch code directory **pytorch** and obtain the PyTorch dependency code. ``` cd pytorch @@ -143,6 +147,9 @@ When setting up the environment for PyTorch model development and running, you c bash build.sh --python=3.7 or bash build.sh --python=3.8 + or + bash build.sh --python=3.9 # PyTorch 1.5 does not support build and installation using Python 3.9. + ``` Specify the Python version in the environment for the build. The generated binary package is stored in the current dist directory **pytorch/pytorch/dist**. @@ -179,7 +186,7 @@ After the software packages are installed, configure environment variables to us export HCCL_WHITELIST_DISABLE=1 # Disable the HCCL trustlist. # Scenario 2: Multi-node scenario export HCCL_WHITELIST_DISABLE=1 # Disable the HCCL trustlist. - export HCCL_IF_IP="1.1.1.1" # 1.1.1.1 is the NIC IP address of the host. Change it based on the site requirements. Ensure that the NIC IP addresses in use can communicate with each other in the cluster. + export HCCL_IF_IP="1.1.1.1" # Replace 1.1.1.1 with the actual NIC IP address of the host. Ensure that the NIC IP addresses in use can communicate with each other in the cluster. ``` 3. \(Optional\) Configure function or performance environment variables in the NPU scenario. The variables are disabled by default. 
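After the binary package is installed and the environment variables are configured, a quick runtime check helps confirm that the adapted build is usable. The following is a minimal sketch, not part of the official procedure; it assumes the Ascend-adapted build exposes the **torch.npu** namespace and that logical device 0 is available in your environment.

```
# Minimal check that the Ascend-adapted PyTorch build is usable.
# Assumption: the adapted build provides the torch.npu namespace.
import torch

print(torch.__version__)           # version of the installed wheel

if torch.npu.is_available():
    torch.npu.set_device("npu:0")  # logical device ID, see ASCEND_DEVICE_ID
    x = torch.ones(2, 2).npu()     # move a tensor to the NPU
    print(x + x)                   # run a simple operator on the device
else:
    print("No NPU visible; check the CANN environment variables.")
```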
@@ -338,7 +345,7 @@ After the software packages are installed, configure environment variables to us apex │ ├─patch # Directory of the patch adapted to Ascend AI Processors │ ├─npu.patch - │ ├─scripts # Build and create a directory. + │ ├─scripts # Directory of build scripts │ ├─gen.sh │ ├─src # Source code directory │ ├─tests # Directory for storing test cases @@ -358,7 +365,7 @@ After the software packages are installed, configure environment variables to us │ ├─apex # Directory for storing the native Apex code │ ├─patch # Directory of the patch adapted to Ascend AI Processors │ ├─npu.patch - │ ├─scripts # Build and create a directory. + │ ├─scripts # Directory of build scripts │ ├─gen.sh │ ├─src # Source code directory │ ├─tests # Directory for storing test cases @@ -384,7 +391,7 @@ After the software packages are installed, configure environment variables to us The full code adapted to Ascend AI Processors is generated in the **apex/apex** directory. - 2. Go to the full code directory **apex/apex**, and compile and generate the binary installation package of Apex. + 2. Go to the full code directory **apex/apex**, and build the binary installation package of Apex. \(See the import check after the reference list below.\) ``` cd ../apex @@ -414,12 +421,12 @@ After the software packages are installed, configure environment variables to us - **[How Do I Install GCC 7.3.0?](#how-do-i-install-gcc-7-3-0md)** -- **[What Do I Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?](#what-do-i-do-if-torch-1-5-0xxxx-and-torchvision-do-not-match-when-torch--whl-is-installedmd)** +- **[What to Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?](#what-to-do-if-torch-1-5-0xxxx-and-torchvision-do-not-match-when-torch--whl-is-installedmd)**
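As a quick check that the Apex package built above installed correctly, the short sketch below can be run in the same Python environment. This is an assumption-level illustration, not part of the official procedure; the full mixed precision workflow is described in the online inference guide.

```
# Import check for the Apex wheel built in the preceding steps.
# The import fails here if the package is missing or mismatched with torch.
import torch
from apex import amp

print("torch", torch.__version__, "with Apex amp available")
```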

Installing CMake

-Procedure for upgrading CMake to 3.12.1 +The following describes how to install CMake 3.12.1. 1. Obtain the CMake software package. @@ -447,8 +454,7 @@ Procedure for upgrading CMake to 3.12.1 ln -s /usr/local/cmake/bin/cmake /usr/bin/cmake ``` -5. Run the following command to check whether CMake has been installed: - +5. Check whether CMake has been installed. ``` cmake --version ``` @@ -525,7 +531,7 @@ Perform the following steps as the **root** user. 5. Set the environment variable. - Training must be performed in the compilation environment with GCC upgraded. If you want to run training, configure the following environment variable in your training script: + Training must be performed in the compilation environment with GCC upgraded. Therefore, configure the following environment variable in your training script: ``` export LD_LIBRARY_PATH=${install_path}/lib64:${LD_LIBRARY_PATH} @@ -537,11 +543,11 @@ Perform the following steps as the **root** user. >Skip this step if you do not need to use the compilation environment with GCC upgraded.

-What Do I Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?
+What to Do If "torch 1.5.0xxxx" and "torchvision" Do Not Match When torch-\*.whl Is Installed?

#### Symptom -During the installation of **torch-**_\*_**.whl**, the message "ERROR: torchvision 0.6.0 has requirement torch==1.5.0, but you'll have torch 1.5.0a0+1977093 which is incompatible" " is displayed. +During the installation of **torch-**_\*_**.whl**, the message "ERROR: torchvision 0.6.0 has requirement torch==1.5.0, but you'll have torch 1.5.0a0+1977093 which is incompatible" is displayed. ![](figures/en-us_image_0000001190081735.png) diff --git a/docs/en/PyTorch Online Inference Guide/PyTorch Online Inference Guide.md b/docs/en/PyTorch Online Inference Guide/PyTorch Online Inference Guide.md index fccfe6ab1811a7325772b63b6e640c99fc47e6ff..a3d28aff597a55f41c28d61e8e044dd7799e2275 100644 --- a/docs/en/PyTorch Online Inference Guide/PyTorch Online Inference Guide.md +++ b/docs/en/PyTorch Online Inference Guide/PyTorch Online Inference Guide.md @@ -57,10 +57,10 @@ The following are the environment variables required for starting the inference export PATH=/usr/local/python3.7.5/bin:$PATH export LD_LIBRARY_PATH=/usr/local/python3.7.5/lib:$LD_LIBRARY_PATH -# Sets the logical ID of a processor. +# Set the logical ID of a processor. export ASCEND_DEVICE_ID=0 -# Outputs log information. Replace it as required. +# Output log information. Replace it as required. export ASCEND_SLOG_PRINT_TO_STDOUT=1 export ASCEND_GLOBAL_LOG_LEVEL=0 @@ -416,11 +416,11 @@ The following uses the ResNet-50 model as an example to describe how to perform 2. Edit the inference script. - Create a model script file **resnet50\_infer\_for\_pytorch.py** and write code by referring to [Sample Code](). + Create a model script file **resnet50\_infer\_for\_pytorch.py** and write code. For how to write the code, see [Sample Code](). 3. Run inference. - Set environment variables by referring to [Environment Variable Configuration](#environment-variable-configurationmd) and then run the following command: + Set environment variables (for how to set them, see [Environment Variable Configuration](#environment-variable-configurationmd)) and then run the following command: ``` python3 pytorch-resnet50-apex.py --data /data/imagenet \ @@ -491,13 +491,13 @@ However, the mixed precision training is limited by the precision range expresse #### Initializing the Mixed Precision Model -1. To use the mixed precision module Apex, you need to import the amp module from the Apex library as follows: +1. To use the mixed precision module Apex, import the amp module from the Apex library. ``` from apex import amp ``` -2. After the amp module is imported, you need to initialize it so that it can modify the model, optimizer, and PyTorch internal functions. The initialization code is as follows: +2. Initialize the amp module so that it can modify the model, optimizer, and PyTorch internal functions. ``` model, optimizer = amp.initialize(model, optimizer) @@ -585,7 +585,7 @@ Perform the following steps as the **root** user. 5. Set the environment variable. - The build environment after GCC upgrade is required for training. Therefore, you need to configure the following environment variable in the training script: + Training must be performed in the compilation environment with GCC upgraded. Therefore, configure the following environment variable in the training script: ``` export LD_LIBRARY_PATH=${install_path}/lib64:${LD_LIBRARY_PATH} @@ -594,6 +594,6 @@ Perform the following steps as the **root** user. 
**$\{install\_path\}** indicates the GCC 7.3.0 installation path configured in [3.](#en-us_topic_0000001146754749_en-us_topic_0000001072593337_l75d31a2874534a2092e80a5f865b46f0). In this example, the GCC 7.3.0 installation path is **/usr/local/linux\_gcc7.3.0/**. >![](public_sys-resources/icon-note.gif) **NOTE:** - >The environment variable needs to be configured only when you need to use the build environment after the GCC upgrade. + >Skip this step if you do not need to use the compilation environment with GCC upgraded. diff --git a/docs/en/PyTorch Operator Development Guide/PyTorch Operator Development Guide.md b/docs/en/PyTorch Operator Development Guide/PyTorch Operator Development Guide.md index 78351f5798567a8c2ffbcb5c6fe592b9d8635816..48ddd588986a17becfa7645d7a5c905a05cb490b 100644 --- a/docs/en/PyTorch Operator Development Guide/PyTorch Operator Development Guide.md +++ b/docs/en/PyTorch Operator Development Guide/PyTorch Operator Development Guide.md @@ -444,13 +444,13 @@ You can develop an operator adaptation plugin to convert the formats of the inpu 3. Define the main adaptation function of the operator. - Determine the adaptation theme function for custom operators based on the dispatch function in the registered operator. + Determine the main adaptation function for custom operators based on the dispatch function in the registered operator. 4. Implement the main adaptation functions. - Implement the operator adaptation theme function and construct the corresponding input, output, and attributes based on the TBE operator prototype. + Implement the operator's main adaptation function and construct the corresponding input, output, and attributes based on the TBE operator prototype. -5. Use the **TORCH\_LIBRARY\_IMPL** macro to associate the operator description func in the **native\_functions.yaml** file generated during the operator registration. \(Only PyTorch 1.8.1 requires this step.\) +5. \(Only PyTorch 1.8.1 requires this step.\) Use the **TORCH\_LIBRARY\_IMPL** macro to associate the operator with its description **func** in the **native\_functions.yaml** file generated during operator registration. **TORCH\_LIBRARY\_IMPL** is a macro provided by PyTorch for registered operator distribution. To use it, perform the following steps: @@ -618,7 +618,7 @@ The following uses the torch.add\(\) operator as an example to describe how to a } ``` -5. Use the **TORCH\_LIBRARY\_IMPL** macro to associate the registered operator. \(Only PyTorch 1.8.1 requires this step.\) +5. \(Only PyTorch 1.8.1 requires this step.\) Use the **TORCH\_LIBRARY\_IMPL** macro to associate the registered operator. ``` TORCH_LIBRARY_IMPL(aten, NPU, m) { @@ -827,7 +827,7 @@ pip3.7 install torchvision --no-deps #### Symptom -During the installation of **torch-**_\*_**.whl**, the message "ERROR: torchvision 0.6.0 has requirement torch==1.5.0, but you'll have torch 1.5.0a0+1977093 which is incompatible" " is displayed. +During the installation of **torch-**_\*_**.whl**, the message "ERROR: torchvision 0.6.0 has requirement torch==1.5.0, but you'll have torch 1.5.0a0+1977093 which is incompatible" is displayed. ![](figures/en-us_image_0000001144082048.png) @@ -900,7 +900,7 @@ The custom TBE operator has been developed and adapted to PyTorch. However, the There should be no error in this step. The log added in **add** should be displayed. If an error occurs, check the code to ensure that no newly developed code affects the test. - 3. The newly developed custom TBE operator is combined into CANN. 
Logs are added to the operator entry as the running identifier. + 3. Combine the newly developed custom TBE operator into CANN. Add logs to the operator entry as the running identifier. 4. After the compilation and installation of CANN are complete, call **python3.7.5 test\_add.py** to perform the test. >![](public_sys-resources/icon-note.gif) **NOTE:** @@ -1047,7 +1047,7 @@ The following describes how to upgrade CMake to 3.12.1. ln -s /usr/local/cmake/bin/cmake /usr/bin/cmake ``` -5. Run the following command to check whether CMake has been installed: +5. Check whether CMake has been installed. ``` cmake --version diff --git "a/docs/zh/PyTorch API\346\224\257\346\214\201\346\270\205\345\215\225_1.8.1.md" "b/docs/zh/PyTorch API\346\224\257\346\214\201\346\270\205\345\215\225_1.8.1.md" deleted file mode 100644 index c280756f98f804d436367f8214ac998f6d955ce1..0000000000000000000000000000000000000000 --- "a/docs/zh/PyTorch API\346\224\257\346\214\201\346\270\205\345\215\225_1.8.1.md" +++ /dev/null @@ -1,1232 +0,0 @@ -# Torch - -## Tensors - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [is_tensor](https://pytorch.org/docs/1.8.1/generated/torch.is_tensor.html) | 否 | -| 2 | [is_storage](https://pytorch.org/docs/1.8.1/generated/torch.is_storage.html) | 否 | -| 3 | [is_complex](https://pytorch.org/docs/1.8.1/generated/torch.is_complex.html) | 否 | -| 4 | [is_floating_point](https://pytorch.org/docs/1.8.1/generated/torch.is_floating_point.html) | 否 | -| 5 | [is_nonzero](https://pytorch.org/docs/1.8.1/generated/torch.is_nonzero.html) | 否 | -| 6 | [set_default_dtype](https://pytorch.org/docs/1.8.1/generated/torch.set_default_dtype.html) | 否 | -| 7 | [get_default_dtype](https://pytorch.org/docs/1.8.1/generated/torch.get_default_dtype.html) | 否 | -| 8 | [set_default_tensor_type](https://pytorch.org/docs/1.8.1/generated/torch.set_default_tensor_type.html) | 否 | -| 9 | [numel](https://pytorch.org/docs/1.8.1/generated/torch.numel.html) | 否 | -| 10 | [set_printoptions](https://pytorch.org/docs/1.8.1/generated/torch.set_printoptions.html) | 否 | -| 11 | [set_flush_denormal](https://pytorch.org/docs/1.8.1/generated/torch.set_flush_denormal.html) | 否 | - -### Creation Ops - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [tensor](https://pytorch.org/docs/1.8.1/generated/torch.tensor.html) | 否 | -| 2 | [sparse_coo_tensor](https://pytorch.org/docs/1.8.1/generated/torch.sparse_coo_tensor.html) | 否 | -| 3 | [as_tensor](https://pytorch.org/docs/1.8.1/generated/torch.as_tensor.html) | 否 | -| 4 | [as_strided](https://pytorch.org/docs/1.8.1/generated/torch.as_strided.html) | 否 | -| 5 | [from_numpy](https://pytorch.org/docs/1.8.1/generated/torch.from_numpy.html) | 否 | -| 6 | [zeros](https://pytorch.org/docs/1.8.1/generated/torch.zeros.html) | 否 | -| 7 | [zeros_like](https://pytorch.org/docs/1.8.1/generated/torch.zeros_like.html) | 否 | -| 8 | [ones](https://pytorch.org/docs/1.8.1/generated/torch.ones.html) | 否 | -| 9 | [ones_like](https://pytorch.org/docs/1.8.1/generated/torch.ones_like.html) | 否 | -| 10 | [arange](https://pytorch.org/docs/1.8.1/generated/torch.arange.html) | 否 | -| 11 | [range](https://pytorch.org/docs/1.8.1/generated/torch.range.html) | 否 | -| 12 | [linspace](https://pytorch.org/docs/1.8.1/generated/torch.linspace.html) | 否 | -| 13 | [logspace](https://pytorch.org/docs/1.8.1/generated/torch.logspace.html) | 否 | -| 14 | 
[eye](https://pytorch.org/docs/1.8.1/generated/torch.eye.html) | 否 | -| 15 | [empty](https://pytorch.org/docs/1.8.1/generated/torch.empty.html) | 否 | -| 16 | [empty_like](https://pytorch.org/docs/1.8.1/generated/torch.empty_like.html) | 否 | -| 17 | [empty_strided](https://pytorch.org/docs/1.8.1/generated/torch.empty_strided.html) | 否 | -| 18 | [full](https://pytorch.org/docs/1.8.1/generated/torch.full.html) | 否 | -| 19 | [full_like](https://pytorch.org/docs/1.8.1/generated/torch.full_like.html) | 否 | -| 20 | [quantize_per_tensor](https://pytorch.org/docs/1.8.1/generated/torch.quantize_per_tensor.html) | 否 | -| 21 | [quantize_per_channel](https://pytorch.org/docs/1.8.1/generated/torch.quantize_per_channel.html) | 否 | -| 22 | [dequantize](https://pytorch.org/docs/1.8.1/generated/torch.dequantize.html) | 否 | -| 23 | [complex](https://pytorch.org/docs/1.8.1/generated/torch.complex.html) | 否 | -| 24 | [polar](https://pytorch.org/docs/1.8.1/generated/torch.polar.html) | 否 | -| 25 | [heaviside](https://pytorch.org/docs/1.8.1/generated/torch.heaviside.html) | 否 | - -### Indexing, Slicing, Joining, Mutating Ops - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [cat](https://pytorch.org/docs/1.8.1/generated/torch.cat.html) | 否 | -| 2 | [chunk](https://pytorch.org/docs/1.8.1/generated/torch.chunk.html) | 否 | -| 3 | [column_stack](https://pytorch.org/docs/1.8.1/generated/torch.column_stack.html) | 否 | -| 4 | [dstack](https://pytorch.org/docs/1.8.1/generated/torch.dstack.html) | 否 | -| 5 | [gather](https://pytorch.org/docs/1.8.1/generated/torch.gather.html) | 否 | -| 6 | [hstack](https://pytorch.org/docs/1.8.1/generated/torch.hstack.html) | 否 | -| 7 | [index_select](https://pytorch.org/docs/1.8.1/generated/torch.index_select.html) | 否 | -| 8 | [masked_select](https://pytorch.org/docs/1.8.1/generated/torch.masked_select.html) | 否 | -| 9 | [movedim](https://pytorch.org/docs/1.8.1/generated/torch.movedim.html) | 否 | -| 10 | [moveaxis](https://pytorch.org/docs/1.8.1/generated/torch.moveaxis.html) | 否 | -| 11 | [narrow](https://pytorch.org/docs/1.8.1/generated/torch.narrow.html) | 否 | -| 12 | [nonzero](https://pytorch.org/docs/1.8.1/generated/torch.nonzero.html) | 否 | -| 13 | [reshape](https://pytorch.org/docs/1.8.1/generated/torch.reshape.html) | 否 | -| 14 | [row_stack](https://pytorch.org/docs/1.8.1/generated/torch.row_stack.html) | 否 | -| 15 | [scatter](https://pytorch.org/docs/1.8.1/generated/torch.scatter.html) | 否 | -| 16 | [scatter_add](https://pytorch.org/docs/1.8.1/generated/torch.scatter_add.html) | 否 | -| 17 | [split](https://pytorch.org/docs/1.8.1/generated/torch.split.html) | 否 | -| 18 | [squeeze](https://pytorch.org/docs/1.8.1/generated/torch.squeeze.html) | 否 | -| 19 | [stack](https://pytorch.org/docs/1.8.1/generated/torch.stack.html) | 否 | -| 20 | [swapaxes](https://pytorch.org/docs/1.8.1/generated/torch.swapaxes.html) | 否 | -| 21 | [swapdims](https://pytorch.org/docs/1.8.1/generated/torch.swapdims.html) | 否 | -| 22 | [t](https://pytorch.org/docs/1.8.1/generated/torch.t.html) | 否 | -| 23 | [take](https://pytorch.org/docs/1.8.1/generated/torch.take.html) | 否 | -| 24 | [tensor_split](https://pytorch.org/docs/1.8.1/generated/torch.tensor_split.html) | 否 | -| 25 | [tile](https://pytorch.org/docs/1.8.1/generated/torch.tile.html) | 否 | -| 26 | [transpose](https://pytorch.org/docs/1.8.1/generated/torch.transpose.html) | 否 | -| 27 | [unbind](https://pytorch.org/docs/1.8.1/generated/torch.unbind.html) | 否 | -| 28 | 
[unsqueeze](https://pytorch.org/docs/1.8.1/generated/torch.unsqueeze.html) | 否 | -| 29 | [vstack](https://pytorch.org/docs/1.8.1/generated/torch.vstack.html) | 否 | -| 30 | [where](https://pytorch.org/docs/1.8.1/generated/torch.where.html) | 否 | - -## Generators - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [Generator](https://pytorch.org/docs/1.8.1/generated/torch.Generator.html) | 否 | - -## Random sampling - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [seed](https://pytorch.org/docs/1.8.1/generated/torch.seed.html) | 否 | -| 2 | [manual_seed](https://pytorch.org/docs/1.8.1/generated/torch.manual_seed.html) | 否 | -| 3 | [initial_seed](https://pytorch.org/docs/1.8.1/generated/torch.initial_seed.html) | 否 | -| 4 | [get_rng_state](https://pytorch.org/docs/1.8.1/generated/torch.get_rng_state.html) | 否 | -| 5 | [set_rng_state](https://pytorch.org/docs/1.8.1/generated/torch.set_rng_state.html) | 否 | -| 6 | [bernoulli](https://pytorch.org/docs/1.8.1/generated/torch.bernoulli.html) | 否 | -| 7 | [multinomial](https://pytorch.org/docs/1.8.1/generated/torch.multinomial.html) | 否 | -| 8 | [normal](https://pytorch.org/docs/1.8.1/generated/torch.normal.html) | 否 | -| 9 | [poisson](https://pytorch.org/docs/1.8.1/generated/torch.poisson.html) | 否 | -| 10 | [rand](https://pytorch.org/docs/1.8.1/generated/torch.rand.html) | 否 | -| 11 | [rand_like](https://pytorch.org/docs/1.8.1/generated/torch.rand_like.html) | 否 | -| 12 | [randint](https://pytorch.org/docs/1.8.1/generated/torch.randint.html) | 否 | -| 13 | [randint_like](https://pytorch.org/docs/1.8.1/generated/torch.randint_like.html) | 否 | -| 14 | [randn](https://pytorch.org/docs/1.8.1/generated/torch.randn.html) | 否 | -| 15 | [randn_like](https://pytorch.org/docs/1.8.1/generated/torch.randn_like.html) | 否 | -| 16 | [randperm](https://pytorch.org/docs/1.8.1/generated/torch.randperm.html) | 否 | - -### In-place random sampling - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [torch.Tensor.bernoulli_()](https://pytorch.org/docs/1.8.1/tensors.html) | 否 | -| 2 | [torch.Tensor.cauchy_()](https://pytorch.org/docs/1.8.1/tensors.html) | 否 | -| 3 | [torch.Tensor.exponential_()](https://pytorch.org/docs/1.8.1/tensors.html) | 否 | -| 4 | [torch.Tensor.geometric_()](https://pytorch.org/docs/1.8.1/tensors.html) | 否 | -| 5 | [torch.Tensor.log_normal_()](https://pytorch.org/docs/1.8.1/tensors.html) | 否 | -| 6 | [torch.Tensor.normal_()](https://pytorch.org/docs/1.8.1/tensors.html) | 否 | -| 7 | [torch.Tensor.random_()](https://pytorch.org/docs/1.8.1/tensors.html) | 否 | -| 8 | [torch.Tensor.uniform_()](https://pytorch.org/docs/1.8.1/tensors.html) | 否 | - -### Quasi-random sampling - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [quasirandom.SobolEngine](https://pytorch.org/docs/1.8.1/generated/torch.quasirandom.SobolEngine.html) | 否 | - -## Serialization - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [save](https://pytorch.org/docs/1.8.1/generated/torch.save.html) | 否 | -| 2 | [load](https://pytorch.org/docs/1.8.1/generated/torch.load.html) | 否 | - -## Parallelism - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | 
[get_num_threads](https://pytorch.org/docs/1.8.1/generated/torch.get_num_threads.html) | 否 | -| 2 | [set_num_threads](https://pytorch.org/docs/1.8.1/generated/torch.set_num_threads.html) | 否 | -| 3 | [get_num_interop_threads](https://pytorch.org/docs/1.8.1/generated/torch.get_num_interop_threads.html) | 否 | -| 4 | [set_num_interop_threads](https://pytorch.org/docs/1.8.1/generated/torch.set_num_interop_threads.html) | 否 | - -## Locally disabling gradient computation - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [no_grad](https://pytorch.org/docs/1.8.1/generated/torch.no_grad.html#torch.no_grad) | 否 | -| 2 | [enable_grad](https://pytorch.org/docs/1.8.1/generated/torch.enable_grad.html#torch.enable_grad) | 否 | -| 3 | set_grad_enabled | 否 | - -## Math operations - -### Pointwise Ops - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [abs](https://pytorch.org/docs/1.8.1/generated/torch.abs.html#torch.abs) | 否 | -| 2 | [absolute](https://pytorch.org/docs/1.8.1/generated/torch.absolute.html#torch.absolute) | 否 | -| 3 | [acos](https://pytorch.org/docs/1.8.1/generated/torch.acos.html#torch.acos) | 否 | -| 4 | [arccos](https://pytorch.org/docs/1.8.1/generated/torch.arccos.html#torch.arccos) | 否 | -| 5 | [acosh](https://pytorch.org/docs/1.8.1/generated/torch.acosh.html#torch.acosh) | 否 | -| 6 | [arccosh](https://pytorch.org/docs/1.8.1/generated/torch.arccosh.html#torch.arccosh) | 否 | -| 7 | [add](https://pytorch.org/docs/1.8.1/generated/torch.add.html#torch.add) | 否 | -| 8 | [addcdiv](https://pytorch.org/docs/1.8.1/generated/torch.addcdiv.html#torch.addcdiv) | 否 | -| 9 | [addcmul](https://pytorch.org/docs/1.8.1/generated/torch.addcmul.html#torch.addcmul) | 否 | -| 10 | [angle](https://pytorch.org/docs/1.8.1/generated/torch.angle.html#torch.angle) | 否 | -| 11 | [asin](https://pytorch.org/docs/1.8.1/generated/torch.asin.html#torch.asin) | 否 | -| 12 | [arcsin](https://pytorch.org/docs/1.8.1/generated/torch.arcsin.html#torch.arcsin) | 否 | -| 13 | [asinh](https://pytorch.org/docs/1.8.1/generated/torch.asinh.html#torch.asinh) | 否 | -| 14 | [arcsinh](https://pytorch.org/docs/1.8.1/generated/torch.arcsinh.html#torch.arcsinh) | 否 | -| 15 | [atan](https://pytorch.org/docs/1.8.1/generated/torch.atan.html#torch.atan) | 否 | -| 16 | [arctan](https://pytorch.org/docs/1.8.1/generated/torch.arctan.html#torch.arctan) | 否 | -| 17 | [atanh](https://pytorch.org/docs/1.8.1/generated/torch.atanh.html#torch.atanh) | 否 | -| 18 | [arctanh](https://pytorch.org/docs/1.8.1/generated/torch.arctanh.html#torch.arctanh) | 否 | -| 19 | [atan2](https://pytorch.org/docs/1.8.1/generated/torch.atan2.html#torch.atan2) | 否 | -| 20 | [bitwise_not](https://pytorch.org/docs/1.8.1/generated/torch.bitwise_not.html#torch.bitwise_not) | 否 | -| 21 | [bitwise_and](https://pytorch.org/docs/1.8.1/generated/torch.bitwise_and.html#torch.bitwise_and) | 否 | -| 22 | [bitwise_or](https://pytorch.org/docs/1.8.1/generated/torch.bitwise_or.html#torch.bitwise_or) | 否 | -| 23 | [bitwise_xor](https://pytorch.org/docs/1.8.1/generated/torch.bitwise_xor.html#torch.bitwise_xor) | 否 | -| 24 | [ceil](https://pytorch.org/docs/1.8.1/generated/torch.ceil.html#torch.ceil) | 否 | -| 25 | [clamp](https://pytorch.org/docs/1.8.1/generated/torch.clamp.html#torch.clamp) | 否 | -| 26 | [clip](https://pytorch.org/docs/1.8.1/generated/torch.clip.html#torch.clip) | 否 | -| 27 | [conj](https://pytorch.org/docs/1.8.1/generated/torch.conj.html#torch.conj) | 
否 | -| 28 | [copysign](https://pytorch.org/docs/1.8.1/generated/torch.copysign.html#torch.copysign) | 否 | -| 29 | [cos](https://pytorch.org/docs/1.8.1/generated/torch.cos.html#torch.cos) | 否 | -| 30 | [cosh](https://pytorch.org/docs/1.8.1/generated/torch.cosh.html#torch.cosh) | 否 | -| 31 | [deg2rad](https://pytorch.org/docs/1.8.1/generated/torch.deg2rad.html#torch.deg2rad) | 否 | -| 32 | [div](https://pytorch.org/docs/1.8.1/generated/torch.div.html#torch.div) | 否 | -| 33 | [divide](https://pytorch.org/docs/1.8.1/generated/torch.divide.html#torch.divide) | 否 | -| 34 | [digamma](https://pytorch.org/docs/1.8.1/generated/torch.digamma.html#torch.digamma) | 否 | -| 35 | [erf](https://pytorch.org/docs/1.8.1/generated/torch.erf.html#torch.erf) | 否 | -| 36 | [erfc](https://pytorch.org/docs/1.8.1/generated/torch.erfc.html#torch.erfc) | 否 | -| 37 | [erfinv](https://pytorch.org/docs/1.8.1/generated/torch.erfinv.html#torch.erfinv) | 否 | -| 38 | [exp](https://pytorch.org/docs/1.8.1/generated/torch.exp.html#torch.exp) | 否 | -| 39 | [exp2](https://pytorch.org/docs/1.8.1/generated/torch.exp2.html#torch.exp2) | 否 | -| 40 | [expm1](https://pytorch.org/docs/1.8.1/generated/torch.expm1.html#torch.expm1) | 否 | -| 41 | [fake_quantize_per_channel_affine](https://pytorch.org/docs/1.8.1/generated/torch.fake_quantize_per_channel_affine.html#torch.fake_quantize_per_channel_affine) | 否 | -| 42 | [fake_quantize_per_tensor_affine](https://pytorch.org/docs/1.8.1/generated/torch.fake_quantize_per_tensor_affine.html#torch.fake_quantize_per_tensor_affine) | 否 | -| 43 | [fix](https://pytorch.org/docs/1.8.1/generated/torch.fix.html#torch.fix) | 否 | -| 44 | [float_power](https://pytorch.org/docs/1.8.1/generated/torch.float_power.html#torch.float_power) | 否 | -| 45 | [floor](https://pytorch.org/docs/1.8.1/generated/torch.floor.html#torch.floor) | 否 | -| 46 | [floor_divide](https://pytorch.org/docs/1.8.1/generated/torch.floor_divide.html#torch.floor_divide) | 否 | -| 47 | [fmod](https://pytorch.org/docs/1.8.1/generated/torch.fmod.html#torch.fmod) | 否 | -| 48 | [frac](https://pytorch.org/docs/1.8.1/generated/torch.frac.html#torch.frac) | 否 | -| 49 | [imag](https://pytorch.org/docs/1.8.1/generated/torch.imag.html#torch.imag) | 否 | -| 50 | [ldexp](https://pytorch.org/docs/1.8.1/generated/torch.ldexp.html#torch.ldexp) | 否 | -| 51 | [lerp](https://pytorch.org/docs/1.8.1/generated/torch.lerp.html#torch.lerp) | 否 | -| 52 | [lgamma](https://pytorch.org/docs/1.8.1/generated/torch.lgamma.html#torch.lgamma) | 否 | -| 53 | [log](https://pytorch.org/docs/1.8.1/generated/torch.log.html#torch.log) | 否 | -| 54 | [log10](https://pytorch.org/docs/1.8.1/generated/torch.log10.html#torch.log10) | 否 | -| 55 | [log1p](https://pytorch.org/docs/1.8.1/generated/torch.log1p.html#torch.log1p) | 否 | -| 56 | [log2](https://pytorch.org/docs/1.8.1/generated/torch.log2.html#torch.log2) | 否 | -| 57 | [logaddexp](https://pytorch.org/docs/1.8.1/generated/torch.logaddexp.html#torch.logaddexp) | 否 | -| 58 | [logaddexp2](https://pytorch.org/docs/1.8.1/generated/torch.logaddexp2.html#torch.logaddexp2) | 否 | -| 59 | [logical_and](https://pytorch.org/docs/1.8.1/generated/torch.logical_and.html#torch.logical_and) | 否 | -| 60 | [logical_not](https://pytorch.org/docs/1.8.1/generated/torch.logical_not.html#torch.logical_not) | 否 | -| 61 | [logical_or](https://pytorch.org/docs/1.8.1/generated/torch.logical_or.html#torch.logical_or) | 否 | -| 62 | [logical_xor](https://pytorch.org/docs/1.8.1/generated/torch.logical_xor.html#torch.logical_xor) | 否 | -| 63 | 
[logit](https://pytorch.org/docs/1.8.1/generated/torch.logit.html#torch.logit) | 否 | -| 64 | [hypot](https://pytorch.org/docs/1.8.1/generated/torch.hypot.html#torch.hypot) | 否 | -| 65 | [i0](https://pytorch.org/docs/1.8.1/generated/torch.i0.html#torch.i0) | 否 | -| 66 | [igamma](https://pytorch.org/docs/1.8.1/generated/torch.igamma.html#torch.igamma) | 否 | -| 67 | [igammac](https://pytorch.org/docs/1.8.1/generated/torch.igammac.html#torch.igammac) | 否 | -| 68 | [mul](https://pytorch.org/docs/1.8.1/generated/torch.mul.html#torch.mul) | 否 | -| 69 | [multiply](https://pytorch.org/docs/1.8.1/generated/torch.multiply.html#torch.multiply) | 否 | -| 70 | [mvlgamma](https://pytorch.org/docs/1.8.1/generated/torch.mvlgamma.html#torch.mvlgamma) | 否 | -| 71 | [nan_to_num](https://pytorch.org/docs/1.8.1/generated/torch.nan_to_num.html#torch.nan_to_num) | 否 | -| 72 | [neg](https://pytorch.org/docs/1.8.1/generated/torch.neg.html#torch.neg) | 否 | -| 73 | [negative](https://pytorch.org/docs/1.8.1/generated/torch.negative.html#torch.negative) | 否 | -| 74 | [nextafter](https://pytorch.org/docs/1.8.1/generated/torch.nextafter.html#torch.nextafter) | 否 | -| 75 | [polygamma](https://pytorch.org/docs/1.8.1/generated/torch.polygamma.html#torch.polygamma) | 否 | -| 76 | [pow](https://pytorch.org/docs/1.8.1/generated/torch.pow.html#torch.pow) | 否 | -| 77 | [rad2deg](https://pytorch.org/docs/1.8.1/generated/torch.rad2deg.html#torch.rad2deg) | 否 | -| 78 | [real](https://pytorch.org/docs/1.8.1/generated/torch.real.html#torch.real) | 否 | -| 79 | [reciprocal](https://pytorch.org/docs/1.8.1/generated/torch.reciprocal.html#torch.reciprocal) | 否 | -| 80 | [remainder](https://pytorch.org/docs/1.8.1/generated/torch.remainder.html#torch.remainder) | 否 | -| 81 | [round](https://pytorch.org/docs/1.8.1/generated/torch.round.html#torch.round) | 否 | -| 82 | [rsqrt](https://pytorch.org/docs/1.8.1/generated/torch.rsqrt.html#torch.rsqrt) | 否 | -| 83 | [sigmoid](https://pytorch.org/docs/1.8.1/generated/torch.sigmoid.html#torch.sigmoid) | 否 | -| 84 | [sign](https://pytorch.org/docs/1.8.1/generated/torch.sign.html#torch.sign) | 否 | -| 85 | [sgn](https://pytorch.org/docs/1.8.1/generated/torch.sgn.html#torch.sgn) | 否 | -| 86 | [signbit](https://pytorch.org/docs/1.8.1/generated/torch.signbit.html#torch.signbit) | 否 | -| 87 | [sin](https://pytorch.org/docs/1.8.1/generated/torch.sin.html#torch.sin) | 否 | -| 88 | [sinc](https://pytorch.org/docs/1.8.1/generated/torch.sinc.html#torch.sinc) | 否 | -| 89 | [sinh](https://pytorch.org/docs/1.8.1/generated/torch.sinh.html#torch.sinh) | 否 | -| 90 | [sqrt](https://pytorch.org/docs/1.8.1/generated/torch.sqrt.html#torch.sqrt) | 否 | -| 91 | [square](https://pytorch.org/docs/1.8.1/generated/torch.square.html#torch.square) | 否 | -| 92 | [sub](https://pytorch.org/docs/1.8.1/generated/torch.sub.html#torch.sub) | 否 | -| 93 | [subtract](https://pytorch.org/docs/1.8.1/generated/torch.subtract.html#torch.subtract) | 否 | -| 94 | [tan](https://pytorch.org/docs/1.8.1/generated/torch.tan.html#torch.tan) | 否 | -| 95 | [tanh](https://pytorch.org/docs/1.8.1/generated/torch.tanh.html#torch.tanh) | 否 | -| 96 | [true_divide](https://pytorch.org/docs/1.8.1/generated/torch.true_divide.html#torch.true_divide) | 否 | -| 97 | [trunc](https://pytorch.org/docs/1.8.1/generated/torch.trunc.html#torch.trunc) | 否 | -| 98 | [xlogy](https://pytorch.org/docs/1.8.1/generated/torch.xlogy.html#torch.xlogy) | 否 | - -### Reduction Ops - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 
1 | [argmax](https://pytorch.org/docs/1.8.1/generated/torch.argmax.html#torch.argmax) | 否 | -| 2 | [argmin](https://pytorch.org/docs/1.8.1/generated/torch.argmin.html#torch.argmin) | 否 | -| 3 | [amax](https://pytorch.org/docs/1.8.1/generated/torch.amax.html#torch.amax) | 否 | -| 4 | [amin](https://pytorch.org/docs/1.8.1/generated/torch.amin.html#torch.amin) | 否 | -| 5 | [all](https://pytorch.org/docs/1.8.1/generated/torch.all.html#torch.all) | 否 | -| 6 | [any](https://pytorch.org/docs/1.8.1/generated/torch.any.html#torch.any) | 否 | -| 7 | [max](https://pytorch.org/docs/1.8.1/generated/torch.max.html#torch.max) | 否 | -| 8 | [min](https://pytorch.org/docs/1.8.1/generated/torch.min.html#torch.min) | 否 | -| 9 | [dist](https://pytorch.org/docs/1.8.1/generated/torch.dist.html#torch.dist) | 否 | -| 10 | [logsumexp](https://pytorch.org/docs/1.8.1/generated/torch.logsumexp.html#torch.logsumexp) | 否 | -| 11 | [mean](https://pytorch.org/docs/1.8.1/generated/torch.mean.html#torch.mean) | 否 | -| 12 | [median](https://pytorch.org/docs/1.8.1/generated/torch.median.html#torch.median) | 否 | -| 13 | [nanmedian](https://pytorch.org/docs/1.8.1/generated/torch.nanmedian.html#torch.nanmedian) | 否 | -| 14 | [mode](https://pytorch.org/docs/1.8.1/generated/torch.mode.html#torch.mode) | 否 | -| 15 | [norm](https://pytorch.org/docs/1.8.1/generated/torch.norm.html#torch.norm) | 否 | -| 16 | [nansum](https://pytorch.org/docs/1.8.1/generated/torch.nansum.html#torch.nansum) | 否 | -| 17 | [prod](https://pytorch.org/docs/1.8.1/generated/torch.prod.html#torch.prod) | 否 | -| 18 | [quantile](https://pytorch.org/docs/1.8.1/generated/torch.quantile.html#torch.quantile) | 否 | -| 19 | [nanquantile](https://pytorch.org/docs/1.8.1/generated/torch.nanquantile.html#torch.nanquantile) | 否 | -| 20 | [std](https://pytorch.org/docs/1.8.1/generated/torch.std.html#torch.std) | 否 | -| 21 | [std_mean](https://pytorch.org/docs/1.8.1/generated/torch.std_mean.html#torch.std_mean) | 否 | -| 22 | [sum](https://pytorch.org/docs/1.8.1/generated/torch.sum.html#torch.sum) | 否 | -| 23 | [unique](https://pytorch.org/docs/1.8.1/generated/torch.unique.html#torch.unique) | 否 | -| 24 | [unique_consecutive](https://pytorch.org/docs/1.8.1/generated/torch.unique_consecutive.html#torch.unique_consecutive) | 否 | -| 25 | [var](https://pytorch.org/docs/1.8.1/generated/torch.var.html#torch.var) | 否 | -| 26 | [var_mean](https://pytorch.org/docs/1.8.1/generated/torch.var_mean.html#torch.var_mean) | 否 | -| 27 | [count_nonzero](https://pytorch.org/docs/1.8.1/generated/torch.count_nonzero.html#torch.count_nonzero) | 否 | - -### Comparison Ops - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [allclose](https://pytorch.org/docs/1.8.1/generated/torch.allclose.html#torch.allclose) | 否 | -| 2 | [argsort](https://pytorch.org/docs/1.8.1/generated/torch.argsort.html#torch.argsort) | 否 | -| 3 | [eq](https://pytorch.org/docs/1.8.1/generated/torch.eq.html#torch.eq) | 否 | -| 4 | [equal](https://pytorch.org/docs/1.8.1/generated/torch.equal.html#torch.equal) | 否 | -| 5 | [ge](https://pytorch.org/docs/1.8.1/generated/torch.ge.html#torch.ge) | 否 | -| 6 | [greater_equal](https://pytorch.org/docs/1.8.1/generated/torch.greater_equal.html#torch.greater_equal) | 否 | -| 7 | [gt](https://pytorch.org/docs/1.8.1/generated/torch.gt.html#torch.gt) | 否 | -| 8 | [greater](https://pytorch.org/docs/1.8.1/generated/torch.greater.html#torch.greater) | 否 | -| 9 | 
[isclose](https://pytorch.org/docs/1.8.1/generated/torch.isclose.html#torch.isclose) | 否 | -| 10 | [isfinite](https://pytorch.org/docs/1.8.1/generated/torch.isfinite.html#torch.isfinite) | 否 | -| 11 | [isinf](https://pytorch.org/docs/1.8.1/generated/torch.isinf.html#torch.isinf) | 否 | -| 12 | [isposinf](https://pytorch.org/docs/1.8.1/generated/torch.isposinf.html#torch.isposinf) | 否 | -| 13 | [isneginf](https://pytorch.org/docs/1.8.1/generated/torch.isneginf.html#torch.isneginf) | 否 | -| 14 | [isnan](https://pytorch.org/docs/1.8.1/generated/torch.isnan.html#torch.isnan) | 否 | -| 15 | [isreal](https://pytorch.org/docs/1.8.1/generated/torch.isreal.html#torch.isreal) | 否 | -| 16 | [kthvalue](https://pytorch.org/docs/1.8.1/generated/torch.kthvalue.html#torch.kthvalue) | 否 | -| 17 | [le](https://pytorch.org/docs/1.8.1/generated/torch.le.html#torch.le) | 否 | -| 18 | [less_equal](https://pytorch.org/docs/1.8.1/generated/torch.less_equal.html#torch.less_equal) | 否 | -| 19 | [lt](https://pytorch.org/docs/1.8.1/generated/torch.lt.html#torch.lt) | 否 | -| 20 | [less](https://pytorch.org/docs/1.8.1/generated/torch.less.html#torch.less) | 否 | -| 21 | [maximum](https://pytorch.org/docs/1.8.1/generated/torch.maximum.html#torch.maximum) | 否 | -| 22 | [minimum](https://pytorch.org/docs/1.8.1/generated/torch.minimum.html#torch.minimum) | 否 | -| 23 | [fmax](https://pytorch.org/docs/1.8.1/generated/torch.fmax.html#torch.fmax) | 否 | -| 24 | [fmin](https://pytorch.org/docs/1.8.1/generated/torch.fmin.html#torch.fmin) | 否 | -| 25 | [ne](https://pytorch.org/docs/1.8.1/generated/torch.ne.html#torch.ne) | 否 | -| 26 | [not_equal](https://pytorch.org/docs/1.8.1/generated/torch.not_equal.html#torch.not_equal) | 否 | -| 27 | [sort](https://pytorch.org/docs/1.8.1/generated/torch.sort.html#torch.sort) | 否 | -| 28 | [topk](https://pytorch.org/docs/1.8.1/generated/torch.topk.html#torch.topk) | 否 | -| 29 | [msort](https://pytorch.org/docs/1.8.1/generated/torch.msort.html#torch.msort) | 否 | - -### Spectral Ops - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [stft](https://pytorch.org/docs/1.8.1/generated/torch.stft.html#torch.stft) | 否 | -| 2 | [istft](https://pytorch.org/docs/1.8.1/generated/torch.istft.html#torch.istft) | 否 | -| 3 | [bartlett_window](https://pytorch.org/docs/1.8.1/generated/torch.bartlett_window.html#torch.bartlett_window) | 否 | -| 4 | [blackman_window](https://pytorch.org/docs/1.8.1/generated/torch.blackman_window.html#torch.blackman_window) | 否 | -| 5 | [hamming_window](https://pytorch.org/docs/1.8.1/generated/torch.hamming_window.html#torch.hamming_window) | 否 | -| 6 | [hann_window](https://pytorch.org/docs/1.8.1/generated/torch.hann_window.html#torch.hann_window) | 否 | -| 7 | [kaiser_window](https://pytorch.org/docs/1.8.1/generated/torch.kaiser_window.html#torch.kaiser_window) | 否 | - -### Other Operations - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [atleast_1d](https://pytorch.org/docs/1.8.1/generated/torch.atleast_1d.html#torch.atleast_1d) | 否 | -| 2 | [atleast_2d](https://pytorch.org/docs/1.8.1/generated/torch.atleast_2d.html#torch.atleast_2d) | 否 | -| 3 | [atleast_3d](https://pytorch.org/docs/1.8.1/generated/torch.atleast_3d.html#torch.atleast_3d) | 否 | -| 4 | [bincount](https://pytorch.org/docs/1.8.1/generated/torch.bincount.html#torch.bincount) | 否 | -| 5 | [block_diag](https://pytorch.org/docs/1.8.1/generated/torch.block_diag.html#torch.block_diag) | 否 | -| 6 | 
[broadcast_tensors](https://pytorch.org/docs/1.8.1/generated/torch.broadcast_tensors.html#torch.broadcast_tensors) | 否 | -| 7 | [broadcast_to](https://pytorch.org/docs/1.8.1/generated/torch.broadcast_to.html#torch.broadcast_to) | 否 | -| 8 | [broadcast_shapes](https://pytorch.org/docs/1.8.1/generated/torch.broadcast_shapes.html#torch.broadcast_shapes) | 否 | -| 9 | [bucketize](https://pytorch.org/docs/1.8.1/generated/torch.bucketize.html#torch.bucketize) | 否 | -| 10 | [cartesian_prod](https://pytorch.org/docs/1.8.1/generated/torch.cartesian_prod.html#torch.cartesian_prod) | 否 | -| 11 | [cdist](https://pytorch.org/docs/1.8.1/generated/torch.cdist.html#torch.cdist) | 否 | -| 12 | [clone](https://pytorch.org/docs/1.8.1/generated/torch.clone.html#torch.clone) | 否 | -| 13 | [combinations](https://pytorch.org/docs/1.8.1/generated/torch.combinations.html#torch.combinations) | 否 | -| 14 | [cross](https://pytorch.org/docs/1.8.1/generated/torch.cross.html#torch.cross) | 否 | -| 15 | [cummax](https://pytorch.org/docs/1.8.1/generated/torch.cummax.html#torch.cummax) | 否 | -| 16 | [cummin](https://pytorch.org/docs/1.8.1/generated/torch.cummin.html#torch.cummin) | 否 | -| 17 | [cumprod](https://pytorch.org/docs/1.8.1/generated/torch.cumprod.html#torch.cumprod) | 否 | -| 18 | [cumsum](https://pytorch.org/docs/1.8.1/generated/torch.cumsum.html#torch.cumsum) | 否 | -| 19 | [diag](https://pytorch.org/docs/1.8.1/generated/torch.diag.html#torch.diag) | 否 | -| 20 | [diag_embed](https://pytorch.org/docs/1.8.1/generated/torch.diag_embed.html#torch.diag_embed) | 否 | -| 21 | [diagflat](https://pytorch.org/docs/1.8.1/generated/torch.diagflat.html#torch.diagflat) | 否 | -| 22 | [diagonal](https://pytorch.org/docs/1.8.1/generated/torch.diagonal.html#torch.diagonal) | 否 | -| 23 | [diff](https://pytorch.org/docs/1.8.1/generated/torch.diff.html#torch.diff) | 否 | -| 24 | [einsum](https://pytorch.org/docs/1.8.1/generated/torch.einsum.html#torch.einsum) | 否 | -| 25 | [flatten](https://pytorch.org/docs/1.8.1/generated/torch.flatten.html#torch.flatten) | 否 | -| 26 | [flip](https://pytorch.org/docs/1.8.1/generated/torch.flip.html#torch.flip) | 否 | -| 27 | [fliplr](https://pytorch.org/docs/1.8.1/generated/torch.fliplr.html#torch.fliplr) | 否 | -| 28 | [flipud](https://pytorch.org/docs/1.8.1/generated/torch.flipud.html#torch.flipud) | 否 | -| 29 | [kron](https://pytorch.org/docs/1.8.1/generated/torch.kron.html#torch.kron) | 否 | -| 30 | [rot90](https://pytorch.org/docs/1.8.1/generated/torch.rot90.html#torch.rot90) | 否 | -| 31 | [gcd](https://pytorch.org/docs/1.8.1/generated/torch.gcd.html#torch.gcd) | 否 | -| 32 | [histc](https://pytorch.org/docs/1.8.1/generated/torch.histc.html#torch.histc) | 否 | -| 33 | [meshgrid](https://pytorch.org/docs/1.8.1/generated/torch.meshgrid.html#torch.meshgrid) | 否 | -| 34 | [lcm](https://pytorch.org/docs/1.8.1/generated/torch.lcm.html#torch.lcm) | 否 | -| 35 | [logcumsumexp](https://pytorch.org/docs/1.8.1/generated/torch.logcumsumexp.html#torch.logcumsumexp) | 否 | -| 36 | [ravel](https://pytorch.org/docs/1.8.1/generated/torch.ravel.html#torch.ravel) | 否 | -| 37 | [renorm](https://pytorch.org/docs/1.8.1/generated/torch.renorm.html#torch.renorm) | 否 | -| 38 | [repeat_interleave](https://pytorch.org/docs/1.8.1/generated/torch.repeat_interleave.html#torch.repeat_interleave) | 否 | -| 39 | [roll](https://pytorch.org/docs/1.8.1/generated/torch.roll.html#torch.roll) | 否 | -| 40 | [searchsorted](https://pytorch.org/docs/1.8.1/generated/torch.searchsorted.html#torch.searchsorted) | 否 | -| 41 | 
[tensordot](https://pytorch.org/docs/1.8.1/generated/torch.tensordot.html#torch.tensordot) | 否 | -| 42 | [trace](https://pytorch.org/docs/1.8.1/generated/torch.trace.html#torch.trace) | 否 | -| 43 | [tril](https://pytorch.org/docs/1.8.1/generated/torch.tril.html#torch.tril) | 否 | -| 44 | [tril_indices](https://pytorch.org/docs/1.8.1/generated/torch.tril_indices.html#torch.tril_indices) | 否 | -| 45 | [triu](https://pytorch.org/docs/1.8.1/generated/torch.triu.html#torch.triu) | 否 | -| 46 | [triu_indices](https://pytorch.org/docs/1.8.1/generated/torch.triu_indices.html#torch.triu_indices) | 否 | -| 47 | [vander](https://pytorch.org/docs/1.8.1/generated/torch.vander.html#torch.vander) | 否 | -| 48 | [view_as_real](https://pytorch.org/docs/1.8.1/generated/torch.view_as_real.html#torch.view_as_real) | 否 | -| 49 | [view_as_complex](https://pytorch.org/docs/1.8.1/generated/torch.view_as_complex.html#torch.view_as_complex) | 否 | - -### BLAS and LAPACK Operations - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [addbmm](https://pytorch.org/docs/1.8.1/generated/torch.addbmm.html#torch.addbmm) | 否 | -| 2 | [addmm](https://pytorch.org/docs/1.8.1/generated/torch.addmm.html#torch.addmm) | 否 | -| 3 | [addmv](https://pytorch.org/docs/1.8.1/generated/torch.addmv.html#torch.addmv) | 否 | -| 4 | [addr](https://pytorch.org/docs/1.8.1/generated/torch.addr.html#torch.addr) | 否 | -| 5 | [baddbmm](https://pytorch.org/docs/1.8.1/generated/torch.baddbmm.html#torch.baddbmm) | 否 | -| 6 | [bmm](https://pytorch.org/docs/1.8.1/generated/torch.bmm.html#torch.bmm) | 否 | -| 7 | [chain_matmul](https://pytorch.org/docs/1.8.1/generated/torch.chain_matmul.html#torch.chain_matmul) | 否 | -| 8 | [cholesky](https://pytorch.org/docs/1.8.1/generated/torch.cholesky.html#torch.cholesky) | 否 | -| 9 | [cholesky_inverse](https://pytorch.org/docs/1.8.1/generated/torch.cholesky_inverse.html#torch.cholesky_inverse) | 否 | -| 10 | [cholesky_solve](https://pytorch.org/docs/1.8.1/generated/torch.cholesky_solve.html#torch.cholesky_solve) | 否 | -| 11 | [dot](https://pytorch.org/docs/1.8.1/generated/torch.dot.html#torch.dot) | 否 | -| 12 | [eig](https://pytorch.org/docs/1.8.1/generated/torch.eig.html#torch.eig) | 否 | -| 13 | [geqrf](https://pytorch.org/docs/1.8.1/generated/torch.geqrf.html#torch.geqrf) | 否 | -| 14 | [ger](https://pytorch.org/docs/1.8.1/generated/torch.ger.html#torch.ger) | 否 | -| 15 | [inner](https://pytorch.org/docs/1.8.1/generated/torch.inner.html#torch.inner) | 否 | -| 16 | [inverse](https://pytorch.org/docs/1.8.1/generated/torch.inverse.html#torch.inverse) | 否 | -| 17 | [det](https://pytorch.org/docs/1.8.1/generated/torch.det.html#torch.det) | 否 | -| 18 | [logdet](https://pytorch.org/docs/1.8.1/generated/torch.logdet.html#torch.logdet) | 否 | -| 19 | [slogdet](https://pytorch.org/docs/1.8.1/generated/torch.slogdet.html#torch.slogdet) | 否 | -| 20 | [lstsq](https://pytorch.org/docs/1.8.1/generated/torch.lstsq.html#torch.lstsq) | 否 | -| 21 | [lu](https://pytorch.org/docs/1.8.1/generated/torch.lu.html#torch.lu) | 否 | -| 22 | [lu_solve](https://pytorch.org/docs/1.8.1/generated/torch.lu_solve.html#torch.lu_solve) | 否 | -| 23 | [lu_unpack](https://pytorch.org/docs/1.8.1/generated/torch.lu_unpack.html#torch.lu_unpack) | 否 | -| 24 | [matmul](https://pytorch.org/docs/1.8.1/generated/torch.matmul.html#torch.matmul) | 否 | -| 25 | [matrix_power](https://pytorch.org/docs/1.8.1/generated/torch.matrix_power.html#torch.matrix_power) | 否 | -| 26 | 
[matrix_rank](https://pytorch.org/docs/1.8.1/generated/torch.matrix_rank.html#torch.matrix_rank) | 否 | -| 27 | [matrix_exp](https://pytorch.org/docs/1.8.1/generated/torch.matrix_exp.html#torch.matrix_exp) | 否 | -| 28 | [mm](https://pytorch.org/docs/1.8.1/generated/torch.mm.html#torch.mm) | 否 | -| 29 | [mv](https://pytorch.org/docs/1.8.1/generated/torch.mv.html#torch.mv) | 否 | -| 30 | [orgqr](https://pytorch.org/docs/1.8.1/generated/torch.orgqr.html#torch.orgqr) | 否 | -| 31 | [ormqr](https://pytorch.org/docs/1.8.1/generated/torch.ormqr.html#torch.ormqr) | 否 | -| 32 | [outer](https://pytorch.org/docs/1.8.1/generated/torch.outer.html#torch.outer) | 否 | -| 33 | [pinverse](https://pytorch.org/docs/1.8.1/generated/torch.pinverse.html#torch.pinverse) | 否 | -| 34 | [qr](https://pytorch.org/docs/1.8.1/generated/torch.qr.html#torch.qr) | 否 | -| 35 | [solve](https://pytorch.org/docs/1.8.1/generated/torch.solve.html#torch.solve) | 否 | -| 36 | [svd](https://pytorch.org/docs/1.8.1/generated/torch.svd.html#torch.svd) | 否 | -| 37 | [svd_lowrank](https://pytorch.org/docs/1.8.1/generated/torch.svd_lowrank.html#torch.svd_lowrank) | 否 | -| 38 | [pca_lowrank](https://pytorch.org/docs/1.8.1/generated/torch.pca_lowrank.html#torch.pca_lowrank) | 否 | -| 39 | [symeig](https://pytorch.org/docs/1.8.1/generated/torch.symeig.html#torch.symeig) | 否 | -| 40 | [lobpcg](https://pytorch.org/docs/1.8.1/generated/torch.lobpcg.html#torch.lobpcg) | 否 | -| 41 | [trapz](https://pytorch.org/docs/1.8.1/generated/torch.trapz.html#torch.trapz) | 否 | -| 42 | [triangular_solve](https://pytorch.org/docs/1.8.1/generated/torch.triangular_solve.html#torch.triangular_solve) | 否 | -| 43 | [vdot](https://pytorch.org/docs/1.8.1/generated/torch.vdot.html#torch.vdot) | 否 | - -## Utilities - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [compiled_with_cxx11_abi](https://pytorch.org/docs/1.8.1/generated/torch.compiled_with_cxx11_abi.html#torch.compiled_with_cxx11_abi) | 否 | -| 2 | [result_type](https://pytorch.org/docs/1.8.1/generated/torch.result_type.html#torch.result_type) | 否 | -| 3 | [can_cast](https://pytorch.org/docs/1.8.1/generated/torch.can_cast.html#torch.can_cast) | 否 | -| 4 | [promote_types](https://pytorch.org/docs/1.8.1/generated/torch.promote_types.html#torch.promote_types) | 否 | -| 5 | [use_deterministic_algorithms](https://pytorch.org/docs/1.8.1/generated/torch.use_deterministic_algorithms.html#torch.use_deterministic_algorithms) | 否 | -| 6 | [are_deterministic_algorithms_enabled](https://pytorch.org/docs/1.8.1/generated/torch.are_deterministic_algorithms_enabled.html#torch.are_deterministic_algorithms_enabled) | 否 | -| 7 | [_assert](https://pytorch.org/docs/1.8.1/generated/torch._assert.html#torch._assert) | 否 | - -# Layers (torch.nn) - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [Parameter](https://pytorch.org/docs/1.8.1/generated/torch.nn.parameter.Parameter.html#torch.nn.parameter.Parameter) | 否 | -| 2 | [UninitializedParameter](https://pytorch.org/docs/1.8.1/generated/torch.nn.parameter.UninitializedParameter.html#torch.nn.parameter.UninitializedParameter) | 否 | - -## [Containers](https://pytorch.org/docs/1.8.1/nn.html#id1) - - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [Module](https://pytorch.org/docs/1.8.1/generated/torch.nn.Module.html#torch.nn.Module) | 否 | -| 2 | 
[Sequential](https://pytorch.org/docs/1.8.1/generated/torch.nn.Sequential.html#torch.nn.Sequential) | 否 | -| 3 | [ModuleList](https://pytorch.org/docs/1.8.1/generated/torch.nn.ModuleList.html#torch.nn.ModuleList) | 否 | -| 4 | [ModuleDict](https://pytorch.org/docs/1.8.1/generated/torch.nn.ModuleDict.html#torch.nn.ModuleDict) | 否 | -| 5 | [ParameterList](https://pytorch.org/docs/1.8.1/generated/torch.nn.ParameterList.html#torch.nn.ParameterList) | 否 | -| 6 | [ParameterDict](https://pytorch.org/docs/1.8.1/generated/torch.nn.ParameterDict.html#torch.nn.ParameterDict) | 否 | - -### Global Hooks For Module - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [register_module_forward_pre_hook](https://pytorch.org/docs/1.8.1/generated/torch.nn.modules.module.register_module_forward_pre_hook.html#torch.nn.modules.module.register_module_forward_pre_hook) | 否 | -| 2 | [register_module_forward_hook](https://pytorch.org/docs/1.8.1/generated/torch.nn.modules.module.register_module_forward_hook.html#torch.nn.modules.module.register_module_forward_hook) | 否 | -| 3 | [register_module_backward_hook](https://pytorch.org/docs/1.8.1/generated/torch.nn.modules.module.register_module_backward_hook.html#torch.nn.modules.module.register_module_backward_hook) | 否 | - -## [Convolution Layers](https://pytorch.org/docs/1.8.1/nn.html#id1) - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [nn.Conv1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.Conv1d.html#torch.nn.Conv1d) | 否 | -| 2 | [nn.Conv2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.Conv2d.html#torch.nn.Conv2d) | 否 | -| 3 | [nn.Conv3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.Conv3d.html#torch.nn.Conv3d) | 否 | -| 4 | [nn.ConvTranspose1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ConvTranspose1d.html#torch.nn.ConvTranspose1d) | 否 | -| 5 | [nn.ConvTranspose2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ConvTranspose2d.html#torch.nn.ConvTranspose2d) | 否 | -| 6 | [nn.ConvTranspose3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ConvTranspose3d.html#torch.nn.ConvTranspose3d) | 否 | -| 7 | [nn.LazyConv1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.LazyConv1d.html#torch.nn.LazyConv1d) | 否 | -| 8 | [nn.LazyConv2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.LazyConv2d.html#torch.nn.LazyConv2d) | 否 | -| 9 | [nn.LazyConv3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.LazyConv3d.html#torch.nn.LazyConv3d) | 否 | -| 10 | [nn.LazyConvTranspose1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.LazyConvTranspose1d.html#torch.nn.LazyConvTranspose1d) | 否 | -| 11 | [nn.LazyConvTranspose2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.LazyConvTranspose2d.html#torch.nn.LazyConvTranspose2d) | 否 | -| 12 | [nn.LazyConvTranspose3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.LazyConvTranspose3d.html#torch.nn.LazyConvTranspose3d) | 否 | -| 13 | [nn.Unfold](https://pytorch.org/docs/1.8.1/generated/torch.nn.Unfold.html#torch.nn.Unfold) | 否 | -| 14 | [nn.Fold](https://pytorch.org/docs/1.8.1/generated/torch.nn.Fold.html#torch.nn.Fold) | 否 | - -## [Pooling layers](https://pytorch.org/docs/1.8.1/nn.html#id1) - -| 序号 | API名称 | 支持情况 | -| ---- | ------------------------------------------------------------ | -------- | -| 1 | [nn.MaxPool1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.MaxPool1d.html#torch.nn.MaxPool1d) | 否 | -| 2 | 
[nn.MaxPool2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.MaxPool2d.html#torch.nn.MaxPool2d) | No |
| 3 | [nn.MaxPool3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.MaxPool3d.html#torch.nn.MaxPool3d) | No |
| 4 | [nn.MaxUnpool1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.MaxUnpool1d.html#torch.nn.MaxUnpool1d) | No |
| 5 | [nn.MaxUnpool2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.MaxUnpool2d.html#torch.nn.MaxUnpool2d) | No |
| 6 | [nn.MaxUnpool3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.MaxUnpool3d.html#torch.nn.MaxUnpool3d) | No |
| 7 | [nn.AvgPool1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AvgPool1d.html#torch.nn.AvgPool1d) | No |
| 8 | [nn.AvgPool2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AvgPool2d.html#torch.nn.AvgPool2d) | No |
| 9 | [nn.AvgPool3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AvgPool3d.html#torch.nn.AvgPool3d) | No |
| 10 | [nn.FractionalMaxPool2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.FractionalMaxPool2d.html#torch.nn.FractionalMaxPool2d) | No |
| 11 | [nn.LPPool1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.LPPool1d.html#torch.nn.LPPool1d) | No |
| 12 | [nn.LPPool2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.LPPool2d.html#torch.nn.LPPool2d) | No |
| 13 | [nn.AdaptiveMaxPool1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AdaptiveMaxPool1d.html#torch.nn.AdaptiveMaxPool1d) | No |
| 14 | [nn.AdaptiveMaxPool2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AdaptiveMaxPool2d.html#torch.nn.AdaptiveMaxPool2d) | No |
| 15 | [nn.AdaptiveMaxPool3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AdaptiveMaxPool3d.html#torch.nn.AdaptiveMaxPool3d) | No |
| 16 | [nn.AdaptiveAvgPool1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AdaptiveAvgPool1d.html#torch.nn.AdaptiveAvgPool1d) | No |
| 17 | [nn.AdaptiveAvgPool2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AdaptiveAvgPool2d.html#torch.nn.AdaptiveAvgPool2d) | No |
| 18 | [nn.AdaptiveAvgPool3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.AdaptiveAvgPool3d.html#torch.nn.AdaptiveAvgPool3d) | No |

## [Padding Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.ReflectionPad1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ReflectionPad1d.html#torch.nn.ReflectionPad1d) | No |
| 2 | [nn.ReflectionPad2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ReflectionPad2d.html#torch.nn.ReflectionPad2d) | No |
| 3 | [nn.ReplicationPad1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ReplicationPad1d.html#torch.nn.ReplicationPad1d) | No |
| 4 | [nn.ReplicationPad2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ReplicationPad2d.html#torch.nn.ReplicationPad2d) | No |
| 5 | [nn.ReplicationPad3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ReplicationPad3d.html#torch.nn.ReplicationPad3d) | No |
| 6 | [nn.ZeroPad2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ZeroPad2d.html#torch.nn.ZeroPad2d) | No |
| 7 | [nn.ConstantPad1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ConstantPad1d.html#torch.nn.ConstantPad1d) | No |
| 8 | [nn.ConstantPad2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ConstantPad2d.html#torch.nn.ConstantPad2d) | No |
| 9 | [nn.ConstantPad3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.ConstantPad3d.html#torch.nn.ConstantPad3d) | No |

## [Non-linear Activations (weighted sum, nonlinearity)](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.ELU](https://pytorch.org/docs/1.8.1/generated/torch.nn.ELU.html#torch.nn.ELU) | No |
| 2 | [nn.Hardshrink](https://pytorch.org/docs/1.8.1/generated/torch.nn.Hardshrink.html#torch.nn.Hardshrink) | No |
| 3 | [nn.Hardsigmoid](https://pytorch.org/docs/1.8.1/generated/torch.nn.Hardsigmoid.html#torch.nn.Hardsigmoid) | No |
| 4 | [nn.Hardtanh](https://pytorch.org/docs/1.8.1/generated/torch.nn.Hardtanh.html#torch.nn.Hardtanh) | No |
| 5 | [nn.Hardswish](https://pytorch.org/docs/1.8.1/generated/torch.nn.Hardswish.html#torch.nn.Hardswish) | No |
| 6 | [nn.LeakyReLU](https://pytorch.org/docs/1.8.1/generated/torch.nn.LeakyReLU.html#torch.nn.LeakyReLU) | No |
| 7 | [nn.LogSigmoid](https://pytorch.org/docs/1.8.1/generated/torch.nn.LogSigmoid.html#torch.nn.LogSigmoid) | No |
| 8 | [nn.MultiheadAttention](https://pytorch.org/docs/1.8.1/generated/torch.nn.MultiheadAttention.html#torch.nn.MultiheadAttention) | No |
| 9 | [nn.PReLU](https://pytorch.org/docs/1.8.1/generated/torch.nn.PReLU.html#torch.nn.PReLU) | No |
| 10 | [nn.ReLU](https://pytorch.org/docs/1.8.1/generated/torch.nn.ReLU.html#torch.nn.ReLU) | No |
| 11 | [nn.ReLU6](https://pytorch.org/docs/1.8.1/generated/torch.nn.ReLU6.html#torch.nn.ReLU6) | No |
| 12 | [nn.RReLU](https://pytorch.org/docs/1.8.1/generated/torch.nn.RReLU.html#torch.nn.RReLU) | No |
| 13 | [nn.SELU](https://pytorch.org/docs/1.8.1/generated/torch.nn.SELU.html#torch.nn.SELU) | No |
| 14 | [nn.CELU](https://pytorch.org/docs/1.8.1/generated/torch.nn.CELU.html#torch.nn.CELU) | No |
| 15 | [nn.GELU](https://pytorch.org/docs/1.8.1/generated/torch.nn.GELU.html#torch.nn.GELU) | No |
| 16 | [nn.Sigmoid](https://pytorch.org/docs/1.8.1/generated/torch.nn.Sigmoid.html#torch.nn.Sigmoid) | No |
| 17 | [nn.SiLU](https://pytorch.org/docs/1.8.1/generated/torch.nn.SiLU.html#torch.nn.SiLU) | No |
| 18 | [nn.Softplus](https://pytorch.org/docs/1.8.1/generated/torch.nn.Softplus.html#torch.nn.Softplus) | No |
| 19 | [nn.Softshrink](https://pytorch.org/docs/1.8.1/generated/torch.nn.Softshrink.html#torch.nn.Softshrink) | No |
| 20 | [nn.Softsign](https://pytorch.org/docs/1.8.1/generated/torch.nn.Softsign.html#torch.nn.Softsign) | No |
| 21 | [nn.Tanh](https://pytorch.org/docs/1.8.1/generated/torch.nn.Tanh.html#torch.nn.Tanh) | No |
| 22 | [nn.Tanhshrink](https://pytorch.org/docs/1.8.1/generated/torch.nn.Tanhshrink.html#torch.nn.Tanhshrink) | No |
| 23 | [nn.Threshold](https://pytorch.org/docs/1.8.1/generated/torch.nn.Threshold.html#torch.nn.Threshold) | No |

## [Non-linear Activations (other)](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.Softmin](https://pytorch.org/docs/1.8.1/generated/torch.nn.Softmin.html#torch.nn.Softmin) | No |
| 2 | [nn.Softmax](https://pytorch.org/docs/1.8.1/generated/torch.nn.Softmax.html#torch.nn.Softmax) | No |
| 3 | [nn.Softmax2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.Softmax2d.html#torch.nn.Softmax2d) | No |
| 4 | [nn.LogSoftmax](https://pytorch.org/docs/1.8.1/generated/torch.nn.LogSoftmax.html#torch.nn.LogSoftmax) | No |
| 5 | [nn.AdaptiveLogSoftmaxWithLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html#torch.nn.AdaptiveLogSoftmaxWithLoss) | No |

## [Normalization Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.BatchNorm1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.BatchNorm1d.html#torch.nn.BatchNorm1d) | No |
| 2 | [nn.BatchNorm2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d) | No |
| 3 | [nn.BatchNorm3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.BatchNorm3d.html#torch.nn.BatchNorm3d) | No |
| 4 | [nn.GroupNorm](https://pytorch.org/docs/1.8.1/generated/torch.nn.GroupNorm.html#torch.nn.GroupNorm) | No |
| 5 | [nn.SyncBatchNorm](https://pytorch.org/docs/1.8.1/generated/torch.nn.SyncBatchNorm.html#torch.nn.SyncBatchNorm) | No |
| 6 | [nn.InstanceNorm1d](https://pytorch.org/docs/1.8.1/generated/torch.nn.InstanceNorm1d.html#torch.nn.InstanceNorm1d) | No |
| 7 | [nn.InstanceNorm2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.InstanceNorm2d.html#torch.nn.InstanceNorm2d) | No |
| 8 | [nn.InstanceNorm3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.InstanceNorm3d.html#torch.nn.InstanceNorm3d) | No |
| 9 | [nn.LayerNorm](https://pytorch.org/docs/1.8.1/generated/torch.nn.LayerNorm.html#torch.nn.LayerNorm) | No |
| 10 | [nn.LocalResponseNorm](https://pytorch.org/docs/1.8.1/generated/torch.nn.LocalResponseNorm.html#torch.nn.LocalResponseNorm) | No |

## [Recurrent Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.RNNBase](https://pytorch.org/docs/1.8.1/generated/torch.nn.RNNBase.html#torch.nn.RNNBase) | No |
| 2 | [nn.RNN](https://pytorch.org/docs/1.8.1/generated/torch.nn.RNN.html#torch.nn.RNN) | No |
| 3 | [nn.LSTM](https://pytorch.org/docs/1.8.1/generated/torch.nn.LSTM.html#torch.nn.LSTM) | No |
| 4 | [nn.GRU](https://pytorch.org/docs/1.8.1/generated/torch.nn.GRU.html#torch.nn.GRU) | No |
| 5 | [nn.RNNCell](https://pytorch.org/docs/1.8.1/generated/torch.nn.RNNCell.html#torch.nn.RNNCell) | No |
| 6 | [nn.LSTMCell](https://pytorch.org/docs/1.8.1/generated/torch.nn.LSTMCell.html#torch.nn.LSTMCell) | No |
| 7 | [nn.GRUCell](https://pytorch.org/docs/1.8.1/generated/torch.nn.GRUCell.html#torch.nn.GRUCell) | No |

## [Transformer Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.Transformer](https://pytorch.org/docs/1.8.1/generated/torch.nn.Transformer.html#torch.nn.Transformer) | No |
| 2 | [nn.TransformerEncoder](https://pytorch.org/docs/1.8.1/generated/torch.nn.TransformerEncoder.html#torch.nn.TransformerEncoder) | No |
| 3 | [nn.TransformerDecoder](https://pytorch.org/docs/1.8.1/generated/torch.nn.TransformerDecoder.html#torch.nn.TransformerDecoder) | No |
| 4 | [nn.TransformerEncoderLayer](https://pytorch.org/docs/1.8.1/generated/torch.nn.TransformerEncoderLayer.html#torch.nn.TransformerEncoderLayer) | No |
| 5 | [nn.TransformerDecoderLayer](https://pytorch.org/docs/1.8.1/generated/torch.nn.TransformerDecoderLayer.html#torch.nn.TransformerDecoderLayer) | No |

## [Linear Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.Identity](https://pytorch.org/docs/1.8.1/generated/torch.nn.Identity.html#torch.nn.Identity) | No |
| 2 | [nn.Linear](https://pytorch.org/docs/1.8.1/generated/torch.nn.Linear.html#torch.nn.Linear) | No |
| 3 | [nn.Bilinear](https://pytorch.org/docs/1.8.1/generated/torch.nn.Bilinear.html#torch.nn.Bilinear) | No |
| 4 | [nn.LazyLinear](https://pytorch.org/docs/1.8.1/generated/torch.nn.LazyLinear.html#torch.nn.LazyLinear) | No |

## [Dropout Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.Dropout](https://pytorch.org/docs/1.8.1/generated/torch.nn.Dropout.html#torch.nn.Dropout) | No |
| 2 | [nn.Dropout2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.Dropout2d.html#torch.nn.Dropout2d) | No |
| 3 | [nn.Dropout3d](https://pytorch.org/docs/1.8.1/generated/torch.nn.Dropout3d.html#torch.nn.Dropout3d) | No |
| 4 | [nn.AlphaDropout](https://pytorch.org/docs/1.8.1/generated/torch.nn.AlphaDropout.html#torch.nn.AlphaDropout) | No |

## [Sparse Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.Embedding](https://pytorch.org/docs/1.8.1/generated/torch.nn.Embedding.html#torch.nn.Embedding) | No |
| 2 | [nn.EmbeddingBag](https://pytorch.org/docs/1.8.1/generated/torch.nn.EmbeddingBag.html#torch.nn.EmbeddingBag) | No |

## [Distance Functions](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.CosineSimilarity](https://pytorch.org/docs/1.8.1/generated/torch.nn.CosineSimilarity.html#torch.nn.CosineSimilarity) | No |
| 2 | [nn.PairwiseDistance](https://pytorch.org/docs/1.8.1/generated/torch.nn.PairwiseDistance.html#torch.nn.PairwiseDistance) | No |

## [Loss Functions](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.L1Loss](https://pytorch.org/docs/1.8.1/generated/torch.nn.L1Loss.html#torch.nn.L1Loss) | No |
| 2 | [nn.MSELoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.MSELoss.html#torch.nn.MSELoss) | No |
| 3 | [nn.CrossEntropyLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss) | No |
| 4 | [nn.CTCLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.CTCLoss.html#torch.nn.CTCLoss) | No |
| 5 | [nn.NLLLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.NLLLoss.html#torch.nn.NLLLoss) | No |
| 6 | [nn.PoissonNLLLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.PoissonNLLLoss.html#torch.nn.PoissonNLLLoss) | No |
| 7 | [nn.GaussianNLLLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.GaussianNLLLoss.html#torch.nn.GaussianNLLLoss) | No |
| 8 | [nn.KLDivLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.KLDivLoss.html#torch.nn.KLDivLoss) | No |
| 9 | [nn.BCELoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.BCELoss.html#torch.nn.BCELoss) | No |
| 10 | [nn.BCEWithLogitsLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.BCEWithLogitsLoss.html#torch.nn.BCEWithLogitsLoss) | No |
| 11 | [nn.MarginRankingLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.MarginRankingLoss.html#torch.nn.MarginRankingLoss) | No |
| 12 | [nn.HingeEmbeddingLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.HingeEmbeddingLoss.html#torch.nn.HingeEmbeddingLoss) | No |
| 13 | [nn.MultiLabelMarginLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.MultiLabelMarginLoss.html#torch.nn.MultiLabelMarginLoss) | No |
| 14 | [nn.SmoothL1Loss](https://pytorch.org/docs/1.8.1/generated/torch.nn.SmoothL1Loss.html#torch.nn.SmoothL1Loss) | No |
| 15 | [nn.SoftMarginLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.SoftMarginLoss.html#torch.nn.SoftMarginLoss) | No |
| 16 | [nn.MultiLabelSoftMarginLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.MultiLabelSoftMarginLoss.html#torch.nn.MultiLabelSoftMarginLoss) | No |
| 17 | [nn.CosineEmbeddingLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.CosineEmbeddingLoss.html#torch.nn.CosineEmbeddingLoss) | No |
| 18 | [nn.MultiMarginLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.MultiMarginLoss.html#torch.nn.MultiMarginLoss) | No |
| 19 | [nn.TripletMarginLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.TripletMarginLoss.html#torch.nn.TripletMarginLoss) | No |
| 20 | [nn.TripletMarginWithDistanceLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.TripletMarginWithDistanceLoss.html#torch.nn.TripletMarginWithDistanceLoss) | No |

## [Vision Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.PixelShuffle](https://pytorch.org/docs/1.8.1/generated/torch.nn.PixelShuffle.html#torch.nn.PixelShuffle) | No |
| 2 | [nn.PixelUnshuffle](https://pytorch.org/docs/1.8.1/generated/torch.nn.PixelUnshuffle.html#torch.nn.PixelUnshuffle) | No |
| 3 | [nn.Upsample](https://pytorch.org/docs/1.8.1/generated/torch.nn.Upsample.html#torch.nn.Upsample) | No |
| 4 | [nn.UpsamplingNearest2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.UpsamplingNearest2d.html#torch.nn.UpsamplingNearest2d) | No |
| 5 | [nn.UpsamplingBilinear2d](https://pytorch.org/docs/1.8.1/generated/torch.nn.UpsamplingBilinear2d.html#torch.nn.UpsamplingBilinear2d) | No |

## [Shuffle Layers](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.ChannelShuffle](https://pytorch.org/docs/1.8.1/generated/torch.nn.ChannelShuffle.html#torch.nn.ChannelShuffle) | No |

## [DataParallel Layers (multi-GPU, distributed)](https://pytorch.org/docs/1.8.1/nn.html#id1)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.DataParallel](https://pytorch.org/docs/1.8.1/generated/torch.nn.DataParallel.html#torch.nn.DataParallel) | No |
| 2 | [nn.parallel.DistributedDataParallel](https://pytorch.org/docs/1.8.1/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel) | No |

## [Utilities](https://pytorch.org/docs/1.8.1/nn.html#id1)

From the `torch.nn.utils` module:

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [clip_grad_norm_](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.clip_grad_norm_.html#torch.nn.utils.clip_grad_norm_) | No |
| 2 | [clip_grad_value_](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.clip_grad_value_.html#torch.nn.utils.clip_grad_value_) | No |
| 3 | [parameters_to_vector](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.parameters_to_vector.html#torch.nn.utils.parameters_to_vector) | No |
| 4 | [vector_to_parameters](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.vector_to_parameters.html#torch.nn.utils.vector_to_parameters) | No |
| 5 | [prune.BasePruningMethod](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.BasePruningMethod.html#torch.nn.utils.prune.BasePruningMethod) | No |
| 6 | [prune.PruningContainer](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.PruningContainer.html#torch.nn.utils.prune.PruningContainer) | No |
| 7 | [prune.Identity](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.Identity.html#torch.nn.utils.prune.Identity) | No |
| 8 | [prune.RandomUnstructured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.RandomUnstructured.html#torch.nn.utils.prune.RandomUnstructured) | No |
| 9 | [prune.L1Unstructured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.L1Unstructured.html#torch.nn.utils.prune.L1Unstructured) | No |
| 10 | [prune.RandomStructured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.RandomStructured.html#torch.nn.utils.prune.RandomStructured) | No |
| 11 | [prune.LnStructured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.LnStructured.html#torch.nn.utils.prune.LnStructured) | No |
| 12 | [prune.CustomFromMask](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.CustomFromMask.html#torch.nn.utils.prune.CustomFromMask) | No |
| 13 | [prune.identity](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.identity.html#torch.nn.utils.prune.identity) | No |
| 14 | [prune.random_unstructured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.random_unstructured.html#torch.nn.utils.prune.random_unstructured) | No |
| 15 | [prune.l1_unstructured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.l1_unstructured.html#torch.nn.utils.prune.l1_unstructured) | No |
| 16 | [prune.random_structured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.random_structured.html#torch.nn.utils.prune.random_structured) | No |
| 17 | [prune.ln_structured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.ln_structured.html#torch.nn.utils.prune.ln_structured) | No |
| 18 | [prune.global_unstructured](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.global_unstructured.html#torch.nn.utils.prune.global_unstructured) | No |
| 19 | [prune.custom_from_mask](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.custom_from_mask.html#torch.nn.utils.prune.custom_from_mask) | No |
| 20 | [prune.remove](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.remove.html#torch.nn.utils.prune.remove) | No |
| 21 | [prune.is_pruned](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.prune.is_pruned.html#torch.nn.utils.prune.is_pruned) | No |
| 22 | [weight_norm](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.weight_norm.html#torch.nn.utils.weight_norm) | No |
| 23 | [remove_weight_norm](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.remove_weight_norm.html#torch.nn.utils.remove_weight_norm) | No |
| 24 | [spectral_norm](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.spectral_norm.html#torch.nn.utils.spectral_norm) | No |
| 25 | [remove_spectral_norm](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.remove_spectral_norm.html#torch.nn.utils.remove_spectral_norm) | No |

### Utility functions in other modules

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.utils.rnn.PackedSequence](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.rnn.PackedSequence.html#torch.nn.utils.rnn.PackedSequence) | No |
| 2 | [nn.utils.rnn.pack_padded_sequence](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.rnn.pack_padded_sequence.html#torch.nn.utils.rnn.pack_padded_sequence) | No |
| 3 | [nn.utils.rnn.pad_packed_sequence](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.rnn.pad_packed_sequence.html#torch.nn.utils.rnn.pad_packed_sequence) | No |
| 4 | [nn.utils.rnn.pad_sequence](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.rnn.pad_sequence.html#torch.nn.utils.rnn.pad_sequence) | No |
| 5 | [nn.utils.rnn.pack_sequence](https://pytorch.org/docs/1.8.1/generated/torch.nn.utils.rnn.pack_sequence.html#torch.nn.utils.rnn.pack_sequence) | No |
| 6 | [nn.Flatten](https://pytorch.org/docs/1.8.1/generated/torch.nn.Flatten.html#torch.nn.Flatten) | No |
| 7 | [nn.Unflatten](https://pytorch.org/docs/1.8.1/generated/torch.nn.Unflatten.html#torch.nn.Unflatten) | No |

### Lazy Modules Initialization

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [nn.modules.lazy.LazyModuleMixin](https://pytorch.org/docs/1.8.1/generated/torch.nn.modules.lazy.LazyModuleMixin.html#torch.nn.modules.lazy.LazyModuleMixin) | No |

# Functions (torch.nn.functional)

## [Convolution functions](https://pytorch.org/docs/1.8.1/nn.functional.html#convolution-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [conv1d](https://pytorch.org/docs/1.8.1/nn.functional.html#conv1d) | No |
| 2 | [conv2d](https://pytorch.org/docs/1.8.1/nn.functional.html#conv2d) | No |
| 3 | [conv3d](https://pytorch.org/docs/1.8.1/nn.functional.html#conv3d) | No |
| 4 | [conv_transpose1d](https://pytorch.org/docs/1.8.1/nn.functional.html#conv-transpose1d) | No |
| 5 | [conv_transpose2d](https://pytorch.org/docs/1.8.1/nn.functional.html#conv-transpose2d) | No |
| 6 | [conv_transpose3d](https://pytorch.org/docs/1.8.1/nn.functional.html#conv-transpose3d) | No |
| 7 | [unfold](https://pytorch.org/docs/1.8.1/nn.functional.html#unfold) | No |
| 8 | [fold](https://pytorch.org/docs/1.8.1/nn.functional.html#fold) | No |

## [Pooling functions](https://pytorch.org/docs/1.8.1/nn.functional.html#pooling-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [avg_pool1d](https://pytorch.org/docs/1.8.1/nn.functional.html#avg-pool1d) | No |
| 2 | [avg_pool2d](https://pytorch.org/docs/1.8.1/nn.functional.html#avg-pool2d) | No |
| 3 | [avg_pool3d](https://pytorch.org/docs/1.8.1/nn.functional.html#avg-pool3d) | No |
| 4 | [max_pool1d](https://pytorch.org/docs/1.8.1/nn.functional.html#max-pool1d) | No |
| 5 | [max_pool2d](https://pytorch.org/docs/1.8.1/nn.functional.html#max-pool2d) | No |
| 6 | [max_pool3d](https://pytorch.org/docs/1.8.1/nn.functional.html#max-pool3d) | No |
| 7 | [max_unpool1d](https://pytorch.org/docs/1.8.1/nn.functional.html#max-unpool1d) | No |
| 8 | [max_unpool2d](https://pytorch.org/docs/1.8.1/nn.functional.html#max-unpool2d) | No |
| 9 | [max_unpool3d](https://pytorch.org/docs/1.8.1/nn.functional.html#max-unpool3d) | No |
| 10 | [lp_pool1d](https://pytorch.org/docs/1.8.1/nn.functional.html#lp-pool1d) | No |
| 11 | [lp_pool2d](https://pytorch.org/docs/1.8.1/nn.functional.html#lp-pool2d) | No |
| 12 | [adaptive_max_pool1d](https://pytorch.org/docs/1.8.1/nn.functional.html#adaptive-max-pool1d) | No |
| 13 | [adaptive_max_pool2d](https://pytorch.org/docs/1.8.1/nn.functional.html#adaptive-max-pool2d) | No |
| 14 | [adaptive_max_pool3d](https://pytorch.org/docs/1.8.1/nn.functional.html#adaptive-max-pool3d) | No |
| 15 | [adaptive_avg_pool1d](https://pytorch.org/docs/1.8.1/nn.functional.html#adaptive-avg-pool1d) | No |
| 16 | [adaptive_avg_pool2d](https://pytorch.org/docs/1.8.1/nn.functional.html#adaptive-avg-pool2d) | No |
| 17 | [adaptive_avg_pool3d](https://pytorch.org/docs/1.8.1/nn.functional.html#adaptive-avg-pool3d) | No |

## [Non-linear activation functions](https://pytorch.org/docs/1.8.1/nn.functional.html#non-linear-activation-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [threshold](https://pytorch.org/docs/1.8.1/nn.functional.html#threshold) | No |
| 2 | [relu](https://pytorch.org/docs/1.8.1/nn.functional.html#relu) | No |
| 3 | [hardtanh](https://pytorch.org/docs/1.8.1/nn.functional.html#hardtanh) | No |
| 4 | [hardswish](https://pytorch.org/docs/1.8.1/nn.functional.html#hardswish) | No |
| 5 | [relu6](https://pytorch.org/docs/1.8.1/nn.functional.html#relu6) | No |
| 6 | [elu](https://pytorch.org/docs/1.8.1/nn.functional.html#elu) | No |
| 7 | [selu](https://pytorch.org/docs/1.8.1/nn.functional.html#selu) | No |
| 8 | [celu](https://pytorch.org/docs/1.8.1/nn.functional.html#celu) | No |
| 9 | [leaky_relu](https://pytorch.org/docs/1.8.1/nn.functional.html#leaky-relu) | No |
| 10 | [prelu](https://pytorch.org/docs/1.8.1/nn.functional.html#prelu) | No |
| 11 | [rrelu](https://pytorch.org/docs/1.8.1/nn.functional.html#rrelu) | No |
| 12 | [glu](https://pytorch.org/docs/1.8.1/nn.functional.html#glu) | No |
| 13 | [gelu](https://pytorch.org/docs/1.8.1/nn.functional.html#gelu) | No |
| 14 | [logsigmoid](https://pytorch.org/docs/1.8.1/nn.functional.html#logsigmoid) | No |
| 15 | [hardshrink](https://pytorch.org/docs/1.8.1/nn.functional.html#hardshrink) | No |
| 16 | [tanhshrink](https://pytorch.org/docs/1.8.1/nn.functional.html#tanhshrink) | No |
| 17 | [softsign](https://pytorch.org/docs/1.8.1/nn.functional.html#softsign) | No |
| 18 | [softplus](https://pytorch.org/docs/1.8.1/nn.functional.html#softplus) | No |
| 19 | [softmin](https://pytorch.org/docs/1.8.1/nn.functional.html#softmin) | No |
| 20 | [softmax](https://pytorch.org/docs/1.8.1/nn.functional.html#softmax) | No |
| 21 | [softshrink](https://pytorch.org/docs/1.8.1/nn.functional.html#softshrink) | No |
| 22 | [gumbel_softmax](https://pytorch.org/docs/1.8.1/nn.functional.html#gumbel-softmax) | No |
| 23 | [log_softmax](https://pytorch.org/docs/1.8.1/nn.functional.html#log-softmax) | No |
| 24 | [tanh](https://pytorch.org/docs/1.8.1/nn.functional.html#tanh) | No |
| 25 | [sigmoid](https://pytorch.org/docs/1.8.1/nn.functional.html#sigmoid) | No |
| 26 | [hardsigmoid](https://pytorch.org/docs/1.8.1/nn.functional.html#hardsigmoid) | No |
| 27 | [silu](https://pytorch.org/docs/1.8.1/nn.functional.html#silu) | No |

## [Normalization functions](https://pytorch.org/docs/1.8.1/nn.functional.html#normalization-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [batch_norm](https://pytorch.org/docs/1.8.1/nn.functional.html#batch-norm) | No |
| 2 | [instance_norm](https://pytorch.org/docs/1.8.1/nn.functional.html#instance-norm) | No |
| 3 | [layer_norm](https://pytorch.org/docs/1.8.1/nn.functional.html#layer-norm) | No |
| 4 | [local_response_norm](https://pytorch.org/docs/1.8.1/nn.functional.html#local-response-norm) | No |
| 5 | [normalize](https://pytorch.org/docs/1.8.1/nn.functional.html#normalize) | No |

## [Linear functions](https://pytorch.org/docs/1.8.1/nn.functional.html#linear-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [linear](https://pytorch.org/docs/1.8.1/nn.functional.html#linear) | No |
| 2 | [bilinear](https://pytorch.org/docs/1.8.1/nn.functional.html#bilinear) | No |

## [Dropout functions](https://pytorch.org/docs/1.8.1/nn.functional.html#dropout-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [dropout](https://pytorch.org/docs/1.8.1/nn.functional.html#dropout) | No |
| 2 | [alpha_dropout](https://pytorch.org/docs/1.8.1/nn.functional.html#alpha-dropout) | No |
| 3 | [feature_alpha_dropout](https://pytorch.org/docs/1.8.1/nn.functional.html#feature-alpha-dropout) | No |
| 4 | [dropout2d](https://pytorch.org/docs/1.8.1/nn.functional.html#dropout2d) | No |
| 5 | [dropout3d](https://pytorch.org/docs/1.8.1/nn.functional.html#dropout3d) | No |

## [Sparse functions](https://pytorch.org/docs/1.8.1/nn.functional.html#sparse-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [embedding](https://pytorch.org/docs/1.8.1/nn.functional.html#embedding) | No |
| 2 | [embedding_bag](https://pytorch.org/docs/1.8.1/nn.functional.html#embedding-bag) | No |
| 3 | [one_hot](https://pytorch.org/docs/1.8.1/nn.functional.html#one-hot) | No |

## [Distance functions](https://pytorch.org/docs/1.8.1/nn.functional.html#distance-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [pairwise_distance](https://pytorch.org/docs/1.8.1/nn.functional.html#pairwise-distance) | No |
| 2 | [cosine_similarity](https://pytorch.org/docs/1.8.1/nn.functional.html#cosine-similarity) | No |
| 3 | [pdist](https://pytorch.org/docs/1.8.1/nn.functional.html#pdist) | No |

## [Loss functions](https://pytorch.org/docs/1.8.1/nn.functional.html#loss-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [binary_cross_entropy](https://pytorch.org/docs/1.8.1/nn.functional.html#binary-cross-entropy) | No |
| 2 | [binary_cross_entropy_with_logits](https://pytorch.org/docs/1.8.1/nn.functional.html#binary-cross-entropy-with-logits) | No |
| 3 | [poisson_nll_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#poisson-nll-loss) | No |
| 4 | [cosine_embedding_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#cosine-embedding-loss) | No |
| 5 | [cross_entropy](https://pytorch.org/docs/1.8.1/nn.functional.html#cross-entropy) | No |
| 6 | [ctc_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#ctc-loss) | No |
| 7 | [hinge_embedding_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#hinge-embedding-loss) | No |
| 8 | [kl_div](https://pytorch.org/docs/1.8.1/nn.functional.html#kl-div) | No |
| 9 | [l1_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#l1-loss) | No |
| 10 | [mse_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#mse-loss) | No |
| 11 | [margin_ranking_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#margin-ranking-loss) | No |
| 12 | [multilabel_margin_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#multilabel-margin-loss) | No |
| 13 | [multilabel_soft_margin_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#multilabel-soft-margin-loss) | No |
| 14 | [multi_margin_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#multi-margin-loss) | No |
| 15 | [nll_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#nll-loss) | No |
| 16 | [smooth_l1_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#smooth-l1-loss) | No |
| 17 | [soft_margin_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#soft-margin-loss) | No |
| 18 | [triplet_margin_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#triplet-margin-loss) | No |
| 19 | [triplet_margin_with_distance_loss](https://pytorch.org/docs/1.8.1/nn.functional.html#triplet-margin-with-distance-loss) | No |

## [Vision functions](https://pytorch.org/docs/1.8.1/nn.functional.html#vision-functions)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [pixel_shuffle](https://pytorch.org/docs/1.8.1/nn.functional.html#pixel-shuffle) | No |
| 2 | [pixel_unshuffle](https://pytorch.org/docs/1.8.1/nn.functional.html#pixel-unshuffle) | No |
| 3 | [pad](https://pytorch.org/docs/1.8.1/nn.functional.html#pad) | No |
| 4 | [interpolate](https://pytorch.org/docs/1.8.1/nn.functional.html#interpolate) | No |
| 5 | [upsample](https://pytorch.org/docs/1.8.1/nn.functional.html#upsample) | No |
| 6 | [upsample_nearest](https://pytorch.org/docs/1.8.1/nn.functional.html#upsample-nearest) | No |
| 7 | [upsample_bilinear](https://pytorch.org/docs/1.8.1/nn.functional.html#upsample-bilinear) | No |
| 8 | [grid_sample](https://pytorch.org/docs/1.8.1/nn.functional.html#grid-sample) | No |
| 9 | [affine_grid](https://pytorch.org/docs/1.8.1/nn.functional.html#affine-grid) | No |

## [DataParallel functions (multi-GPU, distributed)](https://pytorch.org/docs/1.8.1/nn.functional.html#dataparallel-functions-multi-gpu-distributed)

| No. | API Name | Supported |
| ---- | ------------------------------------------------------------ | -------- |
| 1 | [data_parallel](https://pytorch.org/docs/1.8.1/nn.functional.html#data-parallel) | No |

# [torch.distributed](https://pytorch.org/docs/1.8.1/distributed.html)

| No. | API Name | Supported |
| ---- | ----------------------------------------- | -------- |
| 1 | torch.distributed.is_available | No |
| 2 | torch.distributed.init_process_group | No |
| 3 | torch.distributed.Backend | No |
| 4 | torch.distributed.get_backend | No |
| 5 | torch.distributed.get_rank | No |
| 6 | torch.distributed.get_world_size | No |
| 7 | torch.distributed.is_initialized | No |
| 8 | torch.distributed.is_mpi_available | No |
| 9 | torch.distributed.is_nccl_available | No |
| 10 | torch.distributed.Store | No |
| 11 | torch.distributed.TCPStore | No |
| 12 | torch.distributed.HashStore | No |
| 13 | torch.distributed.FileStore | No |
| 14 | torch.distributed.PrefixStore | No |
| 15 | torch.distributed.Store.set | No |
| 16 | torch.distributed.Store.get | No |
| 17 | torch.distributed.Store.add | No |
| 18 | torch.distributed.Store.wait | No |
| 19 | torch.distributed.Store.num_keys | No |
| 20 | torch.distributed.Store.delete_key | No |
| 21 | torch.distributed.Store.set_timeout | No |
| 22 | torch.distributed.new_group | No |
| 23 | torch.distributed.send | No |
| 24 | torch.distributed.recv | No |
| 25 | torch.distributed.isend | No |
| 26 | torch.distributed.irecv | No |
| 27 | is_completed | No |
| 28 | wait | No |
| 29 | torch.distributed.broadcast | No |
| 30 | torch.distributed.broadcast_object_list | No |
| 31 | torch.distributed.all_reduce | No |
| 32 | torch.distributed.reduce | No |
| 33 | torch.distributed.all_gather | No |
| 34 | torch.distributed.all_gather_object | No |
| 35 | torch.distributed.gather | No |
| 36 | torch.distributed.gather_object | No |
| 37 | torch.distributed.scatter | No |
| 38 | torch.distributed.scatter_object_list | No |
| 39 | torch.distributed.reduce_scatter | No |
| 40 | torch.distributed.all_to_all | No |
| 41 | torch.distributed.barrier | No |
| 42 | torch.distributed.ReduceOp | No |
| 43 | torch.distributed.reduce_op | No |
| 44 | torch.distributed.broadcast_multigpu | No |
| 45 | torch.distributed.all_reduce_multigpu | No |
| 46 | torch.distributed.reduce_multigpu | No |
| 47 | torch.distributed.all_gather_multigpu | No |
| 48 | torch.distributed.reduce_scatter_multigpu | No |
| 49 | torch.distributed.launch | No |
| 50 | torch.multiprocessing.spawn | No |

# torch.npu

| No. | API Name | Corresponding NPU API Name | Supported |
| ---- | ------------------------------------- | ------------------------------------ | -------- |
| 1 | torch.cuda.current_blas_handle | torch.npu.current_blas_handle | No |
| 2 | torch.cuda.current_device | torch.npu.current_device | Yes |
| 3 | torch.cuda.current_stream | torch.npu.current_stream | Yes |
| 4 | torch.cuda.default_stream | torch.npu.default_stream | Yes |
| 5 | torch.cuda.device | torch.npu.device | No |
| 6 | torch.cuda.device_count | torch.npu.device_count | Yes |
| 7 | torch.cuda.device_of | torch.npu.device_of | No |
| 8 | torch.cuda.get_device_capability | torch.npu.get_device_capability | No |
| 9 | torch.cuda.get_device_name | torch.npu.get_device_name | No |
| 10 | torch.cuda.init | torch.npu.init | Yes |
| 11 | torch.cuda.ipc_collect | torch.npu.ipc_collect | No |
| 12 | torch.cuda.is_available | torch.npu.is_available | Yes |
| 13 | torch.cuda.is_initialized | torch.npu.is_initialized | Yes |
| 14 | torch.cuda.set_device | torch.npu.set_device | Partially supported |
| 15 | torch.cuda.stream | torch.npu.stream | Yes |
| 16 | torch.cuda.synchronize | torch.npu.synchronize | Yes |
| 17 | torch.cuda.get_rng_state | torch.npu.get_rng_state | No |
| 18 | torch.cuda.get_rng_state_all | torch.npu.get_rng_state_all | No |
| 19 | torch.cuda.set_rng_state | torch.npu.set_rng_state | No |
| 20 | torch.cuda.set_rng_state_all | torch.npu.set_rng_state_all | No |
| 21 | torch.cuda.manual_seed | torch.npu.manual_seed | No |
| 22 | torch.cuda.manual_seed_all | torch.npu.manual_seed_all | No |
| 23 | torch.cuda.seed | torch.npu.seed | No |
| 24 | torch.cuda.seed_all | torch.npu.seed_all | No |
| 25 | torch.cuda.initial_seed | torch.npu.initial_seed | No |
| 26 | torch.cuda.comm.broadcast | torch.npu.comm.broadcast | No |
| 27 | torch.cuda.comm.broadcast_coalesced | torch.npu.comm.broadcast_coalesced | No |
| 28 | torch.cuda.comm.reduce_add | torch.npu.comm.reduce_add | No |
| 29 | torch.cuda.comm.scatter | torch.npu.comm.scatter | No |
| 30 | torch.cuda.comm.gather | torch.npu.comm.gather | No |
| 31 | torch.cuda.Stream | torch.npu.Stream | Yes |
| 32 | torch.cuda.Stream.query | torch.npu.Stream.query | No |
| 33 | torch.cuda.Stream.record_event | torch.npu.Stream.record_event | Yes |
| 34 | torch.cuda.Stream.synchronize | torch.npu.Stream.synchronize | Yes |
| 35 | torch.cuda.Stream.wait_event | torch.npu.Stream.wait_event | Yes |
| 36 | torch.cuda.Stream.wait_stream | torch.npu.Stream.wait_stream | Yes |
| 37 | torch.cuda.Event | torch.npu.Event | Yes |
| 38 | torch.cuda.Event.elapsed_time | torch.npu.Event.elapsed_time | Yes |
| 39 | torch.cuda.Event.from_ipc_handle | torch.npu.Event.from_ipc_handle | No |
| 40 | torch.cuda.Event.ipc_handle | torch.npu.Event.ipc_handle | No |
| 41 | torch.cuda.Event.query | torch.npu.Event.query | Yes |
| 42 | torch.cuda.Event.record | torch.npu.Event.record | Yes |
| 43 | torch.cuda.Event.synchronize | torch.npu.Event.synchronize | Yes |
| 44 | torch.cuda.Event.wait | torch.npu.Event.wait | Yes |
| 45 | torch.cuda.empty_cache | torch.npu.empty_cache | Yes |
| 46 | torch.cuda.memory_stats | torch.npu.memory_stats | Yes |
| 47 | torch.cuda.memory_summary | torch.npu.memory_summary | Yes |
| 48 | torch.cuda.memory_snapshot | torch.npu.memory_snapshot | Yes |
| 49 | torch.cuda.memory_allocated | torch.npu.memory_allocated | Yes |
| 50 | torch.cuda.max_memory_allocated | torch.npu.max_memory_allocated | Yes |
| 51 | torch.cuda.reset_max_memory_allocated | torch.npu.reset_max_memory_allocated | Yes |
| 52 | torch.cuda.memory_reserved | torch.npu.memory_reserved | Yes |
| 53 | torch.cuda.max_memory_reserved | torch.npu.max_memory_reserved | Yes |
| 54 | torch.cuda.memory_cached | torch.npu.memory_cached | Yes |
| 55 | torch.cuda.max_memory_cached | torch.npu.max_memory_cached | Yes |
| 56 | torch.cuda.reset_max_memory_cached | torch.npu.reset_max_memory_cached | Yes |
| 57 | torch.cuda.nvtx.mark | torch.npu.nvtx.mark | No |
| 58 | torch.cuda.nvtx.range_push | torch.npu.nvtx.range_push | No |
| 59 | torch.cuda.nvtx.range_pop | torch.npu.nvtx.range_pop | No |
| 60 | torch.cuda._sleep | torch.npu._sleep | No |
| 61 | torch.cuda.Stream.priority_range | torch.npu.Stream.priority_range | No |
| 62 | torch.cuda.get_device_properties | torch.npu.get_device_properties | No |
| 63 | torch.cuda.amp.GradScaler | torch.npu.amp.GradScaler | No |
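The table above maps the device-management surface of `torch.cuda` onto `torch.npu` one for one. As a minimal sketch (assuming an adapted build in which `torch.npu` is available, and using only APIs marked "Yes" or "Partially supported" above), a CUDA-style setup ports by swapping the namespace:

```python
# Minimal sketch: device setup on an Ascend NPU, assuming the adapted build.
# Every call below mirrors a torch.cuda API from the mapping table above.
import torch

if torch.npu.is_available():              # counterpart of torch.cuda.is_available
    torch.npu.set_device(0)               # "Partially supported" per the table
    x = torch.ones(2, 3).npu()            # .npu() plays the role of .cuda()
    y = x + x                             # computed on the NPU device
    torch.npu.synchronize()               # counterpart of torch.cuda.synchronize
    print(torch.npu.memory_allocated())   # supported memory introspection
```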
# NPU Custom Operators

| No. | Operator Name |
| ---- | ---------------------------------------------- |
| 1 | npu_convolution_transpose |
| 2 | npu_conv_transpose2d |
| 3 | npu_convolution_transpose_backward |
| 4 | npu_conv_transpose2d_backward |
| 5 | npu_conv_transpose3d_backward |
| 6 | npu_convolution |
| 7 | npu_convolution_backward |
| 8 | npu_convolution_double_backward |
| 9 | npu_conv2d |
| 10 | npu_conv2d.out |
| 11 | npu_conv2d_backward |
| 12 | npu_conv3d |
| 13 | npu_conv3d.out |
| 14 | npu_conv3d_backward |
| 15 | one_ |
| 16 | npu_sort_v2.out |
| 17 | npu_sort_v2 |
| 18 | npu_format_cast |
| 19 | npu_format_cast_.acl_format |
| 20 | npu_format_cast_.src |
| 21 | npu_transpose_to_contiguous |
| 22 | npu_transpose |
| 23 | npu_transpose.out |
| 24 | npu_broadcast |
| 25 | npu_broadcast.out |
| 26 | npu_dtype_cast |
| 27 | npu_dtype_cast_.Tensor |
| 28 | npu_roi_alignbk |
| 29 | empty_with_format |
| 30 | empty_with_format.names |
| 31 | copy_memory_ |
| 32 | npu_one_hot |
| 33 | npu_stride_add |
| 34 | npu_softmax_cross_entropy_with_logits |
| 35 | npu_softmax_cross_entropy_with_logits_backward |
| 36 | npu_ps_roi_pooling |
| 37 | npu_ps_roi_pooling_backward |
| 38 | npu_roi_align |
| 39 | npu_nms_v4 |
| 40 | npu_lstm |
| 41 | npu_lstm_backward |
| 42 | npu_iou |
| 43 | npu_ptiou |
| 44 | npu_nms_with_mask |
| 45 | npu_pad |
| 46 | npu_bounding_box_encode |
| 47 | npu_bounding_box_decode |
| 48 | npu_gru |
| 49 | npu_gru_backward |
| 50 | npu_set_.source_Storage_storage_offset_format |
| 51 | npu_random_choice_with_mask |
| 52 | npu_batch_nms |
| 53 | npu_slice |
| 54 | npu_slice.out |
| 55 | npu_dropoutV2 |
| 56 | npu_dropoutV2_backward |
| 57 | _npu_dropout |
| 58 | _npu_dropout_inplace |
| 59 | npu_dropout_backward |
| 60 | npu_indexing |
| 61 | npu_indexing.out |
| 62 | npu_ifmr |
| 63 | npu_max.dim |
| 64 | npu_max.names_dim |
| 65 | npu_scatter |
| 66 | npu_max_backward |
| 67 | npu_apply_adam |
| 68 | npu_layer_norm_eval |
| 69 | npu_alloc_float_status |
| 70 | npu_get_float_status |
| 71 | npu_clear_float_status |
| 72 | npu_confusion_transpose |
| 73 | npu_confusion_transpose_backward |
| 74 | npu_bmmV2 |
| 75 | fast_gelu |
| 76 | fast_gelu_backward |
| 77 | npu_sub_sample |
| 78 | npu_deformable_conv2d |
| 79 | npu_deformable_conv2dbk |
| 80 | npu_mish |
| 81 | npu_anchor_response_flags |
| 82 | npu_yolo_boxes_encode |
| 83 | npu_grid_assign_positive |
| 84 | npu_mish_backward |
| 85 | npu_normalize_batch |
| 86 | npu_masked_fill_range |
| 87 | npu_linear |
| 88 | npu_linear_backward |
| 89 | npu_bert_apply_adam |
| 90 | npu_giou |
| 91 | npu_giou_backward |
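These custom operators are exposed as functions in the `torch` namespace of the adapted build, in the same way `torch.npu_bert_apply_adam` is called in the example further below. Only `npu_apply_adam` and `npu_bert_apply_adam` have documented signatures in this section, so a call such as the following is a hypothetical sketch, not a documented interface:

```python
# Hypothetical sketch: invoking a custom operator from the list above.
# The exposure as torch.fast_gelu and the single-tensor signature are assumptions.
import torch

x = torch.rand(2, 3).npu()
y = torch.fast_gelu(x)  # assumed counterpart of the fast_gelu entry (No. 75)
```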
Detailed operator API description:

> ```
> npu_apply_adam(beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, use_locking, use_nesterov, out = (var, m, v))
> ```

Computes the Adam optimizer update.

- Parameters:
  - **beta1_power** (Number) - Power of beta1.
  - **beta2_power** (Number) - Power of beta2.
  - **lr** (Number) - Learning rate.
  - **beta1** (Number) - Exponential decay rate for the first-moment estimates.
  - **beta2** (Number) - Exponential decay rate for the second-moment estimates.
  - **epsilon** (Number) - Term added to the denominator to improve numerical stability.
  - **grad** (Tensor) - The gradient.
  - **use_locking** (bool) - If `True`, uses locks for the update operations.
  - **use_nesterov** (bool) - If `True`, uses the Nesterov update.
  - **var** (Tensor) - Variables to be optimized.
  - **m** (Tensor) - Mean value of the variables.
  - **v** (Tensor) - Variance of the variables.

- Constraints:

  None

- Examples:

  None
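Since no example is given above, the following doctest-style sketch mirrors the `npu_bert_apply_adam` example below. It assumes the documented signature; the tensor shape, hyperparameter values, and the first-step `beta1_power`/`beta2_power` values are illustrative only:

```python
>>> var_in = torch.rand(321538).uniform_(-32., 21.).npu()
>>> m_in = torch.zeros(321538).npu()
>>> v_in = torch.zeros(321538).npu()
>>> grad = torch.rand(321538).uniform_(-0.05, 0.03).npu()
>>> beta1, beta2 = 0.9, 0.99
>>> beta1_power, beta2_power = beta1, beta2   # powers after a hypothetical first step
>>> lr, epsilon = 0.1, 1e-06
>>> var_out, m_out, v_out = torch.npu_apply_adam(beta1_power, beta2_power, lr, beta1, beta2, epsilon, grad, False, False, out=(var_in, m_in, v_in))
```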
> npu_bert_apply_adam(var, m, v, lr, beta1, beta2, epsilon, grad, max_grad_norm, global_grad_norm, weight_decay)

Computes the Adam optimizer update used in BERT training.

- Parameters:
  - **lr** (Number) - Learning rate.
  - **beta1** (Number) - Exponential decay rate for the first-moment estimates.
  - **beta2** (Number) - Exponential decay rate for the second-moment estimates.
  - **epsilon** (Number) - Term added to the denominator to improve numerical stability.
  - **grad** (Tensor) - The gradient.
  - **max_grad_norm** (Number) - Maximum norm for the gradients.
  - **global_grad_norm** (Number) - L2 norm of the gradients.
  - **weight_decay** (Number) - Weight decay.
  - **var** (Tensor) - Variables to be optimized.
  - **m** (Tensor) - Mean value of the variables.
  - **v** (Tensor) - Variance of the variables.

- Constraints:

  None

- Examples:

  ```python
  >>> var_in = torch.rand(321538).uniform_(-32., 21.).npu()
  >>> var_in
  tensor([  0.6119,   5.8193,   3.0683,  ..., -28.5832,  12.9402, -24.0488],
         device='npu:0')
  >>> m_in = torch.zeros(321538).npu()
  >>> v_in = torch.zeros(321538).npu()
  >>> grad = torch.rand(321538).uniform_(-0.05, 0.03).npu()
  >>> grad
  tensor([-0.0315, -0.0113, -0.0132,  ...,  0.0106, -0.0226, -0.0252],
         device='npu:0')
  >>> max_grad_norm = -1.
  >>> beta1 = 0.9
  >>> beta2 = 0.99
  >>> weight_decay = 0.
  >>> lr = 0.1
  >>> epsilon = 1e-06
  >>> global_grad_norm = 0.
  >>> var_out, m_out, v_out = torch.npu_bert_apply_adam(var_in, m_in, v_in, lr, beta1, beta2, epsilon, grad, max_grad_norm, global_grad_norm, weight_decay)
  >>> var_out
  tensor([  0.7118,   5.9192,   3.1682,  ..., -28.6831,  13.0402, -23.9489],
         device='npu:0')
  >>> m_out
  tensor([-0.0032, -0.0011, -0.0013,  ...,  0.0011, -0.0023, -0.0025],
         device='npu:0')
  >>> v_out
  tensor([9.9431e-06, 1.2659e-06, 1.7328e-06,  ..., 1.1206e-06, 5.0933e-06,
          6.3495e-06], device='npu:0')
  ```

diff --git "a/docs/zh/PyTorch\351\200\202\351\205\215\347\256\227\345\255\220\346\270\205\345\215\225/PyTorch\351\200\202\351\205\215\347\256\227\345\255\220\346\270\205\345\215\225.md" "b/docs/zh/PyTorch\351\200\202\351\205\215\347\256\227\345\255\220\346\270\205\345\215\225/PyTorch\351\200\202\351\205\215\347\256\227\345\255\220\346\270\205\345\215\225.md"
deleted file mode 100644
index 0ebe58e9f8f8b96d10d4a8c69534cd056ccd5df3..0000000000000000000000000000000000000000
--- "a/docs/zh/PyTorch\351\200\202\351\205\215\347\256\227\345\255\220\346\270\205\345\215\225/PyTorch\351\200\202\351\205\215\347\256\227\345\255\220\346\270\205\345\215\225.md"
+++ /dev/null
@@ -1,6296 +0,0 @@
# PyTorch Adapted Operator List

- [Mapping Between Native PyTorch Operators and Ascend Operators](#PyTorch原生算子与昇腾算子对应表md)
- [PyTorch Ascend Custom Operators](#PyTorch昇腾自定义算子md)

## Mapping Between Native PyTorch Operators and Ascend Operators

| No. | PyTorch Native Operator | Ascend Adapted Operator |
| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| 1 | dropout | dropout_npu |
| 2 | dropout_ | dropout_npu_ |
| 3 | abs | abs_npu |
| 4 | abs_ | abs_npu_ |
| 5 | abs.out | abs_out_npu |
| 6 | acos | acos_npu |
| 7 | acos_ | acos_npu_ |
| 8 | acos.out | acos_out_npu |
| 9 | adaptive_avg_pool1d | adaptive_avg_pool1d_npu |
| 10 | add.Tensor | add_npu |
| 11 | add_.Tensor | add_npu_ |
| 12 | add.out | add_out_npu |
| 13 | add.Scalar | add_npu |
| 14 | add_.Scalar | add_npu_ |
| 15 | addmv | addmv_npu |
| 16 | addmv_ | addmv_npu_ |
| 17 | addmv.out | addmv_out_npu |
| 18 | addr | addr_npu |
| 19 | addr_ | addr_npu_ |
| 20 | addr.out | addr_out_npu |
| 21 | affine_grid_generator | affine_grid_generator_npu |
| 22 | affine_grid_generator_backward | affine_grid_generator_backward_npu |
| 23 | all.dim | all_npu |
| 24 | all.out | all_out_npu |
| 25 | any.dim | any_npu |
| 26 | any.out | any_out_npu |
| 27 | arange | arange_npu |
| 28 | arange.start | arange_npu |
| 29 | arange.start_step | arange_npu |
| 30 | arange.out | arange_out_npu |
| 31 | arange.start_out | arange_out_npu |
| 32 | _dim_arange | _dim_arange_npu |
| 33 | argmax | argmax_npu |
| 34 | argmin | argmin_npu |
| 35 | as_strided | as_strided_npu |
| 36 | as_strided_ | as_strided_npu_ |
| 37 | asin | asin_npu |
| 38 | asin_ | asin_npu_ |
| 39 | asin.out | asin_out_npu |
| 40 | atan | atan_npu |
| 41 | atan_ | atan_npu_ |
| 42 | atan.out | atan_out_npu |
| 43 | baddbmm | baddbmm_npu |
| 44 | baddbmm_ | baddbmm_npu_ |
| 45 | baddbmm.out | baddbmm_out_npu |
| 46 | bartlett_window | bartlett_window_npu |
| 47 | bartlett_window.periodic | bartlett_window_npu |
| 48 | batch_norm | batch_norm_npu_ |
| 49 | _batch_norm_impl_index | _batch_norm_impl_index_npu |
| 50 | _batch_norm_impl_index_backward | _batch_norm_impl_index_backward_npu |
| 51 | bernoulli | bernoulli_npu |
| 52 | bernoulli_.Tensor | bernoulli_npu_ |
| 53 | bernoulli_.float | bernoulli_npu_ |
| 54 | binary_cross_entropy | binary_cross_entropy_npu |
| 55 | binary_cross_entropy.out | binary_cross_entropy_out_npu |
| 56 | binary_cross_entropy_backward | binary_cross_entropy_backward_npu |
| 57 | binary_cross_entropy_backward.grad_input | binary_cross_entropy_backward_out_npu |
| 58 | binary_cross_entropy_with_logits | binary_cross_entropy_with_logits_npu |
| 59 | binary_cross_entropy_with_logits_backward | binary_cross_entropy_with_logits_backward_npu |
| 60 | bitwise_not | bitwise_not_npu |
| 61 | bitwise_not_ | bitwise_not_npu_ |
| 62 | bitwise_not.out | bitwise_not_out_npu |
| 63 | logical_not | logical_not_npu |
| 64 | logical_not_ | logical_not_npu_ |
| 65 | logical_not.out | logical_not_out_npu |
| 66 | logical_and | logical_and_npu |
| 67 | logical_and_ | logical_and_npu_ |
| 68 | logical_and.out | logical_and_out_npu |
| 69 | logical_or | logical_or_npu |
| 70 | logical_or_ | logical_or_npu_ |
| 71 | logical_or.out | logical_or_out_npu |
| 72 | blackman_window | blackman_window_npu |
| 73 | blackman_window.periodic | blackman_window_npu |
| 74 | bmm | bmm_npu |
| 75 | bmm.out | bmm_out_npu |
| 76 | cat | cat_npu |
| 77 | cat.out | cat_out_npu |
| 78 | cat.names | cat_npu |
| 79 | cat.names_out | cat_out_npu |
| 80 | ceil | ceil_npu |
| 81 | ceil_ | ceil_npu_ |
| 82 | ceil.out | ceil_out_npu |
| 83 | clamp | clamp_npu |
| 84 | clamp_ | clamp_npu_ |
| 85 | clamp.out | clamp_out_npu |
| 86 | clamp_max | clamp_max_npu |
| 87 | clamp_max_ | clamp_max_npu_ |
| 88 | clamp_max.out | clamp_max_out_npu |
| 89 | clamp_min | clamp_min_npu |
| 90 | clamp_min_ | clamp_min_npu_ |
| 91 | clamp_min.out | clamp_min_out_npu |
| 92 | constant_pad_nd | constant_pad_nd_npu |
| 93 | contiguous | contiguous_npu |
| 94 | convolution | convolution_npu |
| 95 | _convolution | _convolution_npu |
| 96 | _convolution_nogroup | _convolution_nogroup_npu |
| 97 | conv2d | conv2d_npu_ |
| 98 | conv3d | _conv3d_npu |
| 99 | conv_tbc | conv_tbc_npu |
| 100 | conv_tbc_backward | conv_tbc_backward_npu |
| 101 | conv_transpose2d.input | conv_transpose2d_npu_ |
| 102 | conv_transpose3d.input | conv_transpose3d_npu_ |
| 103 | copy_ | copy_npu_ |
| 104 | cos | cos_npu |
| 105 | cos_ | cos_npu_ |
| 106 | cos.out | cos_out_npu |
| 107 | cosh | cosh_npu |
| 108 | cosh_ | cosh_npu_ |
| 109 | cosh.out | cosh_out_npu |
| 110 | _cummax_helper | cummax_helper_npu |
| 111 | _cummin_helper | cummin_helper_npu |
| 112 | cumprod | cumprod_npu |
| 113 | cumprod.out | cumprod_out_npu |
| 114 | cumprod.dimname | cumprod_npu |
| 115 | cumprod.dimname_out | cumprod_out_npu |
| 116 | ctc_loss.IntList | ctc_loss_npu |
| 117 | ctc_loss.Tensor | ctc_loss_npu |
| 118 | _ctc_loss | ctc_loss_npu |
| 119 | _ctc_loss_backward | ctc_loss_backward_npu |
| 120 | fill_diagonal_ | fill_diagonal_npu_ |
| 121 | div.Tensor | div_npu |
| 122 | div_.Tensor | div_npu_ |
| 123 | div.out | div_out_npu |
| 124 | div.Scalar | div_npu |
| 125 | div_.Scalar | div_npu_ |
| 126 | dot | dot_npu |
| 127 | dot.out | dot_out_npu |
| 128 | embedding | embedding_npu |
| 129 | embedding_backward | embedding_backward_npu |
| 130 | embedding_dense_backward | embedding_dense_backward_npu |
| 131 | embedding_renorm_ | embedding_renorm_npu_ |
| 132 | _embedding_bag | _embedding_bag_npu |
| 133 | empty.memory_format | empty_npu |
| 134 | resize_ | resize_npu_ |
| 135 | empty_like | empty_like_npu |
| 136 | empty_strided | empty_strided_npu |
| 137 | erf | erf_npu |
| 138 | erf_ | erf_npu_ |
| 139 | erf.out | erf_out_npu |
| 140 | erfc | erfc_npu |
| 141 | erfc_ | erfc_npu_ |
| 142 | erfc.out | erfc_out_npu |
| 143 | exp | exp_npu |
| 144 | exp_ | exp_npu_ |
| 145 | exp.out | exp_out_npu |
| 146 | expm1 | expm1_npu |
| 147 | expm1_ | expm1_npu_ |
| 148 | expm1.out | expm1_out_npu |
| 149 | eye | eye_npu |
| 150 | eye.m | eye_npu |
| 151 | eye.out | eye_out_npu |
| 152 | eye.m_out | eye_out_npu |
| 153 | fill_.Scalar | fill_npu_ |
| 154 | fill_.Tensor | fill_npu_ |
| 155 | floor | floor_npu |
| 156 | floor_ | floor_npu_ |
| 157 | floor.out | floor_out_npu |
| 158 | floor_divide | floor_divide_npu |
| 159 | floor_divide_.Tensor | floor_divide_npu_ |
| 160 | floor_divide.out | floor_divide_out_npu |
| 161 | floor_divide.Scalar | floor_divide_npu |
| 162 | floor_divide_.Scalar | floor_divide_npu_ |
| 163 | frac | frac_npu |
| 164 | frac_ | frac_npu_ |
| 165 | frac.out | frac_out_npu |
| 166 | full.names | full_npu |
| 167 | full | full_npu |
| 168 | full.out | full_out_npu |
| 169 | grid_sampler | grid_sampler_npu |
| 170 | grid_sampler_3d | grid_sampler_3d_npu |
| 171 | grid_sampler_3d_backward | grid_sampler_3d_backward_npu |
| 172 | hann_window | hann_window_npu |
| 173 | hann_window.periodic | hann_window_npu |
| 174 | hamming_window | hamming_window_npu |
| 175 | hamming_window.periodic | hamming_window_npu |
| 176 | hamming_window.periodic_alpha | hamming_window_npu |
| 177 | hamming_window.periodic_alpha_beta | hamming_window_npu |
| 178 | ger | ger_npu |
| 179 | ger.out | ger_out_npu |
| 180 | index.Tensor | index_npu |
| 181 | index_put_ | index_put_npu_ |
| 182 | index_put | index_put_npu |
| 183 | _index_put_impl_ | _index_put_impl_npu_ |
| 184 | inverse | inverse_npu |
| 185 | inverse.out | inverse_out_npu |
| 186 | isclose | isclose_npu |
| 187 | isnan | isnan_npu |
| 188 | is_nonzero | is_nonzero_npu |
| 189 | kl_div | kl_div_npu |
| 190 | kl_div_backward | kl_div_backward_npu |
| 191 | kthvalue | kthvalue_npu |
| 192 | kthvalue.values | kthvalue_out_npu |
| 193 | kthvalue.dimname | kthvalue_npu |
| 194 | kthvalue.dimname_out | kthvalue_out_npu |
| 195 | native_layer_norm | layer_norm_npu |
| 196 | native_layer_norm_backward | layer_norm_backward_npu |
| 197 | linspace | linspace_npu |
| 198 | linspace.out | linspace_out_npu |
| 199 | log | log_npu |
| 200 | log_ | log_npu_ |
| 201 | log.out | log_out_npu |
| 202 | log10 | log10_npu |
| 203 | log10_ | log10_npu_ |
| 204 | log10.out | log10_out_npu |
| 205 | log1p | log1p_npu |
| 206 | log1p_ | log1p_npu_ |
| 207 | log1p.out | log1p_out_npu |
| 208 | log2 | log2_npu |
| 209 | log2_ | log2_npu_ |
| 210 | log2.out | log2_out_npu |
| 211 | logspace | logspace_npu |
| 212 | logspace.out | logspace_out_npu |
| 213 | log_softmax.int | log_softmax_npu |
| 214 | log_softmax.Dimname | log_softmax_npu |
| 215 | _log_softmax | _log_softmax_npu |
| 216 | _log_softmax_backward_data | _log_softmax_backward_npu |
| 217 | logsumexp | logsumexp_npu |
| 218 | logsumexp.out | logsumexp_out_npu |
| 219 | logsumexp.names | logsumexp_npu |
| 220 | logsumexp.names_out | logsumexp_out_npu |
| 221 | matmul | matmul_npu |
| 222 | matmul.out | matmul_out_npu |
| 223 | max.dim | max_npu |
| 224 | max.dim_max | max_out_npu |
| 225 | max_values | max_npu |
| 226 | max.names_dim | max_npu |
| 227 | max.names_dim_max | max_out_npu |
| 228 | max_values.names | max_npu |
| 229 | max_pool2d | max_pool2d_npu |
| 230 | mean | mean_npu |
| 231 | mean.dim | mean_npu |
| 232 | mean.out | mean_out_npu |
| 233 | mean.names_dim | mean_npu |
| 234 | mean.names_out | mean_out_npu |
| 235 | median.dim | median_npu |
| 236 | median.dim_values | median_out_npu |
| 237 | median.names_dim | median_npu |
| 238 | median.names_dim_values | median_out_npu |
| 239 | min.dim | min_npu |
| 240 | min.dim_min | min_out_npu |
| 241 | min_values | min_npu |
| 242 | min.names_dim | min_npu |
| 243 | min.names_dim_min | min_out_npu |
| 244 | min_values.names | min_npu |
| 245 | mm | mm_npu |
| 246 | mm.out | mm_out_npu |
| 247 | mul.Tensor | mul_npu |
| 248 | mul_.Tensor | mul_npu_ |
| 249 | mul.out | mul_out_npu |
| 250 | mul.Scalar | mul_npu |
| 251 | mul_.Scalar | mul_npu_ |
| 252 | mv | mv_npu |
| 253 | mv.out | mv_out_npu |
| 254 | narrow_copy | narrow_copy_npu |
| 255 | native_batch_norm | batch_norm_npu |
| 256 | batch_norm_stats | batch_norm_stats_npu |

-

257

-

batch_norm_elemt

-

batch_norm_elemt_npu

-

258

-

batch_norm_elemt.out

-

batch_norm_elemt_out_npu

-

259

-

native_batch_norm_backward

-

batch_norm_backward_npu

-

260

-

batch_norm_backward_reduce

-

batch_norm_backward_reduce_npu

-

261

-

_nnpack_spatial_convolution

-

_nnpack_spatial_convolution_npu

-

262

-

ones.names

-

ones_npu

-

263

-

ones

-

ones_npu

-

264

-

ones.out

-

ones_out_npu

-

265

-

ones_like

-

ones_like_npu

-

266

-

cdist

-

cdist_npu

-

267

-

_cdist_forward

-

_cdist_forward_npu

-

268

-

_cdist_backward

-

_cdist_backward_npu

-

269

-

pdist

-

pdist_npu

-

270

-

_pdist_forward

-

_pdist_forward_npu

-

271

-

randperm

-

randperm_npu

-

272

-

randperm.generator

-

randperm_npu

-

273

-

randperm.out

-

randperm_out_npu

-

274

-

randperm.generator_out

-

randperm_out_npu

-

275

-

range.step

-

range_npu

-

276

-

range

-

range_npu

-

277

-

range.out

-

range_out_npu

-

278

-

reciprocal

-

reciprocal_npu

-

279

-

reciprocal_

-

reciprocal_npu_

-

280

-

reciprocal.out

-

reciprocal_out_npu

-

281

-

neg

-

neg_npu

-

282

-

neg_

-

neg_npu_

-

283

-

neg.out

-

neg_out_npu

-

284

-

repeat

-

repeat_npu

-

285

-

repeat_interleave.self_int

-

repeat_interleave_npu

-

286

-

round

-

round_npu

-

287

-

round_

-

round_npu_

-

288

-

round.out

-

round_out_npu

-

289

-

relu

-

relu_npu

-

290

-

relu_

-

relu_npu_

-

291

-

prelu

-

prelu_npu

-

292

-

prelu_backward

-

prelu_backward_npu

-

293

-

gelu

-

gelu_npu

-

294

-

gelu_backward

-

gelu_backward_npu

-

295

-

hardshrink

-

hardshrink_npu

-

296

-

hardshrink_backward

-

hardshrink_backward_npu

-

297

-

rsqrt

-

rsqrt_npu

-

298

-

rsqrt_

-

rsqrt_npu_

-

299

-

rsqrt.out

-

rsqrt_out_npu

-

300

-

selu

-

selu_npu

-

301

-

selu_

-

selu_npu_

-

302

-

celu

-

celu_npu

-

303

-

celu_

-

celu_npu_

-

304

-

sigmoid

-

sigmoid_npu

-

305

-

sigmoid_

-

sigmoid_npu_

-

306

-

sigmoid.out

-

sigmoid_out_npu

-

307

-

sin

-

sin_npu

-

308

-

sin_

-

sin_npu_

-

309

-

sin.out

-

sin_out_npu

-

310

-

sinh

-

sinh_npu

-

311

-

sinh_

-

sinh_npu_

-

312

-

sinh.out

-

sinh_out_npu

-

313

-

slogdet

-

slogdet_npu

-

314

-

softmax.int

-

softmax_npu

-

315

-

softmax.Dimname

-

softmax_npu

-

316

-

_softmax

-

_softmax_npu

-

317

-

_softmax_backward_data

-

_softmax_backward_npu

-

318

-

stack

-

stack_npu

-

319

-

stack.out

-

stack_out_npu

-

320

-

sum

-

sum_npu

-

321

-

sum.dim_IntList

-

sum_npu

-

322

-

sum.dim_DimnameList

-

sum_npu

-

323

-

sum.IntList_out

-

sum_out_npu

-

324

-

sum.DimnameList_out

-

sum_out_npu

-

325

-

sqrt

-

sqrt_npu

-

326

-

sqrt_

-

sqrt_npu_

-

327

-

sqrt.out

-

sqrt_out_npu

-

328

-

std

-

std_npu

-

329

-

std.dim

-

std_dim_npu

-

330

-

std_mean

-

std_mean_npu

-

331

-

std_mean.dim

-

std_mean_dim_npu

-

332

-

std_mean.names_dim

-

std_mean_names_npu

-

333

-

std.out

-

std_out_npu

-

334

-

std.names_dim

-

std_names_npu

-

335

-

std.names_out

-

std_out_npu

-

336

-

prod

-

prod_npu

-

337

-

prod.dim_int

-

prod_npu

-

338

-

prod.int_out

-

prod_out_npu

-

339

-

prod.dim_Dimname

-

prod_npu

-

340

-

prod.Dimname_out

-

prod_out_npu

-

341

-

tan

-

tan_npu

-

342

-

tan_

-

tan_npu_

-

343

-

tan.out

-

tan_out_npu

-

344

-

tanh

-

tanh_npu

-

345

-

tanh_

-

tanh_npu_

-

346

-

tanh.out

-

tanh_out_npu

-

347

-

threshold

-

threshold_npu

-

348

-

threshold_

-

threshold_npu_

-

349

-

threshold.out

-

threshold_out_npu

-

350

-

threshold_backward

-

threshold_backward_npu

-

351

-

one_hot

-

one_hot_npu1

-

352

-

flip

-

flip_npu

-

353

-

roll

-

roll_npu

-

354

-

true_divide.Tensor

-

true_divide_npu

-

355

-

true_divide_.Tensor

-

true_divide_npu_

-

356

-

true_divide.out

-

true_divide_out_npu

-

357

-

true_divide.Scalar

-

true_divide_npu

-

358

-

true_divide_.Scalar

-

true_divide_npu_

-

359

-

trunc

-

trunc_npu

-

360

-

trunc_

-

trunc_npu_

-

361

-

trunc.out

-

trunc_out_npu

-

362

-

_unique2

-

_unique2_npu

-

363

-

var

-

var_npu

-

364

-

var.dim

-

var_npu

-

365

-

var.out

-

var_out_npu

-

366

-

var.names_dim

-

var_npu

-

367

-

var.names_out

-

var_out_npu

-

368

-

var_mean

-

var_mean_npu

-

369

-

var_mean.dim

-

var_mean_npu

-

370

-

var_mean.names_dim

-

var_mean_npu

-

371

-

where.self

-

where_npu

-

372

-

where

-

where_npu

-

373

-

_s_where

-

_s_where_npu

-

374

-

zeros.names

-

zeros_npu

-

375

-

zeros

-

zeros_npu

-

376

-

zeros.out

-

zeros_out_npu

-

377

-

zeros_like

-

zeros_like_npu

-

378

-

norm.ScalarOpt_dtype

-

norm_npu

-

379

-

norm.Scalar

-

norm_npu

-

380

-

norm.ScalarOpt_dim_dtype

-

norm_npu

-

381

-

norm.ScalarOpt_dim

-

norm_npu

-

382

-

norm.dtype_out

-

norm_out_npu

-

383

-

norm.out

-

norm_out_npu

-

384

-

clone

-

clone_npu

-

385

-

resize_as_

-

resize_as_npu_

-

386

-

pow.Tensor_Scalar_out

-

pow_out_npu

-

387

-

pow.Tensor_Scalar

-

pow_npu

-

388

-

zero_

-

zero_npu_

-

389

-

sub.out

-

sub_out_npu

-

390

-

sub.Tensor

-

sub_npu

-

391

-

sub_.Tensor

-

sub_npu_

-

392

-

sub.Scalar

-

sub_npu

-

393

-

sub_.Scalar

-

sub_npu_

-

394

-

rsub.Tensor

-

rsub_npu

-

395

-

rsub.Scalar

-

rsub_npu

-

396

-

addmm.out

-

addmm_out_npu

-

397

-

addmm

-

addmm_npu

-

398

-

addmm_

-

addmm_npu_

-

399

-

quantize_per_tensor

-

quantize_per_tensor_npu

-

400

-

quantize_per_channel

-

quantize_per_channel_npu

-

401

-

to.dtype_layout

-

to_npu

-

402

-

to.device

-

to_device_npu

-

403

-

to.dtype

-

to_dtype_npu

-

404

-

to.other

-

to_other_npu

-

405

-

_local_scalar_dense

-

_local_scalar_dense_npu

-

406

-

lstm.input

-

lstm_npu

-

407

-

lstm.data

-

lstm_npu

-

408

-

gru.input

-

gru_npu_

-

409

-

_pack_padded_sequence

-

_pack_padded_sequence_npu

-

410

-

_pad_packed_sequence

-

_pad_packed_sequence_npu

-

411

-

set_.source_Storage

-

set_npu_

-

412

-

set_.source_Storage_storage_offset

-

set_npu_

-

413

-

set_.source_Tensor

-

set_npu_

-

414

-

set_

-

set_npu_

-

415

-

masked_fill_.Scalar

-

masked_fill_npu_

-

416

-

masked_fill_.Tensor

-

masked_fill_npu_

-

417

-

masked_scatter_

-

masked_scatter_npu_

-

418

-

view

-

view_npu

-

419

-

put_

-

put_npu_

-

420

-

index_add_

-

index_add_npu_

-

421

-

index_add

-

index_add_npu

-

422

-

index_add.dimname

-

index_add_npu

-

423

-

index_fill_.int_Scalar

-

index_fill_npu_

-

424

-

index_fill.int_Scalar

-

index_fill_npu

-

425

-

index_fill_.int_Tensor

-

index_fill_npu_

-

426

-

index_fill.int_Tensor

-

index_fill_npu

-

427

-

scatter_.src

-

scatter_npu_

-

428

-

scatter_.value

-

scatter_npu_

-

429

-

scatter_add_

-

scatter_add_npu_

-

430

-

scatter_add

-

scatter_add_npu

-

431

-

scatter_add.dimname

-

scatter_add_npu

-

432

-

lt_.Scalar

-

lt_npu_

-

433

-

lt_.Tensor

-

lt_npu_

-

434

-

gt_.Scalar

-

gt_npu_

-

435

-

gt_.Tensor

-

gt_npu_

-

436

-

le_.Scalar

-

le_npu_

-

437

-

le_.Tensor

-

le_npu_

-

438

-

ge_.Scalar

-

ge_npu_

-

439

-

ge_.Tensor

-

ge_npu_

-

440

-

eq_.Scalar

-

eq_npu_

-

441

-

eq_.Tensor

-

eq_npu_

-

442

-

ne_.Scalar

-

ne_npu_

-

443

-

ne_.Tensor

-

ne_npu_

-

444

-

bitwise_and.Tensor_out

-

bitwise_and_out_npu

-

445

-

bitwise_and.Scalar_out

-

bitwise_and_out_npu

-

446

-

bitwise_and.Scalar

-

bitwise_and_npu

-

447

-

bitwise_and.Tensor

-

bitwise_and_npu

-

448

-

bitwise_and_.Scalar

-

bitwise_and_npu_

-

449

-

bitwise_and_.Tensor

-

bitwise_and_npu_

-

450

-

__and__.Scalar

-

__and___npu

-

451

-

__and__.Tensor

-

__and___npu

-

452

-

bitwise_or.Tensor_out

-

bitwise_or_out_npu

-

453

-

bitwise_or.Scalar_out

-

bitwise_or_out_npu

-

454

-

bitwise_or.Scalar

-

bitwise_or_npu

-

455

-

bitwise_or.Tensor

-

bitwise_or_npu

-

456

-

bitwise_or_.Scalar

-

bitwise_or_npu_

-

457

-

bitwise_or_.Tensor

-

bitwise_or_npu_

-

458

-

__or__.Scalar

-

__or___npu

-

459

-

__or__.Tensor

-

__or___npu

-

460

-

__ior__.Scalar

-

__ior___npu

-

461

-

__ior__.Tensor

-

__ior___npu

-

462

-

bitwise_xor.Tensor_out

-

bitwise_xor_out_npu

-

463

-

bitwise_xor.Scalar_out

-

bitwise_xor_out_npu

-

464

-

bitwise_xor.Scalar

-

bitwise_xor_npu

-

465

-

bitwise_xor.Tensor

-

bitwise_xor_npu

-

466

-

bitwise_xor_.Scalar

-

bitwise_xor_npu_

-

467

-

bitwise_xor_.Tensor

-

bitwise_xor_npu_

-

468

-

__xor__.Scalar

-

__xor___npu

-

469

-

__xor__.Tensor

-

__xor___npu

-

470

-

__lshift__.Scalar

-

__lshift___npu

-

471

-

__lshift__.Tensor

-

__lshift___npu

-

472

-

__ilshift__.Scalar

-

__iLshift___npu

-

473

-

__ilshift__.Tensor

-

__iLshift___npu

-

474

-

__rshift__.Scalar

-

__rshift___npu

-

475

-

__rshift__.Tensor

-

__rshift___npu

-

476

-

__irshift__.Scalar

-

__iRshift___npu

-

477

-

__irshift__.Tensor

-

__iRshift___npu

-

478

-

atan2_

-

atan2_npu_

-

479

-

tril_

-

tril_npu_

-

480

-

triu_

-

triu_npu_

-

481

-

renorm_

-

renorm_npu_

-

482

-

pow_.Scalar

-

pow_npu_

-

483

-

pow_.Tensor

-

pow_npu_

-

484

-

lerp_.Scalar

-

lerp_npu_

-

485

-

lerp_.Tensor

-

lerp_npu_

-

486

-

fmod_.Scalar

-

fmod_npu_

-

487

-

fmod_.Tensor

-

fmod_npu_

-

488

-

remainder_.Scalar

-

remainder_npu_

-

489

-

remainder_.Tensor

-

remainder_npu_

-

490

-

addbmm_

-

addbmm_npu_

-

491

-

addbmm.out

-

addbmm_out_npu

-

492

-

addbmm

-

addbmm_npu

-

493

-

addcdiv_

-

addcdiv_npu_

-

494

-

random_.from

-

random_npu_

-

495

-

random_.to

-

random_npu_

-

496

-

random_

-

random_npu_

-

497

-

uniform_

-

uniform_npu_

-

498

-

diag.out

-

diag_out_npu

-

499

-

diag

-

diag_npu

-

500

-

cross.out

-

cross_out_npu

-

501

-

cross

-

cross_npu

-

502

-

triu.out

-

triu_out_npu

-

503

-

triu

-

triu_npu

-

504

-

tril.out

-

tril_out_npu

-

505

-

tril

-

tril_npu

-

506

-

tril_indices

-

tril_indices_npu

-

507

-

triu_indices

-

triu_indices_npu

-

508

-

ne.Scalar_out

-

ne_out_npu

-

509

-

ne.Scalar

-

ne_npu

-

510

-

ne.Tensor_out

-

ne_out_npu

-

511

-

ne.Tensor

-

ne_npu

-

512

-

eq.Scalar_out

-

eq_out_npu

-

513

-

eq.Scalar

-

eq_npu

-

514

-

eq.Tensor_out

-

eq_out_npu

-

515

-

eq.Tensor

-

eq_npu

-

516

-

ge.Scalar_out

-

ge_out_npu

-

517

-

ge.Scalar

-

ge_npu

-

518

-

ge.Tensor_out

-

ge_out_npu

-

519

-

ge.Tensor

-

ge_npu

-

520

-

le.Scalar_out

-

le_out_npu

-

521

-

le.Scalar

-

le_npu

-

522

-

le.Tensor_out

-

le_out_npu

-

523

-

le.Tensor

-

le_npu

-

524

-

gt.Scalar_out

-

gt_out_npu

-

525

-

gt.Scalar

-

gt_npu

-

526

-

gt.Tensor_out

-

gt_out_npu

-

527

-

gt.Tensor

-

gt_npu

-

528

-

lt.Scalar_out

-

lt_out_npu

-

529

-

lt.Scalar

-

lt_npu

-

530

-

lt.Tensor_out

-

lt_out_npu

-

531

-

lt.Tensor

-

lt_npu

-

532

-

take.out

-

take_out_npu

-

533

-

take

-

take_npu

-

534

-

index_select.out

-

index_select_out_npu

-

535

-

index_select

-

index_select_npu

-

536

-

index_select.dimname_out

-

index_select_out_npu

-

537

-

index_select.dimname

-

index_select_npu

-

538

-

masked_select.out

-

masked_select_out_npu

-

539

-

masked_select

-

masked_select_npu

-

540

-

nonzero.out

-

nonzero_out_npu

-

541

-

nonzero

-

nonzero_npu

-

542

-

gather.out

-

gather_out_npu

-

543

-

gather

-

gather_npu

-

544

-

gather.dimname_out

-

gather_out_npu

-

545

-

gather.dimname

-

gather_npu

-

546

-

addcmul.out

-

addcmul_out_npu

-

547

-

addcmul

-

addcmul_npu

-

548

-

addcmul_

-

addcmul_npu_

-

549

-

addcdiv.out

-

addcdiv_out_npu

-

550

-

addcdiv

-

addcdiv_npu

-

551

-

_triangular_solve_helper

-

_triangular_solve_helper_npu

-

552

-

_symeig_helper

-

_symeig_helper_npu

-

553

-

_svd_helper

-

_svd_helper_npu

-

554

-

qr.Q

-

qr_out_npu

-

555

-

qr

-

qr_npu

-

556

-

multinomial.out

-

multinomial_out_npu

-

557

-

multinomial

-

multinomial_npu

-

558

-

erfinv

-

erfinv_npu

-

559

-

erfinv_

-

erfinv_npu_

-

560

-

erfinv.out

-

erfinv_out_npu

-

561

-

sign

-

sign_npu

-

562

-

sign_

-

sign_npu_

-

563

-

sign.out

-

sign_out_npu

-

564

-

atan2.out

-

atan2_out_npu

-

565

-

atan2

-

atan2_npu

-

566

-

lerp.Scalar_out

-

lerp_out_npu

-

567

-

lerp.Tensor_out

-

lerp_out_npu

-

568

-

lerp.Scalar

-

lerp_npu

-

569

-

lerp.Tensor

-

lerp_npu

-

570

-

fmod.Scalar_out

-

fmod_out_npu

-

571

-

fmod.Scalar

-

fmod_npu

-

572

-

fmod.Tensor_out

-

fmod_out_npu

-

573

-

fmod.Tensor

-

fmod_npu

-

574

-

remainder.Scalar_out

-

remainder_out_npu

-

575

-

remainder.Scalar

-

remainder_npu

-

576

-

remainder.Tensor_out

-

remainder_out_npu

-

577

-

remainder.Tensor

-

remainder_npu

-

578

-

min.out

-

min_out_npu

-

579

-

min.other

-

min_npu

-

580

-

min

-

min_npu

-

581

-

max.out

-

max_out_npu

-

582

-

max.other

-

max_npu

-

583

-

max

-

max_npu

-

584

-

median

-

median_npu

-

585

-

sort.values

-

sort_out_npu

-

586

-

sort

-

sort_npu

-

587

-

sort.dimname_values

-

sort_out_npu

-

588

-

sort.dimname

-

sort_npu

-

589

-

argsort

-

argsort_npu

-

590

-

argsort.dimname

-

argsort_npu

-

591

-

topk.values

-

topk_out_npu

-

592

-

topk

-

topk_npu

-

593

-

all

-

all_npu

-

594

-

any

-

any_npu

-

595

-

renorm.out

-

renorm_out_npu

-

596

-

renorm

-

renorm_npu

-

597

-

unfold

-

unfold

-

598

-

equal

-

equal_npu

-

599

-

pow.Tensor_Tensor_out

-

pow_out_npu

-

600

-

pow.Tensor_Tensor

-

pow_npu

-

601

-

pow.Scalar_out

-

pow_out_npu

-

602

-

pow.Scalar

-

pow_npu

-

603

-

normal_

-

normal_npu_

-

604

-

normal.Tensor_float_out

-

normal_out_npu

-

605

-

normal.Tensor_float

-

normal_npu

-

606

-

normal.float_Tensor_out

-

normal_out_npu

-

607

-

normal.float_Tensor

-

normal_npu

-

608

-

normal.Tensor_Tensor_out

-

normal_out_npu

-

609

-

normal.Tensor_Tensor

-

normal_npu

-

610

-

normal.float_float

-

normal_npu

-

611

-

normal.float_float_out

-

normal_out_npu

-

612

-

_addr

-

_addr_npu

-

613

-

_addr_

-

_addr_npu_

-

614

-

_addr.out

-

_addr_out_npu

-

615

-

_index_copy_

-

index_copy_npu_

-

616

-

_cumsum

-

_cumsum_npu

-

617

-

_cumsum.out

-

_cumsum_out_npu

-

618

-

_cumprod

-

_cumprod_npu

-

619

-

_cumprod.out

-

_cumprod_out_npu

-

620

-

_var

-

_var_npu

-

621

-

_amp_non_finite_check_and_unscale_

-

_amp_non_finite_check_and_unscale_npu_

-

622

-

_cat

-

_cat_npu

-

623

-

_cat.out

-

_cat_out_npu

-

624

-

_max

-

_max_npu

-

625

-

_max.max

-

_max_out_npu

-

626

-

_min

-

_min_npu

-

627

-

_min.min

-

_min_out_npu

-

628

-

mse_loss.out

-

mse_loss_out_npu

-

629

-

mse_loss

-

mse_loss_npu

-

630

-

mse_loss_backward.grad_input

-

mse_loss_backward_out_npu

-

631

-

mse_loss_backward

-

mse_loss_backward_npu

-

632

-

l1_loss.out

-

l1_loss_out_npu

-

633

-

l1_loss

-

l1_loss_npu

-

634

-

l1_loss_backward.grad_input

-

l1_loss_backward_out_npu

-

635

-

l1_loss_backward

-

l1_loss_backward_npu

-

636

-

multilabel_margin_loss.out

-

multilabel_margin_loss_out_npu

-

637

-

multilabel_margin_loss

-

multilabel_margin_loss_npu

-

638

-

multilabel_margin_loss_forward.output

-

multilabel_margin_loss_forward_out_npu

-

639

-

multilabel_margin_loss_forward

-

multilabel_margin_loss_forward_npu

-

640

-

nll_loss.out

-

nll_loss_out_npu

-

641

-

nll_loss

-

nll_loss_npu

-

642

-

nll_loss_forward.output

-

nll_loss_forward_out_npu

-

643

-

nll_loss_forward

-

nll_loss_forward_npu

-

644

-

nll_loss_backward.grad_input

-

nll_loss_backward_out_npu

-

645

-

nll_loss_backward

-

nll_loss_backward_npu

-

646

-

nll_loss2d.out

-

nll_loss2d_out_npu

-

647

-

nll_loss2d

-

nll_loss2d_npu

-

648

-

nll_loss2d_forward.output

-

nll_loss2d_forward_out_npu

-

649

-

nll_loss2d_forward

-

nll_loss2d_forward_npu

-

650

-

nll_loss2d_backward.grad_input

-

nll_loss2d_backward_out_npu

-

651

-

nll_loss2d_backward

-

nll_loss2d_backward_npu

-

652

-

smooth_l1_loss.out

-

smooth_l1_loss_out_npu

-

653

-

smooth_l1_loss

-

smooth_l1_loss_npu

-

654

-

smooth_l1_loss_backward.grad_input

-

smooth_l1_loss_backward_out_npu

-

655

-

smooth_l1_loss_backward

-

smooth_l1_loss_backward_npu

-

656

-

soft_margin_loss.out

-

soft_margin_loss_out_npu

-

657

-

soft_margin_loss

-

soft_margin_loss_npu

-

658

-

soft_margin_loss_backward.grad_input

-

soft_margin_loss_backward_out_npu

-

659

-

soft_margin_loss_backward

-

soft_margin_loss_backward_npu

-

660

-

elu.out

-

elu_out_npu

-

661

-

elu

-

elu_npu

-

662

-

elu_backward.grad_input

-

elu_backward_out_npu

-

663

-

elu_backward

-

elu_backward_npu

-

664

-

elu_

-

elu_npu_

-

665

-

glu.out

-

glu_out_npu

-

666

-

glu

-

glu_npu

-

667

-

glu_backward.grad_input

-

glu_backward_out_npu

-

668

-

glu_backward

-

glu_backward_npu

-

669

-

hardsigmoid.out

-

hardsigmoid_out_npu

-

670

-

hardsigmoid

-

hardsigmoid_npu

-

671

-

hardsigmoid_

-

hardsigmoid_npu_

-

672

-

hardsigmoid_backward

-

hardsigmoid_backward_npu

-

673

-

hardtanh.out

-

hardtanh_out_npu

-

674

-

hardtanh

-

hardtanh_npu

-

675

-

hardtanh_backward.grad_input

-

hardtanh_backward_out_npu

-

676

-

hardtanh_backward

-

hardtanh_backward_npu

-

677

-

hardtanh_

-

hardtanh_npu_

-

678

-

leaky_relu.out

-

leaky_relu_out_npu

-

679

-

leaky_relu

-

leaky_relu_npu

-

680

-

leaky_relu_backward

-

leaky_relu_backward_npu

-

681

-

leaky_relu_

-

leaky_relu_npu_

-

682

-

log_sigmoid.out

-

log_sigmoid_out_npu

-

683

-

log_sigmoid

-

log_sigmoid_npu

-

684

-

log_sigmoid_forward.output

-

log_sigmoid_forward_out_npu

-

685

-

log_sigmoid_forward

-

log_sigmoid_forward_npu

-

686

-

log_sigmoid_backward.grad_input

-

log_sigmoid_backward_out_npu

-

687

-

log_sigmoid_backward

-

log_sigmoid_backward_npu

-

688

-

rrelu_with_noise.out

-

rrelu_with_noise_out_npu

-

689

-

rrelu_with_noise

-

rrelu_with_noise_npu

-

690

-

rrelu_with_noise_backward

-

rrelu_with_noise_backward_npu

-

691

-

rrelu_with_noise_

-

rrelu_with_noise_npu_

-

692

-

softplus.out

-

softplus_out_npu

-

693

-

softplus

-

softplus_npu

-

694

-

softplus_backward.grad_input

-

softplus_backward_out_npu

-

695

-

softplus_backward

-

softplus_backward_npu

-

696

-

softshrink.out

-

softshrink_out_npu

-

697

-

softshrink

-

softshrink_npu

-

698

-

softshrink_backward.grad_input

-

softshrink_backward_out_npu

-

699

-

softshrink_backward

-

softshrink_backward_npu

-

700

-

adaptive_avg_pool2d.out

-

adaptive_avg_pool2d_out_npu

-

701

-

adaptive_avg_pool2d

-

adaptive_avg_pool2d_npu

-

702

-

_adaptive_avg_pool2d

-

_adaptive_avg_pool2d_npu

-

703

-

_adaptive_avg_pool2d_backward

-

adaptive_avg_pool2d_backward_npu

-

704

-

adaptive_avg_pool3d.out

-

adaptive_avg_pool3d_out_npu

-

705

-

adaptive_avg_pool3d

-

adaptive_avg_pool3d_npu

-

706

-

adaptive_avg_pool3d_backward.grad_input

-

adaptive_avg_pool3d_backward_out_npu

-

707

-

adaptive_avg_pool3d_backward

-

adaptive_avg_pool3d_backward_npu

-

708

-

adaptive_max_pool2d.out

-

adaptive_max_pool2d_out_npu

-

709

-

adaptive_max_pool2d

-

adaptive_max_pool2d_npu

-

710

-

adaptive_max_pool2d_backward.grad_input

-

adaptive_max_pool2d_backward_out_npu

-

711

-

adaptive_max_pool2d_backward

-

adaptive_max_pool2d_backward_npu

-

712

-

avg_pool2d.out

-

avg_pool2d_out_npu

-

713

-

avg_pool2d

-

avg_pool2d_npu

-

714

-

avg_pool2d_backward.grad_input

-

avg_pool2d_backward_out_npu

-

715

-

avg_pool2d_backward

-

avg_pool2d_backward_npu

-

716

-

avg_pool3d.out

-

avg_pool3d_out_npu

-

717

-

avg_pool3d

-

avg_pool3d_npu

-

718

-

avg_pool3d_backward.grad_input

-

avg_pool3d_backward_out_npu

-

719

-

avg_pool3d_backward

-

avg_pool3d_backward_npu

-

720

-

max_pool2d_with_indices.out

-

max_pool2d_with_indices_out_npu

-

721

-

max_pool2d_with_indices

-

max_pool2d_with_indices_npu

-

722

-

max_pool2d_with_indices_backward.grad_input

-

max_pool2d_with_indices_backward_out_npu

-

723

-

max_pool2d_with_indices_backward

-

max_pool2d_with_indices_backward_npu

-

724

-

max_pool3d_with_indices.out

-

max_pool3d_with_indices_out_npu

-

725

-

max_pool3d_with_indices

-

max_pool3d_with_indices_npu

-

726

-

max_pool3d_with_indices_backward.grad_input

-

max_pool3d_with_indices_backward_out_npu

-

727

-

max_pool3d_with_indices_backward

-

max_pool3d_with_indices_backward_npu

-

728

-

max_unpool2d.out

-

max_unpool2d_out_npu

-

729

-

max_unpool2d

-

max_unpool2d_npu

-

730

-

max_unpool2d_backward.grad_input

-

max_unpool2d_backward_out_npu

-

731

-

max_unpool2d_backward

-

max_unpool2d_backward_npu

-

732

-

max_unpool3d.out

-

max_unpool3d_out_npu

-

733

-

max_unpool3d

-

max_unpool3d_npu

-

734

-

max_unpool3d_backward.grad_input

-

max_unpool3d_backward_out_npu

-

735

-

max_unpool3d_backward

-

max_unpool3d_backward_npu

-

736

-

reflection_pad2d.out

-

reflection_pad2d_out_npu

-

737

-

reflection_pad2d

-

reflection_pad2d_npu

-

738

-

reflection_pad2d_backward.grad_input

-

reflection_pad2d_backward_out_npu

-

739

-

reflection_pad2d_backward

-

reflection_pad2d_backward_npu

-

740

-

replication_pad2d.out

-

replication_pad2d_out_npu

-

741

-

replication_pad2d

-

replication_pad2d_npu

-

742

-

replication_pad2d_backward.grad_input

-

replication_pad2d_backward_out_npu

-

743

-

replication_pad2d_backward

-

replication_pad2d_backward_npu

-

744

-

upsample_linear1d.out

-

upsample_linear1d_out_npu

-

745

-

upsample_linear1d

-

upsample_linear1d_npu

-

746

-

upsample_linear1d_backward

-

upsample_linear1d_backward_npu

-

747

-

upsample_bilinear2d.out

-

upsample_bilinear2d_out_npu

-

748

-

upsample_bilinear2d

-

upsample_bilinear2d_npu

-

749

-

upsample_bilinear2d_backward.grad_input

-

upsample_bilinear2d_backward_out_npu

-

750

-

upsample_bilinear2d_backward

-

upsample_bilinear2d_backward_npu

-

751

-

upsample_bicubic2d.out

-

upsample_bicubic2d_out_npu

-

752

-

upsample_bicubic2d

-

upsample_bicubic2d_npu

-

753

-

upsample_bicubic2d_backward.grad_input

-

upsample_bicubic2d_backward_out_npu

-

754

-

upsample_bicubic2d_backward

-

upsample_bicubic2d_backward_npu

-

755

-

upsample_trilinear3d.out

-

upsample_trilinear3d_out_npu

-

756

-

upsample_trilinear3d

-

upsample_trilinear3d_npu

-

757

-

upsample_trilinear3d_backward.grad_input

-

upsample_trilinear3d_backward_out_npu

-

758

-

upsample_trilinear3d_backward

-

upsample_trilinear3d_backward_npu

-

759

-

upsample_nearest1d.out

-

upsample_nearest1d_out_npu

-

760

-

upsample_nearest1d

-

upsample_nearest1d_npu

-

761

-

upsample_nearest1d_backward.grad_input

-

upsample_nearest1d_backward_out_npu

-

762

-

upsample_nearest1d_backward

-

upsample_nearest1d_backward_npu

-

763

-

upsample_nearest2d.out

-

upsample_nearest2d_out_npu

-

764

-

upsample_nearest2d

-

upsample_nearest2d_npu

-

765

-

upsample_nearest2d_backward.grad_input

-

upsample_nearest2d_backward_out_npu

-

766

-

upsample_nearest2d_backward

-

upsample_nearest2d_backward_npu

-

767

-

upsample_nearest3d.out

-

upsample_nearest3d_out_npu

-

768

-

upsample_nearest3d

-

upsample_nearest3d_npu

-

769

-

upsample_nearest3d_backward.grad_input

-

upsample_nearest3d_backward_out_npu

-

770

-

upsample_nearest3d_backward

-

upsample_nearest3d_backward_npu

-

771

-

sigmoid_backward.grad_input

-

sigmoid_backward_out_npu

-

772

-

sigmoid_backward

-

sigmoid_backward_npu

-

773

-

tanh_backward.grad_input

-

tanh_backward_out_npu

-

774

-

tanh_backward

-

tanh_backward_npu

-

775

-

slow_conv_transpose2d.out

-

slow_conv_transpose2d_out_npu

-

776

-

slow_conv_transpose2d

-

slow_conv_transpose2d_npu

-

777

-

slow_conv_transpose2d_backward.grad_output

-

slow_conv_transpose2d_backward_out_npu

-

778

-

slow_conv_transpose2d_backward.output_mask

-

slow_conv_transpose2d_backward_npu

-

779

-

thnn_conv2d.out

-

thnn_conv2d_out_npu

-

780

-

thnn_conv2d

-

thnn_conv2d_npu

-

781

-

thnn_conv2d_forward.output

-

thnn_conv2d_forward_out_npu

-

782

-

thnn_conv2d_forward

-

thnn_conv2d_forward_npu

-

783

-

thnn_conv2d_backward.output_mask

-

thnn_conv2d_backward_npu

-

784

-

thnn_conv_depthwise2d.out

-

thnn_conv_depthwise2d_out_npu

-

785

-

thnn_conv_depthwise2d

-

thnn_conv_depthwise2d_npu

-

786

-

thnn_conv_depthwise2d_forward.out

-

thnn_conv_depthwise2d_forward_out_npu

-

787

-

thnn_conv_depthwise2d_forward

-

thnn_conv_depthwise2d_forward_npu

-

788

-

thnn_conv_depthwise2d_backward.grad_input

-

thnn_conv_depthwise2d_backward_out_npu

-

789

-

thnn_conv_depthwise2d_backward.output_mask

-

thnn_conv_depthwise2d_backward_npu

-

790

-

slow_conv3d.out

-

slow_conv3d_out_npu

-

791

-

slow_conv3d

-

slow_conv3d_npu

-

792

-

slow_conv3d_forward.output

-

slow_conv3d_forward_out_npu

-

793

-

slow_conv3d_forward

-

slow_conv3d_forward_npu

-

794

-

slow_conv_dilated2d

-

slow_conv_dilated2d_npu

-

795

-

slow_conv_dilated2d_backward

-

slow_conv_dilated2d_backward_npu

-

796

-

col2im.out

-

im2col_backward_out_npu

-

797

-

col2im

-

im2col_backward_npu

-

798

-

col2im_backward.grad_input

-

im2col_out_npu

-

799

-

col2im_backward

-

im2col_npu

-

800

-

im2col.out

-

im2col_out_npu

-

801

-

im2col

-

im2col_npu

-

802

-

im2col_backward.grad_input

-

im2col_backward_out_npu

-

803

-

im2col_backward

-

im2col_backward_npu

-

804

-

isfinite

-

isfinite_npu

-
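The mapping above is transparent to model code: once a tensor lives on the Ascend device, calling the standard PyTorch API dispatches to the adapted kernel in the right-hand column. The following is a minimal sketch, assuming the adapted PyTorch build described in this guide is installed; the `npu()` helper and `torch.npu` device utilities are provided by the Ascend adapter and may differ slightly across versions:

```python
import torch

# Minimal sketch: standard PyTorch calls on an NPU tensor dispatch to the
# adapted kernels listed above (e.g. clamp -> clamp_*_npu, matmul -> matmul_npu).
torch.npu.set_device(0)                  # select an Ascend AI Processor (adapter API)
x = torch.randn(4, 4).npu()              # move the tensor to the NPU
y = torch.clamp(x, min=-1.0, max=1.0)    # rows 85-91: clamp variants
z = torch.matmul(y, y)                   # rows 221-222: matmul / matmul.out
print(z.cpu())                           # copy back to the host for printing
```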
## Custom PyTorch Operators Developed by Ascend

| No. | PyTorch Operator (Developed by Ascend) | Ascend Adapted Operator |
| ---- | ---- | ---- |
| 1 | npu_convolution_transpose | npu_convolution_transpose |
| 2 | npu_conv_transpose2d | conv_transpose2d_npu |
| 3 | npu_convolution_transpose_backward | npu_convolution_transpose_backward |
| 4 | npu_conv_transpose2d_backward | conv_transpose2d_backward_npu |
| 5 | npu_conv_transpose3d_backward | conv_transpose3d_backward_npu |
| 6 | npu_convolution | npu_convolution |
| 7 | npu_convolution_backward | npu_convolution_backward |
| 8 | npu_convolution_double_backward | npu_convolution_double_backward |
| 9 | npu_conv2d | conv2d_npu |
| 10 | npu_conv2d.out | conv2d_out_npu |
| 11 | npu_conv2d_backward | conv2d_backward_npu |
| 12 | npu_conv3d | conv3d_npu |
| 13 | npu_conv3d.out | conv3d_out_npu |
| 14 | npu_conv3d_backward | conv3d_backward_npu |
| 15 | one_ | one_npu_ |
| 16 | npu_sort_v2.out | sort_without_indices_out_npu |
| 17 | npu_sort_v2 | sort_without_indices_npu |
| 18 | npu_format_cast | format_cast_npu |
| 19 | npu_format_cast_.acl_format | format_cast_npu_ |
| 20 | npu_format_cast_.src | format_cast_npu_ |
| 21 | npu_transpose_to_contiguous | transpose_to_contiguous_npu |
| 22 | npu_transpose | transpose_npu |
| 23 | npu_transpose.out | transpose_out_npu |
| 24 | npu_broadcast | broadcast_npu |
| 25 | npu_broadcast.out | broadcast_out_npu |
| 26 | npu_dtype_cast | dtype_cast_npu |
| 27 | npu_dtype_cast_.Tensor | dtype_cast_npu_ |
| 28 | npu_roi_alignbk | roi_align_backward_npu |
| 29 | empty_with_format | empty_with_format_npu |
| 30 | empty_with_format.names | empty_with_format_npu |
| 31 | copy_memory_ | copy_memory_npu_ |
| 32 | npu_one_hot | one_hot_npu |
| 33 | npu_stride_add | stride_add_npu |
| 34 | npu_softmax_cross_entropy_with_logits | softmax_cross_entropy_with_logits_npu |
| 35 | npu_softmax_cross_entropy_with_logits_backward | softmax_cross_entropy_with_logits_backward_npu |
| 36 | npu_ps_roi_pooling | ps_roi_pooling_npu |
| 37 | npu_ps_roi_pooling_backward | ps_roi_pooling_backward_npu |
| 38 | npu_roi_align | roi_align_npu |
| 39 | npu_nms_v4 | nms_v4_npu |
| 40 | npu_lstm | lstm_npu |
| 41 | npu_lstm_backward | lstm_backward_npu |
| 42 | npu_iou | iou_npu |
| 43 | npu_ptiou | ptiou_npu |
| 44 | npu_nms_with_mask | nms_with_mask_npu |
| 45 | npu_pad | pad_npu |
| 46 | npu_bounding_box_encode | bounding_box_encode_npu |
| 47 | npu_bounding_box_decode | bounding_box_decode_npu |
| 48 | npu_gru | gru_npu |
| 49 | npu_gru_backward | gru_backward_npu |
| 50 | npu_set_.source_Storage_storage_offset_format | set_npu_ |
| 51 | npu_random_choice_with_mask | random_choice_with_mask_npu |
| 52 | npu_batch_nms | batch_nms_npu |
| 53 | npu_slice | slice_npu |
| 54 | npu_slice.out | slice_out_npu |
| 55 | npu_dropoutV2 | dropout_v2_npu |
| 56 | npu_dropoutV2_backward | dropout_v2_backward_npu |
| 57 | _npu_dropout | _dropout_npu |
| 58 | _npu_dropout_inplace | _dropout_npu_inplace |
| 59 | npu_dropout_backward | dropout_backward_npu |
| 60 | npu_indexing | indexing_npu |
| 61 | npu_indexing.out | indexing_out_npu |
| 62 | npu_ifmr | ifmr_npu |
| 63 | npu_max.dim | max_v1_npu |
| 64 | npu_max.names_dim | max_v1_npu |
| 65 | npu_scatter | scatter_npu |
| 66 | npu_max_backward | max_backward_npu |
| 67 | npu_apply_adam | apply_adam_npu |
| 68 | npu_layer_norm_eval | layer_norm_eval_npu |
| 69 | npu_alloc_float_status | alloc_float_status_npu |
| 70 | npu_get_float_status | get_float_status_npu |
| 71 | npu_clear_float_status | clear_float_status_npu |
| 72 | npu_confusion_transpose | confusion_transpose_npu |
| 73 | npu_confusion_transpose_backward | confusion_transpose_backward_npu |
| 74 | npu_bmmV2 | bmm_v2_npu |
| 75 | fast_gelu | fast_gelu_npu |
| 76 | fast_gelu_backward | fast_gelu_backward_npu |
| 77 | npu_sub_sample | sub_sample_npu |
| 78 | npu_deformable_conv2d | deformable_conv2d_npu |
| 79 | npu_deformable_conv2dbk | deformable_conv2d_backward_npu |
| 80 | npu_mish | mish_npu |
| 81 | npu_anchor_response_flags | anchor_response_flags_npu |
| 82 | npu_yolo_boxes_encode | yolo_boxes_encode_npu |
| 83 | npu_grid_assign_positive | grid_assign_positive_npu |
| 84 | npu_mish_backward | mish_backward_npu |
| 85 | npu_normalize_batch | normalize_batch_npu |
| 86 | npu_masked_fill_range | masked_fill_range_npu |
| 87 | npu_linear | linear_npu |
| 88 | npu_linear_backward | linear_backward_npu |
| 89 | npu_bert_apply_adam | bert_apply_adam_npu |
| 90 | npu_giou | giou_npu |
| 91 | npu_giou_backward | giou_backward_npu |
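Unlike the official operators, these customized operators are invoked directly by the names in the left-hand column rather than through an existing torch API. The sketch below assumes they are exposed under the `torch` namespace, as the `npu_*` names suggest; the exact namespace and signatures depend on the adapter version installed, so treat the arguments as illustrative rather than authoritative:

```python
import torch

# Sketch only: custom operators are called by the names in the left-hand
# column; argument lists below are illustrative, not authoritative.
x = torch.randn(8, 1024).npu()
y = torch.fast_gelu(x)                  # row 75: dispatches to fast_gelu_npu

labels = torch.tensor([0, 2, 1]).npu()
one_hot = torch.npu_one_hot(labels, 3)  # row 32: dispatches to one_hot_npu
```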
# Supported ONNX Operator List

- [Abs](#Absmd)
- [Acos](#Acosmd)
- [Acosh](#Acoshmd)
- [AdaptiveAvgPool2D](#AdaptiveAvgPool2Dmd)
- [AdaptiveMaxPool2D](#AdaptiveMaxPool2Dmd)
- [Add](#Addmd)
- [Addcmul](#Addcmulmd)
- [AffineGrid](#AffineGridmd)
- [And](#Andmd)
- [Argmax](#Argmaxmd)
- [Argmin](#Argminmd)
- [AscendRequantS16](#AscendRequantS16md)
- [AscendRequant](#AscendRequantmd)
- [AscendQuant](#AscendQuantmd)
- [AscendDequantS16](#AscendDequantS16md)
- [AscendDequant](#AscendDequantmd)
- [AscendAntiQuant](#AscendAntiQuantmd)
- [Asin](#Asinmd)
- [Asinh](#Asinhmd)
- [Atan](#Atanmd)
- [Atanh](#Atanhmd)
- [AveragePool](#AveragePoolmd)
- [BatchNormalization](#BatchNormalizationmd)
- [BatchMatMul](#BatchMatMulmd)
- [BatchMultiClassNMS](#BatchMultiClassNMSmd)
- [BitShift](#BitShiftmd)
- [Cast](#Castmd)
- [Ceil](#Ceilmd)
- [Celu](#Celumd)
- [Concat](#Concatmd)
- [Clip](#Clipmd)
- [ConvTranspose](#ConvTransposemd)
- [Cumsum](#Cumsummd)
- [Conv](#Convmd)
- [Compress](#Compressmd)
- [Constant](#Constantmd)
- [ConstantOfShape](#ConstantOfShapemd)
- [Cos](#Cosmd)
- [Cosh](#Coshmd)
- [DeformableConv2D](#DeformableConv2Dmd)
- [Det](#Detmd)
- [DepthToSpace](#DepthToSpacemd)
- [Div](#Divmd)
- [Dropout](#Dropoutmd)
- [Elu](#Elumd)
- [EmbeddingBag](#EmbeddingBagmd)
- [Equal](#Equalmd)
- [Erf](#Erfmd)
- [Exp](#Expmd)
- [Expand](#Expandmd)
- [EyeLike](#EyeLikemd)
- [Flatten](#Flattenmd)
- [Floor](#Floormd)
- [Gather](#Gathermd)
- [GatherND](#GatherNDmd)
- [GatherElements](#GatherElementsmd)
- [Gemm](#Gemmmd)
- [GlobalAveragePool](#GlobalAveragePoolmd)
- [GlobalLpPool](#GlobalLpPoolmd)
- [GlobalMaxPool](#GlobalMaxPoolmd)
- [Greater](#Greatermd)
- [GreaterOrEqual](#GreaterOrEqualmd)
- [HardSigmoid](#HardSigmoidmd)
- [hardmax](#hardmaxmd)
- [HardSwish](#HardSwishmd)
- [Identity](#Identitymd)
- [If](#Ifmd)
- [InstanceNormalization](#InstanceNormalizationmd)
- [Less](#Lessmd)
- [LeakyRelu](#LeakyRelumd)
- [LessOrEqual](#LessOrEqualmd)
- [Log](#Logmd)
- [LogSoftMax](#LogSoftMaxmd)
- [LpNormalization](#LpNormalizationmd)
- [LpPool](#LpPoolmd)
- [LRN](#LRNmd)
- [LSTM](#LSTMmd)
- [MatMul](#MatMulmd)
- [Max](#Maxmd)
- [MaxPool](#MaxPoolmd)
- [MaxRoiPool](#MaxRoiPoolmd)
- [MaxUnpool](#MaxUnpoolmd)
- [Mean](#Meanmd)
- [MeanVarianceNormalization](#MeanVarianceNormalizationmd)
- [Min](#Minmd)
- [Mod](#Modmd)
- [Mul](#Mulmd)
- [Multinomial](#Multinomialmd)
- [Neg](#Negmd)
- [NonMaxSuppression](#NonMaxSuppressionmd)
- [NonZero](#NonZeromd)
- [Not](#Notmd)
- [OneHot](#OneHotmd)
- [Or](#Ormd)
- [RandomNormalLike](#RandomNormalLikemd)
- [RandomUniformLike](#RandomUniformLikemd)
- [RandomUniform](#RandomUniformmd)
- [Range](#Rangemd)
- [Reciprocal](#Reciprocalmd)
- [ReduceL1](#ReduceL1md)
- [ReduceL2](#ReduceL2md)
- [ReduceLogSum](#ReduceLogSummd)
- [ReduceLogSumExp](#ReduceLogSumExpmd)
- [ReduceMin](#ReduceMinmd)
- [ReduceMean](#ReduceMeanmd)
- [ReduceProd](#ReduceProdmd)
- [ReduceSumSquare](#ReduceSumSquaremd)
- [Resize](#Resizemd)
- [Relu](#Relumd)
- [ReduceSum](#ReduceSummd)
- [ReduceMax](#ReduceMaxmd)
- [Reshape](#Reshapemd)
- [ReverseSequence](#ReverseSequencemd)
- [RoiExtractor](#RoiExtractormd)
- [RoiAlign](#RoiAlignmd)
- [Round](#Roundmd)
- [PRelu](#PRelumd)
- [Scatter](#Scattermd)
- [ScatterElements](#ScatterElementsmd)
- [ScatterND](#ScatterNDmd)
- [Shrink](#Shrinkmd)
- [Selu](#Selumd)
- [Shape](#Shapemd)
- [Sigmoid](#Sigmoidmd)
- [Slice](#Slicemd)
- [Softmax](#Softmaxmd)
- [Softsign](#Softsignmd)
- [Softplus](#Softplusmd)
- [SpaceToDepth](#SpaceToDepthmd)
- [Split](#Splitmd)
- [Sqrt](#Sqrtmd)
- [Squeeze](#Squeezemd)
- [Sub](#Submd)
- [Sign](#Signmd)
- [Sin](#Sinmd)
- [Sinh](#Sinhmd)
- [Size](#Sizemd)
- [Sum](#Summd)
- [Tanh](#Tanhmd)
- [TfIdfVectorizer](#TfIdfVectorizermd)
- [Tile](#Tilemd)
- [ThresholdedRelu](#ThresholdedRelumd)
- [TopK](#TopKmd)
- [Transpose](#Transposemd)
- [Pad](#Padmd)
- [Pow](#Powmd)
- [Unsqueeze](#Unsqueezemd)
- [Xor](#Xormd)
- [Where](#Wheremd)

## Abs

### Function

Computes the element-wise absolute value of the input tensor.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32, double, int32, int64

**Outputs**

One output

y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
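A boundary like the one above can be exercised with a one-node ONNX graph. The following is a minimal sketch using the standard `onnx` Python helpers (not part of the original list); opset 11 is chosen arbitrarily from the supported range:

```python
import onnx
from onnx import helper, TensorProto

# One-node graph exercising Abs within its documented boundary.
node = helper.make_node("Abs", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node], "abs_graph",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 3])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 3])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)  # opset 11 lies inside the supported v8-v13 range
```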

## Acos

### Function

Computes the element-wise arccosine of the input tensor.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32, double

**Outputs**

One output

y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Acosh

### Function

Computes the element-wise inverse hyperbolic cosine of the input tensor.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32, double

**Outputs**

One output

y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## AdaptiveAvgPool2D

### Function

Performs 2D adaptive average pooling on the input.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32

**Attributes**

One attribute

output\_size: an array of ints specifying the H/W shape of the output

**Outputs**

One output

y: a tensor with the same data type as x

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## AdaptiveMaxPool2D

### Function

Performs 2D adaptive max pooling on the input.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32, float64

**Attributes**

One attribute

output\_size: an array of ints specifying the H/W shape of the output

**Outputs**

Two outputs

y: a tensor with the same data type as x

argmax: a tensor. Data types: int32, int64

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## Add

### Function

Element-wise addition.

### Boundary

**Inputs**

Two inputs

A: a tensor. Data types: int8, int16, int32, int64, uint8, float32, float16, double

B: a tensor with the same data type as A

**Outputs**

C: a tensor with the same data type as A

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Addcmul

### Function

Computes \(x1 \* x2\) \* value + input\_data element-wise.

### Boundary

**Inputs**

Four inputs

input\_data: a tensor. Data types: float16, float32, int32, int8, uint8

x1: a tensor with the same data type as input\_data

x2: a tensor with the same data type as input\_data

value: a tensor with the same data type as input\_data

**Outputs**

One output

y: a tensor with the same data type as the inputs

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## AffineGrid

### Function

Generates a sampling grid given a batch of affine matrices.

### Boundary

**Inputs**

Two inputs

theta: a tensor. Data types: float16, float32

output\_size: a tensor. Data type: int32

**Attributes**

One attribute

align\_corners: bool

**Outputs**

One output

y: a tensor. Data type: int

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## And

### Function

Logical AND.

### Boundary

**Inputs**

Two inputs

x1: a tensor. Data type: bool

x2: a tensor. Data type: bool

**Outputs**

One output

y: a tensor with the same type and shape as x

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Argmax

### Function

Returns the indices of the maximum values along the specified axis.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32

**Outputs**

One output

y: a tensor of the indices of the maximum values, with one dimension fewer than x. Data type: int32

**Attributes**

axis: required; the axis along which the maximum indices are computed. Data type: int32. Value range: \[-len\(x.shape\), len\(x.shape\)-1\]

keep\_dim: optional; defaults to 1; 1 or 0 is supported

**Restrictions**

float32 input is not supported when the ATC parameter --precision\_mode=must\_keep\_origin\_dtype is used

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
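For reference, a sketch of the corresponding node; note that in the ONNX operator set the canonical spelling is `ArgMax` and the keep-dimension attribute is spelled `keepdims`:

```python
from onnx import helper

# ArgMax along axis 1, keeping the reduced dimension (keep_dim above).
node = helper.make_node("ArgMax", inputs=["x"], outputs=["y"], axis=1, keepdims=1)
```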

## Argmin

### Function

Returns the indices of the minimum values of the input tensor along the specified axis.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32

**Outputs**

One output

y: a tensor. Data type: int64

**Attributes**

axis: data type int; the axis to compute along. Value range: \[-r, r-1\], where r is the rank of the input

**Restrictions**

float32 input is not supported when the ATC parameter --precision\_mode=must\_keep\_origin\_dtype is used

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## AscendRequantS16

### Function

Requantization operator.

### Boundary

**Inputs**

Two required inputs and one optional input

x0: a tensor. Data type: int16

req\_scale: a tensor. Data type: uint64

x1: a tensor. Data type: int16

**Attributes**

Two attributes

dual\_output: bool

relu\_flag: bool

**Outputs**

Two outputs

y0: a tensor. Data type: int8

y1: a tensor. Data type: int16

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## AscendRequant

### Function

Requantization operator.

### Boundary

**Inputs**

Two inputs

x0: a tensor. Data type: int32

req\_scale: a tensor. Data type: uint64

**Attributes**

One attribute

relu\_flag: bool

**Outputs**

One output

y: a tensor. Data type: int8

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## AscendQuant

### Function

Quantization operator.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32

**Attributes**

Four attributes

offset: float

scale: float

sqrt\_mode: bool

round\_mode: string

**Outputs**

One output

y: a tensor. Data type: int8

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## AscendDequantS16

### Function

Dequantization operator.

### Boundary

**Inputs**

Two required inputs and one optional input

x0: a tensor. Data type: int32

req\_scale: a tensor. Data type: uint64

x1: a tensor. Data type: int16

**Attributes**

One attribute

relu\_flag: bool

**Outputs**

One output

y: a tensor. Data type: int16

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## AscendDequant

### Function

Dequantization operator.

### Boundary

**Inputs**

Two inputs

x0: a tensor. Data type: int32

deq\_scale: a tensor. Data types: uint64, float16

**Attributes**

sqrt\_mode: bool

relu\_flag: bool

dtype: float

**Outputs**

One output

y: a tensor. Data types: float16, float

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## AscendAntiQuant

### Function

Dequantization operator.

### Boundary

**Inputs**

One input

x: a tensor. Data type: int8

**Attributes**

offset: float

scale: float

sqrt\_mode: bool

round\_mode: string

**Outputs**

One output

y: a tensor. Data types: float16, float

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## Asin

### Function

Computes the element-wise arcsine of the input tensor.

### Boundary

**Inputs**

One input

x1: a tensor. Data types: float16, float32, double

**Outputs**

One output

y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Asinh

### Function

Computes the element-wise inverse hyperbolic sine of the input tensor.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32, double

**Outputs**

y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Atan

### Function

Computes the element-wise arctangent of the input tensor.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32, double

**Outputs**

One output

y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Atanh

### Function

Computes the element-wise inverse hyperbolic tangent of the input tensor.

### Boundary

**Inputs**

One input

x: a tensor. Data types: float16, float32, double

**Outputs**

One output

y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## AveragePool

### Function

Average pooling layer.

### Boundary

**Inputs**

X: a tensor. Data types: float16, float32. Format: NCHW

**Outputs**

Y: a tensor. Data types: float16, float32. Format: NCHW

**Attributes**

auto\_pad: optional; NOTSET, SAME\_UPPER, SAME\_LOWER, and VALID are supported

count\_include\_pad: int; not supported yet

kernel\_shape: optional, including:

− kernel\_shape\[0\]: data type int32; the window size along H. Value range: \[1, 32768\]. Default: 1

− kernel\_shape\[1\]: data type int32; the window size along W. Value range: \[1, 32768\]. Default: 1

strides: optional, including:

− strides\[0\]: data type int32; the stride along H. Default: 1

− strides\[1\]: data type int32; the stride along W. Default: 1

pads: optional, including:

− pads\[0\]: data type int32; the top padding. Default: 0

− pads\[1\]: data type int32; the bottom padding. Default: 0

− pads\[2\]: data type int32; the left padding. Default: 0

− pads\[3\]: data type int32; the right padding. Default: 0

ceil\_mode: optional; data type int32; 0 \(floor mode\) or 1 \(ceil mode\). Default: 0

**Restrictions**

If strides\[0\] or strides\[1\] is greater than 63, the computation falls back to the AI CPU and performance degrades.

If kernel\_shape\_H or kernel\_shape\_W is outside \[1, 255\], or kernel\_shape\_H \* kernel\_shape\_W \> 256, the computation falls back to the AI CPU and performance degrades.

1 <= input\_w <= 4096.

When N of the input tensor is a prime number, N must be less than 65535.

The ceil\_mode attribute takes effect only when auto\_pad='NOTSET'.

float32 input is not supported when the ATC parameter --precision\_mode=must\_keep\_origin\_dtype is used.

The SAME\_UPPER and SAME\_LOWER values of auto\_pad both map to the TBE SAME attribute; the TBE operator does not distinguish the padding position based on this attribute, which may cause accuracy issues.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
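The following is a sketch of a node that stays inside the restrictions above (strides well under 63, a small kernel, explicit pads with auto_pad left at its NOTSET default so that ceil_mode takes effect); attribute values are illustrative:

```python
from onnx import helper

# AveragePool over an NCHW input; values chosen to stay within the
# documented boundaries (kernel/stride/pads well inside their ranges).
node = helper.make_node(
    "AveragePool", inputs=["X"], outputs=["Y"],
    kernel_shape=[3, 3],
    strides=[2, 2],
    pads=[1, 1, 1, 1],  # begin/end padding along H and W
    ceil_mode=0,        # floor mode; only honored when auto_pad is NOTSET
)
```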

## BatchNormalization

### Function

Normalizes a tensor.

### Boundary

**Inputs**

Five inputs

X: a 4D tensor. Data types: float16, float32

scale: a float32 tensor specifying the scale factor

B: a float32 tensor specifying the offset

mean: a float32 tensor specifying the mean

var: a float32 tensor specifying the variance

**Outputs**

Five outputs

Y: the normalized tensor. Data types: float16, float32

mean: the mean

var: the variance

saved\_mean: the saved mean, used during training to speed up gradient computation

saved\_var: the saved variance, used during training to speed up gradient computation

**Attributes**

epsilon: optional; data type float32; a small value added to var to avoid division by zero. Default: 0.0001

momentum: float32; not supported yet

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## BatchMatMul

### Function

Performs matrix multiplication of the two inputs.

### Boundary

**Inputs**

Two inputs

x1: a tensor. Data types: float16, float, int32

x2: a tensor. Data types: float16, float, int32

**Attributes**

Two attributes

adj\_x1: bool

adj\_x2: bool

**Outputs**

One output

y: a tensor. Data types: float16, float, int32

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## BatchMultiClassNMS

### Function

Computes NMS for the input boxes and scores.

### Boundary

**Inputs**

Two required inputs and two optional inputs

boxes: a tensor. Data type: float16

scores: a tensor. Data type: float16

clip\_window: a tensor. Data type: float16

num\_valid\_boxes: a tensor. Data type: int32

**Attributes**

Six attributes

score\_threshold: float

iou\_threshold: float

max\_size\_per\_class: int

max\_total\_size: int

change\_coordinate\_frame: bool

transpose\_box: bool

**Outputs**

Four outputs

nmsed\_boxes: a tensor. Data type: float16

nmsed\_scores: a tensor. Data type: float16

nmsed\_classes: a tensor. Data type: float16

nmsed\_num: a tensor. Data type: float16

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## BitShift

### Function

Element-wise bit-shift operator.

### Boundary

**Inputs**

Two inputs

x: a tensor; the input to be shifted

y: a tensor; the shift amounts

**Outputs**

z: a tensor; the shifted result

**Attributes**

direction: data type string; required; the shift direction, either "RIGHT" or "LEFT"

**Restrictions**

UINT16, UINT32, and UINT64 are not supported when direction="LEFT"

### Supported ONNX Versions

Opset v11/v12/v13

## Cast

### Function

Converts the input data to the specified type.

### Boundary

**Inputs**

One input

x: a tensor

**Outputs**

y: a tensor whose data type is the one specified by the attribute. Data types: bool, float16, float32, int8, int32, uint8, etc.

**Attributes**

to: data type int; required; the target data type, within the supported type range

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
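The `to` attribute carries the integer code of the target element type; a sketch using the standard TensorProto enum:

```python
from onnx import helper, TensorProto

# Cast x to float16; "to" is the TensorProto data-type code.
node = helper.make_node("Cast", inputs=["x"], outputs=["y"], to=TensorProto.FLOAT16)
```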

Ceil

- -### 功能 - -对输入张量向上取整 - -### 边界 - -【输入】 - -一个输入 - -x:一个tensor,数据类型:float16、float32、double - -【输出】 - -一个输出 - -y:一个tensor,数据类型和shape与输入一致 - -### 支持的ONNX版本 - -Opset v8/v9/v10/v11/v12/v13 - -

Celu

- -### 功能 - -连续可微的指数线性单位:对输入张量X按元素执行线性单位,使用公式: - -max\(0,x\) + min\(0,alpha\*\(exp\(x/alpha\)-1\)\) - -### 边界 - -【输入】 - -X:tensor\(float\) - -【输出】 - -Y:tensor\(float\) - -【属性】 - -alpha:float,默认值:1.0 - -### 支持的ONNX版本 - -Opset v12/v13 - -

Concat

- -### 功能 - -对多个张量Concat - -### 边界 - -【输入】 - -inputs:多个输入张量,数据类型:float16、float32、int32、uint8、int16、int8、int64、qint8、quint8、qint32、uint16、uint32、uint64、qint16、quint16 - -【输出】 - -concat\_result:张量,与输入张量类型一致 - -【属性】 - -axis:指定哪一个轴进行concat操作,负数表示从后往前对维度计数,取值范围为\[-r, r - 1\],r=rank\(inputs\) - -### 支持的ONNX版本 - -Opset v8/v9/v10/v11/v12/v13 - -

Clip

- -### 功能 - -将张量值剪辑到指定的最小值和最大值之间 - -### 边界 - -【输入】 - -三个输入 - -X :一个张量,数据类型:float16、float32、int32 - -min:一个scalar - -max:一个scalar - -【输出】 - -一个输出 - -Y:一个张量,剪辑后的输出,数据类型和shape与输入一致 - -### 支持的ONNX版本 - -Opset v8/v9/v10/v11/v12/v13 - -

ConvTranspose

- -### 功能 - -转置卷积 - -### 边界 - -【输入】 - -3个输入 - -x:tensor,数据类型:float16、float32 - -w:tensor,数据类型:float16、float32 - -b:可选tensor,数据类型:float16、float32 - -【输出】 - -一个输出 - -y:一个张量,和输入x同样的type和shape - -【属性】 - -auto\_pad:str,默认为NOTSET,含义:显式使用padding的方式 - -dilations:ints,默认为全1序列,含义:filter的每轴空洞值 - -group:int,默认为1,含义:输入通道分组数 - -kernel\_shape:ints,默认为w,含义:卷积核大小 - -output\_padding:ints,默认为全0数组,含义:指定padding值 - -output\_shape:ints,根据pad自动计算,含义:输出shape - -pads:ints,默认为全0矩阵,含义:每根轴指定pad值 - -strides:ints,默认为全1矩阵,含义:每根轴的stride值 - -【约束】 - -目前只支持2D的转置卷积,3D及以上暂不支持 - -dilations只支持1 - -output\_shape支持限制:实现部分功能。现在支持output shape的大小,小于原始输入大小,但是不支持大于原始输入大小 - -算子不支持atc工具参数--precision\_mode=must\_keep\_origin\_dtype时float32,float64的输入 - -属性auto\_pad不支持 "SAME\_UPPER","SAME\_LOWER" - -### 支持的ONNX版本 - -Opset v8/v9/v10/v11/v12/v13 - -

## Cumsum

### Function

Computes the cumulative sum of the input tensor along the given axis.

### Boundary

**Inputs**

Two inputs:

- x: a tensor of type float16, float32, or int32
- axis: an int32 or int64 scalar; defaults to 0; range: [-rank(x), rank(x) - 1]

**Outputs**

One output:

- y: a tensor with the same type as input x

**Attributes**

- exclusive: int; defaults to 0; whether to return the sum excluding the top element
- reverse: int; defaults to 0; whether to sum in the reverse direction

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Conv

### Function

Convolution.

### Boundary

**Inputs**

- X: an input 4D tensor
- W: a weight tensor
- B: optional; the bias, a 1D tensor

**Outputs**

- Y: the convolution output tensor

**Attributes**

- auto\_pad: optional; VALID and NOTSET are supported
- dilations: a list of four integers specifying the dilation rate for dilated convolution; value range for the H and W dimensions: [1, 255]
- group: number of blocked connections from input channels to output channels; both the input channels and output channels must be divisible by "group"; data type: int32; must be set to 1
- pads: a list of four integers specifying the top, bottom, left, and right padding; value range: [0, 255]
- strides: a list of four integers specifying the convolution stride along the height H and width W; value range for the H and W dimensions: [1, 63]; the N and C dimensions are set to 1 by default

**Restrictions**

- For the input tensor, the value range of the W dimension is [1, 4096].
- For the weight tensor, the value range of the H and W dimensions is [1, 255].
- When the output tensor has W == 1 and H == 1, the H and W dimensions of the input tensor and the weight must be the same.
- When the output tensor has W == 1 and H != 1, the operator is not supported.
- float32 and float64 inputs are not supported when the ATC tool parameter --precision\_mode=must\_keep\_origin\_dtype is used.

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Compress

### Function

Slices the input along the specified axis.

### Boundary

**Inputs**

Two inputs:

- input: a tensor of rank greater than or equal to 1; supported types: uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float, string, bool
- condition: a 1D tensor specifying the slices and the elements to select; supported type: bool

**Outputs**

One output:

- output: a tensor with the same type as the input

**Attributes**

- axis: optional, int; the axis along which to slice; if no axis is specified, the input tensor is flattened before slicing; value range: [-r, r - 1], where r is the rank of the input tensor

### Supported ONNX Versions

Opset v9/v11/v12/v13

## Constant

### Function

Constructs a constant tensor node.

### Boundary

**Inputs**

None

**Outputs**

One output:

- Y: the output tensor, identical to the provided tensor value

**Attributes**

- value: the value of the output tensor

**Restrictions**

- sparse\_value: not supported

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ConstantOfShape

### Function

Generates a tensor with the given value and shape.

### Boundary

**Inputs**

- x: a 1D int64 tensor indicating the shape of the output data; all values must be greater than 0

**Outputs**

- y: a tensor whose shape is specified by the input; if the value attribute is set, the output values and data type equal the specified value; otherwise, the output values default to 0 and the data type defaults to float32

**Attributes**

- value: specifies the data and type of the output tensor

**Restrictions**

- x: 1 <= len(shape) <= 8

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Cos

### Function

Computes the cosine of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Cosh

### Function

Computes the hyperbolic cosine of the input tensor.

### Boundary

**Inputs**

One input:

- X1: a tensor of type float16, float, or double

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## DeformableConv2D

### Function

Deformable convolution.

### Boundary

**Inputs**

- X: an input 4D tensor
- filter: a weight tensor
- offsets: the offsets, a 4D tensor
- bias: optional; the bias, a 1D tensor

**Outputs**

- Y: the deformable convolution output tensor

**Attributes**

- auto\_pad: optional; VALID and NOTSET are supported
- dilations: a list of four integers specifying the dilation rate for dilated convolution; value range for the H and W dimensions: [1, 255]
- group: number of blocked connections from input channels to output channels; both the input channels and output channels must be divisible by "group"; data type: int32; must be set to 1
- pads: a list of four integers specifying the top, bottom, left, and right padding; value range: [0, 255]
- strides: a list of four integers specifying the convolution stride along the height H and width W; value range for the H and W dimensions: [1, 63]; the N and C dimensions are set to 1 by default
- data\_format: string; format of the input data; defaults to "NHWC"
- deformable\_groups: number of deformable convolution channel groups; defaults to 1
- modulated: bool; specifies the DeformableConv2D version: true for v2 and false for v1; currently only true is supported

**Restrictions**

- For the input tensor, the value range of the W dimension is [1, 4096 / filter\_width], and the value range of the H dimension is [1, 100000 / filter\_height].
- For the weight tensor, the value range of both the W and H dimensions is [1, 63].
- float32 and float64 inputs are not supported when the ATC tool parameter --precision\_mode=must\_keep\_origin\_dtype is used.

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## Det

### Function

Computes the determinant of square matrices.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## DepthToSpace

### Function

Rearranges data from depth into blocks of spatial data.

### Boundary

**Inputs**

One input:

- input: a tensor in NCHW format; types: float16, float32, double, int32, int64, etc.

**Outputs**

One output:

- output: a tensor of shape [N, C/(blocksize \* blocksize), H \* blocksize, W \* blocksize]

**Attributes**

- blocksize: int, required; specifies the size of the blocks to be moved
- mode: string; specifies whether the rearrangement is depth-column-row (DCR) or column-row-depth (CRD); defaults to DCR

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Div

### Function

Performs element-wise division.

### Boundary

**Inputs**

Two inputs:

- x1: a tensor of type float16, float32, double, int32, or int64
- x2: a tensor of type float16, float32, double, int32, or int64

**Outputs**

One output:

- y: a tensor with the same data type as the inputs

**Restrictions**

- The input and output types must be identical.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Dropout

### Function

Copies or masks the input data.

### Boundary

**Inputs**

One to three inputs:

- data: an input tensor; types: float16, float32, double, etc.
- ratio: optional; types: float16, float32, double, etc.
- training\_mode: optional; type: bool

**Outputs**

One to two outputs:

- output: a tensor
- mask: a tensor

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Elu

### Function

Elu activation function.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

**Attributes**

- alpha: float; defaults to 1.0; the coefficient

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## EmbeddingBag

### Function

Computes the backward output of the embedding function.

### Boundary

**Inputs**

Two required inputs and two optional inputs:

- weight: a tensor of type float32
- indices: a tensor of type int32
- offset: a tensor of type int32
- per\_sample\_weights: a tensor of type float32

**Attributes**

Four attributes:

- mode: string
- scale\_grad\_by\_fraq: bool
- sparse: bool
- include\_last\_offset: bool

**Outputs**

One output:

- y: a tensor of type float32

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## Equal

### Function

Checks whether the two input tensors are equal at each position.

### Boundary

**Inputs**

Two inputs:

- X1: a tensor
- X2: a tensor

**Outputs**

One output:

- y: a tensor of type bool

**Restrictions**

- Inputs X1 and X2 must have the same data type and format; supported data types: bool, uint8, int8, int16, int32, int64, float16, float32, double

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Erf

### Function

Gauss error function.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same data type and format as the input

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Exp

### Function

Computes the exponential of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Expand

### Function

Broadcasts the input tensor to the specified shape.

### Boundary

**Inputs**

Two inputs:

- input: a tensor of type float16 or float32
- shape: a tensor of type int64

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

**Restrictions**

- The model must be modified so that the shape input is a const instead of a placeholder; onnxsimplifier can be used to simplify the model (see the sketch below).

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
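
One way to fold a placeholder shape input into a constant is the onnx-simplifier Python API. This is a hedged sketch under the assumption that constant folding suffices for your model; the file names are placeholders:

```
import onnx
from onnxsim import simplify  # pip install onnx-simplifier

model = onnx.load("model.onnx")          # placeholder input path
model_simplified, ok = simplify(model)   # constant-folds shape computations
assert ok, "the simplified model failed the checker"
onnx.save(model_simplified, "model_sim.onnx")
```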

## EyeLike

### Function

Generates a 2D matrix with ones on the main diagonal and zeros elsewhere.

### Boundary

**Inputs**

One input:

- x: a 2D tensor whose shape is copied for the output

**Outputs**

One output:

- y: a tensor with the same shape as input x

**Attributes**

- dtype: int; specifies the output data type
- k: int; defaults to 0; the index of the diagonal populated with ones; with output y, y[i, i + k] = 1

**Restrictions**

- Only k = 0 is supported.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Flatten

### Function

Flattens a tensor.

### Boundary

**Inputs**

- input: a multidimensional tensor of type int8, uint8, int16, uint16, int32, uint32, int64, uint64, float16, or float32

**Outputs**

- A 2D tensor with the contents of the input tensor

**Attributes**

- axis: int; negative index values are not supported yet

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Floor

### Function

Rounds the input tensor down element-wise.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Gather

### Function

Gathers slices from "x" along the specified axis.

### Boundary

**Inputs**

Two inputs:

- x1: a tensor of type float16, float32, int32, int64, int8, int16, uint8, uint16, uint32, uint64, or bool
- indices: a tensor of type int32 or int64

**Outputs**

One output:

- y: a tensor with the same data type as input x1

**Attributes**

- axis: int; specifies the axis to gather along; value range: [-r, r - 1], where r is the rank of the input data

**Restrictions**

- Negative values in indices are not supported.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## GatherND

### Function

Slices the input data and outputs the result.

### Boundary

**Inputs**

Two inputs:

- data: a tensor of rank r >= 1; types: float16, float32, double, int32, int64, etc.
- indices: an int64 index tensor of rank q >= 1

**Outputs**

One output:

- output: a tensor of rank q + r - indices\_shape[-1] - 1

**Attributes**

- batch\_dims: int; defaults to 0; the number of batch axes

**Restrictions**

- double input is not supported when the ATC tool parameter --precision\_mode=must\_keep\_origin\_dtype is used.

### Supported ONNX Versions

Opset v11/v12/v13

## GatherElements

### Function

Gathers the elements at the index positions to produce the output.

### Boundary

**Inputs**

Two inputs:

- input: a tensor of rank greater than 1; types: float16, float32, double, int32, int64, etc.
- indices: an int32/int64 index tensor

**Outputs**

One output:

- output: a tensor with the same shape as indices

**Attributes**

- axis: int; defaults to 0; specifies the axis to gather along

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Gemm

### Function

General matrix multiplication.

### Boundary

**Inputs**

- A: a 2D matrix tensor of type float16 or float32
- B: a 2D matrix tensor of type float16 or float32
- C: the bias, optional; this input is not supported yet

**Outputs**

- Y: a 2D matrix tensor of type float16 or float32

**Attributes**

- transA: bool; whether A is transposed
- transB: bool; whether B is transposed
- alpha: float; this attribute is not supported yet
- beta: float; this attribute is not supported yet

**Restrictions**

- For opset v8/v9/v10, float32 input is not supported when the ATC tool parameter --precision\_mode=must\_keep\_origin\_dtype is used.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
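
Because alpha, beta, and C are listed as unsupported, a conforming Gemm reduces to a plain (optionally transposed) matrix product. A minimal sketch with assumed names, shapes, and opset:

```
import onnx
from onnx import TensorProto, helper

# A is 2x4 and B is 3x4; transB=1 yields Y = A * B^T with shape 2x3.
# alpha, beta, and the optional C input are omitted, matching the
# restrictions above.
node = helper.make_node("Gemm", inputs=["A", "B"], outputs=["Y"], transA=0, transB=1)

A = helper.make_tensor_value_info("A", TensorProto.FLOAT, [2, 4])
B = helper.make_tensor_value_info("B", TensorProto.FLOAT, [3, 4])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [2, 3])
graph = helper.make_graph([node], "gemm_demo", [A, B], [Y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```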

## GlobalAveragePool

### Function

Global average pooling.

### Boundary

**Inputs**

- X: a tensor of type float16 or float32, in NCHW format

**Outputs**

- Y: the pooled output tensor, with the same data type as X, in NCHW format

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## GlobalLpPool

### Function

Global norm pooling operator.

### Boundary

**Inputs**

Two inputs:

- input: a tensor of type float16 or float32
- p: an optional attribute, int32; defaults to 2

**Outputs**

One output:

- y: the updated tensor data, with the same data type as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## GlobalMaxPool

### Function

Global max pooling operator.

### Boundary

**Inputs**

One input:

- x: the output tensor of the previous node; types: float16, float32, double

**Outputs**

One output:

- output: the pooled tensor

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Greater

### Function

Compares inputs x1 and x2 element-wise and returns true at each position where x1 > x2.

### Boundary

**Inputs**

Two inputs:

- x1: a tensor of type float16, float32, int32, int8, or uint8
- x2: a tensor of type float16, float32, int32, int8, or uint8

**Outputs**

One output:

- y: a tensor of type bool

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## GreaterOrEqual

### Function

Compares inputs x1 and x2 element-wise and returns true at each position where x1 >= x2.

### Boundary

**Inputs**

Two inputs:

- x1: a tensor of type float16, float32, int32, int8, uint8, etc.
- x2: a tensor of type float16, float32, int32, int8, uint8, etc.

**Outputs**

One output:

- y: a tensor of type bool

### Supported ONNX Versions

Opset v8/v12

## HardSigmoid

### Function

HardSigmoid takes one input tensor and produces one output tensor, applying the function y = max(0, min(1, alpha \* x + beta)) element-wise.

### Boundary

**Inputs**

One input:

- X: tensor(float16), tensor(float), tensor(double)

**Outputs**

One output:

- Y: tensor(float16), tensor(float), tensor(double)

**Attributes**

- alpha: float; default: 0.2
- beta: float; default: 0.5

### Supported ONNX Versions

Opset v1/v6/v8/v9/v10/v11/v12/v13

## Hardmax

### Function

Computes the hardmax result: an element is set to 1 if it is the maximum element along the specified axis, and to 0 otherwise.

### Boundary

**Inputs**

One input:

- x: a tensor of rank 2; data types: float16, float32

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

**Attributes**

- axis: int; defaults to -1; specifies the computation axis

**Restrictions**

- The ATC tool parameter --precision\_mode must be set to allow\_fp32\_to\_fp16.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## HardSwish

### Function

HardSwish activation function: y = x \* max(0, min(1, alpha \* x + beta)), where alpha = 1/6 and beta = 0.5.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor of type float16 or float32

### Supported ONNX Versions

Opset v14
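
Since alpha and beta are fixed here, a short NumPy reference makes the formula concrete. This is a hedged sketch of the math only (the function name and sample values are illustrative), not the Ascend kernel:

```
import numpy as np

def hardswish_reference(x: np.ndarray) -> np.ndarray:
    """y = x * max(0, min(1, x/6 + 0.5)), i.e. alpha = 1/6, beta = 0.5."""
    return x * np.clip(x / 6.0 + 0.5, 0.0, 1.0)

print(hardswish_reference(np.array([-4.0, -1.0, 0.0, 1.0, 4.0], dtype=np.float32)))
# -4 maps to 0 (gate fully closed); +4 maps to 4 (gate fully open)
```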

## Identity

### Function

Identity operation.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## If

### Function

Conditional control-flow operator.

### Boundary

**Inputs**

One input:

- cond: the condition of the If op

Two attributes:

- else\_branch: the branch executed when the condition is false
- then\_branch: the branch executed when the condition is true

**Outputs**

One or more outputs:

- y: a tensor or a sequence of tensors

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## InstanceNormalization

### Function

Computes y = scale \* (x - mean) / sqrt(variance + epsilon) + B, where mean and variance are computed per instance per channel.

### Boundary

**Inputs**

Three inputs:

- x: a tensor of type float16 or float
- scale: a 1D tensor whose length equals the size of the C axis of x, with the same dtype as x
- B: a 1D tensor whose length equals the size of the C axis of x, with the same dtype as x

**Outputs**

One output:

- y: a tensor with the same shape and dtype as input x

**Attributes**

- epsilon: float; defaults to 1e-05; added to avoid division by zero

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Less

### Function

Compares inputs x1 and x2 element-wise and returns true at each position where x1 < x2.

### Boundary

**Inputs**

Two inputs:

- x1: a tensor of type float16, float32, int32, int8, or uint8
- x2: a tensor of type float16, float32, int32, int8, or uint8

**Outputs**

One output:

- y: a tensor of type bool

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## LeakyRelu

### Function

Applies the LeakyRelu activation function to the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

**Attributes**

- alpha: float; defaults to 0.01; the leakage coefficient

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## LessOrEqual

### Function

Less-than-or-equal comparison.

### Boundary

**Inputs**

Two inputs:

- x: a tensor of type float16 or float32
- y: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same shape as input x; data type: bool

### Supported ONNX Versions

Opset v12/v13

## Log

### Function

Computes the natural logarithm of the input.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same data type as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## LogSoftMax

### Function

Computes the logsoftmax of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

**Attributes**

- axis: int; specifies the computation axis; value range: [-r, r - 1], where r is the rank of the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## LpNormalization

### Function

Given a matrix, applies LpNormalization along the given axis.

### Boundary

**Inputs**

One input:

- input: tensor(float16), tensor(float)

**Outputs**

One output:

- output: tensor(float16), tensor(float)

**Attributes**

- axis: int; default: -1
- p: int; default: 2

**Restrictions**

- For the auto\_pad attribute values SAME\_UPPER and SAME\_LOWER, the TBE SAME attribute is used uniformly; that is, the TBE operator does not distinguish the padding position based on this attribute, which may cause accuracy issues.

### Supported ONNX Versions

Opset v1/v8/v9/v10/v11/v12/v13

## LpPool

### Function

Lp-norm pooling.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16

**Outputs**

One output:

- y: a tensor of type float16

**Attributes**

- auto\_pad: string; defaults to NOTSET; supported values: NOTSET, SAME\_UPPER, or VALID
- kernel\_shape: required; a list of ints; the kernel size along each axis
- p: int; the norm; defaults to 2
- pads: a list of ints
- strides: a list of ints

### Supported ONNX Versions

Opset v11/v12/v13

## LRN

### Function

Applies local response normalization to the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and format as input x

**Attributes**

- alpha: float; the scaling factor
- beta: float; the exponent
- bias: float
- size: int; the number of channels to sum over; only odd values are supported

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## LSTM

### Function

Computes a one-layer LSTM. This operator is usually supported via custom implementations such as CuDNN.

### Boundary

**Inputs (3 to 8)**

- X: tensor(float16), tensor(float), tensor(double)
- W: tensor(float16), tensor(float), tensor(double)
- R: tensor(float16), tensor(float), tensor(double)
- B: tensor(float16), tensor(float), tensor(double)
- sequence\_lens: tensor(int32)
- initial\_h: tensor(float16), tensor(float), tensor(double)
- initial\_c: tensor(float16), tensor(float), tensor(double)
- p: tensor(float16), tensor(float), tensor(double)

**Outputs (0 to 3)**

- Y: tensor(float16), tensor(float), tensor(double)
- Y\_h: tensor(float16), tensor(float), tensor(double)
- Y\_c: tensor(float16), tensor(float), tensor(double)

**Attributes**

- activation\_alpha: list of floats
- activation\_beta: list of floats
- activations: list of strings
- clip: float
- direction: string; default: forward
- hidden\_size: int
- input\_forget: int; default: 0
- layout: int; default: 0

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## MatMul

### Function

Matrix multiplication.

### Boundary

**Inputs**

Two inputs:

- x1: a 2D tensor of type float16
- x2: a 2D tensor of type float16

**Outputs**

One output:

- y: a 2D tensor of type float16

**Restrictions**

- Only 1- to 6-dimensional inputs are supported.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Max

### Function

Computes the element-wise maximum of the input tensors.

### Boundary

**Inputs**

Multiple inputs (1 to ∞):

- data\_0: a list of tensors; types: float16, float32, int8, int16, int32, etc.

**Outputs**

One output:

- max: a tensor with the same type as the inputs and the broadcast shape

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## MaxPool

### Function

Max pooling.

### Boundary

**Inputs**

- X: a tensor of type float16 or float32, in NCHW format

**Outputs**

- Y: a tensor of type float16 or float32, in NCHW format

**Attributes**

- auto\_pad: optional; SAME\_UPPER, SAME\_LOWER, VALID, and NOTSET are supported
- storage\_order: this attribute is not supported yet
- kernel\_shape: optional, including:
  - kernel\_shape[0]: int32; the window size along the H dimension; value range: [1, 32768]; defaults to 1
  - kernel\_shape[1]: int32; the window size along the W dimension; value range: [1, 32768]; defaults to 1
- strides: optional, including:
  - strides[0]: int32; the stride along the H dimension; defaults to 1
  - strides[1]: int32; the stride along the W dimension; defaults to 1
- pads: optional, including:
  - pads[0]: int32; the top padding; defaults to 0
  - pads[1]: int32; the bottom padding; defaults to 0
  - pads[2]: int32; the left padding; defaults to 0
  - pads[3]: int32; the right padding; defaults to 0
- ceil\_mode: optional, int32; 0 (floor mode) or 1 (ceil mode); defaults to 0

**Restrictions**

- When strides[0] or strides[1] is greater than 63, the AI CPU performs the computation and performance degrades.
- When kernel\_shape\_H or kernel\_shape\_W is outside [1, 255], or kernel\_shape\_H \* kernel\_shape\_W > 256, the AI CPU performs the computation and performance degrades.
- 1 <= input\_w <= 4096.
- When the N dimension of the input tensor is a prime number, N must be less than 65535.
- dilations are not supported for 2D tensor inputs.
- When auto\_pad is VALID, ceil\_mode must be 0.
- float32 input is not supported when the ATC tool parameter --precision\_mode=must\_keep\_origin\_dtype is used.
- The pads and auto\_pad attributes must not be used together (see the sketch below).

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
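
A minimal sketch of a node that stays within these restrictions: explicit pads with auto\_pad left unset, and ceil\_mode 0. The shapes, names, and opset are illustrative assumptions:

```
import onnx
from onnx import TensorProto, helper

# 3x3 window, stride 2, symmetric padding of 1 on H and W, floor rounding.
node = helper.make_node(
    "MaxPool", inputs=["x"], outputs=["y"],
    kernel_shape=[3, 3], strides=[2, 2], pads=[1, 1, 1, 1], ceil_mode=0,
)

x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 16, 56, 56])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 16, 28, 28])
graph = helper.make_graph([node], "maxpool_demo", [x], [y])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```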

## MaxRoiPool

### Function

ROI max pooling consumes an input tensor X and regions of interest (ROIs) and applies max pooling across each ROI, producing a 4D output tensor of shape (num\_roi, channels, pooled\_shape[0], pooled\_shape[1]).

### Boundary

**Inputs**

- X: tensor(float16), tensor(float)
- rois: tensor(float16), tensor(float)

**Outputs**

- Y: tensor(float16), tensor(float), tensor(double)

**Attributes**

- pooled\_shape: list of ints
- spatial\_scale: float; default: 1.0

**Restrictions**

- float32 input is not supported when the ATC tool parameter --precision\_mode=must\_keep\_origin\_dtype is used.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## MaxUnpool

### Function

The inverse of the MaxPool operation.

### Boundary

**Inputs**

- X: a tensor of type float16 or float32
- I: a tensor of type int64
- output\_shape: optional; sets the shape of the output; data type: int64

**Outputs**

- Y: a tensor with the same data type as the input

**Attributes**

- kernel\_shape: required; a list of ints; the kernel size along each axis
- pads: a list of ints; the padding along each axis
- strides: a list of ints; the stride along each axis

### Supported ONNX Versions

Opset v9/v11/v12/v13

## Mean

### Function

Element-wise mean of the input tensors (numpy-style broadcasting is supported). All inputs and outputs must have the same data type. This operator supports multidirectional (numpy-style) broadcasting.

### Boundary

**Inputs (1 to ∞)**

- data\_0: tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Outputs**

- mean: tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## MeanVarianceNormalization

### Function

Applies mean-variance normalization to the input tensor X using the formula (X - E[X]) / sqrt(E[(X - E[X])^2]).

### Boundary

**Inputs**

- X: tensor(float16), tensor(float), tensor(bfloat16)

**Outputs**

- Y: tensor(float16), tensor(float), tensor(bfloat16)

**Attributes**

- axes: list of ints; default: [0, 2, 3]

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Min

### Function

Computes the minimum of the input tensors.

### Boundary

**Inputs**

One input:

- x: a list of tensors of type float16 or float32

**Outputs**

One output:

- y: a tensor holding the computed minimum

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Mod

### Function

Performs an element-wise binary modulus operation (numpy-style broadcasting is supported). The sign of the remainder is the same as that of the divisor.

### Boundary

**Inputs**

- A: tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)
- B: tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Outputs**

- C: tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Attributes**

- fmod: int; default: 0

**Restrictions**

- fmod=0 is not supported when the input type is floating point.

### Supported ONNX Versions

Opset v10/v11/v12/v13

## Mul

### Function

Element-wise multiplication.

### Boundary

**Inputs**

- A: a tensor of type float16, float32, uint8, int8, int16, or int32
- B: a tensor of type float16, float32, uint8, int8, int16, or int32

**Outputs**

- C: a tensor with the same data type as the input tensors

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Multinomial

### Function

Returns a matrix of multinomial sampling results.

### Boundary

**Inputs**

One input:

- x: a tensor of shape [batch\_size, class\_size]; data types: float16, float32

**Outputs**

One output:

- y: a tensor of shape [batch\_size, sample\_size]; output types: int32, int64

**Attributes**

- dtype: int; defaults to 6 (int32); the output dtype
- sample\_size: int; defaults to 1; the number of samples
- seed: float; the random seed

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Neg

### Function

Computes the negative of the input.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or int32

**Outputs**

One output:

- y: a tensor with the same data type as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## NonMaxSuppression

### Function

Filters out boxes that have a high intersection-over-union (IOU) overlap with previously selected boxes. Bounding boxes with a score less than score\_threshold are removed. The bounding box format is indicated by the center\_point\_box attribute. Note that this algorithm is agnostic to where the origin is in the coordinate system and, more generally, is invariant to orthogonal transformations and translations of the coordinate system; thus, translating or reflecting the coordinate system results in the same boxes being selected. The selected\_indices output is a set of integers indexing into the input collection of bounding boxes representing the selected boxes. The box coordinates corresponding to the selected indices can then be obtained using the Gather or GatherND operation.

### Boundary

**Inputs (2 to 5)**

- boxes: tensor(float)
- scores: tensor(float)
- max\_output\_boxes\_per\_class: optional; data type: tensor(int64)
- iou\_threshold: optional; data type: tensor(float)
- score\_threshold: optional; data type: tensor(float)

**Outputs**

- selected\_indices: tensor(int64)

**Attributes**

- center\_point\_box: int; default: 0

### Supported ONNX Versions

Opset v10/v11/v12/v13

## NonZero

### Function

Returns the indices of the non-zero elements (in row-major order).

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, int32, int8, uint8, etc.

**Outputs**

One output:

- y: a tensor of type int64

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Not

### Function

Logical NOT.

### Boundary

**Inputs**

One input:

- x: a tensor of type bool

**Outputs**

One output:

- y: a tensor of type bool

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## OneHot

### Function

Generates a one-hot tensor from the inputs.

### Boundary

**Inputs**

Three inputs:

- indices: a tensor of type uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float, or double
- depth: a tensor of type uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float, or double
- values: a tensor of type uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float, or double

**Attributes**

One attribute:

- axis: optional; the axis along which to add the one-hot representation

**Outputs**

One output:

- y: a tensor with the same data type as the values input

**Restrictions**

- The axis attribute does not support values less than -1.

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Or

### Function

Logical OR.

### Boundary

**Inputs**

Two inputs:

- X1: a tensor of type bool
- X2: a tensor of type bool

**Outputs**

One output:

- y: a tensor of type bool

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## RandomNormalLike

### Function

Generates a matrix of random numbers drawn from a normal distribution; the output tensor has the same shape as the input.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float

**Outputs**

One output:

- y: a tensor with the same shape and dtype as input x

**Attributes**

- dtype: int; specifies the dtype of the output tensor
- mean: float; defaults to 0.0; the mean of the normal distribution
- scale: float; defaults to 1.0; the standard deviation of the normal distribution
- seed: float; the random seed

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## RandomUniformLike

### Function

Generates a matrix of random numbers drawn from a uniform distribution; the output tensor has the same shape as the input.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float

**Outputs**

One output:

- y: a tensor with the same shape and dtype as input x

**Attributes**

- dtype: int; specifies the dtype of the output tensor
- high: float; defaults to 1.0; the upper bound of the uniform distribution
- low: float; defaults to 0.0; the lower bound of the uniform distribution
- seed: float; the random seed

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## RandomUniform

### Function

Generates a tensor with random values drawn from a uniform distribution.

### Boundary

**Attributes**

Five attributes:

- dtype: int; specifies the output type
- high: float; specifies the upper bound
- low: float; specifies the lower bound
- seed: optional; the random seed
- shape: the shape of the output

**Outputs**

One output:

- y: a tensor whose data type matches the type specified by the dtype attribute

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Range

### Function

Produces a tensor containing a contiguous sequence of values.

### Boundary

**Inputs**

Three inputs:

- start: a scalar of type float16 or float32
- limit: a scalar of type float16 or float32
- delta: a scalar of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type as the inputs

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Reciprocal

### Function

Computes the reciprocal of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceL1

### Function

Computes the L1 norm of the elements of the input tensor along the provided axes. If keepdims is 1, the resulting tensor has the same rank as the input; if keepdims is 0, the reduced dimensions are pruned. This behavior is similar to numpy, except that numpy defaults keepdims to False instead of True.

### Boundary

**Inputs**

- data: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Outputs**

- reduced: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Attributes**

- axes: list of ints
- keepdims: int; default: 1

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceL2

### Function

Computes the L2 norm of the elements of the input tensor along the provided axes. If keepdims is 1, the resulting tensor has the same rank as the input; if keepdims is 0, the reduced dimensions are pruned. This behavior is similar to numpy, except that numpy defaults keepdims to False instead of True.

### Boundary

**Inputs**

- data: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Outputs**

- reduced: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Attributes**

- axes: list of ints
- keepdims: int; default: 1

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceLogSum

### Function

Computes the log of the sum of the input tensor along the specified axes.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor of type float16 or float32

**Attributes**

- axes: listInt; specifies the computation axes; value range: [-r, r - 1], where r is the number of dimensions of the input data
- keepdims: int; whether to retain the reduced dimensions; defaults to 1

### Supported ONNX Versions

Opset v11/v13

## ReduceLogSumExp

### Function

Computes the log of the summed exponentials of the input tensor along the specified axes.

### Boundary

**Inputs**

One input:

- data: a tensor of type float16 or float32

**Outputs**

One output:

- reduced: a tensor of type float16 or float32

**Attributes**

- axes: a 1D tensor of type int32 or int64; specifies the computation axes; value range: [-r, r - 1], where r is the number of dimensions of the input data
- keepdims: int; whether to retain the reduced dimensions; defaults to 1

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceMin

### Function

Computes the minimum of the input tensor along the specified axes.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor of type float16 or float32

**Attributes**

- axes: listInt; specifies the computation axes; value range: [-r, r - 1], where r is the number of dimensions of the input data
- keepdims: int; whether to retain the reduced dimensions; defaults to 1

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceMean

### Function

Computes the mean of the elements of the input tensor along the specified dimensions.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and format as input x

**Attributes**

- axes: a 1D list of integers; specifies the dimensions to reduce; value range: [-r, r - 1], where r is the rank of the input matrix
- keepdims: int; defaults to 1; whether to retain the reduced dimensions

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceProd

### Function

Computes the product of the elements of the input tensor along the provided axes. If keepdims is 1, the resulting tensor has the same rank as the input; if keepdims is 0, the reduced dimensions are pruned.

### Boundary

**Inputs**

- data: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Outputs**

- reduced: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Attributes**

- axes: list of ints
- keepdims: int; default: 1

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceSumSquare

### Function

Computes the sum of the squares of the elements of the input tensor along the provided axes. If keepdims is 1, the resulting tensor has the same rank as the input; if keepdims is 0, the reduced dimensions are pruned. This behavior is similar to numpy, except that numpy defaults keepdims to False instead of True.

### Boundary

**Inputs**

- data: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Outputs**

- reduced: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16)

**Attributes**

- axes: list of ints
- keepdims: int; default: 1

### Supported ONNX Versions

Opset v1/v8/v9/v10/v11/v12/v13

## Resize

### Function

Resizes the input tensor.

### Boundary

**Inputs**

Four inputs:

- x: a tensor of type float16 or float32
- roi: a 1D tensor normalized to the input image, [start1, ..., startN, end1, ..., endN]; data types: float16, float32
- scales: an array whose length equals the rank of input x
- sizes: the size of the output tensor

**Outputs**

One output:

- y: the resized tensor

**Attributes**

- coordinate\_transformation\_mode: str; defaults to half\_pixel; defines the coordinate mapping between the resized image and the original image
- cubic\_coeff\_a: float; defaults to -0.75; the cubic interpolation coefficient
- exclude\_outside: int; defaults to 0; the weight given to positions outside the tensor
- mode: str; defaults to nearest; the interpolation algorithm: nearest, linear, or cubic
- nearest\_mode: str; defaults to round\_prefer\_floor; the nearest-neighbor mode

**Restrictions**

- Only nearest and linear interpolation are currently supported for image processing, and the model must be modified so that the scales or sizes input is a const instead of a placeholder; onnxsimplifier can be used to simplify the model (see the sketch below).

### Supported ONNX Versions

Opset v10/v11/v12
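
A sketch of a conforming Resize node: nearest mode, with roi and scales supplied as constant initializers rather than placeholders, as the restriction asks. The names, shapes, modes, and opset are assumptions for illustration:

```
import onnx
from onnx import TensorProto, helper

# Nearest-neighbor 2x upscaling of H and W. roi is unused by this mode but is
# still a required input slot in opset 11; scales is a constant initializer.
roi = helper.make_tensor("roi", TensorProto.FLOAT, [0], [])
scales = helper.make_tensor("scales", TensorProto.FLOAT, [4], [1.0, 1.0, 2.0, 2.0])
node = helper.make_node(
    "Resize", inputs=["x", "roi", "scales"], outputs=["y"],
    mode="nearest", nearest_mode="floor",
    coordinate_transformation_mode="asymmetric",
)

x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3, 32, 32])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3, 64, 64])
graph = helper.make_graph([node], "resize_demo", [x], [y], initializer=[roi, scales])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```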

## Relu

### Function

Rectified linear unit function.

### Boundary

**Inputs**

- X: an input tensor of type float32, int32, uint8, int16, int8, uint16, float16, or qint8

**Outputs**

- Y: an output tensor with the same data type as X

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceSum

### Function

Computes the sum of the elements of the input tensor along the specified dimensions.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and format as input x

**Attributes**

- axes: a 1D list of integers; specifies the dimensions to reduce; value range: [-r, r - 1], where r is the rank of the input matrix
- keepdims: int; defaults to 1; whether to retain the reduced dimensions

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReduceMax

### Function

Computes the maximum of the input tensor along the specified axes.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or int32

**Outputs**

One output:

- y: a tensor of type float16, float32, or int32

**Attributes**

- axes: listInt; specifies the computation axes; value range: [-r, r - 1], where r is the rank of the input data
- keepdims: int; whether to retain the reduced dimensions; defaults to 1

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Reshape

### Function

Changes the dimensions of the input.

### Boundary

**Inputs**

Two inputs:

- data: a tensor
- shape: a tensor of type int64 defining the shape of the output tensor

**Outputs**

- reshaped: a tensor

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ReverseSequence

### Function

Reverses each batch sequence according to the specified lengths.

### Boundary

**Inputs**

Two inputs:

- x: a tensor of rank >= 2; data types: float16, float32
- sequence\_lens: a tensor of type int64 holding the specified length for each batch

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

**Attributes**

- batch\_axis: int; defaults to 1; specifies the batch axis
- time\_axis: int; defaults to 1; specifies the time axis

### Supported ONNX Versions

Opset v10/v11/v12/v13

## RoiExtractor

### Function

Obtains the ROI feature matrix from a list of feature maps.

### Boundary

**Inputs**

Two inputs:

- features: a tensor of type float32 or float16
- rois: a tensor of type float32 or float16

**Attributes**

Eight attributes:

- finest\_scale: int
- roi\_scale\_factor: float
- spatial\_scale: array of floats
- pooled\_height: int
- pooled\_width: int
- sample\_num: int
- pool\_mode: string
- aligned: bool

**Outputs**

One output:

- y: a tensor of type float32 or float16

### Supported ONNX Versions

Custom operator; no corresponding ONNX version

## RoiAlign

### Function

Performs pooling over each ROI region.

### Boundary

**Inputs**

Three inputs:

- x: a 4D tensor of type float16 or float32
- rois: shape = (num\_rois, 4); data types: float16, float32
- batch\_indices: shape = (num\_rois,); data type: int64

**Outputs**

One output:

- y: a tensor with the same type as input x; shape = (num\_rois, C, output\_height, output\_width)

**Attributes**

- mode: string; defaults to avg; the pooling method
- output\_height: int; defaults to 1; the height of y
- output\_width: int; defaults to 1; the width of y
- sampling\_ratio: int; defaults to 0; the number of sampling points used by the interpolation algorithm
- spatial\_scale: float; defaults to 1.0; the spatial sampling rate relative to the input image

**Restrictions**

- The batch\_indices data must be written as int32, not int64.
- float32 and float64 inputs are not supported when the ATC tool parameter --precision\_mode=must\_keep\_origin\_dtype is used.

### Supported ONNX Versions

Opset v10/v11/v12/v13

## Round

### Function

Rounds the input tensor to the nearest integer element-wise.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## PRelu

### Function

PRelu activation function.

### Boundary

**Inputs**

Two inputs:

- x: a tensor of type float16 or float32
- slope: the slope tensor, with the same data type as input x

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

**Restrictions**

- slope must be 1-dimensional. When the shape of input x is 1-dimensional, the dimension value of slope must be 1; for inputs of other dimensions, the dimension value of slope can be 1 or x.shape[1].

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Scatter

### Function

Updates the values of data according to updates and indices, and returns the result.

### Boundary

**Inputs**

Three inputs:

- data: a tensor of type float16, float, or int32
- indices: a tensor of type int32 or int64
- updates: a tensor with the same data type as data

**Outputs**

One output:

- y: a tensor with the same shape and dtype as the input data

**Attributes**

- axis: int; defaults to 0; the axis along which the data is indexed

### Supported ONNX Versions

Opset v9/v10

## ScatterElements

### Function

Updates the values of data according to updates and indices, and returns the result.

### Boundary

**Inputs**

Three inputs:

- data: a tensor of type float16, float, or int32
- indices: a tensor of type int32 or int64
- updates: a tensor with the same data type as data

**Outputs**

One output:

- y: a tensor with the same shape and dtype as the input data

**Attributes**

- axis: int; defaults to 0; the axis along which the data is indexed

### Supported ONNX Versions

Opset v11/v12/v13

## ScatterND

### Function

Creates a copy of data and updates it at the specified indices according to updates.

### Boundary

**Inputs**

Three inputs:

- data: a tensor of rank >= 1; data types: float16, float32
- indices: a tensor of rank >= 1; data type: int64
- updates: a tensor of rank q + r - indices\_shape[-1] - 1; data types: float16, float32

**Outputs**

One output:

- y: a tensor with the same type and shape as the input data

### Supported ONNX Versions

Opset v11

## Shrink

### Function

Single-input, single-output computation: if x < -lambd, y = x + bias; if x > lambd, y = x - bias; otherwise, y = 0.

### Boundary

**Inputs**

One input:

- data: a tensor of type float16 or float

**Outputs**

One output:

- y: a tensor with the same shape and dtype as the input

**Attributes**

- bias: float; defaults to 0.0
- lambd: float; defaults to 0.5

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Selu

### Function

Produces a tensor by applying the scaled exponential linear unit function element-wise: y = gamma \* (alpha \* e^x - alpha) for x <= 0, and y = gamma \* x for x > 0.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Attributes**

Two attributes:

- alpha: a multiplier factor
- gamma: a multiplier factor

**Outputs**

One output:

- y: a tensor with the same data type as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Shape

### Function

Gets the shape of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor

**Outputs**

- y: the shape of the input tensor, as a tensor of type int64

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Sigmoid

### Function

Applies the sigmoid function to the input.

### Boundary

**Inputs**

One input:

- x: supported data types: float16, float32

**Outputs**

One output:

- y: a tensor with the same data type as input x

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Slice

### Function

Extracts a slice of the input tensor.

### Boundary

**Inputs**

Five inputs:

- x: the input tensor; data types: float16, float32, int32, uint8, bool, int8
- starts: a 1D tensor of type int32 or int64 indicating the start index positions
- ends: a 1D tensor of type int32 or int64 indicating the end index positions
- axes: optional; a 1D tensor of type int32 or int64 indicating the axes to slice; value range: [-r, r - 1], where r is the rank of the input data
- steps: optional; a 1D tensor of type int32 or int64 indicating the slice step; the step for the last axis must be 1

**Outputs**

- y: the sliced tensor data, with the same data type as the input

**Restrictions**

- x: the input tensor must not be 1-dimensional (see the sketch below).

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
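
A sketch that satisfies both restrictions: the input has rank 2, and the step on the last axis is 1. All index tensors are constant initializers; the names, shapes, and opset are assumptions:

```
import onnx
from onnx import TensorProto, helper

# Take rows 0..1 and columns 1..2 of a 4x5 input.
starts = helper.make_tensor("starts", TensorProto.INT64, [2], [0, 1])
ends   = helper.make_tensor("ends",   TensorProto.INT64, [2], [2, 3])
axes   = helper.make_tensor("axes",   TensorProto.INT64, [2], [0, 1])
steps  = helper.make_tensor("steps",  TensorProto.INT64, [2], [1, 1])  # last axis step = 1
node = helper.make_node("Slice", inputs=["x", "starts", "ends", "axes", "steps"], outputs=["y"])

x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [4, 5])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 2])
graph = helper.make_graph([node], "slice_demo", [x], [y],
                          initializer=[starts, ends, axes, steps])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```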

## Softmax

### Function

Applies the softmax function to the input.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

**Attributes**

- axis: int, optional; the axis along which softmax is computed; defaults to -1; value range: [-len(x.shape), len(x.shape) - 1]

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Softsign

### Function

Computes the softsign (x / (1 + |x|)) of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Softplus

### Function

Computes the softplus of the input.

### Boundary

**Inputs**

One input:

- X: a 1D input tensor

**Outputs**

One output:

- Y: a 1D tensor

**Restrictions**

- Only float16 and float32 data types are supported.
- The input and output data types must be identical.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## SpaceToDepth

### Function

SpaceToDepth rearranges blocks of spatial data into depth. More specifically, this op outputs a copy of the input tensor where values from the height and width dimensions are moved to the depth dimension.

### Boundary

**Inputs**

- input: tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)

**Outputs**

- output: tensor(uint8), tensor(uint16), tensor(uint32), tensor(uint64), tensor(int8), tensor(int16), tensor(int32), tensor(int64), tensor(bfloat16), tensor(float16), tensor(float), tensor(double), tensor(string), tensor(bool), tensor(complex64), tensor(complex128)

**Attributes**

- blocksize: int

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Split

### Function

Splits the input into multiple outputs.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, int8, int16, int32, int64, uint8, uint16, uint32, or uint64

**Outputs**

One output:

- y: a list of output tensors, each with the same data type as input x

**Attributes**

- split: list; data types: int8, int16, int32, int64; specifies the size of each output tensor along the split axis
- axis: data types: int8, int16, int32, int64; specifies the split axis

**Restrictions**

- Each element of split must be >= 1.
- The sum of all elements of split must equal the size along the axis being split (see the sketch below).
- axis must be within [-len(x.shape), len(x.shape) - 1].

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
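
A sketch in which the split sizes 2 + 4 sum to the size of axis 1, satisfying the restrictions; the names, shapes, and opset are assumptions (note that in opset 13 split moves from an attribute to an input):

```
import onnx
from onnx import TensorProto, helper

# Split a 6-channel tensor into 2 + 4 channels along axis 1.
node = helper.make_node("Split", inputs=["x"], outputs=["y0", "y1"],
                        axis=1, split=[2, 4])

x  = helper.make_tensor_value_info("x",  TensorProto.FLOAT, [1, 6, 8])
y0 = helper.make_tensor_value_info("y0", TensorProto.FLOAT, [1, 2, 8])
y1 = helper.make_tensor_value_info("y1", TensorProto.FLOAT, [1, 4, 8])
graph = helper.make_graph([node], "split_demo", [x], [y0, y1])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```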

## Sqrt

### Function

Computes the square root element-wise.

### Boundary

**Inputs**

One input:

- x: a tensor

**Outputs**

One output:

- y: a tensor

**Restrictions**

- The input and output data types must be identical; supported data types: float16, float32.
- If x is less than 0, NaN is returned.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Squeeze

### Function

Removes dimensions of size 1 from the input.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, double, uint8, uint16, uint32, uint64, int8, int16, int32, int64, or bool

**Outputs**

- y: a tensor with the same data type as the input

**Attributes**

- axes: a 1D list of integers of type int32 or int64; value range: [-r, r - 1], where r is the rank of the input tensor and negative values count dimensions from the back; specifies the dimensions to remove

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Sub

### Function

Performs tensor subtraction.

### Boundary

**Inputs**

Two inputs:

- x1: a tensor
- x2: a tensor

**Outputs**

One output:

- y: a tensor with the same data type as the inputs

**Restrictions**

- The inputs and output must have the same shape and dtype; supported data types: int32, float16, float32.

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Sign

### Function

Computes the sign of the input tensor element-wise.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Sin

### Function

Computes the sine of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Sinh

### Function

Computes the hyperbolic sine of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16, float32, or double

**Outputs**

One output:

- y: a tensor with the same data type and shape as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Size

### Function

Computes the number of elements of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: an int64 scalar

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Sum

### Function

Summation.

### Boundary

**Inputs**

One input:

- x: a sequence of tensors of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Tanh

### Function

Computes the hyperbolic tangent of the input.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same data type as the input

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## TfIdfVectorizer

### Function

Vectorizes the input text sequence.

### Boundary

**Inputs**

One input:

- data: a tensor of type int32 or int64

**Outputs**

One output:

- y: a tensor of type float

**Attributes**

- max\_gram\_length: int; the maximum n-gram length
- max\_skip\_count: int; the maximum number of skips when constructing n-grams from data
- min\_gram\_length: int; the minimum n-gram length
- mode: string; the weighting mode; one of "TF" (term frequency), "IDF" (inverse document frequency), or "TFIDF" (the combination of TF and IDF)
- ngram\_counts: list of ints; the starting indexes of the n-gram pools, which helps determine the boundary between two consecutive n-gram pools
- ngram\_indexes: list of ints; the i-th element indicates the coordinate of the i-th n-gram in the output tensor
- pool\_int64s: list of ints; must not be set together with pool\_strings; the n-grams learned from the training set
- pool\_strings: list of strings; same meaning as pool\_int64s
- weights: list of floats; stores the pooling weight of each n-gram

### Supported ONNX Versions

Opset v9/v10/v11/v12/v13

## Tile

### Function

Repeats the input tensor along the specified dimensions.

### Boundary

**Inputs**

Two inputs:

- x: a tensor
- repeats: a 1D tensor of type int64 whose size equals the number of dimensions of the input

**Outputs**

One output:

- y: the output tensor, with the same type and number of dimensions as the input; output\_dim[i] = input\_dim[i] \* repeats[i]

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## ThresholdedRelu

### Function

Computes y = x when x > alpha; otherwise, y = 0.

### Boundary

**Inputs**

One input:

- x: a tensor of type float16 or float32

**Outputs**

One output:

- y: a tensor with the same type and shape as input x

**Attributes**

- alpha: float; defaults to 1.0; the threshold

### Supported ONNX Versions

Opset v10/v11/v12/v13

## TopK

### Function

Returns the k largest or smallest values along the specified axis.

### Boundary

**Inputs**

Two inputs:

- x: a tensor of type float16 or float32
- k: a tensor of type int64

**Outputs**

Two outputs:

- Values: the values returned by TopK
- Indices: the indices of the values returned by TopK

**Attributes**

- axis: int; defaults to -1; specifies the axis to sort along
- largest: int; defaults to 1; whether to return the k largest or smallest values
- sorted: int; defaults to 1; whether the returned values are sorted

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13
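
In opset 10 and later, k is an input tensor rather than an attribute. A minimal sketch with illustrative names, shapes, and opset (none of which are mandated by this list):

```
import onnx
from onnx import TensorProto, helper

# Top 3 values along the last axis; k is supplied as an int64 initializer.
k = helper.make_tensor("k", TensorProto.INT64, [1], [3])
node = helper.make_node("TopK", inputs=["x", "k"], outputs=["values", "indices"],
                        axis=-1, largest=1, sorted=1)

x       = helper.make_tensor_value_info("x",       TensorProto.FLOAT, [2, 8])
values  = helper.make_tensor_value_info("values",  TensorProto.FLOAT, [2, 3])
indices = helper.make_tensor_value_info("indices", TensorProto.INT64, [2, 3])
graph = helper.make_graph([node], "topk_demo", [x], [values, indices], initializer=[k])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```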

## Transpose

### Function

Transposition.

### Boundary

**Inputs**

- data: a tensor of type float16, float32, int8, int16, int32, int64, uint8, uint16, uint32, or uint64

**Outputs**

- transposed: the transposed tensor

**Attributes**

- perm: required; a list of integers; the permutation of the dimensions of the data tensor

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Pad

### Function

Pads the input tensor.

### Boundary

**Inputs**

Two inputs:

- x: supported data types: float16, float32, int32
- pads: supported data types: int32, int64
- constant\_value: optional; defaults to 0, the empty string, or False; the scalar value to use when the selected mode is `constant`

**Outputs**

One output:

- y: a tensor with the same data type as input x

**Attributes**

- mode: str; supported modes: constant, reflect, edge

**Restrictions**

- When mode is constant, only constant\_value=0 is currently supported (see the sketch below).

### Supported ONNX Versions

Opset v11
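
A sketch of the only currently supported constant-mode configuration, with constant\_value left at its default of 0; the names, shapes, and pad amounts are assumptions:

```
import onnx
from onnx import TensorProto, helper

# Constant padding of a 2D tensor: one column on each side.
# pads layout for rank 2 is [dim0_begin, dim1_begin, dim0_end, dim1_end].
pads = helper.make_tensor("pads", TensorProto.INT64, [4], [0, 1, 0, 1])
node = helper.make_node("Pad", inputs=["x", "pads"], outputs=["y"], mode="constant")

x = helper.make_tensor_value_info("x", TensorProto.FLOAT, [3, 4])
y = helper.make_tensor_value_info("y", TensorProto.FLOAT, [3, 6])
graph = helper.make_graph([node], "pad_demo", [x], [y], initializer=[pads])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])
onnx.checker.check_model(model)
```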

## Pow

### Function

Computes x1 raised to the power of x2.

### Boundary

**Inputs**

Two inputs:

- x1: a tensor of type float16, float32, double, int32, int8, or uint8
- x2: a tensor with the same data type as input x1

**Outputs**

One output:

- y: a tensor with the same data type as input x1

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Unsqueeze

### Function

Inserts single-dimensional entries into the shape of the input tensor.

### Boundary

**Inputs**

One input:

- x: a tensor of type uint8, uint16, uint32, int8, int16, int32, float16, or float32

**Outputs**

One output:

- y: a tensor with the same data type as input x

**Attributes**

- axes: ListInt; the dimensions at which to insert size-1 entries; value range: [-input\_rank, input\_rank], where input\_rank is the rank of the input tensor; the entries of axes must not repeat

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12

## Xor

### Function

Performs the logical XOR operation on the elements of the input tensors.

### Boundary

**Inputs**

Two inputs:

- a: a tensor of type bool
- b: a tensor of type bool

**Outputs**

- c: a tensor of type bool

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13

## Where

### Function

Selects elements from the two inputs according to a condition.

### Boundary

**Inputs**

Three inputs:

- condition: the condition; data type: bool
- x: a tensor whose elements are selected where the condition is true; supported data types: float16, float32, int8, int32, uint8
- y: a tensor whose elements are selected where the condition is false, with the same data type as x

**Outputs**

- A tensor with the same data type as input x

### Supported ONNX Versions

Opset v8/v9/v10/v11/v12/v13