diff --git a/.gitignore b/.gitignore
index 5cd9b811d971d597120a234199999efa0b2353ef..e6af4aa9f0e2a8cd27dee2d2781c7865d4b0001a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,10 +1,20 @@
# Built html files
-api/build_en
-api/build_zh_cn
-docs/build_en
-docs/build_zh_cn
-tutorials/build_en
-tutorials/build_zh_cn
+docs/api_cpp/build_en
+docs/api_cpp/build_zh_cn
+docs/api_java/build_en
+docs/api_java/build_zh_cn
+docs/api_python/build_en
+docs/api_python/build_zh_cn
+docs/faq/build_en
+docs/faq/build_zh_cn
+docs/note/build_en
+docs/note/build_zh_cn
+docs/programming_guide/build_en
+docs/programming_guide/build_zh_cn
+tutorials/inference/build_en
+tutorials/inference/build_zh_cn
+tutorials/training/build_en
+tutorials/training/build_zh_cn
# Workspace
.idea/
diff --git a/api/source_en/api/python/mindspore/mindspore.dtype.rst b/api/source_en/api/python/mindspore/mindspore.dtype.rst
deleted file mode 100644
index ecedea971844071ff47fa5505b9c852b5e77ff1f..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.dtype.rst
+++ /dev/null
@@ -1,111 +0,0 @@
-mindspore.dtype
-===============
-
-Data Type
-----------
-
-.. class:: mindspore.dtype
-
-Create a data type object of MindSpore.
-
-The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
-Run the following command to import the package:
-
-.. code-block::
-
- import mindspore.common.dtype as mstype
-
-or
-
-.. code-block::
-
- from mindspore import dtype as mstype
-
-Numeric Type
-~~~~~~~~~~~~
-
-Currently, MindSpore supports ``Int`` type, ``Uint`` type and ``Float`` type.
-The following table lists the details.
-
-============================================== =============================
-Definition Description
-============================================== =============================
-``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
-``mindspore.int16`` , ``mindspore.short`` 16-bit integer
-``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
-``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
-``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
-``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
-``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
-``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
-``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
-``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
-``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
-============================================== =============================
-
-Other Type
-~~~~~~~~~~
-
-For other defined types, see the following table.
-
-============================ =================
-Type Description
-============================ =================
-``tensor``                   MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor <https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/tensor.py>`_.
-``MetaTensor``               A tensor that has only a data type and a shape. For details, see `MetaTensor <https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/parameter.py>`_.
-``bool_`` Boolean ``True`` or ``False``.
-``int_`` Integer scalar.
-``uint`` Unsigned integer scalar.
-``float_`` Floating-point scalar.
-``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
-``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
-``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
-``function``                 Function. Returned in one of two ways: when the function is not None, ``Func`` is returned directly; when it is None, ``Func(args: List[T0,T1,...,Tn], retval: T)`` is returned.
-``type_type`` Type definition of type.
-``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
-``symbolic_key`` The value of a variable is used as a key of the variable in ``env_type`` .
-``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
-============================ =================
-
-Tree Topology
-~~~~~~~~~~~~~~
-
-The relationships of the above types are as follows:
-
-.. code-block::
-
-
-    └─── mindspore.dtype
-        ├─── number
-        │   ├─── bool_
-        │   ├─── int_
-        │   │   ├─── int8, byte
-        │   │   ├─── int16, short
-        │   │   ├─── int32, intc
-        │   │   └─── int64, intp
-        │   ├─── uint
-        │   │   ├─── uint8, ubyte
-        │   │   ├─── uint16, ushort
-        │   │   ├─── uint32, uintc
-        │   │   └─── uint64, uintp
-        │   └─── float_
-        │       ├─── float16
-        │       ├─── float32
-        │       └─── float64
-        ├─── tensor
-        │   ├─── Array[Float32]
-        │   └─── ...
-        ├─── list_
-        │   ├─── List[Int32,Float32]
-        │   └─── ...
-        ├─── tuple_
-        │   ├─── Tuple[Int32,Float32]
-        │   └─── ...
-        ├─── function
-        │   ├─── Func
-        │   ├─── Func[(Int32, Float32), Int32]
-        │   └─── ...
-        ├─── MetaTensor
-        ├─── type_type
-        ├─── type_none
-        ├─── symbolic_key
-        └─── env_type
\ No newline at end of file
diff --git a/api/source_en/api/python/mindspore/mindspore.hub.rst b/api/source_en/api/python/mindspore/mindspore.hub.rst
deleted file mode 100644
index 458c704fc392ff12901a1324b719303c5098eeee..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.hub.rst
+++ /dev/null
@@ -1,4 +0,0 @@
-mindspore.hub
-=============
-
-.. autofunction:: mindspore.hub.load_weights
diff --git a/api/source_en/api/python/mindspore/mindspore.ops.composite.rst b/api/source_en/api/python/mindspore/mindspore.ops.composite.rst
deleted file mode 100644
index 4dc22f1dcf4fc899a211b5d1ec7114bea7680aa5..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.ops.composite.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore.ops.composite
-=======================
-
-.. automodule:: mindspore.ops.composite
- :members:
diff --git a/api/source_en/api/python/mindspore/mindspore.ops.operations.rst b/api/source_en/api/python/mindspore/mindspore.ops.operations.rst
deleted file mode 100644
index 29bf49176bf455593d3398d8e2f1af17ebfe21a4..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.ops.operations.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore.ops.operations
-========================
-
-.. automodule:: mindspore.ops.operations
- :members:
diff --git a/api/source_en/api/python/mindspore/mindspore.rst b/api/source_en/api/python/mindspore/mindspore.rst
deleted file mode 100644
index 44c49e3df3e08d66f6f8d54c23891de30b85a922..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore
-=========
-
-.. automodule:: mindspore
- :members:
\ No newline at end of file
diff --git a/api/source_en/index.rst b/api/source_en/index.rst
deleted file mode 100644
index 12f19eaff2f8e9bdeec2f0977238dfd4d1be8238..0000000000000000000000000000000000000000
--- a/api/source_en/index.rst
+++ /dev/null
@@ -1,53 +0,0 @@
-.. MindSpore documentation master file, created by
- sphinx-quickstart on Thu Mar 24 11:00:00 2020.
- You can adapt this file completely to your liking, but it should at least
- contain the root `toctree` directive.
-
-MindSpore API
-=============
-
-.. toctree::
- :maxdepth: 1
- :caption: MindSpore Python API
-
- api/python/mindspore/mindspore
- api/python/mindspore/mindspore.dtype
- api/python/mindspore/mindspore.common.initializer
- api/python/mindspore/mindspore.communication
- api/python/mindspore/mindspore.context
- api/python/mindspore/mindspore.hub
- api/python/mindspore/mindspore.nn
- api/python/mindspore/mindspore.nn.dynamic_lr
- api/python/mindspore/mindspore.nn.learning_rate_schedule
- api/python/mindspore/mindspore.nn.probability
- api/python/mindspore/mindspore.ops
- api/python/mindspore/mindspore.ops.composite
- api/python/mindspore/mindspore.ops.operations
- api/python/mindspore/mindspore.train
- api/python/mindspore/mindspore.dataset
- api/python/mindspore/mindspore.dataset.config
- api/python/mindspore/mindspore.dataset.text
- api/python/mindspore/mindspore.dataset.transforms
- api/python/mindspore/mindspore.dataset.vision
- api/python/mindspore/mindspore.mindrecord
- api/python/mindspore/mindspore.profiler
-
-.. toctree::
- :maxdepth: 1
- :caption: MindArmour Python API
-
- api/python/mindarmour/mindarmour
- api/python/mindarmour/mindarmour.adv_robustness.attacks
- api/python/mindarmour/mindarmour.adv_robustness.defenses
- api/python/mindarmour/mindarmour.adv_robustness.detectors
- api/python/mindarmour/mindarmour.adv_robustness.evaluations
- api/python/mindarmour/mindarmour.fuzz_testing
- api/python/mindarmour/mindarmour.privacy.diff_privacy
- api/python/mindarmour/mindarmour.privacy.evaluation
- api/python/mindarmour/mindarmour.utils
-
-.. toctree::
- :maxdepth: 1
- :caption: MindSpore Hub Python API
-
- api/python/mindspore_hub/mindspore_hub
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dtype.rst b/api/source_zh_cn/api/python/mindspore/mindspore.dtype.rst
deleted file mode 100644
index 633cd1e23e5c3d54077db437deb063c78aa9a9a2..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.dtype.rst
+++ /dev/null
@@ -1,112 +0,0 @@
-mindspore.dtype
-===============
-
-Data Type
-----------
-
-.. class:: mindspore.dtype
-
-Create a data type object of MindSpore.
-
-The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
-Run the following command to import the package:
-
-.. code-block::
-
- import mindspore.common.dtype as mstype
-
-or
-
-.. code-block::
-
- from mindspore import dtype as mstype
-
-Numeric Type
-~~~~~~~~~~~~
-
-Currently, MindSpore supports ``Int`` type, ``Uint`` type and ``Float`` type.
-The following table lists the details.
-
-============================================== =============================
-Definition Description
-============================================== =============================
-``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
-``mindspore.int16`` , ``mindspore.short`` 16-bit integer
-``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
-``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
-``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
-``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
-``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
-``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
-``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
-``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
-``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
-============================================== =============================
-
-Other Type
-~~~~~~~~~~
-
-For other defined types, see the following table.
-
-============================ =================
-Type Description
-============================ =================
-``tensor``                   MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor <https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/tensor.py>`_.
-``MetaTensor``               A tensor that has only a data type and a shape. For details, see `MetaTensor <https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/parameter.py>`_.
-``bool_`` Boolean ``True`` or ``False``.
-``int_`` Integer scalar.
-``uint`` Unsigned integer scalar.
-``float_`` Floating-point scalar.
-``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
-``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
-``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
-``function``                 Function. Returned in one of two ways: when the function is not None, ``Func`` is returned directly; when it is None, ``Func(args: List[T0,T1,...,Tn], retval: T)`` is returned.
-``type_type`` Type definition of type.
-``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
-``symbolic_key`` The value of a variable is used as a key of the variable in ``env_type`` .
-``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
-============================ =================
-
-Tree Topology
-~~~~~~~~~~~~~~
-
-The relationships of the above types are as follows:
-
-.. code-block::
-
-
- └─── mindspore.dtype
- ├─── number
- │ ├─── bool_
- │ ├─── int_
- │ │ ├─── int8, byte
- │ │ ├─── int16, short
- │ │ ├─── int32, intc
- │ │ └─── int64, intp
- │ ├─── uint
- │ │ ├─── uint8, ubyte
- │ │ ├─── uint16, ushort
- │ │ ├─── uint32, uintc
- │ │ └─── uint64, uintp
- │ └─── float_
- │ ├─── float16
- │ ├─── float32
- │ └─── float64
- ├─── tensor
- │ ├─── Array[float32]
- │ └─── ...
- ├─── list_
- │ ├─── List[int32,float32]
- │ └─── ...
- ├─── tuple_
- │ ├─── Tuple[int32,float32]
- │ └─── ...
- ├─── function
- │ ├─── Func
- │ ├─── Func[(int32, float32), int32]
- │ └─── ...
- ├─── MetaTensor
- ├─── type_type
- ├─── type_none
- ├─── symbolic_key
- └─── env_type
\ No newline at end of file
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.hub.rst b/api/source_zh_cn/api/python/mindspore/mindspore.hub.rst
deleted file mode 100644
index 458c704fc392ff12901a1324b719303c5098eeee..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.hub.rst
+++ /dev/null
@@ -1,4 +0,0 @@
-mindspore.hub
-=============
-
-.. autofunction:: mindspore.hub.load_weights
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.ops.composite.rst b/api/source_zh_cn/api/python/mindspore/mindspore.ops.composite.rst
deleted file mode 100644
index 4dc22f1dcf4fc899a211b5d1ec7114bea7680aa5..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.ops.composite.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore.ops.composite
-=======================
-
-.. automodule:: mindspore.ops.composite
- :members:
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.ops.operations.rst b/api/source_zh_cn/api/python/mindspore/mindspore.ops.operations.rst
deleted file mode 100644
index 29bf49176bf455593d3398d8e2f1af17ebfe21a4..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.ops.operations.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore.ops.operations
-========================
-
-.. automodule:: mindspore.ops.operations
- :members:
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.rst b/api/source_zh_cn/api/python/mindspore/mindspore.rst
deleted file mode 100644
index 44c49e3df3e08d66f6f8d54c23891de30b85a922..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore
-=========
-
-.. automodule:: mindspore
- :members:
\ No newline at end of file
diff --git a/api/source_zh_cn/index.rst b/api/source_zh_cn/index.rst
deleted file mode 100644
index 502e8495e2162735d542889ada4167c8e27fbf6d..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/index.rst
+++ /dev/null
@@ -1,59 +0,0 @@
-.. MindSpore documentation master file, created by
- sphinx-quickstart on Thu Mar 24 11:00:00 2020.
- You can adapt this file completely to your liking, but it should at least
- contain the root `toctree` directive.
-
-MindSpore API
-=============
-
-.. toctree::
- :maxdepth: 1
-   :caption: Programming Guide
-
- programming_guide/api_structure
-
-.. toctree::
- :maxdepth: 1
- :caption: MindSpore Python API
-
- api/python/mindspore/mindspore
- api/python/mindspore/mindspore.dtype
- api/python/mindspore/mindspore.common.initializer
- api/python/mindspore/mindspore.communication
- api/python/mindspore/mindspore.context
- api/python/mindspore/mindspore.hub
- api/python/mindspore/mindspore.nn
- api/python/mindspore/mindspore.nn.dynamic_lr
- api/python/mindspore/mindspore.nn.learning_rate_schedule
- api/python/mindspore/mindspore.nn.probability
- api/python/mindspore/mindspore.ops
- api/python/mindspore/mindspore.ops.composite
- api/python/mindspore/mindspore.ops.operations
- api/python/mindspore/mindspore.train
- api/python/mindspore/mindspore.dataset
- api/python/mindspore/mindspore.dataset.config
- api/python/mindspore/mindspore.dataset.text
- api/python/mindspore/mindspore.dataset.transforms
- api/python/mindspore/mindspore.dataset.vision
- api/python/mindspore/mindspore.mindrecord
- api/python/mindspore/mindspore.profiler
-
-.. toctree::
- :maxdepth: 1
- :caption: MindArmour Python API
-
- api/python/mindarmour/mindarmour
- api/python/mindarmour/mindarmour.adv_robustness.attacks
- api/python/mindarmour/mindarmour.adv_robustness.defenses
- api/python/mindarmour/mindarmour.adv_robustness.detectors
- api/python/mindarmour/mindarmour.adv_robustness.evaluations
- api/python/mindarmour/mindarmour.fuzz_testing
- api/python/mindarmour/mindarmour.privacy.diff_privacy
- api/python/mindarmour/mindarmour.privacy.evaluation
- api/python/mindarmour/mindarmour.utils
-
-.. toctree::
- :maxdepth: 1
- :caption: MindSpore Hub Python API
-
- api/python/mindspore_hub/mindspore_hub
diff --git a/api/source_zh_cn/programming_guide/api_structure.md b/api/source_zh_cn/programming_guide/api_structure.md
deleted file mode 100644
index 9a42ef664223fdccb211cc09fa9034ce1f1a83a7..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/api_structure.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# MindSpore API Overview
-
-
-
-- [MindSpore API Overview](#mindspore-api-overview)
-    - [Design Philosophy](#design-philosophy)
-    - [Hierarchy](#hierarchy)
-
-
-
-
-
-## Design Philosophy
-
-Drawing on the best practices of the whole industry, MindSpore provides data scientists and algorithm engineers with unified interfaces for model training, inference, and export, and supports flexible deployment across device, edge, and cloud scenarios, promoting the development of fields such as deep learning and scientific computing.
-
-MindSpore offers a unified coding style for dynamic and static graphs: instead of maintaining multiple code bases, users can switch between the two modes by changing a single line of code, which makes development, debugging, and performance tuning easier.
-
-In addition, MindSpore unifies the coding style of standalone and distributed training. Developers do not need to write complex distribution strategies; adding a few lines to standalone code is enough to enable distributed training, which greatly lowers the barrier to AI development.
-
-## Hierarchy
-
-MindSpore provides users with three levels of APIs that support network construction, whole-graph execution, subgraph execution, and single-operator execution. From low to high, they are the Low-Level Python API, the Medium-Level Python API, and the High-Level Python API.
-
-
-
-- Low-Level Python API
-
-    The first level is the low-level API, which mainly covers tensor definition, basic operators, and automatic differentiation. With the low-level API, users can easily implement tensor operations and derivative computation.
-
-- Medium-Level Python API
-
-    The second level is the medium-level API, which encapsulates the low-level API and provides modules such as network layers, optimizers, and loss functions. With the medium-level API, users can flexibly build neural networks, control the execution flow, and quickly implement model logic.
-
-- High-Level Python API
-
-    The third level is the high-level API, which builds on the medium-level API and offers advanced interfaces such as training and inference management, callbacks, and mixed-precision training, making it convenient to control the execution flow of the whole network and to train and run inference with neural networks.
diff --git a/api/source_zh_cn/programming_guide/auto_augmentation.md b/api/source_zh_cn/programming_guide/auto_augmentation.md
deleted file mode 100644
index 57ec65060e666988f7e7d9fa6f7f783256374f33..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/auto_augmentation.md
+++ /dev/null
@@ -1,112 +0,0 @@
-# Automatic Data Augmentation
-
-
-
-- [Automatic Data Augmentation](#automatic-data-augmentation)
-    - [Adjusting the Augmentation Policy Dynamically Based on Probability](#adjusting-the-augmentation-policy-dynamically-based-on-probability)
-    - [Adjusting the Augmentation Policy Dynamically Based on Training Results](#adjusting-the-augmentation-policy-dynamically-based-on-training-results)
-
-
-
-
-
-## Adjusting the Augmentation Policy Dynamically Based on Probability
-
-MindSpore provides a series of probability-based data augmentation APIs, allowing users to randomly select and combine various image operations, which makes data augmentation more flexible.
-
-- [`RandomApply(transforms, prob=0.5)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.transforms.html?highlight=randomapply#mindspore.dataset.transforms.c_transforms.RandomApply)
-Applies the given `transforms` with a certain probability, i.e. they may or may not be executed; `transforms` can be a single operation or a list of operations.
-
- ```python
-
- rand_apply_list = RandomApply([c_vision.RandomCrop(), c_vision.RandomColorAdjust()])
- ds = ds.map(operations=rand_apply_list)
-
- ```
-
-    With a probability of 50%, `RandomCrop` and `RandomColorAdjust` are executed in order; otherwise neither is executed.
-
-- [`RandomChoice(transforms)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.transforms.html?highlight=randomchoice#mindspore.dataset.transforms.c_transforms.RandomChoice)
-Randomly selects one operation from `transforms` to execute.
-
- ```python
-
- rand_choice = RandomChoice([c_vision.CenterCrop(), c_vision.RandomCrop()])
- ds = ds.map(operations=rand_choice)
-
- ```
-
-    `CenterCrop` and `RandomCrop` are each executed with a probability of 50%.
-
-- [`RandomSelectSubpolicy(policy)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.transforms.vision.html?highlight=randomselectsubpolicy#mindspore.dataset.transforms.vision.c_transforms.RandomSelectSubpolicy)
-Users can preset a policy; each time, one sub-policy is randomly selected. A sub-policy consists of several image-augmentation operations executed in order, each associated with two parameters: 1) the probability of executing the operation and 2) the magnitude of the operation.
-For each image in a batch, a sub-policy is randomly selected to transform the image.
-
- ```python
-
- policy = [
- [(c_vision.RandomRotation((45, 45)), 0.5), (c_vision.RandomVerticalFlip(), 1.0), (c_vision.RandomColorAdjust(), 0.8)],
- [(c_vision.RandomRotation((90, 90)), 1), (c_vision.RandomColorAdjust(), 0.2)]
- ]
- ds = ds.map(operations=c_vision.RandomSelectSubpolicy(policy), input_columns=["image"])
-
- ```
-
-    The example contains two sub-policies. Sub-policy 1 consists of the three operations `RandomRotation`, `RandomVerticalFlip`, and `RandomColorAdjust` with probabilities 0.5, 1.0, and 0.8 respectively; sub-policy 2 consists of `RandomRotation` and `RandomColorAdjust` with probabilities 1.0 and 0.2 respectively.
-
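The selection logic behind these probability-based wrappers can be sketched in a few lines of plain Python (an illustrative analogue only, not MindSpore's implementation; `policy` uses the same structure of (op, prob) pairs as above):

```python
import random

def random_select_subpolicy(image, policy, rng=random.Random()):
    """Pick one sub-policy at random, then apply each of its ops with its probability."""
    sub_policy = rng.choice(policy)      # uniform choice over sub-policies
    for op, prob in sub_policy:
        if rng.random() < prob:          # gate each op on its own probability
            image = op(image)
    return image
```

With a single sub-policy whose operations all have probability 1.0, the result is deterministic, which is an easy way to sanity-check a policy.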
-## Adjusting the Augmentation Policy Dynamically Based on Training Results
-
-MindSpore's `sync_wait` interface supports adjusting the augmentation policy at batch or epoch granularity, so the policy can be tuned dynamically during training.
-`sync_wait` must be used together with `sync_update` to implement a synchronization callback on the data pipeline.
-
-- [`sync_wait(condition_name, num_batch=1, callback=None)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html?highlight=sync_wait#mindspore.dataset.ImageFolderDatasetV2.sync_wait)
-- [`sync_update(condition_name, num_batch=None, data=None)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html?highlight=sync_update#mindspore.dataset.ImageFolderDatasetV2.sync_update)
-
-`sync_wait` blocks the whole data-processing pipeline until `sync_update` triggers the user-defined `callback` function.
-
-1. The user defines a class `Augment`, where `preprocess` is the custom augmentation function used in the `map` operation and `update` is the callback for updating the augmentation policy.
-
- ```python
- import mindspore.dataset.transforms.vision.py_transforms as transforms
- import mindspore.dataset as de
- import numpy as np
-
- class Augment:
- def __init__(self):
- self.ep_num = 0
- self.step_num = 0
-
- def preprocess(self, input_):
- return (np.array((input_ + self.step_num ** self.ep_num - 1), ))
-
- def update(self, data):
- self.ep_num = data['ep_num']
- self.step_num = data['step_num']
-
- ```
-
-2. The data pipeline first invokes the user-defined policy-update callback `aug.update`, and then the `map` operation performs the augmentation defined in `aug.preprocess` according to the updated policy.
-
- ```python
-
- arr = list(range(1, 4))
- ds = de.NumpySlicesDataset(arr, shuffle=False)
- aug = Augment()
- ds= ds.sync_wait(condition_name="policy", callback=aug.update)
- ds = ds.map(operations=[aug.preprocess])
-
- ```
-
-3. Call `sync_update` at each step to update the augmentation policy.
-
- ```python
- epochs = 5
- itr = ds.create_tuple_iterator(num_epochs=epochs)
- step_num = 0
- for ep_num in range(epochs):
- for data in itr:
-            print("epoch: {}, step: {}, data: {}".format(ep_num, step_num, data))
- step_num += 1
- ds.sync_update(condition_name="policy", data={'ep_num': ep_num, 'step_num': step_num})
-
- ```
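The wait/update handshake above can be mimicked with a small synchronization helper (a toy analogue for illustration; the class and method names here are invented and are not the MindSpore API):

```python
import threading

class SyncPoint:
    """Toy analogue of the sync_wait/sync_update handshake."""
    def __init__(self, callback):
        self._updated = threading.Event()
        self._callback = callback

    def wait(self):
        # Pipeline side: block until the trainer pushes a new policy.
        self._updated.wait()
        self._updated.clear()

    def update(self, data):
        # Trainer side: run the registered callback, then release the pipeline.
        self._callback(data)
        self._updated.set()
```

The real interfaces additionally batch the blocking by `num_batch`, but the core idea is the same: the pipeline parks on a condition that the training loop releases after delivering fresh policy data.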
diff --git a/api/source_zh_cn/programming_guide/cell.md b/api/source_zh_cn/programming_guide/cell.md
deleted file mode 100644
index 8570e80c0a9d38524dcd8e21bd196ab4ef1da9b1..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/cell.md
+++ /dev/null
@@ -1,290 +0,0 @@
-# Overview of the Cell Module
-
-
-
-- [Overview of the Cell Module](#overview-of-the-cell-module)
-    - [Concept and Usage](#concept-and-usage)
-    - [Key Member Functions](#key-member-functions)
-    - [Model Layers](#model-layers)
-    - [Loss Functions](#loss-functions)
-    - [Building a Custom Network with Cell](#building-a-custom-network-with-cell)
-
-
-
-## Concept and Usage
-
-MindSpore's Cell class is the base class for building all networks and the basic unit of a network. To customize a network, inherit from the Cell class and override its `__init__` and `construct` methods.
-
-Loss functions, optimizers, and model layers are essentially network structures as well; they also inherit from the Cell class, and users can likewise customize them according to their needs.
-
-This section first introduces the key member functions of the Cell class, then presents MindSpore's built-in loss functions, optimizers, and model layers implemented on top of Cell and how to use them, and finally shows through an example how to build a custom network with the Cell class.
-
-## Key Member Functions
-
-### The construct Method
-
-The Cell class overrides the `__call__` method, so when an instance of a Cell subclass is called, the `construct` method is executed. The network structure is defined inside `construct`.
-
-In the following example, we build a simple network with the structure Conv2d -> BatchNorm2d -> ReLU -> Flatten -> Dense.
-In the `construct` method, `x` is the input data and `out` is the result obtained after each layer's computation.
-
-```
-class Net(nn.Cell):
- def __init__(self):
- super(Net, self).__init__()
- self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
- self.bn = nn.BatchNorm2d(64)
- self.relu = nn.ReLU()
- self.flatten = nn.Flatten()
- self.fc = nn.Dense(64 * 222 * 222, 3)
-
- def construct(self, x):
- x = self.conv(x)
- x = self.bn(x)
- x = self.relu(x)
- x = self.flatten(x)
- out = self.fc(x)
- return out
-```
-
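The `__call__`-to-`construct` dispatch described above can be sketched in plain Python (a simplified stand-in for illustration, not MindSpore's actual Cell implementation):

```python
class Cell:
    """Minimal stand-in for the call dispatch of a Cell base class."""
    def __call__(self, *args):
        # Calling the instance runs the user-defined construct method.
        return self.construct(*args)

class AddBias(Cell):
    def __init__(self, bias):
        self.bias = bias

    def construct(self, x):
        return x + self.bias

print(AddBias(10)(32))  # 42
```

This is why user code never calls `construct` directly: invoking the instance itself is the supported entry point, which lets the base class hook in framework behavior around the user-defined computation.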
-### parameters_dict
-
-The parameters_dict method collects all parameters in the network structure and returns an OrderedDict whose keys are the parameter names and whose values are the parameter values.
-
-The Cell class provides many other methods for returning parameters, such as get_parameters() and trainable_params(); see the MindSpore API reference for details.
-
-A code example is shown below:
-
-```
-net = Net()
-result = net.parameters_dict()
-print(result.keys())
-print(result['conv.weight'])
-```
-
-The Net() in the example is the network built above; the sample prints the names of all parameters in the network and the value of the conv.weight parameter.
-
-The output is as follows:
-```
-odict_keys(['conv.weight', 'bn.moving_mean', 'bn.moving_variance', 'bn.gamma', 'bn.beta', 'fc.weight', 'fc.bias'])
-Parameter (name=conv.weight, value=[[[[ 1.07402597e-02 7.70052336e-03 5.55867562e-03]
- [-3.21971579e-03 -3.75304517e-04 -8.73021083e-04]
-...
-[-1.81201510e-02 -1.31190736e-02 -4.27651079e-03]]]])
-```
-
-### cells_and_names
-
-The cells_and_names method is an iterator that returns the name of each cell in the network together with the cell itself.
-
-This example retrieves the cells of the network and prints each cell's name. From the network structure above, there are five cells: 'conv', 'bn', 'relu', 'flatten', and 'fc'.
-
-A code example is shown below:
-```
-net = Net()
-names = []
-for m in net.cells_and_names():
- names.append(m[0]) if m[0] else None
-print(names)
-```
-
-The output is:
-```
-['conv', 'bn', 'relu', 'flatten', 'fc']
-```
-
-## Model Layers
-
-As described above, MindSpore builds network structures on top of the Cell base class.
-
-To meet common needs and for ease of use, the MindSpore framework has a large number of built-in model layers that users can call directly through their interfaces.
-
-Users can also define custom model layers, which is covered in the section on building custom networks with Cell.
-
-### Built-in Model Layers
-
-The MindSpore framework provides a rich set of interfaces in the layer module of nn, mainly as follows:
-
-- Activation layers:
-
-    The activation layers contain a large number of activation functions, which are frequently used when defining network structures. An activation function adds non-linear operations to the network so that it can fit the data better.
-
-    The main interfaces include Softmax, ReLU, ELU, Tanh, and Sigmoid.
-
-- Basic layers:
-
-    The basic layers implement common basic structures in networks, such as fully connected layers, one-hot encoding, dropout, and flattening.
-
-    The main interfaces include Dense, Flatten, Dropout, Norm, and OneHot.
-
-- Container layers:
-
-    The container layers implement data structures that hold multiple cells.
-
-    The main interfaces include SequentialCell and CellList.
-
-- Convolutional layers:
-
-    The convolutional layers provide convolution computation, such as ordinary convolution, depthwise convolution, and transposed convolution.
-
-    The main interfaces include Conv2d, Conv1d, Conv2dTranspose, DepthwiseConv2d, and Conv1dTranspose.
-
-- Pooling layers:
-
-    The pooling layers provide computation such as average pooling and max pooling.
-
-    The main interfaces include AvgPool2d, MaxPool2d, and AvgPool1d.
-
-- Embedding layers:
-
-    The embedding layers provide word-embedding computation, mapping input words to dense vectors.
-
-    The main interfaces include Embedding, EmbeddingLookup, and EmbeddingLookUpSplitMode.
-
-- LSTM layers:
-
-    The LSTM layers provide LSTM computation. LSTM internally calls the LSTMCell interface; LSTMCell is a single LSTM unit that performs the operation of one LSTM layer. Use the LSTM interface when stacking multiple LSTM layers.
-
-    The main interfaces include LSTM and LSTMCell.
-
-- Normalization layers:
-
-    The normalization layers provide normalization methods, transforming data toward a target mean and standard deviation through linear transformations and similar means.
-
-    The main interfaces include BatchNorm1d, BatchNorm2d, LayerNorm, GroupNorm, and GlobalBatchNorm.
-
-- Mathematical layers:
-
-    The mathematical layers provide computation composed from operators, such as data generation and mathematical calculations.
-
-    The main interfaces include ReduceLogSumExp, Range, LinSpace, and LGamma.
-
-- Image layers:
-
-    The image layers provide matrix-computation-related functionality for transforming and computing on image data.
-
-    The main interfaces include ImageGradients, SSIM, MSSSIM, PSNR, and CentralCrop.
-
-- Quantization layers:
-
-    Quantization converts data from float form to int types within a given data range, so the quantization layers provide data quantization methods and wrapped model-layer structures.
-
-    The main interfaces include Conv2dBnAct, DenseBnAct, Conv2dBnFoldQuant, and LeakyReLUQuant.
-
-### Application Example
-
-The model layers of MindSpore live under mindspore.nn and are used as follows:
-
-```
-class Net(nn.Cell):
- def __init__(self):
- super(Net, self).__init__()
- self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
- self.bn = nn.BatchNorm2d(64)
- self.relu = nn.ReLU()
- self.flatten = nn.Flatten()
- self.fc = nn.Dense(64 * 222 * 222, 3)
-
- def construct(self, x):
- x = self.conv(x)
- x = self.bn(x)
- x = self.relu(x)
- x = self.flatten(x)
- out = self.fc(x)
- return out
-```
-
-This is the same network-construction example as above; it calls the interfaces of the Conv2d, BatchNorm2d, ReLU, Flatten, and Dense model layers.
-They are defined in Net's __init__ method and actually run in the construct method; connected in order, these model-layer interfaces form an executable network.
-
-## Loss Functions
-
-The loss functions currently supported by MindSpore include L1Loss, MSELoss, SmoothL1Loss, SoftmaxCrossEntropyWithLogits, SoftmaxCrossEntropyExpand,
-and CosineEmbeddingLoss.
-
-All MindSpore loss functions are implemented as subclasses of Cell, so custom loss functions are also supported; how to build them is covered in the section on building custom networks with Cell.
-
-### Built-in Loss Functions
-
-- L1Loss:
-
-    Computes the absolute error between two inputs; used for regression models. The reduction parameter defaults to mean, returning the mean of the losses;
-if reduction is sum, the summed loss is returned; if reduction is none, the loss of each element is returned.
-
-- MSELoss:
-
-    Computes the squared error between two inputs; used for regression models. The reduction parameter defaults to mean, returning the mean of the losses;
-if reduction is sum, the summed loss is returned; if reduction is none, the loss of each element is returned.
-
-- SmoothL1Loss:
-
-    SmoothL1Loss is the smooth L1 loss function, used for regression models; the threshold sigma defaults to 1.
-
-- SoftmaxCrossEntropyWithLogits:
-
-    Cross-entropy loss, used for classification models. When the labels are not one-hot encoded, set the sparse parameter to True. The reduction parameter
-    behaves the same as in L1Loss.
-
-- SoftmaxCrossEntropyExpand:
-
-    Extended cross-entropy loss, used for classification models. When the labels are not one-hot encoded, set the sparse parameter to True.
-
-- CosineEmbeddingLoss:
-
-    CosineEmbeddingLoss measures the similarity between two inputs, used for classification models. margin defaults to 0.0, and the reduction parameter behaves the same as in L1Loss.
-
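As a sanity check on what a softmax cross-entropy loss computes for one-hot labels, the underlying math can be written out in NumPy (a sketch of the formula with mean reduction, not MindSpore's implementation):

```python
import numpy as np

def softmax_cross_entropy(logits, onehot_labels):
    # Log-softmax with the usual max-subtraction for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Mean reduction over the batch, as with reduction='mean'.
    return float(-(onehot_labels * log_softmax).sum(axis=1).mean())
```

Uniform logits over k classes give a loss of log(k), which is a handy quick check when wiring up a classifier.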
-### Application Example
-
-All MindSpore loss functions live under mindspore.nn and are used as follows:
-
-```
-import numpy as np
-import mindspore.nn as nn
-from mindspore import Tensor
-
-loss = nn.L1Loss()
-input_data = Tensor(np.array([[1, 2, 3], [2, 3, 4]]).astype(np.float32))
-target_data = Tensor(np.array([[0, 2, 5], [3, 1, 1]]).astype(np.float32))
-print(loss(input_data, target_data))
-```
-
-This example constructs two Tensors, defines an L1 loss with the nn.L1Loss() interface, and passes input_data and target_data to loss
-to compute the L1 loss; the result is 1.5. With loss = nn.L1Loss(reduction='sum'), the result is 9.0.
-With loss = nn.L1Loss(reduction='none'), the result is [[1. 0. 2.] [1. 2. 3.]].
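The three reduction modes are easy to verify by hand with NumPy (a re-computation of the arithmetic, independent of MindSpore):

```python
import numpy as np

input_data = np.array([[1, 2, 3], [2, 3, 4]], dtype=np.float32)
target_data = np.array([[0, 2, 5], [3, 1, 1]], dtype=np.float32)

abs_err = np.abs(input_data - target_data)   # reduction='none' -> [[1. 0. 2.] [1. 2. 3.]]
print(abs_err.mean())                        # reduction='mean' -> 1.5
print(abs_err.sum())                         # reduction='sum'  -> 9.0
```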
-
-
-## Building a Custom Network with Cell
-
-Network structures, as well as the model layers, loss functions, and optimizers mentioned above, are all essentially Cells, so all of them can be implemented as custom components.
-
-First create a subclass of Cell, then define operators and model layers in its __init__ method, and build the network structure in its construct method.
-
-Taking the LeNet5 network as an example, structural units such as convolutional layers, pooling layers, and fully connected layers are defined in the __init__ method and then connected in the construct method,
-forming a complete LeNet5 network structure.
-
-The LeNet5 network is implemented as follows:
-```
-import mindspore.nn as nn
-
-class LeNet5(nn.Cell):
- def __init__(self):
- super(LeNet5, self).__init__()
- self.conv1 = nn.Conv2d(3, 6, 5, pad_mode="valid")
- self.conv2 = nn.Conv2d(6, 16, 5, pad_mode="valid")
- self.fc1 = nn.Dense(16 * 5 * 5, 120)
- self.fc2 = nn.Dense(120, 84)
- self.fc3 = nn.Dense(84, 3)
- self.relu = nn.ReLU()
- self.max_pool2d = nn.MaxPool2d(kernel_size=2)
- self.flatten = nn.Flatten()
-
- def construct(self, x):
- x = self.max_pool2d(self.relu(self.conv1(x)))
- x = self.max_pool2d(self.relu(self.conv2(x)))
- x = self.flatten(x)
- x = self.relu(self.fc1(x))
- x = self.relu(self.fc2(x))
- x = self.fc3(x)
- return x
-```
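The fc1 input size of 16 * 5 * 5 follows from spatial-shape arithmetic (a quick check assuming 32x32 inputs and pooling with stride equal to the kernel size; this is plain Python, not MindSpore code):

```python
def lenet5_flatten_size(h=32, w=32):
    # Each 5x5 "valid" convolution shrinks the spatial size by 4;
    # each 2x2 max pool halves it.
    h, w = (h - 4) // 2, (w - 4) // 2   # conv1 + max_pool2d
    h, w = (h - 4) // 2, (w - 4) // 2   # conv2 + max_pool2d
    return 16 * h * w                   # 16 channels after conv2

print(lenet5_flatten_size())  # 400 == 16 * 5 * 5
```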
diff --git a/api/source_zh_cn/programming_guide/dataset_conversion.md b/api/source_zh_cn/programming_guide/dataset_conversion.md
deleted file mode 100644
index 59e68a061ff4a680539cd85a9b9973622b189ee5..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/dataset_conversion.md
+++ /dev/null
@@ -1,521 +0,0 @@
-# Converting Datasets to the MindSpore Data Format
-
-
-
-- [Converting Datasets to the MindSpore Data Format](#converting-datasets-to-the-mindspore-data-format)
-    - [Overview](#overview)
-    - [Converting Non-standard Datasets to MindRecord](#converting-non-standard-datasets-to-mindrecord)
-        - [CV Datasets](#cv-datasets)
-        - [NLP Datasets](#nlp-datasets)
-    - [Converting Common Datasets to MindRecord](#converting-common-datasets-to-mindrecord)
-        - [Converting the CIFAR-10 Dataset](#converting-the-cifar-10-dataset)
-        - [Converting the CIFAR-100 Dataset](#converting-the-cifar-100-dataset)
-        - [Converting the ImageNet Dataset](#converting-the-imagenet-dataset)
-        - [Converting the MNIST Dataset](#converting-the-mnist-dataset)
-        - [Converting CSV Datasets](#converting-csv-datasets)
-        - [Converting TFRecord Datasets](#converting-tfrecord-datasets)
-
-
-
-
-
-## Overview
-
-Users can convert non-standard datasets and common classic datasets into the MindSpore data format, MindRecord, so that they can be conveniently loaded into MindSpore for training. In some scenarios MindSpore also applies performance optimizations, so using the MindSpore data format can deliver a better performance experience.
-
-## Converting Non-standard Datasets to MindRecord
-
-This section mainly describes how to convert CV and NLP data into the MindRecord format and how to read MindRecord files with MindDataset.
-
-### CV Datasets
-
- ```python
- """
- This example shows how to convert a user-defined CV dataset to MindRecord
- format and read it back with MindDataset.
- Steps:
- 1. Create a MindRecord file with 100 records, where each sample has three
-    fields: file_name (string), label (int32), and data (bytes);
- 2. Read the MindRecord file with MindDataset.
- """
-
- from io import BytesIO
- import os
- import mindspore.dataset as ds
- from mindspore.mindrecord import FileWriter
- import mindspore.dataset.transforms.vision.c_transforms as vision
- from PIL import Image
-
- ################################ Write the MindRecord file ################################
-
- mindrecord_filename = "test.mindrecord"
-
- # Remove any existing MindRecord files first
- if os.path.exists(mindrecord_filename):
-     os.remove(mindrecord_filename)
-     os.remove(mindrecord_filename + ".db")
-
- # Create the writer; this generates both mindrecord_filename and mindrecord_filename.db
- writer = FileWriter(file_name=mindrecord_filename, shard_num=1)
-
- # Define the dataset schema
- cv_schema = {"file_name": {"type": "string"}, "label": {"type": "int32"}, "data": {"type": "bytes"}}
- writer.add_schema(cv_schema, "it is a cv dataset")
-
- # [Optional] Define index fields; only scalar fields are allowed
- writer.add_index(["file_name", "label"])
-
- # Organize the training data according to the schema and write it to the MindRecord file.
- # Image.new(...) fakes the image data here; in real scenarios, read image data from disk via io interfaces.
- data = []
- for i in range(100):  # simulate a dataset of 100 samples
-     i += 1
-
-     sample = {}
-     white_io = BytesIO()
-     Image.new('RGB', (i*10, i*10), (255, 255, 255)).save(white_io, 'JPEG')  # images may differ in size
-     image_bytes = white_io.getvalue()
-     sample['file_name'] = str(i) + ".jpg"  # the file_name field
-     sample['label'] = i  # the label field
-     sample['data'] = image_bytes  # the data field
-
-     data.append(sample)
-     if i % 10 == 0:  # flush every 10 samples
-         writer.write_raw_data(data)
-         data = []
-
- if data:  # write any remaining samples
-     writer.write_raw_data(data)
-
- writer.commit()  # finalize the writing
-
- ################################ Read the MindRecord file ################################
-
- data_set = ds.MindDataset(dataset_file=mindrecord_filename)  # create the reader; shuffle is enabled by default
- decode_op = vision.Decode()
- data_set = data_set.map(input_columns=["data"], operations=decode_op, num_parallel_workers=2)  # decode the data field
- count = 0
- for item in data_set.create_dict_iterator():  # iterate over all records in the MindRecord file
-     print("sample: {}".format(item))
-     count += 1
- print("Got {} samples".format(count))
- ```
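The buffered write pattern in the example (accumulate samples, flush every 10, then write the remainder) is independent of MindRecord itself. Below is a minimal sketch of just that pattern with a stand-in writer; `StubWriter` is hypothetical and not part of the MindSpore API:

```python
class StubWriter:
    """Hypothetical stand-in for FileWriter that just records each flushed batch."""
    def __init__(self):
        self.batches = []

    def write_raw_data(self, data):
        self.batches.append(list(data))

writer = StubWriter()
data = []
for i in range(1, 26):  # 25 samples, flushed in batches of 10
    data.append({"label": i})
    if i % 10 == 0:
        writer.write_raw_data(data)
        data = []
if data:  # flush the remainder
    writer.write_raw_data(data)

print([len(b) for b in writer.batches])  # → [10, 10, 5]
```

Flushing in fixed-size batches keeps memory usage bounded while amortizing the per-write overhead.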
-
-### NLP Datasets
-
-> Because NLP data is usually preprocessed into token IDs (dictionary indices), and that preprocessing is outside the scope of this example, the example only demonstrates how to write the already-converted token-ID data to MindRecord.
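As a rough illustration of the preprocessing assumed above (building a vocabulary and mapping words to token IDs), here is a minimal sketch; the sentences and vocabulary are made up:

```python
# Build a toy vocabulary and convert sentences to token-ID lists (illustrative only).
sentences = ["hello world", "hello mindspore"]
vocab = {}
for sentence in sentences:
    for word in sentence.split():
        vocab.setdefault(word, len(vocab))  # first occurrence gets the next free ID

token_ids = [[vocab[w] for w in s.split()] for s in sentences]
print(vocab)       # → {'hello': 0, 'world': 1, 'mindspore': 2}
print(token_ids)   # → [[0, 1], [0, 2]]
```

Real pipelines would also handle unknown words, padding, and special SOS/EOS markers, which the fields in the example below (`source_sos_ids`, `source_eos_ids`, etc.) correspond to.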
-
- ```python
- """
- This example shows how to convert a user-defined NLP dataset to MindRecord
- format and read it back with MindDataset.
- Steps:
- 1. Create a MindRecord file with 100 records, where each sample has eight
-    fields, all of which are integer arrays;
- 2. Read the MindRecord file with MindDataset.
- """
-
- import os
- import numpy as np
- import mindspore.dataset as ds
- from mindspore.mindrecord import FileWriter
-
- ################################ Write the MindRecord file ################################
-
- mindrecord_filename = "test.mindrecord"
-
- # Remove any existing MindRecord files first
- if os.path.exists(mindrecord_filename):
-     os.remove(mindrecord_filename)
-     os.remove(mindrecord_filename + ".db")
-
- # Create the writer; this generates both mindrecord_filename and mindrecord_filename.db
- writer = FileWriter(file_name=mindrecord_filename, shard_num=1)
-
- # Define the dataset schema; the text is assumed to be already converted to token IDs
- nlp_schema = {"source_sos_ids": {"type": "int64", "shape": [-1]},
-               "source_sos_mask": {"type": "int64", "shape": [-1]},
-               "source_eos_ids": {"type": "int64", "shape": [-1]},
-               "source_eos_mask": {"type": "int64", "shape": [-1]},
-               "target_sos_ids": {"type": "int64", "shape": [-1]},
-               "target_sos_mask": {"type": "int64", "shape": [-1]},
-               "target_eos_ids": {"type": "int64", "shape": [-1]},
-               "target_eos_mask": {"type": "int64", "shape": [-1]}}
- writer.add_schema(nlp_schema, "it is a preprocessed nlp dataset")
-
- # Organize the training data according to the schema and write it to the MindRecord file
- data = []
- for i in range(100):  # simulate a dataset of 100 samples
-     i += 1
-
-     # assemble one training sample
-     sample = {"source_sos_ids": np.array([i, i+1, i+2, i+3, i+4], dtype=np.int64),
-               "source_sos_mask": np.array([i*1, i*2, i*3, i*4, i*5, i*6, i*7], dtype=np.int64),
-               "source_eos_ids": np.array([i+5, i+6, i+7, i+8, i+9, i+10], dtype=np.int64),
-               "source_eos_mask": np.array([19, 20, 21, 22, 23, 24, 25, 26, 27], dtype=np.int64),
-               "target_sos_ids": np.array([28, 29, 30, 31, 32], dtype=np.int64),
-               "target_sos_mask": np.array([33, 34, 35, 36, 37, 38], dtype=np.int64),
-               "target_eos_ids": np.array([39, 40, 41, 42, 43, 44, 45, 46, 47], dtype=np.int64),
-               "target_eos_mask": np.array([48, 49, 50, 51], dtype=np.int64)}
-
-     data.append(sample)
-     if i % 10 == 0:  # flush every 10 samples
-         writer.write_raw_data(data)
-         data = []
-
- if data:  # write any remaining samples
-     writer.write_raw_data(data)
-
- writer.commit()  # finalize the writing
-
- ################################ Read the MindRecord file ################################
-
- data_set = ds.MindDataset(dataset_file=mindrecord_filename)  # create the reader; shuffle is enabled by default
- count = 0
- for item in data_set.create_dict_iterator():  # iterate over all records in the MindRecord file
-     print("sample: {}".format(item))
-     count += 1
- print("Got {} samples".format(count))
- ```
-
-## Converting Common Datasets to MindRecord
-
-MindSpore provides utility classes that convert common classic datasets to MindRecord format. The datasets and their corresponding utility classes are listed below.
-
-| Dataset | Conversion Utility Class |
-| -------- | ------------ |
-| CIFAR-10 | Cifar10ToMR |
-| CIFAR-100 | Cifar100ToMR |
-| ImageNet | ImageNetToMR |
-| MNIST | MnistToMR |
-| TFRecord | TFRecordToMR |
-| CSV File | CsvToMR |
-
-For more details about dataset conversion, see the [API documentation](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.mindrecord.html).
-
-### Converting the CIFAR-10 Dataset
-
-You can use the `Cifar10ToMR` class to convert the raw CIFAR-10 data to MindRecord format.
-
-1. Download the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz) and extract it; the directory structure is as follows.
-
- ```
- └─cifar-10-batches-py
- ├─batches.meta
- ├─data_batch_1
- ├─data_batch_2
- ├─data_batch_3
- ├─data_batch_4
- ├─data_batch_5
- ├─readme.html
- └─test_batch
- ```
-
-2. Import the dataset conversion utility class `Cifar10ToMR`.
-
- ```python
- from mindspore.mindrecord import Cifar10ToMR
- ```
-
-3. Create a `Cifar10ToMR` object and call the `transform` interface to convert the CIFAR-10 dataset to MindRecord format.
-
- ```python
- CIFAR10_DIR = "./cifar10/cifar-10-batches-py"
- MINDRECORD_FILE = "./cifar10.mindrecord"
- cifar10_transformer = Cifar10ToMR(CIFAR10_DIR, MINDRECORD_FILE)
- cifar10_transformer.transform(['label'])
- ```
-
-    **Parameters:**
-    - `CIFAR10_DIR`: path of the folder that contains the CIFAR-10 dataset.
-    - `MINDRECORD_FILE`: path of the output file in MindSpore data format.
-
-### Converting the CIFAR-100 Dataset
-
-You can use the `Cifar100ToMR` class to convert the raw CIFAR-100 data to MindRecord format.
-
-1. Prepare the CIFAR-100 dataset and extract the files to a specified directory (in this example, the dataset is saved to the `cifar100` directory), as shown below.
-
- ```
- % ll cifar100/cifar-100-python/
- meta
- test
- train
- ```
-    > CIFAR-100 dataset download address:
-
-2. Import the dataset conversion utility class `Cifar100ToMR`.
-
- ```python
- from mindspore.mindrecord import Cifar100ToMR
- ```
-
-3. Instantiate a `Cifar100ToMR` object and call the `transform` interface to convert the CIFAR-100 dataset to the MindSpore data format.
-
- ```python
- CIFAR100_DIR = "./cifar100/cifar-100-python"
- MINDRECORD_FILE = "./cifar100.mindrecord"
- cifar100_transformer = Cifar100ToMR(CIFAR100_DIR, MINDRECORD_FILE)
- cifar100_transformer.transform(['fine_label', 'coarse_label'])
- ```
-
-    **Parameters:**
-    - `CIFAR100_DIR`: path of the folder that contains the CIFAR-100 dataset.
-    - `MINDRECORD_FILE`: path of the output file in MindSpore data format.
-
-### Converting the ImageNet Dataset
-
-You can use the `ImageNetToMR` class to convert the raw ImageNet data (images and labels) to the MindSpore data format.
-
-1. Download and prepare the ImageNet dataset as required.
-
-    > ImageNet dataset download address:
-
-    Organize the downloaded ImageNet dataset into a folder that contains all the images, plus a mapping file that records the label of each image.
-
-    The label mapping file contains two columns, the image directory of each class and the label ID, separated by a space. An example mapping file is shown below:
- ```
- n01440760 0
- n01443537 1
- n01484850 2
- n01491361 3
- n01494475 4
- n01496331 5
- ```
-
-2. Import the dataset conversion utility class `ImageNetToMR`.
-
- ```python
- from mindspore.mindrecord import ImageNetToMR
- ```
-
-3. Instantiate an `ImageNetToMR` object and call the `transform` interface to convert the dataset to the MindSpore data format.
- ```python
- IMAGENET_MAP_FILE = "./testImageNetDataWhole/labels_map.txt"
- IMAGENET_IMAGE_DIR = "./testImageNetDataWhole/images"
- MINDRECORD_FILE = "./testImageNetDataWhole/imagenet.mindrecord"
- PARTITION_NUMBER = 4
- imagenet_transformer = ImageNetToMR(IMAGENET_MAP_FILE, IMAGENET_IMAGE_DIR, MINDRECORD_FILE, PARTITION_NUMBER)
- imagenet_transformer.transform()
- ```
-    **Parameters:**
-    - `IMAGENET_MAP_FILE`: path of the label mapping file of the ImageNet dataset.
-    - `IMAGENET_IMAGE_DIR`: path of the folder that contains all ImageNet images.
-    - `MINDRECORD_FILE`: path of the output file in MindSpore data format.
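The two-column label mapping file from step 1 can be generated from the class subdirectories themselves. The sketch below assumes one subfolder per class and assigns label IDs in sorted order; it is an illustration, not part of the `ImageNetToMR` API:

```python
import os
import tempfile

# Create a fake image root with three class subfolders (illustrative only).
image_dir = tempfile.mkdtemp()
for class_name in ["n01443537", "n01484850", "n01440760"]:
    os.makedirs(os.path.join(image_dir, class_name))

# Assign label IDs in sorted directory order and write the two-column mapping file.
class_dirs = sorted(d for d in os.listdir(image_dir)
                    if os.path.isdir(os.path.join(image_dir, d)))
map_file = os.path.join(image_dir, "labels_map.txt")
with open(map_file, "w") as f:
    for label_id, class_name in enumerate(class_dirs):
        f.write("{} {}\n".format(class_name, label_id))
```

Sorting the directories makes the assignment of label IDs deterministic across runs.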
-
-### Converting the MNIST Dataset
-
-You can use the `MnistToMR` class to convert the raw MNIST data to the MindSpore data format.
-
-1. Prepare the MNIST dataset and place the downloaded files in a specified directory, as shown below:
-
- ```
- % ll mnist_data/
- train-images-idx3-ubyte.gz
- train-labels-idx1-ubyte.gz
- t10k-images-idx3-ubyte.gz
- t10k-labels-idx1-ubyte.gz
- ```
-
-    > MNIST dataset download address:
-
-2. Import the dataset conversion utility class `MnistToMR`.
-
- ```python
- from mindspore.mindrecord import MnistToMR
- ```
-
-3. Instantiate a `MnistToMR` object and call the `transform` interface to convert the MNIST dataset to the MindSpore data format.
-
- ```python
- MNIST_DIR = "./mnist_data"
- MINDRECORD_FILE = "./mnist.mindrecord"
- mnist_transformer = MnistToMR(MNIST_DIR, MINDRECORD_FILE)
- mnist_transformer.transform()
- ```
-
-    **Parameters:**
-    - `MNIST_DIR`: path of the folder that contains the MNIST dataset.
-    - `MINDRECORD_FILE`: path of the output file in MindSpore data format.
-
-
-### Converting a CSV Dataset
-
- ```python
- """
- This example first creates a CSV file, then converts it to a MindRecord file
- with the CsvToMR tool in MindSpore, and finally reads it back with MindDataset.
- Steps:
- 1. Create a CSV file containing 5 records;
- 2. Convert the CSV file to MindRecord with the CsvToMR tool;
- 3. Read the MindRecord file with MindDataset.
- """
-
- import csv
- import os
- import mindspore.dataset as ds
- from mindspore.mindrecord import CsvToMR
-
- CSV_FILE_NAME = "test.csv"  # the CSV file to create
- MINDRECORD_FILE_NAME = "test.mindrecord"  # the converted MindRecord file
- PARTITION_NUM = 1
-
- ################################ Create the CSV file ################################
-
- # Generate the CSV file
- def generate_csv():
-     headers = ["id", "name", "math", "english"]
-     rows = [(1, "Lily", 78.5, 90),
-             (2, "Lucy", 99, 85.2),
-             (3, "Mike", 65, 71),
-             (4, "Tom", 95, 99),
-             (5, "Jeff", 85, 78.5)]
-     # newline='' prevents csv.writer from emitting blank lines on some platforms
-     with open(CSV_FILE_NAME, 'w', newline='', encoding='utf-8') as f:
-         writer = csv.writer(f)
-         writer.writerow(headers)
-         writer.writerows(rows)
-
- generate_csv()
-
- if os.path.exists(MINDRECORD_FILE_NAME):
-     os.remove(MINDRECORD_FILE_NAME)
-     os.remove(MINDRECORD_FILE_NAME + ".db")
-
- ################################ Convert CSV to MindRecord ################################
-
- # Initialize the CsvToMR tool
- csv_transformer = CsvToMR(CSV_FILE_NAME, MINDRECORD_FILE_NAME, partition_number=PARTITION_NUM)
- # Run the conversion
- csv_transformer.transform()
-
- assert os.path.exists(MINDRECORD_FILE_NAME)
- assert os.path.exists(MINDRECORD_FILE_NAME + ".db")
-
- ################################ Read the MindRecord file ################################
-
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE_NAME)  # create the reader; shuffle is enabled by default
- count = 0
- for item in data_set.create_dict_iterator():  # iterate over all records in the MindRecord file
-     print("sample: {}".format(item))
-     count += 1
- print("Got {} samples".format(count))
- ```
-
-### Converting a TFRecord Dataset
-
- ```python
- """
- This example creates a TFRecord file with TensorFlow, converts it to a
- MindRecord file with the TFRecordToMR tool in MindSpore, and finally reads it
- back with MindDataset.
- Steps:
- 1. Create a TFRecord file containing 10 records whose samples follow the format:
-    feature_dict = {"file_name": tf.io.FixedLenFeature([], tf.string),
-                    "image_bytes": tf.io.FixedLenFeature([], tf.string),
-                    "int64_scalar": tf.io.FixedLenFeature([], tf.int64),
-                    "float_scalar": tf.io.FixedLenFeature([], tf.float32),
-                    "int64_list": tf.io.FixedLenFeature([6], tf.int64),
-                    "float_list": tf.io.FixedLenFeature([7], tf.float32)}
- 2. Convert the TFRecord file to MindRecord with the TFRecordToMR tool;
- 3. Read the MindRecord file with MindDataset and decode its image_bytes field with the Decode operator.
- """
-
- import collections
- from io import BytesIO
- import os
- import mindspore.dataset as ds
- from mindspore.mindrecord import TFRecordToMR
- import mindspore.dataset.transforms.vision.c_transforms as vision
- from PIL import Image
- import tensorflow as tf  # requires tensorflow >= 2.1.0
-
- TFRECORD_FILE_NAME = "test.tfrecord"  # the TFRecord file to create
- MINDRECORD_FILE_NAME = "test.mindrecord"  # the converted MindRecord file
- PARTITION_NUM = 1
-
- ################################ Create the TFRecord file ################################
-
- # Generate the TFRecord file
- def generate_tfrecord():
-     def create_int_feature(values):
-         if isinstance(values, list):
-             feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))  # values: [int, int, int]
-         else:
-             feature = tf.train.Feature(int64_list=tf.train.Int64List(value=[values]))  # values: int
-         return feature
-
-     def create_float_feature(values):
-         if isinstance(values, list):
-             feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values)))  # values: [float, float]
-         else:
-             feature = tf.train.Feature(float_list=tf.train.FloatList(value=[values]))  # values: float
-         return feature
-
-     def create_bytes_feature(values):
-         if isinstance(values, bytes):
-             white_io = BytesIO()
-             Image.new('RGB', (10, 10), (255, 255, 255)).save(white_io, 'JPEG')  # images may differ in size
-             image_bytes = white_io.getvalue()
-             feature = tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes]))  # values: bytes
-         else:
-             # values: string
-             feature = tf.train.Feature(bytes_list=tf.train.BytesList(value=[bytes(values, encoding='utf-8')]))
-         return feature
-
-     writer = tf.io.TFRecordWriter(TFRECORD_FILE_NAME)
-
-     example_count = 0
-     for i in range(10):
-         file_name = "000" + str(i) + ".jpg"
-         image_bytes = bytes(str("aaaabbbbcccc" + str(i)), encoding="utf-8")
-         int64_scalar = i
-         float_scalar = float(i)
-         int64_list = [i, i+1, i+2, i+3, i+4, i+1234567890]
-         float_list = [float(i), float(i+1), float(i+2.8), float(i+3.2),
-                       float(i+4.4), float(i+123456.9), float(i+98765432.1)]
-
-         features = collections.OrderedDict()
-         features["file_name"] = create_bytes_feature(file_name)
-         features["image_bytes"] = create_bytes_feature(image_bytes)
-         features["int64_scalar"] = create_int_feature(int64_scalar)
-         features["float_scalar"] = create_float_feature(float_scalar)
-         features["int64_list"] = create_int_feature(int64_list)
-         features["float_list"] = create_float_feature(float_list)
-
-         tf_example = tf.train.Example(features=tf.train.Features(feature=features))
-         writer.write(tf_example.SerializeToString())
-         example_count += 1
-     writer.close()
-     print("Write {} rows in tfrecord.".format(example_count))
-
- generate_tfrecord()
-
- ################################ Convert TFRecord to MindRecord ################################
-
- feature_dict = {"file_name": tf.io.FixedLenFeature([], tf.string),
-                 "image_bytes": tf.io.FixedLenFeature([], tf.string),
-                 "int64_scalar": tf.io.FixedLenFeature([], tf.int64),
-                 "float_scalar": tf.io.FixedLenFeature([], tf.float32),
-                 "int64_list": tf.io.FixedLenFeature([6], tf.int64),
-                 "float_list": tf.io.FixedLenFeature([7], tf.float32),
-                 }
-
- if os.path.exists(MINDRECORD_FILE_NAME):
-     os.remove(MINDRECORD_FILE_NAME)
-     os.remove(MINDRECORD_FILE_NAME + ".db")
-
- # Initialize the TFRecordToMR tool
- tfrecord_transformer = TFRecordToMR(TFRECORD_FILE_NAME, MINDRECORD_FILE_NAME, feature_dict, ["image_bytes"])
- # Run the conversion
- tfrecord_transformer.transform()
-
- assert os.path.exists(MINDRECORD_FILE_NAME)
- assert os.path.exists(MINDRECORD_FILE_NAME + ".db")
-
- ################################ Read the MindRecord file ################################
-
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE_NAME)  # create the reader; shuffle is enabled by default
- decode_op = vision.Decode()
- data_set = data_set.map(input_columns=["image_bytes"], operations=decode_op, num_parallel_workers=2)  # decode the image field
- count = 0
- for item in data_set.create_dict_iterator():  # iterate over all records in the MindRecord file
-     print("sample: {}".format(item))
-     count += 1
- print("Got {} samples".format(count))
- ```
diff --git a/api/source_zh_cn/programming_guide/images/batch.png b/api/source_zh_cn/programming_guide/images/batch.png
deleted file mode 100644
index cce0f467eac154d0633543e5c69613ce7bdbbdcc..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/batch.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/concat.png b/api/source_zh_cn/programming_guide/images/concat.png
deleted file mode 100644
index 742aa2a0203f078ee7d06549c3372ce271cea455..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/concat.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/ctrans_invert.png b/api/source_zh_cn/programming_guide/images/ctrans_invert.png
deleted file mode 100644
index a27301d28dd11b037ab973cc97d1b3042f24f3b0..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/ctrans_invert.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/ctrans_resize.png b/api/source_zh_cn/programming_guide/images/ctrans_resize.png
deleted file mode 100644
index f4f2b23642cc8d87f3ad5684205c968c79bd794d..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/ctrans_resize.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/map.png b/api/source_zh_cn/programming_guide/images/map.png
deleted file mode 100644
index abe704717045e3816f3ffe4d10a8b023ec983b3d..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/map.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/pytrans_compose.png b/api/source_zh_cn/programming_guide/images/pytrans_compose.png
deleted file mode 100644
index 66221a4f5e7a9f985475fa2dd68f1994903636c3..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/pytrans_compose.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/randomcrop.png b/api/source_zh_cn/programming_guide/images/randomcrop.png
deleted file mode 100644
index 8095bceb67cd3643dda1dce6c060a98ccb40373f..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/randomcrop.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/randomhorizontalflip.png b/api/source_zh_cn/programming_guide/images/randomhorizontalflip.png
deleted file mode 100644
index f127d7ab479851049262fc3713dba7d14b2c908a..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/randomhorizontalflip.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/repeat.png b/api/source_zh_cn/programming_guide/images/repeat.png
deleted file mode 100644
index 7cb40834c41b8d17e37cf2da8ba368ad72212f48..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/repeat.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/shuffle.png b/api/source_zh_cn/programming_guide/images/shuffle.png
deleted file mode 100644
index d4af0f38c4ecbff6fb80ad3c06b974ef71adeb56..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/shuffle.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_bad.png b/api/source_zh_cn/programming_guide/images/tranform_bad.png
deleted file mode 100644
index 2d3ee60ccffdbe7c9ad3f5adb4235cdc8f3532d2..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_bad.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_good_1.png b/api/source_zh_cn/programming_guide/images/tranform_good_1.png
deleted file mode 100644
index 3c4b373ead883539b6d4673c68665bec20034e18..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_good_1.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_good_2.png b/api/source_zh_cn/programming_guide/images/tranform_good_2.png
deleted file mode 100644
index 066a5d082387206a01ceb6ad54cc9dd7e074c672..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_good_2.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_good_3.png b/api/source_zh_cn/programming_guide/images/tranform_good_3.png
deleted file mode 100644
index 500b36c18eb53253c58f84515d5b90b1136d23c0..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_good_3.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_pipeline.png b/api/source_zh_cn/programming_guide/images/tranform_pipeline.png
deleted file mode 100644
index 07906d4751f286de989a4c873d9fd422207eb5eb..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_pipeline.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/zip.png b/api/source_zh_cn/programming_guide/images/zip.png
deleted file mode 100644
index 2839b2c36f00533917b2406d7f215249ad8dbc6b..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/zip.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/nn.md b/api/source_zh_cn/programming_guide/nn.md
deleted file mode 100644
index a1bb61ec965da01e16ed19d5d850ee92594e9ea5..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/nn.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# The nn Module
-
-
-
-The nn module of MindSpore consists of model components implemented in Python. It wraps the low-level APIs and mainly includes model layers, loss functions, optimizers, and the like.
-
-nn also provides some interfaces that share names with Primitive operators; they further wrap the Primitive operators to provide a friendlier API.
-
-A code example is as follows:
-```python
-import numpy as np
-from mindspore.common.tensor import Tensor
-import mindspore.nn as nn
-import mindspore
-
-net = nn.PSNR()
-img1 = Tensor(np.random.random((1,3,16,16)), mindspore.float32)
-img2 = Tensor(np.random.random((1,3,16,16)), mindspore.float32)
-output = net(img1, img2)
-print("output =", output)
-```
-
-The output is as follows:
-```
-output = [7.6338434]
-```
-
-Code examples for the various model layers, loss functions, optimizers, and so on are being completed.
diff --git a/api/source_zh_cn/programming_guide/ops.md b/api/source_zh_cn/programming_guide/ops.md
deleted file mode 100644
index 53bb69f5dfd011be5b4ed47d1f440da45df482a6..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/ops.md
+++ /dev/null
@@ -1,126 +0,0 @@
-# The ops Module
-
-
-
-- [The ops Module](#the-ops-module)
- - [mindspore.ops.operations](#mindsporeopsoperations)
- - [mindspore.ops.functional](#mindsporeopsfunctional)
- - [mindspore.ops.composite](#mindsporeopscomposite)
-
-
-
-
-
-The ops module of MindSpore mainly contains operator-related interfaces, together with operator validation and the logic that associates forward and backward operators.
-
-ops mainly includes operations, functional, and composite; all three kinds of operators can be accessed directly through ops.
-- operations provides individual Primitive operators. Each operator corresponds to one primitive, the smallest execution unit, and must be instantiated before use.
-- composite provides predefined combined operators as well as complex operators that involve graph transformation, such as `GradOperation`.
-- functional provides objects instantiated from operations and composite, simplifying the operator invocation process.
-
-## mindspore.ops.operations
-
-operations provides all the Primitive operator interfaces, the lowest-level operator interfaces open to users. Operator support can be queried in the [operator support list](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html#mindspore-ops-operations).
-
-A Primitive operator, also called an operator primitive, directly wraps the underlying implementations for Ascend, GPU, AICPU, CPU, and other backends, providing basic operator capabilities to users.
-
-Primitive operator interfaces are the foundation for building higher-level interfaces, automatic differentiation, network models, and other capabilities.
-
-A code example is as follows:
-```python
-import numpy as np
-import mindspore
-from mindspore import Tensor
-import mindspore.ops.operations as P
-
-input_x = mindspore.Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
-input_y = 3.0
-pow = P.Pow()
-output = pow(input_x, input_y)
-print("output =", output)
-```
-
-The output is as follows:
-```
-output = [ 1. 8. 64.]
-```
-
-## mindspore.ops.functional
-
-To simplify calling operators that have no attributes, MindSpore provides functional versions of some operators. For input requirements, refer to the inputs and outputs of the original operators. Operator support can be queried in the [operator support list](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html#mindspore-ops-functional).
-
-For example, a functional version `F.tensor_pow` is provided for the `P.Pow` operator.
-
-A code example using functional is as follows:
-
-```python
-import numpy as np
-import mindspore
-from mindspore import Tensor
-from mindspore.ops import functional as F
-
-input_x = mindspore.Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
-input_y = 3.0
-output = F.tensor_pow(input_x, input_y)
-print("output =", output)
-```
-
-The output is as follows:
-```
-output = [ 1. 8. 64.]
-```
-
-## mindspore.ops.composite
-
-composite provides combinations of operators, including some operators related to clip_by_value and random sampling, as well as functions that involve graph transformation (`GradOperation`, `HyperMap`, `Map`, and so on).
-
-Combined operators can be used directly like ordinary functions, for example, using `normal` to generate a random distribution:
-```python
-from mindspore.common import dtype as mstype
-from mindspore.ops import composite as C
-from mindspore import Tensor
-
-mean = Tensor(1.0, mstype.float32)
-stddev = Tensor(1.0, mstype.float32)
-output = C.normal((2, 3), mean, stddev, seed=5)
-print("output =", output)
-```
-The output is as follows:
-```
-output = [[2.4911082 0.7941146 1.3117087]
- [0.30582333 1.772938 1.525996]]
-```
-
-> The code above runs on the GPU version of MindSpore.
-
-For functions that involve graph transformation, you can use `MultitypeFuncGraph` to define a group of overloaded functions that dispatch to different implementations according to the input types.
-
-A code example is as follows:
-```python
-import numpy as np
-from mindspore.ops.composite import MultitypeFuncGraph
-from mindspore import Tensor
-from mindspore.ops import functional as F
-
-add = MultitypeFuncGraph('add')
-@add.register("Number", "Number")
-def add_scalar(x, y):
- return F.scalar_add(x, y)
-
-@add.register("Tensor", "Tensor")
-def add_tensor(x, y):
- return F.tensor_add(x, y)
-
-tensor1 = Tensor(np.array([[1.2, 2.1], [2.2, 3.2]]).astype('float32'))
-tensor2 = Tensor(np.array([[1.2, 2.1], [2.2, 3.2]]).astype('float32'))
-print('tensor', add(tensor1, tensor2))
-print('scalar', add(1, 2))
-```
-The output is as follows:
-```
-tensor [[2.4, 4.2]
- [4.4, 6.4]]
-scalar 3
-```
-
-In addition, the higher-order function `GradOperation` provides a way to obtain, from an input function, the gradient function of that function. For details, see the [API documentation](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.composite.html#mindspore.ops.composite.GradOperation).
\ No newline at end of file
diff --git a/api/source_zh_cn/programming_guide/security_and_privacy.md b/api/source_zh_cn/programming_guide/security_and_privacy.md
deleted file mode 100644
index 8b6d6c20c9846c8847f76851223ad04ba451fa14..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/security_and_privacy.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# AI Security and Privacy Protection
-
-
-
-- [AI Security and Privacy Protection](#ai-security-and-privacy-protection)
-    - [Overview](#overview)
-    - [Adversarial Robustness](#adversarial-robustness)
-        - [Attack](#attack)
-        - [Defense](#defense)
-        - [Detector](#detector)
-    - [Model Security Testing](#model-security-testing)
-        - [Fuzzer](#fuzzer)
-    - [Differential Privacy Training](#differential-privacy-training)
-        - [DPModel](#dpmodel)
-    - [Privacy Leakage Risk Assessment](#privacy-leakage-risk-assessment)
-        - [MembershipInference](#membershipinference)
-
-
-
-## Overview
-
-This chapter is a programming guide to AI security and privacy protection.
-
-As a general-purpose technology, AI brings great opportunities and benefits, but it also faces new security and privacy protection challenges. MindArmour is a subproject of MindSpore that provides security and privacy protection capabilities for MindSpore, mainly covering adversarial robustness, model security testing, differential privacy training, and privacy leakage risk assessment.
-
-## Adversarial Robustness
-
-### Attack
-The Attack base class defines the interface for adversarial example generation. Its subclasses implement various concrete generation algorithms, enabling security engineers to quickly and efficiently generate adversarial examples for attacking AI models and evaluating model robustness.
-
-### Defense
-The Defense base class defines the interface for adversarial training. Its subclasses implement various concrete adversarial training algorithms to strengthen the adversarial robustness of models.
-
-### Detector
-The Detector base class defines the interface for adversarial example detection. Its subclasses implement various concrete detection algorithms to strengthen the adversarial robustness of models.
-
-For details, see the [adversarial robustness tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/model_security.html).
-
-## Model Security Testing
-
-### Fuzzer
-
-The Fuzzer class controls the fuzzing process based on the neuron-coverage gain. It uses natural perturbations and adversarial example generation methods as mutation strategies to activate more neurons, thereby exploring different kinds of model outputs and erroneous behaviors and guiding users in strengthening model robustness.
-
-For details, see the [model security testing tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/fuzzer.html).
-
-## Differential Privacy Training
-
-### DPModel
-
-DPModel inherits from mindspore.Model and provides the entry function for differential privacy training.
-
-For details, see the [differential privacy tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/differential_privacy.html).
-
-## Privacy Leakage Risk Assessment
-
-### MembershipInference
-
-The MembershipInference class provides a model inversion analysis method that, based on the model's prediction on a sample, infers whether that sample was part of the model's training set, thereby assessing the model's privacy leakage risk.
-
-For details, see the [privacy leakage risk assessment tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/membership_inference.html).
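As a heavily simplified illustration of the idea behind membership inference (not MindArmour's actual algorithm), a baseline attack thresholds the model's prediction confidence, exploiting the tendency of models to be more confident on their training data; all numbers below are made up:

```python
# Toy confidence-threshold membership attack (illustrative, not MindArmour's algorithm).
train_confidences = [0.99, 0.97, 0.95]  # models tend to be confident on members
test_confidences = [0.70, 0.62, 0.91]   # and less confident on non-members

def infer_membership(confidence, threshold=0.93):
    """Guess 'member' when the model's top confidence exceeds the threshold."""
    return confidence > threshold

guesses = [infer_membership(c) for c in train_confidences + test_confidences]
print(guesses)  # → [True, True, True, False, False, False]
```

A large gap between the attack's accuracy and random guessing indicates a higher privacy leakage risk.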
diff --git a/api/source_zh_cn/programming_guide/type.md b/api/source_zh_cn/programming_guide/type.md
deleted file mode 100644
index 3ccdb560386cb1a9fa71fd8dc6e724f2ca135662..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/type.md
+++ /dev/null
@@ -1,54 +0,0 @@
-# Data Types
-
-
-
-- [Data Types](#data-types)
-    - [Overview](#overview)
-    - [Operation Interfaces](#operation-interfaces)
-
-
-
-
-
-
-## Overview
-
-MindSpore tensors support different data types: `int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, `uint64`,
-`float16`, `float32`, `float64`, and `bool_`, which correspond one-to-one with NumPy data types. A Python `int` is
-converted to `int64` for computation, and a Python `float` is converted to `float32`.
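The correspondence and the default conversions can be pictured with plain NumPy; the `ms_to_np` dictionary below is only an illustration of the mapping, not a MindSpore API:

```python
import numpy as np

# Illustrative subset of the MindSpore <-> NumPy dtype correspondence
# (this dict is an example, not a MindSpore API).
ms_to_np = {
    "int8": np.int8,
    "int32": np.int32,
    "int64": np.int64,
    "float32": np.float32,
    "float64": np.float64,
    "bool_": np.bool_,
}

# Under the defaults described above, a Python int would be stored as int64
# and a Python float as float32:
a = np.array(7, dtype=ms_to_np["int64"])
b = np.array(3.14, dtype=ms_to_np["float32"])
print(a.dtype, b.dtype)  # → int64 float32
```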
-
-## Operation Interfaces
-- `dtype_to_nptype`
-
-    Converts a MindSpore data type to the corresponding NumPy data type.
-
-- `dtype_to_pytype`
-
-    Converts a MindSpore data type to the corresponding Python built-in data type.
-
-- `pytype_to_dtype`
-
-    Converts a Python built-in data type to the corresponding MindSpore data type.
-
-An example is as follows:
-
-```python
-from mindspore import dtype as mstype
-
-np_type = mstype.dtype_to_nptype(mstype.int32)
-ms_type = mstype.pytype_to_dtype(int)
-py_type = mstype.dtype_to_pytype(mstype.float64)
-
-print(np_type)
-print(ms_type)
-print(py_type)
-```
-
-The output is as follows:
-
-```
-<class 'numpy.int32'>
-Int64
-<class 'float'>
-```
diff --git a/api/Makefile b/docs/api_cpp/Makefile
similarity index 100%
rename from api/Makefile
rename to docs/api_cpp/Makefile
diff --git a/api/requirements.txt b/docs/api_cpp/requirements.txt
similarity index 100%
rename from api/requirements.txt
rename to docs/api_cpp/requirements.txt
diff --git a/tutorials/source_zh_cn/_static/logo_notebook.png b/docs/api_cpp/source_en/_static/logo_notebook.png
similarity index 100%
rename from tutorials/source_zh_cn/_static/logo_notebook.png
rename to docs/api_cpp/source_en/_static/logo_notebook.png
diff --git a/api/source_en/_static/logo_source.png b/docs/api_cpp/source_en/_static/logo_source.png
similarity index 100%
rename from api/source_en/_static/logo_source.png
rename to docs/api_cpp/source_en/_static/logo_source.png
diff --git a/docs/api_cpp/source_en/class_list.md b/docs/api_cpp/source_en/class_list.md
new file mode 100644
index 0000000000000000000000000000000000000000..236b192159d23e4d25548361858f80634d75ed21
--- /dev/null
+++ b/docs/api_cpp/source_en/class_list.md
@@ -0,0 +1,16 @@
+# Class List
+
+Here is a list of all classes with links to the namespace documentation for each member:
+
+| Namespace | Class Name | Description |
+| --- | --- | --- |
+| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#allocator) | Allocator defines a memory pool for dynamic memory malloc and memory free. |
+| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#context) | Context is defined for holding environment variables during runtime. |
+| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#modelimpl) | ModelImpl defines the implementation class of Model in MindSpore Lite. |
+| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#primitivec) | PrimitiveC is defined as the prototype of an operator. |
+| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#model) | Model defines the model in MindSpore Lite for managing the graph. |
+| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#modelbuilder) | ModelBuilder is defined for building the model. |
+| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/en/r1.0/session.html#litesession) | LiteSession defines a session in MindSpore Lite for compiling and running a Model. |
+| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/en/r1.0/tensor.html#mstensor) | MSTensor defines a tensor in MindSpore Lite. |
+| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/en/r1.0/dataset.html#litemat) | Class that represents an image as a LiteMat. |
+
diff --git a/docs/api_cpp/source_en/conf.py b/docs/api_cpp/source_en/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..4787de3f631f53db97bad94ffb7c95441edf0bb7
--- /dev/null
+++ b/docs/api_cpp/source_en/conf.py
@@ -0,0 +1,60 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+# import sys
+# sys.path.append('..')
+# sys.path.insert(0, os.path.abspath('.'))
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/lite/docs/source_en/apicc/dataset.md b/docs/api_cpp/source_en/dataset.md
similarity index 87%
rename from lite/docs/source_en/apicc/dataset.md
rename to docs/api_cpp/source_en/dataset.md
index 984ffc15eaa2fc44ed5e17c87f89b561083a5eae..aba075885f1b209be7f8d0d45931845d8b19b339 100644
--- a/lite/docs/source_en/apicc/dataset.md
+++ b/docs/api_cpp/source_en/dataset.md
@@ -1,11 +1,13 @@
# mindspore::dataset
-#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)>
-#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)>
+#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)>
+#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)>
## Functions of image_process.h
+### ResizeBilinear
+
```
bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
```
@@ -22,6 +24,8 @@ Resize image by bilinear algorithm, currently the data type only supports uint8,
Return True or False.
+### InitFromPixel
+
```
bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m)
```
@@ -40,6 +44,8 @@ Initialize LiteMat from pixel, currently the conversion supports rbgaTorgb and r
Return True or False.
+### ConvertTo
+
```
bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
```
@@ -56,6 +62,8 @@ Convert the data type, currently it supports converting the data type from uint8
Return True or False.
+### Crop
+
```
bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
```
@@ -74,8 +82,10 @@ Crop image, the channel supports is 3 and 1.
Return True or False.
+### SubStractMeanNormalize
+
```
-bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float *norm)
+bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector &mean, const std::vector &std)
```
Normalize image, currently the supports data type is float.
@@ -85,16 +95,18 @@ Normalize image, currently the supports data type is float.
- `src`: Input image data.
- `dst`: Output image data.
- `mean`: Mean of the data set.
- - `norm`: Norm of the data set.
+ - `std`: Standard deviation of the data set.
- Returns
Return True or False.
+### Pad
+
```
-bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int left, const int right, const PaddBorderType pad_type, uint8_t fill_r, uint8_t fill_g, uint8_t fill_b)
+bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int right, PaddBorderType pad_type, uint8_t fill_b_or_gray, uint8_t fill_g, uint8_t fill_r)
```
-Padd image, the channel supports is 3 and 1.
+Pad image, the supported channels are 3 and 1.
- Parameters
@@ -105,13 +117,15 @@ Padd image, the channel supports is 3 and 1.
- `left`: The length of left.
- `right`: The length of right.
- `pad_type`: The type of pad.
- - `fill_r`: R.
+ - `fill_b_or_gray`: B or GRAY.
- `fill_g`: G.
- - `fill_b`: B.
+ - `fill_r`: R.
- Returns
Return True or False.
+### Affine
+
```
void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C1 borderValue)
```
@@ -140,6 +154,8 @@ Apply affine transformation for 3 channel image.
- `dsize`: The size of the output image.
- `borderValue`: The pixel value is used for filing after the image is captured.
+### GetDefaultBoxes
+
```
std::vector> GetDefaultBoxes(BoxesConfig config)
```
@@ -154,6 +170,8 @@ Get default anchor boxes for Faster R-CNN, SSD, YOLO etc.
Return the default boxes.
+### ConvertBoxes
+
```
void ConvertBoxes(std::vector> &boxes, std::vector> &default_boxes, BoxesConfig config)
```
@@ -166,6 +184,8 @@ Convert the prediction boxes to the actual boxes with (y, x, h, w).
- `default_boxes`: Default box.
- `config`: Objects of BoxesConfig structure.
+### ApplyNms
+
```
std::vector ApplyNms(std::vector> &all_boxes, std::vector &all_scores, float thres, int max_boxes)
```
@@ -190,6 +210,7 @@ Class that represents a lite Mat of a Image.
**Constructors & Destructors**
+### LiteMat
```
LiteMat()
@@ -211,6 +232,7 @@ Destructor of MindSpore dataset LiteMat.
**Public Member Functions**
+### Init
```
void Init(int width, LDataType data_type = LDataType::UINT8)
@@ -222,6 +244,8 @@ void Init(int width, int height, int channel, LDataType data_type = LDataType::U
The function to initialize the channel, width and height of the image, but the parameters are different.
+### IsEmpty
+
```
bool IsEmpty() const
```
@@ -232,6 +256,8 @@ A function to determine whether the object is empty.
Return True or False.
+### Release
+
```
void Release()
```
@@ -240,6 +266,8 @@ A function to release memory.
**Private Member Functions**
+### AlignMalloc
+
```
void *AlignMalloc(unsigned int size)
```
@@ -254,6 +282,8 @@ Apply for memory alignment.
Return the size of a pointer.
+### AlignFree
+
```
void AlignFree(void *ptr)
```
@@ -270,6 +300,8 @@ Initialize the value of elem_size_ by data_type.
- `data_type`: Type of data.
+### addRef
+
```
int addRef(int *p, int value)
```
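The renamed `SubStractMeanNormalize` signature above can be illustrated with a minimal, self-contained sketch of the documented semantics — per-channel `(src - mean) / std` over an interleaved HWC buffer. `FloatImage` and the function name are stand-ins for illustration only; the real API operates on `LiteMat`:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical stand-in for LiteMat: a flat, channel-interleaved HWC float buffer.
struct FloatImage {
  int w, h, c;
  std::vector<float> data;  // size w * h * c
};

// Sketch of the documented behavior: for every pixel value,
// dst = (src - mean[channel]) / std[channel]. Returns false when the
// mean/std vectors do not match the channel count, true otherwise.
bool SubstractMeanNormalizeSketch(const FloatImage &src, FloatImage &dst,
                                  const std::vector<float> &mean,
                                  const std::vector<float> &std_dev) {
  if (static_cast<int>(mean.size()) != src.c ||
      static_cast<int>(std_dev.size()) != src.c) {
    return false;
  }
  dst = src;
  for (size_t i = 0; i < src.data.size(); ++i) {
    int ch = static_cast<int>(i) % src.c;  // channel index in interleaved layout
    dst.data[i] = (src.data[i] - mean[ch]) / std_dev[ch];
  }
  return true;
}
```

This also makes the API change concrete: the old `const float *mean, float *norm` pair carried no length information, while the `std::vector` parameters can be validated against the channel count.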
diff --git a/lite/docs/source_en/apicc/errorcode_and_metatype.md b/docs/api_cpp/source_en/errorcode_and_metatype.md
similarity index 92%
rename from lite/docs/source_en/apicc/errorcode_and_metatype.md
rename to docs/api_cpp/source_en/errorcode_and_metatype.md
index df566213408154cd2034eb2932a5f6d1380f89f3..45b4877a858d82df61c1dffa8dc734edddd300a5 100644
--- a/lite/docs/source_en/apicc/errorcode_and_metatype.md
+++ b/docs/api_cpp/source_en/errorcode_and_metatype.md
@@ -13,6 +13,7 @@ Description of error code and meta type supported in MindSpore Lite.
| RET_NO_CHANGE | -4 | No change. |
| RET_SUCCESS_EXIT | -5 | No error but exit. |
| RET_MEMORY_FAILED | -6 | Fail to create memory. |
+| RET_NOT_SUPPORT | -7 | Unsupported feature. |
| RET_OUT_OF_TENSOR_RANGE | -101 | Failed to check range. |
| RET_INPUT_TENSOR_ERROR | -102 | Failed to check input tensor. |
| RET_REENTRANT_ERROR | -103 | Exist executor running. |
@@ -24,6 +25,8 @@ Description of error code and meta type supported in MindSpore Lite.
| RET_FORMAT_ERR | -401 | Failed to check the tensor format. |
| RET_INFER_ERR | -501 | Failed to infer shape. |
| RET_INFER_INVALID | -502 | Invalid infer shape before runtime. |
+| RET_INPUT_PARAM_INVALID | -601 | Invalid input parameter from the user. |
+| RET_INPUT_PARAM_LACK | -602 | Missing input parameter from the user. |
## MetaType
An **enum** type.
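A short sketch of how a caller might consume these codes, including the newly added ones. The enum values mirror the table above; `RET_OK = 0` and the helper function are assumptions based on the usual `errorcode.h` convention, not part of this document:

```cpp
#include <cassert>
#include <string>

// Values copied from the error-code table; RET_OK is an assumed baseline.
enum StatusCode {
  RET_OK = 0,
  RET_NOT_SUPPORT = -7,          // added in r1.0
  RET_INPUT_PARAM_INVALID = -601,  // added in r1.0
  RET_INPUT_PARAM_LACK = -602,     // added in r1.0
};

// Hypothetical helper mapping a STATUS return value to a message.
std::string StatusToString(int code) {
  switch (code) {
    case RET_OK: return "No error";
    case RET_NOT_SUPPORT: return "Unsupported feature";
    case RET_INPUT_PARAM_INVALID: return "Invalid input parameter";
    case RET_INPUT_PARAM_LACK: return "Missing input parameter";
    default: return "Unknown error";
  }
}
```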
diff --git a/lite/docs/source_en/index.rst b/docs/api_cpp/source_en/index.rst
similarity index 48%
rename from lite/docs/source_en/index.rst
rename to docs/api_cpp/source_en/index.rst
index abecfe957e16896bca6efeb5a1cb376835251fa6..6b3fb87da08b8e47644ddb3bc308dd63de1d8d21 100644
--- a/lite/docs/source_en/index.rst
+++ b/docs/api_cpp/source_en/index.rst
@@ -1,16 +1,18 @@
.. MindSpore documentation master file, created by
- sphinx-quickstart on Thu Aug 17 10:00:00 2020.
+ sphinx-quickstart on Thu Mar 24 10:00:00 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-MindSpore Lite Documentation
-============================
+MindSpore C++ API
+=================
.. toctree::
- :glob:
- :maxdepth: 1
+ :glob:
+ :maxdepth: 1
- architecture
- apicc/apicc
- operator_list
- glossary
+ class_list
+ lite
+ session
+ tensor
+ dataset
+ errorcode_and_metatype
\ No newline at end of file
diff --git a/lite/docs/source_en/apicc/lite.md b/docs/api_cpp/source_en/lite.md
similarity index 65%
rename from lite/docs/source_en/apicc/lite.md
rename to docs/api_cpp/source_en/lite.md
index 93bc93edf0d709c8d227723f921ea39f9a39f3b0..6e2c33eeb2741a0e88b778ebab245716078d168d 100644
--- a/lite/docs/source_en/apicc/lite.md
+++ b/docs/api_cpp/source_en/lite.md
@@ -1,10 +1,10 @@
# mindspore::lite
-#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)>
+#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/context.h)>
-#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)>
+#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/model.h)>
-#include <[version.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/version.h)>
+#include <[version.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/version.h)>
## Allocator
@@ -23,23 +23,6 @@ Context()
Constructor of MindSpore Lite Context using default value for parameters.
-```
-Context(int thread_num, std::shared_ptr allocator, DeviceContext device_ctx)
-```
-Constructor of MindSpore Lite Context using input value for parameters.
-
-- Parameters
-
- - `thread_num`: Define the work thread number during the runtime.
-
- - `allocator`: Define the allocator for malloc.
-
- - `device_ctx`: Define device information during the runtime.
-
-- Returns
-
- The instance of MindSpore Lite Context.
-
```
~Context()
```
@@ -52,10 +35,12 @@ float16_priority
```
A **bool** value. Defaults to **false**. Prior enable float16 inference.
+> Enabling float16 inference may reduce inference precision, because some variables may exceed the range of float16 during the forward pass.
+
```
-device_ctx_{DT_CPU}
+device_type
```
-A [**DeviceContext**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#devicecontext) struct defined at the bottom of the text. Using to specify the device.
+A [**DeviceType**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#devicetype) **enum** type. Defaults to **DT_CPU**. Used to specify the device.
```
thread_num_
@@ -67,13 +52,13 @@ An **int** value. Defaults to **2**. Thread number config for thread pool.
allocator
```
-A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#allocator).
+A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#allocator).
```
cpu_bind_mode_
```
-A [**CpuBindMode**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#cpubindmode) enum variable. Defaults to **MID_CPU**.
+A [**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#cpubindmode) **enum** variable. Defaults to **MID_CPU**.
## PrimitiveC
Primitive is defined as prototype of operator.
@@ -121,6 +106,7 @@ Static method to create a Model pointer.
An **enum** type. CpuBindMode defined for holding bind cpu strategy argument.
**Attributes**
+
```
MID_CPU = -1
```
@@ -153,16 +139,6 @@ GPU device type.
DT_NPU = 0
```
NPU device type, not supported yet.
-## DeviceContext
-
-A **struct**. DeviceContext defined for holding DeviceType.
-
-**Attributes**
-```
-type
-```
-A [**DeviceType**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#devicetype) variable. The device type.
-
## Version
```
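The field changes above (the `DeviceContext` struct removed, `device_type` now held directly) can be summarized with a self-contained mock of the documented `Context` members; the real class lives in `mindspore/lite/include/context.h`, and `ContextSketch` here is only an illustration:

```cpp
#include <cassert>

// Enum values transcribed from the DeviceType and CpuBindMode sections.
enum DeviceType { DT_CPU = -1, DT_NPU = 0, DT_GPU = 1 };
enum CpuBindMode { MID_CPU = -1, NO_BIND = 0, HIGHER_CPU = 1 };

// Mock of the documented public attributes with their documented defaults.
struct ContextSketch {
  bool float16_priority = false;    // may lower precision, see the note above
  DeviceType device_type = DT_CPU;  // replaces the removed device_ctx_ struct
  int thread_num_ = 2;
  CpuBindMode cpu_bind_mode_ = MID_CPU;
};
```

The practical upshot of the r1.0 change is that callers set the device with a plain enum assignment (`ctx.device_type = DT_GPU;`) instead of populating a one-field wrapper struct.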
diff --git a/lite/docs/source_en/apicc/session.md b/docs/api_cpp/source_en/session.md
similarity index 85%
rename from lite/docs/source_en/apicc/session.md
rename to docs/api_cpp/source_en/session.md
index 3ecee43e21d6ef04213fba8ee566093c2ff7d9b5..63aa4aea1930e8ae9046441e9db24ecb30ff1cf4 100644
--- a/lite/docs/source_en/apicc/session.md
+++ b/docs/api_cpp/source_en/session.md
@@ -1,6 +1,6 @@
# mindspore::session
-#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)>
+#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/lite_session.h)>
## LiteSession
@@ -41,7 +41,7 @@ Compile MindSpore Lite model.
- Returns
- STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
+ STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
```
virtual std::vector GetInputs() const
@@ -73,13 +73,13 @@ Run session with callback.
- Parameters
- - `before`: A [**KernelCallBack**](https://www.mindspore.cn/lite/docs/en/master/apicc/session.html#kernelcallback) function. Define a callback function to be called before running each node.
+ - `before`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/session.html#kernelcallback) function. Define a callback function to be called before running each node.
- - `after`: A [**KernelCallBack**](https://www.mindspore.cn/lite/docs/en/master/apicc/session.html#kernelcallback) function. Define a callback function to be called after running each node.
+ - `after`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/session.html#kernelcallback) function. Define a callback function to be called after running each node.
- Returns
- STATUS as an error code of running graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
+ STATUS as an error code of running graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
```
virtual std::vector GetOutputsByNodeName(const std::string &node_name) const
@@ -151,7 +151,7 @@ Resize inputs shape.
- Returns
- STATUS as an error code of resize inputs, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
+ STATUS as an error code of resize inputs, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
**Static Public Member Functions**
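The `RunGraph(before, after)` callback flow described above can be sketched with a self-contained mock. `KernelCallBackSketch` and `RunGraphSketch` are stand-ins for the real `KernelCallBack` type and `LiteSession::RunGraph`, shown only to illustrate when each callback fires:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Simplified callback: receives the node name, returns false to abort.
using KernelCallBackSketch = std::function<bool(const std::string &node_name)>;

// Mock of the documented flow: `before` runs before each node, `after`
// runs after it; a false return stops the graph with an error STATUS.
int RunGraphSketch(const std::vector<std::string> &nodes,
                   const KernelCallBackSketch &before = nullptr,
                   const KernelCallBackSketch &after = nullptr) {
  for (const auto &node : nodes) {
    if (before && !before(node)) return -1;  // stand-in for an error code
    // ... the real session would execute the node's kernel here ...
    if (after && !after(node)) return -1;
  }
  return 0;  // stand-in for RET_OK
}
```

A typical use is timing or dumping tensors per node: record a timestamp in `before` and compute the elapsed time in `after`.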
diff --git a/lite/docs/source_en/apicc/tensor.md b/docs/api_cpp/source_en/tensor.md
similarity index 50%
rename from lite/docs/source_en/apicc/tensor.md
rename to docs/api_cpp/source_en/tensor.md
index 014929ba12ea2d636478ea7515562559bd9af087..f74d7a33ec2395f67192ebc3002143ac85f2f871 100644
--- a/lite/docs/source_en/apicc/tensor.md
+++ b/docs/api_cpp/source_en/tensor.md
@@ -1,6 +1,6 @@
# mindspore::tensor
-#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)>
+#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/ms_tensor.h)>
## MSTensor
@@ -30,25 +30,12 @@ virtual TypeId data_type() const
```
Get data type of the MindSpore Lite MSTensor.
-> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types in TypeId enum are suitable for MSTensor.
+> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/core/ir/dtype/type_id.h). Only number types in TypeId enum are suitable for MSTensor.
- Returns
MindSpore Lite TypeId of the MindSpore Lite MSTensor.
-```
-virtual TypeId set_data_type(TypeId data_type)
-```
-Set data type for the MindSpore Lite MSTensor.
-
-- Parameters
-
- - `data_type`: Define MindSpore Lite TypeId to be set in the MindSpore Lite MSTensor.
-
-- Returns
-
- MindSpore Lite TypeId of the MindSpore Lite MSTensor after set.
-
```
virtual std::vector shape() const
```
@@ -59,19 +46,6 @@ Get shape of the MindSpore Lite MSTensor.
A vector of int as the shape of the MindSpore Lite MSTensor.
-```
-virtual size_t set_shape(const std::vector &shape)
-```
-Set shape for the MindSpore Lite MSTensor.
-
-- Parameters
-
- - `shape`: Define a vector of int as shape to be set into the MindSpore Lite MSTensor.
-
-- Returns
-
- Size of shape of the MindSpore Lite MSTensor after set.
-
```
virtual int DimensionSize(size_t index) const
```
@@ -96,16 +70,6 @@ Get number of element in MSTensor.
Number of element in MSTensor.
-```
-virtual std::size_t hash() const
-```
-
-Get hash of the MindSpore Lite MSTensor.
-
-- Returns
-
- Hash of the MindSpore Lite MSTensor.
-
```
virtual size_t Size() const
```
@@ -129,23 +93,3 @@ Get the pointer of data in MSTensor.
- Returns
The pointer points to data in MSTensor.
-
-**Static Public Member Functions**
-
-```
-static MSTensor *CreateTensor(TypeId data_type, const std::vector &shape)
-```
-
-Static method to create a MSTensor pointer.
-
-> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types in TypeId enum are suitable for MSTensor.
-
-- Parameters
-
- - `data_type`: Define the data type of tensor to be created.
-
- - `shape`: Define the shape of tensor to be created.
-
-- Returns
-
- The pointer of MSTensor.
\ No newline at end of file
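The relationship between the remaining accessors `ElementsNum()` and `Size()` can be sketched in a few self-contained lines: the byte size is the element count (the product of the shape) times the width of the element type. Passing `elem_bytes` explicitly is an assumption of this sketch; the real `MSTensor` derives it from its `TypeId`:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of MSTensor::ElementsNum(): product of all shape dimensions.
size_t ElementsNumSketch(const std::vector<int> &shape) {
  size_t n = 1;
  for (int d : shape) n *= static_cast<size_t>(d);
  return n;
}

// Sketch of MSTensor::Size(): byte size of the tensor data.
size_t SizeSketch(const std::vector<int> &shape, size_t elem_bytes) {
  return ElementsNumSketch(shape) * elem_bytes;
}
```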
diff --git a/docs/api_cpp/source_zh_cn/_static/logo_notebook.png b/docs/api_cpp/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_cpp/source_zh_cn/_static/logo_notebook.png differ
diff --git a/api/source_zh_cn/_static/logo_source.png b/docs/api_cpp/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from api/source_zh_cn/_static/logo_source.png
rename to docs/api_cpp/source_zh_cn/_static/logo_source.png
diff --git a/docs/api_cpp/source_zh_cn/class_list.md b/docs/api_cpp/source_zh_cn/class_list.md
new file mode 100644
index 0000000000000000000000000000000000000000..2999e89bd33b017aa40e54c5a60874604f98a424
--- /dev/null
+++ b/docs/api_cpp/source_zh_cn/class_list.md
@@ -0,0 +1,15 @@
+# 类列表
+
+MindSpore Lite中的类定义及其所属命名空间和描述:
+
+| 命名空间 | 类 | 描述 |
+| --- | --- | --- |
+| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#allocator) | Allocator定义了一个内存池,用于动态地分配和释放内存。 |
+| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#context) | Context用于保存执行期间的环境变量。 |
+| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#modelimpl) | ModelImpl定义了MindSpore Lite中的Model的实现类。 |
+| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#primitivec) | PrimitiveC定义为算子的原型。 |
+| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#model) | Model定义了MindSpore Lite中的模型,便于计算图管理。 |
+| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#modelbuilder) | ModelBuilder定义了MindSpore Lite中的模型构建器。 |
+| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/session.html#litesession) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 |
+| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/tensor.html#mstensor) | MSTensor定义了MindSpore Lite中的张量。 |
+| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/dataset.html#litemat) |LiteMat是一个处理图像的类。 |
diff --git a/docs/api_cpp/source_zh_cn/conf.py b/docs/api_cpp/source_zh_cn/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..625e5acd3bde751f170596e75261be4bb2bde60f
--- /dev/null
+++ b/docs/api_cpp/source_zh_cn/conf.py
@@ -0,0 +1,65 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+# import sys
+# sys.path.append('..')
+# sys.path.insert(0, os.path.abspath('.'))
+
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_search_language = 'zh'
+
+html_search_options = {'dict': '../../resource/jieba.txt'}
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/apicc/dataset.md b/docs/api_cpp/source_zh_cn/dataset.md
similarity index 87%
rename from lite/docs/source_zh_cn/apicc/dataset.md
rename to docs/api_cpp/source_zh_cn/dataset.md
index 379d3e11632327b3075c0f8a56d53c852cdeae80..190cdeef7747cceea411ac992b1dcf62b1b0000a 100644
--- a/lite/docs/source_zh_cn/apicc/dataset.md
+++ b/docs/api_cpp/source_zh_cn/dataset.md
@@ -1,11 +1,13 @@
# mindspore::dataset
-#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)>
-#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)>
+#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)>
+#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)>
## image_process.h文件的函数
+### ResizeBilinear
+
```
bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
```
@@ -22,6 +24,8 @@ bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
返回True或者False。
+### InitFromPixel
+
```
bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m)
```
@@ -40,6 +44,8 @@ bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType d
返回True或者False。
+### ConvertTo
+
```
bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
```
@@ -56,6 +62,8 @@ bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
返回True或者False。
+### Crop
+
```
bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
```
@@ -74,8 +82,10 @@ bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
返回True或者False。
+### SubStractMeanNormalize
+
```
-bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float *norm)
+bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector &mean, const std::vector &std)
```
规一化图像,当前支持的数据类型为float。
@@ -85,13 +95,15 @@ bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float
- `src`: 输入的图片数据。
- `dst`: 输出图像数据。
- `mean`: 数据集的均值。
- - `norm`: 数据集的方差。
+ - `std`: 数据集的标准差。
- 返回值
返回True或者False。
+### Pad
+
```
-bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int left, const int right, const PaddBorderType pad_type, uint8_t fill_r, uint8_t fill_g, uint8_t fill_b)
+bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int right, PaddBorderType pad_type, uint8_t fill_b_or_gray, uint8_t fill_g, uint8_t fill_r)
```
填充图像,通道支持为3和1。
@@ -105,13 +117,15 @@ bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int
- `left`: 图片左边长度。
- `right`: 图片右边长度。
- `pad_type`: padding的类型。
- - `fill_r`: R.
+ - `fill_b_or_gray`: B或者GRAY。
- `fill_g`: G.
- - `fill_b`: B.
+ - `fill_r`: R.
- 返回值
返回True或者False。
+### Affine
+
```
void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C1 borderValue)
```
@@ -140,6 +154,8 @@ void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsi
- `dsize`: 输出图像的大小。
- `borderValue`: 采图之后用于填充的像素值。
+### GetDefaultBoxes
+
```
std::vector> GetDefaultBoxes(BoxesConfig config)
```
@@ -154,6 +170,8 @@ std::vector> GetDefaultBoxes(BoxesConfig config)
返回默认框。
+### ConvertBoxes
+
```
void ConvertBoxes(std::vector> &boxes, std::vector> &default_boxes, BoxesConfig config)
```
@@ -166,6 +184,8 @@ void ConvertBoxes(std::vector> &boxes, std::vector ApplyNms(std::vector> &all_boxes, std::vector &all_scores, float thres, int max_boxes)
```
@@ -190,6 +210,7 @@ LiteMat是一个处理图像的类。
**构造函数和析构函数**
+### LiteMat
```
LiteMat()
@@ -211,6 +232,7 @@ MindSpore dataset LiteMat的析构函数。
**公有成员函数**
+### Init
```
void Init(int width, LDataType data_type = LDataType::UINT8)
@@ -222,6 +244,8 @@ void Init(int width, int height, int channel, LDataType data_type = LDataType::U
该函数用于初始化图像的通道,宽度和高度,参数不同。
+### IsEmpty
+
```
bool IsEmpty() const
```
@@ -232,6 +256,8 @@ bool IsEmpty() const
返回True或者False。
+### Release
+
```
void Release()
```
@@ -240,6 +266,8 @@ void Release()
**私有成员函数**
+### AlignMalloc
+
```
void *AlignMalloc(unsigned int size)
```
@@ -254,12 +282,17 @@ void *AlignMalloc(unsigned int size)
返回指针的大小。
+### AlignFree
+
```
void AlignFree(void *ptr)
```
释放指针内存大小的方法。
+
+### InitElemSize
+
```
void InitElemSize(LDataType data_type)
```
diff --git a/lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md b/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md
similarity index 92%
rename from lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md
rename to docs/api_cpp/source_zh_cn/errorcode_and_metatype.md
index 4195eaedcfa2cda8e0470d3db06950e35e2050d8..59f0d81ea4a3a254c7b37e9895c89de1d0357b3d 100644
--- a/lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md
+++ b/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md
@@ -13,6 +13,7 @@
| RET_NO_CHANGE | -4 | 无改变。 |
| RET_SUCCESS_EXIT | -5 | 无错误退出。 |
| RET_MEMORY_FAILED | -6 | 创建内存失败。 |
+| RET_NOT_SUPPORT | -7 | 尚未支持。 |
| RET_OUT_OF_TENSOR_RANGE | -101 | 输出检查越界。 |
| RET_INPUT_TENSOR_ERROR | -102 | 输入检查越界。 |
| RET_REENTRANT_ERROR | -103 | 存在运行中的执行器。 |
@@ -24,6 +25,8 @@
| RET_FORMAT_ERR | -401 | 张量格式检查失败。 |
| RET_INFER_ERR | -501 | 维度推理失败。 |
| RET_INFER_INVALID | -502 | 无效的维度推理。 |
+| RET_INPUT_PARAM_INVALID | -601 | 无效的用户输入参数。 |
+| RET_INPUT_PARAM_LACK | -602 | 缺少必要的输入参数。 |
## MetaType
diff --git a/docs/api_cpp/source_zh_cn/index.rst b/docs/api_cpp/source_zh_cn/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..6b3fb87da08b8e47644ddb3bc308dd63de1d8d21
--- /dev/null
+++ b/docs/api_cpp/source_zh_cn/index.rst
@@ -0,0 +1,18 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 10:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore C++ API
+=================
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ class_list
+ lite
+ session
+ tensor
+ dataset
+ errorcode_and_metatype
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/apicc/lite.md b/docs/api_cpp/source_zh_cn/lite.md
similarity index 60%
rename from lite/docs/source_zh_cn/apicc/lite.md
rename to docs/api_cpp/source_zh_cn/lite.md
index 2673487a861f56db5c8b9f6bab8daac555cb7fed..f0797d6a7e5d056f4cfa883d7c752a0f12115513 100644
--- a/lite/docs/source_zh_cn/apicc/lite.md
+++ b/docs/api_cpp/source_zh_cn/lite.md
@@ -1,197 +1,169 @@
-# mindspore::lite
-
-#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)>
-
-#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)>
-
-#include <[version.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/version.h)>
-
-
-## Allocator
-
-Allocator类定义了一个内存池,用于动态地分配和释放内存。
-
-## Context
-
-Context类用于保存执行中的环境变量。
-
-**构造函数和析构函数**
-
-```
-Context()
-```
-
-用默认参数构造MindSpore Lite Context 对象。
-
-```
-Context(int thread_num, std::shared_ptr allocator, DeviceContext device_ctx)
-```
-
-根据输入参数构造MindSpore Lite Context 对象。
-
-- 参数
-
- - `thread_num`: 定义了执行线程数。
-
- - `allocator`: 定义了内存分配器。
-
- - `device_ctx`: 定义了设备信息。
-
-- 返回值
-
- MindSpore Lite Context 指针。
-
-```
-~Context()
-```
-
-MindSpore Lite Context 的析构函数。
-
-**公有属性**
-
-```
-float16_priority
-```
-
-**bool** 值,默认为**false**,用于使能float16 推理。
-
-```
-device_ctx_{DT_CPU}
-```
-
-[**DeviceContext**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#devicecontext)结构体。用于设置设备信息。
-
-```
-thread_num_
-```
-
-**int** 值,默认为**2**,设置线程数。
-
-```
-allocator
-```
-
-指针类型,指向内存分配器[**Allocator**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#allocator)的指针。
-
-```
-cpu_bind_mode_
-```
-
-[**CpuBindMode**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#cpubindmode)枚举类型,默认为**MID_CPU**。
-
-## PrimitiveC
-
-PrimitiveC定义为算子的原型。
-
-## Model
-
-Model定义了MindSpore Lite中的模型,便于计算图管理。
-
-**析构函数**
-
-```
-~Model()
-```
-
-MindSpore Lite Model的析构函数。
-
-**公有成员函数**
-
-```
-void Destroy()
-```
-
-释放Model内的所有过程中动态分配的内存。
-
-```
-void Free()
-```
-
-释放MindSpore Lite Model中的MetaGraph。
-
-**静态公有成员函数**
-
-```
-static Model *Import(const char *model_buf, size_t size)
-```
-
-创建Model指针的静态方法。
-
-- 参数
-
- - `model_buf`: 定义了读取模型文件的缓存区。
-
- - `size`: 定义了模型缓存区的字节数。
-
-- 返回值
-
- 指向MindSpore Lite的Model的指针。
-
-## CpuBindMode
-枚举类型,设置cpu绑定策略。
-
-**属性**
-
-```
-MID_CPU = -1
-```
-
-优先中等CPU绑定策略。
-
-```
-HIGHER_CPU = 1
-```
-
-优先高级CPU绑定策略。
-
-```
-NO_BIND = 0
-```
-
-不绑定。
-
-## DeviceType
-枚举类型,设置设备类型。
-
-**属性**
-
-```
-DT_CPU = -1
-```
-
-设备为CPU。
-
-```
-DT_GPU = 1
-```
-
-设备为GPU。
-
-```
-DT_NPU = 0
-```
-
-设备为NPU,暂不支持。
-
-## DeviceContext
-
-定义设备类型的结构体。
-
-**属性**
-
-```
-type
-```
-
-[**DeviceType**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#devicetype) 变量。设备类型。
-
-## Version
-
-```
-std::string Version()
-```
-全局方法,用于获取版本的字符串。
-
-- 返回值
-
+# mindspore::lite
+
+#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/context.h)>
+
+#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/model.h)>
+
+#include <[version.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/version.h)>
+
+
+## Allocator
+
+Allocator类定义了一个内存池,用于动态地分配和释放内存。
+
+## Context
+
+Context类用于保存执行中的环境变量。
+
+**构造函数和析构函数**
+
+```
+Context()
+```
+
+用默认参数构造MindSpore Lite Context 对象。
+
+```
+~Context()
+```
+
+MindSpore Lite Context 的析构函数。
+
+**公有属性**
+
+```
+float16_priority
+```
+
+**bool**值,默认为**false**,用于使能float16 推理。
+
+> 使能float16推理可能会导致模型推理精度下降,因为在模型推理的中间过程中,有些变量可能会超出float16的数值范围。
+
+```
+device_type
+```
+
+[**DeviceType**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#devicetype)枚举类型。默认为**DT_CPU**,用于设置设备信息。
+
+```
+thread_num_
+```
+
+**int** 值,默认为**2**,设置线程数。
+
+```
+allocator
+```
+
+指针类型,指向内存分配器[**Allocator**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#allocator)的指针。
+
+```
+cpu_bind_mode_
+```
+
+[**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#cpubindmode)枚举类型,默认为**MID_CPU**。
+
+## PrimitiveC
+
+PrimitiveC定义为算子的原型。
+
+## Model
+
+Model定义了MindSpore Lite中的模型,便于计算图管理。
+
+**析构函数**
+
+```
+~Model()
+```
+
+MindSpore Lite Model的析构函数。
+
+**公有成员函数**
+
+```
+void Destroy()
+```
+
+释放Model内的所有过程中动态分配的内存。
+
+```
+void Free()
+```
+
+释放MindSpore Lite Model中的MetaGraph。
+
+**静态公有成员函数**
+
+```
+static Model *Import(const char *model_buf, size_t size)
+```
+
+创建Model指针的静态方法。
+
+- 参数
+
+ - `model_buf`: 定义了读取模型文件的缓存区。
+
+ - `size`: 定义了模型缓存区的字节数。
+
+- 返回值
+
+ 指向MindSpore Lite的Model的指针。
+
+## CpuBindMode
+枚举类型,设置cpu绑定策略。
+
+**属性**
+
+```
+MID_CPU = -1
+```
+
+优先中等CPU绑定策略。
+
+```
+HIGHER_CPU = 1
+```
+
+优先高级CPU绑定策略。
+
+```
+NO_BIND = 0
+```
+
+不绑定。
+
+## DeviceType
+枚举类型,设置设备类型。
+
+**属性**
+
+```
+DT_CPU = -1
+```
+
+设备为CPU。
+
+```
+DT_GPU = 1
+```
+
+设备为GPU。
+
+```
+DT_NPU = 0
+```
+
+设备为NPU,暂不支持。
+
+## Version
+
+```
+std::string Version()
+```
+全局方法,用于获取版本的字符串。
+
+- 返回值
+
MindSpore Lite版本的字符串。
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/apicc/session.md b/docs/api_cpp/source_zh_cn/session.md
similarity index 83%
rename from lite/docs/source_zh_cn/apicc/session.md
rename to docs/api_cpp/source_zh_cn/session.md
index 86556e1351e97bf4ad435e09db907fdca4e5fefd..f83b7a467ac38e07cfc46e6e1f367d85a20c36cb 100644
--- a/lite/docs/source_zh_cn/apicc/session.md
+++ b/docs/api_cpp/source_zh_cn/session.md
@@ -1,177 +1,177 @@
-# mindspore::session
-
-#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)>
-
-
-## LiteSession
-
-LiteSession defines a session in MindSpore Lite, which is used to compile the Model and perform forward inference.
-
-**Constructors and Destructors**
-
-```
-LiteSession()
-```
-Constructor of MindSpore Lite LiteSession, using default parameters.
-```
-~LiteSession()
-```
-Destructor of MindSpore Lite LiteSession.
-
-**Public Member Functions**
-```
-virtual void BindThread(bool if_bind)
-```
-Attempts to bind the threads in the thread pool to the specified CPU cores, or to unbind them.
-
-- Parameters
-
-  - `if_bind`: whether to bind or unbind the threads.
-
-```
-virtual int CompileGraph(lite::Model *model)
-```
-Compiles the MindSpore Lite model.
-
-> Note: CompileGraph must be called before RunGraph.
-
-- Parameters
-
-  - `model`: the model to be compiled.
-
-- Returns
-
-  STATUS, the error code of compiling the graph. STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
-
-```
-virtual std::vector<tensor::MSTensor *> GetInputs() const
-```
-Gets the input MSTensors of the MindSpore Lite model.
-
-- Returns
-
-  A vector of MindSpore Lite MSTensor.
-
-```
-std::vector<tensor::MSTensor *> GetInputsByName(const std::string &node_name) const
-```
-Gets the input MSTensors of the MindSpore Lite model by node name.
-
-- Parameters
-
-  - `node_name`: the node name.
-
-- Returns
-
-  A vector of MindSpore Lite MSTensor.
-
-```
-virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBack &after = nullptr)
-```
-Runs the session with callbacks.
-> Note: RunGraph must be called after CompileGraph.
-
-- Parameters
-
-  - `before`: a [**KernelCallBack**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#kernelcallback) struct. The callback invoked before each node is run.
-
-  - `after`: a [**KernelCallBack**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#kernelcallback) struct. The callback invoked after each node is run.
-
-- Returns
-
-  STATUS, the error code of running the graph. STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
-
-```
-virtual std::vector<tensor::MSTensor *> GetOutputsByNodeName(const std::string &node_name) const
-```
-Gets the output MSTensors of the MindSpore Lite model by node name.
-
-- Parameters
-
-  - `node_name`: the node name.
-
-- Returns
-
-  A vector of MindSpore Lite MSTensor.
-
-```
-virtual std::unordered_map<std::string, mindspore::tensor::MSTensor *> GetOutputs() const
-```
-Gets the output MSTensors of the MindSpore Lite model, associated with the tensor names.
-
-- Returns
-
-  A container that maps output tensor names to MindSpore Lite MSTensor.
-
-```
-virtual std::vector<std::string> GetOutputTensorNames() const
-```
-Gets the names of the output tensors of the model compiled by this session.
-
-- Returns
-
-  A vector of strings containing the output tensor names in order.
-
-```
-virtual mindspore::tensor::MSTensor *GetOutputByTensorName(const std::string &tensor_name) const
-```
-Gets the output MSTensor of the MindSpore Lite model by tensor name.
-
-- Parameters
-
-  - `tensor_name`: the tensor name.
-
-- Returns
-
-  A pointer to the MindSpore Lite MSTensor.
-
-```
-virtual int Resize(const std::vector<tensor::MSTensor *> &inputs, const std::vector<std::vector<int>> &dims)
-```
-Resizes the input shapes.
-
-- Parameters
-
-  - `inputs`: all inputs of the model.
-  - `dims`: the new shapes of the inputs; the order must be consistent with `inputs`.
-
-- Returns
-
-  STATUS, the error code of resizing the inputs. STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
-
-**Static Public Member Functions**
-
-```
-static LiteSession *CreateSession(lite::Context *context)
-```
-Static method to create a LiteSession pointer.
-
-- Parameters
-
-  - `context`: the context of the session to be created.
-
-- Returns
-
-  A pointer to the MindSpore Lite LiteSession.
-## KernelCallBack
-
-```
-using KernelCallBack = std::function<bool(std::vector<tensor::MSTensor *> inputs, std::vector<tensor::MSTensor *> outputs, const CallBackParam &opInfo)>
-```
-
-A function wrapper. KernelCallBack defines the pointer to a callback function.
-
-## CallBackParam
-
-A struct. CallBackParam defines the input parameters of the callback function.
-**Attributes**
-
-```
-name_callback_param
-```
-A **string** variable. The node name parameter.
-
-```
-type_callback_param
-```
+# mindspore::session
+
+#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/lite_session.h)>
+
+
+## LiteSession
+
+LiteSession defines a session in MindSpore Lite, which is used to compile the Model and perform forward inference.
+
+**Constructors and Destructors**
+
+```
+LiteSession()
+```
+Constructor of MindSpore Lite LiteSession, using default parameters.
+```
+~LiteSession()
+```
+Destructor of MindSpore Lite LiteSession.
+
+**Public Member Functions**
+```
+virtual void BindThread(bool if_bind)
+```
+Attempts to bind the threads in the thread pool to the specified CPU cores, or to unbind them.
+
+- Parameters
+
+  - `if_bind`: whether to bind or unbind the threads.
+
+```
+virtual int CompileGraph(lite::Model *model)
+```
+Compiles the MindSpore Lite model.
+
+> Note: CompileGraph must be called before RunGraph.
+
+- Parameters
+
+  - `model`: the model to be compiled.
+
+- Returns
+
+  STATUS, the error code of compiling the graph. STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
+
+```
+virtual std::vector<tensor::MSTensor *> GetInputs() const
+```
+Gets the input MSTensors of the MindSpore Lite model.
+
+- Returns
+
+  A vector of MindSpore Lite MSTensor.
+
+```
+std::vector<tensor::MSTensor *> GetInputsByName(const std::string &node_name) const
+```
+Gets the input MSTensors of the MindSpore Lite model by node name.
+
+- Parameters
+
+  - `node_name`: the node name.
+
+- Returns
+
+  A vector of MindSpore Lite MSTensor.
+
+```
+virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBack &after = nullptr)
+```
+Runs the session with callbacks.
+> Note: RunGraph must be called after CompileGraph.
+
+- Parameters
+
+  - `before`: a [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/session.html#kernelcallback) struct. The callback invoked before each node is run.
+
+  - `after`: a [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/session.html#kernelcallback) struct. The callback invoked after each node is run.
+
+- Returns
+
+  STATUS, the error code of running the graph. STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
+
+```
+virtual std::vector<tensor::MSTensor *> GetOutputsByNodeName(const std::string &node_name) const
+```
+Gets the output MSTensors of the MindSpore Lite model by node name.
+
+- Parameters
+
+  - `node_name`: the node name.
+
+- Returns
+
+  A vector of MindSpore Lite MSTensor.
+
+```
+virtual std::unordered_map<std::string, mindspore::tensor::MSTensor *> GetOutputs() const
+```
+Gets the output MSTensors of the MindSpore Lite model, associated with the tensor names.
+
+- Returns
+
+  A container that maps output tensor names to MindSpore Lite MSTensor.
+
+```
+virtual std::vector<std::string> GetOutputTensorNames() const
+```
+Gets the names of the output tensors of the model compiled by this session.
+
+- Returns
+
+  A vector of strings containing the output tensor names in order.
+
+```
+virtual mindspore::tensor::MSTensor *GetOutputByTensorName(const std::string &tensor_name) const
+```
+Gets the output MSTensor of the MindSpore Lite model by tensor name.
+
+- Parameters
+
+  - `tensor_name`: the tensor name.
+
+- Returns
+
+  A pointer to the MindSpore Lite MSTensor.
+
+```
+virtual int Resize(const std::vector<tensor::MSTensor *> &inputs, const std::vector<std::vector<int>> &dims)
+```
+Resizes the input shapes.
+
+- Parameters
+
+  - `inputs`: all inputs of the model.
+  - `dims`: the new shapes of the inputs; the order must be consistent with `inputs`.
+
+- Returns
+
+  STATUS, the error code of resizing the inputs. STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
+
+**Static Public Member Functions**
+
+```
+static LiteSession *CreateSession(lite::Context *context)
+```
+Static method to create a LiteSession pointer.
+
+- Parameters
+
+  - `context`: the context of the session to be created.
+
+- Returns
+
+  A pointer to the MindSpore Lite LiteSession.
+## KernelCallBack
+
+```
+using KernelCallBack = std::function<bool(std::vector<tensor::MSTensor *> inputs, std::vector<tensor::MSTensor *> outputs, const CallBackParam &opInfo)>
+```
+
+A function wrapper. KernelCallBack defines the pointer to a callback function.
+
+## CallBackParam
+
+A struct. CallBackParam defines the input parameters of the callback function.
+**Attributes**
+
+```
+name_callback_param
+```
+A **string** variable. The node name parameter.
+
+```
+type_callback_param
+```
A **string** variable. The node type parameter.
\ No newline at end of file
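A minimal end-to-end sketch of the session workflow documented above (Import → CreateSession → CompileGraph → GetInputs → RunGraph → GetOutputs). The include paths, the `RunInference` helper, and treating 0 (`RET_OK` in errorcode.h) as success are assumptions; the method calls themselves follow the signatures listed in this page.

```cpp
#include <iostream>
#include "include/context.h"       // assumed include path for lite::Context
#include "include/lite_session.h"  // session::LiteSession
#include "include/model.h"         // lite::Model

// Sketch only: assumes model_buf/size hold a .ms model already read from disk.
int RunInference(const char *model_buf, size_t size) {
  auto *model = mindspore::lite::Model::Import(model_buf, size);
  if (model == nullptr) return -1;

  mindspore::lite::Context context;  // defaults: CPU device, MID_CPU binding
  auto *session = mindspore::session::LiteSession::CreateSession(&context);
  if (session == nullptr) return -1;

  // CompileGraph must be called before RunGraph.
  if (session->CompileGraph(model) != 0) return -1;

  // Fill the input tensors through MutableData() before running the graph.
  auto inputs = session->GetInputs();
  // ... memcpy(inputs[i]->MutableData(), ..., inputs[i]->Size()) ...

  if (session->RunGraph() != 0) return -1;

  auto outputs = session->GetOutputs();  // keyed by output tensor name
  std::cout << "output tensors: " << outputs.size() << std::endl;

  delete session;
  delete model;
  return 0;
}
```

The optional `before`/`after` KernelCallBack arguments of `RunGraph` can be used to time or inspect each node; omitting them, as here, runs the graph without callbacks.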
diff --git a/lite/docs/source_zh_cn/apicc/tensor.md b/docs/api_cpp/source_zh_cn/tensor.md
similarity index 46%
rename from lite/docs/source_zh_cn/apicc/tensor.md
rename to docs/api_cpp/source_zh_cn/tensor.md
index e9eae1f0fd9a62aa59e7b578b09a455bab843f1d..269d54d7428541c174a35e66d2af61e3bd91c74c 100644
--- a/lite/docs/source_zh_cn/apicc/tensor.md
+++ b/docs/api_cpp/source_zh_cn/tensor.md
@@ -1,6 +1,6 @@
# mindspore::tensor
-#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)>
+#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/ms_tensor.h)>
## MSTensor
@@ -15,7 +15,7 @@ MindSpore Lite MSTensor的构造函数。
- Returns
- An instance of MindSpore Lite MSTensor.
+ An instance of MindSpore Lite MSTensor.
```
virtual ~MSTensor()
@@ -29,25 +29,12 @@ virtual TypeId data_type() const
```
Gets the data type of the MindSpore Lite MSTensor.
-> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types in the TypeId enumeration are suitable for MSTensor.
+> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/core/ir/dtype/type_id.h). Only number types in the TypeId enumeration are suitable for MSTensor.
- Returns
 The MindSpore Lite TypeId of the MindSpore Lite MSTensor.
-```
-virtual TypeId set_data_type(TypeId data_type)
-```
-Sets the data type of the MindSpore Lite MSTensor.
-
-- Parameters
-
-  - `data_type`: the MindSpore Lite TypeId to be set for the MindSpore Lite MSTensor.
-
-- Returns
-
-  The MindSpore Lite TypeId of the MindSpore Lite MSTensor after setting.
-
```
virtual std::vector<int> shape() const
```
@@ -57,23 +44,10 @@ virtual std::vector shape() const
 A vector of integers containing the shape values of the MindSpore Lite MSTensor.
-```
-virtual size_t set_shape(const std::vector<int> &shape)
-```
-Sets the shape of the MindSpore Lite MSTensor.
-
-- Parameters
-
-  - `shape`: a vector of integers containing the shape values to be set for the MindSpore Lite MSTensor.
-
-- Returns
-
-  The size of the MindSpore Lite MSTensor after the shape is set.
-
```
virtual int DimensionSize(size_t index) const
```
-Get size of the dimension of the MindSpore Lite MSTensor index by the parameter index.
+Gets the size of the dimension of the MindSpore Lite MSTensor indexed by the parameter index.
- Parameters
@@ -92,15 +66,6 @@ virtual int ElementsNum() const
 The number of elements in the MSTensor.
-```
-virtual std::size_t hash() const
-```
-Gets the hash code of the MindSpore Lite MSTensor.
-
-- Returns
-
-  The hash code of the MindSpore Lite MSTensor.
-
```
virtual size_t Size() const
```
@@ -121,22 +86,3 @@ virtual void *MutableData() const
- Returns
 A pointer to the data in the MSTensor.
-
-**Static Public Member Functions**
-
-```
-static MSTensor *CreateTensor(TypeId data_type, const std::vector<int> &shape)
-```
-Static method to create an MSTensor pointer.
-
-> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types in the TypeId enumeration are suitable for MSTensor.
-
-- Parameters
-
-  - `data_type`: the data type of the tensor to be created.
-
-  - `shape`: the shape of the tensor to be created.
-
-- Returns
-
-  A pointer to the MSTensor.
\ No newline at end of file
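The MSTensor accessors kept in this page (`ElementsNum`, `Size`, `shape`, `DimensionSize`, `MutableData`) can be exercised as in the sketch below. The tensor is assumed to come from a LiteSession (for example via `GetInputs()`), and the float32 cast is an assumption about the tensor's `data_type()`; the include path and the helper name are illustrative.

```cpp
#include <iostream>
#include "include/ms_tensor.h"  // assumed include path for tensor::MSTensor

// Print the geometry of an MSTensor and write into its buffer.
void DescribeTensor(mindspore::tensor::MSTensor *t) {
  std::cout << "elements: " << t->ElementsNum()
            << ", bytes: " << t->Size() << std::endl;
  auto shape = t->shape();
  for (size_t i = 0; i < shape.size(); ++i) {
    // DimensionSize(i) returns the extent of dimension i.
    std::cout << "dim " << i << " = " << t->DimensionSize(i) << std::endl;
  }
  // MutableData() exposes a writable pointer to the underlying buffer;
  // the cast assumes the tensor holds 32-bit floats.
  auto *data = static_cast<float *>(t->MutableData());
  if (data != nullptr && t->ElementsNum() > 0) data[0] = 1.0f;
}
```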
diff --git a/docs/Makefile b/docs/api_java/Makefile
similarity index 100%
rename from docs/Makefile
rename to docs/api_java/Makefile
diff --git a/docs/requirements.txt b/docs/api_java/requirements.txt
similarity index 100%
rename from docs/requirements.txt
rename to docs/api_java/requirements.txt
diff --git a/docs/api_java/source_en/_static/logo_notebook.png b/docs/api_java/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_java/source_en/_static/logo_notebook.png differ
diff --git a/docs/source_en/_static/logo_source.png b/docs/api_java/source_en/_static/logo_source.png
similarity index 100%
rename from docs/source_en/_static/logo_source.png
rename to docs/api_java/source_en/_static/logo_source.png
diff --git a/docs/api_java/source_en/conf.py b/docs/api_java/source_en/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..4020d50f7b5f7a90b26785749cb1d41046b4723c
--- /dev/null
+++ b/docs/api_java/source_en/conf.py
@@ -0,0 +1,61 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+# import sys
+# sys.path.append('..')
+# sys.path.insert(0, os.path.abspath('.'))
+
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/docs/api_java/source_zh_cn/_static/logo_notebook.png b/docs/api_java/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_java/source_zh_cn/_static/logo_notebook.png differ
diff --git a/docs/source_zh_cn/_static/logo_source.png b/docs/api_java/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from docs/source_zh_cn/_static/logo_source.png
rename to docs/api_java/source_zh_cn/_static/logo_source.png
diff --git a/docs/api_java/source_zh_cn/conf.py b/docs/api_java/source_zh_cn/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..e3dfb2a0a9fc6653113e7b2bb878a5497ceb4a2b
--- /dev/null
+++ b/docs/api_java/source_zh_cn/conf.py
@@ -0,0 +1,65 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+# import sys
+# sys.path.append('..')
+# sys.path.insert(0, os.path.abspath('.'))
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_search_language = 'zh'
+
+html_search_options = {'dict': '../../resource/jieba.txt'}
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/lite/docs/Makefile b/docs/api_python/Makefile
similarity index 100%
rename from lite/docs/Makefile
rename to docs/api_python/Makefile
diff --git a/api/numpy_objects.inv b/docs/api_python/numpy_objects.inv
similarity index 100%
rename from api/numpy_objects.inv
rename to docs/api_python/numpy_objects.inv
diff --git a/api/python_objects.inv b/docs/api_python/python_objects.inv
similarity index 100%
rename from api/python_objects.inv
rename to docs/api_python/python_objects.inv
diff --git a/docs/api_python/requirements.txt b/docs/api_python/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..162b50040286bb9a0177801c580a31013082a360
--- /dev/null
+++ b/docs/api_python/requirements.txt
@@ -0,0 +1,6 @@
+sphinx >= 2.2.1, <= 2.4.4
+recommonmark
+sphinx-markdown-tables
+sphinx_rtd_theme
+numpy
+jieba
diff --git a/api/run.sh b/docs/api_python/run.sh
similarity index 100%
rename from api/run.sh
rename to docs/api_python/run.sh
diff --git a/docs/api_python/source_en/_static/logo_notebook.png b/docs/api_python/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_python/source_en/_static/logo_notebook.png differ
diff --git a/lite/docs/source_en/_static/logo_source.png b/docs/api_python/source_en/_static/logo_source.png
similarity index 100%
rename from lite/docs/source_en/_static/logo_source.png
rename to docs/api_python/source_en/_static/logo_source.png
diff --git a/api/source_en/conf.py b/docs/api_python/source_en/conf.py
similarity index 100%
rename from api/source_en/conf.py
rename to docs/api_python/source_en/conf.py
diff --git a/docs/api_python/source_en/index.rst b/docs/api_python/source_en/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..c7cf2eb81d7a59a5fe85264ef66ac2b6f4bcdfad
--- /dev/null
+++ b/docs/api_python/source_en/index.rst
@@ -0,0 +1,48 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 11:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore API
+=============
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindSpore Python API
+
+ mindspore/mindspore
+ mindspore/mindspore.common.initializer
+ mindspore/mindspore.communication
+ mindspore/mindspore.context
+ mindspore/mindspore.dataset
+ mindspore/mindspore.dataset.config
+ mindspore/mindspore.dataset.text
+ mindspore/mindspore.dataset.transforms
+ mindspore/mindspore.dataset.vision
+ mindspore/mindspore.mindrecord
+ mindspore/mindspore.nn
+ mindspore/mindspore.nn.dynamic_lr
+ mindspore/mindspore.nn.probability
+ mindspore/mindspore.ops
+ mindspore/mindspore.profiler
+ mindspore/mindspore.train
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindArmour Python API
+
+ mindarmour/mindarmour
+ mindarmour/mindarmour.adv_robustness.attacks
+ mindarmour/mindarmour.adv_robustness.defenses
+ mindarmour/mindarmour.adv_robustness.detectors
+ mindarmour/mindarmour.adv_robustness.evaluations
+ mindarmour/mindarmour.fuzz_testing
+ mindarmour/mindarmour.privacy.diff_privacy
+ mindarmour/mindarmour.privacy.evaluation
+ mindarmour/mindarmour.utils
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindSpore Hub Python API
+
+ mindspore_hub/mindspore_hub
diff --git a/api/source_en/api/python/mindarmour/mindarmour.adv_robustness.attacks.rst b/docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.attacks.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.adv_robustness.attacks.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.attacks.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.adv_robustness.defenses.rst b/docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.defenses.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.adv_robustness.defenses.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.defenses.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.adv_robustness.detectors.rst b/docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.detectors.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.adv_robustness.detectors.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.detectors.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.adv_robustness.evaluations.rst b/docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.evaluations.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.adv_robustness.evaluations.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.evaluations.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.fuzz_testing.rst b/docs/api_python/source_en/mindarmour/mindarmour.fuzz_testing.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.fuzz_testing.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.fuzz_testing.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.privacy.diff_privacy.rst b/docs/api_python/source_en/mindarmour/mindarmour.privacy.diff_privacy.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.privacy.diff_privacy.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.privacy.diff_privacy.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.privacy.evaluation.rst b/docs/api_python/source_en/mindarmour/mindarmour.privacy.evaluation.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.privacy.evaluation.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.privacy.evaluation.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.rst b/docs/api_python/source_en/mindarmour/mindarmour.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.utils.rst b/docs/api_python/source_en/mindarmour/mindarmour.utils.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.utils.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.utils.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.common.initializer.rst b/docs/api_python/source_en/mindspore/mindspore.common.initializer.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.common.initializer.rst
rename to docs/api_python/source_en/mindspore/mindspore.common.initializer.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.communication.rst b/docs/api_python/source_en/mindspore/mindspore.communication.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.communication.rst
rename to docs/api_python/source_en/mindspore/mindspore.communication.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.context.rst b/docs/api_python/source_en/mindspore/mindspore.context.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.context.rst
rename to docs/api_python/source_en/mindspore/mindspore.context.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.config.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.config.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.config.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.config.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.text.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.text.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.text.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.text.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.transforms.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.transforms.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.transforms.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.transforms.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.vision.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.vision.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.vision.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.vision.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.mindrecord.rst b/docs/api_python/source_en/mindspore/mindspore.mindrecord.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.mindrecord.rst
rename to docs/api_python/source_en/mindspore/mindspore.mindrecord.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.nn.dynamic_lr.rst b/docs/api_python/source_en/mindspore/mindspore.nn.dynamic_lr.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.nn.dynamic_lr.rst
rename to docs/api_python/source_en/mindspore/mindspore.nn.dynamic_lr.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.nn.learning_rate_schedule.rst b/docs/api_python/source_en/mindspore/mindspore.nn.learning_rate_schedule.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.nn.learning_rate_schedule.rst
rename to docs/api_python/source_en/mindspore/mindspore.nn.learning_rate_schedule.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.nn.probability.rst b/docs/api_python/source_en/mindspore/mindspore.nn.probability.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.nn.probability.rst
rename to docs/api_python/source_en/mindspore/mindspore.nn.probability.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.nn.rst b/docs/api_python/source_en/mindspore/mindspore.nn.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.nn.rst
rename to docs/api_python/source_en/mindspore/mindspore.nn.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.ops.rst b/docs/api_python/source_en/mindspore/mindspore.ops.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.ops.rst
rename to docs/api_python/source_en/mindspore/mindspore.ops.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.profiler.rst b/docs/api_python/source_en/mindspore/mindspore.profiler.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.profiler.rst
rename to docs/api_python/source_en/mindspore/mindspore.profiler.rst
diff --git a/docs/api_python/source_en/mindspore/mindspore.rst b/docs/api_python/source_en/mindspore/mindspore.rst
new file mode 100644
index 0000000000000000000000000000000000000000..f510c8b5fcf74c317579ce0b95f25d24a324fe79
--- /dev/null
+++ b/docs/api_python/source_en/mindspore/mindspore.rst
@@ -0,0 +1,109 @@
+mindspore
+=========
+
+.. class:: mindspore.dtype
+
+ Create a data type object of MindSpore.
+
+ The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
+ Run the following command to import the package:
+
+ .. code-block::
+
+ import mindspore.common.dtype as mstype
+
+ or
+
+ .. code-block::
+
+ from mindspore import dtype as mstype
+
+ * **Numeric Type**
+
+ Currently, MindSpore supports ``Int`` type, ``Uint`` type and ``Float`` type.
+ The following table lists the details.
+
+ ============================================== =============================
+ Definition Description
+ ============================================== =============================
+ ``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
+ ``mindspore.int16`` , ``mindspore.short`` 16-bit integer
+ ``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
+ ``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
+ ``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
+ ``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
+ ``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
+ ``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
+ ``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
+ ``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
+ ``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
+ ============================================== =============================
+
+ * **Other Type**
+
+ For other defined types, see the following table.
+
+ ============================ =================
+ Type Description
+ ============================ =================
+    ``tensor``                   MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor <https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/tensor.py>`_.
+    ``MetaTensor``               A tensor that has only a data type and a shape. For details, see `MetaTensor <https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/parameter.py>`_.
+ ``bool_`` Boolean ``True`` or ``False``.
+ ``int_`` Integer scalar.
+ ``uint`` Unsigned integer scalar.
+ ``float_`` Floating-point scalar.
+ ``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
+ ``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
+ ``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
+    ``function``                 Function. Returned in two ways: when the function is not None, ``Func`` is returned directly; when the function is None, ``Func(args: List[T0,T1,...,Tn], retval: T)`` is returned.
+ ``type_type`` Type definition of type.
+ ``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
+ ``symbolic_key`` The value of a variable is used as a key of the variable in ``env_type`` .
+ ``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
+ ============================ =================
+
+ * **Tree Topology**
+
+ The relationships of the above types are as follows:
+
+ .. code-block::
+
+
+ └─────── number
+ │ ├─── bool_
+ │ ├─── int_
+ │ │ ├─── int8, byte
+ │ │ ├─── int16, short
+ │ │ ├─── int32, intc
+ │ │ └─── int64, intp
+ │ ├─── uint
+ │ │ ├─── uint8, ubyte
+ │ │ ├─── uint16, ushort
+ │ │ ├─── uint32, uintc
+ │ │ └─── uint64, uintp
+ │ └─── float_
+ │ ├─── float16
+ │ ├─── float32
+ │ └─── float64
+ ├─── tensor
+ │ ├─── Array[Float32]
+ │ └─── ...
+ ├─── list_
+ │ ├─── List[Int32,Float32]
+ │ └─── ...
+ ├─── tuple_
+ │ ├─── Tuple[Int32,Float32]
+ │ └─── ...
+ ├─── function
+ │ ├─── Func
+ │ ├─── Func[(Int32, Float32), Int32]
+ │ └─── ...
+ ├─── MetaTensor
+ ├─── type_type
+ ├─── type_none
+ ├─── symbolic_key
+ └─── env_type
+
+.. automodule:: mindspore
+ :members:
+ :exclude-members: Model, dataset_helper,
\ No newline at end of file
diff --git a/api/source_en/api/python/mindspore/mindspore.train.rst b/docs/api_python/source_en/mindspore/mindspore.train.rst
similarity index 75%
rename from api/source_en/api/python/mindspore/mindspore.train.rst
rename to docs/api_python/source_en/mindspore/mindspore.train.rst
index eb6753e672430d68f149e034791d0d7443125a78..3d24633055440776b8db533368c656d7a7a18fce 100644
--- a/api/source_en/api/python/mindspore/mindspore.train.rst
+++ b/docs/api_python/source_en/mindspore/mindspore.train.rst
@@ -1,6 +1,18 @@
mindspore.train
===============
+mindspore.train.model
+---------------------
+
+.. automodule:: mindspore.train.model
+ :members:
+
+mindspore.train.dataset_helper
+------------------------------
+
+.. automodule:: mindspore.train.dataset_helper
+ :members:
+
mindspore.train.summary
-----------------------
diff --git a/api/source_en/api/python/mindspore_hub/mindspore_hub.rst b/docs/api_python/source_en/mindspore_hub/mindspore_hub.rst
similarity index 100%
rename from api/source_en/api/python/mindspore_hub/mindspore_hub.rst
rename to docs/api_python/source_en/mindspore_hub/mindspore_hub.rst
diff --git a/docs/api_python/source_zh_cn/_static/logo_notebook.png b/docs/api_python/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_python/source_zh_cn/_static/logo_notebook.png differ
diff --git a/lite/docs/source_zh_cn/_static/logo_source.png b/docs/api_python/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from lite/docs/source_zh_cn/_static/logo_source.png
rename to docs/api_python/source_zh_cn/_static/logo_source.png
diff --git a/api/source_zh_cn/conf.py b/docs/api_python/source_zh_cn/conf.py
similarity index 97%
rename from api/source_zh_cn/conf.py
rename to docs/api_python/source_zh_cn/conf.py
index 2e2c89c42b33b7410e1fffcf68104ebfdd93c068..e10907fd0168de0180f3b0e093d9ba0a253ff05a 100644
--- a/api/source_zh_cn/conf.py
+++ b/docs/api_python/source_zh_cn/conf.py
@@ -76,7 +76,7 @@ html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
-html_search_options = {'dict': '../resource/jieba.txt'}
+html_search_options = {'dict': '../../resource/jieba.txt'}
html_static_path = ['_static']
diff --git a/docs/api_python/source_zh_cn/index.rst b/docs/api_python/source_zh_cn/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..7131254879048e85e77240cfd80be9508cc06b59
--- /dev/null
+++ b/docs/api_python/source_zh_cn/index.rst
@@ -0,0 +1,54 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 11:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore API
+=============
+
+.. toctree::
+ :maxdepth: 1
+   :caption: Programming Guide
+
+ programming_guide/api_structure
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindSpore Python API
+
+ mindspore/mindspore
+ mindspore/mindspore.common.initializer
+ mindspore/mindspore.communication
+ mindspore/mindspore.context
+ mindspore/mindspore.dataset
+ mindspore/mindspore.dataset.config
+ mindspore/mindspore.dataset.text
+ mindspore/mindspore.dataset.transforms
+ mindspore/mindspore.dataset.vision
+ mindspore/mindspore.mindrecord
+ mindspore/mindspore.nn
+ mindspore/mindspore.nn.dynamic_lr
+ mindspore/mindspore.nn.probability
+ mindspore/mindspore.ops
+ mindspore/mindspore.profiler
+ mindspore/mindspore.train
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindArmour Python API
+
+ mindarmour/mindarmour
+ mindarmour/mindarmour.adv_robustness.attacks
+ mindarmour/mindarmour.adv_robustness.defenses
+ mindarmour/mindarmour.adv_robustness.detectors
+ mindarmour/mindarmour.adv_robustness.evaluations
+ mindarmour/mindarmour.fuzz_testing
+ mindarmour/mindarmour.privacy.diff_privacy
+ mindarmour/mindarmour.privacy.evaluation
+ mindarmour/mindarmour.utils
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindSpore Hub Python API
+
+ mindspore_hub/mindspore_hub
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.attacks.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.attacks.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.attacks.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.attacks.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.defenses.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.defenses.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.defenses.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.defenses.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.detectors.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.detectors.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.detectors.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.detectors.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.evaluations.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.evaluations.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.evaluations.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.evaluations.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.fuzz_testing.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.fuzz_testing.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.fuzz_testing.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.fuzz_testing.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.privacy.diff_privacy.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.privacy.diff_privacy.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.privacy.diff_privacy.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.privacy.diff_privacy.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.privacy.evaluation.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.privacy.evaluation.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.privacy.evaluation.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.privacy.evaluation.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.utils.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.utils.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.utils.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.utils.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.common.initializer.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.common.initializer.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.common.initializer.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.common.initializer.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.communication.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.communication.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.communication.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.communication.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.context.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.context.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.context.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.context.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.config.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.config.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.config.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.config.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.text.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.text.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.text.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.text.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.transforms.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.transforms.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.transforms.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.transforms.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.vision.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.vision.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.vision.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.vision.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.mindrecord.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.mindrecord.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.mindrecord.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.mindrecord.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.dynamic_lr.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.nn.dynamic_lr.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.nn.dynamic_lr.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.nn.dynamic_lr.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.learning_rate_schedule.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.nn.learning_rate_schedule.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.nn.learning_rate_schedule.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.nn.learning_rate_schedule.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.probability.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.nn.probability.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.nn.probability.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.nn.probability.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.nn.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.nn.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.nn.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.ops.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.ops.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.profiler.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.profiler.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.profiler.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.profiler.rst
diff --git a/docs/api_python/source_zh_cn/mindspore/mindspore.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.rst
new file mode 100644
index 0000000000000000000000000000000000000000..f510c8b5fcf74c317579ce0b95f25d24a324fe79
--- /dev/null
+++ b/docs/api_python/source_zh_cn/mindspore/mindspore.rst
@@ -0,0 +1,109 @@
+mindspore
+=========
+
+.. class:: mindspore.dtype
+
+ Create a data type object of MindSpore.
+
+ The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
+ Run the following command to import the package:
+
+ .. code-block::
+
+ import mindspore.common.dtype as mstype
+
+ or
+
+ .. code-block::
+
+ from mindspore import dtype as mstype
+
+ * **Numeric Type**
+
+ Currently, MindSpore supports ``Int`` type, ``Uint`` type and ``Float`` type.
+ The following table lists the details.
+
+ ============================================== =============================
+ Definition Description
+ ============================================== =============================
+ ``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
+ ``mindspore.int16`` , ``mindspore.short`` 16-bit integer
+ ``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
+ ``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
+ ``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
+ ``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
+ ``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
+ ``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
+ ``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
+ ``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
+ ``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
+ ============================================== =============================
+
+ * **Other Type**
+
+ For other defined types, see the following table.
+
+ ============================ =================
+ Type Description
+ ============================ =================
+     ``tensor``                 MindSpore's ``tensor`` type. The data format uses NCHW. For details, see `tensor <https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/tensor.py>`_.
+     ``MetaTensor``             A tensor that holds only a data type and a shape. For details, see `MetaTensor <https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/parameter.py>`_.
+ ``bool_`` Boolean ``True`` or ``False``.
+ ``int_`` Integer scalar.
+ ``uint`` Unsigned integer scalar.
+ ``float_`` Floating-point scalar.
+ ``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
+ ``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
+ ``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
+     ``function``               Function. Two return forms: when the function is not None, ``Func`` is returned directly; when it is None, ``Func(args: List[T0,T1,...,Tn], retval: T)`` is returned.
+ ``type_type`` Type definition of type.
+ ``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
+ ``symbolic_key`` The value of a variable is used as a key of the variable in ``env_type`` .
+ ``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
+ ============================ =================
+
+ * **Tree Topology**
+
+ The relationships of the above types are as follows:
+
+ .. code-block::
+
+
+        ├─── number
+        │    ├─── bool_
+        │    ├─── int_
+        │    │    ├─── int8, byte
+        │    │    ├─── int16, short
+        │    │    ├─── int32, intc
+        │    │    └─── int64, intp
+        │    ├─── uint
+        │    │    ├─── uint8, ubyte
+        │    │    ├─── uint16, ushort
+        │    │    ├─── uint32, uintc
+        │    │    └─── uint64, uintp
+        │    └─── float_
+        │         ├─── float16
+        │         ├─── float32
+        │         └─── float64
+        ├─── tensor
+        │    ├─── Array[Float32]
+        │    └─── ...
+        ├─── list_
+        │    ├─── List[Int32,Float32]
+        │    └─── ...
+        ├─── tuple_
+        │    ├─── Tuple[Int32,Float32]
+        │    └─── ...
+        ├─── function
+        │    ├─── Func
+        │    ├─── Func[(Int32, Float32), Int32]
+        │    └─── ...
+        ├─── MetaTensor
+        ├─── type_type
+        ├─── type_none
+        ├─── symbolic_key
+        └─── env_type
+
+.. automodule:: mindspore
+ :members:
+ :exclude-members: Model, dataset_helper,
\ No newline at end of file
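The numeric-type table added in `mindspore.rst` above pairs each canonical MindSpore dtype with an alias (`mindspore.float16` / `mindspore.half`, and so on). As a quick illustration of that mapping, here is a standalone sketch that does not require MindSpore at all: the dictionary and the `describe` helper below simply mirror the table and are not MindSpore APIs.

```python
# Mirror of the numeric-type table: canonical name -> (alias, description).
# Purely illustrative; MindSpore exposes these as module attributes, not a dict.
NUMERIC_TYPES = {
    "int8": ("byte", "8-bit integer"),
    "int16": ("short", "16-bit integer"),
    "int32": ("intc", "32-bit integer"),
    "int64": ("intp", "64-bit integer"),
    "uint8": ("ubyte", "unsigned 8-bit integer"),
    "uint16": ("ushort", "unsigned 16-bit integer"),
    "uint32": ("uintc", "unsigned 32-bit integer"),
    "uint64": ("uintp", "unsigned 64-bit integer"),
    "float16": ("half", "16-bit floating-point number"),
    "float32": ("single", "32-bit floating-point number"),
    "float64": ("double", "64-bit floating-point number"),
}

def describe(name: str) -> str:
    """Resolve a canonical dtype name or its alias to the table's description."""
    if name in NUMERIC_TYPES:
        return NUMERIC_TYPES[name][1]
    for canonical, (alias, desc) in NUMERIC_TYPES.items():
        if alias == name:
            return desc
    raise KeyError(name)

print(describe("half"))   # prints: 16-bit floating-point number
print(describe("int32"))  # prints: 32-bit integer
```

In real code you would use the aliases directly, e.g. `from mindspore import dtype as mstype` followed by `mstype.half`, as shown in the import snippet in the documentation above.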
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.train.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.train.rst
similarity index 75%
rename from api/source_zh_cn/api/python/mindspore/mindspore.train.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.train.rst
index eb6753e672430d68f149e034791d0d7443125a78..3d24633055440776b8db533368c656d7a7a18fce 100644
--- a/api/source_zh_cn/api/python/mindspore/mindspore.train.rst
+++ b/docs/api_python/source_zh_cn/mindspore/mindspore.train.rst
@@ -1,6 +1,18 @@
mindspore.train
===============
+mindspore.train.model
+---------------------
+
+.. automodule:: mindspore.train.model
+ :members:
+
+mindspore.train.dataset_helper
+------------------------------
+
+.. automodule:: mindspore.train.dataset_helper
+ :members:
+
mindspore.train.summary
-----------------------
diff --git a/api/source_zh_cn/api/python/mindspore_hub/mindspore_hub.rst b/docs/api_python/source_zh_cn/mindspore_hub/mindspore_hub.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore_hub/mindspore_hub.rst
rename to docs/api_python/source_zh_cn/mindspore_hub/mindspore_hub.rst
diff --git a/lite/tutorials/Makefile b/docs/faq/Makefile
similarity index 100%
rename from lite/tutorials/Makefile
rename to docs/faq/Makefile
diff --git a/docs/faq/requirements.txt b/docs/faq/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..162b50040286bb9a0177801c580a31013082a360
--- /dev/null
+++ b/docs/faq/requirements.txt
@@ -0,0 +1,6 @@
+sphinx >= 2.2.1, <= 2.4.4
+recommonmark
+sphinx-markdown-tables
+sphinx_rtd_theme
+numpy
+jieba
diff --git a/docs/source_en/FAQ.md b/docs/faq/source_en/FAQ.md
similarity index 92%
rename from docs/source_en/FAQ.md
rename to docs/faq/source_en/FAQ.md
index 153aaf1e004ca93f5752dd32ea114f0895cbf2d2..69d87f3ac6bd824fb90c94d3c8f7526da0f80fae 100644
--- a/docs/source_en/FAQ.md
+++ b/docs/faq/source_en/FAQ.md
@@ -1,4 +1,4 @@
-# FAQ
+# FAQ
`Linux` `Windows` `Ascend` `GPU` `CPU` `Environmental Setup` `Model Export` `Model Training` `Beginner` `Intermediate` `Expert`
@@ -18,7 +18,7 @@
- [Supported Features](#supported-features)
-
+
## Installation
@@ -116,7 +116,7 @@ A: After MindSpore is installed on a CPU hardware platform, run the `python -c'i
Q: What can I do if the LSTM example on the official website cannot run on Ascend?
-A: Currently, the LSTM runs only on a GPU or CPU and does not support the hardware environment. You can click [here](https://www.mindspore.cn/docs/en/master/operator_list.html) to view the supported operators.
+A: Currently, the LSTM runs only on a GPU or CPU and is not yet supported on Ascend hardware. You can click [here](https://www.mindspore.cn/doc/note/en/r1.0/operator_list_ms.html) to view the supported operators.
@@ -134,7 +134,7 @@ A: MindSpore uses protocol buffers (protobuf) to store training parameters and c
Q: How do I use models trained by MindSpore on Ascend 310? Can they be converted to models used by HiLens Kit?
-A: Yes. HiLens Kit uses Ascend 310 as the inference core. Therefore, the two questions are essentially the same. Ascend 310 requires a dedicated OM model. Use MindSpore to export the ONNX or AIR model and convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html).
+A: Yes. HiLens Kit uses Ascend 310 as the inference core. Therefore, the two questions are essentially the same. Ascend 310 requires a dedicated OM model. Use MindSpore to export the ONNX or AIR model and convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/inference/en/r1.0/multi_platform_inference.html).
@@ -146,19 +146,19 @@ A: When building a network, use `if self.training: x = dropput(x)`. During verif
Q: Where can I view the sample code or tutorial of MindSpore training and inference?
-A: Please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/en/master/index.html).
+A: Please visit the MindSpore official website for [training tutorials](https://www.mindspore.cn/tutorial/training/en/r1.0/index.html) and [inference tutorials](https://www.mindspore.cn/tutorial/inference/en/r1.0/index.html).
 Q: What types of models are currently supported by MindSpore for training?
-A: MindSpore has basic support for common training scenarios, please refer to [Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md) for detailed information.
+A: MindSpore has basic support for common training scenarios; please refer to the [Release note](https://gitee.com/mindspore/mindspore/blob/r1.0/RELEASE.md) for detailed information.
Q: What are the available recommendation or text generation networks or models provided by MindSpore?
-A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo).
@@ -176,7 +176,7 @@ A: Ascend 310 can only be used for inference. MindSpore supports training on Asc
Q: Does MindSpore require computing units such as GPUs and NPUs? What hardware support is required?
-A: MindSpore currently supports CPU, GPU, Ascend, and NPU. Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs. Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [Road Map](https://www.mindspore.cn/docs/en/master/roadmap.html) and [project release notes](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md).
+A: MindSpore currently supports CPU, GPU, Ascend, and NPU. Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs. Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [Road Map](https://www.mindspore.cn/doc/note/en/r1.0/roadmap.html) and [project release notes](https://gitee.com/mindspore/mindspore/blob/r1.0/RELEASE.md).
@@ -186,12 +186,6 @@ A: MindSpore provides pluggable device management interface so that developer co
-Q: What is the relationship between MindSpore and ModelArts? Can MindSpore be used on ModelArts?
-
-A: ModelArts is an online training and inference platform on HUAWEI CLOUD. MindSpore is a Huawei deep learning framework. You can view the tutorials on the [MindSpore official website](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/use_on_the_cloud.html) to learn how to train MindSpore models on ModelArts.
-
-
-
Q: Does MindSpore support Windows 10?
A: The MindSpore CPU version can be installed on Windows 10. For details about the installation procedure, please refer to the [MindSpore official website tutorial](https://www.mindspore.cn/install/en)
@@ -232,7 +226,7 @@ A: The problem is that the Graph mode is selected but the PyNative mode is used.
- PyNative mode: dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model.
- Graph mode: static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution. This mode uses technologies such as graph optimization to improve the running performance and facilitates large-scale deployment and cross-platform running.
-You can select a proper mode and writing method to complete the training by referring to the official website [tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/debugging_in_pynative_mode.html).
+You can select a proper mode and writing method to complete the training by referring to the official website [tutorial](https://www.mindspore.cn/tutorial/training/en/r1.0/advanced_use/debug_in_pynative_mode.html).
## Programming Language Extensions
@@ -255,7 +249,7 @@ A: In addition to data parallelism, MindSpore distributed training also supports
Q: Has MindSpore implemented the anti-pooling operation similar to `nn.MaxUnpool2d`?
-A: Currently, MindSpore does not provide anti-pooling APIs but you can customize the operator to implement the operation. For details, click [here](https://www.mindspore.cn/tutorial/en/master/use/custom_operator.html).
+A: Currently, MindSpore does not provide anti-pooling APIs but you can customize the operator to implement the operation. For details, click [here](https://www.mindspore.cn/tutorial/training/en/r1.0/advanced_use/custom_operator_ascend.html).
@@ -291,7 +285,7 @@ A: The TensorFlow's object detection pipeline API belongs to the TensorFlow's Mo
Q: How do I migrate scripts or models of other frameworks to MindSpore?
-A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/en/master/advanced_use/network_migration.html).
+A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/training/en/r1.0/advanced_use/migrate_3rd_scripts.html).
diff --git a/docs/faq/source_en/_static/logo_notebook.png b/docs/faq/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/faq/source_en/_static/logo_notebook.png differ
diff --git a/lite/tutorials/source_en/_static/logo_source.png b/docs/faq/source_en/_static/logo_source.png
similarity index 100%
rename from lite/tutorials/source_en/_static/logo_source.png
rename to docs/faq/source_en/_static/logo_source.png
diff --git a/docs/source_en/conf.py b/docs/faq/source_en/conf.py
similarity index 97%
rename from docs/source_en/conf.py
rename to docs/faq/source_en/conf.py
index 6db42071de7116aaf62bbb7f0b2c09825b13c69b..a1fd767271ac159540440ed65bd0d676163366a9 100644
--- a/docs/source_en/conf.py
+++ b/docs/faq/source_en/conf.py
@@ -48,8 +48,6 @@ exclude_patterns = []
pygments_style = 'sphinx'
-autodoc_inherit_docstrings = False
-
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
diff --git a/docs/faq/source_en/index.rst b/docs/faq/source_en/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..a8969301c9ae45646390ccc0e329f56e648bdef5
--- /dev/null
+++ b/docs/faq/source_en/index.rst
@@ -0,0 +1,13 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 10:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore FAQ
+=================
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ faq
\ No newline at end of file
diff --git a/docs/source_zh_cn/FAQ.md b/docs/faq/source_zh_cn/FAQ.md
similarity index 90%
rename from docs/source_zh_cn/FAQ.md
rename to docs/faq/source_zh_cn/FAQ.md
index f768d6bcc284427c62a83a3b7daddafb45dc1b88..ab91a5b1d58c9e3181ca25e677f79d6a79629949 100644
--- a/docs/source_zh_cn/FAQ.md
+++ b/docs/faq/source_zh_cn/FAQ.md
@@ -18,7 +18,7 @@
 - [Supported Features](#supported-features)
-
+
 ## Installation
@@ -123,7 +123,7 @@ A:CPU硬件平台安装MindSpore后测试是否安装成功,只需要执行命
 Q: The LSTM example on the official website cannot run on Ascend.
-A: Currently, the LSTM runs only on a GPU or CPU and is not yet supported on Ascend. You can [click here](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html) to view the supported operators.
+A: Currently, the LSTM runs only on a GPU or CPU and is not yet supported on Ascend. You can [click here](https://www.mindspore.cn/doc/note/zh-CN/r1.0/operator_list_ms.html) to view the supported operators.
@@ -143,7 +143,7 @@ A: MindSpore采用protbuf存储训练参数,无法直接读取其他框架
 Q: How do I use models trained by MindSpore on Ascend 310? Can they be converted into models for HiLens Kit?
-A: Ascend 310 requires a dedicated OM model: first export an ONNX or AIR model with MindSpore, then convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html). And yes: HiLens Kit uses Ascend 310 as its inference core, so the two questions are essentially the same and both require conversion to an OM model.
+A: Ascend 310 requires a dedicated OM model: first export an ONNX or AIR model with MindSpore, then convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.0/multi_platform_inference.html). And yes: HiLens Kit uses Ascend 310 as its inference core, so the two questions are essentially the same and both require conversion to an OM model.
@@ -155,19 +155,19 @@ A:在构造网络的时候可以通过 `if self.training: x = dropput(x)`,
 Q: Where can I find sample code or tutorials for MindSpore training and inference?
-A: Please visit the [MindSpore official tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html).
+A: Please visit the MindSpore official website for [training tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/index.html) and [inference tutorials](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.0/index.html).
 Q: What types of models does MindSpore support for training?
-A: MindSpore has model training support for typical scenarios; see the [Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md) for details.
+A: MindSpore has model training support for typical scenarios; see the [Release note](https://gitee.com/mindspore/mindspore/blob/r1.0/RELEASE.md) for details.
 Q: What ready-made recommendation or generative networks or models does MindSpore provide?
-A: Recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the NLP field, Bert_NEZHA is already supported and models such as MASS are under development. You can adapt these into generative networks as your scenario requires; please follow the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+A: Recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the NLP field, Bert_NEZHA is already supported and models such as MASS are under development. You can adapt these into generative networks as your scenario requires; please follow the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo).
@@ -187,7 +187,7 @@ A:Ascend 310只能用作推理,MindSpore支持在Ascend 910训练,训练
 Q: Does installing and running MindSpore require computing units such as GPUs or NPUs? What hardware support is required?
-A: MindSpore currently supports CPU, GPU, and Ascend/NPU. You can currently try it out through Docker images on a laptop or in an environment with GPUs. Some models in the MindSpore Model Zoo already support GPU training and inference, and other models are being improved. For distributed parallel training, MindSpore currently supports multi-GPU training. You can get the latest information from the [RoadMap](https://www.mindspore.cn/docs/zh-CN/master/roadmap.html) and the project [Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md).
+A: MindSpore currently supports CPU, GPU, and Ascend/NPU. You can currently try it out through Docker images on a laptop or in an environment with GPUs. Some models in the MindSpore Model Zoo already support GPU training and inference, and other models are being improved. For distributed parallel training, MindSpore currently supports multi-GPU training. You can get the latest information from the [RoadMap](https://www.mindspore.cn/doc/note/zh-CN/r1.0/roadmap.html) and the project [Release note](https://gitee.com/mindspore/mindspore/blob/r1.0/RELEASE.md).
@@ -199,7 +199,7 @@ A:MindSpore提供了可插拔式的设备管理接口,其他计算单元(
 Q: What is the relationship between MindSpore and ModelArts? Can MindSpore be used on ModelArts?
-A: ModelArts is HUAWEI CLOUD's online training and inference platform, and MindSpore is a Huawei deep learning framework. See the [MindSpore official tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/use_on_the_cloud.html), which shows in detail how to train MindSpore models on ModelArts.
+A: ModelArts is HUAWEI CLOUD's online training and inference platform, and MindSpore is a Huawei deep learning framework. See the [MindSpore official tutorial](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/use_on_the_cloud.html), which shows in detail how to train MindSpore models on ModelArts.
@@ -245,7 +245,7 @@ A:这边的问题是选择了Graph模式却使用了PyNative的写法,所以
 - PyNative mode: also called dynamic graph mode. Operators in the neural network are dispatched and executed one by one, which makes it easy to write and debug the model.
 - Graph mode: also called static graph mode. The neural network model is compiled into a whole graph and then dispatched for execution. This mode uses graph optimization and other techniques to improve running performance, and facilitates large-scale deployment and cross-platform running.
-You can refer to the [official tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/debugging_in_pynative_mode.html) to choose a suitable, consistent mode and coding style for training.
+You can refer to the [official tutorial](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/debug_in_pynative_mode.html) to choose a suitable, consistent mode and coding style for training.
@@ -273,7 +273,7 @@ A:MindSpore分布式训练除了支持数据并行,还支持算子级模型
 Q: Has MindSpore implemented an unpooling operation similar to `nn.MaxUnpool2d`?
-A: MindSpore does not currently provide unpooling APIs. If you want to implement the operation yourself, you can develop a custom operator; for details, see [Custom Operators](https://www.mindspore.cn/tutorial/zh-CN/master/use/custom_operator.html).
+A: MindSpore does not currently provide unpooling APIs. If you want to implement the operation yourself, you can develop a custom operator; for details, see [Custom Operators](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/custom_operator_ascend.html).
@@ -309,7 +309,7 @@ A:TensorFlow的对象检测Pipeline接口属于TensorFlow Model模块。待Min
 Q: How do I migrate scripts or models from other frameworks to MindSpore?
-A: For script or model migration, see the [Network Migration](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html) guide on the MindSpore official website.
+A: For script or model migration, see the [Network Migration](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/migrate_3rd_scripts.html) guide on the MindSpore official website.
diff --git a/docs/faq/source_zh_cn/_static/logo_notebook.png b/docs/faq/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/faq/source_zh_cn/_static/logo_notebook.png differ
diff --git a/lite/tutorials/source_zh_cn/_static/logo_source.png b/docs/faq/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from lite/tutorials/source_zh_cn/_static/logo_source.png
rename to docs/faq/source_zh_cn/_static/logo_source.png
diff --git a/docs/source_zh_cn/conf.py b/docs/faq/source_zh_cn/conf.py
similarity index 95%
rename from docs/source_zh_cn/conf.py
rename to docs/faq/source_zh_cn/conf.py
index f58451f3fafe89fa2a734d62f19c94424250757b..95d7701759707ab95a3c199cd8a22e2e2cc1194d 100644
--- a/docs/source_zh_cn/conf.py
+++ b/docs/faq/source_zh_cn/conf.py
@@ -48,8 +48,6 @@ exclude_patterns = []
pygments_style = 'sphinx'
-autodoc_inherit_docstrings = False
-
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
@@ -59,6 +57,6 @@ html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
-html_search_options = {'dict': '../resource/jieba.txt'}
+html_search_options = {'dict': '../../resource/jieba.txt'}
html_static_path = ['_static']
\ No newline at end of file
diff --git a/docs/faq/source_zh_cn/index.rst b/docs/faq/source_zh_cn/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..a8969301c9ae45646390ccc0e329f56e648bdef5
--- /dev/null
+++ b/docs/faq/source_zh_cn/index.rst
@@ -0,0 +1,13 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 10:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore FAQ
+=================
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ faq
\ No newline at end of file
diff --git a/tutorials/Makefile b/docs/note/Makefile
similarity index 100%
rename from tutorials/Makefile
rename to docs/note/Makefile
diff --git a/docs/note/requirements.txt b/docs/note/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..162b50040286bb9a0177801c580a31013082a360
--- /dev/null
+++ b/docs/note/requirements.txt
@@ -0,0 +1,6 @@
+sphinx >= 2.2.1, <= 2.4.4
+recommonmark
+sphinx-markdown-tables
+sphinx_rtd_theme
+numpy
+jieba
diff --git a/docs/note/source_en/_static/logo_notebook.png b/docs/note/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/note/source_en/_static/logo_notebook.png differ
diff --git a/tutorials/source_en/_static/logo_source.png b/docs/note/source_en/_static/logo_source.png
similarity index 100%
rename from tutorials/source_en/_static/logo_source.png
rename to docs/note/source_en/_static/logo_source.png
diff --git a/docs/source_en/benchmark.md b/docs/note/source_en/benchmark.md
similarity index 96%
rename from docs/source_en/benchmark.md
rename to docs/note/source_en/benchmark.md
index da19dcb1a4963cba02322c2ece58036fa0ff0a8b..63bd75a4f0d9f1a7ee77f147ecb94a9dc7e462ef 100644
--- a/docs/source_en/benchmark.md
+++ b/docs/note/source_en/benchmark.md
@@ -13,7 +13,7 @@
-
+
This document describes the MindSpore benchmarks.
For details about the MindSpore networks, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
diff --git a/docs/source_en/community.rst b/docs/note/source_en/community.rst
similarity index 76%
rename from docs/source_en/community.rst
rename to docs/note/source_en/community.rst
index 80c0ddf710394273e53eff9db6a1495d4a613a17..109b7f0fa8cda205c51e9c4fa98c43ea2e57b493 100644
--- a/docs/source_en/community.rst
+++ b/docs/note/source_en/community.rst
@@ -1,4 +1,4 @@
-Community
-=========
+Participate in MindSpore Community
+==================================
Contributing Code
@@ -9,4 +9,4 @@ If you want to contribute code, please read https://gitee.com/mindspore/mindspor
Contributing Documents
----------------------
-If you want to contribute documents, please read https://gitee.com/mindspore/docs/blob/master/CONTRIBUTING_DOC.md .
\ No newline at end of file
+If you want to contribute documents, please read https://gitee.com/mindspore/docs/blob/r1.0/CONTRIBUTING_DOC.md .
\ No newline at end of file
diff --git a/lite/docs/source_en/conf.py b/docs/note/source_en/conf.py
similarity index 93%
rename from lite/docs/source_en/conf.py
rename to docs/note/source_en/conf.py
index fd89055b9bf9e2a889d23de6fa7395072650db42..a1fd767271ac159540440ed65bd0d676163366a9 100644
--- a/lite/docs/source_en/conf.py
+++ b/docs/note/source_en/conf.py
@@ -15,9 +15,9 @@ import os
# -- Project information -----------------------------------------------------
-project = 'MindSpore Lite'
-copyright = '2020, MindSpore Lite'
-author = 'MindSpore Lite'
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
# The full version, including alpha/beta/rc tags
release = 'master'
@@ -48,8 +48,6 @@ exclude_patterns = []
pygments_style = 'sphinx'
-autodoc_inherit_docstrings = False
-
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
diff --git a/docs/source_en/constraints_on_network_construction.md b/docs/note/source_en/constraints_on_network_construction.md
similarity index 72%
rename from docs/source_en/constraints_on_network_construction.md
rename to docs/note/source_en/constraints_on_network_construction.md
index 2da31582ec511af4c51e499d81e350b0b4d91797..517ec748ae77460875ae21672e1acd6947fc29e8 100644
--- a/docs/source_en/constraints_on_network_construction.md
+++ b/docs/note/source_en/constraints_on_network_construction.md
@@ -5,27 +5,27 @@
- [Constraints on Network Construction Using Python](#constraints-on-network-construction-using-python)
- - [Overview](#overview)
- - [Syntax Constraints](#syntax-constraints)
- - [Supported Python Data Types](#supported-python-data-types)
- - [MindSpore Extended Data Type](#mindspore-extended-data-type)
- - [Expression Types](#expression-types)
- - [Statement Types](#statement-types)
- - [System Functions/Class](#system-functionsclasses)
- - [Function Parameters](#function-parameters)
- - [Operators](#operators)
- - [Index operation](#index-operation)
- - [Unsupported Syntax](#unsupported-syntax)
- - [Network Definition Constraints](#network-definition-constraints)
- - [Instance Types on the Entire Network](#instance-types-on-the-entire-network)
- - [Network Input Type](#network-input-type)
- - [Network Graph Optimization](#network-graph-optimization)
- - [Network Construction Components](#network-construction-components)
- - [Other Constraints](#other-constraints)
+ - [Overview](#overview)
+ - [Syntax Constraints](#syntax-constraints)
+ - [Supported Python Data Types](#supported-python-data-types)
+ - [MindSpore Extended Data Type](#mindspore-extended-data-type)
+ - [Expression Types](#expression-types)
+ - [Statement Types](#statement-types)
+ - [System Functions/Classes](#system-functionsclasses)
+ - [Function Parameters](#function-parameters)
+ - [Operators](#operators)
+ - [Index operation](#index-operation)
+ - [Unsupported Syntax](#unsupported-syntax)
+ - [Network Definition Constraints](#network-definition-constraints)
+ - [Instance Types on the Entire Network](#instance-types-on-the-entire-network)
+ - [Network Input Type](#network-input-type)
+ - [Network Graph Optimization](#network-graph-optimization)
+ - [Network Construction Components](#network-construction-components)
+ - [Other Constraints](#other-constraints)
-
+
## Overview
MindSpore can compile user source code based on the Python syntax into computational graphs, and can convert common functions or instances inherited from nn.Cell into computational graphs. Currently, MindSpore does not support conversion of any Python source code into computational graphs. Therefore, there are constraints on source code compilation, including syntax constraints and network definition constraints. As MindSpore evolves, the constraints may change.
@@ -208,8 +208,8 @@ Currently, the following syntax is not supported in network constructors:
## Network Definition Constraints
### Instance Types on the Entire Network
-* Common Python function with the [@ms_function](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.html#mindspore.ms_function) decorator.
-* Cell subclass inherited from [nn.Cell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
+* Common Python function with the [@ms_function](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.html#mindspore.ms_function) decorator.
+* Cell subclass inherited from [nn.Cell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Cell).
### Network Input Type
* The training data input parameters of the entire network must be of the Tensor type.
@@ -222,44 +222,59 @@ Currently, the following syntax is not supported in network constructors:
| Category | Content
| :----------- |:--------
-| `Cell` instance |[mindspore/nn/*](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html), and custom [Cell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell).
+| `Cell` instance |[mindspore/nn/*](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html), and custom [Cell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Cell).
| Member function of a `Cell` instance | Member functions of other classes in the construct function of Cell can be called.
| Function | Custom Python functions and system functions listed in the preceding content.
| Dataclass instance | Class decorated with @dataclass.
-| Primitive operator |[mindspore/ops/operations/*](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html).
-| Composite operator |[mindspore/ops/composite/*](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.composite.html).
-| Operator generated by constexpr |Uses the value generated by [@constexpr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.constexpr) to calculate operators.
+| Primitive operator |[mindspore/ops/operations/*](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html).
+| Composite operator |[mindspore/ops/composite/*](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html).
+| Operator generated by constexpr |Uses the value generated by [@constexpr](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.constexpr) to calculate operators.
### Other Constraints
-Input parameters of the construct function on the entire network and parameters of functions modified by the ms_function decorator are generalized during the graph compilation. Therefore, they cannot be transferred to operators as constant input. Therefore, in graph mode, the parameter passed to the entry network can only be Tensor. As shown in the following example:
-* The following is an example of incorrect input:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
-
- def construct(self, input_x, input_axis):
- return self.expandDims(input_x, input_axis)
- expand_dim = ExpandDimsTest()
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x, 0)
- ```
- In the example, ExpandDimsTest is a single-operator network with two inputs: input_x and input_axis. The second input of the ExpandDims operator must be a constant. This is because input_axis is required when the output dimension of the ExpandDims operator is deduced during graph compilation. As the network parameter input, the value of input_axis is generalized into a variable and cannot be determined. As a result, the output dimension of the operator cannot be deduced, causing the graph compilation failure. Therefore, the input required by deduction in the graph compilation phase must be a constant. In APIs, the "constant input is needed" is marked for parameters that require constant input of these operators.
+1. Input parameters of the `construct` function on the entire network and parameters of functions modified by the `ms_function` decorator are generalized during the graph compilation and cannot be passed to operators as constant input. Therefore, in graph mode, the parameter passed to the entry network can only be `Tensor`. As shown in the following example:
+
+ * The following is an example of incorrect input:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+
+ def construct(self, input_x, input_axis):
+ return self.expandDims(input_x, input_axis)
+ expand_dim = ExpandDimsTest()
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x, 0)
+ ```
+ In the example, `ExpandDimsTest` is a single-operator network with two inputs: `input_x` and `input_axis`. The second input of the `ExpandDims` operator must be a constant, because `input_axis` is required when the output dimension of the `ExpandDims` operator is deduced during graph compilation. As a network parameter input, the value of `input_axis` is generalized into a variable and cannot be determined. As a result, the output dimension of the operator cannot be deduced, causing the graph compilation to fail. Therefore, inputs required by deduction in the graph compilation phase must be constants. In the API documentation, parameters of such operators that require constant input are marked `const input is needed`.
+
+ * Directly enter the needed value or a member variable in a class for the constant input of the operator in the construct function. The following is an example of correct input:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self, axis):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+ self.axis = axis
+
+ def construct(self, input_x):
+ return self.expandDims(input_x, self.axis)
+ axis = 0
+ expand_dim = ExpandDimsTest(axis)
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x)
+ ```
+
+2. Modifying non-`Parameter` data members of the network is not allowed. An example is as follows:
-* Directly enter the needed value or a member variable in a class for the constant input of the operator in the construct function. The following is an example of correct input:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self, axis):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
- self.axis = axis
-
- def construct(self, input_x):
- return self.expandDims(input_x, self.axis)
- axis = 0
- expand_dim = ExpandDimsTest(axis)
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x)
```
+ class Net(Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.num = 2
+ self.par = Parameter(Tensor(np.ones((2, 3, 4))), name="par")
+
+ def construct(self, x, y):
+ return x + y
+ ```
+In the network defined above, `self.num` is not a `Parameter` and cannot be modified, but `self.par` is a `Parameter` and can be modified.
diff --git a/docs/source_zh_cn/design.rst b/docs/note/source_en/design.rst
similarity index 76%
rename from docs/source_zh_cn/design.rst
rename to docs/note/source_en/design.rst
index 5db1a3815235941f9a2a74420fe85dd36e1afad5..f828e869c1dfd4c79425c1007af3b07e5b309948 100644
--- a/docs/source_zh_cn/design.rst
+++ b/docs/note/source_en/design.rst
@@ -1,16 +1,16 @@
-设计文档
+Design
===========
.. toctree::
:maxdepth: 1
- architecture
- technical_white_paper
- design/mindspore/ir
+ design/mindspore/architecture
+ design/mindspore/architecture_lite
+ design/mindspore/mindir
design/mindspore/distributed_training_design
- design/mindinsight/profiler_design
design/mindinsight/training_visual_design
design/mindinsight/graph_visual_design
design/mindinsight/tensor_visual_design
+ design/mindinsight/profiler_design
design/mindarmour/differential_privacy_design
design/mindarmour/fuzzer_design
diff --git a/docs/note/source_en/design/mindarmour/differential_privacy_design.md b/docs/note/source_en/design/mindarmour/differential_privacy_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..86ce6049a28d0b7541c7b3bb10f27db8a062c846
--- /dev/null
+++ b/docs/note/source_en/design/mindarmour/differential_privacy_design.md
@@ -0,0 +1,71 @@
+# Differential Privacy
+
+`Ascend` `Model Development` `Model Optimization` `Framework Development` `Enterprise` `Expert` `Contributor`
+
+
+
+- [Differential Privacy](#differential-privacy)
+ - [Overall Design](#overall-design)
+ - [DP Optimizer](#dp-optimizer)
+ - [DP Mechanisms](#dp-mechanisms)
+ - [Monitor](#monitor)
+ - [Code Implementation](#code-implementation)
+ - [References](#references)
+
+
+
+
+
+## Overall Design
+
+The Differential-Privacy module of MindArmour implements the differential privacy training capability. Model training consists of building the training dataset, computing the loss, computing gradients, and updating model parameters. Currently, the differential privacy training of MindArmour focuses on the gradient computation process and uses the corresponding algorithms to clip and add noise to gradients. In this way, user data privacy is protected.
+
+
+
+Figure 1 Overall design of differential privacy
+
+Figure 1 shows the overall design of differential privacy training, which mainly includes differential privacy noise mechanisms (DP mechanisms), a differential privacy optimizer (DP optimizer), and a privacy monitor.
+
+
+### DP Optimizer
+
+DP optimizer inherits capabilities of the MindSpore optimizer and uses the DP mechanisms to scramble and protect gradients. Currently, MindArmour provides three types of DP optimizers: constant Gaussian optimizer, adaptive Gaussian optimizer, and adaptive clipping optimizer. Each type of DP optimizer adds differential privacy protection capabilities to common optimizers such as SGD and Momentum from different perspectives.
+
+* Constant Gaussian optimizer is a DP optimizer for non-adaptive Gaussian noise. The advantage is that the differential privacy budget ϵ can be strictly controlled. The disadvantage is that in the model training process, the noise amount added in each step is fixed. If the number of training steps is too large, the noise in the later phase of training makes the model convergence difficult, or even causes the performance to deteriorate greatly and the model availability to be poor.
+* Adaptive Gaussian optimizer adaptively adjusts the standard deviation to adjust the Gaussian distribution noise. In the initial phase of model training, a large amount of noise is added. As the model gradually converges, the noise amount gradually decreases, and the impact of the noise on the model availability is reduced. A disadvantage of the adaptive Gaussian noise is that a differential privacy budget cannot be strictly controlled.
+* Adaptive clipping optimizer is a DP optimizer that adaptively adjusts a clipping granularity. Gradient clipping is an important operation in differential privacy training. The adaptive clipping optimizer can control a ratio of gradient clipping to fluctuate within a given range and control the gradient clipping granularity during training steps.
+
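The gradient perturbation shared by these optimizers — per-step norm clipping followed by Gaussian noise — can be sketched outside MindArmour with NumPy. This is an illustrative sketch, not the MindArmour API; the names `clip_norm` and `noise_multiplier` are assumptions chosen for clarity.

```python
import numpy as np

def dp_perturb_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a gradient to a maximum L2 norm, then add Gaussian noise.

    Mirrors the constant Gaussian optimizer idea: the noise scale is fixed
    at noise_multiplier * clip_norm for every training step.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    # Scale the gradient down only when its norm exceeds the clipping bound.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

grad = np.array([3.0, 4.0])  # L2 norm 5.0, clipped down to norm 1.0
noisy = dp_perturb_gradient(grad, clip_norm=1.0, noise_multiplier=0.5)
```

The adaptive variants differ only in how `clip_norm` and `noise_multiplier` evolve over training steps.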
+### DP Mechanisms
+
+The noise mechanism is a basis for building a differential privacy training capability. Different noise mechanisms meet requirements of different DP optimizers, including multiple mechanisms such as constant Gaussian distribution noise, adaptive Gaussian distribution noise, adaptive clipping Gaussian distribution noise, and Laplace distribution noise.
+
+### Monitor
+
+Monitor provides callback functions such as Rényi differential privacy (RDP) and zero-concentrated differential privacy (ZCDP) to monitor the differential privacy budget of the model.
+
+* ZCDP[2]
+
+ ZCDP is a loose differential privacy definition. It uses the Rényi divergence to measure the distribution difference of random functions on adjacent datasets.
+
+* RDP[3]
+
+ RDP is a more general differential privacy definition based on the Rényi divergence. It uses the Rényi divergence to measure the distribution difference between two adjacent datasets.
+
+
+Compared with traditional differential privacy, ZCDP and RDP provide stricter privacy budget upper bound guarantee.
+
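Both definitions are built on the Rényi divergence of order \(\alpha\) between the output distributions \(P\) and \(Q\) of a mechanism on two adjacent datasets; as a reminder, its standard form is:

```latex
D_{\alpha}(P \,\|\, Q) = \frac{1}{\alpha - 1}
  \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right]
```

A randomized mechanism satisfies \((\alpha, \epsilon)\)-RDP when this divergence is at most \(\epsilon\) for every pair of adjacent datasets [3].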
+
+## Code Implementation
+
+* [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): implements the noise generation mechanism required by differential privacy training, including simple Gaussian noise, adaptive Gaussian noise, and adaptive clipping Gaussian noise.
+* [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): implements the fundamental logic of using the noise generation mechanism to add noise during backward propagation.
+* [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py): implements the callback function for computing the differential privacy budget. During model training, the current differential privacy budget is returned.
+* [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py): implements the logic of computing the loss and gradient as well as the gradient truncation logic of differential privacy training, which is the entry for users to use the differential privacy training capability.
+
+## References
+
+[1] Dwork, Cynthia, and Jing Lei. "Differential privacy and robust statistics." *Proceedings of the forty-first annual ACM symposium on Theory of computing*. 2009.
+
+[2] Lee, Jaewoo, and Daniel Kifer. "Concentrated differentially private gradient descent with adaptive per-iteration privacy budget." *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*. 2018.
+
+[3] Mironov, Ilya. "Rényi differential privacy." *2017 IEEE 30th Computer Security Foundations Symposium (CSF)*. IEEE, 2017.
diff --git a/docs/note/source_en/design/mindarmour/fuzzer_design.md b/docs/note/source_en/design/mindarmour/fuzzer_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..2a41c2342eb3ed7fe13804890f7d97f491e2f20e
--- /dev/null
+++ b/docs/note/source_en/design/mindarmour/fuzzer_design.md
@@ -0,0 +1,74 @@
+# AI Model Security Test
+
+`Linux` `Ascend` `GPU` `CPU` `Data Preparation` `Model Development` `Model Training` `Model Optimization` `Enterprise` `Expert`
+
+
+
+
+- [AI Model Security Test](#ai-model-security-test)
+ - [Background](#background)
+ - [Fuzz Testing Design](#fuzz-testing-design)
+ - [Fuzz Testing Process](#fuzz-testing-process)
+ - [Code Implementation](#code-implementation)
+ - [References](#references)
+
+
+
+
+
+## Background
+
+Different from the [fuzzing security test for traditional programs](https://zhuanlan.zhihu.com/p/43432370), MindArmour provides the AI model security test module fuzz_testing for deep neural networks. Based on neural network features, the concept of neuron coverage rate [1] is introduced to guide fuzz testing: samples are generated in the direction of increasing neuron coverage rate so that inputs activate more neurons and the distribution range of neuron values becomes wider, fully testing the DNN and exploring the output results and error behavior of different types of models.
+
+## Fuzz Testing Design
+
+The following figure shows the security test design of the AI model.
+
+
+
+At the user interface layer, users need to provide the original dataset `DataSet`, the tested model `Model`, and the Fuzzer parameter `Fuzzer configuration`. After fuzzing the model and data, the Fuzzer module returns the security report `Security Report`.
+
+The fuzz testing architecture consists of three modules:
+
+1. Natural Threat/Adversarial Example Generator:
+
+ Randomly select a mutation method to mutate the seed data and generate multiple variants. Supported mutation policies include:
+
+ - Image affine transformation methods: Translate, Rotate, Scale, and Shear.
+ - Methods based on image pixel value changes: Contrast, Brightness, Blur, and Noise.
+ - Methods for generating adversarial examples based on white-box and black-box attacks: FGSM, PGD, and MDIIM.
+
+2. Fuzzer Moduler:
+
+ Perform fuzz testing on the mutated data to observe the change of the neuron coverage rate. If the generated data increases the neuron coverage rate, add the data to the mutated seed queue for the next round of data mutation. Currently, the following neuron coverage metrics are supported: KMNC, NBC, and SNAC [2].
+
+3. Evaluation:
+
+ Evaluate the fuzz testing effect, quality of generated data, and strength of mutation methods. Five metrics of three types are supported, including the general evaluation metric (accuracy), neuron coverage rate metrics (kmnc, nbc, and snac), and adversarial attack evaluation metric (attack_success_rate).
+
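The simplest coverage signal — the fraction of neurons driven above an activation threshold by at least one input — can be sketched in a few lines. This is a simplified illustration in the spirit of the metrics above, not MindArmour's exact KMNC/NBC/SNAC implementation:

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least one input.

    `activations` is an (n_inputs, n_neurons) array of post-layer outputs.
    """
    activated = (activations > threshold).any(axis=0)
    return activated.sum() / activated.size

acts = np.array([[0.9, -0.2, 0.1],
                 [0.0, -0.5, 0.7]])
coverage = neuron_coverage(acts, threshold=0.5)  # neurons 0 and 2 fire -> 2/3
```

KMNC, NBC, and SNAC refine this idea by tracking per-neuron value ranges observed during training rather than a single global threshold.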
+## Fuzz Testing Process
+
+
+
+The fuzz testing process is as follows:
+
+1. Select seed A from the seed queue according to the policy.
+2. Randomly select a mutation policy to mutate seed A and generate multiple variants A1, A2, ...
+3. Use the target model to predict the variants. If the semantics of a variant are consistent with the seed, the variant enters the Fuzzed Tests.
+4. If the prediction is correct, use the neuron coverage metric for analysis.
+5. If a variant increases the coverage rate, place the variant in the seed queue for the next round of mutation.
+
+Through multiple rounds of mutations, you can obtain a series of variant data in the Fuzzed Tests, perform further analysis, and provide security reports from multiple perspectives. You can use them to deeply analyze defects of the neural network model and enhance the model to improve its universality and robustness.
+
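Steps 1–5 above can be sketched as a coverage-guided loop. The mutation, validity, and coverage functions here are toy stand-ins for the real mutation policies and KMNC/NBC/SNAC metrics, and all names are illustrative:

```python
import random

def fuzz(seeds, mutate, coverage_gain, is_valid, rounds=20, rng=None):
    """Coverage-guided fuzzing: variants that raise coverage become new seeds."""
    rng = rng or random.Random(0)
    queue = list(seeds)
    fuzzed_tests = []
    for _ in range(rounds):
        if not queue:
            break
        seed = queue.pop(0)                      # step 1: pick a seed
        for variant in mutate(seed, rng):        # step 2: mutate it
            if not is_valid(seed, variant):      # step 3: semantics must match
                continue
            fuzzed_tests.append(variant)         # step 4: keep as a fuzzed test
            if coverage_gain(variant):           # step 5: new coverage -> reseed
                queue.append(variant)
    return fuzzed_tests

# Toy instantiation: integers stand in for samples, and "coverage" is the
# set of value buckets seen so far.
seen_buckets = set()

def gain(v):
    bucket = v // 5
    if bucket in seen_buckets:
        return False
    seen_buckets.add(bucket)
    return True

tests = fuzz(
    seeds=[10],
    mutate=lambda s, rng: [s + rng.randint(-3, 3) for _ in range(4)],
    coverage_gain=gain,
    is_valid=lambda s, v: abs(v - s) <= 3,
)
```

In the real module, `mutate` applies the affine, pixel-value, and adversarial-example policies, and `coverage_gain` is evaluated on the model's neuron activations.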
+## Code Implementation
+
+1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py): overall fuzz testing process.
+2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py): neuron coverage rate metrics, including KMNC, NBC, and SNAC.
+3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py): image mutation methods, including methods based on image pixel value changes and affine transformation methods.
+4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks): methods for generating adversarial examples based on white-box and black-box attacks.
+
+## References
+
+[1] Pei K, Cao Y, Yang J, et al. Deepxplore: Automated whitebox testing of deep learning systems[C]//Proceedings of the 26th Symposium on Operating Systems Principles. ACM, 2017: 1-18.
+
+[2] Ma L, Juefei-Xu F, Zhang F, et al. Deepgauge: Multi-granularity testing criteria for deep learning systems[C]//Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ACM, 2018: 120-131.
\ No newline at end of file
diff --git a/docs/note/source_en/design/mindarmour/images/dp_arch.png b/docs/note/source_en/design/mindarmour/images/dp_arch.png
new file mode 100644
index 0000000000000000000000000000000000000000..c903e4e2acece6c6de882852dc3570126b6fcb05
Binary files /dev/null and b/docs/note/source_en/design/mindarmour/images/dp_arch.png differ
diff --git a/docs/source_zh_cn/design/mindarmour/images/fuzz_architecture.png b/docs/note/source_en/design/mindarmour/images/fuzz_architecture.png
similarity index 100%
rename from docs/source_zh_cn/design/mindarmour/images/fuzz_architecture.png
rename to docs/note/source_en/design/mindarmour/images/fuzz_architecture.png
diff --git a/docs/source_zh_cn/design/mindarmour/images/fuzz_process.png b/docs/note/source_en/design/mindarmour/images/fuzz_process.png
similarity index 100%
rename from docs/source_zh_cn/design/mindarmour/images/fuzz_process.png
rename to docs/note/source_en/design/mindarmour/images/fuzz_process.png
diff --git a/docs/source_en/design/mindinsight/graph_visual_design.md b/docs/note/source_en/design/mindinsight/graph_visual_design.md
similarity index 97%
rename from docs/source_en/design/mindinsight/graph_visual_design.md
rename to docs/note/source_en/design/mindinsight/graph_visual_design.md
index 64f6b32d4589d64262586d3b2630bf61d15a7d64..74eced995c47794b828267706608149559d50e06 100644
--- a/docs/source_en/design/mindinsight/graph_visual_design.md
+++ b/docs/note/source_en/design/mindinsight/graph_visual_design.md
@@ -15,7 +15,7 @@
-
+
## Background
diff --git a/docs/source_zh_cn/design/mindinsight/images/analyser_class_profiler.png b/docs/note/source_en/design/mindinsight/images/analyser_class_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/analyser_class_profiler.png
rename to docs/note/source_en/design/mindinsight/images/analyser_class_profiler.png
diff --git a/docs/note/source_en/design/mindinsight/images/context_profiler.png b/docs/note/source_en/design/mindinsight/images/context_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..f11782ebfe473ddfaec9736055c9012a5129a26f
Binary files /dev/null and b/docs/note/source_en/design/mindinsight/images/context_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/images/graph_visual_class_design.png b/docs/note/source_en/design/mindinsight/images/graph_visual_class_design.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/graph_visual_class_design.png
rename to docs/note/source_en/design/mindinsight/images/graph_visual_class_design.png
diff --git a/tutorials/source_en/advanced_use/images/graph.png b/docs/note/source_en/design/mindinsight/images/graph_visual_main.png
similarity index 100%
rename from tutorials/source_en/advanced_use/images/graph.png
rename to docs/note/source_en/design/mindinsight/images/graph_visual_main.png
diff --git a/tutorials/source_en/advanced_use/images/graph_sidebar.png b/docs/note/source_en/design/mindinsight/images/graph_visual_right_side.png
similarity index 100%
rename from tutorials/source_en/advanced_use/images/graph_sidebar.png
rename to docs/note/source_en/design/mindinsight/images/graph_visual_right_side.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/module_profiler.png b/docs/note/source_en/design/mindinsight/images/module_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/module_profiler.png
rename to docs/note/source_en/design/mindinsight/images/module_profiler.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/parser_module_profiler.png b/docs/note/source_en/design/mindinsight/images/parser_module_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/parser_module_profiler.png
rename to docs/note/source_en/design/mindinsight/images/parser_module_profiler.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/proposer_class_profiler.png b/docs/note/source_en/design/mindinsight/images/proposer_class_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/proposer_class_profiler.png
rename to docs/note/source_en/design/mindinsight/images/proposer_class_profiler.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/proposer_module_profiler.png b/docs/note/source_en/design/mindinsight/images/proposer_module_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/proposer_module_profiler.png
rename to docs/note/source_en/design/mindinsight/images/proposer_module_profiler.png
diff --git a/docs/source_en/design/mindinsight/images/tensor_histogram.png b/docs/note/source_en/design/mindinsight/images/tensor_histogram.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/tensor_histogram.png
rename to docs/note/source_en/design/mindinsight/images/tensor_histogram.png
diff --git a/tutorials/source_en/advanced_use/images/tensor_table.png b/docs/note/source_en/design/mindinsight/images/tensor_table.png
similarity index 100%
rename from tutorials/source_en/advanced_use/images/tensor_table.png
rename to docs/note/source_en/design/mindinsight/images/tensor_table.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/time_order_profiler.png b/docs/note/source_en/design/mindinsight/images/time_order_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/time_order_profiler.png
rename to docs/note/source_en/design/mindinsight/images/time_order_profiler.png
diff --git a/docs/source_en/design/mindinsight/images/training_visualization_architecture.png b/docs/note/source_en/design/mindinsight/images/training_visualization_architecture.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/training_visualization_architecture.png
rename to docs/note/source_en/design/mindinsight/images/training_visualization_architecture.png
diff --git a/docs/source_en/design/mindinsight/images/training_visualization_data_flow.png b/docs/note/source_en/design/mindinsight/images/training_visualization_data_flow.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/training_visualization_data_flow.png
rename to docs/note/source_en/design/mindinsight/images/training_visualization_data_flow.png
diff --git a/docs/source_en/design/mindinsight/images/training_visualization_data_model.png b/docs/note/source_en/design/mindinsight/images/training_visualization_data_model.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/training_visualization_data_model.png
rename to docs/note/source_en/design/mindinsight/images/training_visualization_data_model.png
diff --git a/docs/note/source_en/design/mindinsight/profiler_design.md b/docs/note/source_en/design/mindinsight/profiler_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..c312ef9f33d0d2426a1089dc16f549e87cefc9cc
--- /dev/null
+++ b/docs/note/source_en/design/mindinsight/profiler_design.md
@@ -0,0 +1,175 @@
+# Profiler Design Document
+
+`Ascend` `GPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor`
+
+
+
+- [Profiler Design Document](#profiler-design-document)
+ - [Background](#background)
+ - [Profiler Architecture Design](#profiler-architecture-design)
+ - [Context](#context)
+ - [Module Structure](#module-structure)
+ - [Internal Module Interaction](#internal-module-interaction)
+ - [Sub-Module Design](#sub-module-design)
+ - [ProfilerAPI and Controller](#profilerapi-and-controller)
+ - [Description](#description)
+ - [Design](#design)
+ - [Parser](#parser)
+ - [Description](#description-1)
+ - [Design](#design-1)
+ - [Analyser](#analyser)
+ - [Description](#description-2)
+ - [Design](#design-2)
+ - [Proposer](#proposer)
+ - [Description](#description-3)
+ - [Design](#design-3)
+
+
+
+
+
+## Background
+
+To support model development and performance debugging in MindSpore, an easy-to-use profiling tool is required to intuitively display the performance information of each dimension of a network model, provide users with abundant profiling functions, and help users quickly locate network performance faults.
+
+## Profiler Architecture Design
+
+The Profiler architecture design is introduced from three aspects: the overall context of Profiler; its internal structure, including the module composition and layers; and the calling relationships between modules.
+
+### Context
+
+Profiler is a part of the MindSpore debugging and optimization tool. The following figure shows the tool context.
+
+
+
+Figure 1 Context relationship
+
+As shown in the preceding figure, the interaction between the Profiler and other components is as follows:
+
+1. In the training script, MindSpore Profiler is called to send the command that starts performance data collection to the MindSpore ada communication module. Finally, ada generates the original performance data.
+
+2. MindSpore Profiler parses the original data in the user script and generates the intermediate data results in the specified folder.
+
+3. MindInsight Profiler connects to the intermediate data and provides the visualized Profiler function for users.
+
+### Module Structure
+
+Modules are classified into the following layers:
+
+
+
+Figure 2 Relationships between modules at different layers
+
+
+Module functions are as follows:
+1. ProfilerAPI is a calling entry provided by code, including the performance collection startup API and analysis API.
+2. Controller is a module at a layer lower than that of ProfilerAPI. It is called by the startup API of ProfilerAPI to start or stop the performance collection function. The original data is written to a fixed position by ada.
+3. Parser is a module for parsing original performance data which is collected on the device and cannot be directly understood by users. Parser parses, combines, and converts the data to generate intermediate results that can be understood by users and analyzed by upper layers.
+4. Analyser obtains the intermediate results parsed by the lower-layer Parser, encapsulates, filters, and sorts the intermediate results, and returns the various information to the upper-layer Profiler API and RESTful.
+5. RESTful calls the common API provided by the backend Analyser to obtain the target data and connects the result to the frontend.
+
+### Internal Module Interaction
+Users can use the API or RESTful to complete the internal module interaction process. The following uses the API as an example:
+
+
+
+Figure 3 Module interaction
+
+The interaction process of each module is as follows:
+
+1. ProfilerAPI calls the control function of the lower-layer Controller to control the lower-layer collection module to collect performance information. Currently, the collection module (ada) receives commands in resident process mode and independently collects performance information.
+
+2. After the training is complete, users call the analysis API of ProfilerAPI.
+
+3. The analysis API of ProfilerAPI uses the Parser module to parse performance data, generates intermediate results, calls the Analyser module to analyze the results, and returns various information to users.
+
+## Sub-Module Design
+### ProfilerAPI and Controller
+
+#### Description
+ProfilerAPI provides an entry API in the training script for users to start performance collection and analyze performance data.
+ProfilerAPI delivers commands through Controller to control the startup of ada.
+
+#### Design
+ProfilerAPI belongs to the API layer of upper-layer application and is integrated by the training script. The function is divided into two parts:
+
+- Before training, call the bottom-layer Controller API to deliver a command to start a profiling task.
+
+- After training, call the bottom-layer Controller API to deliver commands to stop the profiling task, call the Analyser and Parser APIs to parse data files and generate result data such as operator performance statistics and training trace statistics.
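The two-phase flow above can be sketched as follows. All class and method names and internals here are hypothetical stand-ins for illustration, not MindSpore's real ProfilerAPI and Controller code:

```python
# Illustrative sketch of the start/stop control flow; the names are
# hypothetical, not the real MindSpore ProfilerAPI/Controller implementation.


class Controller:
    """Delivers start/stop commands to the collection module (ada)."""

    def __init__(self):
        self.collecting = False

    def start(self):
        self.collecting = True   # command: start performance collection

    def stop(self):
        self.collecting = False  # command: stop performance collection


class ProfilerAPI:
    """Entry API integrated by the training script."""

    def __init__(self):
        self._controller = Controller()
        self._controller.start()  # before training: start the profiling task

    def analyse(self):
        self._controller.stop()   # after training: stop the profiling task
        # Here Parser would parse the raw data files, and Analyser would
        # build results such as operator and training trace statistics.
        return {"op_statistics": [], "training_trace": []}


profiler = ProfilerAPI()       # profiling starts as training begins
result = profiler.analyse()    # after training: stop, parse, and analyse
print(sorted(result))          # ['op_statistics', 'training_trace']
```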
+
+
+Controller provides an API for the upper layer, calls API of the lower-layer performance collection module, and delivers commands for starting and stopping performance collection.
+
+The generated original performance data includes:
+
+- `hwts.log.data.45.dev.profiler_default_tag` file: stores operator execution information, including the start and end of a task and stream ID.
+- `DATA_PREPROCESS.dev.AICPU` file: specifies AI CPU operator execution time at each stage.
+- `Framework.host.task_desc_info` file: stores the mapping between operator IDs and operator names and the input and output information of each operator.
+- `training_trace.46.dev.profiler_default_tag` file: stores the start and end time of each step and time of step interval, forward and backward propagation, and step tail.
+
+### Parser
+#### Description
+Parser is a module for parsing original performance data which is collected on the device and cannot be directly understood by users. Parser parses, combines, and converts the data to generate intermediate results that can be understood by users and analyzed by upper layers.
+#### Design
+
+
+Figure 4 Parser module
+
+As shown in the preceding figure, there are HWTS Parser, AI CPU Parser, Framework Parser, and Training Trace Parser modules. Each module parses a type of original data to obtain the intermediate file that can be read by users.
+
+- HWTS Parser: parses the `hwts.log.data.45.dev.profiler_default_tag` file to obtain the task-based statistics of the device, such as the start and end of each task and stream ID, which are used to compute the operator execution time.
+- AI CPU Parser: parses the `DATA_PREPROCESS.dev.AICPU` file to obtain the AI CPU operator execution time at each stage.
+- Framework Parser: parses the `Framework.host.task_desc_info` file to obtain the mapping between AI Core operator and task, and key operator information.
+- Training Trace Parser: parses the `training_trace.46.dev.profiler_default_tag` file to analyze the time at each training stage.
+
+### Analyser
+
+#### Description
+Analyser is used to filter, sort, query, and page the intermediate results generated at the parsing stage.
+
+#### Design
+
+This module parses the intermediate files generated by Parser, provides a general API for upper-layer data analysis, and returns the analyzed data to the upper layer for display. The various intermediate files share common points that can be abstracted. Therefore, the following figure shows the design of the Analyser class.
+
+
+
+Figure 5 Analyser class
+
+As shown in the preceding figure, multiple Analysers are implemented for the different contents to be queried. Filter, sorting, and pagination conditions can be defined for each Analyser. Each Analyser knows which intermediate files it requires to merge, filter, and sort data. Analyser is associated with Parser only through the intermediate files generated by Parser, and no Parser function is called directly. In this way, Analyser and Parser are decoupled.
+
+Currently, there are two Analysers for operator information:
+
+- AicoreTypeAnalyser: filters the average execution information of each operator type.
+- AicoreDetailAnalyser: filters the detailed average information of each operator.
+
+To hide the internal implementation of Analyser and facilitate calling, the simple factory mode is used to obtain the specified Analyser through AnalyserFactory.
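A minimal sketch of that simple factory follows. Only the class names AnalyserFactory, AicoreTypeAnalyser, and AicoreDetailAnalyser come from this document; the internals are illustrative, not MindSpore's actual code:

```python
# Simple-factory sketch: callers obtain an Analyser by name only, hiding the
# concrete classes. Internal details are illustrative assumptions.


class BaseAnalyser:
    """Base class: each Analyser knows which intermediate files it needs."""

    def __init__(self, profiling_dir):
        self._profiling_dir = profiling_dir

    def query(self, condition=None):
        """Filter/sort/paginate intermediate results (illustrative)."""
        raise NotImplementedError


class AicoreTypeAnalyser(BaseAnalyser):
    def query(self, condition=None):
        return {"object": "aicore_type", "dir": self._profiling_dir}


class AicoreDetailAnalyser(BaseAnalyser):
    def query(self, condition=None):
        return {"object": "aicore_detail", "dir": self._profiling_dir}


class AnalyserFactory:
    """Maps an analyser type name to its concrete class."""

    _analysers = {
        "aicore_type": AicoreTypeAnalyser,
        "aicore_detail": AicoreDetailAnalyser,
    }

    @classmethod
    def instance(cls, analyser_type, profiling_dir):
        if analyser_type not in cls._analysers:
            raise ValueError("unknown analyser type: %s" % analyser_type)
        return cls._analysers[analyser_type](profiling_dir)


analyser = AnalyserFactory.instance("aicore_type", "/tmp/profiler")
print(analyser.query()["object"])  # aicore_type
```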
+
+
+### Proposer
+#### Description
+Proposer is a Profiler performance optimization suggestion module. Proposer calls the Analyser module to obtain performance data, analyzes the performance data based on optimization rules, and displays optimization suggestions for users through the UI and API.
+
+#### Design
+
+Modules are classified into the following layers:
+
+
+
+Figure 6 Proposer module
+
+As shown in the preceding figure:
+
+- Proposer provides APIs for the upper-layer API and RESTful to obtain optimization suggestions.
+- Proposer calls the Analyser API to obtain performance data and obtain optimization suggestions based on optimization rules.
+- Proposer calls Analyser factory to obtain the Analyser object.
+
+You can call the query API of the Analyser object to obtain information, including the top N AICore, AICoreType, and AICpu operators that are sorted by time and the time information of each training trace stage.
+
+The following figure shows the module class design:
+
+
+
+Figure 7 Proposer class
+
+As shown in the preceding figure:
+
+- Proposers of various types inherit the abstract class Proposer and implement the analyze method.
+- API and CLI call the ProposerFactory to obtain the Proposer and call the Proposer.analyze function to obtain the optimization suggestions of each type of Proposer.
\ No newline at end of file
diff --git a/docs/source_en/design/mindinsight/tensor_visual_design.md b/docs/note/source_en/design/mindinsight/tensor_visual_design.md
similarity index 88%
rename from docs/source_en/design/mindinsight/tensor_visual_design.md
rename to docs/note/source_en/design/mindinsight/tensor_visual_design.md
index ce3839d5b9affa269ccf802cf10a697412a82b78..76bb0a7b7d45a44c741dcd05925a6be807936ece 100644
--- a/docs/source_en/design/mindinsight/tensor_visual_design.md
+++ b/docs/note/source_en/design/mindinsight/tensor_visual_design.md
@@ -14,7 +14,7 @@
-
+
## Background
@@ -44,7 +44,7 @@ Figure 1: Table view
Figure 1 displays tensors recorded by a user in a form of a table. The following functions are included:
-- The input boxes under the table display the tensor data of the current dimension. The colon (:) indicates all values of the current dimension. You can enter the corresponding index in the box (the meaning is the same as that of the Python index, and negative values are supported) or use `:` to query tensor data in a specific dimension.
+- The input boxes under the table display the tensor data of the current dimension. The colon (:) indicates index range of the current dimension which is basically the same as the meaning of Python index. If no specific index is specified, it indicates all the values of the current dimension and `2:5` indicates the value of index from 2 to 5 (not including 5). You can enter the corresponding index in the box or use index range containing `:` to query tensor data in a specific dimension.
- Drag the thumb of the linear slider below the table to query the tensor data of a specific step.

diff --git a/docs/source_en/design/mindinsight/training_visual_design.md b/docs/note/source_en/design/mindinsight/training_visual_design.md
similarity index 94%
rename from docs/source_en/design/mindinsight/training_visual_design.md
rename to docs/note/source_en/design/mindinsight/training_visual_design.md
index 599a44df3b974494a1900902f799ed6a6c1e765d..58876603c59dcb57d13dc5779ddc8813f4543f93 100644
--- a/docs/source_en/design/mindinsight/training_visual_design.md
+++ b/docs/note/source_en/design/mindinsight/training_visual_design.md
@@ -18,7 +18,7 @@
-
+
[MindInsight](https://gitee.com/mindspore/mindinsight) is a visualized debugging and tuning component of MindSpore. MindInsight can be used to complete tasks such as training visualization, performance tuning, and precision tuning.
@@ -40,11 +40,11 @@ The training information collection function in MindSpore consists of training i
Training information collection APIs include:
-- Training information collection API based on the summary operator. This API contains four summary operators, that is, the ScalarSummary operator for recording scalar data, the ImageSummary operator for recording image data, the HistogramSummary operator for recording parameter distribution histogram data, and the TensorSummary operator for recording tensor data. For details about the operators, see [Operator List](https://www.mindspore.cn/docs/en/master/operator_list.html).
+- Training information collection API based on the summary operator. This API contains four summary operators, that is, the ScalarSummary operator for recording scalar data, the ImageSummary operator for recording image data, the HistogramSummary operator for recording parameter distribution histogram data, and the TensorSummary operator for recording tensor data. For details about the operators, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.0/operator_list.html).
-- Training information collection API based on the Python API. You can use the [SummaryRecord.add_value](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value) method to collect training information in Python code.
+- Training information collection API based on the Python API. You can use the [SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value) method to collect training information in Python code.
-- Easy-to-use training information collection callback. The [SummaryCollector](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector) callback function can be used to conveniently collect common training information to training logs.
+- Easy-to-use training information collection callback. The [SummaryCollector](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector) callback function can be used to conveniently collect common training information to training logs.
The training information persistence module mainly includes a summary_record module used to manage a cache and a write_pool module used to process data in parallel and write data into a file. After the training information is made persistent, it is stored in the training log file (summary file).
diff --git a/docs/source_en/architecture.md b/docs/note/source_en/design/mindspore/architecture.md
similarity index 89%
rename from docs/source_en/architecture.md
rename to docs/note/source_en/design/mindspore/architecture.md
index 43cf28ef2fa68e47a8c50d59d3fd5ad752fb280b..9b59fc4291c0bb59df24518ebb3f854f98b3832e 100644
--- a/docs/source_en/architecture.md
+++ b/docs/note/source_en/design/mindspore/architecture.md
@@ -2,7 +2,7 @@
`Linux` `Windows` `Ascend` `GPU` `CPU` `On Device` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor`
-
+
The MindSpore framework consists of the Frontend Expression layer, Graph Engine layer, and Backend Runtime layer.
diff --git a/lite/docs/source_en/architecture.md b/docs/note/source_en/design/mindspore/architecture_lite.md
similarity index 85%
rename from lite/docs/source_en/architecture.md
rename to docs/note/source_en/design/mindspore/architecture_lite.md
index 64585775720d39c365190d9f8f24c82931cf24e3..e244e0c9eed12136807fe13835450db37d5831de 100644
--- a/lite/docs/source_en/architecture.md
+++ b/docs/note/source_en/design/mindspore/architecture_lite.md
@@ -1,6 +1,8 @@
# Overall Architecture
-
+`Linux` `Windows` `On Device` `Inference Application` `Intermediate` `Expert` `Contributor`
+
+
The overall architecture of MindSpore Lite is as follows:
diff --git a/docs/note/source_en/design/mindspore/distributed_training_design.md b/docs/note/source_en/design/mindspore/distributed_training_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..b943ae1f4fdeab8f7ac68211b6f513b6f54c3a8a
--- /dev/null
+++ b/docs/note/source_en/design/mindspore/distributed_training_design.md
@@ -0,0 +1,144 @@
+# Distributed Training Design
+
+`Ascend` `GPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor`
+
+
+
+- [Distributed Training Design](#distributed-training-design)
+ - [Background](#background)
+ - [Concepts](#concepts)
+ - [Collective Communication](#collective-communication)
+ - [Synchronization Mode](#synchronization-mode)
+ - [Data Parallelism](#data-parallelism)
+ - [Principle of Data Parallelism](#principle-of-data-parallelism)
+ - [Data Parallel Code](#data-parallel-code)
+ - [Automatic Parallelism](#automatic-parallelism)
+ - [Principle of Automatic Parallelism](#principle-of-automatic-parallelism)
+ - [Automatic Parallel Code](#automatic-parallel-code)
+
+
+
+
+
+## Background
+
+With the rapid development of deep learning, the size of datasets and the number of parameters are growing exponentially to improve the accuracy and generalization capability of neural networks. Parallel distributed training has become a development trend to resolve the performance bottleneck of ultra-large scale networks. MindSpore supports the mainstream distributed training paradigms and provides an automatic hybrid parallel solution. The following describes the design principles of several parallel training modes and provides guidance for users to perform custom development.
+
+
+## Concepts
+
+### Collective Communication
+
+Collective communication is defined as communication that involves a group of processes. All processes in the group send and receive data after meeting certain conditions. MindSpore implements data transmission during parallel training through collective communication. On Ascend chips, MindSpore depends on the Huawei Collective Communication Library (`HCCL`) to implement the task. On GPU, MindSpore depends on the NVIDIA Collective Communication Library (`NCCL`) to implement the task.
+
+### Synchronization Mode
+
+In synchronous mode, all devices start training at the same time and update parameter values synchronously after the backward propagation algorithm is executed. Currently, MindSpore uses the synchronous training mode.
+
+## Data Parallelism
+
+This section describes how the data parallel mode `ParallelMode.DATA_PARALLEL` works in MindSpore.
+
+### Principle of Data Parallelism
+
+
+
+1. Environment dependencies
+
+ Each time before parallel training starts, the `mindspore.communication.init` API is called to initialize communication resources and the global communication group `WORLD_COMM_GROUP` is automatically created.
+
+2. Data distribution
+
+ The key of data parallelism is to split datasets based on the sample dimension and deliver the split datasets to different devices. Each dataset loading API provided by the `mindspore.dataset` module has the `num_shards` and `shard_id` parameters. The parameters are used to split a dataset into multiple datasets, perform cyclic sampling, and collect data of the `batch` size to each device. When the data volume is insufficient, the sampling restarts from the beginning.
+
+3. Network structure
+
+ The scripting method of a data parallel network is the same as that of a standalone network. Although the model on each device is executed independently during the forward and backward propagation, all devices maintain the same network structure. To ensure synchronous training between devices, the initial values of corresponding network parameters must be the same. You are advised to set the same random number seed on each device by using `numpy.random.seed`, which has the same effect as broadcasting the model.
+
+4. Gradient aggregation
+
+ Theoretically, the training effect of a data parallel network should be the same as that of the standalone network. To ensure consistency of the calculation logic, the `AllReduce` operator is inserted after gradient calculation to implement gradient aggregation between devices. You can enable `mean` to average the summed gradient values; `mean` can also be regarded as a hyperparameter, because enabling it is equivalent to dividing the learning rate by the number of devices.
+
+5. Parameter update
+
+ Because the gradient aggregation operation is introduced, the model on each device performs the parameter update with the same gradient values. Therefore, MindSpore implements a synchronous data parallel training mode, and theoretically the models trained on each device are the same. If a reduce operation over samples is involved in the network, the network outputs may differ across devices; this is determined by the sharding attribute of data parallelism.
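The data distribution and gradient aggregation steps above can be sketched in plain Python. Everything here is an illustrative simplification, not MindSpore code:

```python
# Plain-Python sketch of data distribution (cyclic sampling by num_shards /
# shard_id) and gradient aggregation (AllReduce with optional mean).
# Function names and details are illustrative assumptions.


def shard_indices(dataset_size, num_shards, shard_id, num_samples):
    """Cyclic sampling: each device reads every num_shards-th sample and
    restarts from the beginning when the data volume is insufficient."""
    indices, cursor = [], shard_id
    while len(indices) < num_samples:
        indices.append(cursor % dataset_size)
        cursor += num_shards
    return indices


def all_reduce(per_device_grads, mean=True):
    """Aggregate one gradient value across devices; `mean` averages the sum."""
    total = sum(per_device_grads)
    return total / len(per_device_grads) if mean else total


# A 10-sample dataset split across 4 devices; device 0 draws 4 samples:
print(shard_indices(10, num_shards=4, shard_id=0, num_samples=4))  # [0, 4, 8, 2]

# The same parameter's gradient on 4 devices, aggregated with/without mean:
print(all_reduce([1.0, 2.0, 3.0, 4.0], mean=True))   # 2.5
print(all_reduce([1.0, 2.0, 3.0, 4.0], mean=False))  # 10.0
```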
+
+### Data Parallel Code
+
+1. Collective communication
+
+ - [management.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/communication/management.py): This file covers the helper function APIs commonly used during the collective communication process, for example, the APIs for obtaining the cluster size and the device ID. When collective communication is executed on the Ascend chip, the framework loads the `libhccl.so` library file in the environment and uses it to call the communication APIs from the Python layer to the underlying layer.
+ - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/operations/comm_ops.py): MindSpore encapsulates supported collective communication operations as operators and stores the operators in this file. The operators include `AllReduce`, `AllGather`, `ReduceScatter`, and `Broadcast`. `PrimitiveWithInfer` defines the attributes required by the operators, as well as the `shape` and `dtype` inference methods from the input to the output during graph composition.
+
+2. Gradient aggregation
+
+ - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/nn/wrap/grad_reducer.py): This file implements the gradient aggregation process. After the input parameter `grads` is expanded by using `HyperMap`, the `AllReduce` operator is inserted, using the global communication group. You can also perform custom development based on your network requirements by referring to this file. In MindSpore, standalone and distributed execution share one set of network encapsulation APIs. In the `Cell`, `ParallelMode` is used to determine whether to perform gradient aggregation. For details about the network encapsulation APIs, see the `TrainOneStepCell` code implementation.
+
+
+## Automatic Parallelism
+
+As a key feature of MindSpore, automatic parallelism is used to implement hybrid parallel training that combines automatic data parallelism and model parallelism. It aims to help users express the parallel algorithm logic using standalone scripts, reduce the difficulty of distributed training, improve the algorithm R&D efficiency, and maintain the high performance of training. This section describes how the automatic parallel mode `ParallelMode.AUTO_PARALLEL` and semi-automatic parallel mode `ParallelMode.SEMI_AUTO_PARALLEL` work in MindSpore.
+
+### Principle of Automatic Parallelism
+
+
+
+1. Distributed operator and tensor layout
+
+ As shown in the preceding figure, the automatic parallel process traverses the standalone forward ANF graphs and performs shard modeling on tensors at the granularity of distributed operators. The modeling indicates how the input and output tensors of an operator are distributed to each device of the cluster, that is, the tensor layout. Users do not need to know which device runs which slice of the model; the framework automatically schedules and allocates the model slices.
+
+ To obtain the tensor layout model, each operator has a shard strategy, which indicates the shard status of each input of the operator in the corresponding dimension. Generally, a tensor can be sharded in any dimension as long as the number of slices is a multiple of 2 and the slices are evenly distributed. The following figure shows an example of the three-dimensional `BatchMatmul` operation. The parallel strategy consists of two tuples, indicating the sharding of `input` and `weight`, respectively. Elements in a tuple correspond to tensor dimensions one by one: `2^N` indicates the shard unit, and `1` indicates that the dimension is not sharded. To express a data parallel shard strategy, that is, only the `batch` dimension of `input` is sharded and the other dimensions are not, use `strategy=((2^N, 1, 1),(1, 1, 1))`. To express a model parallel shard strategy, that is, only a non-`batch` dimension of `weight` is sharded, for example, the `channel` dimension, use `strategy=((1, 1, 1),(1, 1, 2^N))`. A hybrid parallel shard strategy combines both, for example, `strategy=((2^N, 1, 1),(1, 1, 2^N))`.
+
+ 
+
+ Based on the shard strategy of an operator, the framework automatically derives the distribution models of its input and output tensors. A distribution model consists of `device_matrix`, `tensor_shape`, and `tensor_map`, which indicate the device matrix shape, the tensor shape, and the mapping between devices and tensor dimensions, respectively. Based on the tensor layout model, the distributed operator determines whether to insert extra computation and communication operations in the graph to ensure that the operator computing logic is correct.
+
+2. Tensor Redistribution
+
+ When the output tensor layout of an operator is inconsistent with the input tensor layout of the next operator, computation and communication operations need to be introduced to implement the change between layouts. The automatic parallel process introduces the tensor redistribution algorithm, which can derive the communication conversion operations between arbitrary tensor layouts. The following three examples represent a parallel computing process of the formula `Z=(X×W)×V`, that is, a `MatMul` operation of two two-dimensional matrices, and show how to perform conversion between different parallel modes.
+
+ In example 1, the output of the first data parallel matrix multiplication is sharded in the row direction, and the input of the second model parallel matrix multiplication requires full tensors. The framework automatically inserts the `AllGather` operator to implement redistribution.
+
+ 
+
+ In example 2, the output of the first model parallel matrix multiplication is sharded in the column direction, and the input of the second model parallel matrix multiplication is sharded in the row direction. The framework automatically inserts a communication operator equivalent to the `AlltoAll` operation in collective communication to implement redistribution.
+
+ 
+
+ In example 3, the output shard mode of the first hybrid parallel matrix multiplication is the same as the input shard mode of the second hybrid parallel matrix multiplication, so redistribution is not required. However, because the related dimensions of the two inputs are sharded in the second matrix multiplication, the `AllReduce` operator needs to be inserted to ensure the operation correctness.
+
+ 
+
+ In general, this distributed representation breaks the boundary between data parallelism and model parallelism, making it easy to implement hybrid parallelism. From the perspective of scripts, users only need to construct a standalone network to express the parallel algorithm logic; the framework automatically shards the entire graph.
+
+3. Efficient parallel strategy search algorithm
+
+ The `SEMI_AUTO_PARALLEL` semi-automatic parallel mode indicates that you manually configure the parallel strategy for operators when you are familiar with the operator sharding representation. This mode is helpful for manual optimization but has certain debugging difficulty: you need to master the parallel principle and obtain a high-performance parallel solution based on the network structure and cluster topology. To further help users accelerate the parallel network training process, the automatic parallel mode `AUTO_PARALLEL` introduces automatic searching of the parallel strategy on the basis of the semi-automatic parallel mode. Automatic parallelism builds cost models based on the hardware platform, and calculates the computation cost, memory cost, and communication cost of a certain amount of data and specific operators based on different parallel strategies. Then, by using the dynamic programming algorithm or the recursive programming algorithm and taking the memory upper limit of a single device as a constraint, a parallel strategy with optimal performance is efficiently searched out.
+
+ Strategy search replaces manual model sharding and provides a high-performance sharding solution within a short period of time, greatly reducing the threshold for parallel training.
+
+
+4. Convenient distributed automatic differentiation
+
+ In addition to forward network communication, the traditional manual model sharding needs to consider backward parallel computing. MindSpore encapsulates communication operations into operators and automatically generates backward propagation of communication operators based on the original automatic differentiation operations of the framework. Therefore, even during distributed training, users only need to pay attention to the forward propagation of the network to implement actual automatic parallel training.
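The shard strategy representation described above can be illustrated with a small validity check, assuming an 8-device cluster (that is, `2^N = 8`). This is a simplified sketch, not MindSpore's actual strategy validation logic:

```python
# Simplified sketch (assuming an 8-device cluster): check that a strategy
# splits every tensor dimension evenly and that the slice count maps onto
# the devices. Illustrative only, not MindSpore's real validation code.
from functools import reduce


def validate_strategy(shapes, strategy, device_num):
    """shapes: one shape tuple per operator input; strategy: one cut tuple
    per input, where each element is the shard count of that dimension."""
    for shape, cuts in zip(shapes, strategy):
        assert len(shape) == len(cuts), "one cut per tensor dimension"
        for dim, cut in zip(shape, cuts):
            assert dim % cut == 0, "dimension must split evenly"
        shards = reduce(lambda a, b: a * b, cuts, 1)
        assert device_num % shards == 0, "slices must map onto the devices"
    return True


# BatchMatmul with input (8, 32, 64) and weight (8, 64, 128) on 8 devices:
shapes = [(8, 32, 64), (8, 64, 128)]
print(validate_strategy(shapes, ((8, 1, 1), (1, 1, 1)), 8))  # data parallel
print(validate_strategy(shapes, ((1, 1, 1), (1, 1, 8)), 8))  # model parallel
print(validate_strategy(shapes, ((8, 1, 1), (1, 1, 8)), 8))  # hybrid parallel
```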
+
+### Automatic Parallel Code
+
+1. Tensor layout model
+ - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/tensor_layout): This directory contains the definitions and implementations of functions related to the tensor layout model. `tensor_layout.h` declares the member variables `tensor_map_origin_`, `tensor_shape_`, and `device_arrangement_` required by a tensor layout model. `tensor_redistribution.h` declares the methods for implementing the transformation between the `from_origin_` and `to_origin_` tensor layouts; the deduced redistribution operations are stored in `operator_list_` and returned. In addition, the communication cost `comm_cost_`, memory cost `memory_cost_`, and computation cost `computation_cost_` required for the redistribution are calculated.
+
+2. Distributed operators
+ - [ops_info](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/ops_info): This directory contains the implementation of distributed operators. `operator_info.h` defines `OperatorInfo`, the base class for distributed operator implementations. A new distributed operator must inherit this base class and implement the related virtual functions. The `InferTensorInfo`, `InferTensorMap`, and `InferDevMatrixShape` functions define the algorithms for deriving the input and output tensor layout model of the operator. The `InferForwardCommunication` and `InferMirrorOps` functions define the extra computation and communication operations to be inserted for operator sharding. The `CheckStrategy` and `GenerateStrategies` functions define parallel strategy validation and generation for the operator. `SetCostUnderStrategy` generates the parallel cost `operator_cost_` of the distributed operator under a given parallel strategy.
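As a hedged illustration of this base-class design (a Python mirror of the C++ `OperatorInfo`, with snake_case method names and a toy MatMul subclass that are inventions of this sketch, not the real API):

```python
from abc import ABC, abstractmethod

# Illustrative mirror of the OperatorInfo base class: a distributed
# operator overrides the inference hooks described above.
class OperatorInfo(ABC):
    def __init__(self, name):
        self.name = name
        self.operator_cost_ = None   # would be filled by SetCostUnderStrategy

    @abstractmethod
    def infer_dev_matrix_shape(self, strategy):
        """Derive the device matrix from the shard strategy."""

    @abstractmethod
    def infer_tensor_map(self):
        """Derive input/output tensor maps on that device matrix."""

    def check_strategy(self, strategy):
        """Validate a user strategy; subclasses tighten this."""
        return all(s >= 1 for dims in strategy for s in dims)

class MatMulInfo(OperatorInfo):
    # MatMul sharded as (i, k) x (k, j): strategy = ((i, k), (k, j))
    def infer_dev_matrix_shape(self, strategy):
        (i, k), (_, j) = strategy
        return [i, k, j]

    def infer_tensor_map(self):
        # left input shards over (i, k); right over (k, j); output (i, j)
        return {"inputs": [[2, 1], [1, 0]], "output": [2, 0]}

m = MatMulInfo("MatMul")
print(m.infer_dev_matrix_shape(((2, 4), (4, 1))))  # [2, 4, 1]
```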
+
+3. Strategy search algorithm
+ - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/auto_parallel): The shard strategy search algorithm is implemented in this directory. `graph_costmodel.h` defines the graph composition information: each vertex represents an operator `OperatorInfo`, and the directed edges defined in `edge_costmodel.h` represent the input-output relationships between operators and the redistribution cost. `operator_costmodel.h` defines the cost model of each operator, including the computation cost, communication cost, and memory cost. `dp_algorithm_costmodel.h` describes the main process of the dynamic programming algorithm, which consists of a series of graph operations. `costmodel.h` defines the data structures for cost and graph operations.
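The cost-model-driven search can be illustrated with a toy dynamic-programming sketch over an operator chain (the real algorithm works on general graphs and richer cost models; all names, costs, and the chain restriction here are assumptions of this sketch):

```python
# Toy strategy search on an operator chain: each operator offers
# candidate strategies with (compute+comm cost, memory cost), and each
# adjacent pair adds a redistribution cost when layouts differ.
# Dynamic programming finds the cheapest chain whose accumulated
# memory stays under the single-device limit.

def search(ops, redist_cost, mem_limit):
    # best[strategy] = (total_cost, total_mem) for the chain prefix
    best = {s: (c, m) for s, (c, m) in ops[0].items() if m <= mem_limit}
    for op in ops[1:]:
        nxt = {}
        for s, (c, m) in op.items():
            cands = [
                (bc + c + redist_cost(ps, s), bm + m)
                for ps, (bc, bm) in best.items()
                if bm + m <= mem_limit
            ]
            if cands:
                nxt[s] = min(cands)
        best = nxt
    return min(best.values()) if best else None

ops = [
    {"row": (4.0, 2.0), "col": (3.0, 3.0)},   # strategy: (cost, memory)
    {"row": (5.0, 2.0), "col": (2.0, 4.0)},
]
redist = lambda a, b: 0.0 if a == b else 2.0
print(search(ops, redist, mem_limit=6.0))  # (8.0, 6.0)
```

Tightening the memory limit prunes the cheap-but-large strategies and forces a costlier but smaller plan, which is exactly the trade-off the constraint encodes.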
+
+4. Device management
+ - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/device_manager.h): This file is used to create and manage cluster device communication groups. The device matrix model is defined by `device_matrix.h`, and the communication domain is managed by `group_manager.h`.
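The device-matrix idea can be sketched briefly (illustrative only, not the `device_matrix.h` implementation; both helper names are invented): a cluster of N devices is viewed as a multi-dimensional array, and a communication group along one axis is the set of ranks that agree on every other coordinate.

```python
import math

def rank_to_coord(rank, arrangement):
    """Coordinates of a flat rank in the device matrix."""
    coord = []
    for dim in reversed(arrangement):    # last axis varies fastest
        coord.append(rank % dim)
        rank //= dim
    return list(reversed(coord))

def group_along_axis(rank, axis, arrangement):
    """Ranks forming the communication group of `rank` along one axis."""
    me = rank_to_coord(rank, arrangement)
    total = math.prod(arrangement)
    return [r for r in range(total)
            if all(c == m for i, (c, m) in
                   enumerate(zip(rank_to_coord(r, arrangement), me))
                   if i != axis)]

# 8 devices as a 2x4 matrix: rank 5 -> [1, 1]; its group along axis 1
# is the four ranks in the same row.
print(rank_to_coord(5, [2, 4]))        # [1, 1]
print(group_along_axis(5, 1, [2, 4]))  # [4, 5, 6, 7]
```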
+
+5. Entire graph sharding
+ - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h) and [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_parallel.h): These two files contain the core implementation of the automatic parallel process. `step_auto_parallel.h` calls the strategy search process and generates the `OperatorInfo` of each distributed operator. Then `step_parallel.h` performs operator sharding and tensor redistribution to reconstruct the standalone computing graph in distributed mode.
+
+
+6. Backward propagation of communication operators
+ - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/_grad/grad_comm_ops.py): This file defines the backward propagation of communication operators, such as `AllReduce` and `AllGather`.
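The idea behind these hand-written backward rules can be sketched as a bprop registry in which each collective's backward is its dual collective, a well-known identity for the AllGather/ReduceScatter pair (this is an illustration only, not MindSpore's actual registration API; the decorator and tuple encoding are inventions of this sketch):

```python
# Sketch of the grad_comm_ops.py idea: register a hand-written bprop
# per communication op so autodiff can flow through the distributed
# graph. The adjoint of AllGather is ReduceScatter, and vice versa.

BPROP = {}

def register_bprop(op_name):
    def deco(fn):
        BPROP[op_name] = fn
        return fn
    return deco

@register_bprop("AllGather")
def bprop_all_gather(x, out, dout):
    # gradient of a gather is a reduce-scatter of the output gradient
    return ("ReduceScatter", dout)

@register_bprop("ReduceScatter")
def bprop_reduce_scatter(x, out, dout):
    return ("AllGather", dout)

print(BPROP["AllGather"](None, None, "g")[0])  # ReduceScatter
```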
diff --git a/lite/docs/source_en/images/MindSpore-Lite-architecture.png b/docs/note/source_en/design/mindspore/images/MindSpore-Lite-architecture.png
similarity index 100%
rename from lite/docs/source_en/images/MindSpore-Lite-architecture.png
rename to docs/note/source_en/design/mindspore/images/MindSpore-Lite-architecture.png
diff --git a/docs/source_en/images/architecture.eddx b/docs/note/source_en/design/mindspore/images/architecture.eddx
similarity index 100%
rename from docs/source_en/images/architecture.eddx
rename to docs/note/source_en/design/mindspore/images/architecture.eddx
diff --git a/docs/source_en/images/architecture.png b/docs/note/source_en/design/mindspore/images/architecture.png
similarity index 100%
rename from docs/source_en/images/architecture.png
rename to docs/note/source_en/design/mindspore/images/architecture.png
diff --git a/docs/source_zh_cn/design/mindspore/images/auto_parallel.png b/docs/note/source_en/design/mindspore/images/auto_parallel.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/auto_parallel.png
rename to docs/note/source_en/design/mindspore/images/auto_parallel.png
diff --git a/docs/note/source_en/design/mindspore/images/data_parallel.png b/docs/note/source_en/design/mindspore/images/data_parallel.png
new file mode 100644
index 0000000000000000000000000000000000000000..a92c82aa64615b398e83b9bc2cf0aa2c5db9f904
Binary files /dev/null and b/docs/note/source_en/design/mindspore/images/data_parallel.png differ
diff --git a/docs/source_en/design/mindspore/images/ir/cf.dot b/docs/note/source_en/design/mindspore/images/ir/cf.dot
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/cf.dot
rename to docs/note/source_en/design/mindspore/images/ir/cf.dot
diff --git a/docs/source_en/design/mindspore/images/ir/cf.png b/docs/note/source_en/design/mindspore/images/ir/cf.png
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/cf.png
rename to docs/note/source_en/design/mindspore/images/ir/cf.png
diff --git a/docs/source_en/design/mindspore/images/ir/closure.dot b/docs/note/source_en/design/mindspore/images/ir/closure.dot
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/closure.dot
rename to docs/note/source_en/design/mindspore/images/ir/closure.dot
diff --git a/docs/source_en/design/mindspore/images/ir/closure.png b/docs/note/source_en/design/mindspore/images/ir/closure.png
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/closure.png
rename to docs/note/source_en/design/mindspore/images/ir/closure.png
diff --git a/docs/source_en/design/mindspore/images/ir/hof.dot b/docs/note/source_en/design/mindspore/images/ir/hof.dot
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/hof.dot
rename to docs/note/source_en/design/mindspore/images/ir/hof.dot
diff --git a/docs/source_en/design/mindspore/images/ir/hof.png b/docs/note/source_en/design/mindspore/images/ir/hof.png
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/hof.png
rename to docs/note/source_en/design/mindspore/images/ir/hof.png
diff --git a/docs/source_en/design/mindspore/images/ir/ir.dot b/docs/note/source_en/design/mindspore/images/ir/ir.dot
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/ir.dot
rename to docs/note/source_en/design/mindspore/images/ir/ir.dot
diff --git a/docs/source_en/design/mindspore/images/ir/ir.png b/docs/note/source_en/design/mindspore/images/ir/ir.png
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/ir.png
rename to docs/note/source_en/design/mindspore/images/ir/ir.png
diff --git a/docs/source_zh_cn/design/mindspore/images/operator_split.png b/docs/note/source_en/design/mindspore/images/operator_split.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/operator_split.png
rename to docs/note/source_en/design/mindspore/images/operator_split.png
diff --git a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png b/docs/note/source_en/design/mindspore/images/tensor_redistribution.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png
rename to docs/note/source_en/design/mindspore/images/tensor_redistribution.png
diff --git a/docs/note/source_en/design/mindspore/images/tensor_redistribution1.png b/docs/note/source_en/design/mindspore/images/tensor_redistribution1.png
new file mode 100644
index 0000000000000000000000000000000000000000..ed4d79416a0a07f8d75e738aa544d214834ae778
Binary files /dev/null and b/docs/note/source_en/design/mindspore/images/tensor_redistribution1.png differ
diff --git a/docs/note/source_en/design/mindspore/images/tensor_redistribution2.png b/docs/note/source_en/design/mindspore/images/tensor_redistribution2.png
new file mode 100644
index 0000000000000000000000000000000000000000..114f984c66ae578722dbcdbb59ab03c44dbcb097
Binary files /dev/null and b/docs/note/source_en/design/mindspore/images/tensor_redistribution2.png differ
diff --git a/docs/note/source_en/design/mindspore/images/tensor_redistribution3.png b/docs/note/source_en/design/mindspore/images/tensor_redistribution3.png
new file mode 100644
index 0000000000000000000000000000000000000000..dd66c9120615f50f2b3f60cfe139954cb4adf307
Binary files /dev/null and b/docs/note/source_en/design/mindspore/images/tensor_redistribution3.png differ
diff --git a/docs/source_en/design/mindspore/ir.md b/docs/note/source_en/design/mindspore/mindir.md
similarity index 92%
rename from docs/source_en/design/mindspore/ir.md
rename to docs/note/source_en/design/mindspore/mindir.md
index 4837ba94baccb0f15638d6bb744ec13f9035bb1b..8dc2cf5b6c80e55fd9f5bd6e3d3ffa01dcf33f4a 100644
--- a/docs/source_en/design/mindspore/ir.md
+++ b/docs/note/source_en/design/mindspore/mindir.md
@@ -1,7 +1,7 @@
# MindSpore IR (MindIR)
-`Framework Development` `Intermediate` `Expert` `Contributor`
+`Linux` `Windows` `Framework Development` `Intermediate` `Expert` `Contributor`
@@ -18,7 +18,7 @@
-
+
## Overview
An intermediate representation (IR) is a representation of a program between the source and target languages, which facilitates program analysis and optimization for the compiler. Therefore, the IR design needs to consider the difficulty in converting the source language to the target language, as well as the ease-of-use and performance of program analysis and optimization.
@@ -77,7 +77,7 @@ lambda (x, y)
let c = b * %1 in
c end
```
-The corresponding MindIR is [ir.dot](./images/ir/ir.dot).
+The corresponding MindIR is [ir.dot](https://gitee.com/mindspore/docs/blob/r1.0/docs/note/source_en/design/mindspore/images/ir/ir.dot).

@@ -107,7 +107,7 @@ def hof(x):
return res
```
-The corresponding MindIR is [hof.dot](./images/ir/hof.dot).
+The corresponding MindIR is [hof.dot](https://gitee.com/mindspore/docs/blob/r1.0/docs/note/source_en/design/mindspore/images/ir/hof.dot).

In the actual network training scripts, the automatic derivation generic function `GradOperation` and `Partial` and `HyperMap` that are commonly used in the optimizer are typical high-order functions. Higher-order semantics greatly improve the flexibility and simplicity of MindSpore representations.
@@ -127,7 +127,7 @@ def fibonacci(n):
return fibonacci(n-1) + fibonacci(n-2)
```
-The corresponding MindIR is [cf.dot](./images/ir/cf.dot).
+The corresponding MindIR is [cf.dot](https://gitee.com/mindspore/docs/blob/r1.0/docs/note/source_en/design/mindspore/images/ir/cf.dot).

`fibonacci` is a top-level function graph. Two function graphs at the top level are selected and called by `switch`. `✓fibonacci` is the True branch of the first `if`, and `✗fibonacci` is the False branch of the first `if`. `✓✗fibonacci` called in `✗fibonacci` is the True branch of `elif`, and `✗✗fibonacci` is the False branch of `elif`. The key is, in a MindIR, conditional jumps and recursion are represented in the form of higher-order control flows. For example, `✓✗fibonacci` and `✗fibonacci` are transferred in as parameters of the `switch` operator. `switch` selects a function as the return value based on the condition parameter. In this way, `switch` performs a binary selection operation on the input functions as common values and does not call the functions. The real function call is completed on CNode following `switch`.
@@ -152,7 +152,7 @@ def ms_closure():
return out1, out2
```
-The corresponding MindIR is [closure.dot](./images/ir/closure.dot).
+The corresponding MindIR is [closure.dot](https://gitee.com/mindspore/docs/blob/r1.0/docs/note/source_en/design/mindspore/images/ir/closure.dot).

In the example, `a` and `b` are free variables because the variables `a` and `b` in `func_inner` are parameters defined in the referenced parent graph `func_outer`. The variable `closure` is a closure, which is the combination of the function `func_inner` and its context `func_outer(1, 2)`. Therefore, the result of `out1` is 4, which is equivalent to `1+2+1`, and the result of `out2` is 5, which is equivalent to `1+2+2`.
diff --git a/docs/source_en/glossary.md b/docs/note/source_en/glossary.md
similarity index 85%
rename from docs/source_en/glossary.md
rename to docs/note/source_en/glossary.md
index ae1fb21e9168f00bd574fdef787b2a7b3a86f831..c90722a69a5d5818b3e00f6ab2f933d4bc7bcdc3 100644
--- a/docs/source_en/glossary.md
+++ b/docs/note/source_en/glossary.md
@@ -2,7 +2,7 @@
`Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Beginner` `Intermediate` `Expert`
-
+
| Acronym and Abbreviation | Description |
| ----- | ----- |
@@ -32,9 +32,10 @@
| LSTM | Long short-term memory, an artificial recurrent neural network (RNN) architecture used for processing and predicting an important event with a long interval and delay in a time sequence. |
| Manifest | A data format file. Huawei ModelArts adopts this format. For details, see . |
| ME | Mind Expression, MindSpore frontend, which is used to compile tasks from user source code to computational graphs, control execution during training, maintain contexts (in non-sink mode), and dynamically generate graphs (in PyNative mode). |
-| MindArmour | MindSpore security component, which is used for AI adversarial example management, AI model attack defense and enhancement, and AI model robustness evaluation. |
+| MindArmour | The security component of MindSpore, which improves the confidentiality, integrity, and availability of models through techniques such as differential privacy and adversarial attack and defense. MindArmour prevents attackers from maliciously modifying a model or cracking its internal components to steal its parameters. |
| MindData | MindSpore data framework, which provides data loading, enhancement, dataset management, and visualization. |
| MindInsight | MindSpore visualization component, which visualizes information such as scalars, images, computational graphs, and model hyperparameters. |
+| MindRecord | A data format defined by MindSpore, and the module for reading, writing, searching, and converting datasets in the MindSpore format. |
| MindSpore | Huawei-leaded open-source deep learning framework. |
| MindSpore Lite | A lightweight deep neural network inference engine that provides the inference function for models trained by MindSpore on the device side. |
| MNIST database | Modified National Institute of Standards and Technology database, a large handwritten digit database, which is usually used to train various image processing systems. |
@@ -43,5 +44,5 @@
| ResNet-50 | Residual Neural Network 50, a residual neural network proposed by Kaiming He et al. from Microsoft Research. |
| Schema | Data set structure definition file, which defines the fields contained in a dataset and the field types. |
| Summary | An operator that monitors the values of tensors on the network. It is a peripheral operation in the figure and does not affect the data flow. |
-| TBE | Tensor Boost Engine, an operator development tool that is extended based on the Tensor Virtual Machine (TVM) framework. |
+| TBE | Tensor Boost Engine, an NPU operator development tool self-developed by Huawei and extended from the TVM (Tensor Virtual Machine) framework. It provides a set of Python APIs for developing custom operators. |
| TFRecord | Data format defined by TensorFlow. |
diff --git a/docs/source_en/help_seeking_path.md b/docs/note/source_en/help_seeking_path.md
similarity index 92%
rename from docs/source_en/help_seeking_path.md
rename to docs/note/source_en/help_seeking_path.md
index ac924fcfb0868840371df2313fc37080017b06e2..cd54fb6270a9d0049934352dc2b475724c285938 100644
--- a/docs/source_en/help_seeking_path.md
+++ b/docs/note/source_en/help_seeking_path.md
@@ -2,7 +2,7 @@
`Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Beginner` `Intermediate` `Expert`
-
+
This document describes how to seek help and support when you encounter problems in using MindSpore. The following flowchart shows the overall help-seeking process which starts from users encountering a problem in using MindSpore and ends with they finding a proper solution. Help-seeking methods are introduced based on the flowchart.
diff --git a/docs/note/source_en/image_classification.md b/docs/note/source_en/image_classification.md
new file mode 100644
index 0000000000000000000000000000000000000000..8f3e0c25c8e0378ad48e5abc6316fe4e3ce4e254
--- /dev/null
+++ b/docs/note/source_en/image_classification.md
@@ -0,0 +1,32 @@
+# Image classification
+
+
+
+## Image classification introduction
+
+Image classification is to identify what an image represents and to predict the object list and the probabilities. For example, the following table shows the classification results after model inference.
+
+
+
+| Category | Probability |
+| ---------- | ----------- |
+| plant | 0.9359 |
+| flower | 0.8641 |
+| tree | 0.8584 |
+| houseplant | 0.7867 |
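As a hedged illustration of how a model's raw per-label scores could become the ranked (category, probability) list above: since the table's probabilities sum to more than 1.0, a per-label sigmoid (multi-label) fits better than a softmax. The labels, scores, and the `top_labels` helper here are invented for illustration, not taken from the model.

```python
import math

# Invented example: turn per-label scores into a ranked label list by
# applying a sigmoid per label and keeping labels above a threshold.
def top_labels(scores, labels, threshold=0.5):
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    probs = [(l, sigmoid(s)) for l, s in zip(labels, scores)]
    probs.sort(key=lambda p: -p[1])
    return [(l, round(p, 4)) for l, p in probs if p >= threshold]

print(top_labels([2.7, 1.8, -0.4], ["plant", "flower", "cat"]))
```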
+
+See this [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification) of using MindSpore Lite to implement image classification.
+
+## Image classification model list
+
+The following table lists benchmark data for some image classification models inferred with MindSpore Lite.
+
+> The performance in the table below was tested on a Huawei Mate 30.
+
+| Model name | Size (MB) | Top1 | Top5 | F1 | CPU 4-thread latency (ms) |
+|-----------------------| :----------: | :----------: | :----------: | :----------: | :-----------: |
+| [MobileNetV2](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms) | 11.5 | - | - | 65.5% | 14.595 |
+| [Inceptionv3](https://download.mindspore.cn/model_zoo/official/lite/inceptionv3_lite/inceptionv3.ms) | 90.9 | 78.62% | 94.08% | - | 92.086 |
+| [Shufflenetv2](https://download.mindspore.cn/model_zoo/official/lite/shufflenetv2_lite/shufflenetv2.ms) | 8.8 | 67.74% | 87.62% | - | 8.303 |
+| [GoogleNet](https://download.mindspore.cn/model_zoo/official/lite/googlenet_lite/googlenet.ms) | 25.3 | 72.2% | 90.06% | - | 23.257 |
+| [ResNext50](https://download.mindspore.cn/model_zoo/official/lite/resnext50_lite/resnext50.ms) | 95.8 | 73.1% | 91.21% | - | 138.164 |
diff --git a/docs/source_en/images/help_seeking_path.png b/docs/note/source_en/images/help_seeking_path.png
similarity index 100%
rename from docs/source_en/images/help_seeking_path.png
rename to docs/note/source_en/images/help_seeking_path.png
diff --git a/lite/tutorials/source_en/images/lite_quick_start_app_result.png b/docs/note/source_en/images/image_classification_result.png
similarity index 100%
rename from lite/tutorials/source_en/images/lite_quick_start_app_result.png
rename to docs/note/source_en/images/image_classification_result.png
diff --git a/docs/note/source_en/images/object_detection.png b/docs/note/source_en/images/object_detection.png
new file mode 100644
index 0000000000000000000000000000000000000000..ad5425c86393a9367701166796df42c9e4702988
Binary files /dev/null and b/docs/note/source_en/images/object_detection.png differ
diff --git a/docs/source_en/index.rst b/docs/note/source_en/index.rst
similarity index 57%
rename from docs/source_en/index.rst
rename to docs/note/source_en/index.rst
index b998ceb9e7171ee985df6dad5a31ce4ef7528c5f..3e3751cfc891d569cc53f24a3eb7e3d77a1064d1 100644
--- a/docs/source_en/index.rst
+++ b/docs/note/source_en/index.rst
@@ -3,20 +3,13 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-MindSpore Documentation
-=======================
+MindSpore Note
+=================
.. toctree::
:glob:
:maxdepth: 1
design
- roadmap
- benchmark
- network_list
- operator_list
- constraints_on_network_construction
- glossary
- FAQ
- help_seeking_path
- community
+ specification_note
+ others
diff --git a/tutorials/source_zh_cn/advanced_use/auto_data_acceleration.rst b/docs/note/source_en/network_list.rst
similarity index 30%
rename from tutorials/source_zh_cn/advanced_use/auto_data_acceleration.rst
rename to docs/note/source_en/network_list.rst
index 003693d04b59d55bc6673bd1fe49c7550a296bac..e4b29f1e46ddd503a7a7628cb229c97802f76ebe 100644
--- a/tutorials/source_zh_cn/advanced_use/auto_data_acceleration.rst
+++ b/docs/note/source_en/network_list.rst
@@ -1,8 +1,7 @@
-自动数据加速
-========
+Network List
+============
.. toctree::
:maxdepth: 1
- data_processing_acceleration
- cache
+ network_list_ms
\ No newline at end of file
diff --git a/docs/note/source_en/network_list_ms.md b/docs/note/source_en/network_list_ms.md
new file mode 100644
index 0000000000000000000000000000000000000000..4658ea931ff8514a5ba43de448a659ca854b18e6
--- /dev/null
+++ b/docs/note/source_en/network_list_ms.md
@@ -0,0 +1,45 @@
+# MindSpore Network List
+
+`Linux` `Ascend` `GPU` `CPU` `Model Development` `Intermediate` `Expert`
+
+
+
+- [MindSpore Network List](#mindspore-network-list)
+ - [Model Zoo](#model-zoo)
+
+
+
+
+
+## Model Zoo
+
+| Domain | Sub Domain | Network | Ascend | GPU | CPU
+|:------ |:------| :----------- |:------ |:------ |:-----
+|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported
+| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing
+|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
+|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
+|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Mobile Image Classification<br>Image Classification<br>Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
+| Computer Vision (CV) | Mobile Image Classification<br>Image Classification<br>Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
+|Computer Vision (CV) | Targets Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/ssd/src/ssd.py) | Supported |Doing | Doing
+| Computer Vision (CV) | Targets Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Targets Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Targets Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Doing | Doing
+| Computer Vision(CV) | Targets Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
+| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
+| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
+| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
+| Graph Neural Networks(GNN)| Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
+| Graph Neural Networks(GNN)| Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
+
+> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/r1.0/mindinsight/wizard/) to quickly generate classic network scripts.
diff --git a/docs/note/source_en/object_detection.md b/docs/note/source_en/object_detection.md
new file mode 100644
index 0000000000000000000000000000000000000000..b70eb7a83aaa1e78266f8bb002e2a0e115993292
--- /dev/null
+++ b/docs/note/source_en/object_detection.md
@@ -0,0 +1,26 @@
+# Object detection
+
+
+
+## Object detection introduction
+
+Object detection can identify the object in an image and its position in the image. For the following figure, the output of the object detection model is shown in the following table. A rectangular box marks the position of the object in the image, together with the probability of the object category. The four numbers in the coordinates are Xmin, Ymin, Xmax, and Ymax; the probability represents the probability of the detected object.
+
+
+
+| Category | Probability | Coordinate |
+| -------- | ----------- | ---------------- |
+| mouse | 0.78 | [10, 25, 35, 43] |
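Boxes in the [Xmin, Ymin, Xmax, Ymax] format above are typically compared by intersection-over-union (IoU), the matching criterion behind mAP scores; a minimal sketch (the `iou` helper and the second box are invented for illustration):

```python
def iou(a, b):
    """Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# the detected mouse box vs. a slightly shifted box: high overlap
print(iou([10, 25, 35, 43], [12, 25, 35, 43]))  # 0.92
```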
+
+See this [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection) of using MindSpore Lite to implement object detection.
+
+## Object detection model list
+
+The following table lists benchmark data for some object detection models inferred with MindSpore Lite.
+
+> The performance in the table below was tested on a Huawei Mate 30.
+
+| Model name | Size (MB) | mAP(IoU=0.50:0.95) | CPU 4-thread latency (ms) |
+|-----------------------| :----------: | :----------: | :-----------: |
+| [MobileNetv2-SSD](https://download.mindspore.cn/model_zoo/official/lite/ssd_mobilenetv2_lite/ssd.ms) | 16.7 | 0.22 | 25.4 |
+
diff --git a/docs/note/source_en/operator_list.rst b/docs/note/source_en/operator_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..8a013bafdf8a1b57b72a2e9f0d4444d703f59383
--- /dev/null
+++ b/docs/note/source_en/operator_list.rst
@@ -0,0 +1,10 @@
+Operator List
+=============
+
+.. toctree::
+ :maxdepth: 1
+
+ operator_list_ms
+ operator_list_implicit
+ operator_list_parallel
+ operator_list_lite
\ No newline at end of file
diff --git a/docs/note/source_en/operator_list_implicit.md b/docs/note/source_en/operator_list_implicit.md
new file mode 100644
index 0000000000000000000000000000000000000000..db2b251de53c852abea095db4316337da3e1ae38
--- /dev/null
+++ b/docs/note/source_en/operator_list_implicit.md
@@ -0,0 +1,104 @@
+# MindSpore Implicit Type Conversion Operator List
+
+`Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert`
+
+
+
+- [MindSpore Implicit Type Conversion Operator List](#mindspore-implicit-type-conversion-operator-list)
+ - [Implicit Type Conversion](#implicit-type-conversion)
+ - [Conversion Rules](#conversion-rules)
+ - [Data Types Involved in Conversion](#data-types-involved-in-conversion)
+ - [Support Ops](#support-ops)
+
+
+
+
+
+## Implicit Type Conversion
+
+### Conversion Rules
+* Scalar and Tensor operations: during the operation, the scalar is automatically converted to a Tensor whose data type is consistent with the Tensor involved in the operation;
+when the Tensor is of type bool and the scalar is an int or float, both are converted to a Tensor of type int32 or float32, respectively.
+* Tensor operations on different data types: the data type priority is bool < uint8 < int8 < int16 < int32 < int64 < float16 < float32; during the operation, the Tensor with the lower-priority data type is converted to the higher-priority data type.
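A hedged Python sketch of these rules (illustrative only; `result_dtype` is an invented helper, and dtypes are modeled as strings rather than MindSpore dtype objects):

```python
# Sketch of the implicit conversion rules: a priority list picks the
# result dtype for mixed tensors; a scalar adopts the tensor's dtype,
# except that a bool tensor combined with an int/float scalar promotes
# to int32/float32.
PRIORITY = ["bool", "uint8", "int8", "int16", "int32", "int64",
            "float16", "float32"]

def result_dtype(tensor_dtype, other):
    if isinstance(other, str):                  # tensor op tensor
        return max(tensor_dtype, other, key=PRIORITY.index)
    if tensor_dtype == "bool":                  # bool tensor + scalar
        return "int32" if isinstance(other, int) else "float32"
    return tensor_dtype                         # scalar follows tensor

print(result_dtype("int8", "float16"))   # float16
print(result_dtype("bool", 1))           # int32
print(result_dtype("bool", 1.0))         # float32
print(result_dtype("float32", 2))        # float32
```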
+
+| Operation | CPU<br>FP16 | CPU<br>FP32 | CPU<br>Int8 | CPU<br>UInt8 | GPU<br>FP16 | GPU<br>FP32 | TensorFlow<br>Lite op supported | Caffe<br>Lite op supported | Onnx<br>Lite op supported |
+|-----------------------|----------|----------|-----------|----------|----------|------------------|----------|----------|----------|
+| Abs | | Supported | Supported | Supported | Supported | Supported | Abs | | Abs |
+| Add | Supported | Supported | Supported | Supported | Supported | Supported | Add | | Add |
+| AddN | | Supported | | | | | AddN | | |
+| Argmax | | Supported | Supported | Supported | | | Argmax | ArgMax | ArgMax |
+| Argmin | | Supported | Supported | Supported | | | Argmin | | |
+| AvgPool | Supported | Supported | Supported | Supported | Supported | Supported | MeanPooling| Pooling | AveragePool |
+| BatchNorm | Supported | Supported | Supported | Supported | Supported | Supported | | BatchNorm | BatchNormalization |
+| BatchToSpace | | Supported | Supported | Supported | | | BatchToSpace, BatchToSpaceND | | |
+| BiasAdd | | Supported | Supported | Supported | Supported | Supported | | | BiasAdd |
+| Broadcast | | Supported | | | | | BroadcastTo | | Expand |
+| Cast | Supported | Supported | | Supported | Supported | Supported | Cast, DEQUANTIZE* | | Cast |
+| Ceil | | Supported | Supported | Supported | Supported | Supported | Ceil | | Ceil |
+| Concat | Supported | Supported | Supported | Supported | Supported | Supported | Concat | Concat | Concat |
+| Conv2d | Supported | Supported | Supported | Supported | Supported | Supported | Conv2D | Convolution | Conv |
+| Conv2dTranspose | Supported | Supported | Supported | Supported | Supported | Supported | DeConv2D | Deconvolution | ConvTranspose |
+| Cos | | Supported | Supported | Supported | Supported | Supported | Cos | | Cos |
+| Crop | | Supported | Supported | Supported | | | | Crop | |
+| DeDepthwiseConv2D | | Supported | Supported | Supported | | | | Deconvolution| ConvTranspose |
+| DepthToSpace | | Supported | Supported | Supported | | | DepthToSpace| | DepthToSpace |
+| DepthwiseConv2dNative | Supported | Supported | Supported | Supported | Supported | Supported | DepthwiseConv2D | Convolution | Convolution |
+| Div | Supported | Supported | Supported | Supported | Supported | Supported | Div, RealDiv | | Div |
+| Eltwise | Supported | Supported | | | | | | Eltwise | |
+| Elu | | Supported | | | | | Elu | | Elu |
+| Equal | Supported | Supported | Supported | Supported | | | Equal | | Equal |
+| Exp | | Supported | | | Supported | Supported | Exp | | Exp |
+| ExpandDims | | Supported | | | | | | | |
+| Fill | | Supported | | | | | Fill | | |
+| Flatten | | Supported | | | | | | Flatten | |
+| Floor | | Supported | Supported | Supported | Supported | Supported | Floor | | Floor |
+| FloorDiv | Supported | Supported | | | | | FloorDiv | | |
+| FloorMod | Supported | Supported | | | | | FloorMod | | |
+| FullConnection | | Supported | Supported | Supported | Supported | Supported | FullyConnected | InnerProduct | |
+| GatherNd | | Supported | Supported | Supported | | | GatherND | | |
+| GatherV2 | | Supported | Supported | Supported | | | Gather | | Gather |
+| Greater | Supported | Supported | Supported | Supported | | | Greater | | Greater |
+| GreaterEqual | Supported | Supported | Supported | Supported | | | GreaterEqual| | |
+| Hswish | Supported | Supported | Supported | Supported | | | HardSwish | | |
+| LeakyReLU | Supported | Supported | | | Supported | Supported | LeakyRelu | | LeakyRelu |
+| Less | Supported | Supported | Supported | Supported | | | Less | | Less |
+| LessEqual | Supported | Supported | Supported | Supported | | | LessEqual | | |
+| LRN | | Supported | | | | | LocalResponseNorm | | LRN |
+| Log | | Supported | Supported | Supported | Supported | Supported | Log | | Log |
+| LogicalAnd | Supported | Supported | | | | | LogicalAnd | | |
+| LogicalNot | | Supported | Supported | Supported | Supported | Supported | LogicalNot | | |
+| LogicalOr | Supported | Supported | | | | | LogicalOr | | |
+| LSTM | | Supported | | | | | | | |
+| MatMul | | Supported | Supported | Supported | Supported | Supported | | | MatMul |
+| Maximum | Supported | Supported | | | | | Maximum | | Max |
+| MaxPool | Supported | Supported | Supported | Supported | Supported | Supported | MaxPooling | Pooling | MaxPool |
+| Minimum | Supported | Supported | | | | | Minimum | | Min |
+| Mul | Supported | Supported | Supported | Supported | Supported | Supported | Mul | | Mul |
+| NotEqual | Supported | Supported | Supported | Supported | | | NotEqual | | |
+| OneHot | | Supported | | | | | OneHot | | |
+| Pad | | Supported | Supported | Supported | | | Pad | | Pad |
+| Pow | | Supported | Supported | Supported | | | Pow | Power | Pow |
+| PReLU | | Supported | | | Supported | Supported | | PReLU | |
+| Range | | Supported | | | | | Range | | |
+| Rank | | Supported | | | | | Rank | | |
+| ReduceMax | Supported | Supported | Supported | Supported | | | ReduceMax | | ReduceMax |
+| ReduceMean | Supported | Supported | Supported | Supported | | | Mean | | ReduceMean |
+| ReduceMin | Supported | Supported | Supported | Supported | | | ReduceMin | | ReduceMin |
+| ReduceProd | Supported | Supported | Supported | Supported | | | ReduceProd | | |
+| ReduceSum | Supported | Supported | Supported | Supported | | | Sum | | ReduceSum |
+| ReduceSumSquare | Supported | Supported | Supported | Supported | | | | | |
+| ReLU | Supported | Supported | Supported | Supported | Supported | Supported | Relu | ReLU | Relu |
+| ReLU6 | Supported | Supported | Supported | Supported | Supported | Supported | Relu6 | ReLU6 | Clip* |
+| Reshape | Supported | Supported | Supported | Supported | Supported | Supported | Reshape | Reshape | Reshape,Flatten |
+| Resize | | Supported | Supported | Supported | | | ResizeBilinear, ResizeNearestNeighbor | Interp | |
+| Reverse | | Supported | | | | | Reverse | | |
+| ReverseSequence | | Supported | | | | | ReverseSequence | | |
+| Round | | Supported | Supported | Supported | Supported | Supported | Round | | |
+| Rsqrt | | Supported | Supported | Supported | Supported | Supported | Rsqrt | | |
+| Scale | | Supported | | | Supported | Supported | | Scale | |
+| ScatterNd | | Supported | | | | | ScatterNd | | |
+| Shape | | Supported | | | | | Shape | | Shape |
+| Sigmoid | Supported | Supported | Supported | Supported | Supported | Supported | Logistic | Sigmoid | Sigmoid |
+| Sin | | Supported | Supported | Supported | Supported | Supported | Sin | | Sin |
+| Slice | | Supported | Supported | Supported | Supported | Supported | Slice | | Slice |
+| Softmax | Supported | Supported | Supported | Supported | Supported | Supported | Softmax | Softmax | Softmax |
+| SpaceToBatch | | Supported | | | | | | | |
+| SpaceToBatchND | | Supported | | | | | SpaceToBatchND | | |
+| SpaceToDepth | | Supported | | | | | SpaceToDepth | | SpaceToDepth |
+| SparseToDense | | Supported | | | | | SparseToDense | | |
+| Split | Supported | Supported | Supported | Supported | | | Split, SplitV | | |
+| Sqrt | | Supported | Supported | Supported | Supported | Supported | Sqrt | | Sqrt |
+| Square | | Supported | Supported | Supported | Supported | Supported | Square | | |
+| SquaredDifference | | Supported | | | | | SquaredDifference | | |
+| Squeeze | | Supported | Supported | Supported | | | Squeeze | | Squeeze |
+| StridedSlice | | Supported | Supported | Supported | | | StridedSlice| | |
+| Stack | | Supported | | | | | Stack | | |
+| Sub | Supported | Supported | Supported | Supported | Supported | Supported | Sub | | Sub |
+| Tanh | Supported | Supported | | | Supported | Supported | Tanh | TanH | |
+| Tile | | Supported | | | | | Tile | | Tile |
+| TopK | | Supported | Supported | Supported | | | TopKV2 | | |
+| Transpose | Supported | Supported | | | Supported | Supported | Transpose | Permute | Transpose |
+| Unique | | Supported | | | | | Unique | | |
+| Unsqueeze | | Supported | Supported | Supported | | | | | Unsqueeze |
+| Unstack | | Supported | | | | | Unstack | | |
+| Where | | Supported | | | | | Where | | |
+| ZerosLike | | Supported | | | | | ZerosLike | | |
+
+* Clip: only Clip operators with min=0 and max=6 are supported; they are converted to Relu6.
+* DEQUANTIZE: only dequantization from fp16 to fp32 is supported.
diff --git a/docs/note/source_en/operator_list_ms.md b/docs/note/source_en/operator_list_ms.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a36aadd6e31bef0f7261cf9bf3de42d7246075d
--- /dev/null
+++ b/docs/note/source_en/operator_list_ms.md
@@ -0,0 +1,396 @@
+# MindSpore Operator List
+
+`Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert`
+
+
+
+- [MindSpore Operator List](#mindspore-operator-list)
+ - [mindspore.nn](#mindsporenn)
+ - [mindspore.ops](#mindsporeops)
+ - [mindspore.ops.functional](#mindsporeopsfunctional)
+ - [Distributed Operator](#distributed-operator)
+ - [Implicit Type Conversion](#implicit-type-conversion)
+ - [conversion rules](#conversion-rules)
+ - [data types involved in conversion](#data-types-involved-in-conversion)
+ - [support ops](#support-ops)
+
+
+
+
+
+## mindspore.nn
+
+| Operation | Ascend | GPU | CPU | Operator Type
+| :----------- |:------ |:------ |:-----|:---
+| [mindspore.nn.Softmax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Softmax) | Supported | Supported | Supported |layer/activation
+| [mindspore.nn.LogSoftmax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LogSoftmax) | Supported | Supported | Doing |layer/activation
+| [mindspore.nn.ReLU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ReLU) | Supported | Supported | Supported |layer/activation
+| [mindspore.nn.ReLU6](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ReLU6) |Supported | Supported | Supported |layer/activation
+| [mindspore.nn.HSwish](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.HSwish) | Doing | Supported | Doing |layer/activation
+| [mindspore.nn.HSigmoid](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.HSigmoid) | Doing | Supported | Doing |layer/activation
+| [mindspore.nn.LeakyReLU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLU) | Supported |Supported | Doing |layer/activation
+| [mindspore.nn.Tanh](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Tanh) | Supported | Supported | Doing |layer/activation
+| [mindspore.nn.GELU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.GELU) | Supported | Supported | Doing |layer/activation
+| [mindspore.nn.Sigmoid](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Sigmoid) | Supported |Supported | Doing |layer/activation
+| [mindspore.nn.PReLU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.PReLU) | Supported |Doing | Doing |layer/activation
+| [mindspore.nn.Dropout](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Dropout) |Supported | Supported | Supported |layer/basic
+| [mindspore.nn.Flatten](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Supported |layer/basic
+| [mindspore.nn.Dense](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Dense) |Supported | Supported | Supported |layer/basic
+| [mindspore.nn.ClipByNorm](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ClipByNorm) |Supported | Supported | Doing |layer/basic
+| [mindspore.nn.Norm](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Supported | Supported | Doing |layer/basic
+| [mindspore.nn.OneHot](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.OneHot) | Supported | Supported | Supported |layer/basic
+| [mindspore.nn.Range](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Range) | Supported | Doing | Doing |layer/basic
+| [mindspore.nn.SequentialCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SequentialCell) |Supported | Supported | Doing |layer/container
+| [mindspore.nn.CellList](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.CellList) | Supported | Supported | Doing |layer/container
+| [mindspore.nn.Conv2d](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) | Supported | Supported | Supported |layer/conv
+| [mindspore.nn.Conv2dTranspose](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dTranspose) | Supported | Supported | Doing |layer/conv
+| [mindspore.nn.Conv1d](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv1d) | Supported | Supported | Doing |layer/conv
+| [mindspore.nn.Conv1dTranspose](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv1dTranspose) | Supported | Supported | Doing |layer/conv
+| [mindspore.nn.Embedding](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Embedding) |Supported | Supported | Doing |layer/embedding
+| [mindspore.nn.ImageGradients](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ImageGradients) | Supported |Supported | Doing |layer/image
+| [mindspore.nn.SSIM](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SSIM) | Supported | Supported | Doing |layer/image
+| [mindspore.nn.PSNR](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.PSNR) | Supported |Doing | Doing |layer/image
+| [mindspore.nn.CentralCrop](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.CentralCrop) | Supported |Doing | Doing |layer/image
+| [mindspore.nn.LSTM](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LSTM) | Doing | Supported | Supported |layer/lstm
+| [mindspore.nn.GlobalBatchNorm](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.GlobalBatchNorm) | Supported |Doing | Doing |layer/normalization
+| [mindspore.nn.BatchNorm1d](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm1d) | Supported |Doing | Doing |layer/normalization
+| [mindspore.nn.BatchNorm2d](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) | Supported | Supported | Doing |layer/normalization
+| [mindspore.nn.GroupNorm](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.GroupNorm) | Supported | Doing | Doing |layer/normalization
+| [mindspore.nn.LayerNorm](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LayerNorm) | Supported | Supported | Doing |layer/normalization
+| [mindspore.nn.MatrixDiag](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiag) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MatrixDiagPart](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiagPart) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MatrixSetDiag](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MatrixSetDiag) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.LinSpace](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LinSpace) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MaxPool2d](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MaxPool2d) | Supported | Supported | Supported |layer/pooling
+| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) | Supported | Supported | Doing |layer/pooling
+| [mindspore.nn.DenseBnAct](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.DenseBnAct) |Supported | Doing | Doing |layer/quant
+| [mindspore.nn.Conv2dBnAct](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnAct) | Supported | Supported | Doing |layer/quant
+| [mindspore.nn.FakeQuantWithMinMax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.FakeQuantWithMinMax) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnFoldQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnWithoutFoldQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnWithoutFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.DenseQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.DenseQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.ActQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ActQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.LeakyReLUQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLUQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSwishQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.HSwishQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSigmoidQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.HSigmoidQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.TensorAddQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.TensorAddQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.MulQuant](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MulQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.L1Loss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Supported |Supported | Doing |loss/loss
+| [mindspore.nn.MSELoss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MSELoss) | Supported |Doing | Doing |loss/loss
+| [mindspore.nn.SmoothL1Loss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SmoothL1Loss) |Supported |Doing | Doing |loss/loss
+| [mindspore.nn.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyWithLogits) | Supported | Supported | Supported |loss/loss
+| [mindspore.nn.SoftmaxCrossEntropyExpand](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyExpand) | Supported |Supported | Doing |loss/loss
+| [mindspore.nn.CosineEmbeddingLoss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.CosineEmbeddingLoss) |Supported |Supported | Doing |loss/loss
+| [mindspore.nn.ProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ProximalAdagrad) | Supported | Doing | Doing |optim/ProximalAdagrad
+| [mindspore.nn.LazyAdam](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LazyAdam) | Supported | Doing | Doing |optim/lazyadam
+| [mindspore.nn.Adam](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Adam) | Supported |Doing | Doing |optim/adam
+| [mindspore.nn.AdamWeightDecay](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.AdamWeightDecay) | Supported | Supported | Doing |optim/adam
+| [mindspore.nn.Lamb](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Lamb) | Supported | Supported | Doing |optim/lamb
+| [mindspore.nn.LARS](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LARS) |Supported |Doing | Doing |optim/lars
+| [mindspore.nn.Momentum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Momentum) | Supported | Supported | Supported |optim/momentum
+| [mindspore.nn.Optimizer](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Optimizer) | Supported | Supported | Doing |optim/optimizer
+| [mindspore.nn.RMSProp](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.RMSProp) | Supported | Supported | Doing |optim/optimizer
+| [mindspore.nn.SGD](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SGD) |Supported |Doing | Doing |optim/sgd
+| [mindspore.nn.WithLossCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.WithLossCell) | Supported | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.WithGradCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.WithGradCell) | Supported | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.TrainOneStepCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepCell) | Supported | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.GetNextSingleOp](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.GetNextSingleOp) |Doing | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.WithEvalCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.WithEvalCell) | Supported | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.ParameterUpdate](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ParameterUpdate) | Supported |Doing | Doing |wrap/cell_wrapper
+| [mindspore.nn.DistributedGradReducer](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.DistributedGradReducer) | Supported |Doing | Doing |wrap/grad_reducer
+| [mindspore.nn.DynamicLossScaleUpdateCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.DynamicLossScaleUpdateCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.FixedLossScaleUpdateCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.FixedLossScaleUpdateCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.TrainOneStepWithLossScaleCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepWithLossScaleCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.Cell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Cell) | Supported | Supported | Supported |cell
+| [mindspore.nn.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.EmbeddingLookup) |Supported | Supported | Supported |layer/embedding
+| [mindspore.nn.Pad](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Pad) |Supported | Supported | Doing |layer/basic
+
+## mindspore.ops
+
+| Operation | Ascend | GPU | CPU | Operator Type
+| :----------- |:------ |:------ |:-----|:---
+| [mindspore.ops.Flatten](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Flatten) | Supported | Supported |Supported | nn_ops
+| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Softmax) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Acosh) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloorMod) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Elu) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.MirrorPad](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MirrorPad) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Unpack](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Unpack) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pack) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.L2Loss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.L2Loss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.CTCLoss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CTCLoss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.RNNTLoss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RNNTLoss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogSoftmax) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Softplus) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReLU) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReLU6) | Supported | Supported |Supported | nn_ops
+| [mindspore.ops.HSwish](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.HSwish) | Doing | Supported |Doing | nn_ops
+| [mindspore.ops.HSigmoid](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.HSigmoid) | Doing | Supported |Doing | nn_ops
+| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sigmoid) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tanh) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.BatchNorm](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchNorm) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.LRN](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LRN) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.Conv2D](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Conv2D) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.DepthwiseConv2dNative](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNative) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNativeBackpropInput) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.DepthwiseConv2dNativeBackpropFilter](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNativeBackpropFilter) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.MaxPoolWithArgmax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MaxPoolWithArgmax) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.MaxPool](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MaxPool) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.AvgPool](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AvgPool) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.Conv2DBackpropInput](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Conv2DBackpropInput) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BiasAdd) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TopK) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SoftmaxCrossEntropyWithLogits) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.SparseSoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseSoftmaxCrossEntropyWithLogits) | Doing | Supported | Supported | nn_ops
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyMomentum) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyAddSign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyPowerSign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyGradientDescent) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalGradientDescent) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyRMSProp](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyRMSProp) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.ApplyCenteredRMSProp](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyCenteredRMSProp) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagradV2) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseProximalAdagrad) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseLazyAdam) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseAdam) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.SmoothL1Loss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SmoothL1Loss) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SGD](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SGD) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.LayerNorm](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LayerNorm) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.L2Normalize) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DropoutGenMask) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DropoutDoMask) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ResizeBilinear](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ResizeBilinear) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.OneHot) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Gelu) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.GetNext](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GetNext) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.PReLU) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LSTM](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LSTM) | Doing | Supported | Supported | nn_ops
+| [mindspore.ops.BasicLSTMCell](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BasicLSTMCell) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Pad](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pad) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.ROIAlign](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ROIAlign) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Adam](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Adam) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.BinaryCrossEntropy](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BinaryCrossEntropy) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.KLDivLoss](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.KLDivLoss) | Doing | Supported | Doing | nn_ops
+| [mindspore.ops.LARSUpdate](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LARSUpdate) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Softsign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignAdd) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMean) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceSum) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceAll](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceAll) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMax) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMin) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.ReduceProd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceProd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.CumProd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CumProd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MatMul) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchMatMul) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.CumSum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CumSum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.AddN](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AddN) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Neg) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sub) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Mul) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Square) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.SquareSumAll](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SquareSumAll) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Rsqrt) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Reciprocal) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pow) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Exp) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Log) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Log1p) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Minimum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Maximum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RealDiv) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Div) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DivNoNan) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Floor) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Equal) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.EqualCount](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.EqualCount) | Doing | Supported | Supported | math_ops
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Greater) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GreaterEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Less) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Atan2) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LessEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseAnd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseOr) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Ceil) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Inv) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Invert](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Invert) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseXor) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUAllocFloatStatus](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NPUAllocFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUGetFloatStatus](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NPUGetFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUClearFloatStatus](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NPUClearFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.FloatStatus](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloatStatus) | Doing | Supported | Doing | math_ops
+| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cos) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cosh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ACos) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BesselI0e) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BesselI1e) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TruncateDiv) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TruncateMod) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tan) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Asin) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Asinh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Erf) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Erfc) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sin) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sinh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Expm1) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NMSWithMask](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NMSWithMask) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Abs) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sign) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Round) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApproximateEqual) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.InplaceAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InplaceAdd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.InplaceSub](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InplaceSub) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Mod) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.DType](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DType) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.SameTypeShape](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SameTypeShape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cast) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.IsSubClass](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.IsSubClass) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.IsInstance](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.IsInstance) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Shape](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Shape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Squeeze) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Transpose) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Split) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.Rank](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Rank) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.TruncatedNormal](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TruncatedNormal) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Size](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Size) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Fill](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Fill) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.OnesLike) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ZerosLike) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.TupleToArray](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TupleToArray) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScalarToArray](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScalarToArray) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScalarToTensor](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScalarToTensor) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.InvertPermutation](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InvertPermutation) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Argmax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Argmax) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Argmin](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Argmin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ArgMaxWithValue) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ArgMinWithValue) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tile) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentSum) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentMin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentProd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentProd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Concat) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ParallelConcat](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ParallelConcat) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Slice) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Select](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Select) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.StridedSlice) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Diag](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Diag) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.DiagPart](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DiagPart) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.Eye](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Eye) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScatterNd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNd) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ResizeNearestNeighbor](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ResizeNearestNeighbor) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.GatherNd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherNd) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ApplyFtrl](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyFtrl) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseFtrl) | Doing | Doing | Supported | array_ops
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrlV2) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdUpdate) | Supported | Doing | Supported | array_ops
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterUpdate) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMul) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterDiv) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToDepth](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SpaceToDepth) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.DepthToSpace](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DepthToSpace) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToBatch](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SpaceToBatch) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToBatchND](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SpaceToBatchND) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.BatchToSpace](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchToSpace) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.BatchToSpaceND](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchToSpaceND) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.IsFinite](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.IsFinite) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.InplaceUpdate](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InplaceUpdate) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterSub) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMax) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdAdd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdSub) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNonAliasingAdd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Rint](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Rint) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ReverseV2](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReverseV2) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ReduceOp](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceOp) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.AllReduce](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AllReduce) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.AllGather](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AllGather) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.ReduceScatter](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceScatter) | Doing | Supported | Doing | comm_ops
+| [mindspore.ops.Broadcast](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Broadcast) | Supported | Doing | Doing | comm_ops
+| [mindspore.ops.ControlDepend](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ControlDepend) | Supported | Supported | Supported | control_ops
+| [mindspore.ops.GeSwitch](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GeSwitch) | Doing | Doing | Doing | control_ops
+| [mindspore.ops.Merge](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Merge) | Doing | Doing | Doing | control_ops
+| [mindspore.ops.ScalarSummary](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScalarSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.ImageSummary](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ImageSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.TensorSummary](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.HistogramSummary](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.HistogramSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.InsertGradientOf](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InsertGradientOf) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.Print](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Print) | Supported | Doing | Doing | debug_ops
+| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Assign) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.BoundingBoxEncode](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BoundingBoxEncode) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.BoundingBoxDecode](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BoundingBoxDecode) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.PopulationCount](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.PopulationCount) | Supported | Doing | Doing | other_ops
+| [mindspore.ops.CheckValid](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CheckValid) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.IOU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.IOU) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.MakeRefKey](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MakeRefKey) | Supported | Supported | Supported | other_ops
+| [mindspore.ops.InTopK](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InTopK) | Supported | Doing | Doing | other_ops
+| [mindspore.ops.StandardNormal](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.StandardNormal) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.Gamma](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Gamma) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.Poisson](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Poisson) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.UniformInt](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UniformInt) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.UniformReal](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UniformReal) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.RandomChoiceWithMask](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RandomChoiceWithMask) | Doing | Supported | Doing | random_ops
+| [mindspore.ops.RandomCategorical](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RandomCategorical) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.ScalarCast](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScalarCast) | Supported | Supported | Supported | inner_ops
+| [mindspore.ops.ReverseSequence](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReverseSequence) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.CropAndResize](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CropAndResize) | Supported | Doing | Doing | image_ops
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SquaredDifference) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Xdivy) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Xlogy) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.HistogramFixedWidth](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BNTrainingReduce](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BNTrainingReduce) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.BNTrainingUpdate](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BNTrainingUpdate) | Supported | Doing | Doing | nn_ops
+
+## mindspore.ops.functional
+
+| Operation | functional Operation
+| :----------- | :-----------
+| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pack) | pack
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | tensor_add
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | assign_sub
+| [mindspore.ops.AddN](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AddN) | addn
+| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Square) | square
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | sqrt
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Equal) | equal
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | not_equal
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | logical_not
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd) | logical_and
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr) | logical_or
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | expand_dims
+| [mindspore.ops.DType](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DType) | dtype
+| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cast) | cast
+| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | reshape
+| [mindspore.ops.Shape](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Shape) | shape
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | gather
+| [mindspore.ops.Rank](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Rank) | rank
+| [mindspore.ops.Size](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Size) | size
+| [mindspore.ops.Fill](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Fill) | fill
+| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.OnesLike) | ones_like
+| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tile) | tile
+| [mindspore.ops.Select](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Select) | select
+| [mindspore.ops.ScatterNd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNd) | scatter_nd
+| [mindspore.ops.GatherNd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherNd) | gather_nd
+| [mindspore.ops.ControlDepend](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ControlDepend) | control_depend
+| [mindspore.ops.Print](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Print) | print
+| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Assign) | assign
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pow) | tensor_pow
+
+> At present, functional covers only some operators that have no attributes; more will be added in the future.
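The pattern behind the table above is that an attribute-free primitive can be instantiated once and exposed as a module-level callable. A minimal plain-Python sketch of that idea (the `TensorAdd` and `Softmax` classes here are hypothetical stand-ins, not the real MindSpore primitives):

```python
# Sketch: why only attribute-free primitives get functional aliases.

class TensorAdd:
    """Hypothetical stand-in for an attribute-free primitive."""
    def __call__(self, x, y):
        return x + y

class Softmax:
    """Hypothetical stand-in for a primitive with an attribute."""
    def __init__(self, axis=-1):
        # A construction-time attribute prevents sharing one fixed instance,
        # so no single functional alias can cover every configuration.
        self.axis = axis

# The functional namespace simply binds a name to one shared instance.
tensor_add = TensorAdd()

print(tensor_add(1, 2))  # 3
```

In MindSpore itself these shared instances live in `mindspore.ops.functional`, commonly imported as `F`.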
diff --git a/docs/note/source_en/operator_list_parallel.md b/docs/note/source_en/operator_list_parallel.md
new file mode 100644
index 0000000000000000000000000000000000000000..0685405dae687070784918154fc9dbb63df735f1
--- /dev/null
+++ b/docs/note/source_en/operator_list_parallel.md
@@ -0,0 +1,75 @@
+# MindSpore Distributed Operator List
+
+`Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert`
+
+
+
+- [MindSpore Distributed Operator List](#mindspore-distributed-operator-list)
+ - [Distributed Operator](#distributed-operator)
+
+
+
+
+
+## Distributed Operator
+
+| op name | constraints
+| :----------- | :-----------
+| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ACos) | None
+| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cos) | None
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | None
+| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Log) | None
+| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Exp) | None
+| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogSoftmax) | The logits can't be split along the axis dimension; otherwise the result is mathematically inconsistent with that on a single machine.
+| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Softmax) | The logits can't be split along the axis dimension; otherwise the result is mathematically inconsistent with that on a single machine.
+| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tanh) | None
+| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Gelu) | None
+| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReLU) | None
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | None
+| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cast) | None
+| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Neg) | None
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | None
+| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Squeeze) | None
+| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Square) | None
+| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sigmoid) | None
+| [mindspore.ops.Dropout](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Dropout) | Repeated calculation is not supported.
+| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Div) | None
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | None
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RealDiv) | None
+| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Mul) | None
+| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sub) | None
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pow) | None
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv) | None
+| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Greater) | None
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | None
+| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SigmoidCrossEntropyWithLogits) | None
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Equal) | None
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | None
+| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Maximum) | None
+| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Minimum) | None
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BiasAdd) | None
+| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Concat) | The input_x can't be split along the axis dimension; otherwise the result is mathematically inconsistent with that on a single machine.
+| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DropoutGenMask) | Needs to be used in conjunction with `DropoutDoMask`.
+| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DropoutDoMask) | Needs to be used in conjunction with `DropoutGenMask`; configuring its shard strategy is not supported.
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | Only 1-dim and 2-dim input_params are supported, and the last dimension of input_params must be 32-byte aligned; scalar input_indices is not supported; repeated calculation is not supported when input_params is split along the axis dimension; splitting input_indices and input_params at the same time is not supported.
+| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseGatherV2) | The same as GatherV2.
+| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.EmbeddingLookup) | The same as GatherV2.
+| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.L2Normalize) | The input_x can't be split along the axis dimension; otherwise the result is mathematically inconsistent with that on a single machine.
+| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SoftmaxCrossEntropyWithLogits) | The last dimension of the logits and labels can't be split; only output[0] is supported.
+| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MatMul) | `transpose_a=True` is not supported.
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchMatMul) | `transpose_a=True` is not supported.
+| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.PReLU) | When the shape of weight is not [1], the shard strategy in channel dimension of input_x should be consistent with weight.
+| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.OneHot) | Only 1-dim indices are supported. The shard strategy for the output and for the first and second inputs must be configured.
+| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceSum) | None
+| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMax) | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine.
+| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMin) | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine.
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ArgMinWithValue) | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine.
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ArgMaxWithValue) | When input_x is split along the axis dimension, the distributed result may be inconsistent with that on a single machine.
+| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMean) | None
+| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | Configuring shard strategy is not supported.
+| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.StridedSlice) | Only masks with all 0 values are supported; the dimensions to be split must be fully extracted; split is not supported when the stride of a dimension is 1.
+| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tile) | Only configuring the shard strategy for multiples is supported.
+| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Transpose) | None
+
+> Repeated calculation means that the devices are not fully utilized. For example, if a cluster has 8 devices running distributed training but the splitting strategy only cuts the input into 4 slices, each slice is computed by two devices, so repeated calculation occurs.
+>
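The repetition factor described in the note can be computed directly from the device count and the shard strategy. A small sketch in plain Python (`repetition_factor` is an illustrative helper, not a MindSpore API):

```python
from math import prod

def repetition_factor(device_num, strategy):
    """Number of devices that redundantly compute each input slice.

    strategy: per-dimension split counts for the input tensor.
    A factor > 1 means repeated calculation (devices not fully used).
    """
    slices = prod(strategy)
    assert device_num % slices == 0, "device number must be divisible by slice count"
    return device_num // slices

# The example from the note: 8 devices, input cut into 4 slices.
print(repetition_factor(8, (2, 2)))  # 2: every slice is computed twice
```

A strategy whose split counts multiply to the device number, e.g. `(2, 4)` on 8 devices, gives a factor of 1 and avoids repeated calculation.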
diff --git a/docs/note/source_en/others.rst b/docs/note/source_en/others.rst
new file mode 100644
index 0000000000000000000000000000000000000000..61a9cdf7576454946cfc8700fe75b80718a191dc
--- /dev/null
+++ b/docs/note/source_en/others.rst
@@ -0,0 +1,11 @@
+Others
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ glossary
+ roadmap
+ help_seeking_path
+ community
+
diff --git a/docs/source_en/roadmap.md b/docs/note/source_en/roadmap.md
similarity index 96%
rename from docs/source_en/roadmap.md
rename to docs/note/source_en/roadmap.md
index 774456c99e9b0106487ef04508f71c9f527a6ea6..890ea9d6a426fd207ecc261d6c9eadfbfe9ba166 100644
--- a/docs/source_en/roadmap.md
+++ b/docs/note/source_en/roadmap.md
@@ -14,7 +14,7 @@
-
+
MindSpore's top priority plans in the year are displayed as follows. We will continuously adjust the priority based on user feedback.
diff --git a/docs/note/source_en/specification_note.rst b/docs/note/source_en/specification_note.rst
new file mode 100644
index 0000000000000000000000000000000000000000..bd2b83eebf0b4807e02e051337bac7b8e047a1cb
--- /dev/null
+++ b/docs/note/source_en/specification_note.rst
@@ -0,0 +1,13 @@
+Specification Note
+==================
+
+.. toctree::
+ :maxdepth: 1
+
+ benchmark
+ network_list
+ operator_list
+ constraints_on_network_construction
+ image_classification
+ object_detection
+
diff --git a/docs/note/source_zh_cn/_static/logo_notebook.png b/docs/note/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/note/source_zh_cn/_static/logo_notebook.png differ
diff --git a/tutorials/source_zh_cn/_static/logo_source.png b/docs/note/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from tutorials/source_zh_cn/_static/logo_source.png
rename to docs/note/source_zh_cn/_static/logo_source.png
diff --git a/docs/source_zh_cn/benchmark.md b/docs/note/source_zh_cn/benchmark.md
similarity index 96%
rename from docs/source_zh_cn/benchmark.md
rename to docs/note/source_zh_cn/benchmark.md
index ef06735f46f6673c99111035a22acb68433056fe..54f7ea80da94fdd4099078ae1fc4fc370eb40bcb 100644
--- a/docs/source_zh_cn/benchmark.md
+++ b/docs/note/source_zh_cn/benchmark.md
@@ -13,7 +13,7 @@
-
+
本文介绍MindSpore的基准性能。MindSpore网络定义可参考[Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)。
diff --git a/docs/source_zh_cn/community.rst b/docs/note/source_zh_cn/community.rst
similarity index 86%
rename from docs/source_zh_cn/community.rst
rename to docs/note/source_zh_cn/community.rst
index 04ffbb4cde98b2fdb3646e754840ef042f019292..1095f0665a783779dea8dbdb6468ef8d1c99a987 100644
--- a/docs/source_zh_cn/community.rst
+++ b/docs/note/source_zh_cn/community.rst
@@ -9,4 +9,4 @@
贡献文档
-----------
-如何贡献文档,请参见链接 https://gitee.com/mindspore/docs/blob/master/CONTRIBUTING_DOC.md 。
\ No newline at end of file
+如何贡献文档,请参见链接 https://gitee.com/mindspore/docs/blob/r1.0/CONTRIBUTING_DOC_CN.md 。
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/conf.py b/docs/note/source_zh_cn/conf.py
similarity index 92%
rename from lite/docs/source_zh_cn/conf.py
rename to docs/note/source_zh_cn/conf.py
index dd8e0482bb14ee1ec4242dcd8e550cbb12eb0712..95d7701759707ab95a3c199cd8a22e2e2cc1194d 100644
--- a/lite/docs/source_zh_cn/conf.py
+++ b/docs/note/source_zh_cn/conf.py
@@ -15,9 +15,9 @@ import os
# -- Project information -----------------------------------------------------
-project = 'MindSpore Lite'
-copyright = '2020, MindSpore Lite'
-author = 'MindSpore Lite'
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
# The full version, including alpha/beta/rc tags
release = 'master'
@@ -48,8 +48,6 @@ exclude_patterns = []
pygments_style = 'sphinx'
-autodoc_inherit_docstrings = False
-
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
@@ -59,4 +57,6 @@ html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
+html_search_options = {'dict': '../../resource/jieba.txt'}
+
html_static_path = ['_static']
\ No newline at end of file
diff --git a/docs/source_zh_cn/constraints_on_network_construction.md b/docs/note/source_zh_cn/constraints_on_network_construction.md
similarity index 76%
rename from docs/source_zh_cn/constraints_on_network_construction.md
rename to docs/note/source_zh_cn/constraints_on_network_construction.md
index 8b352b2625d65ebdb811e75a84caddc9a71f0b78..1018ac956f37b5717719d9aa768718654bf60dad 100644
--- a/docs/source_zh_cn/constraints_on_network_construction.md
+++ b/docs/note/source_zh_cn/constraints_on_network_construction.md
@@ -25,7 +25,7 @@
-
+
## 概述
MindSpore完成从用户源码到计算图的编译,用户源码基于Python语法编写,当前MindSpore支持将普通函数或者继承自nn.Cell的实例转换生成计算图,暂不支持将任意Python源码转换成计算图,所以对于用户源码支持的写法有所限制,主要包括语法约束和网络定义约束两方面。随着MindSpore的演进,这些约束可能会发生变化。
@@ -207,8 +207,8 @@ tuple也支持切片取值操作, 但不支持切片类型为Tensor类型,支
## 网络定义约束
### 整网实例类型
-* 带[@ms_function](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.html#mindspore.ms_function)装饰器的普通Python函数。
-* 继承自[nn.Cell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell)的Cell子类。
+* 带[@ms_function](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.html#mindspore.ms_function)装饰器的普通Python函数。
+* 继承自[nn.Cell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Cell)的Cell子类。
### 网络输入类型
* 整网的训练数据输入参数只能是Tensor类型。
@@ -221,44 +221,59 @@ tuple也支持切片取值操作, 但不支持切片类型为Tensor类型,支
| 类别 | 内容
| :----------- |:--------
-| `Cell`实例 |[mindspore/nn/*](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html)、自定义[Cell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell)。
+| `Cell`实例 |[mindspore/nn/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html)、自定义[Cell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Cell)。
| `Cell`实例的成员函数 | Cell的construct中可以调用其他类成员函数。
| 函数 | 自定义Python函数、前文中列举的系统函数。
| dataclass实例 | 使用@dataclass装饰的类。
-| Primitive算子 |[mindspore/ops/operations/*](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html)
-| Composite算子 |[mindspore/ops/composite/*](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.composite.html)
-| constexpr生成算子 |使用[@constexpr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.constexpr)生成的值计算算子。
+| Primitive算子 |[mindspore/ops/operations/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html)
+| Composite算子 |[mindspore/ops/composite/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html)
+| constexpr生成算子 |使用[@constexpr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.constexpr)生成的值计算算子。
### 其他约束
-整网construct函数输入的参数以及使用ms_function装饰器修饰的函数的参数在图编译过程中会进行泛化,不能作为常量输入传给算子使用。所以,在图模式下,限制入口网络的参数只能是Tensor,如下例所示:
-* 错误的写法如下:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
-
- def construct(self, input_x, input_axis):
- return self.expandDims(input_x, input_axis)
- expand_dim = ExpandDimsTest()
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x, 0)
- ```
- 在示例中,ExpandDimsTest是一个只有单算子的网络,网络的输入有input_x和input_axis两个。因为ExpandDims算子的第二个输入需要是常量,这是因为在图编译过程中推导ExpandDims算子输出维度的时候需要用到,而input_axis作为网络参数输入会泛化成变量,无法确定其值,从而无法推导算子的输出维度导致图编译失败。所以在图编译阶段需要值推导的输入都应该是常量输入。在API中,这类算子需要常量输入的参数会进行说明,标注"constant input is needed"。
+1. 整网`construct`函数输入的参数以及使用`ms_function`装饰器修饰的函数的参数在图编译过程中会进行泛化,不能作为常量输入传给算子使用。所以,在图模式下,限制入口网络的参数只能是`Tensor`,如下例所示:
+
+ * 错误的写法如下:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+
+ def construct(self, input_x, input_axis):
+ return self.expandDims(input_x, input_axis)
+ expand_dim = ExpandDimsTest()
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x, 0)
+ ```
+ 在示例中,`ExpandDimsTest`是一个只有单算子的网络,网络的输入有`input_x`和`input_axis`两个。`ExpandDims`算子的第二个输入需要是常量,因为在图编译过程中推导`ExpandDims`算子输出维度时需要用到它;而`input_axis`作为网络参数输入会泛化成变量,无法确定其值,从而无法推导算子的输出维度,导致图编译失败。所以在图编译阶段需要值推导的输入都应该是常量输入。在API中,这类算子需要常量输入的参数会进行说明,标注"constant input is needed"。
+
+ * 正确的写法是在construct函数里面对算子的常量输入直接填入需要的值或者是一个类的成员变量,如下:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self, axis):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+ self.axis = axis
+
+ def construct(self, input_x):
+ return self.expandDims(input_x, self.axis)
+ axis = 0
+ expand_dim = ExpandDimsTest(axis)
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x)
+ ```
+
+2. 不允许修改网络的非`Parameter`类型数据成员。示例如下:
-* 正确的写法是在construct函数里面对算子的常量输入直接填入需要的值或者是一个类的成员变量,如下:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self, axis):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
- self.axis = axis
-
- def construct(self, input_x):
- return self.expandDims(input_x, self.axis)
- axis = 0
- expand_dim = ExpandDimsTest(axis)
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x)
```
+ class Net(Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.num = 2
+ self.par = Parameter(Tensor(np.ones((2, 3, 4))), name="par")
+
+ def construct(self, x, y):
+ return x + y
+ ```
+ 上面所定义的网络里,`self.num`不是一个`Parameter`,不允许被修改,而`self.par`是一个`Parameter`,可以被修改。
diff --git a/docs/note/source_zh_cn/design.rst b/docs/note/source_zh_cn/design.rst
new file mode 100644
index 0000000000000000000000000000000000000000..2eddcf074fcd968f211f11e0c48f600c97ae157d
--- /dev/null
+++ b/docs/note/source_zh_cn/design.rst
@@ -0,0 +1,17 @@
+设计说明
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ design/technical_white_paper
+ design/mindspore/architecture
+ design/mindspore/architecture_lite
+ design/mindspore/mindir
+ design/mindspore/distributed_training_design
+ design/mindinsight/profiler_design
+ design/mindinsight/training_visual_design
+ design/mindinsight/graph_visual_design
+ design/mindinsight/tensor_visual_design
+ design/mindarmour/differential_privacy_design
+ design/mindarmour/fuzzer_design
diff --git a/docs/source_zh_cn/design/mindarmour/differential_privacy_design.md b/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md
similarity index 95%
rename from docs/source_zh_cn/design/mindarmour/differential_privacy_design.md
rename to docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md
index 276219655693f4c3424f23906cde901cd465217a..72868f5dc71065bebed6525c1b404b15b8b6a7cb 100644
--- a/docs/source_zh_cn/design/mindarmour/differential_privacy_design.md
+++ b/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md
@@ -1,70 +1,70 @@
-# 差分隐私
-
-`Linux` `Ascend` `模型开发` `模型调优` `框架开发` `企业` `高级` `贡献者`
-
-
-
-- [差分隐私](#差分隐私)
- - [总体设计](#总体设计)
- - [差分隐私优化器](#差分隐私优化器)
- - [差分隐私的噪声机制](#差分隐私的噪声机制)
- - [Monitor](#monitor)
- - [代码实现](#代码实现)
- - [参考文献](#参考文献)
-
-
-
-
-
-## 总体设计
-
-MindArmour的Differential-Privacy模块实现了差分隐私训练的能力。模型的训练主要由构建训练数据集、计算损失、计算梯度以及更新模型参数等过程组成,目前MindArmour的差分隐私训练主要着力于计算梯度的过程,通过相应的算法对梯度进行裁剪、加噪等处理,从而保护用户数据隐私。
-
-
-
-图1 差分隐私总体设计
-
-图1是差分隐私训练的总体设计,主要由差分隐私噪声机制(DP Mechanisms)、差分隐私优化器(DP Optimizer)、差分隐私监控器(Privacy Monitor)组成。
-
-
-### 差分隐私优化器
-
-差分隐私优化器继承了MindSpore优化器的能力,并使用差分隐私的噪声机制对梯度加扰保护。目前,MindArmour提供三类差分隐私优化器:固定高斯优化器、自适应高斯优化器、自适应裁剪优化器,每类差分隐私优化器从不同的角度为SGD、Momentum等常规优化器增加差分隐私保护的能力。
-
-* 固定高斯优化器,是一种非自适应高斯噪声的差分隐私优化器。其优势在于可以严格控制差分隐私预算ϵ,缺点是在模型训练过程中,每个Step添加的噪声量固定,若迭代次数过大,训练后期的噪声使得模型收敛困难,甚至导致性能大幅下跌,模型可用性差。
-* 自适应高斯优化器,通过自适应调整标准差,来调整高斯分布噪声的大小,在模型训练初期,添加的噪声量较大,随着模型逐渐收敛,噪声量逐渐减小,噪声对于模型可用性的影响减小。自适应高斯噪声的缺点是不能严格控制差分隐私预算。
-* 自适应裁剪优化器,是一种自适应调整调整裁剪粒度的差分隐私优化器,梯度裁剪是差分隐私训练的一个重要操作,自适应裁剪优化器能够自适应的控制梯度裁剪的的比例在给定的范围波动,控制迭代训练过程中梯度裁剪的粒度。
-
-### 差分隐私的噪声机制
-
-噪声机制是构建差分隐私训练能力的基础,不同的噪声机制满足不同差分隐私优化器的需求,包括固定高斯分布噪声、自适应高斯分布噪声、自适应裁剪高斯分布噪声、拉普拉斯分布噪声等多种机制。
-
-### Monitor
-
-Monitor提供RDP、ZCDP等回调函数,用于监测模型的差分隐私预算。
-
-* ZCDP[2]
-
- ZCDP,zero-concentrated differential privacy,是一种宽松的差分隐私定义,利用Rényi散度来度量随机函数在相邻数据集上的分布差异。
-
-* RDP[3]
-
- RDP,Rényi Differential Privacy,是一种更通用的基于R'enyi散度的差分隐私定义,利用Rényi散度来度量两个相邻数据集的分布差异。
-
-相对于传统差分隐私,ZCDP和RDP都能能够提供更加严格的隐私预算上界保证。
-
-
-## 代码实现
-
-* [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): 这个文件实现了差分隐私训练所需的噪声生成机制,包括简单高斯噪声、自适应高斯噪声、自适应裁剪高斯噪声等。
-* [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): 这个文件实现了使用噪声生成机制在反向传播时添加噪声的根本逻辑。
-* [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py): 实现了计算差分隐私预算的回调函数,模型训练过程中,会反馈当前的差分隐私预算。
-* [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py): 这个文件实现了计算损失和梯度的逻辑,差分隐私训练的梯度截断逻辑在此文件中实现,且model.py是用户使用差分隐私训练能力的入口。
-
-## 参考文献
-
-[1] Dwork, Cynthia, and Jing Lei. "Differential privacy and robust statistics." *Proceedings of the forty-first annual ACM symposium on Theory of computing*. 2009.
-
-[2] Lee, Jaewoo, and Daniel Kifer. "Concentrated differentially private gradient descent with adaptive per-iteration privacy budget." *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*. 2018.
-
-[3] Mironov, Ilya. "Rényi differential privacy." *2017 IEEE 30th Computer Security Foundations Symposium (CSF)*. IEEE, 2017.
+# 差分隐私
+
+`Linux` `Ascend` `模型开发` `模型调优` `框架开发` `企业` `高级` `贡献者`
+
+
+
+- [差分隐私](#差分隐私)
+ - [总体设计](#总体设计)
+ - [差分隐私优化器](#差分隐私优化器)
+ - [差分隐私的噪声机制](#差分隐私的噪声机制)
+ - [Monitor](#monitor)
+ - [代码实现](#代码实现)
+ - [参考文献](#参考文献)
+
+
+
+
+
+## 总体设计
+
+MindArmour的Differential-Privacy模块实现了差分隐私训练的能力。模型的训练主要由构建训练数据集、计算损失、计算梯度以及更新模型参数等过程组成,目前MindArmour的差分隐私训练主要着力于计算梯度的过程,通过相应的算法对梯度进行裁剪、加噪等处理,从而保护用户数据隐私。
+
+
+
+图1 差分隐私总体设计
+
+图1是差分隐私训练的总体设计,主要由差分隐私噪声机制(DP Mechanisms)、差分隐私优化器(DP Optimizer)、差分隐私监控器(Privacy Monitor)组成。
+
+
+### 差分隐私优化器
+
+差分隐私优化器继承了MindSpore优化器的能力,并使用差分隐私的噪声机制对梯度加扰保护。目前,MindArmour提供三类差分隐私优化器:固定高斯优化器、自适应高斯优化器、自适应裁剪优化器,每类差分隐私优化器从不同的角度为SGD、Momentum等常规优化器增加差分隐私保护的能力。
+
+* 固定高斯优化器,是一种非自适应高斯噪声的差分隐私优化器。其优势在于可以严格控制差分隐私预算ϵ,缺点是在模型训练过程中,每个Step添加的噪声量固定,若迭代次数过大,训练后期的噪声使得模型收敛困难,甚至导致性能大幅下跌,模型可用性差。
+* 自适应高斯优化器,通过自适应调整标准差,来调整高斯分布噪声的大小,在模型训练初期,添加的噪声量较大,随着模型逐渐收敛,噪声量逐渐减小,噪声对于模型可用性的影响减小。自适应高斯噪声的缺点是不能严格控制差分隐私预算。
+* 自适应裁剪优化器,是一种自适应调整裁剪粒度的差分隐私优化器。梯度裁剪是差分隐私训练的一个重要操作,自适应裁剪优化器能够自适应地控制梯度裁剪的比例在给定的范围内波动,从而控制迭代训练过程中梯度裁剪的粒度。
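上述三类优化器共同的底层操作是"逐样本梯度裁剪后按机制加噪"。以固定高斯机制为例,可用如下纯NumPy草图示意(`dp_sanitize`为说明用的假设函数,并非MindArmour接口):

```python
import numpy as np

def dp_sanitize(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """按 L2 范数裁剪单样本梯度,再叠加固定高斯噪声(示意实现)。"""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(grad)
    # 范数超过 clip_norm 时按比例缩小,否则保持不变
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # 固定高斯机制:噪声标准差固定为 noise_multiplier * clip_norm
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

g = np.array([3.0, 4.0])  # L2 范数为 5,将被裁剪到 clip_norm=1
print(np.linalg.norm(dp_sanitize(g, noise_multiplier=0.0)))  # 1.0
```

噪声标准差固定不变对应文中的固定高斯优化器;自适应高斯优化器则会随训练进程逐步调小该标准差。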
+
+### 差分隐私的噪声机制
+
+噪声机制是构建差分隐私训练能力的基础,不同的噪声机制满足不同差分隐私优化器的需求,包括固定高斯分布噪声、自适应高斯分布噪声、自适应裁剪高斯分布噪声、拉普拉斯分布噪声等多种机制。
+
+### Monitor
+
+Monitor提供RDP、ZCDP等回调函数,用于监测模型的差分隐私预算。
+
+* ZCDP[2]
+
+ ZCDP,zero-concentrated differential privacy,是一种宽松的差分隐私定义,利用Rényi散度来度量随机函数在相邻数据集上的分布差异。
+
+* RDP[3]
+
+ RDP,Rényi Differential Privacy,是一种更通用的基于Rényi散度的差分隐私定义,利用Rényi散度来度量两个相邻数据集的分布差异。
+
+相对于传统差分隐私,ZCDP和RDP都能够提供更加严格的隐私预算上界保证。
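作为补充,RDP的形式化定义可写成如下公式(依据文献[3]中的定义,此处仅作示意):随机机制 $M$ 满足 $(\alpha,\epsilon)$-RDP,当且仅当对任意相邻数据集 $D$、$D'$:

```latex
D_{\alpha}\bigl(M(D)\,\|\,M(D')\bigr)
  = \frac{1}{\alpha-1}\,
    \log \mathbb{E}_{x \sim M(D')}\!\left[\left(\frac{\Pr[M(D)=x]}{\Pr[M(D')=x]}\right)^{\alpha}\right]
  \le \epsilon
```

而zCDP可理解为要求该Rényi散度对所有 $\alpha>1$ 都不超过 $\rho\alpha$ 形式的线性上界。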
+
+
+## 代码实现
+
+* [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): 这个文件实现了差分隐私训练所需的噪声生成机制,包括简单高斯噪声、自适应高斯噪声、自适应裁剪高斯噪声等。
+* [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): 这个文件实现了使用噪声生成机制在反向传播时添加噪声的根本逻辑。
+* [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py): 实现了计算差分隐私预算的回调函数,模型训练过程中,会反馈当前的差分隐私预算。
+* [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py): 这个文件实现了计算损失和梯度的逻辑,差分隐私训练的梯度截断逻辑在此文件中实现,且model.py是用户使用差分隐私训练能力的入口。
+
+## 参考文献
+
+[1] Dwork, Cynthia, and Jing Lei. "Differential privacy and robust statistics." *Proceedings of the forty-first annual ACM symposium on Theory of computing*. 2009.
+
+[2] Lee, Jaewoo, and Daniel Kifer. "Concentrated differentially private gradient descent with adaptive per-iteration privacy budget." *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*. 2018.
+
+[3] Mironov, Ilya. "Rényi differential privacy." *2017 IEEE 30th Computer Security Foundations Symposium (CSF)*. IEEE, 2017.
diff --git a/docs/source_zh_cn/design/mindarmour/fuzzer_design.md b/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md
similarity index 79%
rename from docs/source_zh_cn/design/mindarmour/fuzzer_design.md
rename to docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md
index 81a730c30a9ff3804fea82b355544b9894bfa8c1..bbf75dd2d8b5c6b3d01680f6e3c172e177812224 100644
--- a/docs/source_zh_cn/design/mindarmour/fuzzer_design.md
+++ b/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md
@@ -1,73 +1,73 @@
-# AI模型安全测试
-
-`Linux` `Ascend` `GPU` `CPU` `数据准备` `模型开发` `模型训练` `模型调优` `企业` `高级`
-
-
-
-- [AI模型安全测试](#ai模型安全测试)
- - [背景](#背景)
- - [Fuzzer设计图](#Fuzzer设计图)
- - [Fuzzer流程](#Fuzzer流程)
- - [代码实现](#代码实现)
- - [参考文献](#参考文献)
-
-
-
-
-
-## 背景
-
-不同于[传统程序的Fuzz安全测试](https://zhuanlan.zhihu.com/p/43432370),MindArmour针对深度神经网络,提供AI模型安全测试模块Fuzzer。根据神经网络的特点,引入神经元覆盖率[1]的概念,作为Fuzz的测试指导,引导Fuzz朝神经元覆盖率增加的方向生成样本,让输入能够激活更多的神经元,神经元值的分布范围更广,以充分测试DNN,探索不同类型的模型输出结果、模型错误行为。
-
-## Fuzzer设计图
-
-AI模型安全测试设计图如下。
-
-
-
-在用户接口层,需要用户提供原始数据集`DataSet`、被测试模型`Model`和配置Fuzzer参数`Fuzzer configuration`。Fuzzer模块对模型和数据进行Fuzz测试后,返回安全评估报告`Security Report`。
-
-Fuzzer架构主要包括三个模块:
-
-1. Natural Threat/Adversarial Example Generator(数据变异模块):
-
- 随机选择变异方法对种子数据变异生成多个变种。支持多种样本的变异策略, 包括:
-
- - 图像仿射变换方法如:平移、旋转、缩放、错切。
- - 基于图像像素值变化的方法如:改变对比度、亮度、模糊、加噪。
- - 基于对抗攻击的白盒、黑盒对抗样本生成方法,如FGSM、PGD、MDIIM。
-
-2. Fuzzer moduler(变异指导模块):
-
- 对变异生成的数据进行fuzz测试,观察神经元覆盖率的变化情况,如果生成的数据使得神经元覆盖率增加,则加入变异的种子队列,用于下一轮的数据变异。目前支持的神经元覆盖率指标包括KMNC、NBC、SNAC[2]。
-
-3. Evaluation(评估模块):
-
- 评估Fuzzer效果,生成数据的质量,变异方法的强度。支持3个类型5种指标,包括通用评价指标:accuracy,神经元覆盖率指标:kmnc, nbc,snac,对抗攻击评价指标:attack_success_rate。
-
-## Fuzzer流程
-
-
-
-具体的Fuzzer流程如下:
-
-1. 根据策略从种子队列中选择一个种子A。
-2. 随机选择变异策略,对种子A进行变异,生成多个变种数据A1,A2...
-3. 用目标模型对变种A1,A2...进行预测,如果变种使得目标模型预测错误,则改变种进入Failed tests。
-4. 若目标模型对于变种的预测结果是正确的,用神经元覆盖率指标进行分析。
-5. 如果变种使得覆盖率增加,那么将该变种放入种子队列,用于下一轮变异。
-
-通过多轮循环,我们获得一系列变异数据Fuzzed Tests,并进一步分析,从多个角度给出安全报告。可以用于深入分析神经网络模型的缺陷,从而针对这些缺陷,进行模型增强等,改善提升模型的通用性、鲁棒性。
-
-## 代码实现
-
-1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py):Fuzzer总体流程。
-2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py):神经元覆盖率指标,包括KMNC,NBC,SNAC。
-3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py):图像变异方法,包括基于像素值的变化方法和仿射变化方法。
-4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks):对抗样本攻击方法,包含多种黑盒、白盒攻击方法。
-
-## 参考文献
-
-[1] Pei K, Cao Y, Yang J, et al. Deepxplore: Automated whitebox testing of deep learning systems[C]//Proceedings of the 26th Symposium on Operating Systems Principles. ACM, 2017: 1-18.
-
-[2]Ma L, Juefei-Xu F, Zhang F, et al. Deepgauge: Multi-granularity testing criteria for deep learning systems[C]//Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ACM, 2018: 120-131.
+# AI模型安全测试
+
+`Linux` `Ascend` `GPU` `CPU` `数据准备` `模型开发` `模型训练` `模型调优` `企业` `高级`
+
+
+
+- [AI模型安全测试](#ai模型安全测试)
+ - [背景](#背景)
+ - [Fuzz Testing设计图](#fuzz-testing设计图)
+ - [Fuzz Testing流程](#fuzz-testing流程)
+ - [代码实现](#代码实现)
+ - [参考文献](#参考文献)
+
+
+
+
+
+## 背景
+
+不同于[传统程序的Fuzz安全测试](https://zhuanlan.zhihu.com/p/43432370),MindArmour针对深度神经网络,提供AI模型安全测试模块fuzz_testing。根据神经网络的特点,引入神经元覆盖率[1]的概念,作为Fuzz的测试指导,引导Fuzz朝神经元覆盖率增加的方向生成样本,让输入能够激活更多的神经元,神经元值的分布范围更广,以充分测试DNN,探索不同类型的模型输出结果、模型错误行为。
+
+## Fuzz Testing设计图
+
+AI模型安全测试设计图如下。
+
+
+
+在用户接口层,需要用户提供原始数据集`DataSet`、被测试模型`Model`和配置Fuzzer参数`Fuzzer configuration`。Fuzzer模块对模型和数据进行Fuzz测试后,返回安全评估报告`Security Report`。
+
+Fuzz Testing架构主要包括三个模块:
+
+1. Natural Threat/Adversarial Example Generator(数据变异模块):
+
+    随机选择变异方法对种子数据变异生成多个变种。支持多种样本的变异策略,包括:
+
+ - 图像仿射变换方法如:平移、旋转、缩放、错切。
+ - 基于图像像素值变化的方法如:改变对比度、亮度、模糊、加噪。
+ - 基于对抗攻击的白盒、黑盒对抗样本生成方法,如FGSM、PGD、MDIIM。
+
+2. Fuzzer module(变异指导模块):
+
+ 对变异生成的数据进行fuzz测试,观察神经元覆盖率的变化情况,如果生成的数据使得神经元覆盖率增加,则加入变异的种子队列,用于下一轮的数据变异。目前支持的神经元覆盖率指标包括KMNC、NBC、SNAC[2]。
+
+3. Evaluation(评估模块):
+
+    评估Fuzz Testing的效果、生成数据的质量以及变异方法的强度。支持3个类型共5种指标,包括通用评价指标:accuracy;神经元覆盖率指标:kmnc、nbc、snac;对抗攻击评价指标:attack_success_rate。
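其中accuracy与attack_success_rate两个指标的含义可以用几行Python示意(示意实现,并非MindArmour的实际接口):

```python
def accuracy(preds, labels):
    """通用评价指标:预测正确的样本比例。"""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def attack_success_rate(adv_preds, labels):
    """对抗攻击评价指标:对抗样本使模型预测出错的比例。"""
    return sum(p != y for p, y in zip(adv_preds, labels)) / len(labels)

labels = [0, 1, 1, 0]
acc = accuracy([0, 1, 0, 0], labels)          # 0.75
asr = attack_success_rate([1, 1, 0, 0], labels)  # 0.5
```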
+
+## Fuzz Testing流程
+
+
+
+具体的Fuzz Testing流程如下:
+
+1. 根据策略从种子队列中选择一个种子A。
+2. 随机选择变异策略,对种子A进行变异,生成多个变种数据A1,A2...
+3. 用目标模型对变种A1,A2...进行预测,如果变种的语义与种子保持一致,则进入Fuzzed Tests。
+4. 若目标模型对于变种的预测结果是正确的,用神经元覆盖率指标进行分析。
+5. 如果变种使得覆盖率增加,那么将该变种放入种子队列,用于下一轮变异。
+
+通过多轮循环,我们获得一系列变异数据Fuzzed Tests,并进一步分析,从多个角度给出安全报告。可以用于深入分析神经网络模型的缺陷,从而针对这些缺陷进行模型增强等,提升模型的通用性、鲁棒性。
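上述流程可以抽象为如下的主循环草图(仅为原理示意,并非fuzzing.py的实际接口;mutate、predict、coverage_gain均为假设的回调函数):

```python
import random

def fuzz(seeds, mutate, predict, coverage_gain, rounds=100):
    """Fuzz Testing主循环示意:选种子 -> 变异 -> 预测 -> 覆盖率反馈。"""
    queue = list(seeds)
    failed_tests, fuzzed_tests = [], []
    for _ in range(rounds):
        seed = random.choice(queue)        # 1. 按策略选种子(此处简化为随机)
        for mutant in mutate(seed):        # 2. 变异生成多个变种
            if not predict(mutant):        # 3. 预测错误 -> Failed tests
                failed_tests.append(mutant)
            else:                          # 4. 预测正确 -> 分析覆盖率
                fuzzed_tests.append(mutant)
                if coverage_gain(mutant):  # 5. 覆盖率增加 -> 放回种子队列
                    queue.append(mutant)
    return failed_tests, fuzzed_tests
```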
+
+## 代码实现
+
+1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py):Fuzzer总体流程。
+2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py):神经元覆盖率指标,包括KMNC,NBC,SNAC。
+3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py):图像变异方法,包括基于像素值的变化方法和仿射变化方法。
+4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks):对抗样本攻击方法,包含多种黑盒、白盒攻击方法。
+
+## 参考文献
+
+[1] Pei K, Cao Y, Yang J, et al. Deepxplore: Automated whitebox testing of deep learning systems[C]//Proceedings of the 26th Symposium on Operating Systems Principles. ACM, 2017: 1-18.
+
+[2] Ma L, Juefei-Xu F, Zhang F, et al. Deepgauge: Multi-granularity testing criteria for deep learning systems[C]//Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ACM, 2018: 120-131.
diff --git a/docs/source_zh_cn/design/mindarmour/images/dp_arch.png b/docs/note/source_zh_cn/design/mindarmour/images/dp_arch.png
similarity index 100%
rename from docs/source_zh_cn/design/mindarmour/images/dp_arch.png
rename to docs/note/source_zh_cn/design/mindarmour/images/dp_arch.png
diff --git a/docs/note/source_zh_cn/design/mindarmour/images/fuzz_architecture.png b/docs/note/source_zh_cn/design/mindarmour/images/fuzz_architecture.png
new file mode 100644
index 0000000000000000000000000000000000000000..d4e8b89bd9a9f4844c59790f5b2114d1d477f927
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindarmour/images/fuzz_architecture.png differ
diff --git a/docs/note/source_zh_cn/design/mindarmour/images/fuzz_process.png b/docs/note/source_zh_cn/design/mindarmour/images/fuzz_process.png
new file mode 100644
index 0000000000000000000000000000000000000000..2e04347f7cfb0819562578a6be1e91b5cc7ce9d5
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindarmour/images/fuzz_process.png differ
diff --git a/docs/source_zh_cn/design/mindinsight/graph_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md
similarity index 96%
rename from docs/source_zh_cn/design/mindinsight/graph_visual_design.md
rename to docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md
index f84aa65eba740aff307dc5fa61435489335cc051..aae96b6575a686ae8897d30c7ecd6b94844d2272 100644
--- a/docs/source_zh_cn/design/mindinsight/graph_visual_design.md
+++ b/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md
@@ -15,7 +15,7 @@
-
+
## 特性背景
diff --git a/docs/note/source_zh_cn/design/mindinsight/images/analyser_class_profiler.png b/docs/note/source_zh_cn/design/mindinsight/images/analyser_class_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..3f785786eb8652e8d1cfc09795e48895da80eef9
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindinsight/images/analyser_class_profiler.png differ
diff --git a/docs/source_zh_cn/design/mindinsight/images/context_profiler.png b/docs/note/source_zh_cn/design/mindinsight/images/context_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/context_profiler.png
rename to docs/note/source_zh_cn/design/mindinsight/images/context_profiler.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/graph_visual_class_design.png b/docs/note/source_zh_cn/design/mindinsight/images/graph_visual_class_design.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/graph_visual_class_design.png
rename to docs/note/source_zh_cn/design/mindinsight/images/graph_visual_class_design.png
diff --git a/tutorials/source_zh_cn/advanced_use/images/graph.png b/docs/note/source_zh_cn/design/mindinsight/images/graph_visual_main.png
similarity index 100%
rename from tutorials/source_zh_cn/advanced_use/images/graph.png
rename to docs/note/source_zh_cn/design/mindinsight/images/graph_visual_main.png
diff --git a/tutorials/source_zh_cn/advanced_use/images/graph_sidebar.png b/docs/note/source_zh_cn/design/mindinsight/images/graph_visual_right_side.png
similarity index 100%
rename from tutorials/source_zh_cn/advanced_use/images/graph_sidebar.png
rename to docs/note/source_zh_cn/design/mindinsight/images/graph_visual_right_side.png
diff --git a/docs/note/source_zh_cn/design/mindinsight/images/module_profiler.png b/docs/note/source_zh_cn/design/mindinsight/images/module_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..f30582b53e046a37e5d97450b148d4e665ba174d
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindinsight/images/module_profiler.png differ
diff --git a/docs/note/source_zh_cn/design/mindinsight/images/parser_module_profiler.png b/docs/note/source_zh_cn/design/mindinsight/images/parser_module_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..8ef3c927013517e341fbe44c7f96f0be05536b80
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindinsight/images/parser_module_profiler.png differ
diff --git a/docs/note/source_zh_cn/design/mindinsight/images/proposer_class_profiler.png b/docs/note/source_zh_cn/design/mindinsight/images/proposer_class_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..3e2d4363e92821b05cafc330573c981a1ab99bbf
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindinsight/images/proposer_class_profiler.png differ
diff --git a/docs/note/source_zh_cn/design/mindinsight/images/proposer_module_profiler.png b/docs/note/source_zh_cn/design/mindinsight/images/proposer_module_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..909dd42c89715d49a11c35764d84aab231b91fb4
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindinsight/images/proposer_module_profiler.png differ
diff --git a/docs/source_zh_cn/design/mindinsight/images/tensor_histogram.png b/docs/note/source_zh_cn/design/mindinsight/images/tensor_histogram.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/tensor_histogram.png
rename to docs/note/source_zh_cn/design/mindinsight/images/tensor_histogram.png
diff --git a/tutorials/source_zh_cn/advanced_use/images/tensor_table.png b/docs/note/source_zh_cn/design/mindinsight/images/tensor_table.png
similarity index 100%
rename from tutorials/source_zh_cn/advanced_use/images/tensor_table.png
rename to docs/note/source_zh_cn/design/mindinsight/images/tensor_table.png
diff --git a/docs/note/source_zh_cn/design/mindinsight/images/time_order_profiler.png b/docs/note/source_zh_cn/design/mindinsight/images/time_order_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..35eef99934ce9d743ebe0294e18ff0b5ea40abab
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindinsight/images/time_order_profiler.png differ
diff --git a/docs/source_zh_cn/design/mindinsight/images/training_visualization_architecture.png b/docs/note/source_zh_cn/design/mindinsight/images/training_visualization_architecture.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/training_visualization_architecture.png
rename to docs/note/source_zh_cn/design/mindinsight/images/training_visualization_architecture.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/training_visualization_data_flow.png b/docs/note/source_zh_cn/design/mindinsight/images/training_visualization_data_flow.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/training_visualization_data_flow.png
rename to docs/note/source_zh_cn/design/mindinsight/images/training_visualization_data_flow.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/training_visualization_data_model.png b/docs/note/source_zh_cn/design/mindinsight/images/training_visualization_data_model.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/training_visualization_data_model.png
rename to docs/note/source_zh_cn/design/mindinsight/images/training_visualization_data_model.png
diff --git a/docs/source_zh_cn/design/mindinsight/profiler_design.md b/docs/note/source_zh_cn/design/mindinsight/profiler_design.md
similarity index 96%
rename from docs/source_zh_cn/design/mindinsight/profiler_design.md
rename to docs/note/source_zh_cn/design/mindinsight/profiler_design.md
index 0171bd3f9bcd4176f75796f6f54c5344be1872de..a709f7107d3e6b0dd11f89c8beeddc552e191ae7 100644
--- a/docs/source_zh_cn/design/mindinsight/profiler_design.md
+++ b/docs/note/source_zh_cn/design/mindinsight/profiler_design.md
@@ -1,174 +1,174 @@
-# Profiler设计文档
-
-`Linux` `Ascend` `GPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者`
-
-
-
-- [Profiler设计文档](#profiler设计文档)
- - [背景](#背景)
- - [Profiler框架设计](#profiler架构设计)
- - [上下文](#上下文)
- - [模块层级结构](#模块层级结构)
- - [内部模块交互](#内部模块交互)
- - [子模块设计](#准备训练脚本)
- - [ProfilerAPI和Controller](#profiler-api-controller)
- - [ProfilerAPI和Controller模块介绍](#profiler-api-controller模块介绍)
- - [Analyser](#analyser)
- - [Analyser模块介绍](#analyser模块介绍)
- - [Analyser模块设计](#analyser模块设计)
- - [Parser](#parser)
- - [Parser模块介绍](#parser模块介绍)
- - [Parser模块设计](#parser模块设计)
- - [Proposer](#proposer)
- - [Proposer模块介绍](#proposer模块介绍)
- - [Proposer模块设计](#proposer模块设计)
-
-
-
-
-
-## 背景
-
-为了支持用户在MindSpore进行模型开发性能调试,需要提供易用的Profile工具,直观地展现网络模型各维度的性能信息,为用户提供易用、丰富的性能分析功能,帮助用户快速定位网络中性能问题。
-
-## Profiler架构设计
-这一章将介绍Profiler的架构设计,第一节从整体Profiler的角度出发介绍其上下文交互关系,第二节将打开Profiler内部,介绍模块层架结构以及模块划分,第三节将介绍模块间的交互调用关系。
-
-### 上下文
-
-Profiler是MindSpore调试调优工具的一部分,在整个使用过程中的上下文环境如下图所示:
-
-
-
-图1:上下文关系图
-
-如上图所示,Profiler与其他部分的交互包括:
-
-1. 在训练脚本中调用MindSpore的Profiler向MindSpore的ada通信模块发送启动收集性能数据的命令,最终由ada生成性能原始数据;
-
-2. MindSpore侧Profiler将在用户脚本中对原始数据进行解析,并在用户指定的文件夹下面生成中间数据结果;
-
-3. Mindinsight侧Profiler对接中间数据,提供可视化Profiler功能供用户使用。
-### 模块层级结构
-
-模块层级划分如下:
-
-
-
-图2:层级模块关系图
-
-
-如上图所示,各个模块功能介绍如下:
-1. ProfilerAPI是代码侧对用户提供的调用入口,为用户提供了性能收集启动接口以及分析接口;
-2. Controller是ProfilerAPI下层的模块,被ProfilerAPI中的启动接口调用,负责控制下方性能收集功能的启动停止,原始数据会被ada写入固定位置;
-3. Parser是性能原始数据解析模块,由于性能原始数据是在设备侧收集的信息,所以信息不能直接被用户所理解,该模块负责将信息进行解析、组合、转换,最终形成用户可理解、上层可分析的中间结果;
-4. Analyser获取下层Parser解析出的中间结果,负责对中间结果的封装、筛选、排序,最终按照信息分类,返回各个类别对应的信息,提供给上层的表现层Profiler API、RESTful使用;
-5. 通过RESTful调用后端Analyser提供的common API,获取目标数据,以RESTful接口对接前端。
-
-### 内部模块交互
-从用户角度,有两种使用形式API、RESTful,我们以API为例,阐述一个完整的内部模块交互流程:
-
-
-
-图3:模块交互图
-
-如上图所示,各个模块交互流程如下:
-
-1. ProfilerAPI会调用下层Controller的控制函数,控制下层收集模块进行收集,目前收集模块(ada)是以常驻进程的方式接受命令,并独立工作收集性能信息的;
-
-2. 用户在训练结束后会调用ProfilerAPI的分析接口;
-
-3. Profiler API分析接口首先使用Parser模块对性能数据进行解析,产生中间结果,再调用Aalayser进行中间结果分析,最终将各类信息返回至用户侧。
-
-## 子模块设计
-### ProfilerAPI和Controller
-
-#### ProfilerAPI和Controller模块说明
-ProfilerAPI为用户在训练脚本侧提供入口API,用户通过ProfilerAPI启动性能收集以及对性能数据进行分析。
-ProfilerAPI通过Controller下发命令,完成对ada启动的控制。
-
-#### ProfilerAPI和Controller模块设计
-ProfilerAPI模块,属于上层应用接口层,由训练脚本集成。功能分为两部分:
-
-- 训练前调用底层Controller接口,下发命令,启动profiling统计任务。
-
-- 训练完成后,调用底层Controller接口,下发命令,停止性能统计任务,再调用Analyser、Parser模块接口解析数据文件,生成算子性能统计、training trace统计等结果数据。
-
-
-Controller模块提供对上层接口,并调用底层性能收集模块接口,下发启动和停止性能收集的命令。
-
-最终生成的性能原始数据主要包含:
-
-- `hwts.log.data.45.dev.profiler_default_tag`文件:存储算子执行信息,包括task的开始/结束,stream id的信息等;
-- `DATA_PREPROCESS.dev.AICPU`文件:AI CPU算子的执行各阶段的执行时间信息;
-- `Framework.host.task_desc_info`文件:存储算子id与算子名称的对应关系,以及每个算子的输入输出信息;
-- `training_trace.46.dev.profiler_default_tag`文件:存储每个step的开始结束时刻,迭代间隙、迭代前向反向、迭代拖尾的时刻信息。
-
-### Parser
-#### Parser模块介绍
-Parser是原始性能数据解析模块,由于原始性能数据是在设备侧收集的信息,所以信息不能直接被用户所理解,该模块负责将信息进行解析、组合、转换,最终形成用户可理解、上层可分析的中间结果。
-#### Parser模块设计
-
-
-图4:Parser模块图
-
-如上图所示,Parser模块主要由HWTS Parser、AI CPU Parser、Framework Parser、Training Trace Parser组成,每个模块对应解析一种原始数据,通过解析原始数据得到用户能读懂的中间文件。
-
-- HWTS Parser:解析`hwts.log.data.45.dev.profiler_default_tag`文件,获得Device基于task的统计信息,如每个task的开始/结束,stream id等数据,用于算子执行时间的计算。
-- AI CPU Parser:解析`DATA_PREPROCESS.dev.AICPU`文件,获得AI CPU算子的执行各阶段的执行时间信息。
-- Framework Parser:解析`Framework.host.task_desc_info`文件,用于获取AI Core算子与task的对应关系,算子关键信息等内容。
-- Training Trace Parser:解析`training_trace.46.dev.profiler_default_tag`文件,用于分析训练各阶段的时间。
-
-### Analyser
-
-#### Analyser模块介绍
-分析器的作用是对解析阶段生成的中间结果,进行筛选、排序、查询、分页等相关操作。
-
-#### Analyser模块设计
-
-该模块负责解析Parser生成的中间文件,为上层数据分析提供通用接口,将分析后的数据返回给上层展示给用户,由于各种中间文件有一定的共同点,可以抽象出公共内容,所以Analyser类设计如下图所示:
-
-
-
-图5:Analyser类图
-
-如上图所示,针对期望查询的不同内容,实现多个Analyser,每个Analyser可以定义筛选、排序、分页条件。每个Analyser知道自己需要哪些中间文件来进行数据的合并、筛选、排序。Analyser与Parser是通过Parser生成的中间文件关联起来的,本身不存在函数调用的情况,这样对两个模块进行了解耦。
-
-针对算子信息的Analyser,目前存在两种:
-
-- 针对算子类型平均信息的筛选。
-- 针对每个算子详细平均信息的筛选,分别在两个Analyser中实现(AicoreTypeAnalyser、AicoreDetailAnalyser)。
-
-为了隐藏Analyser内部实现,方便调用,使用简单工厂模式,通过AnalyserFactory获取指定的Analyser。
-
-
-### Proposer
-#### Proposer模块介绍
-Proposer是Profiler性能优化建议模块,Proposer调用Analyser模块获取性能数据,通过调优规则对性能数据进行分析,输出调优建议由UI、API接口展示给用户。
-
-#### Proposer模块设计
-
-模块划分如下所示:
-
-
-
-图6:Proposer模块图
-
-模块设计如上图所示:
-
-- Proposer提供接口用于API、RESTful调用以获取优化建议。
-- Proposer调用Analyser接口,获取性能数据并根据优化规则,获得优化建议。
-- Proposer调用Analyser工厂获得Analyser对象。
-
-调用Analyser对象的query接口获取信息,包括:按时间排序TOP N的AICore、AICoreType、AICpu算子信息、traning trace各阶段的时间信息。
-
-模块类设计如下所示:
-
-
-
-图7:Proposer类图
-
-如上模块类图所示:
-
-- 各类型Proposer继承抽象类Proposer并实现analyze方法;
+# Profiler设计文档
+
+`Linux` `Ascend` `GPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者`
+
+
+
+- [Profiler设计文档](#profiler设计文档)
+ - [背景](#背景)
+    - [Profiler架构设计](#profiler架构设计)
+ - [上下文](#上下文)
+ - [模块层级结构](#模块层级结构)
+ - [内部模块交互](#内部模块交互)
+    - [子模块设计](#子模块设计)
+        - [ProfilerAPI和Controller](#profilerapi和controller)
+            - [ProfilerAPI和Controller模块介绍](#profilerapi和controller模块介绍)
+ - [Analyser](#analyser)
+ - [Analyser模块介绍](#analyser模块介绍)
+ - [Analyser模块设计](#analyser模块设计)
+ - [Parser](#parser)
+ - [Parser模块介绍](#parser模块介绍)
+ - [Parser模块设计](#parser模块设计)
+ - [Proposer](#proposer)
+ - [Proposer模块介绍](#proposer模块介绍)
+ - [Proposer模块设计](#proposer模块设计)
+
+
+
+
+
+## 背景
+
+为了支持用户在MindSpore上进行模型开发时的性能调试,需要提供易用的Profiling工具,直观地展现网络模型各维度的性能信息,为用户提供易用、丰富的性能分析功能,帮助用户快速定位网络中的性能问题。
+
+## Profiler架构设计
+这一章将介绍Profiler的架构设计,第一节从整体Profiler的角度出发介绍其上下文交互关系,第二节将打开Profiler内部,介绍模块层级结构以及模块划分,第三节将介绍模块间的交互调用关系。
+
+### 上下文
+
+Profiler是MindSpore调试调优工具的一部分,在整个使用过程中的上下文环境如下图所示:
+
+
+
+图1:上下文关系图
+
+如上图所示,Profiler与其他部分的交互包括:
+
+1. 在训练脚本中调用MindSpore的Profiler向MindSpore的ada通信模块发送启动收集性能数据的命令,最终由ada生成性能原始数据;
+
+2. MindSpore侧Profiler将在用户脚本中对原始数据进行解析,并在用户指定的文件夹下面生成中间数据结果;
+
+3. MindInsight侧Profiler对接中间数据,提供可视化Profiler功能供用户使用。
+
+### 模块层级结构
+
+模块层级划分如下:
+
+
+
+图2:层级模块关系图
+
+
+如上图所示,各个模块功能介绍如下:
+1. ProfilerAPI是代码侧对用户提供的调用入口,为用户提供了性能收集启动接口以及分析接口;
+2. Controller是ProfilerAPI下层的模块,被ProfilerAPI中的启动接口调用,负责控制下方性能收集功能的启动停止,原始数据会被ada写入固定位置;
+3. Parser是性能原始数据解析模块,由于性能原始数据是在设备侧收集的信息,所以信息不能直接被用户所理解,该模块负责将信息进行解析、组合、转换,最终形成用户可理解、上层可分析的中间结果;
+4. Analyser获取下层Parser解析出的中间结果,负责对中间结果的封装、筛选、排序,最终按照信息分类,返回各个类别对应的信息,提供给上层的表现层Profiler API、RESTful使用;
+5. 通过RESTful调用后端Analyser提供的common API,获取目标数据,以RESTful接口对接前端。
+
+### 内部模块交互
+从用户角度,有API、RESTful两种使用形式。我们以API为例,阐述一个完整的内部模块交互流程:
+
+
+
+图3:模块交互图
+
+如上图所示,各个模块交互流程如下:
+
+1. ProfilerAPI会调用下层Controller的控制函数,控制下层收集模块进行收集,目前收集模块(ada)是以常驻进程的方式接收命令,并独立工作收集性能信息的;
+
+2. 用户在训练结束后会调用ProfilerAPI的分析接口;
+
+3. Profiler API分析接口首先使用Parser模块对性能数据进行解析,产生中间结果,再调用Analyser进行中间结果分析,最终将各类信息返回至用户侧。
+
+## 子模块设计
+### ProfilerAPI和Controller
+
+#### ProfilerAPI和Controller模块介绍
+ProfilerAPI为用户在训练脚本侧提供入口API,用户通过ProfilerAPI启动性能收集以及对性能数据进行分析。
+ProfilerAPI通过Controller下发命令,完成对ada启动的控制。
+
+#### ProfilerAPI和Controller模块设计
+ProfilerAPI模块,属于上层应用接口层,由训练脚本集成。功能分为两部分:
+
+- 训练前调用底层Controller接口,下发命令,启动profiling统计任务。
+
+- 训练完成后,调用底层Controller接口,下发命令,停止性能统计任务,再调用Analyser、Parser模块接口解析数据文件,生成算子性能统计、training trace统计等结果数据。
+
+
+Controller模块提供对上层的接口,并调用底层性能收集模块接口,下发启动和停止性能收集的命令。
+
+最终生成的性能原始数据主要包含:
+
+- `hwts.log.data.45.dev.profiler_default_tag`文件:存储算子执行信息,包括task的开始/结束,stream id的信息等;
+- `DATA_PREPROCESS.dev.AICPU`文件:AI CPU算子的执行各阶段的执行时间信息;
+- `Framework.host.task_desc_info`文件:存储算子id与算子名称的对应关系,以及每个算子的输入输出信息;
+- `training_trace.46.dev.profiler_default_tag`文件:存储每个step的开始结束时刻,迭代间隙、迭代前向反向、迭代拖尾的时刻信息。
+
+### Parser
+#### Parser模块介绍
+Parser是原始性能数据解析模块,由于原始性能数据是在设备侧收集的信息,所以信息不能直接被用户所理解,该模块负责将信息进行解析、组合、转换,最终形成用户可理解、上层可分析的中间结果。
+#### Parser模块设计
+
+
+图4:Parser模块图
+
+如上图所示,Parser模块主要由HWTS Parser、AI CPU Parser、Framework Parser、Training Trace Parser组成,每个模块对应解析一种原始数据,通过解析原始数据得到用户能读懂的中间文件。
+
+- HWTS Parser:解析`hwts.log.data.45.dev.profiler_default_tag`文件,获得Device基于task的统计信息,如每个task的开始/结束,stream id等数据,用于算子执行时间的计算。
+- AI CPU Parser:解析`DATA_PREPROCESS.dev.AICPU`文件,获得AI CPU算子的执行各阶段的执行时间信息。
+- Framework Parser:解析`Framework.host.task_desc_info`文件,用于获取AI Core算子与task的对应关系,算子关键信息等内容。
+- Training Trace Parser:解析`training_trace.46.dev.profiler_default_tag`文件,用于分析训练各阶段的时间。
+
+### Analyser
+
+#### Analyser模块介绍
+分析器的作用是对解析阶段生成的中间结果,进行筛选、排序、查询、分页等相关操作。
+
+#### Analyser模块设计
+
+该模块负责解析Parser生成的中间文件,为上层数据分析提供通用接口,将分析后的数据返回给上层展示给用户,由于各种中间文件有一定的共同点,可以抽象出公共内容,所以Analyser类设计如下图所示:
+
+
+
+图5:Analyser类图
+
+如上图所示,针对期望查询的不同内容,实现多个Analyser,每个Analyser可以定义筛选、排序、分页条件。每个Analyser知道自己需要哪些中间文件来进行数据的合并、筛选、排序。Analyser与Parser是通过Parser生成的中间文件关联起来的,本身不存在函数调用的情况,这样对两个模块进行了解耦。
+
+针对算子信息的Analyser,目前存在两种:
+
+- 针对算子类型平均信息的筛选。
+- 针对每个算子详细平均信息的筛选,分别在两个Analyser中实现(AicoreTypeAnalyser、AicoreDetailAnalyser)。
+
+为了隐藏Analyser内部实现,方便调用,使用简单工厂模式,通过AnalyserFactory获取指定的Analyser。
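简单工厂模式的用法可以用如下草图说明(其中`instance`方法名与`query`签名均为本文假设,仅AnalyserFactory、AicoreTypeAnalyser、AicoreDetailAnalyser等类名取自上文):

```python
class BaseAnalyser:
    """各Analyser的公共基类:抽象出筛选、排序、分页的通用接口(示意实现)。"""
    def query(self, records, sort_key, top_n=None):
        result = sorted(records, key=sort_key, reverse=True)
        return result[:top_n] if top_n else result

class AicoreTypeAnalyser(BaseAnalyser):
    """针对算子类型平均信息的筛选。"""

class AicoreDetailAnalyser(BaseAnalyser):
    """针对每个算子详细平均信息的筛选。"""

class AnalyserFactory:
    """简单工厂:按名称返回对应Analyser,对调用方隐藏内部实现。"""
    _analysers = {'aicore_type': AicoreTypeAnalyser,
                  'aicore_detail': AicoreDetailAnalyser}

    @classmethod
    def instance(cls, name):
        if name not in cls._analysers:
            raise ValueError('unknown analyser: ' + name)
        return cls._analysers[name]()

analyser = AnalyserFactory.instance('aicore_type')
```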
+
+
+### Proposer
+#### Proposer模块介绍
+Proposer是Profiler性能优化建议模块,Proposer调用Analyser模块获取性能数据,通过调优规则对性能数据进行分析,输出调优建议由UI、API接口展示给用户。
+
+#### Proposer模块设计
+
+模块划分如下所示:
+
+
+
+图6:Proposer模块图
+
+模块设计如上图所示:
+
+- Proposer提供接口用于API、RESTful调用以获取优化建议。
+- Proposer调用Analyser接口,获取性能数据并根据优化规则,获得优化建议。
+- Proposer调用Analyser工厂获得Analyser对象。
+
+调用Analyser对象的query接口获取信息,包括:按时间排序TOP N的AICore、AICoreType、AICpu算子信息、training trace各阶段的时间信息。
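"按时间排序TOP N"这类查询本质上是一次排序加截断,示意如下(数据与字段名均为假设,并非Analyser返回的真实结构):

```python
ops = [
    {'op_type': 'MatMul', 'avg_time': 3.2},
    {'op_type': 'Conv2D', 'avg_time': 5.1},
    {'op_type': 'ReLU',   'avg_time': 0.4},
]

def top_n_ops(records, n=2):
    """按平均执行时间降序取TOP N算子(示意Proposer对query结果的使用方式)。"""
    return sorted(records, key=lambda r: r['avg_time'], reverse=True)[:n]

top = [r['op_type'] for r in top_n_ops(ops)]  # ['Conv2D', 'MatMul']
```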
+
+模块类设计如下所示:
+
+
+
+图7:Proposer类图
+
+如上模块类图所示:
+
+- 各类型Proposer继承抽象类Proposer并实现analyze方法;
- API、CLI通过调用工厂ProposerFactory获取Proposer,并调用Proposer.analyze函数获取各类型的Proposer分析的优化建议。
\ No newline at end of file
diff --git a/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md
similarity index 89%
rename from docs/source_zh_cn/design/mindinsight/tensor_visual_design.md
rename to docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md
index e3af486df552ed81c7f4e4b6fab8bf680c0b2687..1cae60ca24cfbd13df531a23caf1dcd511dc48ad 100644
--- a/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md
+++ b/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md
@@ -14,7 +14,7 @@
-
+
## 特性背景
@@ -44,7 +44,7 @@ Tensor可视支持1-N维的Tensor以表格或直方图的形式展示,对于0
图1将用户所记录的张量以表格的形式展示,包含以下功能:
-- 表格中白色方框显示当前展示的是哪个维度下的张量数据,其中冒号`:`表示当前维度的所有值,可以在方框输入对应的索引(和Python的索引含义一致,支持负值)或者`:`来查询特定维度的张量数据。
+- 表格中白色方框显示当前展示的是哪个维度下的张量数据,其中冒号`:`表示当前维度的索引范围,和Python索引含义基本一致:不指定具体索引表示当前维度的所有值,`2:5`表示索引2到5(不包括5)的值。可以在方框中输入对应的索引或含有`:`的索引范围来查询特定维度的张量数据。
- 拖拽表格下方的空心圆圈可以查询特定步骤的张量数据。
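方框中索引的语义与Python切片一致,可以直接用Python验证:

```python
data = list(range(10))

all_values = data[:]     # ':' 表示当前维度的所有值
middle = data[2:5]       # 索引2到5(不包括5)-> [2, 3, 4]
tail = data[-3:]         # 负索引与Python一致 -> [7, 8, 9]
```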

diff --git a/docs/source_zh_cn/design/mindinsight/training_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md
similarity index 92%
rename from docs/source_zh_cn/design/mindinsight/training_visual_design.md
rename to docs/note/source_zh_cn/design/mindinsight/training_visual_design.md
index 8e5c5071aba3bba71fdabd771556c60bab14e13f..0bf0ef0d7a1a5eefb1b85142dabc82ec1664428a 100644
--- a/docs/source_zh_cn/design/mindinsight/training_visual_design.md
+++ b/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md
@@ -18,7 +18,7 @@
-
+
[MindInsight](https://gitee.com/mindspore/mindinsight)是MindSpore的可视化调试调优组件。通过MindInsight可以完成训练可视、性能调优、精度调优等任务。
@@ -40,11 +40,11 @@
训练信息收集API包括:
-- 基于summary算子的训练信息收集API。这部分API主要包括4个summary算子,即用于记录标量数据的ScalarSummary算子,用于记录图片数据的ImageSummary算子,用于记录参数分布图(直方图)数据的HistogramSummary算子和用于记录张量数据的TensorSummary算子。请访问[算子支持列表](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html)以获取关于这些算子的信息。
+- 基于summary算子的训练信息收集API。这部分API主要包括4个summary算子,即用于记录标量数据的ScalarSummary算子,用于记录图片数据的ImageSummary算子,用于记录参数分布图(直方图)数据的HistogramSummary算子和用于记录张量数据的TensorSummary算子。请访问[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.0/operator_list.html)以获取关于这些算子的信息。
-- 基于Python API的训练信息收集API。通过[SummaryRecord.add_value](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value)方法,可以在Python代码中完成训练信息的收集。
+- 基于Python API的训练信息收集API。通过[SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value)方法,可以在Python代码中完成训练信息的收集。
-- 易用的训练信息收集callback。通过[SummaryCollector](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector)这一callback可以方便地收集常用训练信息到训练日志中。
+- 易用的训练信息收集callback。通过[SummaryCollector](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector)这一callback可以方便地收集常用训练信息到训练日志中。
训练信息持久化模块主要包括用于管理缓存的summary_record模块和用于并行处理数据、写入文件的write_pool模块。训练信息持久化后,存储在训练日志文件(summary文件中)。
diff --git a/docs/source_zh_cn/architecture.md b/docs/note/source_zh_cn/design/mindspore/architecture.md
similarity index 81%
rename from docs/source_zh_cn/architecture.md
rename to docs/note/source_zh_cn/design/mindspore/architecture.md
index 091f5ca131fe9784d785c39b330241330181c23a..0f893ca8b291e4760376944e434a9332cc6a5a25 100644
--- a/docs/source_zh_cn/architecture.md
+++ b/docs/note/source_zh_cn/design/mindspore/architecture.md
@@ -2,20 +2,20 @@
`Linux` `Windows` `Ascend` `GPU` `CPU` `端侧` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者`
-
+
MindSpore框架架构总体分为MindSpore前端表示层、MindSpore计算图引擎和MindSpore后端运行时三层。

-- MindSpore前端表示层(Mind Expression,简称ME)
+- MindSpore前端表示层(MindExpression,简称ME)
该部分包含Python API、MindSpore IR(Intermediate representation,简称IR)、计算图高级别优化(Graph High Level Optimization,简称GHLO)三部分。
- Python API向用户提供统一的模型训练、推理、导出接口,以及统一的数据处理、增强、格式转换接口。
- GHLO包含硬件无关的优化(如死代码消除等)、自动并行和自动微分等功能。
- MindSpore IR提供统一的中间表示,MindSpore基于此IR进行pass优化。
-- MindSpore计算图引擎(Graph Engine,简称GE)
+- MindSpore计算图引擎(GraphEngine,简称GE)
该部分包含计算图低级别优化(Graph Low Level Optimization,简称GLLO)、图执行。
- GLLO包含硬件相关的优化,以及算子融合、Buffer融合等软硬件结合相关的深度优化。
diff --git a/lite/docs/source_zh_cn/architecture.md b/docs/note/source_zh_cn/design/mindspore/architecture_lite.md
similarity index 79%
rename from lite/docs/source_zh_cn/architecture.md
rename to docs/note/source_zh_cn/design/mindspore/architecture_lite.md
index ce86fa9829a184306cde335d74d06a70117c6c63..5f39ac72d4e242e02589d1ac8bd58ec13ba8a377 100644
--- a/lite/docs/source_zh_cn/architecture.md
+++ b/docs/note/source_zh_cn/design/mindspore/architecture_lite.md
@@ -1,10 +1,13 @@
# 总体架构
-
+`Linux` `Windows` `端侧` `推理应用` `中级` `高级` `贡献者`
+
+
+
MindSpore Lite框架的总体架构如下所示:
-
+
- **前端(Frontend):** 负责模型生成,用户可以通过模型构建接口构建模型,将第三方模型和MindSpore训练的模型转换为MindSpore Lite模型,其中第三方模型包括TensorFlow Lite、Caffe 1.0和ONNX模型。
diff --git a/docs/source_zh_cn/design/mindspore/distributed_training_design.md b/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md
similarity index 95%
rename from docs/source_zh_cn/design/mindspore/distributed_training_design.md
rename to docs/note/source_zh_cn/design/mindspore/distributed_training_design.md
index 3a73eb8ff66d49500707f959e32d87fb926bf85c..b0c69efb8ff6fdb681c402dcc55a0ac43e4ea093 100644
--- a/docs/source_zh_cn/design/mindspore/distributed_training_design.md
+++ b/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md
@@ -18,7 +18,7 @@
-
+
## 背景
@@ -47,19 +47,19 @@
每次开始进行并行训练前,通过调用`mindspore.communication.init`接口初始化通信资源,并自动创建全局通信组`WORLD_COMM_GROUP`。
-2. 数据分发
+2. 数据分发(Data distribution)
数据并行的核心在于将数据集在样本维度拆分并下发到不同的卡上。在`mindspore.dataset`模块提供的所有数据集加载接口中都有`num_shards`和`shard_id`两个参数,它们用于将数据集拆分为多份并循环采样的方式,采集`batch`大小的数据到各自的卡上,当出现数据量不足的情况时将会从头开始采样。
3. 网络构图
- 数据并行网络的书写方式与单机网络没有差别,这是因为在正反向传播过程中各卡的模型间是独立执行的,只是保持了相同的网络结构。唯一需要特别注意的是为了保证各卡间训练同步,相应的网络参数初始化值应当是一致的,这里建议通过`numpy.random.seed`在每张卡上设置相同的随机数种子达到模型广播的目的。
+    数据并行网络的书写方式与单机网络没有差别,这是因为在正反向传播(Forward propagation & Backward propagation)过程中各卡的模型间是独立执行的,只是保持了相同的网络结构。唯一需要特别注意的是为了保证各卡间训练同步,相应的网络参数初始化值应当是一致的,这里建议通过`numpy.random.seed`在每张卡上设置相同的随机数种子达到模型广播的目的。
-4. 梯度聚合
+4. 梯度聚合(Gradient aggregation)
数据并行理论上应该实现和单机一致的训练效果,为了保证计算逻辑的一致性,在梯度计算完成后插入`AllReduce`算子实现各卡间的梯度聚合操作。这里我们设置了`mean`开关,用户可以选择是否要对求和后的梯度值进行求平均操作,也可以将其视为超参项,打开开关等价于学习率倍数缩小。
-5. 参数更新
+5. 参数更新(Parameter update)
因为引入了梯度聚合操作,所以各卡的模型会以相同的梯度值一起进入参数更新步骤。因此MindSpore实现的是一种同步数据并行训练方式。理论上最终每卡训练出来的模型是相同的,如果网络中含有在样本维度的归约类型操作,网络的输出可能会有所差别,这是由数据并行的切分性质决定的。
diff --git a/lite/docs/source_zh_cn/images/MindSpore-Lite-architecture.png b/docs/note/source_zh_cn/design/mindspore/images/MindSpore-Lite-architecture.png
similarity index 100%
rename from lite/docs/source_zh_cn/images/MindSpore-Lite-architecture.png
rename to docs/note/source_zh_cn/design/mindspore/images/MindSpore-Lite-architecture.png
diff --git a/docs/source_zh_cn/images/architecture.eddx b/docs/note/source_zh_cn/design/mindspore/images/architecture.eddx
similarity index 100%
rename from docs/source_zh_cn/images/architecture.eddx
rename to docs/note/source_zh_cn/design/mindspore/images/architecture.eddx
diff --git a/docs/source_zh_cn/images/architecture.png b/docs/note/source_zh_cn/design/mindspore/images/architecture.png
similarity index 100%
rename from docs/source_zh_cn/images/architecture.png
rename to docs/note/source_zh_cn/design/mindspore/images/architecture.png
diff --git a/docs/note/source_zh_cn/design/mindspore/images/auto_parallel.png b/docs/note/source_zh_cn/design/mindspore/images/auto_parallel.png
new file mode 100644
index 0000000000000000000000000000000000000000..800b3b2536c739dcc48a1e46b5f65fc327e4ce8d
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindspore/images/auto_parallel.png differ
diff --git a/docs/note/source_zh_cn/design/mindspore/images/data_parallel.png b/docs/note/source_zh_cn/design/mindspore/images/data_parallel.png
new file mode 100644
index 0000000000000000000000000000000000000000..a92c82aa64615b398e83b9bc2cf0aa2c5db9f904
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindspore/images/data_parallel.png differ
diff --git a/docs/source_zh_cn/design/mindspore/images/ir/cf.dot b/docs/note/source_zh_cn/design/mindspore/images/ir/cf.dot
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/ir/cf.dot
rename to docs/note/source_zh_cn/design/mindspore/images/ir/cf.dot
diff --git a/docs/source_zh_cn/design/mindspore/images/ir/cf.png b/docs/note/source_zh_cn/design/mindspore/images/ir/cf.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/ir/cf.png
rename to docs/note/source_zh_cn/design/mindspore/images/ir/cf.png
diff --git a/docs/source_zh_cn/design/mindspore/images/ir/closure.dot b/docs/note/source_zh_cn/design/mindspore/images/ir/closure.dot
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/ir/closure.dot
rename to docs/note/source_zh_cn/design/mindspore/images/ir/closure.dot
diff --git a/docs/source_zh_cn/design/mindspore/images/ir/closure.png b/docs/note/source_zh_cn/design/mindspore/images/ir/closure.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/ir/closure.png
rename to docs/note/source_zh_cn/design/mindspore/images/ir/closure.png
diff --git a/docs/source_zh_cn/design/mindspore/images/ir/hof.dot b/docs/note/source_zh_cn/design/mindspore/images/ir/hof.dot
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/ir/hof.dot
rename to docs/note/source_zh_cn/design/mindspore/images/ir/hof.dot
diff --git a/docs/source_zh_cn/design/mindspore/images/ir/hof.png b/docs/note/source_zh_cn/design/mindspore/images/ir/hof.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/ir/hof.png
rename to docs/note/source_zh_cn/design/mindspore/images/ir/hof.png
diff --git a/docs/source_zh_cn/design/mindspore/images/ir/ir.dot b/docs/note/source_zh_cn/design/mindspore/images/ir/ir.dot
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/ir/ir.dot
rename to docs/note/source_zh_cn/design/mindspore/images/ir/ir.dot
diff --git a/docs/source_zh_cn/design/mindspore/images/ir/ir.png b/docs/note/source_zh_cn/design/mindspore/images/ir/ir.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/ir/ir.png
rename to docs/note/source_zh_cn/design/mindspore/images/ir/ir.png
diff --git a/docs/note/source_zh_cn/design/mindspore/images/operator_split.png b/docs/note/source_zh_cn/design/mindspore/images/operator_split.png
new file mode 100644
index 0000000000000000000000000000000000000000..4063170990c6816884361f195db5851cfbdf932e
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindspore/images/operator_split.png differ
diff --git a/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution1.png b/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution1.png
new file mode 100644
index 0000000000000000000000000000000000000000..ed4d79416a0a07f8d75e738aa544d214834ae778
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution1.png differ
diff --git a/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution2.png b/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution2.png
new file mode 100644
index 0000000000000000000000000000000000000000..114f984c66ae578722dbcdbb59ab03c44dbcb097
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution2.png differ
diff --git a/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution3.png b/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution3.png
new file mode 100644
index 0000000000000000000000000000000000000000..dd66c9120615f50f2b3f60cfe139954cb4adf307
Binary files /dev/null and b/docs/note/source_zh_cn/design/mindspore/images/tensor_redistribution3.png differ
diff --git a/docs/source_zh_cn/design/mindspore/ir.md b/docs/note/source_zh_cn/design/mindspore/mindir.md
similarity index 94%
rename from docs/source_zh_cn/design/mindspore/ir.md
rename to docs/note/source_zh_cn/design/mindspore/mindir.md
index 77bc45014d5301fa6f76a9505ce38a491c09b9b4..7499075d6df1cca4ad2c49ce806f70eb676443eb 100644
--- a/docs/source_zh_cn/design/mindspore/ir.md
+++ b/docs/note/source_zh_cn/design/mindspore/mindir.md
@@ -1,6 +1,6 @@
# MindSpore IR(MindIR)
-`Linux` `框架开发` `中级` `高级` `贡献者`
+`Linux` `Windows` `框架开发` `中级` `高级` `贡献者`
@@ -17,7 +17,7 @@
-
+
## 简介
中间表示(IR)是程序编译过程中介于源语言和目标语言之间的程序表示,以方便编译器进行程序分析和优化,因此IR的设计需要考虑从源语言到目标语言的转换难度,同时考虑程序分析和优化的易用性和性能。
@@ -76,7 +76,7 @@ lambda (x, y)
let c = b * %1 in
c end
```
-对应的MindIR为[ir.dot](https://gitee.com/mindspore/docs/blob/master/docs/source_zh_cn/design/mindspore/images/ir/ir.dot):
+对应的MindIR为[ir.dot](https://gitee.com/mindspore/docs/blob/r1.0/docs/note/source_zh_cn/design/mindspore/images/ir/ir.dot):

@@ -106,7 +106,7 @@ def hof(x):
return res
```
-对应的MindIR为[hof.dot](https://gitee.com/mindspore/docs/blob/master/docs/source_zh_cn/design/mindspore/images/ir/hof.dot):
+对应的MindIR为[hof.dot](https://gitee.com/mindspore/docs/blob/r1.0/docs/note/source_zh_cn/design/mindspore/images/ir/hof.dot):

@@ -127,7 +127,7 @@ def fibonacci(n):
return fibonacci(n-1) + fibonacci(n-2)
```
-对应的MindIR为[cf.dot](https://gitee.com/mindspore/docs/blob/master/docs/source_zh_cn/design/mindspore/images/ir/cf.dot):
+对应的MindIR为[cf.dot](https://gitee.com/mindspore/docs/blob/r1.0/docs/note/source_zh_cn/design/mindspore/images/ir/cf.dot):

@@ -153,7 +153,7 @@ def ms_closure():
return out1, out2
```
-对应的MindIR为[closure.dot](https://gitee.com/mindspore/docs/blob/master/docs/source_zh_cn/design/mindspore/images/ir/closure.dot):
+对应的MindIR为[closure.dot](https://gitee.com/mindspore/docs/blob/r1.0/docs/note/source_zh_cn/design/mindspore/images/ir/closure.dot):

diff --git a/docs/source_zh_cn/technical_white_paper.md b/docs/note/source_zh_cn/design/technical_white_paper.md
similarity index 100%
rename from docs/source_zh_cn/technical_white_paper.md
rename to docs/note/source_zh_cn/design/technical_white_paper.md
diff --git a/docs/source_zh_cn/glossary.md b/docs/note/source_zh_cn/glossary.md
similarity index 86%
rename from docs/source_zh_cn/glossary.md
rename to docs/note/source_zh_cn/glossary.md
index 647c9076f97496a863d4f9ba88e06df4f2beb908..8f3ff30f86e32f2c337dd3945fce6b9f5e022f4e 100644
--- a/docs/source_zh_cn/glossary.md
+++ b/docs/note/source_zh_cn/glossary.md
@@ -2,7 +2,7 @@
`Linux` `Windows` `Ascend` `GPU` `CPU` `全流程` `初级` `中级` `高级`
-
+
| 术语/缩略语 | 说明 |
| ----- | ----- |
@@ -32,9 +32,10 @@
| LSTM | Long short-term memory,长短期记忆,对应的网络是一种时间循环神经网络,适合于处理和预测时间序列中间隔和延迟非常长的重要事件。 |
| Manifest | 一种数据格式文件,华为ModelArts采用了该格式,详细说明请参见。 |
| ME | Mind Expression,MindSpore前端,主要完成从用户源码到计算图的编译任务、训练中控制执行及上下文维护(非下沉模式配置下)、动态图(PyNative模式)等。 |
-| MindArmour | MindSpore安全组件,用于AI对抗样本管理,AI模型防攻击和增强,AI模型健壮性评测。 |
+| MindArmour | MindSpore安全模块,通过差分隐私、对抗性攻防等技术手段,提升模型的保密性、完整性和可用性,阻止攻击者对模型进行恶意修改或是破解模型的内部构件,窃取模型的参数。 |
| MindData | MindSpore数据框架,提供数据加载、增强、数据集管理以及可视化。 |
| MindInsight | MindSpore可视化组件,可视化标量、图像、计算图以及模型超参等信息。 |
+| MindRecord | MindSpore定义的一种数据格式,是一个执行读取、写入、搜索和转换MindSpore格式数据集的模块。 |
| MindSpore | 华为主导开源的深度学习框架。 |
| MindSpore Lite | 一个轻量级的深度神经网络推理引擎,提供了将MindSpore训练出的模型在端侧进行推理的功能。 |
| MNIST database | Modified National Institute of Standards and Technology database,一个大型手写数字数据库,通常用于训练各种图像处理系统。 |
@@ -43,5 +44,5 @@
| ResNet-50 | Residual Neural Network 50,由微软研究院的Kaiming He等四名华人提出的残差神经网络。 |
| Schema | 数据集结构定义文件,用于定义数据集包含哪些字段以及字段的类型。 |
| Summary | 是对网络中Tensor取值进行监测的一种算子,在图中是“外围”操作,不影响数据流本身。 |
-| TBE | Tensor Boost Engine,在TVM( Tensor Virtual Machine )框架基础上扩展的算子开发工具。 |
+| TBE | Tensor Boost Engine,华为自研的NPU算子开发工具,在TVM( Tensor Virtual Machine )框架基础上扩展,提供了一套Python API来实施开发活动,进行自定义算子开发。 |
| TFRecord | Tensorflow定义的数据格式。 |
diff --git a/docs/source_zh_cn/help_seeking_path.md b/docs/note/source_zh_cn/help_seeking_path.md
similarity index 90%
rename from docs/source_zh_cn/help_seeking_path.md
rename to docs/note/source_zh_cn/help_seeking_path.md
index 0c469483bb8b7a86bd7ea02082ea8a018a9cfbec..9de02451fdb111af0c021c01182ea2fffa4d9f29 100644
--- a/docs/source_zh_cn/help_seeking_path.md
+++ b/docs/note/source_zh_cn/help_seeking_path.md
@@ -1,8 +1,8 @@
-# 问题求助路径
+# 如何求助(求助路径)
`Linux` `Windows` `Ascend` `GPU` `CPU` `全流程` `初级` `中级` `高级`
-
+
本文将简述用户在使用MindSpore遇到问题时,如何使用官方提供的问题求助路径解决问题。MindSpore问题求助整体流程如图中所示,从用户使用MindSpore发现问题开始,直至选择到合适的问题解决方法。下面我们基于问题求助流程图对各种求助方法做解释说明。
diff --git a/docs/note/source_zh_cn/image_classification.md b/docs/note/source_zh_cn/image_classification.md
new file mode 100644
index 0000000000000000000000000000000000000000..37c2cbd9e7d0aa5a02149d581eb7d52b70d5cb06
--- /dev/null
+++ b/docs/note/source_zh_cn/image_classification.md
@@ -0,0 +1,33 @@
+# 图像分类
+
+
+
+## 图像分类介绍
+
+图像分类模型可以预测图片中出现哪些物体,给出图片中出现的物体列表及各自的概率。比如,下图经模型推理得到的分类结果如下表所示:
+
+
+
+| 类别 | 概率 |
+| ---------- | ------ |
+| plant | 0.9359 |
+| flower | 0.8641 |
+| tree | 0.8584 |
+| houseplant | 0.7867 |
+
+使用MindSpore Lite实现图像分类的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification)。
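上表中各类别的概率相互独立(总和可以大于1),对应的是每个类别单独给出置信度的多标签输出。这类输出的后处理逻辑可用如下Python片段示意(仅为示意,不依赖MindSpore Lite;其中的类别名与logits数值均为假设):

```python
import math

def sigmoid(x):
    """单个类别的置信度:把模型原始输出(logit)映射到(0, 1)区间。"""
    return 1.0 / (1.0 + math.exp(-x))

def classify(labels, logits, threshold=0.5):
    """每个类别独立打分,返回置信度不低于阈值的(类别, 概率),按概率降序排列。"""
    scored = [(name, sigmoid(z)) for name, z in zip(labels, logits)]
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# 假设的类别名与模型原始输出,并非真实模型数据
labels = ["plant", "flower", "tree", "houseplant", "cat"]
logits = [2.7, 1.85, 1.8, 1.3, -2.0]
for name, prob in classify(labels, logits):
    print(f"{name}: {prob:.4f}")
```

置信度低于阈值的类别(如上例中的cat)不会出现在结果列表中。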
+
+## 图像分类模型列表
+
+下表是使用MindSpore Lite推理的部分图像分类模型的数据。
+
+> 下表的性能是在华为Mate 30手机上测试的。
+
+| 模型名称 | 大小(MB) | Top1 | Top5 | F1 | CPU 4线程时延(ms) |
+|-----------------------| :----------: | :----------: | :----------: | :----------: | :-----------: |
+| [MobileNetV2](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms) | 11.5 | - | - | 65.5% | 14.595 |
+| [Inceptionv3](https://download.mindspore.cn/model_zoo/official/lite/inceptionv3_lite/inceptionv3.ms) | 90.9 | 78.62% | 94.08% | - | 92.086 |
+| [Shufflenetv2](https://download.mindspore.cn/model_zoo/official/lite/shufflenetv2_lite/shufflenetv2.ms) | 8.8 | 67.74% | 87.62% | - | 8.303 |
+| [GoogleNet](https://download.mindspore.cn/model_zoo/official/lite/googlenet_lite/googlenet.ms) | 25.3 | 72.2% | 90.06% | - | 23.257 |
+| [ResNext50](https://download.mindspore.cn/model_zoo/official/lite/resnext50_lite/resnext50.ms) | 95.8 | 73.1% | 91.21% | - | 138.164 |
+
diff --git a/docs/source_zh_cn/images/help_seeking_path.png b/docs/note/source_zh_cn/images/help_seeking_path.png
similarity index 100%
rename from docs/source_zh_cn/images/help_seeking_path.png
rename to docs/note/source_zh_cn/images/help_seeking_path.png
diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_app_result.png b/docs/note/source_zh_cn/images/image_classification_result.png
similarity index 100%
rename from lite/tutorials/source_zh_cn/images/lite_quick_start_app_result.png
rename to docs/note/source_zh_cn/images/image_classification_result.png
diff --git a/docs/note/source_zh_cn/images/object_detection.png b/docs/note/source_zh_cn/images/object_detection.png
new file mode 100644
index 0000000000000000000000000000000000000000..ad5425c86393a9367701166796df42c9e4702988
Binary files /dev/null and b/docs/note/source_zh_cn/images/object_detection.png differ
diff --git a/docs/source_zh_cn/index.rst b/docs/note/source_zh_cn/index.rst
similarity index 63%
rename from docs/source_zh_cn/index.rst
rename to docs/note/source_zh_cn/index.rst
index 91ddd47bd41b271296d377dcb5044aa6af87df45..885398285994b9d2eddb4acb7787383f06a1565e 100644
--- a/docs/source_zh_cn/index.rst
+++ b/docs/note/source_zh_cn/index.rst
@@ -3,7 +3,7 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-MindSpore文档
+MindSpore Note
=================
.. toctree::
@@ -11,12 +11,5 @@ MindSpore文档
:maxdepth: 1
design
- roadmap
- benchmark
- network_list
- operator_list
- constraints_on_network_construction
- glossary
- FAQ
- help_seeking_path
- community
+ specification_note
+ others
diff --git a/docs/note/source_zh_cn/network_list.rst b/docs/note/source_zh_cn/network_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..e70ac86c49ea5a6903bff4ca49a9667b48b43500
--- /dev/null
+++ b/docs/note/source_zh_cn/network_list.rst
@@ -0,0 +1,7 @@
+网络支持
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ network_list_ms
\ No newline at end of file
diff --git a/docs/note/source_zh_cn/network_list_ms.md b/docs/note/source_zh_cn/network_list_ms.md
new file mode 100644
index 0000000000000000000000000000000000000000..7ad7267e58fec367adc5d815bf2dd3bf91f6106e
--- /dev/null
+++ b/docs/note/source_zh_cn/network_list_ms.md
@@ -0,0 +1,45 @@
+# MindSpore网络支持
+
+`Linux` `Ascend` `GPU` `CPU` `模型开发` `中级` `高级`
+
+
+
+- [MindSpore网络支持](#mindspore网络支持)
+ - [Model Zoo](#model-zoo)
+
+
+
+
+
+## Model Zoo
+
+| 领域 | 子领域 | 网络 | Ascend | GPU | CPU
+|:---- |:------- |:---- |:---- |:---- |:----
+|计算机视觉(CV) | 图像分类(Image Classification) | [AlexNet](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported
+| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing
+|计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
+|计算机视觉(CV) | 图像分类(Image Classification) | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
+|计算机视觉(CV) | 图像分类(Image Classification) | [ResNext50](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) | [VGG16](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 移动端图像分类(Mobile Image Classification)<br>目标检测(Object Detection)<br>语义分割(Semantic Segmentation) | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
+| 计算机视觉(CV) | 移动端图像分类(Mobile Image Classification)<br>目标检测(Object Detection)<br>语义分割(Semantic Segmentation) | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
+|计算机视觉(CV) | 目标检测(Object Detection) | [SSD](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 目标检测(Object Detection) | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 目标检测(Object Detection) | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [BERT](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Doing | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [Transformer](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [MASS](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TinyBert](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
+| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
+| 推荐(Recommender) | 推荐系统、搜索、排序(Recommender System, Search ranking) | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
+| 图神经网络(GNN) | 文本分类(Text Classification) | [GCN](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
+| 图神经网络(GNN) | 文本分类(Text Classification) | [GAT](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
+
+> 你也可以使用 [MindWizard工具](https://gitee.com/mindspore/mindinsight/tree/r1.0/mindinsight/wizard/) 快速生成经典网络脚本。
diff --git a/docs/note/source_zh_cn/object_detection.md b/docs/note/source_zh_cn/object_detection.md
new file mode 100644
index 0000000000000000000000000000000000000000..39f0cbea82120f29f673433216883216b5a8b850
--- /dev/null
+++ b/docs/note/source_zh_cn/object_detection.md
@@ -0,0 +1,26 @@
+# 目标检测
+
+
+
+## 目标检测介绍
+
+目标检测可以识别出图片中的对象以及该对象在图片中的位置。如:对下图使用目标检测模型,输出如下表所示,使用矩形框标识图中对象的位置,并标注出对象类别及其概率,其中坐标的4个数字分别为Xmin、Ymin、Xmax、Ymax;概率表示被检测物体的可信程度。
+
+
+
+| 类别 | 概率 | 坐标 |
+| ----- | ---- | ---------------- |
+| mouse | 0.78 | [10, 25, 35, 43] |
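表中坐标为[Xmin, Ymin, Xmax, Ymax]格式。下文模型列表中的评估指标mAP依赖矩形框之间的交并比(IoU),其计算方式可用如下Python片段示意(仅为示意,坐标数值为假设):

```python
def iou(box_a, box_b):
    """计算两个[Xmin, Ymin, Xmax, Ymax]格式矩形框的交并比(IoU)。"""
    # 交集矩形的边界:左上角取较大值,右下角取较小值
    xmin = max(box_a[0], box_b[0])
    ymin = max(box_a[1], box_b[1])
    xmax = min(box_a[2], box_b[2])
    ymax = min(box_a[3], box_b[3])
    # 无重叠时交集面积为0
    inter = max(0, xmax - xmin) * max(0, ymax - ymin)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou([10, 25, 35, 43], [10, 25, 35, 43]))  # 完全重合:1.0
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))      # 部分重叠
```

mAP(IoU=0.50:0.95)即在IoU阈值从0.50到0.95(步长0.05)上求得的平均精度均值。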
+
+使用MindSpore Lite实现对象检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection)。
+
+## 目标检测模型列表
+
+下表是使用MindSpore Lite推理的部分目标检测模型的数据。
+
+> 下表的性能是在华为Mate 30手机上测试的。
+
+| 模型名称 | 大小(MB) | mAP(IoU=0.50:0.95) | CPU 4线程时延(ms) |
+|-----------------------| :----------: | :----------: | :-----------: |
+| [MobileNetv2-SSD](https://download.mindspore.cn/model_zoo/official/lite/ssd_mobilenetv2_lite/ssd.ms) | 16.7 | 0.22 | 25.4 |
+
diff --git a/docs/note/source_zh_cn/operator_list.rst b/docs/note/source_zh_cn/operator_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..4a5895cf24b6a9bb5a9505707a00e9097fcc175f
--- /dev/null
+++ b/docs/note/source_zh_cn/operator_list.rst
@@ -0,0 +1,10 @@
+算子支持
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ operator_list_ms
+ operator_list_implicit
+ operator_list_parallel
+ operator_list_lite
\ No newline at end of file
diff --git a/docs/note/source_zh_cn/operator_list_implicit.md b/docs/note/source_zh_cn/operator_list_implicit.md
new file mode 100644
index 0000000000000000000000000000000000000000..3fa1a547715f51456ef52188f4837fce48c45749
--- /dev/null
+++ b/docs/note/source_zh_cn/operator_list_implicit.md
@@ -0,0 +1,103 @@
+# MindSpore隐式类型转换的算子支持
+
+`Linux` `Ascend` `GPU` `CPU` `模型开发` `初级` `中级` `高级`
+
+
+
+- [MindSpore隐式类型转换的算子支持](#mindspore隐式类型转换的算子支持)
+ - [隐式类型转换](#隐式类型转换)
+ - [转换规则](#转换规则)
+ - [参与转换的数据类型](#参与转换的数据类型)
+ - [支持算子](#支持算子)
+
+
+
+
+
+## 隐式类型转换
+### 转换规则
+* 标量与Tensor运算:运算时,将标量自动转为Tensor,数据类型和参与运算的Tensor数据类型保持一致;
+而当Tensor是bool数据类型,标量是int或float时,将标量和Tensor都转为数据类型为int32或float32的Tensor。
+* 不同数据类型Tensor运算:数据类型优先级排序为bool < uint8 < int8 < int16 < int32 < int64 < float16 < float32 < float64,
+运算时,先确定参与运算的Tensor中优先级相对最高的数据类型,然后将低优先级数据类型Tensor转换为相对最高优先级数据类型;
+而当int8和uint8数据类型的Tensor进行运算时,将其都转为int16的Tensor。
+* 不支持对Parameter进行数据类型转换:如果按照转换规则推导,需要对网络中定义的Parameter进行数据类型转换时,会抛出RuntimeError异常。
+
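上述转换规则可以用如下Python片段直观模拟(仅为示意,不调用MindSpore本身;优先级表与两条特例均按上文规则假设实现):

```python
# 按上文规则模拟隐式类型转换的结果类型推导(仅为示意,非MindSpore实现)
PRIORITY = ["bool", "uint8", "int8", "int16", "int32", "int64",
            "float16", "float32", "float64"]

def promote(*dtypes):
    """多个Tensor参与运算时,推导运算结果的数据类型。"""
    # 特例:int8与uint8混合运算时,统一转为int16
    if {"int8", "uint8"} <= set(dtypes):
        return "int16"
    # 一般规则:取参与运算类型中优先级最高者
    return max(dtypes, key=PRIORITY.index)

def promote_scalar(tensor_dtype, scalar):
    """标量与Tensor运算:标量跟随Tensor类型;Tensor为bool时按标量类型提升。"""
    if tensor_dtype == "bool":
        return "int32" if isinstance(scalar, int) else "float32"
    return tensor_dtype

print(promote("int8", "uint8"))      # int16(特例)
print(promote("int32", "float16"))   # float16(按优先级)
print(promote_scalar("bool", 1))     # int32
print(promote_scalar("bool", 1.5))   # float32
```

注意最后一条规则:若推导结果要求转换Parameter的数据类型,MindSpore会抛出RuntimeError,上面的草图未模拟这一检查。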
+### 参与转换的数据类型
+* bool
+* int8
+* uint8
+* int16
+* int32
+* int64
+* float16
+* float32
+* float64
+
+### 支持算子
+
+| 算子名
+| :-----------
+| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Assign)
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignSub)
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyMomentum)
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseAdam)
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseLazyAdam)
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseFtrl)
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseProximalAdagrad)
+| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdaMax)
+| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdadelta)
+| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdagrad)
+| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdagradV2)
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagrad)
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagradV2)
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalAdagrad)
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyProximalAdagrad)
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyAddSign)
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyPowerSign)
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyGradientDescent)
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalGradientDescent)
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrl)
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrlV2)
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseAnd)
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseOr)
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseXor)
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd)
+| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sub)
+| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Mul)
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pow)
+| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Minimum)
+| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Maximum)
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RealDiv)
+| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Div)
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DivNoNan)
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv)
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TruncateDiv)
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TruncateMod)
+| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Mod)
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloorMod)
+| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Atan2)
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SquaredDifference)
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Xdivy)
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Xlogy)
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Equal)
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApproximateEqual)
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NotEqual)
+| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Greater)
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GreaterEqual)
+| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Less)
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LessEqual)
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd)
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr)
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdUpdate)
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdAdd)
+| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdSub)
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNonAliasingAdd)
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterUpdate)
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMax)
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMin)
+| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterAdd)
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterSub)
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMul)
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterDiv)
+
diff --git a/docs/note/source_zh_cn/operator_list_lite.md b/docs/note/source_zh_cn/operator_list_lite.md
new file mode 100644
index 0000000000000000000000000000000000000000..d5d5424ef4a8912b6baf28f9ee3dd552d7afab59
--- /dev/null
+++ b/docs/note/source_zh_cn/operator_list_lite.md
@@ -0,0 +1,111 @@
+# MindSpore Lite算子支持
+
+`Linux` `Ascend` `端侧` `推理应用` `初级` `中级` `高级`
+
+
+
+| 操作名 | CPU<br>FP16 | CPU<br>FP32 | CPU<br>Int8 | CPU<br>UInt8 | GPU<br>FP16 | GPU<br>FP32 | 支持的Tensorflow<br>Lite算子 | 支持的Caffe<br>Lite算子 | 支持的Onnx<br>Lite算子 |
+|-----------------------|----------|----------|----------|-----------|----------|----------|-------------------|----------|---------|
+| Abs | | Supported | Supported | Supported | Supported | Supported | Abs | | Abs |
+| Add | Supported | Supported | Supported | Supported | Supported | Supported | Add | | Add |
+| AddN | | Supported | | | | | AddN | | |
+| Argmax | | Supported | Supported | Supported | | | Argmax | ArgMax | ArgMax |
+| Argmin | | Supported | Supported | Supported | | | Argmin | | |
+| AvgPool | Supported | Supported | Supported | Supported | Supported | Supported | MeanPooling| Pooling | AveragePool |
+| BatchNorm | Supported | Supported | Supported | Supported | Supported | Supported | | BatchNorm | BatchNormalization |
+| BatchToSpace | | Supported | Supported | Supported | | | BatchToSpace, BatchToSpaceND | | |
+| BiasAdd | | Supported | Supported | Supported | Supported | Supported | | | BiasAdd |
+| Broadcast | | Supported | | | | | BroadcastTo | | Expand |
+| Cast | Supported | Supported | | Supported | Supported | Supported | Cast, DEQUANTIZE* | | Cast |
+| Ceil | | Supported | Supported | Supported | Supported | Supported | Ceil | | Ceil |
+| Concat | Supported | Supported | Supported | Supported | Supported | Supported | Concat | Concat | Concat |
+| Conv2d | Supported | Supported | Supported | Supported | Supported | Supported | Conv2D | Convolution | Conv |
+| Conv2dTranspose | Supported | Supported | Supported | Supported | Supported | Supported | DeConv2D | Deconvolution | ConvTranspose |
+| Cos | | Supported | Supported | Supported | Supported | Supported | Cos | | Cos |
+| Crop | | Supported | Supported | Supported | | | | Crop | |
+| DeDepthwiseConv2D | | Supported | Supported | Supported | | | | Deconvolution| ConvTranspose |
+| DepthToSpace | | Supported | Supported | Supported | | | DepthToSpace| | DepthToSpace |
+| DepthwiseConv2dNative | Supported | Supported | Supported | Supported | Supported | Supported | DepthwiseConv2D | Convolution | Convolution |
+| Div | Supported | Supported | Supported | Supported | Supported | Supported | Div, RealDiv | | Div |
+| Eltwise | Supported | Supported | | | | | | Eltwise | |
+| Elu | | Supported | | | | | Elu | | Elu |
+| Equal | Supported | Supported | Supported | Supported | | | Equal | | Equal |
+| Exp | | Supported | | | Supported | Supported | Exp | | Exp |
+| ExpandDims | | Supported | | | | | | | |
+| Fill | | Supported | | | | | Fill | | |
+| Flatten | | Supported | | | | | | Flatten | |
+| Floor | | Supported | Supported | Supported | Supported | Supported | Floor | | Floor |
+| FloorDiv | Supported | Supported | | | | | FloorDiv | | |
+| FloorMod | Supported | Supported | | | | | FloorMod | | |
+| FullConnection | | Supported | Supported | Supported | Supported | Supported | FullyConnected | InnerProduct | |
+| GatherNd | | Supported | Supported | Supported | | | GatherND | | |
+| GatherV2 | | Supported | Supported | Supported | | | Gather | | Gather |
+| Greater | Supported | Supported | Supported | Supported | | | Greater | | Greater |
+| GreaterEqual | Supported | Supported | Supported | Supported | | | GreaterEqual| | |
+| Hswish | Supported | Supported | Supported | Supported | | | HardSwish | | |
+| LeakyReLU | Supported | Supported | | | Supported | Supported | LeakyRelu | | LeakyRelu |
+| Less | Supported | Supported | Supported | Supported | | | Less | | Less |
+| LessEqual | Supported | Supported | Supported | Supported | | | LessEqual | | |
+| LRN | | Supported | | | | | LocalResponseNorm | | Lrn |
+| Log | | Supported | Supported | Supported | Supported | Supported | Log | | Log |
+| LogicalAnd | Supported | Supported | | | | | LogicalAnd | | |
+| LogicalNot | | Supported | Supported | Supported | Supported | Supported | LogicalNot | | |
+| LogicalOr | Supported | Supported | | | | | LogicalOr | | |
+| LSTM | | Supported | | | | | | | |
+| MatMul | | Supported | Supported | Supported | Supported | Supported | | | MatMul |
+| Maximum | Supported | Supported | | | | | Maximum | | Max |
+| MaxPool | Supported | Supported | Supported | Supported | Supported | Supported | MaxPooling | Pooling | MaxPool |
+| Minimum | Supported | Supported | | | | | Minimum | | Min |
+| Mul | Supported | Supported | Supported | Supported | Supported | Supported | Mul | | Mul |
+| NotEqual | Supported | Supported | Supported | Supported | | | NotEqual | | |
+| OneHot | | Supported | | | | | OneHot | | |
+| Pad | | Supported | Supported | Supported | | | Pad | | Pad |
+| Pow | | Supported | Supported | Supported | | | Pow | Power | Power |
+| PReLU | | Supported | | | Supported | Supported | | PReLU | |
+| Range | | Supported | | | | | Range | | |
+| Rank | | Supported | | | | | Rank | | |
+| ReduceMax | Supported | Supported | Supported | Supported | | | ReduceMax | | ReduceMax |
+| ReduceMean | Supported | Supported | Supported | Supported | | | Mean | | ReduceMean |
+| ReduceMin | Supported | Supported | Supported | Supported | | | ReduceMin | | ReduceMin |
+| ReduceProd | Supported | Supported | Supported | Supported | | | ReduceProd | | |
+| ReduceSum | Supported | Supported | Supported | Supported | | | Sum | | ReduceSum |
+| ReduceSumSquare | Supported | Supported | Supported | Supported | | | | | |
+| ReLU | Supported | Supported | Supported | Supported | Supported | Supported | Relu | ReLU | Relu |
+| ReLU6 | Supported | Supported | Supported | Supported | Supported | Supported | Relu6 | ReLU6 | Clip* |
+| Reshape | Supported | Supported | Supported | Supported | Supported | Supported | Reshape | Reshape | Reshape,Flatten |
+| Resize | | Supported | Supported | Supported | | | ResizeBilinear, NearestNeighbor | Interp | |
+| Reverse | | Supported | | | | | Reverse | | |
+| ReverseSequence | | Supported | | | | | ReverseSequence | | |
+| Round | | Supported | Supported | Supported | Supported | Supported | Round | | |
+| Rsqrt | | Supported | Supported | Supported | Supported | Supported | Rsqrt | | |
+| Scale | | Supported | | | Supported | Supported | | Scale | |
+| ScatterNd | | Supported | | | | | ScatterNd | | |
+| Shape | | Supported | | | | | Shape | | Shape |
+| Sigmoid | Supported | Supported | Supported | Supported | Supported | Supported | Logistic | Sigmoid | Sigmoid |
+| Sin | | Supported | Supported | Supported | Supported | Supported | Sin | | Sin |
+| Slice | | Supported | Supported | Supported | Supported | Supported | Slice | | Slice |
+| Softmax | Supported | Supported | Supported | Supported | Supported | Supported | Softmax | Softmax | Softmax |
+| SpaceToBatch | | Supported | | | | | | | |
+| SpaceToBatchND | | Supported | | | | | SpaceToBatchND | | |
+| SpaceToDepth | | Supported | | | | | SpaceToDepth | | SpaceToDepth |
+| SparseToDense | | Supported | | | | | SparseToDense | | |
+| Split | Supported | Supported | Supported | Supported | | | Split, SplitV | | |
+| Sqrt | | Supported | Supported | Supported | Supported | Supported | Sqrt | | Sqrt |
+| Square | | Supported | Supported | Supported | Supported | Supported | Square | | |
+| SquaredDifference | | Supported | | | | | SquaredDifference | | |
+| Squeeze | | Supported | Supported | Supported | | | Squeeze | | Squeeze |
+| StridedSlice | | Supported | Supported | Supported | | | StridedSlice | | |
+| Stack | | Supported | | | | | Stack | | |
+| Sub | Supported | Supported | Supported | Supported | Supported | Supported | Sub | | Sub |
+| Tanh | Supported | Supported | | | Supported | Supported | Tanh | TanH | |
+| Tile | | Supported | | | | | Tile | | Tile |
+| TopK | | Supported | Supported | Supported | | | TopKV2 | | |
+| Transpose | Supported | Supported | | | Supported | Supported | Transpose | Permute | Transpose |
+| Unique | | Supported | | | | | Unique | | |
+| Unsqueeze | | Supported | Supported | Supported | | | | | Unsqueeze |
+| Unstack | | Supported | | | | | Unstack | | |
+| Where | | Supported | | | | | Where | | |
+| ZerosLike | | Supported | | | | | ZerosLike | | |
+
+* Clip: only the conversion of clip(0, 6) to Relu6 is supported.
+* DEQUANTIZE: only the conversion from fp16 to fp32 is supported.
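The Clip note above can be illustrated with a minimal pure-Python sketch (the helper names `clip` and `relu6` are illustrative, not converter code): clipping to the range [0, 6] is element-wise identical to ReLU6, which is why a converter can map exactly clip(0, 6), and no other clip bounds, onto Relu6.

```python
def relu6(x):
    # ReLU6 is defined as min(max(x, 0), 6)
    return min(max(x, 0.0), 6.0)

def clip(x, lo, hi):
    # Generic clip: bound x to the interval [lo, hi]
    return min(max(x, lo), hi)

# clip(x, 0, 6) and relu6(x) agree for every input,
# including values below 0, inside the range, and above 6.
samples = [-2.0, 0.5, 3.0, 6.0, 7.5]
assert all(clip(v, 0.0, 6.0) == relu6(v) for v in samples)
```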
diff --git a/docs/note/source_zh_cn/operator_list_ms.md b/docs/note/source_zh_cn/operator_list_ms.md
new file mode 100644
index 0000000000000000000000000000000000000000..aeaefe94ce70f20719e18c619378264a8576b75d
--- /dev/null
+++ b/docs/note/source_zh_cn/operator_list_ms.md
@@ -0,0 +1,393 @@
+# MindSpore Operator Support
+
+`Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert`
+
+
+
+- [MindSpore Operator Support](#mindspore-operator-support)
+    - [mindspore.nn](#mindsporenn)
+    - [mindspore.ops.operations](#mindsporeopsoperations)
+    - [mindspore.ops.functional](#mindsporeopsfunctional)
+
+
+
+
+
+## mindspore.nn
+
+| Operation | Ascend | GPU | CPU | Operator Category
+| :----------- |:------ |:------ |:-----|:---
+| [mindspore.nn.Softmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Softmax) | Supported | Supported | Supported |layer/activation
+| [mindspore.nn.LogSoftmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LogSoftmax) | Supported | Supported | Doing |layer/activation
+| [mindspore.nn.ReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ReLU) | Supported | Supported | Supported |layer/activation
+| [mindspore.nn.ReLU6](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ReLU6) |Supported | Supported | Supported |layer/activation
+| [mindspore.nn.HSwish](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.HSwish) | Doing | Supported | Doing |layer/activation
+| [mindspore.nn.HSigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.HSigmoid) | Doing | Supported | Doing |layer/activation
+| [mindspore.nn.LeakyReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLU) | Supported |Supported | Doing |layer/activation
+| [mindspore.nn.Tanh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Tanh) | Supported | Supported | Doing |layer/activation
+| [mindspore.nn.GELU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.GELU) | Supported | Supported | Doing |layer/activation
+| [mindspore.nn.Sigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Sigmoid) | Supported |Supported | Doing |layer/activation
+| [mindspore.nn.PReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.PReLU) | Supported |Doing | Doing |layer/activation
+| [mindspore.nn.Dropout](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Dropout) |Supported | Supported | Supported |layer/basic
+| [mindspore.nn.Flatten](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Supported |layer/basic
+| [mindspore.nn.Dense](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Dense) |Supported | Supported | Supported |layer/basic
+| [mindspore.nn.ClipByNorm](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ClipByNorm) |Supported | Supported | Doing |layer/basic
+| [mindspore.nn.Norm](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Supported | Supported | Doing |layer/basic
+| [mindspore.nn.OneHot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.OneHot) | Supported | Supported | Supported |layer/basic
+| [mindspore.nn.Range](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Range) | Supported | Doing | Doing |layer/basic
+| [mindspore.nn.SequentialCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SequentialCell) |Supported | Supported | Doing |layer/container
+| [mindspore.nn.CellList](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.CellList) | Supported | Supported | Doing |layer/container
+| [mindspore.nn.Conv2d](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) | Supported | Supported | Supported |layer/conv
+| [mindspore.nn.Conv2dTranspose](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dTranspose) | Supported | Supported | Doing |layer/conv
+| [mindspore.nn.Conv1d](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv1d) | Supported | Supported | Doing |layer/conv
+| [mindspore.nn.Conv1dTranspose](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv1dTranspose) | Supported | Supported | Doing |layer/conv
+| [mindspore.nn.Embedding](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Embedding) |Supported | Supported | Doing |layer/embedding
+| [mindspore.nn.ImageGradients](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ImageGradients) | Supported |Supported | Doing |layer/image
+| [mindspore.nn.SSIM](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SSIM) | Supported | Supported | Doing |layer/image
+| [mindspore.nn.PSNR](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.PSNR) | Supported |Doing | Doing |layer/image
+| [mindspore.nn.CentralCrop](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.CentralCrop) | Supported |Doing | Doing |layer/image
+| [mindspore.nn.LSTM](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LSTM) | Doing | Supported | Supported |layer/lstm
+| [mindspore.nn.GlobalBatchNorm](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.GlobalBatchNorm) | Supported |Doing | Doing |layer/normalization
+| [mindspore.nn.BatchNorm1d](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm1d) | Supported |Doing | Doing |layer/normalization
+| [mindspore.nn.BatchNorm2d](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) | Supported | Supported | Doing |layer/normalization
+| [mindspore.nn.GroupNorm](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.GroupNorm) | Supported | Doing | Doing |layer/normalization
+| [mindspore.nn.LayerNorm](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LayerNorm) | Supported | Supported | Doing |layer/normalization
+| [mindspore.nn.MatrixDiag](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiag) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MatrixDiagPart](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiagPart) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MatrixSetDiag](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MatrixSetDiag) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.LinSpace](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LinSpace) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MaxPool2d](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MaxPool2d) | Supported | Supported | Supported |layer/pooling
+| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) | Supported | Supported | Doing |layer/pooling
+| [mindspore.nn.DenseBnAct](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.DenseBnAct) |Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnAct](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnAct) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.FakeQuantWithMinMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.FakeQuantWithMinMax) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnFoldQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnWithoutFoldQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnWithoutFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2dQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.DenseQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.DenseQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.ActQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ActQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.LeakyReLUQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLUQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSwishQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.HSwishQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSigmoidQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.HSigmoidQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.TensorAddQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.TensorAddQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.MulQuant](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MulQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.L1Loss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Supported |Supported | Doing |loss/loss
+| [mindspore.nn.MSELoss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.MSELoss) | Supported |Doing | Doing |loss/loss
+| [mindspore.nn.SmoothL1Loss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SmoothL1Loss) | Supported |Doing | Doing |loss/loss
+| [mindspore.nn.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyWithLogits) | Supported | Supported | Supported |loss/loss
+| [mindspore.nn.SoftmaxCrossEntropyExpand](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyExpand) | Supported |Supported | Doing |loss/loss
+| [mindspore.nn.CosineEmbeddingLoss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.CosineEmbeddingLoss) |Supported |Supported | Doing |loss/loss
+| [mindspore.nn.ProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ProximalAdagrad) | Supported |Doing | Doing |optim/ProximalAdagrad
+| [mindspore.nn.LazyAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LazyAdam) | Supported |Doing | Doing |optim/lazyadam
+| [mindspore.nn.Adam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Adam) | Supported |Doing | Doing |optim/adam
+| [mindspore.nn.AdamWeightDecay](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.AdamWeightDecay) | Supported | Supported | Doing |optim/adam
+| [mindspore.nn.Lamb](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Lamb) | Supported | Supported | Doing |optim/lamb
+| [mindspore.nn.LARS](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.LARS) |Supported |Doing | Doing |optim/lars
+| [mindspore.nn.Momentum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Momentum) | Supported | Supported | Supported |optim/momentum
+| [mindspore.nn.Optimizer](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Optimizer) | Supported | Supported | Doing |optim/optimizer
+| [mindspore.nn.RMSProp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.RMSProp) | Supported | Supported | Doing |optim/optimizer
+| [mindspore.nn.SGD](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.SGD) |Supported |Supported | Doing |optim/sgd
+| [mindspore.nn.WithLossCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.WithLossCell) | Supported | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.WithGradCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.WithGradCell) | Supported | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.TrainOneStepCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepCell) | Supported | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.GetNextSingleOp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.GetNextSingleOp) |Doing | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.WithEvalCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.WithEvalCell) | Supported | Supported | Doing |wrap/cell_wrapper
+| [mindspore.nn.ParameterUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.ParameterUpdate) | Supported |Doing | Doing |wrap/cell_wrapper
+| [mindspore.nn.DistributedGradReducer](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.DistributedGradReducer) | Supported |Doing | Doing |wrap/grad_reducer
+| [mindspore.nn.DynamicLossScaleUpdateCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.DynamicLossScaleUpdateCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.FixedLossScaleUpdateCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.FixedLossScaleUpdateCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.TrainOneStepWithLossScaleCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepWithLossScaleCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.Cell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Cell) | Supported | Supported | Supported |cell
+| [mindspore.nn.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.EmbeddingLookup) |Supported | Supported | Supported |layer/embedding
+| [mindspore.nn.Pad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Pad) |Supported | Supported | Doing |layer/basic
+
+## mindspore.ops.operations
+
+| Operation | Ascend | GPU | CPU | Operator Category
+| :----------- |:------ |:------ |:-----|:---
+| [mindspore.ops.Flatten](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Flatten) | Supported | Supported |Supported | nn_ops
+| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Softmax) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Acosh) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloorMod) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Elu) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.MirrorPad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MirrorPad) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Unpack](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Unpack) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pack) | Supported| Doing | Doing | nn_ops
+| [mindspore.ops.L2Loss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.L2Loss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.CTCLoss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CTCLoss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.RNNTLoss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RNNTLoss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogSoftmax) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Softplus) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReLU) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReLU6) | Supported | Supported |Supported | nn_ops
+| [mindspore.ops.HSwish](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.HSwish) | Doing | Supported |Doing | nn_ops
+| [mindspore.ops.HSigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.HSigmoid) | Doing | Supported |Doing | nn_ops
+| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sigmoid) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tanh) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.BatchNorm](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchNorm) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.LRN](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LRN) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.Conv2D](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Conv2D) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.DepthwiseConv2dNative](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNative) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNativeBackpropInput) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.DepthwiseConv2dNativeBackpropFilter](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNativeBackpropFilter) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.MaxPoolWithArgmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MaxPoolWithArgmax) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.MaxPool](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MaxPool) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.AvgPool](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AvgPool) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.Conv2DBackpropInput](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Conv2DBackpropInput) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BiasAdd) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TopK) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SoftmaxCrossEntropyWithLogits) | Supported | Supported |Doing | nn_ops
+| [mindspore.ops.SparseSoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseSoftmaxCrossEntropyWithLogits) | Doing | Supported | Supported | nn_ops
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyMomentum) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyAddSign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyPowerSign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyGradientDescent) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalGradientDescent) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyRMSProp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyRMSProp) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.ApplyCenteredRMSProp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyCenteredRMSProp) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagradV2) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseProximalAdagrad) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseLazyAdam) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseAdam) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.SmoothL1Loss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SmoothL1Loss) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SGD](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SGD) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LayerNorm](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LayerNorm) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.L2Normalize) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DropoutGenMask) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DropoutDoMask) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ResizeBilinear](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ResizeBilinear) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.OneHot) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Gelu) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.GetNext](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GetNext) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.PReLU) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LSTM](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LSTM) | Doing | Supported | Supported | nn_ops
+| [mindspore.ops.BasicLSTMCell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BasicLSTMCell) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Pad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pad) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.ROIAlign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ROIAlign) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Adam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Adam) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.BinaryCrossEntropy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BinaryCrossEntropy) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.KLDivLoss](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.KLDivLoss) | Doing | Supported | Doing | nn_ops
+| [mindspore.ops.LARSUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LARSUpdate) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Softsign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignAdd) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMean) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceSum) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceAll](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceAll) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMax) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMin) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.ReduceProd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceProd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.CumProd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CumProd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MatMul) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchMatMul) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.CumSum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CumSum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.AddN](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AddN) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Neg) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sub) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Mul) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Square) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.SquareSumAll](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SquareSumAll) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Rsqrt) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Reciprocal) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pow) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Exp) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Log) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Log1p) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Minimum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Maximum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RealDiv) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Div) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DivNoNan) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Floor) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Equal) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.EqualCount](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.EqualCount) | Doing | Supported | Supported | math_ops
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Greater) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GreaterEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Less) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Atan2) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LessEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseAnd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseOr) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BitwiseXor) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Ceil) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Inv) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Invert](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Invert) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUAllocFloatStatus](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NPUAllocFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUGetFloatStatus](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NPUGetFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUClearFloatStatus](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NPUClearFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.FloatStatus](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloatStatus) | Doing | Supported | Doing | math_ops
+| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cos) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cosh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ACos) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BesselI0e) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BesselI1e) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TruncateDiv) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TruncateMod) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tan) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Asin) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Asinh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Erf) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Erfc) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sin) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sinh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Expm1) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NMSWithMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NMSWithMask) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Abs) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sign) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Round) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApproximateEqual) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.InplaceAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InplaceAdd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.InplaceSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InplaceSub) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Mod) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.DType](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DType) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.SameTypeShape](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SameTypeShape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cast) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.IsSubClass](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.IsSubClass) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.IsInstance](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.IsInstance) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Shape](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Shape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Squeeze) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Transpose) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Split) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.Rank](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Rank) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.TruncatedNormal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TruncatedNormal) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Size](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Size) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Fill](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Fill) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.OnesLike) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ZerosLike) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.TupleToArray](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TupleToArray) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScalarToArray](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScalarToArray) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScalarToTensor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScalarToTensor) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.InvertPermutation](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InvertPermutation) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Argmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Argmax) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Argmin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Argmin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ArgMaxWithValue) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ArgMinWithValue) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tile) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentSum) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentMin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentProd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentProd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Concat) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ParallelConcat](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ParallelConcat) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Slice) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Select](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Select) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.StridedSlice) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Diag](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Diag) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.DiagPart](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DiagPart) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.Eye](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Eye) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScatterNd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNd) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ResizeNearestNeighbor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ResizeNearestNeighbor) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.GatherNd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherNd) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ApplyFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ApplyFtrl) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseFtrl) | Doing | Doing | Supported | array_ops
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrlV2) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdUpdate) | Supported | Doing | Supported | array_ops
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterUpdate) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMul) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterDiv) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToDepth](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SpaceToDepth) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.DepthToSpace](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DepthToSpace) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToBatch](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SpaceToBatch) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToBatchND](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SpaceToBatchND) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.BatchToSpace](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchToSpace) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.BatchToSpaceND](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchToSpaceND) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.IsFinite](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.IsFinite) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.InplaceUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InplaceUpdate) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterSub) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMax) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterMin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdAdd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdSub) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNonAliasingAdd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Rint](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Rint) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ReverseV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReverseV2) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ReduceOp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceOp) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.AllReduce](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AllReduce) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.AllGather](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AllGather) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.ReduceScatter](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceScatter) | Doing | Supported | Doing | comm_ops
+| [mindspore.ops.Broadcast](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Broadcast) | Supported | Doing | Doing | comm_ops
+| [mindspore.ops.ControlDepend](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ControlDepend) | Supported | Supported | Supported | control_ops
+| [mindspore.ops.GeSwitch](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GeSwitch) | Doing | Doing | Doing | control_ops
+| [mindspore.ops.Merge](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Merge) | Doing | Doing | Doing | control_ops
+| [mindspore.ops.ScalarSummary](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScalarSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.ImageSummary](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ImageSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.TensorSummary](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.HistogramSummary](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.HistogramSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.InsertGradientOf](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InsertGradientOf) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.Print](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Print) | Supported | Doing | Doing | debug_ops
+| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Assign) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.BoundingBoxEncode](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BoundingBoxEncode) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.BoundingBoxDecode](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BoundingBoxDecode) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.PopulationCount](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.PopulationCount) | Supported | Doing | Doing | other_ops
+| [mindspore.ops.CheckValid](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CheckValid) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.IOU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.IOU) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.MakeRefKey](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MakeRefKey) | Supported | Supported | Supported | other_ops
+| [mindspore.ops.InTopK](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.InTopK) | Supported | Doing | Doing | other_ops
+| [mindspore.ops.StandardNormal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.StandardNormal) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.Gamma](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Gamma) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.Poisson](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Poisson) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.UniformInt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UniformInt) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.UniformReal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.UniformReal) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.RandomChoiceWithMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RandomChoiceWithMask) | Doing | Supported | Doing | random_ops
+| [mindspore.ops.RandomCategorical](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RandomCategorical) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.ScalarCast](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScalarCast) | Supported | Supported | Supported | inner_ops
+| [mindspore.ops.ReverseSequence](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReverseSequence) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.CropAndResize](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.CropAndResize) | Supported | Doing | Doing | image_ops
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SquaredDifference) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Xdivy) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Xlogy) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.HistogramFixedWidth](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Eps](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Eps) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReLUV2) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.BNTrainingReduce](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BNTrainingReduce) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.BNTrainingUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BNTrainingUpdate) | Supported | Doing | Doing | nn_ops
+
+## mindspore.ops.functional
+
+| Operation Name | Corresponding functional Operator
+| :----------- | :-----------
+| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pack) | pack
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | tensor_add
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | assign_sub
+| [mindspore.ops.AddN](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AddN) | addn
+| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Square) | square
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | sqrt
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Equal) | equal
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | not_equal
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | logical_not
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd) | logical_and
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr) | logical_or
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | expand_dims
+| [mindspore.ops.DType](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DType) | dtype
+| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cast) | cast
+| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | reshape
+| [mindspore.ops.Shape](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Shape) | shape
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | gather
+| [mindspore.ops.Rank](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Rank) | rank
+| [mindspore.ops.Size](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Size) | size
+| [mindspore.ops.Fill](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Fill) | fill
+| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.OnesLike) | ones_like
+| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tile) | tile
+| [mindspore.ops.Select](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Select) | select
+| [mindspore.ops.ScatterNd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ScatterNd) | scatter_nd
+| [mindspore.ops.GatherNd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherNd) | gather_nd
+| [mindspore.ops.ControlDepend](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ControlDepend) | control_depend
+| [mindspore.ops.Print](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Print) | print
+| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Assign) | assign
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pow) | tensor_pow
+
+> 当前functional支持了一部分没有属性的算子,后续会进一步补齐完整。
diff --git a/docs/note/source_zh_cn/operator_list_parallel.md b/docs/note/source_zh_cn/operator_list_parallel.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb35a6bc8ad14b69e37a484785ceac24b5ce2166
--- /dev/null
+++ b/docs/note/source_zh_cn/operator_list_parallel.md
@@ -0,0 +1,74 @@
+# MindSpore分布式算子支持
+
+`Linux` `Ascend` `GPU` `CPU` `模型开发` `初级` `中级` `高级`
+
+
+
+- [MindSpore分布式算子支持](#mindspore分布式算子支持)
+ - [分布式算子](#分布式算子)
+
+
+
+
+
+## 分布式算子
+
+| 操作名 | 约束
+| :----------- | :-----------
+| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ACos) | None
+| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cos) | None
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | None
+| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Log) | None
+| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Exp) | None
+| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.LogSoftmax) | 输入(logits)在轴(axis)对应的维度不可切分,切分后,在数学逻辑上和单机不等价
+| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Softmax) | 输入(logits)在轴(axis)对应的维度不可切分,切分后,在数学逻辑上和单机不等价
+| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tanh) | None
+| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Gelu) | None
+| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReLU) | None
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | None
+| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Cast) | None
+| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Neg) | None
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | None
+| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Squeeze) | None
+| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Square) | None
+| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sigmoid) | None
+| [mindspore.ops.Dropout](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Dropout) | 不支持重复计算
+| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Div) | None
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | None
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.RealDiv) | None
+| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Mul) | None
+| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Sub) | None
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Pow) | None
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv) | None
+| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Greater) | None
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | None
+| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SigmoidCrossEntropyWithLogits) | None
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Equal) | None
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | None
+| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Maximum) | None
+| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Minimum) | None
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BiasAdd) | None
+| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Concat) | 输入(input_x)在轴(axis)所对应的维度不能切分,切分后,在数学逻辑上和单机不等价
+| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DropoutGenMask) | 需和`DropoutDoMask`联合使用
+| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.DropoutDoMask) | 需和`DropoutGenMask`联合使用,不支持配置切分策略
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | 仅支持1维和2维的input_params,并且input_params的最后一维要32字节对齐(出于性能考虑);不支持标量input_indices;参数在轴(axis)所在维度切分时,不支持重复计算;不支持input_indices和input_params同时进行切分
+| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SparseGatherV2) | 同GatherV2
+| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.EmbeddingLookup) | 同GatherV2
+| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.L2Normalize) | 输入(input_x)在轴(axis)对应的维度不能切,切分后,在数学逻辑上和单机不等价
+| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.SoftmaxCrossEntropyWithLogits) | 输入(logits、labels)的最后一维不能切分;有两个输出,正向的loss只支持取[0]
+| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.MatMul) | 不支持`transpose_a=True`
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.BatchMatMul) | 不支持`transpose_a=True`
+| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.PReLU) | weight的shape在非[1]的情况下,输入(input_x)的Channel维要和weight的切分方式一致
+| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.OneHot) | 仅支持输入(indices)是1维的Tensor,切分策略要配置输出的切分策略,以及第1和第2个输入的切分策略
+| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceSum) | None
+| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMax) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致
+| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMin) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ArgMinWithValue) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ArgMaxWithValue) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致
+| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.ReduceMean) | None
+| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | 不支持配置切分策略
+| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.StridedSlice) | 仅支持值为全0的mask;需要切分的维度必须全部提取;输入在strides不为1对应的维度不支持切分
+| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Tile) | 仅支持对multiples配置切分策略
+| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.Transpose) | None
+
+> 重复计算是指,机器没有用满,比如:集群有8张卡跑分布式训练,切分策略只对输入切成了4份。这种情况下会发生重复计算。
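上述说明中的重复计算倍数可以直接推算:设备数除以切分策略产生的分片总数。下面是一个不依赖MindSpore的示意性计算(`replication_factor`为此处假设的辅助函数,并非MindSpore接口):

```python
from functools import reduce
from operator import mul

def replication_factor(num_devices, strategy):
    """每个分片的计算被重复执行的次数:
    设备数除以切分策略产生的分片总数。"""
    shards = reduce(mul, strategy, 1)
    assert num_devices % shards == 0, "设备数需能被分片数整除"
    return num_devices // shards

# 注释中的例子:集群8张卡,输入只切成4份,
# 每个分片由2张卡重复计算,即发生2倍重复计算。
print(replication_factor(8, (4,)))  # 2
```

当倍数为1时(如8卡切8份)设备被用满,不发生重复计算。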
diff --git a/docs/note/source_zh_cn/others.rst b/docs/note/source_zh_cn/others.rst
new file mode 100644
index 0000000000000000000000000000000000000000..e3c3d6fd55b643e10d8027ad908a731cc5eb9f6d
--- /dev/null
+++ b/docs/note/source_zh_cn/others.rst
@@ -0,0 +1,11 @@
+其他说明
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ glossary
+ roadmap
+ help_seeking_path
+ community
+
diff --git a/docs/source_zh_cn/roadmap.md b/docs/note/source_zh_cn/roadmap.md
similarity index 95%
rename from docs/source_zh_cn/roadmap.md
rename to docs/note/source_zh_cn/roadmap.md
index 6b24fdc3e17857d8f5612e808f8ba8b218ee3d5e..6519569e35f5cadb5dd2c5235eea47b4c30ed5ce 100644
--- a/docs/source_zh_cn/roadmap.md
+++ b/docs/note/source_zh_cn/roadmap.md
@@ -15,7 +15,7 @@
-
+
以下将展示MindSpore近一年的高阶计划,我们会根据用户的反馈诉求,持续调整计划的优先级。
diff --git a/docs/note/source_zh_cn/specification_note.rst b/docs/note/source_zh_cn/specification_note.rst
new file mode 100644
index 0000000000000000000000000000000000000000..566271cc24fdfdb5bbf6400f4d1d4d722977d723
--- /dev/null
+++ b/docs/note/source_zh_cn/specification_note.rst
@@ -0,0 +1,13 @@
+规格说明
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ benchmark
+ network_list
+ operator_list
+ constraints_on_network_construction
+ image_classification
+ object_detection
+
diff --git a/docs/programming_guide/Makefile b/docs/programming_guide/Makefile
new file mode 100644
index 0000000000000000000000000000000000000000..1eff8952707bdfa503c8d60c1e9a903053170ba2
--- /dev/null
+++ b/docs/programming_guide/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS ?=
+SPHINXBUILD ?= sphinx-build
+SOURCEDIR = source_zh_cn
+BUILDDIR = build_zh_cn
+
+# Put it first so that "make" without argument is like "make help".
+help:
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/programming_guide/requirements.txt b/docs/programming_guide/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..162b50040286bb9a0177801c580a31013082a360
--- /dev/null
+++ b/docs/programming_guide/requirements.txt
@@ -0,0 +1,6 @@
+sphinx >= 2.2.1, <= 2.4.4
+recommonmark
+sphinx-markdown-tables
+sphinx_rtd_theme
+numpy
+jieba
diff --git a/docs/programming_guide/source_en/_static/logo_notebook.png b/docs/programming_guide/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/programming_guide/source_en/_static/logo_notebook.png differ
diff --git a/docs/programming_guide/source_en/_static/logo_source.png b/docs/programming_guide/source_en/_static/logo_source.png
new file mode 100644
index 0000000000000000000000000000000000000000..fc347d271abe082ae8d16242328551648766b6fb
Binary files /dev/null and b/docs/programming_guide/source_en/_static/logo_source.png differ
diff --git a/docs/programming_guide/source_en/api_structure.md b/docs/programming_guide/source_en/api_structure.md
new file mode 100644
index 0000000000000000000000000000000000000000..5a45a2d3ffae10eed437cc747f81a7be043525ab
--- /dev/null
+++ b/docs/programming_guide/source_en/api_structure.md
@@ -0,0 +1,3 @@
+# Note
+
+Programming guide is being translated, will be released soon.
\ No newline at end of file
diff --git a/docs/programming_guide/source_en/conf.py b/docs/programming_guide/source_en/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..a1fd767271ac159540440ed65bd0d676163366a9
--- /dev/null
+++ b/docs/programming_guide/source_en/conf.py
@@ -0,0 +1,58 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/index.rst b/docs/programming_guide/source_en/index.rst
similarity index 60%
rename from lite/docs/source_zh_cn/index.rst
rename to docs/programming_guide/source_en/index.rst
index 20ecdbb72c0fe01cbc24c674bda6944504c792ff..2933364dd8c4bf433b4123f4c0fdbdfc9ddf137e 100644
--- a/lite/docs/source_zh_cn/index.rst
+++ b/docs/programming_guide/source_en/index.rst
@@ -1,16 +1,14 @@
.. MindSpore documentation master file, created by
- sphinx-quickstart on Thu Aug 17 10:00:00 2020.
+ sphinx-quickstart on Thu Mar 24 11:00:00 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-MindSpore端侧文档
-==================
+MindSpore Programming Guide
+===========================
.. toctree::
- :glob:
:maxdepth: 1
- architecture
- apicc/apicc
+ api_structure
+ network_list
operator_list
- glossary
diff --git a/docs/programming_guide/source_en/network_list.rst b/docs/programming_guide/source_en/network_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..3e7a1bb42a55c77206df43cf0a0817437741decf
--- /dev/null
+++ b/docs/programming_guide/source_en/network_list.rst
@@ -0,0 +1,7 @@
+Network List
+============
+
+.. toctree::
+ :maxdepth: 1
+
+ MindSpore Network List
\ No newline at end of file
diff --git a/docs/programming_guide/source_en/operator_list.rst b/docs/programming_guide/source_en/operator_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..a43ce5681e0b55dc29b884b0ed9b371a64dfcc2c
--- /dev/null
+++ b/docs/programming_guide/source_en/operator_list.rst
@@ -0,0 +1,10 @@
+Operator List
+=============
+
+.. toctree::
+ :maxdepth: 1
+
+ MindSpore Operator List
+ MindSpore Implicit Type Conversion
+ MindSpore Distributed Operator List
+ MindSpore Lite Operator List
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/_static/logo_notebook.png b/docs/programming_guide/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/_static/logo_notebook.png differ
diff --git a/docs/programming_guide/source_zh_cn/_static/logo_source.png b/docs/programming_guide/source_zh_cn/_static/logo_source.png
new file mode 100644
index 0000000000000000000000000000000000000000..fc347d271abe082ae8d16242328551648766b6fb
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/_static/logo_source.png differ
diff --git a/docs/programming_guide/source_zh_cn/advanced_use.rst b/docs/programming_guide/source_zh_cn/advanced_use.rst
new file mode 100644
index 0000000000000000000000000000000000000000..44d483602dcf4a8e700d770a31ce2a448854cb92
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/advanced_use.rst
@@ -0,0 +1,12 @@
+进阶用法
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ train
+ infer
+ performance_optimization
+ user_defined
+ security_and_privacy
+ extension
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/api_structure.md b/docs/programming_guide/source_zh_cn/api_structure.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf7c1f392ef2356309c94220d9f9ab50aa53dd11
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/api_structure.md
@@ -0,0 +1,73 @@
+# MindSpore API概述
+
+
+
+- [MindSpore API概述](#mindsporeapi概述)
+ - [总体架构](#总体架构)
+ - [设计理念](#设计理念)
+ - [层次结构](#层次结构)
+
+
+
+
+
+## 总体架构
+MindSpore是一个全场景深度学习框架,旨在实现易开发、高效执行、全场景覆盖三大目标,其中易开发表现为API友好、调试难度低以及额外的自动化属性,高效执行包括计算效率、数据预处理效率和分布式训练效率,全场景则指框架同时支持云、边缘以及端侧场景。
+
+MindSpore总体架构分为前端表示层(Mind Expression,ME)、计算图引擎(Graph Engine,GE)和后端运行时三个部分。ME提供了用户级应用软件编程接口(Application Programming Interface,API),用于构建和训练神经网络,并将用户的Python代码转换为数据流图。GE是算子和硬件资源的管理器,负责控制从ME接收的数据流图的执行。后端运行时包含云、边、端上不同环境中的高效运行环境,例如CPU、GPU、Ascend AI处理器、 Android/iOS等。更多总体架构的相关内容请参见[总体架构](https://www.mindspore.cn/docs/zh-CN/master/architecture.html)。
+
+## 设计理念
+
+MindSpore源于全产业的最佳实践,向数据科学家和算法工程师提供了统一的模型训练、推理和导出等接口,支持端、边、云等不同场景下的灵活部署,推动深度学习和科学计算等领域繁荣发展。
+
+MindSpore提供了Python编程范式,用户使用Python原生控制逻辑即可构建复杂的神经网络模型,AI编程变得简单,具体示例请参见[实现一个图片分类应用](https://www.mindspore.cn/tutorial/zh-CN/master/quick_start/quick_start.html)。
+
+目前主流的深度学习框架的执行模式有两种,分别为静态图模式和动态图模式。静态图模式拥有较高的训练性能,但难以调试。动态图模式相较于静态图模式虽然易于调试,但难以高效执行。MindSpore提供了动态图和静态图统一的编码方式,大大增加了静态图和动态图的可兼容性,用户无需开发多套代码,仅变更一行代码便可切换动态图/静态图模式,例如设置`context.set_context(mode=context.PYNATIVE_MODE)`切换成动态图模式,设置`context.set_context(mode=context.GRAPH_MODE)`即可切换成静态图模式,用户可拥有更轻松的开发调试及性能体验。
+
+神经网络模型通常基于梯度下降算法进行训练,但手动求导过程复杂且梯度难以计算。MindSpore的基于源码转换(Source Code Transformation,SCT)的自动微分(Automatic Differentiation)机制采用函数式可微分编程架构,在接口层提供Python编程接口,包括控制流的表达。用户可聚焦于模型算法的数学原生表达,无需手动进行求导,在动态图模式下自动微分的样例代码如下所示。
+
+```python
+import mindspore as ms
+from mindspore.ops import composite as C
+from mindspore import context
+from mindspore.common import Tensor
+
+
+context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU")
+
+
+def cost(x, y): return x * (x + y)
+
+
+def test_grad(x, y):
+ return C.GradOperation(get_all=True)(cost)(Tensor(x, dtype=ms.float32), Tensor(y, dtype=ms.float32))
+
+
+def main():
+ return test_grad(2, 1)
+
+```
+
+其中,第一步定义了一个函数(计算图),第二步利用MindSpore提供的反向接口进行自动微分,定义了一个反向函数(计算图),最后给定一些输入就能获取第一步定义的函数在指定处的导数,求导结果为`(5, 2)`。
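上述求导结果`(5, 2)`也可以用一个不依赖框架的数值微分做最小校验(采用中心差分近似,示例中的`numeric_grad`为此处假设的辅助函数,并非MindSpore接口):

```python
def cost(x, y):
    # 与上文示例相同的函数:cost(x, y) = x * (x + y)
    return x * (x + y)

def numeric_grad(f, x, y, eps=1e-6):
    """中心差分近似 (df/dx, df/dy)。"""
    dx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    dy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    return dx, dy

# 在 (2, 1) 处,解析梯度为 (2x + y, x) = (5, 2),
# 与 GradOperation 的求导结果一致。
gx, gy = numeric_grad(cost, 2.0, 1.0)
print(round(gx), round(gy))  # 5 2
```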
+
+此外,SCT能够将Python代码转换为函数中间表达(Intermediate Representation,IR),函数中间表达构造出能够在不同设备解析和执行的计算图,并且在执行该计算图前,应用了多种软硬件协同优化技术,端、边、云等不同场景下的性能和效率得到针对性的提升。
+
+随着神经网络模型和数据集的规模不断增加,分布式并行训练成为了神经网络训练的常见做法,但分布式并行训练的策略选择和编写十分复杂,这严重制约着深度学习模型的训练效率,阻碍深度学习的发展。MindSpore统一了单机和分布式训练的编码方式,开发者无需编写复杂的分布式策略,在单机代码中添加少量代码即可实现分布式训练,例如设置`context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)`便可自动建立代价模型,为用户选择一种较优的并行模式,提高神经网络训练效率,大大降低了AI开发门槛,使用户能够快速实现模型思路。
+
+## 层次结构
+
+MindSpore向用户提供了3个不同层次的API,支撑用户进行网络构建、整图执行、子图执行以及单算子执行,从低到高分别为Low-Level Python API、Medium-Level Python API以及High-Level Python API。
+
+
+
+- Low-Level Python API
+
+ 第一层为低阶API,主要包括张量定义、基础算子、自动微分等模块,用户可使用低阶API轻松实现张量定义和求导计算,例如用户可通过`Tensor`接口自定义张量,使用`ops.composite`模块下的`GradOperation`算子计算函数在指定处的导数。
+
+- Medium-Level Python API
+
+ 第二层为中阶API,其封装了低阶API,提供网络层、优化器、损失函数等模块,用户可通过中阶API灵活构建神经网络和控制执行流程,快速实现模型算法逻辑,例如用户可调用`Cell`接口构建神经网络模型和计算逻辑,通过使用`loss`模块和`Optimizer`接口为神经网络模型添加损失函数和优化方式。
+
+- High-Level Python API
+
+ 第三层为高阶API,其在中阶API的基础上又提供了训练推理的管理、Callback、混合精度训练等高级接口,方便用户控制整网的执行流程和实现神经网络的训练及推理,例如用户使用`Model`接口,指定要训练的神经网络模型和相关的训练设置,即可对神经网络模型进行训练。
diff --git a/api/source_zh_cn/programming_guide/augmentation.md b/docs/programming_guide/source_zh_cn/augmentation.md
similarity index 60%
rename from api/source_zh_cn/programming_guide/augmentation.md
rename to docs/programming_guide/source_zh_cn/augmentation.md
index 74fd8b0ba812a83678302bdedf5be17cea0ea403..13a3333111685a5c1d86e86952681afa627f6cbf 100644
--- a/api/source_zh_cn/programming_guide/augmentation.md
+++ b/docs/programming_guide/source_zh_cn/augmentation.md
@@ -10,26 +10,25 @@
- [Resize](#resize)
- [Invert](#invert)
- [py_transforms](#py_transforms)
- - [ComposeOp](#composeop)
+ - [Compose](#compose)
- [使用说明](#使用说明)
-
+
## 概述
在计算机视觉任务中,数据量过小或是样本场景单一等问题都会影响模型的训练效果,用户可以通过数据增强操作对图像进行预处理,从而提升模型的泛化性。
-MindSpore提供了c_transforms模块和py_transforms模块供用户进行数据增强操作,用户也可以自定义函数或者算子进行数据增强。
-
-MindSpore目前支持的常用数据增强算子如下表所示,更多数据增强算子参见[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.vision.html)。
+MindSpore提供了`c_transforms`模块和`py_transforms`模块供用户进行数据增强操作,用户也可以自定义函数或者算子进行数据增强。
| 模块 | 实现 | 说明 |
| ---- | ---- | ---- |
| c_transforms | 基于C++的OpenCV实现 | 具有较高的性能。 |
-| py_transforms | 基于Python的PIL实现 | 该模块提供了多种图像增强功能,并提供了PIL Image和numpy数组之间的传输方法。|
+| py_transforms | 基于Python的PIL实现 | 该模块提供了多种图像增强功能,并提供了PIL Image和NumPy数组之间的传输方法。|
+MindSpore目前支持的常用数据增强算子如下表所示,更多数据增强算子参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.dataset.vision.html)。
| 模块 | 算子 | 说明 |
| ---- | ---- | ---- |
@@ -40,46 +39,38 @@ MindSpore目前支持的常用数据增强算子如下表所示,更多数据
| py_transforms | RandomCrop | 在图像随机位置裁剪指定大小子图像。 |
| | Resize | 将图像缩放到指定大小。 |
| | Invert | 将图像进行反相。 |
-| |ComposeOp | 将列表中的数据增强操作依次执行。 |
+| |Compose | 将列表中的数据增强操作依次执行。 |
## c_transforms
-下面将简要介绍几种常用的c_transforms模块数据增强算子的使用方法,更多的c_transforms模块数据增强算子参见[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.vision.html#module-mindspore.dataset.vision.c_transforms)。
+下面将简要介绍几种常用的`c_transforms`模块数据增强算子的使用方法。
### RandomCrop
对输入图像进行在随机位置的裁剪。
**参数说明:**
-- *size: 裁剪图像的尺寸。*
-- *padding: 填充的像素数量。*
-- *pad_if_needed: 原图小于裁剪尺寸时,是否需要填充。*
-- *fill_value: 在常量填充模式时使用的填充值。*
-- *padding_mode: 填充模式。*
+- `size`:裁剪图像的尺寸。
+- `padding`:填充的像素数量。
+- `pad_if_needed`:原图小于裁剪尺寸时,是否需要填充。
+- `fill_value`:在常量填充模式时使用的填充值。
+- `padding_mode`:填充模式。
-```python
-# 对输入图像进行在随机位置的裁剪
+下面的样例首先使用顺序采样器加载CIFAR-10数据集,然后对已加载的图片进行长宽均为10的随机裁剪,最后输出裁剪前后的图片形状及对应标签,并对图片进行了展示。
+```python
import matplotlib.pyplot as plt
import mindspore.dataset as ds
-import mindspore.dataset.transforms.vision.c_transforms as c_trans
+import mindspore.dataset.vision.c_transforms as c_trans
-# 下载Cifar10数据集,将其解压到Cifar10Data目录
DATA_DIR = "../data/dataset/testCifar10Data2"
-# 指定一个顺序采样器SequentialSampler,按照读取顺序获取3个样本数据
sampler = ds.SequentialSampler(num_samples=3)
-
-# 使用Cifar10Dataset读取数据集,指定sampler为上述采样器
dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-# 创建一个随机裁剪算子,裁剪后的长宽分为10个像素
random_crop = c_trans.RandomCrop([10, 10])
+dataset2 = dataset1.map(operations=random_crop, input_columns=["image"])
-# 使用map算子将其作用到数据管道的数据集中
-dataset2 = dataset1.map(input_columns=["image"], operations=random_crop)
-
-# 启动数据管道,输出3个样本数据
image_list1, label_list1 = [], []
image_list2, label_list2 = [], []
for data1, data2 in zip(dataset1.create_dict_iterator(), dataset2.create_dict_iterator()):
@@ -89,33 +80,37 @@ for data1, data2 in zip(dataset1.create_dict_iterator(), dataset2.create_dict_it
image_list2.append(data2['image'])
label_list2.append(data2['label'])
print("Cropped image Shape:", data2['image'].shape, ", Cropped label:", data2['label'])
- print("")
+ print("------")
-# 将原图与裁剪后的图可视化
num_samples = len(image_list1) + len(image_list2)
for i in range(num_samples):
if i < len(image_list1):
plt.subplot(2, len(image_list1), i + 1)
- plt.imshow(image_list1[i])
- plt.title(label_list1[i])
+ plt.imshow(image_list1[i].asnumpy())
+ plt.title(label_list1[i].asnumpy())
else:
plt.subplot(2, len(image_list2), i + 1)
- plt.imshow(image_list2[i % len(image_list2)])
- plt.title(label_list2[i % len(image_list2)])
+ plt.imshow(image_list2[i % len(image_list2)].asnumpy())
+ plt.title(label_list2[i % len(image_list2)].asnumpy())
plt.show()
```
+输出结果如下:
+
```
Source image Shape : (32, 32, 3) , Source label : 6
Cropped image Shape: (10, 10, 3) , Cropped label: 6
-
+------
Source image Shape : (32, 32, 3) , Source label : 9
Cropped image Shape: (10, 10, 3) , Cropped label: 9
-
+------
Source image Shape : (32, 32, 3) , Source label : 9
Cropped image Shape: (10, 10, 3) , Cropped label: 9
+------
```
+图片展示如下:
+

### RandomHorizontalFlip
@@ -123,34 +118,25 @@ Cropped image Shape: (10, 10, 3) , Cropped label: 9
对输入图像进行随机水平翻转。
**参数说明:**
-- *prob: 单张图片发生翻转的概率。*
+- `prob`: 单张图片发生翻转的概率。
-```python
-# 对输入图像进行随机水平翻转
+下面的样例首先使用随机采样器加载CIFAR-10数据集,然后对已加载的图片进行概率为0.8的随机水平翻转,最后输出翻转前后的图片形状及对应标签,并对图片进行了展示。
+```python
import matplotlib.pyplot as plt
import mindspore.dataset as ds
-import mindspore.dataset.transforms.vision.c_transforms as c_trans
+import mindspore.dataset.vision.c_transforms as c_trans
-# 设置全局随机种子
ds.config.set_seed(6)
-# 下载Cifar10数据集,将其解压到Cifar10Data目录
DATA_DIR = "../data/dataset/testCifar10Data2"
-# 指定一个随机采样器RandomSampler,按照读取顺序获取4个样本数据
sampler = ds.RandomSampler(num_samples=4)
-
-# 使用Cifar10Dataset读取数据集,指定sampler为上述采样器
dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-# 创建一个随机翻转算子,设置翻转概率为0.8
random_horizontal_flip = c_trans.RandomHorizontalFlip(prob=0.8)
+dataset2 = dataset1.map(operations=random_horizontal_flip, input_columns=["image"])
-# 使用map算子将其作用到数据管道的数据集中
-dataset2 = dataset1.map(input_columns=["image"], operations=random_horizontal_flip)
-
-# 启动数据管道,输出4个样本数据
image_list1, label_list1 = [], []
image_list2, label_list2 = [], []
for data1, data2 in zip(dataset1.create_dict_iterator(), dataset2.create_dict_iterator()):
@@ -160,36 +146,40 @@ for data1, data2 in zip(dataset1.create_dict_iterator(), dataset2.create_dict_it
image_list2.append(data2['image'])
label_list2.append(data2['label'])
print("Flipped image Shape:", data2['image'].shape, ", Flipped label:", data2['label'])
- print("")
+ print("------")
-# 将原图与裁剪后的图可视化
num_samples = len(image_list1) + len(image_list2)
for i in range(num_samples):
if i < len(image_list1):
plt.subplot(2, len(image_list1), i + 1)
- plt.imshow(image_list1[i])
- plt.title(label_list1[i])
+ plt.imshow(image_list1[i].asnumpy())
+ plt.title(label_list1[i].asnumpy())
else:
plt.subplot(2, len(image_list2), i + 1)
- plt.imshow(image_list2[i % len(image_list2)])
- plt.title(label_list2[i % len(image_list2)])
+ plt.imshow(image_list2[i % len(image_list2)].asnumpy())
+ plt.title(label_list2[i % len(image_list2)].asnumpy())
plt.show()
```
+输出结果如下:
+
```
Source image Shape : (32, 32, 3) , Source label : 3
Flipped image Shape: (32, 32, 3) , Flipped label: 3
-
+------
Source image Shape : (32, 32, 3) , Source label : 6
Flipped image Shape: (32, 32, 3) , Flipped label: 6
-
+------
Source image Shape : (32, 32, 3) , Source label : 6
Flipped image Shape: (32, 32, 3) , Flipped label: 6
-
+------
Source image Shape : (32, 32, 3) , Source label : 9
Flipped image Shape: (32, 32, 3) , Flipped label: 9
+------
```
+图片展示如下:
+

### Resize
@@ -197,29 +187,23 @@ Flipped image Shape: (32, 32, 3) , Flipped label: 9
对输入图像进行缩放。
**参数说明:**
-- *self: 缩放的目标大小。*
-- *interpolation: 缩放时采用的插值方式。*
+- `size`:缩放的目标大小。
+- `interpolation`:缩放时采用的插值方式。
-```python
-# 对输入图像进行指定大小缩放
+下面的样例首先加载MNIST数据集,然后将已加载的图片缩放至(101, 101)大小,最后输出缩放前后的图片形状及对应标签,并对图片进行了展示。
+```python
import matplotlib.pyplot as plt
import mindspore.dataset as ds
-import mindspore.dataset.transforms.vision.c_transforms as c_trans
+import mindspore.dataset.vision.c_transforms as c_trans
-# 下载MNIST数据集,将其解压到MnistData目录
DATA_DIR = "../data/dataset/testMnistData2"
-# 使用MnistDataset读取数据集
dataset1 = ds.MnistDataset(DATA_DIR, num_samples=4, shuffle=False)
-# 创建一个缩放算子,将MNIST的图片从(28, 28)缩放到(101, 101)
resize = c_trans.Resize(size=[101, 101])
+dataset2 = dataset1.map(operations=resize, input_columns=["image"])
-# 使用map算子将其作用到数据管道的数据集中
-dataset2 = dataset1.map(input_columns=["image"], operations=resize)
-
-# 启动数据管道
image_list1, label_list1 = [], []
image_list2, label_list2 = [], []
for data1, data2 in zip(dataset1.create_dict_iterator(), dataset2.create_dict_iterator()):
@@ -229,68 +213,63 @@ for data1, data2 in zip(dataset1.create_dict_iterator(), dataset2.create_dict_it
image_list2.append(data2['image'])
label_list2.append(data2['label'])
print("Flipped image Shape:", data2['image'].shape, ", Flipped label:", data2['label'])
- print("")
+ print("------")
-# 将原图与裁剪后的图可视化
num_samples = len(image_list1) + len(image_list2)
for i in range(num_samples):
if i < len(image_list1):
plt.subplot(2, len(image_list1), i + 1)
- plt.imshow(image_list1[i].squeeze(), cmap=plt.cm.gray)
- plt.title(label_list1[i])
+ plt.imshow(image_list1[i].asnumpy().squeeze(), cmap=plt.cm.gray)
+ plt.title(label_list1[i].asnumpy())
else:
plt.subplot(2, len(image_list2), i + 1)
- plt.imshow(image_list2[i % len(image_list2)].squeeze(), cmap=plt.cm.gray)
- plt.title(label_list2[i % len(image_list2)])
+ plt.imshow(image_list2[i % len(image_list2)].asnumpy().squeeze(), cmap=plt.cm.gray)
+ plt.title(label_list2[i % len(image_list2)].asnumpy())
plt.show()
```
+输出结果如下:
+
```
Source image Shape : (28, 28, 1) , Source label : 5
Flipped image Shape: (101, 101, 1) , Flipped label: 5
-
+------
Source image Shape : (28, 28, 1) , Source label : 0
Flipped image Shape: (101, 101, 1) , Flipped label: 0
-
+------
Source image Shape : (28, 28, 1) , Source label : 4
Flipped image Shape: (101, 101, 1) , Flipped label: 4
-
+------
Source image Shape : (28, 28, 1) , Source label : 1
Flipped image Shape: (101, 101, 1) , Flipped label: 1
+------
```
+图片展示如下:
+

### Invert
对输入图像进行反相处理。
-```python
-# 对输入图像进行反相处理
+下面的样例首先加载CIFAR-10数据集,然后同时定义缩放和反相操作并作用于已加载的图片,最后输出缩放与反相前后的图片形状及对应标签,并对图片进行了展示。
+```python
import matplotlib.pyplot as plt
import mindspore.dataset as ds
-import mindspore.dataset.transforms.vision.c_transforms as c_trans
+import mindspore.dataset.vision.c_transforms as c_trans
-# 设置全局随机种子
ds.config.set_seed(8)
-# 下载Cifar10数据集,将其解压到Cifar10Data目录
DATA_DIR = "../data/dataset/testCifar10Data2"
-# 使用Cifar10Dataset读取数据集
dataset1 = ds.Cifar10Dataset(DATA_DIR, num_samples=4, shuffle=True)
-# 创建一个缩放算子,将图片缩放到(101, 101)
resize = c_trans.Resize(size=[101, 101])
-
-# 创建一个反相算子
invert = c_trans.Invert()
+dataset2 = dataset1.map(operations=[resize, invert], input_columns=["image"])
-# 使用map算子将其作用到数据管道的数据集中(两个算子按顺序起作用)
-dataset2 = dataset1.map(input_columns=["image"], operations=[resize, invert])
-
-# 启动数据管道
image_list1, label_list1 = [], []
image_list2, label_list2 = [], []
for data1, data2 in zip(dataset1.create_dict_iterator(), dataset2.create_dict_iterator()):
@@ -300,87 +279,88 @@ for data1, data2 in zip(dataset1.create_dict_iterator(), dataset2.create_dict_it
image_list2.append(data2['image'])
label_list2.append(data2['label'])
print("Flipped image Shape:", data2['image'].shape, ", Flipped label:", data2['label'])
- print("")
+ print("------")
-# 将原图与裁剪后的图可视化
num_samples = len(image_list1) + len(image_list2)
for i in range(num_samples):
if i < len(image_list1):
plt.subplot(2, len(image_list1), i + 1)
- plt.imshow(image_list1[i].squeeze(), cmap=plt.cm.gray)
- plt.title(label_list1[i])
+ plt.imshow(image_list1[i].asnumpy().squeeze(), cmap=plt.cm.gray)
+ plt.title(label_list1[i].asnumpy())
else:
plt.subplot(2, len(image_list2), i + 1)
- plt.imshow(image_list2[i % len(image_list2)].squeeze(), cmap=plt.cm.gray)
- plt.title(label_list2[i % len(image_list2)])
+ plt.imshow(image_list2[i % len(image_list2)].asnumpy().squeeze(), cmap=plt.cm.gray)
+ plt.title(label_list2[i % len(image_list2)].asnumpy())
plt.show()
```
+输出结果如下:
+
```
Source image Shape : (32, 32, 3) , Source label : 4
Flipped image Shape: (32, 32, 3) , Flipped label: 4
-
+------
Source image Shape : (32, 32, 3) , Source label : 9
Flipped image Shape: (32, 32, 3) , Flipped label: 9
-
+------
Source image Shape : (32, 32, 3) , Source label : 6
Flipped image Shape: (32, 32, 3) , Flipped label: 6
-
+------
Source image Shape : (32, 32, 3) , Source label : 5
Flipped image Shape: (32, 32, 3) , Flipped label: 5
+------
```
+图片展示如下:
+

## py_transforms
-下面将简要介绍几种常用的py_transforms模块数据增强算子的使用方法,更多的py_transforms模块数据增强算子参见[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.vision.html#module-mindspore.dataset.vision.py_transforms)。
+下面将简要介绍几种常用的`py_transforms`模块数据增强算子的使用方法。
-### ComposeOp
+### Compose
-```python
-# 对输入图像进行解码,缩放组合操作
+接收一个`transforms`列表,将列表中的数据增强操作依次作用于数据集图片。
+下面的样例首先加载一个图片数据集,然后同时定义解码、缩放和数据类型转换操作,并作用于已加载的图片,最后输出处理后的图片形状及对应标签,并对图片进行了展示。
+
+```python
import matplotlib.pyplot as plt
import mindspore.dataset as ds
-import mindspore.dataset.transforms.vision.py_transforms as py_trans
+import mindspore.dataset.vision.py_transforms as py_trans
+from mindspore.dataset.transforms.py_transforms import Compose
-# 设置全局随机种子
ds.config.set_seed(8)
-# 图像数据集目录
DATA_DIR = "../data/dataset/testPK/data"
-# 使用ImageFolderDatasetV2读取数据集,获取5个样本
-dataset1 = ds.ImageFolderDatasetV2(DATA_DIR, num_samples=5, shuffle=True)
+dataset1 = ds.ImageFolderDataset(DATA_DIR, num_samples=5, shuffle=True)
-# 创建一组数据增强算子的集合
transforms_list = [
- py_trans.Decode(), # 解码图像到PIL格式
- py_trans.Resize(size=(200,200)), # 缩放图像到[200, 200]大小
- py_trans.ToTensor() # 将PIL图像转换到Numpy
+ py_trans.Decode(),
+ py_trans.Resize(size=(200,200)),
+ py_trans.ToTensor()
]
-compose_trans = py_trans.ComposeOp(transforms_list)
-
-# 使用map算子将其作用到数据管道的数据集中
-dataset2 = dataset1.map(input_columns=["image"], operations=compose_trans())
+compose_trans = Compose(transforms_list)
+dataset2 = dataset1.map(operations=compose_trans, input_columns=["image"])
-# 启动数据管道,输出5个样本数据
image_list, label_list = [], []
for data in dataset2.create_dict_iterator():
image_list.append(data['image'])
label_list.append(data['label'])
print("Transformed image Shape:", data['image'].shape, ", Transformed label:", data['label'])
-# 将原图与裁剪后的图可视化
num_samples = len(image_list)
for i in range(num_samples):
plt.subplot(1, len(image_list), i + 1)
- plt.imshow(image_list[i].transpose(1, 2, 0))
- plt.title(label_list[i])
+ plt.imshow(image_list[i].asnumpy().transpose(1, 2, 0))
+ plt.title(label_list[i].asnumpy())
plt.show()
```
+输出结果如下:
+
```
Transformed image Shape: (3, 200, 200) , Transformed label: 3
Transformed image Shape: (3, 200, 200) , Transformed label: 0
@@ -389,11 +369,13 @@ Transformed image Shape: (3, 200, 200) , Transformed label: 0
Transformed image Shape: (3, 200, 200) , Transformed label: 3
```
+图片展示如下:
+

## 使用说明
-请勿混用c_transforms与py_transforms,因为c_transforms是在C++内维护buffer管理,py_transforms是在Python内维护buffer管理,两者混用会降低性能。
+请勿混用`c_transforms`与`py_transforms`,因为两者作用于图片的格式不同,混用会降低处理性能。

@@ -401,15 +383,15 @@ Transformed image Shape: (3, 200, 200) , Transformed label: 3
**推荐的使用方式:**
-- 单独使用py_transform或c_transform
+- 单独使用`py_transform`或`c_transform`

-- 先使用py_transform,再使用c_transform
+- 先使用`py_transform`,再使用`c_transform`

-- 先使用c_transform,再使用py_transform
+- 先使用`c_transform`,再使用`py_transform`

diff --git a/docs/programming_guide/source_zh_cn/auto_augmentation.md b/docs/programming_guide/source_zh_cn/auto_augmentation.md
new file mode 100644
index 0000000000000000000000000000000000000000..e81a383926482f749f793137818f22389ea4296e
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/auto_augmentation.md
@@ -0,0 +1,131 @@
+# 自动数据增强
+
+
+
+- [自动数据增强](#自动数据增强)
+ - [概述](#概述)
+ - [基于概率的自动数据增强](#基于概率的自动数据增强)
+ - [RandomApply](#RandomApply)
+ - [RandomChoice](#RandomChoice)
+ - [RandomSelectSubpolicy](#RandomSelectSubpolicy)
+ - [基于回调参数的自动数据增强](#基于回调参数的自动数据增强)
+
+
+
+
+
+## 概述
+
+MindSpore除了可以让用户自定义数据增强的使用,还提供了一种自动数据增强方式,可以基于特定策略自动对图像进行数据增强处理。
+
+自动数据增强主要分为基于概率的自动数据增强和基于回调参数的自动数据增强。
+
+## 基于概率的自动数据增强
+
+MindSpore提供了一系列基于概率的自动数据增强API,用户可以对各种数据增强操作进行随机选择与组合,使数据增强更加灵活。
+
+关于API的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.dataset.transforms.html)。
+
+### RandomApply
+
+API接收一个数据增强操作列表`transforms`,以一定的概率(默认为0.5)顺序执行列表中的所有数据增强操作,否则全部不执行。
+
+在下面的代码示例中,以0.5的概率来顺序执行`RandomCrop`和`RandomColorAdjust`操作,否则都不执行。
+
+```python
+import mindspore.dataset.vision.c_transforms as c_vision
+from mindspore.dataset.transforms.c_transforms import RandomApply
+
+rand_apply_list = RandomApply([c_vision.RandomCrop(512), c_vision.RandomColorAdjust()])
+```
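`RandomApply`的执行语义可以用纯Python示意如下(与MindSpore内部实现无关,`p`为假设的概率参数,仅作示意):

```python
import random

def random_apply(transforms, data, p=0.5):
    # 以概率p顺序执行列表中的全部变换,否则不做任何处理
    if random.random() < p:
        for transform in transforms:
            data = transform(data)
    return data

random.seed(0)
result = random_apply([lambda x: x + 1, lambda x: x * 2], 3)          # 本次未命中概率,原样返回3
result2 = random_apply([lambda x: x + 1, lambda x: x * 2], 3, p=1.0)  # p=1.0时必然执行,(3+1)*2=8
```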
+
+### RandomChoice
+
+API接收一个数据增强操作列表`transforms`,从中随机选择一个数据增强操作执行。
+
+在下面的代码示例中,等概率地在`CenterCrop`和`RandomCrop`中选择一个操作执行。
+
+```python
+import mindspore.dataset.vision.c_transforms as c_vision
+from mindspore.dataset.transforms.c_transforms import RandomChoice
+
+rand_choice = RandomChoice([c_vision.CenterCrop(512), c_vision.RandomCrop(512)])
+```
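`RandomChoice`的语义同样可以用纯Python示意(仅作示意,与MindSpore内部实现无关):

```python
import random

def random_choice(transforms, data):
    # 从变换列表中等概率随机选择一个变换执行
    return random.choice(transforms)(data)

result = random_choice([lambda x: x + 10, lambda x: x - 10], 5)  # 结果为15或-5之一
```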
+
+### RandomSelectSubpolicy
+
+API接收一个预置策略列表,包含一系列子策略组合,每一子策略由若干个顺序执行的数据增强操作及其执行概率组成。
+
+对各图像先等概率随机选择一种子策略,再依照子策略中的概率顺序执行各个操作。
+
+在下面的代码示例中,预置了两条子策略,子策略1中包含`RandomRotation`、`RandomVerticalFlip`和`RandomColorAdjust`三个操作,概率分别为0.5、1.0和0.8;子策略2中包含`RandomRotation`和`RandomColorAdjust`两个操作,概率分别为1.0和0.2。
+
+```python
+import mindspore.dataset.vision.c_transforms as c_vision
+from mindspore.dataset.vision.c_transforms import RandomSelectSubpolicy
+
+policy_list = [
+ [(c_vision.RandomRotation((45, 45)), 0.5), (c_vision.RandomVerticalFlip(), 1.0), (c_vision.RandomColorAdjust(), 0.8)],
+ [(c_vision.RandomRotation((90, 90)), 1.0), (c_vision.RandomColorAdjust(), 0.2)]
+ ]
+policy = RandomSelectSubpolicy(policy_list)
+```
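子策略的选择与执行逻辑可以用纯Python示意如下(仅作示意,与MindSpore内部实现无关):

```python
import random

def random_select_subpolicy(policy_list, data):
    # 先等概率随机选择一条子策略
    subpolicy = random.choice(policy_list)
    # 再依照子策略中各操作的概率顺序执行
    for transform, prob in subpolicy:
        if random.random() < prob:
            data = transform(data)
    return data

# 仅一条子策略且概率均为1.0时结果确定:(2 + 1) * 3 = 9
result = random_select_subpolicy([[(lambda x: x + 1, 1.0), (lambda x: x * 3, 1.0)]], 2)
```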
+
+## 基于回调参数的自动数据增强
+
+MindSpore的`sync_wait`接口支持按batch或epoch粒度在训练过程中动态调整数据增强策略,用户可以设定阻塞条件触发特定的数据增强操作。
+
+`sync_wait`将阻塞整个数据处理pipeline直到`sync_update`触发用户预先定义的`callback`函数,两者需配合使用,对应说明如下:
+
+- sync_wait(condition_name, num_batch=1, callback=None)
+
+ 该API为数据集添加一个阻塞条件`condition_name`,当`sync_update`调用时执行指定的`callback`函数。
+
+- sync_update(condition_name, num_batch=None, data=None)
+
+ 该API用于释放对应`condition_name`的阻塞,并对`data`触发指定的`callback`函数。
+
+下面将演示基于回调参数的自动数据增强的用法。
+
+1. 用户预先定义`Augment`类,其中`preprocess`为自定义的数据增强函数,`update`为更新数据增强策略的回调函数。
+
+ ```python
+ import mindspore.dataset.vision.py_transforms as transforms
+ import mindspore.dataset as ds
+ import numpy as np
+
+ class Augment:
+ def __init__(self):
+ self.ep_num = 0
+ self.step_num = 0
+
+ def preprocess(self, input_):
+ return (np.array((input_ + self.step_num ** self.ep_num - 1), ))
+
+ def update(self, data):
+ self.ep_num = data['ep_num']
+ self.step_num = data['step_num']
+ ```
+
+2. 数据处理pipeline先回调自定义的增强策略更新函数`update`,然后在`map`操作中按更新后的策略来执行`preprocess`中定义的数据增强操作。
+
+ ```python
+ arr = list(range(1, 4))
+ dataset = ds.NumpySlicesDataset(arr, shuffle=False)
+ aug = Augment()
+ dataset = dataset.sync_wait(condition_name="policy", callback=aug.update)
+ dataset = dataset.map(operations=[aug.preprocess])
+ ```
+
+3. 在每个step中调用`sync_update`进行数据增强策略的更新。
+
+ ```python
+ epochs = 5
+ itr = dataset.create_tuple_iterator(num_epochs=epochs)
+ step_num = 0
+ for ep_num in range(epochs):
+ for data in itr:
+ print("epoch: {}, step: {}, data: {}".format(ep_num, step_num, data))
+ step_num += 1
+ dataset.sync_update(condition_name="policy", data={'ep_num': ep_num, 'step_num': step_num})
+ ```
diff --git a/api/source_zh_cn/programming_guide/auto_parallel_context.md b/docs/programming_guide/source_zh_cn/auto_parallel.md
similarity index 78%
rename from api/source_zh_cn/programming_guide/auto_parallel_context.md
rename to docs/programming_guide/source_zh_cn/auto_parallel.md
index 14ce9bdab9d86ddc8e66279f2d92167cc2ae0f48..7c9aa6512a1d79e560ced1fd407ef2ceaa63b013 100644
--- a/api/source_zh_cn/programming_guide/auto_parallel_context.md
+++ b/docs/programming_guide/source_zh_cn/auto_parallel.md
@@ -6,31 +6,34 @@
- [概述](#概述)
- [分布式并行配置](#分布式并行配置)
- [通用配置](#通用配置)
- - [device_num](#device_num)
- - [global_rank](#global_rank)
- - [gradients_mean](#gradients_mean)
- - [parallel_mode](#parallel_mode)
+ - [device_num](#device_num)
+ - [global_rank](#global_rank)
+ - [gradients_mean](#gradients_mean)
+ - [parallel_mode](#parallel_mode)
+ - [all_reduce_fusion_config](#all_reduce_fusion_config)
- [自动并行配置](#自动并行配置)
- - [gradient_fp32_sync](#gradient_fp32-sync)
- - [loss_repeated_mean](#loss_repeated_mean)
- - [auto_parallel_search_mode](#auto_parallel_search_mode)
- - [strategy_ckpt_load_file](#strategy_ckpt_load_file)
- - [strategy_ckpt_save_file](#strategy_ckpt_save_file)
- - [full_batch](#full_batch)
+ - [gradient_fp32_sync](#gradient_fp32_sync)
+ - [auto_parallel_search_mode](#auto_parallel_search_mode)
+ - [strategy_ckpt_load_file](#strategy_ckpt_load_file)
+ - [strategy_ckpt_save_file](#strategy_ckpt_save_file)
+ - [full_batch](#full_batch)
- [数据并行配置](#数据并行配置)
- - [enable_parallel_optimizer](#enable_parallel_optimizer)
+ - [enable_parallel_optimizer](#enable_parallel_optimizer)
- [混合并行配置](#混合并行配置)
- - [layerwise_parallel](#layerwise_parallel)
+ - [layerwise_parallel](#layerwise_parallel)
- [分布式通信接口](#分布式通信接口)
- [init](#init)
- - [get_rank](#get_rank)
- [get_group_size](#get_group_size)
+ - [get_rank](#get_rank)
+ - [分布式属性配置](#分布式属性配置)
+ - [cross_batch](#cross_batch)
+ - [fusion](#fusion)
- [数据并行](#数据并行)
- [自动并行](#自动并行)
-
+
## 概述
@@ -43,7 +46,7 @@ MindSpore提供了分布式并行训练的功能,它支持了包括数据并
MindSpore的分布式并行配置通过`auto_parallel_context`来进行集中管理,用户可根据自身需求和实际情况来进行个性化的配置。这些配置可分为四大类:
- 通用配置:对数据并行和自动并行均起作用的配置,如:`device_num`、`global_rank`。
-- 自动并行配置:仅在自动并行模式下起作用的配置,如:`gradient_fp32_sync`、`loss_repeated_mean`。
+- 自动并行配置:仅在自动并行模式下起作用的配置,如:`gradient_fp32_sync`。
- 数据并行配置:仅在数据并行模式下起作用的配置,如:`enable_parallel_optimizer`。
- 混合并行配置:仅在混合并行模式下起作用的配置,如:`layerwise_parallel`。
@@ -97,15 +100,21 @@ context.get_auto_parallel_context("gradients_mean")
- `stand_alone`:单机模式。
- `data_parallel`:数据并行模式。
- `hybrid_parallel`:混合并行模式。
-- `semi_auto_parallel`:半自动并行模式,即用户可通过`set_strategy`方法给算子配置切分策略,若不配置策略,则默认是数据并行策略。
+- `semi_auto_parallel`:半自动并行模式,即用户可通过`shard`方法给算子配置切分策略,若不配置策略,则默认是数据并行策略。
- `auto_parallel`:自动并行模式,即框架会自动建立代价模型,为用户选择最优的切分策略。
+其中`auto_parallel`和`data_parallel`在MindSpore教程中有完整样例:
+
+。
+
代码样例如下:
```python
-from mindspore import context
+from mindspore import context
+from mindspore.ops import operations as P
-context.set_auto_parallel_context(parallel_mode=“auto_parallel”)
+context.set_auto_parallel_context(parallel_mode="semi_auto_parallel")
+mul = P.Mul().shard(((2, 1), (2, 1)))
context.get_auto_parallel_context("parallel_mode")
```
@@ -140,29 +149,16 @@ context.set_auto_parallel_context(gradient_fp32_sync=False)
context.get_auto_parallel_context("gradient_fp32_sync")
```
-#### loss_repeated_mean
-
-`loss_repeated_mean`表示在loss重复计算的场景下,反向是否进行均值操作,其值为bool类型,默认为True。loss存在重复计算的场景下,反向进行均值操作能使分布式逻辑和单机保持一致。但在某些场景下,不进行均值操作可能会使网络收敛的速度更快。因此,MindSpore提供`loss_repeated_mean`接口,让用户自由配置。
-
-代码样例如下:
-
-```python
-from mindspore import context
-
-context.set_auto_parallel_context(loss_repeated_mean=False)
-context.get_auto_parallel_context("loss_repeated_mean")
-```
-
#### auto_parallel_search_mode
-MindSpore提供了`dynamic_programming`和`recursive_programming`两种搜索策略的算法。`dynamic_programming`能够搜索出代价模型刻画的最优策略,但在搜索巨大网络模型的并行策略时耗时较长;而`recursive_programming`能较快搜索出并行策略,但搜索出来的策略可能不是运行性能最优的。为此,MindSpore提供了参数,让用户自由选择搜索算法。
+MindSpore提供了`dynamic_programming`和`recursive_programming`两种搜索策略的算法。`dynamic_programming`能够搜索出代价模型刻画的最优策略,但在搜索巨大网络模型的并行策略时耗时较长;而`recursive_programming`能较快搜索出并行策略,但搜索出来的策略可能不是运行性能最优的。为此,MindSpore提供了参数,让用户自由选择搜索算法,默认是`dynamic_programming`。
代码样例如下:
```python
from mindspore import context
-context.set_auto_parallel_context(auto_parallel_search_mode=“dynamic_programming”)
+context.set_auto_parallel_context(auto_parallel_search_mode="dynamic_programming")
context.get_auto_parallel_context("auto_parallel_search_mode")
```
@@ -175,7 +171,7 @@ context.get_auto_parallel_context("auto_parallel_search_mode")
```python
from mindspore import context
-context.set_auto_parallel_context(strategy_ckpt_load_file=“./”)
+context.set_auto_parallel_context(strategy_ckpt_load_file="./")
context.get_auto_parallel_context("strategy_ckpt_load_file")
```
@@ -188,7 +184,7 @@ context.get_auto_parallel_context("strategy_ckpt_load_file")
```python
from mindspore import context
-context.set_auto_parallel_context(strategy_ckpt_save_file=“./”)
+context.set_auto_parallel_context(strategy_ckpt_save_file="./")
context.get_auto_parallel_context("strategy_ckpt_save_file")
```
@@ -285,18 +281,45 @@ init()
rank_id = get_rank()
```
-### 数据并行
+## 分布式属性配置
+
+### cross_batch
+
+在特定场景下,`data_parallel`的计算逻辑和`stand_alone`是不一样的,`auto_parallel`在任何场景下都是和`stand_alone`的计算逻辑保持一致。而`data_parallel`的收敛效果可能更好,因此MindSpore提供了`cross_batch`这个参数,可以使`auto_parallel`的计算逻辑和`data_parallel`保持一致,用户可通过`add_prim_attr`方法进行配置,默认值是False。
+
+代码样例如下:
+
+```python
+from mindspore.ops import operations as P
+
+mul = P.Mul().add_prim_attr("cross_batch", True)
+```
+
+### fusion
+
+出于性能考虑,MindSpore提供了通信算子融合功能,`fusion`值相同的同类通信算子会融合在一起,`fusion`值为0时,表示不融合。
+
+代码样例如下:
+
+```python
+from mindspore.ops import operations as P
+
+allreduce = P.AllReduce().add_prim_attr("fusion", 1)
+```
+
+## 数据并行
数据并行是对数据进行切分的并行模式,一般按照batch维度切分,将数据分配到各个计算单元(worker)中,进行模型计算。在数据并行模式下,数据集要以数据并行的方式导入,并且`parallel_mode`要设置为`data_parallel`。
具体用例请参考MindSpore分布式并行训练教程:
-。
+。
-### 自动并行
+## 自动并行
自动并行是融合了数据并行、模型并行及混合并行的一种分布式并行模式,可以自动建立代价模型,为用户选择一种并行模式。其中,代价模型指基于内存的计算开销和通信开销对训练时间建模,并设计高效的算法找到训练时间较短的并行策略。在自动并行模式下,数据集也要以数据并行的方式导入,并且`parallel_mode`要设置为`auto_parallel`。
具体用例请参考MindSpore分布式并行训练教程:
-。
\ No newline at end of file
+。
+
diff --git a/api/source_zh_cn/programming_guide/callback.md b/docs/programming_guide/source_zh_cn/callback.md
similarity index 70%
rename from api/source_zh_cn/programming_guide/callback.md
rename to docs/programming_guide/source_zh_cn/callback.md
index 5717a72a47b5e06350f4f9379e215ebdc8f0411c..cabd49b8b395e42b16f2597986c3701f51022782 100644
--- a/api/source_zh_cn/programming_guide/callback.md
+++ b/docs/programming_guide/source_zh_cn/callback.md
@@ -1,49 +1,46 @@
-# MindSpore Callback回调函数机制
+# Callback机制
-- [概述](#概述)
-- [MindSpore内置回调函数](#mindspore内置回调函数)
-- [MindSpore自定义回调函数](#mindspore自定义回调函数)
+- [Callback机制](#callback机制)
+ - [概述](#概述)
+ - [MindSpore内置回调函数](#mindspore内置回调函数)
+ - [MindSpore自定义回调函数](#mindspore自定义回调函数)
-
+
## 概述
Callback回调函数在MindSpore中被实现为一个类。Callback机制类似于一种监控模式,可以帮助用户观察网络训练过程中各种参数的变化情况和网络内部的状态,还可以根据用户的指定,在达到特定条件后执行相应的操作。在训练过程中,Callback列表会按照定义的顺序执行Callback函数。Callback机制让用户可以及时有效地掌握网络模型的训练状态,并根据需要随时作出调整,可以极大地提升用户的开发效率。
-在MindSpore中,Callback机制一般用在网络训练过程`model.train`中,用户可以通过配置不同的内置回调函数传入不同的参数,从而实现各种功能。例如,可以通过`LossMonitor`监控每一个epoch的loss变化情况,通过checkpoint保存网络参数和模型进行再训练或推理,通过`TimeMonitor`监控每一个epoch,每一个step的训练时间,以及提前终止训练,动态调整参数等。
+在MindSpore中,Callback机制一般用在网络训练过程`model.train`中,用户可以通过配置不同的内置回调函数传入不同的参数,从而实现各种功能。例如,可以通过`LossMonitor`监控每一个epoch的loss变化情况,通过`ModelCheckpoint`保存网络参数和模型进行再训练或推理,通过`TimeMonitor`监控每一个epoch、每一个step的训练时间;此外还可以实现提前终止训练、动态调整参数等功能。
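Callback列表按定义顺序依次执行的机制,可以用纯Python示意如下(与MindSpore的Callback基类无关,仅演示调用顺序):

```python
class LossMonitor:
    def epoch_end(self, epoch, loss, log):
        log.append("loss[{}]={}".format(epoch, loss))

class TimeMonitor:
    def epoch_end(self, epoch, loss, log):
        log.append("time[{}]".format(epoch))

def train(epochs, callbacks):
    log = []
    for epoch in range(epochs):
        loss = 1.0 / (epoch + 1)  # 模拟训练得到的loss
        for cb in callbacks:      # 每个epoch结束时按定义顺序执行各Callback
            cb.epoch_end(epoch, loss, log)
    return log

log = train(2, [LossMonitor(), TimeMonitor()])
```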
## MindSpore内置回调函数
- ModelCheckpoint
- 与模型训练过程相结合,保存训练后的模型和网络参数,方便进行再推理或再训练。
+ 与模型训练过程相结合,保存训练后的模型和网络参数,方便进行再推理或再训练。`ModelCheckpoint`一般与`CheckpointConfig`配合使用,`CheckpointConfig`是一个参数配置类,可自定义配置checkpoint的保存策略。
-- CheckpointConfig
-
- 一般与`ModelCheckpoint`配合使用,可自定义配置checkpoint的保存策略。
-
- 详细内容,请参考[Checkpoint官网教程](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html)。
+ 详细内容,请参考[Checkpoint官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/save_load_model_hybrid_parallel.html)。
- SummaryCollector
帮助收集一些常见信息,如loss、learning rate、计算图、参数权重等,方便用户将训练过程可视化和查看信息,并且可以允许summary操作从summary文件中收集数据。
- 详细内容,请参考[Summary官网教程](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/summary_record.html)。
+ 详细内容,请参考[Summary官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/summary_record.html)。
- LossMonitor
监控训练过程中的loss变化情况,当loss为NAN或INF时,提前终止训练。可以在日志中输出loss,方便用户查看。
- 详细内容,请参考[LossMonitor官网教程](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/second_order_optimizer_for_resnet50_application.html#id11)。
+ 详细内容,请参考[LossMonitor官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/cv_resnet50_second_order_optimizer.html#id11)。
- TimeMonitor
监控训练过程中每个epoch,每个step的运行时间。
- 详细内容,请参考[TimeMonitor官网教程](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/second_order_optimizer_for_resnet50_application.html#id11)。
+ 详细内容,请参考[TimeMonitor官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/cv_resnet50_second_order_optimizer.html#id11)。
## MindSpore自定义回调函数
MindSpore不但有功能强大的内置回调函数,还可以支持用户自定义回调函数。当用户有自己的特殊需求时,可以基于Callback基类,自定义满足用户自身需求的回调函数。Callback可以把训练过程中的重要信息记录下来,通过一个字典类型变量cb_params传递给Callback对象,用户可以在各个自定义的Callback中获取到相关属性,执行自定义操作。
@@ -54,6 +51,6 @@ MindSpore不但有功能强大的内置回调函数,还可以支持用户自
2. 实现保存训练过程中精度最高的checkpoint文件,用户可以自定义在每一轮迭代后都保存当前精度最高的模型。
-详细内容,请参考[自定义Callback官网教程](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/customized_debugging_information.html#id3)。
+详细内容,请参考[自定义Callback官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/custom_debugging_info.html#id3)。
根据教程,用户可以很容易实现具有其他功能的自定义回调函数,如实现在每一轮训练结束后都输出相应的详细训练信息,包括训练进度、训练轮次、训练名称、loss值等;如实现在loss或模型精度达到一定值后停止训练,用户可以设定loss或模型精度的阈值,当loss或模型精度达到该阈值后就提前终止训练等。
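例如"loss低于阈值即提前终止训练"的判断逻辑,可以用纯Python示意如下(仅演示判断逻辑,实际需基于Callback基类实现,并通过`run_context`请求终止):

```python
class EarlyStop:
    """当loss低于阈值时请求提前终止训练(纯Python示意)"""
    def __init__(self, threshold):
        self.threshold = threshold
        self.stop = False

    def epoch_end(self, loss):
        if loss < self.threshold:
            self.stop = True

cb = EarlyStop(threshold=0.3)
trained_losses = []
for loss in [0.9, 0.5, 0.2, 0.1]:  # 模拟每个epoch结束时的loss
    trained_losses.append(loss)
    cb.epoch_end(loss)
    if cb.stop:
        break
```

此示意中训练在第3个epoch后终止,后续epoch不再执行。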
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/cell.md b/docs/programming_guide/source_zh_cn/cell.md
new file mode 100644
index 0000000000000000000000000000000000000000..c823f17216e84abca138db19a8824fe206ee6b0a
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/cell.md
@@ -0,0 +1,373 @@
+# Cell构建及其子类
+
+
+
+- [Cell构建及其子类](#cell构建及其子类)
+ - [概述](#概述)
+ - [关键成员函数](#关键成员函数)
+ - [construct方法](#construct方法)
+ - [parameters_dict](#parameters_dict)
+ - [cells_and_names](#cells_and_names)
+ - [set_grad](#set_grad)
+ - [nn模块与ops模块的关系](#nn模块与ops模块的关系)
+ - [模型层](#模型层)
+ - [内置模型层](#内置模型层)
+ - [应用实例](#应用实例)
+ - [损失函数](#损失函数)
+ - [内置损失函数](#内置损失函数)
+ - [应用实例](#应用实例-1)
+ - [优化算法](#优化算法)
+ - [构建自定义网络](#构建自定义网络)
+
+
+
+
+
+## 概述
+
+MindSpore的`Cell`类是构建所有网络的基类,也是网络的基本单元。当用户需要自定义网络时,需要继承`Cell`类,并重写`__init__`方法和`construct`方法。
+
+损失函数、优化器和模型层等本质上也属于网络结构,也需要继承`Cell`类才能实现功能,同样用户也可以根据业务需求自定义这部分内容。
+
+本节内容首先将会介绍`Cell`类的关键成员函数,然后介绍基于`Cell`实现的MindSpore内置损失函数、优化器和模型层及使用方法,最后通过实例介绍如何利用`Cell`类构建自定义网络。
+
+## 关键成员函数
+
+### construct方法
+
+`Cell`类重写了`__call__`方法,在`Cell`类的实例被调用时,会执行`construct`方法。网络结构在`construct`方法里面定义。
+
+下面的样例中,我们构建了一个简单的网络实现卷积计算功能。构成网络的算子在`__init__`中定义,在`construct`方法里面使用,用例的网络结构为`Conv2d`->`BiasAdd`。
+
+在`construct`方法中,`x`为输入数据,`output`是经过网络结构计算后得到的计算结果。
+
+```
+import mindspore.nn as nn
+from mindspore.ops import operations as P
+from mindspore.common.parameter import Parameter
+from mindspore.common.initializer import initializer
+
+class Net(nn.Cell):
+ def __init__(self, in_channels=10, out_channels=20, kernel_size=3):
+ super(Net, self).__init__()
+ self.conv2d = P.Conv2D(out_channels, kernel_size)
+ self.bias_add = P.BiasAdd()
+ self.weight = Parameter(
+ initializer('normal', [out_channels, in_channels, kernel_size, kernel_size]),
+ name='conv.weight')
+ self.bias = Parameter(initializer('zeros', [out_channels]), name='conv.bias')
+
+ def construct(self, x):
+ output = self.conv2d(x, self.weight)
+ output = self.bias_add(output, self.bias)
+ return output
+```
+
+### parameters_dict
+
+`parameters_dict`方法识别出网络结构中所有的参数,返回一个以key为参数名,value为参数值的`OrderedDict`。
+
+`Cell`类中返回参数的方法还有许多,例如`get_parameters`、`trainable_params`等,具体使用方法可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Cell)。
+
+代码样例如下:
+
+```
+net = Net()
+result = net.parameters_dict()
+print(result.keys())
+print(result['conv.weight'])
+```
+
+样例中的`Net`采用上文构造网络的用例,打印了网络中所有参数的名字和`conv.weight`参数的结果。
+
+输出如下:
+```
+odict_keys(['conv.weight', 'conv.bias'])
+Parameter (name=conv.weight, value=[[[[-3.95042636e-03 1.08830128e-02 -6.51786150e-03]
+ [ 8.66129529e-03 7.36288540e-03 -4.32638079e-03]
+ [-1.47628486e-02 8.24100431e-03 -2.71035335e-03]]
+ ......
+ [ 1.58852488e-02 -1.03505487e-02 1.72988791e-02]]]])
+```
+
+### cells_and_names
+
+`cells_and_names`方法是一个迭代器,返回网络中每个`Cell`的名字和它的内容本身。
+
+用例简单实现了获取与打印每个`Cell`名字的功能,其中根据网络结构可知,存在1个`Cell`为`nn.Conv2d`。
+
+其中`nn.Conv2d`是MindSpore以`Cell`为基类封装好的一个卷积层,其具体内容将在“模型层”中进行介绍。
+
+代码样例如下:
+```
+import mindspore.nn as nn
+
+class Net1(nn.Cell):
+ def __init__(self):
+ super(Net1, self).__init__()
+ self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
+
+ def construct(self, x):
+ out = self.conv(x)
+ return out
+
+net = Net1()
+names = []
+for m in net.cells_and_names():
+ print(m)
+ if m[0]:
+ names.append(m[0])
+print('-------names-------')
+print(names)
+```
+
+输出如下:
+```
+('', Net1<
+ (conv): Conv2d
+ >)
+('conv', Conv2d)
+-------names-------
+['conv']
+```
+
+### set_grad
+
+`set_grad`接口功能是使用户构建反向网络,在不传入参数调用时,默认设置`requires_grad`为True,需要在计算网络反向的场景中使用。
+
+以`TrainOneStepCell`为例,其接口功能是使网络进行单步训练,需要计算网络反向,因此初始化方法里需要使用`set_grad`。
+
+`TrainOneStepCell`部分代码如下:
+```
+class TrainOneStepCell(Cell):
+ def __init__(self, network, optimizer, sens=1.0):
+ super(TrainOneStepCell, self).__init__(auto_prefix=False)
+ self.network = network
+ self.network.set_grad()
+ ......
+```
+
+如果用户使用`TrainOneStepCell`等类似接口,无需再调用`set_grad`,其内部已封装实现。
+
+若用户需要自定义此类训练功能的接口,需要在其内部调用,或者在外部设置`network.set_grad`。
+
+## nn模块与ops模块的关系
+
+MindSpore的nn模块是Python实现的模型组件,是对低阶API的封装,主要包括各种模型层、损失函数、优化器等。
+
+同时nn也提供了部分与`Primitive`算子同名的接口,主要作用是对`Primitive`算子进行进一步封装,为用户提供更友好的API。
+
+重新分析上文介绍`construct`方法的用例,此用例是MindSpore的`nn.Conv2d`源码简化内容,内部会调用`P.Conv2D`。`nn.Conv2d`卷积API增加了输入参数校验功能,并根据是否配置`bias`决定是否调用`BiasAdd`,是一个高级封装的模型层。
+```
+import mindspore.nn as nn
+from mindspore.ops import operations as P
+from mindspore.common.parameter import Parameter
+from mindspore.common.initializer import initializer
+
+class Net(nn.Cell):
+ def __init__(self, in_channels=10, out_channels=20, kernel_size=3):
+ super(Net, self).__init__()
+ self.conv2d = P.Conv2D(out_channels, kernel_size)
+ self.bias_add = P.BiasAdd()
+ self.weight = Parameter(
+ initializer('normal', [out_channels, in_channels, kernel_size, kernel_size]),
+ name='conv.weight')
+ self.bias = Parameter(initializer('zeros', [out_channels]), name='conv.bias')
+
+ def construct(self, x):
+ output = self.conv2d(x, self.weight)
+ output = self.bias_add(output, self.bias)
+ return output
+```
+
+## 模型层
+
+在讲述了`Cell`的使用方法后可知,MindSpore能够以`Cell`为基类构造网络结构。
+
+为了方便用户的使用,MindSpore框架内置了大量的模型层,用户可以通过接口直接调用。
+
+同样,用户也可以自定义模型,此内容在“构建自定义网络”中介绍。
+
+### 内置模型层
+
+MindSpore框架在`mindspore.nn`的layer层内置了丰富的接口,主要内容如下:
+
+- 激活层
+
+ 激活层内置了大量的激活函数,在定义网络结构中经常使用。激活函数为网络加入了非线性运算,使得网络能够拟合效果更好。
+
+ 主要接口有`Softmax`、`Relu`、`Elu`、`Tanh`、`Sigmoid`等。
+
+- 基础层
+
+ 基础层实现了网络中一些常用的基础结构,例如全连接层、Onehot编码、Dropout、平铺层等都在此部分实现。
+
+ 主要接口有`Dense`、`Flatten`、`Dropout`、`Norm`、`OneHot`等。
+
+- 容器层
+
+ 容器层主要功能是实现一些存储多个Cell的数据结构。
+
+ 主要接口有`SequentialCell`、`CellList`等。
+
+- 卷积层
+
+ 卷积层提供了一些卷积计算的功能,如普通卷积、深度卷积和卷积转置等。
+
+ 主要接口有`Conv2d`、`Conv1d`、`Conv2dTranspose`、`Conv1dTranspose`等。
+
+- 池化层
+
+ 池化层提供了平均池化和最大池化等计算的功能。
+
+ 主要接口有`AvgPool2d`、`MaxPool2d`和`AvgPool1d`。
+
+- 嵌入层
+
+ 嵌入层提供word embedding的计算功能,将输入的单词映射为稠密向量。
+
+ 主要接口有`Embedding`、`EmbeddingLookup`、`EmbeddingLookUpSplitMode`等。
+
+- 长短记忆循环层
+
+ 长短记忆循环层提供LSTM计算功能。其中`LSTM`内部会调用`LSTMCell`接口,`LSTMCell`是一个LSTM单元,对一个LSTM层做运算,当涉及多LSTM网络层运算时,使用`LSTM`接口。
+
+ 主要接口有`LSTM`和`LSTMCell`。
+
+- 标准化层
+
+ 标准化层提供了一些标准化的方法,即通过线性变换等方式将数据转换成均值和标准差。
+
+ 主要接口有`BatchNorm1d`、`BatchNorm2d`、`LayerNorm`、`GroupNorm`、`GlobalBatchNorm`等。
+
+- 数学计算层
+
+ 数学计算层提供一些由算子拼接而成的计算功能,例如数据生成和一些数学计算等。
+
+ 主要接口有`ReduceLogSumExp`、`Range`、`LinSpace`、`LGamma`等。
+
+- 图片层
+
+ 图片层提供了一些图像处理与计算相关的功能,对图片数据进行变换与计算。
+
+ 主要接口有`ImageGradients`、`SSIM`、`MSSSIM`、`PSNR`、`CentralCrop`等。
+
+- 量化层
+
+ 量化是指将数据从float的形式转换成一段数据范围的int类型,所以量化层提供了一些数据量化的方法和模型层结构封装。
+
+ 主要接口有`Conv2dBnAct`、`DenseBnAct`、`Conv2dBnFoldQuant`、`LeakyReLUQuant`等。
+
+### 应用实例
+
+MindSpore的模型层在`mindspore.nn`下,使用方法如下所示:
+
+```
+import mindspore.nn as nn
+
+class Net(nn.Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
+ self.bn = nn.BatchNorm2d(64)
+ self.relu = nn.ReLU()
+ self.flatten = nn.Flatten()
+ self.fc = nn.Dense(64 * 222 * 222, 3)
+
+ def construct(self, x):
+ x = self.conv(x)
+ x = self.bn(x)
+ x = self.relu(x)
+ x = self.flatten(x)
+ out = self.fc(x)
+ return out
+```
+
+依然是上述网络构造的用例,从这个用例中可以看出,程序调用了`Conv2d`、`BatchNorm2d`、`ReLU`、`Flatten`和`Dense`模型层的接口。
+
+这些模型层在`Net`的初始化方法里被定义,在`construct`方法里真正运行,并有序地连接,形成一个可执行的网络。
+
+## 损失函数
+
+目前MindSpore主要支持的损失函数有`L1Loss`、`MSELoss`、`SmoothL1Loss`、`SoftmaxCrossEntropyWithLogits`和`CosineEmbeddingLoss`。
+
+MindSpore的损失函数全部是`Cell`的子类实现,所以也支持用户自定义损失函数,其构造方法在“构建自定义网络”中进行介绍。
+
+### 内置损失函数
+
+- L1Loss
+
+ 计算两个输入数据的绝对值误差,用于回归模型。`reduction`参数默认值为mean,返回loss平均值结果,若`reduction`值为sum,返回loss累加结果,若`reduction`值为none,返回每个loss的结果。
+
+- MSELoss
+
+ 计算两个输入数据的平方误差,用于回归模型。`reduction`参数同`L1Loss`。
+
+- SmoothL1Loss
+
+ `SmoothL1Loss`为平滑L1损失函数,用于回归模型,阈值`sigma`默认参数为1。
+
+- SoftmaxCrossEntropyWithLogits
+
+ 交叉熵损失函数,用于分类模型。当标签数据不是one-hot编码形式时,需要输入参数`sparse`为True。`reduction`参数默认值为none,其参数含义同`L1Loss`。
+
+- CosineEmbeddingLoss
+
+ `CosineEmbeddingLoss`用于衡量两个输入相似程度,用于分类模型。`margin`默认为0.0,`reduction`参数同`L1Loss`。
+
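其中Smooth L1损失的常见定义可以用NumPy示意如下(阈值记为`sigma`,MindSpore的具体实现请以API文档为准):

```python
import numpy as np

def smooth_l1(diff, sigma=1.0):
    # 误差绝对值小于sigma时取二次项,否则取线性项
    abs_diff = np.abs(diff)
    return np.where(abs_diff < sigma, 0.5 * abs_diff ** 2 / sigma, abs_diff - 0.5 * sigma)

result = smooth_l1(np.array([-2.0, 0.5, 1.0]))  # [1.5, 0.125, 0.5]
```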
+### 应用实例
+
+MindSpore的损失函数全部在mindspore.nn下,使用方法如下所示:
+
+```
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor
+
+loss = nn.L1Loss()
+input_data = Tensor(np.array([[1, 2, 3], [2, 3, 4]]).astype(np.float32))
+target_data = Tensor(np.array([[0, 2, 5], [3, 1, 1]]).astype(np.float32))
+print(loss(input_data, target_data))
+```
+
+输出结果:
+```
+1.5
+```
+
+此用例构造了两个Tensor数据,利用`nn.L1Loss`接口定义了loss,将`input_data`和`target_data`传入loss,执行L1Loss的计算,结果为1.5。若`loss = nn.L1Loss(reduction='sum')`,则结果为9.0;若`loss = nn.L1Loss(reduction='none')`,结果为`[[1. 0. 2.] [1. 2. 3.]]`。
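上述三种`reduction`的计算逻辑可以用NumPy快速验证(纯NumPy示意,不依赖MindSpore):

```python
import numpy as np

input_data = np.array([[1, 2, 3], [2, 3, 4]], dtype=np.float32)
target_data = np.array([[0, 2, 5], [3, 1, 1]], dtype=np.float32)

abs_err = np.abs(input_data - target_data)  # 对应reduction='none'
mean_loss = float(abs_err.mean())           # 对应reduction='mean',结果为1.5
sum_loss = float(abs_err.sum())             # 对应reduction='sum',结果为9.0
```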
+
+## 优化算法
+
+`mindspore.nn.optim`是MindSpore框架中实现各种优化算法的模块,详细说明参见[优化算法](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.0/optim.html)。
+
+## 构建自定义网络
+
+无论是网络结构,还是前文提到的模型层、损失函数和优化器等,本质上都是一个`Cell`,因此都可以自定义实现。
+
+首先构造一个继承`Cell`的子类,然后在`__init__`方法里面定义算子和模型层等,在`construct`方法里面构造网络结构。
+
+以LeNet网络为例,在`__init__`方法中定义了卷积层,池化层和全连接层等结构单元,然后在`construct`方法将定义的内容连接在一起,形成一个完整LeNet的网络结构。
+
+LeNet网络实现方式如下所示:
+```
+import mindspore.nn as nn
+
+class LeNet5(nn.Cell):
+ def __init__(self):
+ super(LeNet5, self).__init__()
+ self.conv1 = nn.Conv2d(1, 6, 5, pad_mode="valid")
+ self.conv2 = nn.Conv2d(6, 16, 5, pad_mode="valid")
+ self.fc1 = nn.Dense(16 * 5 * 5, 120)
+ self.fc2 = nn.Dense(120, 84)
+ self.fc3 = nn.Dense(84, 3)
+ self.relu = nn.ReLU()
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2)
+ self.flatten = nn.Flatten()
+
+ def construct(self, x):
+ x = self.max_pool2d(self.relu(self.conv1(x)))
+ x = self.max_pool2d(self.relu(self.conv2(x)))
+ x = self.flatten(x)
+ x = self.relu(self.fc1(x))
+ x = self.relu(self.fc2(x))
+ x = self.fc3(x)
+ return x
+```
diff --git a/docs/programming_guide/source_zh_cn/compute_component.rst b/docs/programming_guide/source_zh_cn/compute_component.rst
new file mode 100644
index 0000000000000000000000000000000000000000..58d676eacf136ff3e6af3cdd0e85053ee616db62
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/compute_component.rst
@@ -0,0 +1,10 @@
+计算组件
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ operator
+ parameter
+ cell
+ network_component
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/conf.py b/docs/programming_guide/source_zh_cn/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..95d7701759707ab95a3c199cd8a22e2e2cc1194d
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/conf.py
@@ -0,0 +1,62 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_search_language = 'zh'
+
+html_search_options = {'dict': '../../resource/jieba.txt'}
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/context.md b/docs/programming_guide/source_zh_cn/context.md
new file mode 100644
index 0000000000000000000000000000000000000000..eb9e45b3d1e58ee3a8733d65e7f8022e8be3e829
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/context.md
@@ -0,0 +1,135 @@
+# Running Management
+
+
+
+- [Running Management](#running-management)
+  - [Overview](#overview)
+  - [Execution Mode Management](#execution-mode-management)
+    - [Mode Selection](#mode-selection)
+    - [Mode Switching](#mode-switching)
+  - [Hardware Management](#hardware-management)
+  - [Distributed Management](#distributed-management)
+  - [Maintenance and Debugging Management](#maintenance-and-debugging-management)
+    - [Collecting Profiling Data](#collecting-profiling-data)
+    - [Asynchronous Data Dump](#asynchronous-data-dump)
+    - [Saving print Operator Output to a File](#saving-print-operator-output-to-a-file)
+
+
+
+
+
+## Overview
+Before initializing the network, the context parameters need to be configured to control the policy of program execution, for example selecting the execution mode, selecting the execution backend, and configuring distributed parameters. According to the functions the context parameters implement, they can be divided into execution mode management, hardware management, distributed management, and maintenance and debugging management.
+
+## Execution Mode Management
+MindSpore supports two running modes, PyNative and Graph:
+
+- `PYNATIVE_MODE`: dynamic graph mode, in which the operators of the neural network are dispatched and executed one by one, making it convenient to write and debug neural network models.
+
+- `GRAPH_MODE`: static graph mode (also called graph mode), in which the neural network model is compiled into a single whole graph and then dispatched for execution. This mode uses techniques such as graph optimization to improve running performance, and also facilitates large-scale deployment and cross-platform running.
+
+### Mode Selection
+The running mode of a program can be controlled through this setting. By default, MindSpore is in PyNative mode.
+
+A code example is as follows:
+```python
+from mindspore import context
+context.set_context(mode=context.GRAPH_MODE)
+```
+
+### Mode Switching
+To switch between the two modes, call `context.set_context(mode=context.GRAPH_MODE)` to switch to Graph mode; similarly, when MindSpore is in Graph mode, call `context.set_context(mode=context.PYNATIVE_MODE)` to switch to PyNative mode.
+
+A code example is as follows:
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import context, Tensor
+
+context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
+
+conv = nn.Conv2d(3, 4, 3, bias_init='zeros')
+input_data = Tensor(np.ones([1, 3, 5, 5]).astype(np.float32))
+conv(input_data)
+context.set_context(mode=context.PYNATIVE_MODE)
+conv(input_data)
+```
+
+The example above first sets the running mode to `GRAPH_MODE` and then switches it to `PYNATIVE_MODE`, implementing a mode switch.
+
+## Hardware Management
+Hardware management mainly involves the `device_target` and `device_id` parameters.
+
+- `device_target`: sets the target device. Ascend, GPU, and CPU are supported; the default is Ascend.
+
+- `device_id`: the physical number of the device, that is, its actual index on the machine where it resides. If the target device is Ascend and the specification is N*Ascend (where N > 1, e.g. 8*Ascend), `device_id` can be set in non-distributed execution to choose which device the program runs on and avoid usage conflicts. The value ranges from 0 to the total number of devices on the server minus 1, where the total number of devices cannot exceed 4096; the default is device 0.
+
+> On GPU and CPU, setting the `device_id` parameter has no effect.
+
+A code example is as follows:
+```python
+from mindspore import context
+context.set_context(device_target="Ascend", device_id=6)
+```
+
+## Distributed Management
+The context provides a dedicated interface for configuring parallel training parameters, `context.set_auto_parallel_context`, which must be called before the network is initialized.
+
+- `parallel_mode`: the distributed parallel mode. The default is standalone mode `ParallelMode.STAND_ALONE`; data parallel mode `ParallelMode.DATA_PARALLEL` and automatic parallel mode `ParallelMode.AUTO_PARALLEL` are also available.
+
+- `gradients_mean`: during backward computation, the framework collects the gradients of the data-parallel parameters scattered across multiple machines, obtains the global gradient value, and then passes it to the optimizer for the update. The default is `False`; setting it to True corresponds to an `allreduce_mean` operation, and False to an `allreduce_sum` operation.
+
+- `enable_parallel_optimizer`: a feature under development. It turns on optimizer model parallelism, which improves performance by splitting the weights across devices so that each device updates its own shard and the results are then synchronized. This parameter currently takes effect only in data parallel mode when the number of parameters is greater than the number of devices, and it supports the `Lamb` and `Adam` optimizers.
+
+> It is recommended to leave `device_num` and `global_rank` at their default values; the framework obtains them internally through HCCL interfaces.
+
+A code example is as follows:
+```python
+from mindspore import context
+from mindspore.context import ParallelMode
+context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, gradients_mean=True)
+```
+
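For intuition, the difference between the two reduction behaviors that `gradients_mean` toggles can be sketched in NumPy (the four per-device gradients below are made-up values):

```python
import numpy as np

# Made-up gradients of one parameter computed on 4 data-parallel devices
grads = [np.array([0.2, 0.4]), np.array([0.1, 0.3]),
         np.array([0.3, 0.5]), np.array([0.2, 0.4])]

allreduce_sum = np.sum(grads, axis=0)        # gradients_mean=False
allreduce_mean = allreduce_sum / len(grads)  # gradients_mean=True

print(allreduce_sum)   # ~[0.8 1.6]
print(allreduce_mean)  # ~[0.2 0.4]
```
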
+> For a detailed introduction to distributed parallel training, see [Distributed Parallel Training](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/distributed_training_tutorials.html).
+
+## Maintenance and Debugging Management
+To facilitate maintenance and fault locating, the context provides a large number of maintenance- and debugging-related parameter settings, such as collecting profiling data, the asynchronous data dump function, and saving print operator output to a file.
+
+### Collecting Profiling Data
+The system supports collecting profiling data during training, which can then be analyzed with profiling tools. The profiling settings currently supported are:
+
+- `enable_profiling`: whether to enable the profiling function. When set to True, profiling is enabled and the collection options are read from enable_options; when set to False, profiling is disabled and only training_trace is collected.
+
+- `enable_options`: the profiling collection options; multiple kinds of data can be collected. training_trace: collects iteration trace data, i.e. software information about the training task and the AI software stack, to analyze the performance of the training task, focusing on data augmentation, forward and backward computation, gradient aggregation and update, and related data; task_trace: collects task trace data, i.e. hardware information of the Ascend 910 processor HWTS/AICore, to analyze task start and end information; op_trace: collects single-operator performance data. Format: ['op_trace','task_trace','training_trace']
+
+A code example is as follows:
+```python
+from mindspore import context
+context.set_context(enable_profiling=True, profiling_options="training_trace")
+```
+
+### Asynchronous Data Dump
+When a training run on the Ascend environment produces results that deviate from expectations, the asynchronous data dump function can be used to save the inputs and outputs of operators for debugging.
+
+A code example is as follows:
+```python
+from mindspore import context
+context.set_context(save_graphs=True)
+```
+
+> For the detailed debugging method, see [Asynchronous Data Dump](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/custom_debugging_info.html#dump).
+
+### Saving print Operator Output to a File
+By default, MindSpore's self-developed print operator prints the Tensor or string information entered by the user. It supports multiple string inputs, multiple Tensor inputs, and mixed string and Tensor inputs, with the input arguments separated by commas.
+
+> For the print function, see the [Print Operator](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/custom_debugging_info.html#print) introduction.
+
+- `print_file_path`: saves the print operator output to a file and disables on-screen printing at the same time. If the target file already exists, a timestamp suffix is appended to it. Saving the data to a file solves the problem of on-screen output being lost when the amount of data is large.
+
+A code example is as follows:
+```python
+from mindspore import context
+context.set_context(print_file_path="print.pb")
+```
+
+> For a detailed introduction to the context interface, see [mindspore.context](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.context.html).
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/data_pipeline.rst b/docs/programming_guide/source_zh_cn/data_pipeline.rst
new file mode 100644
index 0000000000000000000000000000000000000000..62f55a9e10f09862341f28c075c606735ad82081
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/data_pipeline.rst
@@ -0,0 +1,13 @@
+Data Pipeline
+=============
+
+.. toctree::
+ :maxdepth: 1
+
+ dataset_loading
+ sampler
+ pipeline
+ augmentation
+ tokenizer
+ dataset_conversion
+ auto_augmentation
diff --git a/docs/programming_guide/source_zh_cn/data_type.rst b/docs/programming_guide/source_zh_cn/data_type.rst
new file mode 100644
index 0000000000000000000000000000000000000000..ee35ad4275c43a2c79162e5661fcfb6934c6edf4
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/data_type.rst
@@ -0,0 +1,8 @@
+Data Types
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ dtype
+ tensor
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/dataset_conversion.md b/docs/programming_guide/source_zh_cn/dataset_conversion.md
new file mode 100644
index 0000000000000000000000000000000000000000..9bf0387310d5f43ce1dcbb21932557eab21a5ffb
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/dataset_conversion.md
@@ -0,0 +1,416 @@
+# MindSpore Data Format Conversion
+
+
+
+- [MindSpore Data Format Conversion](#mindspore-data-format-conversion)
+  - [Overview](#overview)
+  - [Converting Non-Standard Datasets to MindRecord](#converting-non-standard-datasets-to-mindrecord)
+    - [Converting CV Datasets](#converting-cv-datasets)
+    - [Converting NLP Datasets](#converting-nlp-datasets)
+  - [Converting Common Datasets to MindRecord](#converting-common-datasets-to-mindrecord)
+    - [Converting the CIFAR-10 Dataset](#converting-the-cifar-10-dataset)
+    - [Converting the ImageNet Dataset](#converting-the-imagenet-dataset)
+    - [Converting CSV Datasets](#converting-csv-datasets)
+    - [Converting TFRecord Datasets](#converting-tfrecord-datasets)
+
+
+
+
+
+## Overview
+
+Users can convert non-standard and common datasets into the MindSpore data format, MindRecord, so that they can be conveniently loaded into MindSpore for training. In addition, MindSpore has optimized performance in some scenarios, and using MindRecord can deliver better performance.
+
+## Converting Non-Standard Datasets to MindRecord
+
+The following describes how to convert CV data and NLP data into MindRecord, and how to read MindRecord files through `MindDataset`.
+
+### Converting CV Datasets
+
+This example shows how to convert a user-defined CV dataset into MindRecord and read it with `MindDataset`.
+
+The example first creates a MindRecord file containing 100 records, each sample of which has three fields, `file_name` (string), `label` (integer), and `data` (binary), and then reads the file with `MindDataset`.
+
+```python
+from io import BytesIO
+import os
+import mindspore.dataset as ds
+from mindspore.mindrecord import FileWriter
+import mindspore.dataset.vision.c_transforms as vision
+from PIL import Image
+
+mindrecord_filename = "test.mindrecord"
+
+if os.path.exists(mindrecord_filename):
+ os.remove(mindrecord_filename)
+ os.remove(mindrecord_filename + ".db")
+
+writer = FileWriter(file_name=mindrecord_filename, shard_num=1)
+
+cv_schema = {"file_name": {"type": "string"}, "label": {"type": "int32"}, "data": {"type": "bytes"}}
+writer.add_schema(cv_schema, "it is a cv dataset")
+
+writer.add_index(["file_name", "label"])
+
+data = []
+for i in range(100):
+ i += 1
+
+ sample = {}
+ white_io = BytesIO()
+ Image.new('RGB', (i*10, i*10), (255, 255, 255)).save(white_io, 'JPEG')
+ image_bytes = white_io.getvalue()
+ sample['file_name'] = str(i) + ".jpg"
+ sample['label'] = i
+    sample['data'] = image_bytes
+
+ data.append(sample)
+ if i % 10 == 0:
+ writer.write_raw_data(data)
+ data = []
+
+if data:
+ writer.write_raw_data(data)
+
+writer.commit()
+
+data_set = ds.MindDataset(dataset_file=mindrecord_filename)
+decode_op = vision.Decode()
+data_set = data_set.map(operations=decode_op, input_columns=["data"], num_parallel_workers=2)
+count = 0
+for item in data_set.create_dict_iterator(output_numpy=True):
+ print("sample: {}".format(item))
+ count += 1
+print("Got {} samples".format(count))
+```
+
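The write pattern used in the loop above, accumulate samples in a list, flush every 10 with `write_raw_data`, then flush the remainder, can be isolated with a stand-in writer (`FakeWriter` below is purely illustrative):

```python
class FakeWriter:
    """Stand-in for FileWriter that only records flush sizes."""
    def __init__(self):
        self.flushes = []

    def write_raw_data(self, batch):
        self.flushes.append(len(batch))

writer = FakeWriter()
data = []
for i in range(1, 26):        # 25 samples with a buffer size of 10
    data.append({"label": i})
    if i % 10 == 0:
        writer.write_raw_data(data)
        data = []
if data:                      # flush the last partial batch
    writer.write_raw_data(data)

print(writer.flushes)         # [10, 10, 5]
```
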
+### Converting NLP Datasets
+
+This example shows how to convert a user-defined NLP dataset into MindRecord and read it with `MindDataset`. For brevity, the preprocessing step that converts the text into token indices is omitted.
+
+The example first creates a MindRecord file containing 100 records, each sample of which has eight fields, all integer arrays, and then reads the file with `MindDataset`.
+
+```python
+import os
+import numpy as np
+import mindspore.dataset as ds
+from mindspore.mindrecord import FileWriter
+
+mindrecord_filename = "test.mindrecord"
+
+if os.path.exists(mindrecord_filename):
+ os.remove(mindrecord_filename)
+ os.remove(mindrecord_filename + ".db")
+
+writer = FileWriter(file_name=mindrecord_filename, shard_num=1)
+
+nlp_schema = {"source_sos_ids": {"type": "int64", "shape": [-1]},
+ "source_sos_mask": {"type": "int64", "shape": [-1]},
+ "source_eos_ids": {"type": "int64", "shape": [-1]},
+ "source_eos_mask": {"type": "int64", "shape": [-1]},
+ "target_sos_ids": {"type": "int64", "shape": [-1]},
+ "target_sos_mask": {"type": "int64", "shape": [-1]},
+ "target_eos_ids": {"type": "int64", "shape": [-1]},
+ "target_eos_mask": {"type": "int64", "shape": [-1]}}
+writer.add_schema(nlp_schema, "it is a preprocessed nlp dataset")
+
+data = []
+for i in range(100):
+ i += 1
+
+ sample = {"source_sos_ids": np.array([i, i+1, i+2, i+3, i+4], dtype=np.int64),
+ "source_sos_mask": np.array([i*1, i*2, i*3, i*4, i*5, i*6, i*7], dtype=np.int64),
+ "source_eos_ids": np.array([i+5, i+6, i+7, i+8, i+9, i+10], dtype=np.int64),
+ "source_eos_mask": np.array([19, 20, 21, 22, 23, 24, 25, 26, 27], dtype=np.int64),
+ "target_sos_ids": np.array([28, 29, 30, 31, 32], dtype=np.int64),
+ "target_sos_mask": np.array([33, 34, 35, 36, 37, 38], dtype=np.int64),
+ "target_eos_ids": np.array([39, 40, 41, 42, 43, 44, 45, 46, 47], dtype=np.int64),
+ "target_eos_mask": np.array([48, 49, 50, 51], dtype=np.int64)}
+
+ data.append(sample)
+ if i % 10 == 0:
+ writer.write_raw_data(data)
+ data = []
+
+if data:
+ writer.write_raw_data(data)
+
+writer.commit()
+
+data_set = ds.MindDataset(dataset_file=mindrecord_filename)
+count = 0
+for item in data_set.create_dict_iterator():
+ print("sample: {}".format(item))
+ count += 1
+print("Got {} samples".format(count))
+```
+
+## Converting Common Datasets to MindRecord
+
+MindSpore provides tool classes for converting common datasets into MindRecord. The common datasets and their corresponding tool classes are listed below.
+
+| Dataset | Conversion Tool Class |
+| -------- | ------------ |
+| CIFAR-10 | Cifar10ToMR |
+| CIFAR-100 | Cifar100ToMR |
+| ImageNet | ImageNetToMR |
+| MNIST | MnistToMR |
+| TFRecord | TFRecordToMR |
+| CSV File | CsvToMR |
+
+For more details about dataset conversion, see the [API documentation](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.mindrecord.html).
+
+### Converting the CIFAR-10 Dataset
+
+Users can convert CIFAR-10 raw data into MindRecord through the `Cifar10ToMR` class and read it with `MindDataset`.
+
+1. Download the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz) and extract it; its directory structure is as follows.
+
+ ```
+ └─cifar-10-batches-py
+ ├─batches.meta
+ ├─data_batch_1
+ ├─data_batch_2
+ ├─data_batch_3
+ ├─data_batch_4
+ ├─data_batch_5
+ ├─readme.html
+ └─test_batch
+ ```
+
+2. Import the dataset conversion tool class `Cifar10ToMR`.
+
+ ```python
+ from mindspore.mindrecord import Cifar10ToMR
+ ```
+
+3. Create a `Cifar10ToMR` object and call the `transform` interface to convert the CIFAR-10 dataset into MindRecord.
+
+ ```python
+ CIFAR10_DIR = "./cifar-10-batches-py"
+ MINDRECORD_FILE = "./cifar10.mindrecord"
+ cifar10_transformer = Cifar10ToMR(CIFAR10_DIR, MINDRECORD_FILE)
+ cifar10_transformer.transform(['label'])
+ ```
+
+    **Parameter description:**
+    - `CIFAR10_DIR`: the folder path of the CIFAR-10 dataset.
+    - `MINDRECORD_FILE`: the output path of the MindRecord file.
+
+4. Read the MindRecord file through `MindDataset`.
+
+    ```python
+ import mindspore.dataset as ds
+ import mindspore.dataset.vision.c_transforms as vision
+
+ data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE)
+ decode_op = vision.Decode()
+ data_set = data_set.map(operations=decode_op, input_columns=["data"], num_parallel_workers=2)
+ count = 0
+ for item in data_set.create_dict_iterator(output_numpy=True):
+ print("sample: {}".format(item))
+ count += 1
+ print("Got {} samples".format(count))
+ ```
+
+### Converting the ImageNet Dataset
+
+Users can convert ImageNet raw data (images and annotations) into MindRecord through the `ImageNetToMR` class and read it with `MindDataset`.
+
+1. Download the [ImageNet dataset](http://image-net.org/download), store all images in a single folder, and describe the correspondence between images and labels in a mapping file. The mapping file contains two columns, the per-class image directory and the label ID, separated by a space. An example mapping file is as follows:
+
+ ```
+ n01440760 0
+ n01443537 1
+ n01484850 2
+ n01491361 3
+ n01494475 4
+ n01496331 5
+ ```
+
+2. Import the dataset conversion tool class `ImageNetToMR`.
+
+ ```python
+ from mindspore.mindrecord import ImageNetToMR
+ ```
+
+3. Create an `ImageNetToMR` object and call the `transform` interface to convert the dataset into MindRecord.
+
+ ```python
+ IMAGENET_MAP_FILE = "./testImageNetDataWhole/labels_map.txt"
+ IMAGENET_IMAGE_DIR = "./testImageNetDataWhole/images"
+ MINDRECORD_FILE = "./testImageNetDataWhole/imagenet.mindrecord"
+ PARTITION_NUMBER = 8
+ imagenet_transformer = ImageNetToMR(IMAGENET_MAP_FILE, IMAGENET_IMAGE_DIR, MINDRECORD_FILE, PARTITION_NUMBER)
+ imagenet_transformer.transform()
+ ```
+
+    **Parameter description:**
+    - `IMAGENET_MAP_FILE`: the path of the label mapping file of the ImageNet dataset.
+    - `IMAGENET_IMAGE_DIR`: the folder path containing all ImageNet images.
+    - `MINDRECORD_FILE`: the output path of the MindRecord file.
+    - `PARTITION_NUMBER`: the number of partition files the MindRecord output is split into.
+
+4. Read the MindRecord file through `MindDataset`.
+
+    ```python
+ import mindspore.dataset as ds
+ import mindspore.dataset.vision.c_transforms as vision
+
+ data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE + "0")
+ decode_op = vision.Decode()
+ data_set = data_set.map(operations=decode_op, input_columns=["data"], num_parallel_workers=2)
+ count = 0
+ for item in data_set.create_dict_iterator(output_numpy=True):
+ print("sample: {}".format(item))
+ count += 1
+ print("Got {} samples".format(count))
+ ```
+
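Since the label mapping file from step 1 is plain space-separated text, it can also be parsed into a Python dict in a couple of lines (the file contents are inlined here for illustration):

```python
mapping_text = """n01440760 0
n01443537 1
n01484850 2"""

# directory name -> integer label ID
label_map = {name: int(idx)
             for name, idx in (line.split() for line in mapping_text.splitlines())}
print(label_map["n01443537"])  # 1
```
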
+### Converting CSV Datasets
+
+This example first creates a CSV file containing 5 records, then converts the CSV file into MindRecord through the `CsvToMR` tool class, and finally reads it back through `MindDataset`.
+
+```python
+import csv
+import os
+import mindspore.dataset as ds
+from mindspore.mindrecord import CsvToMR
+
+CSV_FILE_NAME = "test.csv"
+MINDRECORD_FILE_NAME = "test.mindrecord"
+PARTITION_NUM = 1
+
+def generate_csv():
+ headers = ["id", "name", "math", "english"]
+ rows = [(1, "Lily", 78.5, 90),
+ (2, "Lucy", 99, 85.2),
+ (3, "Mike", 65, 71),
+ (4, "Tom", 95, 99),
+ (5, "Jeff", 85, 78.5)]
+ with open(CSV_FILE_NAME, 'w', encoding='utf-8') as f:
+ writer = csv.writer(f)
+ writer.writerow(headers)
+ writer.writerows(rows)
+
+generate_csv()
+
+if os.path.exists(MINDRECORD_FILE_NAME):
+ os.remove(MINDRECORD_FILE_NAME)
+ os.remove(MINDRECORD_FILE_NAME + ".db")
+
+csv_transformer = CsvToMR(CSV_FILE_NAME, MINDRECORD_FILE_NAME, partition_number=PARTITION_NUM)
+
+csv_transformer.transform()
+
+assert os.path.exists(MINDRECORD_FILE_NAME)
+assert os.path.exists(MINDRECORD_FILE_NAME + ".db")
+
+data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE_NAME)
+count = 0
+for item in data_set.create_dict_iterator(output_numpy=True):
+ print("sample: {}".format(item))
+ count += 1
+print("Got {} samples".format(count))
+```
+
+### Converting TFRecord Datasets
+
+> Currently only TensorFlow 2.1.0 and later versions are supported.
+
+This example first creates a TFRecord file with TensorFlow, then converts it into MindRecord through the `TFRecordToMR` tool class, finally reads it back through `MindDataset`, and decodes the `image_bytes` field with the `Decode` operator.
+
+```python
+import collections
+from io import BytesIO
+import os
+import mindspore.dataset as ds
+from mindspore.mindrecord import TFRecordToMR
+import mindspore.dataset.vision.c_transforms as vision
+from PIL import Image
+import tensorflow as tf
+
+TFRECORD_FILE_NAME = "test.tfrecord"
+MINDRECORD_FILE_NAME = "test.mindrecord"
+PARTITION_NUM = 1
+
+def generate_tfrecord():
+ def create_int_feature(values):
+ if isinstance(values, list):
+ feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
+ else:
+ feature = tf.train.Feature(int64_list=tf.train.Int64List(value=[values]))
+ return feature
+
+ def create_float_feature(values):
+ if isinstance(values, list):
+ feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values)))
+ else:
+ feature = tf.train.Feature(float_list=tf.train.FloatList(value=[values]))
+ return feature
+
+ def create_bytes_feature(values):
+ if isinstance(values, bytes):
+ white_io = BytesIO()
+ Image.new('RGB', (10, 10), (255, 255, 255)).save(white_io, 'JPEG')
+ image_bytes = white_io.getvalue()
+ feature = tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes]))
+ else:
+ feature = tf.train.Feature(bytes_list=tf.train.BytesList(value=[bytes(values, encoding='utf-8')]))
+ return feature
+
+ writer = tf.io.TFRecordWriter(TFRECORD_FILE_NAME)
+
+ example_count = 0
+ for i in range(10):
+ file_name = "000" + str(i) + ".jpg"
+ image_bytes = bytes(str("aaaabbbbcccc" + str(i)), encoding="utf-8")
+ int64_scalar = i
+ float_scalar = float(i)
+ int64_list = [i, i+1, i+2, i+3, i+4, i+1234567890]
+ float_list = [float(i), float(i+1), float(i+2.8), float(i+3.2),
+ float(i+4.4), float(i+123456.9), float(i+98765432.1)]
+
+ features = collections.OrderedDict()
+ features["file_name"] = create_bytes_feature(file_name)
+ features["image_bytes"] = create_bytes_feature(image_bytes)
+ features["int64_scalar"] = create_int_feature(int64_scalar)
+ features["float_scalar"] = create_float_feature(float_scalar)
+ features["int64_list"] = create_int_feature(int64_list)
+ features["float_list"] = create_float_feature(float_list)
+
+ tf_example = tf.train.Example(features=tf.train.Features(feature=features))
+ writer.write(tf_example.SerializeToString())
+ example_count += 1
+ writer.close()
+ print("Write {} rows in tfrecord.".format(example_count))
+
+generate_tfrecord()
+
+feature_dict = {"file_name": tf.io.FixedLenFeature([], tf.string),
+ "image_bytes": tf.io.FixedLenFeature([], tf.string),
+ "int64_scalar": tf.io.FixedLenFeature([], tf.int64),
+ "float_scalar": tf.io.FixedLenFeature([], tf.float32),
+ "int64_list": tf.io.FixedLenFeature([6], tf.int64),
+ "float_list": tf.io.FixedLenFeature([7], tf.float32),
+ }
+
+if os.path.exists(MINDRECORD_FILE_NAME):
+ os.remove(MINDRECORD_FILE_NAME)
+ os.remove(MINDRECORD_FILE_NAME + ".db")
+
+tfrecord_transformer = TFRecordToMR(TFRECORD_FILE_NAME, MINDRECORD_FILE_NAME, feature_dict, ["image_bytes"])
+tfrecord_transformer.transform()
+
+assert os.path.exists(MINDRECORD_FILE_NAME)
+assert os.path.exists(MINDRECORD_FILE_NAME + ".db")
+
+data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE_NAME)
+decode_op = vision.Decode()
+data_set = data_set.map(operations=decode_op, input_columns=["image_bytes"], num_parallel_workers=2)
+count = 0
+for item in data_set.create_dict_iterator(output_numpy=True):
+ print("sample: {}".format(item))
+ count += 1
+print("Got {} samples".format(count))
+```
diff --git a/api/source_zh_cn/programming_guide/dataset_loading.md b/docs/programming_guide/source_zh_cn/dataset_loading.md
similarity index 61%
rename from api/source_zh_cn/programming_guide/dataset_loading.md
rename to docs/programming_guide/source_zh_cn/dataset_loading.md
index e3695494d5c24c92daa030459d8fe16f72931d7c..df08efca625ce4b5b366c0836e3688ae0d3fb345 100644
--- a/api/source_zh_cn/programming_guide/dataset_loading.md
+++ b/docs/programming_guide/source_zh_cn/dataset_loading.md
@@ -4,8 +4,7 @@
- [数据集加载](#数据集加载)
- [概述](#概述)
- - [经典数据集加载](#经典数据集加载)
- - [MNIST数据集](#mnist数据集)
+  - [Loading Common Datasets](#loading-common-datasets)
- [CIFAR10/100数据集](#cifar10100数据集)
- [VOC数据集](#voc数据集)
- [COCO数据集](#coco数据集)
@@ -13,8 +12,7 @@
- [MindRecord数据格式](#mindrecord数据格式)
- [Manifest数据格式](#manifest数据格式)
- [TFRecord数据格式](#tfrecord数据格式)
- - [Numpy数据格式](#numpy数据格式)
- - [text数据格式](#text数据格式)
+  - [NumPy Data Format](#numpy-data-format)
- [CSV数据格式](#csv数据格式)
- [自定义数据集加载](#自定义数据集加载)
- [构造数据集生成函数](#构造数据集生成函数)
@@ -23,19 +21,19 @@
-
+
## 概述
-MindSpore支持加载图像领域常用的经典数据集,用户可以直接使用`mindspore.dataset`中对应的类实现数据集的加载。目前支持的经典数据集及对应的数据集类如下表所示。
+MindSpore supports loading datasets commonly used in the image field; users can load them directly through the corresponding classes in `mindspore.dataset`. The currently supported common datasets and their corresponding dataset classes are listed in the following table.
| 图像数据集 | 数据集类 | 数据集简介 |
| ---- | ---- | ---- |
| MNIST | MnistDataset | MNIST是一个大型手写数字图像数据集,拥有60,000张训练图像和10,000张测试图像,常用于训练各种图像处理系统。 |
| CIFAR-10 | Cifar10Dataset | CIFAR-10是一个微小图像数据集,包含10种类别下的60,000张32x32大小彩色图像,平均每种类别6,000张,其中5,000张为训练集,1,000张为测试集。 |
| CIFAR-100 | Cifar100Dataset | CIFAR-100与CIFAR-10类似,但拥有100种类别,平均每种类别600张,其中500张为训练集,100张为测试集。 |
-|CelebA | CelebADataset | CelebA是一个大型人脸图像数据集,包含超过200,000张名人人脸图像,每张图像拥有40个特征标记。 |
-| PASCAL-VOC | VOCDataset | PASCAL-VOC是一个经典图像数据集,被广泛用于目标检测、图像分割等计算机视觉领域。 |
+| CelebA | CelebADataset | CelebA is a large-scale face image dataset containing more than 200,000 celebrity face images, each annotated with 40 attribute labels. |
+| PASCAL-VOC | VOCDataset | PASCAL-VOC is a commonly used image dataset, widely applied in computer vision fields such as object detection and image segmentation. |
| COCO | CocoDataset | COCO是一个大型目标检测、图像分割、姿态估计数据集。 |
| CLUE | CLUEDataset | CLUE是一个大型中文语义理解数据集。 |
@@ -45,65 +43,39 @@ MindSpore还支持加载多种数据存储格式下的数据集,用户可以
| ---- | ---- | ---- |
| MindRecord | MindDataset | MindRecord是MindSpore的自研数据格式,具有读写高效、易于分布式处理等优势。 |
| Manifest | ManifestDataset | Manifest是华为ModelArts支持的一种数据格式,描述了原始文件和标注信息,可用于标注、训练、推理场景。 |
-| TFRecord | TFRecordDataset | TFRecord是Tensorflow定义的一种二进制数据文件格式。 |
-| Numpy | NumpySlicesDataset | Numpy数据源指的是已经读入内存中的Numpy arrays格式数据集。 |
+| TFRecord | TFRecordDataset | TFRecord is a binary data file format defined by TensorFlow. |
+| NumPy | NumpySlicesDataset | A NumPy data source refers to a dataset of NumPy arrays that has already been read into memory. |
| Text File | TextFileDataset | Text File指的是常见的文本格式数据。 |
| CSV File | CSVDataset | CSV指逗号分隔值,其文件以纯文本形式存储表格数据。 |
-MindSpore也同样支持使用GeneratorDataset自定义数据集的加载方式,用户可以根据需要实现自己的数据集类。
+MindSpore also supports loading datasets in a custom way with `GeneratorDataset`; users can implement their own dataset classes as needed.
-更多详细的数据集加载接口说明,参见[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)。
+> For more details about the dataset loading interfaces, see the [API documentation](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.dataset.html).
-## 经典数据集加载
+## Loading Common Datasets
-### MNIST数据集
+The following describes how to load several common datasets.
-```python
-# 通过MNIST API读取、解析MNIST数据集,并构建数据管道
-
-import mindspore.dataset as ds
-
-# 下载MNIST数据集,将其解压到MnistData目录
-DATA_DIR = "MnistData/"
-
-# 使用MnistDataset读取数据集,指定num_samples以获取5个样本数据
-# shuffle参数为True时,是随机获取5个样本,每次运行的label结果可能不一致
-dataset = ds.MnistDataset(DATA_DIR, num_samples=5, shuffle=True)
-
-# 启动数据管道,输出5个样本数据
-for data in dataset.create_dict_iterator():
- print("Image shape:", data['image'].shape, ", Label:", data['label'])
-```
+### CIFAR10/100 Dataset
-```
-Image shape: (28, 28, 1) , Label: 4
-Image shape: (28, 28, 1) , Label: 9
-Image shape: (28, 28, 1) , Label: 4
-Image shape: (28, 28, 1) , Label: 0
-Image shape: (28, 28, 1) , Label: 9
-```
+The following example loads the CIFAR-10 dataset through the `Cifar10Dataset` interface, uses a sequential sampler to fetch 5 samples from it, and then shows the shape and label of the corresponding images.
-### CIFAR10/100数据集
+The CIFAR-100 and MNIST datasets are loaded in a similar way.
```python
-# 通过Cifar API读取、解析CIFAR数据集,并构建数据管道(以CIFAR10数据集为例)
-
import mindspore.dataset as ds
-# 下载CIFAR10数据集,将其解压到CIFAR10Data目录
DATA_DIR = "Cifar10Data/"
-# 指定一个顺序采样器SequentialSampler,按照读取顺序获取5个样本数据
sampler = ds.SequentialSampler(num_samples=5)
-
-# 使用CIFAR10Dataset读取数据集,指定sampler为上述采样器
dataset = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-# 启动数据管道,输出5个样本数据
for data in dataset.create_dict_iterator():
print("Image shape:", data['image'].shape, ", Label:", data['label'])
```
+The output is as follows:
+
```
Image shape: (32, 32, 3) , Label: 0
Image shape: (32, 32, 3) , Label: 1
@@ -114,34 +86,30 @@ Image shape: (32, 32, 3) , Label: 4
### VOC数据集
-```python
-# 通过VOC API读取、解析VOC数据集,并构建数据管道
+The following example loads the VOC2012 dataset through the `VOCDataset` interface, showing the original image shape and the target shape when the task is set to segmentation (Segmentation) and to detection (Detection), respectively.
+```python
import mindspore.dataset as ds
-# 下载VOC数据集,将其解压到VOC2012目录
DATA_DIR = "VOC2012/"
-# 使用VOCDataset读取数据集,指定为Segmentation任务,同时指定num_samples以获取2个样本数据
-# decode参数会将读取的图像解码
-dataset = ds.VOCDataset(DATA_DIR, task="Segmentation", mode="train", num_samples=2, decode=True, shuffle=False)
+dataset = ds.VOCDataset(DATA_DIR, task="Segmentation", usage="train", num_samples=2, decode=True, shuffle=False)
+
print("[Segmentation]:")
for data in dataset.create_dict_iterator():
- # 原图像
print("image shape:", data["image"].shape)
- # 分割后图像
print("target shape:", data["target"].shape)
-# 接下来是Detection任务
-dataset = ds.VOCDataset(DATA_DIR, task="Detection", mode="train", num_samples=1, decode=True, shuffle=False)
+dataset = ds.VOCDataset(DATA_DIR, task="Detection", usage="train", num_samples=1, decode=True, shuffle=False)
+
print("[Detection]:")
for data in dataset.create_dict_iterator():
- # 原图像
print("image shape:", data["image"].shape)
- # 目标框
print("bbox shape:", data["bbox"].shape)
```
+The output is as follows:
+
```
[Segmentation]:
image shape: (281, 500, 3)
@@ -155,39 +123,35 @@ bbox shape: (2, 4)
### COCO数据集
-```python
-# 通过Coco API读取、解析Coco数据集,并构建数据管道
+The following example loads the COCO dataset through the `CocoDataset` interface, showing the different data obtained when the task is set to object detection (Detection), stuff segmentation (Stuff), keypoint detection (Keypoint), and panoptic segmentation (Panoptic).
+```python
import mindspore.dataset as ds
-# 下载Coco数据集,将其解压到CocoData目录
DATA_DIR = "COCO/train/"
ANNOTATION_FILE = "COCO/annotations/train.json"
KEYPOINT_FILE = "COCO/annotations/key_point.json"
PANOPTIC_FILE = "COCO/annotations/panoptic.json"
-# 使用CocoDataset读取数据集,指定为Detection任务,同时指定num_samples以获取1个样本数据
dataset = ds.CocoDataset(DATA_DIR, annotation_file=ANNOTATION_FILE, task="Detection", num_samples=1)
for data in dataset.create_dict_iterator():
print("Detection:", data.keys())
-# 让我们来观察一下,在指定Coco不同任务时,我们获取到的不同数据
-# Stuff 任务
dataset = ds.CocoDataset(DATA_DIR, annotation_file=ANNOTATION_FILE, task="Stuff", num_samples=1)
for data in dataset.create_dict_iterator():
print("Stuff:", data.keys())
-# Keypoint 任务
dataset = ds.CocoDataset(DATA_DIR, annotation_file=KEYPOINT_FILE, task="Keypoint", num_samples=1)
for data in dataset.create_dict_iterator():
print("Keypoint:", data.keys())
-# Panoptic 任务
dataset = ds.CocoDataset(DATA_DIR, annotation_file=PANOPTIC_FILE, task="Panoptic", num_samples=1)
for data in dataset.create_dict_iterator():
print("Panoptic:", data.keys())
```
+The output is as follows:
+
```
Detection: dict_keys(['bbox', 'image', 'iscrowd', 'category_id'])
Stuff: dict_keys(['segmentation', 'iscrowd', 'image'])
@@ -195,49 +159,51 @@ Keypoint: dict_keys(['keypoints', 'num_keypoints', 'image'])
Panoptic: dict_keys(['bbox', 'image', 'area', 'category_id', 'iscrowd'])
```
-> 更多经典数据集加载接口说明,参见对应[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)。
-
## 特定格式数据集加载
+The following describes how to load dataset files in several specific formats.
+
### MindRecord数据格式
-MindRecord是MindSpore的自研数据格式,具有更好的性能和特性。
+MindRecord is a data format defined by MindSpore; using MindRecord can deliver better performance.
+
+> Read the [Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.0/dataset_conversion.html) section to learn how to convert datasets into the MindSpore data format.
->阅读[数据格式转换](https://www.mindspore.cn/api/zh-CN/master/programming_guide/dataset_conversion.html)章节,了解如何将数据集转化为MindSpore数据格式。
+The following example loads a MindRecord file through the `MindDataset` interface and shows the labels of the loaded data.
```python
import mindspore.dataset as ds
-# 指定MindRecord数据格式地址
DATA_DIR = "mindrecord_dataset_path"
mindrecord_dataset = ds.MindDataset(DATA_DIR)
-# 启动数据管道读取
-for data in mindrecord_dataset.create_dict_iterator():
+for data in mindrecord_dataset.create_dict_iterator(output_numpy=True):
print(data["label"])
```
### Manifest数据格式
-Manifest是华为ModelArts支持的数据格式文件,详细说明请参见相关[文档](https://support.huaweicloud.com/engineers-modelarts/modelarts_23_0009.html)。
+Manifest is a data format supported by Huawei ModelArts; for details, see the [Manifest documentation](https://support.huaweicloud.com/engineers-modelarts/modelarts_23_0009.html).
+
+The following example loads a Manifest file through the `ManifestDataset` interface and shows the labels of the loaded data.
```python
import mindspore.dataset as ds
-# 指定Manifest数据集地址
DATA_DIR = "manifest_dataset_path"
manifest_dataset = ds.ManifestDataset(DATA_DIR)
-# 启动数据管道读取
for data in manifest_dataset.create_dict_iterator():
print(data["label"])
```
### TFRecord数据格式
-TFRecord是Tensorflow定义的一种二进制数据文件格式。
+TFRecord is a binary data file format defined by TensorFlow.
+
+The following example loads TFRecord files through the `TFRecordDataset` interface and introduces two ways of setting the dataset schema.
-1. 传入数据集路径或`.tfrecord`文件列表,创建TFRecordDataset对象。
+1. Pass in the dataset path or a list of TFRecord files to create a `TFRecordDataset` object.
```python
import mindspore.dataset as ds
@@ -246,77 +212,76 @@ TFRecord是Tensorflow定义的一种二进制数据文件格式。
dataset = ds.TFRecordDataset(DATA_DIR)
```
-2. 用户可以选择通过创建Schema文件或Schema类,设定数据集格式及特征。
-
- - 创建Schema文件
-
- Schema文件示例:
-
- ```
- {
- "datasetType": "TF",
- "numRows": 3,
- "columns": {
- "image": {
- "type": "uint8",
- "rank": 1
- },
- "label" : {
- "type": "int64",
- "rank": 1
+2. Users can set the dataset format and features by writing a Schema file or creating a Schema object.
+
+   - Writing a Schema file
+
+      Write the dataset format and features into a Schema file in JSON format; an example is as follows:
+
+ ```
+ {
+ "columns": {
+ "image": {
+ "type": "uint8",
+ "rank": 1
+ },
+ "label" : {
+ "type": "string",
+ "rank": 1
+            },
+ "id" : {
+ "type": "int64",
+ "rank": 0
+ }
}
}
- }
- ```
+ ```
- - `datasetType`: 数据格式的类型,这里`TF`是指TFrecord数据格式。
+   - `columns`: the column information field, which needs to be defined according to the actual column names of the dataset. In the example above, the dataset columns are `image`, `label`, and `id`.
- - `columns`:列信息字段,需要根据数据集的实际列名定义,上面Schema文件示例中,数据集列为`image`和`label`两列。
+      Then pass the Schema file path in when creating the `TFRecordDataset`.
- - `numRows`:行数信息字段,控制加载数据的最大行数。如果定义的行数大于实际行数,加载时则以实际行数为准。
+ ```python
+ DATA_DIR = "tfrecord_dataset_path"
+ SCHEMA_DIR = "dataset_schema_path/schema.json"
+ dataset = ds.TFRecordDataset(DATA_DIR, schema=SCHEMA_DIR)
+ ```
- 在创建TFRecordDataset时将Schema文件路径传入。
+   - Creating a Schema object
- ```python
- DATA_DIR = "tfrecord_dataset_path"
- SCHEMA_DIR = "dataset_schema_path/schema.json"
- dataset = ds.TFRecordDataset(DATA_DIR, schema=SCHEMA_DIR)
- ```
+      Create a Schema object, add custom fields to it, and then pass it in when creating the dataset object.
- - 创建Schema类
+ ```python
+ import mindspore.common.dtype as mstype
+ schema = ds.Schema()
+ schema.add_column('image', de_type=mstype.uint8)
+ schema.add_column('label', de_type=mstype.int32)
+ dataset = ds.TFRecordDataset(DATA_DIR, schema=schema)
+ ```
- ```python
- import mindspore.common.dtype as mstype
- schema = ds.Schema()
- schema.add_column('image', de_type=mstype.uint8)
- schema.add_column('label', de_type=mstype.int32)
- dataset = ds.TFRecordDataset(DATA_DIR, schema=schema)
- ```
+### NumPy Data Format
-### Numpy数据格式
+If all the data has already been read into memory, it can be loaded directly with the `NumpySlicesDataset` class.
-如果所有数据已经读入内存,可以直接使用NumpySlicesDataset类将其加载。
+The following examples show how to load array data, list data, and dict data with `NumpySlicesDataset`.
-- 加载Numpy arrays数据
+- Loading NumPy array data
```python
- # 从Numpy arrays构建数据管道
-
import numpy as np
import mindspore.dataset as ds
- # 使用numpy构建一个数组
features, labels = np.random.sample((5, 2)), np.random.sample((5, 1))
- # 从numpy中构建数据管道
- # 注意:传入参数需要是一个tuple,即是(features, labels);column_names用于指定生成的数据集名称为col1, col2
+
data = (features, labels)
dataset = ds.NumpySlicesDataset(data, column_names=["col1", "col2"], shuffle=False)
- # 启动数据管道
for data in dataset:
print(data[0], " ", data[1])
```
+  The output is as follows:
+
```
[0.49893939 0.36348882] [0.15234002]
[0.83845534 0.19721032] [0.94602561]
@@ -328,22 +293,19 @@ TFRecord是Tensorflow定义的一种二进制数据文件格式。
- 加载Python list数据
```python
- # 从Python list构建数据管道
import mindspore.dataset as ds
- # 构建一个list
data1 = [[1, 2], [3, 4]]
- # 从list中构建数据管道
- # column_names用于指定生成的数据集名称为col1
dataset = ds.NumpySlicesDataset(data1, column_names=["col1"], shuffle=False)
- # 启动数据管道
for data in dataset:
print(data[0])
```
+ 输出结果如下:
+
```
[1 2]
[3 4]
@@ -352,60 +314,42 @@ TFRecord是Tensorflow定义的一种二进制数据文件格式。
- 加载Python dict数据
```python
- # 从Python dict构建数据管道
-
import mindspore.dataset as ds
- # 构建一个dict
data1 = {"a": [1, 2], "b": [3, 4]}
- # 从dict中构建数据管道
- # column_names用于指定生成的数据集名称为col1, col2
dataset = ds.NumpySlicesDataset(data1, column_names=["col1", "col2"], shuffle=False)
- # 启动数据管道
for data in dataset.create_dict_iterator():
print(data)
```
+ 输出结果如下:
+
```
- {'col1': array(1, dtype=int64), 'col2': array(3, dtype=int64)}
- {'col1': array(2, dtype=int64), 'col2': array(4, dtype=int64)}
+ {'col1': Tensor(shape=[], dtype=Int64, value= 1), 'col2': Tensor(shape=[], dtype=Int64, value= 3)}
+ {'col1': Tensor(shape=[], dtype=Int64, value= 2), 'col2': Tensor(shape=[], dtype=Int64, value= 4)}
```
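
上面dict数据的切分方式可以用纯Python示意:各键的值按元素位置逐一配对,键依次对应`column_names`中的列名。以下是一个不依赖MindSpore的最小示意:

```python
# 纯Python示意NumpySlicesDataset对dict的切分逻辑(仅为示意,非MindSpore实现)
data1 = {"a": [1, 2], "b": [3, 4]}
column_names = ["col1", "col2"]

# 各键的值按元素位置配对,得到每一"行"数据
rows = [dict(zip(column_names, values)) for values in zip(*data1.values())]
print(rows)  # [{'col1': 1, 'col2': 3}, {'col1': 2, 'col2': 4}]
```
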
-### text数据格式
-
-```python
-import mindspore.dataset as ds
-
-# 指定text数据格式地址
-DATA_DIR = "text_file_path"
-text_dataset = ds.TextFileDataset(DATA_DIR)
+### CSV数据格式
-# 启动数据管道读取
-for data in text_dataset.create_dict_iterator():
- print(data["label"])
-```
+下面的样例通过`CSVDataset`加载CSV格式数据集文件,并展示了已加载数据的标签。
-### CSV数据格式
+Text格式数据集文件的加载方式与CSV文件类似。
```python
import mindspore.dataset as ds
-# 指定CSV数据格式地址
DATA_DIR = "csv_file_path"
csv_dataset = ds.CSVDataset(DATA_DIR)
-# 启动数据管道读取
-for data in csv_dataset.create_dict_iterator():
- print(data["label"])
+for data in csv_dataset.create_dict_iterator(output_numpy=True):
+ print(data["1"])
```
->更多数据格式文件加载说明,参见对应[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)。
-
## 自定义数据集加载
-对于目前MindSpore不支持直接加载的数据集,可以通过构造GeneratorDataset对象实现自定义方式的加载,或者将其转换成MindRecord数据格式。目前自定义数据集加载有以下几种方式。
+对于目前MindSpore不支持直接加载的数据集,可以通过构造`GeneratorDataset`对象实现自定义方式的加载,或者将其转换成MindRecord数据格式。目前自定义数据集加载有以下几种方式。
### 构造数据集生成函数
@@ -415,23 +359,22 @@ for data in csv_dataset.create_dict_iterator():
import numpy as np
import mindspore.dataset as ds
-# 随机生成一个数据集
np.random.seed(58)
data = np.random.sample((5, 2))
label = np.random.sample((5, 1))
-# 自定义数据返回方式
def GeneratorFunc():
for i in range(5):
yield (data[i], label[i])
-# 构建自定义数据集对象
dataset = ds.GeneratorDataset(GeneratorFunc, ["data", "label"])
for data in dataset.create_dict_iterator():
print(data["data"], data["label"])
```
+输出结果如下:
+
```
[0.36510558 0.45120592] [0.78888122]
[0.49606035 0.07562207] [0.38068183]
@@ -476,6 +419,8 @@ for data in dataset.create_dict_iterator():
print(data["data"], data["label"])
```
+输出结果如下:
+
```
[0.36510558 0.45120592] [0.78888122]
[0.49606035 0.07562207] [0.38068183]
@@ -511,6 +456,8 @@ for data in dataset.create_dict_iterator():
print(data["data"], data["label"])
```
+输出结果如下:
+
```
[0.36510558 0.45120592] [0.78888122]
[0.49606035 0.07562207] [0.38068183]
@@ -549,6 +496,8 @@ for data in dataset.create_dict_iterator():
print(data["data"], data["label"])
```
+输出结果如下:
+
```
[0.36510558 0.45120592] [0.78888122]
[0.57176158 0.28963401] [0.16271622]
diff --git a/docs/programming_guide/source_zh_cn/dtype.md b/docs/programming_guide/source_zh_cn/dtype.md
new file mode 100644
index 0000000000000000000000000000000000000000..0237e1d00445dc21733b63bf2c2a3c2becb69dc7
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/dtype.md
@@ -0,0 +1,64 @@
+# dtype
+
+
+
+- [dtype](#dtype)
+ - [概述](#概述)
+ - [数据类型转换接口](#数据类型转换接口)
+
+
+
+
+
+## 概述
+
+MindSpore张量支持不同的数据类型,包含`int8`、`int16`、`int32`、`int64`、`uint8`、`uint16`、`uint32`、`uint64`、`float16`、`float32`、`float64`、`bool_`,与NumPy的数据类型一一对应。
+
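上述一一对应关系中NumPy一侧的类型可以用如下代码查看(仅使用NumPy,不依赖MindSpore,作为示意):

```python
import numpy as np

# 与MindSpore张量数据类型一一对应的NumPy类型(示意)
np_types = [np.int8, np.int16, np.int32, np.int64,
            np.uint8, np.uint16, np.uint32, np.uint64,
            np.float16, np.float32, np.float64, np.bool_]
for t in np_types:
    print(np.dtype(t).name, np.dtype(t).itemsize, "bytes")
```
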
+在MindSpore的运算处理流程中,Python中的`int`数会被转换为定义的`int64`类型,`float`数会被转换为定义的`float32`类型。
+
+详细的类型支持情况请参考。
+
+以下代码打印MindSpore的数据类型int32:
+
+```python
+from mindspore import dtype as mstype
+
+data_type = mstype.int32
+print(data_type)
+```
+
+输出如下:
+
+```
+Int32
+```
+
+
+## 数据类型转换接口
+
+MindSpore提供了以下几个接口,实现与NumPy数据类型和Python内置的数据类型间的转换。
+
+- `dtype_to_nptype`:将MindSpore的数据类型转换为NumPy对应的数据类型。
+- `dtype_to_pytype`:将MindSpore的数据类型转换为Python对应的内置数据类型。
+- `pytype_to_dtype`:将Python内置的数据类型转换为MindSpore对应的数据类型。
+
+以下代码实现了不同数据类型间的转换,并打印转换后的类型。
+
+```python
+from mindspore import dtype as mstype
+
+np_type = mstype.dtype_to_nptype(mstype.int32)
+ms_type = mstype.pytype_to_dtype(int)
+py_type = mstype.dtype_to_pytype(mstype.float64)
+
+print(np_type)
+print(ms_type)
+print(py_type)
+```
+
+输出如下:
+
+```
+<class 'numpy.int32'>
+Int64
+<class 'float'>
+```
diff --git a/docs/programming_guide/source_zh_cn/execution_management.rst b/docs/programming_guide/source_zh_cn/execution_management.rst
new file mode 100644
index 0000000000000000000000000000000000000000..b57742c1c72acd6a5f4e25b8fd9ee7dba5cd6dfc
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/execution_management.rst
@@ -0,0 +1,9 @@
+执行管理
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ context
+ run
+ callback
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/extension.rst b/docs/programming_guide/source_zh_cn/extension.rst
new file mode 100644
index 0000000000000000000000000000000000000000..ffba7b0682c05c45a45ee9f9784935b35e874b33
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/extension.rst
@@ -0,0 +1,7 @@
+功能扩展
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ probability
\ No newline at end of file
diff --git a/api/source_zh_cn/programming_guide/images/api_structure.png b/docs/programming_guide/source_zh_cn/images/api_structure.png
similarity index 100%
rename from api/source_zh_cn/programming_guide/images/api_structure.png
rename to docs/programming_guide/source_zh_cn/images/api_structure.png
diff --git a/docs/programming_guide/source_zh_cn/images/batch.png b/docs/programming_guide/source_zh_cn/images/batch.png
new file mode 100644
index 0000000000000000000000000000000000000000..ee974652d361b4085033a08789a036d331c2bec8
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/batch.png differ
diff --git a/api/source_zh_cn/programming_guide/images/cifar.png b/docs/programming_guide/source_zh_cn/images/cifar.png
similarity index 100%
rename from api/source_zh_cn/programming_guide/images/cifar.png
rename to docs/programming_guide/source_zh_cn/images/cifar.png
diff --git a/api/source_zh_cn/programming_guide/images/cifar2.png b/docs/programming_guide/source_zh_cn/images/cifar2.png
similarity index 100%
rename from api/source_zh_cn/programming_guide/images/cifar2.png
rename to docs/programming_guide/source_zh_cn/images/cifar2.png
diff --git a/docs/programming_guide/source_zh_cn/images/concat.png b/docs/programming_guide/source_zh_cn/images/concat.png
new file mode 100644
index 0000000000000000000000000000000000000000..7a28ff7826cc2a1c6334e2ff15eeaaffd6b67c06
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/concat.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/ctrans_invert.png b/docs/programming_guide/source_zh_cn/images/ctrans_invert.png
new file mode 100644
index 0000000000000000000000000000000000000000..b73f9bd1abed0b4064d10461cc360160591ef4e3
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/ctrans_invert.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/ctrans_resize.png b/docs/programming_guide/source_zh_cn/images/ctrans_resize.png
new file mode 100644
index 0000000000000000000000000000000000000000..e5275e371cbe0b668a0f6f1d699ea67efa09956f
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/ctrans_resize.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/map.png b/docs/programming_guide/source_zh_cn/images/map.png
new file mode 100644
index 0000000000000000000000000000000000000000..275631c1c5f0ea256be00004251e61c382748487
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/map.png differ
diff --git a/api/source_zh_cn/programming_guide/images/mnist.png b/docs/programming_guide/source_zh_cn/images/mnist.png
similarity index 100%
rename from api/source_zh_cn/programming_guide/images/mnist.png
rename to docs/programming_guide/source_zh_cn/images/mnist.png
diff --git a/api/source_zh_cn/programming_guide/images/project.png b/docs/programming_guide/source_zh_cn/images/project.png
similarity index 100%
rename from api/source_zh_cn/programming_guide/images/project.png
rename to docs/programming_guide/source_zh_cn/images/project.png
diff --git a/docs/programming_guide/source_zh_cn/images/pytrans_compose.png b/docs/programming_guide/source_zh_cn/images/pytrans_compose.png
new file mode 100644
index 0000000000000000000000000000000000000000..6d74fc231a7253393f98a645c0c68c7b2c517fb2
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/pytrans_compose.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/randomcrop.png b/docs/programming_guide/source_zh_cn/images/randomcrop.png
new file mode 100644
index 0000000000000000000000000000000000000000..ef62fe1a08f221a2c4ce81f9e60ba5c9e0d93a61
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/randomcrop.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/randomhorizontalflip.png b/docs/programming_guide/source_zh_cn/images/randomhorizontalflip.png
new file mode 100644
index 0000000000000000000000000000000000000000..2d851183a8f858c54a26b636703b9177df4ec80e
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/randomhorizontalflip.png differ
diff --git a/api/source_zh_cn/programming_guide/images/rename.png b/docs/programming_guide/source_zh_cn/images/rename.png
similarity index 100%
rename from api/source_zh_cn/programming_guide/images/rename.png
rename to docs/programming_guide/source_zh_cn/images/rename.png
diff --git a/docs/programming_guide/source_zh_cn/images/repeat.png b/docs/programming_guide/source_zh_cn/images/repeat.png
new file mode 100644
index 0000000000000000000000000000000000000000..9717ec81c52f23615e236d27e0f7c96bd6ac1155
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/repeat.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/shuffle.png b/docs/programming_guide/source_zh_cn/images/shuffle.png
new file mode 100644
index 0000000000000000000000000000000000000000..4464cefad03beefac6bb413da22eebeffaf8fe41
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/shuffle.png differ
diff --git a/api/source_zh_cn/programming_guide/images/take.png b/docs/programming_guide/source_zh_cn/images/take.png
similarity index 100%
rename from api/source_zh_cn/programming_guide/images/take.png
rename to docs/programming_guide/source_zh_cn/images/take.png
diff --git a/docs/programming_guide/source_zh_cn/images/tranform_bad.png b/docs/programming_guide/source_zh_cn/images/tranform_bad.png
new file mode 100644
index 0000000000000000000000000000000000000000..0f659a14be8d8af05cee1757adc3d67664e1c259
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/tranform_bad.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/tranform_good_1.png b/docs/programming_guide/source_zh_cn/images/tranform_good_1.png
new file mode 100644
index 0000000000000000000000000000000000000000..e147f019e9211ee888e58172469ff1fabe4fa776
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/tranform_good_1.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/tranform_good_2.png b/docs/programming_guide/source_zh_cn/images/tranform_good_2.png
new file mode 100644
index 0000000000000000000000000000000000000000..f6ed65482d233d88f46b108c3fe4e21bd00df4c5
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/tranform_good_2.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/tranform_good_3.png b/docs/programming_guide/source_zh_cn/images/tranform_good_3.png
new file mode 100644
index 0000000000000000000000000000000000000000..575d8038bb7f8260b86d226033c30e15c3b20e0e
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/tranform_good_3.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/tranform_pipeline.png b/docs/programming_guide/source_zh_cn/images/tranform_pipeline.png
new file mode 100644
index 0000000000000000000000000000000000000000..7278418d5ebfb3db921627f213ceb455aba53794
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/tranform_pipeline.png differ
diff --git a/docs/programming_guide/source_zh_cn/images/zip.png b/docs/programming_guide/source_zh_cn/images/zip.png
new file mode 100644
index 0000000000000000000000000000000000000000..f0052435898ae6a3546dfea9c50711ab3f303699
Binary files /dev/null and b/docs/programming_guide/source_zh_cn/images/zip.png differ
diff --git a/docs/programming_guide/source_zh_cn/index.rst b/docs/programming_guide/source_zh_cn/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..faca5855e5035bd6f461d190f4e1d1ed629c69ee
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/index.rst
@@ -0,0 +1,20 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 11:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore编程指南
+===================
+
+.. toctree::
+ :maxdepth: 1
+
+ api_structure
+ data_type
+ compute_component
+ data_pipeline
+ execution_management
+ auto_parallel
+ advanced_use
+ network_list
+ operator_list
diff --git a/docs/programming_guide/source_zh_cn/infer.md b/docs/programming_guide/source_zh_cn/infer.md
new file mode 100644
index 0000000000000000000000000000000000000000..cd9b07a7813729a2c4309a5c3e1d47a95518f620
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/infer.md
@@ -0,0 +1,19 @@
+# 推理
+
+
+
+- [推理](#推理)
+
+
+
+
+
+基于MindSpore训练后的模型,支持在Ascend 910 AI处理器、Ascend 310 AI处理器、GPU、CPU、端侧等多种不同的平台上执行推理。使用方法可参考如下教程:
+
+- [在Ascend 910 AI处理器上执行推理](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/multi_platform_inference.html#ascend-910-ai)
+- [在Ascend 310 AI处理器上执行推理](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/multi_platform_inference.html#ascend-310-ai)
+- [在GPU上执行推理](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/multi_platform_inference.html#gpu)
+- [在CPU上执行推理](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/multi_platform_inference.html#cpu)
+- [在端侧执行推理](https://www.mindspore.cn/lite/tutorial/lite/zh-CN/r1.0/quick_start/quick_start.html)
+
+同时,MindSpore提供了一个轻量级、高性能的服务模块,称为MindSpore Serving,可帮助MindSpore开发者在生产环境中高效部署在线推理服务,使用方法可参考[部署推理服务](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.0/serving.html)。
\ No newline at end of file
diff --git a/api/source_zh_cn/programming_guide/component.md b/docs/programming_guide/source_zh_cn/network_component.md
similarity index 76%
rename from api/source_zh_cn/programming_guide/component.md
rename to docs/programming_guide/source_zh_cn/network_component.md
index dd92ce5515f63c12a6e11d84c3b8648734f94c8f..83a785954f7f9a268dfc8cd3c90f2fe024c049eb 100644
--- a/api/source_zh_cn/programming_guide/component.md
+++ b/docs/programming_guide/source_zh_cn/network_component.md
@@ -4,24 +4,25 @@
- [常用网络组件](#常用网络组件)
- [概述](#概述)
- - [GradOperation](#GradOperation)
- - [WithLossCell](#WithLossCell)
- - [TrainOneStepCell](#TrainOneStepCell)
+ - [GradOperation](#gradoperation)
+ - [WithLossCell](#withlosscell)
+ - [TrainOneStepCell](#trainonestepcell)
+
+
## 概述
-MindSpore封装一些常用的网络组件,用于网络的训练,推理,求梯度和数据处理等。
+MindSpore封装了一些常用的网络组件,用于网络的训练、推理、求梯度和数据处理等操作。
这些网络组件可以直接被用户使用,同样也会在`model.train`和`model.eval`等更高级的封装接口内部进行使用。
-本节内容将会介绍三个网络组件,分别是`GradOperation`,`WithLossCell`和`TrainOneStepCell`,将会从功能,用户使用和内部使用三个方面来进行介绍。
+本节内容将会介绍三个网络组件,分别是`GradOperation`、`WithLossCell`和`TrainOneStepCell`,将会从功能、用户使用和内部使用三个方面来进行介绍。
## GradOperation
-GradOperation组件用于生成输入函数的梯度,利用`get_all`,`get_by_list`和`sens_param`参数
-控制梯度的计算方式,细节内容详见API文档。
+GradOperation组件用于生成输入函数的梯度,利用`get_all`、`get_by_list`和`sens_param`参数控制梯度的计算方式,细节内容详见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.ops.html#mindspore.ops.GradOperation)。
GradOperation的使用实例如下:
@@ -59,9 +60,7 @@ y = Tensor([[0.01, 0.3, 1.1], [0.1, 0.2, 1.3], [2.1, 1.2, 3.3]], dtype=mstype.fl
GradNetWrtX(Net())(x, y)
```
-上面的例子是计算`Net`相对与x的梯度值,首先需要定义网络`Net`作为`GradOperation`的输入,
-实例创建了包含梯度运算的`GradNetWrtX`。调用`GradNetWrtX`是将网络传入`GradOperation`生成梯度函数,
-将输入数据传入梯度函数中返回最终结果。
+上面的例子是计算`Net`相对于`x`的梯度值。首先需要定义网络`Net`作为`GradOperation`的输入,实例创建了包含梯度运算的`GradNetWrtX`;调用`GradNetWrtX`是将网络传入`GradOperation`生成梯度函数,再将输入数据传入梯度函数中返回最终结果。
输出如下:
@@ -76,7 +75,7 @@ MindSpore涉及梯度计算的其他组件,例如`WithGradCell`和`TrainOneSte
## WithLossCell
-`WithLossCell`本质上是一个包含损失函数的`Cell`, 构造`WithLossCell`需要事先定义好网络和损失函数。
+`WithLossCell`本质上是一个包含损失函数的`Cell`,构造`WithLossCell`需要事先定义好网络和损失函数。
下面通过一个实例来介绍其具体的使用, 首先需要构造一个网络,内容如下:
@@ -124,20 +123,19 @@ class LeNet(nn.Cell):
return output
```
-下面是`WithLossCell`的使用实例,分别定义好网络和损失函数,然后创建一个`WithLossCell`,
-然后传入输入数据和标签数据,`WithLossCell`内部根据网络和损失函数返回计算结果
+下面是`WithLossCell`的使用实例,分别定义好网络和损失函数,然后创建一个`WithLossCell`,传入输入数据和标签数据,`WithLossCell`内部根据网络和损失函数返回计算结果。
```
data = Tensor(np.ones([32, 1, 32, 32]).astype(np.float32) * 0.01)
label = Tensor(np.ones([32]).astype(np.int32))
net = LeNet()
-criterion = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
+criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_with_criterion = WithLossCell(net, criterion)
loss = net_with_criterion(data, label)
print("+++++++++Loss+++++++++++++")
print(loss)
```
-输出结果如下:
+输出如下:
```
+++++++++Loss+++++++++++++
2.302585
@@ -157,7 +155,7 @@ learning_rate = 0.01
momentum = 0.9
optimizer = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), learning_rate, momentum)
-criterion = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
+criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_with_criterion = WithLossCell(net, criterion)
train_network = TrainOneStepCell(net_with_criterion, optimizer) # optimizer
for i in range(5):
@@ -167,10 +165,9 @@ for i in range(5):
print(res)
```
-用例中构造了优化器和一个`WithLossCell`的实例,然后传入`TrainOneStepCell` 中初始化一个训练网络,用例循环五次,相当于网络训练了五次,
-并输出每次的loss结果,由结果可以看出每次训练后loss值在逐渐减小。
+用例中构造了优化器和一个`WithLossCell`的实例,然后传入`TrainOneStepCell`中初始化一个训练网络。用例循环五次,相当于网络训练了五次,并输出每次的loss结果。由结果可以看出,每次训练后loss值在逐渐减小。
-输出结果如下:
+输出如下:
```
+++++++++result:0++++++++++++
2.302585
diff --git a/docs/programming_guide/source_zh_cn/network_list.rst b/docs/programming_guide/source_zh_cn/network_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..9f3f23bd00ff952ef7824cbc149905a4c64ce876
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/network_list.rst
@@ -0,0 +1,7 @@
+网络支持
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ MindSpore网络支持
\ No newline at end of file
diff --git a/api/source_zh_cn/programming_guide/operator.md b/docs/programming_guide/source_zh_cn/operator.md
similarity index 50%
rename from api/source_zh_cn/programming_guide/operator.md
rename to docs/programming_guide/source_zh_cn/operator.md
index a62e2250259863469be22349a058727d7b8e12dc..6dab6864973a2da75a39a2d77c9d85d4c58656ff 100644
--- a/api/source_zh_cn/programming_guide/operator.md
+++ b/docs/programming_guide/source_zh_cn/operator.md
@@ -1,454 +1,586 @@
-# 算子组件
-
-算子组件指常用的算子及其操作,按功能大致可分为张量操作,网络操作,数组操作,图像操作,编码操作,调试操作,量化操作等七个模块。所有的算子在Ascend芯片或者CPU, GPU的支持情况,参见[这里](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html "list")
-
-
-这七类算子操作的相互关系见下:
-
-
-
-- [算子组件](#算子组件)
- - [张量操作](#张量操作)
- - [标量运算](#标量运算)
- - [加法](#加法)
- - [Element-wise 除法](#element-wise-除法)
- - [Element-wise 乘](#element-wise-乘)
- - [三角函数](#求三角函数)
- - [向量运算](#向量运算)
- - [Concat](#concat-算子)
- - [Squeeze](#squeeze)
- - [Sparse2Dense](#求sparse2dense改变tensor维度使其变稠密)
- - [ScalarCast](#scalarcast)
- - [矩阵运算](#矩阵运算)
- - [矩阵乘法](#矩阵乘法)
- - [常见范数](#常见范数)
- - [广播机制](#广播机制)
- - [网络操作](#网络操作)
- - [特征提取](#特征提取)
- - [卷积操作](#卷积操作)
- - [卷积的反向传播操作](#卷积的反向传播算子操作)
- - [激活函数](#激活函数)
- - [LossFunction](#lossfunction)
- - [L1 Loss](#l1loss)
- - [优化算法](#优化算法)
- - [SGD](#sgd)
- - [数组操作](#数组操作)
- - [DType](#dtype)
- - [Cast](#cast)
- - [Shape](#shape)
- - [图像操作](#图像操作)
- - [编码运算](#编码运算)
- - [BoundingBoxEncode](#boundingboxencode)
- - [BoundingBoxDecode](#boundingboxdecode)
- - [IOU](#iou-计算)
- - [调试操作](#调试操作)
- - [Debug](#debug)
- - [HookBackward](#hookbackward)
- - [量化操作](#量化操作)
- - [MinMaxUpdatePerLayer](#minmaxupdateperlayer)
-
-
-
-
-
-## 张量操作
-
-
-主要包括张量的结构操作和张量的数学运算。
-张量结构操作诸如:张量创建,索引切片,维度变换,合并分割。
-张量数学运算主要有:标量运算,向量运算,矩阵运算。另外我们会介绍张量运算的广播机制。
-本篇我们介绍张量的数学运算。
-
-
-
-### 标量运算
-张量的数学运算符可以分为标量运算符、向量运算符、以及矩阵运算符。
-加减乘除乘方,以及三角函数,指数,对数等常见函数,逻辑比较运算符等都是标量运算符。
-标量运算符的特点是对张量实施逐元素运算。
-有些标量运算符对常用的数学运算符进行了重载。并且支持类似numpy的广播特性。
-
-举例说明:
-```python
-import numpy as np
-import mindspore # 导入mindspore包
-from mindspore import Tensor # 导入mindspore下的Tensor包
-import mindspore.ops.operations as P
-input_x = mindspore.Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
-input_y = 3.0
-input_x**input_y
-```
-
-真实输入为:
-```python
-print(input_x)
-[ 1. 8. 64.]
-```
-
-真实输出为:
-```python
-print(input_x**input_y)
-[ 1. 8. 64.]
-```
-
-#### 加法
-```python
-input_x + input_y
-[4.0 5.0 7.0]
-```
-
-除普通加外,还有element-wise加法:
-```python
-net = NetAddN()
-input_x = Tensor(np.array([1, 2, 3]), mindspore.float32)
-input_y = Tensor(np.array([4, 5, 6]), mindspore.float32)
-net(input_x, input_y, input_x, input_y)[10.0, 14.0, 18.0]
-```
-
-#### Element-wise 除法
-```python
-input_x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
-input_y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
-div = P.Div()
-div(input_x, input_y)
-```
-
-求FloorDiv:
-```python
-input_x = Tensor(np.array([2, 4, -1]), mindspore.int32))
-input_y = Tensor(np.array([3, 3, 3]), mindspore.int32)
-floor_div = P.FloorDiv()
-floor_div(input_x, input_y)[0, 1, -1]
-```
-
-#### Element-wise 乘
-```python
-input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
-input_y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
-mul = P.Mul()
-mul(input_x, input_y)
-```
-
-真实输出:
-```python
-[4, 10, 18]
-```
-
-#### 求三角函数:
-```python
-acos = P.ACos()
-input_x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
-output = acos(input_x)
-```
-
-### 向量运算
-向量运算符只在一个特定轴上运算,将一个向量映射到一个标量或者另外一个向量。
-
-#### Concat 算子:
-```python
-data1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
-data2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
-op = P.Concat()
-output = op((data1, data2))
-```
-
-#### Squeeze
-```python
-input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
-squeeze = P.Squeeze(2)
-output = squeeze(input_tensor)
-```
-
-#### 求Sparse2Dense(改变tensor维度使其变稠密):
-```python
-indices = Tensor([[0, 1], [1, 2]])
-values = Tensor([1, 2], dtype=ms.float32)
-dense_shape = (3, 4)
-out = P.SparseToDense()(indices, values, dense_shape)
-```
-
-#### ScalarCast:
-```python
-scalar_cast = P.ScalarCast()
-output = scalar_cast(255.0, mindspore.int32)
-```
-
-### 矩阵运算
-矩阵运算包括: 矩阵乘法,矩阵范数,矩阵行列式,矩阵求特征值,矩阵分解等运算。
-
-#### 矩阵乘法:
-```python
-input_x = Tensor(np.ones(shape=[1, 3]), mindspore.float32)
-input_y = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
-matmul = P.MatMul()
-output = matmul(input_x, input_y)
-```
-
-#### 常见范数:
-
-```python
-input_x = Tensor(np.ones([128, 64, 32, 64]), mindspore.float32)
-scale = Tensor(np.ones([64]), mindspore.float32)
-bias = Tensor(np.ones([64]), mindspore.float32)
-mean = Tensor(np.ones([64]), mindspore.float32)
-variance = Tensor(np.ones([64]), mindspore.float32)
-batch_norm = P.BatchNorm()
-output = batch_norm(input_x, scale, bias, mean, variance)
-```
-
-#### 广播机制
-
-Broadcast 广播一个tensor到整个group
-举例说明:
-```python
-from mindspore import Tensor
-from mindspore.communication import init
-import mindspore.nn as nn
-import mindspore.ops.operations as P
-init()
-class Net(nn.Cell):
- def __init__(self):
- super(Net, self).__init__()
- self.broadcast = P.Broadcast(1)
-
- def construct(self, x):
- return self.broadcast((x,))
-
-input_ = Tensor(np.ones([2, 8]).astype(np.float32))
-net = Net()
-output = net(input_)
-```
-
-## 网络操作
-
-
-网络操作包括特征提取, 激活函数, LossFunction, 优化算法等:
-
-### 特征提取
-
-#### 卷积操作
-举例说明:
-```python
-input = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
-weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32))
-conv2d = P.Conv2D(out_channel=32, kernel_size=3)
-conv2d(input, weight)
-```
-
-#### 卷积的反向传播算子操作:
-输出结果:
-```python
-dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
-weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
-x = Tensor(np.ones([10, 32, 32, 32]))
-conv2d_backprop_input = P.Conv2DBackpropInput(out_channel=32, kernel_size=3)
-conv2d_backprop_input(dout, weight, F.shape(x))
-```
-
-### 激活函数
-举例说明:
-```python
-input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
-softmax = P.Softmax()
-softmax(input_x)
-```
-
-输出结果:
-```python
-[0.01165623, 0.03168492, 0.08612854, 0.23412167, 0.6364086]
-```
-
-### LossFunction
-
-#### L1Loss:
-举例说明:
-```python
-loss = P.SmoothL1Loss()
-input_data = Tensor(np.array([1, 2, 3]), mindspore.float32)
-target_data = Tensor(np.array([1, 2, 2]), mindspore.float32)
-loss(input_data, target_data)
-```
-
-输出结果:
-```python
-[0, 0, 0.5]
-```
-
-### 优化算法
-#### SGD:
-```python
-sgd = P.SGD()
-parameters = Tensor(np.array([2, -0.5, 1.7, 4]), mindspore.float32)
-gradient = Tensor(np.array([1, -1, 0.5, 2]), mindspore.float32)
-learning_rate = Tensor(0.01, mindspore.float32)
-accum = Tensor(np.array([0.1, 0.3, -0.2, -0.1]), mindspore.float32)
-momentum = Tensor(0.1, mindspore.float32)
-stat = Tensor(np.array([1.5, -0.3, 0.2, -0.7]), mindspore.float32)
-result = sgd(parameters, gradient, learning_rate, accum, momentum, stat)
-```
-
-## 数组操作
-
-
-
-数组操作指操作对象是一些数组的操作。
-
-### DType
-返回跟输入的数据类型一致的并且适配Mindspore的tensor变量, 常用于Mindspore 工程内。
-举例说明:
-```python
-input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
-type = P.DType()(input_tensor)
-```
-
-### Cast
-转换输入的数据类型并且输出与目标数据类型相同的变量
-举例说明:
-```python
-input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
-input_x = Tensor(input_np)
-type_dst = mindspore.float16
-cast = P.Cast()
-result = cast(input_x, type_dst)
-```
-
-### Shape
-返回输入数据的形状
-举例说明:
-```python
-input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
-shape = P.Shape()
-output = shape(input_tensor)
-```
-
-## 图像操作
-
-
-图像操作包括图像预处理操作, 如图像剪切(Crop,便于得到大量训练样本)和大小变化(Reise,用于构建图像金子塔等):
-
-举例说明:
-```python
-class CropAndResizeNet(nn.Cell):
- def __init__(self, crop_size):
- super(CropAndResizeNet, self).__init__()
- self.crop_and_resize = P.CropAndResize()
- self.crop_size = crop_size
- @ms_function
- def construct(self, x, boxes, box_index):
- return self.crop_and_resize(x, boxes, box_index, self.crop_size)
-
-BATCH_SIZE = 1
-NUM_BOXES = 5
-IMAGE_HEIGHT = 256
-IMAGE_WIDTH = 256
-CHANNELS = 3
-image = np.random.normal(size=[BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS]).astype(np.float32)
-boxes = np.random.uniform(size=[NUM_BOXES, 4]).astype(np.float32)
-box_index = np.random.uniform(size=[NUM_BOXES], low=0, high=BATCH_SIZE).astype(np.int32)
-crop_size = (24, 24)
-crop_and_resize = CropAndResizeNet(crop_size=crop_size)
-output = crop_and_resize(Tensor(image), Tensor(boxes), Tensor(box_index))
-print(output.asnumpy())
-```
-
-## 编码运算
-
-
-编码运算包括 BoundingBox Encoding和 BoundingBox Decoding, IOU计算等。
-
-### BoundingBoxEncode
-对物体所在区域方框进行编码,得到类似PCA的更精简信息,以便做后续类似特征提取,物体检测,图像恢复等任务。
-
-举例说明:
-```python
-anchor_box = Tensor([[4,1,2,1],[2,2,2,3]],mindspore.float32)
-groundtruth_box = Tensor([[3,1,2,2],[1,2,1,4]],mindspore.float32)
-boundingbox_encode = P.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))
-boundingbox_encode(anchor_box, groundtruth_box)
-```
-输出结果为:
-```python
-[[5.0000000e-01 5.0000000e-01 -6.5504000e+04 6.9335938e-01]
- [-1.0000000e+00 2.5000000e-01 0.0000000e+00 4.0551758e-01]]
-```
-
-### BoundingBoxDecode
-编码器对区域位置信息解码之后,用此算子进行解码。
-
-举例说明:
-```python
-anchor_box = Tensor([[4,1,2,1],[2,2,2,3]],mindspore.float32)
-deltas = Tensor([[3,1,2,2],[1,s2,1,4]],mindspore.float32)
-boundingbox_decode = P.BoundingBoxDecode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0), max_shape=(768, 1280), wh_ratio_clip=0.016)
-boundingbox_decode(anchor_box, deltas)
-```
-输出结果:
-```python
-[[4.1953125 0. 0. 5.1953125]
- [2.140625 0. 3.859375 60.59375]]
-```
-
-### IOU 计算:
-计算预测的物体所在方框和真实物体所在方框的交集区域与并集区域的占比大小。其常作为一种损失函数,用以优化模型。
-
-举例说明:
-```python
-iou = P.IOU()
-anchor_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
-gt_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
-```
-
-## 调试操作
-调试操作指的是用于调试网络的一些常用算子及其操作, 例如Debug等
-
-### Debug
-输出tensor变量的数值, 方便用户随时随地打印想了解或者debug必需的某变量数值。
-
-参考示例:
-```python
-class DebugNN(nn.Cell):
- def __init__(self,):
- self.debug = nn.Debug()
-
- def construct(self, x, y):
- x = self.add(x, y)
- self.debug(x)
- return x
-```
-
-### HookBackward
-打印中间变量的梯度,这一算子特别常用,遂举例在此,虽目前仅支持Pynative 形式
-参考示例:
-```python
-def hook_fn(grad_out):
- print(grad_out)
-
-grad_all = GradOperation(get_all=True)
-hook = P.HookBackward(hook_fn)
-
-def hook_test(x, y):
- z = x * y
- z = hook(z)
- z = z * y
- return z
-
-def backward(x, y):
- return grad_all(hook_test)(x, y)
-
-backward(1, 2)
-```
-
-## 量化操作
-
-
-量化操作指对tensor做量化或者反量化操作。 量化操作指将浮点数用整数的加和表示,利用整数加和并行加速时速度快的优点, 实
-现在可接受精度损失下的性能提升。反量化指其反过程,其在精度要求高的地方常被用到。
-
-### MinMaxUpdatePerLayer
-完成在训练时的量化和反量化操作
-举例说明:
-```python
-input_tensor = Tensor(np.random.rand(3, 16, 5, 5), mstype.float32)
-min_tensor = Tensor(np.array([-6]), mstype.float32)
-max_tensor = Tensor(np.array([6]), mstype.float32)
-output_tensor = FakeQuantPerLayer(num_bits=8)(input_tensor, min_tensor, max_tensor)
-```
+# 算子
+
+
+
+- [算子](#算子)
+ - [概述](#概述)
+ - [张量操作](#张量操作)
+ - [标量运算](#标量运算)
+ - [加法](#加法)
+ - [Element-wise乘法](#element-wise乘法)
+ - [求三角函数](#求三角函数)
+ - [向量运算](#向量运算)
+ - [Squeeze](#squeeze)
+ - [求Sparse2Dense](#求sparse2dense)
+ - [矩阵运算](#矩阵运算)
+ - [矩阵乘法](#矩阵乘法)
+ - [广播机制](#广播机制)
+ - [网络操作](#网络操作)
+ - [特征提取](#特征提取)
+ - [卷积操作](#卷积操作)
+ - [卷积的反向传播算子操作](#卷积的反向传播算子操作)
+ - [激活函数](#激活函数)
+ - [LossFunction](#lossfunction)
+ - [L1Loss](#l1loss)
+ - [优化算法](#优化算法)
+ - [SGD](#sgd)
+ - [数组操作](#数组操作)
+ - [DType](#dtype)
+ - [Cast](#cast)
+ - [Shape](#shape)
+ - [图像操作](#图像操作)
+ - [编码运算](#编码运算)
+ - [BoundingBoxEncode](#boundingboxencode)
+ - [BoundingBoxDecode](#boundingboxdecode)
+ - [IOU计算](#iou计算)
+ - [调试操作](#调试操作)
+ - [Debug](#debug)
+ - [HookBackward](#hookbackward)
+ - [量化操作](#量化操作)
+ - [MinMaxUpdatePerLayer](#minmaxupdateperlayer)
+
+
+
+
+
+## 概述
+
+算子组件包含了常用的算子及其操作,按功能大致可分为张量操作、网络操作、数组操作、图像操作、编码操作、调试操作和量化操作七个模块。所有的算子在Ascend AI处理器、GPU和CPU的支持情况,参见[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.0/operator_list.html)。
+
+## 张量操作
+
+张量操作包括张量的结构操作和张量的数学运算。
+
+张量结构操作有:张量创建、索引切片、维度变换和合并分割。
+
+张量数学运算有:标量运算、向量运算和矩阵运算。
+
+这里以张量的数学运算和运算的广播机制为例,介绍使用方法。
+
+### 标量运算
+
+张量的数学运算符可以分为标量运算符、向量运算符以及矩阵运算符。
+
+加减乘除乘方,以及三角函数、指数、对数等常见函数,逻辑比较运算符等都是标量运算符。
+
+标量运算符的特点是对张量实施逐元素运算。
+
+有些标量运算符对常用的数学运算符进行了重载,并且支持类似NumPy的广播特性。
+
+以下代码以`input_y`为指数,对`input_x`逐元素求幂:
+```python
+import numpy as np
+import mindspore
+from mindspore import Tensor
+import mindspore.ops.operations as P
+input_x = mindspore.Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
+input_y = 3.0
+print(input_x**input_y)
+```
+
+输出如下:
+```
+[ 1. 8. 64.]
+```
+
+#### 加法
+
+上述代码中`input_x`和`input_y`的相加实现方式如下:
+```python
+print(input_x + input_y)
+```
+
+输出如下:
+```
+[4.0 5.0 7.0]
+```
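
这一张量与标量的相加依赖广播:标量先被扩展成与张量相同的形状,再逐元素相加。该行为与NumPy一致,可用NumPy验证(示意,不依赖MindSpore):

```python
import numpy as np

# NumPy广播示意:标量3.0被广播到数组的每个元素后逐元素相加
input_x = np.array([1.0, 2.0, 4.0], dtype=np.float32)
input_y = 3.0
result = input_x + input_y
print(result)  # [4. 5. 7.]
```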
+
+#### Element-wise乘法
+
+以下代码实现了Element-wise乘法示例:
+```python
+import numpy as np
+import mindspore
+from mindspore import Tensor
+import mindspore.ops.operations as P
+
+input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
+input_y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
+mul = P.Mul()
+res = mul(input_x, input_y)
+
+print(res)
+```
+
+输出如下:
+```
+[ 4. 10. 18.]
+```
+
+#### 求三角函数
+
+以下代码实现了Acos:
+```python
+import numpy as np
+import mindspore
+from mindspore import Tensor
+import mindspore.ops.operations as P
+
+acos = P.ACos()
+input_x = Tensor(np.array([0.74, 0.04, 0.30, 0.56]), mindspore.float32)
+output = acos(input_x)
+print(output)
+```
+
+输出如下:
+```
+[0.7377037, 1.5307858, 1.2661037, 0.97641146]
+```
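
`ACos`逐元素计算反余弦,其数值可用NumPy的`arccos`验证(示意,不依赖MindSpore):

```python
import numpy as np

# 用NumPy验证逐元素反余弦的数值结果
input_x = np.array([0.74, 0.04, 0.30, 0.56], dtype=np.float32)
output = np.arccos(input_x)
print(output)
```
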
+### 向量运算
+
+向量运算符只在一个特定轴上运算,将一个向量映射到一个标量或者另外一个向量。
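
例如沿某个特定轴求和,就把该轴上的每个向量映射成一个标量。可用NumPy示意这种按轴运算的含义(不依赖MindSpore):

```python
import numpy as np

# 沿轴1求和:把每个长度为2的向量映射为一个标量
x = np.ones((3, 2))
row_sum = np.sum(x, axis=1)
print(row_sum)  # [2. 2. 2.]
```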
+
+#### Squeeze
+
+以下代码实现了压缩第3个维度(该维度大小为1)的操作:
+```python
+import numpy as np
+import mindspore
+from mindspore import Tensor
+import mindspore.ops.operations as P
+
+input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
+squeeze = P.Squeeze(2)
+output = squeeze(input_tensor)
+
+print(output)
+```
+
+输出如下:
+```
+[[1. 1.]
+ [1. 1.]
+ [1. 1.]]
+```
+
+#### SparseToDense
+
+以下代码实现了将稀疏表示转换为稠密张量的SparseToDense示例:
+```python
+import numpy as np
+import mindspore as ms
+from mindspore import Tensor
+import mindspore.ops.operations as P
+
+indices = Tensor([[0, 1], [1, 2]])
+values = Tensor([1, 2], dtype=ms.float32)
+dense_shape = (3, 4)
+out = P.SparseToDense()(indices, values, dense_shape)
+
+print(out)
+```
+
+输出如下:
+```
+[[0. 1. 0. 0.]
+ [0. 0. 2. 0.]
+ [0. 0. 0. 0.]]
+```
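SparseToDense的计算逻辑可以用NumPy简单示意:按indices把values填入一个全零的稠密张量(仅为原理示意,并非MindSpore接口):

```python
import numpy as np

indices = np.array([[0, 1], [1, 2]])
values = np.array([1.0, 2.0], dtype=np.float32)
dense_shape = (3, 4)

# 构造全零稠密张量,再按坐标逐个填入非零值
dense = np.zeros(dense_shape, dtype=np.float32)
for (row, col), val in zip(indices, values):
    dense[row, col] = val

print(dense)
```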
+
+### 矩阵运算
+
+矩阵运算包括矩阵乘法、矩阵范数、矩阵行列式、矩阵求特征值、矩阵分解等运算。
+
+#### 矩阵乘法
+
+以下代码实现了input_x 和 input_y的矩阵乘法:
+```python
+import numpy as np
+import mindspore
+from mindspore import Tensor
+import mindspore.ops.operations as P
+
+input_x = Tensor(np.ones(shape=[1, 3]), mindspore.float32)
+input_y = Tensor(np.ones(shape=[3, 4]), mindspore.float32)
+matmul = P.MatMul()
+output = matmul(input_x, input_y)
+
+print(output)
+```
+
+输出如下:
+```
+[[3. 3. 3. 3.]]
+```
+
+#### 广播机制
+
+Broadcast是多卡并行场景下的数据广播算子,可以将某张卡上的输入数据广播到其他所有卡上,使各卡获得一致的数据(需要先调用`init()`初始化通信环境)。
+
+以下代码实现了广播机制的示例:
+```python
+from mindspore import Tensor
+from mindspore.communication import init
+import mindspore.nn as nn
+import mindspore.ops.operations as P
+import numpy as np
+
+init()
+class Net(nn.Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.broadcast = P.Broadcast(1)
+
+ def construct(self, x):
+ return self.broadcast((x,))
+
+input_ = Tensor(np.ones([2, 8]).astype(np.float32))
+net = Net()
+output = net(input_)
+```
+
+## 网络操作
+
+网络操作包括特征提取、激活函数、LossFunction、优化算法等。
+
+### 特征提取
+
+特征提取是机器学习中的常见操作,核心是提取比原输入更具代表性的Tensor。
+
+#### 卷积操作
+
+以下代码实现了常见卷积操作之一的2D convolution 操作:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+input_x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
+weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
+conv2d = P.Conv2D(out_channel=32, kernel_size=3)
+conv2d(input_x, weight)
+```
+
+#### 卷积的反向传播算子操作
+
+以下代码实现了卷积的反向传播算子操作,以梯度dout和权重weight为输入,计算相对于输入x的梯度:
+
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+import mindspore.ops.functional as F
+
+dout = Tensor(np.ones([10, 32, 30, 30]), mindspore.float32)
+weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
+x = Tensor(np.ones([10, 32, 32, 32]))
+conv2d_backprop_input = P.Conv2DBackpropInput(out_channel=32, kernel_size=3)
+conv2d_backprop_input(dout, weight, F.shape(x))
+```
+
+### 激活函数
+
+以下代码实现Softmax激活函数计算:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+input_x = Tensor(np.array([1, 2, 3, 4, 5]), mindspore.float32)
+softmax = P.Softmax()
+res = softmax(input_x)
+
+print(res)
+```
+
+输出如下:
+```
+[0.01165623 0.03168492 0.08612854 0.23412167 0.6364086]
+```
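Softmax的数学定义为 softmax(x)_i = exp(x_i) / Σ_j exp(x_j)。可以用NumPy验证上面的结果(仅为原理示意,并非MindSpore接口):

```python
import numpy as np

def softmax(x):
    # 先减去最大值保证数值稳定,再做指数归一化;结果不受平移影响
    e = np.exp(x - np.max(x))
    return e / e.sum()

out = softmax(np.array([1.0, 2.0, 3.0, 4.0, 5.0], dtype=np.float32))
print(out)        # 约为 [0.01165623 0.03168492 0.08612854 0.23412167 0.6364086 ]
print(out.sum())  # 各分量之和为1
```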
+
+### LossFunction
+
+#### SmoothL1Loss
+
+以下代码实现了SmoothL1 loss function:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+loss = P.SmoothL1Loss()
+input_data = Tensor(np.array([1, 2, 3]), mindspore.float32)
+target_data = Tensor(np.array([1, 2, 2]), mindspore.float32)
+loss(input_data, target_data)
+```
+
+输出如下:
+```
+[0. 0. 0.5]
+```
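SmoothL1Loss的逐元素定义为:当|x-y|<1时取0.5*(x-y)^2,否则取|x-y|-0.5(此处beta取默认值1)。可以用NumPy验证上面的结果(仅为原理示意,并非MindSpore接口):

```python
import numpy as np

def smooth_l1(pred, target):
    diff = np.abs(pred - target)
    # 误差小于1时用二次项(平滑),否则退化为线性的L1项
    return np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5)

loss = smooth_l1(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 2.0]))
print(loss)  # [0.  0.  0.5]
```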
+
+### 优化算法
+
+#### SGD
+
+以下代码实现了SGD梯度下降算法,并将计算结果存于result:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+sgd = P.SGD()
+parameters = Tensor(np.array([2, -0.5, 1.7, 4]), mindspore.float32)
+gradient = Tensor(np.array([1, -1, 0.5, 2]), mindspore.float32)
+learning_rate = Tensor(0.01, mindspore.float32)
+accum = Tensor(np.array([0.1, 0.3, -0.2, -0.1]), mindspore.float32)
+momentum = Tensor(0.1, mindspore.float32)
+stat = Tensor(np.array([1.5, -0.3, 0.2, -0.7]), mindspore.float32)
+result = sgd(parameters, gradient, learning_rate, accum, momentum, stat)
+
+print(result)
+```
+
+输出如下:
+```
+[0. 0. 0. 0.]
+```
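经典的带动量SGD更新规则为:accum = momentum * accum + gradient;parameter = parameter - learning_rate * accum。下面用NumPy示意这一更新过程(仅为算法原理示意,`P.SGD`的内部实现细节以官方文档为准):

```python
import numpy as np

parameters = np.array([2.0, -0.5, 1.7, 4.0], dtype=np.float32)
gradient = np.array([1.0, -1.0, 0.5, 2.0], dtype=np.float32)
accum = np.array([0.1, 0.3, -0.2, -0.1], dtype=np.float32)
learning_rate = 0.01
momentum = 0.1

# 动量累积:历史动量按momentum衰减后叠加当前梯度
accum = momentum * accum + gradient
# 参数沿累积动量方向按学习率更新
parameters = parameters - learning_rate * accum

print(parameters)
```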
+
+## 数组操作
+
+数组操作指对数组类型数据进行的操作。
+
+### DType
+
+返回跟输入的数据类型一致的并且适配MindSpore的Tensor变量,常用于MindSpore工程内。
+
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
+typea = P.DType()(input_tensor)
+
+print(typea)
+```
+
+输出如下:
+```
+Float32
+```
+
+### Cast
+
+转换输入的数据类型并且输出与目标数据类型相同的变量。
+
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
+input_x = Tensor(input_np)
+type_dst = mindspore.float16
+cast = P.Cast()
+result = cast(input_x, type_dst)
+print(result.dtype)
+```
+
+输出结果:
+```
+Float16
+```
+
+### Shape
+
+返回输入数据的形状。
+
+以下代码实现了返回输入数据input_tensor形状的操作:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+input_tensor = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
+shape = P.Shape()
+output = shape(input_tensor)
+print(output)
+```
+
+输出如下:
+```
+(3, 2, 1)
+```
+
+## 图像操作
+
+图像操作包括图像预处理操作,如图像剪切(Crop,便于得到大量训练样本)和大小变换(Resize,用于构建图像金字塔等)。
+
+以下代码实现了Crop和Resize操作:
+```python
+from mindspore import Tensor
+import mindspore.nn as nn
+import mindspore.ops.operations as P
+import numpy as np
+
+class CropAndResizeNet(nn.Cell):
+ def __init__(self, crop_size):
+ super(CropAndResizeNet, self).__init__()
+ self.crop_and_resize = P.CropAndResize()
+ self.crop_size = crop_size
+
+ def construct(self, x, boxes, box_index):
+ return self.crop_and_resize(x, boxes, box_index, self.crop_size)
+
+BATCH_SIZE = 1
+NUM_BOXES = 5
+IMAGE_HEIGHT = 256
+IMAGE_WIDTH = 256
+CHANNELS = 3
+image = np.random.normal(size=[BATCH_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS]).astype(np.float32)
+boxes = np.random.uniform(size=[NUM_BOXES, 4]).astype(np.float32)
+box_index = np.random.uniform(size=[NUM_BOXES], low=0, high=BATCH_SIZE).astype(np.int32)
+crop_size = (24, 24)
+crop_and_resize = CropAndResizeNet(crop_size=crop_size)
+output = crop_and_resize(Tensor(image), Tensor(boxes), Tensor(box_index))
+print(output.asnumpy())
+```
+
+## 编码运算
+
+编码运算包括BoundingBox Encoding、BoundingBox Decoding、IOU计算等。
+
+### BoundingBoxEncode
+
+对物体所在区域的边界框进行编码,得到类似PCA降维那样更精简的信息,以便进行后续的特征提取、物体检测、图像恢复等任务。
+
+以下代码实现了对anchor_box和groundtruth_box的boundingbox encode:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+anchor_box = Tensor([[4,1,2,1],[2,2,2,3]],mindspore.float32)
+groundtruth_box = Tensor([[3,1,2,2],[1,2,1,4]],mindspore.float32)
+boundingbox_encode = P.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0))
+res = boundingbox_encode(anchor_box, groundtruth_box)
+print(res)
+```
+
+输出如下:
+```
+[[5.0000000e-01 5.0000000e-01 -6.5504000e+04 6.9335938e-01]
+ [-1.0000000e+00 2.5000000e-01 0.0000000e+00 4.0551758e-01]]
+```
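常见的边界框编码(R-CNN系列使用的delta编码)将真实框相对于锚框的偏移表示为归一化的中心点偏移和宽高的对数比。下面用NumPy示意这一编码方式,框格式假定为(x1, y1, x2, y2),means取0、stds取1(仅为原理示意,与`P.BoundingBoxEncode`的实现细节可能存在差异):

```python
import numpy as np

def encode(anchor, gt):
    # 锚框与真实框的宽、高与中心点
    aw, ah = anchor[2] - anchor[0], anchor[3] - anchor[1]
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    ax, ay = anchor[0] + aw / 2, anchor[1] + ah / 2
    gx, gy = gt[0] + gw / 2, gt[1] + gh / 2
    # 中心点偏移按锚框尺寸归一化,宽高取对数比
    return np.array([(gx - ax) / aw, (gy - ay) / ah,
                     np.log(gw / aw), np.log(gh / ah)])

deltas = encode(np.array([0.0, 0.0, 2.0, 2.0]),
                np.array([1.0, 1.0, 3.0, 3.0]))
print(deltas)  # 同尺寸、中心偏移(1, 1)的框:中心偏移0.5、0.5,宽高对数比为0
```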
+
+### BoundingBoxDecode
+
+对编码后的区域位置信息进行解码,还原出边界框的位置。
+
+以下代码实现了对anchor_box和deltas的boundingbox decode:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+anchor_box = Tensor([[4,1,2,1],[2,2,2,3]],mindspore.float32)
+deltas = Tensor([[3,1,2,2],[1,2,1,4]],mindspore.float32)
+boundingbox_decode = P.BoundingBoxDecode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0), max_shape=(768, 1280), wh_ratio_clip=0.016)
+res = boundingbox_decode(anchor_box, deltas)
+print(res)
+```
+
+输出如下:
+```
+[[4.1953125 0. 0. 5.1953125]
+ [2.140625 0. 3.859375 60.59375]]
+```
+
+### IOU计算
+
+计算预测的物体所在方框和真实物体所在方框的交集区域与并集区域的占比大小,常作为一种损失函数,用以优化模型。
+
+以下代码实现了计算两个变量anchor_boxes和gt_boxes之间的IOU,以out输出:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore
+
+iou = P.IOU()
+anchor_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
+gt_boxes = Tensor(np.random.randint(1.0, 5.0, [3, 4]), mindspore.float16)
+out = iou(anchor_boxes, gt_boxes)
+print(out)
+```
+
+输出如下(输入为随机数,实际结果以运行为准):
+```
+[[0. -0. 0.]
+ [0. -0. 0.]
+ [0. 0. 0.]]
+```
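IOU即两个框交集面积与并集面积之比。下面用NumPy对两个已知的框计算IOU作验证,框格式假定为(x1, y1, x2, y2)(仅为原理示意,并非MindSpore接口):

```python
import numpy as np

def iou(box_a, box_b):
    # 交集矩形:左上角坐标取最大值,右下角坐标取最小值
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    # 并集面积 = 两框面积之和 - 交集面积
    return inter / (area_a + area_b - inter)

# 两个 2x2 的框,交集为 1x1
res = iou(np.array([0.0, 0.0, 2.0, 2.0]), np.array([1.0, 1.0, 3.0, 3.0]))
print(res)  # 1 / (4 + 4 - 1) ≈ 0.142857
```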
+
+## 调试操作
+
+调试操作指的是用于调试网络的一些常用算子及其操作,例如Debug等,可以帮助用户在网络开发过程中方便地查看和定位问题。
+
+### Debug
+
+输出Tensor变量的数值,方便用户打印需要查看或调试的变量数值。
+
+以下代码实现了输出x这一变量的值:
+```python
+from mindspore import nn
+import mindspore.ops.operations as P
+
+class DebugNN(nn.Cell):
+    def __init__(self):
+        super(DebugNN, self).__init__()
+        self.add = P.TensorAdd()
+        self.debug = nn.Debug()
+
+    def construct(self, x, y):
+        self.debug()
+        x = self.add(x, y)
+        self.debug(x)
+        return x
+```
+
+### HookBackward
+
+打印中间变量的梯度,是比较常用的算子,目前仅支持Pynative模式。
+
+以下代码实现了打印中间变量(例中x,y)的梯度:
+```python
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore.common.dtype as mstype
+from mindspore.ops import composite as C
+
+def hook_fn(grad_out):
+ print(grad_out)
+
+grad_all = C.GradOperation(get_all=True)
+hook = P.HookBackward(hook_fn)
+
+def hook_test(x, y):
+ z = x * y
+ z = hook(z)
+ z = z * y
+ return z
+
+def backward(x, y):
+ return grad_all(hook_test)(Tensor(x, mstype.float32), Tensor(y, mstype.float32))
+
+backward(1, 2)
+```
diff --git a/docs/programming_guide/source_zh_cn/operator_list.rst b/docs/programming_guide/source_zh_cn/operator_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..c99fcc79f8f44dc1a54bb75aea21b8a373fc62fe
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/operator_list.rst
@@ -0,0 +1,10 @@
+算子支持
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ MindSpore算子支持
+ MindSpore隐式类型转换的算子支持
+ MindSpore分布式算子支持
+ MindSpore Lite算子支持
\ No newline at end of file
diff --git a/api/source_zh_cn/programming_guide/optim.md b/docs/programming_guide/source_zh_cn/optim.md
similarity index 35%
rename from api/source_zh_cn/programming_guide/optim.md
rename to docs/programming_guide/source_zh_cn/optim.md
index 7a8ebb7096ae41b44fce8fcf3b4796033658431b..26629665bc2c98cd0e6e4529f705b848a1672139 100644
--- a/api/source_zh_cn/programming_guide/optim.md
+++ b/docs/programming_guide/source_zh_cn/optim.md
@@ -1,52 +1,57 @@
-# optim模块
+# 优化算法
-- [优化器](#优化器)
+- [优化算法](#优化算法)
- [概述](#概述)
- [学习率](#学习率)
- [dynamic_lr](#dynamic_lr)
- [learning_rate_schedule](#learning_rate_schedule)
- - [optimzer](#optimzer)
+ - [Optimizer](#optimizer)
- [如何使用](#如何使用)
- [内置优化器](#内置优化器)
-
+
## 概述
-mindSpore.nn.optim是Mindspore框架中实现各种优化算法的模块,包含常用的优化器,学习率等,并且接口具备足够的通用性,可以将以后更新、更复杂的方法集成到模块里。
+`mindspore.nn.optim`是MindSpore框架中实现各种优化算法的模块,包含常用的优化器、学习率等,并且接口具备足够的通用性,可以将以后更新、更复杂的方法集成到模块里。
-mindspore.nn.optim为模型提供常用的优化器,如SGD、ADAM、Momentum。优化器用于计算和更新梯度,模型优化算法的选择直接关系到最终模型的性能,如果有时候效果不好,未必是特征或者模型设计的问题,很有可能是优化算法的问题;
-同时还有mindspore.nn提供的学习率的模块,学习率learing_rate分为dynamic_lr和learning_rate_schedule,都是动态学习率,但是实现方式不同,学习率最为监督学习以及深度学习中重要的参数,其决定着目标函数是否能收敛到局部最小值以及何时能收敛到最小值。
-合适的学习率能够使目标函数在合适的的时间内收敛到局部最小值。
+`mindspore.nn.optim`为模型提供常用的优化器,如`SGD`、`ADAM`、`Momentum`。优化器用于计算和更新梯度,模型优化算法的选择直接关系到最终模型的性能,如果有时候效果不好,未必是特征或者模型设计的问题,很有可能是优化算法的问题;同时还有`mindspore.nn`提供的学习率的模块,学习率分为`dynamic_lr`和`learning_rate_schedule`,都是动态学习率,但是实现方式不同,学习率作为监督学习以及深度学习中重要的参数,其决定着目标函数是否能收敛到局部最小值以及何时能收敛到最小值。合适的学习率能够使目标函数在合适的时间内收敛到局部最小值。
> 本文档中的所有示例,支持CPU,GPU,Ascend环境。
## 学习率
### dynamic_lr
-mindspore.nn.dynamic_lr模块有以下几个类,piecewise_constant_lr类是得到分段不变的学习速率,exponential_decay_lr类是基于指数衰减函数计算学习率,natural_exp_decay_lr类是基于自然指数衰减函数计算学习率,inverse_decay_lr类是基于反时间衰减函数计算学习速率,cosine_decay_lr类是基于余弦衰减函数计算学习率,polynomial_decay_lr类是基于多项式衰减函数计算学习率,warmup_lr类是提高学习率,它们是属于dynamic_lr的不同实现方式。
+`mindspore.nn.dynamic_lr`模块有以下几个类:
-例如piecewise_constant_lr类代码样例如下:
+- `piecewise_constant_lr`类:获得分段常数学习率。
+- `exponential_decay_lr`类:基于指数衰减函数计算学习率。
+- `natural_exp_decay_lr`类:基于自然指数衰减函数计算学习率。
+- `inverse_decay_lr`类:基于反时间衰减函数计算学习速率。
+- `cosine_decay_lr`类:基于余弦衰减函数计算学习率。
+- `polynomial_decay_lr`类:基于多项式衰减函数计算学习率。
+- `warmup_lr`类:提高学习率。
-```
-class mindspore.nn.dynamic_lr.piecewise_constant_lr(milestone, learning_rates)
+它们是属于`dynamic_lr`的不同实现方式。
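`piecewise_constant_lr`的行为可以用纯Python简单示意:milestone的最后一个元素决定返回列表的长度,step落在哪个分段就取对应的学习率(仅为原理示意,并非MindSpore的实现):

```python
def piecewise_constant(milestone, learning_rates):
    lr = []
    last = 0
    for m, rate in zip(milestone, learning_rates):
        # [last, m) 区间内的step都使用同一学习率
        lr += [rate] * (m - last)
        last = m
    return lr

print(piecewise_constant([2, 5, 10], [0.1, 0.05, 0.01]))
# [0.1, 0.1, 0.05, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01]
```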
-Parameters:
- milestone (Union[list[int], tuple[int]]) – A list of milestone. This list is a monotone increasing list. Every element is a milestone step, and must be greater than 0.
- learning_rates (Union[list[float], tuple[float]]) – A list of learning rates.
+例如`piecewise_constant_lr`类代码样例如下:
-Returns:
- list[float]. The size of list
```
+from mindspore.nn.dynamic_lr import piecewise_constant_lr
-```
-milestone = [2, 5, 10]
-learning_rates = [0.1, 0.05, 0.01]
-piecewise_constant_lr(milestone, learning_rates)
+def test_dynamic_lr():
+ milestone = [2, 5, 10]
+ learning_rates = [0.1, 0.05, 0.01]
+ lr = piecewise_constant_lr(milestone, learning_rates)
+ print(lr)
+
+
+if __name__ == '__main__':
+ test_dynamic_lr()
```
返回结果如下:
@@ -56,49 +61,50 @@ piecewise_constant_lr(milestone, learning_rates)
### learning_rate_schedule
-mindspore.nn.learning_rate_schedule模块下有以下几个类。ExponentialDecayLR类,NaturalExpDecayLR类,InverseDecayLR类,CosineDecayLR类,PolynomialDecayLR类,WarmUpLR类。它们都属于learning_rate_schedule,只是实现方式不同。
+`mindspore.nn.learning_rate_schedule`模块下有以下几个类:`ExponentialDecayLR`类、`NaturalExpDecayLR`类、`InverseDecayLR`类、`CosineDecayLR`类、`PolynomialDecayLR`类和`WarmUpLR`类。它们都属于`learning_rate_schedule`,只是实现方式不同,各自含义如下:
-ExponentialDecayLR类是基于指数衰减函数计算学习率,NaturalExpDecayLR类是基于自然指数衰减函数巨酸学习率,InverseDecayLR类是基于反时间衰减函数计算学习速率,CosineDecayLR类是基于余弦衰减函数计算学习率,PolynomialDecayLR类是基于多项式衰减函数计算学习率,WarmUpLR类是提高学习率,它们是属于learning_rate_schedule的不同实现方式。
+- `ExponentialDecayLR`类:基于指数衰减函数计算学习率。
+- `NaturalExpDecayLR`类:基于自然指数衰减函数计算学习率。
+- `InverseDecayLR`类:基于反时间衰减函数计算学习速率。
+- `CosineDecayLR`类:基于余弦衰减函数计算学习率。
+- `PolynomialDecayLR`类:基于多项式衰减函数计算学习率。
+- `WarmUpLR`类:提高学习率。
+
+它们是属于`learning_rate_schedule`的不同实现方式。
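以指数衰减为例,其计算公式为 lr * decay_rate^(global_step / decay_steps)。可以用纯Python验证下文样例的结果(仅为公式示意,并非MindSpore的实现;`is_stair=False`时指数不取整):

```python
learning_rate = 0.1
decay_rate = 0.9
decay_steps = 4
global_step = 2

# 指数衰减:global_step / decay_steps 作为衰减指数
decayed_lr = learning_rate * decay_rate ** (global_step / decay_steps)
print(decayed_lr)  # 约 0.0948683
```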
例如ExponentialDecayLR类代码样例如下:
```
-class ExponentialDecayLR(learning_rate, decay_rate, decay_steps, is_stair=False)
+from mindspore.common import dtype as mstype
+from mindspore import Tensor
+from mindspore.nn.learning_rate_schedule import ExponentialDecayLR
-Parameters:
- learning_rate(float) - The initial value of learning rate.
- decay_rate(float) - The decay rate.
- decay_steps(int) - A value used to calculate decayed learning rate.
- is_stair(bool) - if true,learning rate decay once every decay_steps times. Default: False.
+def test_learning_rate_schedule():
+ learning_rate = 0.1 # learning_rate(float) - The initial value of learning rate.
+ decay_rate = 0.9 # decay_rate(float) - The decay rate.
+ decay_steps = 4 # decay_steps(int) - A value used to calculate decayed learning rate.
+ global_step = Tensor(2, mstype.int32)
+ exponential_decay_lr = ExponentialDecayLR(learning_rate, decay_rate, decay_steps)
+ res = exponential_decay_lr(global_step)
+ print(res)
-inputs:
- Tensor.The current step number.
-Returns:
- Tensor. The learning rate value for the current step.
+if __name__ == '__main__':
+ test_learning_rate_schedule()
```
+返回结果如下:
```
-from mindspore.common import dtype as mstype
-from mindspore import Tensor
-
-
-learning_rate = 0.1 # learning_rate(float) - The initial value of learning rate.
-decay_rate = 0.9 # decay_rate(float) - The decay rate.
-decay_steps = 4 # decay_steps(int) - A value used to calculate decayed learning rate.
-global_step = Tensor(2, mystype.int32)
-exponential_decay_lr = ExponentialDecayLR(learning_rate, decay_rate, decay_steps)
-exponential_decay_lr(global_step)
-
+0.094868325
```
-## optimzer
+## Optimizer
### 如何使用
-为了使用mindspore.nn.optim,我们需要构建一个optimizer对象。这个对象能够保持当前参数状态并基于计算得到的梯度进行参数更新。
+为了使用`mindspore.nn.optim`,我们需要构建一个`Optimizer`对象。这个对象能够保持当前参数状态并基于计算得到的梯度进行参数更新。
- 构建
-为了构建一个Optimizer,我们需要给它一个包含可需要优化的参数(必须是Variable对象)的iterable。然后,你可以设置optimizer的参数选项,比如学习率,权重衰减等等。
+为了构建一个`Optimizer`,我们需要给它一个包含需要优化的参数(必须是Variable对象)的iterable。然后,你可以设置Optimizer的参数选项,比如学习率,权重衰减等等。
代码样例如下:
@@ -117,7 +123,7 @@ optim = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)
优化器也支持为每个参数单独设置选项。若想这么做,不要直接传入变量Variable,而是传入一个字典的iterable。每一个字典都分别定义了一组参数,并且包含一个key键,这个key键对应相应的参数value值。其他的key键应该是优化器所接受的其他参数,并且会被用于对这组参数的优化。
我们仍然能够传递选项作为关键字参数,在未重写这些选项的组中,它们会被用作默认值。当你只想改动一个参数组的选项,但其他参数组的选项不变时,这是非常有用的。
-例如,当我们想制定每一层的学习率时,以SGD为例:
+例如,当我们想指定每一层的学习率时,以`SGD`为例:
```
from mindspore import nn
@@ -132,22 +138,37 @@ optim = nn.SGD([{'params': conv_params, 'weight_decay': 0.01},
### 内置优化器
-深度学习优化算法大概常用的有SGD、Adam、Ftrl、lazyadam、Momentum、RMSprop、Lars、Proximal_ada_grad和lamb这几种。
-在mindspore.nn.optim模块中,他们都有对应的类实现。例如:
+深度学习优化算法大概常用的有`SGD`、`Adam`、`Ftrl`、`lazyadam`、`Momentum`、`RMSprop`、`Lars`、`Proximal_ada_grad`和`lamb`这几种。
+在`mindspore.nn.optim`模块中,它们都有对应的类实现。例如:
-- SGD,默认参数为纯SGD,设置momentum参数不为0,考虑了一阶动量,设置nesterov为True后变成NAG,即Nesterov Accelerated Gradient,在计算梯度时计算的是向前走一步所在位置的梯度。
+- `SGD`,默认参数为纯SGD,设置`momentum`参数不为0,考虑了一阶动量,设置`nesterov`为True后变成`NAG`,即`Nesterov Accelerated Gradient`,在计算梯度时计算的是向前走一步所在位置的梯度。
-- RMSprop,考虑了二阶动量,对于不同的参数有不同的学习率,即自适应学习率,对Adagrad进行了优化,通过指数平滑只考虑一定窗口内的二阶动量。
+- `RMSprop`,考虑了二阶动量,对于不同的参数有不同的学习率,即自适应学习率,对`Adagrad`进行了优化,通过指数平滑只考虑一定窗口内的二阶动量。
-- Adam,同时考虑了一阶动量和二阶动量,可以看成RMSprop上进一步考虑了一阶动量。
+- `Adam`,同时考虑了一阶动量和二阶动量,可以看成`RMSprop`上进一步考虑了一阶动量。
-例如SGD的代码样例如下:
+例如`SGD`的代码样例如下:
```
from mindspore import nn
from mindspore.train import Model
from .optimizer import Optimizer
-
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+import mindspore.common.dtype as mstype
+from mindpore.ops import composite as C
+from mindspore.common.parameter import Parameter
+
+class Net(nn.Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.matmul = P.MatMul()
+ self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
+ def construct(self, x, y):
+ x = x * self.z
+ out = self.matmul(x, y)
+ return out
net = Net()
optim = nn.SGD(params=net.trainable_params())
diff --git a/api/source_zh_cn/programming_guide/parameter.md b/docs/programming_guide/source_zh_cn/parameter.md
similarity index 72%
rename from api/source_zh_cn/programming_guide/parameter.md
rename to docs/programming_guide/source_zh_cn/parameter.md
index 4e6bf70213746377f7f22d2107e006205143f2b4..5cec49ecdede7ac14e60d7c03977c5849d71de44 100644
--- a/api/source_zh_cn/programming_guide/parameter.md
+++ b/docs/programming_guide/source_zh_cn/parameter.md
@@ -6,36 +6,36 @@
- [概述](#概述)
- [初始化](#初始化)
- [属性](#属性)
- - [接口](#方法)
+ - [方法](#方法)
- [ParameterTuple](#parametertuple)
-
-
-
+
## 概述
-Parameter是变量张量,代表在训练网络时,需要被更新的参数,是MetaTensor的一个子类。
+`Parameter`是变量张量,代表在训练网络时,需要被更新的参数。本章主要介绍了`Parameter`的初始化以及属性和方法的使用,同时介绍了`ParameterTuple`。
## 初始化
```
-def __init__(self, default_input, name, requires_grad=True, layerwise_parallel=False)
+mindspore.Parameter(default_input, name, requires_grad=True, layerwise_parallel=False)
```
初始化一个`Parameter`对象,传入的数据支持`Tensor`、`Initializer`、`int`和`float`四种类型。
-`Initializer`是初始化器,保存了shape和dtype信息,可调用`to_tensor`方法生成存有数据的Tensor。
+`Initializer`是初始化器,保存了shape和dtype信息,提供`to_tensor`方法生成存有数据的`Tensor`,可调用`initializer`接口生成`Initializer`对象。
-当网络采用半自动或者全自动并行策略,并且使用`Initializer`初始化`Parameter`时,
-`Parameter`里保存的不是`Tensor`,而是`MetaTensor`。
+当网络采用半自动或者全自动并行策略,并且使用`Initializer`初始化`Parameter`时,`Parameter`里保存的不是`Tensor`,而是`MetaTensor`。
-`MetaTensor`与`Tensor`不同,`MetaTensor`仅保存张量的形状和类型,而不保存实际数据,
-所以不会占用任何内存,可调用`init_data`接口将`Parameter`里保存的`MetaTensor`转化为`Tensor`。
+`MetaTensor`与`Tensor`不同,`MetaTensor`仅保存张量的形状和类型,而不保存实际数据,所以不会占用任何内存,可调用`init_data`接口将`Parameter`里保存的`MetaTensor`转化为`Tensor`。
可为每个`Parameter`指定一个名称,便于后续操作和更新。
-当`layerwise_parallel`为`True`时,参数广播和参数梯度聚合时会过滤掉该参数。
+当参数需要被更新时,需要将`requires_grad`设置为`True`。
+
+当`layerwise_parallel`(混合并行)配置为True时,参数广播和参数梯度聚合时会过滤掉该参数。
+
+有关分布式并行的相关配置,可以参考文档:。
下例通过三种不同的数据类型构造了`Parameter`,三个`Parameter`都需要更新,都不采用layerwise并行。如下:
```
@@ -64,6 +64,7 @@ Parameter (name=z, value=2.0)
```
## 属性
+
- `inited_param`:返回保存了实际数据的`Parameter`,如果`Parameter`原本保存的是`MetaTensor`,会将其转换为`Tensor`。
- `name`:实例化`Parameter`时,为其指定的名字。
@@ -75,7 +76,8 @@ Parameter (name=z, value=2.0)
如果是,就不再对其进行切分,如果不是,需要根据网络并行策略确认是否对其进行切分。
-- `is_init`:`Parameter`的初始化状态。
+- `is_init`:`Parameter`的初始化状态。在GE后端,Parameter需要一个`init graph`来从主机同步数据到设备侧,该标志表示数据是否已同步到设备。
+ 此标志仅在GE后端起作用,其他后端将被设置为False。
- `layerwise_parallel`:`Parameter`是否支持layerwise并行。如果支持,参数就不会进行广播和梯度聚合,反之则需要。
@@ -122,9 +124,9 @@ data: Parameter (name=x, value=[[0 1 2]
当初始化`Parameter`传入的数据是`Initializer`时,可调用该接口将`Parameter`保存的数据转换为`Tensor`。
- `set_data`:设置`Parameter`保存的数据,支持传入`Tensor`、`Initializer`、`int`和`float`进行设置,
- 将slice_shape设置为True时,可改变`Parameter`的shape,反之,设置的数据shape必须与`Parameter`原来的shape保持一致。
+ 将方法的入参`slice_shape`设置为True时,可改变`Parameter`的shape,反之,设置的数据shape必须与`Parameter`原来的shape保持一致。
-- `set_param_ps`:控制训练参数是否通过[Parameter Server](https://gitee.com/mindspore/docs/blob/master/tutorials/source_zh_cn/advanced_use/parameter_server_training.md)进行训练。
+- `set_param_ps`:控制训练参数是否通过[Parameter Server](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/apply_parameter_server_training.html)进行训练。
- `clone`:克隆`Parameter`,需要指定克隆之后的参数名称。
@@ -137,13 +139,13 @@ from mindspore import Tensor, Parameter
from mindspore import dtype as mstype
from mindspore.common.initializer import initializer
-x = Parameter(data=initializer('ones', [1, 2, 3], mstype.float32), name='x')
+x = Parameter(default_input=initializer('ones', [1, 2, 3], mstype.float32), name='x')
print(x)
print(x.clone(prefix="x_c"))
print(x.init_data())
print(x.set_param_ps())
-print(x.set_parameter_data(data=Tensor(np.arange(2*3).reshape((1, 2, 3)))))
+print(x.set_data(default_input=Tensor(np.arange(2*3).reshape((1, 2, 3)))))
```
输出如下:
@@ -171,9 +173,9 @@ from mindspore import Tensor, Parameter, ParameterTuple
from mindspore.common import dtype as mstype
from mindspore.common.initializer import initializer
-x = Parameter(data=Tensor(np.arange(2*3).reshape((2, 3))), name="x")
-y = Parameter(data=initializer('ones', [1, 2, 3], mstype.float32), name='y')
-z = Parameter(data=2.0, name='z')
+x = Parameter(default_input=Tensor(np.arange(2*3).reshape((2, 3))), name="x")
+y = Parameter(default_input=initializer('ones', [1, 2, 3], mstype.float32), name='y')
+z = Parameter(default_input=2.0, name='z')
params = ParameterTuple((x, y, z))
params_copy = params.clone("params_copy")
print(params, "\n")
diff --git a/docs/programming_guide/source_zh_cn/performance_optimization.md b/docs/programming_guide/source_zh_cn/performance_optimization.md
new file mode 100644
index 0000000000000000000000000000000000000000..96a94151734ecd1eeda53c5e93e7d60508df2992
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/performance_optimization.md
@@ -0,0 +1,19 @@
+# 性能优化
+
+
+
+- [性能优化](#性能优化)
+
+
+
+
+
+MindSpore提供了多种性能优化方法,用户可根据实际情况,利用它们来提升训练和推理的性能。
+
+| 优化阶段 | 优化方法 | 支持情况 |
+| --- | --- | --- |
+| 训练 | [分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/distributed_training_tutorials.html) | Ascend、GPU |
+| | [混合精度](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/enable_mixed_precision.html) | Ascend、GPU |
+| | [图算融合](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/enable_graph_kernel_fusion.html) | Ascend |
+| | [梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/apply_gradient_accumulation.html) | Ascend、GPU |
+| 推理 | [训练后量化](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/post_training_quantization.html) | Lite |
\ No newline at end of file
diff --git a/api/source_zh_cn/programming_guide/pipeline.md b/docs/programming_guide/source_zh_cn/pipeline.md
similarity index 51%
rename from api/source_zh_cn/programming_guide/pipeline.md
rename to docs/programming_guide/source_zh_cn/pipeline.md
index bd8af3408a74be9594ed762832502cdcbf10cec9..14ea43ff0ba6bc53693c6a68e1214b3633c35f61 100644
--- a/api/source_zh_cn/programming_guide/pipeline.md
+++ b/docs/programming_guide/source_zh_cn/pipeline.md
@@ -11,11 +11,10 @@
- [repeat](#repeat)
- [zip](#zip)
- [concat](#concat)
- - [project](#project)
-
+
## 概述
@@ -23,7 +22,7 @@
MindSpore的各个数据集类都为用户提供了多种数据处理算子,用户可以构建数据处理pipeline定义需要使用的数据处理操作,数据即可在训练过程中像水一样源源不断地经过数据处理pipeline流向训练系统。
-MindSpore目前支持的常用数据处理算子如下表所示,更多数据处理操作参见[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)。
+MindSpore目前支持的常用数据处理算子如下表所示,更多数据处理操作参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.dataset.html)。
| 数据处理算子 | 算子说明 |
| ---- | ---- |
@@ -45,54 +44,49 @@ MindSpore目前支持的常用数据处理算子如下表所示,更多数据

-```python
-# 将数据集进行混洗操作
+下面的样例先构建了一个随机数据集,然后对其进行混洗操作,最后展示了混洗后的数据结果。
+```python
import numpy as np
import mindspore.dataset as ds
-# 设置全局随机种子,确保shuffle的行为可预测
ds.config.set_seed(0)
-# 构建一个generator
def generator_func():
for i in range(5):
yield (np.array([i, i+1, i+2]),)
-# 从generator中构建数据管道
dataset1 = ds.GeneratorDataset(generator_func, ["data"])
-# 为数据集创建一个混洗操作
-# buffer_size代表创建一个存放size个样本的容器,再从此容器中随机采样样本进行输出
-# 当buffer_size设置为dataset的长度时,是全局混洗
dataset1 = dataset1.shuffle(buffer_size=2)
for data in dataset1.create_dict_iterator():
print(data)
```
+输出结果如下:
+
```
-{'data': array([0, 1, 2], dtype=int64)}
-{'data': array([2, 3, 4], dtype=int64)}
-{'data': array([3, 4, 5], dtype=int64)}
-{'data': array([1, 2, 3], dtype=int64)}
-{'data': array([4, 5, 6], dtype=int64)}
+{'data': Tensor(shape=[3], dtype=Int64, value=[0, 1, 2])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[2, 3, 4])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[3, 4, 5])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[1, 2, 3])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[4, 5, 6])}
```
### map
将指定的函数或算子作用于数据集的指定列数据,实现数据映射操作。用户可以自定义映射函数,也可以直接使用c_transforms或py_transforms中的算子针对图像、文本数据进行数据增强。
->更多数据增强的使用说明,参见编程指南中[数据增强](https://www.mindspore.cn/api/zh-CN/master/programming_guide/augmentation.html)章节。
+>更多数据增强的使用说明,参见编程指南中[数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.0/augmentation.html)章节。

-```python
-# 将数据集进行映射操作
+下面的样例先构建了一个随机数据集,然后定义了数据翻倍的映射函数并将其作用于数据集,最后对比展示了映射前后的数据结果。
+```python
import numpy as np
import mindspore.dataset as ds
-# 构建一个generator
def generator_func():
for i in range(5):
yield (np.array([i, i+1, i+2]),)
@@ -100,36 +94,33 @@ def generator_func():
def pyfunc(x):
return x*2
-# 从generator中构建数据管道
dataset = ds.GeneratorDataset(generator_func, ["data"])
-# 创建数据管道,输出原始数据
for data in dataset.create_dict_iterator():
print(data)
-print("")
+print("------ after processing ------")
-# 为数据集创建一个映射操作
-# input_columns指定要处理的列,operation指定映射函数
-dataset = dataset.map(input_columns=["data"], operations=pyfunc)
+dataset = dataset.map(operations=pyfunc, input_columns=["data"])
-# 创建数据管道,输出映射后的数据
for data in dataset.create_dict_iterator():
print(data)
```
+输出结果如下:
+
```
-{'data': array([0, 1, 2], dtype=int64)}
-{'data': array([1, 2, 3], dtype=int64)}
-{'data': array([2, 3, 4], dtype=int64)}
-{'data': array([3, 4, 5], dtype=int64)}
-{'data': array([4, 5, 6], dtype=int64)}
-
-{'data': array([0, 2, 4], dtype=int64)}
-{'data': array([2, 4, 6], dtype=int64)}
-{'data': array([4, 6, 8], dtype=int64)}
-{'data': array([ 6, 8, 10], dtype=int64)}
-{'data': array([ 8, 10, 12], dtype=int64)}
+{'data': Tensor(shape=[3], dtype=Int64, value=[0, 1, 2])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[1, 2, 3])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[2, 3, 4])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[3, 4, 5])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[4, 5, 6])}
+------ after processing ------
+{'data': Tensor(shape=[3], dtype=Int64, value=[0, 2, 4])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[2, 4, 6])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[4, 6, 8])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[ 6, 8, 10])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[ 8, 10, 12])}
```
### batch
@@ -138,45 +129,40 @@ for data in dataset.create_dict_iterator():

-```python
-# 将数据集进行分批操作
+下面的样例先构建了一个随机数据集,然后分别展示了保留多余数据与否的数据集分批结果,其中批大小为2。
+```python
import numpy as np
import mindspore.dataset as ds
-# 构建一个generator
def generator_func():
for i in range(5):
yield (np.array([i, i+1, i+2]),)
-# 从generator中构建数据管道
dataset1 = ds.GeneratorDataset(generator_func, ["data"])
-# 为数据集划分批次,batch_size代表每2个样本为一个批次
-# drop_remainder代表是否丢弃最后不能完整构成批次的样本
-# 在此例子中,5%2=1,但因为drop_remainder=False,因此保留最后一个单独的样本
dataset1 = dataset1.batch(batch_size=2, drop_remainder=False)
for data in dataset1.create_dict_iterator():
print(data)
-print("")
+print("------ drop remainder ------")
-# 从generator中构建数据管道
dataset2 = ds.GeneratorDataset(generator_func, ["data"])
-# 丢弃最后不能完整构成批次的样本
dataset2 = dataset2.batch(batch_size=2, drop_remainder=True)
for data in dataset2.create_dict_iterator():
print(data)
```
-```
-{'data': array([[0, 1, 2], [1, 2, 3]], dtype=int64)}
-{'data': array([[2, 3, 4], [3, 4, 5]], dtype=int64)}
-{'data': array([[4, 5, 6]], dtype=int64)}
+输出结果如下:
-{'data': array([[0, 1, 2], [1, 2, 3]], dtype=int64)}
-{'data': array([[2, 3, 4], [3, 4, 5]], dtype=int64)}
+```
+{'data': Tensor(shape=[2, 3], dtype=Int64, value=[[0, 1, 2], [1, 2, 3]])}
+{'data': Tensor(shape=[2, 3], dtype=Int64, value=[[2, 3, 4], [3, 4, 5]])}
+{'data': Tensor(shape=[1, 3], dtype=Int64, value=[[4, 5, 6]])}
+------ drop remainder ------
+{'data': Tensor(shape=[2, 3], dtype=Int64, value=[[0, 1, 2], [1, 2, 3]])}
+{'data': Tensor(shape=[2, 3], dtype=Int64, value=[[2, 3, 4], [3, 4, 5]])}
```
### repeat
@@ -187,38 +173,36 @@ for data in dataset2.create_dict_iterator():

-```python
-# 将数据集进行加倍操作
+下面的样例先构建了一个随机数据集,然后将其重复2次,最后展示了重复后的数据结果。
+```python
import numpy as np
import mindspore.dataset as ds
-# 构建一个generator
def generator_func():
for i in range(5):
yield (np.array([i, i+1, i+2]),)
-# 从generator中构建数据管道
dataset1 = ds.GeneratorDataset(generator_func, ["data"])
-# 为数据集创建一个加倍操作
-# count参数代表将数据集内容扩充为原来的count倍
dataset1 = dataset1.repeat(count=2)
for data in dataset1.create_dict_iterator():
print(data)
```
+输出结果如下:
+
```
-{'data': array([0, 1, 2], dtype=int64)}
-{'data': array([1, 2, 3], dtype=int64)}
-{'data': array([2, 3, 4], dtype=int64)}
-{'data': array([3, 4, 5], dtype=int64)}
-{'data': array([4, 5, 6], dtype=int64)}
-{'data': array([0, 1, 2], dtype=int64)}
-{'data': array([1, 2, 3], dtype=int64)}
-{'data': array([2, 3, 4], dtype=int64)}
-{'data': array([3, 4, 5], dtype=int64)}
-{'data': array([4, 5, 6], dtype=int64)}
+{'data': Tensor(shape=[3], dtype=Int64, value=[0, 1, 2])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[1, 2, 3])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[2, 3, 4])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[3, 4, 5])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[4, 5, 6])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[0, 1, 2])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[1, 2, 3])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[2, 3, 4])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[3, 4, 5])}
+{'data': Tensor(shape=[3], dtype=Int64, value=[4, 5, 6])}
```
### zip
@@ -230,39 +214,36 @@ for data in dataset1.create_dict_iterator():

-```python
-# 将数据集进行合并操作
+下面的样例先构建了两个不同样本数的随机数据集,然后将其进行列拼接,最后展示了拼接后的数据结果。
+```python
import numpy as np
import mindspore.dataset as ds
-# 构建一个generator
def generator_func():
for i in range(7):
yield (np.array([i, i+1, i+2]),)
-# 构建另一个generator
def generator_func2():
for i in range(4):
yield (np.array([1, 2]),)
-# 从generator中构建数据管道
dataset1 = ds.GeneratorDataset(generator_func, ["data1"])
dataset2 = ds.GeneratorDataset(generator_func2, ["data2"])
-# 为数据集创建一个合并操作
-# 新的dataset3会拥有2个列名,分别为data1,data2,同时因为data2的数据较少,会与data2的数据长度对齐
dataset3 = ds.zip((dataset1, dataset2))
for data in dataset3.create_dict_iterator():
print(data)
```
+输出结果如下:
+
```
-{'data1': array([0, 1, 2], dtype=int64), 'data2': array([1, 2], dtype=int64)}
-{'data1': array([1, 2, 3], dtype=int64), 'data2': array([1, 2], dtype=int64)}
-{'data1': array([2, 3, 4], dtype=int64), 'data2': array([1, 2], dtype=int64)}
-{'data1': array([3, 4, 5], dtype=int64), 'data2': array([1, 2], dtype=int64)}
+{'data1': Tensor(shape=[3], dtype=Int64, value= [0, 1, 2]), 'data2': Tensor(shape=[2], dtype=Int64, value= [1, 2])}
+{'data1': Tensor(shape=[3], dtype=Int64, value= [1, 2, 3]), 'data2': Tensor(shape=[2], dtype=Int64, value= [1, 2])}
+{'data1': Tensor(shape=[3], dtype=Int64, value= [2, 3, 4]), 'data2': Tensor(shape=[2], dtype=Int64, value= [1, 2])}
+{'data1': Tensor(shape=[3], dtype=Int64, value= [3, 4, 5]), 'data2': Tensor(shape=[2], dtype=Int64, value= [1, 2])}
```
### concat
@@ -273,84 +254,34 @@ for data in dataset3.create_dict_iterator():

-```python
-# 将数据集进行连接操作
+下面的样例先构建了两个数据集,然后对其进行行拼接,最后展示了拼接后的数据结果。值得一提的是,使用`+`运算符也能达到同样的效果,即`dataset3 = dataset1 + dataset2`。
+```python
import numpy as np
import mindspore.dataset as ds
-# 构建一个generator
def generator_func():
for i in range(2):
yield (np.array([0, 0, 0]),)
-# 构建另一个generator
def generator_func2():
for i in range(2):
yield (np.array([1, 2, 3]),)
-# 从generator中构建数据管道
dataset1 = ds.GeneratorDataset(generator_func, ["data1"])
dataset2 = ds.GeneratorDataset(generator_func2, ["data1"])
-# 为数据集创建一个连接操作,将dataset2合并到dataset1的data1列中
dataset3 = dataset1.concat(dataset2)
-# 值得一提的是,使用'+'运算符可以达到上面同样的效果
-# dataset3 = dataset1 + dataset2
-
for data in dataset3.create_dict_iterator():
print(data)
-
-```
-
-```
-{'data1': array([0, 0, 0], dtype=int64)}
-{'data1': array([0, 0, 0], dtype=int64)}
-{'data1': array([1, 2, 3], dtype=int64)}
-{'data1': array([1, 2, 3], dtype=int64)}
```
-### project
+输出结果如下:
-对数据集列进行映射,将指定列按顺序保留并向下传递到数据管道中,其余列将被丢弃。
-
->`project`还可以用于改变column排列的顺序!
-
-
-
-```python
-# 将数据集进行投影操作
-
-import numpy as np
-import mindspore.dataset as ds
-
-# 构建一个generator
-def generator_func():
- for i in range(2):
- yield (np.array([1, 2, 3]), np.array([7, 8, 9]), )
-
-# 从generator中构建数据管道
-dataset = ds.GeneratorDataset(generator_func, ["data1", "data2"])
-
-# 构建数据管道,获得原始数据
-for data in dataset.create_dict_iterator():
- print(data)
-
-print("")
-
-# 为数据集创建一个投影操作,只保留data1的数据
-dataset = dataset.project(columns=["data1"])
-
-# 构建数据管道,获得投影后的数据
-for data in dataset.create_dict_iterator():
- print(data)
```
-
-```
-{'data1': array([1, 2, 3], dtype=int64), 'data2': array([7, 8, 9], dtype=int64)}
-{'data1': array([1, 2, 3], dtype=int64), 'data2': array([7, 8, 9], dtype=int64)}
-
-{'data1': array([1, 2, 3], dtype=int64)}
-{'data1': array([1, 2, 3], dtype=int64)}
+{'data1': Tensor(shape=[3], dtype=Int64, value= [0, 0, 0])}
+{'data1': Tensor(shape=[3], dtype=Int64, value= [0, 0, 0])}
+{'data1': Tensor(shape=[3], dtype=Int64, value= [1, 2, 3])}
+{'data1': Tensor(shape=[3], dtype=Int64, value= [1, 2, 3])}
```
diff --git a/docs/programming_guide/source_zh_cn/probability.md b/docs/programming_guide/source_zh_cn/probability.md
new file mode 100644
index 0000000000000000000000000000000000000000..beef724de9fb9cc04adc4307965c39e79b4830ee
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/probability.md
@@ -0,0 +1,954 @@
+# 深度概率编程库
+
+
+
+- [深度概率编程库](#深度概率编程库)
+ - [概率分布](#概率分布)
+ - [概率分布类](#概率分布类)
+ - [Distribution基类](#distribution基类)
+ - [伯努利分布Bernoulli](#伯努利分布bernoulli)
+ - [指数分布Exponential](#指数分布exponential)
+ - [几何分布Geometric](#几何分布geometric)
+ - [正态分布Normal](#正态分布normal)
+ - [均匀分布Uniform](#均匀分布uniform)
+ - [概率分布类在PyNative模式下的应用](#概率分布类在pynative模式下的应用)
+ - [概率分布类在图模式下的应用](#概率分布类在图模式下的应用)
+ - [概率分布映射](#概率分布映射)
+ - [Bijector类接口设计](#bijector类接口设计)
+ - [Bijector基类](#bijector基类)
+ - [幂函数变换映射PowerTransform](#幂函数变换映射powertransform)
+ - [指数变换映射Exp](#指数变换映射exp)
+ - [标量仿射变换映射ScalarAffine](#标量仿射变换映射scalaraffine)
+ - [Softplus变换映射Softplus](#softplus变换映射softplus)
+ - [PyNative模式下调用Bijector实例](#pynative模式下调用bijector实例)
+ - [图模式下调用Bijector实例](#图模式下调用bijector实例)
+ - [TransformedDistribution类接口设计](#transformeddistribution类接口设计)
+ - [PyNative模式下调用TransformedDistribution实例](#pynative模式下调用transformeddistribution实例)
+ - [图模式下调用TransformedDistribution实例](#图模式下调用transformeddistribution实例)
+ - [深度概率网络](#深度概率网络)
+ - [VAE](#vae)
+ - [ConditionalVAE](#conditionalvae)
+ - [概率推断算法](#概率推断算法)
+ - [贝叶斯层](#贝叶斯层)
+ - [贝叶斯转换](#贝叶斯转换)
+ - [贝叶斯工具箱](#贝叶斯工具箱)
+
+
+
+
+
+MindSpore深度概率编程的目标是将深度学习和贝叶斯学习结合,包括概率分布、概率分布映射、深度概率网络、概率推断算法、贝叶斯层、贝叶斯转换和贝叶斯工具箱,面向不同的开发者。对于专业的贝叶斯学习用户,提供概率采样、推理算法和模型构建库;另一方面,为不熟悉贝叶斯深度学习的用户提供了高级的API,从而不用更改深度学习编程逻辑,即可利用贝叶斯模型。
+
+## 概率分布
+
+概率分布(`mindspore.nn.probability.distribution`)是概率编程的基础。**Distribution** 类提供多样的概率统计接口,例如概率密度函数 *pdf* 、累积密度函数 *cdf* 、散度计算 *kl_loss* 、抽样 *sample* 等。现有的概率分布实例包括高斯分布、伯努利分布、指数型分布、几何分布和均匀分布。
+
+### 概率分布类
+
+- `Distribution`:所有概率分布的基类。
+
+- `Bernoulli`:伯努利分布。参数为试验成功的概率。
+
+- `Exponential`: 指数型分布。参数为率参数。
+
+- `Geometric`:几何分布。参数为一次伯努利试验成功的概率。
+
+- `Normal`:正态(高斯)分布。参数为均值和标准差。
+
+- `Uniform`:均匀分布。参数为数轴上的最小值和最大值。
+
+#### Distribution基类
+
+`Distribution` 是所有概率分布的基类。
+
+接口介绍:`Distribution`类支持的函数包括 `prob`、`log_prob`、`cdf`、`log_cdf`、`survival_function`、`log_survival`、`mean`、`sd`、`var`、`entropy`、`kl_loss`、`cross_entropy` 和 `sample` 。分布不同,所需传入的参数也不同。这些接口只能在派生类中使用,其参数由派生类中对应函数的实现决定。
+
+- `prob` :概率密度函数(pdf)/ 概率质量函数(pmf)。
+- `log_prob` :对数似然函数。
+- `cdf` :累积分布函数(cdf)。
+- `log_cdf` :对数累积分布函数。
+- `survival_function` :生存函数。
+- `log_survival` :对数生存函数。
+- `mean` :均值。
+- `sd` :标准差。
+- `var` :方差。
+- `entropy` :熵。
+- `kl_loss` :Kullback-Leibler散度。
+- `cross_entropy` :两个概率分布的交叉熵。
+- `sample` :概率分布的随机抽样。
+
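以标准正态分布为例,上述接口对应的统计量都有闭式解。下面的纯Python片段(假设均值0、标准差1,仅作示意,不依赖MindSpore)计算其中几个量,结果与后文 `Normal` 示例的输出一致:

```python
import math

# 假设的正态分布参数:均值0、标准差1(仅作演示)
mean, sd = 0.0, 1.0

# prob(0):正态分布在0处的概率密度
pdf_at_0 = math.exp(-(0.0 - mean) ** 2 / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))
# cdf(0):累积分布函数
cdf_at_0 = 0.5 * (1 + math.erf((0.0 - mean) / (sd * math.sqrt(2))))
# entropy:正态分布的熵 0.5 * ln(2*pi*e*sd^2)
entropy = 0.5 * math.log(2 * math.pi * math.e * sd ** 2)

print(round(pdf_at_0, 7))  # 0.3989423
print(cdf_at_0)            # 0.5
print(round(entropy, 7))   # 1.4189385
```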
+#### 伯努利分布Bernoulli
+
+伯努利分布,继承自 `Distribution` 类。
+
+属性:
+- `Bernoulli.probs`:伯努利试验成功的概率。
+
+`Distribution` 基类调用 `Bernoulli` 中私有接口以实现基类中的公有接口。`Bernoulli` 支持的公有接口为:
+
+- `mean`,`mode`,`var`:可选择传入 试验成功的概率 *probs1* 。
+- `entropy`:可选择传入 试验成功的概率 *probs1* 。
+- `cross_entropy`,`kl_loss`:必须传入 *dist* 和 *probs1_b* 。*dist* 为另一分布的类型,目前只支持此处为 *‘Bernoulli’* 。 *probs1_b* 为分布 *b* 的试验成功概率。可选择传入分布 *a* 的参数 *probs1_a* 。
+- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入 *value* 。可选择传入试验成功的概率 *probs1* 。
+- `sample`:可选择传入样本形状 *shape* 和试验成功的概率 *probs1* 。
+
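作为参考,`Bernoulli` 的 `mean`、`var`、`entropy` 等接口对应的闭式公式可以用纯Python直接验证(假设 *probs1* = 0.5,仅作示意,不依赖MindSpore):

```python
import math

p = 0.5  # 假设的试验成功概率(仅作演示)

mean = p           # 均值
var = p * (1 - p)  # 方差
# 熵:-p*ln(p) - (1-p)*ln(1-p)
entropy = -p * math.log(p) - (1 - p) * math.log(1 - p)

print(mean)               # 0.5
print(var)                # 0.25
print(round(entropy, 7))  # 0.6931472
```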
+#### 指数分布Exponential
+
+指数分布,继承自`Distribution`类。
+
+属性:
+- `Exponential.rate`:率参数。
+
+`Distribution` 基类调用 `Exponential` 私有接口以实现基类中的公有接口。`Exponential` 支持的公有接口为:
+
+- `mean`,`mode`,`var`:可选择传入率参数 *rate* 。
+- `entropy`:可选择传入率参数 *rate* 。
+- `cross_entropy`,`kl_loss`:必须传入 *dist* 和 *rate_b* 。 *dist* 为另一分布的类型的名称, 目前只支持此处为 *‘Exponential’* 。*rate_b* 为分布 *b* 的率参数。可选择传入分布 *a* 的参数 *rate_a* 。
+- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入 *value* 。可选择传入率参数 *rate* 。
+- `sample`:可选择传入样本形状 *shape* 和率参数 *rate* 。
+
+#### 几何分布Geometric
+
+几何分布,继承自`Distribution`类。
+
+属性:
+- `Geometric.probs`:伯努利试验成功的概率。
+
+`Distribution` 基类调用 `Geometric` 中私有接口以实现基类中的公有接口。`Geometric` 支持的公有接口为:
+
+- `mean`,`mode`,`var`:可选择传入 试验成功的概率 *probs1* 。
+- `entropy`:可选择传入 试验成功的概率 *probs1* 。
+- `cross_entropy`,`kl_loss`:必须传入 *dist* 和 *probs1_b* 。*dist* 为另一分布的类型的名称,目前只支持此处为 *‘Geometric’* 。 *probs1_b* 为分布 *b* 的试验成功概率。可选择传入分布 *a* 的参数 *probs1_a* 。
+- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入 *value* 。可选择传入试验成功的概率 *probs1* 。
+- `sample`:可选择传入样本形状 *shape* 和试验成功的概率 *probs1* 。
+
+#### 正态分布Normal
+
+正态(高斯)分布,继承自 **Distribution** 类。
+
+**Distribution** 基类调用 **Normal** 中私有接口以实现基类中的公有接口。**Normal** 支持的公有接口为:
+- `mean`,`mode`,`var`:可选择传入分布的参数均值 *mean* 和标准差 *sd* 。
+- `entropy`:可选择传入分布的参数均值 *mean* 和标准差 *sd* 。
+- `cross_entropy`,`kl_loss`:必须传入 *dist* ,*mean_b* 和 *sd_b* 。*dist* 为另一分布的类型的名称,目前只支持此处为 *‘Normal’* 。*mean_b* 和 *sd_b* 为分布 *b* 的均值和标准差。可选择传入分布的参数 *a* 均值 *mean_a* 和标准差 *sd_a* 。
+- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入 *value* 。可选择分布的参数包括均值 *mean_a* 和标准差 *sd_a* 。
+- `sample`:可选择传入样本形状 *shape* 和分布的参数包括均值 *mean_a* 和标准差 *sd_a* 。
+
+#### 均匀分布Uniform
+
+均匀分布,继承自`Distribution`类。
+
+属性:
+- `Uniform.low`:最小值。
+- `Uniform.high`:最大值。
+
+`Distribution` 基类调用 `Uniform` 以实现基类中的公有接口。`Uniform` 支持的公有接口为:
+
+- `mean`,`mode`,`var`:可选择传入分布的参数最大值 *high* 和最小值 *low* 。
+- `entropy`:可选择传入分布的参数最大值 *high* 和最小值 *low* 。
+- `cross_entropy`,`kl_loss`:必须传入 *dist* ,*high_b* 和 *low_b* 。*dist* 为另一分布的类型的名称,目前只支持此处为 *‘Uniform’* 。 *high_b* 和 *low_b* 为分布 *b* 的参数。可选择传入分布 *a* 的参数即最大值 *high_a* 和最小值 *low_a* 。
+- `prob`,`log_prob`,`cdf`,`log_cdf`,`survival_function`,`log_survival`:必须传入 *value* 。可选择传入分布的参数最大值 *high* 和最小值 *low* 。
+- `sample`:可选择传入 *shape* 和分布的参数即最大值 *high* 和最小值 *low* 。
+
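作为参考,`Uniform` 的统计接口同样有闭式解,可以用纯Python直接验证(假设 *low* = 0、*high* = 2,仅作示意,不依赖MindSpore):

```python
import math

low, high = 0.0, 2.0  # 假设的分布参数(仅作演示)

mean = (low + high) / 2         # 均值
var = (high - low) ** 2 / 12    # 方差
entropy = math.log(high - low)  # 熵:ln(high - low)

print(mean)               # 1.0
print(round(var, 7))      # 0.3333333
print(round(entropy, 7))  # 0.6931472
```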
+### 概率分布类在PyNative模式下的应用
+
+`Distribution` 子类可在 **PyNative** 模式下使用。
+
+导入相关模块:
+
+```python
+from mindspore import Tensor
+from mindspore import dtype as mstype
+import mindspore.context as context
+import mindspore.nn.probability.distribution as msd
+context.set_context(mode=context.PYNATIVE_MODE)
+```
+以 **Normal** 为例, 创建一个均值为0.0、标准差为1.0的正态分布:
+```python
+my_normal = msd.Normal(0.0, 1.0, dtype=mstype.float32)
+```
+计算均值:
+```python
+mean = my_normal.mean()
+print(mean)
+```
+输出为:
+```
+0.0
+```
+计算方差:
+```python
+var = my_normal.var()
+print(var)
+```
+输出为:
+```
+1.0
+```
+计算熵:
+```python
+entropy = my_normal.entropy()
+print(entropy)
+```
+输出为:
+```
+1.4189385
+```
+计算 **pdf**:
+```python
+value = Tensor([-0.5, 0.0, 0.5], dtype=mstype.float32)
+prob = my_normal.prob(value)
+print(prob)
+```
+输出为:
+```
+[0.35206532, 0.3989423, 0.35206532]
+```
+计算 **cdf**:
+```python
+cdf = my_normal.cdf(value)
+print(cdf)
+```
+输出为:
+```
+[0.30852754, 0.5, 0.69146246]
+```
+计算 **kl_loss**:
+```python
+mean_b = Tensor(1.0, dtype=mstype.float32)
+sd_b = Tensor(2.0, dtype=mstype.float32)
+kl = my_normal.kl_loss('Normal', mean_b, sd_b)
+print(kl)
+```
+输出为:
+```
+0.44314718
+```
+
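上面 `kl_loss` 的输出可以用两个一维正态分布间KL散度的闭式公式来验证(这是标准结果,并非MindSpore特有):

```latex
KL\left(\mathcal{N}(\mu_a,\sigma_a^2)\,\|\,\mathcal{N}(\mu_b,\sigma_b^2)\right)
= \ln\frac{\sigma_b}{\sigma_a}
+ \frac{\sigma_a^2 + (\mu_a-\mu_b)^2}{2\sigma_b^2}
- \frac{1}{2}
```

代入 $\mu_a=0$、$\sigma_a=1$、$\mu_b=1$、$\sigma_b=2$,得 $\ln 2 + \frac{1+1}{8} - \frac{1}{2} \approx 0.4431$,与上面的输出一致。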
+### 概率分布类在图模式下的应用
+
+在图模式下,**Distribution** 子类可用在网络中。
+
+导入相关模块:
+```python
+import mindspore.nn as nn
+from mindspore import Tensor
+from mindspore import dtype as mstype
+import mindspore.context as context
+import mindspore.nn.probability.distribution as msd
+context.set_context(mode=context.GRAPH_MODE)
+```
+创建网络:
+```python
+# 网络继承nn.Cell
+class Net(nn.Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.normal = msd.Normal(0.0, 1.0, dtype=mstype.float32)
+
+ def construct(self, value, mean, sd):
+ pdf = self.normal.prob(value)
+ kl = self.normal.kl_loss("Normal", mean, sd)
+ return pdf, kl
+```
+调用网络:
+```python
+net = Net()
+value = Tensor([-0.5, 0.0, 0.5], dtype=mstype.float32)
+mean = Tensor(1.0, dtype=mstype.float32)
+sd = Tensor(1.0, dtype=mstype.float32)
+pdf, kl = net(value, mean, sd)
+print("pdf: ", pdf)
+print("kl: ", kl)
+```
+输出为:
+```
+pdf: [0.3520653, 0.39894226, 0.3520653]
+kl: 0.5
+```
+
+## 概率分布映射
+
+Bijector(`mindspore.nn.probability.bijector`)是概率编程的基本组成部分。Bijector描述了一种随机变量的变换方法,可以通过一个已有的随机变量X和一个映射函数f生成一个新的随机变量$Y = f(X)$。
+`Bijector`提供了映射相关的四种变换方法。它可以当做算子直接使用,也可以作用在某个随机变量`Distribution`类实例上生成新的随机变量的`Distribution`类实例。
+
+### Bijector类接口设计
+
+#### Bijector基类
+
+`Bijector`类是所有Bijector的基类。其接口包括:
+
+1. 类特征函数
+ - `name`:无参函数,返回 `name` 的值。
+ - `is_dtype`:无参函数,返回 `dtype` 的值。
+ - `parameter`:无参函数,返回 `parameter` 的值。
+ - `is_constant_jacobian`:无参函数,返回 `is_constant_jacobian` 的值。
+ - `is_injective`:无参函数,返回 `is_injective` 的值。
+
+2. 映射函数
+ - `forward`:正向映射,创建派生类后由派生类的 `_forward` 决定参数。
+ - `inverse`:反向映射,创建派生类后由派生类的 `_inverse` 决定参数。
+ - `forward_log_jacobian`:正向映射的导数的对数,创建派生类后由派生类的 `_forward_log_jacobian` 决定参数。
+ - `inverse_log_jacobian`:反向映射的导数的对数,创建派生类后由派生类的 `_inverse_log_jacobian` 决定参数。
+
+* `Bijector` 作为函数调用:
+输入是一个 `Distribution` 类实例时,将生成一个 `TransformedDistribution` **(不可在图内调用)**。
+
+#### 幂函数变换映射PowerTransform
+`PowerTransform`做如下变量替换:$Y = g(X) = {(1 + X * c)}^{1 / c}$。其接口包括:
+
+1. 类特征函数
+ - `power`:无参函数,返回 `power` 的值。
+
+2. 映射函数
+ - `forward`:正向映射,输入为 `Tensor` 。
+ - `inverse`:反向映射,输入为 `Tensor` 。
+ - `forward_log_jacobian`:正向映射的导数的对数,输入为 `Tensor` 。
+ - `inverse_log_jacobian`:反向映射的导数的对数,输入为 `Tensor` 。
+
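`PowerTransform` 的各映射函数都有闭式表达,可以用纯Python直接验证(假设 power=2、输入 x=2,仅作示意,不依赖MindSpore;结果与后文PyNative模式示例中 x=2 处的输出一致):

```python
import math

c, x = 2.0, 2.0  # 假设 power=2,输入 x=2(仅作演示)

forward = (1 + x * c) ** (1 / c)                      # 正向映射 (1 + X*c)^(1/c)
inverse = (x ** c - 1) / c                            # 反向映射
forward_log_jaco = (1 / c - 1) * math.log(1 + x * c)  # 正向映射导数的对数

print(round(forward, 6))           # 2.236068
print(inverse)                     # 1.5
print(round(forward_log_jaco, 6))  # -0.804719
```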
+#### 指数变换映射Exp
+`Exp`做如下变量替换:$Y = g(X)= exp(X)$。其接口包括:
+
+映射函数
+- `forward`:正向映射,输入为 `Tensor` 。
+- `inverse`:反向映射,输入为 `Tensor` 。
+- `forward_log_jacobian`:正向映射的导数的对数,输入为 `Tensor` 。
+- `inverse_log_jacobian`:反向映射的导数的对数,输入为 `Tensor` 。
+
+#### 标量仿射变换映射ScalarAffine
+`ScalarAffine`做如下变量替换:$Y = g(X) = a * X + b$。其接口包括:
+
+1. 类特征函数
+ - `scale`:无参函数,返回scale的值。
+ - `shift`:无参函数,返回shift的值。
+
+2. 映射函数
+ - `forward`:正向映射,输入为 `Tensor` 。
+ - `inverse`:反向映射,输入为 `Tensor` 。
+ - `forward_log_jacobian`:正向映射的导数的对数,输入为 `Tensor` 。
+ - `inverse_log_jacobian`:反向映射的导数的对数,输入为 `Tensor` 。
+
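`ScalarAffine` 的映射同样可以手工验证:正向为 $a * X + b$,反向为 $(Y - b) / a$,正向映射导数的对数恒为 $\ln|a|$。下面的纯Python片段假设 scale=2、shift=1(仅作示意,不依赖MindSpore):

```python
import math

a, b, x = 2.0, 1.0, 3.0  # 假设 scale=2、shift=1,输入 x=3(仅作演示)

forward = a * x + b                  # 正向映射 Y = a*X + b
inverse = (x - b) / a                # 反向映射 X = (Y - b)/a
forward_log_jaco = math.log(abs(a))  # 正向映射导数的对数 ln|a|

print(forward)                     # 7.0
print(inverse)                     # 1.0
print(round(forward_log_jaco, 7))  # 0.6931472
```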
+#### Softplus变换映射Softplus
+`Softplus`做如下变量替换:$Y = g(X) = log(1 + e ^ {kX}) / k $。其接口包括:
+
+1. 类特征函数
+ - `sharpness`:无参函数,返回 `sharpness` 的值。
+
+2. 映射函数
+ - `forward`:正向映射,输入为 `Tensor` 。
+ - `inverse`:反向映射,输入为 `Tensor` 。
+ - `forward_log_jacobian`:正向映射的导数的对数,输入为 `Tensor` 。
+ - `inverse_log_jacobian`:反向映射的导数的对数,输入为 `Tensor` 。
+
+### PyNative模式下调用Bijector实例
+
+在执行之前,我们需要导入需要的库文件包。双射类最主要的库是 `mindspore.nn.probability.bijector`,导入后我们使用 `msb` 作为库的缩写并进行调用。
+
+导入相关模块:
+```python
+import numpy as np
+import mindspore.nn as nn
+import mindspore.nn.probability.bijector as msb
+import mindspore.context as context
+from mindspore import Tensor
+from mindspore import dtype
+context.set_context(mode=context.PYNATIVE_MODE)
+```
+
+下面我们以 `PowerTransform` 为例。创建一个指数为2的 `PowerTransform` 对象。
+
+构造`PowerTransform`:
+```python
+powertransform = msb.PowerTransform(power=2)
+powertransform
+```
+
+输出:
+```python
+PowerTransform
+```
+
+接下来可以使用映射函数进行运算。
+
+调用 `forward` 方法,计算正向映射:
+```python
+x = np.array([2.0, 3.0, 4.0, 5.0], dtype=np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+forward = powertransform.forward(tx)
+forward
+```
+
+输出:
+```python
+Tensor(shape=[4], dtype=Float32, [ 2.23606801e+00 2.64575124e+00 3.00000000e+00 3.31662488e+00])
+```
+
+调用 `inverse` 方法,计算反向映射:
+```python
+inverse = powertransform.inverse(tx)
+inverse
+```
+
+输出:
+```python
+Tensor(shape=[4], dtype=Float32, [ 1.50000000e+00 4.00000048e+00 7.50000000e+00 1.20000010e+01])
+```
+
+调用 `forward_log_jacobian` 方法,计算正向映射导数的对数:
+```python
+forward_log_jaco = powertransform.forward_log_jacobian(tx)
+forward_log_jaco
+```
+
+输出:
+```python
+Tensor(shape=[4], dtype=Float32, [-8.04718971e-01 -9.72955048e-01 -1.09861231e+00 -1.19894767e+00])
+```
+
+调用 `inverse_log_jacobian` 方法,计算反向映射导数的对数:
+```python
+inverse_log_jaco = powertransform.inverse_log_jacobian(tx)
+inverse_log_jaco
+```
+
+输出:
+```python
+Tensor(shape=[4], dtype=Float32, [ 6.93147182e-01 1.09861231e+00 1.38629436e+00 1.60943794e+00])
+```
+
+### 图模式下调用Bijector实例
+
+在图模式下,`Bijector`子类可用在网络中。
+
+导入相关模块:
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor
+from mindspore import dtype
+import mindspore.context as context
+import mindspore.nn.probability.bijector as msb
+context.set_context(mode=context.GRAPH_MODE)
+```
+
+创建网络:
+```python
+class Net(nn.Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ # 创建PowerTransform实例
+ self.powertransform = msb.PowerTransform(power=2)
+
+ def construct(self, value):
+        forward = self.powertransform.forward(value)
+        inverse = self.powertransform.inverse(value)
+        forward_log_jaco = self.powertransform.forward_log_jacobian(value)
+        inverse_log_jaco = self.powertransform.inverse_log_jacobian(value)
+ return forward, inverse, forward_log_jaco, inverse_log_jaco
+```
+调用网络:
+```python
+net = Net()
+x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+forward, inverse, forward_log_jaco, inverse_log_jaco = net(tx)
+print("forward: ", forward)
+print("inverse: ", inverse)
+print("forward_log_jaco: ", forward_log_jaco)
+print("inverse_log_jaco: ", inverse_log_jaco)
+```
+输出为:
+```python
+forward: [2.236068 2.6457512 3. 3.3166249]
+inverse: [ 1.5 4.0000005 7.5 12.000001 ]
+forward_log_jaco: [-0.804719 -0.97295505 -1.0986123 -1.1989477 ]
+inverse_log_jaco: [0.6931472 1.0986123 1.3862944 1.609438 ]
+```
+
+### TransformedDistribution类接口设计
+
+`TransformedDistribution` 继承自 `Distribution` ,是可通过映射f(x)变化得到的数学分布的基类。其接口包括:
+
+1. 类特征函数
+
+ - `bijector`:无参函数,返回分布的变换方法。
+ - `distribution`:无参函数,返回原始分布。
+ - `is_linear_transformation`:无参函数,返回线性变换标志。
+
+2. 接口函数(以下接口函数的参数与构造函数中 `distribution` 的对应接口的参数相同)。
+
+ - `cdf`:累积分布函数(cdf)。
+ - `log_cdf`:对数累积分布函数(cdf)。
+ - `survival_function`:生存函数。
+ - `log_survival`:对数生存函数。
+ - `prob`:概率密度函数(pdf)/ 概率质量函数(pmf)。
+ - `log_prob`:对数似然函数。
+ - `sample`:随机取样。
+ - `mean`:无参数。只有当 `Bijector.is_constant_jacobian=true` 时可调用。
+
+### PyNative模式下调用TransformedDistribution实例
+
+`TransformedDistribution` 子类可在 **PyNative** 模式下使用。
+在执行之前,我们需要导入需要的库文件包。
+
+导入相关模块:
+```python
+import numpy as np
+import mindspore.nn as nn
+import mindspore.nn.probability.bijector as msb
+import mindspore.nn.probability.distribution as msd
+import mindspore.context as context
+from mindspore import Tensor
+from mindspore import dtype
+context.set_context(mode=context.PYNATIVE_MODE)
+```
+
+构造一个 `TransformedDistribution` 实例,使用 `Normal` 分布作为需要变换的分布类,使用 `Exp` 作为映射变换,可以生成 `LogNormal` 分布。
+```python
+normal = msd.Normal(0.0, 1.0, dtype=dtype.float32)
+exp = msb.Exp()
+LogNormal = msd.TransformedDistribution(exp, normal, dtype=dtype.float32, seed=0, name="LogNormal")
+LogNormal
+```
+
+输出:
+```python
+TransformedDistribution<
+ (_bijector): Exp
+ (_distribution): Normal
+ >
+```
+
+可以对 `LogNormal` 进行概率分布计算。例如:
+
+计算 **cdf** :
+```python
+x = np.array([2.0, 5.0, 10.0], dtype=np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+cdf = LogNormal.cdf(tx)
+cdf
+```
+
+输出:
+```python
+Tensor(shape=[3], dtype=Float32, [ 7.55891383e-01 9.46239710e-01 9.89348888e-01])
+```
+
+计算 **log_cdf** :
+```python
+x = np.array([2.0, 5.0, 10.0], dtype=np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+log_cdf = LogNormal.log_cdf(tx)
+log_cdf
+```
+
+输出:
+```python
+Tensor(shape=[3], dtype=Float32, [-2.79857576e-01 -5.52593507e-02 -1.07082408e-02])
+```
+
+计算 **survival_function** :
+```python
+x = np.array([2.0, 5.0, 10.0], dtype=np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+survival_function = LogNormal.survival_function(tx)
+survival_function
+```
+
+输出:
+```python
+Tensor(shape=[3], dtype=Float32, [ 2.44108617e-01 5.37602901e-02 1.06511116e-02])
+```
+
+计算 **log_survival** :
+```python
+x = np.array([2.0, 5.0, 10.0], dtype=np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+log_survival = LogNormal.log_survival(tx)
+log_survival
+```
+
+输出:
+```python
+Tensor(shape=[3], dtype=Float32, [-1.41014194e+00 -2.92322016e+00 -4.54209089e+00])
+```
+
+计算 **prob** :
+```python
+x = np.array([2.0, 5.0, 10.0], dtype=np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+prob = LogNormal.prob(tx)
+prob
+```
+
+输出:
+```python
+Tensor(shape=[3], dtype=Float32, [ 1.56874031e-01 2.18507163e-02 2.81590177e-03])
+```
+
+计算 **log_prob** :
+```python
+x = np.array([2.0, 5.0, 10.0], dtype=np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+log_prob = LogNormal.log_prob(tx)
+log_prob
+```
+
+输出:
+```python
+Tensor(shape=[3], dtype=Float32, [-1.85231221e+00 -3.82352161e+00 -5.87247276e+00])
+```
+
+调用取样函数 **sample** :
+```python
+shape = (3, 2)
+sample = LogNormal.sample(shape)
+sample
+```
+
+输出:
+```python
+Tensor(shape=[3, 2], dtype=Float32,
+[[ 7.64315844e-01 3.01435232e-01]
+ [ 1.17166102e+00 2.60277224e+00]
+ [ 7.02699006e-01 3.91564220e-01]])
+```
+
+### 图模式下调用TransformedDistribution实例
+
+在图模式下,**TransformedDistribution**子类可用在网络中。
+
+导入相关模块:
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor
+from mindspore import dtype
+import mindspore.context as context
+import mindspore.nn.probability.bijector as msb
+import mindspore.nn.probability.distribution as msd
+context.set_context(mode=context.GRAPH_MODE)
+```
+
+创建网络:
+```python
+class Net(nn.Cell):
+ def __init__(self, shape, dtype=dtype.float32, seed=0, name='transformed_distribution'):
+ super(Net, self).__init__()
+ # 创建TransformedDistribution实例
+ self.exp = msb.Exp()
+ self.normal = msd.Normal(0.0, 1.0, dtype=dtype)
+ self.lognormal = msd.TransformedDistribution(self.exp, self.normal, dtype=dtype, seed=seed, name=name)
+ self.shape = shape
+
+ def construct(self, value):
+ cdf = self.lognormal.cdf(value)
+ sample = self.lognormal.sample(self.shape)
+ return cdf, sample
+```
+
+调用网络:
+```python
+shape = (2, 3)
+net = Net(shape=shape, name="LogNormal")
+x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32)
+tx = Tensor(x, dtype=dtype.float32)
+cdf, sample = net(tx)
+print("cdf: ", cdf)
+print("sample: ", sample)
+```
+输出为:
+```python
+cdf: [0.7558914 0.8640314 0.9171715 0.9462397]
+sample: [[0.21036398 0.44932044 0.5669641 ]
+ [1.4103683 6.724116 0.97894996]]
+```
+
+## 深度概率网络
+
+使用MindSpore深度概率编程库(`mindspore.nn.probability.dpn`)来构造变分自编码器(VAE)进行推理尤为简单。用户只需要自定义编码器和解码器(DNN模型),调用VAE或CVAE接口形成其派生网络,然后调用ELBO接口进行优化,最后使用SVI接口进行变分推理。这样做的好处是,不熟悉变分推理的用户可以像构建DNN模型一样来构建概率模型,而熟悉变分推理的用户可以调用这些接口来构建更为复杂的概率模型。VAE的接口在`mindspore.nn.probability.dpn`下面,dpn代表Deep Probabilistic Network,这里提供了一些基本的深度概率网络的接口,例如VAE。
+
+### VAE
+
+首先,我们需要先自定义encoder和decoder,调用`mindspore.nn.probability.dpn.VAE`接口来构建VAE网络,我们除了传入encoder和decoder之外,还需要传入encoder输出变量的维度hidden size,以及VAE网络存储潜在变量的维度latent size,一般latent size会小于hidden size。
+
+```python
+import mindspore.nn as nn
+from mindspore.ops import operations as P
+from mindspore.nn.probability.dpn import VAE
+
+IMAGE_SHAPE = (-1, 1, 32, 32)
+
+
+class Encoder(nn.Cell):
+ def __init__(self):
+ super(Encoder, self).__init__()
+ self.fc1 = nn.Dense(1024, 800)
+ self.fc2 = nn.Dense(800, 400)
+ self.relu = nn.ReLU()
+ self.flatten = nn.Flatten()
+
+ def construct(self, x):
+ x = self.flatten(x)
+ x = self.fc1(x)
+ x = self.relu(x)
+ x = self.fc2(x)
+ x = self.relu(x)
+ return x
+
+
+class Decoder(nn.Cell):
+ def __init__(self):
+ super(Decoder, self).__init__()
+ self.fc1 = nn.Dense(400, 1024)
+ self.sigmoid = nn.Sigmoid()
+ self.reshape = P.Reshape()
+
+ def construct(self, z):
+ z = self.fc1(z)
+ z = self.reshape(z, IMAGE_SHAPE)
+ z = self.sigmoid(z)
+ return z
+
+
+encoder = Encoder()
+decoder = Decoder()
+vae = VAE(encoder, decoder, hidden_size=400, latent_size=20)
+```
+### ConditionalVAE
+
+类似地,ConditionalVAE与VAE的使用方法比较相近,不同的是,ConditionalVAE利用了数据集的标签信息,属于有监督学习算法,其生成效果一般会比VAE好。
+
+首先,先自定义encoder和decoder,并调用`mindspore.nn.probability.dpn.ConditionalVAE`接口来构建ConditionalVAE网络。这里的encoder和VAE中的不同,因为它还需要传入数据集的标签信息;decoder则和上述的一样。调用ConditionalVAE接口时,除VAE的参数外,还需要传入数据集的标签类别个数。
+
+```python
+import mindspore.nn as nn
+from mindspore.ops import operations as P
+from mindspore.nn.probability.dpn import ConditionalVAE
+
+IMAGE_SHAPE = (-1, 1, 32, 32)
+
+
+class Encoder(nn.Cell):
+ def __init__(self, num_classes):
+ super(Encoder, self).__init__()
+ self.fc1 = nn.Dense(1024 + num_classes, 400)
+ self.relu = nn.ReLU()
+ self.flatten = nn.Flatten()
+ self.concat = P.Concat(axis=1)
+ self.one_hot = nn.OneHot(depth=num_classes)
+
+ def construct(self, x, y):
+ x = self.flatten(x)
+ y = self.one_hot(y)
+ input_x = self.concat((x, y))
+ input_x = self.fc1(input_x)
+ input_x = self.relu(input_x)
+ return input_x
+
+
+class Decoder(nn.Cell):
+ def __init__(self):
+ super(Decoder, self).__init__()
+ self.fc1 = nn.Dense(400, 1024)
+ self.sigmoid = nn.Sigmoid()
+ self.reshape = P.Reshape()
+
+ def construct(self, z):
+ z = self.fc1(z)
+ z = self.reshape(z, IMAGE_SHAPE)
+ z = self.sigmoid(z)
+ return z
+
+
+encoder = Encoder(num_classes=10)
+decoder = Decoder()
+cvae = ConditionalVAE(encoder, decoder, hidden_size=400, latent_size=20, num_classes=10)
+```
+
+加载数据集:我们可以使用MNIST数据集,具体的数据加载和预处理过程可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/quick_start/quick_start.html),其中的create_dataset函数用于创建数据迭代器。
+
+```python
+ds_train = create_dataset(image_path, 128, 1)
+```
+接下来,需要用到infer接口进行VAE网络的变分推断。
+
+## 概率推断算法
+
+调用ELBO接口(`mindspore.nn.probability.infer.ELBO`)来定义VAE网络的损失函数,调用`WithLossCell`封装VAE网络和损失函数,并定义优化器,之后传入SVI接口(`mindspore.nn.probability.infer.SVI`)。SVI的`run`函数可理解为VAE网络的训练,可以指定训练的`epochs`,返回结果为训练好的网络;`get_train_loss`函数可以返回训练好后模型的loss。
+
+```python
+from mindspore.nn.probability.infer import ELBO, SVI
+
+net_loss = ELBO(latent_prior='Normal', output_prior='Normal')
+net_with_loss = nn.WithLossCell(vae, net_loss)
+optimizer = nn.Adam(params=vae.trainable_params(), learning_rate=0.001)
+
+vi = SVI(net_with_loss=net_with_loss, optimizer=optimizer)
+vae = vi.run(train_dataset=ds_train, epochs=10)
+trained_loss = vi.get_train_loss()
+```
+最后,得到训练好的VAE网络后,我们可以使用`vae.generate_sample`生成新样本,需要传入待生成样本的个数,及生成样本的shape,shape需要保持和原数据集中的样本shape一样;当然,我们也可以使用`vae.reconstruct_sample`重构原来数据集中的样本,来测试VAE网络的重建能力。
+```python
+generated_sample = vae.generate_sample(64, IMAGE_SHAPE)
+for sample in ds_train.create_dict_iterator():
+ sample_x = Tensor(sample['image'], dtype=mstype.float32)
+ reconstructed_sample = vae.reconstruct_sample(sample_x)
+print('The shape of the generated sample is ', generated_sample.shape)
+```
+我们可以看一下新生成样本的shape:
+```
+The shape of the generated sample is (64, 1, 32, 32)
+```
+ConditionalVAE训练过程和VAE的过程类似,但需要注意的是使用训练好的ConditionalVAE网络生成新样本和重建新样本时,需要输入标签信息,例如下面生成的新样本就是64个0-7的数字。
+
+```python
+sample_label = Tensor([i for i in range(0, 8)] * 8, dtype=mstype.int32)
+generated_sample = cvae.generate_sample(sample_label, 64, IMAGE_SHAPE)
+for sample in ds_train.create_dict_iterator():
+ sample_x = Tensor(sample['image'], dtype=mstype.float32)
+ sample_y = Tensor(sample['label'], dtype=mstype.int32)
+ reconstructed_sample = cvae.reconstruct_sample(sample_x, sample_y)
+print('The shape of the generated sample is ', generated_sample.shape)
+```
+查看一下新生成的样本的shape:
+```
+The shape of the generated sample is (64, 1, 32, 32)
+```
+
+如果希望生成的样本更清晰、质量更好,用户可以自行定义更复杂的encoder和decoder,这里的示例只用了两层全连接层,仅作示例指导。
+
+## 贝叶斯层
+
+下面的范例使用MindSpore的`nn.probability.bnn_layers`中的API实现BNN图片分类模型。MindSpore的`nn.probability.bnn_layers`中的API包括`NormalPrior`,`NormalPosterior`,`ConvReparam`,`DenseReparam`和`WithBNNLossCell`。BNN与DNN的最大区别在于,BNN层的weight和bias不再是确定的值,而是服从一个分布。其中,`NormalPrior`,`NormalPosterior`分别用来生成服从正态分布的先验分布和后验分布;`ConvReparam`和`DenseReparam`分别是使用reparameteration方法实现的贝叶斯卷积层和全连接层;`WithBNNLossCell`是用来封装BNN和损失函数的。
+
+如何使用`nn.probability.bnn_layers`中的API构建贝叶斯神经网络并实现图片分类,可以参考教程[使用贝叶斯网络](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/apply_deep_probability_program.html#id3)。
+
+## 贝叶斯转换
+
+对于不熟悉贝叶斯模型的研究人员,MDP提供了贝叶斯转换接口(`mindspore.nn.probability.transform`),支持DNN (Deep Neural Network)模型一键转换成BNN (Bayesian Neural Network)模型。
+
+其中的模型转换API`TransformToBNN`的`__init__`函数定义如下:
+
+```python
+class TransformToBNN:
+ def __init__(self, trainable_dnn, dnn_factor=1, bnn_factor=1):
+ net_with_loss = trainable_dnn.network
+ self.optimizer = trainable_dnn.optimizer
+ self.backbone = net_with_loss.backbone_network
+ self.loss_fn = getattr(net_with_loss, "_loss_fn")
+ self.dnn_factor = dnn_factor
+ self.bnn_factor = bnn_factor
+ self.bnn_loss_file = None
+```
+参数`trainable_dnn`是经过`TrainOneStepCell`包装的可训练DNN模型,`dnn_factor`和`bnn_factor`分别为由损失函数计算得到的网络整体损失的系数和每个贝叶斯层的KL散度的系数。
+API`TransformToBNN`主要实现了两个功能:
+- 功能一:转换整个模型
+
+ `transform_to_bnn_model`方法可以将整个DNN模型转换为BNN模型。其定义如下:
+
+ ```
+ def transform_to_bnn_model(self,
+ get_dense_args=lambda dp: {"in_channels": dp.in_channels, "has_bias": dp.has_bias,
+ "out_channels": dp.out_channels, "activation": dp.activation},
+ get_conv_args=lambda dp: {"in_channels": dp.in_channels, "out_channels": dp.out_channels,
+ "pad_mode": dp.pad_mode, "kernel_size": dp.kernel_size,
+ "stride": dp.stride, "has_bias": dp.has_bias,
+ "padding": dp.padding, "dilation": dp.dilation,
+ "group": dp.group},
+ add_dense_args=None,
+ add_conv_args=None):
+ r"""
+ Transform the whole DNN model to BNN model, and wrap BNN model by TrainOneStepCell.
+
+ Args:
+ get_dense_args (function): The arguments gotten from the DNN full connection layer. Default: lambda dp:
+ {"in_channels": dp.in_channels, "out_channels": dp.out_channels, "has_bias": dp.has_bias}.
+ get_conv_args (function): The arguments gotten from the DNN convolutional layer. Default: lambda dp:
+ {"in_channels": dp.in_channels, "out_channels": dp.out_channels, "pad_mode": dp.pad_mode,
+ "kernel_size": dp.kernel_size, "stride": dp.stride, "has_bias": dp.has_bias}.
+ add_dense_args (dict): The new arguments added to BNN full connection layer. Default: {}.
+ add_conv_args (dict): The new arguments added to BNN convolutional layer. Default: {}.
+
+ Returns:
+ Cell, a trainable BNN model wrapped by TrainOneStepCell.
+ """
+
+ ```
+ 参数`get_dense_args`指定从DNN模型的全连接层中获取哪些参数,默认值是DNN模型的全连接层和BNN的全连接层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Dense);`get_conv_args`指定从DNN模型的卷积层中获取哪些参数,默认值是DNN模型的卷积层和BNN的卷积层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.nn.html#mindspore.nn.Conv2d);参数`add_dense_args`和`add_conv_args`分别指定了要为BNN层指定哪些新的参数值。需要注意的是,`add_dense_args`中的参数不能与`get_dense_args`重复,`add_conv_args`和`get_conv_args`也是如此。
+
+- 功能二:转换指定类型的层
+
+ `transform_to_bnn_layer`方法可以将DNN模型中指定类型的层(`nn.Dense`或者`nn.Conv2d`)转换为对应的贝叶斯层。其定义如下:
+
+ ```
+ def transform_to_bnn_layer(self, dnn_layer, bnn_layer, get_args=None, add_args=None):
+ r"""
+ Transform a specific type of layers in DNN model to corresponding BNN layer.
+
+ Args:
+ dnn_layer_type (Cell): The type of DNN layer to be transformed to BNN layer. The optional values are
+ nn.Dense, nn.Conv2d.
+ bnn_layer_type (Cell): The type of BNN layer to be transformed to. The optional values are
+ DenseReparameterization, ConvReparameterization.
+ get_args (dict): The arguments gotten from the DNN layer. Default: None.
+ add_args (dict): The new arguments added to BNN layer. Default: None.
+
+ Returns:
+ Cell, a trainable model wrapped by TrainOneStepCell, whose sprcific type of layer is transformed to the corresponding bayesian layer.
+ """
+ ```
+ 参数`dnn_layer`指定将哪个类型的DNN层转换成BNN层,`bnn_layer`指定DNN层将转换成哪个类型的BNN层,`get_args`和`add_args`分别指定从DNN层中获取哪些参数和要为BNN层的哪些参数重新赋值。
+
+如何在MindSpore中使用API`TransformToBNN`可以参考教程[DNN一键转换成BNN](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/apply_deep_probability_program.html#dnnbnn)
+
+## 贝叶斯工具箱
+
+贝叶斯神经网络的优势之一就是可以获取不确定性,MDP在上层提供了不确定性估计的工具箱(`mindspore.nn.probability.toolbox`),用户可以很方便地使用该工具箱计算不确定性。不确定性意味着深度学习模型对预测结果的不确定程度。目前,大多数深度学习算法只能给出高置信度的预测结果,而不能判断预测结果的确定性,不确定性主要有两种类型:偶然不确定性和认知不确定性。
+
+- 偶然不确定性(Aleatoric Uncertainty):描述数据中的内在噪声,即无法避免的误差,这个现象不能通过增加采样数据来削弱。
+- 认知不确定性(Epistemic Uncertainty):模型自身对输入数据的估计可能因为训练不佳、训练数据不够等原因而不准确,可以通过增加训练数据等方式来缓解。
+
+不确定性评估工具箱的接口如下:
+- `model`:待评估不确定性的已训练好的模型。
+- `train_dataset`:用于训练的数据集,迭代器类型。
+- `task_type`:模型的类型,字符串,输入“regression”或者“classification”。
+- `num_classes`:如果是分类模型,需要指定类别的标签数量。
+- `epochs`:用于训练不确定模型的迭代数。
+- `epi_uncer_model_path`:用于存储或加载计算认知不确定性的模型的路径。
+- `ale_uncer_model_path`:用于存储或加载计算偶然不确定性的模型的路径。
+- `save_model`:布尔类型,是否需要存储模型。
+
+在使用前,需要先训练好模型,以LeNet5为例,使用方式如下:
+```python
+from mindspore.nn.probability.toolbox.uncertainty_evaluation import UncertaintyEvaluation
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+if __name__ == '__main__':
+ # get trained model
+ network = LeNet5()
+ param_dict = load_checkpoint('checkpoint_lenet.ckpt')
+ load_param_into_net(network, param_dict)
+ # get train and eval dataset
+ ds_train = create_dataset('workspace/mnist/train')
+ ds_eval = create_dataset('workspace/mnist/test')
+ evaluation = UncertaintyEvaluation(model=network,
+ train_dataset=ds_train,
+ task_type='classification',
+ num_classes=10,
+ epochs=1,
+ epi_uncer_model_path=None,
+ ale_uncer_model_path=None,
+ save_model=False)
+ for eval_data in ds_eval.create_dict_iterator():
+ eval_data = Tensor(eval_data['image'], mstype.float32)
+ epistemic_uncertainty = evaluation.eval_epistemic_uncertainty(eval_data)
+ aleatoric_uncertainty = evaluation.eval_aleatoric_uncertainty(eval_data)
+ print('The shape of epistemic uncertainty is ', epistemic_uncertainty.shape)
+        print('The shape of aleatoric uncertainty is ', aleatoric_uncertainty.shape)
+```
+`eval_epistemic_uncertainty`计算的是认知不确定性,也叫模型不确定性,对于每一个样本的每个预测标签都会有一个不确定值;`eval_aleatoric_uncertainty`计算的是偶然不确定性,也叫数据不确定性,对于每一个样本都会有一个不确定值。
+所以输出为:
+
+```
+The shape of epistemic uncertainty is (32, 10)
+The shape of aleatoric uncertainty is (32,)
+```
+uncertainty的值位于[0,1]之间,越大表示不确定性越高。
diff --git a/docs/programming_guide/source_zh_cn/run.md b/docs/programming_guide/source_zh_cn/run.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea2dfb586add153a4475cae9e36247b8a50b9ad7
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/run.md
@@ -0,0 +1,372 @@
+# 运行方式
+
+
+
+- [运行方式](#运行方式)
+ - [概述](#概述)
+ - [执行单算子](#执行单算子)
+ - [执行普通函数](#执行普通函数)
+ - [执行网络模型](#执行网络模型)
+ - [执行训练模型](#执行训练模型)
+ - [执行推理模型](#执行推理模型)
+
+
+
+
+
+## 概述
+执行主要有三种方式:单算子、普通函数和网络训练模型。
+
+
+## 执行单算子
+
+执行单个算子,并打印相关结果。
+
+代码样例如下:
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import context, Tensor
+
+context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
+
+conv = nn.Conv2d(3, 4, 3, bias_init='zeros')
+input_data = Tensor(np.ones([1, 3, 5, 5]).astype(np.float32))
+output = conv(input_data)
+print(output.asnumpy())
+```
+
+输出如下:
+```python
+[[[[ 0.06022915 0.06149777 0.06149777 0.06149777 0.01145121]
+ [ 0.06402162 0.05889071 0.05889071 0.05889071 -0.00933781]
+ [ 0.06402162 0.05889071 0.05889071 0.05889071 -0.00933781]
+ [ 0.06402162 0.05889071 0.05889071 0.05889071 -0.00933781]
+ [ 0.02712326 0.02096302 0.02096302 0.02096302 -0.01119636]]
+
+ [[-0.0258286 -0.03362969 -0.03362969 -0.03362969 -0.00799183]
+ [-0.0513729 -0.06778982 -0.06778982 -0.06778982 -0.03168458]
+ [-0.0513729 -0.06778982 -0.06778982 -0.06778982 -0.03168458]
+ [-0.0513729 -0.06778982 -0.06778982 -0.06778982 -0.03168458]
+ [-0.04186669 -0.07266843 -0.07266843 -0.07266843 -0.04836193]]
+
+ [[-0.00840744 -0.03043237 -0.03043237 -0.03043237 0.00172079]
+ [ 0.00401019 -0.03755453 -0.03755453 -0.03755453 -0.00851137]
+ [ 0.00401019 -0.03755453 -0.03755453 -0.03755453 -0.00851137]
+ [ 0.00401019 -0.03755453 -0.03755453 -0.03755453 -0.00851137]
+ [ 0.00270888 -0.03718876 -0.03718876 -0.03718876 -0.03043662]]
+
+ [[-0.00982172 0.02009856 0.02009856 0.02009856 0.03327979]
+ [ 0.02529106 0.04035065 0.04035065 0.04035065 0.01782833]
+ [ 0.02529106 0.04035065 0.04035065 0.04035065 0.01782833]
+ [ 0.02529106 0.04035065 0.04035065 0.04035065 0.01782833]
+ [ 0.01015155 0.00781826 0.00781826 0.00781826 -0.02884173]]]]
+```
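该示例中输出形状为(1, 4, 5, 5):`nn.Conv2d`默认`pad_mode="same"`,输出的高宽为输入除以stride再向上取整,通道数为输出通道数。形状推导可以用纯Python示意(非MindSpore API):

```python
import math

def conv2d_same_shape(n, c_in, h, w, out_channels, stride=1):
    """pad_mode="same" 时的卷积输出形状:高宽为 ceil(输入/stride),通道数为 out_channels"""
    return (n, out_channels, math.ceil(h / stride), math.ceil(w / stride))

# 对应 nn.Conv2d(3, 4, 3) 作用于形状为 (1, 3, 5, 5) 的输入
print(conv2d_same_shape(1, 3, 5, 5, out_channels=4))  # (1, 4, 5, 5)
```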
+
+
+## 执行普通函数
+
+将若干算子组合成一个函数,然后直接通过函数调用的方式执行这些算子,并打印相关结果,如下例所示。
+
+代码样例如下:
+```python
+import numpy as np
+from mindspore import context, Tensor
+from mindspore.ops import functional as F
+
+context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
+
+def tensor_add_func(x, y):
+ z = F.tensor_add(x, y)
+ z = F.tensor_add(z, x)
+ return z
+
+x = Tensor(np.ones([3, 3], dtype=np.float32))
+y = Tensor(np.ones([3, 3], dtype=np.float32))
+output = tensor_add_func(x, y)
+print(output.asnumpy())
+```
+
+输出如下:
+```python
+[[3. 3. 3.]
+ [3. 3. 3.]
+ [3. 3. 3.]]
+```
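上面的函数等价于逐元素计算 z = (x + y) + x。对全1输入,每个元素都是3,可以用纯Python验证(非MindSpore API):

```python
x = [[1.0] * 3 for _ in range(3)]
y = [[1.0] * 3 for _ in range(3)]

# 逐元素计算 z = (x + y) + x
z = [[x[i][j] + y[i][j] + x[i][j] for j in range(3)] for i in range(3)]
print(z)  # 每个元素均为3.0
```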
+
+## 执行网络模型
+MindSpore的Model接口是用于训练和验证的高级接口。可以将具有训练或推理功能的层(layers)组合成一个对象,通过调用train、eval、predict接口分别实现训练、评估和预测功能。
+
+用户可以根据实际需要传入网络、损失函数和优化器等初始化Model接口,还可以通过配置amp_level实现混合精度,配置metrics实现模型评估。
+
+### 执行训练模型
+通过调用Model的train接口可以实现训练。
+
+代码样例如下:
+```python
+import os
+
+import mindspore.dataset as ds
+import mindspore.dataset.transforms.c_transforms as CT
+import mindspore.dataset.vision.c_transforms as CV
+import mindspore.nn as nn
+from mindspore import context
+from mindspore.common import dtype as mstype
+from mindspore.common.initializer import Normal
+from mindspore.common.initializer import TruncatedNormal
+from mindspore.dataset.vision import Inter
+from mindspore.train import Model
+from mindspore.train.callback import LossMonitor
+
+
+def create_dataset(data_path, batch_size=32, repeat_size=1,
+ num_parallel_workers=1):
+ """
+ create dataset for train or test
+ """
+ # define dataset
+ mnist_ds = ds.MnistDataset(data_path)
+
+ resize_height, resize_width = 32, 32
+ rescale = 1.0 / 255.0
+ shift = 0.0
+ rescale_nml = 1 / 0.3081
+ shift_nml = -1 * 0.1307 / 0.3081
+
+ # define map operations
+ resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) # Bilinear mode
+ rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
+ rescale_op = CV.Rescale(rescale, shift)
+ hwc2chw_op = CV.HWC2CHW()
+ type_cast_op = CT.TypeCast(mstype.int32)
+
+ # apply map operations on images
+ mnist_ds = mnist_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=resize_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=rescale_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)
+
+ # apply DatasetOps
+ buffer_size = 10000
+ mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script
+ mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
+ mnist_ds = mnist_ds.repeat(repeat_size)
+
+ return mnist_ds
+
+
+def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):
+ """weight initial for conv layer"""
+ weight = weight_variable()
+ return nn.Conv2d(in_channels, out_channels,
+ kernel_size=kernel_size, stride=stride, padding=padding,
+ weight_init=weight, has_bias=False, pad_mode="valid")
+
+
+def fc_with_initialize(input_channels, out_channels):
+ """weight initial for fc layer"""
+ weight = weight_variable()
+ bias = weight_variable()
+ return nn.Dense(input_channels, out_channels, weight, bias)
+
+
+def weight_variable():
+ """weight initial"""
+ return TruncatedNormal(0.02)
+
+
+class LeNet5(nn.Cell):
+ """
+ Lenet network
+
+ Args:
+ num_class (int): Num classes. Default: 10.
+ num_channel (int): Num channels. Default: 1.
+
+ Returns:
+ Tensor, output tensor
+ Examples:
+ >>> LeNet(num_class=10)
+
+ """
+
+ def __init__(self, num_class=10, num_channel=1):
+ super(LeNet5, self).__init__()
+ self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
+ self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
+ self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
+ self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
+ self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
+ self.relu = nn.ReLU()
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.flatten = nn.Flatten()
+
+ def construct(self, x):
+ x = self.max_pool2d(self.relu(self.conv1(x)))
+ x = self.max_pool2d(self.relu(self.conv2(x)))
+ x = self.flatten(x)
+ x = self.relu(self.fc1(x))
+ x = self.relu(self.fc2(x))
+ x = self.fc3(x)
+ return x
+
+
+if __name__ == "__main__":
+ context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
+ ds_train = create_dataset(os.path.join("/home/workspace/mindspore_dataset/MNIST_Data/", "train"), 32)
+
+ network = LeNet5(10)
+ net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
+ net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
+ model = Model(network, net_loss, net_opt)
+
+ print("============== Starting Training ==============")
+ model.train(1, ds_train, callbacks=[LossMonitor()], dataset_sink_mode=True)
+```
+
+输出如下:
+```python
+epoch: 1 step: 1, loss is 2.300784
+epoch: 1 step: 2, loss is 2.3076947
+epoch: 1 step: 3, loss is 2.2993166
+...
+epoch: 1 step: 1873, loss is 0.13014838
+epoch: 1 step: 1874, loss is 0.0346688
+epoch: 1 step: 1875, loss is 0.017264696
+```
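`create_dataset`中的`rescale`与`rescale_nml`两步等价于先把像素缩放到[0, 1],再按均值0.1307、标准差0.3081做标准化(MNIST常用的统计量)。可以用纯Python验证这两步的组合效果:

```python
rescale, shift = 1.0 / 255.0, 0.0
rescale_nml = 1 / 0.3081
shift_nml = -1 * 0.1307 / 0.3081

def normalize(pixel):
    x = pixel * rescale + shift          # 第一步:缩放到 [0, 1]
    return x * rescale_nml + shift_nml   # 第二步:减均值0.1307后除以标准差0.3081

# 均值处的像素被映射到约0,纯黑像素落在均值左侧
print(abs(normalize(0.1307 * 255)) < 1e-9)  # True
print(round(normalize(0.0), 4))             # -0.4242
```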
+
+> 使用PyNative模式调试,请参考[使用PyNative模式调试](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/debug_in_pynative_mode.html),包括单算子、普通函数和网络训练模型的执行。
+
+### 执行推理模型
+通过调用Model的eval接口可以实现推理。为了方便评估模型的好坏,可以在Model接口初始化的时候设置评估指标Metric。
+
+Metric是用于评估模型好坏的指标,常见的主要有Accuracy、Fbeta、Precision、Recall和TopKCategoricalAccuracy等。通常情况下,单一指标无法全面地评估模型的好坏,一般会结合多个指标对模型进行综合评估。
+
+常用的内置评估指标:
+- `Accuracy`(准确率):用于评估分类模型的指标,即模型预测正确的结果所占的比例。公式:$$Accuracy = (TP+TN)/(TP+TN+FP+FN)$$
+
+- `Precision`(精确率):在被识别为正类别的样本中,确实为正类别的比例。公式:$$Precision = TP/(TP+FP)$$
+
+- `Recall`(召回率):在所有正类别样本中,被正确识别为正类别的比例。公式:$$Recall = TP/(TP+FN)$$
+
+- `Fbeta`(调和均值):综合考虑precision和recall的调和均值。
+公式:$$F_\beta = (1 + \beta^2) \cdot \frac{precision \cdot recall}{(\beta^2 \cdot precision) + recall}$$
+
+- `TopKCategoricalAccuracy`(多分类TopK准确率):计算TopK分类准确率。
+
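上述公式可以用纯Python从混淆矩阵计数直接验证。以下是一个示意实现(非MindSpore的Metric接口,TP/TN/FP/FN为假设的计数):

```python
def metrics(tp, tn, fp, fn, beta=1.0):
    """根据混淆矩阵计数计算准确率、精确率、召回率和Fbeta"""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fbeta = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
    return accuracy, precision, recall, fbeta

acc, p, r, f1 = metrics(tp=3, tn=4, fp=1, fn=2)
print(acc, p, r, round(f1, 4))  # 0.7 0.75 0.6 0.6667
```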
+代码样例如下:
+```python
+import os
+
+import mindspore.dataset as ds
+import mindspore.dataset.transforms.c_transforms as CT
+import mindspore.dataset.vision.c_transforms as CV
+import mindspore.nn as nn
+from mindspore import context
+from mindspore.common import dtype as mstype
+from mindspore.common.initializer import Normal
+from mindspore.dataset.vision import Inter
+from mindspore.nn.metrics import Accuracy, Precision
+from mindspore.train import Model
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+
+class LeNet5(nn.Cell):
+ """
+ Lenet network
+
+ Args:
+ num_class (int): Num classes. Default: 10.
+ num_channel (int): Num channels. Default: 1.
+
+ Returns:
+ Tensor, output tensor
+ Examples:
+ >>> LeNet(num_class=10)
+
+ """
+
+ def __init__(self, num_class=10, num_channel=1):
+ super(LeNet5, self).__init__()
+ self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
+ self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
+ self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
+ self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
+ self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
+ self.relu = nn.ReLU()
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.flatten = nn.Flatten()
+
+ def construct(self, x):
+ x = self.max_pool2d(self.relu(self.conv1(x)))
+ x = self.max_pool2d(self.relu(self.conv2(x)))
+ x = self.flatten(x)
+ x = self.relu(self.fc1(x))
+ x = self.relu(self.fc2(x))
+ x = self.fc3(x)
+ return x
+
+
+def create_dataset(data_path, batch_size=32, repeat_size=1,
+ num_parallel_workers=1):
+ """
+ create dataset for train or test
+ """
+ # define dataset
+ mnist_ds = ds.MnistDataset(data_path)
+
+ resize_height, resize_width = 32, 32
+ rescale = 1.0 / 255.0
+ shift = 0.0
+ rescale_nml = 1 / 0.3081
+ shift_nml = -1 * 0.1307 / 0.3081
+
+ # define map operations
+ resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) # Bilinear mode
+ rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
+ rescale_op = CV.Rescale(rescale, shift)
+ hwc2chw_op = CV.HWC2CHW()
+ type_cast_op = CT.TypeCast(mstype.int32)
+
+ # apply map operations on images
+ mnist_ds = mnist_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=resize_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=rescale_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)
+
+ # apply DatasetOps
+ buffer_size = 10000
+ mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script
+ mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
+ mnist_ds = mnist_ds.repeat(repeat_size)
+
+ return mnist_ds
+
+
+if __name__ == "__main__":
+ context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
+
+ network = LeNet5(10)
+ net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
+ net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
+ model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy(), "Precision": Precision()})
+
+ print("============== Starting Testing ==============")
+ param_dict = load_checkpoint("./ckpt/checkpoint_lenet-1_1875.ckpt")
+ load_param_into_net(network, param_dict)
+ ds_eval = create_dataset(os.path.join("/home/workspace/mindspore_dataset/MNIST_Data", "test"), 32, 1)
+ acc = model.eval(ds_eval, dataset_sink_mode=True)
+ print("============== {} ==============".format(acc))
+```
+
+输出如下:
+```python
+============== {'Accuracy': 0.96875, 'Precision': array([0.97782258, 0.99451052, 0.98031496, 0.92723881, 0.98352214,
+ 0.97165533, 0.98726115, 0.9472196 , 0.9394551 , 0.98236515])} ==============
+```
\ No newline at end of file
diff --git a/api/source_zh_cn/programming_guide/sampler.md b/docs/programming_guide/source_zh_cn/sampler.md
similarity index 45%
rename from api/source_zh_cn/programming_guide/sampler.md
rename to docs/programming_guide/source_zh_cn/sampler.md
index dc76658852fda5925547c3898657fe64b5a69a0a..7c718e40c28f938d9db07c0828c9784eeb573946 100644
--- a/api/source_zh_cn/programming_guide/sampler.md
+++ b/docs/programming_guide/source_zh_cn/sampler.md
@@ -5,7 +5,6 @@
- [采样器](#采样器)
- [概述](#概述)
- [MindSpore采样器](#mindspore采样器)
- - [SequentialSampler](#sequentialsampler)
- [RandomSampler](#randomsampler)
- [WeightedRandomSampler](#weightedrandomsampler)
- [SubsetRandomSampler](#subsetrandomsampler)
@@ -15,11 +14,11 @@
-
+
## 概述
-MindSpore提供了多种用途的采样器,帮助用户对数据集进行不同形式的采样,以满足训练需求,能够解决诸如数据集过大或样本类别分布不均等问题。只需在加载数据集时将采样器对象传入,即可实现数据的采样。
+MindSpore提供了多种用途的采样器(Sampler),帮助用户对数据集进行不同形式的采样,以满足训练需求,能够解决诸如数据集过大或样本类别分布不均等问题。只需在加载数据集时传入采样器对象,即可实现数据的采样。
MindSpore目前提供的采样器类别如下表所示。此外,用户也可以根据需要实现自定义的采样器类。
@@ -34,124 +33,45 @@ MindSpore目前提供的采样器类别如下表所示。此外,用户也可
## MindSpore采样器
-下面以[CIFAR10数据集](https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz)为例,介绍MindSpore采样器使用方法。
-
-### SequentialSampler
-
-从指定的索引位置开始顺序采样指定数目的数据。
-
-```python
-# 通过SequentialSampler定义一个顺序采样器,并作用于数据集
-
-import mindspore.dataset as ds
-
-# CIFAR10数据集路径
-DATA_DIR = "Cifar10Data/"
-
-# 1. 定义一个顺序采样器SequentialSampler,按照读取顺序获取5个样本数据
-sampler = ds.SequentialSampler(num_samples=5)
-dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-
-# 启动数据管道,输出5个样本数据
-for data in dataset1.create_dict_iterator():
- print("Image shape:", data['image'].shape, ", Label:", data['label'])
-
-print("")
-
-# 2. 定义一个顺序采样器SequentialSampler,跳过前2个数据,继续按照读取顺序获取5个样本数据
-sampler = ds.SequentialSampler(start_index=2, num_samples=5)
-dataset2 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-
-# 启动数据管道,输出5个样本数据
-for data in dataset2.create_dict_iterator():
- print("Image shape:", data['image'].shape, ", Label:", data['label'])
-
-print("")
-
-# 3. 同类用法,指定数据集中的num_samples参数为5,shuffle参数为False,同样可以达到1的效果
-dataset3 = ds.Cifar10Dataset(DATA_DIR, num_samples=5, shuffle=False)
-
-# 启动数据管道,输出5个样本数据
-for data in dataset3.create_dict_iterator():
- print("Image shape:", data['image'].shape, ", Label:", data['label'])
-```
-
-```
-Image shape: (32, 32, 3) , Label: 0
-Image shape: (32, 32, 3) , Label: 1
-Image shape: (32, 32, 3) , Label: 2
-Image shape: (32, 32, 3) , Label: 3
-Image shape: (32, 32, 3) , Label: 4
-
-Image shape: (32, 32, 3) , Label: 2
-Image shape: (32, 32, 3) , Label: 3
-Image shape: (32, 32, 3) , Label: 4
-Image shape: (32, 32, 3) , Label: 5
-Image shape: (32, 32, 3) , Label: 6
-
-Image shape: (32, 32, 3) , Label: 0
-Image shape: (32, 32, 3) , Label: 1
-Image shape: (32, 32, 3) , Label: 2
-Image shape: (32, 32, 3) , Label: 3
-Image shape: (32, 32, 3) , Label: 4
-```
+下面以[CIFAR-10数据集](https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz)为例,介绍几种常用MindSpore采样器的使用方法。
### RandomSampler
从索引序列中随机采样指定数目的数据。
-```python
-# 通过RandomSampler定义一个随机采样器,并作用于数据集
+下面的样例使用随机采样器分别从CIFAR-10数据集中有放回和无放回地随机采样5个数据,并展示已加载数据的形状和标签。
+```python
import mindspore.dataset as ds
-# 设置全局随机种子,确保RandomSampler的行为可预测
ds.config.set_seed(0)
-# CIFAR数据集路径
DATA_DIR = "Cifar10Data/"
-# 1. 定义一个随机采样器SequentialSampler,随机获取5个样本数据
sampler = ds.RandomSampler(num_samples=5)
dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-# 启动数据管道,输出5个样本数据
for data in dataset1.create_dict_iterator():
print("Image shape:", data['image'].shape, ", Label:", data['label'])
-print("")
+print("------------")
-# 2. 定义一个随机采样器RandomSampler,replacement=True意味着有放回抽样
sampler = ds.RandomSampler(replacement=True, num_samples=5)
dataset2 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-# 启动数据管道,输出5个样本数据
for data in dataset2.create_dict_iterator():
print("Image shape:", data['image'].shape, ", Label:", data['label'])
-
-print("")
-
-# 3. 同类用法,指定数据集中的num_samples参数为5,shuffle参数为True,同样可以达到2的效果
-dataset3 = ds.Cifar10Dataset(DATA_DIR, num_samples=5, shuffle=True)
-
-# 启动数据管道,输出5个样本数据
-for data in dataset3.create_dict_iterator():
- print("Image shape:", data['image'].shape, ", Label:", data['label'])
```
+输出结果如下:
+
```
Image shape: (32, 32, 3) , Label: 0
Image shape: (32, 32, 3) , Label: 2
Image shape: (32, 32, 3) , Label: 6
Image shape: (32, 32, 3) , Label: 4
Image shape: (32, 32, 3) , Label: 6
-
-Image shape: (32, 32, 3) , Label: 8
-Image shape: (32, 32, 3) , Label: 8
-Image shape: (32, 32, 3) , Label: 1
-Image shape: (32, 32, 3) , Label: 2
-Image shape: (32, 32, 3) , Label: 7
-
+------------
Image shape: (32, 32, 3) , Label: 8
Image shape: (32, 32, 3) , Label: 8
Image shape: (32, 32, 3) , Label: 1
@@ -163,29 +83,25 @@ Image shape: (32, 32, 3) , Label: 7
指定每种类别的采样概率,按照概率在各类别中随机采样指定数目的数据。
-```python
-# 通过WeightedRandomSampler定义一个带权重的随机采样器,并作用于数据集
+下面的样例使用带权随机采样器从CIFAR-10数据集的10个类别中按概率获取6个样本,并展示已读取数据的形状和标签。
+```python
import mindspore.dataset as ds
-# 设置全局随机种子,确保WeightedRandomSampler的行为可预测
ds.config.set_seed(1)
-# CIFAR数据集路径
DATA_DIR = "Cifar10Data/"
-# 定义一个带权重的随机采样器WeightedRandomSampler
-# weights代表CIFAR10中10类数据的采样概率,num_samples表示随机获取6个样本数据
-# replacement参数与RandomSampler中一致
weights = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
sampler = ds.WeightedRandomSampler(weights, num_samples=6)
-dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
+dataset = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-# 启动数据管道,输出6个样本数据
-for data in dataset1.create_dict_iterator():
+for data in dataset.create_dict_iterator():
print("Image shape:", data['image'].shape, ", Label:", data['label'])
```
+输出结果如下:
+
```
Image shape: (32, 32, 3) , Label: 1
Image shape: (32, 32, 3) , Label: 1
@@ -199,28 +115,25 @@ Image shape: (32, 32, 3) , Label: 0
从指定索引子序列中随机采样指定数目的数据。
-```python
-# 通过SubsetRandomSampler定义一个子集随机采样器,并作用于数据集
+下面的样例使用子序列随机采样器从CIFAR-10数据集的指定子序列中抽样3个样本,并展示已读取数据的形状和标签。
+```python
import mindspore.dataset as ds
-# 设置全局随机种子,确保SubsetRandomSampler的行为可预测
ds.config.set_seed(2)
-# CIFAR数据集路径
DATA_DIR = "Cifar10Data/"
-# 定义一个带采样集合的随机采样器SubsetRandomSampler
-# indice代表可采样的集合,num_samples表示获取3个样本数据,即从可采样集合中(0~5)随机获取3个值,作为下标访问数据集的数据
indices = [0, 1, 2, 3, 4, 5]
sampler = ds.SubsetRandomSampler(indices, num_samples=3)
-dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
+dataset = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-# 启动数据管道,输出3个样本数据
-for data in dataset1.create_dict_iterator():
+for data in dataset.create_dict_iterator():
print("Image shape:", data['image'].shape, ", Label:", data['label'])
```
+输出结果如下:
+
```
Image shape: (32, 32, 3) , Label: 5
Image shape: (32, 32, 3) , Label: 0
@@ -231,29 +144,24 @@ Image shape: (32, 32, 3) , Label: 3
在指定的数据集类别P中,每种类别各采样K条数据。
-```python
-# 通过PKSampler定义一个针对各个类别随机采样器,并作用于数据集
+下面的样例使用PK采样器从CIFAR-10数据集中每种类别抽样2个样本,最多20个样本,并展示已读取数据的形状和标签。
+```python
import mindspore.dataset as ds
-# 设置全局随机种子,确保PKSampler的shuffle参数行为可预测
ds.config.set_seed(3)
-# CIFAR数据集路径
DATA_DIR = "Cifar10Data/"
-# 定义一个针对类别采样的随机采样器PKSampler
-# num_val代表从每个类别采样K个样本,class_column代表针对特定的数据列采样(一般是label)
-# num_samples代表输出的样本数,设置num_samples = num_val*class_nums,确保每个类别平均采样
-# shuffle代表样本是否需要被混洗
sampler = ds.PKSampler(num_val=2, class_column='label', num_samples=20)
-dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
+dataset = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)
-# 启动数据管道,输出20个样本数据
-for data in dataset1.create_dict_iterator():
+for data in dataset.create_dict_iterator():
print("Image shape:", data['image'].shape, ", Label:", data['label'])
```
+输出结果如下:
+
```
Image shape: (32, 32, 3) , Label: 0
Image shape: (32, 32, 3) , Label: 0
@@ -281,66 +189,53 @@ Image shape: (32, 32, 3) , Label: 9
在分布式训练中,对数据集分片进行采样。
-```python
-# 通过DistributedSampler定义一个将数据集进行分片操作,并获取某个分片进行采样的采样器,并作用于数据集
+下面的样例使用分布式采样器将构建的数据集分为3片,在每个分片中采样3个数据样本,并展示已读取的数据。
+```python
import numpy as np
import mindspore.dataset as ds
-# 构建一个list
data_source = [0, 1, 2, 3, 4, 5, 6, 7, 8]
-# 定义一个采样器DistributedSampler
-# num_shards代表将CIFAR数据集拆分成n个分片
-# shard_id代表获取第m个分片
-# num_samples代表获取该分片的10个样本
-# shuffle代表样本是否需要被混洗
sampler = ds.DistributedSampler(num_shards=3, shard_id=0, shuffle=False, num_samples=3)
-
-# 从list中构建数据管道
dataset = ds.NumpySlicesDataset(data_source, column_names=["data"], sampler=sampler)
-# 经过DistributedSampler分片后,数据集的内容为
-# shard_id 0: 0, 3, 6
-# shard_id 1: 1, 4, 7
-# shard_id 2: 2, 5, 8
-# 因此第0个分片拥有数据为0, 3, 6
for data in dataset.create_dict_iterator():
print(data)
```
+输出结果如下:
+
```
-{'data': array(0, dtype=int64)}
-{'data': array(3, dtype=int64)}
-{'data': array(6, dtype=int64)}
+{'data': Tensor(shape=[], dtype=Int64, value= 0)}
+{'data': Tensor(shape=[], dtype=Int64, value= 3)}
+{'data': Tensor(shape=[], dtype=Int64, value= 6)}
```
## 自定义采样器
-用户可以继承Sampler基类,通过实现`__iter__`方法来自定义采样器的采样方式。
+用户可以继承`Sampler`基类,通过实现`__iter__`方法来自定义采样器的采样方式。
-```python
-# 继承Sampler基类,重载__iter__成为新的采样器
+下面的样例定义了一个从下标0至下标9间隔为2采样的采样器,将其作用于CIFAR-10数据集,并展示已读取数据的形状和标签。
+```python
import mindspore.dataset as ds
class MySampler(ds.Sampler):
def __iter__(self):
- # 采样器的行为是,从下标0开始到下标9,以2为间隔采样
for i in range(0, 10, 2):
yield i
-# CIFAR数据集路径
DATA_DIR = "Cifar10Data/"
-# 将自定义构建的采样器传入到sampler参数
-dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=MySampler())
+dataset = ds.Cifar10Dataset(DATA_DIR, sampler=MySampler())
-# 启动数据管道,输出5个样本数据
-for data in dataset1.create_dict_iterator():
+for data in dataset.create_dict_iterator():
print("Image shape:", data['image'].shape, ", Label:", data['label'])
```
+输出结果如下:
+
```
Image shape: (32, 32, 3) , Label: 0
Image shape: (32, 32, 3) , Label: 2
diff --git a/docs/programming_guide/source_zh_cn/security_and_privacy.md b/docs/programming_guide/source_zh_cn/security_and_privacy.md
new file mode 100644
index 0000000000000000000000000000000000000000..e75453f0ff844fe8e9319e284f254336cb73e6ec
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/security_and_privacy.md
@@ -0,0 +1,61 @@
+# AI安全与隐私保护
+
+
+
+- [AI安全与隐私保护](#ai安全与隐私保护)
+ - [概述](#概述)
+ - [对抗鲁棒性](#对抗鲁棒性)
+ - [Attack](#attack)
+ - [Defense](#defense)
+ - [Detector](#detector)
+ - [模型安全测试](#模型安全测试)
+ - [Fuzzer](#fuzzer)
+ - [差分隐私训练](#差分隐私训练)
+ - [DPModel](#dpmodel)
+ - [隐私泄露风险评估](#隐私泄露风险评估)
+ - [MembershipInference](#membershipinference)
+
+
+
+
+
+## 概述
+
+本篇主要介绍AI安全与隐私保护。AI作为一种通用技术,在带来巨大机遇和效益的同时也面临着新的安全与隐私保护的挑战。MindArmour是MindSpore的一个子项目,为MindSpore提供安全与隐私保护能力,主要包括对抗鲁棒性、模型安全测试、差分隐私训练、隐私泄露风险评估等技术。
+
+## 对抗鲁棒性
+
+### Attack
+`Attack`基类定义了对抗样本生成的使用接口,其子类实现了各种具体的生成算法,支持安全工作人员快速高效地生成对抗样本,用于攻击AI模型,以评估模型的鲁棒性。
+
+### Defense
+`Defense`基类定义了对抗训练的使用接口,其子类实现了各种具体的对抗训练算法,增强模型的对抗鲁棒性。
+
+### Detector
+`Detector`基类定义了对抗样本检测的使用接口,其子类实现了各种具体的检测算法,增强模型的对抗鲁棒性。
+
+详细内容,请参考[对抗鲁棒性官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/improve_model_security_nad.html)。
+
+## 模型安全测试
+
+### Fuzzer
+
+`Fuzzer`类基于神经元覆盖率增益控制fuzzing流程,采用自然扰动和对抗样本生成方法作为变异策略,激活更多的神经元,从而探索不同类型的模型输出结果、错误行为,指导用户增强模型鲁棒性。
+
+详细内容,请参考[模型安全测试官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/test_model_security_fuzzing.html)。
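Fuzzer的核心循环可以概括为:对种子输入做变异,只保留能激活新"神经元"(带来覆盖率增益)的变异样本。下面是一个高度简化的纯Python示意(非MindArmour实现,"神经元"用离散的激活区间代替):

```python
import random

random.seed(0)

def fake_neurons(x):
    """模拟模型的神经元激活:把输入值映射到若干离散的激活区间(仅为示意)"""
    return {int(x * 10) % 5}

def fuzz(seeds, rounds=20):
    """基于覆盖率增益的简化fuzzing循环"""
    coverage, kept = set(), []
    for x in seeds:
        coverage |= fake_neurons(x)
    for _ in range(rounds):
        mutant = random.choice(seeds) + random.uniform(-1, 1)  # 自然扰动式变异
        gain = fake_neurons(mutant) - coverage
        if gain:                       # 只保留带来覆盖率增益的变异样本
            coverage |= gain
            kept.append(mutant)
    return coverage, kept

coverage, kept = fuzz([0.1, 0.2])
print(sorted(coverage))
```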
+
+## 差分隐私训练
+
+### DPModel
+
+`DPModel`继承了`mindspore.Model`,提供了差分隐私训练的入口函数。
+
+详细内容,请参考[差分隐私官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/protect_user_privacy_with_differential_privacy.html)。
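差分隐私训练的关键一步是对梯度按L2范数裁剪,再加入高斯噪声。下面用纯Python示意这一步(非DPModel实现,裁剪阈值与噪声系数均为假设值):

```python
import math
import random

random.seed(0)

def dp_sanitize(grad, clip_norm=1.0, noise_multiplier=0.5):
    """先把梯度的L2范数裁剪到clip_norm以内,再按噪声系数加入高斯噪声"""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + random.gauss(0, noise_multiplier * clip_norm) for g in clipped]

grad = [3.0, 4.0]          # L2范数为5,将被裁剪到1
noisy = dp_sanitize(grad)
print(len(noisy))  # 2
```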
+
+## 隐私泄露风险评估
+
+### MembershipInference
+
+`MembershipInference`类提供了一种模型逆向分析方法,能够基于模型对样本的预测信息,推测某个样本是否在模型的训练集中,以此评估模型的隐私泄露风险。
+
+详细内容,请参考[隐私泄露风险评估官方教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/test_model_security_membership_inference.html)。
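MembershipInference背后的一个直观事实是:模型往往对训练集样本给出更高的预测置信度。下面用最简单的置信度阈值法做纯Python示意(阈值与置信度数据均为假设,实际的成员推理攻击通常训练一个攻击模型):

```python
def is_member(confidence, threshold=0.9):
    """置信度超过阈值则推测该样本在训练集中(简化的阈值攻击)"""
    return confidence >= threshold

# 假设:训练集样本的最大softmax置信度通常更高
train_conf = [0.99, 0.97, 0.95]
test_conf = [0.70, 0.85, 0.60]

print([is_member(c) for c in train_conf])  # [True, True, True]
print([is_member(c) for c in test_conf])   # [False, False, False]
```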
diff --git a/api/source_zh_cn/programming_guide/tensor.md b/docs/programming_guide/source_zh_cn/tensor.md
similarity index 63%
rename from api/source_zh_cn/programming_guide/tensor.md
rename to docs/programming_guide/source_zh_cn/tensor.md
index 3959362bb808dba14c31328977e11075d25c773c..71dc5e63aec07a304fb7f59968b39a3a2294cbf3 100644
--- a/api/source_zh_cn/programming_guide/tensor.md
+++ b/docs/programming_guide/source_zh_cn/tensor.md
@@ -1,8 +1,8 @@
-# 张量
+# Tensor
-- [张量](#张量)
+- [Tensor](#tensor)
- [概述](#概述)
- [张量构造](#张量构造)
- [张量的属性和方法](#张量的属性和方法)
@@ -11,24 +11,21 @@
-
+
## 概述
-张量是MindSpore网络运算中的基本数据结构,即为多维数组。张量里的数据分为不同的类型,
-支持的类型有`int8`、`int16`、`int32`、`int64`、`uint8`、`uint16`、`uint32`、`uint64`、`float16`、`float32`、`float64`、`bool_`,
-与NumPy里的数据类型一一对应。
+张量(Tensor)是MindSpore网络运算中的基本数据结构。张量中的数据类型可参考[dtype](https://www.mindspore.cn/doc/programming_guide/r1.0/dtype.html)。
不同维度的张量分别表示不同的数据,0维张量表示标量,1维张量表示向量,2维张量表示矩阵,3维张量可以表示彩色图像的RGB三通道等等。
-> 本文档中的所有示例,都是在PyNative模式下运行的,暂不支持CPU。
+> 本文中的所有示例,支持在PyNative模式下运行,暂不支持CPU。
## 张量构造
-构造张量时支持传入`Tensor`、`float`、`int`、`bool`、`tuple`、`list`和`NumPy.array`。
+构造张量时,支持传入`Tensor`、`float`、`int`、`bool`、`tuple`、`list`和`NumPy.array`类型。
-`Tensor`作为初始值可指定dtype,如果没有指定dtype,`int`、`float`、`bool`分别对应`int32`、`float32`、`bool_`,
-`tuple`和`list`生成的1维`Tensor`数据类型与`tuple`和`list`里存放数据的类型相对应。
+`Tensor`作为初始值时,可指定dtype,如果没有指定dtype,`int`、`float`、`bool`分别对应`int32`、`float32`、`bool_`,`tuple`和`list`生成的1维`Tensor`数据类型与`tuple`和`list`里存放数据的类型相对应。
代码样例如下:
@@ -65,6 +62,7 @@ True
```
## 张量的属性和方法
+
### 属性
张量的属性包括形状(shape)和数据类型(dtype)。
@@ -93,9 +91,9 @@ print(x_shape, x_dtype)
### 方法
-张量的方法包括`all`、`any`和`asnumpy`。
-- `all(axis, keep_dims)`:在指定维度上通过`and`操作进行归约,axis代表归约维度,keep_dims表示是否保留归约后的维度。
-- `any(axis, keep_dims)`:在指定维度上通过`or`操作进行归约,axis代表归约维度,keep_dims表示是否保留归约后的维度。
+张量的方法包括`all`、`any`和`asnumpy`,`all`和`any`方法目前只支持Ascend。
+- `all(axis, keep_dims)`:在指定维度上通过`and`操作进行归约,`axis`代表归约维度,`keep_dims`表示是否保留归约后的维度。
+- `any(axis, keep_dims)`:在指定维度上通过`or`操作进行归约,参数含义同`all`。
- `asnumpy()`:将`Tensor`转换为NumPy的array。
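`all`和`any`两种归约的语义可以用纯Python在二维数据上示意(仅说明语义,并非`Tensor`方法本身):

```python
data = [[True, True, False],
        [True, True, True]]

# 沿axis=1归约:对每一行分别做and / or
all_axis1 = [all(row) for row in data]
any_axis1 = [any(row) for row in data]
print(all_axis1)  # [False, True]
print(any_axis1)  # [True, True]

# 沿axis=0归约:对每一列分别做and / or
all_axis0 = [all(col) for col in zip(*data)]
print(all_axis0)  # [True, True, False]
```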
代码样例如下:
diff --git a/api/source_zh_cn/programming_guide/tokenizer.md b/docs/programming_guide/source_zh_cn/tokenizer.md
similarity index 39%
rename from api/source_zh_cn/programming_guide/tokenizer.md
rename to docs/programming_guide/source_zh_cn/tokenizer.md
index 8ac94e6a54d453785728c66681b78d410f1288c0..7c1002b37ba4a859b75c83dc68e60e0354c76d28 100644
--- a/api/source_zh_cn/programming_guide/tokenizer.md
+++ b/docs/programming_guide/source_zh_cn/tokenizer.md
@@ -5,16 +5,22 @@
- [分词器](#分词器)
- [概述](#概述)
- [MindSpore分词器](#mindspore分词器)
+    - [BertTokenizer](#berttokenizer)
+    - [JiebaTokenizer](#jiebatokenizer)
+    - [SentencePieceTokenizer](#sentencepiecetokenizer)
+    - [UnicodeCharTokenizer](#unicodechartokenizer)
+    - [WhitespaceTokenizer](#whitespacetokenizer)
+    - [WordpieceTokenizer](#wordpiecetokenizer)
-
+
## 概述
 分词就是将连续的字序列按照一定的规范重新组合成词序列的过程,合理地进行分词有助于语义的理解。
-MindSpore提供了多种用途的分词器,能够帮助用户高性能地处理文本,用户可以构建自己的字典,使用适当的标记器将句子拆分为不同的标记,并通过查找操作获取字典中标记的索引。
+MindSpore提供了多种用途的分词器(Tokenizer),能够帮助用户高性能地处理文本。用户可以构建自己的字典,使用适当的分词器将句子拆分为不同的标记(token),并通过查找操作获取字典中标记的索引。
MindSpore目前提供的分词器如下表所示。此外,用户也可以根据需要实现自定义的分词器。
@@ -30,109 +36,51 @@ MindSpore目前提供的分词器如下表所示。此外,用户也可以根
| WhitespaceTokenizer | 根据空格符对标量文本数据进行分词。 |
| WordpieceTokenizer | 根据单词集对标量文本数据进行分词。 |
-更多分词器的详细说明,可以参见[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.text.html)。
+更多分词器的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore/mindspore.dataset.text.html)。
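以表中最简单的WhitespaceTokenizer为例,其行为相当于按空白符切分字符串,可以用纯Python示意(非MindSpore实现):

```python
def whitespace_tokenize(text):
    """按空白符切分标量文本,与表中WhitespaceTokenizer的行为一致(仅为示意)"""
    return text.split()

print(whitespace_tokenize("Welcome to Shenzhen!"))  # ['Welcome', 'to', 'Shenzhen!']
```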
## MindSpore分词器
-### BasicTokenizer
-
-`BasicTokenizer`是通过大小写折叠、编码统一、去除重音符,按照正则匹配模式来分词的。
-
-```python
-import mindspore.dataset as ds
-import mindspore.dataset.text as text
-
-# 构建输入的数据列表
-input_list = ["Welcome to Beijing北京欢迎您", "長風破浪會有時,直掛雲帆濟滄海","😀嘿嘿😃哈哈😄大笑😁嘻嘻",
- "明朝(1368—1644年)和清朝(1644—1911年),是中国封建王朝史上最后两个朝代",
- "明代(1368-1644)と清代(1644-1911)は、中国の封建王朝の歴史における最後の2つの王朝でした",
- "명나라 (1368-1644)와 청나라 (1644-1911)는 중국 봉건 왕조의 역사에서 마지막 두 왕조였다"]
-
-dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-
-print("------------------------before tokenize----------------------------")
-
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
- print(text.to_str(data['text']))
-
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
-
-# 输出分词之后的数据
-# BasicTokenizer为分词的函数
-basic_tokenizer = text.BasicTokenizer()
-
-dataset = dataset.map(operations=basic_tokenizer)
-
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text'])
- print(token)
-```
-
-```
-------------------------before tokenize----------------------------
-Welcome to Beijing北京欢迎您
-長風破浪會有時,直掛雲帆濟滄海
-😀嘿嘿😃哈哈😄大笑😁嘻嘻
-明朝(1368—1644年)和清朝(1644—1911年),是中国封建王朝史上最后两个朝代
-明代(1368-1644)と清代(1644-1911)は、中国の封建王朝の歴史における最後の2つの王朝でした
-명나라 (1368-1644)와 청나라 (1644-1911)는 중국 봉건 왕조의 역사에서 마지막 두 왕조였다
-------------------------after tokenize-----------------------------
-['Welcome' 'to' 'Beijing' '北' '京' '欢' '迎' '您']
-['長' '風' '破' '浪' '會' '有' '時' ',' '直' '掛' '雲' '帆' '濟' '滄' '海']
-['😀' '嘿' '嘿' '😃' '哈' '哈' '😄' '大' '笑' '😁' '嘻' '嘻']
-['明' '朝' '(' '1368' '—' '1644' '年' ')' '和' '清' '朝' '(' '1644' '—' '1911' '年' ')' ',' '是' '中' '国' '封' '建' '王' '朝' '史' '上' '最' '后' '两' '个' '朝' '代']
-['明' '代' '(' '1368' '-' '1644' ')' 'と' '清' '代' '(' '1644' '-' '1911' ')' 'は' '、' '中' '国' 'の' '封' '建' '王' '朝' 'の' '歴' '史' 'における' '最' '後' 'の2つの' '王' '朝' 'でした']
-['명나라' '(' '1368' '-' '1644' ')' '와' '청나라' '(' '1644' '-' '1911' ')' '는' '중국' '봉건' '왕조의' '역사에서' '마지막' '두' '왕조였다']
-```
+下面介绍几种常用分词器的使用方法。
### BertTokenizer
`BertTokenizer`是通过调用`BasicTokenizer`和`WordpieceTokenizer`来进行分词的。
+下面的样例首先构建了一个文本数据集和字符串列表,然后通过`BertTokenizer`对数据集进行分词,并展示了分词前后的文本结果。
+
```python
import mindspore.dataset as ds
import mindspore.dataset.text as text
-# 构建输入的数据列表
input_list = ["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡", "I am making small mistakes during working hours",
"😀嘿嘿😃哈哈😄大笑😁嘻嘻", "繁體字"]
-
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-print("------------------------before tokenize----------------------------")
+print("------------------------before tokenization----------------------------")
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
+for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
-# 字符串列表,其中每个元素都是字符串类型的单词。
vocab_list = [
"床", "前", "明", "月", "光", "疑", "是", "地", "上", "霜", "举", "头", "望", "低", "思", "故", "乡",
"繁", "體", "字", "嘿", "哈", "大", "笑", "嘻", "i", "am", "mak", "make", "small", "mistake",
"##s", "during", "work", "##ing", "hour", "😀", "😃", "😄", "😁", "+", "/", "-", "=", "12",
"28", "40", "16", " ", "I", "[CLS]", "[SEP]", "[UNK]", "[PAD]", "[MASK]", "[unused1]", "[unused10]"]
-# 从单词列表中构建一个vocab对象
vocab = text.Vocab.from_list(vocab_list)
-
-# 输出分词之后的数据
-# BertTokenizer为分词的函数
tokenizer_op = text.BertTokenizer(vocab=vocab)
-
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
-
dataset = dataset.map(operations=tokenizer_op)
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text'])
- print(token)
+print("------------------------after tokenization-----------------------------")
+
+for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
+ print(text.to_str(i['text']))
```
+输出结果如下:
+
```
-------------------------before tokenize----------------------------
+------------------------before tokenization----------------------------
床前明月光
疑是地上霜
举头望明月
@@ -140,7 +88,7 @@ for i in dataset.create_dict_iterator(num_epochs=1):
I am making small mistakes during working hours
😀嘿嘿😃哈哈😄大笑😁嘻嘻
繁體字
-------------------------after tokenize-----------------------------
+------------------------after tokenization-----------------------------
['床' '前' '明' '月' '光']
['疑' '是' '地' '上' '霜']
['举' '头' '望' '明' '月']
@@ -154,123 +102,75 @@ I am making small mistakes during working hours
`JiebaTokenizer`是基于jieba的中文分词。
+下面的样例首先构建了一个文本数据集,然后使用HMM与MP字典文件创建`JiebaTokenizer`对象,并对数据集进行分词,最后展示了分词前后的文本结果。
+
```python
import mindspore.dataset as ds
import mindspore.dataset.text as text
-# 构建输入的数据列表
 input_list = ["今天天气太好了我们一起去外面玩吧"]
-
-# 字典文件由HMMSegment算法和MPSegment算法使用,该字典可在cppjieba的官方网站上获得。
-HMM_FILE = "hmm_model.utf8"
-MP_FILE = "jieba.dict.utf8"
-
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-print("------------------------before tokenize----------------------------")
+print("------------------------before tokenization----------------------------")
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
+for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
-tokenizer_op = text.JiebaTokenizer(HMM_FILE, MP_FILE)
-
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
+HMM_FILE = "hmm_model.utf8"
+MP_FILE = "jieba.dict.utf8"
+jieba_op = text.JiebaTokenizer(HMM_FILE, MP_FILE)
+dataset = dataset.map(operations=jieba_op, input_columns=["text"], num_parallel_workers=1)
-dataset = dataset.map(input_columns=["text"], operations=jieba_op, num_parallel_workers=1)
+print("------------------------after tokenization-----------------------------")
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text'])
- print(token)
+for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
+ print(text.to_str(i['text']))
```
+The output is as follows:
+
```
-------------------------before tokenize----------------------------
+------------------------before tokenization----------------------------
今天天气太好了我们一起去外面玩吧
-------------------------after tokenize-----------------------------
+------------------------after tokenization-----------------------------
['今天天气' '太好了' '我们' '一起' '去' '外面' '玩吧']
```
-### RegexTokenizer
-
-`RegexTokenizer`是通正则表达式匹配模式来进行分词的。
-
-```python
-import mindspore.dataset as ds
-import mindspore.dataset.text as text
-
-# 构建输入的数据列表
-input_list = ["Welcome to Shenzhen!"]
-
-# 原始字符串将由匹配的元素分隔。
-delim_pattern = "\\s+"
-
-dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-
-print("------------------------before tokenize----------------------------")
-
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
- print(text.to_str(data['text']))
-
-tokenizer_op = text.RegexTokenizer(delim_pattern)
-
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
-
-dataset = dataset.map(operations=tokenizer_op)
-
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text']).tolist()
- print(token)
-```
-
-```
-------------------------before tokenize----------------------------
-Welcome to Shenzhen!
-------------------------after tokenize-----------------------------
-['Welcome', 'to', 'Shenzhen!']
-```
-
### SentencePieceTokenizer
-`SentencePieceTokenizer`是基于SentencePiece这个开源的自然语言处理工具包。
+`SentencePieceTokenizer` is based on [SentencePiece](https://github.com/google/sentencepiece), an open-source natural language processing toolkit.
+
+The following example first builds a text dataset, then trains a `vocab` object on that dataset, tokenizes the dataset with a `SentencePieceTokenizer` built from the `vocab`, and shows the text before and after tokenization.
```python
import mindspore.dataset as ds
import mindspore.dataset.text as text
+from mindspore.dataset.text import SentencePieceModel, SPieceTokenizerOutType
-# 构建输入的数据列表
input_list = ["I saw a girl with a telescope."]
-
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-print("------------------------before tokenize----------------------------")
+print("------------------------before tokenization----------------------------")
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
+for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
-# 从文件数据中构建一个vocab对象
-vocab = text.SentencePieceVocab.from_file([VOCAB_FILE], 5000, 0.9995, SentencePieceModel.UNIGRAM, {})
+vocab = text.SentencePieceVocab.from_dataset(dataset, ["text"], 5000, 0.9995, SentencePieceModel.UNIGRAM, {})
tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING)
-
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
-
dataset = dataset.map(operations=tokenizer_op)
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text'])
- print(token)
+print("------------------------after tokenization-----------------------------")
+
+for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
+ print(text.to_str(i['text']))
```
+The output is as follows:
+
```
-------------------------before tokenize----------------------------
+------------------------before tokenization----------------------------
I saw a girl with a telescope.
-------------------------after tokenize-----------------------------
+------------------------after tokenization-----------------------------
['▁I' '▁sa' 'w' '▁a' '▁girl' '▁with' '▁a' '▁te' 'les' 'co' 'pe' '.']
```
@@ -278,124 +178,77 @@ I saw a girl with a telescope.
`UnicodeCharTokenizer` tokenizes text into individual Unicode characters.
+The following example first builds a text dataset, then tokenizes it with `UnicodeCharTokenizer` and shows the text before and after tokenization.
+
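As a plain-Python point of reference (an illustration only, not MindSpore's implementation), splitting a string into its Unicode characters is exactly what `list()` does:

```python
# Illustrative only: plain-Python character splitting mirrors what a
# Unicode character tokenizer produces for these inputs.
for s in ["Welcome", "北京欢迎您!"]:
    print(list(s))
```
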
```python
import mindspore.dataset as ds
import mindspore.dataset.text as text
-# 构建输入的数据列表
input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
-
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-print("------------------------before tokenize----------------------------")
+print("------------------------before tokenization----------------------------")
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
+for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
tokenizer_op = text.UnicodeCharTokenizer()
-
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
-
dataset = dataset.map(operations=tokenizer_op)
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text']).tolist()
- print(token)
+print("------------------------after tokenization-----------------------------")
+
+for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
+ print(text.to_str(i['text']).tolist())
```
+The output is as follows:
+
```
-------------------------before tokenize----------------------------
+------------------------before tokenization----------------------------
Welcome to Beijing!
北京欢迎您!
我喜欢English!
-------------------------after tokenize-----------------------------
+------------------------after tokenization-----------------------------
['W', 'e', 'l', 'c', 'o', 'm', 'e', ' ', 't', 'o', ' ', 'B', 'e', 'i', 'j', 'i', 'n', 'g', '!']
['北', '京', '欢', '迎', '您', '!']
['我', '喜', '欢', 'E', 'n', 'g', 'l', 'i', 's', 'h', '!']
```
-### UnicodeScriptTokenizer
-
-`UnicodeScriptTokenizer`是根据不同的Unicode的边界来进行分词的。
-
-```python
-import mindspore.dataset as ds
-import mindspore.dataset.text as text
-
-# 构建输入的数据列表
-input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
-
-dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-
-print("------------------------before tokenize----------------------------")
-
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
- print(text.to_str(data['text']))
-
-tokenizer_op = text.UnicodeScriptTokenizer()
-
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
-
-dataset = dataset.map(operations=tokenizer_op)
-
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text']).tolist()
- print(token)
-```
-
-```
-------------------------before tokenize----------------------------
-Welcome to Beijing!
-北京欢迎您!
-我喜欢English!
-------------------------after tokenize-----------------------------
-['Welcome', 'to', 'Beijing', '!']
-['北京欢迎您', '!']
-['我喜欢', 'English', '!']
-```
-
### WhitespaceTokenizer
`WhitespaceTokenizer` tokenizes text on whitespace.
+The following example first builds a text dataset, then tokenizes it with `WhitespaceTokenizer` and shows the text before and after tokenization.
+
```python
import mindspore.dataset as ds
import mindspore.dataset.text as text
-# 构建输入的数据列表
input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
-
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-print("------------------------before tokenize----------------------------")
+print("------------------------before tokenization----------------------------")
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
+for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
tokenizer_op = text.WhitespaceTokenizer()
-
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
-
dataset = dataset.map(operations=tokenizer_op)
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text']).tolist()
- print(token)
+print("------------------------after tokenization-----------------------------")
+
+for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
+ print(text.to_str(i['text']).tolist())
```
+The output is as follows:
+
```
->> Tokenize Result
-------------------------before tokenize----------------------------
+------------------------before tokenization----------------------------
Welcome to Beijing!
北京欢迎您!
我喜欢English!
-------------------------after tokenize-----------------------------
+------------------------after tokenization-----------------------------
['Welcome', 'to', 'Beijing!']
['北京欢迎您!']
['我喜欢English!']
@@ -405,40 +258,34 @@ Welcome to Beijing!
`WordpieceTokenizer` segments words against a given vocabulary: a word that is not in the vocabulary is split into subword pieces that are.
+The following example first builds a text dataset, then constructs a `vocab` object from the word list, tokenizes the dataset with `WordpieceTokenizer`, and shows the text before and after tokenization.
+
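The greedy longest-match idea behind WordPiece can be sketched in plain Python. This illustrates the general algorithm, not MindSpore's internal implementation; the conventional `##` prefix marks a non-initial piece:

```python
def wordpiece(word, vocab):
    """Greedy longest-match-first split of `word` against `vocab`.

    Pieces after the first carry the conventional '##' prefix; if no
    piece matches, the whole word maps to '[UNK]'.
    """
    pieces, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while end > start:
            cand = word[start:end]
            if start > 0:
                cand = "##" + cand
            if cand in vocab:
                piece = cand
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]
        pieces.append(piece)
        start = end
    return pieces

vocab = {"my", "favor", "##ite", "book"}
print(wordpiece("favorite", vocab))  # ['favor', '##ite']
```
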
```python
import mindspore.dataset as ds
import mindspore.dataset.text as text
-# 构建输入的数据列表
-input_list = ["my", "favorite", "book", "is", "love", "during", "the", "cholera", "era", "what", "我", "最", "喜", "欢", "的", "书", "是", "霍", "乱", "时", "期", "的", "爱", "情", "您"]
-
+input_list = ["my", "favorite", "book", "is", "love", "during", "the", "cholera", "era", "what", "我", "最", "喜", "欢", "书", "是", "霍", "乱", "时", "期", "的", "爱", "情", "您"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
-print("------------------------before tokenize----------------------------")
+print("------------------------before tokenization----------------------------")
-# 输出分词之前的数据
-for data in dataset.create_dict_iterator():
+for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
-#打印分词后的数据输出
-print("------------------------after tokenize-----------------------------")
-
-# 从单词列表中构建一个vocab对象
-vocab = text.Vocab.from_list(vocab_list)
-
-# 输出分词之后的数据
-# BasicTokenizer为分词的函数
+vocab = text.Vocab.from_list(input_list)
tokenizer_op = text.WordpieceTokenizer(vocab=vocab)
-
dataset = dataset.map(operations=tokenizer_op)
-for i in dataset.create_dict_iterator(num_epochs=1):
- token = text.to_str(i['text'])
- print(token)
+print("------------------------after tokenization-----------------------------")
+
+for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
+ print(text.to_str(i['text']))
```
+The output is as follows:
+
```
-------------------------before tokenize----------------------------
+------------------------before tokenization----------------------------
my
favorite
book
@@ -464,7 +311,7 @@ what
爱
情
您
-------------------------after tokenize-----------------------------
+------------------------after tokenization-----------------------------
['my']
['favor' '##ite']
['book']
diff --git a/docs/programming_guide/source_zh_cn/train.md b/docs/programming_guide/source_zh_cn/train.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ce5b02c7bc8fa574fc24fc76ea8987405abf3ea
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/train.md
@@ -0,0 +1,442 @@
+# Training
+
+
+
+- [Training](#training)
+    - [Overview](#overview)
+    - [Customizing the Training Network](#customizing-the-training-network)
+    - [Customizing the Training Loop](#customizing-the-training-loop)
+    - [Inference While Training](#inference-while-training)
+    - [On-Device Execution](#on-device-execution)
+        - [Computational Graph Sinking](#computational-graph-sinking)
+        - [Data Sinking](#data-sinking)
+
+
+
+
+
+## Overview
+MindSpore's Model Zoo already provides a large number of network models for object detection, natural language processing, and other domains that users can use directly. However, some advanced users may want to design their own networks or customize the training loop. The following sections introduce three scenarios: customizing the training network, customizing the training loop, and performing inference while training, and then describe on-device execution in detail.
+
+## Customizing the Training Network
+Before customizing a training network, you should understand MindSpore's network support, the constraints on constructing networks from Python source code, and the operator support.
+
+- Network support: MindSpore already supports many networks, grouped into computer vision, natural language processing, recommendation, and graph neural networks. See [Network List](https://www.mindspore.cn/doc/note/zh-CN/r1.0/network_list.html) for the supported networks. If the existing networks do not meet your needs, you can define your own.
+
+- Constraints on constructing networks from Python source code: MindSpore cannot yet convert arbitrary Python source code into a computational graph, so the supported syntax is restricted, mainly in terms of syntax constraints and network definition constraints. See [Constraints on Network Construction](https://www.mindspore.cn/doc/note/zh-CN/r1.0/constraints_on_network_construction.html) for details. These constraints may change as MindSpore evolves.
+
+- Operator support: operators are the building blocks of a network, so before defining your own training network you should know which operators MindSpore currently supports. See [Operator List](https://www.mindspore.cn/doc/note/zh-CN/r1.0/operator_list.html) for operator coverage on the different backends (Ascend, GPU, and CPU).
+
+> When the built-in operators are insufficient for your network, you can also refer to [Custom Operators](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/custom_operator_ascend.html) to quickly and conveniently add custom operators for the Ascend AI processor.
+
+A code example is as follows:
+```python
+import numpy as np
+
+from mindspore.common.tensor import Tensor
+from mindspore.nn import Cell, Dense, SoftmaxCrossEntropyWithLogits, Momentum, TrainOneStepCell, WithLossCell
+from mindspore.ops import operations as P
+
+
+class ReLUReduceMeanDense(Cell):
+ def __init__(self, kernel, bias, in_channel, num_class):
+ super().__init__()
+ self.relu = P.ReLU()
+ self.mean = P.ReduceMean(keep_dims=False)
+ self.dense = Dense(in_channel, num_class, kernel, bias)
+
+ def construct(self, x):
+ x = self.relu(x)
+ x = self.mean(x, (2, 3))
+ x = self.dense(x)
+ return x
+
+
+if __name__ == "__main__":
+ weight_np = np.ones((1000, 2048)).astype(np.float32)
+ weight = Tensor(weight_np.copy())
+ bias_np = np.ones((1000,)).astype(np.float32)
+ bias = Tensor(bias_np.copy())
+ net = ReLUReduceMeanDense(weight, bias, 2048, 1000)
+ criterion = SoftmaxCrossEntropyWithLogits(sparse=False)
+ optimizer = Momentum(learning_rate=0.1, momentum=0.1,
+ params=filter(lambda x: x.requires_grad, net.get_parameters()))
+ net_with_criterion = WithLossCell(net, criterion)
+ train_network = TrainOneStepCell(net_with_criterion, optimizer)
+ train_network.set_train()
+ input_np = np.random.randn(32, 2048, 7, 7).astype(np.float32)
+ input = Tensor(input_np.copy())
+ label_np_onehot = np.zeros(shape=(32, 1000)).astype(np.float32)
+ label = Tensor(label_np_onehot.copy())
+ for i in range(1):
+ loss = train_network(input, label)
+ print("-------loss------", loss)
+```
+
+The output is as follows:
+```
+-------loss------ [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
+ 0. 0. 0. 0. 0. 0. 0. 0.]
+```
+
+## Customizing the Training Loop
+If you do not want to use the `Model` interface provided by MindSpore, you can imitate the `train` interface of `Model` to control the number of loop iterations and the number of steps per epoch yourself.
+
+A code example is as follows:
+```python
+import os
+
+import mindspore.dataset as ds
+import mindspore.dataset.transforms.c_transforms as CT
+import mindspore.dataset.vision.c_transforms as CV
+import mindspore.nn as nn
+from mindspore import context
+from mindspore.common import dtype as mstype
+from mindspore.common.initializer import TruncatedNormal
+from mindspore.common.parameter import ParameterTuple
+from mindspore.dataset.vision import Inter
+from mindspore.nn.wrap.cell_wrapper import WithLossCell
+from mindspore.ops import composite as C
+from mindspore.ops import functional as F
+from mindspore.ops import operations as P
+from mindspore.train.dataset_helper import DatasetHelper, connect_network_with_dataset
+
+
+def create_dataset(data_path, batch_size=32, repeat_size=1,
+ num_parallel_workers=1):
+ """
+ create dataset for train or test
+ """
+ # define dataset
+ mnist_ds = ds.MnistDataset(data_path)
+
+ resize_height, resize_width = 32, 32
+ rescale = 1.0 / 255.0
+ shift = 0.0
+ rescale_nml = 1 / 0.3081
+ shift_nml = -1 * 0.1307 / 0.3081
+
+ # define map operations
+ resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) # Bilinear mode
+ rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
+ rescale_op = CV.Rescale(rescale, shift)
+ hwc2chw_op = CV.HWC2CHW()
+ type_cast_op = CT.TypeCast(mstype.int32)
+
+ # apply map operations on images
+ mnist_ds = mnist_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=resize_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=rescale_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)
+
+ # apply DatasetOps
+ buffer_size = 10000
+ mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script
+ mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
+ mnist_ds = mnist_ds.repeat(repeat_size)
+
+ return mnist_ds
+
+
+def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):
+ """weight initial for conv layer"""
+ weight = weight_variable()
+ return nn.Conv2d(in_channels, out_channels,
+ kernel_size=kernel_size, stride=stride, padding=padding,
+ weight_init=weight, has_bias=False, pad_mode="valid")
+
+
+def fc_with_initialize(input_channels, out_channels):
+ """weight initial for fc layer"""
+ weight = weight_variable()
+ bias = weight_variable()
+ return nn.Dense(input_channels, out_channels, weight, bias)
+
+
+def weight_variable():
+ """weight initial"""
+ return TruncatedNormal(0.02)
+
+
+class LeNet5(nn.Cell):
+ """
+ Lenet network
+ Args:
+ num_class (int): Num classes. Default: 10.
+
+ Returns:
+ Tensor, output tensor
+
+ Examples:
+ >>> LeNet(num_class=10)
+ """
+
+ def __init__(self, num_class=10):
+ super(LeNet5, self).__init__()
+ self.num_class = num_class
+ self.batch_size = 32
+ self.conv1 = conv(1, 6, 5)
+ self.conv2 = conv(6, 16, 5)
+ self.fc1 = fc_with_initialize(16 * 5 * 5, 120)
+ self.fc2 = fc_with_initialize(120, 84)
+ self.fc3 = fc_with_initialize(84, self.num_class)
+ self.relu = nn.ReLU()
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.reshape = P.Reshape()
+
+ def construct(self, x):
+ x = self.conv1(x)
+ x = self.relu(x)
+ x = self.max_pool2d(x)
+ x = self.conv2(x)
+ x = self.relu(x)
+ x = self.max_pool2d(x)
+ x = self.reshape(x, (self.batch_size, -1))
+ x = self.fc1(x)
+ x = self.relu(x)
+ x = self.fc2(x)
+ x = self.relu(x)
+ x = self.fc3(x)
+ return x
+
+
+class TrainOneStepCell(nn.Cell):
+ def __init__(self, network, optimizer, sens=1.0):
+ super(TrainOneStepCell, self).__init__(auto_prefix=False)
+ self.network = network
+ self.weights = ParameterTuple(network.trainable_params())
+ self.optimizer = optimizer
+ self.grad = C.GradOperation(get_by_list=True, sens_param=True)
+ self.sens = sens
+
+ def set_sens(self, value):
+ self.sens = value
+
+ def construct(self, data, label):
+ weights = self.weights
+ loss = self.network(data, label)
+ sens = P.Fill()(P.DType()(loss), P.Shape()(loss), self.sens)
+ grads = self.grad(self.network, weights)(data, label, sens)
+ return F.depend(loss, self.optimizer(grads))
+
+
+if __name__ == "__main__":
+ context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
+ ds_train = create_dataset(os.path.join("/home/workspace/mindspore_dataset/MNIST_Data/", "train"), 32)
+
+ network = LeNet5(10)
+ net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
+ net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
+ net = WithLossCell(network, net_loss)
+ net = TrainOneStepCell(net, net_opt)
+ dataset_helper = DatasetHelper(ds_train, dataset_sink_mode=True, sink_size=100, epoch_num=10)
+ net = connect_network_with_dataset(net, dataset_helper)
+ network.set_train()
+ print("============== Starting Training ==============")
+ epoch = 10
+ for step in range(epoch):
+ for inputs in dataset_helper:
+ output = net(*inputs)
+            print("epoch: {0}/{1}, losses: {2}".format(step + 1, epoch, output.asnumpy()), flush=True)
+```
+
+The output is as follows:
+```
+epoch: 1/10, losses: 2.294034719467163
+epoch: 2/10, losses: 2.3150298595428467
+epoch: 3/10, losses: 2.3107073307037354
+epoch: 4/10, losses: 2.3155436515808105
+epoch: 5/10, losses: 2.28973388671875
+epoch: 6/10, losses: 2.3108928203582764
+epoch: 7/10, losses: 2.293713092803955
+epoch: 8/10, losses: 2.29837703704834
+epoch: 9/10, losses: 2.305952548980713
+epoch: 10/10, losses: 1.4282708168029785
+```
+
+> A typical use case is gradient accumulation. See [Gradient Accumulation](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/apply_gradient_accumulation.html) for details.
+
+## Inference While Training
+For complex networks with large datasets and long training times, you can track how the model's accuracy evolves at different stages by running inference during training. See [Synchronizing Training and Validation](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/synchronization_training_and_evaluation.html) for details.
+
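The control flow behind this pattern, running an evaluation pass every N training epochs and keeping the best metric, is framework-independent. A minimal sketch, with hypothetical `train_one_epoch` and `evaluate` callables standing in for the real training and validation steps:

```python
def train_with_eval(train_one_epoch, evaluate, epochs, eval_interval=2):
    """Run `evaluate` every `eval_interval` epochs and track the best metric.

    `train_one_epoch` and `evaluate` are hypothetical callables standing in
    for one epoch of training and one validation pass, respectively.
    """
    history, best = [], None
    for epoch in range(1, epochs + 1):
        train_one_epoch(epoch)
        if epoch % eval_interval == 0:
            metric = evaluate()
            history.append((epoch, metric))
            if best is None or metric > best:
                best = metric
    return history, best

# Toy stand-ins: validation accuracy "improves" at each evaluation.
acc = iter([0.71, 0.83, 0.90])
history, best = train_with_eval(lambda e: None, lambda: next(acc), epochs=6)
print(history)  # [(2, 0.71), (4, 0.83), (6, 0.90)]
print(best)     # 0.9
```
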
+## On-Device Execution
+The backends currently supported by MindSpore are Ascend, GPU, and CPU. The "device" in "on-device" usually refers to the Ascend AI processor.
+
+The Ascend chip integrates the AICORE, AICPU, and CPU. The AICORE handles large tensor and vector computations, the AICPU handles scalar computations, and the CPU handles logic control and task dispatch.
+
+The host-side CPU delivers graphs or operators to the Ascend chip. Because the Ascend chip itself provides computation, logic control, and task dispatch, it does not need to interact frequently with the host-side CPU; it only returns the final results to the host. The whole graph thus sinks to the device for execution, avoiding frequent host-device interaction and reducing overhead.
+
+The main components of the device are:
+- 32 GB on-chip memory: 5 GB (parameters) + 26 GB (feature maps) + 1 GB (HCCL)
+- Multi-pipeline parallelism: 6 pipelines
+- AICORE and bandwidth: 32 cores, 128 GB/s read/write bandwidth
+- Communication protocols: HCCS, PCIe 4.0, RoCE v2
+
+### Computational Graph Sinking
+The entire computational graph sinks to the device for execution, reducing host-device interaction overhead. Combined with loop sinking, multiple steps can be sunk at once, further reducing the number of host-device interactions.
+
+Loop sinking is an optimization on top of on-device execution whose goal is to further reduce the number of interactions between the host and the device. Normally each step returns a result; loop sinking controls how many steps pass before a result is returned.
+
+By default a result is returned once per epoch, so the host and the device exchange data only once in each epoch.
+
+You can also combine the `dataset_sink_mode` and `sink_size` parameters of the `train` interface to control the amount of data sunk per epoch.
+
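As a rough sketch of the bookkeeping behind loop sinking (a hypothetical helper, not a MindSpore API): with loop sinking, one result exchange covers `sink_size` steps instead of one:

```python
import math

def host_device_exchanges(steps_per_epoch, sink_size):
    """Result exchanges between host and device in one epoch.

    Hypothetical bookkeeping helper: without loop sinking each step returns
    a result; with loop sinking one result covers `sink_size` steps.
    """
    if sink_size <= 0:
        return steps_per_epoch
    return math.ceil(steps_per_epoch / sink_size)

print(host_device_exchanges(1875, 0))     # 1875 (no loop sinking)
print(host_device_exchanges(1875, 1875))  # 1 (whole epoch sunk at once)
```
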
+### Data Sinking
+The `dataset_sink_mode` parameter of `Model`'s `train` interface controls whether data sinking is enabled. `dataset_sink_mode=True` enables data sinking; otherwise data is fed without sinking. Sinking means that data is transferred directly to the device through a channel.
+
+The `dataset_sink_mode` parameter can be combined with `sink_size` to control the amount of data sunk per epoch. When `dataset_sink_mode` is set to True, that is, in data sinking mode:
+
+If `sink_size` is the default value -1, each epoch sinks the full original dataset.
+
+If `sink_size` > 0, the original dataset can be traversed an unlimited number of times; each epoch sinks `sink_size` data items, and the next epoch continues from where the previous traversal ended.
+
+The total amount of data sunk is controlled jointly by `epoch` and `sink_size`: total data = `epoch` * `sink_size`.
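This relationship can be sketched with a hypothetical helper (for illustration only, not a MindSpore API):

```python
def total_sunk_steps(epoch, sink_size, dataset_size):
    """Total batches sunk to the device across training.

    Hypothetical helper: `sink_size == -1` means one full pass over the
    dataset per epoch, matching the semantics described above.
    """
    per_epoch = dataset_size if sink_size == -1 else sink_size
    return epoch * per_epoch

# MNIST with batch_size=32 gives 1875 batches per full pass.
print(total_sunk_steps(epoch=10, sink_size=-1, dataset_size=1875))    # 18750
print(total_sunk_steps(epoch=10, sink_size=1000, dataset_size=1875))  # 10000
```
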
+
+A code example is as follows:
+```python
+import os
+
+import mindspore.dataset as ds
+import mindspore.dataset.transforms.c_transforms as CT
+import mindspore.dataset.vision.c_transforms as CV
+import mindspore.nn as nn
+from mindspore import context
+from mindspore.common import dtype as mstype
+from mindspore.common.initializer import TruncatedNormal
+from mindspore.dataset.vision import Inter
+from mindspore.nn.metrics import Accuracy
+from mindspore.ops import operations as P
+from mindspore.train import Model
+from mindspore.train.callback import LossMonitor
+
+
+def create_dataset(data_path, batch_size=32, repeat_size=1,
+ num_parallel_workers=1):
+ """
+ create dataset for train or test
+ """
+ # define dataset
+ mnist_ds = ds.MnistDataset(data_path)
+
+ resize_height, resize_width = 32, 32
+ rescale = 1.0 / 255.0
+ shift = 0.0
+ rescale_nml = 1 / 0.3081
+ shift_nml = -1 * 0.1307 / 0.3081
+
+ # define map operations
+ resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) # Bilinear mode
+ rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
+ rescale_op = CV.Rescale(rescale, shift)
+ hwc2chw_op = CV.HWC2CHW()
+ type_cast_op = CT.TypeCast(mstype.int32)
+
+ # apply map operations on images
+ mnist_ds = mnist_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=resize_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=rescale_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(input_columns="image", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)
+
+ # apply DatasetOps
+ buffer_size = 10000
+ mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script
+ mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
+ mnist_ds = mnist_ds.repeat(repeat_size)
+
+ return mnist_ds
+
+
+def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):
+ """weight initial for conv layer"""
+ weight = weight_variable()
+ return nn.Conv2d(in_channels, out_channels,
+ kernel_size=kernel_size, stride=stride, padding=padding,
+ weight_init=weight, has_bias=False, pad_mode="valid")
+
+
+def fc_with_initialize(input_channels, out_channels):
+ """weight initial for fc layer"""
+ weight = weight_variable()
+ bias = weight_variable()
+ return nn.Dense(input_channels, out_channels, weight, bias)
+
+
+def weight_variable():
+ """weight initial"""
+ return TruncatedNormal(0.02)
+
+
+class LeNet5(nn.Cell):
+ """
+ Lenet network
+ Args:
+ num_class (int): Num classes. Default: 10.
+
+ Returns:
+ Tensor, output tensor
+
+ Examples:
+ >>> LeNet(num_class=10)
+ """
+
+ def __init__(self, num_class=10):
+ super(LeNet5, self).__init__()
+ self.num_class = num_class
+ self.batch_size = 32
+ self.conv1 = conv(1, 6, 5)
+ self.conv2 = conv(6, 16, 5)
+ self.fc1 = fc_with_initialize(16 * 5 * 5, 120)
+ self.fc2 = fc_with_initialize(120, 84)
+ self.fc3 = fc_with_initialize(84, self.num_class)
+ self.relu = nn.ReLU()
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.reshape = P.Reshape()
+
+ def construct(self, x):
+ x = self.conv1(x)
+ x = self.relu(x)
+ x = self.max_pool2d(x)
+ x = self.conv2(x)
+ x = self.relu(x)
+ x = self.max_pool2d(x)
+ x = self.reshape(x, (self.batch_size, -1))
+ x = self.fc1(x)
+ x = self.relu(x)
+ x = self.fc2(x)
+ x = self.relu(x)
+ x = self.fc3(x)
+ return x
+
+
+if __name__ == "__main__":
+ context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
+ ds_train = create_dataset(os.path.join("/home/workspace/mindspore_dataset/MNIST_Data/", "train"), 32)
+
+ network = LeNet5(10)
+ net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
+ net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9)
+ model = Model(network, net_loss, net_opt)
+
+ print("============== Starting Training ==============")
+ model.train(epoch=10, train_dataset=ds_train, callbacks=[LossMonitor()], dataset_sink_mode=True, sink_size=1000)
+```
+
+With a `batch_size` of 32, the dataset contains 1875 batches. Setting `sink_size` to 1000 means each `epoch` sinks 1000 batches; with `epoch`=10 sink rounds, the total amount of data sunk is `epoch`*`sink_size`=10000 batches.
+
+The output is as follows:
+```
+epoch: 1 step: 1000, loss is 0.5399815
+epoch: 2 step: 1000, loss is 0.033433747
+epoch: 3 step: 1000, loss is 0.054761313
+epoch: 4 step: 1000, loss is 0.007882872
+epoch: 5 step: 1000, loss is 0.00658499
+epoch: 6 step: 1000, loss is 0.0413095
+epoch: 7 step: 1000, loss is 0.13373856
+epoch: 8 step: 1000, loss is 0.015793817
+epoch: 9 step: 1000, loss is 0.00017951085
+epoch: 10 step: 1000, loss is 0.01490275
+```
+
+> When `dataset_sink_mode` is False, the `sink_size` parameter has no effect.
\ No newline at end of file
diff --git a/docs/programming_guide/source_zh_cn/user_defined.rst b/docs/programming_guide/source_zh_cn/user_defined.rst
new file mode 100644
index 0000000000000000000000000000000000000000..706c60a72fa00e67965bf1724b01351e393ff6e0
--- /dev/null
+++ b/docs/programming_guide/source_zh_cn/user_defined.rst
@@ -0,0 +1,9 @@
+Customization
+=============
+
+.. toctree::
+ :maxdepth: 1
+
+ 自定义TBE算子
+ 自定义GPU算子
+ 自定义CPU算子
diff --git a/docs/source_en/design.rst b/docs/source_en/design.rst
deleted file mode 100644
index 359add5edcdd0d373da5eb99037c88cf5bfd99e7..0000000000000000000000000000000000000000
--- a/docs/source_en/design.rst
+++ /dev/null
@@ -1,11 +0,0 @@
-Design
-===========
-
-.. toctree::
- :maxdepth: 1
-
- architecture
- design/mindspore/ir
- design/mindinsight/training_visual_design
- design/mindinsight/graph_visual_design
- design/mindinsight/tensor_visual_design
\ No newline at end of file
diff --git a/docs/source_en/design/mindinsight/images/graph_visual_main.png b/docs/source_en/design/mindinsight/images/graph_visual_main.png
deleted file mode 100644
index 55ca7d7183c818a15b69a3a6ee2c4ef29655460c..0000000000000000000000000000000000000000
Binary files a/docs/source_en/design/mindinsight/images/graph_visual_main.png and /dev/null differ
diff --git a/docs/source_en/design/mindinsight/images/graph_visual_right_side.png b/docs/source_en/design/mindinsight/images/graph_visual_right_side.png
deleted file mode 100644
index 90e8d868b5ff9d68ae14d55d8f3ff188db412556..0000000000000000000000000000000000000000
Binary files a/docs/source_en/design/mindinsight/images/graph_visual_right_side.png and /dev/null differ
diff --git a/docs/source_en/design/mindinsight/images/tensor_table.png b/docs/source_en/design/mindinsight/images/tensor_table.png
deleted file mode 100644
index 725bd9f8481826d682b593c2224a766854e9b4f8..0000000000000000000000000000000000000000
Binary files a/docs/source_en/design/mindinsight/images/tensor_table.png and /dev/null differ
diff --git a/docs/source_en/network_list.md b/docs/source_en/network_list.md
deleted file mode 100644
index 897111be5078687a3c4b4671c0c9f05904226128..0000000000000000000000000000000000000000
--- a/docs/source_en/network_list.md
+++ /dev/null
@@ -1,60 +0,0 @@
-# Network List
-
-`Linux` `Ascend` `GPU` `CPU` `Model Development` `Intermediate` `Expert`
-
-
-
-- [Network List](#network-list)
- - [Model Zoo](#model-zoo)
- - [Pre-trained Models](#pre-trained-models)
-
-
-
-
-
-## Model Zoo
-
-| Domain | Sub Domain | Network | Ascend | GPU | CPU
-|:------ |:------| :----------- |:------ |:------ |:-----
-|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing
-|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
-|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
-|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Mobile Image Classification
Image Classification
Semantic Tegmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Mobile Image Classification
Image Classification
Semantic Tegmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
-|Computer Vision (CV) | Targets Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported |Doing | Doing
-| Computer Vision (CV) | Targets Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
-| Computer Vision(CV) | Targets Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
-| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
-| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
-| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
-| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
-| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
-
-> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate classic network scripts.
-
-## Pre-trained Models
-*The asterisked column refers to the released MindSpore version. The hardware platforms that support model training are CPU, GPU and Ascend. In the table below, ✓ indicates the platform on which the pre-trained model runs.
-
-| Domain | Sub Domain | Network | Dataset | CPU | GPU | Ascend | 0.5.0-beta*
-|:------ |:------ | :------- |:------ |:------ |:------ |:----- |:-----
-|Computer Vision (CV) | Image Classification| [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | CIFAR-10| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/alexnet/alexnet_ascend_0.5.0_cifar10_official_classification_20200716.tar.gz)
-|Computer Vision (CV) | Image Classification| [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py)| MNIST | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/lenet/lenet_ascend_0.5.0_mnist_official_classification_20200716.tar.gz)
-|Computer Vision (CV) | Image Classification| [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py)| CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/vgg/vgg16_ascend_0.5.0_cifar10_official_classification_20200715.tar.gz)
-|Computer Vision (CV) | Image Classification| [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | CIFAR-10| | | ✓ |[Download](http://download.mindspore.cn/model_zoo/official/cv/resnet/resnet50_v1.5_ascend_0.3.0_cifar10_official_classification_20200718.tar.gz)
-|Computer Vision (CV) | Targets Detection| [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | COCO 2014| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/yolo/yolov3_darknet53_ascend_0.5.0_coco2014_official_object_detection_20200717.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [BERT_Base](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | zhwiki | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_base_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [BERT_NEZHA](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py)| zhwiki| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_nezha_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py)| WMT English-German| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/transformer/transformer_ascend_0.5.0_wmtende_official_machine_translation_20200713.tar.gz)
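Each download above is a gzipped tar archive containing a MindSpore checkpoint (`.ckpt`) file. A minimal stdlib sketch for unpacking one after download (the path-traversal guard is a general precaution for untrusted archives, not something these archives are known to need):

```python
import tarfile
from pathlib import Path

def extract_checkpoint_archive(archive_path, dest_dir):
    """Unpack a model_zoo .tar.gz archive into dest_dir and
    return the extracted file paths, sorted."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        for member in tar.getmembers():
            # Refuse members whose resolved path would escape dest_dir
            # (absolute paths or ".." components in a crafted archive).
            target = dest / member.name
            if not target.resolve().is_relative_to(dest.resolve()):
                raise ValueError(f"unsafe member path: {member.name}")
            tar.extract(member, dest)
    return sorted(p for p in dest.rglob("*") if p.is_file())
```

The extracted `.ckpt` file can then be loaded into the matching network with `mindspore.train.serialization.load_checkpoint` and `load_param_into_net`.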
diff --git a/docs/source_en/operator_list.md b/docs/source_en/operator_list.md
deleted file mode 100644
index 672de46b5ab7e69e5c8743b03fa3cfd323d899d7..0000000000000000000000000000000000000000
--- a/docs/source_en/operator_list.md
+++ /dev/null
@@ -1,535 +0,0 @@
-# Operator List
-
-`Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert`
-
-
-
-- [Operator List](#operator-list)
- - [mindspore.nn](#mindsporenn)
- - [mindspore.ops.operations](#mindsporeopsoperations)
- - [mindspore.ops.functional](#mindsporeopsfunctional)
- - [Distributed Operator](#distributed-operator)
- - [Implicit Type Conversion](#implicit-type-conversion)
- - [conversion rules](#conversion-rules)
- - [data types involved in conversion](#data-types-involved-in-conversion)
- - [support ops](#support-ops)
-
-
-
-
-
-## mindspore.nn
-
-| Operation | Ascend | GPU | CPU |Operator Type
-| :----------- |:------ |:------ |:-----|:---
-| [mindspore.nn.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Softmax) | Supported | Supported | Supported |layer/activation
-| [mindspore.nn.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LogSoftmax) | Supported | Supported | Doing |layer/activation
-| [mindspore.nn.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ReLU) | Supported | Supported | Supported |layer/activation
-| [mindspore.nn.ReLU6](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ReLU6) |Supported | Supported | Supported |layer/activation
-| [mindspore.nn.HSwish](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSwish) | Doing | Supported | Doing |layer/activation
-| [mindspore.nn.HSigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSigmoid) | Doing | Supported | Doing |layer/activation
-| [mindspore.nn.LeakyReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLU) | Supported |Supported | Doing |layer/activation
-| [mindspore.nn.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Tanh) | Supported | Supported | Doing |layer/activation
-| [mindspore.nn.GELU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GELU) | Supported | Supported | Doing |layer/activation
-| [mindspore.nn.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Sigmoid) | Supported |Supported | Doing |layer/activation
-| [mindspore.nn.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.PReLU) | Supported |Doing | Doing |layer/activation
-| [mindspore.nn.Dropout](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dropout) |Supported | Supported | Supported |layer/basic
-| [mindspore.nn.Flatten](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Supported |layer/basic
-| [mindspore.nn.Dense](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dense) |Supported | Supported | Supported |layer/basic
-| [mindspore.nn.ClipByNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ClipByNorm) |Supported | Supported | Doing |layer/basic
-| [mindspore.nn.Norm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Doing | Supported | Doing |layer/basic
-| [mindspore.nn.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.OneHot) | Supported | Supported | Supported |layer/basic
-| [mindspore.nn.Range](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Range) | Supported | Doing | Doing |layer/basic
-| [mindspore.nn.SequentialCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SequentialCell) |Supported | Supported | Doing |layer/container
-| [mindspore.nn.CellList](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.CellList) | Supported | Supported | Doing |layer/container
-| [mindspore.nn.Conv2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) | Supported | Supported | Supported |layer/conv
-| [mindspore.nn.Conv2dTranspose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dTranspose) | Supported | Supported | Doing |layer/conv
-| [mindspore.nn.Conv1d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv1d) | Supported | Supported | Doing |layer/conv
-| [mindspore.nn.Conv1dTranspose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv1dTranspose) | Supported | Supported | Doing |layer/conv
-| [mindspore.nn.Embedding](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Embedding) |Supported | Supported | Doing |layer/embedding
-| [mindspore.nn.ImageGradients](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ImageGradients) | Supported |Supported | Doing |layer/image
-| [mindspore.nn.SSIM](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SSIM) | Supported | Supported | Doing |layer/image
-| [mindspore.nn.PSNR](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.PSNR) | Supported |Doing | Doing |layer/image
-| [mindspore.nn.CentralCrop](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.CentralCrop) | Supported |Doing | Doing |layer/image
-| [mindspore.nn.LSTM](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LSTM) | Doing | Supported | Supported |layer/lstm
-| [mindspore.nn.GlobalBatchNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GlobalBatchNorm) | Supported |Doing | Doing |layer/normalization
-| [mindspore.nn.BatchNorm1d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm1d) | Supported |Doing | Doing |layer/normalization
-| [mindspore.nn.BatchNorm2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) | Supported | Supported | Doing |layer/normalization
-| [mindspore.nn.GroupNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GroupNorm) | Supported | Doing | Doing |layer/normalization
-| [mindspore.nn.LayerNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LayerNorm) | Supported | Supported | Doing |layer/normalization
-| [mindspore.nn.MatrixDiag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiag) | Supported | Doing | Doing | layer/normalization
-| [mindspore.nn.MatrixDiagPart](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiagPart) | Supported | Doing | Doing | layer/normalization
-| [mindspore.nn.MatrixSetDiag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixSetDiag) | Supported | Doing | Doing | layer/normalization
-| [mindspore.nn.LinSpace](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LinSpace) | Supported | Doing | Doing | layer/normalization
-| [mindspore.nn.MaxPool2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MaxPool2d) | Supported | Supported | Supported |layer/pooling
-| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) | Supported | Supported | Doing |layer/pooling
-| [mindspore.nn.DenseBnAct](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseBnAct) |Supported | Doing | Doing |layer/quant
-| [mindspore.nn.Conv2dBnAct](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnAct) | Supported | Supported | Doing |layer/quant
-| [mindspore.nn.L1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Supported |Supported | Doing |loss/loss
-| [mindspore.nn.MSELoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MSELoss) | Supported |Doing | Doing |loss/loss
-| [mindspore.nn.SmoothL1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SmoothL1Loss) |Supported |Doing | Doing |loss/loss
-| [mindspore.nn.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyWithLogits) | Supported | Supported | Supported |loss/loss
-| [mindspore.nn.SoftmaxCrossEntropyExpand](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyExpand) | Supported |Supported | Doing |loss/loss
-| [mindspore.nn.CosineEmbeddingLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.CosineEmbeddingLoss) |Supported |Supported | Doing |loss/loss
-| [mindspore.nn.ProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ProximalAdagrad) | Supported | Doing | Doing |optim/ProximalAdagrad
-| [mindspore.nn.LazyAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LazyAdam) | Supported | Doing | Doing |optim/lazyadam
-| [mindspore.nn.Adam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Adam) | Supported |Doing | Doing |optim/adam
-| [mindspore.nn.AdamWeightDecay](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AdamWeightDecay) | Supported | Supported | Doing |optim/adam
-| [mindspore.nn.Lamb](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Lamb) | Supported | Supported | Doing |optim/lamb
-| [mindspore.nn.LARS](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LARS) |Supported |Doing | Doing |optim/lars
-| [mindspore.nn.Momentum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Momentum) | Supported | Supported | Supported |optim/momentum
-| [mindspore.nn.Optimizer](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Optimizer) | Supported | Supported | Doing |optim/optimizer
-| [mindspore.nn.RMSProp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.RMSProp) | Supported | Supported | Doing |optim/optimizer
-| [mindspore.nn.SGD](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SGD) |Supported |Doing | Doing |optim/sgd
-| [mindspore.nn.WithLossCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithLossCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.WithGradCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithGradCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.TrainOneStepCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.DataWrapper](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DataWrapper) |Doing | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.GetNextSingleOp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GetNextSingleOp) |Doing | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.WithEvalCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithEvalCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.ParameterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ParameterUpdate) | Supported |Doing | Doing |wrap/cell_wrapper
-| [mindspore.nn.DistributedGradReducer](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DistributedGradReducer) | Supported |Doing | Doing |wrap/grad_reducer
-| [mindspore.nn.DynamicLossScaleUpdateCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DynamicLossScaleUpdateCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.FixedLossScaleUpdateCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FixedLossScaleUpdateCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.TrainOneStepWithLossScaleCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepWithLossScaleCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.Cell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell) | Supported | Supported | Supported |cell
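Support matrices like the one above are easy to query programmatically, for example to list the operators marked `Supported` for one backend. A small sketch that filters markdown table rows by backend column (the three rows below are a hand-copied subset used purely as sample data, with placeholder URLs):

```python
def supported_ops(table_rows, backend):
    """Return operator names from markdown table rows whose
    backend column ("Ascend", "GPU" or "CPU") reads "Supported"."""
    col = {"Ascend": 1, "GPU": 2, "CPU": 3}[backend]
    ops = []
    for row in table_rows:
        # Drop the leading "|" and split into cells:
        # [link, Ascend, GPU, CPU, operator type].
        cells = [c.strip() for c in row.strip().strip("|").split("|")]
        if len(cells) > col and cells[col] == "Supported":
            # Cell 0 holds a markdown link "[name](url)"; keep the name.
            ops.append(cells[0].lstrip("[").split("]")[0])
    return ops

rows = [
    "| [mindspore.nn.Softmax](https://example.invalid) | Supported | Supported | Supported | layer/activation",
    "| [mindspore.nn.HSwish](https://example.invalid) | Doing | Supported | Doing | layer/activation",
    "| [mindspore.nn.PReLU](https://example.invalid) | Supported | Doing | Doing | layer/activation",
]
```

Calling `supported_ops(rows, "GPU")` on this sample returns `["mindspore.nn.Softmax", "mindspore.nn.HSwish"]`.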
-
-## mindspore.ops.operations
-
-| Operation | Ascend | GPU | CPU |Operator Type
-| :----------- |:------ |:------ |:-----|:---
-| [mindspore.ops.operations.Flatten](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Flatten) | Supported | Supported |Supported | nn_ops
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Acosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Acosh) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Elu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Elu) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Unpack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Unpack) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.L2Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Loss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.CTCLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CTCLoss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.RNNTLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RNNTLoss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Softplus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softplus) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ReLU6](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU6) | Supported | Supported |Supported | nn_ops
-| [mindspore.ops.operations.HSwish](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSwish) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.HSigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSigmoid) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.BatchNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchNorm) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.LRN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LRN) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.Conv2D](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2D) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNative](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNative) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropFilter](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropFilter) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.MaxPoolWithArgmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MaxPoolWithArgmax) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.MaxPool](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MaxPool) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.AvgPool](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AvgPool) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Conv2DBackpropInput](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2DBackpropInput) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.TopK](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TopK) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits) | Doing | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ApplyMomentum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyMomentum) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ApplyAddSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAddSign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyPowerSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyPowerSign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyGradientDescent) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyProximalGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalGradientDescent) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyRMSProp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyRMSProp) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.ApplyCenteredRMSProp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyCenteredRMSProp) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyAdagradV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagradV2) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseProximalAdagrad) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.ApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FusedSparseLazyAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseLazyAdam) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.FusedSparseAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseAdam) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.SmoothL1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SmoothL1Loss) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.SGD](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SGD) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.LayerNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LayerNorm) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ResizeBilinear](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeBilinear) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.GetNext](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GetNext) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LSTM](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LSTM) | Doing | Supported | Supported | nn_ops
-| [mindspore.ops.operations.BasicLSTMCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BasicLSTMCell) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Pad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pad) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.ROIAlign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ROIAlign) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Adam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Adam) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.BinaryCrossEntropy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BinaryCrossEntropy) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.KLDivLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.KLDivLoss) | Doing | Supported | Doing | nn_ops
-| [mindspore.ops.operations.LARSUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LARSUpdate) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Softsign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softsign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.AssignAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignAdd) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceAll](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceAll) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.ReduceProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceProd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.CumProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CumProd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.CumSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CumSum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.AddN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AddN) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.SquareSumAll](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquareSumAll) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Rsqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rsqrt) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Reciprocal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reciprocal) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Log1p](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log1p) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.DivNoNan](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DivNoNan) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Floor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Floor) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.EqualCount](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EqualCount) | Doing | Supported | Supported | math_ops
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.GreaterEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GreaterEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Less](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Less) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Atan2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Atan2) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.LessEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LessEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.BitwiseAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseAnd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BitwiseOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseOr) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Ceil](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Ceil) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Inv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Inv) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Invert](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Invert) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BitwiseXor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseXor) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUAllocFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUAllocFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUGetFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUGetFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUClearFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUClearFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.FloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloatStatus) | Doing | Supported | Doing | math_ops
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Cosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cosh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BesselI0e](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BesselI0e) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BesselI1e](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BesselI1e) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.TruncateDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateDiv) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.TruncateMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateMod) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Tan](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tan) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Asin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Asin) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Asinh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Asinh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Erf](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Erf) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Erfc](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Erfc) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sin) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sinh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sinh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Expm1](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Expm1) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NMSWithMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NMSWithMask) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Abs](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Abs) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Sign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sign) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Round](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Round) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ApproximateEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApproximateEqual) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.InplaceAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceAdd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.InplaceSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceSub) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Mod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mod) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.DType](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DType) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.SameTypeShape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SameTypeShape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.IsSubClass](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsSubClass) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.IsInstance](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsInstance) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Shape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Shape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Split](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Split) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.Rank](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rank) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.TruncatedNormal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncatedNormal) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.Size](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Size) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Fill](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Fill) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.OnesLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OnesLike) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ZerosLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ZerosLike) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.TupleToArray](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TupleToArray) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScalarToArray](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarToArray) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScalarToTensor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarToTensor) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.InvertPermutation](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InvertPermutation) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Argmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Argmax) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Argmin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Argmin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentSum) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentMin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentProd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ParallelConcat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ParallelConcat) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Slice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Slice) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Select](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Select) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.DiagPart](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DiagPart) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.Eye](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Eye) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ResizeNearestNeighbor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeNearestNeighbor) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyFtrl) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.FusedSparseFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseFtrl) | Doing | Doing | Supported | array_ops
-| [mindspore.ops.operations.SparseApplyFtrlV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrlV2) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate) | Supported | Doing | Supported | array_ops
-| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMul) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterDiv) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToDepth](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToDepth) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.DepthToSpace](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthToSpace) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToBatch](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToBatch) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToBatchND](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToBatchND) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.BatchToSpace](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchToSpace) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.BatchToSpaceND](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchToSpaceND) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.IsFinite](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsFinite) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.InplaceUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceUpdate) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterSub) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMax) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdAdd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdSub) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNonAliasingAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNonAliasingAdd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Rint](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rint) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ReverseV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReverseV2) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ReduceOp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceOp) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.AllReduce](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AllReduce) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.AllGather](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AllGather) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.ReduceScatter](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceScatter) | Doing | Supported | Doing | comm_ops
-| [mindspore.ops.operations.Broadcast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Broadcast) | Supported | Doing | Doing | comm_ops
-| [mindspore.ops.operations.ControlDepend](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ControlDepend) | Supported | Supported | Supported | control_ops
-| [mindspore.ops.operations.GeSwitch](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GeSwitch) | Doing | Doing | Doing | control_ops
-| [mindspore.ops.operations.Merge](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Merge) | Doing | Doing | Doing | control_ops
-| [mindspore.ops.operations.ScalarSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.ImageSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ImageSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.TensorSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.HistogramSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.InsertGradientOf](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InsertGradientOf) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.Print](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Print) | Supported | Doing | Doing | debug_ops
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.BoundingBoxEncode](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BoundingBoxEncode) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.BoundingBoxDecode](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BoundingBoxDecode) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.PopulationCount](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PopulationCount) | Supported | Doing | Doing | other_ops
-| [mindspore.ops.operations.CheckValid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CheckValid) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.IOU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IOU) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.MakeRefKey](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MakeRefKey) | Supported | Supported | Supported | other_ops
-| [mindspore.ops.operations.InTopK](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InTopK) | Supported | Doing | Doing | other_ops
-| [mindspore.ops.operations.StandardNormal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StandardNormal) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.Gamma](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gamma) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.Poisson](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Poisson) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.UniformInt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UniformInt) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.UniformReal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UniformReal) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.RandomChoiceWithMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RandomChoiceWithMask) | Doing | Supported | Doing | random_ops
-| [mindspore.ops.operations.RandomCategorical](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RandomCategorical) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.ScalarCast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarCast) | Supported | Supported | Supported | inner_ops
-| [mindspore.ops.operations.ReverseSequence](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReverseSequence) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.CropAndResize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CropAndResize) | Supported | Doing | Doing | image_ops
-| [mindspore.ops.operations.SquaredDifference](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquaredDifference) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Xdivy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xdivy) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Xlogy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xlogy) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.HistogramFixedWidth](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
-
-## mindspore.ops.functional
-
-| Operation | functional Operation
-| :----------- | :-----------
-| [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | pack
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | tensor_add
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | assign_sub
-| [mindspore.ops.operations.AddN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AddN) | addn
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | square
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | sqrt
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | equal
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | not_equal
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | logical_not
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd) | logical_and
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr) | logical_or
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | expand_dims
-| [mindspore.ops.operations.DType](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DType) | dtype
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | cast
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | reshape
-| [mindspore.ops.operations.Shape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Shape) | shape
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | gather
-| [mindspore.ops.operations.Rank](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rank) | rank
-| [mindspore.ops.operations.Size](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Size) | size
-| [mindspore.ops.operations.Fill](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Fill) | fill
-| [mindspore.ops.operations.OnesLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OnesLike) | ones_like
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | tile
-| [mindspore.ops.operations.Select](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Select) | select
-| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | scatter_nd
-| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | gather_nd
-| [mindspore.ops.operations.ControlDepend](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ControlDepend) | control_depend
-| [mindspore.ops.operations.Print](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Print) | print
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign) | assign
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | tensor_pow
-
-> At present, functional covers only some operators that have no attributes; coverage will be extended in the future.
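
As a rough illustration of the mapping above, a functional alias can simply wrap a shared instance of an attribute-free operator class, which is why only such operators are covered today. The sketch below is hypothetical plain Python (the `TensorAdd` class, `make_functional` helper, and `tensor_add` name mirror the table but are not MindSpore's actual implementation):

```python
# Hypothetical sketch: a functional alias wrapping an operator class
# that has no attributes (mirrors rows like TensorAdd -> tensor_add).
class TensorAdd:
    """Stand-in for an operator class with no attributes."""
    def __call__(self, x, y):
        return x + y

def make_functional(op_cls):
    # Operators without attributes can share one cached instance;
    # operators WITH attributes would need per-call configuration,
    # which is why functional does not yet cover them.
    instance = op_cls()
    def functional_op(*args):
        return instance(*args)
    return functional_op

tensor_add = make_functional(TensorAdd)
```

Under this sketch, `tensor_add(x, y)` behaves like instantiating and calling the operator class directly.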
-
-## Distributed Operator
-
-| op name | constraints
-| :----------- | :-----------
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | None
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | None
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | None
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | None
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | None
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | The logits cannot be split along the axis dimension; otherwise the result is mathematically inconsistent with single-device execution.
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | The logits cannot be split along the axis dimension; otherwise the result is mathematically inconsistent with single-device execution.
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | None
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | None
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | None
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | None
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | None
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | None
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | None
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | None
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | None
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | None
-| [mindspore.ops.operations.Dropout](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Dropout) | Repeated calculation is not supported.
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | None
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | None
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | None
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | None
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | None
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | None
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | None
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | None
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | None
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | None
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | None
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | None
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | None
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | None
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | None
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | The input_x cannot be split along the axis dimension; otherwise the result is mathematically inconsistent with single-device execution.
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Must be used together with `DropoutDoMask`.
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Must be used together with `DropoutGenMask`; configuring a shard strategy is not supported.
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Only 1-dim and 2-dim input_params are supported, and the last dimension of input_params must be 32-byte aligned; scalar input_indices is not supported; repeated calculation is not supported when input_params is split along the axis dimension; splitting input_indices and input_params at the same time is not supported.
-| [mindspore.ops.operations.SparseGatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseGatherV2) | The same as GatherV2.
-| [mindspore.ops.operations.EmbeddingLookup](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EmbeddingLookup) | The same as GatherV2.
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | The input_x cannot be split along the axis dimension; otherwise the result is mathematically inconsistent with single-device execution.
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | The last dimension of logits and labels cannot be split; only using output[0] is supported.
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | `transpose_a=True` is not supported.
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | `transpose_a=True` is not supported.
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | The shard strategy of input_x in the channel dimension must be consistent with that of weight.
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Only 1-dim indices are supported.
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | None
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | None
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | None
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | The output index can't be used as the input of other operators.
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | The output index can't be used as the input of other operators.
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | None
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Configuring shard strategy is not supported.
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Only masks with all 0 values are supported; the dimensions to be split must be fully extracted; splitting is not supported on dimensions whose strides value is not 1.
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Only configuring a shard strategy for multiples is supported.
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | None
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Configuring shard strategy is not supported.
-
-> Repeated calculation means that the devices are not fully used. For example, if a cluster has 8 devices running distributed training but the shard strategy cuts the input into only 4 slices, repeated calculation occurs.
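
The note above can be made concrete with a small sketch (plain Python, no MindSpore dependency; the function name and the tuple representation of a shard strategy are illustrative assumptions): a strategy that produces fewer slices than there are devices leaves some devices computing duplicate slices.

```python
from math import prod

def repeated_calculation(device_num, shard_strategy):
    """Return True if the shard strategy produces fewer slices than
    there are devices, i.e. some devices recompute the same slice
    ("repeated calculation" in the note above)."""
    slices = prod(shard_strategy)  # total number of input slices
    return slices < device_num

# 8 devices, but strategy (4, 1) cuts the input into only 4 slices:
# each slice is computed twice, so repeated calculation occurs.
```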
-
-## Implicit Type Conversion
-
-### Conversion Rules
-* Scalar and Tensor operations: during the operation, the scalar is automatically converted to a Tensor whose data type matches that of the Tensor involved in the operation;
-when the Tensor's data type is bool and the scalar is an int or float, both the scalar and the Tensor are converted to a Tensor of data type int32 or float32, respectively.
-* Tensor operations with different data types: the data type priority is bool < uint8 < int8 < int16 < int32 < int64 < float16 < float32, and the result is converted to the data type with the higher priority.
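
The two rules above can be sketched in plain Python (an illustration of the stated rules only, not MindSpore's implementation; the function names and string dtype labels are assumptions):

```python
# Priority order from the rule above: lower index = lower priority.
PRIORITY = ["bool", "uint8", "int8", "int16", "int32",
            "int64", "float16", "float32"]

def result_dtype(a, b):
    """Mixed-dtype Tensor op: result takes the higher-priority dtype."""
    return max(a, b, key=PRIORITY.index)

def scalar_tensor_dtype(tensor_dtype, scalar):
    """Scalar-Tensor op: the scalar follows the Tensor's dtype, except
    against a bool Tensor, where an int scalar yields int32 and a
    float scalar yields float32."""
    if tensor_dtype == "bool":
        return "int32" if isinstance(scalar, int) else "float32"
    return tensor_dtype
```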
-
-- [Network Support](#network-support)
- - [Model Zoo](#model-zoo)
- - [Pre-trained Models](#pre-trained-models)
-
-
-
-
-
-## Model Zoo
-
-| Domain | Sub-domain | Network | Ascend | GPU | CPU
-|:---- |:------- |:---- |:---- |:---- |:----
-| Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Mobile Image Classification<br>Object Detection<br>Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Mobile Image Classification<br>Object Detection<br>Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
-| Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Object Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Object Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
-| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
-| Recommender | Recommender System, CTR Prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
-| Recommender | Recommender System, Search, Ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
-| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
-| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
-
-> You can also use the [MindWizard tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate scripts for classic networks.
-
-## Pre-trained Models
-* denotes the released MindSpore version number. The hardware platforms that support network training are CPU, GPU and Ascend; in the following table, ✓ indicates that the model was trained on the selected hardware platform.
-
-| Domain | Sub-domain | Network | Dataset | CPU | GPU | Ascend | 0.5.0-beta*
-|:---- |:----- |:---- |:---- |:---- |:---- |:---- |:------
-| Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/alexnet/alexnet_ascend_0.5.0_cifar10_official_classification_20200716.tar.gz)
-| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | MNIST | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/lenet/lenet_ascend_0.5.0_mnist_official_classification_20200716.tar.gz)
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/vgg/vgg16_ascend_0.5.0_cifar10_official_classification_20200715.tar.gz)
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/resnet/resnet50_v1.5_ascend_0.3.0_cifar10_official_classification_20200718.tar.gz)
-| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/yolov3_darknet53) | COCO 2014 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/yolo/yolov3_darknet53_ascend_0.5.0_coco2014_official_object_detection_20200717.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT_Base](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | zhwiki | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_base_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT_NEZHA](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | zhwiki | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_nezha_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | WMT English-German | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/transformer/transformer_ascend_0.5.0_wmtende_official_machine_translation_20200713.tar.gz)
diff --git a/docs/source_zh_cn/operator_list.md b/docs/source_zh_cn/operator_list.md
deleted file mode 100644
index 016d4b5f8025ca4ffe97b5ff6a836b8c265c1f90..0000000000000000000000000000000000000000
--- a/docs/source_zh_cn/operator_list.md
+++ /dev/null
@@ -1,533 +0,0 @@
-# Operator Support
-
-`Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Advanced`
-
-
-
-- [Operator Support](#operator-support)
- - [mindspore.nn](#mindsporenn)
- - [mindspore.ops.operations](#mindsporeopsoperations)
- - [mindspore.ops.functional](#mindsporeopsfunctional)
- - [Distributed Operators](#distributed-operators)
- - [Implicit Type Conversion](#implicit-type-conversion)
-    - [Conversion Rules](#conversion-rules)
-    - [Data Types Involved in Conversion](#data-types-involved-in-conversion)
-    - [Supported Operators](#supported-operators)
-
-
-
-
-
-## mindspore.nn
-
-| Operation | Ascend | GPU | CPU | Operator Category
-| :----------- |:------ |:------ |:-----|:---
-| [mindspore.nn.Softmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Softmax) | Supported | Supported | Supported |layer/activation
-| [mindspore.nn.LogSoftmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LogSoftmax) | Supported | Supported | Doing |layer/activation
-| [mindspore.nn.ReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ReLU) | Supported | Supported | Supported |layer/activation
-| [mindspore.nn.ReLU6](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ReLU6) |Supported | Supported | Supported |layer/activation
-| [mindspore.nn.HSwish](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSwish) | Doing | Supported | Doing |layer/activation
-| [mindspore.nn.HSigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSigmoid) | Doing | Supported | Doing |layer/activation
-| [mindspore.nn.LeakyReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLU) | Supported |Supported | Doing |layer/activation
-| [mindspore.nn.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Tanh) | Supported | Supported | Doing |layer/activation
-| [mindspore.nn.GELU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GELU) | Supported | Supported | Doing |layer/activation
-| [mindspore.nn.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Sigmoid) | Supported |Supported | Doing |layer/activation
-| [mindspore.nn.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.PReLU) | Supported |Doing | Doing |layer/activation
-| [mindspore.nn.Dropout](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dropout) |Supported | Supported | Supported |layer/basic
-| [mindspore.nn.Flatten](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Supported |layer/basic
-| [mindspore.nn.Dense](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dense) |Supported | Supported | Supported |layer/basic
-| [mindspore.nn.ClipByNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ClipByNorm) |Supported | Supported | Doing |layer/basic
-| [mindspore.nn.Norm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Doing | Supported | Doing |layer/basic
-| [mindspore.nn.OneHot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.OneHot) | Supported | Supported | Supported |layer/basic
-| [mindspore.nn.Range](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Range) | Supported | Doing | Doing |layer/basic
-| [mindspore.nn.SequentialCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SequentialCell) |Supported | Supported | Doing |layer/container
-| [mindspore.nn.CellList](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.CellList) | Supported | Supported | Doing |layer/container
-| [mindspore.nn.Conv2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) | Supported | Supported | Supported |layer/conv
-| [mindspore.nn.Conv2dTranspose](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dTranspose) | Supported | Supported | Doing |layer/conv
-| [mindspore.nn.Conv1d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv1d) | Supported | Supported | Doing |layer/conv
-| [mindspore.nn.Conv1dTranspose](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv1dTranspose) | Supported | Supported | Doing |layer/conv
-| [mindspore.nn.Embedding](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Embedding) |Supported | Supported | Doing |layer/embedding
-| [mindspore.nn.ImageGradients](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ImageGradients) | Supported |Supported | Doing |layer/image
-| [mindspore.nn.SSIM](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SSIM) | Supported | Supported | Doing |layer/image
-| [mindspore.nn.PSNR](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.PSNR) | Supported |Doing | Doing |layer/image
-| [mindspore.nn.CentralCrop](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.CentralCrop) | Supported |Doing | Doing |layer/image
-| [mindspore.nn.LSTM](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LSTM) | Doing | Supported | Supported |layer/lstm
-| [mindspore.nn.GlobalBatchNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GlobalBatchNorm) | Supported |Doing | Doing |layer/normalization
-| [mindspore.nn.BatchNorm1d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm1d) | Supported |Doing | Doing |layer/normalization
-| [mindspore.nn.BatchNorm2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) | Supported | Supported | Doing |layer/normalization
-| [mindspore.nn.GroupNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GroupNorm) | Supported | Doing | Doing |layer/normalization
-| [mindspore.nn.LayerNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LayerNorm) | Supported | Supported | Doing |layer/normalization
-| [mindspore.nn.MatrixDiag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiag) | Supported | Doing | Doing | layer/normalization
-| [mindspore.nn.MatrixDiagPart](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiagPart) | Supported | Doing | Doing | layer/normalization
-| [mindspore.nn.MatrixSetDiag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixSetDiag) | Supported | Doing | Doing | layer/normalization
-| [mindspore.nn.LinSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LinSpace) | Supported | Doing | Doing | layer/normalization
-| [mindspore.nn.MaxPool2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MaxPool2d) | Supported | Supported | Supported |layer/pooling
-| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) | Supported | Supported | Doing |layer/pooling
-| [mindspore.nn.DenseBnAct](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseBnAct) |Supported | Doing | Doing |layer/quant
-| [mindspore.nn.Conv2dBnAct](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnAct) | Supported | Supported | Doing |layer/quant
-| [mindspore.nn.L1Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Supported |Supported | Doing |loss/loss
-| [mindspore.nn.MSELoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MSELoss) | Supported |Doing | Doing |loss/loss
-| [mindspore.nn.SmoothL1Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SmoothL1Loss) | Supported |Doing | Doing |loss/loss
-| [mindspore.nn.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyWithLogits) | Supported | Supported | Supported |loss/loss
-| [mindspore.nn.SoftmaxCrossEntropyExpand](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyExpand) | Supported |Supported | Doing |loss/loss
-| [mindspore.nn.CosineEmbeddingLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.CosineEmbeddingLoss) |Supported |Supported | Doing |loss/loss
-| [mindspore.nn.ProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ProximalAdagrad) | Supported |Doing | Doing |optim/ProximalAdagrad
-| [mindspore.nn.LazyAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LazyAdam) | Supported |Doing | Doing |optim/lazyadam
-| [mindspore.nn.Adam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Adam) | Supported |Doing | Doing |optim/adam
-| [mindspore.nn.AdamWeightDecay](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AdamWeightDecay) | Supported | Supported | Doing |optim/adam
-| [mindspore.nn.Lamb](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Lamb) | Supported | Supported | Doing |optim/lamb
-| [mindspore.nn.LARS](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LARS) |Supported |Doing | Doing |optim/lars
-| [mindspore.nn.Momentum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Momentum) | Supported | Supported | Supported |optim/momentum
-| [mindspore.nn.Optimizer](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Optimizer) | Supported | Supported | Doing |optim/optimizer
-| [mindspore.nn.RMSProp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.RMSProp) | Supported | Supported | Doing |optim/optimizer
-| [mindspore.nn.SGD](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SGD) |Supported |Supported | Doing |optim/sgd
-| [mindspore.nn.WithLossCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithLossCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.WithGradCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithGradCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.TrainOneStepCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.DataWrapper](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DataWrapper) |Doing | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.GetNextSingleOp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GetNextSingleOp) |Doing | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.WithEvalCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithEvalCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.ParameterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ParameterUpdate) | Supported |Doing | Doing |wrap/cell_wrapper
-| [mindspore.nn.DistributedGradReducer](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DistributedGradReducer) | Supported |Doing | Doing |wrap/grad_reducer
-| [mindspore.nn.DynamicLossScaleUpdateCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DynamicLossScaleUpdateCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.FixedLossScaleUpdateCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FixedLossScaleUpdateCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.TrainOneStepWithLossScaleCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepWithLossScaleCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.Cell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell) | Supported | Supported | Supported |cell
-
-## mindspore.ops.operations
-
-| 操作名 | Ascend | GPU | CPU |算子类别
-| :----------- |:------ |:------ |:-----|:---
-| [mindspore.ops.operations.Flatten](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Flatten) | Supported | Supported |Supported | nn_ops
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Acosh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Acosh) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Elu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Elu) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Unpack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Unpack) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | Supported| Doing | Doing | nn_ops
-| [mindspore.ops.operations.L2Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Loss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.CTCLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CTCLoss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.RNNTLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RNNTLoss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Softplus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softplus) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ReLU6](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU6) | Supported | Supported |Supported | nn_ops
-| [mindspore.ops.operations.HSwish](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSwish) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.HSigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSigmoid) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.BatchNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchNorm) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.LRN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LRN) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.Conv2D](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2D) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNative](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNative) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropFilter](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropFilter) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.MaxPoolWithArgmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MaxPoolWithArgmax) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.MaxPool](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MaxPool) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.AvgPool](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AvgPool) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Conv2DBackpropInput](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2DBackpropInput) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.TopK](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TopK) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits) | Doing | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ApplyMomentum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyMomentum) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ApplyAddSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAddSign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyPowerSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyPowerSign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyGradientDescent) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyProximalGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalGradientDescent) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyRMSProp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyRMSProp) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.ApplyCenteredRMSProp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyCenteredRMSProp) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagradV2) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseProximalAdagrad) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.ApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FusedSparseLazyAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseLazyAdam) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.FusedSparseAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseAdam) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.SmoothL1Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SmoothL1Loss) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.SGD](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SGD) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LayerNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LayerNorm) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ResizeBilinear](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeBilinear) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.GetNext](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GetNext) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LSTM](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LSTM) | Doing | Supported | Supported | nn_ops
-| [mindspore.ops.operations.BasicLSTMCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BasicLSTMCell) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Pad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pad) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.ROIAlign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ROIAlign) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Adam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Adam) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.BinaryCrossEntropy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BinaryCrossEntropy) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.KLDivLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.KLDivLoss) | Doing | Supported | Doing | nn_ops
-| [mindspore.ops.operations.LARSUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LARSUpdate) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Softsign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softsign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.AssignAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignAdd) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceAll](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceAll) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.ReduceProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceProd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.CumProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CumProd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.CumSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CumSum) | Supported | Supported| Doing | math_ops
-| [mindspore.ops.operations.AddN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AddN) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.SquareSumAll](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquareSumAll) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Rsqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rsqrt) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Reciprocal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reciprocal) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Log1p](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log1p) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.DivNoNan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DivNoNan) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Floor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Floor) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.EqualCount](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EqualCount) | Doing | Supported | Supported | math_ops
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.GreaterEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GreaterEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Less](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Less) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Atan2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Atan2) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.LessEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LessEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.BitwiseAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseAnd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BitwiseOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseOr) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BitwiseXor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseXor) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Ceil](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Ceil) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Inv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Inv) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Invert](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Invert) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUAllocFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUAllocFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUGetFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUGetFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUClearFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUClearFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.FloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloatStatus) | Doing | Supported | Doing | math_ops
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Cosh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cosh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BesselI0e](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BesselI0e) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BesselI1e](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BesselI1e) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.TruncateDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateDiv) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.TruncateMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateMod) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Tan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tan) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Asin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Asin) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Asinh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Asinh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Erf](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Erf) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Erfc](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Erfc) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sin) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sinh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sinh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Expm1](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Expm1) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NMSWithMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NMSWithMask) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Abs](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Abs) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Sign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sign) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Round](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Round) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ApproximateEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApproximateEqual) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.InplaceAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceAdd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.InplaceSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceSub) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Mod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mod) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.DType](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DType) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.SameTypeShape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SameTypeShape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.IsSubClass](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsSubClass) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.IsInstance](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsInstance) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Shape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Shape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Split](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Split) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.Rank](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rank) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.TruncatedNormal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncatedNormal) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.Size](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Size) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Fill](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Fill) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.OnesLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OnesLike) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ZerosLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ZerosLike) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.TupleToArray](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TupleToArray) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScalarToArray](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarToArray) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScalarToTensor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarToTensor) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.InvertPermutation](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InvertPermutation) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Argmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Argmax) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Argmin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Argmin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentSum) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentMin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentProd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ParallelConcat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ParallelConcat) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Slice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Slice) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Select](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Select) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.DiagPart](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DiagPart) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.Eye](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Eye) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ResizeNearestNeighbor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeNearestNeighbor) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyFtrl) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.FusedSparseFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseFtrl) | Doing | Doing | Supported | array_ops
-| [mindspore.ops.operations.SparseApplyFtrlV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrlV2) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate) | Supported | Doing | Supported | array_ops
-| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMul) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterDiv) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToDepth](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToDepth) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.DepthToSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthToSpace) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToBatch](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToBatch) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToBatchND](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToBatchND) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.BatchToSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchToSpace) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.BatchToSpaceND](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchToSpaceND) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.IsFinite](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsFinite) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.InplaceUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceUpdate) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterSub) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMax) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdAdd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdSub) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNonAliasingAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNonAliasingAdd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Rint](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rint) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ReverseV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReverseV2) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ReduceOp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceOp) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.AllReduce](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AllReduce) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.AllGather](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AllGather) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.ReduceScatter](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceScatter) | Doing | Supported | Doing | comm_ops
-| [mindspore.ops.operations.Broadcast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Broadcast) | Supported | Doing | Doing | comm_ops
-| [mindspore.ops.operations.ControlDepend](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ControlDepend) | Supported | Supported | Supported | control_ops
-| [mindspore.ops.operations.GeSwitch](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GeSwitch) | Doing | Doing | Doing | control_ops
-| [mindspore.ops.operations.Merge](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Merge) | Doing | Doing | Doing | control_ops
-| [mindspore.ops.operations.ScalarSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.ImageSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ImageSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.TensorSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.HistogramSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.InsertGradientOf](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InsertGradientOf) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.Print](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Print) | Supported | Doing | Doing | debug_ops
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.BoundingBoxEncode](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BoundingBoxEncode) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.BoundingBoxDecode](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BoundingBoxDecode) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.PopulationCount](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PopulationCount) | Supported | Doing | Doing | other_ops
-| [mindspore.ops.operations.CheckValid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CheckValid) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.IOU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IOU) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.MakeRefKey](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MakeRefKey) | Supported | Supported | Supported | other_ops
-| [mindspore.ops.operations.InTopK](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InTopK) | Supported | Doing | Doing | other_ops
-| [mindspore.ops.operations.StandardNormal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StandardNormal) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.Gamma](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gamma) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.Poisson](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Poisson) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.UniformInt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UniformInt) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.UniformReal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UniformReal) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.RandomChoiceWithMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RandomChoiceWithMask) | Doing | Supported | Doing | random_ops
-| [mindspore.ops.operations.RandomCategorical](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RandomCategorical) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.ScalarCast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarCast) | Supported | Supported | Supported | inner_ops
-| [mindspore.ops.operations.ReverseSequence](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReverseSequence) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.CropAndResize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CropAndResize) | Supported | Doing | Doing | image_ops
-| [mindspore.ops.operations.SquaredDifference](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquaredDifference) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Xdivy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xdivy) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Xlogy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xlogy) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.HistogramFixedWidth](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
-
-## mindspore.ops.functional
-
-| Operation | Corresponding functional operator
-| :----------- | :-----------
-| [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | pack
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | tensor_add
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | assign_sub
-| [mindspore.ops.operations.AddN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AddN) | addn
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | square
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | sqrt
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | equal
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | not_equal
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | logical_not
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd) | logical_and
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr) | logical_or
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | expand_dims
-| [mindspore.ops.operations.DType](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DType) | dtype
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | cast
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | reshape
-| [mindspore.ops.operations.Shape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Shape) | shape
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | gather
-| [mindspore.ops.operations.Rank](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rank) | rank
-| [mindspore.ops.operations.Size](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Size) | size
-| [mindspore.ops.operations.Fill](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Fill) | fill
-| [mindspore.ops.operations.OnesLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OnesLike) | ones_like
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | tile
-| [mindspore.ops.operations.Select](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Select) | select
-| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | scatter_nd
-| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | gather_nd
-| [mindspore.ops.operations.ControlDepend](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ControlDepend) | control_depend
-| [mindspore.ops.operations.Print](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Print) | print
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign) | assign
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | tensor_pow
-
-> Currently, functional covers only a subset of operators that have no attributes; the remainder will be added later.
-
-## Distributed Operators
-
-| Operation | Constraints
-| :----------- | :-----------
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | None
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | None
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | None
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | None
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | None
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | The dimension of the input (logits) corresponding to axis cannot be split; splitting it is mathematically inequivalent to the stand-alone version
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | The dimension of the input (logits) corresponding to axis cannot be split; splitting it is mathematically inequivalent to the stand-alone version
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | None
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | None
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | None
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | None
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | None
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | None
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | None
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | None
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | None
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | None
-| [mindspore.ops.operations.Dropout](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Dropout) | Repeated computation is not supported
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | None
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | None
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | None
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | None
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | None
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | None
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | None
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | None
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | None
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | None
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | None
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | None
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | None
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | None
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | None
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | The dimension of the input (input_x) corresponding to axis cannot be split; splitting it is mathematically inequivalent to the stand-alone version
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Must be used together with `DropoutDoMask`
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Must be used together with `DropoutGenMask`; configuring a sharding strategy is not supported
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Only 1-D and 2-D input_params are supported, and the last dimension of input_params must be 32-byte aligned (for performance reasons); scalar input_indices is not supported; repeated computation is not supported when the parameter is split along the axis dimension; splitting input_indices and input_params at the same time is not supported
-| [mindspore.ops.operations.SparseGatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseGatherV2) | Same as GatherV2
-| [mindspore.ops.operations.EmbeddingLookup](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EmbeddingLookup) | Same as GatherV2
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | The dimension of the input (input_x) corresponding to axis cannot be split; splitting it is mathematically inequivalent to the stand-alone version
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | The last dimension of the inputs (logits, labels) cannot be split; of the two outputs, only [0] of the forward loss is supported
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | `transpose_a=True` is not supported
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | `transpose_a=True` is not supported
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | The channel dimension of the input (input_x) must be split in the same way as weight
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Only a 1-D tensor is supported as the input (indices)
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | None
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | None
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | None
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | The first output (index) cannot be used as the input of other operators
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | The first output (index) cannot be used as the input of other operators
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | None
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Configuring a sharding strategy is not supported
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Only masks with all-zero values are supported; dimensions to be split must be extracted in full; splitting is not supported on dimensions whose strides value is not 1
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Only sharding strategies configured for multiples are supported
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | None
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Configuring a sharding strategy is not supported
-
-> Repeated computation means the cluster is not fully used. For example, when 8 devices run distributed training but the sharding strategy splits the input into only 4 slices, repeated computation occurs.
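The rule in the note above can be expressed as a small check. This is a hedged sketch, not MindSpore's implementation; the function name `has_redundant_computation` is hypothetical, and it assumes a strategy's total slice count is the product of its per-dimension splits:

```python
from functools import reduce

def has_redundant_computation(num_devices, strategy):
    """True if a sharding strategy leaves devices underused.

    strategy gives the number of slices per input dimension, e.g. (4, 1).
    If the total slice count is below the device count, the remaining
    devices repeat work that other devices already perform.
    """
    slices = reduce(lambda a, b: a * b, strategy, 1)
    return slices < num_devices

print(has_redundant_computation(8, (4, 1)))  # True: 8 devices, only 4 slices
print(has_redundant_computation(8, (4, 2)))  # False: input fully partitioned
```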
-
-## Implicit Type Conversion
-### Conversion Rules
-* Scalar and Tensor operations: the scalar is automatically converted to a Tensor whose data type matches that of the Tensor involved in the operation;
-when the Tensor's data type is bool and the scalar is int or float, both the scalar and the Tensor are converted to a Tensor of type int32 or float32, respectively.
-* Operations between Tensors of different data types: the data type priority is bool < uint8 < int8 < int16 < int32 < int64 < float16 < float32 < float64;
-the highest-priority data type among the participating Tensors is determined first, and Tensors of lower-priority types are converted to it;
-when int8 and uint8 Tensors are operated on together, both are converted to int16.
-* Data type conversion of Parameters is not supported: if the conversion rules would require converting the data type of a Parameter defined in the network, a RuntimeError is raised.
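The Tensor-vs-Tensor promotion rules above can be sketched in plain Python. This is only an illustration of the stated rules, not MindSpore's actual implementation; the name `promote` is hypothetical:

```python
# Priority ladder from the conversion rules:
# bool < uint8 < int8 < int16 < int32 < int64 < float16 < float32 < float64
PRIORITY = ["bool", "uint8", "int8", "int16", "int32", "int64",
            "float16", "float32", "float64"]

def promote(dtype_a, dtype_b):
    """Return the result dtype for an op on two tensor dtypes."""
    # Special case: int8 combined with uint8 promotes to int16.
    if {dtype_a, dtype_b} == {"int8", "uint8"}:
        return "int16"
    # Otherwise the higher-priority dtype wins.
    return max(dtype_a, dtype_b, key=PRIORITY.index)

print(promote("int32", "float16"))  # float16
print(promote("int8", "uint8"))     # int16
```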
-
-### Data Types Involved in Conversion
-* bool
-* int8
-* uint8
-* int16
-* int32
-* int64
-* float16
-* float32
-* float64
-
-### Supported Operators
-
-| Operation
-| :-----------
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign)
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub)
-| [mindspore.ops.operations.ApplyMomentum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyMomentum)
-| [mindspore.ops.operations.FusedSparseAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseAdam)
-| [mindspore.ops.operations.FusedSparseLazyAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseLazyAdam)
-| [mindspore.ops.operations.FusedSparseFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseFtrl)
-| [mindspore.ops.operations.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseProximalAdagrad)
-| [mindspore.ops.operations.ApplyAdaMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdaMax)
-| [mindspore.ops.operations.ApplyAdadelta](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdadelta)
-| [mindspore.ops.operations.ApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdagrad)
-| [mindspore.ops.operations.ApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdagradV2)
-| [mindspore.ops.operations.SparseApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagrad)
-| [mindspore.ops.operations.SparseApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagradV2)
-| [mindspore.ops.operations.ApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalAdagrad)
-| [mindspore.ops.operations.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyProximalAdagrad)
-| [mindspore.ops.operations.ApplyAddSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAddSign)
-| [mindspore.ops.operations.ApplyPowerSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyPowerSign)
-| [mindspore.ops.operations.ApplyGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyGradientDescent)
-| [mindspore.ops.operations.ApplyProximalGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalGradientDescent)
-| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl)
-| [mindspore.ops.operations.SparseApplyFtrlV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrlV2)
-| [mindspore.ops.operations.BitwiseAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseAnd)
-| [mindspore.ops.operations.BitwiseOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseOr)
-| [mindspore.ops.operations.BitwiseXor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseXor)
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd)
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub)
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul)
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow)
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum)
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum)
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv)
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div)
-| [mindspore.ops.operations.DivNoNan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DivNoNan)
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv)
-| [mindspore.ops.operations.TruncateDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateDiv)
-| [mindspore.ops.operations.TruncateMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateMod)
-| [mindspore.ops.operations.Mod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mod)
-| [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod)
-| [mindspore.ops.operations.Atan2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Atan2)
-| [mindspore.ops.operations.SquaredDifference](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquaredDifference)
-| [mindspore.ops.operations.Xdivy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xdivy)
-| [mindspore.ops.operations.Xlogy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xlogy)
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal)
-| [mindspore.ops.operations.ApproximateEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApproximateEqual)
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual)
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater)
-| [mindspore.ops.operations.GreaterEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GreaterEqual)
-| [mindspore.ops.operations.Less](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Less)
-| [mindspore.ops.operations.LessEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LessEqual)
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd)
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr)
-| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate)
-| [mindspore.ops.operations.ScatterNdAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdAdd)
-| [mindspore.ops.operations.ScatterNdSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdSub)
-| [mindspore.ops.operations.ScatterNonAliasingAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNonAliasingAdd)
-| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate)
-| [mindspore.ops.operations.ScatterMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMax)
-| [mindspore.ops.operations.ScatterMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMin)
-| [mindspore.ops.operations.ScatterAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterAdd)
-| [mindspore.ops.operations.ScatterSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterSub)
-| [mindspore.ops.operations.ScatterMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMul)
-| [mindspore.ops.operations.ScatterDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterDiv)
-
diff --git a/install/mindspore_cpu_install.md b/install/mindspore_cpu_install.md
index ed91ce11ce44ee64ac32238221bd668cba923657..eef04ac3e1e00f6940f79ae1d2e89313997a3bf1 100644
--- a/install/mindspore_cpu_install.md
+++ b/install/mindspore_cpu_install.md
@@ -22,7 +22,7 @@
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
**Installation dependencies:**
Same as the executable file installation dependencies. |
+| MindSpore 1.0 | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.0/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
**Installation dependencies:**
Same as the executable file installation dependencies. |
- GCC 7.3.0 can be installed directly by using the apt command.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation; otherwise, you need to install them manually.
@@ -63,7 +63,7 @@
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile MindSpore:
@@ -98,7 +98,7 @@
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---------------------- | :------------------ | :----------------------------------------------------------- | :----------------------- |
-| MindArmour master | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour 1.0 | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.0/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation; otherwise, you need to install them manually.
@@ -123,7 +123,7 @@
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
diff --git a/install/mindspore_cpu_install_en.md b/install/mindspore_cpu_install_en.md
index 25335b2d1d2baabdaacc7f4b7401441bc18eac2a..e6eb748813b9d663253d076b38aa8dbd4b09c160 100644
--- a/install/mindspore_cpu_install_en.md
+++ b/install/mindspore_cpu_install_en.md
@@ -22,7 +22,7 @@ This document describes how to quickly install MindSpore in a Ubuntu system with
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
same as the executable file installation dependencies. |
+| MindSpore 1.0 | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.0/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
same as the executable file installation dependencies. |
- GCC 7.3.0 can be installed by using apt command.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -63,7 +63,7 @@ This document describes how to quickly install MindSpore in a Ubuntu system with
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile MindSpore:
@@ -98,7 +98,7 @@ If you need to conduct AI model security research or enhance the security of the
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindArmour master | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour 1.0 | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.0/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -123,7 +123,7 @@ If you need to conduct AI model security research or enhance the security of the
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
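Editor's aside on the `-b` flag added throughout this diff (a minimal sketch using a throwaway local repository; the `/tmp` paths and branch contents are demo placeholders, not part of the install guides): `git clone -b <branch>` checks out the named branch at clone time, which is how these docs pin readers to the r1.0 release line instead of master.

```shell
# Sketch: demonstrate `git clone -b` with a throwaway local repo
# (all paths under /tmp are illustrative placeholders).
set -e
rm -rf /tmp/ms_demo_src /tmp/ms_demo_clone
git init -q /tmp/ms_demo_src
cd /tmp/ms_demo_src
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"
git branch r1.0
# -b selects the branch to check out at clone time, mirroring
# `git clone https://gitee.com/mindspore/mindspore.git -b r1.0`.
git clone -q -b r1.0 /tmp/ms_demo_src /tmp/ms_demo_clone
git -C /tmp/ms_demo_clone branch --show-current   # prints "r1.0"
```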
diff --git a/install/mindspore_cpu_win_install.md b/install/mindspore_cpu_win_install.md
index d2cf00feab8578c530090516fb1f0b1542c6ac3a..8c09dd5fce48e4e501b061d534575a13ec40fbe7 100644
--- a/install/mindspore_cpu_win_install.md
+++ b/install/mindspore_cpu_win_install.md
@@ -20,7 +20,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh
- [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404
- [CMake](https://cmake.org/download/) 3.14.1
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
**安装依赖:**
与可执行文件安装依赖相同 |
+| MindSpore 1.0 | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.0/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh
- [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404
- [CMake](https://cmake.org/download/) 3.14.1
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
**安装依赖:**
与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
@@ -62,7 +62,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
2. 在源码根目录下执行如下命令编译MindSpore。
diff --git a/install/mindspore_cpu_win_install_en.md b/install/mindspore_cpu_win_install_en.md
index ea2a2e2137d53f0c06071ab961db32aef42dc366..2dc359be4b24fb9cadb6a1d674ee41942d8dd55d 100644
--- a/install/mindspore_cpu_win_install_en.md
+++ b/install/mindspore_cpu_win_install_en.md
@@ -20,7 +20,7 @@ This document describes how to quickly install MindSpore in a Windows system wit
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh
- [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404
- [CMake](https://cmake.org/download/) 3.14.1
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
**Installation dependencies:**
same as the executable file installation dependencies. |
+| MindSpore 1.0 | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.0/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh
- [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404
- [CMake](https://cmake.org/download/) 3.14.1
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
**Installation dependencies:**
same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -62,7 +62,7 @@ This document describes how to quickly install MindSpore in a Windows system wit
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile MindSpore:
diff --git a/install/mindspore_d_install.md b/install/mindspore_d_install.md
index 233616f1b08caf0ac80710750b85e23b7dd954e3..259c165731f0632ce4ddc281071518afbea20337 100644
--- a/install/mindspore_d_install.md
+++ b/install/mindspore_d_install.md
@@ -33,7 +33,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**安装依赖:**
与可执行文件安装依赖相同 |
+| MindSpore 1.0 | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.0/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**安装依赖:**
与可执行文件安装依赖相同 |
- 确认当前用户有权限访问Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。
- GCC 7.3.0可以直接通过apt命令安装。
@@ -82,7 +82,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
2. 在源码根目录下,执行如下命令编译MindSpore。
@@ -183,7 +183,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64
| - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [node.js](https://nodejs.org/en/download/) >= 10.19.0
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3
**安装依赖:**
与可执行文件安装依赖相同 |
+| MindInsight 1.0 | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64
| - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r1.0/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [node.js](https://nodejs.org/en/download/) >= 10.19.0
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3
**安装依赖:**
与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
@@ -208,7 +208,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindinsight.git
+ git clone https://gitee.com/mindspore/mindinsight.git -b r1.0
```
> **不能**直接在仓库主页下载zip包获取源码。
@@ -248,7 +248,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindArmour master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64
| - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 |
+| MindArmour 1.0 | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64
| - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.0/setup.py) | 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`setup.py`中的依赖项,其余情况需自行安装。
@@ -273,7 +273,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r1.0
```
2. 在源码根目录下,执行如下命令编译并安装MindArmour。
diff --git a/install/mindspore_d_install_en.md b/install/mindspore_d_install_en.md
index a4d8bda0e4e3bb12e8f74df65bb176a45fc2c1ee..d376e57873c6cbf5f28efa80ae0787edeb0d5050 100644
--- a/install/mindspore_d_install_en.md
+++ b/install/mindspore_d_install_en.md
@@ -20,7 +20,6 @@ This document describes how to quickly install MindSpore in an Ascend AI process
- [Installing MindSpore Hub](#installing-mindspore-hub)
-
## Environment Requirements
### Hardware Requirements
@@ -33,7 +32,7 @@ This document describes how to quickly install MindSpore in an Ascend AI process
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**Installation dependencies:**
same as the executable file installation dependencies. |
+| MindSpore 1.0 | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.0/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910))
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**Installation dependencies:**
same as the executable file installation dependencies. |
- Confirm that the current user has the right to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T400](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251811136?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251167910)). If not, the root user needs to add the current user to the user group to which `/usr/local/Ascend` belongs. For the specific configuration, please refer to the software package instruction document.
- GCC 7.3.0 can be installed by using the apt command.
@@ -82,7 +81,7 @@ The compilation and installation must be performed on the Ascend 910 AI processo
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile MindSpore:
@@ -183,7 +182,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64
| - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [node.js](https://nodejs.org/en/download/) >= 10.19.0
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3
**Installation dependencies:**
same as the executable file installation dependencies. |
+| MindInsight 1.0 | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64
| - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r1.0/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [node.js](https://nodejs.org/en/download/) >= 10.19.0
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3
**Installation dependencies:**
same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -208,7 +207,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindinsight.git
+ git clone https://gitee.com/mindspore/mindinsight.git -b r1.0
```
> Do **not** obtain the source code from the zip package downloaded from the repository homepage.
@@ -250,7 +249,7 @@ If you need to conduct AI model security research or enhance the security of the
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindArmour master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64
| - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour 1.0 | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64
| - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.0/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -275,7 +274,7 @@ If you need to conduct AI model security research or enhance the security of the
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
diff --git a/install/mindspore_gpu_install.md b/install/mindspore_gpu_install.md
index 64c1b6fb50cca7c433af81d90b68309827bc8cf9..1e9f8f2252c7b58582ff12b5146bc4834a4890db 100644
--- a/install/mindspore_gpu_install.md
+++ b/install/mindspore_gpu_install.md
@@ -19,6 +19,7 @@
+
## 环境要求
### 硬件要求
@@ -29,7 +30,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)
- [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6
- [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (可选,单机多卡/多机多卡训练需要)
- [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.7.6-1 (可选,单机多卡/多机多卡训练需要)
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69
- [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30
- [Automake](https://www.gnu.org/software/automake) >= 1.15.1
- [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)
- [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**安装依赖:**
与可执行文件安装依赖相同 |
+| MindSpore 1.0 | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)
- [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6
- [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (可选,单机多卡/多机多卡训练需要)
- [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.7.6-1 (可选,单机多卡/多机多卡训练需要)
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.0/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69
- [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30
- [Automake](https://www.gnu.org/software/automake) >= 1.15.1
- [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)
- [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**安装依赖:**
与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
- 为了方便用户使用,MindSpore降低了对Autoconf、Libtool、Automake版本的依赖,可以使用系统自带版本。
@@ -65,7 +66,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
2. 在源码根目录下执行如下命令编译MindSpore。
@@ -125,7 +126,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [node.js](https://nodejs.org/en/download/) >= 10.19.0
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3
**安装依赖:**
与可执行文件安装依赖相同 |
+| MindInsight 1.0 | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r1.0/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [node.js](https://nodejs.org/en/download/) >= 10.19.0
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3
**安装依赖:**
与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
@@ -150,7 +151,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindinsight.git
+ git clone https://gitee.com/mindspore/mindinsight.git -b r1.0
```
> **不能**直接在仓库主页下载zip包获取源码。
@@ -190,7 +191,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---------------------- | :------------------ | :----------------------------------------------------------- | :----------------------- |
-| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 |
+| MindArmour 1.0 | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.0/setup.py) | 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`setup.py`中的依赖项,其余情况需自行安装。
@@ -215,7 +216,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r1.0
```
2. 在源码根目录下,执行如下命令编译并安装MindArmour。
diff --git a/install/mindspore_gpu_install_en.md b/install/mindspore_gpu_install_en.md
index 2f45d1b35b3ab18a444b054071416e484be6297f..0e1ead512cbade9ea8a0dcbb3145714d33a904cd 100644
--- a/install/mindspore_gpu_install_en.md
+++ b/install/mindspore_gpu_install_en.md
@@ -29,7 +29,7 @@ This document describes how to quickly install MindSpore in a NVIDIA GPU environ
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)
- [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6
- [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training)
- [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.7.6-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training)
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69
- [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30
- [Automake](https://www.gnu.org/software/automake) >= 1.15.1
- [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)
- [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**Installation dependencies:**
same as the executable file installation dependencies. |
+| MindSpore 1.0 | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)
- [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6
- [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training)
- [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.7.6-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training)
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.0/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69
- [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30
- [Automake](https://www.gnu.org/software/automake) >= 1.15.1
- [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)
- [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**Installation dependencies:**
same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during `.whl` package installation. In other cases, you need to manually install dependency items.
- For the convenience of users, MindSpore has relaxed its version requirements on Autoconf, Libtool, and Automake; the default versions of these tools shipped with the system are now supported.
@@ -65,7 +65,7 @@ This document describes how to quickly install MindSpore in a NVIDIA GPU environ
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile MindSpore:
@@ -125,7 +125,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [node.js](https://nodejs.org/en/download/) >= 10.19.0
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3
**Installation dependencies:**
same as the executable file installation dependencies. |
+| MindInsight 1.0 | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore 1.0
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r1.0/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [node.js](https://nodejs.org/en/download/) >= 10.19.0
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3
**Installation dependencies:**
same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -150,7 +150,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindinsight.git
+ git clone https://gitee.com/mindspore/mindinsight.git -b r1.0
```
> You are **not** supposed to obtain the source code from the zip package downloaded from the repository homepage.
@@ -192,7 +192,7 @@ If you need to conduct AI model security research or enhance the security of the
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br>- MindSpore master<br>- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour 1.0 | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br>- MindSpore 1.0<br>- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r1.0/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -217,7 +217,7 @@ If you need to conduct AI model security research or enhance the security of the
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r1.0
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
diff --git a/lite/docs/source_en/apicc/apicc.rst b/lite/docs/source_en/apicc/apicc.rst
deleted file mode 100644
index 82bbf145ca283f9d79be97c93986fcf03d9e2aed..0000000000000000000000000000000000000000
--- a/lite/docs/source_en/apicc/apicc.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-C++ API
-=======
-
-.. toctree::
- :maxdepth: 1
-
- class_list
- lite
- session
- tensor
- dataset
- errorcode_and_metatype
\ No newline at end of file
diff --git a/lite/docs/source_en/apicc/class_list.md b/lite/docs/source_en/apicc/class_list.md
deleted file mode 100644
index bf0acb8f57c2702e3da3ef89d0fb23e723add840..0000000000000000000000000000000000000000
--- a/lite/docs/source_en/apicc/class_list.md
+++ /dev/null
@@ -1,16 +0,0 @@
-# Class List
-
-Here is a list of all classes with links to the namespace documentation for each member:
-
-| Namespace | Class Name | Description |
-| --- | --- | --- |
-| mindspore::lite | [Allocator](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#allocator) | Allocator defines a memory pool for dynamic memory malloc and memory free. |
-| mindspore::lite | [Context](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#context) | Context defines for holding environment variables during runtime. |
-| mindspore::lite | [ModelImpl](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#modelimpl) | ModelImpl defines the implement class of Model in MindSpore Lite. |
-| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#primitivec) | Primitive defines as prototype of operator. |
-| mindspore::lite | [Model](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#model) | Model defines model in MindSpore Lite for managing graph. |
-| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#modelbuilder) | ModelBuilder is defined to build the model. |
-| mindspore::session | [LiteSession](https://www.mindspore.cn/lite/docs/en/master/apicc/session.html#litesession) | LiteSession defines session in MindSpore Lite for compiling Model and forwarding model. |
-| mindspore::tensor | [MSTensor](https://www.mindspore.cn/lite/docs/en/master/apicc/tensor.html#mstensor) | MSTensor defines tensor in MindSpore Lite. |
-| mindspore::dataset | [LiteMat](https://www.mindspore.cn/lite/docs/en/master/apicc/dataset.html#litemat) |Class that represents a LiteMat of a Image. |
-
diff --git a/lite/docs/source_en/glossary.md b/lite/docs/source_en/glossary.md
deleted file mode 100644
index 8b596229292567ddb8b24b7a0cc1560c763f224d..0000000000000000000000000000000000000000
--- a/lite/docs/source_en/glossary.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# Glossary
-
-
-
-| Acronym and Abbreviation | Description |
-| ----- | ----- |
-| MindSpore Lite | MindSpore AI engine is applied to the intelligent terminal and resource constrained scenes on the edge side. |
-| MindSpore Micro | MindSpore AI engine with smaller package size for IOT devices. |
-| GHLO | Graph high-level optimization. |
-| GLLO | Graph low-level optimization. |
-| RT | Runtime. |
-
diff --git a/lite/docs/source_en/operator_list.md b/lite/docs/source_en/operator_list.md
deleted file mode 100644
index 6038b5c95690dd4b30378a7101d828ef9d0cda90..0000000000000000000000000000000000000000
--- a/lite/docs/source_en/operator_list.md
+++ /dev/null
@@ -1,111 +0,0 @@
-# Operator List
-
-
-
-> √ The checked items are the operators supported by MindSpore Lite。
-
-| Operation | CPU<br>FP16 | CPU<br>FP32 | CPU<br>Int8 | CPU<br>UInt8 | GPU<br>FP16 | GPU<br>FP32 | Tensorflow<br>Lite op supported | Caffe<br>Lite op supported | Onnx<br>Lite op supported |
-|-----------------------|----------|----------|-----------|----------|----------|------------------|----------|----------|----------|
-| Abs | | √ | √ | √ | | | Abs | | Abs |
-| Add | √ | √ | √ | √ | | √ | Add | | Add |
-| AddN | | √ | | | | | AddN | | |
-| Argmax | | √ | √ | √ | | | Argmax | ArgMax | ArgMax |
-| Argmin | | √ | √ | √ | | | Argmin | | |
-| AvgPool | √ | √ | √ | √ | | √ | MeanPooling| Pooling | AveragePool |
-| BatchNorm | √ | √ | √ | √ | | √ | | BatchNorm | BatchNormalization |
-| BatchToSpace | | √ | √ | √ | | | BatchToSpace, BatchToSpaceND | | |
-| BiasAdd | | √ | √ | √ | | √ | | | BiasAdd |
-| Broadcast | | √ | | | | | BroadcastTo | | Expand |
-| Cast | √ | √ | | √ | | | Cast, DEQUANTIZE* | | Cast |
-| Ceil | | √ | √ | √ | | | Ceil | | Ceil |
-| Concat | √ | √ | √ | √ | √ | √ | Concat | Concat | Concat |
-| Conv2d | √ | √ | √ | √ | √ | √ | Conv2D | Convolution | Conv |
-| Conv2dTranspose | √ | √ | √ | √ | √ | √ | DeConv2D | Deconvolution | ConvTranspose |
-| Cos | | √ | √ | √ | | | Cos | | Cos |
-| Crop | | √ | √ | √ | | | | Crop | |
-| DeDepthwiseConv2D | | √ | √ | √ | | | | Deconvolution| ConvTranspose |
-| DepthToSpace | | √ | √ | √ | | | DepthToSpace| | DepthToSpace |
-| DepthwiseConv2dNative | √ | √ | √ | √ | √ | √ | DepthwiseConv2D | Convolution | Convolution |
-| Div | √ | √ | √ | √ | | √ | Div, RealDiv | | Div |
-| Eltwise | √ | √ | | | | | | Eltwise | |
-| Elu | | √ | | | | | Elu | | Elu |
-| Equal | √ | √ | √ | √ | | | Equal | | Equal |
-| Exp | | √ | | | | | Exp | | Exp |
-| ExpandDims | | √ | | | | | | | |
-| Fill | | √ | | | | | Fill | | |
-| Flatten | | √ | | | | | | Flatten | |
-| Floor | | √ | √ | √ | | | flOOR | | Floor |
-| FloorDiv | √ | √ | | | | | FloorDiv | | |
-| FloorMod | √ | √ | | | | | FloorMod | | |
-| FullConnection | | √ | √ | √ | | | FullyConnected | InnerProduct | |
-| GatherNd | | √ | √ | √ | | | GatherND | | |
-| GatherV2 | | √ | √ | √ | | | Gather | | Gather |
-| Greater | √ | √ | √ | √ | | | Greater | | Greater |
-| GreaterEqual | √ | √ | √ | √ | | | GreaterEqual| | |
-| Hswish | √ | √ | √ | √ | | | HardSwish | | |
-| LeakyReLU | √ | √ | | | | √ | LeakyRelu | | LeakyRelu |
-| Less | √ | √ | √ | √ | | | Less | | Less |
-| LessEqual | √ | √ | √ | √ | | | LessEqual | | |
-| LRN | | √ | | | | | LocalResponseNorm | | Lrn |
-| Log | | √ | √ | √ | | | Log | | Log |
-| LogicalAnd | √ | √ | | | | | LogicalAnd | | |
-| LogicalNot | | √ | √ | √ | | | LogicalNot | | |
-| LogicalOr | √ | √ | | | | | LogicalOr | | |
-| LSTM | | √ | | | | | | | |
-| MatMul | | √ | √ | √ | √ | √ | | | MatMul |
-| Maximum | √ | √ | | | | | Maximum | | Max |
-| MaxPool | √ | √ | √ | √ | | √ | MaxPooling | Pooling | MaxPool |
-| Minimum | √ | √ | | | | | Minimum | | Min |
-| Mul | √ | √ | √ | √ | | √ | Mul | | Mul |
-| NotEqual | √ | √ | √ | √ | | | NotEqual | | |
-| OneHot | | √ | | | | | OneHot | | |
-| Pad | | √ | √ | √ | | | Pad | | Pad |
-| Pow | | √ | √ | √ | | | Pow | Power | Power |
-| PReLU | | √ | | | | √ | | PReLU | |
-| Range | | √ | | | | | Range | | |
-| Rank | | √ | | | | | Rank | | |
-| ReduceMax | √ | √ | √ | √ | | | ReduceMax | | ReduceMax |
-| ReduceMean | √ | √ | √ | √ | | | Mean | | ReduceMean |
-| ReduceMin | √ | √ | √ | √ | | | ReduceMin | | ReduceMin |
-| ReduceProd | √ | √ | √ | √ | | | ReduceProd | | |
-| ReduceSum | √ | √ | √ | √ | | | Sum | | ReduceSum |
-| ReduceSumSquare | √ | √ | √ | √ | | | | | |
-| ReLU | √ | √ | √ | √ | | √ | Relu | ReLU | Relu |
-| ReLU6 | √ | √ | √ | √ | | √ | Relu6 | ReLU6 | Clip* |
-| Reshape | √ | √ | √ | √ | | √ | Reshape | Reshape | Reshape,Flatten |
-| Resize | | √ | √ | √ | | | ResizeBilinear, NearestNeighbor | Interp | |
-| Reverse | | √ | | | | | reverse | | |
-| ReverseSequence | | √ | | | | | ReverseSequence | | |
-| Round | | √ | √ | √ | | | Round | | |
-| Rsqrt | | √ | √ | √ | | | Rsqrt | | |
-| Scale | | √ | | | | | | Scale | |
-| ScatterNd | | √ | | | | | ScatterNd | | |
-| Shape | | √ | | | | | Shape | | Shape |
-| Sigmoid | √ | √ | √ | √ | | √ | Logistic | Sigmoid | Sigmoid |
-| Sin | | √ | √ | √ | | | Sin | | Sin |
-| Slice | | √ | √ | √ | √ | √ | Slice | | Slice |
-| Softmax | √ | √ | √ | √ | | √ | Softmax | Softmax | Softmax |
-| SpaceToBatch | | √ | | | | | | | |
-| SpaceToBatchND | | √ | | | | | SpaceToBatchND | | |
-| SpaceToDepth | | √ | | | | | SpaceToDepth | | SpaceToDepth |
-| SparseToDense | | √ | | | | | SpareToDense | | |
-| Split | √ | √ | √ | √ | | | Split, SplitV | | |
-| Sqrt | | √ | √ | √ | | | Sqrt | | Sqrt |
-| Square | | √ | √ | √ | | | Square | | |
-| SquaredDifference | | √ | | | | | SquaredDifference | | |
-| Squeeze | | √ | √ | √ | | | Squeeze | | Squeeze |
-| StridedSlice | | √ | √ | √ | | | StridedSlice| | |
-| Stack | | √ | | | | | Stack | | |
-| Sub | √ | √ | √ | √ | | √ | Sub | | Sub |
-| Tanh | √ | √ | | | | | Tanh | TanH | |
-| Tile | | √ | | | | | Tile | | Tile |
-| TopK | | √ | √ | √ | | | TopKV2 | | |
-| Transpose | √ | √ | | | | √ | Transpose | Permute | Transpose |
-| Unique | | √ | | | | | Unique | | |
-| Unsqueeze | | √ | √ | √ | | | | | Unsqueeze |
-| Unstack | | √ | | | | | Unstack | | |
-| Where | | √ | | | | | Where | | |
-| ZerosLike | | √ | | | | | ZerosLike | | |
-
-* Clip: only support convert clip(0, 6) to Relu6.
-* DEQUANTIZE: only support to convert fp16 to fp32.
diff --git a/lite/docs/source_zh_cn/apicc/apicc.rst b/lite/docs/source_zh_cn/apicc/apicc.rst
deleted file mode 100644
index 82bbf145ca283f9d79be97c93986fcf03d9e2aed..0000000000000000000000000000000000000000
--- a/lite/docs/source_zh_cn/apicc/apicc.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-C++ API
-=======
-
-.. toctree::
- :maxdepth: 1
-
- class_list
- lite
- session
- tensor
- dataset
- errorcode_and_metatype
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/apicc/class_list.md b/lite/docs/source_zh_cn/apicc/class_list.md
deleted file mode 100644
index 3eddc864670fdbfda9390e020164edddb676392c..0000000000000000000000000000000000000000
--- a/lite/docs/source_zh_cn/apicc/class_list.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# 类列表
-
-MindSpore Lite中的类定义及其所属命名空间和描述:
-
-| 命名空间 | 类 | 描述 |
-| --- | --- | --- |
-| mindspore::lite | [Allocator](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#allocator) | Allocator定义了一个内存池,用于动态地分配和释放内存。 |
-| mindspore::lite | [Context](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#context) | Context用于保存执行期间的环境变量。 |
-| mindspore::lite | [ModelImpl](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#modelimpl) | ModelImpl定义了MindSpore Lite中的Model的实现类。 |
-| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#primitivec) | PrimitiveC定义为算子的原型。 |
-| mindspore::lite | [Model](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#model) | Model定义了MindSpore Lite中的模型,便于计算图管理。 |
-| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#modelbuilder) | ModelBuilder定义了MindSpore Lite中的模型构建器。 |
-| mindspore::session | [LiteSession](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#litesession) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 |
-| mindspore::tensor | [MSTensor](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/tensor.html#mstensor) | MSTensor定义了MindSpore Lite中的张量。 |
-| mindspore::dataset | [LiteMat](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/dataset.html#litemat) |LiteMat是一个处理图像的类。 |
diff --git a/lite/docs/source_zh_cn/glossary.md b/lite/docs/source_zh_cn/glossary.md
deleted file mode 100644
index b9cf41a4c6908e4e75c927614335a60e9e8b0ac6..0000000000000000000000000000000000000000
--- a/lite/docs/source_zh_cn/glossary.md
+++ /dev/null
@@ -1,12 +0,0 @@
-# 术语
-
-
-
-| 术语/缩略语 | 说明 |
-| ----- | ----- |
-| MindSpore Lite | 应用在智能终端,边缘册资源受限场景的MindSpore AI 引擎。 |
-| MindSpore Micro | 应用在IoT设备的,包大小更小的MindSpore AI引擎。 |
-| GHLO | Graph high-level optimization,图高层优化。 |
-| GLLO | Graph low-level optimization,图底层优化。 |
-| RT | Runtime运行时。 |
-
diff --git a/lite/docs/source_zh_cn/operator_list.md b/lite/docs/source_zh_cn/operator_list.md
deleted file mode 100644
index 3384d8baf91b1af92ff4758816790af7b6e241bc..0000000000000000000000000000000000000000
--- a/lite/docs/source_zh_cn/operator_list.md
+++ /dev/null
@@ -1,111 +0,0 @@
-# 算子支持
-
-
-
-> √勾选的项为MindSpore Lite所支持的算子。
-
-| 操作名 | CPU<br>FP16 | CPU<br>FP32 | CPU<br>Int8 | CPU<br>UInt8 | GPU<br>FP16 | GPU<br>FP32 | 支持的Tensorflow<br>Lite op | 支持的Caffe<br>Lite op | 支持的Onnx<br>Lite op |
-|-----------------------|----------|----------|----------|-----------|----------|-------------------|----------|----------|---------|
-| Abs | | √ | √ | √ | | | Abs | | Abs |
-| Add | √ | √ | √ | √ | | √ | Add | | Add |
-| AddN | | √ | | | | | AddN | | |
-| Argmax | | √ | √ | √ | | | Argmax | ArgMax | ArgMax |
-| Argmin | | √ | √ | √ | | | Argmin | | |
-| AvgPool | √ | √ | √ | √ | | √ | MeanPooling| Pooling | AveragePool |
-| BatchNorm | √ | √ | √ | √ | | √ | | BatchNorm | BatchNormalization |
-| BatchToSpace | | √ | √ | √ | | | BatchToSpace, BatchToSpaceND | | |
-| BiasAdd | | √ | √ | √ | | √ | | | BiasAdd |
-| Broadcast | | √ | | | | | BroadcastTo | | Expand |
-| Cast | √ | √ | | √ | | | Cast, DEQUANTIZE* | | Cast |
-| Ceil | | √ | √ | √ | | | Ceil | | Ceil |
-| Concat | √ | √ | √ | √ | √ | √ | Concat | Concat | Concat |
-| Conv2d | √ | √ | √ | √ | √ | √ | Conv2D | Convolution | Conv |
-| Conv2dTranspose | √ | √ | √ | √ | √ | √ | DeConv2D | Deconvolution | ConvTranspose |
-| Cos | | √ | √ | √ | | | Cos | | Cos |
-| Crop | | √ | √ | √ | | | | Crop | |
-| DeDepthwiseConv2D | | √ | √ | √ | | | | Deconvolution| ConvTranspose |
-| DepthToSpace | | √ | √ | √ | | | DepthToSpace| | DepthToSpace |
-| DepthwiseConv2dNative | √ | √ | √ | √ | √ | √ | DepthwiseConv2D | Convolution | Convolution |
-| Div | √ | √ | √ | √ | | √ | Div, RealDiv | | Div |
-| Eltwise | √ | √ | | | | | | Eltwise | |
-| Elu | | √ | | | | | Elu | | Elu |
-| Equal | √ | √ | √ | √ | | | Equal | | Equal |
-| Exp | | √ | | | | | Exp | | Exp |
-| ExpandDims | | √ | | | | | | | |
-| Fill | | √ | | | | | Fill | | |
-| Flatten | | √ | | | | | | Flatten | |
-| Floor | | √ | √ | √ | | | flOOR | | Floor |
-| FloorDiv | √ | √ | | | | | FloorDiv | | |
-| FloorMod | √ | √ | | | | | FloorMod | | |
-| FullConnection | | √ | √ | √ | | | FullyConnected | InnerProduct | |
-| GatherNd | | √ | √ | √ | | | GatherND | | |
-| GatherV2 | | √ | √ | √ | | | Gather | | Gather |
-| Greater | √ | √ | √ | √ | | | Greater | | Greater |
-| GreaterEqual | √ | √ | √ | √ | | | GreaterEqual| | |
-| Hswish | √ | √ | √ | √ | | | HardSwish | | |
-| LeakyReLU | √ | √ | | | | √ | LeakyRelu | | LeakyRelu |
-| Less | √ | √ | √ | √ | | | Less | | Less |
-| LessEqual | √ | √ | √ | √ | | | LessEqual | | |
-| LRN | | √ | | | | | LocalResponseNorm | | Lrn |
-| Log | | √ | √ | √ | | | Log | | Log |
-| LogicalAnd | √ | √ | | | | | LogicalAnd | | |
-| LogicalNot | | √ | √ | √ | | | LogicalNot | | |
-| LogicalOr | √ | √ | | | | | LogicalOr | | |
-| LSTM | | √ | | | | | | | |
-| MatMul | | √ | √ | √ | √ | √ | | | MatMul |
-| Maximum | √ | √ | | | | | Maximum | | Max |
-| MaxPool | √ | √ | √ | √ | | √ | MaxPooling | Pooling | MaxPool |
-| Minimum | √ | √ | | | | | Minimum | | Min |
-| Mul | √ | √ | √ | √ | | √ | Mul | | Mul |
-| NotEqual | √ | √ | √ | √ | | | NotEqual | | |
-| OneHot | | √ | | | | | OneHot | | |
-| Pad | | √ | √ | √ | | | Pad | | Pad |
-| Pow | | √ | √ | √ | | | Pow | Power | Power |
-| PReLU | | √ | | | | √ | | PReLU | |
-| Range | | √ | | | | | Range | | |
-| Rank | | √ | | | | | Rank | | |
-| ReduceMax | √ | √ | √ | √ | | | ReduceMax | | ReduceMax |
-| ReduceMean | √ | √ | √ | √ | | | Mean | | ReduceMean |
-| ReduceMin | √ | √ | √ | √ | | | ReduceMin | | ReduceMin |
-| ReduceProd | √ | √ | √ | √ | | | ReduceProd | | |
-| ReduceSum | √ | √ | √ | √ | | | Sum | | ReduceSum |
-| ReduceSumSquare | √ | √ | √ | √ | | | | | |
-| ReLU | √ | √ | √ | √ | | √ | Relu | ReLU | Relu |
-| ReLU6 | √ | √ | √ | √ | | √ | Relu6 | ReLU6 | Clip* |
-| Reshape | √ | √ | √ | √ | | √ | Reshape | Reshape | Reshape,Flatten |
-| Resize | | √ | √ | √ | | | ResizeBilinear, NearestNeighbor | Interp | |
-| Reverse | | √ | | | | | reverse | | |
-| ReverseSequence | | √ | | | | | ReverseSequence | | |
-| Round | | √ | √ | √ | | | Round | | |
-| Rsqrt | | √ | √ | √ | | | Rsqrt | | |
-| Scale | | √ | | | | | | Scale | |
-| ScatterNd | | √ | | | | | ScatterNd | | |
-| Shape | | √ | | | | | Shape | | Shape |
-| Sigmoid | √ | √ | √ | √ | | √ | Logistic | Sigmoid | Sigmoid |
-| Sin | | √ | √ | √ | | | Sin | | Sin |
-| Slice | | √ | √ | √ | √ | √ | Slice | | Slice |
-| Softmax | √ | √ | √ | √ | | √ | Softmax | Softmax | Softmax |
-| SpaceToBatch | | √ | | | | | | | |
-| SpaceToBatchND | | √ | | | | | SpaceToBatchND | | |
-| SpaceToDepth | | √ | | | | | SpaceToDepth | | SpaceToDepth |
-| SparseToDense | | √ | | | | | SpareToDense | | |
-| Split | √ | √ | √ | √ | | | Split, SplitV | | |
-| Sqrt | | √ | √ | √ | | | Sqrt | | Sqrt |
-| Square | | √ | √ | √ | | | Square | | |
-| SquaredDifference | | √ | | | | | SquaredDifference | | |
-| Squeeze | | √ | √ | √ | | | Squeeze | | Squeeze |
-| StridedSlice | | √ | √ | √ | | | StridedSlice| | |
-| Stack | | √ | | | | | Stack | | |
-| Sub | √ | √ | √ | √ | | √ | Sub | | Sub |
-| Tanh | √ | √ | | | | | Tanh | TanH | |
-| Tile | | √ | | | | | Tile | | Tile |
-| TopK | | √ | √ | √ | | | TopKV2 | | |
-| Transpose | √ | √ | | | | √ | Transpose | Permute | Transpose |
-| Unique | | √ | | | | | Unique | | |
-| Unsqueeze | | √ | √ | √ | | | | | Unsqueeze |
-| Unstack | | √ | | | | | Unstack | | |
-| Where | | √ | | | | | Where | | |
-| ZerosLike | | √ | | | | | ZerosLike | | |
-
-* Clip: 仅支持将clip(0, 6)转换为Relu6.
-* DEQUANTIZE: 仅支持将fp16转换为fp32.
diff --git a/lite/lite.md b/lite/lite.md
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/lite/lite_en.md b/lite/lite_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/resource/api_mapping.md b/resource/api_mapping.md
index 0eed65d611cd8eaba77c9f0804c892e8c2913d4e..3f4a30cd61ed62dafe076ec22402dadeacff1809 100644
--- a/resource/api_mapping.md
+++ b/resource/api_mapping.md
@@ -36,7 +36,7 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.expm1 | mindspore.ops.operations.Expm1 |
| torch.eye | mindspore.ops.operations.Eye |
| torch.flatten | mindspore.ops.operations.Flatten |
-| torch.flip | mindspore.ops.operations.ReverseV2
+| torch.flip | mindspore.ops.operations.ReverseV2 |
| torch.floor | mindspore.ops.operations.Floor |
| torch.fmod | mindspore.ops.operations.Mod |
| torch.linspace | mindspore.nn.LinSpace |
@@ -167,13 +167,13 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.utils.data.distributed.DistributedSampler | mindspore.dataset.DistributedSampler |
| torch.zeros | mindspore.ops.operations.ZerosLike |
| torch.zeros_like | mindspore.ops.operations.ZerosLike |
-| torchvision.datasets.ImageFolder | mindspore.dataset.ImageFolderDatasetV2 |
+| torchvision.datasets.ImageFolder | mindspore.dataset.ImageFolderDataset |
| torchvision.ops.nms | mindspore.ops.operations.NMSWithMask |
| torchvision.ops.roi_align | mindspore.ops.operations.ROIAlign |
-| torchvision.transforms.CenterCrop | mindspore.dataset.vision.py_transforms.CenterCrop |
-| torchvision.transforms.ColorJitter | mindspore.dataset.vision.py_transforms.RandomColorAdjust |
-| torchvision.transforms.Compose | mindspore.dataset.vision.py_transforms.Compose |
-| torchvision.transforms.Normalize | mindspore.dataset.vision.py_transforms.Normalize |
-| torchvision.transforms.RandomHorizontalFlip | mindspore.dataset.vision.py_transforms.RandomHorizontalFlip |
-| torchvision.transforms.Resize | mindspore.dataset.vision.py_transforms.Resize |
-| torchvision.transforms.ToTensor | mindspore.dataset.vision.py_transforms.ToTensor |
+| torchvision.transforms.CenterCrop | mindspore.dataset.vision.py_transforms.CenterCrop |
+| torchvision.transforms.ColorJitter | mindspore.dataset.vision.py_transforms.RandomColorAdjust |
+| torchvision.transforms.Compose | mindspore.dataset.transforms.py_transforms.Compose |
+| torchvision.transforms.Normalize | mindspore.dataset.vision.py_transforms.Normalize |
+| torchvision.transforms.RandomHorizontalFlip | mindspore.dataset.vision.py_transforms.RandomHorizontalFlip |
+| torchvision.transforms.Resize | mindspore.dataset.vision.py_transforms.Resize |
+| torchvision.transforms.ToTensor | mindspore.dataset.vision.py_transforms.ToTensor |
diff --git a/tutorials/inference/Makefile b/tutorials/inference/Makefile
new file mode 100644
index 0000000000000000000000000000000000000000..1eff8952707bdfa503c8d60c1e9a903053170ba2
--- /dev/null
+++ b/tutorials/inference/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS ?=
+SPHINXBUILD ?= sphinx-build
+SOURCEDIR = source_zh_cn
+BUILDDIR = build_zh_cn
+
+# Put it first so that "make" without argument is like "make help".
+help:
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/tutorials/requirements.txt b/tutorials/inference/requirements.txt
similarity index 100%
rename from tutorials/requirements.txt
rename to tutorials/inference/requirements.txt
diff --git a/tutorials/inference/source_en/_static/logo_notebook.png b/tutorials/inference/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/tutorials/inference/source_en/_static/logo_notebook.png differ
diff --git a/tutorials/inference/source_en/_static/logo_source.png b/tutorials/inference/source_en/_static/logo_source.png
new file mode 100644
index 0000000000000000000000000000000000000000..fc347d271abe082ae8d16242328551648766b6fb
Binary files /dev/null and b/tutorials/inference/source_en/_static/logo_source.png differ
diff --git a/tutorials/source_en/conf.py b/tutorials/inference/source_en/conf.py
similarity index 100%
rename from tutorials/source_en/conf.py
rename to tutorials/inference/source_en/conf.py
diff --git a/tutorials/inference/source_en/index.rst b/tutorials/inference/source_en/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..8c1fc859ff9762ca299f564d60e75606bfb06080
--- /dev/null
+++ b/tutorials/inference/source_en/index.rst
@@ -0,0 +1,21 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 09:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+Inference Using MindSpore
+=================================
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+ :caption: Use
+
+ multi_platform_inference
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+ :caption: Inference Service
+
+ serving
diff --git a/tutorials/source_en/use/multi_platform_inference.md b/tutorials/inference/source_en/multi_platform_inference.md
similarity index 84%
rename from tutorials/source_en/use/multi_platform_inference.md
rename to tutorials/inference/source_en/multi_platform_inference.md
index 15b2ce276ea41856f8d2c10661fa7732054c12cc..6e4c5125d912fa1c50af9da59e80b97269a2be6d 100644
--- a/tutorials/source_en/use/multi_platform_inference.md
+++ b/tutorials/inference/source_en/multi_platform_inference.md
@@ -20,7 +20,7 @@
-
+
## Overview
@@ -77,22 +77,21 @@ MindSpore supports the following inference scenarios based on the hardware platf
print("============== {} ==============".format(acc))
```
In the preceding information:
- `model.eval` is an API for model validation. For details about the API, see .
- > Inference sample code: .
+ `model.eval` is an API for model validation. For details about the API, see .
+ > Inference sample code: .
1.2 Remote Storage
- When the pre-trained models are saved remotely, the steps of performing inference on validation dataset are as follows: firstly creating a model, then loading model and parameters using `hub.load_weights`, and finally performing inference on validation dataset once created. The processing method of the validation dataset is the same as that of the training dataset.
+ When the pre-trained models are saved remotely, perform inference on the validation dataset as follows: first determine which model to use, then load the model and its parameters using `mindspore_hub.load`, and finally run inference on the validation dataset once it is created. The validation dataset is processed in the same way as the training dataset.
```python
- network = LeNet5(cfg.num_classes)
+ model_uid = "mindspore/ascend/0.7/googlenet_v1_cifar10" # using GoogleNet as an example.
+ network = mindspore_hub.load(model_uid, num_classes=10)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
net_opt = nn.Momentum(network.trainable_params(), cfg.lr, cfg.momentum)
model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
print("============== Starting Testing ==============")
- hub.load_weights(network, network_name="lenet", **{"device_target":
- "ascend", "dataset":"mnist", "version": "0.5.0"})
dataset = create_dataset(os.path.join(args.data_path, "test"),
cfg.batch_size,
1)
@@ -101,14 +100,14 @@ MindSpore supports the following inference scenarios based on the hardware platf
```
In the preceding information:
- `hub.load_weights` is an API for loading model parameters. PLease check the details in .
+ `mindspore_hub.load` is an API for loading model parameters. Please check the details in .
2. Use the `model.predict` API to perform inference.
```python
model.predict(input_data)
```
In the preceding information:
- `model.predict` is an API for inference. For details about the API, see .
+ `model.predict` is an API for inference. For details about the API, see .
## Inference on the Ascend 310 AI processor
@@ -116,7 +115,7 @@ MindSpore supports the following inference scenarios based on the hardware platf
The Ascend 310 AI processor is equipped with the ACL framework and supports the OM format which needs to be converted from the model in ONNX or AIR format. For inference on the Ascend 310 AI processor, perform the following steps:
-1. Generate a model in ONNX or AIR format on the training platform. For details, see [Export AIR Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-air-model) and [Export ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-onnx-model).
+1. Generate a model in ONNX or AIR format on the training platform. For details, see [Export AIR Model](https://www.mindspore.cn/tutorial/training/en/r1.0/use/save_and_load_model.html#export-air-model) and [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/r1.0/use/save_and_load_model.html#export-onnx-model).
2. Convert the ONNX or AIR model file into an OM model file and perform inference.
- For performing inference in the cloud environment (ModelArt), see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/en-us/bestpractice-modelarts/modelarts_10_0026.html).
@@ -130,7 +129,7 @@ The inference is the same as that on the Ascend 910 AI processor.
### Inference Using an ONNX File
-1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-onnx-model).
+1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/r1.0/use/save_and_load_model.html#export-onnx-model).
2. Perform inference on a GPU by referring to the runtime or SDK document. For example, use TensorRT to perform inference on the NVIDIA GPU. For details, see [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
@@ -142,10 +141,10 @@ The inference is the same as that on the Ascend 910 AI processor.
### Inference Using an ONNX File
Similar to the inference on a GPU, the following steps are required:
-1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-onnx-model).
+1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/r1.0/use/save_and_load_model.html#export-onnx-model).
2. Perform inference on a CPU by referring to the runtime or SDK document. For details about how to use the ONNX Runtime, see the [ONNX Runtime document](https://github.com/microsoft/onnxruntime).
## On-Device Inference
-MindSpore Lite is an inference engine for on-device inference. For details, see [Export MINDIR Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-mindir-model) and [On-Device Inference](https://www.mindspore.cn/lite/en).
+MindSpore Lite is an inference engine for on-device inference. For details, see [Export MINDIR Model](https://www.mindspore.cn/tutorial/training/en/r1.0/use/save_and_load_model.html#export-mindir-model) and [On-Device Inference](https://www.mindspore.cn/lite/en).
diff --git a/tutorials/source_en/advanced_use/serving.md b/tutorials/inference/source_en/serving.md
similarity index 98%
rename from tutorials/source_en/advanced_use/serving.md
rename to tutorials/inference/source_en/serving.md
index f892856eb52288cc6a82cf78f4c96489ceffee10..9077bc2ff424b46dba89f1105b7574b8fac3d7db 100644
--- a/tutorials/source_en/advanced_use/serving.md
+++ b/tutorials/inference/source_en/serving.md
@@ -16,7 +16,7 @@
- [REST API Client Sample](#rest-api-client-sample)
-
+
## Overview
diff --git a/tutorials/inference/source_zh_cn/_static/logo_notebook.png b/tutorials/inference/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/tutorials/inference/source_zh_cn/_static/logo_notebook.png differ
diff --git a/tutorials/inference/source_zh_cn/_static/logo_source.png b/tutorials/inference/source_zh_cn/_static/logo_source.png
new file mode 100644
index 0000000000000000000000000000000000000000..fc347d271abe082ae8d16242328551648766b6fb
Binary files /dev/null and b/tutorials/inference/source_zh_cn/_static/logo_source.png differ
diff --git a/tutorials/source_zh_cn/conf.py b/tutorials/inference/source_zh_cn/conf.py
similarity index 96%
rename from tutorials/source_zh_cn/conf.py
rename to tutorials/inference/source_zh_cn/conf.py
index c2d1fb828249e37f44afaa4cda7e45d641784785..0c819a8b0622e1914ff199e5bd29a591595470b3 100644
--- a/tutorials/source_zh_cn/conf.py
+++ b/tutorials/inference/source_zh_cn/conf.py
@@ -58,6 +58,6 @@ html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
-html_search_options = {'dict': '../resource/jieba.txt'}
+html_search_options = {'dict': '../../resource/jieba.txt'}
html_static_path = ['_static']
\ No newline at end of file
diff --git a/tutorials/inference/source_zh_cn/index.rst b/tutorials/inference/source_zh_cn/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..1c83dd1d9ac116ed627583284a78c69ca7498dfc
--- /dev/null
+++ b/tutorials/inference/source_zh_cn/index.rst
@@ -0,0 +1,21 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 09:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+Inference with MindSpore
+=================================
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+   :caption: Inference Models
+
+ multi_platform_inference
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+   :caption: Building an Inference Service
+
+ serving
diff --git a/tutorials/source_zh_cn/use/multi_platform_inference.md b/tutorials/inference/source_zh_cn/multi_platform_inference.md
similarity index 81%
rename from tutorials/source_zh_cn/use/multi_platform_inference.md
rename to tutorials/inference/source_zh_cn/multi_platform_inference.md
index 5628bdc5c1a2269ce5eee3254ca36efa3cf57951..f9ec205b1ba89eb8f54e0f1eb51e43ce11841e90 100644
--- a/tutorials/source_zh_cn/use/multi_platform_inference.md
+++ b/tutorials/inference/source_zh_cn/multi_platform_inference.md
@@ -1,4 +1,4 @@
-# Multi-Platform Inference
+# Inference Models
`Linux` `Ascend` `GPU` `CPU` `Inference Application` `Beginner` `Intermediate` `Expert`
@@ -20,7 +20,7 @@
-
+
## Overview
@@ -76,21 +76,20 @@ CPU | ONNX format | Runtime/SDK that supports ONNX inference, such as TensorRT.
print("============== {} ==============".format(acc))
```
    In the preceding code:
-    `model.eval` is the model validation API; for details, see the corresponding API description: .
-    > Inference sample code: .
+    `model.eval` is the model validation API; for details, see the corresponding API description: .
+    > Inference sample code: .
-    1.2 The model is stored on HUAWEI CLOUD
+    1.2 Load the model from HUAWEI CLOUD using MindSpore Hub
-    First build the model, then use `hub.load_weights` to load the model parameters from the cloud, and pass in the validation dataset to perform inference. The validation dataset is processed in the same way as the training dataset.
+    First build the model, then use `mindspore_hub.load` to load the model parameters from the cloud, and pass in the validation dataset to perform inference. The validation dataset is processed in the same way as the training dataset.
```python
- network = LeNet5(cfg.num_classes)
+ model_uid = "mindspore/ascend/0.7/googlenet_v1_cifar10" # using GoogleNet as an example.
+ network = mindspore_hub.load(model_uid, num_classes=10)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
net_opt = nn.Momentum(network.trainable_params(), cfg.lr, cfg.momentum)
model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
print("============== Starting Testing ==============")
- hub.load_weights(network, network_name="lenet", **{"device_target":
- "ascend", "dataset":"mnist", "version": "0.5.0"})
dataset = create_dataset(os.path.join(args.data_path, "test"),
cfg.batch_size,
1)
@@ -98,14 +97,14 @@ CPU | ONNX format | Runtime/SDK that supports ONNX inference, such as TensorRT.
print("============== {} ==============".format(acc))
```
    In the preceding code:
-    `hub.load_weights` is the API for loading model parameters; for details, see the corresponding API description: .
+    `mindspore_hub.load` is the API for loading model parameters; for details, see the corresponding API description: .
2. Use the `model.predict` API to perform inference.
```python
model.predict(input_data)
```
    In the preceding code:
-    `model.predict` is the inference API; for details, see the corresponding API description: .
+    `model.predict` is the inference API; for details, see the corresponding API description: .
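Conceptually, the `model.eval` call above reduces to an argmax-and-compare accuracy computation over the validation set. A framework-free sketch of that metric logic (the `accuracy` helper below is illustrative, not a MindSpore API):

```python
# Minimal, framework-free sketch of the accuracy computation performed by
# the Accuracy metric behind model.eval; `accuracy` is an illustrative name.
def accuracy(logits, labels):
    """logits: list of per-class score lists; labels: list of int class ids."""
    correct = 0
    for scores, label in zip(logits, labels):
        # argmax over the class scores gives the predicted class id
        predicted = max(range(len(scores)), key=lambda i: scores[i])
        if predicted == label:
            correct += 1
    return correct / len(labels)

print(accuracy([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]], [1, 0, 0]))  # two of three correct
```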
## Inference on the Ascend 310 AI Processor
@@ -113,7 +112,7 @@ CPU | ONNX format | Runtime/SDK that supports ONNX inference, such as TensorRT.
The Ascend 310 AI processor is equipped with the ACL framework, which supports the OM format; an OM model must be converted from an ONNX or AIR model. Therefore, inference on the Ascend 310 AI processor requires the following two steps:
-1. Generate a model in ONNX or AIR format on the training platform. For details, see [Export AIR Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#air) and [Export ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#onnx).
+1. Generate a model in ONNX or AIR format on the training platform. For details, see [Export AIR Model](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/save_load_model_hybrid_parallel.html#air) and [Export ONNX Model](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/save_load_model_hybrid_parallel.html#onnx).
2. Convert the ONNX/AIR model file into an OM model and perform inference.
   - In the cloud (ModelArts environment), see the [sample of training on Ascend 910 and inference on Ascend 310](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html) to complete the inference.
@@ -127,7 +126,7 @@ Ascend 310 AI处理器上搭载了ACL框架,他支持OM格式,而OM格式需
### Inference Using an ONNX File
-1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#onnx).
+1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/save_load_model_hybrid_parallel.html#onnx).
2. Perform inference on the GPU by referring to the runtime or SDK document. For example, to run inference on an NVIDIA GPU with the commonly used TensorRT, see [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
@@ -139,10 +138,10 @@ Ascend 310 AI处理器上搭载了ACL框架,他支持OM格式,而OM格式需
### Inference Using an ONNX File
Similar to inference on a GPU, the following steps are required:
-1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#onnx).
+1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/save_load_model_hybrid_parallel.html#onnx).
2. Perform inference on the CPU by referring to the runtime or SDK document. For example, to use ONNX Runtime, see the [ONNX Runtime document](https://github.com/microsoft/onnxruntime).
## On-Device Inference
-On-device inference requires the MindSpore Lite inference engine. For details, see [Export MINDIR Model](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#mindir) and the [on-device inference tutorial](https://www.mindspore.cn/lite).
+On-device inference requires the MindSpore Lite inference engine. For details, see [Export MINDIR Model](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/save_load_model_hybrid_parallel.html#mindir) and the [on-device inference tutorial](https://www.mindspore.cn/lite).
diff --git a/tutorials/source_zh_cn/advanced_use/serving.md b/tutorials/inference/source_zh_cn/serving.md
similarity index 94%
rename from tutorials/source_zh_cn/advanced_use/serving.md
rename to tutorials/inference/source_zh_cn/serving.md
index b7a7b58ef2b4a3c45d95841d128d4322e053bb6f..677c01e97d166d806a7236bd8ce926a8be0dc352 100644
--- a/tutorials/source_zh_cn/advanced_use/serving.md
+++ b/tutorials/inference/source_zh_cn/serving.md
@@ -16,7 +16,7 @@
- [REST API Client Sample](#rest-api客户端示例)
-
+
## Overview
@@ -50,7 +50,7 @@ ms_serving [--help] [--model_path=] [--model_name=] [--p
### Exporting a Model
> Before exporting the model, you need to configure the MindSpore [base environment](https://www.mindspore.cn/install).
-Use [add_model.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/export_model/add_model.py) to build a network containing only the Add operator and export the MindSpore inference deployment model.
+Use [add_model.py](https://gitee.com/mindspore/mindspore/blob/r1.0/serving/example/export_model/add_model.py) to build a network containing only the Add operator and export the MindSpore inference deployment model.
```python
python add_model.py
@@ -70,7 +70,7 @@ ms_serving --model_path={model directory} --model_name=tensor_add.mindir
#### Python Client Sample
> Before running the client, add the path corresponding to `/{your python path}/lib/python3.7/site-packages/mindspore` to the PYTHONPATH environment variable.
-Obtain [ms_client.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/python_client/ms_client.py) and start the Python client.
+Obtain [ms_client.py](https://gitee.com/mindspore/mindspore/blob/r1.0/serving/example/python_client/ms_client.py) and start the Python client.
```bash
python ms_client.py
```
@@ -156,7 +156,7 @@ ms client received:
3. Call the gRPC API to communicate with the Serving service that has been started, and retrieve the return value.
```Status status = stub_->Predict(&context, request, &reply);```
-For the complete code, see [ms_client](https://gitee.com/mindspore/mindspore/blob/master/serving/example/cpp_client/ms_client.cc).
+For the complete code, see [ms_client](https://gitee.com/mindspore/mindspore/blob/r1.0/serving/example/cpp_client/ms_client.cc).
### REST API Client Sample
1. Send data in `data` format:
diff --git a/tutorials/lite/Makefile b/tutorials/lite/Makefile
new file mode 100644
index 0000000000000000000000000000000000000000..1eff8952707bdfa503c8d60c1e9a903053170ba2
--- /dev/null
+++ b/tutorials/lite/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS ?=
+SPHINXBUILD ?= sphinx-build
+SOURCEDIR = source_zh_cn
+BUILDDIR = build_zh_cn
+
+# Put it first so that "make" without argument is like "make help".
+help:
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/lite/docs/requirements.txt b/tutorials/lite/requirements.txt
similarity index 63%
rename from lite/docs/requirements.txt
rename to tutorials/lite/requirements.txt
index 8788e57530c50ff0656148a12b6d3a4480f4e430..ea17a9e73613ddd99cc31690ddcf283d9a721450 100644
--- a/lite/docs/requirements.txt
+++ b/tutorials/lite/requirements.txt
@@ -1,5 +1,5 @@
-sphinx
+sphinx >= 2.2.1, <= 2.4.4
recommonmark
sphinx-markdown-tables
sphinx_rtd_theme
-numpy
\ No newline at end of file
+jieba
\ No newline at end of file
diff --git a/tutorials/lite/source_en/_static/logo_notebook.png b/tutorials/lite/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/tutorials/lite/source_en/_static/logo_notebook.png differ
diff --git a/tutorials/lite/source_en/_static/logo_source.png b/tutorials/lite/source_en/_static/logo_source.png
new file mode 100644
index 0000000000000000000000000000000000000000..fc347d271abe082ae8d16242328551648766b6fb
Binary files /dev/null and b/tutorials/lite/source_en/_static/logo_source.png differ
diff --git a/lite/tutorials/source_en/conf.py b/tutorials/lite/source_en/conf.py
similarity index 100%
rename from lite/tutorials/source_en/conf.py
rename to tutorials/lite/source_en/conf.py
diff --git a/tutorials/lite/source_en/images/lite_quick_start_app_result.png b/tutorials/lite/source_en/images/lite_quick_start_app_result.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7cc49f582440e31b6b5b14dbba5131bfed2a4b4
Binary files /dev/null and b/tutorials/lite/source_en/images/lite_quick_start_app_result.png differ
diff --git a/lite/tutorials/source_en/images/lite_quick_start_home.png b/tutorials/lite/source_en/images/lite_quick_start_home.png
similarity index 100%
rename from lite/tutorials/source_en/images/lite_quick_start_home.png
rename to tutorials/lite/source_en/images/lite_quick_start_home.png
diff --git a/lite/tutorials/source_en/images/lite_quick_start_project_structure.png b/tutorials/lite/source_en/images/lite_quick_start_project_structure.png
similarity index 100%
rename from lite/tutorials/source_en/images/lite_quick_start_project_structure.png
rename to tutorials/lite/source_en/images/lite_quick_start_project_structure.png
diff --git a/lite/tutorials/source_en/images/lite_quick_start_run_app.PNG b/tutorials/lite/source_en/images/lite_quick_start_run_app.PNG
similarity index 100%
rename from lite/tutorials/source_en/images/lite_quick_start_run_app.PNG
rename to tutorials/lite/source_en/images/lite_quick_start_run_app.PNG
diff --git a/lite/tutorials/source_en/images/lite_quick_start_sdk.png b/tutorials/lite/source_en/images/lite_quick_start_sdk.png
similarity index 100%
rename from lite/tutorials/source_en/images/lite_quick_start_sdk.png
rename to tutorials/lite/source_en/images/lite_quick_start_sdk.png
diff --git a/lite/tutorials/source_en/images/side_infer_process.png b/tutorials/lite/source_en/images/side_infer_process.png
similarity index 100%
rename from lite/tutorials/source_en/images/side_infer_process.png
rename to tutorials/lite/source_en/images/side_infer_process.png
diff --git a/lite/tutorials/source_en/index.rst b/tutorials/lite/source_en/index.rst
similarity index 66%
rename from lite/tutorials/source_en/index.rst
rename to tutorials/lite/source_en/index.rst
index 26e9445ec1baace48e64a3418dd12fe1a1ec36a3..efbe495429d9ea1065815c0d7564855c41a9e164 100644
--- a/lite/tutorials/source_en/index.rst
+++ b/tutorials/lite/source_en/index.rst
@@ -3,8 +3,8 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-MindSpore Lite Tutorials
-========================
+Using MindSpore on Mobile and IoT
+=================================
.. toctree::
:glob:
@@ -16,9 +16,12 @@ MindSpore Lite Tutorials
.. toctree::
:glob:
:maxdepth: 1
- :caption: Use
+ :caption: Basic Use
- build
- use/converter_tool
+ use/build
+ use/convert_model
use/evaluating_the_model
+ use/image_processing
use/runtime
+ use/benchmark_tool
+ use/timeprofiler_tool
diff --git a/lite/tutorials/source_en/quick_start/quick_start.md b/tutorials/lite/source_en/quick_start/quick_start.md
similarity index 65%
rename from lite/tutorials/source_en/quick_start/quick_start.md
rename to tutorials/lite/source_en/quick_start/quick_start.md
index b0712f03d6a6b713fa0b63160f5be2714a3fc8a2..40a337274f2c804108d2582a5480893e8500cdca 100644
--- a/lite/tutorials/source_en/quick_start/quick_start.md
+++ b/tutorials/lite/source_en/quick_start/quick_start.md
@@ -17,7 +17,7 @@
-
+
## Overview
@@ -29,7 +29,7 @@ This tutorial demonstrates the on-device deployment process based on the image c
2. Convert the model into a MindSpore Lite model.
3. Use the MindSpore Lite inference model on the device. The following describes how to use the MindSpore Lite C++ APIs (Android JNIs) and MindSpore Lite image classification models to perform on-device inference, classify the content captured by a device camera, and display the most probable classification result on the application's image preview screen.
-> Click to find [Android image classification models](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite) and [sample code](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification).
+> Click to find [Android image classification models](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite) and [sample code](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo/official/lite/image_classification).
## Selecting a Model
@@ -39,11 +39,11 @@ In addition, you can use the preset model to perform migration learning to imple
## Converting a Model
-After you retrain a model provided by MindSpore, export the model in the [.mindir format](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-mindir-model). Use the MindSpore Lite [model conversion tool](https://www.mindspore.cn/lite/tutorial/en/master/use/converter_tool.html) to convert the .mindir model to a .ms model.
+After you retrain a model provided by MindSpore, export the model in the [.mindir format](https://www.mindspore.cn/tutorial/training/en/r1.0/use/save_and_load_model.html#export-mindir-model). Use the MindSpore Lite [model conversion tool](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/converter_tool.html) to convert the .mindir model to a .ms model.
Take the mobilenetv2 model as an example. Execute the following script to convert a model into a MindSpore Lite model for on-device inference.
```bash
-./converter_lite --fmk=MS --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
+./converter_lite --fmk=MINDIR --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
```
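For converting several models in a row, the command above can be assembled programmatically. A small sketch; the `converter_cmd` helper is an assumption for illustration, and only the flags themselves come from the command shown above:

```python
import shlex

# Illustrative helper that assembles the converter_lite command line shown
# above; the function name and default fmk value are assumptions.
def converter_cmd(model_file, output_file, fmk="MINDIR"):
    return ["./converter_lite",
            f"--fmk={fmk}",
            f"--modelFile={model_file}",
            f"--outputFile={output_file}"]

cmd = converter_cmd("mobilenetv2.mindir", "mobilenetv2.ms")
print(shlex.join(cmd))
```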
## Deploying an Application
@@ -54,9 +54,9 @@ The following section describes how to build and execute an on-device image clas
- Android Studio 3.2 or later (Android 4.0 or later is recommended.)
- Native development kit (NDK) 21.3
-- CMake 3.10.2
+- [CMake](https://cmake.org/download) 3.10.2
- Android software development kit (SDK) 26 or later
-- OpenCV 4.0.0 or later (included in the sample code)
+- [JDK]( https://www.oracle.com/downloads/otn-pub/java/JDK/) 1.8 or later
### Building and Running
@@ -68,7 +68,7 @@ The following section describes how to build and execute an on-device image clas

- (Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3). Specify the SDK location in `Android NDK location` of `Project Structure`.
+ (Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3). Specify the NDK location in `Android NDK location` of `Project Structure`.

@@ -80,6 +80,8 @@ The following section describes how to build and execute an on-device image clas
For details about how to connect Android Studio to a device for debugging, see .
+    You need to turn on "USB debugging mode" on the mobile phone before Android Studio can recognize it. On Huawei phones, "USB debugging mode" is generally enabled in Settings > System & updates > Developer options > USB debugging.
+
3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by a camera and the inference result.

@@ -87,7 +89,7 @@ The following section describes how to build and execute an on-device image clas
## Detailed Description of the Sample Program
-This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/tutorial/en/master/use/runtime.html).
+This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/runtime.html).
> The following describes the JNI layer implementation of the sample program. At the Java layer, the Android Camera 2 API is used to enable a device camera and process image frames. Readers are expected to have basic Android development knowledge.
@@ -95,31 +97,22 @@ This image classification sample program on the Android device includes a Java l
```
app
-|
-├── libs # library files that store MindSpore Lite dependencies
-│ └── arm64-v8a
-│ ├── libopencv_java4.so
-│ └── libmindspore-lite.so
│
-├── opencv # dependency files related to OpenCV
-│ └── ...
-|
├── src/main
│ ├── assets # resource files
-| | └── model.ms # model file
+| | └── mobilenetv2.ms # model file
│ |
│ ├── cpp # main logic encapsulation classes for model loading and prediction
-| | ├── include # header files related to MindSpore calling
-| | | └── ...
-│ | |
+| | |── ...
+| |   ├── mindspore_lite_x.x.x-minddata-arm64-cpu # MindSpore Lite version
| | ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calling
│ | └── MindSporeNetnative.h # header file
│ |
│ ├── java # application code at the Java layer
-│ │ └── com.huawei.himindsporedemo
+│ │ └── com.mindspore.himindsporedemo
│ │ ├── gallery.classify # implementation related to image processing and MindSpore JNI calling
│ │ │ └── ...
-│ │ └── obejctdetect # implementation related to camera enabling and drawing
+│ │ └── widget # implementation related to camera enabling and drawing
│ │ └── ...
│ │
│ ├── res # resource files related to Android
@@ -128,12 +121,13 @@ app
├── CMakeList.txt # CMake compilation entry file
│
├── build.gradle # Other Android configuration file
+├── download.gradle # MindSpore version download
└── ...
```
### Configuring MindSpore Lite Dependencies
-When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/tutorial/en/master/build.html) to generate the `libmindspore-lite.so` library file.
+When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html) to generate the `libmindspore-lite.so` library file.
In Android Studio, place the compiled `libmindspore-lite.so` library file (which can contain multiple compatible architectures) in the `app/libs/ARM64-V8a` (Arm64) or `app/libs/armeabi-v7a` (Arm32) directory of the application project. In the `build.gradle` file of the application, configure the compilation support of CMake, `arm64-v8a`, and `armeabi-v7a`.
@@ -156,42 +150,40 @@ android{
Create a link to the `.so` library file in the `app/CMakeLists.txt` file:
```
-# Set MindSpore Lite Dependencies.
-include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/include/MindSpore)
+# ============== Set MindSpore Dependencies. =============
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/third_party/flatbuffers/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/ir/dtype)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/schema)
+
add_library(mindspore-lite SHARED IMPORTED )
-set_target_properties(mindspore-lite PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libmindspore-lite.so")
+add_library(minddata-lite SHARED IMPORTED )
-# Set OpenCV Dependecies.
-include_directories(${CMAKE_SOURCE_DIR}/opencv/sdk/native/jni/include)
-add_library(lib-opencv SHARED IMPORTED )
-set_target_properties(lib-opencv PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libopencv_java4.so")
+set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libmindspore-lite.so)
+set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
+# --------------- MindSpore Lite set End. --------------------
# Link target library.
target_link_libraries(
...
- mindspore-lite
- lib-opencv
+ # --- mindspore ---
+ minddata-lite
+ mindspore-lite
...
)
```
-In this example, the download.gradle File configuration auto download ` libmindspot-lite.so `and `libopencv_ Java4.so` library file, placed in the 'app / libs / arm64-v8a' directory.
+In this example, the download.gradle file is configured to automatically download the MindSpore Lite package and place it in the `app/src/main/cpp/mindspore_lite_x.x.x-minddata-arm64-cpu` directory.
Note: if the automatic download fails, please manually download the relevant library files and put them in the corresponding location.
-libmindspore-lite.so [libmindspore-lite.so]( https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
-
-libmindspore-lite include [libmindspore-lite include]( https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/include.zip)
-
-libopencv_java4.so [libopencv_java4.so](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so)
-
-libopencv include [libopencv include]( https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/include.zip)
-
-
+MindSpore Lite version [MindSpore Lite version](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%201.0/mindspore-lite-1.0.0-minddata-arm64-cpu.tar.gz)
### Downloading and Deploying a Model File
@@ -201,8 +193,6 @@ Note: if the automatic download fails, please manually download the relevant lib
mobilenetv2.ms [mobilenetv2.ms]( https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)
-
-
### Compiling On-Device Inference Code
Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
@@ -225,10 +215,8 @@ The inference code process is as follows. For details about the complete code, s
*labelEnv = labelNet;
// Create context.
- lite::Context *context = new lite::Context;
-
- context->device_ctx_.type = lite::DT_CPU;
- context->thread_num_ = numThread; //Specify the number of threads to run inference
+ mindspore::lite::Context *context = new mindspore::lite::Context;
+ context->thread_num_ = num_thread;
// Create the mindspore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
@@ -253,7 +241,7 @@ The inference code process is as follows. For details about the complete code, s
```cpp
// Convert the Bitmap image passed in from the JAVA layer to Mat for OpenCV processing
- BitmapToMat(env, srcBitmap, matImageSrc);
+ BitmapToMat(env, srcBitmap, matImageSrc);
// Processing such as zooming the picture size.
matImgPreprocessed = PreProcessImageData(matImageSrc);
@@ -278,7 +266,38 @@ The inference code process is as follows. For details about the complete code, s
delete[] (dataHWC);
```
-3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
+3. Preprocess the input data.
+
+ ```cpp
+ bool PreProcessImageData(const LiteMat &lite_mat_bgr, LiteMat *lite_norm_mat_ptr) {
+ bool ret = false;
+ LiteMat lite_mat_resize;
+ LiteMat &lite_norm_mat_cut = *lite_norm_mat_ptr;
+ ret = ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+ if (!ret) {
+ MS_PRINT("ResizeBilinear error");
+ return false;
+ }
+ LiteMat lite_mat_convert_float;
+ ret = ConvertTo(lite_mat_resize, lite_mat_convert_float, 1.0 / 255.0);
+ if (!ret) {
+ MS_PRINT("ConvertTo error");
+ return false;
+ }
+ LiteMat lite_mat_cut;
+ ret = Crop(lite_mat_convert_float, lite_mat_cut, 16, 16, 224, 224);
+ if (!ret) {
+ MS_PRINT("Crop error");
+ return false;
+ }
+ float means[3] = {0.485, 0.456, 0.406};
+ float vars[3] = {1.0 / 0.229, 1.0 / 0.224, 1.0 / 0.225};
+ SubStractMeanNormalize(lite_mat_cut, lite_norm_mat_cut, means, vars);
+ return true;
+ }
+ ```
+
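The per-pixel arithmetic in `PreProcessImageData` above can be sketched without the `LiteMat` types. This assumes `SubStractMeanNormalize` computes `(x - mean) * var`; the helper name and sample pixel are illustrative, and the real pipeline also resizes to 256x256 and crops to 224x224 first:

```python
# Pure-Python sketch of the normalization step: scale a channel value to
# [0, 1], subtract the mean, and multiply by the reciprocal standard
# deviation (the `vars` array in the C++ code above).
MEANS = [0.485, 0.456, 0.406]
VARS = [1.0 / 0.229, 1.0 / 0.224, 1.0 / 0.225]

def normalize_pixel(rgb_u8):
    """rgb_u8: one pixel as three 0-255 channel values."""
    return [((c / 255.0) - m) * v for c, m, v in zip(rgb_u8, MEANS, VARS)]

# A pixel near the dataset mean lands close to zero after normalization.
print(normalize_pixel([124, 116, 104]))
```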
+4. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
- Perform graph execution and on-device inference.
@@ -289,7 +308,12 @@ The inference code process is as follows. For details about the complete code, s
- Obtain the output data.
```cpp
- auto msOutputs = mSession->GetOutputs();
+ auto names = mSession->GetOutputTensorNames();
+    std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
+    for (const auto &name : names) {
+      auto temp_dat = mSession->GetOutputByTensorName(name);
+      msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *>{name, temp_dat});
+ }
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
@@ -298,39 +322,34 @@ The inference code process is as follows. For details about the complete code, s
std::string ProcessRunnetResult(std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs, int runnetRet) {
- // Get model output results.
-    std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
- iter = msOutputs.begin();
- auto brach1_string = iter->first;
- auto branch1_tensor = iter->second;
+    std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
+ iter = msOutputs.begin();
- int OUTPUTS_LEN = branch1_tensor->ElementsNum();
+    // The mobilenetv2.ms model outputs just one branch.
+ auto outputTensor = iter->second;
+ int tensorNum = outputTensor->ElementsNum();
-    float *temp_scores = static_cast<float *>(branch1_tensor->MutableData());
+ // Get a pointer to the first score.
+    float *temp_scores = static_cast<float *>(outputTensor->MutableData());
- float scores[RET_CATEGORY_SUM];
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- scores[i] = temp_scores[i];
+ float scores[RET_CATEGORY_SUM];
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ if (temp_scores[i] > 0.5) {
+ MS_PRINT("MindSpore scores[%d] : [%f]", i, temp_scores[i]);
}
+ scores[i] = temp_scores[i];
+ }
- // Converted to text information that needs to be displayed in the APP.
- std::string retStr = "";
- if (runnetRet == 0) {
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- if (scores[i] > 0.3){
- retStr += g_labels_name_map[i];
- retStr += ":";
- std::string score_str = std::to_string(scores[i]);
- retStr += score_str;
- retStr += ";";
- }
- }
- else {
- MS_PRINT("MindSpore run net failed!");
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- retStr += " :0.0;";
- }
- }
- return retStr;
+ // Score for each category.
+ // Converted to text information that needs to be displayed in the APP.
+ std::string categoryScore = "";
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ categoryScore += labels_name_map[i];
+ categoryScore += ":";
+ std::string score_str = std::to_string(scores[i]);
+ categoryScore += score_str;
+ categoryScore += ";";
+ }
+ return categoryScore;
}
```
\ No newline at end of file
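The score formatting in `ProcessRunnetResult` above reduces to pairing labels with scores and joining them as `label:score;`. A stdlib sketch; the labels are illustrative sample data, and `:.6f` mirrors the six decimal places of C++ `std::to_string`:

```python
# Sketch of what ProcessRunnetResult does with the output tensor: pair each
# category label with its score and join them as "label:score;". The label
# list is illustrative sample data, not the model's real label map.
def format_scores(labels, scores):
    return "".join(f"{label}:{score:.6f};" for label, score in zip(labels, scores))

print(format_scores(["cat", "dog"], [0.125, 0.875]))  # cat:0.125000;dog:0.875000;
```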
diff --git a/lite/tutorials/source_en/use/benchmark_tool.md b/tutorials/lite/source_en/use/benchmark_tool.md
similarity index 83%
rename from lite/tutorials/source_en/use/benchmark_tool.md
rename to tutorials/lite/source_en/use/benchmark_tool.md
index d6b3a09ae8554a462fd9a464120ee8cfc1f228f1..3984cb87f6208243e639cc61def0834896b45c55 100644
--- a/lite/tutorials/source_en/use/benchmark_tool.md
+++ b/tutorials/lite/source_en/use/benchmark_tool.md
@@ -12,7 +12,7 @@
-
+
## Overview
@@ -22,9 +22,9 @@ After model conversion and before inference, you can use the Benchmark tool to p
To use the Benchmark tool, you need to prepare the environment as follows:
-- Compilation: Install build dependencies and perform build. The code of the Benchmark tool is stored in the `mindspore/lite/tools/benchmark` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/master/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/master/build.html#compilation-example) in the build document.
+- Compilation: Install build dependencies and perform build. The code of the Benchmark tool is stored in the `mindspore/lite/tools/benchmark` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#compilation-example) in the build document.
-- Run: Obtain the `Benchmark` tool and configure environment variables. For details, see [Output Description](https://www.mindspore.cn/lite/tutorial/en/master/build.html#output-description) in the build document.
+- Run: Obtain the `Benchmark` tool and configure environment variables. For details, see [Output Description](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#output-description) in the build document.
## Example
@@ -64,23 +64,16 @@ Mean bias of all nodes: 0%
```
-When the origin model's input or output data type is uint8, they needs to be reduced by 128 and converted to int8 type before it can be used as benchmark data to verify accuracy. And when the output data type is INT8, you need to specify calibDataType as INT8 in the parameter.
-
-```bash
-./benchmark --modelPath=./models/test_benchmark_int8.ms --inDataPath=./input/test_benchmark_int8.bin --device=CPU --accuracyThreshold=3 --calibDataPath=./output/test_benchmark_int8.out --calibDataType=INT8
-```
-
## Parameter Description
The command used for benchmark testing based on the compiled Benchmark tool is as follows:
```bash
./benchmark [--modelPath=<MODELPATH>] [--accuracyThreshold=<ACCURACYTHRESHOLD>]
-   [--calibDataPath=<CALIBDATAPATH>] [--cpuBindMode=<CPUBINDMODE>]
-   [--device=<DEVICE>] [--help] [--inDataPath=<INDATAPATH>]
-   [--inDataType=<INDATATYPE>] [--loopCount=<LOOPCOUNT>]
-   [--numThreads=<NUMTHREADS>] [--omModelPath=<OMMODELPATH>]
-   [--resizeDims=<RESIZEDIMS>] [--warmUpLoopCount=<WARMUPLOOPCOUNT>]
+   [--calibDataPath=<CALIBDATAPATH>] [--calibDataType=<CALIBDATATYPE>]
+   [--cpuBindMode=<CPUBINDMODE>] [--device=<DEVICE>] [--help]
+   [--inDataPath=<INDATAPATH>] [--loopCount=<LOOPCOUNT>]
+   [--numThreads=<NUMTHREADS>] [--warmUpLoopCount=<WARMUPLOOPCOUNT>]
    [--fp16Priority=<FP16PRIORITY>]
```
@@ -91,7 +84,7 @@ The following describes the parameters in detail.
| `--modelPath=` | Mandatory | Specifies the file path of the MindSpore Lite model for benchmark testing. | String | Null | - |
| `--accuracyThreshold=` | Optional | Specifies the accuracy threshold. | Float | 0.5 | - |
| `--calibDataPath=` | Optional | Specifies the file path of the benchmark data. The benchmark data, as the comparison output of the tested model, is output from the forward inference of the tested model under other deep learning frameworks using the same input. | String | Null | - |
-| `--calibDataType=` | Optional | Specifies the calibration data type. | String | FLOAT | FLOAT or INT8 |
+| `--calibDataType=` | Optional | Specifies the calibration data type. | String | FLOAT | UINT8, FLOAT or INT8 |
| `--cpuBindMode=` | Optional | Specifies the type of the CPU core bound to the model inference program. | Integer | 1 | −1: medium core<br>1: large core<br>0: not bound |
| `--device=` | Optional | Specifies the type of the device on which the model inference program runs. | String | CPU | CPU or GPU |
| `--help` | Optional | Displays the help information about the `benchmark` command. | - | - | - |
diff --git a/lite/tutorials/source_en/build.md b/tutorials/lite/source_en/use/build.md
similarity index 67%
rename from lite/tutorials/source_en/build.md
rename to tutorials/lite/source_en/use/build.md
index ef1282a257493900b1c43c9371d083058f2e04de..ff40cefcd3d9eee34533ef066adb0789a65c6e45 100644
--- a/lite/tutorials/source_en/build.md
+++ b/tutorials/lite/source_en/use/build.md
@@ -10,24 +10,21 @@
- [Output Description](#output-description)
- [Description of Converter's Directory Structure](#description-of-converters-directory-structure)
- [Description of Runtime and Other tools' Directory Structure](#description-of-runtime-and-other-tools-directory-structure)
- - [Windows Environment Compilation](#windows-environment-compilation)
- - [Environment Requirements](#environment-requirements-1)
- - [Compilation Options](#compilation-options-1)
- - [Compilation Example](#compilation-example-1)
- - [Output Description](#output-description-1)
+ - [Description of Imageprocess's Directory Structure](#description-of-imageprocesss-directory-structure)
-
+
This chapter introduces how to quickly compile MindSpore Lite, which includes the following modules:
| Module | Support Platform | Description |
| --- | ---- | ---- |
-| converter | Linux、Windows | Model Conversion Tool |
+| converter | Linux | Model Conversion Tool |
| runtime | Linux, Android | Model Inference Framework |
| benchmark | Linux, Android | Benchmarking Tool |
-| time_profiler | Linux、Android | Performance Analysis Tool |
+| timeprofiler | Linux, Android | Performance Analysis Tool |
+| imageprocess | Linux, Android | Image Processing Library |
## Linux Environment Compilation
@@ -35,7 +32,7 @@ This chapter introduces how to quickly compile MindSpore Lite, which includes th
- The compilation environment supports Linux x86_64 only. Ubuntu 18.04.02 LTS is recommended.
-- Compilation dependencies of runtime、benchmark and time_profiler:
+- Compilation dependencies of runtime, benchmark and timeprofiler:
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0
- [Android_NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
@@ -53,6 +50,7 @@ This chapter introduces how to quickly compile MindSpore Lite, which includes th
- [Libevent](https://libevent.org) >= 2.0
- [M4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18
- [OpenSSL](https://www.openssl.org/) >= 1.1.1
+ - [Python](https://www.python.org/) >= 3.7.5
> - To install and use `Android_NDK`, you need to configure environment variables. The command example is `export ANDROID_NDK={$NDK_PATH}/android-ndk-r20b`.
> - In the `build.sh` script, run the `git clone` command to obtain the code in the third-party dependency library. Ensure that the network settings of Git are correct.
@@ -69,6 +67,7 @@ MindSpore Lite provides a compilation script `build.sh` for one-click compilatio
| -j[n] | Sets the number of threads used during compilation. Otherwise, the number of threads is set to 8 by default. | Integer | No |
| -e | In the Arm architecture, select the backend operator and set the `gpu` parameter. The built-in GPU operator of the framework is compiled at the same time. | GPU | No |
| -h | Displays the compilation help information. | None | No |
+| -n | Compiles the lightweight image processing module. | lite_cv | No |
> When the `-I` parameter changes, for example from `-I x86_64` to `-I arm64`, adding `-i` for incremental compilation does not take effect.
@@ -77,7 +76,7 @@ MindSpore Lite provides a compilation script `build.sh` for one-click compilatio
First, download source code from the MindSpore code repository.
```bash
-git clone https://gitee.com/mindspore/mindspore.git
+git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
Then, run the following commands in the root directory of the source code to compile MindSpore Lite of different versions:
@@ -102,11 +101,17 @@ Then, run the following commands in the root directory of the source code to com
bash build.sh -I arm64 -e gpu
```
+- Compile ARM64 with the image preprocessing module:
+ ```bash
+ bash build.sh -I arm64 -n lite_cv
+ ```
+
### Output Description
-After the compilation is complete, go to the `mindspore/output` directory of the source code to view the file generated after compilation. The file is divided into two parts.
+After the compilation is complete, go to the `mindspore/output` directory of the source code to view the file generated after compilation. The file is divided into three parts.
- `mindspore-lite-{version}-converter-{os}.tar.gz`: Contains the model conversion tool.
- `mindspore-lite-{version}-runtime-{os}-{device}.tar.gz`: Contains the model inference framework, benchmarking tool and performance analysis tool.
+- `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`: Contains the image processing library ImageProcess.
> version: version of the output, consistent with that of the MindSpore.
>
@@ -119,6 +124,7 @@ Execute the decompression command to obtain the compiled output:
```bash
tar -xvf mindspore-lite-{version}-converter-{os}.tar.gz
tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
+tar -xvf mindspore-lite-{version}-minddata-{os}-{device}.tar.gz
```
#### Description of Converter's Directory Structure
@@ -147,7 +153,7 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
@@ -158,74 +164,45 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
│ └── benchmark # Benchmarking Tool
│ └── lib # Inference framework dynamic library
│ ├── libmindspore-lite.so # Dynamic library of inference framework in MindSpore Lite
- │ ├── liboptimize.so # Operator performance optimization library in MindSpore Lite
+ │ ├── libmindspore-lite-fp16.so # Operator performance optimization library supporting float16 in MindSpore Lite
+ │ ├── libmindspore-lite-optimize.so # Operator performance optimization library supporting the dot product instruction in MindSpore Lite
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
- When the compilation option is `-I arm32`:
```
|
- ├── mindspore-lite-{version}-runtime-arm64-cpu
+ ├── mindspore-lite-{version}-runtime-arm32-cpu
│ └── benchmark # Benchmarking Tool
│ └── lib # Inference framework dynamic library
│ ├── libmindspore-lite.so # Dynamic library of inference framework in MindSpore Lite
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
-> 1. `liboptimize.so` only exists in the output package of runtime-arm64 and is only used on ARMv8.2 and CPUs that support fp16.
-> 2. Compile ARM64 to get the inference framework output of arm64-cpu by default, if you add `-e gpu`, you will get the inference framework output of arm64-gpu, and the package name is `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`, compiling ARM32 is in the same way.
-> 3. Before running the tools in the converter, benchmark or time_profile directory, you need to configure environment variables, and configure the path where the dynamic libraries of MindSpore Lite and Protobuf are located to the path where the system searches for dynamic libraries. Take the compiled under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/flatbuffers/lib:${LD_LIBRARY_PATH}`; configure benchmark and timeprofiler: `export LD_LIBRARY_PATH= ./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
-
-## Windows Environment Compilation
-
-### Environment Requirements
-
-- The supported compilation environment is: Windows 10, 64-bit.
-
-- Compilation dependencies are:
- - [CMake](https://cmake.org/download/) >= 3.14.1
- - [MinGW GCC](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z/download) = 7.3.0
- - [Python](https://www.python.org/) >= 3.7.5
-
-> The compilation script will execute `git clone` to obtain the code of the third-party dependent libraries. Please make sure that the git network settings are correct and available in advance.
-
-### Compilation Options
+> 1. `libmindspore-lite-optimize.so` only exists in the output package of runtime-arm64 and is used only on ARMv8.2 CPUs that support the dot product instruction.
+> 2. `libmindspore-lite-fp16.so` only exists in the output package of runtime-arm64 and is used only on ARMv8.2 CPUs that support fp16.
+> 3. Compiling for ARM64 yields the arm64-cpu inference framework output by default. If you add `-e gpu`, you get the arm64-gpu output instead, with the package name `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`; compiling for ARM32 works the same way.
+> 4. Before running the tools in the converter, benchmark or time_profiler directory, you need to configure environment variables so that the directories containing the MindSpore Lite and Protobuf dynamic libraries are on the system's dynamic library search path. Taking version 0.7.0-beta as an example, for converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/flatbuffers/lib:${LD_LIBRARY_PATH}`; for benchmark and timeprofiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
-The compilation options of MindSpore Lite are as follows:
+#### Description of Imageprocess's Directory Structure
-| Parameter | Parameter Description | Mandatory or Not |
-| -------- | ----- | ---- |
-| **lite** | **Set this parameter to compile the Mindspore Lite project.** | **Yes** |
-| [n] | Set the number of threads used during compilation, otherwise the default is set to 6 threads. | No |
+The image processing library is generated only under the `-I arm64 -n lite_cv` compilation option and includes the following parts:
-### Compilation Example
-
-First, use the git tool to download the source code from the MindSpore code repository.
-```bash
-git clone https://gitee.com/mindspore/mindspore.git
```
-
-Then, use the cmd tool to compile MindSpore Lite in the root directory of the source code and execute the following commands.
-
-- Compile the Windows version with the default number of threads (6 threads).
- ```bash
- call build.bat lite
- ```
-- Compile the Windows version with the specified number of threads 8.
- ```bash
- call build.bat lite 8
- ```
-
-### Output Description
-
-After the compilation is complete, enter the `mindspore/output/` directory, unzip the output file `mindspore-lite-{version}-converter-win-cpu.zip`, which contains the conversion tool executable file.
-
-> version: version of the output, consistent with that of the MindSpore.
+|
+├── mindspore-lite-{version}-minddata-{os}-{device}
+│ └── include # Header files
+│ ├── lite_cv # Image processing library header file
+│ └── lib # Dynamic library
+│ ├── libminddata-lite.so # Image processing dynamic library
+│ └── third_party # Third-party library header files and libraries
+│ ├── flatbuffers # Header files of FlatBuffers
+```
diff --git a/tutorials/lite/source_en/use/convert_model.rst b/tutorials/lite/source_en/use/convert_model.rst
new file mode 100644
index 0000000000000000000000000000000000000000..d8c26274a0c2469f17508dd34401a4ef5220e592
--- /dev/null
+++ b/tutorials/lite/source_en/use/convert_model.rst
@@ -0,0 +1,8 @@
+Convert into the MindSpore Lite Model
+=====================================
+
+.. toctree::
+ :maxdepth: 1
+
+ converter_tool
+ post_training_quantization
\ No newline at end of file
diff --git a/lite/tutorials/source_en/use/converter_tool.md b/tutorials/lite/source_en/use/converter_tool.md
similarity index 70%
rename from lite/tutorials/source_en/use/converter_tool.md
rename to tutorials/lite/source_en/use/converter_tool.md
index 38cd115fb12a93031009cf9f2d12e1ab77045a46..264421bcc3a80a6bc89706b1ce39bf3ba8bd164a 100644
--- a/lite/tutorials/source_en/use/converter_tool.md
+++ b/tutorials/lite/source_en/use/converter_tool.md
@@ -1,21 +1,21 @@
-# Convert to MindSpore Lite
+# Convert to MindSpore Lite Model
-- [Convert to MindSpore Lite](#convert-to-mindspore-lite)
+- [Convert to MindSpore Lite Model](#convert-to-mindspore-lite-model)
- [Overview](#overview)
- [Linux Environment Instructions](#linux-environment-instructions)
- [Environment Preparation](#environment-preparation)
- [Example](#example)
- [Parameter Description](#parameter-description)
- - [Windows Environment Instructions](#windows-environment-instructions)
+ - [Windows Environment Instructions](#windows-environment-instructions)
- [Environment Preparation](#environment-preparation-1)
- [Parameter Description](#parameter-description-1)
- - [Example](#example-1)
+ - [Example](#example-1)
-
+
## Overview
@@ -29,9 +29,9 @@ Currently, the following input formats are supported: MindSpore, TensorFlow Lite
To use the MindSpore Lite model conversion tool, you need to prepare the environment as follows:
-- Compilation: Install basic and additional build dependencies and perform build. The build version is x86_64. The code of the model conversion tool is stored in the `mindspore/lite/tools/converter` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/master/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/master/build.html#compilation-example) in the build document.
+- Compilation: Install basic and additional build dependencies and perform build. The build version is x86_64. The code of the model conversion tool is stored in the `mindspore/lite/tools/converter` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#compilation-example) in the build document.
-- Run: Obtain the `converter` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/tutorial/en/master/build.html#output-description) in the build document.
+- Run: Obtain the `converter` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#output-description) in the build document.
### Example
@@ -53,7 +53,7 @@ The following describes how to use the conversion command by using several commo
The output is as follows:
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
This indicates that the Caffe model is successfully converted into the MindSpore Lite model and the new file `lenet.ms` is generated.
@@ -61,7 +61,7 @@ The following describes how to use the conversion command by using several commo
- MindSpore model `model.mindir`
```bash
- ./converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
+ ./converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite model `model.tflite`
@@ -79,15 +79,16 @@ The following describes how to use the conversion command by using several commo
./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model --quantType=AwareTraining
```
- - TensorFlow Lite aware quantization model `model_quant.tflite` set the input and output data type to be int8
+ - TensorFlow Lite aware quantization model `model_quant.tflite`, setting the input and output data type to float
```bash
- ./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model --quantType=AwareTraining --inputInferenceType=INT8 --inferenceType=INT8
+ ./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model --quantType=AwareTraining --inferenceType=FLOAT
```
In the preceding scenarios, the following information is displayed, indicating that the conversion is successful. In addition, the target file `model.ms` is obtained.
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
+- If the conversion command fails to run, an [error code](https://www.mindspore.cn/doc/api_cpp/en/r1.0/errorcode_and_metatype.html) will be output.
### Parameter Description
@@ -100,13 +101,12 @@ The following describes the parameters in detail.
| Parameter | Mandatory or Not | Parameter Description | Value Range | Default Value |
| -------- | ------- | ----- | --- | ---- |
| `--help` | No | Prints all help information. | - | - |
-| `--fmk=` | Yes | Original format of the input model. | MS, CAFFE, TFLITE, or ONNX | - |
+| `--fmk=` | Yes | Original format of the input model. | MINDIR, CAFFE, TFLITE, or ONNX | - |
| `--modelFile=` | Yes | Path of the input model. | - | - |
| `--outputFile=` | Yes | Path of the output model. (If the path does not exist, a directory will be automatically created.) The suffix `.ms` can be automatically generated. | - | - |
| `--weightFile=` | Yes (for Caffe models only) | Path of the weight file of the input model. | - | - |
| `--quantType=` | No | Sets the quant type of the model. | PostTraining: quantization after training<br>AwareTraining: perceptual quantization | - |
-|`--inputInferenceType=` | No(supported by aware quant models only) | Sets the input data type of the converted model. If the type is different from the origin model, the convert tool will insert data type convert op before the model to make sure the input data type is same as the input of origin model. | FLOAT or INT8 | FLOAT |
-|`--inferenceType= `| No(supported by aware quant models only) | Sets the output data type of the converted model. If the type is different from the origin model, the convert tool will insert data type convert op before the model to make sure the output data type is same as the input of origin model. | FLOAT or INT8 | FLOAT |
+|`--inferenceType=`| No (supported by aware quant models only) | Sets the input and output data type of the converted model. If the types differ from those of the original model, the conversion tool inserts data type conversion operators at the model inputs and outputs so that the data types match the original model. | UINT8, FLOAT or INT8 | FLOAT |
|`--stdDev=`| No(supported by aware quant models only) | Sets the standard deviation of the input data. | (0,+∞) | 128 |
|`--mean=`| No(supported by aware quant models only) | Sets the mean value of the input data. | [-128, 127] | -0.5 |
@@ -119,22 +119,15 @@ The following describes the parameters in detail.
To use the MindSpore Lite model conversion tool, the following environment preparations are required.
-- Compile: The model conversion tool code is in the `mindspore/lite/tools/converter` directory of the MindSpore source code, refer to the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/master/build.html#environment-requirements-1) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/master/build.html#compilation-example-1) in the build document.
-
-- Run: Refer to [Output Description](https://www.mindspore.cn/lite/tutorial/en/master/build.html#output-description-1) in the deployment document to obtain the `converter` tool, and set the environment variable of MinGW(Add the bin directory of MinGW in the system variable Path).
+- Obtain the toolkit: To get the `converter` tool, download the zip package of the Windows conversion tool and unzip it to a local directory.
### Parameter Description
-Reference description Linux environment model conversion tool [parameter description](https://www.mindspore.cn/lite/tutorial/en/master/use/converter_tool.html#parameter-description).
+Refer to the Linux environment model conversion tool's [parameter description](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/converter_tool.html#parameter-description).
### Example
-First, use the cmd tool to enter the command to compile in the root directory of the source code, refer to `build.md`.
-```bash
-call build.bat lite
-```
-
-Then, set the log printing level to INFO.
+Set the log printing level to INFO.
```bash
set MSLOG=INFO
```
@@ -151,7 +144,7 @@ Several common examples are selected below to illustrate the use of conversion c
The result is shown as:
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
This means that the Caffe model has been successfully converted to the MindSpore Lite model and the new file `lenet.ms` has been obtained.
@@ -159,7 +152,7 @@ Several common examples are selected below to illustrate the use of conversion c
- MindSpore model `model.mindir`
```bash
- call converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
+ call converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite model`model.tflite`
@@ -179,5 +172,6 @@ Several common examples are selected below to illustrate the use of conversion c
In the above cases, the following conversion success prompt is displayed, and the `model.ms` target file is obtained at the same time.
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
+- If the conversion command fails to run, an [error code](https://www.mindspore.cn/doc/api_cpp/en/r1.0/errorcode_and_metatype.html) will be output.
diff --git a/lite/tutorials/source_en/use/evaluating_the_model.rst b/tutorials/lite/source_en/use/evaluating_the_model.rst
similarity index 100%
rename from lite/tutorials/source_en/use/evaluating_the_model.rst
rename to tutorials/lite/source_en/use/evaluating_the_model.rst
diff --git a/tutorials/lite/source_en/use/image_processing.md b/tutorials/lite/source_en/use/image_processing.md
new file mode 100644
index 0000000000000000000000000000000000000000..9d4048a73c8033ce09be19cb2e531cc3d0881bff
--- /dev/null
+++ b/tutorials/lite/source_en/use/image_processing.md
@@ -0,0 +1,151 @@
+# Preprocess image data
+
+
+
+- [Preprocess image data](#preprocess-image-data)
+ - [Overview](#overview)
+ - [Import image preprocessing function library](#import-image-preprocessing-function-library)
+ - [Initialize the image](#initialize-the-image)
+ - [Usage example](#usage-example)
+ - [Optional image preprocessing operator](#optional-image-preprocessing-operator)
+ - [Resize image](#resize-image)
+ - [Usage example](#usage-example-1)
+ - [Convert the image data type](#convert-the-image-data-type)
+ - [Usage example](#usage-example-2)
+ - [Crop image data](#crop-image-data)
+ - [Usage example](#usage-example-3)
+ - [Normalize image data](#normalize-image-data)
+ - [Usage example](#usage-example-4)
+
+
+
+
+
+## Overview
+
+The main purpose of image preprocessing is to eliminate irrelevant information from the image, recover the useful information, enhance the detectability of the relevant information, and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching and recognition. Here, a LiteMat object is created and the image data is processed before inference so that it meets the data format requirements of model inference.
+
+The process is as follows:
+
+## Import image preprocessing function library
+
+```cpp
+#include "lite_cv/lite_mat.h"
+#include "lite_cv/image_process.h"
+```
+
+## Initialize the image
+
+Here, the [InitFromPixel](https://www.mindspore.cn/doc/api_cpp/en/r1.0/dataset.html#initfrompixel) function in the `image_process.h` file is used to initialize the image.
+
+```cpp
+bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m);
+```
+
+### Usage example
+
+```cpp
+// Create the data object of the LiteMat object.
+LiteMat lite_mat_bgr;
+
+// Initialize the lite_mat_bgr object.
+// pixel_ptr is the image data pointer supplied by the caller (on Android, the data in the corresponding Bitmap).
+InitFromPixel(pixel_ptr, LPixelType::RGBA2GRAY, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+```
+
+## Optional image preprocessing operator
+
+The image processing operators here can be used in any combination according to the actual situation.
+
+### Resize image
+
+Here we use the [ResizeBilinear](https://www.mindspore.cn/doc/api_cpp/en/r1.0/dataset.html#resizebilinear) function in `image_process.h` to resize the image through a bilinear algorithm. Currently, the supported data type is uint8, and the supported channel counts are 3 and 1.
+
+```cpp
+bool ResizeBilinear(const LiteMat &src, LiteMat &dst, int dst_w, int dst_h);
+```
+
+#### Usage example
+
+```cpp
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create a resize image data object.
+LiteMat lite_mat_resize;
+
+// Resize the image.
+ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+```
+
+### Convert the image data type
+
+Here we use the [ConvertTo](https://www.mindspore.cn/doc/api_cpp/en/r1.0/dataset.html#convertto) function in `image_process.h` to convert the image data type. Currently, the supported conversion is to convert uint8 to float.
+
+```cpp
+bool ConvertTo(const LiteMat &src, LiteMat &dst, double scale = 1.0);
+```
+
+#### Usage example
+
+```cpp
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create the converted data type object.
+LiteMat lite_mat_convert_float;
+
+// Perform conversion type operations on the object. Currently, the supported conversion is to convert uint8 to float.
+ConvertTo(lite_mat_bgr, lite_mat_convert_float);
+```
+
+### Crop image data
+
+Here we use the [Crop](https://www.mindspore.cn/doc/api_cpp/en/r1.0/dataset.html#crop) function in `image_process.h` to crop the image. Currently, channels 3 and 1 are supported.
+
+```cpp
+bool Crop(const LiteMat &src, LiteMat &dst, int x, int y, int w, int h);
+```
+
+#### Usage example
+
+```cpp
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create the cropped object.
+LiteMat lite_mat_cut;
+
+// The image is cropped by the values of x, y, w, h.
+Crop(lite_mat_bgr, lite_mat_cut, 16, 16, 224, 224);
+```
+
+### Normalize image data
+
+To eliminate dimensional influence among the data indicators and make them comparable through standardization, the [SubStractMeanNormalize](https://www.mindspore.cn/doc/api_cpp/en/r1.0/dataset.html#substractmeannormalize) function in `image_process.h` is used to normalize the image data.
+
+```cpp
+bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, float *mean, float *norm);
+```
+
+#### Usage example
+
+```cpp
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// The mean value of the image data.
+// The variance of the image data.
+float means[1] = {0.485};
+float norm[1] = {1.0 / 0.229};
+
+// Create a normalized image object.
+LiteMat lite_mat_bgr_norm;
+
+// The image data is normalized by the mean value and variance of the image data.
+SubStractMeanNormalize(lite_mat_bgr, lite_mat_bgr_norm, means, norm);
+```
\ No newline at end of file
diff --git a/tutorials/lite/source_en/use/post_training_quantization.md b/tutorials/lite/source_en/use/post_training_quantization.md
new file mode 100644
index 0000000000000000000000000000000000000000..128543f399780dca0edd373c7e4d17f28690ac43
--- /dev/null
+++ b/tutorials/lite/source_en/use/post_training_quantization.md
@@ -0,0 +1,3 @@
+# Note
+
+Post-training quantization documentation is being translated and will be released soon.
\ No newline at end of file
diff --git a/lite/tutorials/source_en/use/runtime.md b/tutorials/lite/source_en/use/runtime.md
similarity index 83%
rename from lite/tutorials/source_en/use/runtime.md
rename to tutorials/lite/source_en/use/runtime.md
index 748ef39812baddc070e870445719177ee72218b9..b3571dabc246c1767e497f05ac7653cd3e9327f2 100644
--- a/lite/tutorials/source_en/use/runtime.md
+++ b/tutorials/lite/source_en/use/runtime.md
@@ -28,11 +28,15 @@
- [Example](#example-5)
- [Obtaining Version String](#obtaining-version-string)
- [Example](#example-6)
+ - [Session parallel launch](#session-parallel-launch)
+ - [Single Session parallel launch](#single-session-parallel-launch)
+ - [Multiple Session parallel launch](#multiple-session-parallel-launch)
+ - [Example](#example-7)
-
+
## Overview
@@ -77,66 +81,16 @@ Contexts save some basic configuration parameters required by sessions to guide
MindSpore Lite supports heterogeneous inference. The preferred backend for inference is specified by `device_ctx_` in `Context` and is CPU by default. During graph compilation, operator selection and scheduling are performed based on the preferred backend.
-```cpp
-/// \brief DeviceType defined for holding user's preferred backend.
-typedef enum {
- DT_CPU, /**< CPU device type */
- DT_GPU, /**< GPU device type */
- DT_NPU /**< NPU device type, not supported yet */
-} DeviceType;
-
-/// \brief DeviceContext defined for holding DeviceType.
-typedef struct {
- DeviceType type; /**< device type */
-} DeviceContext;
-
-DeviceContext device_ctx_{DT_CPU};
-```
-
MindSpore Lite has a built-in thread pool shared by processes. During inference, `thread_num_` is used to specify the maximum number of threads in the thread pool. The default maximum number is 2. It is recommended that the maximum number be no more than 4. Otherwise, the performance may be affected.
-```c++
-int thread_num_ = 2; /**< thread number config for thread pool */
-```
-
MindSpore Lite supports dynamic memory allocation and release. If `allocator` is not specified, a default `allocator` is generated during inference. You can also use the `Context` method to allow multiple `Context` to share the memory allocator.
If you create the `Context` by using `new`, release it by using `delete` once it is no longer required. Usually, the `Context` is released after the session is created.
-```cpp
-/// \brief Allocator defined a memory pool for malloc memory and free memory dynamically.
-///
-/// \note List public class and interface for reference.
-class Allocator;
-
-/// \brief Context defined for holding environment variables during runtime.
-class MS_API Context {
- public:
- /// \brief Constructor of MindSpore Lite Context using input value for parameters.
- ///
- /// \param[in] thread_num Define the work thread number during the runtime.
- /// \param[in] allocator Define the allocator for malloc.
- /// \param[in] device_ctx Define device information during the runtime.
- Context(int thread_num, std::shared_ptr allocator, DeviceContext device_ctx);
-
- public:
- std::shared_ptr allocator = nullptr;
-}
-```
-
### Creating Sessions
Use the `Context` created in the previous step to call the static `CreateSession` method of LiteSession to create `LiteSession`. The `LiteSession` instance returned by the function is a pointer, which is created by using `new`. If the pointer is not required, you need to release it by using `delete`.
-```cpp
-/// \brief Static method to create a LiteSession pointer.
-///
-/// \param[in] context Define the context of session to be created.
-///
-/// \return Pointer of MindSpore Lite LiteSession.
-static LiteSession *CreateSession(lite::Context *context);
-```
-
### Example
The following sample code demonstrates how to create a `Context` and how to allow two `LiteSession` instances to share a memory pool.
@@ -148,13 +102,16 @@ if (context == nullptr) {
return RET_ERROR;
}
// The preferred backend is GPU, which means, if there is a GPU operator, it will run on the GPU first, otherwise it will run on the CPU.
-context->device_ctx_.type = lite::DT_GPU;
+context->device_type_ = lite::DT_GPU;
// The medium core takes priority in thread and core binding methods. This parameter will work in the BindThread interface. For specific binding effect, see the "Run Graph" section.
context->cpu_bind_mode_ = MID_CPU;
// Configure the number of worker threads in the thread pool to 2, including the main thread.
context->thread_num_ = 2;
// Allocators can be shared across multiple Contexts.
-auto *context2 = new Context(context->thread_num_, context->allocator, context->device_ctx_);
+auto *context2 = new Context();
+context2->thread_num_ = context->thread_num_;
+context2->allocator = context->allocator;
+context2->device_type_ = context->device_type_;
context2->cpu_bind_mode_ = context->cpu_bind_mode_;
// Use Context to create Session.
auto session1 = session::LiteSession::CreateSession(context);
@@ -167,7 +124,7 @@ if (session1 == nullptr) {
// session1 and session2 can share one memory pool.
auto session2 = session::LiteSession::CreateSession(context2);
delete (context2);
-if (session == nullptr) {
+if (session2 == nullptr) {
MS_LOG(ERROR) << "CreateSession failed while running " << modelName;
return RET_ERROR;
}
@@ -179,19 +136,7 @@ if (session == nullptr) {
When using MindSpore Lite for inference, after the session is created and the graph is compiled, if you need to change the input shape, you can reset the shape of the input tensor and then call the session's `Resize()` interface.
-```cpp
-/// \brief Get input MindSpore Lite MSTensors of model.
-///
-/// \return The vector of MindSpore Lite MSTensor.
-virtual std::vector GetInputs() const = 0;
-
-/// \brief Resize inputs shape.
-///
-/// \param[in] inputs Define the new inputs shape.
-///
-/// \return STATUS as an error code of resize inputs, STATUS is defined in errorcode.h.
-virtual int Resize(const std::vector &inputs) = 0;
-```
+> Not all models support variable dimensions. For example, if a model contains a MatMul operator whose two inputs are a weight tensor and a data tensor, calling the resize interface may cause the shape of the data tensor to no longer match the shape of the weight tensor.
### Example
@@ -202,8 +147,9 @@ The following code demonstrates how to resize the input of MindSpore Lite:
auto inputs = session->GetInputs();
std::vector<int> resize_shape = {1, 128, 128, 3};
// Assume the model has only one input, resize input shape to [1, 128, 128, 3]
-inputs[0]->set_shape(resize_shape);
-session->Resize(inputs);
+std::vector<std::vector<int>> new_shapes;
+new_shapes.push_back(resize_shape);
+session->Resize(inputs, new_shapes);
```
### Compiling Graphs
@@ -506,16 +452,16 @@ virtual void *MutableData() const = 0;
### Example
-The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputMapByNode` method and print the first ten data or all data records of each output `MSTensor`.
+The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputs` method and print the first ten data or all data records of each output `MSTensor`.
```cpp
// Assume we have created a LiteSession instance named session before.
-auto output_map = session->GetOutputMapByNode();
+auto output_map = session->GetOutputs();
// Assume that the model has only one output node.
auto out_node_iter = output_map.begin();
std::string name = out_node_iter->first;
// Assume that the unique output node has only one output tensor.
-auto out_tensor = out_node_iter->second.front();
+auto out_tensor = out_node_iter->second;
if (out_tensor == nullptr) {
std::cerr << "Output tensor is nullptr" << std::endl;
return -1;
@@ -539,7 +485,7 @@ std::cout << std::endl;
// The elements in outputs do not need to be freed by users, because outputs are managed by MindSpore Lite.
```
-Note that the vectors or map returned by the `GetOutputsByNodeName`, `GetOutputMapByNode`, `GetOutputByTensorName` and `GetOutputMapByTensor` methods do not need to be released by users.
+Note that the vectors or map returned by the `GetOutputsByNodeName`, `GetOutputByTensorName` and `GetOutputs` methods do not need to be released by users.
The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputsByNodeName` method.
@@ -555,19 +501,6 @@ if (out_tensor == nullptr) {
}
```
-The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputMapByTensor` method.
-
-```cpp
-// Assume we have created a LiteSession instance named session before.
-auto output_map = session->GetOutputMapByTensor();
-// Assume that output node named output_node_name_0 has only one output tensor.
-auto out_tensor = output_vec.front();
-if (out_tensor == nullptr) {
- std::cerr << "Output tensor is nullptr" << std::endl;
- return -1;
-}
-```
-
The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputByTensorName` method.
```cpp
@@ -594,3 +527,112 @@ The following sample code shows how to obtain version string using `Version` met
#include "include/version.h"
std::string version = mindspore::lite::Version();
```
+
+## Session parallel launch
+MindSpore Lite supports running inference with multiple `LiteSession` instances in parallel, but does not support multiple threads calling the `RunGraph` interface of a single `LiteSession` at the same time.
+
+### Single Session parallel launch
+
+MindSpore Lite does not support multiple threads calling the inference interface of a single `LiteSession` in parallel; otherwise, the following error message is reported:
+```cpp
+ERROR [mindspore/lite/src/lite_session.cc:297] RunGraph] 10 Not support multi-threading
+```
+
+### Multiple Session parallel launch
+
+MindSpore Lite supports multiple `LiteSession` instances running inference in parallel; the thread pool and memory pool of each `LiteSession` are independent.
+
+### Example
+
+The following code shows how to create multiple `LiteSession` instances and run inference in parallel:
+```cpp
+#include <iostream>
+#include <thread>
+#include "src/common/file_utils.h"
+#include "include/model.h"
+#include "include/version.h"
+#include "include/context.h"
+#include "include/lite_session.h"
+
+mindspore::session::LiteSession *GenerateSession(mindspore::lite::Model *model) {
+ if (model == nullptr) {
+ std::cerr << "Read model file failed while running" << std::endl;
+ return nullptr;
+ }
+ auto context = new (std::nothrow) mindspore::lite::Context;
+ if (context == nullptr) {
+ std::cerr << "New context failed while running" << std::endl;
+ return nullptr;
+ }
+
+ auto session = mindspore::session::LiteSession::CreateSession(context);
+ delete (context);
+ if (session == nullptr) {
+ std::cerr << "CreateSession failed while running" << std::endl;
+ return nullptr;
+ }
+ auto ret = session->CompileGraph(model);
+ if (ret != mindspore::lite::RET_OK) {
+ std::cout << "CompileGraph failed while running" << std::endl;
+ delete (session);
+ return nullptr;
+ }
+ auto msInputs = session->GetInputs();
+ for (auto msInput : msInputs) {
+ (void)msInput->MutableData();
+ }
+ return session;
+}
+
+int main(int argc, const char **argv) {
+ size_t size = 0;
+ char *graphBuf = mindspore::lite::ReadFile("test.ms", &size);
+ if (graphBuf == nullptr) {
+ std::cerr << "Read model file failed while running" << std::endl;
+ return -1;
+ }
+ auto model = mindspore::lite::Model::Import(graphBuf, size);
+ if (model == nullptr) {
+ std::cerr << "Import model file failed while running" << std::endl;
+ delete[](graphBuf);
+ return -1;
+ }
+ delete[](graphBuf);
+ auto session1 = GenerateSession(model);
+ if (session1 == nullptr) {
+ std::cerr << "GenerateSession failed" << std::endl;
+ delete(model);
+ return -1;
+ }
+ auto session2 = GenerateSession(model);
+ if (session2 == nullptr) {
+ std::cerr << "GenerateSession failed" << std::endl;
+ delete(model);
+ return -1;
+ }
+
+ std::thread thread1([&](){
+ auto status = session1->RunGraph();
+ if (status != 0) {
+ std::cerr << "Inference error " << status << std::endl;
+ return;
+ }
+ std::cout << "Session1 inference success" << std::endl;
+ });
+
+ std::thread thread2([&](){
+ auto status = session2->RunGraph();
+ if (status != 0) {
+ std::cerr << "Inference error " << status << std::endl;
+ return;
+ }
+ std::cout << "Session2 inference success" << std::endl;
+ });
+
+ thread1.join();
+ thread2.join();
+ delete (session1);
+ delete (session2);
+ delete (model);
+ return 0;
+}
+```
diff --git a/lite/tutorials/source_en/use/timeprofiler_tool.md b/tutorials/lite/source_en/use/timeprofiler_tool.md
similarity index 87%
rename from lite/tutorials/source_en/use/timeprofiler_tool.md
rename to tutorials/lite/source_en/use/timeprofiler_tool.md
index b0e3d35860448974da085d8230d58654bf46868e..f2895eed55fe5f960cd2a5bd6ac7b5daef378851 100644
--- a/lite/tutorials/source_en/use/timeprofiler_tool.md
+++ b/tutorials/lite/source_en/use/timeprofiler_tool.md
@@ -10,7 +10,7 @@
-
+
## Overview
@@ -20,16 +20,16 @@ After model conversion and before inference, you can use the TimeProfiler tool t
To use the TimeProfiler tool, you need to prepare the environment as follows:
-- Compilation: Install build dependencies and perform build. The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profile` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/master/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/master/build.html#compilation-example) in the build document.
+- Compilation: Install build dependencies and perform build. The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profiler` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#compilation-example) in the build document.
-- Run: Obtain the `timeprofile` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/tutorial/en/master/build.html#output-description) in the build document.
+- Run: Obtain the `timeprofiler` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/tutorial/lite/en/r1.0/use/build.html#output-description) in the build document.
## Parameter Description
The command used for analyzing the time consumption of forward inference at the network layer based on the compiled TimeProfiler tool is as follows:
```bash
-./timeprofile --modelPath= [--help] [--loopCount=] [--numThreads=] [--cpuBindMode=] [--inDataPath=] [--fp16Priority=]
+./timeprofiler --modelPath=<modelPath> [--help] [--loopCount=<loopCount>] [--numThreads=<numThreads>] [--cpuBindMode=<cpuBindMode>] [--inDataPath=<inDataPath>] [--fp16Priority=<fp16Priority>]
```
The following describes the parameters in detail.
@@ -49,7 +49,7 @@ The following describes the parameters in detail.
Take the `test_timeprofiler.ms` model as an example and set the number of model inference cycles to 10. The command for using TimeProfiler to analyze the time consumption at the network layer is as follows:
```bash
-./timeprofile --modelPath=./models/test_timeprofiler.ms --loopCount=10
+./timeprofiler --modelPath=./models/test_timeprofiler.ms --loopCount=10
```
After this command is executed, the TimeProfiler tool outputs statistics on the running time of the model at the network layer. In this example, the statistics are displayed by `opName` and `optype`, where `opName` indicates the operator name and `optype` the operator type. `avg` indicates the average running time of the operator per single run, `percent` the ratio of the operator running time to the total operator running time, `calledTimes` the number of times the operator is run, and `opTotalTime` the total time the operator is run for the specified number of times. Finally, `total time` and `kernel cost` show the average time consumed by a single inference operation of the model and the sum of the average time consumed by all operators in the model inference, respectively.
diff --git a/tutorials/lite/source_zh_cn/_static/logo_notebook.png b/tutorials/lite/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/tutorials/lite/source_zh_cn/_static/logo_notebook.png differ
diff --git a/tutorials/lite/source_zh_cn/_static/logo_source.png b/tutorials/lite/source_zh_cn/_static/logo_source.png
new file mode 100644
index 0000000000000000000000000000000000000000..fc347d271abe082ae8d16242328551648766b6fb
Binary files /dev/null and b/tutorials/lite/source_zh_cn/_static/logo_source.png differ
diff --git a/lite/tutorials/source_zh_cn/conf.py b/tutorials/lite/source_zh_cn/conf.py
similarity index 96%
rename from lite/tutorials/source_zh_cn/conf.py
rename to tutorials/lite/source_zh_cn/conf.py
index c2ba20320a75ee8ae812f03dbc21cb39f30827a8..b0893bf9b01ea64f10ee612112e4625cee2759b6 100644
--- a/lite/tutorials/source_zh_cn/conf.py
+++ b/tutorials/lite/source_zh_cn/conf.py
@@ -58,4 +58,6 @@ html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
+html_search_options = {'dict': '../../resource/jieba.txt'}
+
html_static_path = ['_static']
\ No newline at end of file
diff --git a/tutorials/lite/source_zh_cn/images/lite_quick_start_app_result.png b/tutorials/lite/source_zh_cn/images/lite_quick_start_app_result.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7cc49f582440e31b6b5b14dbba5131bfed2a4b4
Binary files /dev/null and b/tutorials/lite/source_zh_cn/images/lite_quick_start_app_result.png differ
diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_home.png b/tutorials/lite/source_zh_cn/images/lite_quick_start_home.png
similarity index 100%
rename from lite/tutorials/source_zh_cn/images/lite_quick_start_home.png
rename to tutorials/lite/source_zh_cn/images/lite_quick_start_home.png
diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_install.png b/tutorials/lite/source_zh_cn/images/lite_quick_start_install.png
similarity index 100%
rename from lite/tutorials/source_zh_cn/images/lite_quick_start_install.png
rename to tutorials/lite/source_zh_cn/images/lite_quick_start_install.png
diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_project_structure.png b/tutorials/lite/source_zh_cn/images/lite_quick_start_project_structure.png
similarity index 100%
rename from lite/tutorials/source_zh_cn/images/lite_quick_start_project_structure.png
rename to tutorials/lite/source_zh_cn/images/lite_quick_start_project_structure.png
diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_run_app.PNG b/tutorials/lite/source_zh_cn/images/lite_quick_start_run_app.PNG
similarity index 100%
rename from lite/tutorials/source_zh_cn/images/lite_quick_start_run_app.PNG
rename to tutorials/lite/source_zh_cn/images/lite_quick_start_run_app.PNG
diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_sdk.png b/tutorials/lite/source_zh_cn/images/lite_quick_start_sdk.png
similarity index 100%
rename from lite/tutorials/source_zh_cn/images/lite_quick_start_sdk.png
rename to tutorials/lite/source_zh_cn/images/lite_quick_start_sdk.png
diff --git a/lite/tutorials/source_zh_cn/images/side_infer_process.png b/tutorials/lite/source_zh_cn/images/side_infer_process.png
similarity index 100%
rename from lite/tutorials/source_zh_cn/images/side_infer_process.png
rename to tutorials/lite/source_zh_cn/images/side_infer_process.png
diff --git a/lite/tutorials/source_zh_cn/index.rst b/tutorials/lite/source_zh_cn/index.rst
similarity index 66%
rename from lite/tutorials/source_zh_cn/index.rst
rename to tutorials/lite/source_zh_cn/index.rst
index 3bfde552d2bec6205ba366d6d30c200bce0904d7..3daac309648b5fd742185b3582ec04086dd6172e 100644
--- a/lite/tutorials/source_zh_cn/index.rst
+++ b/tutorials/lite/source_zh_cn/index.rst
@@ -3,8 +3,8 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-MindSpore端侧教程
-==================
+在手机或IoT设备上使用MindSpore
+=================================
.. toctree::
:glob:
@@ -16,9 +16,12 @@ MindSpore端侧教程
.. toctree::
:glob:
:maxdepth: 1
- :caption: 使用指南
+ :caption: 基础使用
- build
- use/converter_tool
+ use/build
+ use/convert_model
use/evaluating_the_model
+ use/image_processing
use/runtime
+ use/benchmark_tool
+ use/timeprofiler_tool
\ No newline at end of file
diff --git a/lite/tutorials/source_zh_cn/quick_start/quick_start.md b/tutorials/lite/source_zh_cn/quick_start/quick_start.md
similarity index 62%
rename from lite/tutorials/source_zh_cn/quick_start/quick_start.md
rename to tutorials/lite/source_zh_cn/quick_start/quick_start.md
index ef76d900d3bbb15f9e2680656e356f7e9bf71b2a..d9726b00dca4eedc8c8d9e208e4872a0d84b9c9e 100644
--- a/lite/tutorials/source_zh_cn/quick_start/quick_start.md
+++ b/tutorials/lite/source_zh_cn/quick_start/quick_start.md
@@ -17,7 +17,7 @@
-
+
## 概述
@@ -28,7 +28,7 @@
2. 将模型转换成MindSpore Lite模型格式。
3. 在端侧使用MindSpore Lite推理模型。详细说明如何在端侧利用MindSpore Lite C++ API(Android JNI)和MindSpore Lite图像分类模型完成端侧推理,实现对设备摄像头捕获的内容进行分类,并在APP图像预览界面中,显示出最可能的分类结果。
-> 你可以在这里找到[Android图像分类模型](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite)和[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification)。
+> 你可以在这里找到[Android图像分类模型](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite)和[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo/official/lite/image_classification)。
## 选择模型
@@ -38,11 +38,11 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
## 转换模型
-如果预置模型已经满足你要求,请跳过本章节。 如果你需要对MindSpore提供的模型进行重训,重训完成后,需要将模型导出为[.mindir格式](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#mindir)。然后使用MindSpore Lite[模型转换工具](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html)将.mindir模型转换成.ms格式。
+如果预置模型已经满足你要求,请跳过本章节。 如果你需要对MindSpore提供的模型进行重训,重训完成后,需要将模型导出为[.mindir格式](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/save_and_load_model.html#mindir)。然后使用MindSpore Lite[模型转换工具](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/converter_tool.html)将.mindir模型转换成.ms格式。
以mobilenetv2模型为例,如下脚本将其转换为MindSpore Lite模型用于端侧推理。
```bash
-./converter_lite --fmk=MS --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
+./converter_lite --fmk=MINDIR --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
```
## 部署应用
@@ -53,9 +53,9 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
- Android Studio >= 3.2 (推荐4.0以上版本)
- NDK 21.3
-- CMake 3.10.2
+- [CMake](https://cmake.org/download) 3.10.2
- Android SDK >= 26
-- OpenCV >= 4.0.0 (本示例代码已包含)
+- [JDK]( https://www.oracle.com/downloads/otn-pub/java/JDK/) >= 1.8
### 构建与运行
@@ -67,7 +67,7 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds

- (可选)若安装时出现NDK版本问题,可手动下载相应的[NDK版本](https://developer.android.com/ndk/downloads?hl=zh-cn)(本示例代码使用的NDK版本为21.3),并在`Project Structure`的`Android NDK location`设置中指定SDK的位置。
+ (可选)若安装时出现NDK版本问题,可手动下载相应的[NDK版本](https://developer.android.com/ndk/downloads?hl=zh-cn)(本示例代码使用的NDK版本为21.3),并在`Project Structure`的`Android NDK location`设置中指定NDK的位置。

@@ -79,10 +79,14 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
Android Studio连接设备调试操作,可参考。
+ 手机需开启“USB调试模式”,Android Studio才能识别到手机。 华为手机一般在`设置->系统和更新->开发人员选项->USB调试`中打开“USB调试模式”。
+
3. 在Android设备上,点击“继续安装”,安装完即可查看到设备摄像头捕获的内容和推理结果。

+
+
识别结果如下图所示。

@@ -90,7 +94,7 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
## 示例程序详细说明
-本端侧图像分类Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能;JNI层在[Runtime](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html)中完成模型推理的过程。
+本端侧图像分类Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能;JNI层在[Runtime](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/runtime.html)中完成模型推理的过程。
> 此处详细说明示例程序的JNI层实现,JAVA层运用Android Camera 2 API实现开启设备摄像头以及图像帧处理等功能,需读者具备一定的Android开发基础知识。
@@ -98,29 +102,22 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
```
app
-|
-├── libs # 存放MindSpore Lite依赖的库文件
-│ └── arm64-v8a
-│ ├── libopencv_java4.so
-│ └── libmindspore-lite.so
-│
-├── opencv # opencv 相关依赖文件
-│ └── ...
-|
├── src/main
│ ├── assets # 资源文件
-| | └── model.ms # 存放模型文件
+| | └── mobilenetv2.ms # 存放模型文件
│ |
│ ├── cpp # 模型加载和预测主要逻辑封装类
| | ├── ..
+| | ├── mindspore_lite_x.x.x-minddata-arm64-cpu # MindSpore Lite版本
| | ├── MindSporeNetnative.cpp # MindSpore调用相关的JNI方法
│ | └── MindSporeNetnative.h # 头文件
+| | └── MsNetWork.cpp # MindSpore接口封装
│ |
│ ├── java # java层应用代码
-│ │ └── com.huawei.himindsporedemo
+│ │ └── com.mindspore.himindsporedemo
│ │ ├── gallery.classify # 图像处理及MindSpore JNI调用相关实现
│ │ │ └── ...
-│ │ └── obejctdetect # 开启摄像头及绘制相关实现
+│ │ └── widget # 开启摄像头及绘制相关实现
│ │ └── ...
│ │
│ ├── res # 存放Android相关的资源文件
@@ -129,26 +126,19 @@ app
├── CMakeList.txt # cmake编译入口文件
│
├── build.gradle # 其他Android配置文件
+├── download.gradle # 工程依赖文件下载
└── ...
```
### 配置MindSpore Lite依赖项
-Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html)生成`libmindspore-lite.so`库文件。
+Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html)生成`libmindspore-lite.so`库文件。
-本示例中,bulid过程由download.gradle文件配置自动下载`libmindspore-lite.so`以及OpenCV的`libopencv_java4.so`库文件,并放置在`app/libs/arm64-v8a`目录下。
+本示例中,build过程由download.gradle文件自动从华为服务器下载MindSpore Lite版本文件,并放置在`app/src/main/cpp/mindspore_lite_x.x.x-minddata-arm64-cpu`目录下。
注: 若自动下载失败,请手动下载相关库文件并将其放在对应位置:
-libmindspore-lite.so [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
-
-libmindspore-lite include文件 [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/include.zip)
-
-libopencv_java4.so [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so)
-
-libopencv include文件 [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/include.zip)
-
-
+MindSpore Lite版本 [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%201.0/mindspore-lite-1.0.0-minddata-arm64-cpu.tar.gz)
```
android{
@@ -169,23 +159,29 @@ android{
在`app/CMakeLists.txt`文件中建立`.so`库文件链接,如下所示。
```
-# Set MindSpore Lite Dependencies.
-include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/include/MindSpore)
+# ============== Set MindSpore Dependencies. =============
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/third_party/flatbuffers/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/ir/dtype)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/schema)
+
add_library(mindspore-lite SHARED IMPORTED )
-set_target_properties(mindspore-lite PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libmindspore-lite.so")
+add_library(minddata-lite SHARED IMPORTED )
-# Set OpenCV Dependecies.
-include_directories(${CMAKE_SOURCE_DIR}/opencv/sdk/native/jni/include)
-add_library(lib-opencv SHARED IMPORTED )
-set_target_properties(lib-opencv PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libopencv_java4.so")
+set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libmindspore-lite.so)
+set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
+# --------------- MindSpore Lite set End. --------------------
# Link target library.
target_link_libraries(
...
- mindspore-lite
- lib-opencv
+ # --- mindspore ---
+ minddata-lite
+ mindspore-lite
...
)
```
@@ -218,13 +214,12 @@ target_link_libraries(
*labelEnv = labelNet;
// Create context.
- lite::Context *context = new lite::Context;
- context->device_ctx_.type = lite::DT_CPU;
- context->thread_num_ = numThread; //Specify the number of threads to run inference
+ mindspore::lite::Context *context = new mindspore::lite::Context;
+ context->thread_num_ = num_thread;
// Create the mindspore session.
- labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
- delete(context);
+ labelNet->CreateSessionMS(modelBuffer, bufferLen, context);
+ delete (context);
```
@@ -245,7 +240,7 @@ target_link_libraries(
```cpp
// Convert the Bitmap image passed in from the JAVA layer to Mat for OpenCV processing
- BitmapToMat(env, srcBitmap, matImageSrc);
+ BitmapToMat(env, srcBitmap, matImageSrc);
// Processing such as zooming the picture size.
matImgPreprocessed = PreProcessImageData(matImageSrc);
@@ -270,7 +265,38 @@ target_link_libraries(
delete[] (dataHWC);
```
-3. 对输入Tensor按照模型进行推理,获取输出Tensor,并进行后处理。
+3. 对输入数据进行处理。
+
+ ```cpp
+ bool PreProcessImageData(const LiteMat &lite_mat_bgr, LiteMat *lite_norm_mat_ptr) {
+ bool ret = false;
+ LiteMat lite_mat_resize;
+ LiteMat &lite_norm_mat_cut = *lite_norm_mat_ptr;
+ ret = ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+ if (!ret) {
+ MS_PRINT("ResizeBilinear error");
+ return false;
+ }
+ LiteMat lite_mat_convert_float;
+ ret = ConvertTo(lite_mat_resize, lite_mat_convert_float, 1.0 / 255.0);
+ if (!ret) {
+ MS_PRINT("ConvertTo error");
+ return false;
+ }
+ LiteMat lite_mat_cut;
+ ret = Crop(lite_mat_convert_float, lite_mat_cut, 16, 16, 224, 224);
+ if (!ret) {
+ MS_PRINT("Crop error");
+ return false;
+ }
+ float means[3] = {0.485, 0.456, 0.406};
+ float vars[3] = {1.0 / 0.229, 1.0 / 0.224, 1.0 / 0.225};
+ SubStractMeanNormalize(lite_mat_cut, lite_norm_mat_cut, means, vars);
+ return true;
+ }
+ ```
+
+4. 对输入Tensor按照模型进行推理,获取输出Tensor,并进行后处理。
- 图执行,端侧推理。
@@ -281,7 +307,12 @@ target_link_libraries(
- 获取输出数据。
```cpp
- auto msOutputs = mSession->GetOutputs();
+ auto names = mSession->GetOutputTensorNames();
+ std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
+ for (const auto &name : names) {
+ auto temp_dat = mSession->GetOutputByTensorName(name);
+ msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *>{name, temp_dat});
+ }
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
@@ -290,39 +321,34 @@ target_link_libraries(
std::string ProcessRunnetResult(std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs, int runnetRet) {
- // Get model output results.
- std::unordered_map::iterator iter;
- iter = msOutputs.begin();
- auto brach1_string = iter->first;
- auto branch1_tensor = iter->second;
+ std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
+ iter = msOutputs.begin();
- int OUTPUTS_LEN = branch1_tensor->ElementsNum();
+ // The mobilenetv2.ms model outputs just one branch.
+ auto outputTensor = iter->second;
+ int tensorNum = outputTensor->ElementsNum();
- float *temp_scores = static_cast(branch1_tensor->MutableData());
- float scores[RET_CATEGORY_SUM];
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- scores[i] = temp_scores[i];
- }
+ // Get a pointer to the first score.
+ float *temp_scores = static_cast<float *>(outputTensor->MutableData());
- // Converted to text information that needs to be displayed in the APP.
- std::string retStr = "";
- if (runnetRet == 0) {
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- if (scores[i] > 0.3){
- retStr += g_labels_name_map[i];
- retStr += ":";
- std::string score_str = std::to_string(scores[i]);
- retStr += score_str;
- retStr += ";";
- }
- }
- else {
- MS_PRINT("MindSpore run net failed!");
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- retStr += " :0.0;";
- }
- }
+ float scores[RET_CATEGORY_SUM];
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ if (temp_scores[i] > 0.5) {
+ MS_PRINT("MindSpore scores[%d] : [%f]", i, temp_scores[i]);
+ }
+ scores[i] = temp_scores[i];
+ }
- return retStr;
+ // Score for each category.
+ // Converted to text information that needs to be displayed in the APP.
+ std::string categoryScore = "";
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ categoryScore += labels_name_map[i];
+ categoryScore += ":";
+ std::string score_str = std::to_string(scores[i]);
+ categoryScore += score_str;
+ categoryScore += ";";
+ }
+ return categoryScore;
}
```
diff --git a/lite/tutorials/source_zh_cn/use/benchmark_tool.md b/tutorials/lite/source_zh_cn/use/benchmark_tool.md
similarity index 82%
rename from lite/tutorials/source_zh_cn/use/benchmark_tool.md
rename to tutorials/lite/source_zh_cn/use/benchmark_tool.md
index 83c6aadc638de8c469b46875c7a1f863e148c539..2367a583da4c09ca793f9fa8fa2c17265bc0e02b 100644
--- a/lite/tutorials/source_zh_cn/use/benchmark_tool.md
+++ b/tutorials/lite/source_zh_cn/use/benchmark_tool.md
@@ -12,7 +12,7 @@
-
+
## 概述
@@ -22,9 +22,9 @@
使用Benchmark工具,需要进行如下环境准备工作。
-- 编译:Benchmark工具代码在MindSpore源码的`mindspore/lite/tools/benchmark`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id1)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id3)执行编译。
+- 编译:Benchmark工具代码在MindSpore源码的`mindspore/lite/tools/benchmark`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id1)和[编译示例](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id3)执行编译。
-- 运行:参考构建文档中的[编译输出](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id4),获得`benchmark`工具,并配置环境变量。
+- 运行:参考构建文档中的[编译输出](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id4),获得`benchmark`工具,并配置环境变量。
## 使用示例
@@ -63,12 +63,6 @@ Mean bias of all nodes: 0%
=======================================================
```
-原模型输入输出数据类型为uint8时,需要将其减去128再转换为int8类型后才能作为标杆数据验证精度,输出数据类型为int8时需要在参数中指定calibDataType为INT8。
-
-```bash
-./benchmark --modelPath=./models/test_benchmark_int8.ms --inDataPath=./input/test_benchmark_int8.bin --device=CPU --accuracyThreshold=3 --calibDataPath=./output/test_benchmark_int8.out --calibDataType=INT8
-```
-
## 参数说明
@@ -76,11 +70,10 @@ Mean bias of all nodes: 0%
```bash
./benchmark [--modelPath=] [--accuracyThreshold=]
- [--calibDataPath=] [--cpuBindMode=]
- [--device=] [--help] [--inDataPath=]
- [--inDataType=] [--loopCount=]
- [--numThreads=] [--omModelPath=]
- [--resizeDims=] [--warmUpLoopCount=]
+ [--calibDataPath=] [--calibDataType=]
+ [--cpuBindMode=] [--device=] [--help]
+ [--inDataPath=] [--loopCount=]
+ [--numThreads=] [--warmUpLoopCount=]
[--fp16Priority=]
```
@@ -91,7 +84,7 @@ Mean bias of all nodes: 0%
| `--modelPath=` | 必选 | 指定需要进行基准测试的MindSpore Lite模型文件路径。 | String | null | - |
| `--accuracyThreshold=` | 可选 | 指定准确度阈值。 | Float | 0.5 | - |
| `--calibDataPath=` | 可选 | 指定标杆数据的文件路径。标杆数据作为该测试模型的对比输出,是该测试模型使用相同输入并由其它深度学习框架前向推理而来。 | String | null | - |
-| `--calibDataType=` | 可选 | 指定标杆数据类型。 | String | FLOAT | FLOAT、INT8 |
+| `--calibDataType=` | 可选 | 指定标杆数据类型。 | String | FLOAT | FLOAT、INT8、UINT8 |
| `--cpuBindMode=` | 可选 | 指定模型推理程序运行时绑定的CPU核类型。 | Integer | 1 | -1:表示中核<br>1:表示大核<br>0:表示不绑定 |
| `--device=` | 可选 | 指定模型推理程序运行的设备类型。 | String | CPU | CPU、GPU |
| `--help` | 可选 | 显示`benchmark`命令的帮助信息。 | - | - | - |
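当同时指定`--calibDataPath`与`--accuracyThreshold`时,benchmark会将模型输出与标杆数据逐点比较,并输出类似"Mean bias of all nodes: 0%"的结果。下面用一段独立的C++代码示意这类平均偏差指标的一种计算方式(具体公式以MindSpore Lite的benchmark实现为准,此处仅为假设性示意):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// 假设性示意:输出与标杆数据的平均相对偏差(百分比),
// 可与 --accuracyThreshold 指定的阈值比较以判断精度是否达标。
float MeanBiasPercent(const std::vector<float> &output,
                      const std::vector<float> &golden) {
  if (output.empty()) return 0.0f;
  float total = 0.0f;
  for (size_t i = 0; i < output.size(); ++i) {
    // 标杆值接近0时退化为绝对误差,避免除零。
    float denom = std::fabs(golden[i]) > 1e-6f ? std::fabs(golden[i]) : 1.0f;
    total += std::fabs(output[i] - golden[i]) / denom;
  }
  return 100.0f * total / output.size();
}
```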
diff --git a/lite/tutorials/source_zh_cn/build.md b/tutorials/lite/source_zh_cn/use/build.md
similarity index 66%
rename from lite/tutorials/source_zh_cn/build.md
rename to tutorials/lite/source_zh_cn/use/build.md
index a3e60383d37df133bbfc65f5b614311e45119032..3395618c156e700be3519d832e6eb2b0ded0d7e9 100644
--- a/lite/tutorials/source_zh_cn/build.md
+++ b/tutorials/lite/source_zh_cn/use/build.md
@@ -10,24 +10,21 @@
- [编译输出](#编译输出)
- [模型转换工具converter目录结构说明](#模型转换工具converter目录结构说明)
- [模型推理框架runtime及其他工具目录结构说明](#模型推理框架runtime及其他工具目录结构说明)
- - [Windows环境编译](#windows环境编译)
- - [环境要求](#环境要求-1)
- - [编译选项](#编译选项-1)
- - [编译示例](#编译示例-1)
- - [编译输出](#编译输出-1)
+ - [图像处理库目录结构说明](#图像处理库目录结构说明)
-
+
本章节介绍如何快速编译出MindSpore Lite,其包含的模块如下:
| 模块 | 支持平台 | 说明 |
| --- | ---- | ---- |
-| converter | Linux、Windows | 模型转换工具 |
+| converter | Linux | 模型转换工具 |
| runtime | Linux、Android | 模型推理框架 |
| benchmark | Linux、Android | 基准测试工具 |
-| time_profiler | Linux、Android | 性能分析工具 |
+| timeprofiler | Linux、Android | 性能分析工具 |
+| imageprocess | Linux、Android | 图像处理库 |
## Linux环境编译
@@ -35,7 +32,7 @@
- 系统环境:Linux x86_64,推荐使用Ubuntu 18.04.02LTS
-- runtime、benchmark、time_profiler编译依赖
+- runtime、benchmark、timeprofiler编译依赖
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0
- [Android_NDK](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip) >= r20
@@ -53,6 +50,7 @@
- [Libevent](https://libevent.org) >= 2.0
- [M4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18
- [OpenSSL](https://www.openssl.org/) >= 1.1.1
+ - [Python](https://www.python.org/) >= 3.7.5
> - 当安装完依赖项Android_NDK后,需配置环境变量:`export ANDROID_NDK={$NDK_PATH}/android-ndk-r20b`。
> - 编译脚本中会执行`git clone`获取第三方依赖库的代码,请提前确保git的网络设置正确可用。
@@ -69,6 +67,7 @@ MindSpore Lite提供编译脚本`build.sh`用于一键式编译,位于MindSpor
| -j[n] | 设定编译时所用的线程数,否则默认设定为8线程 | Integer | 否 |
| -e | 选择除CPU之外的其他内置算子类型,仅在ARM架构下适用,当前仅支持GPU | GPU | 否 |
| -h | 显示编译帮助信息 | 无 | 否 |
+| -n | 指定编译轻量级图片处理模块 | lite_cv | 否 |
> 在`-I`参数变动时,如`-I x86_64`变为`-I arm64`,添加`-i`参数进行增量编译不生效。
@@ -77,7 +76,7 @@ MindSpore Lite提供编译脚本`build.sh`用于一键式编译,位于MindSpor
首先,在进行编译之前,需从MindSpore代码仓下载源码。
```bash
-git clone https://gitee.com/mindspore/mindspore.git
+git clone https://gitee.com/mindspore/mindspore.git -b r1.0
```
然后,在源码根目录下执行如下命令,可编译不同版本的MindSpore Lite。
@@ -102,11 +101,17 @@ git clone https://gitee.com/mindspore/mindspore.git
bash build.sh -I arm64 -e gpu
```
+- 编译ARM64带图像预处理模块。
+ ```bash
+ bash build.sh -I arm64 -n lite_cv
+ ```
+
### 编译输出
-编译完成后,进入`mindspore/output/`目录,可查看编译后生成的文件。文件分为两部分:
+编译完成后,进入`mindspore/output/`目录,可查看编译后生成的文件。文件分为三部分:
- `mindspore-lite-{version}-converter-{os}.tar.gz`:包含模型转换工具converter。
-- `mindspore-lite-{version}-runtime-{os}-{device}.tar.gz`:包含模型推理框架runtime、基准测试工具benchmark和性能分析工具time_profiler。
+- `mindspore-lite-{version}-runtime-{os}-{device}.tar.gz`:包含模型推理框架runtime、基准测试工具benchmark和性能分析工具timeprofiler。
+- `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`:包含图像处理库imageprocess。
> version:输出件版本号,与所编译的分支代码对应的版本一致。
>
@@ -119,6 +124,7 @@ git clone https://gitee.com/mindspore/mindspore.git
```bash
tar -xvf mindspore-lite-{version}-converter-{os}.tar.gz
tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
+tar -xvf mindspore-lite-{version}-minddata-{os}-{device}.tar.gz
```
#### 模型转换工具converter目录结构说明
@@ -148,7 +154,7 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
│ └── third_party # 第三方库头文件和库
│ ├── flatbuffers # FlatBuffers头文件
│ └── include # 推理框架头文件
- │ └── time_profile # 模型网络层耗时分析工具
+ │ └── time_profiler # 模型网络层耗时分析工具
```
@@ -159,75 +165,45 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
│ └── benchmark # 基准测试工具
│ └── lib # 推理框架动态库
│ ├── libmindspore-lite.so # MindSpore Lite推理框架的动态库
- │ ├── liboptimize.so # MindSpore Lite算子性能优化库
+ │ ├── libmindspore-lite-fp16.so # MindSpore Lite Float16算子性能优化库
+ │ ├── libmindspore-lite-optimize.so # MindSpore Lite量化算子性能优化库
│ └── third_party # 第三方库头文件和库
│ ├── flatbuffers # FlatBuffers头文件
│ └── include # 推理框架头文件
- │ └── time_profile # 模型网络层耗时分析工具
+ │ └── time_profiler # 模型网络层耗时分析工具
```
- 当编译选项为`-I arm32`时:
```
|
- ├── mindspore-lite-{version}-runtime-arm64-cpu
+ ├── mindspore-lite-{version}-runtime-arm32-cpu
│ └── benchmark # 基准测试工具
│ └── lib # 推理框架动态库
│ ├── libmindspore-lite.so # MindSpore Lite推理框架的动态库
│ └── third_party # 第三方库头文件和库
│ ├── flatbuffers # FlatBuffers头文件
│ └── include # 推理框架头文件
- │ └── time_profile # 模型网络层耗时分析工具
+ │ └── time_profiler # 模型网络层耗时分析工具
```
-> 1. `liboptimize.so`仅在runtime-arm64的输出包中存在,仅在ARMv8.2和支持fp16特性的CPU上使用。
-> 2. 编译ARM64默认可获得arm64-cpu的推理框架输出件,若添加`-e gpu`则获得arm64-gpu的推理框架输出件,此时包名为`mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`,编译ARM32同理。
-> 3. 运行converter、benchmark或time_profile目录下的工具前,都需配置环境变量,将MindSpore Lite和Protobuf的动态库所在的路径配置到系统搜索动态库的路径中。以0.7.0-beta版本下编译为例:配置converter:`export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/flatbuffers/lib:${LD_LIBRARY_PATH}`;配置benchmark和timeprofiler:`export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`。
+> 1. `libmindspore-lite-optimize.so`仅在runtime-arm64的输出包中存在,是仅在ARMv8.2及以上版本且支持dotprod指令的CPU上使用的性能优化库。
+> 2. `libmindspore-lite-fp16.so`仅在runtime-arm64的输出包中存在,是仅在ARMv8.2及以上版本且支持fp16的CPU上使用的性能优化库。
+> 3. 编译ARM64默认可获得arm64-cpu的推理框架输出件,若添加`-e gpu`则获得arm64-gpu的推理框架输出件,此时包名为`mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`,编译ARM32同理。
+> 4. 运行converter、benchmark或time_profiler目录下的工具前,都需配置环境变量,将MindSpore Lite和Protobuf的动态库所在的路径配置到系统搜索动态库的路径中。以0.7.0-beta版本下编译为例:配置converter:`export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/flatbuffers/lib:${LD_LIBRARY_PATH}`;配置benchmark和timeprofiler:`export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`。
-## Windows环境编译
+#### 图像处理库目录结构说明
-### 环境要求
-
-- 支持的编译环境为:Windows 10,64位。
-
-- 编译依赖
- - [CMake](https://cmake.org/download/) >= 3.14.1
- - [MinGW GCC](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z/download) = 7.3.0
- - [Python](https://www.python.org/) >= 3.7.5
-
-> 编译脚本中会执行`git clone`获取第三方依赖库的代码,请提前确保git的网络设置正确可用。
-
-### 编译选项
-
-MindSpore Lite的编译选项如下。
-
-| 参数 | 参数说明 | 是否必选 |
-| -------- | ----- | ---- |
-| **lite** | **设置该参数,则对Mindspore Lite工程进行编译** | **是** |
-| [n] | 设定编译时所用的线程数,否则默认设定为6线程 | 否 |
-
-### 编译示例
+图像处理库在`-I arm64 -n lite_cv`编译选项下获得,内容包括以下几部分:
-首先,使用git工具从MindSpore代码仓下载源码。
-
-```bash
-git clone https://gitee.com/mindspore/mindspore.git
```
-
-然后,使用cmd工具在源码根目录下,执行如下命令即可编译MindSpore Lite。
-
-- 以默认线程数(6线程)编译Windows版本。
- ```bash
- call build.bat lite
- ```
-- 以指定线程数8编译Windows版本。
- ```bash
- call build.bat lite 8
- ```
-
-### 编译输出
-
-编译完成之后,进入`mindspore/output/`目录,解压后即可获取输出件`mindspore-lite-{version}-converter-win-cpu.zip`,其中含有转换工具可执行文件。
-
-> version:输出件版本号,与所编译的分支代码对应的版本一致。
+|
+├── mindspore-lite-{version}-minddata-{os}-{device}
+│   └── include # 头文件
+│       ├── lite_cv # 图像处理库头文件
+│   └── lib # 动态库
+│       ├── libminddata-lite.so # 图像处理动态库
+│   └── third_party # 第三方库头文件和库
+│       ├── flatbuffers # Flatbuffers的动态库
+```
diff --git a/tutorials/lite/source_zh_cn/use/convert_model.rst b/tutorials/lite/source_zh_cn/use/convert_model.rst
new file mode 100644
index 0000000000000000000000000000000000000000..740dbcafe16c372a770cf5f420b54dc7ce8ab9ca
--- /dev/null
+++ b/tutorials/lite/source_zh_cn/use/convert_model.rst
@@ -0,0 +1,8 @@
+转换为MindSpore Lite模型
+========================
+
+.. toctree::
+ :maxdepth: 1
+
+ converter_tool
+ post_training_quantization
\ No newline at end of file
diff --git a/lite/tutorials/source_zh_cn/use/converter_tool.md b/tutorials/lite/source_zh_cn/use/converter_tool.md
similarity index 67%
rename from lite/tutorials/source_zh_cn/use/converter_tool.md
rename to tutorials/lite/source_zh_cn/use/converter_tool.md
index 1b9ad944df5fa482e4e91a49b80a0234a86cc8f9..51c0a03921563f04438a49e9eb1c495dd436eb04 100644
--- a/lite/tutorials/source_zh_cn/use/converter_tool.md
+++ b/tutorials/lite/source_zh_cn/use/converter_tool.md
@@ -15,7 +15,7 @@
-
+
## 概述
@@ -29,9 +29,9 @@ MindSpore Lite提供离线转换模型功能的工具,支持多种类型的模
使用MindSpore Lite模型转换工具,需要进行如下环境准备工作。
-- 编译:模型转换工具代码在MindSpore源码的`mindspore/lite/tools/converter`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id1)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id3)编译x86_64版本。
+- 编译:模型转换工具代码在MindSpore源码的`mindspore/lite/tools/converter`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id1)和[编译示例](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id3)编译x86_64版本。
-- 运行:参考构建文档中的[编译输出](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id4),获得`converter`工具,并配置环境变量。
+- 运行:参考构建文档中的[编译输出](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id4),获得`converter`工具,并配置环境变量。
### 使用示例
@@ -53,7 +53,7 @@ bash build.sh -I x86_64
结果显示为:
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
这表示已经成功将Caffe模型转化为MindSpore Lite模型,获得新文件`lenet.ms`。
@@ -61,7 +61,7 @@ bash build.sh -I x86_64
- MindSpore模型`model.mindir`
```bash
- ./converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
+ ./converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite模型`model.tflite`
@@ -79,18 +79,19 @@ bash build.sh -I x86_64
./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining
```
- - 感知量化模型输入设置为int8,输出设置为int8
+ - 感知量化模型输入输出类型设置为float
```bash
- ./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining --inputInferenceType=INT8 --inferenceType=INT8
+ ./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining --inferenceType=FLOAT
```
以上几种情况下,均显示如下转换成功提示,且同时获得`model.ms`目标文件。
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
+- 如果转换命令执行失败,程序会返回一个[错误码](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/errorcode_and_metatype.html)。
-> 训练后量化示例请参考。
+> 训练后量化示例请参考。
### 参数说明
@@ -101,15 +102,19 @@ MindSpore Lite模型转换工具提供了多种参数设置,用户可根据需
| 参数 | 是否必选 | 参数说明 | 取值范围 | 默认值 |
| -------- | ------- | ----- | --- | ---- |
| `--help` | 否 | 打印全部帮助信息。 | - | - |
-| `--fmk=` | 是 | 输入模型的原始格式。 | MS、CAFFE、TFLITE、ONNX | - |
+| `--fmk=` | 是 | 输入模型的原始格式。 | MINDIR、CAFFE、TFLITE、ONNX | - |
| `--modelFile=` | 是 | 输入模型的路径。 | - | - |
| `--outputFile=` | 是 | 输出模型的路径(不存在时将自动创建目录),不需加后缀,可自动生成`.ms`后缀。 | - | - |
| `--weightFile=` | 转换Caffe模型时必选 | 输入模型weight文件的路径。 | - | - |
-| `--quantType=` | 否 | 设置模型的量化类型。 | PostTraining:训练后量化
AwareTraining:感知量化。 | - |
-|` --inputInferenceType=` | 否 | 设置感知量化模型输入数据类型,如果和原模型不一致则转换工具会在模型前插转换算子,使得转换后的模型输入类型和inputInferenceType保持一致。 | FLOAT、INT8 | FLOAT |
-| `--inferenceType=` | 否 | 设置感知量化模型输出数据类型,如果和原模型不一致则转换工具会在模型前插转换算子,使得转换后的模型输出类型和inferenceType保持一致。 | FLOAT、INT8 | FLOAT |
+| `--quantType=` | 否 | 设置模型的量化类型。 | WeightQuant:训练后量化(权重量化)<br>PostTraining:训练后量化(全量化)<br>AwareTraining:感知量化 | - |
+| `--inferenceType=` | 否 | 设置感知量化模型输入输出数据类型,如果和原模型不一致,则转换工具会在模型前后插入转换算子,使转换后的模型输入输出类型与inferenceType保持一致。 | UINT8、FLOAT、INT8 | FLOAT |
| `--stdDev= `| 否 | 感知量化模型转换时用于设置输入数据的标准差。 | (0,+∞) | 128 |
| `--mean=` | 否 | 感知量化模型转换时用于设置输入数据的均值。 | [-128, 127] | -0.5 |
+| `--bitNum=` | 否 | 设定训练后量化(权重量化)的比特数,目前仅支持8bit量化 | 8 | 8 |
+| `--quantSize=` | 否 | 设定参与训练后量化(权重量化)的卷积核尺寸阈值,若卷积核尺寸大于该值,则对此权重进行量化 | (0,+∞) | 0 |
+| `--convWeightQuantChannelThreshold=` | 否 | 设定参与训练后量化(权重量化)的卷积通道数阈值,若卷积通道数大于该值,则对此权重进行量化 | (0,+∞) | 16 |
+| `--config_file=` | 否 | 训练后量化(全量化)校准数据集配置文件路径 | - | - |
+
> - 参数名和参数值之间用等号连接,中间不能有空格。
> - Caffe模型一般分为两个文件:`*.prototxt`模型结构,对应`--modelFile`参数;`*.caffemodel`模型权值,对应`--weightFile`参数。
@@ -120,22 +125,15 @@ MindSpore Lite模型转换工具提供了多种参数设置,用户可根据需
使用MindSpore Lite模型转换工具,需要进行如下环境准备工作。
-- 编译:模型转换工具代码在MindSpore源码的`mindspore/lite/tools/converter`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id5)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id7)编译Windows版本。
-
-- 运行:参考部署文档中的[编译输出](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id8),获得`converter`工具,,并配置MinGW环境变量(在系统变量Path里添加MinGW的bin目录)。
+- 获取工具包:下载Windows转换工具的Zip包并解压至本地目录,获得`converter`工具。
### 参数说明
-参考Linux环境模型转换工具的[参数说明](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html#id4)。
+参考Linux环境模型转换工具的[参数说明](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/converter_tool.html#id4)。
### 使用示例
-首先,使用cmd工具在源码根目录下,输入命令进行编译,可参考`build.md`。
-```bash
-call build.bat lite
-```
-
-然后,设置日志打印级别为INFO。
+设置日志打印级别为INFO。
```bash
set MSLOG=INFO
```
@@ -152,7 +150,7 @@ set MSLOG=INFO
结果显示为:
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
这表示已经成功将Caffe模型转化为MindSpore Lite模型,获得新文件`lenet.ms`。
@@ -160,7 +158,7 @@ set MSLOG=INFO
- MindSpore模型`model.mindir`
```bash
- call converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
+ call converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite模型`model.tflite`
@@ -180,5 +178,6 @@ set MSLOG=INFO
以上几种情况下,均显示如下转换成功提示,且同时获得`model.ms`目标文件。
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
+- 如果转换命令执行失败,程序会返回一个[错误码](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/errorcode_and_metatype.html)。
diff --git a/lite/tutorials/source_zh_cn/use/evaluating_the_model.rst b/tutorials/lite/source_zh_cn/use/evaluating_the_model.rst
similarity index 100%
rename from lite/tutorials/source_zh_cn/use/evaluating_the_model.rst
rename to tutorials/lite/source_zh_cn/use/evaluating_the_model.rst
diff --git a/tutorials/lite/source_zh_cn/use/image_processing.md b/tutorials/lite/source_zh_cn/use/image_processing.md
new file mode 100644
index 0000000000000000000000000000000000000000..043c764b277188f91d5177a0883bb833555e3f35
--- /dev/null
+++ b/tutorials/lite/source_zh_cn/use/image_processing.md
@@ -0,0 +1,151 @@
+# 预处理图像数据
+
+
+
+- [预处理图像数据](#预处理图像数据)
+ - [概述](#概述)
+ - [导入图像预处理函数的库](#导入图像预处理函数的库)
+ - [对图像进行初始化](#对图像进行初始化)
+ - [使用示例](#使用示例)
+ - [可选的图像预处理算子](#可选的图像预处理算子)
+ - [对图像进行缩放操作](#对图像进行缩放操作)
+ - [使用示例](#使用示例-1)
+ - [对图像数据类型进行转换](#对图像数据类型进行转换)
+ - [使用示例](#使用示例-2)
+ - [对图像数据进行裁剪](#对图像数据进行裁剪)
+ - [使用示例](#使用示例-3)
+ - [对图像数据进行归一化处理](#对图像数据进行归一化处理)
+ - [使用示例](#使用示例-4)
+
+
+
+
+
+## 概述
+
+图像预处理的主要目的是消除图像中无关的信息,恢复有用的真实信息,增强有关信息的可检测性和最大限度地简化数据,从而改进特征抽取、图像分割、匹配和识别的可靠性。此处是通过创建LiteMat对象,在推理前对图像数据进行处理,达到模型推理所需要的数据格式要求。
+
+流程如下:
+
+## 导入图像预处理函数的库
+
+```cpp
+#include "lite_cv/lite_mat.h"
+#include "lite_cv/image_process.h"
+```
+
+## 对图像进行初始化
+
+此处使用`image_process.h`文件中的[InitFromPixel](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/dataset.html#initfrompixel)函数对图像进行初始化。
+
+```cpp
+bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m);
+```
+
+### 使用示例
+
+```cpp
+// Create the data object of the LiteMat object.
+LiteMat lite_mat_bgr;
+
+// Initialize the lite_mat_bgr object.
+// The image data pointer passed in by the user (The data in the Bitmap corresponding to the Android platform).
+InitFromPixel(pixel_ptr, LPixelType::RGBA2GRAY, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+```
+
+## 可选的图像预处理算子
+
+此处的图像处理算子,用户可以根据实际情况任意搭配使用。
+
+### 对图像进行缩放操作
+
+此处使用`image_process.h`中的[ResizeBilinear](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/dataset.html#resizebilinear)函数,通过双线性插值算法调整图像大小。当前仅支持uint8数据类型,支持的通道数为3和1。
+
+```cpp
+bool ResizeBilinear(const LiteMat &src, LiteMat &dst, int dst_w, int dst_h);
+```
+
+#### 使用示例
+
+```cpp
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create a resize image data object.
+LiteMat lite_mat_resize;
+
+// Resize the image.
+ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+```
+
+### 对图像数据类型进行转换
+
+此处使用`image_process.h`中的[ConvertTo](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/dataset.html#convertto)函数对图像数据类型进行转换,目前仅支持将uint8转换为float。
+
+```cpp
+bool ConvertTo(const LiteMat &src, LiteMat &dst, double scale = 1.0);
+```
+
+#### 使用示例
+
+```cpp
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create the converted data type object.
+LiteMat lite_mat_convert_float;
+
+// Perform conversion type operations on the object. The currently supported conversion is to convert uint8 to float.
+ConvertTo(lite_mat_bgr, lite_mat_convert_float);
+```
+
+### 对图像数据进行裁剪
+
+此处使用`image_process.h`中的[Crop](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/dataset.html#crop)函数对图像进行裁剪,目前支持的通道数为3和1。
+
+```cpp
+bool Crop(const LiteMat &src, LiteMat &dst, int x, int y, int w, int h);
+```
+
+#### 使用示例
+
+```cpp
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create the cropped object.
+LiteMat lite_mat_cut;
+
+// The image is cropped by the values of x, y, w, h.
+Crop(lite_mat_bgr, lite_mat_cut, 16, 16, 224, 224);
+```
+
+### 对图像数据进行归一化处理
+
+为了消除数据指标之间的量纲影响、解决数据指标之间的可比性问题,此处使用`image_process.h`中的[SubStractMeanNormalize](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/dataset.html#substractmeannormalize)函数对图像数据进行归一化处理。
+
+```cpp
+bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, float *mean, float *norm);
+```
+
+#### 使用示例
+
+```cpp
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// The mean value of the image data.
+// The variance of the image data.
+float means[1] = {0.485};
+float norm[1] = {1.0 / 0.229};
+
+// Create a normalized image object.
+LiteMat lite_mat_bgr_norm;
+
+// The image data is normalized by the mean value and variance of the image data.
+SubStractMeanNormalize(lite_mat_bgr, lite_mat_bgr_norm, means, norm);
+```
\ No newline at end of file
diff --git a/lite/tutorials/source_zh_cn/use/post_training_quantization.md b/tutorials/lite/source_zh_cn/use/post_training_quantization.md
similarity index 37%
rename from lite/tutorials/source_zh_cn/use/post_training_quantization.md
rename to tutorials/lite/source_zh_cn/use/post_training_quantization.md
index 839a7347ac9387f3b7de95852484447a65f1f75c..a72d7c8571e43e21ed3e0db4a82d3244e9724b92 100644
--- a/lite/tutorials/source_zh_cn/use/post_training_quantization.md
+++ b/tutorials/lite/source_zh_cn/use/post_training_quantization.md
@@ -1,27 +1,101 @@
-# 训练后量化
+# 转换为MindSpore Lite量化模型(训练后量化)
-- [训练后量化](#训练后量化)
+- [转换为MindSpore Lite量化模型(训练后量化)](#转换为mindspore-lite量化模型训练后量化)
- [概述](#概述)
- - [使用示例](#使用示例)
- - [部分模型精度结果](#部分模型精度结果)
- - [参数说明](#参数说明)
+ - [权重量化](#权重量化)
+ - [参数说明](#参数说明)
+ - [使用步骤](#使用步骤)
+ - [部分模型精度结果](#部分模型精度结果)
+ - [全量化](#全量化)
+ - [参数说明](#参数说明-1)
+ - [使用步骤](#使用步骤-1)
+ - [部分模型精度结果](#部分模型精度结果-1)
-
+
## 概述
-对于已经训练好的`float32`模型,通过训练后量化将模型转为`int8`模型,不仅能减小模型大小,而且能显著提高推理性能。在MindSpore端侧框架中,这部分功能集成在模型转换工具`conveter_lite`中,通过增加命令行参数,便能够转换得到量化后模型。
+对于已经训练好的`float32`模型,通过训练后量化将其转为`int8`,不仅能减小模型大小,而且能显著提高推理性能。在MindSpore Lite中,这部分功能集成在模型转换工具`converter_lite`内,通过增加命令行参数,便能够转换得到量化后模型。
目前训练后量化属于alpha阶段(支持部分网络,不支持多输入模型),正在持续完善中。
+MindSpore Lite训练后量化分为两类:
+1. 权重量化:单独对模型的权值进行量化;
+2. 全量化:对模型的权值、激活值、bias值统一进行量化。
+
+训练后量化在两种情况下所需的数据类型和参数设定不同,但均可通过转换工具设定。有关转换工具`converter_lite`的使用方法可参考[转换为MindSpore Lite模型](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/converter_tool.html)。在此基础之上进行配置,启用训练后量化。
+
+## 权重量化
+
+下面对权重量化的使用方式和效果进行阐述。
+
+### 参数说明
+
+权重量化转换命令的一般形式为:
+```bash
+./converter_lite --fmk=ModelType --modelFile=ModelFilePath --outputFile=ConvertedModelPath --quantType=WeightQuant --bitNum=BitNumValue --quantSize=QuantizationSizeThresholdValue --convWeightQuantChannelThreshold=ConvWeightQuantChannelThresholdValue
+```
+下面对此命令的量化相关参数进行说明:
+
+| 参数 | 属性 | 功能描述 | 参数类型 | 默认值 | 取值范围 |
+| -------- | ------- | ----- | ----- |----- | ----- |
+| `--quantType=` | 必选 | 设置为WeightQuant,启用权重量化 | String | - | 必须设置为WeightQuant |
+| `--bitNum=` | 可选 | 设定权重量化的比特数,目前仅支持8bit量化 | Integer | 8 | 8 |
+| `--quantSize=` | 可选 | 设定参与权重量化的卷积核尺寸阈值,若卷积核尺寸大于该值,则对此权重进行量化;建议设置为500 | Integer | 0 | (0,+∞) |
+| `--convWeightQuantChannelThreshold=` | 可选 | 设定参与权重量化的卷积通道数阈值,若卷积通道数大于该值,则对此权重进行量化;建议设置为16 | Integer | 16 | (0,+∞) |
+
+用户可根据模型及自身需要对权重量化的参数作出调整。
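下面用一段独立代码示意上述两个阈值的筛选规则:仅当卷积核尺寸与卷积通道数都超过各自阈值时,才对该权重进行量化(两个条件的组合方式为本文的推测,具体以converter_lite实现为准):

```cpp
// 假设性示意:根据 quantSize 与 convWeightQuantChannelThreshold
// 判断某个卷积权重是否参与权重量化。
bool ShouldQuantizeWeight(int kernel_size, int channels,
                          int quant_size, int channel_threshold) {
  return kernel_size > quant_size && channels > channel_threshold;
}
```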
+
+
+### 使用步骤
+
+1. 正确编译出`converter_lite`可执行文件。该部分可参考构建文档[编译MindSpore Lite](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html),获得`converter_lite`工具,并配置环境变量。
+2. 以TensorFlow Lite模型为例,执行权重量化模型转换命令:
+ ```
+ ./converter_lite --fmk=TFLITE --modelFile=Inception_v3.tflite --outputFile=Inception_v3.tflite --quantType=WeightQuant --bitNum=8 --quantSize=0 --convWeightQuantChannelThreshold=0
+ ```
+3. 上述命令执行成功后,便可得到量化后的模型`Inception_v3.tflite.ms`,量化后的模型大小通常会下降到FP32模型的1/4。
+
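第3步中"模型大小通常会下降到FP32模型的1/4"可由权重存储位宽直接估算:float32权重每个占4字节,量化为int8后每个占1字节(忽略少量量化参数的额外开销)。示意如下:

```cpp
#include <cstddef>

// 粗略估算:n 个权重在 float32 与 int8 存储下各自占用的字节数。
size_t Fp32WeightBytes(size_t n) { return n * sizeof(float); }  // 4字节/权重
size_t Int8WeightBytes(size_t n) { return n * sizeof(char); }   // 1字节/权重
```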
+### 部分模型精度结果
+
+ | 模型 | 测试数据集 | FP32模型精度 | 权重量化精度 |
+ | -------- | ------- | ----- | ----- |
+ | [Inception_V3](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) | [ImageNet](http://image-net.org/) | 77.92% | 77.84% |
+ | [Mobilenet_V1_1.0_224](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) | [ImageNet](http://image-net.org/) | 70.96% | 70.56% |
+
+> 以上所有结果均在x86环境上测得。
+
+## 全量化
+
+下面对全量化的使用方式和效果进行阐述。
+
+### 参数说明
+
+全量化转换命令的一般形式为:
```
./converter_lite --fmk=ModelType --modelFile=ModelFilePath --outputFile=ConvertedModelPath --quantType=PostTraining --config_file=config.cfg
```
+下面对此命令的量化相关参数进行说明:
-## 使用示例
+| 参数 | 属性 | 功能描述 | 参数类型 | 默认值 | 取值范围 |
+| -------- | ------- | ----- | ----- |----- | ----- |
+| `--quantType=` | 必选 | 设置为PostTraining,启用全量化 | String | - | 必须设置为PostTraining |
+| `--config_file=` | 必选 | 校准数据集配置文件路径 | String | - | - |
+
+为了计算激活值的量化参数,用户需要提供校准数据集。校准数据集最好来自真实推理场景,能表征模型的实际输入情况,数量在100个左右。
+校准数据集配置文件采用`key=value`的方式定义相关参数,需要配置的`key`如下:
+
+| 参数名 | 属性 | 功能描述 | 参数类型 | 默认值 | 取值范围 |
+| -------- | ------- | ----- | ----- | ----- | ----- |
+| image_path | 必选 | 存放校准数据集的目录 | String | - | 该目录存放可直接用于执行推理的输入数据。由于目前框架还不支持数据预处理,所有数据必须事先完成所需的转换,使得它们满足推理的输入要求。 |
+| batch_count | 可选 | 使用的输入数目 | Integer | 100 | (0,+∞) |
+| method_x | 可选 | 网络层输入输出数据量化算法 | String | KL | KL,MAX_MIN。 KL:基于[KL散度](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf)对数据范围作量化校准;MAX_MIN:基于最大值、最小值计算数据的量化参数。在模型以及数据集比较简单的情况下,推荐使用MAX_MIN |
+| thread_num | 可选 | 使用校准数据集执行推理流程时的线程数 | Integer | 1 | (0,+∞) |
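表中MAX_MIN算法基于数据的最大值、最小值计算量化参数。下面给出8bit仿射量化参数计算的一个假设性示意(量化范围、取整与对称性处理以MindSpore Lite的实际实现为准):

```cpp
#include <cmath>

// 假设性示意:由观测到的 min/max 推导8bit(无符号,[0,255])仿射量化参数。
struct QuantParam {
  float scale;
  int zero_point;
};

QuantParam MaxMinParams(float min_val, float max_val) {
  QuantParam qp;
  qp.scale = (max_val - min_val) / 255.0f;
  qp.zero_point = static_cast<int>(std::lround(-min_val / qp.scale));
  return qp;
}
```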
+
+### 使用步骤
1. 正确编译出`converter_lite`可执行文件。
2. 准备校准数据集,假设存放在`/dir/images`目录,编写配置文件`config.cfg`,内容如下:
@@ -32,34 +106,17 @@
thread_num=1
```
校准数据集可以选择测试数据集的子集,要求`/dir/images`目录下存放的每个文件均是预处理好的输入数据,每个文件都可以直接用于推理的输入。
-3. 以MindSpore模型为例,执行带训练后量化的模型转换命令:
+3. 以MindSpore模型为例,执行全量化的模型转换命令:
```
- ./converter_lite --fmk=MS --modelFile=lenet.ms --outputFile=lenet_quant --quantType=PostTraining --config_file=config.cfg
+ ./converter_lite --fmk=MINDIR --modelFile=lenet.mindir --outputFile=lenet_quant --quantType=PostTraining --config_file=config.cfg
```
-4. 上述命令执行成功后,便可得到量化后的模型lenet_quant.ms,通常量化后的模型大小会下降到FP32模型的1/4。
+4. 上述命令执行成功后,便可得到量化后的模型`lenet_quant.ms`,通常量化后的模型大小会下降到FP32模型的1/4。
-## 部分模型精度结果
+### 部分模型精度结果
- | 模型 | 测试数据集 | method_x | FP32模型精度 | 训练后量化精度 | 说明 |
+ | 模型 | 测试数据集 | method_x | FP32模型精度 | 全量化精度 | 说明 |
| -------- | ------- | ----- | ----- | ----- | ----- |
| [Inception_V3](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) | [ImageNet](http://image-net.org/) | KL | 77.92% | 77.95% | 校准数据集随机选择ImageNet Validation数据集中的100张 |
| [Mobilenet_V1_1.0_224](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) | [ImageNet](http://image-net.org/) | KL | 70.96% | 70.69% | 校准数据集随机选择ImageNet Validation数据集中的100张 |
> 以上所有结果均在x86环境上测得。
-
-## 参数说明
-
-| 参数 | 属性 | 功能描述 | 参数类型 | 默认值 | 取值范围 |
-| -------- | ------- | ----- | ----- |----- | ----- |
-| --quantType | 必选 | 设置为PostTraining,启用训练后量化 | String | - | 必须设置为PostTraining |
-| --config_file | 必选 | 校准数据集配置文件路径 | String | - | - |
-
-为了计算激活值的量化参数,用户需要提供校准数据集。校准数据集最好来自真实推理场景,能表征模型的实际输入情况,数量在100个左右。
-校准数据集配置文件采用`key=value`的方式定义相关参数,需要配置的`key`如下:
-
-| 参数名 | 属性 | 功能描述 | 参数类型 | 默认值 | 取值范围 |
-| -------- | ------- | ----- | ----- | ----- | ----- |
-| image_path | 必选 | 存放校准数据集的目录 | String | - | 该目录存放可直接用于执行推理的输入数据。由于目前框架还不支持数据预处理,所有数据必须事先完成所需的转换,使得它们满足推理的输入要求。 |
-| batch_count | 可选 | 使用的输入数目 | Integer | 100 | 大于0 |
-| method_x | 可选 | 网络层输入输出数据量化算法 | String | KL | KL,MAX_MIN。 KL: 基于[KL散度](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf)对数据范围作量化校准; MAX_MIN:基于最大值、最小值计算数据的量化参数。 在模型以及数据集比较较简单的情况下,推荐使用MAX_MIN |
-| thread_num | 可选 | 使用校准数据集执行推理流程时的线程数 | Integer | 1 | 大于0 |
\ No newline at end of file
diff --git a/lite/tutorials/source_zh_cn/use/runtime.md b/tutorials/lite/source_zh_cn/use/runtime.md
similarity index 82%
rename from lite/tutorials/source_zh_cn/use/runtime.md
rename to tutorials/lite/source_zh_cn/use/runtime.md
index 2ba5ab7bad1af9f591d3e7c7a2b2f92a18953c25..b63d69771d204c96dd811ba0ab4d37b7c9731247 100644
--- a/lite/tutorials/source_zh_cn/use/runtime.md
+++ b/tutorials/lite/source_zh_cn/use/runtime.md
@@ -28,10 +28,14 @@
- [使用示例](#使用示例-5)
- [获取版本号](#获取版本号)
- [使用示例](#使用示例-6)
+ - [Session并行](#Session并行)
+ - [单Session并行](#单session并行)
+ - [多Session并行](#多session并行)
+ - [使用示例](#使用示例-7)
-
+
## 概述
@@ -76,66 +80,16 @@ static Model *Import(const char *model_buf, size_t size);
MindSpore Lite支持异构推理,推理时的主选后端由`Context`中的`device_ctx_`指定,默认为CPU。在进行图编译时,会根据主选后端进行算子选型调度。
-```cpp
-/// \brief DeviceType defined for holding user's preferred backend.
-typedef enum {
- DT_CPU, /**< CPU device type */
- DT_GPU, /**< GPU device type */
- DT_NPU /**< NPU device type, not supported yet */
-} DeviceType;
-
-/// \brief DeviceContext defined for holding DeviceType.
-typedef struct {
- DeviceType type; /**< device type */
-} DeviceContext;
-
-DeviceContext device_ctx_{DT_CPU};
-```
-
MindSpore Lite内置一个进程共享的线程池,推理时通过`thread_num_`指定线程池的最大线程数,默认为2线程,推荐最多不超过4个线程,否则可能会影响性能。
-```cpp
-int thread_num_ = 2; /**< thread number config for thread pool */
-```
-
MindSpore Lite支持动态内存分配和释放,如果没有指定`allocator`,推理时会生成一个默认的`allocator`,也可以通过`Context`方法在多个`Context`中共享内存分配器。
如果用户通过`new`创建`Context`,不再需要时,需要用户通过`delete`释放。一般在创建完Session后,Context即可释放。
-```cpp
-/// \brief Allocator defined a memory pool for malloc memory and free memory dynamically.
-///
-/// \note List public class and interface for reference.
-class Allocator;
-
-/// \brief Context defined for holding environment variables during runtime.
-class MS_API Context {
- public:
- /// \brief Constructor of MindSpore Lite Context using input value for parameters.
- ///
- /// \param[in] thread_num Define the work thread number during the runtime.
- /// \param[in] allocator Define the allocator for malloc.
- /// \param[in] device_ctx Define device information during the runtime.
-  Context(int thread_num, std::shared_ptr<Allocator> allocator, DeviceContext device_ctx);
-
- public:
-  std::shared_ptr<Allocator> allocator = nullptr;
-}
-```
-
### 创建会话
用上一步创建得到的`Context`,调用LiteSession的静态`CreateSession`方法来创建`LiteSession`。函数返回的`LiteSession`实例是一个指针,通过`new`创建,不再需要时,需要用户通过`delete`释放。
-```cpp
-/// \brief Static method to create a LiteSession pointer.
-///
-/// \param[in] context Define the context of session to be created.
-///
-/// \return Pointer of MindSpore Lite LiteSession.
-static LiteSession *CreateSession(lite::Context *context);
-```
-
### 使用示例
下面示例代码演示了`Context`的创建,以及在两个`LiteSession`间共享内存池的功能:
@@ -147,13 +101,16 @@ if (context == nullptr) {
return RET_ERROR;
}
// The preferred backend is GPU, which means, if there is a GPU operator, it will run on the GPU first, otherwise it will run on the CPU.
-context->device_ctx_.type = lite::DT_GPU;
+context->device_type_ = lite::DT_GPU;
// The medium core takes priority in thread and core binding methods. This parameter will work in the BindThread interface. For specific binding effect, see the "Run Graph" section.
context->cpu_bind_mode_ = MID_CPU;
// Configure the number of worker threads in the thread pool to 2, including the main thread.
context->thread_num_ = 2;
// Allocators can be shared across multiple Contexts.
-auto *context2 = new Context(context->thread_num_, context->allocator, context->device_ctx_);
+auto *context2 = new Context();
+context2->thread_num_ = context->thread_num_;
+context2->allocator = context->allocator;
+context2->device_type_ = context->device_type_;
context2->cpu_bind_mode_ = context->cpu_bind_mode_;
// Use Context to create Session.
auto session1 = session::LiteSession::CreateSession(context);
@@ -166,7 +123,7 @@ if (session1 == nullptr) {
// session1 and session2 can share one memory pool.
auto session2 = session::LiteSession::CreateSession(context2);
delete (context2);
-if (session == nullptr) {
+if (session2 == nullptr) {
MS_LOG(ERROR) << "CreateSession failed while running %s", modelName.c_str();
return RET_ERROR;
}
@@ -178,19 +135,7 @@ if (session == nullptr) {
使用MindSpore Lite进行推理时,在已完成会话创建与图编译之后,如果需要对输入的shape进行Resize,则可以通过对输入的tensor重新设置shape,然后调用session的Resize()接口。
-```cpp
-/// \brief Get input MindSpore Lite MSTensors of model.
-///
-/// \return The vector of MindSpore Lite MSTensor.
-virtual std::vector<tensor::MSTensor *> GetInputs() const = 0;
-
-/// \brief Resize inputs shape.
-///
-/// \param[in] inputs Define the new inputs shape.
-///
-/// \return STATUS as an error code of resize inputs, STATUS is defined in errorcode.h.
-virtual int Resize(const std::vector<tensor::MSTensor *> &inputs) = 0;
-```
+> 某些网络不支持可变维度,此时会提示错误信息并异常退出。例如,当模型中存在MatMul算子,且其一个输入Tensor是权重、另一个输入Tensor是输入数据时,调用可变维度接口会导致输入Tensor和权重Tensor的Shape不匹配,最终导致推理失败。
### 使用示例
@@ -200,8 +145,9 @@ virtual int Resize(const std::vector &inputs) = 0;
auto inputs = session->GetInputs();
std::vector<int> resize_shape = {1, 128, 128, 3};
// Assume the model has only one input,resize input shape to [1, 128, 128, 3]
-inputs[0]->set_shape(resize_shape);
-session->Resize(inputs);
+std::vector<std::vector<int>> new_shapes;
+new_shapes.push_back(resize_shape);
+session->Resize(inputs, new_shapes);
```
### 图编译
@@ -503,16 +449,16 @@ virtual void *MutableData() const = 0;
### 使用示例
-下面示例代码演示了使用`GetOutputMapByNode`接口获取输出`MSTensor`,并打印了每个输出`MSTensor`的前十个数据或所有数据:
+下面示例代码演示了使用`GetOutputs`接口获取输出`MSTensor`,并打印了每个输出`MSTensor`的前十个数据或所有数据:
```cpp
// Assume we have created a LiteSession instance named session before.
-auto output_map = session->GetOutputMapByNode();
+auto output_map = session->GetOutputs();
// Assume that the model has only one output node.
auto out_node_iter = output_map.begin();
std::string name = out_node_iter->first;
// Assume that the unique output node has only one output tensor.
-auto out_tensor = out_node_iter->second.front();
+auto out_tensor = out_node_iter->second;
if (out_tensor == nullptr) {
std::cerr << "Output tensor is nullptr" << std::endl;
return -1;
@@ -536,7 +482,7 @@ std::cout << std::endl;
// The elements in outputs do not need to be free by users, because outputs are managed by the MindSpore Lite.
```
-需要注意的是,`GetOutputsByNodeName`、`GetOutputMapByNode`、`GetOutputByTensorName`和`GetOutputMapByTensor`方法返回的vector或map不需要用户释放。
+需要注意的是,`GetOutputsByNodeName`、`GetOutputByTensorName`和`GetOutputs`方法返回的vector或map不需要用户释放。
下面示例代码演示了使用`GetOutputsByNodeName`接口获取输出`MSTensor`的方法:
@@ -552,19 +498,6 @@ if (out_tensor == nullptr) {
}
```
-下面示例代码演示了使用`GetOutputMapByTensor`接口获取输出`MSTensor`的方法:
-
-```cpp
-// Assume we have created a LiteSession instance named session before.
-auto output_map = session->GetOutputMapByTensor();
-// Assume that output node named output_node_name_0 has only one output tensor.
-auto out_tensor = output_vec.front();
-if (out_tensor == nullptr) {
- std::cerr << "Output tensor is nullptr" << std::endl;
- return -1;
-}
-```
-
下面示例代码演示了使用`GetOutputByTensorName`接口获取输出`MSTensor`的方法:
```cpp
@@ -591,3 +524,112 @@ MindSpore Lite提供了`Version`方法可以获取版本号,包含在`include/
#include "include/version.h"
std::string version = mindspore::lite::Version();
```
+
+## Session并行
+MindSpore Lite支持多个`LiteSession`并行推理,但不支持多个线程同时调用单个`LiteSession`的`RunGraph`接口。
+
+### 单Session并行
+
+MindSpore Lite不支持多线程并行执行单个`LiteSession`的推理,否则会得到以下错误信息:
+```cpp
+ERROR [mindspore/lite/src/lite_session.cc:297] RunGraph] 10 Not support multi-threading
+```
+
+### 多Session并行
+
+MindSpore Lite支持多个`LiteSession`同时进行推理的场景,每个`LiteSession`的线程池和内存池都是独立的。
+
+### 使用示例
+
+下面代码演示了如何创建多个`LiteSession`,并且并行执行推理的过程:
+```cpp
+#include <iostream>
+#include <thread>
+#include "src/common/file_utils.h"
+#include "include/model.h"
+#include "include/version.h"
+#include "include/context.h"
+#include "include/lite_session.h"
+
+mindspore::session::LiteSession *GenerateSession(mindspore::lite::Model *model) {
+ if (model == nullptr) {
+ std::cerr << "Read model file failed while running" << std::endl;
+ return nullptr;
+ }
+ auto context = new (std::nothrow) mindspore::lite::Context;
+ if (context == nullptr) {
+ std::cerr << "New context failed while running" << std::endl;
+ return nullptr;
+ }
+
+ auto session = mindspore::session::LiteSession::CreateSession(context);
+ delete (context);
+ if (session == nullptr) {
+ std::cerr << "CreateSession failed while running" << std::endl;
+ return nullptr;
+ }
+ auto ret = session->CompileGraph(model);
+ if (ret != mindspore::lite::RET_OK) {
+ std::cout << "CompileGraph failed while running" << std::endl;
+ delete (session);
+ return nullptr;
+ }
+ auto msInputs = session->GetInputs();
+ for (auto msInput : msInputs) {
+ (void)msInput->MutableData();
+ }
+ return session;
+}
+
+int main(int argc, const char **argv) {
+ size_t size = 0;
+ char *graphBuf = mindspore::lite::ReadFile("test.ms", &size);
+ if (graphBuf == nullptr) {
+ std::cerr << "Read model file failed while running" << std::endl;
+ return -1;
+ }
+ auto model = mindspore::lite::Model::Import(graphBuf, size);
+ if (model == nullptr) {
+ std::cerr << "Import model file failed while running" << std::endl;
+ delete[](graphBuf);
+ return -1;
+ }
+ delete[](graphBuf);
+ auto session1 = GenerateSession(model);
+ if (session1 == nullptr) {
+ std::cerr << "GenerateSession failed" << std::endl;
+ delete(model);
+ return -1;
+ }
+ auto session2 = GenerateSession(model);
+ if (session2 == nullptr) {
+ std::cerr << "GenerateSession failed" << std::endl;
+ delete(model);
+ return -1;
+ }
+
+ std::thread thread1([&](){
+ auto status = session1->RunGraph();
+ if (status != 0) {
+ std::cerr << "Inference error " << status << std::endl;
+ return;
+ }
+ std::cout << "Session1 inference success" << std::endl;
+ });
+
+ std::thread thread2([&](){
+ auto status = session2->RunGraph();
+ if (status != 0) {
+ std::cerr << "Inference error " << status << std::endl;
+ return;
+ }
+ std::cout << "Session2 inference success" << std::endl;
+ });
+
+ thread1.join();
+ thread2.join();
+ delete (session1);
+ delete (session2);
+ delete (model);
+ return 0;
+}
+```
diff --git a/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md b/tutorials/lite/source_zh_cn/use/timeprofiler_tool.md
similarity index 88%
rename from lite/tutorials/source_zh_cn/use/timeprofiler_tool.md
rename to tutorials/lite/source_zh_cn/use/timeprofiler_tool.md
index fbe404c17898439bb7659b9d2e5afaf841dbf5be..bec1da2fada2e0017a244da3b925139ecb7a548b 100644
--- a/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md
+++ b/tutorials/lite/source_zh_cn/use/timeprofiler_tool.md
@@ -10,7 +10,7 @@
-
+
## 概述
@@ -20,16 +20,16 @@
使用TimeProfiler工具,需要进行如下环境准备工作。
-- 编译:TimeProfiler工具代码在MindSpore源码的`mindspore/lite/tools/time_profile`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id1)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id3)执行编译。
+- 编译:TimeProfiler工具代码在MindSpore源码的`mindspore/lite/tools/time_profiler`目录中,参考构建文档中的[环境要求](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id1)和[编译示例](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id3)执行编译。
-- 运行:参考部署文档中的[编译输出](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id4),获得`timeprofile`工具,并配置环境变量。
+- 运行:参考部署文档中的[编译输出](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.0/use/build.html#id4),获得`timeprofiler`工具,并配置环境变量。
## 使用示例
使用TimeProfiler对`test_timeprofiler.ms`模型的网络层进行耗时分析,并且设置模型推理循环运行次数为10,则其命令代码如下:
```bash
-./timeprofile --modelPath=./models/test_timeprofiler.ms --loopCount=10
+./timeprofiler --modelPath=./models/test_timeprofiler.ms --loopCount=10
```
该条命令执行后,TimeProfiler工具会输出模型网络层运行耗时的相关统计信息。对于本例命令,输出的统计信息如下。其中统计信息按照`opName`和`optype`两种划分方式分别显示,`opName`表示算子名,`optype`表示算子类别,`avg`表示该算子的平均单次运行时间,`percent`表示该算子运行耗时占所有算子运行总耗时的比例,`calledTimess`表示该算子的运行次数,`opTotalTime`表示该算子运行指定次数的总耗时。最后,`total time`和`kernel cost`分别显示了该模型单次推理的平均耗时和模型推理中所有算子的平均耗时之和。
@@ -77,7 +77,7 @@ total time : 2.90800 ms, kernel cost : 2.74851 ms
使用编译好的TimeProfiler工具进行模型网络层耗时分析时,其命令格式如下所示。
```bash
-./timeprofile --modelPath=<MODELPATH> [--help] [--loopCount=<LOOPCOUNT>] [--numThreads=<NUMTHREADS>] [--cpuBindMode=<CPUBINDMODE>] [--inDataPath=<INDATAPATH>] [--fp16Priority=<FP16PRIORITY>]
+./timeprofiler --modelPath=<MODELPATH> [--help] [--loopCount=<LOOPCOUNT>] [--numThreads=<NUMTHREADS>] [--cpuBindMode=<CPUBINDMODE>] [--inDataPath=<INDATAPATH>] [--fp16Priority=<FP16PRIORITY>]
```
下面提供详细的参数说明。
diff --git a/tutorials/notebook/README.md b/tutorials/notebook/README.md
index 23361b0ea844c7d28503e980a4689061e8226d6b..3221cd630f1542808bd7f0eb4b26fb975ba3a850 100644
--- a/tutorials/notebook/README.md
+++ b/tutorials/notebook/README.md
@@ -50,19 +50,19 @@
| 教 程 名 称 | 文 件 名 称 | 教 程 类 别 | 内 容 描 述
| :----------- | :----------- | :------- |:------
-| 手写数字分类识别入门体验教程 | [quick_start.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/quick_start.ipynb) | 快速入门 | - CPU平台下从数据集到模型验证的全过程解读
- 体验教程中各功能模块的使用说明
- 数据集图形化展示
- 了解LeNet5具体结构和参数作用
- 学习使用自定义回调函数
- loss值与训练步数的变化图
- 模型精度与训练步数的变化图
- 使用模型应用到手写图片的预测与分类上
-| 线性拟合 | [linear_regression.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/linear_regression.ipynb) | 快速入门 | - 了解线性拟合的算法原理
- 了解在MindSpore中如何实现线性拟合的算法原理
- 学习使用MindSpore实现AI训练中的正向传播和方向传播
- 可视化线性函数拟合数据的全过程。
-| 加载数据集 | [loading_dataset.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/loading_dataset.ipynb) | 使用指南 | - 学习MindSpore中加载数据集的方法
- 展示加载常用数据集的方法
- 展示加载MindRecord格式数据集的方法
- 展示加载自定义格式数据集的方法
-| 将数据集转换为MindSpore数据格式 | [convert_dataset_to_mindspore_data_format.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb) | 使用指南 | - 展示将MNIST数据集转换为MindSpore数据格式
- 展示将CSV数据集转换为MindSpore数据格式
- 展示将CIFAR-10数据集转换为MindSpore数据格式
- 展示将CIFAR-100数据集转换为MindSpore数据格式
- 展示将ImageNet数据集转换为MindSpore数据格式
- 展示用户自定义生成MindSpore数据格式
-| 数据处理与数据增强 | [data_loading_enhancement.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/data_loading_enhance/data_loading_enhancement.ipynb) | 使用指南 | - 学习MindSpore中数据处理和增强的方法
- 展示数据处理、增强方法的实际操作
- 对比展示数据处理前和处理后的效果
- 表述在数据处理、增强后的意义
-| 自然语言处理应用 | [nlp_application.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/nlp_application.ipynb) | 应用实践 | - 展示MindSpore在自然语言处理的应用
- 展示自然语言处理中数据集特定的预处理方法
- 展示如何定义基于LSTM的SentimentNet网络
-| 计算机视觉应用 | [computer_vision_application.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/computer_vision_application.ipynb) | 应用实践 | - 学习MindSpore卷积神经网络在计算机视觉应用的过程
- 学习下载CIFAR-10数据集,搭建运行环境
- 学习使用ResNet-50构建卷积神经网络
- 学习使用Momentum和SoftmaxCrossEntropyWithLogits构建优化器和损失函数
- 学习调试参数训练模型,判断模型精度
-| 模型的训练及验证同步方法 | [synchronization_training_and_evaluation.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/synchronization_training_and_evaluation.ipynb) | 应用实践 | - 了解模型训练和验证同步进行的方法
- 学习同步训练和验证中参数设置方法
- 利用绘图函数从保存的模型中挑选出最优模型
-| 优化数据准备的性能 | [optimize_the_performance_of_data_preparation.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb) | 应用实践 | - 数据加载性能优化
- shuffle性能优化
- 数据增强性能优化
- 性能优化方案总结
-| 使用PyNative进行神经网络的训练调试体验 | [debugging_in_pynative_mode.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/debugging_in_pynative_mode.ipynb) | 模型调优 | - GPU平台下从数据集获取单个数据进行单个step训练的数据变化全过程解读
- 了解PyNative模式下的调试方法
- 图片数据在训练过程中的变化情况的图形展示
- 了解构建权重梯度计算函数的方法
- 展示1个step过程中权重的变化及数据展示
-| 自定义调试信息体验文档 | [customized_debugging_information.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/customized_debugging_information.ipynb) | 模型调优 | - 了解MindSpore的自定义调试算子
- 学习使用自定义调试算子Callback设置定时训练
- 学习设置metrics算子输出相对应的模型精度信息
- 学习设置日志环境变量来控制glog输出日志
-| MindInsight的模型溯源和数据溯源体验 | [mindinsight_model_lineage_and_data_lineage.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb) | 模型调优 | - 了解MindSpore中训练数据的采集及展示
- 学习使用SummaryRecord记录数据
- 学习使用回调函数SummaryCollector进行数据采集
- 使用MindInsight进行数据可视化
- 了解数据溯源和模型溯源的使用方法
-| 计算图和数据图可视化 | [calculate_and_datagraphic.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/mindinsight/calculate_and_datagraphic.ipynb) | 模型调优 | - 了解MindSpore中新增可视化功能
- 学习使用MindInsight可视化看板
- 学习使用查看计算图可视化图的信息的方法
- 学习使用查看数据图中展示的信息的方法
-| 标量、直方图、图像和张量可视化 | [mindinsight_image_histogram_scalar_tensor.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar_tensor.ipynb) | 模型调优 | - 了解完整的MindSpore深度学习及MindInsight可视化展示的过程
- 学习使用MindInsight对训练过程中标量、直方图、图像和张量信息进行可视化展示
- 学习使用Summary算子记录标量、直方图、图像和张量信息
- 学习单独对标量、直方图、图像和张量信息进行记录并可视化展示的方法
-| 混合精度 | [mixed_precision.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/mixed_precision.ipynb) | 性能优化 | - 了解混合精度训练的原理
- 学习在MindSpore中使用混合精度训练
- 对比单精度训练和混合精度训练的对模型训练的影响
-| 模型安全 | [model_security.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/model_security.ipynb) | AI安全和隐私 | - 了解AI算法的安全威胁的概念和影响
- 介绍MindArmour提供的模型安全防护手段
- 学习如何模拟攻击训练模型
- 学习针对被攻击模型进行对抗性防御
+| 手写数字分类识别入门体验教程 | [quick_start.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/quick_start.ipynb) | 快速入门 | - CPU平台下从数据集到模型验证的全过程解读
- 体验教程中各功能模块的使用说明
- 数据集图形化展示
- 了解LeNet5具体结构和参数作用
- 学习使用自定义回调函数
- loss值与训练步数的变化图
- 模型精度与训练步数的变化图
- 使用模型应用到手写图片的预测与分类上
+| 线性拟合 | [linear_regression.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/linear_regression.ipynb) | 快速入门 | - 了解线性拟合的算法原理
- 了解在MindSpore中如何实现线性拟合的算法原理
- 学习使用MindSpore实现AI训练中的正向传播和方向传播
- 可视化线性函数拟合数据的全过程。
+| 加载数据集 | [loading_dataset.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/loading_dataset.ipynb) | 使用指南 | - 学习MindSpore中加载数据集的方法
- 展示加载常用数据集的方法
- 展示加载MindRecord格式数据集的方法
- 展示加载自定义格式数据集的方法
+| 将数据集转换为MindSpore数据格式 | [convert_dataset_to_mindspore_data_format.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb) | 使用指南 | - 展示将MNIST数据集转换为MindSpore数据格式
- 展示将CSV数据集转换为MindSpore数据格式
- 展示将CIFAR-10数据集转换为MindSpore数据格式
- 展示将CIFAR-100数据集转换为MindSpore数据格式
- 展示将ImageNet数据集转换为MindSpore数据格式
- 展示用户自定义生成MindSpore数据格式
+| 数据处理与数据增强 | [data_loading_enhancement.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/data_loading_enhance/data_loading_enhancement.ipynb) | 使用指南 | - 学习MindSpore中数据处理和增强的方法
- 展示数据处理、增强方法的实际操作
- 对比展示数据处理前和处理后的效果
- 表述在数据处理、增强后的意义
+| 自然语言处理应用 | [nlp_application.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/nlp_application.ipynb) | 应用实践 | - 展示MindSpore在自然语言处理的应用
- 展示自然语言处理中数据集特定的预处理方法
- 展示如何定义基于LSTM的SentimentNet网络
+| 计算机视觉应用 | [computer_vision_application.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/computer_vision_application.ipynb) | 应用实践 | - 学习MindSpore卷积神经网络在计算机视觉应用的过程
- 学习下载CIFAR-10数据集,搭建运行环境
- 学习使用ResNet-50构建卷积神经网络
- 学习使用Momentum和SoftmaxCrossEntropyWithLogits构建优化器和损失函数
- 学习调试参数训练模型,判断模型精度
+| 模型的训练及验证同步方法 | [synchronization_training_and_evaluation.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/synchronization_training_and_evaluation.ipynb) | 应用实践 | - 了解模型训练和验证同步进行的方法
- 学习同步训练和验证中参数设置方法
- 利用绘图函数从保存的模型中挑选出最优模型
+| 优化数据准备的性能 | [optimize_the_performance_of_data_preparation.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb) | 应用实践 | - 数据加载性能优化
- shuffle性能优化
- 数据增强性能优化
- 性能优化方案总结
+| 使用PyNative进行神经网络的训练调试体验 | [debugging_in_pynative_mode.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/debugging_in_pynative_mode.ipynb) | 模型调优 | - GPU平台下从数据集获取单个数据进行单个step训练的数据变化全过程解读
- 了解PyNative模式下的调试方法
- 图片数据在训练过程中的变化情况的图形展示
- 了解构建权重梯度计算函数的方法
- 展示1个step过程中权重的变化及数据展示
+| 自定义调试信息体验文档 | [customized_debugging_information.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/customized_debugging_information.ipynb) | 模型调优 | - 了解MindSpore的自定义调试算子
- 学习使用自定义调试算子Callback设置定时训练
- 学习设置metrics算子输出相对应的模型精度信息
- 学习设置日志环境变量来控制glog输出日志
+| MindInsight的模型溯源和数据溯源体验 | [mindinsight_model_lineage_and_data_lineage.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb) | 模型调优 | - 了解MindSpore中训练数据的采集及展示
- 学习使用SummaryRecord记录数据
- 学习使用回调函数SummaryCollector进行数据采集
- 使用MindInsight进行数据可视化
- 了解数据溯源和模型溯源的使用方法
+| 计算图和数据图可视化 | [calculate_and_datagraphic.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/mindinsight/calculate_and_datagraphic.ipynb) | 模型调优 | - 了解MindSpore中新增可视化功能
- 学习使用MindInsight可视化看板
- 学习使用查看计算图可视化图的信息的方法
- 学习使用查看数据图中展示的信息的方法
+| 标量、直方图、图像和张量可视化 | [mindinsight_image_histogram_scalar_tensor.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar_tensor.ipynb) | 模型调优 | - 了解完整的MindSpore深度学习及MindInsight可视化展示的过程
- 学习使用MindInsight对训练过程中标量、直方图、图像和张量信息进行可视化展示
- 学习使用Summary算子记录标量、直方图、图像和张量信息
- 学习单独对标量、直方图、图像和张量信息进行记录并可视化展示的方法
+| 混合精度 | [mixed_precision.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/mixed_precision.ipynb) | 性能优化 | - 了解混合精度训练的原理
- 学习在MindSpore中使用混合精度训练
- 对比单精度训练和混合精度训练的对模型训练的影响
+| 模型安全 | [model_security.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/model_security.ipynb) | AI安全和隐私 | - 了解AI算法的安全威胁的概念和影响
- 介绍MindArmour提供的模型安全防护手段
- 学习如何模拟攻击训练模型
- 学习针对被攻击模型进行对抗性防御
diff --git a/tutorials/notebook/computer_vision_application.ipynb b/tutorials/notebook/computer_vision_application.ipynb
index 6d8dfd2d87f44f46f8ca5573d295735a4ff30d91..ddf652cb21786ef7f15096e2f65c931a71548a34 100644
--- a/tutorials/notebook/computer_vision_application.ipynb
+++ b/tutorials/notebook/computer_vision_application.ipynb
@@ -71,7 +71,7 @@
"metadata": {},
"source": [
"本次面向Ascend 910 AI处理器硬件平台,将卷积神经网络ResNet加入到案例中,你可以在这里下载完整的样例代码案例作为基础用例:\n",
- "https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/resnet"
+ "https://gitee.com/mindspore/docs/tree/r1.0/tutorials/tutorial_code/resnet"
]
},
{
@@ -213,7 +213,7 @@
"import mindspore.common.dtype as mstype\n",
"import mindspore.ops.functional as F\n",
"import mindspore.dataset as ds\n",
- "import mindspore.dataset.transforms.vision.c_transforms as C\n",
+ "import mindspore.dataset.vision.c_transforms as C\n",
"import mindspore.dataset.transforms.c_transforms as C2\n",
"\n",
"\n",
@@ -252,8 +252,8 @@
" changeswap_op]\n",
"\n",
" # Apply map operations on images\n",
- " cifar_ds = cifar_ds.map(input_columns=\"label\", operations=type_cast_op)\n",
- " cifar_ds = cifar_ds.map(input_columns=\"image\", operations=c_trans)\n",
+ " cifar_ds = cifar_ds.map(operations=type_cast_op, input_columns=\"label\")\n",
+ " cifar_ds = cifar_ds.map(operations=c_trans, input_columns=\"image\")\n",
"\n",
" # Apply shuffle operations\n",
" cifar_ds = cifar_ds.shuffle(buffer_size=10)\n",
@@ -314,7 +314,7 @@
"import matplotlib.pyplot as plt\n",
"dataset_show = create_dataset()\n",
"iterator_show= dataset_show.create_dict_iterator()\n",
- "images = iterator_show.get_next()[\"image\"]\n",
+ "images = iterator_show.get_next()[\"image\"].asnumpy()\n",
"# Images[0].shape is (3,224,224).We need transpose as (224,224,3) for using in plt.show().\n",
"picture_show = np.transpose(images[0],(1,2,0))\n",
"plt.imshow(picture_show)\n"
diff --git a/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb b/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb
index f34bc0c817f399bc5bdac90a497910d626d24d5f..e95583048e06576df1ba45a15372e5253a4778d1 100644
--- a/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb
+++ b/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb
@@ -194,7 +194,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'data': array([255, 216, 255, 224, 0, 16, 74, 70, 73, 70, 0, 1, 1,\n",
+ "{'data': Tensor(shape=[803], dtype=UInt8, value= [255, 216, 255, 224, 0, 16, 74, 70, 73, 70, 0, 1, 1,\n",
" 0, 0, 1, 0, 1, 0, 0, 255, 219, 0, 67, 0, 2,\n",
" 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 2, 2, 2,\n",
" 2, 4, 3, 2, 2, 2, 2, 5, 4, 4, 3, 4, 6,\n",
@@ -250,8 +250,7 @@
" 143, 6, 252, 112, 209, 62, 35, 120, 247, 224, 174, 137, 168,\n",
" 77, 241, 3, 92, 240, 206, 167, 29, 245, 142, 155, 115, 114,\n",
" 80, 27, 5, 157, 73, 203, 164, 139, 42, 249, 103, 12, 145,\n",
- " 195, 22, 229, 5, 128, 31, 149, 148, 81, 69, 21, 255, 217],\n",
- " dtype=uint8), 'label': array(3, dtype=int64)}\n"
+ " 195, 22, 229, 5, 128, 31, 149, 148, 81, 69, 21, 255, 217]), 'label': Tensor(shape=[], dtype=Int64, value= 3)}\n"
]
}
],
@@ -281,7 +280,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "- 本例中需要的数据位置在https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/convert_dataset_to_mindspore_data_format/csv_data/data.csv\n",
+ "- 本例中需要的数据位置在https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/convert_dataset_to_mindspore_data_format/csv_data/data.csv\n",
"中,使用过程中可以在此路径下找到文件并下载,并且保存在`jupyter工作目录/dataset/`下,如图所示:"
]
},
@@ -376,7 +375,7 @@
"# create MindDataset for reading data\n",
"csv_data_set = ds.MindDataset(dataset_file=csv_mindrecord_path)\n",
"# create a dictionary iterator and read a data record through the iterator\n",
- "print(next(csv_data_set.create_dict_iterator()))"
+ "print(next(csv_data_set.create_dict_iterator(output_numpy=True)))"
]
},
{
@@ -493,7 +492,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'data': array([255, 216, 255, ..., 35, 255, 217], dtype=uint8), 'id': array(30707, dtype=int64), 'label': array(4, dtype=int64)}\n"
+ "{'data': Tensor(shape=[1431], dtype=UInt8, value= [255, 216, 255, ..., 35, 255, 217]), 'id': Tensor(shape=[], dtype=Int64, value= 30707), 'label': Tensor(shape=[], dtype=Int64, value= 4)}\n"
]
}
],
@@ -620,7 +619,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'data': array([255, 216, 255, ..., 127, 255, 217], dtype=uint8), 'fine_label': array(88, dtype=int64), 'coarse_label': array(8, dtype=int64), 'id': array(10349, dtype=int64)}\n"
+ "{'data': Tensor(shape=[1374], dtype=UInt8, value= [255, 216, 255, ..., 127, 255, 217]), 'fine_label': Tensor(shape=[], dtype=Int64, value= 88), 'coarse_label': Tensor(shape=[], dtype=Int64, value= 8), 'id': Tensor(shape=[], dtype=Int64, value= 10349)}\n"
]
}
],
@@ -767,7 +766,7 @@
"# create MindDataset for reading data\n",
"imagenet_data_set = ds.MindDataset(dataset_file=file_name)\n",
"# create a dictionary iterator and read a data record through the iterator\n",
- "print(next(imagenet_data_set.create_dict_iterator()))"
+ "print(next(imagenet_data_set.create_dict_iterator(output_numpy=True)))"
]
},
{
@@ -838,7 +837,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "3. 准备需要写入的数据,按照用户定义的Schema形式,准备需要写入的样本列表,本例中需要的数据位置在https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/convert_dataset_to_mindspore_data_format/images/transform.jpg\n",
+ "3. 准备需要写入的数据,按照用户定义的Schema形式,准备需要写入的样本列表,本例中需要的数据位置在https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/convert_dataset_to_mindspore_data_format/images/transform.jpg\n",
"中,使用过程中可以在此路径下找到图片并下载,并且保存在`jupyter工作目录/dataset/`下。"
]
},
@@ -938,7 +937,7 @@
"# create MindDataset for reading data\n",
"define_data_set = ds.MindDataset(dataset_file=file_name)\n",
"# create a dictionary iterator and read a data record through the iterator\n",
- "print(next(define_data_set.create_dict_iterator()))"
+ "print(next(define_data_set.create_dict_iterator(output_numpy=True)))"
]
},
{
diff --git a/tutorials/notebook/customized_debugging_information.ipynb b/tutorials/notebook/customized_debugging_information.ipynb
index 44be7bd3a753b3c00b1851729badec85be8b4584..ff1da4a809f89a123ac72eec924e7d0a08634032 100644
--- a/tutorials/notebook/customized_debugging_information.ipynb
+++ b/tutorials/notebook/customized_debugging_information.ipynb
@@ -18,7 +18,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "本文将使用[快速入门](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/lenet.py)作为样例,并通过构建自定义调试函数:`Callback`、`metrics`、`Print算子`、日志打印等,同时将构建的自定义调试函数添加进代码中,通过运行效果来展示具体如何使用MindSpore提供给我们的自定义调试能力,帮助快速调试训练网络。\n",
+ "本文将使用[快速入门](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/tutorial_code/lenet.py)作为样例,并通过构建自定义调试函数:`Callback`、`metrics`、`Print算子`、日志打印等,同时将构建的自定义调试函数添加进代码中,通过运行效果来展示具体如何使用MindSpore提供给我们的自定义调试能力,帮助快速调试训练网络。\n",
"体验过程如下:\n",
"1. 数据集准备。\n",
"2. 定义深度学习网络LeNet5。\n",
@@ -46,7 +46,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "这里我们需要将MNIST数据集中随机取出一张图片,并增强成适合LeNet网络的数据格式(如何处理请参考[quick_start.ipynb](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/quick_start.ipynb)),训练数据集下载地址:{\"\", \"\"}。\n",
+ "这里我们需要将MNIST数据集中随机取出一张图片,并增强成适合LeNet网络的数据格式(如何处理请参考[quick_start.ipynb](https://gitee.com/mindspore/docs/blob/r1.0/tutorials/notebook/quick_start.ipynb)),训练数据集下载地址:{\"\", \"\"}。\n",
"