diff --git a/.gitignore b/.gitignore
index 5cd9b811d971d597120a234199999efa0b2353ef..e6af4aa9f0e2a8cd27dee2d2781c7865d4b0001a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,10 +1,20 @@
# Built html files
-api/build_en
-api/build_zh_cn
-docs/build_en
-docs/build_zh_cn
-tutorials/build_en
-tutorials/build_zh_cn
+docs/api_cpp/build_en
+docs/api_cpp/build_zh_cn
+docs/api_java/build_en
+docs/api_java/build_zh_cn
+docs/api_python/build_en
+docs/api_python/build_zh_cn
+docs/faq/build_en
+docs/faq/build_zh_cn
+docs/note/build_en
+docs/note/build_zh_cn
+docs/programming_guide/build_en
+docs/programming_guide/build_zh_cn
+tutorials/inference/build_en
+tutorials/inference/build_zh_cn
+tutorials/training/build_en
+tutorials/training/build_zh_cn
# Workspace
.idea/
diff --git a/api/source_en/api/python/mindspore/mindspore.dtype.rst b/api/source_en/api/python/mindspore/mindspore.dtype.rst
deleted file mode 100644
index ecedea971844071ff47fa5505b9c852b5e77ff1f..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.dtype.rst
+++ /dev/null
@@ -1,111 +0,0 @@
-mindspore.dtype
-===============
-
-Data Type
-----------
-
-.. class:: mindspore.dtype
-
-Create a data type object of MindSpore.
-
-The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
-Run the following command to import the package:
-
-.. code-block::
-
- import mindspore.common.dtype as mstype
-
-or
-
-.. code-block::
-
- from mindspore import dtype as mstype
-
-Numeric Type
-~~~~~~~~~~~~
-
-Currently, MindSpore supports ``Int`` type, ``Uint`` type and ``Float`` type.
-The following table lists the details.
-
-============================================== =============================
-Definition Description
-============================================== =============================
-``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
-``mindspore.int16`` , ``mindspore.short`` 16-bit integer
-``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
-``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
-``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
-``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
-``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
-``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
-``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
-``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
-``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
-============================================== =============================
-
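-For example, one of these numeric types can be passed explicitly when a tensor is created. The following is a minimal sketch (assuming ``mindspore`` and ``numpy`` are installed; ``Tensor`` refers to ``mindspore.Tensor``):
-
-.. code-block::
-
-    import numpy as np
-    from mindspore import Tensor
-    from mindspore import dtype as mstype
-
-    # Interpret the input data as 16-bit floating-point numbers.
-    x = Tensor(np.ones((2, 3)), mstype.float16)
-    print(x.dtype)  # Float16
-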
-Other Type
-~~~~~~~~~~
-
-For other defined types, see the following table.
-
-============================ =================
-Type Description
-============================ =================
-``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see [tensor](https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/tensor.py).
-``MetaTensor`` A tensor only has data type and shape. For details, see [MetaTensor](https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/parameter.py).
-``bool_`` Boolean ``True`` or ``False``.
-``int_`` Integer scalar.
-``uint`` Unsigned integer scalar.
-``float_`` Floating-point scalar.
-``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
-``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
-``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
-``function``                  Function. It is returned in one of two ways: when the function is not None, ``Func`` is returned directly; when the function is None, ``Func(args: List[T0,T1,...,Tn], retval: T)`` is returned.
-``type_type`` Type definition of type.
-``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
-``symbolic_key`` The value of a variable is used as a key of the variable in ``env_type`` .
-``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
-============================ =================
-
-Tree Topology
-~~~~~~~~~~~~~~
-
-The relationships of the above types are as follows:
-
-.. code-block::
-
-
- └─────── number
- │ ├─── bool_
- │ ├─── int_
- │ │ ├─── int8, byte
- │ │ ├─── int16, short
- │ │ ├─── int32, intc
- │ │ └─── int64, intp
- │ ├─── uint
- │ │ ├─── uint8, ubyte
- │ │ ├─── uint16, ushort
- │ │ ├─── uint32, uintc
- │ │ └─── uint64, uintp
- │ └─── float_
- │ ├─── float16
- │ ├─── float32
- │ └─── float64
- ├─── tensor
- │ ├─── Array[Float32]
- │ └─── ...
- ├─── list_
- │ ├─── List[Int32,Float32]
- │ └─── ...
- ├─── tuple_
- │ ├─── Tuple[Int32,Float32]
- │ └─── ...
- ├─── function
- │ ├─── Func
- │ ├─── Func[(Int32, Float32), Int32]
- │ └─── ...
- ├─── MetaTensor
- ├─── type_type
- ├─── type_none
- ├─── symbolic_key
- └─── env_type
\ No newline at end of file
diff --git a/api/source_en/api/python/mindspore/mindspore.hub.rst b/api/source_en/api/python/mindspore/mindspore.hub.rst
deleted file mode 100644
index 458c704fc392ff12901a1324b719303c5098eeee..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.hub.rst
+++ /dev/null
@@ -1,4 +0,0 @@
-mindspore.hub
-=============
-
-.. autofunction:: mindspore.hub.load_weights
diff --git a/api/source_en/api/python/mindspore/mindspore.ops.composite.rst b/api/source_en/api/python/mindspore/mindspore.ops.composite.rst
deleted file mode 100644
index 4dc22f1dcf4fc899a211b5d1ec7114bea7680aa5..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.ops.composite.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore.ops.composite
-=======================
-
-.. automodule:: mindspore.ops.composite
- :members:
diff --git a/api/source_en/api/python/mindspore/mindspore.ops.operations.rst b/api/source_en/api/python/mindspore/mindspore.ops.operations.rst
deleted file mode 100644
index 29bf49176bf455593d3398d8e2f1af17ebfe21a4..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.ops.operations.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore.ops.operations
-========================
-
-.. automodule:: mindspore.ops.operations
- :members:
diff --git a/api/source_en/api/python/mindspore/mindspore.rst b/api/source_en/api/python/mindspore/mindspore.rst
deleted file mode 100644
index 44c49e3df3e08d66f6f8d54c23891de30b85a922..0000000000000000000000000000000000000000
--- a/api/source_en/api/python/mindspore/mindspore.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore
-=========
-
-.. automodule:: mindspore
- :members:
\ No newline at end of file
diff --git a/api/source_en/index.rst b/api/source_en/index.rst
deleted file mode 100644
index 12f19eaff2f8e9bdeec2f0977238dfd4d1be8238..0000000000000000000000000000000000000000
--- a/api/source_en/index.rst
+++ /dev/null
@@ -1,53 +0,0 @@
-.. MindSpore documentation master file, created by
- sphinx-quickstart on Thu Mar 24 11:00:00 2020.
- You can adapt this file completely to your liking, but it should at least
- contain the root `toctree` directive.
-
-MindSpore API
-=============
-
-.. toctree::
- :maxdepth: 1
- :caption: MindSpore Python API
-
- api/python/mindspore/mindspore
- api/python/mindspore/mindspore.dtype
- api/python/mindspore/mindspore.common.initializer
- api/python/mindspore/mindspore.communication
- api/python/mindspore/mindspore.context
- api/python/mindspore/mindspore.hub
- api/python/mindspore/mindspore.nn
- api/python/mindspore/mindspore.nn.dynamic_lr
- api/python/mindspore/mindspore.nn.learning_rate_schedule
- api/python/mindspore/mindspore.nn.probability
- api/python/mindspore/mindspore.ops
- api/python/mindspore/mindspore.ops.composite
- api/python/mindspore/mindspore.ops.operations
- api/python/mindspore/mindspore.train
- api/python/mindspore/mindspore.dataset
- api/python/mindspore/mindspore.dataset.config
- api/python/mindspore/mindspore.dataset.text
- api/python/mindspore/mindspore.dataset.transforms
- api/python/mindspore/mindspore.dataset.vision
- api/python/mindspore/mindspore.mindrecord
- api/python/mindspore/mindspore.profiler
-
-.. toctree::
- :maxdepth: 1
- :caption: MindArmour Python API
-
- api/python/mindarmour/mindarmour
- api/python/mindarmour/mindarmour.adv_robustness.attacks
- api/python/mindarmour/mindarmour.adv_robustness.defenses
- api/python/mindarmour/mindarmour.adv_robustness.detectors
- api/python/mindarmour/mindarmour.adv_robustness.evaluations
- api/python/mindarmour/mindarmour.fuzz_testing
- api/python/mindarmour/mindarmour.privacy.diff_privacy
- api/python/mindarmour/mindarmour.privacy.evaluation
- api/python/mindarmour/mindarmour.utils
-
-.. toctree::
- :maxdepth: 1
- :caption: MindSpore Hub Python API
-
- api/python/mindspore_hub/mindspore_hub
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dtype.rst b/api/source_zh_cn/api/python/mindspore/mindspore.dtype.rst
deleted file mode 100644
index 633cd1e23e5c3d54077db437deb063c78aa9a9a2..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.dtype.rst
+++ /dev/null
@@ -1,112 +0,0 @@
-mindspore.dtype
-===============
-
-Data Type
-----------
-
-.. class:: mindspore.dtype
-
-Create a data type object of MindSpore.
-
-The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
-Run the following command to import the package:
-
-.. code-block::
-
- import mindspore.common.dtype as mstype
-
-or
-
-.. code-block::
-
- from mindspore import dtype as mstype
-
-Numeric Type
-~~~~~~~~~~~~
-
-Currently, MindSpore supports ``Int`` type, ``Uint`` type and ``Float`` type.
-The following table lists the details.
-
-============================================== =============================
-Definition Description
-============================================== =============================
-``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
-``mindspore.int16`` , ``mindspore.short`` 16-bit integer
-``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
-``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
-``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
-``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
-``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
-``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
-``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
-``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
-``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
-============================================== =============================
-
-Other Type
-~~~~~~~~~~
-
-For other defined types, see the following table.
-
-============================ =================
-Type Description
-============================ =================
-``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see [tensor](https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/tensor.py).
-``MetaTensor`` A tensor only has data type and shape. For details, see [MetaTensor](https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/parameter.py).
-``bool_`` Boolean ``True`` or ``False``.
-``int_`` Integer scalar.
-``uint`` Unsigned integer scalar.
-``float_`` Floating-point scalar.
-``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
-``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
-``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
-``function``                  Function. It is returned in one of two ways: when the function is not None, ``Func`` is returned directly; when the function is None, ``Func(args: List[T0,T1,...,Tn], retval: T)`` is returned.
-``type_type`` Type definition of type.
-``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
-``symbolic_key`` The value of a variable is used as a key of the variable in ``env_type`` .
-``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
-============================ =================
-
-Tree Topology
-~~~~~~~~~~~~~~
-
-The relationships of the above types are as follows:
-
-.. code-block::
-
-
- └─── mindspore.dtype
- ├─── number
- │ ├─── bool_
- │ ├─── int_
- │ │ ├─── int8, byte
- │ │ ├─── int16, short
- │ │ ├─── int32, intc
- │ │ └─── int64, intp
- │ ├─── uint
- │ │ ├─── uint8, ubyte
- │ │ ├─── uint16, ushort
- │ │ ├─── uint32, uintc
- │ │ └─── uint64, uintp
- │ └─── float_
- │ ├─── float16
- │ ├─── float32
- │ └─── float64
- ├─── tensor
- │ ├─── Array[float32]
- │ └─── ...
- ├─── list_
- │ ├─── List[int32,float32]
- │ └─── ...
- ├─── tuple_
- │ ├─── Tuple[int32,float32]
- │ └─── ...
- ├─── function
- │ ├─── Func
- │ ├─── Func[(int32, float32), int32]
- │ └─── ...
- ├─── MetaTensor
- ├─── type_type
- ├─── type_none
- ├─── symbolic_key
- └─── env_type
\ No newline at end of file
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.hub.rst b/api/source_zh_cn/api/python/mindspore/mindspore.hub.rst
deleted file mode 100644
index 458c704fc392ff12901a1324b719303c5098eeee..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.hub.rst
+++ /dev/null
@@ -1,4 +0,0 @@
-mindspore.hub
-=============
-
-.. autofunction:: mindspore.hub.load_weights
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.ops.composite.rst b/api/source_zh_cn/api/python/mindspore/mindspore.ops.composite.rst
deleted file mode 100644
index 4dc22f1dcf4fc899a211b5d1ec7114bea7680aa5..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.ops.composite.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore.ops.composite
-=======================
-
-.. automodule:: mindspore.ops.composite
- :members:
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.ops.operations.rst b/api/source_zh_cn/api/python/mindspore/mindspore.ops.operations.rst
deleted file mode 100644
index 29bf49176bf455593d3398d8e2f1af17ebfe21a4..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.ops.operations.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore.ops.operations
-========================
-
-.. automodule:: mindspore.ops.operations
- :members:
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.rst b/api/source_zh_cn/api/python/mindspore/mindspore.rst
deleted file mode 100644
index 44c49e3df3e08d66f6f8d54c23891de30b85a922..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/api/python/mindspore/mindspore.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-mindspore
-=========
-
-.. automodule:: mindspore
- :members:
\ No newline at end of file
diff --git a/api/source_zh_cn/index.rst b/api/source_zh_cn/index.rst
deleted file mode 100644
index 502e8495e2162735d542889ada4167c8e27fbf6d..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/index.rst
+++ /dev/null
@@ -1,59 +0,0 @@
-.. MindSpore documentation master file, created by
- sphinx-quickstart on Thu Mar 24 11:00:00 2020.
- You can adapt this file completely to your liking, but it should at least
- contain the root `toctree` directive.
-
-MindSpore API
-=============
-
-.. toctree::
- :maxdepth: 1
- :caption: 编程指南
-
- programming_guide/api_structure
-
-.. toctree::
- :maxdepth: 1
- :caption: MindSpore Python API
-
- api/python/mindspore/mindspore
- api/python/mindspore/mindspore.dtype
- api/python/mindspore/mindspore.common.initializer
- api/python/mindspore/mindspore.communication
- api/python/mindspore/mindspore.context
- api/python/mindspore/mindspore.hub
- api/python/mindspore/mindspore.nn
- api/python/mindspore/mindspore.nn.dynamic_lr
- api/python/mindspore/mindspore.nn.learning_rate_schedule
- api/python/mindspore/mindspore.nn.probability
- api/python/mindspore/mindspore.ops
- api/python/mindspore/mindspore.ops.composite
- api/python/mindspore/mindspore.ops.operations
- api/python/mindspore/mindspore.train
- api/python/mindspore/mindspore.dataset
- api/python/mindspore/mindspore.dataset.config
- api/python/mindspore/mindspore.dataset.text
- api/python/mindspore/mindspore.dataset.transforms
- api/python/mindspore/mindspore.dataset.vision
- api/python/mindspore/mindspore.mindrecord
- api/python/mindspore/mindspore.profiler
-
-.. toctree::
- :maxdepth: 1
- :caption: MindArmour Python API
-
- api/python/mindarmour/mindarmour
- api/python/mindarmour/mindarmour.adv_robustness.attacks
- api/python/mindarmour/mindarmour.adv_robustness.defenses
- api/python/mindarmour/mindarmour.adv_robustness.detectors
- api/python/mindarmour/mindarmour.adv_robustness.evaluations
- api/python/mindarmour/mindarmour.fuzz_testing
- api/python/mindarmour/mindarmour.privacy.diff_privacy
- api/python/mindarmour/mindarmour.privacy.evaluation
- api/python/mindarmour/mindarmour.utils
-
-.. toctree::
- :maxdepth: 1
- :caption: MindSpore Hub Python API
-
- api/python/mindspore_hub/mindspore_hub
diff --git a/api/source_zh_cn/programming_guide/api_structure.md b/api/source_zh_cn/programming_guide/api_structure.md
deleted file mode 100644
index 9a42ef664223fdccb211cc09fa9034ce1f1a83a7..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/api_structure.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# MindSpore API概述
-
-
-
-- [MindSpore API概述](#mindsporeapi概述)
- - [设计理念](#设计理念)
- - [层次结构](#层次结构)
-
-
-
-
-
-## 设计理念
-
-MindSpore源于全产业的最佳实践,向数据科学家和算法工程师提供了统一的模型训练、推理和导出等接口,支持端、边、云等不同场景下的灵活部署,推动深度学习和科学计算等领域繁荣发展。
-
-MindSpore提供了动态图和静态图统一的编码方式,用户无需开发多套代码,仅变更一行代码便可切换动态图/静态图模式,从而拥有更轻松的开发调试及性能体验。
-
-此外,由于MindSpore统一了单机和分布式训练的编码方式,开发者无需编写复杂的分布式策略,在单机代码中添加少量代码即可实现分布式训练,大大降低了AI开发门槛。
-
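-针对上文提到的动态图/静态图切换,下面给出一段示意代码(基于公开的context接口,仅作示意),展示如何通过一行配置完成模式切换:
-
-```python
-from mindspore import context
-
-# 设置为静态图模式;若需要动态图模式,将GRAPH_MODE改为PYNATIVE_MODE即可
-context.set_context(mode=context.GRAPH_MODE)
-```
-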
-## 层次结构
-
-MindSpore向用户提供了3个不同层次的API,支撑用户进行网络构建、整图执行、子图执行以及单算子执行,从低到高分别为Low-Level Python API、Medium-Level Python API以及High-Level Python API。
-
-
-
-- Low-Level Python API
-
- 第一层为低阶API,主要包括张量定义、基础算子、自动微分等模块,用户可使用低阶API轻松实现张量操作和求导计算。
-
-- Medium-Level Python API
-
-  第二层为中阶API,其封装了低阶API,提供网络层、优化器、损失函数等模块,用户可通过中阶API灵活构建神经网络和控制执行流程,快速实现模型算法逻辑。
-
-- High-Level Python API
-
- 第三层为高阶API,其在中阶API的基础上又提供了训练推理的管理、Callback、混合精度训练等高级接口,方便用户控制整网的执行流程和实现神经网络的训练及推理。
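-
-下面是一段示意代码(假设运行环境已安装MindSpore,数值与参数仅作演示,并非完整的训练脚本),粗略展示低阶、中阶、高阶API的典型形态:
-
-```python
-import numpy as np
-import mindspore.nn as nn
-import mindspore.ops.operations as P
-from mindspore import Tensor
-from mindspore.train import Model
-
-# 低阶API:直接操作张量与基础算子
-x = Tensor(np.ones((2, 3), np.float32))
-y = Tensor(np.ones((2, 3), np.float32))
-z = P.TensorAdd()(x, y)
-
-# 中阶API:使用nn提供的网络层、损失函数与优化器
-net = nn.Dense(3, 4)
-loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
-opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
-
-# 高阶API:使用Model封装网络,统一管理训练和推理流程
-model = Model(net, loss_fn=loss, optimizer=opt)
-```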
diff --git a/api/source_zh_cn/programming_guide/auto_augmentation.md b/api/source_zh_cn/programming_guide/auto_augmentation.md
deleted file mode 100644
index 57ec65060e666988f7e7d9fa6f7f783256374f33..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/auto_augmentation.md
+++ /dev/null
@@ -1,112 +0,0 @@
-# 自动数据增强
-
-
-
-- [自动数据增强](#自动数据增强)
- - [基于概率动态调整数据增强策略](#基于概率动态调整数据增强策略)
- - [基于训练结果信息动态调整数据增强策略](#基于训练结果信息动态调整数据增强策略)
-
-
-
-
-
-## 基于概率动态调整数据增强策略
-
-MindSpore提供一系列基于概率的数据增强的API,用户可以对各种图像操作进行随机选择、组合,使数据增强更灵活。
-
-- [`RandomApply(transforms, prob=0.5)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.transforms.html?highlight=randomapply#mindspore.dataset.transforms.c_transforms.RandomApply)
-以一定的概率执行指定的`transforms`操作,即可能执行,也可能不执行;`transforms`可以是单个操作,也可以是一个操作列表。
-
- ```python
-
- rand_apply_list = RandomApply([c_vision.RandomCrop(), c_vision.RandomColorAdjust()])
- ds = ds.map(operations=rand_apply_list)
-
- ```
-
- 按50%的概率来顺序执行`RandomCrop`和`RandomColorAdjust`操作,否则都不执行。
-
-- [`RandomChoice(transforms)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.transforms.html?highlight=randomchoice#mindspore.dataset.transforms.c_transforms.RandomChoice)
-从`transforms`操作列表中随机选择一个执行。
-
- ```python
-
- rand_choice = RandomChoice([c_vision.CenterCrop(), c_vision.RandomCrop()])
- ds = ds.map(operations=rand_choice)
-
- ```
-
- 分别以50%概率来执行`CenterCrop`和`RandomCrop`操作。
-
-- [`RandomSelectSubpolicy(policy)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.transforms.vision.html?highlight=randomselectsubpolicy#mindspore.dataset.transforms.vision.c_transforms.RandomSelectSubpolicy)
-用户可以预置策略(Policy),每次随机选择一个子策略(SubPolicy);每个子策略由若干个顺序执行的图像增强操作组成,每个操作与两个参数关联:1)执行该操作的概率,2)执行该操作的幅度;
-对于一个batch中的每张图像,随机选择子策略来变换图像。
-
- ```python
-
- policy = [
- [(c_vision.RandomRotation((45, 45)), 0.5), (c_vision.RandomVerticalFlip(), 1.0), (c_vision.RandomColorAdjust(), 0.8)],
- [(c_vision.RandomRotation((90, 90)), 1), (c_vision.RandomColorAdjust(), 0.2)]
- ]
- ds = ds.map(operations=c_vision.RandomSelectSubpolicy(policy), input_columns=["image"])
-
- ```
-
-  示例中包括2条子策略,其中子策略1中包含`RandomRotation`、`RandomVerticalFlip`、`RandomColorAdjust`3个操作,概率分别为0.5、1.0、0.8;子策略2中包含`RandomRotation`和`RandomColorAdjust`,概率分别为1.0、0.2。
-
-## 基于训练结果信息动态调整数据增强策略
-
-MindSpore的`sync_wait`接口支持按batch或epoch粒度来调整数据增强策略,从而实现训练过程中对数据增强策略的动态调整。
-`sync_wait`必须和`sync_update`配合使用实现数据pipeline上的同步回调。
-
-- [`sync_wait(condition_name, num_batch=1, callback=None)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html?highlight=sync_wait#mindspore.dataset.ImageFolderDatasetV2.sync_wait)
-- [`sync_update(condition_name, num_batch=None, data=None)`](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html?highlight=sync_update#mindspore.dataset.ImageFolderDatasetV2.sync_update)
-
-`sync_wait`将阻塞整个数据处理pipeline直到`sync_update`触发用户预先定义的`callback`函数。
-
-1. 用户预先定义class`Augment`,其中`preprocess`为`map`操作中的自定义数据增强函数,`update`为更新数据增强策略的回调函数。
-
- ```python
- import mindspore.dataset.transforms.vision.py_transforms as transforms
- import mindspore.dataset as de
- import numpy as np
-
- class Augment:
- def __init__(self):
- self.ep_num = 0
- self.step_num = 0
-
- def preprocess(self, input_):
- return (np.array((input_ + self.step_num ** self.ep_num - 1), ))
-
- def update(self, data):
- self.ep_num = data['ep_num']
- self.step_num = data['step_num']
-
- ```
-
-2. 数据处理pipeline先回调自定义的增强策略更新函数`aug.update`,然后在`map`操作中按更新后的策略来执行`aug.preprocess`中定义的数据增强。
-
- ```python
-
- arr = list(range(1, 4))
- ds = de.NumpySlicesDataset(arr, shuffle=False)
- aug = Augment()
- ds= ds.sync_wait(condition_name="policy", callback=aug.update)
- ds = ds.map(operations=[aug.preprocess])
-
- ```
-
-3. 在每个step调用`sync_update`进行数据增强策略的更新。
-
- ```python
- epochs = 5
- itr = ds.create_tuple_iterator(num_epochs=epochs)
- step_num = 0
- for ep_num in range(epochs):
- for data in itr:
- print("epcoh: {}, step:{}, data :{}".format(ep_num, step_num, data))
- step_num += 1
- ds.sync_update(condition_name="policy", data={'ep_num': ep_num, 'step_num': step_num})
-
- ```
diff --git a/api/source_zh_cn/programming_guide/cell.md b/api/source_zh_cn/programming_guide/cell.md
deleted file mode 100644
index 8570e80c0a9d38524dcd8e21bd196ab4ef1da9b1..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/cell.md
+++ /dev/null
@@ -1,290 +0,0 @@
-# cell模块概述
-
-
-
-- [cell模块概述](#cell模块概述)
- - [概念用途](#概念用途)
- - [关键成员函数](#关键成员函数)
- - [模型层](#模型层)
- - [损失函数](#损失函数)
- - [网络构造](#Cell构造自定义网络)
-
-
-
-## 概念用途
-
-MindSpore的Cell类是构建所有网络的基类,也是网络的基本单元。当用户需要自定义网络时,需要继承Cell类,并重写__init__方法和construct方法。
-
-损失函数,优化器和模型层等本质上也属于网络结构,也需要继承Cell类才能实现功能,同样用户也可以根据业务需求自定义这部分内容。
-
-本节内容首先将会介绍Cell类的关键成员函数,然后介绍基于Cell实现的MindSpore内置损失函数,优化器和模型层及使用方法,最后通过实例介绍
-如何利用Cell类构建自定义网络。
-
-## 关键成员函数
-
-### construct方法
-
-Cell类重写了__call__方法,在Cell类的实例被调用时,会执行construct方法。网络结构在construct方法里面定义。
-
-下面的样例中,我们构建了一个简单的网络。用例的网络结构为Conv2d->BatchNorm2d->ReLU->Flatten->Dense。
-在construct方法中,x为输入数据, out是经过网络的每层计算后得到的计算结果。
-
-```
-class Net(nn.Cell):
- def __init__(self):
- super(Net, self).__init__()
- self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
- self.bn = nn.BatchNorm2d(64)
- self.relu = nn.ReLU()
- self.flatten = nn.Flatten()
- self.fc = nn.Dense(64 * 222 * 222, 3)
-
- def construct(self, x):
- x = self.conv(x)
- x = self.bn(x)
- x = self.relu(x)
- x = self.flatten(x)
- out = self.fc(x)
- return out
-```
-
-### parameters_dict
-
-parameters_dict方法识别出网络结构中所有的参数,返回一个以key为参数名,value为参数值的OrderedDict()。
-
-Cell类中返回参数的方法还有许多,例如get_parameters(),trainable_params()等, 具体使用方法可以参见MindSpore API手册。
-
-代码样例如下:
-
-```
-net = Net()
-result = net.parameters_dict()
-print(result.keys())
-print(result['conv.weight'])
-```
-
-样例中的Net()采用上文构造网络的用例,打印了网络中所有参数的名字和conv.weight参数的值。
-
-运行结果如下:
-```
-odict_keys(['conv.weight', 'bn.moving_mean', 'bn.moving_variance', 'bn.gamma', 'bn.beta', 'fc.weight', 'fc.bias'])
-Parameter (name=conv.weight, value=[[[[ 1.07402597e-02 7.70052336e-03 5.55867562e-03]
- [-3.21971579e-03 -3.75304517e-04 -8.73021083e-04]
-...
-[-1.81201510e-02 -1.31190736e-02 -4.27651079e-03]]]])
-```
-
-### cells_and_names
-
-cells_and_names方法是一个迭代器,返回网络中每个cell的名字和它的内容本身。
-
-用例简单实现了网络的cell获取与打印每个cell名字的功能,其中根据上文网络结构可知,存在五个cell分别是'conv','bn','relu','flatten','fc'。
-
-代码样例如下:
-```
-net = Net()
-names = []
-for m in net.cells_and_names():
- names.append(m[0]) if m[0] else None
-print(names)
-```
-
-运行结果:
-```
-['conv', 'bn', 'relu', 'flatten', 'fc']
-```
-## 模型层
-
-在讲述了Cell的使用方法后可知,MindSpore能够以Cell为基类构造网络结构。
-
-为了满足业界需求并方便用户使用,MindSpore框架内置了大量的模型层,用户可以通过接口直接调用。
-
-同样,用户也可以自定义模型层,此内容在“Cell构造自定义网络”一节中介绍。
-
-### 内置模型层
-
-MindSpore框架在nn的layer层内置了丰富的接口,主要内容如下:
-
-- 激活层:
-
- 激活层内置了大量的激活函数,在定义网络结构中经常使用。激活函数为网络加入了非线性运算,使得网络能够拟合效果更好。
-
- 主要接口有Softmax,Relu,Elu,Tanh,Sigmoid等。
-
-- 基础层:
-
- 基础层实现了网络中一些常用的基础结构,例如全连接层,Onehot编码,Dropout,平铺层等都在此部分实现。
-
- 主要接口有Dense,Flatten,Dropout,Norm,OneHot等。
-
-- 容器层:
-
- 容器层主要功能是实现一些存储多个cell的数据结构。
-
-  主要接口有SequentialCell,CellList等,SequentialCell的示意用法见本节末尾。
-
-- 卷积层:
-
- 卷积层提供了一些卷积计算的功能,如普通卷积,深度卷积和卷积转置等。
-
- 主要接口有Conv2d,Conv1d,Conv2dTranspose,DepthwiseConv2d,Conv1dTranspose等。
-
-- 池化层:
-
- 池化层提供了平均池化和最大池化等计算的功能。
-
- 主要接口有AvgPool2d,MaxPool2d,AvgPool1d。
-
-- 嵌入层:
-
- 嵌入层提供word embedding的计算功能,将输入的单词映射为稠密向量。
-
- 主要接口有:Embedding,EmbeddingLookup,EmbeddingLookUpSplitMode等。
-
-- 长短记忆循环层:
-
- 长短记忆循环层提供LSTM计算功能。其中LSTM内部会调用LSTMCell接口, LSTMCell是一个LSTM单元,
- 对一个LSTM层做运算,当涉及多LSTM网络层运算时,使用LSTM接口。
-
- 主要接口有:LSTM,LSTMCell。
-
-- 标准化层:
-
-  标准化层提供了一些标准化的方法,即通过线性变换等方式将数据规范化为特定的均值和标准差。
-
- 主要接口有:BatchNorm1d,BatchNorm2d,LayerNorm,GroupNorm,GlobalBatchNorm等。
-
-- 数学计算层:
-
-  数学计算层提供一些由算子拼接而成的计算功能,例如数据生成和一些数学计算等。
-
- 主要接口有ReduceLogSumExp,Range,LinSpace,LGamma等。
-
-- 图片层:
-
- 图片计算层提供了一些矩阵计算相关的功能,将图片数据进行一些变换与计算。
-
- 主要接口有ImageGradients,SSIM,MSSSIM,PSNR,CentralCrop等。
-
-- 量化层:
-
- 量化是指将数据从float的形式转换成一段数据范围的int类型,所以量化层提供了一些数据量化的方法和模型层结构封装。
-
- 主要接口有Conv2dBnAct,DenseBnAct,Conv2dBnFoldQuant,LeakyReLUQuant等。
-
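-针对上文提到的容器层,下面给出一段SequentialCell的示意代码(输入张量的形状与数值仅作演示):
-
-```
-import numpy as np
-import mindspore.nn as nn
-from mindspore import Tensor
-
-# 用SequentialCell把多个模型层按顺序组合成一个整体
-block = nn.SequentialCell([nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Flatten()])
-
-x = Tensor(np.ones((1, 1, 32, 32), np.float32))
-out = block(x)
-print(out.shape)  # (1, 8192)
-```
-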
-### 应用实例
-
-MindSpore的模型层在mindspore.nn下,使用方法如下所示:
-
-```
-class Net(nn.Cell):
- def __init__(self):
- super(Net, self).__init__()
- self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
- self.bn = nn.BatchNorm2d(64)
- self.relu = nn.ReLU()
- self.flatten = nn.Flatten()
- self.fc = nn.Dense(64 * 222 * 222, 3)
-
- def construct(self, x):
- x = self.conv(x)
- x = self.bn(x)
- x = self.relu(x)
- x = self.flatten(x)
- out = self.fc(x)
- return out
-```
-
-依然是上述网络构造的用例,从这个用例中可以看出,程序调用了Conv2d,BatchNorm2d,ReLU,Flatten和Dense模型层的接口。
-这些模型层在Net的__init__方法里面定义,在construct方法里面真正运行;模型层接口有序地连接,形成一个可执行的网络。
-
-## 损失函数
-
-目前MindSpore主要支持的损失函数有L1Loss,MSELoss,SmoothL1Loss,SoftmaxCrossEntropyWithLogits,SoftmaxCrossEntropyExpand
-和CosineEmbeddingLoss。
-
-MindSpore的损失函数全部是Cell的子类实现,所以也支持用户自定义损失函数,其构造方法在“Cell构造自定义网络”一节中进行介绍。
-
-### 内置损失函数
-
-- L1Loss:
-
- 计算两个输入数据的绝对值误差,用于回归模型。reduction参数默认值为mean,返回loss平均值结果,
-若reduction值为sum,返回loss累加结果,若reduction值为none,返回每个loss的结果。
-
-- MSELoss:
-
- 计算两个输入数据的平方误差,用于回归模型。reduction参数默认值为mean,返回loss平均值结果,
-若reduction值为sum,返回loss累加结果,若reduction值为none,返回每个loss的结果。
-
-- SmoothL1Loss:
-
- SmoothL1Loss为平滑L1损失函数,用于回归模型,阈值sigma默认参数为1。
-
-- SoftmaxCrossEntropyWithLogits:
-
- 交叉熵损失函数,用于分类模型。当标签数据不是one-hot编码形式时,需要输入参数sparse为True。reduction参数
- 与L1Loss一致。
-
-- SoftmaxCrossEntropyExpand:
-
- 交叉熵扩展损失函数,用于分类模型。当标签数据不是one-hot编码形式时,需要输入参数sparse为True。
-
-- CosineEmbeddingLoss:
-
- CosineEmbeddingLoss用于衡量两个输入相似程度,用于分类模型。margin默认为0.0,reduction参数与L1Loss一致
-
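-针对上文的SoftmaxCrossEntropyWithLogits,下面给出一段示意代码(标签为类别索引而非one-hot编码,因此sparse设置为True;数值仅作演示):
-
-```
-import numpy as np
-import mindspore.nn as nn
-from mindspore import Tensor
-
-loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
-logits = Tensor(np.array([[3., 5., 6., 9.], [1., 2., 4., 1.]]).astype(np.float32))
-labels = Tensor(np.array([3, 2]).astype(np.int32))
-print(loss(logits, labels))
-```
-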
-### 应用实例
-
-MindSpore的损失函数全部在mindspore.nn下,使用方法如下所示:
-
-```
-import numpy as np
-import mindspore.nn as nn
-from mindspore import Tensor
-
-loss = nn.L1Loss()
-input_data = Tensor(np.array([[1, 2, 3], [2, 3, 4]]).astype(np.float32))
-target_data = Tensor(np.array([[0, 2, 5], [3, 1, 1]]).astype(np.float32))
-print(loss(input_data, target_data))
-```
-
-此用例构造了两个Tensor数据,利用nn.L1Loss()接口定义了L1Loss,将input_data和target_data传入loss,
-执行L1Loss的计算,结果为1.5。若loss = nn.L1Loss(reduction='sum'),则结果为9.0。
-若loss = nn.L1Loss(reduction='none'),结果为[[1. 0. 2.] [1. 2. 3.]]
-
-
-## Cell构造自定义网络
-
-无论是网络结构,还是前文提到的模型层,损失函数和优化器等,本质上都是一个Cell,因此都可以自定义实现。
-
-首先构造一个继承Cell类的子类,然后在__init__方法里面定义算子和模型层等,最后在construct方法里面构造网络结构。
-
-以lenet5网络为例,在__init__方法中定义了卷积层,池化层和全连接层等结构单元,然后在construct方法将定义的内容连接在一起,
-形成一个完整lenet5的网络结构。
-
-lenet5网络实现方式如下所示:
-```
-import mindspore.nn as nn
-
-class LeNet5(nn.Cell):
- def __init__(self):
- super(LeNet5, self).__init__()
- self.conv1 = nn.Conv2d(3, 6, 5, pad_mode="valid")
- self.conv2 = nn.Conv2d(6, 16, 5, pad_mode="valid")
- self.fc1 = nn.Dense(16 * 5 * 5, 120)
- self.fc2 = nn.Dense(120, 84)
- self.fc3 = nn.Dense(84, 3)
- self.relu = nn.ReLU()
- self.max_pool2d = nn.MaxPool2d(kernel_size=2)
- self.flatten = nn.Flatten()
-
- def construct(self, x):
- x = self.max_pool2d(self.relu(self.conv1(x)))
- x = self.max_pool2d(self.relu(self.conv2(x)))
- x = self.flatten(x)
- x = self.relu(self.fc1(x))
- x = self.relu(self.fc2(x))
- x = self.fc3(x)
- return x
-```
diff --git a/api/source_zh_cn/programming_guide/dataset_conversion.md b/api/source_zh_cn/programming_guide/dataset_conversion.md
deleted file mode 100644
index 59e68a061ff4a680539cd85a9b9973622b189ee5..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/dataset_conversion.md
+++ /dev/null
@@ -1,521 +0,0 @@
-# MindSpore数据格式转换
-
-
-
-- [MindSpore数据格式转换](#mindspore数据格式转换)
- - [概述](#概述)
- - [非标准数据集转换MindRecord](#非标准数据集转换mindrecord)
- - [CV类数据集](#cv类数据集)
- - [NLP类数据集](#nlp类数据集)
- - [常用数据集转换MindRecord](#常用数据集转换mindrecord)
- - [转换CIFAR-10数据集](#转换cifar-10数据集)
- - [转换CIFAR-100数据集](#转换cifar-100数据集)
- - [转换ImageNet数据集](#转换imagenet数据集)
- - [转换MNIST数据集](#转换mnist数据集)
- - [转换CSV数据集](#转换csv数据集)
- - [转换TFRecord数据集](#转换tfrecord数据集)
-
-
-
-
-
-## 概述
-
-用户可以将非标准的数据集和常见的经典数据集转换为MindSpore数据格式即MindRecord,从而方便地加载到MindSpore中进行训练。同时,MindSpore在部分场景做了性能优化,使用MindSpore数据格式可以获得更好的性能体验。
-
-## 非标准数据集转换MindRecord
-
-本节主要介绍如何将CV类数据和NLP类数据转换为MindRecord格式,并通过MindDataset实现MindRecord格式文件的读取。
-
-### CV类数据集
-
- ```python
- """
-    示例说明:本示例主要介绍用户如何将自己的CV类数据集转换成MindRecord格式,并使用MindDataset读取。
- 详细步骤:
- 1. 创建一个包含100条记录的MindRecord文件,其样本包含file_name(字符串), label(整形), data(二进制)三个字段;
- 2. 使用MindDataset读取MindRecord文件。
- """
-
- from io import BytesIO
- import os
- import mindspore.dataset as ds
- from mindspore.mindrecord import FileWriter
- import mindspore.dataset.transforms.vision.c_transforms as vision
- from PIL import Image
-
- ################################ 生成MindRecord文件 ################################
-
- mindrecord_filename = "test.mindrecord"
-
- # 如果存在MindRecord文件,则需要先删除
- if os.path.exists(mindrecord_filename):
- os.remove(mindrecord_filename)
- os.remove(mindrecord_filename + ".db")
-
- # 创建写对象,将会生成 mindrecord_filename 和 mindrecord_filename.db 两个文件
- writer = FileWriter(file_name=mindrecord_filename, shard_num=1)
-
- # 定义数据集Schema
- cv_schema = {"file_name": {"type": "string"}, "label": {"type": "int32"}, "data": {"type": "bytes"}}
- writer.add_schema(cv_schema, "it is a cv dataset")
-
- # [可选]定义索引字段,只能是标量字段
- writer.add_index(["file_name", "label"])
-
- # 按Schema方式组织训练数据,并将其写入MindRecord文件
- # 此处使用Image.new(...)模拟图片数据,真实场景可以使用io接口读取磁盘上的图像数据
- data = []
- for i in range(100): # 模拟数据集有100个样本
- i += 1
-
- sample = {}
- white_io = BytesIO()
- Image.new('RGB', (i*10, i*10), (255, 255, 255)).save(white_io, 'JPEG') # 图片大小可以不同
- image_bytes = white_io.getvalue()
- sample['file_name'] = str(i) + ".jpg" # 对应file_name字段
- sample['label'] = i # 对应label字段
- sample['data'] = white_io.getvalue() # 对应data字段
-
- data.append(sample)
- if i % 10 == 0: # 每10条样本做一次写操作
- writer.write_raw_data(data)
- data = []
-
- if data: # 写入可能剩余的数据
- writer.write_raw_data(data)
-
- writer.commit() # 关闭写入操作
-
- ################################ 读取MindRecord文件 ################################
-
- data_set = ds.MindDataset(dataset_file=mindrecord_filename) # 创建读取对象,默认开启shuffle
- decode_op = vision.Decode()
- data_set = data_set.map(input_columns=["data"], operations=decode_op, num_parallel_workers=2) # 解码data字段
- count = 0
- for item in data_set.create_dict_iterator(): # 循环读取MindRecord中所有数据
- print("sample: {}".format(item))
- count += 1
- print("Got {} samples".format(count))
- ```
-
-### NLP类数据集
-
-> 因为NLP类数据一般会经过预处理转换为字典序,此预处理过程不在本示例范围,该示例只演示转换后的字典序数据如何写入MindRecord。
-
- ```python
- """
-    示例说明:本示例主要介绍用户如何将自己的NLP类数据集转换成MindRecord格式,并使用MindDataset读取。
- 详细步骤:
- 1. 创建一个包含100条记录的MindRecord文件,其样本包含八个字段,均为整形数组;
- 2. 使用MindDataset读取MindRecord文件。
- """
-
- import os
- import numpy as np
- import mindspore.dataset as ds
- from mindspore.mindrecord import FileWriter
-
- ################################ 生成MindRecord文件 ################################
-
- mindrecord_filename = "test.mindrecord"
-
- # 如果存在MindRecord文件,则需要先删除
- if os.path.exists(mindrecord_filename):
- os.remove(mindrecord_filename)
- os.remove(mindrecord_filename + ".db")
-
- # 创建写对象,将会生成 mindrecord_filename 和 mindrecord_filename.db 两个文件
- writer = FileWriter(file_name=mindrecord_filename, shard_num=1)
-
- # 定义数据集Schema,此处认为文本已经转为字典序
- nlp_schema = {"source_sos_ids": {"type": "int64", "shape": [-1]},
- "source_sos_mask": {"type": "int64", "shape": [-1]},
- "source_eos_ids": {"type": "int64", "shape": [-1]},
- "source_eos_mask": {"type": "int64", "shape": [-1]},
- "target_sos_ids": {"type": "int64", "shape": [-1]},
- "target_sos_mask": {"type": "int64", "shape": [-1]},
- "target_eos_ids": {"type": "int64", "shape": [-1]},
- "target_eos_mask": {"type": "int64", "shape": [-1]}}
- writer.add_schema(nlp_schema, "it is a preprocessed nlp dataset")
-
- # 按Schema方式组织训练数据,并将其写入MindRecord文件
- data = []
- for i in range(100): # 模拟数据集有100个样本
- i += 1
-
- # 组织训练数据
- sample = {"source_sos_ids": np.array([i, i+1, i+2, i+3, i+4], dtype=np.int64),
- "source_sos_mask": np.array([i*1, i*2, i*3, i*4, i*5, i*6, i*7], dtype=np.int64),
- "source_eos_ids": np.array([i+5, i+6, i+7, i+8, i+9, i+10], dtype=np.int64),
- "source_eos_mask": np.array([19, 20, 21, 22, 23, 24, 25, 26, 27], dtype=np.int64),
- "target_sos_ids": np.array([28, 29, 30, 31, 32], dtype=np.int64),
- "target_sos_mask": np.array([33, 34, 35, 36, 37, 38], dtype=np.int64),
- "target_eos_ids": np.array([39, 40, 41, 42, 43, 44, 45, 46, 47], dtype=np.int64),
- "target_eos_mask": np.array([48, 49, 50, 51], dtype=np.int64)}
-
- data.append(sample)
- if i % 10 == 0: # 每10条样本做一次写操作
- writer.write_raw_data(data)
- data = []
-
- if data: # 写入可能剩余的数据
- writer.write_raw_data(data)
-
- writer.commit() # 关闭写入操作
-
- ################################ 读取MindRecord文件 ################################
-
- data_set = ds.MindDataset(dataset_file=mindrecord_filename) # 创建读取对象,默认开启shuffle
- count = 0
- for item in data_set.create_dict_iterator(): # 循环读取MindRecord中所有数据
- print("sample: {}".format(item))
- count += 1
- print("Got {} samples".format(count))
- ```
-
-## 常用数据集转换MindRecord
-
-MindSpore提供转换常见数据集的工具类,能够将常见的经典数据集转换为MindRecord格式。常见数据集及其对应的工具类列表如下。
-
-| 数据集 | 格式转换工具类 |
-| -------- | ------------ |
-| CIFAR-10 | Cifar10ToMR |
-| CIFAR-100 | Cifar100ToMR |
-| ImageNet | ImageNetToMR |
-| MNIST | MnistToMR |
-| TFRecord | TFRecordToMR |
-| CSV File | CsvToMR |
-
-更多数据集转换的详细说明可参见[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.mindrecord.html)。
-
-### 转换CIFAR-10数据集
-
-用户可以通过`Cifar10ToMR`类,将CIFAR-10原始数据转换为MindRecord格式。
-
-1. 下载[CIFAR-10数据集](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz)并解压,目录结构如下所示。
-
- ```
- └─cifar-10-batches-py
- ├─batches.meta
- ├─data_batch_1
- ├─data_batch_2
- ├─data_batch_3
- ├─data_batch_4
- ├─data_batch_5
- ├─readme.html
- └─test_batch
- ```
-
-2. 导入数据集转换工具类`Cifar10ToMR`。
-
- ```python
- from mindspore.mindrecord import Cifar10ToMR
- ```
-
-3. 创建`Cifar10ToMR`对象,调用`transform`接口,将CIFAR-10数据集转换为MindRecord格式。
-
- ```python
- CIFAR10_DIR = "./cifar10/cifar-10-batches-py"
- MINDRECORD_FILE = "./cifar10.mindrecord"
- cifar10_transformer = Cifar10ToMR(CIFAR10_DIR, MINDRECORD_FILE)
- cifar10_transformer.transform(['label'])
- ```
-
- **参数说明:**
- - `CIFAR10_DIR`:CIFAR-10数据集的文件夹路径。
- - `MINDRECORD_FILE`:输出的MindSpore数据格式文件路径。
-
-### 转换CIFAR-100数据集
-
-用户可以通过`Cifar100ToMR`类,将CIFAR-100原始数据转换为MindRecord格式。
-
-1. 准备好CIFAR-100数据集,将文件解压至指定的目录(示例中将数据集保存到`cifar100`目录),如下所示。
-
- ```
- % ll cifar100/cifar-100-python/
- meta
- test
- train
- ```
- > CIFAR-100数据集下载地址:
-
-2. 导入转换数据集的工具类`Cifar100ToMR`。
-
- ```python
- from mindspore.mindrecord import Cifar100ToMR
- ```
-
-3. 实例化`Cifar100ToMR`对象,调用`transform`接口,将CIFAR-100数据集转换为MindSpore数据格式。
-
- ```python
- CIFAR100_DIR = "./cifar100/cifar-100-python"
- MINDRECORD_FILE = "./cifar100.mindrecord"
- cifar100_transformer = Cifar100ToMR(CIFAR100_DIR, MINDRECORD_FILE)
- cifar100_transformer.transform(['fine_label', 'coarse_label'])
- ```
-
- **参数说明:**
- - `CIFAR100_DIR`:CIFAR-100数据集的文件夹路径。
- - `MINDRECORD_FILE`:输出的MindSpore数据格式文件路径。
-
-### 转换ImageNet数据集
-
-用户可以通过`ImageNetToMR`类,将ImageNet原始数据(图片、标注)转换为MindSpore数据格式。
-
-1. 下载并按照要求准备好ImageNet数据集。
-
- > ImageNet数据集下载地址:
-
- 对下载后的ImageNet数据集,整理数据集组织形式为一个包含所有图片的文件夹,以及一个记录图片对应标签的映射文件。
-
- 标签映射文件包含2列,分别为各类别图片目录、标签ID,用空格隔开,映射文件示例如下:
- ```
- n01440760 0
- n01443537 1
- n01484850 2
- n01491361 3
- n01494475 4
- n01496331 5
- ```
-
-2. 导入转换数据集的工具类`ImageNetToMR`。
-
- ```python
- from mindspore.mindrecord import ImageNetToMR
- ```
-
-3. 实例化`ImageNetToMR`对象,调用`transform`接口,将数据集转换为MindSpore数据格式。
- ```python
- IMAGENET_MAP_FILE = "./testImageNetDataWhole/labels_map.txt"
- IMAGENET_IMAGE_DIR = "./testImageNetDataWhole/images"
- MINDRECORD_FILE = "./testImageNetDataWhole/imagenet.mindrecord"
- PARTITION_NUMBER = 4
- imagenet_transformer = ImageNetToMR(IMAGENET_MAP_FILE, IMAGENET_IMAGE_DIR, MINDRECORD_FILE, PARTITION_NUMBER)
- imagenet_transformer.transform()
- ```
- 其中,
- `IMAGENET_MAP_FILE`:ImageNetToMR数据集的标签映射文件路径。
- `IMAGENET_IMAGE_DIR`:包含ImageNet所有图片的文件夹路径。
- `MINDRECORD_FILE`:输出的MindSpore数据格式文件路径。
-
-### 转换MNIST数据集
-
-用户可以通过`MnistToMR`类,将MNIST原始数据转换为MindSpore数据格式。
-
-1. 准备MNIST数据集,将下载好的文件放至指定的目录,如下所示:
-
- ```
- % ll mnist_data/
- train-images-idx3-ubyte.gz
- train-labels-idx1-ubyte.gz
- t10k-images-idx3-ubyte.gz
- t10k-labels-idx1-ubyte.gz
- ```
-
- > MNIST数据集下载地址:
-
-2. 导入转换数据集的工具类`MnistToMR`。
-
- ```python
- from mindspore.mindrecord import MnistToMR
- ```
-
-3. 实例化`MnistToMR`对象,调用`transform`接口,将MNIST数据集转换为MindSpore数据格式。
-
- ```python
- MNIST_DIR = "./mnist_data"
- MINDRECORD_FILE = "./mnist.mindrecord"
- mnist_transformer = MnistToMR(MNIST_DIR, MINDRECORD_FILE)
- mnist_transformer.transform()
- ```
-
- ***参数说明:***
- - `MNIST_DIR`:MNIST数据集的文件夹路径。
- - `MINDRECORD_FILE`:输出的MindSpore数据格式文件路径。
-
-
-### 转换CSV数据集
-
- ```python
- """
- 示例说明:本示例首先创建一个CSV文件,然后通过MindSpore中CsvToMR工具,
- 将Csv文件转换为MindRecord文件,并最终通过MindDataset将其读取出来。
- 详细步骤:
- 1. 创建一个包含5条记录的CSV文件;
- 2. 使用CsvToMR工具将CSV转换为MindRecord;
- 3. 使用MindDataset读取MindRecord文件。
- """
-
- import csv
- import os
- import mindspore.dataset as ds
- from mindspore.mindrecord import CsvToMR
-
- CSV_FILE_NAME = "test.csv" # 创建的CSV文件
- MINDRECORD_FILE_NAME = "test.mindrecord" # 转换后的MindRecord文件
- PARTITION_NUM = 1
-
- ################################ 创建CSV文件 ################################
-
- # 生成CSV文件
- def generate_csv():
- headers = ["id", "name", "math", "english"]
- rows = [(1, "Lily", 78.5, 90),
- (2, "Lucy", 99, 85.2),
- (3, "Mike", 65, 71),
- (4, "Tom", 95, 99),
- (5, "Jeff", 85, 78.5)]
- with open(CSV_FILE_NAME, 'w', encoding='utf-8') as f:
- writer = csv.writer(f)
- writer.writerow(headers)
- writer.writerows(rows)
-
- generate_csv()
-
- if os.path.exists(MINDRECORD_FILE_NAME):
- os.remove(MINDRECORD_FILE_NAME)
- os.remove(MINDRECORD_FILE_NAME + ".db")
-
- ################################ CSV 转 MindRecord文件 ################################
-
- # 调用CsvToMR工具,初始化
- csv_transformer = CsvToMR(CSV_FILE_NAME, MINDRECORD_FILE_NAME, partition_number=PARTITION_NUM)
- # 执行转换操作
- csv_transformer.transform()
-
- assert os.path.exists(MINDRECORD_FILE_NAME)
- assert os.path.exists(MINDRECORD_FILE_NAME + ".db")
-
- ############################### 读取MindRecord文件 ################################
-
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE_NAME) # 创建读取对象,默认开启shuffle
- count = 0
- for item in data_set.create_dict_iterator(): # 循环读取MindRecord中所有数据
- print("sample: {}".format(item))
- count += 1
- print("Got {} samples".format(count))
- ```
-
-### 转换TFRecord数据集
-
- ```python
- """
- 示例说明:本示例通过TF创建一个TFRecord文件,然后通过MindSpore中TFRecordToMR工具,
- 将TFRecord文件转换为MindRecord文件,并最终通过MindDataset将其读取出来。
- 详细步骤:
- 1. 创建一个包含10条记录,且样本格式为:
- feature_dict = {"file_name": tf.io.FixedLenFeature([], tf.string),
- "image_bytes": tf.io.FixedLenFeature([], tf.string),
- "int64_scalar": tf.io.FixedLenFeature([], tf.int64),
- "float_scalar": tf.io.FixedLenFeature([], tf.float32),
- "int64_list": tf.io.FixedLenFeature([6], tf.int64),
- "float_list": tf.io.FixedLenFeature([7], tf.float32)}
- 的TFRecord文件;
- 2. 使用TFRecordToMR工具将TFRecord转换为MindRecord;
- 3. 使用MindDataset读取MindRecord文件,并通过Decode算子对其image_bytes字段进行解码。
- """
-
- import collections
- from io import BytesIO
- import os
- import mindspore.dataset as ds
- from mindspore.mindrecord import TFRecordToMR
- import mindspore.dataset.transforms.vision.c_transforms as vision
- from PIL import Image
- import tensorflow as tf # 需要tensorflow >= 2.1.0
-
- TFRECORD_FILE_NAME = "test.tfrecord" # 创建的TFRecord文件
- MINDRECORD_FILE_NAME = "test.mindrecord" # 转换后的MindRecord文件
- PARTITION_NUM = 1
-
- ################################ 创建TFRecord文件 ################################
-
- # 生成TFRecord文件
- def generate_tfrecord():
- def create_int_feature(values):
- if isinstance(values, list):
- feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values))) # values: [int, int, int]
- else:
- feature = tf.train.Feature(int64_list=tf.train.Int64List(value=[values])) # values: int
- return feature
-
- def create_float_feature(values):
- if isinstance(values, list):
- feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values))) # values: [float, float]
- else:
- feature = tf.train.Feature(float_list=tf.train.FloatList(value=[values])) # values: float
- return feature
-
- def create_bytes_feature(values):
- if isinstance(values, bytes):
- white_io = BytesIO()
- Image.new('RGB', (10, 10), (255, 255, 255)).save(white_io, 'JPEG') # 图片大小可以不同
- image_bytes = white_io.getvalue()
- feature = tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])) # values: bytes
- else:
- # values: string
- feature = tf.train.Feature(bytes_list=tf.train.BytesList(value=[bytes(values, encoding='utf-8')]))
- return feature
-
- writer = tf.io.TFRecordWriter(TFRECORD_FILE_NAME)
-
- example_count = 0
- for i in range(10):
- file_name = "000" + str(i) + ".jpg"
- image_bytes = bytes(str("aaaabbbbcccc" + str(i)), encoding="utf-8")
- int64_scalar = i
- float_scalar = float(i)
- int64_list = [i, i+1, i+2, i+3, i+4, i+1234567890]
- float_list = [float(i), float(i+1), float(i+2.8), float(i+3.2),
- float(i+4.4), float(i+123456.9), float(i+98765432.1)]
-
- features = collections.OrderedDict()
- features["file_name"] = create_bytes_feature(file_name)
- features["image_bytes"] = create_bytes_feature(image_bytes)
- features["int64_scalar"] = create_int_feature(int64_scalar)
- features["float_scalar"] = create_float_feature(float_scalar)
- features["int64_list"] = create_int_feature(int64_list)
- features["float_list"] = create_float_feature(float_list)
-
- tf_example = tf.train.Example(features=tf.train.Features(feature=features))
- writer.write(tf_example.SerializeToString())
- example_count += 1
- writer.close()
- print("Write {} rows in tfrecord.".format(example_count))
-
- generate_tfrecord()
-
- ################################ TFRecord 转 MindRecord文件 ################################
-
- feature_dict = {"file_name": tf.io.FixedLenFeature([], tf.string),
- "image_bytes": tf.io.FixedLenFeature([], tf.string),
- "int64_scalar": tf.io.FixedLenFeature([], tf.int64),
- "float_scalar": tf.io.FixedLenFeature([], tf.float32),
- "int64_list": tf.io.FixedLenFeature([6], tf.int64),
- "float_list": tf.io.FixedLenFeature([7], tf.float32),
- }
-
- if os.path.exists(MINDRECORD_FILE_NAME):
- os.remove(MINDRECORD_FILE_NAME)
- os.remove(MINDRECORD_FILE_NAME + ".db")
-
- # 调用TFRecordToMR工具,初始化
- tfrecord_transformer = TFRecordToMR(TFRECORD_FILE_NAME, MINDRECORD_FILE_NAME, feature_dict, ["image_bytes"])
- # 执行转换操作
- tfrecord_transformer.transform()
-
- assert os.path.exists(MINDRECORD_FILE_NAME)
- assert os.path.exists(MINDRECORD_FILE_NAME + ".db")
-
- ############################### 读取MindRecord文件 ################################
-
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE_NAME) # 创建读取对象,默认开启shuffle
- decode_op = vision.Decode()
- data_set = data_set.map(input_columns=["image_bytes"], operations=decode_op, num_parallel_workers=2) # 解码图像字段
- count = 0
- for item in data_set.create_dict_iterator(): # 循环读取MindRecord中所有数据
- print("sample: {}".format(item))
- count += 1
- print("Got {} samples".format(count))
- ```
diff --git a/api/source_zh_cn/programming_guide/images/batch.png b/api/source_zh_cn/programming_guide/images/batch.png
deleted file mode 100644
index cce0f467eac154d0633543e5c69613ce7bdbbdcc..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/batch.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/concat.png b/api/source_zh_cn/programming_guide/images/concat.png
deleted file mode 100644
index 742aa2a0203f078ee7d06549c3372ce271cea455..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/concat.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/ctrans_invert.png b/api/source_zh_cn/programming_guide/images/ctrans_invert.png
deleted file mode 100644
index a27301d28dd11b037ab973cc97d1b3042f24f3b0..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/ctrans_invert.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/ctrans_resize.png b/api/source_zh_cn/programming_guide/images/ctrans_resize.png
deleted file mode 100644
index f4f2b23642cc8d87f3ad5684205c968c79bd794d..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/ctrans_resize.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/map.png b/api/source_zh_cn/programming_guide/images/map.png
deleted file mode 100644
index abe704717045e3816f3ffe4d10a8b023ec983b3d..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/map.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/pytrans_compose.png b/api/source_zh_cn/programming_guide/images/pytrans_compose.png
deleted file mode 100644
index 66221a4f5e7a9f985475fa2dd68f1994903636c3..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/pytrans_compose.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/randomcrop.png b/api/source_zh_cn/programming_guide/images/randomcrop.png
deleted file mode 100644
index 8095bceb67cd3643dda1dce6c060a98ccb40373f..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/randomcrop.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/randomhorizontalflip.png b/api/source_zh_cn/programming_guide/images/randomhorizontalflip.png
deleted file mode 100644
index f127d7ab479851049262fc3713dba7d14b2c908a..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/randomhorizontalflip.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/repeat.png b/api/source_zh_cn/programming_guide/images/repeat.png
deleted file mode 100644
index 7cb40834c41b8d17e37cf2da8ba368ad72212f48..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/repeat.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/shuffle.png b/api/source_zh_cn/programming_guide/images/shuffle.png
deleted file mode 100644
index d4af0f38c4ecbff6fb80ad3c06b974ef71adeb56..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/shuffle.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_bad.png b/api/source_zh_cn/programming_guide/images/tranform_bad.png
deleted file mode 100644
index 2d3ee60ccffdbe7c9ad3f5adb4235cdc8f3532d2..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_bad.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_good_1.png b/api/source_zh_cn/programming_guide/images/tranform_good_1.png
deleted file mode 100644
index 3c4b373ead883539b6d4673c68665bec20034e18..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_good_1.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_good_2.png b/api/source_zh_cn/programming_guide/images/tranform_good_2.png
deleted file mode 100644
index 066a5d082387206a01ceb6ad54cc9dd7e074c672..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_good_2.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_good_3.png b/api/source_zh_cn/programming_guide/images/tranform_good_3.png
deleted file mode 100644
index 500b36c18eb53253c58f84515d5b90b1136d23c0..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_good_3.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/tranform_pipeline.png b/api/source_zh_cn/programming_guide/images/tranform_pipeline.png
deleted file mode 100644
index 07906d4751f286de989a4c873d9fd422207eb5eb..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/tranform_pipeline.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/images/zip.png b/api/source_zh_cn/programming_guide/images/zip.png
deleted file mode 100644
index 2839b2c36f00533917b2406d7f215249ad8dbc6b..0000000000000000000000000000000000000000
Binary files a/api/source_zh_cn/programming_guide/images/zip.png and /dev/null differ
diff --git a/api/source_zh_cn/programming_guide/type.md b/api/source_zh_cn/programming_guide/type.md
deleted file mode 100644
index 3ccdb560386cb1a9fa71fd8dc6e724f2ca135662..0000000000000000000000000000000000000000
--- a/api/source_zh_cn/programming_guide/type.md
+++ /dev/null
@@ -1,54 +0,0 @@
-# 数据类型
-
-
-
-- [数据类型](#数据类型)
- - [概述](#概述)
- - [操作接口](#操作接口)
-
-
-
-
-
-
-## 概述
-
-MindSpore张量支持不同的数据类型,有`int8`、`int16`、`int32`、`int64`、`uint8`、`uint16`、`uint32`、`uint64`、
-`float16`、`float32`、`float64`、`bool_`,与NumPy的数据类型一一对应。Python里的`int`数会被转换为定义的`int64`进行运算,
-Python里的`float`数会被转换为定义的`float32`进行运算。
-
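-下面是一段示意代码(假设环境中已安装MindSpore),用于查看按上述默认规则构造出的张量的数据类型:
-
-```
-from mindspore import Tensor
-
-# Python的int和float标量按上述默认规则分别转换为int64和float32
-print(Tensor(1).dtype)
-print(Tensor(1.0).dtype)
-```
-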
-## 操作接口
-- `dtype_to_nptype`
-
- 可通过该接口将MindSpore的数据类型转换为NumPy对应的数据类型。
-
-- `dtype_to_pytype`
-
- 可通过该接口将MindSpore的数据类型转换为Python对应的内置数据类型。
-
-
-- `pytype_to_dtype`
-
- 可通过该接口将Python内置的数据类型转换为MindSpore对应的数据类型。
-
-示例如下:
-
-```
-from mindspore import dtype as mstype
-
-np_type = mstype.dtype_to_nptype(mstype.int32)
-ms_type = mstype.pytype_to_dtype(int)
-py_type = mstype.dtype_to_pytype(mstype.float64)
-
-print(np_type)
-print(ms_type)
-print(py_type)
-```
-
-输出如下:
-
-```
-<class 'numpy.int32'>
-Int64
-<class 'float'>
-```
diff --git a/api/Makefile b/docs/api_cpp/Makefile
similarity index 100%
rename from api/Makefile
rename to docs/api_cpp/Makefile
diff --git a/api/requirements.txt b/docs/api_cpp/requirements.txt
similarity index 100%
rename from api/requirements.txt
rename to docs/api_cpp/requirements.txt
diff --git a/tutorials/source_zh_cn/_static/logo_notebook.png b/docs/api_cpp/source_en/_static/logo_notebook.png
similarity index 100%
rename from tutorials/source_zh_cn/_static/logo_notebook.png
rename to docs/api_cpp/source_en/_static/logo_notebook.png
diff --git a/api/source_en/_static/logo_source.png b/docs/api_cpp/source_en/_static/logo_source.png
similarity index 100%
rename from api/source_en/_static/logo_source.png
rename to docs/api_cpp/source_en/_static/logo_source.png
diff --git a/docs/api_cpp/source_en/class_list.md b/docs/api_cpp/source_en/class_list.md
new file mode 100644
index 0000000000000000000000000000000000000000..236b192159d23e4d25548361858f80634d75ed21
--- /dev/null
+++ b/docs/api_cpp/source_en/class_list.md
@@ -0,0 +1,16 @@
+# Class List
+
+Here is a list of all classes with links to the namespace documentation for each member:
+
+| Namespace | Class Name | Description |
+| --- | --- | --- |
+| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#allocator) | Allocator defines a memory pool for dynamic memory malloc and memory free. |
+| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#context) | Context is defined for holding environment variables during runtime. |
+| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#modelimpl) | ModelImpl defines the implementation class of Model in MindSpore Lite. |
+| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#primitivec) | PrimitiveC defines the prototype of an operator. |
+| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#model) | Model defines the model in MindSpore Lite for managing the graph. |
+| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#modelbuilder) | ModelBuilder is defined to build the model. |
+| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/en/r1.0/session.html#litesession) | LiteSession defines a session in MindSpore Lite for compiling the model and running inference. |
+| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/en/r1.0/tensor.html#mstensor) | MSTensor defines a tensor in MindSpore Lite. |
+| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/en/r1.0/dataset.html#litemat) | Class that represents a LiteMat of an image. |
+
diff --git a/docs/api_cpp/source_en/conf.py b/docs/api_cpp/source_en/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..4787de3f631f53db97bad94ffb7c95441edf0bb7
--- /dev/null
+++ b/docs/api_cpp/source_en/conf.py
@@ -0,0 +1,60 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+# import sys
+# sys.path.append('..')
+# sys.path.insert(0, os.path.abspath('.'))
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/lite/docs/source_en/apicc/dataset.md b/docs/api_cpp/source_en/dataset.md
similarity index 90%
rename from lite/docs/source_en/apicc/dataset.md
rename to docs/api_cpp/source_en/dataset.md
index 984ffc15eaa2fc44ed5e17c87f89b561083a5eae..47d29e5406a727f4b113cdae2b35921682aec1da 100644
--- a/lite/docs/source_en/apicc/dataset.md
+++ b/docs/api_cpp/source_en/dataset.md
@@ -1,11 +1,13 @@
# mindspore::dataset
-#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)>
-#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)>
+#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)>
+#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)>
## Functions of image_process.h
+### ResizeBilinear
+
```
bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
```
@@ -22,6 +24,8 @@ Resize image by bilinear algorithm, currently the data type only supports uint8,
Return True or False.
+### InitFromPixel
+
```
bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m)
```
@@ -40,6 +44,8 @@ Initialize LiteMat from pixel, currently the conversion supports rbgaTorgb and r
Return True or False.
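
A small sketch combining the two functions above: build a LiteMat from an RGBA pixel buffer and resize it with the bilinear algorithm. The include paths and the enum value `LPixelType::RGBA2RGB` are assumptions made for illustration; check image_process.h for the exact names.

```
#include "lite_cv/lite_mat.h"       // path is an assumption
#include "lite_cv/image_process.h"

using namespace mindspore::dataset;

// rgba points to w * h * 4 bytes of pixel data.
bool PrepareInput(const unsigned char *rgba, int w, int h, LiteMat &resized) {
  LiteMat rgb;
  if (!InitFromPixel(rgba, LPixelType::RGBA2RGB, LDataType::UINT8, w, h, rgb)) {
    return false;
  }
  // Resize to an (arbitrary) network input resolution of 256 x 256.
  return ResizeBilinear(rgb, resized, 256, 256);
}
```
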
+### ConvertTo
+
```
bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
```
@@ -56,6 +62,8 @@ Convert the data type, currently it supports converting the data type from uint8
Return True or False.
+### Crop
+
```
bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
```
@@ -74,6 +82,8 @@ Crop image, the channel supports is 3 and 1.
Return True or False.
+### SubStractMeanNormalize
+
```
bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float *norm)
```
@@ -90,11 +100,13 @@ Normalize image, currently the supports data type is float.
Return True or False.
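
The two functions above are usually chained for preprocessing: convert the uint8 image to float, then normalize each channel. In this sketch the mean and norm values are arbitrary ImageNet-style examples and the include path is an assumption.

```
#include "lite_cv/image_process.h"  // path is an assumption

using namespace mindspore::dataset;

bool NormalizeImage(LiteMat &img_u8, LiteMat &normalized) {
  LiteMat img_f32;
  // Scale uint8 values from [0, 255] into [0, 1] while converting to float.
  if (!ConvertTo(img_u8, img_f32, 1.0 / 255.0)) {
    return false;
  }
  float mean[3] = {0.485f, 0.456f, 0.406f};  // per-channel mean
  float norm[3] = {0.229f, 0.224f, 0.225f};  // per-channel normalization factor
  return SubStractMeanNormalize(img_f32, normalized, mean, norm);
}
```
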
+### Pad
+
```
-bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int left, const int right, const PaddBorderType pad_type, uint8_t fill_r, uint8_t fill_g, uint8_t fill_b)
+bool Pad(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int left, const int right, const PaddBorderType pad_type, uint8_t fill_r, uint8_t fill_g, uint8_t fill_b)
```
-Padd image, the channel supports is 3 and 1.
+Pad the image. The supported numbers of channels are 3 and 1.
- Parameters
@@ -112,6 +124,8 @@ Padd image, the channel supports is 3 and 1.
Return True or False.
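
A short sketch of Pad, adding a constant border around the image. The enum value `PaddBorderType::PADD_BORDER_CONSTANT` and the include path are assumptions made for illustration; the actual constants are declared next to `PaddBorderType` in image_process.h.

```
#include "lite_cv/image_process.h"  // path is an assumption

using namespace mindspore::dataset;

// Add an 8-pixel black border on every side of src.
bool AddBorder(LiteMat &src, LiteMat &padded) {
  return Pad(src, padded, /*top=*/8, /*bottom=*/8, /*left=*/8, /*right=*/8,
             PaddBorderType::PADD_BORDER_CONSTANT,
             /*fill_r=*/0, /*fill_g=*/0, /*fill_b=*/0);
}
```
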
+### Affine
+
```
void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C1 borderValue)
```
@@ -140,6 +154,8 @@ Apply affine transformation for 3 channel image.
- `dsize`: The size of the output image.
- `borderValue`: The pixel value is used for filing after the image is captured.
+### GetDefaultBoxes
+
```
std::vector> GetDefaultBoxes(BoxesConfig config)
```
@@ -154,6 +170,8 @@ Get default anchor boxes for Faster R-CNN, SSD, YOLO etc.
Return the default boxes.
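
Combined with ConvertBoxes and ApplyNms, described in the next sections, a typical SSD-style post-processing step might look like the sketch below. The element types of the vectors (stripped from the signatures on this page) and the BoxesConfig setup are assumptions; the config fields are model specific.

```
#include <vector>
#include "lite_cv/image_process.h"  // path is an assumption

using namespace mindspore::dataset;

std::vector<int> PostProcess(std::vector<std::vector<float>> &pred_boxes,
                             std::vector<float> &scores, const BoxesConfig &config) {
  // Anchor boxes generated from the (model-specific) configuration.
  std::vector<std::vector<float>> default_boxes = GetDefaultBoxes(config);
  // Convert the raw predictions into actual (y, x, h, w) boxes in place.
  ConvertBoxes(pred_boxes, default_boxes, config);
  // Keep at most 100 boxes, suppressing overlaps above the 0.6 threshold (arbitrary values).
  return ApplyNms(pred_boxes, scores, 0.6f, 100);
}
```
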
+### ConvertBoxes
+
```
void ConvertBoxes(std::vector> &boxes, std::vector> &default_boxes, BoxesConfig config)
```
@@ -166,6 +184,8 @@ Convert the prediction boxes to the actual boxes with (y, x, h, w).
- `default_boxes`: Default box.
- `config`: Objects of BoxesConfig structure.
+### ApplyNms
+
```
std::vector ApplyNms(std::vector> &all_boxes, std::vector &all_scores, float thres, int max_boxes)
```
@@ -190,6 +210,7 @@ Class that represents a lite Mat of a Image.
**Constructors & Destructors**
+### LiteMat
```
LiteMat()
@@ -211,6 +232,7 @@ Destructor of MindSpore dataset LiteMat.
**Public Member Functions**
+### Init
```
void Init(int width, LDataType data_type = LDataType::UINT8)
@@ -222,6 +244,8 @@ void Init(int width, int height, int channel, LDataType data_type = LDataType::U
The function to initialize the channel, width and height of the image, but the parameters are different.
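
A minimal sketch of the second overload, together with the IsEmpty and Release members described in the sections that follow; the sizes and the include path are arbitrary assumptions.

```
#include "lite_cv/lite_mat.h"  // path is an assumption

using mindspore::dataset::LDataType;
using mindspore::dataset::LiteMat;

void UseLiteMat() {
  LiteMat m;
  m.Init(224, 224, 3, LDataType::UINT8);  // width, height, channel, element type
  if (!m.IsEmpty()) {
    // ... fill and process the image buffer here ...
    m.Release();                          // free the underlying memory explicitly
  }
}
```
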
+### IsEmpty
+
```
bool IsEmpty() const
```
@@ -232,6 +256,8 @@ A function to determine whether the object is empty.
Return True or False.
+### Release
+
```
void Release()
```
@@ -240,6 +266,8 @@ A function to release memory.
**Private Member Functions**
+### AlignMalloc
+
```
void *AlignMalloc(unsigned int size)
```
@@ -254,6 +282,8 @@ Apply for memory alignment.
Return the size of a pointer.
+### AlignFree
+
```
void AlignFree(void *ptr)
```
@@ -270,6 +300,8 @@ Initialize the value of elem_size_ by data_type.
- `data_type`: Type of data.
+### addRef
+
```
int addRef(int *p, int value)
```
diff --git a/lite/docs/source_en/apicc/errorcode_and_metatype.md b/docs/api_cpp/source_en/errorcode_and_metatype.md
similarity index 92%
rename from lite/docs/source_en/apicc/errorcode_and_metatype.md
rename to docs/api_cpp/source_en/errorcode_and_metatype.md
index df566213408154cd2034eb2932a5f6d1380f89f3..45b4877a858d82df61c1dffa8dc734edddd300a5 100644
--- a/lite/docs/source_en/apicc/errorcode_and_metatype.md
+++ b/docs/api_cpp/source_en/errorcode_and_metatype.md
@@ -13,6 +13,7 @@ Description of error code and meta type supported in MindSpore Lite.
| RET_NO_CHANGE | -4 | No change. |
| RET_SUCCESS_EXIT | -5 | No error but exit. |
| RET_MEMORY_FAILED | -6 | Fail to create memory. |
+| RET_NOT_SUPPORT | -7 | Not supported yet. |
| RET_OUT_OF_TENSOR_RANGE | -101 | Failed to check range. |
| RET_INPUT_TENSOR_ERROR | -102 | Failed to check input tensor. |
| RET_REENTRANT_ERROR | -103 | Exist executor running. |
@@ -24,6 +25,8 @@ Description of error code and meta type supported in MindSpore Lite.
| RET_FORMAT_ERR | -401 | Failed to check the tensor format. |
| RET_INFER_ERR | -501 | Failed to infer shape. |
| RET_INFER_INVALID | -502 | Invalid infer shape before runtime. |
+| RET_INPUT_PARAM_INVALID | -601 | Invalid input parameter provided by the user. |
+| RET_INPUT_PARAM_LACK | -602 | Required input parameter missing from the user. |
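
The codes above are plain integer STATUS values returned by methods such as CompileGraph and RunGraph, so they can be checked directly. In this sketch, the success constant RET_OK and the mindspore::lite namespace for the codes are assumptions based on errorcode.h.

```
#include <iostream>
#include "include/errorcode.h"  // path is an assumption

void ReportStatus(int status) {
  using namespace mindspore::lite;  // namespace of the constants is assumed
  if (status == RET_OK) {           // RET_OK (success) is assumed to be defined here
    return;
  }
  if (status == RET_NOT_SUPPORT) {
    std::cerr << "operation not supported" << std::endl;
  } else if (status == RET_INPUT_PARAM_INVALID || status == RET_INPUT_PARAM_LACK) {
    std::cerr << "check the input parameters" << std::endl;
  } else {
    std::cerr << "inference failed, status = " << status << std::endl;
  }
}
```
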
## MetaType
An **enum** type.
diff --git a/lite/docs/source_en/index.rst b/docs/api_cpp/source_en/index.rst
similarity index 48%
rename from lite/docs/source_en/index.rst
rename to docs/api_cpp/source_en/index.rst
index abecfe957e16896bca6efeb5a1cb376835251fa6..6b3fb87da08b8e47644ddb3bc308dd63de1d8d21 100644
--- a/lite/docs/source_en/index.rst
+++ b/docs/api_cpp/source_en/index.rst
@@ -1,16 +1,18 @@
.. MindSpore documentation master file, created by
- sphinx-quickstart on Thu Aug 17 10:00:00 2020.
+ sphinx-quickstart on Thu Mar 24 10:00:00 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-MindSpore Lite Documentation
-============================
+MindSpore C++ API
+=================
.. toctree::
- :glob:
- :maxdepth: 1
+ :glob:
+ :maxdepth: 1
- architecture
- apicc/apicc
- operator_list
- glossary
+ class_list
+ lite
+ session
+ tensor
+ dataset
+ errorcode_and_metatype
\ No newline at end of file
diff --git a/lite/docs/source_en/apicc/lite.md b/docs/api_cpp/source_en/lite.md
similarity index 65%
rename from lite/docs/source_en/apicc/lite.md
rename to docs/api_cpp/source_en/lite.md
index 93bc93edf0d709c8d227723f921ea39f9a39f3b0..6e2c33eeb2741a0e88b778ebab245716078d168d 100644
--- a/lite/docs/source_en/apicc/lite.md
+++ b/docs/api_cpp/source_en/lite.md
@@ -1,10 +1,10 @@
# mindspore::lite
-#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)>
+#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/context.h)>
-#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)>
+#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/model.h)>
-#include <[version.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/version.h)>
+#include <[version.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/version.h)>
## Allocator
@@ -23,23 +23,6 @@ Context()
Constructor of MindSpore Lite Context using default value for parameters.
-```
-Context(int thread_num, std::shared_ptr allocator, DeviceContext device_ctx)
-```
-Constructor of MindSpore Lite Context using input value for parameters.
-
-- Parameters
-
- - `thread_num`: Define the work thread number during the runtime.
-
- - `allocator`: Define the allocator for malloc.
-
- - `device_ctx`: Define device information during the runtime.
-
-- Returns
-
- The instance of MindSpore Lite Context.
-
```
~Context()
```
@@ -52,10 +35,12 @@ float16_priority
```
A **bool** value. Defaults to **false**. Prior enable float16 inference.
+> Enabling float16 inference may reduce accuracy, because some variables may exceed the range of float16 during forwarding.
+
```
-device_ctx_{DT_CPU}
+device_type
```
-A [**DeviceContext**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#devicecontext) struct defined at the bottom of the text. Using to specify the device.
+A [**DeviceType**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#devicetype) **enum** type. Defaults to **DT_CPU**. Used to specify the device.
```
thread_num_
@@ -67,13 +52,13 @@ An **int** value. Defaults to **2**. Thread number config for thread pool.
allocator
```
-A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#allocator).
+A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#allocator).
```
cpu_bind_mode_
```
-A [**CpuBindMode**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#cpubindmode) enum variable. Defaults to **MID_CPU**.
+A [**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/lite.html#cpubindmode) **enum** variable. Defaults to **MID_CPU**.
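
A short sketch configuring the public attributes above before the Context is handed to LiteSession::CreateSession. The include path and the enum scoping (the DeviceType and CpuBindMode values being visible as mindspore::lite::DT_GPU and mindspore::lite::NO_BIND) are assumptions; the values themselves are listed in the sections below.

```
#include "include/context.h"  // path is an assumption

mindspore::lite::Context MakeContext() {
  mindspore::lite::Context context;
  context.thread_num_ = 4;                            // size of the runtime thread pool
  context.device_type = mindspore::lite::DT_GPU;      // run on the GPU
  context.cpu_bind_mode_ = mindspore::lite::NO_BIND;  // do not pin threads to cores
  context.float16_priority = false;                   // keep float32 precision
  return context;
}
```
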
## PrimitiveC
Primitive is defined as prototype of operator.
@@ -121,6 +106,7 @@ Static method to create a Model pointer.
An **enum** type. CpuBindMode defined for holding bind cpu strategy argument.
**Attributes**
+
```
MID_CPU = -1
```
@@ -153,16 +139,6 @@ GPU device type.
DT_NPU = 0
```
NPU device type, not supported yet.
-## DeviceContext
-
-A **struct**. DeviceContext defined for holding DeviceType.
-
-**Attributes**
-```
-type
-```
-A [**DeviceType**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#devicetype) variable. The device type.
-
## Version
```
diff --git a/lite/docs/source_en/apicc/session.md b/docs/api_cpp/source_en/session.md
similarity index 85%
rename from lite/docs/source_en/apicc/session.md
rename to docs/api_cpp/source_en/session.md
index 3ecee43e21d6ef04213fba8ee566093c2ff7d9b5..63aa4aea1930e8ae9046441e9db24ecb30ff1cf4 100644
--- a/lite/docs/source_en/apicc/session.md
+++ b/docs/api_cpp/source_en/session.md
@@ -1,6 +1,6 @@
# mindspore::session
-#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)>
+#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/lite_session.h)>
## LiteSession
@@ -41,7 +41,7 @@ Compile MindSpore Lite model.
- Returns
- STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
+ STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
```
virtual std::vector GetInputs() const
@@ -73,13 +73,13 @@ Run session with callback.
- Parameters
- - `before`: A [**KernelCallBack**](https://www.mindspore.cn/lite/docs/en/master/apicc/session.html#kernelcallback) function. Define a callback function to be called before running each node.
+ - `before`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/session.html#kernelcallback) function. Define a callback function to be called before running each node.
- - `after`: A [**KernelCallBack**](https://www.mindspore.cn/lite/docs/en/master/apicc/session.html#kernelcallback) function. Define a callback function to be called after running each node.
+ - `after`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/r1.0/session.html#kernelcallback) function. Define a callback function to be called after running each node.
- Returns
- STATUS as an error code of running graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
+ STATUS as an error code of running graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
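
As a sketch, RunGraph can be traced with a pair of callbacks. The callback signature (a bool return and MSTensor pointer vectors, stripped from the typedef in the KernelCallBack section at the end of this page) and the CallBackParam field name follow that section; treat the exact types as assumptions.

```
#include <iostream>
#include <vector>
#include "include/lite_session.h"  // path is an assumption

void RunWithTrace(mindspore::session::LiteSession *session) {
  auto before = [](std::vector<mindspore::tensor::MSTensor *> inputs,
                   std::vector<mindspore::tensor::MSTensor *> outputs,
                   const mindspore::session::CallBackParam &op_info) {
    std::cout << "enter " << op_info.name_callback_param << std::endl;
    return true;  // true is assumed to mean "continue running"
  };
  auto after = [](std::vector<mindspore::tensor::MSTensor *> inputs,
                  std::vector<mindspore::tensor::MSTensor *> outputs,
                  const mindspore::session::CallBackParam &op_info) {
    std::cout << "leave " << op_info.name_callback_param << std::endl;
    return true;
  };
  session->RunGraph(before, after);
}
```
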
```
virtual std::vector GetOutputsByNodeName(const std::string &node_name) const
@@ -151,7 +151,7 @@ Resize inputs shape.
- Returns
- STATUS as an error code of resize inputs, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h).
+ STATUS as an error code of resize inputs, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h).
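
For example, resizing a single model input to a batch-one 224x224 RGB shape could look like the sketch below; the element types of the two vectors (tensor::MSTensor pointers and std::vector<int>) are stripped from the signature above and therefore assumed.

```
#include <vector>
#include "include/lite_session.h"  // path is an assumption

int ResizeFirstInput(mindspore::session::LiteSession *session) {
  auto inputs = session->GetInputs();                       // all model inputs
  std::vector<std::vector<int>> dims = {{1, 224, 224, 3}};  // new shape, same order as inputs
  return session->Resize(inputs, dims);                     // returns a STATUS error code
}
```
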
**Static Public Member Functions**
diff --git a/lite/docs/source_en/apicc/tensor.md b/docs/api_cpp/source_en/tensor.md
similarity index 50%
rename from lite/docs/source_en/apicc/tensor.md
rename to docs/api_cpp/source_en/tensor.md
index 014929ba12ea2d636478ea7515562559bd9af087..f74d7a33ec2395f67192ebc3002143ac85f2f871 100644
--- a/lite/docs/source_en/apicc/tensor.md
+++ b/docs/api_cpp/source_en/tensor.md
@@ -1,6 +1,6 @@
# mindspore::tensor
-#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)>
+#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/ms_tensor.h)>
## MSTensor
@@ -30,25 +30,12 @@ virtual TypeId data_type() const
```
Get data type of the MindSpore Lite MSTensor.
-> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types in TypeId enum are suitable for MSTensor.
+> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/core/ir/dtype/type_id.h). Only number types in TypeId enum are suitable for MSTensor.
- Returns
MindSpore Lite TypeId of the MindSpore Lite MSTensor.
-```
-virtual TypeId set_data_type(TypeId data_type)
-```
-Set data type for the MindSpore Lite MSTensor.
-
-- Parameters
-
- - `data_type`: Define MindSpore Lite TypeId to be set in the MindSpore Lite MSTensor.
-
-- Returns
-
- MindSpore Lite TypeId of the MindSpore Lite MSTensor after set.
-
```
virtual std::vector shape() const
```
@@ -59,19 +46,6 @@ Get shape of the MindSpore Lite MSTensor.
A vector of int as the shape of the MindSpore Lite MSTensor.
-```
-virtual size_t set_shape(const std::vector &shape)
-```
-Set shape for the MindSpore Lite MSTensor.
-
-- Parameters
-
- - `shape`: Define a vector of int as shape to be set into the MindSpore Lite MSTensor.
-
-- Returns
-
- Size of shape of the MindSpore Lite MSTensor after set.
-
```
virtual int DimensionSize(size_t index) const
```
@@ -96,16 +70,6 @@ Get number of element in MSTensor.
Number of element in MSTensor.
-```
-virtual std::size_t hash() const
-```
-
-Get hash of the MindSpore Lite MSTensor.
-
-- Returns
-
- Hash of the MindSpore Lite MSTensor.
-
```
virtual size_t Size() const
```
@@ -129,23 +93,3 @@ Get the pointer of data in MSTensor.
- Returns
The pointer points to data in MSTensor.
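
A sketch of the usual pattern for feeding data: query the input tensors from the session, then copy host memory into the buffer returned by MutableData; Size() gives the expected byte size. The float element type and the include paths are assumptions made for illustration.

```
#include <cstring>
#include <vector>
#include "include/lite_session.h"  // path is an assumption
#include "include/ms_tensor.h"

bool FillFirstInput(mindspore::session::LiteSession *session, const std::vector<float> &host) {
  auto inputs = session->GetInputs();
  auto *tensor = inputs.front();
  if (host.size() * sizeof(float) != tensor->Size()) {  // byte sizes must match
    return false;
  }
  std::memcpy(tensor->MutableData(), host.data(), tensor->Size());
  return true;
}
```
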
-
-**Static Public Member Functions**
-
-```
-static MSTensor *CreateTensor(TypeId data_type, const std::vector &shape)
-```
-
-Static method to create a MSTensor pointer.
-
-> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types in TypeId enum are suitable for MSTensor.
-
-- Parameters
-
- - `data_type`: Define the data type of tensor to be created.
-
- - `shape`: Define the shape of tensor to be created.
-
-- Returns
-
- The pointer of MSTensor.
\ No newline at end of file
diff --git a/docs/api_cpp/source_zh_cn/_static/logo_notebook.png b/docs/api_cpp/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_cpp/source_zh_cn/_static/logo_notebook.png differ
diff --git a/api/source_zh_cn/_static/logo_source.png b/docs/api_cpp/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from api/source_zh_cn/_static/logo_source.png
rename to docs/api_cpp/source_zh_cn/_static/logo_source.png
diff --git a/docs/api_cpp/source_zh_cn/class_list.md b/docs/api_cpp/source_zh_cn/class_list.md
new file mode 100644
index 0000000000000000000000000000000000000000..2999e89bd33b017aa40e54c5a60874604f98a424
--- /dev/null
+++ b/docs/api_cpp/source_zh_cn/class_list.md
@@ -0,0 +1,15 @@
+# 类列表
+
+MindSpore Lite中的类定义及其所属命名空间和描述:
+
+| 命名空间 | 类 | 描述 |
+| --- | --- | --- |
+| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#allocator) | Allocator定义了一个内存池,用于动态地分配和释放内存。 |
+| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#context) | Context用于保存执行期间的环境变量。 |
+| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#modelimpl) | ModelImpl定义了MindSpore Lite中的Model的实现类。 |
+| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#primitivec) | PrimitiveC定义为算子的原型。 |
+| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#model) | Model定义了MindSpore Lite中的模型,便于计算图管理。 |
+| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#modelbuilder) | ModelBuilder定义了MindSpore Lite中的模型构建器。 |
+| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/session.html#litesession) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 |
+| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/tensor.html#mstensor) | MSTensor定义了MindSpore Lite中的张量。 |
+| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/dataset.html#litemat) |LiteMat是一个处理图像的类。 |
diff --git a/docs/api_cpp/source_zh_cn/conf.py b/docs/api_cpp/source_zh_cn/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..625e5acd3bde751f170596e75261be4bb2bde60f
--- /dev/null
+++ b/docs/api_cpp/source_zh_cn/conf.py
@@ -0,0 +1,65 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+# import sys
+# sys.path.append('..')
+# sys.path.insert(0, os.path.abspath('.'))
+
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_search_language = 'zh'
+
+html_search_options = {'dict': '../../resource/jieba.txt'}
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/apicc/dataset.md b/docs/api_cpp/source_zh_cn/dataset.md
similarity index 90%
rename from lite/docs/source_zh_cn/apicc/dataset.md
rename to docs/api_cpp/source_zh_cn/dataset.md
index 379d3e11632327b3075c0f8a56d53c852cdeae80..92db1aefdf187f945c83c9ead638df4c956d9576 100644
--- a/lite/docs/source_zh_cn/apicc/dataset.md
+++ b/docs/api_cpp/source_zh_cn/dataset.md
@@ -1,11 +1,13 @@
# mindspore::dataset
-#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)>
-#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)>
+#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)>
+#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)>
## image_process.h文件的函数
+### ResizeBilinear
+
```
bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
```
@@ -22,6 +24,8 @@ bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
返回True或者False。
+### InitFromPixel
+
```
bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m)
```
@@ -40,6 +44,8 @@ bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType d
返回True或者False。
+### ConvertTo
+
```
bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
```
@@ -56,6 +62,8 @@ bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
返回True或者False。
+### Crop
+
```
bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
```
@@ -74,6 +82,8 @@ bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
返回True或者False。
+### SubStractMeanNormalize
+
```
bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float *norm)
```
@@ -90,8 +100,10 @@ bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float
返回True或者False。
+### Pad
+
```
-bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int left, const int right, const PaddBorderType pad_type, uint8_t fill_r, uint8_t fill_g, uint8_t fill_b)
+bool Pad(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int left, const int right, const PaddBorderType pad_type, uint8_t fill_r, uint8_t fill_g, uint8_t fill_b)
```
填充图像,通道支持为3和1。
@@ -112,6 +124,8 @@ bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int
返回True或者False。
+### Affine
+
```
void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C1 borderValue)
```
@@ -140,6 +154,8 @@ void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsi
- `dsize`: 输出图像的大小。
- `borderValue`: 采图之后用于填充的像素值。
+### GetDefaultBoxes
+
```
std::vector> GetDefaultBoxes(BoxesConfig config)
```
@@ -154,6 +170,8 @@ std::vector> GetDefaultBoxes(BoxesConfig config)
返回默认框。
+### ConvertBoxes
+
```
void ConvertBoxes(std::vector> &boxes, std::vector> &default_boxes, BoxesConfig config)
```
@@ -166,6 +184,8 @@ void ConvertBoxes(std::vector> &boxes, std::vector ApplyNms(std::vector> &all_boxes, std::vector &all_scores, float thres, int max_boxes)
```
@@ -190,6 +210,7 @@ LiteMat是一个处理图像的类。
**构造函数和析构函数**
+### LiteMat
```
LiteMat()
@@ -211,6 +232,7 @@ MindSpore dataset LiteMat的析构函数。
**公有成员函数**
+### Init
```
void Init(int width, LDataType data_type = LDataType::UINT8)
@@ -222,6 +244,8 @@ void Init(int width, int height, int channel, LDataType data_type = LDataType::U
该函数用于初始化图像的通道,宽度和高度,参数不同。
+### IsEmpty
+
```
bool IsEmpty() const
```
@@ -232,6 +256,8 @@ bool IsEmpty() const
返回True或者False。
+### Release
+
```
void Release()
```
@@ -240,6 +266,8 @@ void Release()
**私有成员函数**
+### AlignMalloc
+
```
void *AlignMalloc(unsigned int size)
```
@@ -254,12 +282,17 @@ void *AlignMalloc(unsigned int size)
返回指针的大小。
+### AlignFree
+
```
void AlignFree(void *ptr)
```
释放指针内存大小的方法。
+
+### InitElemSize
+
```
void InitElemSize(LDataType data_type)
```
diff --git a/lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md b/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md
similarity index 92%
rename from lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md
rename to docs/api_cpp/source_zh_cn/errorcode_and_metatype.md
index 4195eaedcfa2cda8e0470d3db06950e35e2050d8..59f0d81ea4a3a254c7b37e9895c89de1d0357b3d 100644
--- a/lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md
+++ b/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md
@@ -13,6 +13,7 @@
| RET_NO_CHANGE | -4 | 无改变。 |
| RET_SUCCESS_EXIT | -5 | 无错误退出。 |
| RET_MEMORY_FAILED | -6 | 创建内存失败。 |
+| RET_NOT_SUPPORT | -7 | 尚未支持。 |
| RET_OUT_OF_TENSOR_RANGE | -101 | 输出检查越界。 |
| RET_INPUT_TENSOR_ERROR | -102 | 输入检查越界。 |
| RET_REENTRANT_ERROR | -103 | 存在运行中的执行器。 |
@@ -24,6 +25,8 @@
| RET_FORMAT_ERR | -401 | 张量格式检查失败。 |
| RET_INFER_ERR | -501 | 维度推理失败。 |
| RET_INFER_INVALID | -502 | 无效的维度推理。 |
+| RET_INPUT_PARAM_INVALID | -601 | 无效的用户输入参数。 |
+| RET_INPUT_PARAM_LACK | -602 | 缺少必要的输入参数。 |
## MetaType
diff --git a/docs/api_cpp/source_zh_cn/index.rst b/docs/api_cpp/source_zh_cn/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..6b3fb87da08b8e47644ddb3bc308dd63de1d8d21
--- /dev/null
+++ b/docs/api_cpp/source_zh_cn/index.rst
@@ -0,0 +1,18 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 10:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore C++ API
+=================
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ class_list
+ lite
+ session
+ tensor
+ dataset
+ errorcode_and_metatype
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/apicc/lite.md b/docs/api_cpp/source_zh_cn/lite.md
similarity index 60%
rename from lite/docs/source_zh_cn/apicc/lite.md
rename to docs/api_cpp/source_zh_cn/lite.md
index 2673487a861f56db5c8b9f6bab8daac555cb7fed..f0797d6a7e5d056f4cfa883d7c752a0f12115513 100644
--- a/lite/docs/source_zh_cn/apicc/lite.md
+++ b/docs/api_cpp/source_zh_cn/lite.md
@@ -1,197 +1,169 @@
-# mindspore::lite
-
-#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)>
-
-#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)>
-
-#include <[version.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/version.h)>
-
-
-## Allocator
-
-Allocator类定义了一个内存池,用于动态地分配和释放内存。
-
-## Context
-
-Context类用于保存执行中的环境变量。
-
-**构造函数和析构函数**
-
-```
-Context()
-```
-
-用默认参数构造MindSpore Lite Context 对象。
-
-```
-Context(int thread_num, std::shared_ptr allocator, DeviceContext device_ctx)
-```
-
-根据输入参数构造MindSpore Lite Context 对象。
-
-- 参数
-
- - `thread_num`: 定义了执行线程数。
-
- - `allocator`: 定义了内存分配器。
-
- - `device_ctx`: 定义了设备信息。
-
-- 返回值
-
- MindSpore Lite Context 指针。
-
-```
-~Context()
-```
-
-MindSpore Lite Context 的析构函数。
-
-**公有属性**
-
-```
-float16_priority
-```
-
-**bool** 值,默认为**false**,用于使能float16 推理。
-
-```
-device_ctx_{DT_CPU}
-```
-
-[**DeviceContext**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#devicecontext)结构体。用于设置设备信息。
-
-```
-thread_num_
-```
-
-**int** 值,默认为**2**,设置线程数。
-
-```
-allocator
-```
-
-指针类型,指向内存分配器[**Allocator**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#allocator)的指针。
-
-```
-cpu_bind_mode_
-```
-
-[**CpuBindMode**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#cpubindmode)枚举类型,默认为**MID_CPU**。
-
-## PrimitiveC
-
-PrimitiveC定义为算子的原型。
-
-## Model
-
-Model定义了MindSpore Lite中的模型,便于计算图管理。
-
-**析构函数**
-
-```
-~Model()
-```
-
-MindSpore Lite Model的析构函数。
-
-**公有成员函数**
-
-```
-void Destroy()
-```
-
-释放Model内的所有过程中动态分配的内存。
-
-```
-void Free()
-```
-
-释放MindSpore Lite Model中的MetaGraph。
-
-**静态公有成员函数**
-
-```
-static Model *Import(const char *model_buf, size_t size)
-```
-
-创建Model指针的静态方法。
-
-- 参数
-
- - `model_buf`: 定义了读取模型文件的缓存区。
-
- - `size`: 定义了模型缓存区的字节数。
-
-- 返回值
-
- 指向MindSpore Lite的Model的指针。
-
-## CpuBindMode
-枚举类型,设置cpu绑定策略。
-
-**属性**
-
-```
-MID_CPU = -1
-```
-
-优先中等CPU绑定策略。
-
-```
-HIGHER_CPU = 1
-```
-
-优先高级CPU绑定策略。
-
-```
-NO_BIND = 0
-```
-
-不绑定。
-
-## DeviceType
-枚举类型,设置设备类型。
-
-**属性**
-
-```
-DT_CPU = -1
-```
-
-设备为CPU。
-
-```
-DT_GPU = 1
-```
-
-设备为GPU。
-
-```
-DT_NPU = 0
-```
-
-设备为NPU,暂不支持。
-
-## DeviceContext
-
-定义设备类型的结构体。
-
-**属性**
-
-```
-type
-```
-
-[**DeviceType**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#devicetype) 变量。设备类型。
-
-## Version
-
-```
-std::string Version()
-```
-全局方法,用于获取版本的字符串。
-
-- 返回值
-
+# mindspore::lite
+
+#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/context.h)>
+
+#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/model.h)>
+
+#include <[version.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/version.h)>
+
+
+## Allocator
+
+Allocator类定义了一个内存池,用于动态地分配和释放内存。
+
+## Context
+
+Context类用于保存执行中的环境变量。
+
+**构造函数和析构函数**
+
+```
+Context()
+```
+
+用默认参数构造MindSpore Lite Context 对象。
+
+```
+~Context()
+```
+
+MindSpore Lite Context 的析构函数。
+
+**公有属性**
+
+```
+float16_priority
+```
+
+**bool**值,默认为**false**,用于使能float16 推理。
+
+> 使能float16推理可能会导致模型推理精度下降,因为在模型推理的中间过程中,有些变量可能会超出float16的数值范围。
+
+```
+device_type
+```
+
+[**DeviceType**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#devicetype)枚举类型。默认为**DT_CPU**,用于设置设备信息。
+
+```
+thread_num_
+```
+
+**int** 值,默认为**2**,设置线程数。
+
+```
+allocator
+```
+
+指针类型,指向内存分配器[**Allocator**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#allocator)的指针。
+
+```
+cpu_bind_mode_
+```
+
+[**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/lite.html#cpubindmode)枚举类型,默认为**MID_CPU**。
+
+## PrimitiveC
+
+PrimitiveC定义为算子的原型。
+
+## Model
+
+Model定义了MindSpore Lite中的模型,便于计算图管理。
+
+**析构函数**
+
+```
+~Model()
+```
+
+MindSpore Lite Model的析构函数。
+
+**公有成员函数**
+
+```
+void Destroy()
+```
+
+释放Model内的所有过程中动态分配的内存。
+
+```
+void Free()
+```
+
+释放MindSpore Lite Model中的MetaGraph。
+
+**静态公有成员函数**
+
+```
+static Model *Import(const char *model_buf, size_t size)
+```
+
+创建Model指针的静态方法。
+
+- 参数
+
+ - `model_buf`: 定义了读取模型文件的缓存区。
+
+ - `size`: 定义了模型缓存区的字节数。
+
+- 返回值
+
+ 指向MindSpore Lite的Model的指针。
+
+## CpuBindMode
+枚举类型,设置cpu绑定策略。
+
+**属性**
+
+```
+MID_CPU = -1
+```
+
+优先中等CPU绑定策略。
+
+```
+HIGHER_CPU = 1
+```
+
+优先高级CPU绑定策略。
+
+```
+NO_BIND = 0
+```
+
+不绑定。
+
+## DeviceType
+枚举类型,设置设备类型。
+
+**属性**
+
+```
+DT_CPU = -1
+```
+
+设备为CPU。
+
+```
+DT_GPU = 1
+```
+
+设备为GPU。
+
+```
+DT_NPU = 0
+```
+
+设备为NPU,暂不支持。
+
+## Version
+
+```
+std::string Version()
+```
+全局方法,用于获取版本的字符串。
+
+- 返回值
+
MindSpore Lite版本的字符串。
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/apicc/session.md b/docs/api_cpp/source_zh_cn/session.md
similarity index 83%
rename from lite/docs/source_zh_cn/apicc/session.md
rename to docs/api_cpp/source_zh_cn/session.md
index 86556e1351e97bf4ad435e09db907fdca4e5fefd..f83b7a467ac38e07cfc46e6e1f367d85a20c36cb 100644
--- a/lite/docs/source_zh_cn/apicc/session.md
+++ b/docs/api_cpp/source_zh_cn/session.md
@@ -1,177 +1,177 @@
-# mindspore::session
-
-#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)>
-
-
-## LiteSession
-
-LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。
-
-**构造函数和析构函数**
-
-```
-LiteSession()
-```
-MindSpore Lite LiteSession的构造函数,使用默认参数。
-```
-~LiteSession()
-```
-MindSpore Lite LiteSession的析构函数。
-
-**公有成员函数**
-```
-virtual void BindThread(bool if_bind)
-```
-尝试将线程池中的线程绑定到指定的cpu内核,或从指定的cpu内核进行解绑。
-
-- 参数
-
- - `if_bind`: 定义了对线程进行绑定或解绑。
-
-```
-virtual int CompileGraph(lite::Model *model)
-```
-编译MindSpore Lite模型。
-
-> 注意: CompileGraph必须在RunGraph方法之后调用。
-
-- 参数
-
- - `model`: 定义了需要被编译的模型。
-
-- 返回值
-
- STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。
-
-```
-virtual std::vector GetInputs() const
-```
-获取MindSpore Lite模型的MSTensors输入。
-
-- 返回值
-
- MindSpore Lite MSTensor向量。
-
-```
-std::vector GetInputsByName(const std::string &node_name) const
-```
-通过节点名获取MindSpore Lite模型的MSTensors输入。
-
-- 参数
-
- - `node_name`: 定义了节点名。
-
-- 返回值
-
- MindSpore Lite MSTensor向量。
-
-```
-virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBack &after = nullptr)
-```
-运行带有回调函数的会话。
-> 注意: RunGraph必须在CompileGraph方法之后调用。
-
-- 参数
-
- - `before`: 一个[**KernelCallBack**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#kernelcallback) 结构体。定义了运行每个节点之前调用的回调函数。
-
- - `after`: 一个[**KernelCallBack**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#kernelcallback) 结构体。定义了运行每个节点之后调用的回调函数。
-
-- 返回值
-
- STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。
-
-```
-virtual std::vector GetOutputsByNodeName(const std::string &node_name) const
-```
-通过节点名获取MindSpore Lite模型的MSTensors输出。
-
-- 参数
-
- - `node_name`: 定义了节点名。
-
-- 返回值
-
- MindSpore Lite MSTensor向量。
-
-```
-virtual std::unordered_map GetOutputs() const
-```
-获取与张量名相关联的MindSpore Lite模型的MSTensors输出。
-
-- 返回值
-
- 包含输出张量名和MindSpore Lite MSTensor的容器类型变量。
-
-```
-virtual std::vector GetOutputTensorNames() const
-```
-获取由当前会话所编译的模型的输出张量名。
-
-- 返回值
-
- 字符串向量,其中包含了按顺序排列的输出张量名。
-
-```
-virtual mindspore::tensor::MSTensor *GetOutputByTensorName(const std::string &tensor_name) const
-```
-通过张量名获取MindSpore Lite模型的MSTensors输出。
-
-- 参数
-
- - `tensor_name`: 定义了张量名。
-
-- 返回值
-
- 指向MindSpore Lite MSTensor的指针。
-
-```
-virtual int Resize(const std::vector &inputs, const std::vector> &dims)
-```
-调整输入的形状。
-
-- 参数
-
- - `inputs`: 模型对应的所有输入。
- - `dims`: 输入对应的新的shape,顺序注意要与inputs一致。
-
-- 返回值
-
- STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。
-
-**静态公有成员函数**
-
-```
-static LiteSession *CreateSession(lite::Context *context)
-```
-用于创建一个LiteSession指针的静态方法。
-
-- 参数
-
- - `context`: 定义了所要创建的session的上下文。
-
-- 返回值
-
- 指向MindSpore Lite LiteSession的指针。
-## KernelCallBack
-
-```
-using KernelCallBack = std::function inputs, std::vector outputs, const CallBackParam &opInfo)>
-```
-
-一个函数包装器。KernelCallBack 定义了指向回调函数的指针。
-
-## CallBackParam
-
-一个结构体。CallBackParam定义了回调函数的输入参数。
-**属性**
-
-```
-name_callback_param
-```
-**string** 类型变量。节点名参数。
-
-```
-type_callback_param
-```
+# mindspore::session
+
+#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/lite_session.h)>
+
+
+## LiteSession
+
+LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。
+
+**构造函数和析构函数**
+
+```
+LiteSession()
+```
+MindSpore Lite LiteSession的构造函数,使用默认参数。
+```
+~LiteSession()
+```
+MindSpore Lite LiteSession的析构函数。
+
+**公有成员函数**
+```
+virtual void BindThread(bool if_bind)
+```
+尝试将线程池中的线程绑定到指定的cpu内核,或从指定的cpu内核进行解绑。
+
+- 参数
+
+ - `if_bind`: 定义了对线程进行绑定或解绑。
+
+```
+virtual int CompileGraph(lite::Model *model)
+```
+编译MindSpore Lite模型。
+
+> 注意: CompileGraph必须在RunGraph方法之后调用。
+
+- 参数
+
+ - `model`: 定义了需要被编译的模型。
+
+- 返回值
+
+ STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h)中定义。
+
+```
+virtual std::vector GetInputs() const
+```
+获取MindSpore Lite模型的MSTensors输入。
+
+- 返回值
+
+ MindSpore Lite MSTensor向量。
+
+```
+std::vector GetInputsByName(const std::string &node_name) const
+```
+通过节点名获取MindSpore Lite模型的MSTensors输入。
+
+- 参数
+
+ - `node_name`: 定义了节点名。
+
+- 返回值
+
+ MindSpore Lite MSTensor向量。
+
+```
+virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBack &after = nullptr)
+```
+运行带有回调函数的会话。
+> 注意: RunGraph必须在CompileGraph方法之后调用。
+
+- 参数
+
+ - `before`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/session.html#kernelcallback) 结构体。定义了运行每个节点之前调用的回调函数。
+
+ - `after`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.0/session.html#kernelcallback) 结构体。定义了运行每个节点之后调用的回调函数。
+
+- 返回值
+
+ STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h)中定义。
+
+```
+virtual std::vector GetOutputsByNodeName(const std::string &node_name) const
+```
+通过节点名获取MindSpore Lite模型的MSTensors输出。
+
+- 参数
+
+ - `node_name`: 定义了节点名。
+
+- 返回值
+
+ MindSpore Lite MSTensor向量。
+
+```
+virtual std::unordered_map GetOutputs() const
+```
+获取与张量名相关联的MindSpore Lite模型的MSTensors输出。
+
+- 返回值
+
+ 包含输出张量名和MindSpore Lite MSTensor的容器类型变量。
+
+```
+virtual std::vector GetOutputTensorNames() const
+```
+获取由当前会话所编译的模型的输出张量名。
+
+- 返回值
+
+ 字符串向量,其中包含了按顺序排列的输出张量名。
+
+```
+virtual mindspore::tensor::MSTensor *GetOutputByTensorName(const std::string &tensor_name) const
+```
+通过张量名获取MindSpore Lite模型的MSTensors输出。
+
+- 参数
+
+ - `tensor_name`: 定义了张量名。
+
+- 返回值
+
+ 指向MindSpore Lite MSTensor的指针。
+
+```
+virtual int Resize(const std::vector &inputs, const std::vector> &dims)
+```
+调整输入的形状。
+
+- 参数
+
+ - `inputs`: 模型对应的所有输入。
+ - `dims`: 输入对应的新的shape,顺序注意要与inputs一致。
+
+- 返回值
+
+ STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/errorcode.h)中定义。
+
+**静态公有成员函数**
+
+```
+static LiteSession *CreateSession(lite::Context *context)
+```
+用于创建一个LiteSession指针的静态方法。
+
+- 参数
+
+ - `context`: 定义了所要创建的session的上下文。
+
+- 返回值
+
+ 指向MindSpore Lite LiteSession的指针。
+## KernelCallBack
+
+```
+using KernelCallBack = std::function inputs, std::vector outputs, const CallBackParam &opInfo)>
+```
+
+一个函数包装器。KernelCallBack 定义了指向回调函数的指针。
+
+## CallBackParam
+
+一个结构体。CallBackParam定义了回调函数的输入参数。
+**属性**
+
+```
+name_callback_param
+```
+**string** 类型变量。节点名参数。
+
+```
+type_callback_param
+```
**string** 类型变量。节点类型参数。
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/apicc/tensor.md b/docs/api_cpp/source_zh_cn/tensor.md
similarity index 46%
rename from lite/docs/source_zh_cn/apicc/tensor.md
rename to docs/api_cpp/source_zh_cn/tensor.md
index e9eae1f0fd9a62aa59e7b578b09a455bab843f1d..269d54d7428541c174a35e66d2af61e3bd91c74c 100644
--- a/lite/docs/source_zh_cn/apicc/tensor.md
+++ b/docs/api_cpp/source_zh_cn/tensor.md
@@ -1,6 +1,6 @@
# mindspore::tensor
-#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)>
+#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/lite/include/ms_tensor.h)>
## MSTensor
@@ -15,7 +15,7 @@ MindSpore Lite MSTensor的构造函数。
- 返回值
- MindSpore Lite MSTensor 的实例。
+ MindSpore Lite MSTensor的实例。
```
virtual ~MSTensor()
@@ -29,25 +29,12 @@ virtual TypeId data_type() const
```
获取MindSpore Lite MSTensor的数据类型。
-> 注意:TypeId在[mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h)中定义。只有TypeId枚举中的数字类型可用于MSTensor。
+> 注意:TypeId在[mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/r1.0/mindspore/core/ir/dtype/type_id.h)中定义。只有TypeId枚举中的数字类型可用于MSTensor。
- 返回值
MindSpore Lite MSTensor类的MindSpore Lite TypeId。
-```
-virtual TypeId set_data_type(TypeId data_type)
-```
-设置MindSpore Lite MSTensor的数据类型。
-
-- 参数
-
- - `data_type`: 定义了MindSpore Lite MSTensor所需设置的MindSpore Lite TypeId。
-
-- 返回值
-
- 设置后的MindSpore Lite MSTensor的MindSpore Lite TypeI。
-
```
virtual std::vector shape() const
```
@@ -57,23 +44,10 @@ virtual std::vector shape() const
一个包含MindSpore Lite MSTensor形状数值的整型向量。
-```
-virtual size_t set_shape(const std::vector &shape)
-```
-设置MindSpore Lite MSTensor的形状.
-
-- 参数
-
- - `shape`: 定义了一个整型向量,包含了所需设置的MindSpore Lite MSTensor形状数值。
-
-- 返回值
-
- 设置形状后的MindSpore Lite MSTensor的大小。
-
```
virtual int DimensionSize(size_t index) const
```
-Get size of the dimension of the MindSpore Lite MSTensor index by the parameter index.
+通过参数索引获取MindSpore Lite MSTensor的维度的大小。
- 参数
@@ -92,15 +66,6 @@ virtual int ElementsNum() const
MSTensor中的元素个数
-```
-virtual std::size_t hash() const
-```
-获取MindSpore Lite MSTensor的哈希码。
-
-- 返回值
-
- MindSpore Lite MSTensor的哈希码。
-
```
virtual size_t Size() const
```
@@ -121,22 +86,3 @@ virtual void *MutableData() const
- 返回值
指向MSTensor中的数据的指针。
-
-**静态公有成员函数**
-
-```
-static MSTensor *CreateTensor(TypeId data_type, const std::vector &shape)
-```
-创建MSTensor指针的静态方法。
-
-> 注意:TypeId在[mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h)中定义。只有TypeId枚举中的数字类型可用于MSTensor。
-
-- 参数
-
- - `data_type`: 定义了所要创建的张量的数据类型。
-
- - `shape`: 定义了所要创建的张量的形状。
-
-- 返回值
-
- 指向MSTensor的指针。
\ No newline at end of file
diff --git a/docs/Makefile b/docs/api_java/Makefile
similarity index 100%
rename from docs/Makefile
rename to docs/api_java/Makefile
diff --git a/docs/requirements.txt b/docs/api_java/requirements.txt
similarity index 100%
rename from docs/requirements.txt
rename to docs/api_java/requirements.txt
diff --git a/docs/api_java/source_en/_static/logo_notebook.png b/docs/api_java/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_java/source_en/_static/logo_notebook.png differ
diff --git a/docs/source_en/_static/logo_source.png b/docs/api_java/source_en/_static/logo_source.png
similarity index 100%
rename from docs/source_en/_static/logo_source.png
rename to docs/api_java/source_en/_static/logo_source.png
diff --git a/docs/api_java/source_en/conf.py b/docs/api_java/source_en/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..4020d50f7b5f7a90b26785749cb1d41046b4723c
--- /dev/null
+++ b/docs/api_java/source_en/conf.py
@@ -0,0 +1,61 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+# import sys
+# sys.path.append('..')
+# sys.path.insert(0, os.path.abspath('.'))
+
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/docs/api_java/source_zh_cn/_static/logo_notebook.png b/docs/api_java/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_java/source_zh_cn/_static/logo_notebook.png differ
diff --git a/docs/source_zh_cn/_static/logo_source.png b/docs/api_java/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from docs/source_zh_cn/_static/logo_source.png
rename to docs/api_java/source_zh_cn/_static/logo_source.png
diff --git a/docs/api_java/source_zh_cn/conf.py b/docs/api_java/source_zh_cn/conf.py
new file mode 100644
index 0000000000000000000000000000000000000000..e3dfb2a0a9fc6653113e7b2bb878a5497ceb4a2b
--- /dev/null
+++ b/docs/api_java/source_zh_cn/conf.py
@@ -0,0 +1,65 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# This file only contains a selection of the most common options. For a full
+# list see the documentation:
+# https://www.sphinx-doc.org/en/master/usage/configuration.html
+
+# -- Path setup --------------------------------------------------------------
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+# import sys
+# sys.path.append('..')
+# sys.path.insert(0, os.path.abspath('.'))
+
+# -- Project information -----------------------------------------------------
+
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
+
+# The full version, including alpha/beta/rc tags
+release = 'master'
+
+
+# -- General configuration ---------------------------------------------------
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx_markdown_tables',
+ 'recommonmark',
+]
+
+source_suffix = {
+ '.rst': 'restructuredtext',
+ '.md': 'markdown',
+}
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = []
+
+pygments_style = 'sphinx'
+
+
+# -- Options for HTML output -------------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+html_search_language = 'zh'
+
+html_search_options = {'dict': '../../resource/jieba.txt'}
+
+html_static_path = ['_static']
\ No newline at end of file
diff --git a/lite/docs/Makefile b/docs/api_python/Makefile
similarity index 100%
rename from lite/docs/Makefile
rename to docs/api_python/Makefile
diff --git a/api/numpy_objects.inv b/docs/api_python/numpy_objects.inv
similarity index 100%
rename from api/numpy_objects.inv
rename to docs/api_python/numpy_objects.inv
diff --git a/api/python_objects.inv b/docs/api_python/python_objects.inv
similarity index 100%
rename from api/python_objects.inv
rename to docs/api_python/python_objects.inv
diff --git a/docs/api_python/requirements.txt b/docs/api_python/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..162b50040286bb9a0177801c580a31013082a360
--- /dev/null
+++ b/docs/api_python/requirements.txt
@@ -0,0 +1,6 @@
+sphinx >= 2.2.1, <= 2.4.4
+recommonmark
+sphinx-markdown-tables
+sphinx_rtd_theme
+numpy
+jieba
diff --git a/api/run.sh b/docs/api_python/run.sh
similarity index 100%
rename from api/run.sh
rename to docs/api_python/run.sh
diff --git a/docs/api_python/source_en/_static/logo_notebook.png b/docs/api_python/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_python/source_en/_static/logo_notebook.png differ
diff --git a/lite/docs/source_en/_static/logo_source.png b/docs/api_python/source_en/_static/logo_source.png
similarity index 100%
rename from lite/docs/source_en/_static/logo_source.png
rename to docs/api_python/source_en/_static/logo_source.png
diff --git a/api/source_en/conf.py b/docs/api_python/source_en/conf.py
similarity index 100%
rename from api/source_en/conf.py
rename to docs/api_python/source_en/conf.py
diff --git a/docs/api_python/source_en/index.rst b/docs/api_python/source_en/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..c7cf2eb81d7a59a5fe85264ef66ac2b6f4bcdfad
--- /dev/null
+++ b/docs/api_python/source_en/index.rst
@@ -0,0 +1,48 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 11:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore API
+=============
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindSpore Python API
+
+ mindspore/mindspore
+ mindspore/mindspore.common.initializer
+ mindspore/mindspore.communication
+ mindspore/mindspore.context
+ mindspore/mindspore.dataset
+ mindspore/mindspore.dataset.config
+ mindspore/mindspore.dataset.text
+ mindspore/mindspore.dataset.transforms
+ mindspore/mindspore.dataset.vision
+ mindspore/mindspore.mindrecord
+ mindspore/mindspore.nn
+ mindspore/mindspore.nn.dynamic_lr
+ mindspore/mindspore.nn.probability
+ mindspore/mindspore.ops
+ mindspore/mindspore.profiler
+ mindspore/mindspore.train
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindArmour Python API
+
+ mindarmour/mindarmour
+ mindarmour/mindarmour.adv_robustness.attacks
+ mindarmour/mindarmour.adv_robustness.defenses
+ mindarmour/mindarmour.adv_robustness.detectors
+ mindarmour/mindarmour.adv_robustness.evaluations
+ mindarmour/mindarmour.fuzz_testing
+ mindarmour/mindarmour.privacy.diff_privacy
+ mindarmour/mindarmour.privacy.evaluation
+ mindarmour/mindarmour.utils
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindSpore Hub Python API
+
+ mindspore_hub/mindspore_hub
diff --git a/api/source_en/api/python/mindarmour/mindarmour.adv_robustness.attacks.rst b/docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.attacks.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.adv_robustness.attacks.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.attacks.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.adv_robustness.defenses.rst b/docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.defenses.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.adv_robustness.defenses.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.defenses.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.adv_robustness.detectors.rst b/docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.detectors.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.adv_robustness.detectors.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.detectors.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.adv_robustness.evaluations.rst b/docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.evaluations.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.adv_robustness.evaluations.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.adv_robustness.evaluations.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.fuzz_testing.rst b/docs/api_python/source_en/mindarmour/mindarmour.fuzz_testing.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.fuzz_testing.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.fuzz_testing.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.privacy.diff_privacy.rst b/docs/api_python/source_en/mindarmour/mindarmour.privacy.diff_privacy.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.privacy.diff_privacy.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.privacy.diff_privacy.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.privacy.evaluation.rst b/docs/api_python/source_en/mindarmour/mindarmour.privacy.evaluation.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.privacy.evaluation.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.privacy.evaluation.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.rst b/docs/api_python/source_en/mindarmour/mindarmour.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.rst
diff --git a/api/source_en/api/python/mindarmour/mindarmour.utils.rst b/docs/api_python/source_en/mindarmour/mindarmour.utils.rst
similarity index 100%
rename from api/source_en/api/python/mindarmour/mindarmour.utils.rst
rename to docs/api_python/source_en/mindarmour/mindarmour.utils.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.common.initializer.rst b/docs/api_python/source_en/mindspore/mindspore.common.initializer.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.common.initializer.rst
rename to docs/api_python/source_en/mindspore/mindspore.common.initializer.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.communication.rst b/docs/api_python/source_en/mindspore/mindspore.communication.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.communication.rst
rename to docs/api_python/source_en/mindspore/mindspore.communication.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.context.rst b/docs/api_python/source_en/mindspore/mindspore.context.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.context.rst
rename to docs/api_python/source_en/mindspore/mindspore.context.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.config.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.config.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.config.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.config.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.text.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.text.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.text.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.text.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.transforms.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.transforms.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.transforms.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.transforms.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.dataset.vision.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.vision.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.dataset.vision.rst
rename to docs/api_python/source_en/mindspore/mindspore.dataset.vision.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.mindrecord.rst b/docs/api_python/source_en/mindspore/mindspore.mindrecord.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.mindrecord.rst
rename to docs/api_python/source_en/mindspore/mindspore.mindrecord.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.nn.dynamic_lr.rst b/docs/api_python/source_en/mindspore/mindspore.nn.dynamic_lr.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.nn.dynamic_lr.rst
rename to docs/api_python/source_en/mindspore/mindspore.nn.dynamic_lr.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.nn.learning_rate_schedule.rst b/docs/api_python/source_en/mindspore/mindspore.nn.learning_rate_schedule.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.nn.learning_rate_schedule.rst
rename to docs/api_python/source_en/mindspore/mindspore.nn.learning_rate_schedule.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.nn.probability.rst b/docs/api_python/source_en/mindspore/mindspore.nn.probability.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.nn.probability.rst
rename to docs/api_python/source_en/mindspore/mindspore.nn.probability.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.nn.rst b/docs/api_python/source_en/mindspore/mindspore.nn.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.nn.rst
rename to docs/api_python/source_en/mindspore/mindspore.nn.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.ops.rst b/docs/api_python/source_en/mindspore/mindspore.ops.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.ops.rst
rename to docs/api_python/source_en/mindspore/mindspore.ops.rst
diff --git a/api/source_en/api/python/mindspore/mindspore.profiler.rst b/docs/api_python/source_en/mindspore/mindspore.profiler.rst
similarity index 100%
rename from api/source_en/api/python/mindspore/mindspore.profiler.rst
rename to docs/api_python/source_en/mindspore/mindspore.profiler.rst
diff --git a/docs/api_python/source_en/mindspore/mindspore.rst b/docs/api_python/source_en/mindspore/mindspore.rst
new file mode 100644
index 0000000000000000000000000000000000000000..f510c8b5fcf74c317579ce0b95f25d24a324fe79
--- /dev/null
+++ b/docs/api_python/source_en/mindspore/mindspore.rst
@@ -0,0 +1,109 @@
+mindspore
+=========
+
+.. class:: mindspore.dtype
+
+ Create a data type object of MindSpore.
+
+ The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
+ Run the following command to import the package:
+
+ .. code-block::
+
+ import mindspore.common.dtype as mstype
+
+ or
+
+ .. code-block::
+
+ from mindspore import dtype as mstype
+
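+    For example, ``mstype`` can then be used to specify the data type of a ``Tensor``. The following is a minimal illustrative snippet (it assumes NumPy is installed and is not part of the generated API reference):
+
+    .. code-block:: python
+
+        import numpy as np
+        from mindspore import Tensor
+        from mindspore import dtype as mstype
+
+        # Create a tensor and set its data type explicitly.
+        x = Tensor(np.ones((2, 3)), mstype.float32)
+        print(x.dtype)  # Float32
+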
+ * **Numeric Type**
+
+ Currently, MindSpore supports ``Int`` type, ``Uint`` type and ``Float`` type.
+ The following table lists the details.
+
+ ============================================== =============================
+ Definition Description
+ ============================================== =============================
+ ``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
+ ``mindspore.int16`` , ``mindspore.short`` 16-bit integer
+ ``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
+ ``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
+ ``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
+ ``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
+ ``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
+ ``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
+ ``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
+ ``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
+ ``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
+ ============================================== =============================
+
+ * **Other Type**
+
+ For other defined types, see the following table.
+
+ ============================ =================
+ Type Description
+ ============================ =================
+ ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see [tensor](https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/tensor.py).
+ ``MetaTensor`` A tensor only has data type and shape. For details, see [MetaTensor](https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/parameter.py).
+ ``bool_`` Boolean ``True`` or ``False``.
+ ``int_`` Integer scalar.
+ ``uint`` Unsigned integer scalar.
+ ``float_`` Floating-point scalar.
+ ``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
+ ``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
+ ``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
+   ``function``                 Function. There are two return forms: when the function is not None, Func is returned directly; when the function is None, Func(args: List[T0,T1,...,Tn], retval: T) is returned.
+ ``type_type`` Type definition of type.
+ ``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
+ ``symbolic_key`` The value of a variable is used as a key of the variable in ``env_type`` .
+ ``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
+ ============================ =================
+
+ * **Tree Topology**
+
+ The relationships of the above types are as follows:
+
+ .. code-block::
+
+
+ └─────── number
+ │ ├─── bool_
+ │ ├─── int_
+ │ │ ├─── int8, byte
+ │ │ ├─── int16, short
+ │ │ ├─── int32, intc
+ │ │ └─── int64, intp
+ │ ├─── uint
+ │ │ ├─── uint8, ubyte
+ │ │ ├─── uint16, ushort
+ │ │ ├─── uint32, uintc
+ │ │ └─── uint64, uintp
+ │ └─── float_
+ │ ├─── float16
+ │ ├─── float32
+ │ └─── float64
+ ├─── tensor
+ │ ├─── Array[Float32]
+ │ └─── ...
+ ├─── list_
+ │ ├─── List[Int32,Float32]
+ │ └─── ...
+ ├─── tuple_
+ │ ├─── Tuple[Int32,Float32]
+ │ └─── ...
+ ├─── function
+ │ ├─── Func
+ │ ├─── Func[(Int32, Float32), Int32]
+ │ └─── ...
+ ├─── MetaTensor
+ ├─── type_type
+ ├─── type_none
+ ├─── symbolic_key
+ └─── env_type
+
+.. automodule:: mindspore
+ :members:
+ :exclude-members: Model, dataset_helper,
\ No newline at end of file
diff --git a/api/source_en/api/python/mindspore/mindspore.train.rst b/docs/api_python/source_en/mindspore/mindspore.train.rst
similarity index 75%
rename from api/source_en/api/python/mindspore/mindspore.train.rst
rename to docs/api_python/source_en/mindspore/mindspore.train.rst
index eb6753e672430d68f149e034791d0d7443125a78..3d24633055440776b8db533368c656d7a7a18fce 100644
--- a/api/source_en/api/python/mindspore/mindspore.train.rst
+++ b/docs/api_python/source_en/mindspore/mindspore.train.rst
@@ -1,6 +1,18 @@
mindspore.train
===============
+mindspore.train.model
+---------------------
+
+.. automodule:: mindspore.train.model
+ :members:
+
+mindspore.train.dataset_helper
+------------------------------
+
+.. automodule:: mindspore.train.dataset_helper
+ :members:
+
mindspore.train.summary
-----------------------
diff --git a/api/source_en/api/python/mindspore_hub/mindspore_hub.rst b/docs/api_python/source_en/mindspore_hub/mindspore_hub.rst
similarity index 100%
rename from api/source_en/api/python/mindspore_hub/mindspore_hub.rst
rename to docs/api_python/source_en/mindspore_hub/mindspore_hub.rst
diff --git a/docs/api_python/source_zh_cn/_static/logo_notebook.png b/docs/api_python/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/api_python/source_zh_cn/_static/logo_notebook.png differ
diff --git a/lite/docs/source_zh_cn/_static/logo_source.png b/docs/api_python/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from lite/docs/source_zh_cn/_static/logo_source.png
rename to docs/api_python/source_zh_cn/_static/logo_source.png
diff --git a/api/source_zh_cn/conf.py b/docs/api_python/source_zh_cn/conf.py
similarity index 97%
rename from api/source_zh_cn/conf.py
rename to docs/api_python/source_zh_cn/conf.py
index 2e2c89c42b33b7410e1fffcf68104ebfdd93c068..e10907fd0168de0180f3b0e093d9ba0a253ff05a 100644
--- a/api/source_zh_cn/conf.py
+++ b/docs/api_python/source_zh_cn/conf.py
@@ -76,7 +76,7 @@ html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
-html_search_options = {'dict': '../resource/jieba.txt'}
+html_search_options = {'dict': '../../resource/jieba.txt'}
html_static_path = ['_static']
diff --git a/docs/api_python/source_zh_cn/index.rst b/docs/api_python/source_zh_cn/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..7131254879048e85e77240cfd80be9508cc06b59
--- /dev/null
+++ b/docs/api_python/source_zh_cn/index.rst
@@ -0,0 +1,54 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 11:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore API
+=============
+
+.. toctree::
+ :maxdepth: 1
+ :caption: 编程指南
+
+ programming_guide/api_structure
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindSpore Python API
+
+ mindspore/mindspore
+ mindspore/mindspore.common.initializer
+ mindspore/mindspore.communication
+ mindspore/mindspore.context
+ mindspore/mindspore.dataset
+ mindspore/mindspore.dataset.config
+ mindspore/mindspore.dataset.text
+ mindspore/mindspore.dataset.transforms
+ mindspore/mindspore.dataset.vision
+ mindspore/mindspore.mindrecord
+ mindspore/mindspore.nn
+ mindspore/mindspore.nn.dynamic_lr
+ mindspore/mindspore.nn.probability
+ mindspore/mindspore.ops
+ mindspore/mindspore.profiler
+ mindspore/mindspore.train
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindArmour Python API
+
+ mindarmour/mindarmour
+ mindarmour/mindarmour.adv_robustness.attacks
+ mindarmour/mindarmour.adv_robustness.defenses
+ mindarmour/mindarmour.adv_robustness.detectors
+ mindarmour/mindarmour.adv_robustness.evaluations
+ mindarmour/mindarmour.fuzz_testing
+ mindarmour/mindarmour.privacy.diff_privacy
+ mindarmour/mindarmour.privacy.evaluation
+ mindarmour/mindarmour.utils
+
+.. toctree::
+ :maxdepth: 1
+ :caption: MindSpore Hub Python API
+
+ mindspore_hub/mindspore_hub
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.attacks.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.attacks.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.attacks.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.attacks.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.defenses.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.defenses.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.defenses.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.defenses.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.detectors.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.detectors.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.detectors.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.detectors.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.evaluations.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.evaluations.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.adv_robustness.evaluations.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.adv_robustness.evaluations.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.fuzz_testing.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.fuzz_testing.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.fuzz_testing.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.fuzz_testing.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.privacy.diff_privacy.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.privacy.diff_privacy.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.privacy.diff_privacy.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.privacy.diff_privacy.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.privacy.evaluation.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.privacy.evaluation.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.privacy.evaluation.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.privacy.evaluation.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.rst
diff --git a/api/source_zh_cn/api/python/mindarmour/mindarmour.utils.rst b/docs/api_python/source_zh_cn/mindarmour/mindarmour.utils.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindarmour/mindarmour.utils.rst
rename to docs/api_python/source_zh_cn/mindarmour/mindarmour.utils.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.common.initializer.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.common.initializer.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.common.initializer.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.common.initializer.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.communication.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.communication.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.communication.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.communication.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.context.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.context.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.context.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.context.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.config.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.config.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.config.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.config.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.text.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.text.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.text.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.text.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.transforms.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.transforms.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.transforms.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.transforms.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.dataset.vision.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.vision.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.dataset.vision.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.dataset.vision.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.mindrecord.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.mindrecord.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.mindrecord.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.mindrecord.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.dynamic_lr.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.nn.dynamic_lr.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.nn.dynamic_lr.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.nn.dynamic_lr.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.learning_rate_schedule.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.nn.learning_rate_schedule.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.nn.learning_rate_schedule.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.nn.learning_rate_schedule.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.probability.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.nn.probability.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.nn.probability.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.nn.probability.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.nn.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.nn.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.nn.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.ops.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.ops.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.profiler.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.profiler.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore/mindspore.profiler.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.profiler.rst
diff --git a/docs/api_python/source_zh_cn/mindspore/mindspore.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.rst
new file mode 100644
index 0000000000000000000000000000000000000000..f510c8b5fcf74c317579ce0b95f25d24a324fe79
--- /dev/null
+++ b/docs/api_python/source_zh_cn/mindspore/mindspore.rst
@@ -0,0 +1,109 @@
+mindspore
+=========
+
+.. class:: mindspore.dtype
+
+ Create a data type object of MindSpore.
+
+ The actual path of ``dtype`` is ``/mindspore/common/dtype.py``.
+ Run the following command to import the package:
+
+ .. code-block::
+
+ import mindspore.common.dtype as mstype
+
+ or
+
+ .. code-block::
+
+ from mindspore import dtype as mstype
+
+ * **Numeric Type**
+
+ Currently, MindSpore supports ``Int`` type, ``Uint`` type and ``Float`` type.
+ The following table lists the details.
+
+ ============================================== =============================
+ Definition Description
+ ============================================== =============================
+ ``mindspore.int8`` , ``mindspore.byte`` 8-bit integer
+ ``mindspore.int16`` , ``mindspore.short`` 16-bit integer
+ ``mindspore.int32`` , ``mindspore.intc`` 32-bit integer
+ ``mindspore.int64`` , ``mindspore.intp`` 64-bit integer
+ ``mindspore.uint8`` , ``mindspore.ubyte`` unsigned 8-bit integer
+ ``mindspore.uint16`` , ``mindspore.ushort`` unsigned 16-bit integer
+ ``mindspore.uint32`` , ``mindspore.uintc`` unsigned 32-bit integer
+ ``mindspore.uint64`` , ``mindspore.uintp`` unsigned 64-bit integer
+ ``mindspore.float16`` , ``mindspore.half`` 16-bit floating-point number
+ ``mindspore.float32`` , ``mindspore.single`` 32-bit floating-point number
+ ``mindspore.float64`` , ``mindspore.double`` 64-bit floating-point number
+ ============================================== =============================
+
+ * **Other Type**
+
+ For other defined types, see the following table.
+
+ ============================ =================
+ Type Description
+ ============================ =================
+ ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see [tensor](https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/tensor.py).
+ ``MetaTensor`` A tensor only has data type and shape. For details, see [MetaTensor](https://www.gitee.com/mindspore/mindspore/blob/master/mindspore/common/parameter.py).
+ ``bool_`` Boolean ``True`` or ``False``.
+ ``int_`` Integer scalar.
+ ``uint`` Unsigned integer scalar.
+ ``float_`` Floating-point scalar.
+ ``number`` Number, including ``int_`` , ``uint`` , ``float_`` and ``bool_`` .
+ ``list_`` List constructed by ``tensor`` , such as ``List[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
+ ``tuple_`` Tuple constructed by ``tensor`` , such as ``Tuple[T0,T1,...,Tn]`` , where the element ``Ti`` can be of different types.
+   ``function``                 Function. There are two return forms: when the function is not None, Func is returned directly; when the function is None, Func(args: List[T0,T1,...,Tn], retval: T) is returned.
+ ``type_type`` Type definition of type.
+ ``type_none`` No matching return type, corresponding to the ``type(None)`` in Python.
+ ``symbolic_key`` The value of a variable is used as a key of the variable in ``env_type`` .
+ ``env_type`` Used to store the gradient of the free variable of a function, where the key is the ``symbolic_key`` of the free variable's node and the value is the gradient.
+ ============================ =================
+
+ * **Tree Topology**
+
+ The relationships of the above types are as follows:
+
+ .. code-block::
+
+
+ └─────── number
+ │ ├─── bool_
+ │ ├─── int_
+ │ │ ├─── int8, byte
+ │ │ ├─── int16, short
+ │ │ ├─── int32, intc
+ │ │ └─── int64, intp
+ │ ├─── uint
+ │ │ ├─── uint8, ubyte
+ │ │ ├─── uint16, ushort
+ │ │ ├─── uint32, uintc
+ │ │ └─── uint64, uintp
+ │ └─── float_
+ │ ├─── float16
+ │ ├─── float32
+ │ └─── float64
+ ├─── tensor
+ │ ├─── Array[Float32]
+ │ └─── ...
+ ├─── list_
+ │ ├─── List[Int32,Float32]
+ │ └─── ...
+ ├─── tuple_
+ │ ├─── Tuple[Int32,Float32]
+ │ └─── ...
+ ├─── function
+ │ ├─── Func
+ │ ├─── Func[(Int32, Float32), Int32]
+ │ └─── ...
+ ├─── MetaTensor
+ ├─── type_type
+ ├─── type_none
+ ├─── symbolic_key
+ └─── env_type
+
+.. automodule:: mindspore
+ :members:
+ :exclude-members: Model, dataset_helper,
\ No newline at end of file
diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.train.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.train.rst
similarity index 75%
rename from api/source_zh_cn/api/python/mindspore/mindspore.train.rst
rename to docs/api_python/source_zh_cn/mindspore/mindspore.train.rst
index eb6753e672430d68f149e034791d0d7443125a78..3d24633055440776b8db533368c656d7a7a18fce 100644
--- a/api/source_zh_cn/api/python/mindspore/mindspore.train.rst
+++ b/docs/api_python/source_zh_cn/mindspore/mindspore.train.rst
@@ -1,6 +1,18 @@
mindspore.train
===============
+mindspore.train.model
+---------------------
+
+.. automodule:: mindspore.train.model
+ :members:
+
+mindspore.train.dataset_helper
+------------------------------
+
+.. automodule:: mindspore.train.dataset_helper
+ :members:
+
mindspore.train.summary
-----------------------
diff --git a/api/source_zh_cn/api/python/mindspore_hub/mindspore_hub.rst b/docs/api_python/source_zh_cn/mindspore_hub/mindspore_hub.rst
similarity index 100%
rename from api/source_zh_cn/api/python/mindspore_hub/mindspore_hub.rst
rename to docs/api_python/source_zh_cn/mindspore_hub/mindspore_hub.rst
diff --git a/lite/tutorials/Makefile b/docs/faq/Makefile
similarity index 100%
rename from lite/tutorials/Makefile
rename to docs/faq/Makefile
diff --git a/docs/faq/requirements.txt b/docs/faq/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..162b50040286bb9a0177801c580a31013082a360
--- /dev/null
+++ b/docs/faq/requirements.txt
@@ -0,0 +1,6 @@
+sphinx >= 2.2.1, <= 2.4.4
+recommonmark
+sphinx-markdown-tables
+sphinx_rtd_theme
+numpy
+jieba
diff --git a/docs/source_en/FAQ.md b/docs/faq/source_en/FAQ.md
similarity index 94%
rename from docs/source_en/FAQ.md
rename to docs/faq/source_en/FAQ.md
index 153aaf1e004ca93f5752dd32ea114f0895cbf2d2..d96cb9365b6fe0af540f1fa44885e02719f9c2e0 100644
--- a/docs/source_en/FAQ.md
+++ b/docs/faq/source_en/FAQ.md
@@ -1,4 +1,4 @@
-# FAQ
+# FAQ
`Linux` `Windows` `Ascend` `GPU` `CPU` `Environmental Setup` `Model Export` `Model Training` `Beginner` `Intermediate` `Expert`
@@ -18,7 +18,7 @@
- [Supported Features](#supported-features)
-
+
## Installation
@@ -116,7 +116,7 @@ A: After MindSpore is installed on a CPU hardware platform, run the `python -c'i
Q: What can I do if the LSTM example on the official website cannot run on Ascend?
-A: Currently, the LSTM runs only on a GPU or CPU and does not support the hardware environment. You can click [here](https://www.mindspore.cn/docs/en/master/operator_list.html) to view the supported operators.
+A: Currently, the LSTM runs only on a GPU or CPU and is not supported in the Ascend hardware environment. You can click [here](https://www.mindspore.cn/doc/programming_guide/en/r1.0/operator_list_ms.html) to view the supported operators.
@@ -134,7 +134,7 @@ A: MindSpore uses protocol buffers (protobuf) to store training parameters and c
Q: How do I use models trained by MindSpore on Ascend 310? Can they be converted to models used by HiLens Kit?
-A: Yes. HiLens Kit uses Ascend 310 as the inference core. Therefore, the two questions are essentially the same. Ascend 310 requires a dedicated OM model. Use MindSpore to export the ONNX or AIR model and convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html).
+A: Yes. HiLens Kit uses Ascend 310 as the inference core. Therefore, the two questions are essentially the same. Ascend 310 requires a dedicated OM model. Use MindSpore to export the ONNX or AIR model and convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/inference/en/r1.0/multi_platform_inference.html).
@@ -146,19 +146,19 @@ A: When building a network, use `if self.training: x = dropput(x)`. During verif
Q: Where can I view the sample code or tutorial of MindSpore training and inference?
-A: Please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/en/master/index.html).
+A: Please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/en/r1.0/index.html).
Q: What types of model is currently supported by MindSpore for training?
-A: MindSpore has basic support for common training scenarios, please refer to [Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md) for detailed information.
+A: MindSpore has basic support for common training scenarios, please refer to [Release note](https://gitee.com/mindspore/mindspore/blob/r1.0/RELEASE.md) for detailed information.
Q: What are the available recommendation or text generation networks or models provided by MindSpore?
-A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo).
@@ -176,7 +176,7 @@ A: Ascend 310 can only be used for inference. MindSpore supports training on Asc
Q: Does MindSpore require computing units such as GPUs and NPUs? What hardware support is required?
-A: MindSpore currently supports CPU, GPU, Ascend, and NPU. Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs. Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [Road Map](https://www.mindspore.cn/docs/en/master/roadmap.html) and [project release notes](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md).
+A: MindSpore currently supports CPU, GPU, Ascend, and NPU. Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs. Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [Road Map](https://www.mindspore.cn/doc/note/en/r1.0/roadmap.html) and [project release notes](https://gitee.com/mindspore/mindspore/blob/r1.0/RELEASE.md).
@@ -188,7 +188,7 @@ A: MindSpore provides pluggable device management interface so that developer co
Q: What is the relationship between MindSpore and ModelArts? Can MindSpore be used on ModelArts?
-A: ModelArts is an online training and inference platform on HUAWEI CLOUD. MindSpore is a Huawei deep learning framework. You can view the tutorials on the [MindSpore official website](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/use_on_the_cloud.html) to learn how to train MindSpore models on ModelArts.
+A: ModelArts is an online training and inference platform on HUAWEI CLOUD. MindSpore is a Huawei deep learning framework. You can view the tutorials on the [MindSpore official website](https://www.mindspore.cn/tutorial/training/en/r1.0/advanced_use/use_on_the_cloud.html) to learn how to train MindSpore models on ModelArts.
@@ -232,7 +232,7 @@ A: The problem is that the Graph mode is selected but the PyNative mode is used.
- PyNative mode: dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model.
- Graph mode: static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution. This mode uses technologies such as graph optimization to improve the running performance and facilitates large-scale deployment and cross-platform running.
-You can select a proper mode and writing method to complete the training by referring to the official website [tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/debugging_in_pynative_mode.html).
+You can select a proper mode and writing method to complete the training by referring to the official website [tutorial](https://www.mindspore.cn/tutorial/training/en/r1.0/advanced_use/debug_in_pynative_mode.html).
## Programming Language Extensions
@@ -255,7 +255,7 @@ A: In addition to data parallelism, MindSpore distributed training also supports
Q: Has MindSpore implemented the anti-pooling operation similar to `nn.MaxUnpool2d`?
-A: Currently, MindSpore does not provide anti-pooling APIs but you can customize the operator to implement the operation. For details, click [here](https://www.mindspore.cn/tutorial/en/master/use/custom_operator.html).
+A: Currently, MindSpore does not provide anti-pooling APIs but you can customize the operator to implement the operation. For details, click [here](https://www.mindspore.cn/tutorial/training/en/r1.0/advanced_use/custom_operator_ascend.html).
@@ -291,7 +291,7 @@ A: The TensorFlow's object detection pipeline API belongs to the TensorFlow's Mo
Q: How do I migrate scripts or models of other frameworks to MindSpore?
-A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/en/master/advanced_use/network_migration.html).
+A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/training/en/r1.0/advanced_use/migrate_3rd_scripts.html).
diff --git a/docs/faq/source_en/_static/logo_notebook.png b/docs/faq/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/faq/source_en/_static/logo_notebook.png differ
diff --git a/lite/tutorials/source_en/_static/logo_source.png b/docs/faq/source_en/_static/logo_source.png
similarity index 100%
rename from lite/tutorials/source_en/_static/logo_source.png
rename to docs/faq/source_en/_static/logo_source.png
diff --git a/docs/source_en/conf.py b/docs/faq/source_en/conf.py
similarity index 97%
rename from docs/source_en/conf.py
rename to docs/faq/source_en/conf.py
index 6db42071de7116aaf62bbb7f0b2c09825b13c69b..a1fd767271ac159540440ed65bd0d676163366a9 100644
--- a/docs/source_en/conf.py
+++ b/docs/faq/source_en/conf.py
@@ -48,8 +48,6 @@ exclude_patterns = []
pygments_style = 'sphinx'
-autodoc_inherit_docstrings = False
-
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
diff --git a/docs/faq/source_en/index.rst b/docs/faq/source_en/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..a8969301c9ae45646390ccc0e329f56e648bdef5
--- /dev/null
+++ b/docs/faq/source_en/index.rst
@@ -0,0 +1,13 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 10:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore FAQ
+=================
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ faq
\ No newline at end of file
diff --git a/docs/source_zh_cn/FAQ.md b/docs/faq/source_zh_cn/FAQ.md
similarity index 91%
rename from docs/source_zh_cn/FAQ.md
rename to docs/faq/source_zh_cn/FAQ.md
index f768d6bcc284427c62a83a3b7daddafb45dc1b88..a0c816d094a978fb83d7cd70a66ca37c0f169bd4 100644
--- a/docs/source_zh_cn/FAQ.md
+++ b/docs/faq/source_zh_cn/FAQ.md
@@ -18,7 +18,7 @@
- [特性支持](#特性支持)
-
+
## 安装类
@@ -123,7 +123,7 @@ A:CPU硬件平台安装MindSpore后测试是否安装成功,只需要执行命
Q:官网的LSTM示例在Ascend上跑不通
-A:目前LSTM只支持在GPU和CPU上运行,暂不支持硬件环境,您可以[点击这里](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html)查看算子支持情况。
+A:目前LSTM只支持在GPU和CPU上运行,暂不支持硬件环境,您可以[点击这里](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.0/operator_list_ms.html)查看算子支持情况。
@@ -143,7 +143,7 @@ A: MindSpore采用protbuf存储训练参数,无法直接读取其他框架
Q:用MindSpore训练出的模型如何在Ascend 310上使用?可以转换成适用于HiLens Kit用的吗?
-A:Ascend 310需要运行专用的OM模型,先使用MindSpore导出ONNX或AIR模型,再转化为Ascend 310支持的OM模型。具体可参考[多平台推理](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html)。可以,HiLens Kit是以Ascend 310为推理核心,所以前后两个问题本质上是一样的,需要转换为OM模型.
+A:Ascend 310需要运行专用的OM模型,先使用MindSpore导出ONNX或AIR模型,再转化为Ascend 310支持的OM模型。具体可参考[多平台推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.0/multi_platform_inference.html)。可以,HiLens Kit是以Ascend 310为推理核心,所以前后两个问题本质上是一样的,需要转换为OM模型.
@@ -155,19 +155,19 @@ A:在构造网络的时候可以通过 `if self.training: x = dropput(x)`,
Q:从哪里可以查看MindSpore训练及推理的样例代码或者教程?
-A:可以访问[MindSpore官网教程](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)。
+A:可以访问[MindSpore官网教程](https://www.mindspore.cn/tutorial/zh-CN/r1.0/index.html)。
Q:MindSpore支持哪些模型的训练?
-A:MindSpore针对典型场景均有模型训练支持,支持情况详见[Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md)。
+A:MindSpore针对典型场景均有模型训练支持,支持情况详见[Release note](https://gitee.com/mindspore/mindspore/blob/r1.0/RELEASE.md)。
Q:MindSpore有哪些现成的推荐类或生成类网络或模型可用?
-A:目前正在开发Wide & Deep、DeepFM、NCF等推荐类模型,NLP领域已经支持Bert_NEZHA,正在开发MASS等模型,用户可根据场景需要改造为生成类网络,可以关注[MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)。
+A:目前正在开发Wide & Deep、DeepFM、NCF等推荐类模型,NLP领域已经支持Bert_NEZHA,正在开发MASS等模型,用户可根据场景需要改造为生成类网络,可以关注[MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo)。
@@ -187,7 +187,7 @@ A:Ascend 310只能用作推理,MindSpore支持在Ascend 910训练,训练
Q:安装运行MindSpore时,是否要求平台有GPU、NPU等计算单元?需要什么硬件支持?
-A:MindSpore当前支持CPU/GPU/Ascend /NPU。目前笔记本电脑或者有GPU的环境,都可以通过Docker镜像来试用。当前MindSpore Model Zoo中有部分模型已经支持GPU的训练和推理,其他模型也在不断地进行完善。在分布式并行训练方面,MindSpore当前支持GPU多卡训练。你可以通过[RoadMap](https://www.mindspore.cn/docs/zh-CN/master/roadmap.html)和项目[Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md)获取最新信息。
+A:MindSpore当前支持CPU/GPU/Ascend /NPU。目前笔记本电脑或者有GPU的环境,都可以通过Docker镜像来试用。当前MindSpore Model Zoo中有部分模型已经支持GPU的训练和推理,其他模型也在不断地进行完善。在分布式并行训练方面,MindSpore当前支持GPU多卡训练。你可以通过[RoadMap](https://www.mindspore.cn/doc/note/zh-CN/r1.0/roadmap.html)和项目[Release note](https://gitee.com/mindspore/mindspore/blob/r1.0/RELEASE.md)获取最新信息。
@@ -199,7 +199,7 @@ A:MindSpore提供了可插拔式的设备管理接口,其他计算单元(
Q:MindSpore与ModelArts是什么关系,在ModelArts中能使用MindSpore吗?
-A:ModelArts是华为公有云线上训练及推理平台,MindSpore是华为深度学习框架,可以查阅[MindSpore官网教程](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/use_on_the_cloud.html),教程中详细展示了用户如何使用ModelArts来做MindSpore的模型训练。
+A:ModelArts是华为公有云线上训练及推理平台,MindSpore是华为深度学习框架,可以查阅[MindSpore官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/use_on_the_cloud.html),教程中详细展示了用户如何使用ModelArts来做MindSpore的模型训练。
@@ -245,7 +245,7 @@ A:这边的问题是选择了Graph模式却使用了PyNative的写法,所以
- PyNative模式:也称动态图模式,将神经网络中的各个算子逐一下发执行,方便用户编写和调试神经网络模型。
- Graph模式:也称静态图模式或者图模式,将神经网络模型编译成一整张图,然后下发执行。该模式利用图优化等技术提高运行性能,同时有助于规模部署和跨平台运行。
-用户可以参考[官网教程](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/debugging_in_pynative_mode.html)选择合适、统一的模式和写法来完成训练。
+用户可以参考[官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/debug_in_pynative_mode.html)选择合适、统一的模式和写法来完成训练。
@@ -273,7 +273,7 @@ A:MindSpore分布式训练除了支持数据并行,还支持算子级模型
Q:请问MindSpore实现了反池化操作了吗?类似于`nn.MaxUnpool2d` 这个反池化操作?
-A:目前 MindSpore 还没有反池化相关的接口。如果用户想自己实现的话,可以通过自定义算子的方式自行开发算子,自定义算子[详见这里](https://www.mindspore.cn/tutorial/zh-CN/master/use/custom_operator.html)。
+A:目前 MindSpore 还没有反池化相关的接口。如果用户想自己实现的话,可以通过自定义算子的方式自行开发算子,自定义算子[详见这里](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/custom_operator_ascend.html)。
@@ -309,7 +309,7 @@ A:TensorFlow的对象检测Pipeline接口属于TensorFlow Model模块。待Min
Q:其他框架的脚本或者模型怎么迁移到MindSpore?
-A:关于脚本或者模型迁移,可以查询MindSpore官网中关于[网络迁移](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/network_migration.html)的介绍。
+A:关于脚本或者模型迁移,可以查询MindSpore官网中关于[网络迁移](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/migrate_3rd_scripts.html)的介绍。
diff --git a/docs/faq/source_zh_cn/_static/logo_notebook.png b/docs/faq/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/faq/source_zh_cn/_static/logo_notebook.png differ
diff --git a/lite/tutorials/source_zh_cn/_static/logo_source.png b/docs/faq/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from lite/tutorials/source_zh_cn/_static/logo_source.png
rename to docs/faq/source_zh_cn/_static/logo_source.png
diff --git a/docs/source_zh_cn/conf.py b/docs/faq/source_zh_cn/conf.py
similarity index 95%
rename from docs/source_zh_cn/conf.py
rename to docs/faq/source_zh_cn/conf.py
index f58451f3fafe89fa2a734d62f19c94424250757b..95d7701759707ab95a3c199cd8a22e2e2cc1194d 100644
--- a/docs/source_zh_cn/conf.py
+++ b/docs/faq/source_zh_cn/conf.py
@@ -48,8 +48,6 @@ exclude_patterns = []
pygments_style = 'sphinx'
-autodoc_inherit_docstrings = False
-
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
@@ -59,6 +57,6 @@ html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
-html_search_options = {'dict': '../resource/jieba.txt'}
+html_search_options = {'dict': '../../resource/jieba.txt'}
html_static_path = ['_static']
\ No newline at end of file
diff --git a/docs/faq/source_zh_cn/index.rst b/docs/faq/source_zh_cn/index.rst
new file mode 100644
index 0000000000000000000000000000000000000000..a8969301c9ae45646390ccc0e329f56e648bdef5
--- /dev/null
+++ b/docs/faq/source_zh_cn/index.rst
@@ -0,0 +1,13 @@
+.. MindSpore documentation master file, created by
+ sphinx-quickstart on Thu Mar 24 10:00:00 2020.
+ You can adapt this file completely to your liking, but it should at least
+ contain the root `toctree` directive.
+
+MindSpore FAQ
+=================
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ faq
\ No newline at end of file
diff --git a/tutorials/Makefile b/docs/note/Makefile
similarity index 100%
rename from tutorials/Makefile
rename to docs/note/Makefile
diff --git a/docs/note/requirements.txt b/docs/note/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..162b50040286bb9a0177801c580a31013082a360
--- /dev/null
+++ b/docs/note/requirements.txt
@@ -0,0 +1,6 @@
+sphinx >= 2.2.1, <= 2.4.4
+recommonmark
+sphinx-markdown-tables
+sphinx_rtd_theme
+numpy
+jieba
diff --git a/docs/note/source_en/_static/logo_notebook.png b/docs/note/source_en/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/note/source_en/_static/logo_notebook.png differ
diff --git a/tutorials/source_en/_static/logo_source.png b/docs/note/source_en/_static/logo_source.png
similarity index 100%
rename from tutorials/source_en/_static/logo_source.png
rename to docs/note/source_en/_static/logo_source.png
diff --git a/docs/source_en/benchmark.md b/docs/note/source_en/benchmark.md
similarity index 100%
rename from docs/source_en/benchmark.md
rename to docs/note/source_en/benchmark.md
diff --git a/docs/source_en/community.rst b/docs/note/source_en/community.rst
similarity index 90%
rename from docs/source_en/community.rst
rename to docs/note/source_en/community.rst
index 80c0ddf710394273e53eff9db6a1495d4a613a17..a0743659bf7114e4c183ee3e57d0203901fdb2da 100644
--- a/docs/source_en/community.rst
+++ b/docs/note/source_en/community.rst
@@ -1,4 +1,4 @@
-Community
+Participate in MindSpore Community
-=========
+==================================
Contributing Code
diff --git a/lite/docs/source_en/conf.py b/docs/note/source_en/conf.py
similarity index 93%
rename from lite/docs/source_en/conf.py
rename to docs/note/source_en/conf.py
index fd89055b9bf9e2a889d23de6fa7395072650db42..a1fd767271ac159540440ed65bd0d676163366a9 100644
--- a/lite/docs/source_en/conf.py
+++ b/docs/note/source_en/conf.py
@@ -15,9 +15,9 @@ import os
# -- Project information -----------------------------------------------------
-project = 'MindSpore Lite'
-copyright = '2020, MindSpore Lite'
-author = 'MindSpore Lite'
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
# The full version, including alpha/beta/rc tags
release = 'master'
@@ -48,8 +48,6 @@ exclude_patterns = []
pygments_style = 'sphinx'
-autodoc_inherit_docstrings = False
-
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
diff --git a/docs/source_en/constraints_on_network_construction.md b/docs/note/source_en/constraints_on_network_construction.md
similarity index 83%
rename from docs/source_en/constraints_on_network_construction.md
rename to docs/note/source_en/constraints_on_network_construction.md
index 2da31582ec511af4c51e499d81e350b0b4d91797..ccd96972dc660f5e520a0924b21956e3f7346bcd 100644
--- a/docs/source_en/constraints_on_network_construction.md
+++ b/docs/note/source_en/constraints_on_network_construction.md
@@ -232,34 +232,49 @@ Currently, the following syntax is not supported in network constructors:
### Other Constraints
-Input parameters of the construct function on the entire network and parameters of functions modified by the ms_function decorator are generalized during the graph compilation. Therefore, they cannot be transferred to operators as constant input. Therefore, in graph mode, the parameter passed to the entry network can only be Tensor. As shown in the following example:
-* The following is an example of incorrect input:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
-
- def construct(self, input_x, input_axis):
- return self.expandDims(input_x, input_axis)
- expand_dim = ExpandDimsTest()
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x, 0)
- ```
- In the example, ExpandDimsTest is a single-operator network with two inputs: input_x and input_axis. The second input of the ExpandDims operator must be a constant. This is because input_axis is required when the output dimension of the ExpandDims operator is deduced during graph compilation. As the network parameter input, the value of input_axis is generalized into a variable and cannot be determined. As a result, the output dimension of the operator cannot be deduced, causing the graph compilation failure. Therefore, the input required by deduction in the graph compilation phase must be a constant. In APIs, the "constant input is needed" is marked for parameters that require constant input of these operators.
+1. Input parameters of the `construct` function on the entire network and parameters of functions modified by the `ms_function` decorator are generalized during the graph compilation and cannot be passed to operators as constant input. Therefore, in graph mode, the parameter passed to the entry network can only be `Tensor`. As shown in the following example:
+
+ * The following is an example of incorrect input:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+
+ def construct(self, input_x, input_axis):
+ return self.expandDims(input_x, input_axis)
+ expand_dim = ExpandDimsTest()
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x, 0)
+ ```
+    In the example, `ExpandDimsTest` is a single-operator network with two inputs: `input_x` and `input_axis`. The second input of the `ExpandDims` operator must be a constant. This is because `input_axis` is required when the output dimension of the `ExpandDims` operator is deduced during graph compilation. As the network parameter input, the value of `input_axis` is generalized into a variable and cannot be determined. As a result, the output dimension of the operator cannot be deduced, causing the graph compilation failure. Therefore, the input required by deduction in the graph compilation phase must be a constant. In the API documentation, parameters of such operators that require constant input are explained and marked `const input is needed`.
+
+ * Directly enter the needed value or a member variable in a class for the constant input of the operator in the construct function. The following is an example of correct input:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self, axis):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+ self.axis = axis
+
+ def construct(self, input_x):
+ return self.expandDims(input_x, self.axis)
+ axis = 0
+ expand_dim = ExpandDimsTest(axis)
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x)
+ ```
+
+2. It is not allowed to modify `non-Parameter` type data members of the network. Examples are as follows:
-* Directly enter the needed value or a member variable in a class for the constant input of the operator in the construct function. The following is an example of correct input:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self, axis):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
- self.axis = axis
-
- def construct(self, input_x):
- return self.expandDims(input_x, self.axis)
- axis = 0
- expand_dim = ExpandDimsTest(axis)
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x)
```
+ class Net(Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.num = 2
+ self.par = Parameter(Tensor(np.ones((2, 3, 4))), name="par")
+
+ def construct(self, x, y):
+ return x + y
+ ```
+In the network defined above, `self.num` is not a `Parameter` and cannot be modified, but `self.par` is a `Parameter` and can be modified.
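+
+For example, the following `construct` is not allowed because it modifies the non-`Parameter` member `self.num` (an illustrative sketch based on the `Net` defined above):
+
+```python
+class BadNet(Cell):
+    def __init__(self):
+        super(BadNet, self).__init__()
+        self.num = 2
+
+    def construct(self, x, y):
+        self.num = self.num + 1   # not allowed: self.num is not a Parameter
+        return x + y
+```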
diff --git a/docs/source_zh_cn/design.rst b/docs/note/source_en/design.rst
similarity index 72%
rename from docs/source_zh_cn/design.rst
rename to docs/note/source_en/design.rst
index 5db1a3815235941f9a2a74420fe85dd36e1afad5..47a8c3c10493000762fdf82fa6c60abc8a24096c 100644
--- a/docs/source_zh_cn/design.rst
+++ b/docs/note/source_en/design.rst
@@ -1,14 +1,13 @@
-设计文档
+Design
===========
.. toctree::
:maxdepth: 1
- architecture
- technical_white_paper
- design/mindspore/ir
+ design/mindspore/architecture
+ design/mindspore/architecture_lite
+ design/mindspore/mindir
design/mindspore/distributed_training_design
- design/mindinsight/profiler_design
design/mindinsight/training_visual_design
design/mindinsight/graph_visual_design
design/mindinsight/tensor_visual_design
diff --git a/docs/note/source_en/design/mindarmour/differential_privacy_design.md b/docs/note/source_en/design/mindarmour/differential_privacy_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..86ce6049a28d0b7541c7b3bb10f27db8a062c846
--- /dev/null
+++ b/docs/note/source_en/design/mindarmour/differential_privacy_design.md
@@ -0,0 +1,71 @@
+# Differential Privacy
+
+`Ascend` `Model Development` `Model Optimization` `Framework Development` `Enterprise` `Expert` `Contributor`
+
+
+
+- [Differential Privacy](#differential-privacy)
+ - [Overall Design](#overall-design)
+ - [DP Optimizer](#dp-optimizer)
+ - [DP Mechanisms](#dp-mechanisms)
+ - [Monitor](#monitor)
+ - [Code Implementation](#code-implementation)
+ - [References](#references)
+
+
+
+
+
+## Overall Design
+
+The Differential-Privacy module of MindArmour implements the differential privacy training capability. Model training consists of building the training dataset, computing the loss, computing gradients, and updating model parameters. Currently, the differential privacy training of MindArmour focuses on the gradient computing process and uses the corresponding algorithm to clip and add noise to the gradient. In this way, user data privacy is protected.
+
+
+
+
+Figure 1 Overall design of differential privacy
+
+Figure 1 shows the overall design of differential privacy training, which mainly includes differential privacy noise mechanisms (DP mechanisms), a differential privacy optimizer (DP optimizer), and a privacy monitor.
+
+
+### DP Optimizer
+
+DP optimizer inherits capabilities of the MindSpore optimizer and uses the DP mechanisms to scramble and protect gradients. Currently, MindArmour provides three types of DP optimizers: constant Gaussian optimizer, adaptive Gaussian optimizer, and adaptive clipping optimizer. Each type of DP optimizer adds differential privacy protection capabilities to common optimizers such as SGD and Momentum from different perspectives.
+
+* Constant Gaussian optimizer is a DP optimizer for non-adaptive Gaussian noise. The advantage is that the differential privacy budget ϵ can be strictly controlled. The disadvantage is that in the model training process, the noise amount added in each step is fixed. If the number of training steps is too large, the noise in the later phase of training makes the model convergence difficult, or even causes the performance to deteriorate greatly and the model availability to be poor.
+* Adaptive Gaussian optimizer adaptively adjusts the standard deviation to adjust the Gaussian distribution noise. In the initial phase of model training, a large amount of noise is added. As the model gradually converges, the noise amount gradually decreases, and the impact of the noise on the model availability is reduced. A disadvantage of the adaptive Gaussian noise is that a differential privacy budget cannot be strictly controlled.
+* Adaptive clipping optimizer is a DP optimizer that adaptively adjusts a clipping granularity. Gradient clipping is an important operation in differential privacy training. The adaptive clipping optimizer can control a ratio of gradient clipping to fluctuate within a given range and control the gradient clipping granularity during training steps.
+
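+The gradient perturbation performed by the DP optimizers can be summarized by the following minimal sketch (plain NumPy, for illustration only; it is not the MindArmour API):
+
+```python
+import numpy as np
+
+def dp_perturb_gradient(grad, clip_norm=1.0, noise_multiplier=1.0):
+    """Clip the gradient to an L2-norm bound, then add Gaussian noise (DP-SGD style)."""
+    norm = np.linalg.norm(grad)
+    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
+    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
+    return clipped + noise
+
+grad = np.array([0.5, -2.0, 1.5])
+print(dp_perturb_gradient(grad))
+```
+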
+### DP Mechanisms
+
+The noise mechanism is a basis for building a differential privacy training capability. Different noise mechanisms meet requirements of different DP optimizers, including multiple mechanisms such as constant Gaussian distribution noise, adaptive Gaussian distribution noise, adaptive clipping Gaussian distribution noise, and Laplace distribution noise.
+
+### Monitor
+
+Monitor provides callback functions such as Rényi differential privacy (RDP) and zero-concentrated differential privacy (ZCDP) to monitor the differential privacy budget of the model.
+
+* ZCDP[2]
+
+ ZCDP is a loose differential privacy definition. It uses the Rényi divergence to measure the distribution difference of random functions on adjacent datasets.
+
+* RDP[3]
+
+ RDP is a more general differential privacy definition based on the Rényi divergence. It uses the Rényi divergence to measure the distribution difference between two adjacent datasets.
+
+
+Compared with traditional differential privacy, ZCDP and RDP provide stricter guarantees on the privacy budget upper bound.
+
+
+## Code Implementation
+
+* [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): implements the noise generation mechanism required by differential privacy training, including simple Gaussian noise, adaptive Gaussian noise, and adaptive clipping Gaussian noise.
+* [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): implements the fundamental logic of using the noise generation mechanism to add noise during backward propagation.
+* [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py): implements the callback function for computing the differential privacy budget. During model training, the current differential privacy budget is returned.
+* [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py): implements the logic of computing the loss and gradient as well as the gradient truncation logic of differential privacy training, which is the entry for users to use the differential privacy training capability.
+
+## References
+
+[1] Dwork, Cynthia, and Jing Lei. "Differential privacy and robust statistics." *Proceedings of the forty-first annual ACM symposium on Theory of computing*. 2009.
+
+[2] Lee, Jaewoo, and Daniel Kifer. "Concentrated differentially private gradient descent with adaptive per-iteration privacy budget." *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*. 2018.
+
+[3] Mironov, Ilya. "Rényi differential privacy." *2017 IEEE 30th Computer Security Foundations Symposium (CSF)*. IEEE, 2017.
diff --git a/docs/note/source_en/design/mindarmour/fuzzer_design.md b/docs/note/source_en/design/mindarmour/fuzzer_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..2a41c2342eb3ed7fe13804890f7d97f491e2f20e
--- /dev/null
+++ b/docs/note/source_en/design/mindarmour/fuzzer_design.md
@@ -0,0 +1,74 @@
+# AI Model Security Test
+
+`Linux` `Ascend` `GPU` `CPU` `Data Preparation` `Model Development` `Model Training` `Model Optimization` `Enterprise` `Expert`
+
+
+
+
+- [AI Model Security Test](#ai-model-security-test)
+ - [Background](#background)
+ - [Fuzz Testing Design](#fuzz-testing-design)
+ - [Fuzz Testing Process](#fuzz-testing-process)
+ - [Code Implementation](#code-implementation)
+ - [References](#references)
+
+
+
+
+
+## Background
+
+Different from the [fuzzing security test for traditional programs](https://zhuanlan.zhihu.com/p/43432370), MindArmour provides the AI model security test module fuzz_testing for deep neural networks. Based on neural network features, the concept of neuron coverage rate [1] is introduced to guide fuzz testing: samples are generated in the direction of increasing neuron coverage rate so that more neurons are activated by the inputs and the distribution range of neuron values becomes wider, which fully tests the DNN and explores the output results and error behavior of different types of models.
+
+## Fuzz Testing Design
+
+The following figure shows the security test design of the AI model.
+
+
+
+At the user interface layer, users need to provide the original dataset `DataSet`, the model under test `Model`, and the Fuzzer parameters `Fuzzer configuration`. After fuzzing the model and data, the Fuzzer module returns the security report `Security Report`.
+
+The fuzz testing architecture consists of three modules:
+
+1. Natural Threat/Adversarial Example Generator:
+
+   Randomly select a mutation method to mutate the seed data and generate multiple variants. The supported mutation policies include:
+
+ - Image affine transformation methods: Translate, Rotate, Scale, and Shear.
+ - Methods based on image pixel value changes: Contrast, Brightness, Blur, and Noise.
+ - Methods for generating adversarial examples based on white-box and black-box attacks: FGSM, PGD, and MDIIM.
+
+2. Fuzzer Module:
+
+ Perform fuzz testing on the mutated data to observe the change of the neuron coverage rate. If the generated data increases the neuron coverage rate, add the data to the mutated seed queue for the next round of data mutation. Currently, the following neuron coverage metrics are supported: KMNC, NBC, and SNAC [2].
+
+3. Evaluation:
+
+ Evaluate the fuzz testing effect, quality of generated data, and strength of mutation methods. Five metrics of three types are supported, including the general evaluation metric (accuracy), neuron coverage rate metrics (kmnc, nbc, and snac), and adversarial attack evaluation metric (attack_success_rate).
+
+## Fuzz Testing Process
+
+
+
+The fuzz testing process is as follows:
+
+1. Select seed A from the seed queue according to the policy.
+2. Randomly select a mutation policy to mutate seed A and generate multiple variants A1, A2, ...
+3. Use the target model to predict the variants. If the semantics of a variant are consistent with those of the seed, the variant enters the Fuzzed Tests.
+4. If the prediction is correct, use the neuron coverage metric for analysis.
+5. If a variant increases the coverage rate, place the variant in the seed queue for the next round of mutation.
+
+Through multiple rounds of mutations, you can obtain a series of variant data in the Fuzzed Tests, perform further analysis, and provide security reports from multiple perspectives. You can use them to deeply analyze defects of the neural network model and enhance the model to improve its universality and robustness.
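+
+The following sketch illustrates the coverage-guided loop described above. It is a simplified illustration, not the MindArmour implementation; the helper callables (`select_seed`, `mutate`, `is_consistent`, `coverage_gain`) are hypothetical and supplied by the caller.
+
+```python
+# Simplified coverage-guided fuzzing loop. The callables select_seed, mutate,
+# is_consistent and coverage_gain are hypothetical and supplied by the caller.
+import random
+
+def fuzz_loop(seed_queue, model, mutations, select_seed, mutate,
+              is_consistent, coverage_gain, max_rounds=100):
+    fuzzed_tests = []
+    for _ in range(max_rounds):
+        if not seed_queue:
+            break
+        seed = select_seed(seed_queue)             # step 1: pick a seed by policy
+        method = random.choice(mutations)          # step 2: random mutation method
+        for variant in mutate(seed, method):       # step 2: variants A1, A2, ...
+            if not is_consistent(seed, variant):   # step 3: keep semantics-preserving variants
+                continue
+            fuzzed_tests.append(variant)
+            model.predict(variant)                 # step 4: run the model on the variant
+            if coverage_gain(model, variant):      # steps 4-5: neuron coverage feedback
+                seed_queue.append(variant)         # re-queue variants that raise coverage
+    return fuzzed_tests
+```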
+
+## Code Implementation
+
+1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py): overall fuzz testing process.
+2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py): neuron coverage rate metrics, including KMNC, NBC, and SNAC.
+3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py): image mutation methods, including methods based on image pixel value changes and affine transformation methods.
+4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks): methods for generating adversarial examples based on white-box and black-box attacks.
+
+## References
+
+[1] Pei K, Cao Y, Yang J, et al. Deepxplore: Automated whitebox testing of deep learning systems[C]//Proceedings of the 26th Symposium on Operating Systems Principles. ACM, 2017: 1-18.
+
+[2] Ma L, Juefei-Xu F, Zhang F, et al. Deepgauge: Multi-granularity testing criteria for deep learning systems[C]//Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ACM, 2018: 120-131.
\ No newline at end of file
diff --git a/docs/note/source_en/design/mindarmour/images/dp_arch.png b/docs/note/source_en/design/mindarmour/images/dp_arch.png
new file mode 100644
index 0000000000000000000000000000000000000000..c903e4e2acece6c6de882852dc3570126b6fcb05
Binary files /dev/null and b/docs/note/source_en/design/mindarmour/images/dp_arch.png differ
diff --git a/docs/source_zh_cn/design/mindarmour/images/fuzz_architecture.png b/docs/note/source_en/design/mindarmour/images/fuzz_architecture.png
similarity index 100%
rename from docs/source_zh_cn/design/mindarmour/images/fuzz_architecture.png
rename to docs/note/source_en/design/mindarmour/images/fuzz_architecture.png
diff --git a/docs/source_zh_cn/design/mindarmour/images/fuzz_process.png b/docs/note/source_en/design/mindarmour/images/fuzz_process.png
similarity index 100%
rename from docs/source_zh_cn/design/mindarmour/images/fuzz_process.png
rename to docs/note/source_en/design/mindarmour/images/fuzz_process.png
diff --git a/docs/source_en/design/mindinsight/graph_visual_design.md b/docs/note/source_en/design/mindinsight/graph_visual_design.md
similarity index 100%
rename from docs/source_en/design/mindinsight/graph_visual_design.md
rename to docs/note/source_en/design/mindinsight/graph_visual_design.md
diff --git a/docs/source_zh_cn/design/mindinsight/images/analyser_class_profiler.png b/docs/note/source_en/design/mindinsight/images/analyser_class_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/analyser_class_profiler.png
rename to docs/note/source_en/design/mindinsight/images/analyser_class_profiler.png
diff --git a/docs/note/source_en/design/mindinsight/images/context_profiler.png b/docs/note/source_en/design/mindinsight/images/context_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..f11782ebfe473ddfaec9736055c9012a5129a26f
Binary files /dev/null and b/docs/note/source_en/design/mindinsight/images/context_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/images/graph_visual_class_design.png b/docs/note/source_en/design/mindinsight/images/graph_visual_class_design.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/graph_visual_class_design.png
rename to docs/note/source_en/design/mindinsight/images/graph_visual_class_design.png
diff --git a/docs/source_en/design/mindinsight/images/graph_visual_main.png b/docs/note/source_en/design/mindinsight/images/graph_visual_main.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/graph_visual_main.png
rename to docs/note/source_en/design/mindinsight/images/graph_visual_main.png
diff --git a/docs/source_en/design/mindinsight/images/graph_visual_right_side.png b/docs/note/source_en/design/mindinsight/images/graph_visual_right_side.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/graph_visual_right_side.png
rename to docs/note/source_en/design/mindinsight/images/graph_visual_right_side.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/module_profiler.png b/docs/note/source_en/design/mindinsight/images/module_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/module_profiler.png
rename to docs/note/source_en/design/mindinsight/images/module_profiler.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/parser_module_profiler.png b/docs/note/source_en/design/mindinsight/images/parser_module_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/parser_module_profiler.png
rename to docs/note/source_en/design/mindinsight/images/parser_module_profiler.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/proposer_class_profiler.png b/docs/note/source_en/design/mindinsight/images/proposer_class_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/proposer_class_profiler.png
rename to docs/note/source_en/design/mindinsight/images/proposer_class_profiler.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/proposer_module_profiler.png b/docs/note/source_en/design/mindinsight/images/proposer_module_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/proposer_module_profiler.png
rename to docs/note/source_en/design/mindinsight/images/proposer_module_profiler.png
diff --git a/docs/source_en/design/mindinsight/images/tensor_histogram.png b/docs/note/source_en/design/mindinsight/images/tensor_histogram.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/tensor_histogram.png
rename to docs/note/source_en/design/mindinsight/images/tensor_histogram.png
diff --git a/docs/source_en/design/mindinsight/images/tensor_table.png b/docs/note/source_en/design/mindinsight/images/tensor_table.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/tensor_table.png
rename to docs/note/source_en/design/mindinsight/images/tensor_table.png
diff --git a/docs/source_zh_cn/design/mindinsight/images/time_order_profiler.png b/docs/note/source_en/design/mindinsight/images/time_order_profiler.png
similarity index 100%
rename from docs/source_zh_cn/design/mindinsight/images/time_order_profiler.png
rename to docs/note/source_en/design/mindinsight/images/time_order_profiler.png
diff --git a/docs/source_en/design/mindinsight/images/training_visualization_architecture.png b/docs/note/source_en/design/mindinsight/images/training_visualization_architecture.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/training_visualization_architecture.png
rename to docs/note/source_en/design/mindinsight/images/training_visualization_architecture.png
diff --git a/docs/source_en/design/mindinsight/images/training_visualization_data_flow.png b/docs/note/source_en/design/mindinsight/images/training_visualization_data_flow.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/training_visualization_data_flow.png
rename to docs/note/source_en/design/mindinsight/images/training_visualization_data_flow.png
diff --git a/docs/source_en/design/mindinsight/images/training_visualization_data_model.png b/docs/note/source_en/design/mindinsight/images/training_visualization_data_model.png
similarity index 100%
rename from docs/source_en/design/mindinsight/images/training_visualization_data_model.png
rename to docs/note/source_en/design/mindinsight/images/training_visualization_data_model.png
diff --git a/docs/note/source_en/design/mindinsight/profiler_design.md b/docs/note/source_en/design/mindinsight/profiler_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..e18497237388c37ddada8552fa01844026926fa6
--- /dev/null
+++ b/docs/note/source_en/design/mindinsight/profiler_design.md
@@ -0,0 +1,175 @@
+# Profiler Design Document
+
+`Ascend` `GPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor`
+
+
+
+- [Profiler Design Document](#profiler-design-document)
+ - [Background](#background)
+ - [Profiler Architecture Design](#profiler-architecture-design)
+ - [Context](#context)
+ - [Module Structure](#module-structure)
+ - [Internal Module Interaction](#internal-module-interaction)
+ - [Sub-Module Design](#sub-module-design)
+ - [ProfilerAPI and Controller](#profilerapi-and-controller)
+ - [Description](#description)
+ - [Design](#design)
+ - [Parser](#parser)
+ - [Description](#description-1)
+ - [Design](#design-1)
+ - [Analyser](#analyser)
+ - [Description](#description-2)
+ - [Design](#design-2)
+ - [Proposer](#proposer)
+ - [Description](#description-3)
+ - [Design](#design-3)
+
+
+
+
+
+## Background
+
+To support model development and performance debugging in MindSpore, an easy-to-use profiling tool is required to intuitively display the performance information of each dimension of a network model, provide abundant profiling functions, and help users quickly locate network performance faults.
+
+## Profiler Architecture Design
+
+The Profiler architecture design is introduced from three aspects: the overall context of Profiler and its interactions with other components; the internal structure of Profiler, including the module structure and module layers; and the calling relationships between modules.
+
+### Context
+
+Profiler is a part of the MindSpore debugging and optimization tool. The following figure shows the tool context.
+
+
+
+Figure 1 Context relationship
+
+As shown in the preceding figure, the interaction between the Profiler and other components is as follows:
+
+1. In the training script, MindSpore Profiler is called to send a command to the MindSpore ada communication module to start performance data collection. ada then generates the original performance data.
+
+2. MindSpore Profiler parses the original data in the user script and generates the intermediate data results in the specified folder.
+
+3. MindInsight Profiler connects to the intermediate data and provides the visualized profiling function for users.
+
+### Module Structure
+
+Modules are classified into the following layers:
+
+
+
+Figure 2 Relationships between modules at different layers
+
+
+Module functions are as follows:
+1. ProfilerAPI is a calling entry provided by code, including the performance collection startup API and analysis API.
+2. Controller is a module at a layer lower than that of ProfilerAPI. It is called by the startup API of ProfilerAPI to start or stop the performance collection function. The original data is written to a fixed position by ada.
+3. Parser is a module for parsing original performance data which is collected on the device and cannot be directly understood by users. Parser parses, combines, and converts the data to generate intermediate results that can be understood by users and analyzed by upper layers.
+4. Analyser obtains the intermediate results parsed by the lower-layer Parser, encapsulates, filters, and sorts them, and returns various information to the upper-layer ProfilerAPI and RESTful.
+5. RESTful calls the common API provided by the backend Analyser to obtain the target data and connects the backend to the frontend.
+
+### Internal Module Interaction
+
+Users can use the API or RESTful to complete the internal module interaction process. The following uses the API as an example:
+
+
+
+Figure 3 Module interaction
+
+The interaction process of each module is as follows:
+
+1. ProfilerAPI calls the control function of the lower-layer Controller to control the lower-layer collection module to collect performance information. Currently, the collection module (ada) receives commands in resident process mode and independently collects performance information.
+
+2. After the training is complete, users call the analysis API of ProfilerAPI.
+
+3. The analysis API of ProfilerAPI uses the Parser module to parse performance data, generates intermediate results, calls the Analyser module to analyze the results, and returns various information to users.
+
+## Sub-Module Design
+### ProfilerAPI and Controller
+
+#### Description
+ProfilerAPI provides an entry API in the training script for users to start performance collection and analyze performance data.
+ProfilerAPI delivers commands through Controller to control the startup of ada.
+
+#### Design
+ProfilerAPI belongs to the upper-layer application API layer and is integrated into the training script. Its function is divided into two parts:
+
+- Before training, call the bottom-layer Controller API to deliver a command to start a profiling task.
+
+- After training, call the bottom-layer Controller API to deliver commands to stop the profiling task, call the Analyser and Parser APIs to parse data files and generate result data such as operator performance statistics and training trace statistics.
+
+
+Controller provides an API for the upper layer, calls the API of the lower-layer performance collection module, and delivers commands for starting and stopping performance collection.
+
+The generated original performance data includes:
+
+- `hwts.log.data.45.dev.profiler_default_tag` file: stores operator execution information, including the start and end of a task and stream ID.
+- `DATA_PREPROCESS.dev.AICPU` file: specifies AI CPU operator execution time at each stage.
+- `Framework.host.task_desc_info` file: stores the mapping between operator IDs and operator names and the input and output information of each operator.
+- `training_trace.46.dev.profiler_default_tag` file: stores the start and end time of each step and time of step interval, forward and backward propagation, and step tail.
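+
+A minimal usage sketch of the entry API is shown below. It assumes the `mindspore.profiler.Profiler` entry class and its `analyse` method; the import path and constructor options have varied between MindSpore releases.
+
+```python
+# Sketch: start performance collection before training and analyse it afterwards.
+# Assumes the mindspore.profiler.Profiler entry class; check your MindSpore version.
+from mindspore.profiler import Profiler
+
+profiler = Profiler(output_path="./profiler_data")  # start command delivered via Controller
+
+# ... run the training job here, e.g. model.train(epoch, dataset) ...
+
+profiler.analyse()  # stop collection; Parser and Analyser generate the result files
+```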
+
+### Parser
+#### Description
+Parser is a module for parsing original performance data which is collected on the device and cannot be directly understood by users. Parser parses, combines, and converts the data to generate intermediate results that can be understood by users and analyzed by upper layers.
+#### Design
+
+
+Figure 4 Parser module
+
+As shown in the preceding figure, there are HWTS Parser, AI CPU Parser, Framework Parser, and Training Trace Parser modules. Each module parses a type of original data to obtain the intermediate file that can be read by users.
+
+- HWTS Parser: parses the `hwts.log.data.45.dev.profiler_default_tag` file to obtain the task-based statistics of the device, such as the start and end of each task and stream ID, which are used to compute the operator execution time.
+- AI CPU Parser: parses the `DATA_PREPROCESS.dev.AICPU` file to obtain the AI CPU operator execution time at each stage.
+- Framework Parser: parses the `Framework.host.task_desc_info` file to obtain the mapping between AI Core operator and task, and key operator information.
+- Training Trace Parser: parses the `training_trace.46.dev.profiler_default_tag` file to analyze the time at each training stage.
+
+### Analyser
+
+#### Description
+Analyser is used to filter, sort, query, and paginate the intermediate results generated at the parsing stage.
+
+#### Design
+
+This module parses the intermediate files generated by Parser, provides a general API for upper-layer data analysis, and returns the analyzed data to the upper layer for display. Various intermediate files have common points that can be abstracted. Therefore, the following figure shows the design of the Analyser class.
+
+
+
+Figure 5 Analyser class
+
+As shown in the preceding figure, multiple Analysers are implemented for the different contents to be queried. Filtering, sorting, and pagination conditions can be defined for each Analyser. Each Analyser knows which intermediate files are required to merge, filter, and sort data. Analyser is associated with Parser only through the intermediate files generated by Parser; no function calls are made between them. In this way, Analyser and Parser are decoupled.
+
+Currently, two Analysers are provided for operator information:
+
+- AicoreTypeAnalyser: filters the average information of each operator type.
+- AicoreDetailAnalyser: filters the detailed average information of each operator.
+
+To hide the internal implementation of Analyser and facilitate calling, the simple factory pattern is used to obtain the specified Analyser through AnalyserFactory.
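+
+A minimal sketch of the simple factory pattern described above is shown below; the class and method names are illustrative, not the actual MindInsight code.
+
+```python
+# Illustrative simple factory for Analysers; class and method names are hypothetical.
+class BaseAnalyser:
+    def __init__(self, profiling_dir):
+        self._dir = profiling_dir
+
+    def query(self, filter_condition=None, sort_condition=None, group_condition=None):
+        raise NotImplementedError
+
+class AicoreTypeAnalyser(BaseAnalyser):
+    def query(self, filter_condition=None, sort_condition=None, group_condition=None):
+        # read the per-operator-type intermediate file, then filter/sort/paginate it
+        return {"col_name": ["op_type", "execution_time"], "object": []}
+
+class AnalyserFactory:
+    _registry = {"aicore_type": AicoreTypeAnalyser}
+
+    @classmethod
+    def instance(cls, analyser_type, profiling_dir):
+        if analyser_type not in cls._registry:
+            raise ValueError("unknown analyser type: " + analyser_type)
+        return cls._registry[analyser_type](profiling_dir)
+
+# The caller only names the analyser type; the concrete class stays hidden.
+analyser = AnalyserFactory.instance("aicore_type", "./profiler_data")
+result = analyser.query(sort_condition={"name": "execution_time", "type": "descending"})
+```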
+
+
+### Proposer
+#### Description
+Proposer is a Profiler performance optimization suggestion module. Proposer calls the Analyser module to obtain performance data, analyzes the performance data based on optimization rules, and displays optimization suggestions for users through the UI and API.
+
+#### Design
+
+Modules are classified into the following layers:
+
+
+
+Figure 6 Proposer module
+
+As shown in the preceding figure:
+
+- Proposer provides an API through which the upper-layer API and RESTful obtain optimization suggestions.
+- Proposer calls the Analyser API to obtain performance data and derives optimization suggestions based on optimization rules.
+- Proposer calls the AnalyserFactory to obtain the Analyser object.
+
+You can call the query API of the Analyser object to obtain information, including the top N AICore, AICoreType, and AICpu operators sorted by time, and the time information of each training trace stage.
+
+The following figure shows the module class design:
+
+
+
+Figure 7 Proposer class
+
+As shown in the preceding figure:
+
+- Proposers of various types inherit the abstract class Proposer and implement the analyze methods.
+- The API and CLI call ProposerFactory to obtain a Proposer and call the Proposer.analyze function to obtain the optimization suggestions of that type of Proposer.
\ No newline at end of file
diff --git a/docs/source_en/design/mindinsight/tensor_visual_design.md b/docs/note/source_en/design/mindinsight/tensor_visual_design.md
similarity index 100%
rename from docs/source_en/design/mindinsight/tensor_visual_design.md
rename to docs/note/source_en/design/mindinsight/tensor_visual_design.md
diff --git a/docs/source_en/design/mindinsight/training_visual_design.md b/docs/note/source_en/design/mindinsight/training_visual_design.md
similarity index 100%
rename from docs/source_en/design/mindinsight/training_visual_design.md
rename to docs/note/source_en/design/mindinsight/training_visual_design.md
diff --git a/docs/source_en/architecture.md b/docs/note/source_en/design/mindspore/architecture.md
similarity index 94%
rename from docs/source_en/architecture.md
rename to docs/note/source_en/design/mindspore/architecture.md
index 43cf28ef2fa68e47a8c50d59d3fd5ad752fb280b..accf4a9b5a7f446f465281e6a1d01ae35ea12d42 100644
--- a/docs/source_en/architecture.md
+++ b/docs/note/source_en/design/mindspore/architecture.md
@@ -2,7 +2,7 @@
`Linux` `Windows` `Ascend` `GPU` `CPU` `On Device` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor`
-
+
The MindSpore framework consists of the Frontend Expression layer, Graph Engine layer, and Backend Runtime layer.
diff --git a/lite/docs/source_en/architecture.md b/docs/note/source_en/design/mindspore/architecture_lite.md
similarity index 89%
rename from lite/docs/source_en/architecture.md
rename to docs/note/source_en/design/mindspore/architecture_lite.md
index 64585775720d39c365190d9f8f24c82931cf24e3..de7a82a18f1952e893971842654e17d54ae2a4e4 100644
--- a/lite/docs/source_en/architecture.md
+++ b/docs/note/source_en/design/mindspore/architecture_lite.md
@@ -1,6 +1,8 @@
# Overall Architecture
-
+`Linux` `Windows` `On Device` `Inference Application` `Intermediate` `Expert` `Contributor`
+
+
The overall architecture of MindSpore Lite is as follows:
diff --git a/docs/note/source_en/design/mindspore/distributed_training_design.md b/docs/note/source_en/design/mindspore/distributed_training_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..b943ae1f4fdeab8f7ac68211b6f513b6f54c3a8a
--- /dev/null
+++ b/docs/note/source_en/design/mindspore/distributed_training_design.md
@@ -0,0 +1,144 @@
+# Distributed Training Design
+
+`Ascend` `GPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor`
+
+
+
+- [Distributed Training Design](#distributed-training-design)
+ - [Background](#background)
+ - [Concepts](#concepts)
+ - [Collective Communication](#collective-communication)
+ - [Synchronization Mode](#synchronization-mode)
+ - [Data Parallelism](#data-parallelism)
+ - [Principle of Data Parallelism](#principle-of-data-parallelism)
+ - [Data Parallel Code](#data-parallel-code)
+ - [Automatic Parallelism](#automatic-parallelism)
+ - [Principle of Automatic Parallelism](#principle-of-automatic-parallelism)
+ - [Automatic Parallel Code](#automatic-parallel-code)
+
+
+
+
+
+## Background
+
+With the rapid development of deep learning, the size of datasets and the number of parameters are growing exponentially to improve the accuracy and generalization capability of neural networks. Parallel distributed training has become a development trend to resolve the performance bottleneck of ultra-large-scale networks. MindSpore supports the mainstream distributed training paradigms and develops an automatic hybrid parallel solution. The following describes the design principles of several parallel training modes and provides guidance for custom development.
+
+
+## Concepts
+
+### Collective Communication
+
+Collective communication is defined as communication that involves a group of processes. All processes in the group send and receive data after meeting certain conditions. MindSpore implements data transmission during parallel training through collective communication. On Ascend chips, MindSpore depends on the Huawei Collective Communication Library (`HCCL`) to implement the task. On GPU, MindSpore depends on the NVIDIA Collective Communication Library (`NCCL`) to implement the task.
+
+### Synchronization Mode
+
+In synchronous mode, all devices start training at the same time and update parameter values synchronously after the backward propagation algorithm is executed. Currently, MindSpore uses the synchronous training mode.
+
+## Data Parallelism
+
+This section describes how the data parallel mode `ParallelMode.DATA_PARALLEL` works in MindSpore.
+
+### Principle of Data Parallelism
+
+
+
+1. Environment dependencies
+
+ Each time before parallel training starts, the `mindspore.communication.init` API is called to initialize communication resources and the global communication group `WORLD_COMM_GROUP` is automatically created.
+
+2. Data distribution
+
+ The key of data parallelism is to split datasets based on the sample dimension and deliver the split datasets to different devices. Each dataset loading API provided by the `mindspore.dataset` module has the `num_shards` and `shard_id` parameters. The parameters are used to split a dataset into multiple datasets, perform cyclic sampling, and collect data of the `batch` size to each device. When the data volume is insufficient, the sampling restarts from the beginning.
+
+3. Network structure
+
+   The scripting method of a data parallel network is the same as that of a standalone network. This is because, although the model on each device is executed independently during the forward and backward propagation processes, the same network structure is maintained. To ensure synchronous training between devices, the initial values of the corresponding network parameters must be the same. You are advised to set the same random number seed on each device by using `numpy.random.seed`, so that the initial model weights are consistent across devices.
+
+4. Gradient aggregation
+
+   Theoretically, the training effect of a data parallel network should be the same as that of the standalone network. To ensure the consistency of the calculation logic, the `AllReduce` operator is inserted after gradient calculation to implement gradient aggregation between devices. You can enable `mean` to average the aggregated gradient values, or treat `mean` as a hyperparameter; enabling `mean` is equivalent to scaling down the learning rate by a factor equal to the number of devices.
+
+5. Parameter update
+
+   Because the gradient aggregation operation is introduced, the model on each device performs the parameter update with the same gradient values. Therefore, MindSpore implements a synchronous data parallel training mode. Theoretically, the models trained on each device are the same. However, if a reduce operation over samples is involved in the network, the network outputs may differ; this is determined by the sharding nature of data parallelism.
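+
+A minimal sketch of how these steps typically appear in a training script is shown below, assuming the 2020-era MindSpore Python API; parameter names such as `gradients_mean` have changed across versions.
+
+```python
+# Data parallel setup sketch; parameter names (e.g. gradients_mean) differ across versions.
+import numpy as np
+import mindspore.dataset as ds
+from mindspore import context
+from mindspore.context import ParallelMode
+from mindspore.communication.management import init, get_rank, get_group_size
+
+context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
+init()                                    # 1. creates the WORLD_COMM_GROUP communication group
+rank_id, rank_size = get_rank(), get_group_size()
+
+context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
+                                  gradients_mean=True)   # 4. average aggregated gradients
+
+np.random.seed(1)                         # 3. same seed so initial weights match on every device
+
+# 2. shard the dataset along the sample dimension across devices
+dataset = ds.Cifar10Dataset("./cifar10", num_shards=rank_size, shard_id=rank_id)
+```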
+
+### Data Parallel Code
+
+1. Collective communication
+
+ - [management.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/communication/management.py): This file covers the `helper` function APIs commonly used during the collective communication process, for example, the APIs for obtaining the number of clusters and device ID. When collective communication is executed on the Ascend chip, the framework loads the `libhccl.so` library file in the environment and uses it to call the communication APIs from the Python layer to the underlying layer.
+ - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/operations/comm_ops.py): MindSpore encapsulates supported collective communication operations as operators and stores the operators in this file. The operators include `AllReduce`, `AllGather`, `ReduceScatter`, and `Broadcast`. `PrimitiveWithInfer` defines the attributes required by the operators, as well as the `shape` and `dtype` inference methods from the input to the output during graph composition.
+
+2. Gradient aggregation
+
+   - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/nn/wrap/grad_reducer.py): This file implements the gradient aggregation process. After the input parameter `grads` is expanded by using `HyperMap`, the `AllReduce` operator is inserted and the global communication group is used. You can also perform custom development based on your network requirements by referring to this module. In MindSpore, standalone and distributed execution share a set of network encapsulation APIs. In the `Cell`, `ParallelMode` is used to determine whether to perform gradient aggregation. For details about the network encapsulation APIs, see the `TrainOneStepCell` code implementation.
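+
+For illustration, the following sketch wraps the `AllReduce` collective operator in a `Cell`. It assumes a launched multi-device job in which `init()` has already been called; it is not the `DistributedGradReducer` implementation.
+
+```python
+# Using the AllReduce operator inside a Cell; requires a launched multi-device job
+# in which init() has already been called. Illustrative only.
+import mindspore.nn as nn
+import mindspore.ops.operations as P
+
+class AllReduceSum(nn.Cell):
+    """Sums the input tensor across all devices in the default communication group."""
+    def __init__(self):
+        super(AllReduceSum, self).__init__()
+        self.all_reduce = P.AllReduce()   # same operator the framework inserts for gradients
+
+    def construct(self, x):
+        return self.all_reduce(x)
+```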
+
+
+## Automatic Parallelism
+
+As a key feature of MindSpore, automatic parallelism is used to implement hybrid parallel training that combines automatic data parallelism and model parallelism. It aims to help users express the parallel algorithm logic using standalone scripts, reduce the difficulty of distributed training, improve the algorithm R&D efficiency, and maintain the high performance of training. This section describes how the automatic parallel mode `ParallelMode.AUTO_PARALLEL` and semi-automatic parallel mode `ParallelMode.SEMI_AUTO_PARALLEL` work in MindSpore.
+
+### Principle of Automatic Parallelism
+
+
+
+1. Distributed operator and tensor layout
+
+   As shown in the preceding figure, the automatic parallel process traverses the standalone forward ANF graph and models tensor sharding at the granularity of distributed operators, describing how the input and output tensors of each operator are distributed across the devices of the cluster, that is, the tensor layout. Users do not need to know which device runs which slice of the model; the framework automatically schedules and allocates the model slices.
+
+   To obtain the tensor layout model, each operator has a shard strategy, which indicates how each dimension of each input of the operator is sharded. Generally, a tensor can be sharded in any dimension as long as the number of shards is a multiple of 2 and the shards are evenly distributed. The following figure shows an example of the three-dimensional `BatchMatMul` operation. The parallel strategy consists of two tuples, indicating the sharding of `input` and `weight`, respectively. Elements in a tuple correspond to tensor dimensions one by one: `2^N` indicates the number of shards in that dimension, and `1` indicates that the dimension is not sharded. To express a data parallel shard strategy, that is, only the `batch` dimension of `input` is sharded and the other dimensions are not, use `strategy=((2^N, 1, 1),(1, 1, 1))`. To express a model parallel shard strategy, that is, only a non-`batch` dimension of `weight` is sharded, for example the `channel` dimension, use `strategy=((1, 1, 1),(1, 1, 2^N))`. A hybrid parallel shard strategy combines both, for example `strategy=((2^N, 1, 1),(1, 1, 2^N))` (see the configuration sketch after this list).
+
+ 
+
+   Based on the shard strategy of an operator, the framework automatically derives the distribution model of the input tensors and output tensors of the operator. This distribution model consists of `device_matrix`, `tensor_shape`, and `tensor_map`, which indicate the device matrix shape, the tensor shape, and the mapping between devices and tensor dimensions, respectively. Based on the tensor layout model, the distributed operator determines whether to insert extra computation and communication operations into the graph to ensure that the operator computing logic is correct.
+
+2. Tensor Redistribution
+
+ When the output tensor model of an operator is inconsistent with the input tensor model of the next operator, computation and communication operations need to be introduced to implement the change between tensor layouts. The automatic parallel process introduces the tensor redistribution algorithm, which can be used to derive the communication conversion operations between random tensor layouts. The following three examples represent a parallel computing process of the formula `Z=(X×W)×V`, that is, a `MatMul` operation of two two-dimensional matrices, and show how to perform conversion between different parallel modes.
+
+   In example 1, the output of the first data parallel matrix multiplication is sharded in the row direction, and the input of the second model parallel matrix multiplication requires full tensors. The framework automatically inserts the `AllGather` operator to implement redistribution.
+
+ 
+
+   In example 2, the output of the first model parallel matrix multiplication is sharded in the column direction, and the input of the second model parallel matrix multiplication is sharded in the row direction. The framework automatically inserts a communication operator equivalent to the `AlltoAll` collective operation to implement redistribution.
+
+ 
+
+ In example 3, an output shard mode of the first hybrid parallel matrix multiplication is the same as an input shard mode of the second hybrid parallel matrix multiplication. Therefore, redistribution does not need to be introduced. In the second matrix multiplication operation, the related dimensions of the two inputs are sharded. Therefore, the `AllReduce` operator needs to be inserted to ensure the operation correctness.
+
+ 
+
+   In general, this distributed representation breaks the boundary between data parallelism and model parallelism, making it easy to implement hybrid parallelism. From the perspective of scripts, users only need to construct a standalone network to express the parallel algorithm logic. The framework automatically shards the entire graph.
+
+3. Efficient parallel strategy search algorithm
+
+   The `SEMI_AUTO_PARALLEL` semi-automatic parallel mode indicates that you manually configure the parallel strategy for operators when you are familiar with the operator sharding representation. This mode is helpful for manual optimization but involves a certain debugging difficulty: you need to master the parallel principles and derive a high-performance parallel solution based on the network structure and cluster topology. To further help users accelerate the parallel network training process, the automatic parallel mode `AUTO_PARALLEL` introduces automatic search of the parallel strategy on the basis of the semi-automatic parallel mode. Automatic parallelism builds cost models based on the hardware platform and calculates the computation cost, memory cost, and communication cost of specific operators on a certain amount of data under different parallel strategies. Then, by using the dynamic programming or recursive programming algorithm and taking the memory upper limit of a single device as a constraint, a parallel strategy with optimal performance is efficiently searched out.
+
+ Strategy search replaces manual model sharding and provides a high-performance sharding solution within a short period of time, greatly reducing the threshold for parallel training.
+
+
+4. Convenient distributed automatic differentiation
+
+ In addition to forward network communication, the traditional manual model sharding needs to consider backward parallel computing. MindSpore encapsulates communication operations into operators and automatically generates backward propagation of communication operators based on the original automatic differentiation operations of the framework. Therefore, even during distributed training, users only need to pay attention to the forward propagation of the network to implement actual automatic parallel training.
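+
+A minimal configuration sketch of the semi-automatic parallel mode is shown below. It assumes the 2020-era API in which a primitive's strategy is set with `set_strategy` (renamed `shard` in later versions); the numbers are illustrative.
+
+```python
+# Semi-automatic parallel sketch; the strategy-setting API has been renamed across
+# versions (set_strategy in early releases, shard later). Illustrative only.
+import mindspore.nn as nn
+import mindspore.ops.operations as P
+from mindspore import context
+from mindspore.context import ParallelMode
+from mindspore.communication.management import init
+
+class HybridMatMul(nn.Cell):
+    def __init__(self):
+        super(HybridMatMul, self).__init__()
+        # shard input along batch (2-way) and weight along channel (4-way): hybrid parallelism
+        self.matmul = P.MatMul().set_strategy(((2, 1), (1, 4)))
+
+    def construct(self, x, w):
+        return self.matmul(x, w)
+
+init()
+context.set_auto_parallel_context(parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL,
+                                  device_num=8)
+```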
+
+### Automatic Parallel Code
+
+1. Tensor layout model
+   - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/tensor_layout): This directory contains the definitions and implementations of functions related to the tensor distribution model. `tensor_layout.h` declares the member variables `tensor_map_origin_`, `tensor_shape_`, and `device_arrangement_` required by a tensor distribution model. `tensor_redistribution.h` declares the methods that implement the transformation between the `from_origin_` and `to_origin_` tensor distributions. The deduced redistribution operations are stored in `operator_list_` and returned; in addition, the communication cost `comm_cost_`, memory cost `memory_cost_`, and computation cost `computation_cost_` required for redistribution are calculated.
+
+2. Distributed operators
+   - [ops_info](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/ops_info): This directory contains the implementation of distributed operators. In `operator_info.h`, the base class `OperatorInfo` of distributed operator implementation is defined. A new distributed operator shall inherit this base class and implement the related virtual functions. The `InferTensorInfo`, `InferTensorMap`, and `InferDevMatrixShape` functions define the algorithms for deriving the input and output tensor distribution model of the operator. The `InferForwardCommunication` and `InferMirrorOps` functions define the extra computation and communication operations to be inserted for operator sharding. The `CheckStrategy` and `GenerateStrategies` functions define the parallel strategy validation and generation for the operator. The `SetCostUnderStrategy` function generates the parallel cost `operator_cost_` of the distributed operator under a given parallel strategy.
+
+3. Strategy search algorithm
+   - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/auto_parallel): The shard strategy search algorithm is implemented in this directory. `graph_costmodel.h` defines the graph composition information: each node represents an operator `OperatorInfo`, and the directed edges, defined in `edge_costmodel.h`, represent the input-output relationships between operators and the redistribution cost. `operator_costmodel.h` defines the cost model of each operator, including the computation cost, communication cost, and memory cost. `dp_algorithm_costmodel.h` describes the main process of the dynamic programming algorithm, which consists of a series of graph operations. `costmodel.h` defines the data structures of the cost and of the graph operations.
+
+4. Device management
+ - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/device_manager.h): This file is used to create and manage cluster device communication groups. The device matrix model is defined by `device_matrix.h`, and the communication domain is managed by `group_manager.h`.
+
+5. Entire graph sharding
+   - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h) and [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_parallel.h): These two files contain the core implementation of the automatic parallel process. `step_auto_parallel.h` calls the strategy search process and generates the `OperatorInfo` of each distributed operator. Then, in `step_parallel.h`, operations such as operator sharding and tensor redistribution are performed to reconstruct the standalone computational graph in distributed mode.
+
+
+6. Backward propagation of communication operators
+ - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/_grad/grad_comm_ops.py): This file defines the backward propagation of communication operators, such as `AllReduce` and `AllGather`.
diff --git a/lite/docs/source_en/images/MindSpore-Lite-architecture.png b/docs/note/source_en/design/mindspore/images/MindSpore-Lite-architecture.png
similarity index 100%
rename from lite/docs/source_en/images/MindSpore-Lite-architecture.png
rename to docs/note/source_en/design/mindspore/images/MindSpore-Lite-architecture.png
diff --git a/docs/source_en/images/architecture.eddx b/docs/note/source_en/design/mindspore/images/architecture.eddx
similarity index 100%
rename from docs/source_en/images/architecture.eddx
rename to docs/note/source_en/design/mindspore/images/architecture.eddx
diff --git a/docs/source_en/images/architecture.png b/docs/note/source_en/design/mindspore/images/architecture.png
similarity index 100%
rename from docs/source_en/images/architecture.png
rename to docs/note/source_en/design/mindspore/images/architecture.png
diff --git a/docs/source_zh_cn/design/mindspore/images/auto_parallel.png b/docs/note/source_en/design/mindspore/images/auto_parallel.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/auto_parallel.png
rename to docs/note/source_en/design/mindspore/images/auto_parallel.png
diff --git a/docs/note/source_en/design/mindspore/images/data_parallel.png b/docs/note/source_en/design/mindspore/images/data_parallel.png
new file mode 100644
index 0000000000000000000000000000000000000000..a92c82aa64615b398e83b9bc2cf0aa2c5db9f904
Binary files /dev/null and b/docs/note/source_en/design/mindspore/images/data_parallel.png differ
diff --git a/docs/source_en/design/mindspore/images/ir/cf.dot b/docs/note/source_en/design/mindspore/images/ir/cf.dot
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/cf.dot
rename to docs/note/source_en/design/mindspore/images/ir/cf.dot
diff --git a/docs/source_en/design/mindspore/images/ir/cf.png b/docs/note/source_en/design/mindspore/images/ir/cf.png
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/cf.png
rename to docs/note/source_en/design/mindspore/images/ir/cf.png
diff --git a/docs/source_en/design/mindspore/images/ir/closure.dot b/docs/note/source_en/design/mindspore/images/ir/closure.dot
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/closure.dot
rename to docs/note/source_en/design/mindspore/images/ir/closure.dot
diff --git a/docs/source_en/design/mindspore/images/ir/closure.png b/docs/note/source_en/design/mindspore/images/ir/closure.png
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/closure.png
rename to docs/note/source_en/design/mindspore/images/ir/closure.png
diff --git a/docs/source_en/design/mindspore/images/ir/hof.dot b/docs/note/source_en/design/mindspore/images/ir/hof.dot
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/hof.dot
rename to docs/note/source_en/design/mindspore/images/ir/hof.dot
diff --git a/docs/source_en/design/mindspore/images/ir/hof.png b/docs/note/source_en/design/mindspore/images/ir/hof.png
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/hof.png
rename to docs/note/source_en/design/mindspore/images/ir/hof.png
diff --git a/docs/source_en/design/mindspore/images/ir/ir.dot b/docs/note/source_en/design/mindspore/images/ir/ir.dot
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/ir.dot
rename to docs/note/source_en/design/mindspore/images/ir/ir.dot
diff --git a/docs/source_en/design/mindspore/images/ir/ir.png b/docs/note/source_en/design/mindspore/images/ir/ir.png
similarity index 100%
rename from docs/source_en/design/mindspore/images/ir/ir.png
rename to docs/note/source_en/design/mindspore/images/ir/ir.png
diff --git a/docs/source_zh_cn/design/mindspore/images/operator_split.png b/docs/note/source_en/design/mindspore/images/operator_split.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/operator_split.png
rename to docs/note/source_en/design/mindspore/images/operator_split.png
diff --git a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png b/docs/note/source_en/design/mindspore/images/tensor_redistribution.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png
rename to docs/note/source_en/design/mindspore/images/tensor_redistribution.png
diff --git a/docs/note/source_en/design/mindspore/images/tensor_redistribution1.png b/docs/note/source_en/design/mindspore/images/tensor_redistribution1.png
new file mode 100644
index 0000000000000000000000000000000000000000..ed4d79416a0a07f8d75e738aa544d214834ae778
Binary files /dev/null and b/docs/note/source_en/design/mindspore/images/tensor_redistribution1.png differ
diff --git a/docs/note/source_en/design/mindspore/images/tensor_redistribution2.png b/docs/note/source_en/design/mindspore/images/tensor_redistribution2.png
new file mode 100644
index 0000000000000000000000000000000000000000..114f984c66ae578722dbcdbb59ab03c44dbcb097
Binary files /dev/null and b/docs/note/source_en/design/mindspore/images/tensor_redistribution2.png differ
diff --git a/docs/note/source_en/design/mindspore/images/tensor_redistribution3.png b/docs/note/source_en/design/mindspore/images/tensor_redistribution3.png
new file mode 100644
index 0000000000000000000000000000000000000000..dd66c9120615f50f2b3f60cfe139954cb4adf307
Binary files /dev/null and b/docs/note/source_en/design/mindspore/images/tensor_redistribution3.png differ
diff --git a/docs/source_en/design/mindspore/ir.md b/docs/note/source_en/design/mindspore/mindir.md
similarity index 99%
rename from docs/source_en/design/mindspore/ir.md
rename to docs/note/source_en/design/mindspore/mindir.md
index 4837ba94baccb0f15638d6bb744ec13f9035bb1b..98743518453e919a3b70d280ef5e72f1f34b9a25 100644
--- a/docs/source_en/design/mindspore/ir.md
+++ b/docs/note/source_en/design/mindspore/mindir.md
@@ -1,7 +1,7 @@
# MindSpore IR (MindIR)
-`Framework Development` `Intermediate` `Expert` `Contributor`
+`Linux` `Windows` `Framework Development` `Intermediate` `Expert` `Contributor`
diff --git a/docs/source_en/glossary.md b/docs/note/source_en/glossary.md
similarity index 87%
rename from docs/source_en/glossary.md
rename to docs/note/source_en/glossary.md
index ae1fb21e9168f00bd574fdef787b2a7b3a86f831..3f08ac2a4124b14bf6551de670ec44f8eddaffcf 100644
--- a/docs/source_en/glossary.md
+++ b/docs/note/source_en/glossary.md
@@ -32,9 +32,10 @@
| LSTM | Long short-term memory, an artificial recurrent neural network (RNN) architecture used for processing and predicting an important event with a long interval and delay in a time sequence. |
| Manifest | A data format file. Huawei ModelArt adopts this format. For details, see . |
| ME | Mind Expression, MindSpore frontend, which is used to compile tasks from user source code to computational graphs, control execution during training, maintain contexts (in non-sink mode), and dynamically generate graphs (in PyNative mode). |
-| MindArmour | MindSpore security component, which is used for AI adversarial example management, AI model attack defense and enhancement, and AI model robustness evaluation. |
+| MindArmour | The security module of MindSpore, which improves the confidentiality, integrity, and availability of models through techniques such as differential privacy and adversarial attack and defense. MindArmour prevents attackers from maliciously modifying a model or cracking its internal components to steal its parameters. |
| MindData | MindSpore data framework, which provides data loading, enhancement, dataset management, and visualization. |
| MindInsight | MindSpore visualization component, which visualizes information such as scalars, images, computational graphs, and model hyperparameters. |
+| MindRecord | A data format defined by MindSpore, together with the module for reading, writing, searching, and converting datasets in the MindSpore format. |
| MindSpore | Huawei-leaded open-source deep learning framework. |
| MindSpore Lite | A lightweight deep neural network inference engine that provides the inference function for models trained by MindSpore on the device side. |
| MNIST database | Modified National Handwriting of Images and Technology database, a large handwritten digit database, which is usually used to train various image processing systems. |
@@ -43,5 +44,5 @@
| ResNet-50 | Residual Neural Network 50, a residual neural network proposed by four Chinese people, including Kaiming He from Microsoft Research Institute. |
| Schema | Data set structure definition file, which defines the fields contained in a dataset and the field types. |
| Summary | An operator that monitors the values of tensors on the network. It is a peripheral operation in the figure and does not affect the data flow. |
-| TBE | Tensor Boost Engine, an operator development tool that is extended based on the Tensor Virtual Machine (TVM) framework. |
+| TBE | Tensor Boost Engine, an NPU operator development tool developed by Huawei and extended from the TVM (Tensor Virtual Machine) framework. It provides a set of Python APIs for developing custom operators. |
| TFRecord | Data format defined by TensorFlow. |
diff --git a/docs/source_en/help_seeking_path.md b/docs/note/source_en/help_seeking_path.md
similarity index 100%
rename from docs/source_en/help_seeking_path.md
rename to docs/note/source_en/help_seeking_path.md
diff --git a/docs/note/source_en/image_classification.md b/docs/note/source_en/image_classification.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f478abb7251ffc3f7d5de22f7f308afd3eb3ef1
--- /dev/null
+++ b/docs/note/source_en/image_classification.md
@@ -0,0 +1,33 @@
+# Image classification
+
+
+
+## Image classification introduction
+
+Image classification is to identify what an image represents and to predict the object list and the probabilities. For example, the following table shows the classification results after model inference.
+
+
+
+| Category | Probability |
+| ---------- | ----------- |
+| plant | 0.9359 |
+| flower | 0.8641 |
+| tree | 0.8584 |
+| houseplant | 0.7867 |
+
+See the [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification) of using MindSpore Lite to implement image classification.
+
+## Image classification model list
+
+The following table shows the data of some image classification models using MindSpore Lite inference.
+
+> The performance in the table below is tested on the Huawei Mate 30.
+
+| Model name | Link | Size | Precision | CPU 4-thread latency |
+|-----------------------|----------|----------|----------|-----------|
+| MobileNetV2 | | | | |
+| LeNet | | | | |
+| AlexNet | | | | |
+| GoogleNet | | | | |
+| ResNext50 | | | | |
+
diff --git a/docs/source_en/images/help_seeking_path.png b/docs/note/source_en/images/help_seeking_path.png
similarity index 100%
rename from docs/source_en/images/help_seeking_path.png
rename to docs/note/source_en/images/help_seeking_path.png
diff --git a/lite/tutorials/source_en/images/lite_quick_start_app_result.png b/docs/note/source_en/images/image_classification_result.png
similarity index 100%
rename from lite/tutorials/source_en/images/lite_quick_start_app_result.png
rename to docs/note/source_en/images/image_classification_result.png
diff --git a/docs/note/source_en/images/object_detection.png b/docs/note/source_en/images/object_detection.png
new file mode 100644
index 0000000000000000000000000000000000000000..ad5425c86393a9367701166796df42c9e4702988
Binary files /dev/null and b/docs/note/source_en/images/object_detection.png differ
diff --git a/docs/source_en/index.rst b/docs/note/source_en/index.rst
similarity index 57%
rename from docs/source_en/index.rst
rename to docs/note/source_en/index.rst
index b998ceb9e7171ee985df6dad5a31ce4ef7528c5f..3e3751cfc891d569cc53f24a3eb7e3d77a1064d1 100644
--- a/docs/source_en/index.rst
+++ b/docs/note/source_en/index.rst
@@ -3,20 +3,13 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
-MindSpore Documentation
-=======================
+MindSpore Note
+=================
.. toctree::
:glob:
:maxdepth: 1
design
- roadmap
- benchmark
- network_list
- operator_list
- constraints_on_network_construction
- glossary
- FAQ
- help_seeking_path
- community
+ specification_note
+ others
diff --git a/docs/note/source_en/network_list.rst b/docs/note/source_en/network_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..e4b29f1e46ddd503a7a7628cb229c97802f76ebe
--- /dev/null
+++ b/docs/note/source_en/network_list.rst
@@ -0,0 +1,7 @@
+Network List
+============
+
+.. toctree::
+ :maxdepth: 1
+
+ network_list_ms
\ No newline at end of file
diff --git a/docs/source_en/network_list.md b/docs/note/source_en/network_list_ms.md
similarity index 96%
rename from docs/source_en/network_list.md
rename to docs/note/source_en/network_list_ms.md
index 897111be5078687a3c4b4671c0c9f05904226128..6775c4fbbe26ee7caa05190e8adadf797a4bcff4 100644
--- a/docs/source_en/network_list.md
+++ b/docs/note/source_en/network_list_ms.md
@@ -1,60 +1,60 @@
-# Network List
-
-`Linux` `Ascend` `GPU` `CPU` `Model Development` `Intermediate` `Expert`
-
-
-
-- [Network List](#network-list)
- - [Model Zoo](#model-zoo)
- - [Pre-trained Models](#pre-trained-models)
-
-
-
-
-
-## Model Zoo
-
-| Domain | Sub Domain | Network | Ascend | GPU | CPU
-|:------ |:------| :----------- |:------ |:------ |:-----
-|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing
-|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
-|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
-|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Mobile Image Classification Image Classification Semantic Tegmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Mobile Image Classification Image Classification Semantic Tegmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
-|Computer Vision (CV) | Targets Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported |Doing | Doing
-| Computer Vision (CV) | Targets Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
-| Computer Vision(CV) | Targets Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
-| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
-| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
-| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
-| Graph Neural Networks(GNN)| Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
-| Graph Neural Networks(GNN)| Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
-
-> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate classic network scripts.
-
-## Pre-trained Models
-*It refers to the released MindSpore version. The hardware platforms that support model training are CPU, GPU and Ascend. As shown in the table below, ✓ indicates that the pre-trained model run on the selected platform.
-
-| Domain | Sub Domain| Network | Dataset | CPU | GPU | Ascend | 0.5.0-beta*
-|:------ |:------ | :------- |:------ |:------ |:------ |:----- |:-----
-|Computer Vision (CV) | Image Classification| [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | CIFAR-10| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/alexnet/alexnet_ascend_0.5.0_cifar10_official_classification_20200716.tar.gz)
-|Computer Vision (CV) | Image Classification| [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py)| MNIST | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/lenet/lenet_ascend_0.5.0_mnist_official_classification_20200716.tar.gz)
-|Computer Vision (CV) | Image Classification| [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py)| CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/vgg/vgg16_ascend_0.5.0_cifar10_official_classification_20200715.tar.gz)
-|Computer Vision (CV) | Image Classification| [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | CIFAR-10| | | ✓ |[Download](http://download.mindspore.cn/model_zoo/official/cv/resnet/resnet50_v1.5_ascend_0.3.0_cifar10_official_classification_20200718.tar.gz)
-|Computer Vision (CV) | Targets Detection| [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | COCO 2014| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/yolo/yolov3_darknet53_ascend_0.5.0_coco2014_official_object_detection_20200717.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [BERT_Base](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | zhwiki | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_base_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [BERT_NEZHA](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py)| zhwiki| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_nezha_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py)| WMT English-German| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/transformer/transformer_ascend_0.5.0_wmtende_official_machine_translation_20200713.tar.gz)
+# MindSpore Network List
+
+`Linux` `Ascend` `GPU` `CPU` `Model Development` `Intermediate` `Expert`
+
+
+
+- [MindSpore Network List](#mindspore-network-list)
+ - [Model Zoo](#model-zoo)
+ - [Pre-trained Models](#pre-trained-models)
+
+
+
+
+
+## Model Zoo
+
+| Domain | Sub Domain | Network | Ascend | GPU | CPU
+|:------ |:------| :----------- |:------ |:------ |:-----
+|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported
+| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing
+|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
+|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
+|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Mobile Image Classification, Image Classification, Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
+| Computer Vision (CV) | Mobile Image Classification, Image Classification, Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
+|Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Object Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
+| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
+| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
+| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
+
+> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate classic network scripts.
+
+## Pre-trained Models
+*It refers to the released MindSpore version. The hardware platforms that support model training are CPU, GPU, and Ascend. As shown in the table below, ✓ indicates that the pre-trained model runs on the selected platform. A sketch of loading a downloaded checkpoint is given after the table.
+
+| Domain | Sub Domain| Network | Dataset | CPU | GPU | Ascend | 0.5.0-beta*
+|:------ |:------ | :------- |:------ |:------ |:------ |:----- |:-----
+|Computer Vision (CV) | Image Classification| [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | CIFAR-10| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/alexnet/alexnet_ascend_0.5.0_cifar10_official_classification_20200716.tar.gz)
+|Computer Vision (CV) | Image Classification| [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py)| MNIST | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/lenet/lenet_ascend_0.5.0_mnist_official_classification_20200716.tar.gz)
+|Computer Vision (CV) | Image Classification| [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py)| CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/vgg/vgg16_ascend_0.5.0_cifar10_official_classification_20200715.tar.gz)
+|Computer Vision (CV) | Image Classification| [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | CIFAR-10| | | ✓ |[Download](http://download.mindspore.cn/model_zoo/official/cv/resnet/resnet50_v1.5_ascend_0.3.0_cifar10_official_classification_20200718.tar.gz)
+|Computer Vision (CV) | Object Detection| [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | COCO 2014| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/yolo/yolov3_darknet53_ascend_0.5.0_coco2014_official_object_detection_20200717.tar.gz)
+| Natural Language Processing (NLP) | Natural Language Understanding| [BERT_Base](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | zhwiki | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_base_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
+| Natural Language Processing (NLP) | Natural Language Understanding| [BERT_NEZHA](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py)| zhwiki| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_nezha_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
+| Natural Language Processing (NLP) | Natural Language Understanding| [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py)| WMT English-German| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/transformer/transformer_ascend_0.5.0_wmtende_official_machine_translation_20200713.tar.gz)
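+
+As a usage note, a downloaded archive can be extracted and the checkpoint loaded into the matching network definition. The sketch below is illustrative only: the checkpoint file name and the network import path are assumptions, while `load_checkpoint` and `load_param_into_net` are the standard MindSpore serialization interfaces.
+
+```python
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+# Hypothetical import path; use the network definition that matches the checkpoint,
+# e.g. the LeNet script from model_zoo linked in the table above.
+from src.lenet import LeNet5
+
+net = LeNet5()
+# Checkpoint file extracted from the downloaded .tar.gz archive (file name is assumed).
+param_dict = load_checkpoint("lenet_ascend_0.5.0_mnist.ckpt")
+# Load the pre-trained parameters into the network before evaluation or fine-tuning.
+load_param_into_net(net, param_dict)
+```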
diff --git a/docs/note/source_en/object_detection.md b/docs/note/source_en/object_detection.md
new file mode 100644
index 0000000000000000000000000000000000000000..fbd7b794f05a08b7e5096bbd78e9791d6cc3958a
--- /dev/null
+++ b/docs/note/source_en/object_detection.md
@@ -0,0 +1,29 @@
+# Object Detection
+
+
+
+## Object Detection Introduction
+
+Object detection can identify the object in an image and its position within the image. For the following figure, the output of the object detection model is shown in the table below: a rectangular box marks the position of the object in the image, and the probability of the object category is given. The four numbers in the coordinates are Xmin, Ymin, Xmax, and Ymax; the probability indicates the confidence of the detected object. A minimal sketch that interprets such a result follows the table.
+
+
+
+| Category | Probability | Coordinate |
+| -------- | ----------- | ---------------- |
+| mouse | 0.78 | [10, 25, 35, 43] |
+
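+The sketch below is purely illustrative (the variable names and the hard-coded result are assumptions, not part of the MindSpore Lite API); it shows how a result in the format above can be interpreted, for example to obtain a box width and height for drawing:
+
+```python
+# One detection in the format above: category, probability, [Xmin, Ymin, Xmax, Ymax].
+detection = {"category": "mouse", "probability": 0.78, "box": [10, 25, 35, 43]}
+
+xmin, ymin, xmax, ymax = detection["box"]
+width, height = xmax - xmin, ymax - ymin
+
+# Keep the detection only if the model is confident enough (the threshold is an assumption).
+if detection["probability"] >= 0.5:
+    print("%s: box of %dx%d pixels at (%d, %d)" % (detection["category"], width, height, xmin, ymin))
+```
+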
+For an example of implementing object detection with MindSpore Lite, see the [object detection example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection).
+
+## Object Detection Model List
+
+The following table lists benchmark data for several object detection models when using MindSpore Lite for inference.
+
+> The performance in the table below was measured on a Huawei Mate 30.
+
+| Model name | Link | Size | Precision | CPU 4-thread latency |
+|-----------------------|----------|----------|----------|-----------|
+| SSD | | | | |
+| Faster_RCNN | | | | |
+| Yolov3_Darknet | | | | |
+| Mask_RCNN | | | | |
+
diff --git a/docs/note/source_en/operator_list.rst b/docs/note/source_en/operator_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..8a013bafdf8a1b57b72a2e9f0d4444d703f59383
--- /dev/null
+++ b/docs/note/source_en/operator_list.rst
@@ -0,0 +1,10 @@
+Operator List
+=============
+
+.. toctree::
+ :maxdepth: 1
+
+ operator_list_ms
+ operator_list_implicit
+ operator_list_parallel
+ operator_list_lite
\ No newline at end of file
diff --git a/docs/note/source_en/operator_list_implicit.md b/docs/note/source_en/operator_list_implicit.md
new file mode 100644
index 0000000000000000000000000000000000000000..c36420936f6b393f9ed0c15d4e1d459c57c65842
--- /dev/null
+++ b/docs/note/source_en/operator_list_implicit.md
@@ -0,0 +1,104 @@
+# MindSpore Implicit Type Conversion Operator List
+
+`Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert`
+
+
+
+- [MindSpore Implicit Type Conversion Operator List](#mindspore-implicit-type-conversion-operator-list)
+ - [Implicit Type Conversion](#implicit-type-conversion)
+ - [conversion rules](#conversion-rules)
+ - [data types involved in conversion](#data-types-involved-in-conversion)
+ - [support ops](#support-ops)
+
+
+
+
+
+## Implicit Type Conversion
+
+### conversion rules
+* Scalar and Tensor operations: during the operation, the scalar is automatically converted to a Tensor whose data type is consistent with the Tensor involved in the operation;
+when the Tensor has the bool data type and the scalar is an int or float, both the scalar and the Tensor are converted to a Tensor with the data type int32 or float32.
+* Tensor operations between different data types: the data type priority is bool < uint8 < int8 < int16 < int32 < int64 < float16 < float32, and the operand with the lower-priority data type is converted to the higher-priority data type.
+
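+The following is a minimal sketch of these rules (illustrative only; it assumes a standard MindSpore Python installation, and the data types in the comments are the expected results rather than verified output):
+
+```python
+import numpy as np
+from mindspore import Tensor
+from mindspore.ops import operations as P
+
+add = P.TensorAdd()
+
+# Scalar and Tensor: the scalar 1 is converted to a Tensor whose data type
+# matches the Tensor operand, so the result stays int32.
+x = Tensor(np.array([1, 2, 3], dtype=np.int32))
+print(add(x, 1).dtype)   # expected: Int32
+
+# Tensors of different data types: int32 < float32 in the priority order above,
+# so the int32 operand is converted and the result is float32.
+y = Tensor(np.array([1.0, 2.0, 3.0], dtype=np.float32))
+print(add(x, y).dtype)   # expected: Float32
+```
+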
+| Operation | CPU FP16 | CPU FP32 | CPU Int8 | CPU UInt8 | GPU FP16 | GPU FP32 | TensorFlow Lite op supported | Caffe Lite op supported | ONNX Lite op supported |
+|-----------------------|----------|----------|-----------|----------|----------|------------------|----------|----------|----------|
+| Abs | | Supported | Supported | Supported | Supported | Supported | Abs | | Abs |
+| Add | Supported | Supported | Supported | Supported | Supported | Supported | Add | | Add |
+| AddN | | Supported | | | | | AddN | | |
+| Argmax | | Supported | Supported | Supported | | | Argmax | ArgMax | ArgMax |
+| Argmin | | Supported | Supported | Supported | | | Argmin | | |
+| AvgPool | Supported | Supported | Supported | Supported | Supported | Supported | MeanPooling| Pooling | AveragePool |
+| BatchNorm | Supported | Supported | Supported | Supported | Supported | Supported | | BatchNorm | BatchNormalization |
+| BatchToSpace | | Supported | Supported | Supported | | | BatchToSpace, BatchToSpaceND | | |
+| BiasAdd | | Supported | Supported | Supported | Supported | Supported | | | BiasAdd |
+| Broadcast | | Supported | | | | | BroadcastTo | | Expand |
+| Cast | Supported | Supported | | Supported | Supported | Supported | Cast, DEQUANTIZE* | | Cast |
+| Ceil | | Supported | Supported | Supported | Supported | Supported | Ceil | | Ceil |
+| Concat | Supported | Supported | Supported | Supported | Supported | Supported | Concat | Concat | Concat |
+| Conv2d | Supported | Supported | Supported | Supported | Supported | Supported | Conv2D | Convolution | Conv |
+| Conv2dTranspose | Supported | Supported | Supported | Supported | Supported | Supported | DeConv2D | Deconvolution | ConvTranspose |
+| Cos | | Supported | Supported | Supported | Supported | Supported | Cos | | Cos |
+| Crop | | Supported | Supported | Supported | | | | Crop | |
+| DeDepthwiseConv2D | | Supported | Supported | Supported | | | | Deconvolution| ConvTranspose |
+| DepthToSpace | | Supported | Supported | Supported | | | DepthToSpace| | DepthToSpace |
+| DepthwiseConv2dNative | Supported | Supported | Supported | Supported | Supported | Supported | DepthwiseConv2D | Convolution | Convolution |
+| Div | Supported | Supported | Supported | Supported | Supported | Supported | Div, RealDiv | | Div |
+| Eltwise | Supported | Supported | | | | | | Eltwise | |
+| Elu | | Supported | | | | | Elu | | Elu |
+| Equal | Supported | Supported | Supported | Supported | | | Equal | | Equal |
+| Exp | | Supported | | | Supported | Supported | Exp | | Exp |
+| ExpandDims | | Supported | | | | | | | |
+| Fill | | Supported | | | | | Fill | | |
+| Flatten | | Supported | | | | | | Flatten | |
+| Floor | | Supported | Supported | Supported | Supported | Supported | Floor | | Floor |
+| FloorDiv | Supported | Supported | | | | | FloorDiv | | |
+| FloorMod | Supported | Supported | | | | | FloorMod | | |
+| FullConnection | | Supported | Supported | Supported | Supported | Supported | FullyConnected | InnerProduct | |
+| GatherNd | | Supported | Supported | Supported | | | GatherND | | |
+| GatherV2 | | Supported | Supported | Supported | | | Gather | | Gather |
+| Greater | Supported | Supported | Supported | Supported | | | Greater | | Greater |
+| GreaterEqual | Supported | Supported | Supported | Supported | | | GreaterEqual| | |
+| Hswish | Supported | Supported | Supported | Supported | | | HardSwish | | |
+| LeakyReLU | Supported | Supported | | | Supported | Supported | LeakyRelu | | LeakyRelu |
+| Less | Supported | Supported | Supported | Supported | | | Less | | Less |
+| LessEqual | Supported | Supported | Supported | Supported | | | LessEqual | | |
+| LRN | | Supported | | | | | LocalResponseNorm | | Lrn |
+| Log | | Supported | Supported | Supported | Supported | Supported | Log | | Log |
+| LogicalAnd | Supported | Supported | | | | | LogicalAnd | | |
+| LogicalNot | | Supported | Supported | Supported | Supported | Supported | LogicalNot | | |
+| LogicalOr | Supported | Supported | | | | | LogicalOr | | |
+| LSTM | | Supported | | | | | | | |
+| MatMul | | Supported | Supported | Supported | Supported | Supported | | | MatMul |
+| Maximum | Supported | Supported | | | | | Maximum | | Max |
+| MaxPool | Supported | Supported | Supported | Supported | Supported | Supported | MaxPooling | Pooling | MaxPool |
+| Minimum | Supported | Supported | | | | | Minimum | | Min |
+| Mul | Supported | Supported | Supported | Supported | Supported | Supported | Mul | | Mul |
+| NotEqual | Supported | Supported | Supported | Supported | | | NotEqual | | |
+| OneHot | | Supported | | | | | OneHot | | |
+| Pad | | Supported | Supported | Supported | | | Pad | | Pad |
+| Pow | | Supported | Supported | Supported | | | Pow | Power | Power |
+| PReLU | | Supported | | | Supported | Supported | | PReLU | |
+| Range | | Supported | | | | | Range | | |
+| Rank | | Supported | | | | | Rank | | |
+| ReduceMax | Supported | Supported | Supported | Supported | | | ReduceMax | | ReduceMax |
+| ReduceMean | Supported | Supported | Supported | Supported | | | Mean | | ReduceMean |
+| ReduceMin | Supported | Supported | Supported | Supported | | | ReduceMin | | ReduceMin |
+| ReduceProd | Supported | Supported | Supported | Supported | | | ReduceProd | | |
+| ReduceSum | Supported | Supported | Supported | Supported | | | Sum | | ReduceSum |
+| ReduceSumSquare | Supported | Supported | Supported | Supported | | | | | |
+| ReLU | Supported | Supported | Supported | Supported | Supported | Supported | Relu | ReLU | Relu |
+| ReLU6 | Supported | Supported | Supported | Supported | Supported | Supported | Relu6 | ReLU6 | Clip* |
+| Reshape | Supported | Supported | Supported | Supported | Supported | Supported | Reshape | Reshape | Reshape,Flatten |
+| Resize | | Supported | Supported | Supported | | | ResizeBilinear, NearestNeighbor | Interp | |
+| Reverse | | Supported | | | | | reverse | | |
+| ReverseSequence | | Supported | | | | | ReverseSequence | | |
+| Round | | Supported | Supported | Supported | Supported | Supported | Round | | |
+| Rsqrt | | Supported | Supported | Supported | Supported | Supported | Rsqrt | | |
+| Scale | | Supported | | | Supported | Supported | | Scale | |
+| ScatterNd | | Supported | | | | | ScatterNd | | |
+| Shape | | Supported | | | | | Shape | | Shape |
+| Sigmoid | Supported | Supported | Supported | Supported | Supported | Supported | Logistic | Sigmoid | Sigmoid |
+| Sin | | Supported | Supported | Supported | Supported | Supported | Sin | | Sin |
+| Slice | | Supported | Supported | Supported | Supported | Supported | Slice | | Slice |
+| Softmax | Supported | Supported | Supported | Supported | Supported | Supported | Softmax | Softmax | Softmax |
+| SpaceToBatch | | Supported | | | | | | | |
+| SpaceToBatchND | | Supported | | | | | SpaceToBatchND | | |
+| SpaceToDepth | | Supported | | | | | SpaceToDepth | | SpaceToDepth |
+| SparseToDense | | Supported | | | | | SparseToDense | | |
+| Split | Supported | Supported | Supported | Supported | | | Split, SplitV | | |
+| Sqrt | | Supported | Supported | Supported | Supported | Supported | Sqrt | | Sqrt |
+| Square | | Supported | Supported | Supported | Supported | Supported | Square | | |
+| SquaredDifference | | Supported | | | | | SquaredDifference | | |
+| Squeeze | | Supported | Supported | Supported | | | Squeeze | | Squeeze |
+| StridedSlice | | Supported | Supported | Supported | | | StridedSlice| | |
+| Stack | | Supported | | | | | Stack | | |
+| Sub | Supported | Supported | Supported | Supported | Supported | Supported | Sub | | Sub |
+| Tanh | Supported | Supported | | | Supported | Supported | Tanh | TanH | |
+| Tile | | Supported | | | | | Tile | | Tile |
+| TopK | | Supported | Supported | Supported | | | TopKV2 | | |
+| Transpose | Supported | Supported | | | Supported | Supported | Transpose | Permute | Transpose |
+| Unique | | Supported | | | | | Unique | | |
+| Unsqueeze | | Supported | Supported | Supported | | | | | Unsqueeze |
+| Unstack | | Supported | | | | | Unstack | | |
+| Where | | Supported | | | | | Where | | |
+| ZerosLike | | Supported | | | | | ZerosLike | | |
+
+* Clip: only supports converting Clip(0, 6) to Relu6.
+* DEQUANTIZE: only supports converting fp16 to fp32.
diff --git a/docs/source_en/operator_list.md b/docs/note/source_en/operator_list_ms.md
similarity index 76%
rename from docs/source_en/operator_list.md
rename to docs/note/source_en/operator_list_ms.md
index 672de46b5ab7e69e5c8743b03fa3cfd323d899d7..f8e83eb7e1b300f620c7f7e2803fd4219c5885c6 100644
--- a/docs/source_en/operator_list.md
+++ b/docs/note/source_en/operator_list_ms.md
@@ -1,4 +1,4 @@
-# Operator List
+# MindSpore Operator List
`Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert`
@@ -8,15 +8,10 @@
- [mindspore.nn](#mindsporenn)
- [mindspore.ops.operations](#mindsporeopsoperations)
- [mindspore.ops.functional](#mindsporeopsfunctional)
- - [Distributed Operator](#distributed-operator)
- - [Implicit Type Conversion](#implicit-type-conversion)
- - [conversion rules](#conversion-rules)
- - [data types involved in conversion](#data-types-involved-in-conversion)
- - [support ops](#support-ops)
-
+
## mindspore.nn
@@ -37,7 +32,7 @@
| [mindspore.nn.Flatten](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Supported |layer/basic
| [mindspore.nn.Dense](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dense) |Supported | Supported | Supported |layer/basic
| [mindspore.nn.ClipByNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ClipByNorm) |Supported | Supported | Doing |layer/basic
-| [mindspore.nn.Norm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Doing | Supported | Doing |layer/basic
+| [mindspore.nn.Norm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Supported | Supported | Doing |layer/basic
| [mindspore.nn.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.OneHot) | Supported | Supported | Supported |layer/basic
| [mindspore.nn.Range](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Range) | Supported | Doing | Doing |layer/basic
| [mindspore.nn.SequentialCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SequentialCell) |Supported | Supported | Doing |layer/container
@@ -65,6 +60,17 @@
| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) | Supported | Supported | Doing |layer/pooling
| [mindspore.nn.DenseBnAct](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseBnAct) |Supported | Doing | Doing |layer/quant
| [mindspore.nn.Conv2dBnAct](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnAct) | Supported | Supported | Doing |layer/quant
+| [mindspore.nn.FakeQuantWithMinMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FakeQuantWithMinMax) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnFoldQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnWithoutFoldQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnWithoutFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.DenseQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.ActQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ActQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.LeakyReLUQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLUQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSwishQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSwishQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSigmoidQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSigmoidQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.TensorAddQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TensorAddQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.MulQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MulQuant) | Supported | Supported | Supported |layer/quant
| [mindspore.nn.L1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Supported |Supported | Doing |loss/loss
| [mindspore.nn.MSELoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MSELoss) | Supported |Doing | Doing |loss/loss
| [mindspore.nn.SmoothL1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SmoothL1Loss) |Supported |Doing | Doing |loss/loss
@@ -84,7 +90,6 @@
| [mindspore.nn.WithLossCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithLossCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.WithGradCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithGradCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.TrainOneStepCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.DataWrapper](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DataWrapper) |Doing | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.GetNextSingleOp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GetNextSingleOp) |Doing | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.WithEvalCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithEvalCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.ParameterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ParameterUpdate) | Supported |Doing | Doing |wrap/cell_wrapper
@@ -100,7 +105,7 @@
| :----------- |:------ |:------ |:-----|:---
| [mindspore.ops.operations.Flatten](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Flatten) | Supported | Supported |Supported | nn_ops
| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Acosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Acosh) | Doing | Doing | Doing | nn_ops
+| [mindspore.ops.operations.Acosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Acosh) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.Elu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Elu) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Supported | Supported | Doing | nn_ops
@@ -344,6 +349,8 @@
| [mindspore.ops.operations.Xdivy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xdivy) | Supported | Doing | Doing | math_ops
| [mindspore.ops.operations.Xlogy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xlogy) | Supported | Doing | Doing | math_ops
| [mindspore.ops.operations.HistogramFixedWidth](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.operations.Eps](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Eps) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.operations.ReLUV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLUV2) | Supported | Doing | Doing | nn_ops
## mindspore.ops.functional
@@ -380,156 +387,3 @@
| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | tensor_pow
> At present, functional supports some operators without attributes, which will be further completed in the future.
-
-## Distributed Operator
-
-| op name | constraints
-| :----------- | :-----------
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | None
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | None
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | None
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | None
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | None
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic.
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic.
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | None
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | None
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | None
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | None
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | None
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | None
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | None
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | None
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | None
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | None
-| [mindspore.ops.operations.Dropout](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Dropout) | Repeated calculation is not supported.
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | None
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | None
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | None
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | None
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | None
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | None
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | None
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | None
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | None
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | None
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | None
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | None
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | None
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | None
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | None
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic.
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Need to be used in conjunction with `DropoutDoMask`.
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Need to be used in conjunction with `DropoutGenMask`,configuring shard strategy is not supported.
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Only support 1-dim and 2-dim parameters and the last dimension of the input_params should be 32-byte aligned; Scalar input_indices is not supported; Repeated calculation is not supported when the parameters are split in the dimension of the axis; Split input_indices and input_params at the same time is not supported.
-| [mindspore.ops.operations.SparseGatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseGatherV2) | The same as GatherV2.
-| [mindspore.ops.operations.EmbeddingLookup](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EmbeddingLookup) | The same as GatherV2.
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic.
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | The last dimension of logits and labels can't be splited; Only supports using output[0].
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | `transpose_a=True` is not supported.
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | `transpore_a=True` is not supported.
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | The shard strategy in channel dimension of input_x should be consistent with weight.
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Only support 1-dim indices.
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | None
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | None
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | None
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | The output index can't be used as the input of other operators.
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | The output index can't be used as the input of other operators.
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | None
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Configuring shard strategy is not supported.
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Only support mask with all 0 values; The dimension needs to be split should be all extracted; Split is not supported when the strides of dimension is 1.
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Only support configuring shard strategy for multiples.
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | None
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Configuring shard strategy is not supported.
-
-> Repeated calculation means that the device is not fully used. For example, the cluster has 8 devices to run distributed training, the splitting strategy only cuts the input into 4 copies. In this case, double counting will occur.
->
-
-## Implicit Type Conversion
-
-### conversion rules
-* Scalar and Tensor operations: during operation, the scalar is automatically converted to Tensor, and the data type is consistent with the Tensor data type involved in the operation;
-when Tensor is a bool data type and the scalar is int or float, both the scalar and Tensor are converted to the Tensor with the data type of int32 or float32.
-* Tensor operation of different data types: the priority of data type is bool < uint8 < int8 < int16 < int32 < int64 < float16 < float32
+
+- [MindSpore Distributed Operator List](#mindspore-distributed-operator-list)
+ - [Distributed Operator](#distributed-operator)
+
+
+
+
+
+## Distributed Operator
+
+| op name | constraints
+| :----------- | :-----------
+| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | None
+| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | None
+| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | None
+| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | None
+| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | None
+| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | The logits can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with that of a single device.
+| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | The logits can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with that of a single device.
+| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | None
+| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | None
+| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | None
+| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | None
+| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | None
+| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | None
+| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | None
+| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | None
+| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | None
+| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | None
+| [mindspore.ops.operations.Dropout](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Dropout) | Repeated calculation is not supported.
+| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | None
+| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | None
+| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | None
+| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | None
+| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | None
+| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | None
+| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | None
+| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | None
+| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | None
+| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | None
+| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | None
+| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | None
+| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | None
+| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | None
+| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | None
+| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | The input_x can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with that of a single device.
+| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Needs to be used in conjunction with `DropoutDoMask`.
+| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Needs to be used in conjunction with `DropoutGenMask`; configuring the shard strategy is not supported.
+| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Only supports 1-dim and 2-dim parameters, and the last dimension of input_params should be 32-byte aligned; scalar input_indices is not supported; repeated calculation is not supported when the parameters are split along the axis dimension; splitting input_indices and input_params at the same time is not supported.
+| [mindspore.ops.operations.SparseGatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseGatherV2) | The same as GatherV2.
+| [mindspore.ops.operations.EmbeddingLookup](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EmbeddingLookup) | The same as GatherV2.
+| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | The input_x can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with that of a single device.
+| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | The last dimension of logits and labels can't be split; only using output[0] is supported.
+| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | `transpose_a=True` is not supported.
+| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | `transpose_a=True` is not supported.
+| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | The shard strategy in channel dimension of input_x should be consistent with weight.
+| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Only supports 1-dim indices.
+| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | None
+| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | None
+| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | None
+| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | The output index can't be used as the input of other operators.
+| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | The output index can't be used as the input of other operators.
+| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | None
+| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Configuring shard strategy is not supported.
+| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Only supports masks with all 0 values; the dimensions to be split must be fully extracted; split is not supported when the strides of the dimension is 1.
+| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Only supports configuring the shard strategy for multiples.
+| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | None
+| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Configuring shard strategy is not supported.
+
+> Repeated calculation means that the devices are not fully used. For example, if a cluster of 8 devices runs distributed training but the shard strategy splits the input into only 4 slices, each slice is computed on two devices and repeated calculation occurs, as the sketch below illustrates.
+>
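+
+The shard strategy that leads to this situation is configured on the operator primitive itself. The following minimal sketch is only an illustration, not part of the official constraint table; it assumes an 8-device setup in semi-auto parallel mode and uses the `shard` method of `mindspore.ops.operations.MatMul` to split the row dimension of the first input into 4 slices, so with 8 devices each slice is computed twice:
+
+```python
+import mindspore.ops.operations as P
+from mindspore import context
+from mindspore.nn import Cell
+
+context.set_context(mode=context.GRAPH_MODE)
+# Assumed setup: 8 devices running in semi-auto parallel mode.
+context.set_auto_parallel_context(device_num=8, parallel_mode="semi_auto_parallel")
+
+
+class MatMulNet(Cell):
+    def __init__(self):
+        super(MatMulNet, self).__init__()
+        # Split the first input along its row dimension into 4 slices and keep
+        # the second input unsplit; the strategy covers only 4 of the 8 devices,
+        # so the slices are computed repeatedly.
+        self.matmul = P.MatMul().shard(((4, 1), (1, 1)))
+
+    def construct(self, x, w):
+        return self.matmul(x, w)
+```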
diff --git a/docs/note/source_en/others.rst b/docs/note/source_en/others.rst
new file mode 100644
index 0000000000000000000000000000000000000000..61a9cdf7576454946cfc8700fe75b80718a191dc
--- /dev/null
+++ b/docs/note/source_en/others.rst
@@ -0,0 +1,11 @@
+Others
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ glossary
+ roadmap
+ help_seeking_path
+ community
+
diff --git a/docs/source_en/roadmap.md b/docs/note/source_en/roadmap.md
similarity index 100%
rename from docs/source_en/roadmap.md
rename to docs/note/source_en/roadmap.md
diff --git a/docs/note/source_en/specification_note.rst b/docs/note/source_en/specification_note.rst
new file mode 100644
index 0000000000000000000000000000000000000000..66b021a446e93dad29de47666b59a646cef37558
--- /dev/null
+++ b/docs/note/source_en/specification_note.rst
@@ -0,0 +1,11 @@
+Specification Note
+====================
+
+.. toctree::
+ :maxdepth: 1
+
+ benchmark
+ network_list
+ operator_list
+ constraints_on_network_construction
+
diff --git a/docs/note/source_zh_cn/_static/logo_notebook.png b/docs/note/source_zh_cn/_static/logo_notebook.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b60a39049880c74956d5e37c985ebfd7f401d5d
Binary files /dev/null and b/docs/note/source_zh_cn/_static/logo_notebook.png differ
diff --git a/tutorials/source_zh_cn/_static/logo_source.png b/docs/note/source_zh_cn/_static/logo_source.png
similarity index 100%
rename from tutorials/source_zh_cn/_static/logo_source.png
rename to docs/note/source_zh_cn/_static/logo_source.png
diff --git a/docs/source_zh_cn/benchmark.md b/docs/note/source_zh_cn/benchmark.md
similarity index 100%
rename from docs/source_zh_cn/benchmark.md
rename to docs/note/source_zh_cn/benchmark.md
diff --git a/docs/source_zh_cn/community.rst b/docs/note/source_zh_cn/community.rst
similarity index 100%
rename from docs/source_zh_cn/community.rst
rename to docs/note/source_zh_cn/community.rst
diff --git a/lite/docs/source_zh_cn/conf.py b/docs/note/source_zh_cn/conf.py
similarity index 92%
rename from lite/docs/source_zh_cn/conf.py
rename to docs/note/source_zh_cn/conf.py
index dd8e0482bb14ee1ec4242dcd8e550cbb12eb0712..95d7701759707ab95a3c199cd8a22e2e2cc1194d 100644
--- a/lite/docs/source_zh_cn/conf.py
+++ b/docs/note/source_zh_cn/conf.py
@@ -15,9 +15,9 @@ import os
# -- Project information -----------------------------------------------------
-project = 'MindSpore Lite'
-copyright = '2020, MindSpore Lite'
-author = 'MindSpore Lite'
+project = 'MindSpore'
+copyright = '2020, MindSpore'
+author = 'MindSpore'
# The full version, including alpha/beta/rc tags
release = 'master'
@@ -48,8 +48,6 @@ exclude_patterns = []
pygments_style = 'sphinx'
-autodoc_inherit_docstrings = False
-
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
@@ -59,4 +57,6 @@ html_theme = 'sphinx_rtd_theme'
html_search_language = 'zh'
+html_search_options = {'dict': '../../resource/jieba.txt'}
+
html_static_path = ['_static']
\ No newline at end of file
diff --git a/docs/source_zh_cn/constraints_on_network_construction.md b/docs/note/source_zh_cn/constraints_on_network_construction.md
similarity index 83%
rename from docs/source_zh_cn/constraints_on_network_construction.md
rename to docs/note/source_zh_cn/constraints_on_network_construction.md
index 8b352b2625d65ebdb811e75a84caddc9a71f0b78..6b3b2204a088049ac0eff67933012048b5299309 100644
--- a/docs/source_zh_cn/constraints_on_network_construction.md
+++ b/docs/note/source_zh_cn/constraints_on_network_construction.md
@@ -231,34 +231,49 @@ tuple也支持切片取值操作, 但不支持切片类型为Tensor类型,支
### Other Constraints
-The parameters passed to the whole network's construct function, as well as the parameters of functions decorated with the ms_function decorator, are generalized during graph compilation and cannot be used as constant inputs to operators. Therefore, in graph mode, the parameters of the entry network are restricted to Tensor, as shown in the following example:
-* Incorrect usage:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
-
- def construct(self, input_x, input_axis):
- return self.expandDims(input_x, input_axis)
- expand_dim = ExpandDimsTest()
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x, 0)
- ```
- In this example, ExpandDimsTest is a network containing a single operator with two inputs, input_x and input_axis. The second input of the ExpandDims operator must be a constant because it is needed to infer the operator's output dimension during graph compilation, whereas input_axis, being a network parameter, is generalized into a variable whose value cannot be determined, so the output dimension cannot be inferred and graph compilation fails. Therefore, inputs whose values are required for inference during graph compilation must be constant inputs. In the API documentation, such parameters of these operators are described and marked with "constant input is needed".
+1. The parameters passed to the whole network's `construct` function, as well as the parameters of functions decorated with the `ms_function` decorator, are generalized during graph compilation and cannot be used as constant inputs to operators. Therefore, in graph mode, the parameters of the entry network are restricted to `Tensor`, as shown in the following example:
+
+    * Incorrect usage:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+
+ def construct(self, input_x, input_axis):
+ return self.expandDims(input_x, input_axis)
+ expand_dim = ExpandDimsTest()
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x, 0)
+ ```
+    In this example, `ExpandDimsTest` is a network containing a single operator with two inputs, `input_x` and `input_axis`. The second input of the `ExpandDims` operator must be a constant because it is needed to infer the operator's output dimension during graph compilation, whereas `input_axis`, being a network parameter, is generalized into a variable whose value cannot be determined, so the output dimension cannot be inferred and graph compilation fails. Therefore, inputs whose values are required for inference during graph compilation must be constant inputs. In the API documentation, such parameters of these operators are described and marked with "constant input is needed".
+
+    * The correct way is to fill in the required value for the operator's constant input directly inside the construct function, or to use a class member variable, as shown below:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self, axis):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+ self.axis = axis
+
+ def construct(self, input_x):
+ return self.expandDims(input_x, self.axis)
+ axis = 0
+ expand_dim = ExpandDimsTest(axis)
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x)
+ ```
+
+2. Data members of the network that are not of `Parameter` type must not be modified. For example:
-* The correct way is to fill in the required value for the operator's constant input directly inside the construct function, or to use a class member variable, as shown below:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self, axis):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
- self.axis = axis
-
- def construct(self, input_x):
- return self.expandDims(input_x, self.axis)
- axis = 0
- expand_dim = ExpandDimsTest(axis)
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x)
```
+ class Net(Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+ self.num = 2
+ self.par = Parameter(Tensor(np.ones((2, 3, 4))), name="par")
+
+ def construct(self, x, y):
+ return x + y
+ ```
+    In the network defined above, `self.num` is not a `Parameter` and must not be modified, while `self.par` is a `Parameter` and can be modified, as the sketch below illustrates.
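+
+    A minimal sketch of this rule, assuming the `Assign` primitive from `mindspore.ops.operations` as one illustrative way to update a `Parameter` (not the only one):
+
+    ```python
+    import numpy as np
+    import mindspore.ops.operations as P
+    from mindspore import Tensor, Parameter
+    from mindspore.nn import Cell
+
+    class AssignNet(Cell):
+        def __init__(self):
+            super(AssignNet, self).__init__()
+            self.num = 2  # plain attribute: must stay unchanged in construct
+            self.par = Parameter(Tensor(np.zeros((2, 3, 4), np.float32)), name="par")
+            self.assign = P.Assign()
+
+        def construct(self, x):
+            # Updating a Parameter is allowed, for example through the Assign
+            # primitive; self.num, a plain Python attribute, is left untouched.
+            return self.assign(self.par, x)
+    ```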
diff --git a/docs/note/source_zh_cn/design.rst b/docs/note/source_zh_cn/design.rst
new file mode 100644
index 0000000000000000000000000000000000000000..2eddcf074fcd968f211f11e0c48f600c97ae157d
--- /dev/null
+++ b/docs/note/source_zh_cn/design.rst
@@ -0,0 +1,17 @@
+Design
+===========
+
+.. toctree::
+ :maxdepth: 1
+
+ design/technical_white_paper
+ design/mindspore/architecture
+ design/mindspore/architecture_lite
+ design/mindspore/mindir
+ design/mindspore/distributed_training_design
+ design/mindinsight/profiler_design
+ design/mindinsight/training_visual_design
+ design/mindinsight/graph_visual_design
+ design/mindinsight/tensor_visual_design
+ design/mindarmour/differential_privacy_design
+ design/mindarmour/fuzzer_design
diff --git a/docs/source_zh_cn/design/mindarmour/differential_privacy_design.md b/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md
similarity index 95%
rename from docs/source_zh_cn/design/mindarmour/differential_privacy_design.md
rename to docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md
index 276219655693f4c3424f23906cde901cd465217a..72868f5dc71065bebed6525c1b404b15b8b6a7cb 100644
--- a/docs/source_zh_cn/design/mindarmour/differential_privacy_design.md
+++ b/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md
@@ -1,70 +1,70 @@
-# Differential Privacy
-
-`Linux` `Ascend` `Model Development` `Model Optimization` `Framework Development` `Enterprise` `Expert` `Contributor`
-
-
-
-- [Differential Privacy](#differential-privacy)
-    - [Overall Design](#overall-design)
-    - [Differential Privacy Optimizer](#differential-privacy-optimizer)
-    - [Noise Mechanisms of Differential Privacy](#noise-mechanisms-of-differential-privacy)
-    - [Monitor](#monitor)
-    - [Code Implementation](#code-implementation)
-    - [References](#references)
-
-
-
-
-
-## Overall Design
-
-The Differential-Privacy module of MindArmour provides differential privacy training. Model training mainly consists of building the training dataset, computing the loss, computing gradients, and updating model parameters. MindArmour's differential privacy training currently focuses on the gradient computation step: the gradients are clipped and noised by the corresponding algorithms, thereby protecting the privacy of user data.
-
-
-
-