diff --git a/docs/api_cpp/source_en/api.md b/docs/api_cpp/source_en/api.md
new file mode 100644
index 0000000000000000000000000000000000000000..e0c9aead9a5e5a1a083f5a37611c3436fd7ee6f0
--- /dev/null
+++ b/docs/api_cpp/source_en/api.md
@@ -0,0 +1,390 @@
+# mindspore::api
+
+## Context
+
+\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)>
+
+The Context class is used to store environment variables during execution.
+
+### Static Public Member Function
+
+#### Instance
+
+```cpp
+static Context &Instance();
+```
+
+Obtains the MindSpore Context instance object.
+
+### Public Member Functions
+
+#### GetDeviceTarget
+
+```cpp
+const std::string &GetDeviceTarget() const;
+```
+
+Obtains the target device type.
+
+- Returns
+
+ Current DeviceTarget type.
+
+#### GetDeviceID
+
+```cpp
+uint32_t GetDeviceID() const;
+```
+
+Obtains the device ID.
+
+- Returns
+
+ Current device ID.
+
+#### SetDeviceTarget
+
+```cpp
+Context &SetDeviceTarget(const std::string &device_target);
+```
+
+Configures the target device.
+
+- Parameters
+
+ - `device_target`: target device to be configured. The options are `kDeviceTypeAscend310` and `kDeviceTypeAscend910`.
+
+- Returns
+
+ MindSpore Context instance object.
+
+#### SetDeviceID
+
+```cpp
+Context &SetDeviceID(uint32_t device_id);
+```
+
+Configures the device ID.
+
+- Parameters
+
+ - `device_id`: device ID to be configured.
+
+- Returns
+
+ MindSpore Context instance object.
+
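+Both setters return the Context instance itself, so configuration calls can be chained. A minimal usage sketch (assuming the device-type constants shown above are exposed by the API headers):
+
+```cpp
+using namespace mindspore::api;
+
+// Configure the global context for an Ascend 310 device before building a model.
+Context::Instance().SetDeviceTarget(kDeviceTypeAscend310).SetDeviceID(0);
+```
+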
+## Serialization
+
+\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/serialization.h)>
+
+The Serialization class groups the methods for reading and writing model files.
+
+### Static Public Member Function
+
+#### LoadModel
+
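+```cpp
+static Graph LoadModel(const std::string &file, ModelType model_type);
+```
+
+Loads a model file into an object for storing graph data. (This signature is inferred from the parameter and return descriptions below, not copied from the header.)
+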
+- Parameters
+
+ - `file`: model file path.
+ - `model_type`: model file type. The options are `ModelType::kMindIR` and `ModelType::kOM`.
+
+- Returns
+
+ Object for storing graph data.
+
+## Model
+
+\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/model.h)>
+
+The Model class is used to define a MindSpore model, facilitating computational graph management.
+
+### Constructor and Destructor
+
+```cpp
+Model(const GraphCell &graph);
+~Model();
+```
+
+`GraphCell` is a derivative of `Cell`. `Cell` is not open for use currently. `GraphCell` can be constructed from `Graph`, for example, `Model model(GraphCell(graph))`.
+
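+A hedged construction sketch (the file name is illustrative, and the `LoadModel` signature is the one inferred in the Serialization section above):
+
+```cpp
+Graph graph = Serialization::LoadModel("model.mindir", ModelType::kMindIR);
+Model model(GraphCell(graph));
+```
+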
+### Public Member Functions
+
+#### Build
+
+```cpp
+Status Build(const std::map<std::string, std::string> &options);
+```
+
+Builds a model so that it can run on a device.
+
+- Parameters
+
+ - `options`: model build options. In the following table, Key indicates the option name, and Value indicates the corresponding option.
+
+| Key | Value |
+| --- | --- |
+| kModelOptionInsertOpCfgPath | [AIPP](https://support.huaweicloud.com/intl/en-us/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html) configuration file path. |
+| kModelOptionInputFormat | Manually specifies the model input format. The options are `"NCHW"` and `"NHWC"`. |
+| kModelOptionInputShape | Manually specifies the model input shape, for example, `"input_op_name1: n1,c2,h3,w4;input_op_name2: n4,c3,h2,w1"`. |
+| kModelOptionOutputType | Manually specifies the model output type, for example, `"FP16"` or `"UINT8"`. The default value is `"FP32"`. |
+| kModelOptionPrecisionMode | Model precision mode. The options are `"force_fp16"`, `"allow_fp32_to_fp16"`, `"must_keep_origin_dtype"`, and `"allow_mix_precision"`. The default value is `"force_fp16"`. |
+| kModelOptionOpSelectImplMode | Operator selection mode. The options are `"high_performance"` and `"high_precision"`. The default value is `"high_performance"`. |
+
+- Returns
+
+ Status code.
+
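+A hedged usage sketch, assuming the option keys in the table above are exposed as string constants by the API headers (the value shown is illustrative):
+
+```cpp
+std::map<std::string, std::string> build_options;
+// Allow FP32 operators to fall back to FP16 where needed (see kModelOptionPrecisionMode above).
+build_options[kModelOptionPrecisionMode] = "allow_fp32_to_fp16";
+Status ret = model.Build(build_options);
+```
+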
+#### Predict
+
+```cpp
+Status Predict(const std::vector<Buffer> &inputs, std::vector<Buffer> *outputs);
+```
+
+Performs model inference.
+
+- Parameters
+
+ - `inputs`: a `vector` where model inputs are arranged in sequence.
+ - `outputs`: output parameter, which is the pointer to a `vector`. The model outputs are filled in the container in sequence.
+
+- Returns
+
+ Status code.
+
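+A minimal invocation sketch (assuming `Build` has succeeded; the `Buffer` element type follows the reconstructed signature above):
+
+```cpp
+std::vector<Buffer> inputs;   // to be filled with input data in model input order
+std::vector<Buffer> outputs;  // populated by the call
+Status ret = model.Predict(inputs, &outputs);
+```
+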
+#### GetInputsInfo
+
+```cpp
+Status GetInputsInfo(std::vector<std::string> *names, std::vector<std::vector<int64_t>> *shapes, std::vector<DataType> *data_types, std::vector<size_t> *mem_sizes) const;
+```
+
+Obtains the model input information.
+
+- Parameters
+
+  - `names`: optional output parameter, a pointer to a `vector` that receives the input names in model input order. If `nullptr` is passed, this attribute is not obtained.
+  - `shapes`: optional output parameter, a pointer to a `vector` that receives the input shapes in model input order. If `nullptr` is passed, this attribute is not obtained.
+  - `data_types`: optional output parameter, a pointer to a `vector` that receives the input data types in model input order. If `nullptr` is passed, this attribute is not obtained.
+  - `mem_sizes`: optional output parameter, a pointer to a `vector` that receives the input memory lengths (in bytes) in model input order. If `nullptr` is passed, this attribute is not obtained.
+
+- Returns
+
+ Status code.
+
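+A hedged query sketch (pass `nullptr` for any attribute that is not needed):
+
+```cpp
+std::vector<std::string> input_names;
+std::vector<size_t> input_mem_sizes;
+// Obtain only the names and memory sizes; skip shapes and data types.
+Status ret = model.GetInputsInfo(&input_names, nullptr, nullptr, &input_mem_sizes);
+```
+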
+#### GetOutputsInfo
+
+```cpp
+Status GetOutputsInfo(std::vector<std::string> *names, std::vector<std::vector<int64_t>> *shapes, std::vector<DataType> *data_types, std::vector<size_t> *mem_sizes) const;
+```
+
+Obtains the model output information.
+
+- Parameters
+
+  - `names`: optional output parameter, a pointer to a `vector` that receives the output names in model output order. If `nullptr` is passed, this attribute is not obtained.
+  - `shapes`: optional output parameter, a pointer to a `vector` that receives the output shapes in model output order. If `nullptr` is passed, this attribute is not obtained.
+  - `data_types`: optional output parameter, a pointer to a `vector` that receives the output data types in model output order. If `nullptr` is passed, this attribute is not obtained.
+  - `mem_sizes`: optional output parameter, a pointer to a `vector` that receives the output memory lengths (in bytes) in model output order. If `nullptr` is passed, this attribute is not obtained.
+
+- Returns
+
+ Status code.
+
+## Tensor
+
+\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/types.h)>
+
+### Constructor and Destructor
+
+```cpp
+Tensor();
+Tensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void *data, size_t data_len);
+~Tensor();
+```
+
+### Static Public Member Function
+
+#### GetTypeSize
+
+```cpp
+static int GetTypeSize(api::DataType type);
+```
+
+Obtains the memory length of a data type, in bytes.
+
+- Parameters
+
+ - `type`: data type.
+
+- Returns
+
+ Memory length, in bytes.
+
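+For example, a tensor's total payload size can be derived from its element type and element count, as in this hedged sketch (`t` is a `Tensor` as documented below):
+
+```cpp
+size_t elem_size = Tensor::GetTypeSize(t.DataType());
+size_t total_bytes = elem_size * t.ElementNum();  // should equal t.DataSize()
+```
+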
+### Public Member Functions
+
+#### Name
+
+```cpp
+const std::string &Name() const;
+```
+
+Obtains the name of a tensor.
+
+- Returns
+
+ Tensor name.
+
+#### DataType
+
+```cpp
+api::DataType DataType() const;
+```
+
+Obtains the data type of a tensor.
+
+- Returns
+
+ Tensor data type.
+
+#### Shape
+
+```cpp
+const std::vector<int64_t> &Shape() const;
+```
+
+Obtains the shape of a tensor.
+
+- Returns
+
+ Tensor shape.
+
+#### SetName
+
+```cpp
+void SetName(const std::string &name);
+```
+
+Sets the name of a tensor.
+
+- Parameters
+
+ - `name`: name to be set.
+
+#### SetDataType
+
+```cpp
+void SetDataType(api::DataType type);
+```
+
+Sets the data type of a tensor.
+
+- Parameters
+
+ - `type`: type to be set.
+
+#### SetShape
+
+```cpp
+void SetShape(const std::vector<int64_t> &shape);
+```
+
+Sets the shape of a tensor.
+
+- Parameters
+
+ - `shape`: shape to be set.
+
+#### Data
+
+```cpp
+const void *Data() const;
+```
+
+Obtains the constant pointer to the tensor data.
+
+- Returns
+
+ Constant pointer to the tensor data.
+
+#### MutableData
+
+```cpp
+void *MutableData();
+```
+
+Obtains the pointer to the tensor data.
+
+- Returns
+
+ Pointer to the tensor data.
+
+#### DataSize
+
+```cpp
+size_t DataSize() const;
+```
+
+Obtains the memory length (in bytes) of the tensor data.
+
+- Returns
+
+ Memory length of the tensor data, in bytes.
+
+#### ResizeData
+
+```cpp
+bool ResizeData(size_t data_len);
+```
+
+Adjusts the memory size of the tensor.
+
+- Parameters
+
+ - `data_len`: number of bytes in the memory after adjustment.
+
+- Returns
+
+  A boolean value indicating whether the operation succeeded.
+
+#### SetData
+
+```cpp
+bool SetData(const void *data, size_t data_len);
+```
+
+Sets the memory data of the tensor.
+
+- Parameters
+
+ - `data`: memory address of the source data.
+  - `data_len`: length of the source data, in bytes.
+
+- Returns
+
+  A boolean value indicating whether the operation succeeded.
+
+#### ElementNum
+
+```cpp
+int64_t ElementNum() const;
+```
+
+Obtains the number of elements in a tensor.
+
+- Returns
+
+ Number of elements in a tensor.
+
+#### Clone
+
+```cpp
+Tensor Clone() const;
+```
+
+Performs a deep copy of the tensor.
+
+- Returns
+
+  A deep copy of the tensor.
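+
+A hedged end-to-end sketch of the Tensor APIs above (the tensor name and values are illustrative, and the `DataType::kMsFloat32` enumerator name is an assumption):
+
+```cpp
+std::vector<float> values = {1.0f, 2.0f, 3.0f, 4.0f};
+// Build a named FP32 tensor of shape [2, 2] from an existing buffer.
+Tensor t("input_0", DataType::kMsFloat32, {2, 2}, values.data(), values.size() * sizeof(float));
+Tensor copy = t.Clone();  // deep copy with its own memory
+auto *data = static_cast<float *>(copy.MutableData());  // writable pointer to the copy's data
+```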
\ No newline at end of file
diff --git a/docs/api_cpp/source_en/index.rst b/docs/api_cpp/source_en/index.rst
index 779317bee1f0397ac1c5a78905b31b236f33d4f8..b5f76d3c78fd947026a99ea4a9ba8afd91355ed8 100644
--- a/docs/api_cpp/source_en/index.rst
+++ b/docs/api_cpp/source_en/index.rst
@@ -12,6 +12,7 @@ MindSpore C++ API
class_list
mindspore
+ api
dataset
vision
lite
diff --git a/docs/api_cpp/source_zh_cn/api.md b/docs/api_cpp/source_zh_cn/api.md
index 7c0af98ec52b620064f95b13b38d80d618a8fe5d..ed98393806abbb1699deb49af6f96b9ca2229973 100644
--- a/docs/api_cpp/source_zh_cn/api.md
+++ b/docs/api_cpp/source_zh_cn/api.md
@@ -179,7 +179,7 @@
 Status GetOutputsInfo(std::vector<std::string> *names, std::vector<std::vector<int64_t>> *shapes, std::vector<DataType> *data_types, std::vector<size_t> *mem_sizes) const;
```
-获取模型输入信息。
+获取模型输出信息。
- 参数
diff --git a/docs/api_java/source_en/index.rst b/docs/api_java/source_en/index.rst
index 935aa0a5d22565b2d51fc919a8f81c00d9702b02..1a531e3f3a89cec32b3882e59a74980552ef0864 100644
--- a/docs/api_java/source_en/index.rst
+++ b/docs/api_java/source_en/index.rst
@@ -14,4 +14,5 @@ MindSpore Java API
lite_session
model
msconfig
- mstensor
\ No newline at end of file
+ mstensor
+ lite_java_example
\ No newline at end of file
diff --git a/docs/api_java/source_en/lite_java_example.rst b/docs/api_java/source_en/lite_java_example.rst
new file mode 100644
index 0000000000000000000000000000000000000000..9cb08fa346e469ff755473891e7676904897e7ab
--- /dev/null
+++ b/docs/api_java/source_en/lite_java_example.rst
@@ -0,0 +1,7 @@
+Example
+========
+
+.. toctree::
+ :maxdepth: 1
+
+ Quick Start
\ No newline at end of file
diff --git a/docs/api_python/source_en/mindspore/mindspore.ops.rst b/docs/api_python/source_en/mindspore/mindspore.ops.rst
index 779103f524e5fba8fede87ba25f6bd58a6756850..7f0d5f5e61ef48d91bbaeddc3327c8a1288b5d7d 100644
--- a/docs/api_python/source_en/mindspore/mindspore.ops.rst
+++ b/docs/api_python/source_en/mindspore/mindspore.ops.rst
@@ -29,6 +29,7 @@ The composite operators are the pre-defined combination of operators.
mindspore.ops.normal
mindspore.ops.poisson
mindspore.ops.repeat_elements
+ mindspore.ops.sequence_mask
mindspore.ops.tensor_dot
mindspore.ops.uniform
diff --git a/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst
index 779103f524e5fba8fede87ba25f6bd58a6756850..7f0d5f5e61ef48d91bbaeddc3327c8a1288b5d7d 100644
--- a/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst
+++ b/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst
@@ -29,6 +29,7 @@ The composite operators are the pre-defined combination of operators.
mindspore.ops.normal
mindspore.ops.poisson
mindspore.ops.repeat_elements
+ mindspore.ops.sequence_mask
mindspore.ops.tensor_dot
mindspore.ops.uniform
diff --git a/docs/note/source_en/design/overall.rst b/docs/note/source_en/design/overall.rst
index bec96d2c15254cf9a888536a6cab4aff59ef9c00..5aeb51194e95a4155161c9c0475c7f23654863c2 100644
--- a/docs/note/source_en/design/overall.rst
+++ b/docs/note/source_en/design/overall.rst
@@ -4,5 +4,6 @@ Overall Design
.. toctree::
:maxdepth: 1
+ technical_white_paper
mindspore/architecture
mindspore/architecture_lite
diff --git a/docs/note/source_en/design/technical_white_paper.md b/docs/note/source_en/design/technical_white_paper.md
new file mode 100644
index 0000000000000000000000000000000000000000..41dec93c4019650d1a144c077a67862c33b7694d
--- /dev/null
+++ b/docs/note/source_en/design/technical_white_paper.md
@@ -0,0 +1,5 @@
+# Technical White Paper
+
+Please stay tuned...
+
+
diff --git a/docs/note/source_en/env_var_list.md b/docs/note/source_en/env_var_list.md
new file mode 100644
index 0000000000000000000000000000000000000000..e6cefcfc8475228d83b2a80ba342af90e2e921e1
--- /dev/null
+++ b/docs/note/source_en/env_var_list.md
@@ -0,0 +1,5 @@
+# Environment Variables List
+
+No English version is available right now. Contributions are welcome.
+
+
diff --git a/docs/note/source_en/index.rst b/docs/note/source_en/index.rst
index e3aa74528572fe3a544fd4f85dd3b04f5502852e..f49f2d63f9a44196cf6027532fbfced153506b38 100644
--- a/docs/note/source_en/index.rst
+++ b/docs/note/source_en/index.rst
@@ -25,7 +25,9 @@ MindSpore Design And Specification
benchmark
network_list
operator_list
+ syntax_list
model_lite
+ env_var_list
.. toctree::
:glob:
diff --git a/docs/note/source_en/network_list_ms.md b/docs/note/source_en/network_list_ms.md
index 95416313e766dae7ddaafd22da645f65861c3683..3a0e0e32dc4567ab97a9e0484aea0daaa5a02da2 100644
--- a/docs/note/source_en/network_list_ms.md
+++ b/docs/note/source_en/network_list_ms.md
@@ -26,21 +26,22 @@
|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing | Doing
| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing
| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [InceptionV4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv4/src/inceptionv4.py) | Supported | Doing | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Image Classification | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv1/src/mobilenet_v1.py) | Supported | Doing | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Image Classification | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing | Doing
| Computer Vision (CV) | Image Classification | [MobileNetV2(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing
| Computer Vision (CV) | Image Classification | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [Shufflenetv1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/shufflenetv1/src/shufflenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Image Classification | [NASNET](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing
| Computer Vision (CV) | Image Classification | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing
| Computer Vision (CV) | Image Classification | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ghostnet/src/ghostnet.py) | Doing | Doing | Supported | Supported | Doing | Doing
| Computer Vision (CV) | Image Classification | [ResNet50-0.65x](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/resnet50_adv_pruning/src/resnet_imgnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Image Classification | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Image Classification | [TinyNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/tinynet/src/tinynet.py) | Supported | Doing | Doing | Doing | Doing | Doing
- Computer Vision(CV) | Image Classification | [FaceAttributes](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision(CV) | Image Classification | [FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision(CV) | Image Classificationn | [FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Image Classification | [SqueezeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision(CV) | Image Classification | [FaceAttributes](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision(CV) | Image Classification | [FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision(CV) | Image Classification | [FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [SqueezeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Supported | Doing | Doing | Doing | Doing
|Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported | Supported | Supported | Supported | Supported
| Computer Vision (CV) | Object Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Supported | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing
@@ -51,28 +52,31 @@
| Computer Vision(CV) | Object Detection | [Retinaface-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing
| Computer Vision(CV) | Object Detection | [CenterFace](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing
| Computer Vision(CV) | Object Detection | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Object Detection | [MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Supported | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Object Detection | [SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Object Detection | [YoloV4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Text Detection | [PSENet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Text Recognition | [CNNCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing
| Computer Vision (CV) | Semantic Segmentation | [UNet2D-Medical](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Supported | Doing | Doing | Doing | Doing
- Computer Vision (CV) | Keypoint Detection | [Openpose](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Keypoint Detection | [Openpose](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Optical Character Recognition | [CRNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/crnn/src/crnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported | Supported
| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported| Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [TextCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/textcnn/src/textcnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported | Supported | Doing
| Recommender | Recommender System, Search, Ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Recommender | Recommender System | [NCF](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/ncf/src/ncf.py) | Supported | Doing | Supported | Doing | Doing | Doing
| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing
| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing
| Graph Neural Networks (GNN) | Recommender System | [BGCF](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing
| Audio | Auto Tagging | [FCN-4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| High Performance Computing | Molecular Dynamics | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| High Performance Computing | Ocean Model | [GOMO](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Doing | Doing | Doing
+| High Performance Computing | Molecular Dynamics | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| High Performance Computing | Ocean Model | [GOMO](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Supported | Doing | Doing
> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate classic network scripts.
diff --git a/docs/note/source_en/static_graph_syntax_support.md b/docs/note/source_en/static_graph_syntax_support.md
new file mode 100644
index 0000000000000000000000000000000000000000..47c6aa29c96a1352fe48688a9f66a72134e2800a
--- /dev/null
+++ b/docs/note/source_en/static_graph_syntax_support.md
@@ -0,0 +1,5 @@
+# Static Graph Syntax Support
+
+No English version is available right now. Contributions are welcome.
+
+
diff --git a/docs/note/source_en/syntax_list.rst b/docs/note/source_en/syntax_list.rst
new file mode 100644
index 0000000000000000000000000000000000000000..597c59c2b324118dffe760c9e087fd773f644493
--- /dev/null
+++ b/docs/note/source_en/syntax_list.rst
@@ -0,0 +1,7 @@
+Syntax Support
+================
+
+.. toctree::
+ :maxdepth: 1
+
+ static_graph_syntax_support
\ No newline at end of file
diff --git a/docs/note/source_zh_cn/network_list_ms.md b/docs/note/source_zh_cn/network_list_ms.md
index 67d9e26b7a4b1c0987703d087c4b5c03d7cf213a..eb640e625ad16ec6ad122bb2fdee56301d619f8e 100644
--- a/docs/note/source_zh_cn/network_list_ms.md
+++ b/docs/note/source_zh_cn/network_list_ms.md
@@ -26,21 +26,22 @@
|计算机视觉(CV) | 图像分类(Image Classification) | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv4/src/inceptionv4.py) | Supported | Doing | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv1/src/mobilenet_v1.py) | Supported | Doing | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) | [Shufflenetv1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/shufflenetv1/src/shufflenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [NASNET](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) | [GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ghostnet/src/ghostnet.py) | Doing | Doing | Supported | Supported | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet50-0.65x](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/resnet50_adv_pruning/src/resnet_imgnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 图像分类(Image Classification) | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) | [TinyNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/tinynet/src/tinynet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceAttributes](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) |[SqueezeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceAttributes](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) |[SqueezeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Supported | Doing | Doing | Doing | Doing
|计算机视觉(CV) | 目标检测(Object Detection) | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported |Supported |Supported | Supported | Supported
| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Supported | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing
@@ -51,28 +52,31 @@
| 计算机视觉(CV) | 目标检测(Object Detection) | [Retinaface-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing
| 计算机视觉(CV) | 目标检测(Object Detection) | [CenterFace](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 目标检测(Object Detection) | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) |[MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 目标检测(Object Detection) |[MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Supported | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 目标检测(Object Detection) |[SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 目标检测(Object Detection) |[YoloV4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing
| 计算机视觉 (CV) | 文本检测 (Text Detection) | [PSENet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
| 计算机视觉 (CV) | 文本识别 (Text Recognition) | [CNNCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing
| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [UNet2D-Medical](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 语义分割(Semantic Segmentation) |[Openpose](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 关键点检测(Keypoint Detection) |[Openpose](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 光学字符识别(Optical Character Recognition) |[CRNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/crnn/src/crnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported | Supported
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported| Doing | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TextCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/textcnn/src/textcnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported | Supported | Doing
| 推荐(Recommender) | 推荐系统、搜索、排序(Recommender System, Search, Ranking) | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| 推荐(Recommender) | 推荐系统(Recommender System) | [NCF](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/ncf/src/ncf.py) | Supported | Doing | Supported | Doing | Doing | Doing
| 图神经网络(GNN) | 文本分类(Text Classification) | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing
| 图神经网络(GNN) | 文本分类(Text Classification) | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing
| 图神经网络(GNN) | 推荐系统(Recommender System) | [BGCF](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing
|语音(Audio) | 音频标注(Audio Tagging) | [FCN-4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Supported | Doing | Doing | Doing | Doing
-|高性能计算(HPC) | 分子动力学(Molecular Dynamics) | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Doing | Doing | Doing | Doing | Doing
-|高性能计算(HPC) | 海洋模型(Ocean Model) | [GOMO](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Doing | Doing | Doing
+|高性能计算(HPC) | 分子动力学(Molecular Dynamics) | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Supported | Doing | Doing | Doing | Doing
+|高性能计算(HPC) | 海洋模型(Ocean Model) | [GOMO](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Supported | Doing | Doing
> 你也可以使用 [MindWizard工具](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) 快速生成经典网络脚本。
diff --git a/docs/programming_guide/source_en/cache.md b/docs/programming_guide/source_en/cache.md
new file mode 100644
index 0000000000000000000000000000000000000000..71473379768d27be1f09b9346d59a0aecceeaf94
--- /dev/null
+++ b/docs/programming_guide/source_en/cache.md
@@ -0,0 +1,5 @@
+# Single Node Data Cache
+
+No English version is available right now. Contributions are welcome.
+
+
diff --git a/docs/programming_guide/source_en/context.md b/docs/programming_guide/source_en/context.md
index 709e994e1e140c79786e5e2d62d14c7757f8e92b..20bd3e4633f479ace0e27a7abdddbe18ac85ecfd 100644
--- a/docs/programming_guide/source_en/context.md
+++ b/docs/programming_guide/source_en/context.md
@@ -118,13 +118,25 @@ The system can collect profiling data during training and use the profiling tool
- `enable_profiling`: indicates whether to enable the profiling function. If this parameter is set to True, the profiling function is enabled, and profiling options are read from enable_options. If this parameter is set to False, the profiling function is disabled and only training_trace is collected.
-- `profiling_options`: profiling collection options. The values are as follows. Multiple data items can be collected. training_trace: collects step trace data, that is, software information about training tasks and AI software stacks, to analyze the performance of training tasks. It focuses on data argumentation, forward and backward computation, and gradient aggregation update. task_trace: collects task trace data, that is, hardware information of the Ascend 910 processor HWTS/AICore and analysis of task start and end information. op_trace: collects performance data of a single operator. Format: ['op_trace','task_trace','training_trace']
+- `profiling_options`: profiling collection options. Multiple data items can be collected. The options are as follows:
+    result_path: path for saving the profiling results. The directory specified by this parameter must be created in advance in the training environment (container or host side), and the running user configured during installation must have read and write permissions on it. Both absolute and relative paths (relative to the current path when the command is executed) are supported. An absolute path starts with '/', for example, /home/data/output. A relative path starts directly with the directory name, for example, output;
+    training_trace: collects step trace data, that is, software information about training tasks and the AI software stack, to analyze the performance of training tasks. It focuses on data augmentation, forward and backward computation, and gradient aggregation update. The value is on or off;
+    task_trace: collects task trace data, that is, hardware information of the Ascend 910 processor HWTS/AICore, and analyzes task start and end information. The value is on or off;
+    aicpu_trace: collects profiling data of AI CPU data enhancement. The value is on or off;
+    fp_point: specifies the start position of the forward operators in the step trace, which is used to record the start timestamp of forward computation. The value is the name of the first forward operator. If the value is empty, the system automatically obtains the forward operator name. This option needs to be configured when training_trace is on;
+    bp_point: specifies the end position of the backward operators in the step trace, which is used to record the end timestamp of backward computation. The value is the name of the last backward operator. If the value is empty, the system automatically obtains the backward operator name. This option needs to be configured when training_trace is on;
+    ai_core_metrics: the values are as follows:
+    - ArithmeticUtilization: percentage statistics of various computation metrics;
+    - PipeUtilization: time consumption ratio of the computation units and transfer units. This is the default value;
+    - Memory: percentage of external memory read and write instructions;
+    - MemoryL0: percentage of internal memory read and write instructions;
+    - ResourceConflictRatio: proportion of pipeline queue instructions.
A code example is as follows:
```python
from mindspore import context
-context.set_context(enable_profiling=True, profiling_options="training_trace")
+context.set_context(enable_profiling=True, profiling_options='{"result_path":"/home/data/output","training_trace":"on"}')
```
### Saving MindIR
diff --git a/docs/programming_guide/source_en/data_pipeline.rst b/docs/programming_guide/source_en/data_pipeline.rst
index 75d7846d2d8692dc3031b80737d5daaee0c487d4..0e52d9ddf432e0ea22730d34e8ccf448f617c014 100644
--- a/docs/programming_guide/source_en/data_pipeline.rst
+++ b/docs/programming_guide/source_en/data_pipeline.rst
@@ -11,3 +11,4 @@ Data Pipeline
tokenizer
dataset_conversion
auto_augmentation
+ cache
diff --git a/docs/programming_guide/source_en/probability.md b/docs/programming_guide/source_en/probability.md
index 56aa7ea8333d8896d3f5a1740b304123ccf68ac7..f79546780c6f6e6fdd0a904c8f4c9a92aaadb8a6 100644
--- a/docs/programming_guide/source_en/probability.md
+++ b/docs/programming_guide/source_en/probability.md
@@ -361,23 +361,28 @@ mean_b = Tensor(1.0, dtype=mstype.float32)
sd_b = Tensor(2.0, dtype=mstype.float32)
kl = my_normal.kl_loss('Normal', mean_b, sd_b)
+# get the distribution args as a tuple
+dist_arg = my_normal.get_dist_args()
+
print("mean: ", mean)
print("var: ", var)
print("entropy: ", entropy)
print("prob: ", prob)
print("cdf: ", cdf)
print("kl: ", kl)
+print("dist_arg: ", dist_arg)
```
The output is as follows:
```python
-mean: 0.0
-var: 1.0
-entropy: 1.4189385
-prob: [0.35206532, 0.3989423, 0.35206532]
-cdf: [0.3085482, 0.5, 0.6914518]
-kl: 0.44314718
+mean: 0.0
+var: 1.0
+entropy: 1.4189385
+prob: [0.35206532 0.3989423 0.35206532]
+cdf: [0.30853754 0.5 0.69146246]
+kl: 0.44314718
+dist_arg: (Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1))
```
### Probability Distribution Class Application in Graph Mode
@@ -463,7 +468,7 @@ tx = Tensor(x, dtype=dtype.float32)
cdf = LogNormal.cdf(tx)
# generate samples from the distribution
-shape = ((3, 2))
+shape = (3, 2)
sample = LogNormal.sample(shape)
# get information of the distribution
@@ -473,26 +478,24 @@ print("underlying distribution:\n", LogNormal.distribution)
print("bijector:\n", LogNormal.bijector)
# get the computation results
print("cdf:\n", cdf)
-print("sample:\n", sample)
+print("sample shape:\n", sample.shape)
```
The output is as follows:
```python
TransformedDistribution<
- (_bijector): Exp
- (_distribution): Normal
- >
+ (_bijector): Exp
+ (_distribution): Normal
+ >
underlying distribution:
- Normal
+ Normal
bijector:
- Exp
+ Exp
cdf:
- [0.7558914 0.9462397 0.9893489]
-sample:
- [[ 3.451917 0.645654 ]
- [ 0.86533326 1.2023963 ]
- [ 2.3343778 11.053896 ]]
+ [0.7558914 0.9462397 0.9893489]
+sample shape:
+(3, 2)
```
When the `TransformedDistribution` is constructed to map the transformed `is_constant_jacobian = true` (for example, `ScalarAffine`), the constructed `TransformedDistribution` instance can use the `mean` API to calculate the average value. For example:
@@ -544,15 +547,14 @@ x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32)
tx = Tensor(x, dtype=dtype.float32)
cdf, sample = net(tx)
print("cdf: ", cdf)
-print("sample: ", sample)
+print("sample shape: ", sample.shape)
```
The output is as follows:
```python
cdf: [0.7558914 0.86403143 0.9171715 0.9462397 ]
-sample: [[0.5361498 0.26627186 2.766659 ]
- [1.5831033 0.4096472 2.008679 ]]
+sample shape: (2, 3)
```
## Probability Distribution Mapping
@@ -694,11 +696,11 @@ print("inverse_log_jacobian: ", inverse_log_jaco)
The output is as follows:
```python
-PowerTransform
-forward: [2.23606801e+00, 2.64575124e+00, 3.00000000e+00, 3.31662488e+00]
-inverse: [1.50000000e+00, 4.00000048e+00, 7.50000000e+00, 1.20000010e+01]
-forward_log_jacobian: [-8.04718971e-01, -9.72955048e-01, -1.09861231e+00, -1.19894767e+00]
-inverse_log_jacobian: [6.93147182e-01 1.09861231e+00 1.38629436e+00 1.60943794e+00]
+PowerTransform
+forward: [2.236068 2.6457515 3. 3.3166249]
+inverse: [ 1.5 4. 7.5 12.000001]
+forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477]
+inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ]
```
### Invoking a Bijector Instance in Graph Mode
@@ -740,10 +742,10 @@ print("inverse_log_jaco: ", inverse_log_jaco)
The output is as follows:
```python
-forward: [2.236068 2.6457512 3. 3.3166249]
-inverse: [ 1.5 4.0000005 7.5 12.000001 ]
-forward_log_jaco: [-0.804719 -0.97295505 -1.0986123 -1.1989477 ]
-inverse_log_jaco: [0.6931472 1.0986123 1.3862944 1.609438 ]
+forward: [2.236068 2.6457515 3. 3.3166249]
+inverse: [ 1.5 4. 7.5 12.000001]
+forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477]
+inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ]
```
## Deep Probabilistic Network
diff --git a/docs/programming_guide/source_zh_cn/context.md b/docs/programming_guide/source_zh_cn/context.md
index 40723c9363079be090f52bad4e4ced5e2e7130e9..f694038a60b12d0e2e6386d2c9e13b23c67820f9 100644
--- a/docs/programming_guide/source_zh_cn/context.md
+++ b/docs/programming_guide/source_zh_cn/context.md
@@ -122,13 +122,25 @@ context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, grad
- `enable_profiling`:是否开启profiling功能。设置为True,表示开启profiling功能,从enable_options读取profiling的采集选项;设置为False,表示关闭profiling功能,仅采集training_trace。
-- `profiling_options`:profiling采集选项,取值如下,支持采集多项数据。training_trace:采集迭代轨迹数据,即训练任务及AI软件栈的软件信息,实现对训练任务的性能分析,重点关注数据增强、前后向计算、梯度聚合更新等相关数据;task_trace:采集任务轨迹数据,即昇腾910处理器HWTS/AICore的硬件信息,分析任务开始、结束等信息;op_trace:采集单算子性能数据。
+- `profiling_options`:profiling采集选项,取值如下,支持采集多项数据。
+ result_path: Profiling采集结果文件保存路径。该参数指定的目录需要在启动训练的环境上(容器或Host侧)提前创建且确保安装时配置的运行用户具有读写权限,支持配置绝对路径或相对路径(相对执行命令时的当前路径);
+ training_trace:采集迭代轨迹数据,即训练任务及AI软件栈的软件信息,实现对训练任务的性能分析,重点关注数据增强、前后向计算、梯度聚合更新等相关数据,取值on/off。
+ task_trace:采集任务轨迹数据,即昇腾910处理器HWTS/AICore的硬件信息,分析任务开始、结束等信息,取值on/off;
+ aicpu_trace: 采集aicpu数据增强的profiling数据。取值on/off;
+ fp_point: training_trace为on时需要配置。指定训练网络迭代轨迹正向算子的开始位置,用于记录前向算子开始时间戳。配置值为指定的正向第一个算子名字。当该值为空时,系统自动获取正向第一个算子名字;
+ bp_point: training_trace为on时需要配置。指定训练网络迭代轨迹反向算子的结束位置,用于记录反向算子结束时间戳。配置值为指定的反向最后一个算子名字。当该值为空时,系统自动获取反向最后一个算子名字;
+ ai_core_metrics: 取值如下:
+ - ArithmeticUtilization: 各种计算类指标占比统计。
+ - PipeUtilization: 计算单元和搬运单元耗时占比,该项为默认值。
+ - Memory: 外部内存读写类指令占比。
+ - MemoryL0: 内部内存读写类指令占比。
+ - ResourceConflictRatio: 流水线队列类指令占比。
代码样例如下:
```python
from mindspore import context
-context.set_context(enable_profiling=True, profiling_options="training_trace")
+context.set_context(enable_profiling=True, profiling_options='{"result_path":"/home/data/output","training_trace":"on"}')
```
### 保存MindIR
diff --git a/docs/programming_guide/source_zh_cn/probability.md b/docs/programming_guide/source_zh_cn/probability.md
index ea6cd8e22217e580648faa1be465ab41cd1c9e20..dafa7d36343456d38b09d822b4bcd1b57934a4c9 100644
--- a/docs/programming_guide/source_zh_cn/probability.md
+++ b/docs/programming_guide/source_zh_cn/probability.md
@@ -361,23 +361,28 @@ mean_b = Tensor(1.0, dtype=mstype.float32)
sd_b = Tensor(2.0, dtype=mstype.float32)
kl = my_normal.kl_loss('Normal', mean_b, sd_b)
+# get the distribution args as a tuple
+dist_arg = my_normal.get_dist_args()
+
print("mean: ", mean)
print("var: ", var)
print("entropy: ", entropy)
print("prob: ", prob)
print("cdf: ", cdf)
print("kl: ", kl)
+print("dist_arg: ", dist_arg)
```
输出为:
```text
-mean: 0.0
-var: 1.0
-entropy: 1.4189385
-prob: [0.35206532, 0.3989423, 0.35206532]
-cdf: [0.3085482, 0.5, 0.6914518]
-kl: 0.44314718
+mean: 0.0
+var: 1.0
+entropy: 1.4189385
+prob: [0.35206532 0.3989423 0.35206532]
+cdf: [0.30853754 0.5 0.69146246]
+kl: 0.44314718
+dist_arg: (Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1))
```
### 概率分布类在图模式下的应用
@@ -465,7 +470,7 @@ tx = Tensor(x, dtype=dtype.float32)
cdf = LogNormal.cdf(tx)
# generate samples from the distribution
-shape = ((3, 2))
+shape = (3, 2)
sample = LogNormal.sample(shape)
# get information of the distribution
@@ -475,26 +480,24 @@ print("underlying distribution:\n", LogNormal.distribution)
print("bijector:\n", LogNormal.bijector)
# get the computation results
print("cdf:\n", cdf)
-print("sample:\n", sample)
+print("sample shape:\n", sample.shape)
```
输出为:
```text
TransformedDistribution<
- (_bijector): Exp
- (_distribution): Normal
- >
+ (_bijector): Exp
+ (_distribution): Normal
+ >
underlying distribution:
-Normal
-bijector
-Exp
+ Normal
+bijector:
+ Exp
cdf:
-[7.55891383e-01, 9.46239710e-01, 9.89348888e-01]
-sample:
-[[7.64315844e-01, 3.01435232e-01],
- [1.17166102e+00, 2.60277224e+00],
- [7.02699006e-01, 3.91564220e-01]]
+ [0.7558914 0.9462397 0.9893489]
+sample shape:
+(3, 2)
```
当构造 `TransformedDistribution` 映射变换的 `is_constant_jacobian = true` 时(如 `ScalarAffine`),构造的 `TransformedDistribution` 实例可以使用直接使用 `mean` 接口计算均值,例如:
@@ -546,15 +549,14 @@ x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32)
tx = Tensor(x, dtype=dtype.float32)
cdf, sample = net(tx)
print("cdf: ", cdf)
-print("sample: ", sample)
+print("sample shape: ", sample.shape)
```
输出为:
```text
cdf: [0.7558914 0.86403143 0.9171715 0.9462397 ]
-sample: [[0.5361498 0.26627186 2.766659 ]
- [1.5831033 0.4096472 2.008679 ]]
+sample shape: (2, 3)
```
## 概率分布映射
@@ -695,11 +697,11 @@ print("inverse_log_jacobian: ", inverse_log_jaco)
输出:
```text
-PowerTransform
-forward: [2.23606801e+00, 2.64575124e+00, 3.00000000e+00, 3.31662488e+00]
-inverse: [1.50000000e+00, 4.00000048e+00, 7.50000000e+00, 1.20000010e+01]
-forward_log_jacobian: [-8.04718971e-01, -9.72955048e-01, -1.09861231e+00, -1.19894767e+00]
-inverse_log_jacobian: [6.93147182e-01 1.09861231e+00 1.38629436e+00 1.60943794e+00]
+PowerTransform
+forward: [2.236068 2.6457515 3. 3.3166249]
+inverse: [ 1.5 4. 7.5 12.000001]
+forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477]
+inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ]
```
### Invoking a Bijector Instance in Graph Mode
@@ -741,10 +743,10 @@ print("inverse_log_jacobian: ", inverse_log_jaco)
The output is:
```text
-forward: [2.236068 2.6457515 3. 3.3166249]
-inverse: [ 1.5 4. 7.5 12.000001]
-forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477]
-inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ]
+forward: [2.236068 2.6457515 3. 3.3166249]
+inverse: [ 1.5 4. 7.5 12.000001]
+forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477]
+inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ]
```
## Deep Probabilistic Networks
diff --git a/install/mindspore_ascend310_install_pip.md b/install/mindspore_ascend310_install_pip.md
index 31b43e5a4adba665e47ceb3472b3abad3126fc12..96d77a6d6a3b58060d935d2de213b005e7a1cb4b 100644
--- a/install/mindspore_ascend310_install_pip.md
+++ b/install/mindspore_ascend310_install_pip.md
@@ -43,7 +43,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
- `{system}` specifies the system version. For example, if EulerOS ARM architecture is used, set `{system}` to `euleros_aarch64`. Currently, the Ascend 310 version supports the following systems: `euleros_aarch64`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`.
diff --git a/install/mindspore_ascend310_install_pip_en.md b/install/mindspore_ascend310_install_pip_en.md
index 340b98c4c12345e8591ad23e8af0046c8a36a891..ee759e8e06103b39bcbbc137c5e8efa061431948 100644
--- a/install/mindspore_ascend310_install_pip_en.md
+++ b/install/mindspore_ascend310_install_pip_en.md
@@ -1 +1,121 @@
-# Installing MindSpore in Ascend 310 by pip
+# Installing MindSpore in Ascend 310 by pip
+
+
+
+- [Installing MindSpore in Ascend 310 by pip](#installing-mindspore-in-ascend-310-by-pip)
+ - [Checking System Environment Information](#checking-system-environment-information)
+ - [Installing MindSpore](#installing-mindspore)
+ - [Configuring Environment Variables](#configuring-environment-variables)
+ - [Verifying the Installation](#verifying-the-installation)
+ - [Installing MindSpore Serving](#installing-mindspore-serving)
+
+
+
+
+
+The following describes how to quickly install MindSpore by pip on Linux in the Ascend 310 environment.
+
+## Checking System Environment Information
+
+- Ensure that 64-bit Ubuntu 18.04, CentOS 7.6, or EulerOS 2.8 is installed.
+- Ensure that [GCC 7.3.0](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz) is installed.
+- Ensure that [GMP 6.1.2](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz) is installed.
+- Ensure that [CMake 3.18.3 or later](https://cmake.org/download/) is installed.
+ - After installation, add the path of CMake to the system environment variables.
+- Ensure that Python 3.7.5 is installed.
+ - If Python 3.7.5 (64-bit) is not installed, download it from the [Python official website](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) or [HUAWEI CLOUD](https://mirrors.huaweicloud.com/python/3.7.5/Python-3.7.5.tgz) and install it.
+- Ensure that the Ascend 310 AI Processor software packages (Atlas Data Center Solution V100R020C10: [A300-3000 1.0.7.SPC103 (aarch64)](https://support.huawei.com/enterprise/en/ascend-computing/a300-3000-pid-250702915/software/251999079?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702915), [A300-3010 1.0.7.SPC103 (x86_64)](https://support.huawei.com/enterprise/en/ascend-computing/a300-3010-pid-251560253/software/251894987?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251560253), [CANN V100R020C10](https://support.huawei.com/enterprise/en/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373)) are installed.
+ - Ensure that you have permissions to access the installation path `/usr/local/Ascend` of the Ascend 310 AI Processor software package. If not, ask the user root to add you to a user group to which `/usr/local/Ascend` belongs. For details about the configuration, see the description document in the software package.
+ - Ensure that the Ascend 310 AI Processor software package that matches GCC 7.3 is installed.
+ - Install the .whl package provided with the Ascend 310 AI Processor software package. The .whl package is released with the software package. After the software package is upgraded, you need to reinstall the .whl package.
+
+ ```bash
+ pip install /usr/local/Ascend/atc/lib64/topi-{version}-py3-none-any.whl
+ pip install /usr/local/Ascend/atc/lib64/te-{version}-py3-none-any.whl
+ ```
+
+## Installing MindSpore
+
+```bash
+pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/ascend/{system}/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
+```
+
+In the preceding information:
+
+- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the .whl package installation. For details about dependencies, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). In other cases, install the dependencies by yourself.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
+- `{arch}` specifies the system architecture. For example, if a Linux OS architecture is x86_64, set `{arch}` to `x86_64`. If the system architecture is ARM64, set `{arch}` to `aarch64`.
+- `{system}` specifies the system version. For example, if EulerOS ARM64 is used, set `{system}` to `euleros_aarch64`. Currently, Ascend 310 supports the following systems: `euleros_aarch64`, `centos_aarch64`, `centos_x86`, `ubuntu_aarch64`, and `ubuntu_x86`.
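+
+For example, assuming MindSpore 1.1.0 on 64-bit Ubuntu x86, the command with the placeholders filled in would be:
+
+```bash
+pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.1.0/MindSpore/ascend/ubuntu_x86/mindspore_ascend-1.1.0-cp37-cp37m-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
+```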
+
+## Configuring Environment Variables
+
+After MindSpore is installed, export runtime environment variables. In the following command, `/usr/local/Ascend` in `LOCAL_ASCEND=/usr/local/Ascend` indicates the installation path of the software package. Change it to the actual installation path.
+
+```bash
+# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
+export GLOG_v=2
+
+# Conda environmental options
+LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
+# lib libraries that the run package depends on
+export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
+
+# lib libraries that the mindspore depends on
+export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH}
+
+# Environment variables that must be configured
+export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path
+export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+```
+
+## Verifying the Installation
+
+Create a directory to store the sample code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample`. You can obtain the code from the [official website](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/sample_resources/ascend310_single_op_sample.zip). A simple example of adding `[1, 2, 3, 4]` to `[2, 3, 4, 5]` is used, and the code project directory structure is as follows:
+
+```text
+
+└─ascend310_single_op_sample
+ ├── CMakeLists.txt // Build script
+ ├── README.md // Usage description
+ ├── main.cc // Main function
+ └── tensor_add.mindir // MindIR model file
+```
+
+Go to the directory of the sample project and change the path based on the actual requirements.
+
+```bash
+cd /home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample
+```
+
+Build a project by referring to `README.md`.
+
+```bash
+cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`
+make
+```
+
+After the build is successful, run the sample.
+
+```bash
+./tensor_add_sample
+```
+
+The following information is displayed:
+
+```text
+3
+5
+7
+9
+```
+
+The preceding information indicates that MindSpore is successfully installed.
+
+## Installing MindSpore Serving
+
+If you want to quickly experience the MindSpore online inference service, you can install MindSpore Serving.
+
+For details, see [MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README.md).
diff --git a/install/mindspore_ascend310_install_source.md b/install/mindspore_ascend310_install_source.md
index 1eef13b06700ddd4e4d36f254fbb6d1adb82a450..f3cb418e881abcb992ed6364c2f4218654a4f6e6 100644
--- a/install/mindspore_ascend310_install_source.md
+++ b/install/mindspore_ascend310_install_source.md
@@ -76,7 +76,7 @@ pip install output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl -i htt
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
## Configuring Environment Variables
diff --git a/install/mindspore_ascend310_install_source_en.md b/install/mindspore_ascend310_install_source_en.md
index 4827c91e89727c6d3cc3d430ecf786c06fc0fb1a..316cf1f3695bad4af5b7dabaa7048c593539496a 100644
--- a/install/mindspore_ascend310_install_source_en.md
+++ b/install/mindspore_ascend310_install_source_en.md
@@ -1 +1,153 @@
-# Installing MindSpore in Ascend 310 by Source Code
+# Installing MindSpore in Ascend 310 by Source Code Compilation
+
+
+
+- [Installing MindSpore in Ascend 310 by Source Code Compilation](#installing-mindspore-in-ascend-310-by-source-code-compilation)
+ - [Checking System Environment Information](#checking-system-environment-information)
+ - [Downloading Source Code from the Code Repository](#downloading-source-code-from-the-code-repository)
+ - [Building MindSpore](#building-mindspore)
+ - [Installing MindSpore](#installing-mindspore)
+ - [Configuring Environment Variables](#configuring-environment-variables)
+ - [Verifying the Installation](#verifying-the-installation)
+ - [Installing MindSpore Serving](#installing-mindspore-serving)
+
+
+
+
+
+The following describes how to quickly install MindSpore by compiling the source code on Linux in the Ascend 310 environment.
+
+## Checking System Environment Information
+
+- Ensure that 64-bit Ubuntu 18.04, CentOS 7.6, or EulerOS 2.8 is installed.
+- Ensure that [GCC 7.3.0](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz) is installed.
+- Ensure that [GMP 6.1.2](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz) is installed.
+- Ensure that [Python 3.7.5](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) is installed.
+- Ensure that [OpenSSL 1.1.1 or later](https://github.com/openssl/openssl.git) is installed.
+ - After installation, set the environment variable `export OPENSSL_ROOT_DIR="OpenSSL installation directory"`.
+- Ensure that [CMake 3.18.3 or later](https://cmake.org/download/) is installed.
+ - After installation, add the path of CMake to the system environment variables.
+- Ensure that [patch 2.5 or later](http://ftp.gnu.org/gnu/patch/) is installed.
+ - After installation, add the patch path to the system environment variables.
+- Ensure that [wheel 0.32.0 or later](https://pypi.org/project/wheel/) is installed.
+- Ensure that the Ascend 310 AI Processor software packages (Atlas Data Center Solution V100R020C10: [A300-3000 1.0.7.SPC103 (aarch64)](https://support.huawei.com/enterprise/en/ascend-computing/a300-3000-pid-250702915/software/251999079?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702915), [A300-3010 1.0.7.SPC103 (x86_64)](https://support.huawei.com/enterprise/en/ascend-computing/a300-3010-pid-251560253/software/251894987?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251560253), [CANN V100R020C10](https://support.huawei.com/enterprise/en/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373)) are installed.
+ - Ensure that you have permissions to access the installation path `/usr/local/Ascend` of the Ascend 310 AI Processor software package. If not, ask the user root to add you to a user group to which `/usr/local/Ascend` belongs. For details about the configuration, see the description document in the software package.
+ - Ensure that the Ascend 310 AI Processor software package that matches GCC 7.3 is installed.
+ - Install the .whl package provided with the Ascend 310 AI Processor software package. The .whl package is released with the software package. After the software package is upgraded, you need to reinstall the .whl package.
+
+ ```bash
+ pip install /usr/local/Ascend/atc/lib64/topi-{version}-py3-none-any.whl
+ pip install /usr/local/Ascend/atc/lib64/te-{version}-py3-none-any.whl
+ ```
+
+- Ensure that the git tool is installed.
+ If not, run the following command to download and install it:
+
+ ```bash
+ apt-get install git # ubuntu and so on
+ yum install git # centos and so on
+ ```
+
+## Downloading Source Code from the Code Repository
+
+```bash
+git clone https://gitee.com/mindspore/mindspore.git
+```
+
+## Building MindSpore
+
+Run the following command in the root directory of the source code.
+
+```bash
+bash build.sh -e ascend -V 310
+```
+
+In the preceding information:
+
+The default number of build threads is 8 in `build.sh`. If the compiler performance is poor, build errors may occur. You can add `-j{number of threads}` to the script to reduce the number of threads. For example, `bash build.sh -e ascend -V 310 -j4`.
+
+## Installing MindSpore
+
+```bash
+chmod +x output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl
+pip install output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl -i https://pypi.tuna.tsinghua.edu.cn/simple
+```
+
+In the preceding information:
+
+- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the .whl package installation. For details about dependencies, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). In other cases, install the dependencies by yourself.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
+- `{arch}` specifies the system architecture. For example, if a Linux OS architecture is x86_64, set `{arch}` to `x86_64`. If the system architecture is ARM64, set `{arch}` to `aarch64`.
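+
+For example, assuming a 1.1.0 build on x86_64, the commands with the placeholders filled in would be:
+
+```bash
+chmod +x output/mindspore-ascend-1.1.0-cp37-cp37m-linux_x86_64.whl
+pip install output/mindspore-ascend-1.1.0-cp37-cp37m-linux_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple
+```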
+
+## Configuring Environment Variables
+
+After MindSpore is installed, export runtime environment variables. In the following command, `/usr/local/Ascend` in `LOCAL_ASCEND=/usr/local/Ascend` indicates the installation path of the software package. Change it to the actual installation path.
+
+```bash
+# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
+export GLOG_v=2
+
+# Conda environmental options
+LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
+# lib libraries that the run package depends on
+export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
+
+# lib libraries that the mindspore depends on
+export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH}
+
+# Environment variables that must be configured
+export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path
+export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+```
+
+## Verifying the Installation
+
+Create a directory to store the sample code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample`. You can obtain the code from the [official website](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/sample_resources/ascend310_single_op_sample.zip). A simple example of adding `[1, 2, 3, 4]` to `[2, 3, 4, 5]` is used, and the code project directory structure is as follows:
+
+```text
+
+└─ascend310_single_op_sample
+ ├── CMakeLists.txt // Build script
+ ├── README.md // Usage description
+ ├── main.cc // Main function
+ └── tensor_add.mindir // MindIR model file
+```
+
+Go to the directory of the sample project and change the path based on the actual requirements.
+
+```bash
+cd /home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample
+```
+
+Build a project by referring to `README.md`.
+
+```bash
+cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`
+make
+```
+
+After the build is successful, run the sample.
+
+```bash
+./tensor_add_sample
+```
+
+The following information is displayed:
+
+```text
+3
+5
+7
+9
+```
+
+The preceding information indicates that MindSpore is successfully installed.
+
+## Installing MindSpore Serving
+
+If you want to quickly experience the MindSpore online inference service, you can install MindSpore Serving.
+
+For details, see [MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README.md).
diff --git a/install/mindspore_ascend_install_conda.md b/install/mindspore_ascend_install_conda.md
index 9a51ce0891cb3ffc7395657bb12d96f819549107..6cce0a62dec163f98b87f6c108bf25fad077d4fc 100644
--- a/install/mindspore_ascend_install_conda.md
+++ b/install/mindspore_ascend_install_conda.md
@@ -71,7 +71,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
- `{system}` specifies the system. For example, if EulerOS ARM architecture is used, set `{system}` to `euleros_aarch64`. Currently, the following systems are supported: `euleros_aarch64`/`euleros_x86`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`.
diff --git a/install/mindspore_ascend_install_pip.md b/install/mindspore_ascend_install_pip.md
index bab51e4049fcfcfd926b85970e3ed9e01d102752..61b77dc40244ebd2748285708cfe44d5b841edf0 100644
--- a/install/mindspore_ascend_install_pip.md
+++ b/install/mindspore_ascend_install_pip.md
@@ -45,7 +45,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
- `{system}` specifies the system version. For example, if EulerOS ARM architecture is used, set `{system}` to `euleros_aarch64`. Currently, the Ascend version supports the following systems: `euleros_aarch64`/`euleros_x86`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`.
diff --git a/install/mindspore_ascend_install_pip_en.md b/install/mindspore_ascend_install_pip_en.md
index 60d48e276d24dab98436b75084a740bbdec1e900..65e78e316cabac39c2bcc5fffbec7316c6c4edfe 100644
--- a/install/mindspore_ascend_install_pip_en.md
+++ b/install/mindspore_ascend_install_pip_en.md
@@ -45,7 +45,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`.
- `{system}` denotes the system version. For example, if you are using EulerOS ARM architecture, `{system}` should be `euleros_aarch64`. Currently, the following systems are supported by Ascend: `euleros_aarch64`/`euleros_x86`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`.
diff --git a/install/mindspore_ascend_install_source.md b/install/mindspore_ascend_install_source.md
index 721e2ea9e71a91ecda3d962a6c7447b9490cede0..2002c68cdf95ef109983d857f1fd39b186232372 100644
--- a/install/mindspore_ascend_install_source.md
+++ b/install/mindspore_ascend_install_source.md
@@ -98,7 +98,7 @@ pip install build/package/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
## Configuring Environment Variables
diff --git a/install/mindspore_ascend_install_source_en.md b/install/mindspore_ascend_install_source_en.md
index 7135e8f030b7cb562c0c42be061a1746cef504e8..2505df2eec37373c8e23c4296b51906beecaf22b 100644
--- a/install/mindspore_ascend_install_source_en.md
+++ b/install/mindspore_ascend_install_source_en.md
@@ -100,7 +100,7 @@ pip install build/package/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`.
## Configuring Environment Variables
diff --git a/install/mindspore_cpu_install_conda.md b/install/mindspore_cpu_install_conda.md
index 27d191004f359d55ac46008f27ba5c2d07ca7e9f..3abc0b48593f21065734786290aff139d54d3523 100644
--- a/install/mindspore_cpu_install_conda.md
+++ b/install/mindspore_cpu_install_conda.md
@@ -58,7 +58,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
- `{system}` specifies the system. For example, if Ubuntu x86 architecture is used, set `{system}` to `ubuntu_x86`. Currently, the CPU version supports the following systems: `ubuntu_aarch64`/`ubuntu_x86`.
diff --git a/install/mindspore_cpu_install_pip.md b/install/mindspore_cpu_install_pip.md
index 80d2d21c99702197a87a86c105350639924786e2..a7abd6c15f149bd5c0b22a05376901a0390b9f7a 100644
--- a/install/mindspore_cpu_install_pip.md
+++ b/install/mindspore_cpu_install_pip.md
@@ -32,7 +32,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
- `{system}` specifies the system. For example, if Ubuntu x86 architecture is used, set `{system}` to `ubuntu_x86`. Currently, the CPU version supports the following systems: `ubuntu_aarch64`/`ubuntu_x86`.
diff --git a/install/mindspore_cpu_install_pip_en.md b/install/mindspore_cpu_install_pip_en.md
index 1174728459384c640ee89d22b9577790d320136e..7aa8c70d5c06c947c90b4e3bd38b3eefb31d8a11 100644
--- a/install/mindspore_cpu_install_pip_en.md
+++ b/install/mindspore_cpu_install_pip_en.md
@@ -32,7 +32,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`.
- `{system}` denotes the system version. For example, if you are using Ubuntu x86 architecture, `{system}` should be `ubuntu_x86`. Currently, the following systems are supported by CPU: `ubuntu_aarch64`/`ubuntu_x86`.
diff --git a/install/mindspore_cpu_install_source.md b/install/mindspore_cpu_install_source.md
index 2e26d4cc416bd423bf36e7f367895816a2043062..5133d9abde16ef728033b4bc0628fc5d8efe0e29 100644
--- a/install/mindspore_cpu_install_source.md
+++ b/install/mindspore_cpu_install_source.md
@@ -71,7 +71,7 @@ pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl -i htt
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARMv8 64-bit, set it to `aarch64`.
## Installation Verification
diff --git a/install/mindspore_cpu_install_source_en.md b/install/mindspore_cpu_install_source_en.md
index 80957491277cf9a1d2ae5c217121e139c61d9d78..94280b766b71d9da2882ae15f9a7a8444e90fd2b 100644
--- a/install/mindspore_cpu_install_source_en.md
+++ b/install/mindspore_cpu_install_source_en.md
@@ -72,7 +72,7 @@ pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl -i htt
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`.
## Installation Verification
diff --git a/install/mindspore_cpu_macos_install_conda.md b/install/mindspore_cpu_macos_install_conda.md
index 1cee10f453ccb46d4545b61faeb7f160d3701175..6969711dbccf5b4456a4b38bc5f43ddd9afc3f44 100644
--- a/install/mindspore_cpu_macos_install_conda.md
+++ b/install/mindspore_cpu_macos_install_conda.md
@@ -53,7 +53,7 @@ conda activate mindspore
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_macos_install_pip.md b/install/mindspore_cpu_macos_install_pip.md
index 7292ebd303976d06a6d077df3df50bef43290aaf..32dd85ba12538f04934c127871694fd78ee97cc9 100644
--- a/install/mindspore_cpu_macos_install_pip.md
+++ b/install/mindspore_cpu_macos_install_pip.md
@@ -29,7 +29,7 @@
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_macos_install_pip_en.md b/install/mindspore_cpu_macos_install_pip_en.md
index a611e95c980ce0ce2e4655a47ff0a8a8519f4320..2889f0d231045dbc6f59c19fdc37638b0c443be9 100644
--- a/install/mindspore_cpu_macos_install_pip_en.md
+++ b/install/mindspore_cpu_macos_install_pip_en.md
@@ -28,7 +28,7 @@ This document describes how to quickly install MindSpore by pip in a macOS syste
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_macos_install_source.md b/install/mindspore_cpu_macos_install_source.md
index ecd185cd7947054444803728dbe10fe30659e315..b9a4f0e5c1b3b37105d986c8bf66e5f65f7e2635 100644
--- a/install/mindspore_cpu_macos_install_source.md
+++ b/install/mindspore_cpu_macos_install_source.md
@@ -57,7 +57,7 @@ pip install build/package/mindspore-{version}-py37-none-any.whl -i https://pypi.
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_macos_install_source_en.md b/install/mindspore_cpu_macos_install_source_en.md
index e680ba9cf9b97c3f85f56cd84baca505012ab19e..e0d82d5a4e3aa272d3708685e0427b66a85b3f24 100644
--- a/install/mindspore_cpu_macos_install_source_en.md
+++ b/install/mindspore_cpu_macos_install_source_en.md
@@ -57,7 +57,7 @@ pip install build/package/mindspore-{version}-py37-none-any.whl -i https://pypi.
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_win_install_conda.md b/install/mindspore_cpu_win_install_conda.md
index 3d913dee797aa56817f567b0aa56c9f1b4113dde..2e26bab630af50f82451826fe35f417b256efba2 100644
--- a/install/mindspore_cpu_win_install_conda.md
+++ b/install/mindspore_cpu_win_install_conda.md
@@ -59,7 +59,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_win_install_pip.md b/install/mindspore_cpu_win_install_pip.md
index eeadbd9231adaf3d1ad771eb7a2509f8a73142c9..ae5795f907f24c33955849d7de03424959f697b7 100644
--- a/install/mindspore_cpu_win_install_pip.md
+++ b/install/mindspore_cpu_win_install_pip.md
@@ -32,7 +32,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_win_install_pip_en.md b/install/mindspore_cpu_win_install_pip_en.md
index 24a0d86bb38546a8261b8cae6dc7c41226437f0a..8a2b1c48b64005391a0912245f0a4d0000d23831 100644
--- a/install/mindspore_cpu_win_install_pip_en.md
+++ b/install/mindspore_cpu_win_install_pip_en.md
@@ -32,7 +32,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_win_install_source.md b/install/mindspore_cpu_win_install_source.md
index c6bfbfd9ec5799a8599961199f7db5d5d753ae9e..600eac130fb46c1d9c557237208563a27919e64c 100644
--- a/install/mindspore_cpu_win_install_source.md
+++ b/install/mindspore_cpu_win_install_source.md
@@ -21,7 +21,7 @@
- Ensure that the 64-bit Windows 10 (x86 architecture) operating system is installed.
- Ensure that [Visual C++ Redistributable for Visual Studio 2015](https://www.microsoft.com/zh-CN/download/details.aspx?id=48145) is installed.
- Ensure that the [git](https://github.com/git-for-windows/git/releases/download/v2.29.2.windows.2/Git-2.29.2.2-64-bit.exe) tool is installed.
- - If git is not installed in `ProgramFiles`, before running the above command, set an environment variable to specify the location of `patch.exe`. For example, if git is installed in `D:\git`, set `set MS_PATCH_PATH=D:\git\usr\bin`.
+ - If git is not installed in `ProgramFiles`, set an environment variable to specify the location of `patch.exe`. For example, if git is installed in `D:\git`, set `set MS_PATCH_PATH=D:\git\usr\bin`.
- Ensure that [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) is installed.
- The installation path cannot contain Chinese or Japanese characters. After installation, add `MinGW\bin` under the installation path to the system environment variable Path. For example, if MinGW is installed in `D:\gcc`, add `D:\gcc\MinGW\bin` to the Path environment variable.
- Ensure that [CMake 3.18.3](https://github.com/Kitware/Cmake/releases/tag/v3.18.3) is installed.
@@ -54,7 +54,7 @@ pip install build/package/mindspore-{version}-cp37-cp37m-win_amd64.whl -i https:
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
## Installation Verification
diff --git a/install/mindspore_cpu_win_install_source_en.md b/install/mindspore_cpu_win_install_source_en.md
index 5bfdcbb9f2394f8b0b5cd6a9fe1427fb6f36da01..33a60fe6bb131eb1e8369d3e1ecd99a4dd83fe0c 100644
--- a/install/mindspore_cpu_win_install_source_en.md
+++ b/install/mindspore_cpu_win_install_source_en.md
@@ -53,7 +53,7 @@ pip install build/package/mindspore-{version}-cp37-cp37m-win_amd64.whl -i https:
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
## Installation Verification
diff --git a/install/mindspore_gpu_install_conda.md b/install/mindspore_gpu_install_conda.md
index 0e0965dd76409b488bd0009d1dd71cd57f9f65e9..1c7b1d1bb137e2e61fa2fbb8c22ab108c97a633c 100644
--- a/install/mindspore_gpu_install_conda.md
+++ b/install/mindspore_gpu_install_conda.md
@@ -64,7 +64,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
## Installation Verification
diff --git a/install/mindspore_gpu_install_pip.md b/install/mindspore_gpu_install_pip.md
index d9316e4879e5a739fc22bb82b50726b7d530c718..4577c9d9683f63a09cc21a02724f6a4d406c2013 100644
--- a/install/mindspore_gpu_install_pip.md
+++ b/install/mindspore_gpu_install_pip.md
@@ -39,7 +39,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
## Installation Verification
diff --git a/install/mindspore_gpu_install_pip_en.md b/install/mindspore_gpu_install_pip_en.md
index 687428753629f087e2a9ed1eedec366b3d6466ca..4c6ea5341c10e907b784eea73312ec9958238d6a 100644
--- a/install/mindspore_gpu_install_pip_en.md
+++ b/install/mindspore_gpu_install_pip_en.md
@@ -39,7 +39,7 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`.
## Installation Verification
diff --git a/install/mindspore_gpu_install_source.md b/install/mindspore_gpu_install_source.md
index 1a8ac68847785e17e9c1c64fcc57072274e89e0a..a33187da6ac2912bd130852ec24ef0967b2db01e 100644
--- a/install/mindspore_gpu_install_source.md
+++ b/install/mindspore_gpu_install_source.md
@@ -83,7 +83,7 @@ pip install build/package/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl -i
Of which:
- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the whl package installation (for details, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)); in other cases, install the dependencies by yourself.
-- `{version}` specifies the MindSpore version number. For example, when downloading MindSpore 1.0.1, set `{version}` to 1.0.1.
+- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0.
- `{arch}` specifies the system architecture. For example, if the Linux system used is x86 64-bit, set `{arch}` to `x86_64`; if the system is ARM 64-bit, set it to `aarch64`.
## Installation Verification
diff --git a/install/mindspore_gpu_install_source_en.md b/install/mindspore_gpu_install_source_en.md
index 35f41e5e3b7ef8619cd6bd1623abcfb716fa0e7a..496b79cc5a7bede856e8c2c9f6ead210d8aacf8b 100644
--- a/install/mindspore_gpu_install_source_en.md
+++ b/install/mindspore_gpu_install_source_en.md
@@ -82,7 +82,7 @@ pip install build/package/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl -i
Of which,
- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`.
## Installation Verification
diff --git a/tutorials/inference/source_en/index.rst b/tutorials/inference/source_en/index.rst
index 77e779bf19d5c2c7227a3267093afa288ff086fd..352af34cc80aac119b0da4e3afba8ae9bfd25269 100644
--- a/tutorials/inference/source_en/index.rst
+++ b/tutorials/inference/source_en/index.rst
@@ -24,3 +24,6 @@ Inference Using MindSpore
:caption: Inference Service
serving_example
+ serving_grpc
+ serving_restful
+ serving_model
diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst b/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst
index d16b94a6134bb498484d11cc4a9535cfddc6f39a..1544dd6a232ca90820288d832336763cff2b3774 100644
--- a/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst
+++ b/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst
@@ -5,3 +5,4 @@ Inference on Ascend 310
:maxdepth: 1
multi_platform_inference_ascend_310_air
+ multi_platform_inference_ascend_310_mindir
\ No newline at end of file
diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md b/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md
new file mode 100644
index 0000000000000000000000000000000000000000..1a98b0300b6cb18b86e9573f7b7aa719e275dc6e
--- /dev/null
+++ b/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md
@@ -0,0 +1,5 @@
+# Inference on the Ascend 310 AI Processor Using MindIR Model
+
+No English version is available right now. Contributions are welcome.
+
+
diff --git a/tutorials/inference/source_en/serving_grpc.md b/tutorials/inference/source_en/serving_grpc.md
new file mode 100644
index 0000000000000000000000000000000000000000..596ae4d3e37be28fa677746bb360f1b79b54c9bf
--- /dev/null
+++ b/tutorials/inference/source_en/serving_grpc.md
@@ -0,0 +1,5 @@
+# Accessing the MindSpore Serving Service Based on the gRPC Interface
+
+No English version is available right now. Contributions are welcome.
+
+
diff --git a/tutorials/inference/source_en/serving_model.md b/tutorials/inference/source_en/serving_model.md
new file mode 100644
index 0000000000000000000000000000000000000000..260cbf46319ea10e51b54c5ad328da000e108606
--- /dev/null
+++ b/tutorials/inference/source_en/serving_model.md
@@ -0,0 +1,5 @@
+# Servable Provided Through Model Configuration
+
+No English version is available right now. Contributions are welcome.
+
+
diff --git a/tutorials/inference/source_en/serving_restful.md b/tutorials/inference/source_en/serving_restful.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f2f22a30fa75ed2750a357a4d0dcf5a59bcdd20
--- /dev/null
+++ b/tutorials/inference/source_en/serving_restful.md
@@ -0,0 +1,5 @@
+# Accessing the MindSpore Serving Service Based on the RESTful Interface
+
+No English version is available right now. Contributions are welcome.
+
+
diff --git a/tutorials/lite/source_en/images/side_train_sequence.png b/tutorials/lite/source_en/images/side_train_sequence.png
index 16e4af67a46370813760c09a15da756ad87fa643..058f03d3973beab9c8a245d6aa898f938d486315 100644
Binary files a/tutorials/lite/source_en/images/side_train_sequence.png and b/tutorials/lite/source_en/images/side_train_sequence.png differ
diff --git a/tutorials/lite/source_en/index.rst b/tutorials/lite/source_en/index.rst
index fcddb633dca596106919215c815e3ac44b6e86e5..12b05c6bf3f9bda98aff73da24bcaeeac649759e 100644
--- a/tutorials/lite/source_en/index.rst
+++ b/tutorials/lite/source_en/index.rst
@@ -274,7 +274,7 @@ Using MindSpore on Mobile and IoT
Performing Benchmark Testing of MindSpore ToD
diff --git a/tutorials/lite/source_en/quick_start/train_lenet.md b/tutorials/lite/source_en/quick_start/train_lenet.md
index f8c631481e11e3021d800002b581a9d10ef4611d..08422b683e62a1212d813c4fd963b29faafa52f6 100644
--- a/tutorials/lite/source_en/quick_start/train_lenet.md
+++ b/tutorials/lite/source_en/quick_start/train_lenet.md
@@ -67,10 +67,10 @@ Acquire `converter` and `runtime-arm64-cpu` tool-package based on MindSpore Lite
```shell
# generate converter tools and runtime package on x86
-bash build.sh -I x86_64 -T on -e CPU -j8
+bash build.sh -I x86_64 -T on -e cpu -j8
# generate runtime package on arm64
-bash build.sh -I arm64 -T on -e CPU -j8
+bash build.sh -I arm64 -T on -e cpu -j8
```
You could also directly download them from [here](https://www.mindspore.cn/tutorial/lite/en/master/use/downloads.html) and store them in the `output` directory under the MindSpore source code (if no `output` directory exists, create it).
diff --git a/tutorials/lite/source_en/use/net_train_tool.md b/tutorials/lite/source_en/use/benchmark_train_tool.md
similarity index 82%
rename from tutorials/lite/source_en/use/net_train_tool.md
rename to tutorials/lite/source_en/use/benchmark_train_tool.md
index 98e66eaefde71b80da80a2cd3e1acd816e0e0985..560edf44ee51e0f8e6ad7cf56384a5594bff8d81 100644
--- a/tutorials/lite/source_en/use/net_train_tool.md
+++ b/tutorials/lite/source_en/use/benchmark_train_tool.md
@@ -15,28 +15,28 @@
-
+
## Overview
-The same as [`benchmark` tool](https://www.mindspore.cn/tutorial/lite/en/master/use/benchmark_tool.html), you can use the `net_train` tool to perform benchmark testing on a MindSpore ToD (Train on Device) model. It can not only perform quantitative analysis (performance) on the execution duration the model, but also perform comparative error analysis (accuracy) based on the output of the specified model.
+The same as `benchmark`, you can use the `benchmark_train` tool to perform benchmark testing on a MindSpore ToD (Train on Device) model. It can not only perform quantitative analysis (performance) of the execution duration of the model, but also perform comparative error analysis (accuracy) based on the output of the specified model.
## Linux Environment Usage
### Environment Preparation
-To use the `net_train` tool, you need to prepare the environment as follows:
+To use the `benchmark_train` tool, you need to prepare the environment as follows:
-- Compilation: Install build dependencies and perform build. The code of the `net_train` tool is stored in the `mindspore/lite/tools/net_train` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html#compilation-example) in the build document.
+- Compilation: Install build dependencies and perform build. The code of the `benchmark_train` tool is stored in the `mindspore/lite/tools/benchmark_train` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html#compilation-example) in the build document.
-- Run: Obtain the `net_train` tool and configure environment variables. For details, see [Output Description](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html#output-description) in the build document.
+- Run: Obtain the `benchmark_train` tool and configure environment variables. For details, see [Output Description](https://www.mindspore.cn/tutorial/lite/en/master/use/build.html#output-description) in the build document.
### Parameter Description
-The command used for benchmark testing based on the compiled `net_train` tool is as follows:
+The command used for benchmark testing based on the compiled `benchmark_train` tool is as follows:
```bash
-./net_train [--modelFile=] [--accuracyThreshold=]
+./benchmark_train [--modelFile=] [--accuracyThreshold=]
[--expectedDataFile=] [--warmUpLoopCount=]
[--timeProfiling=] [--help]
[--inDataFile=] [--epochs=]
@@ -50,7 +50,7 @@ The following describes the parameters in detail.
| `--modelFile=` | Mandatory | Specifies the file path of the MindSpore Lite model for benchmark testing. | String | Null | - |
| `--accuracyThreshold=` | Optional | Specifies the accuracy threshold. | Float | 0.5 | - |
| `--expectedDataFile=` | Optional | Specifies the file path of the benchmark data. The benchmark data, as the comparison output of the tested model, is output from the forward inference of the tested model under other deep learning frameworks using the same input. | String | Null | - |
-| `--help` | Optional | Displays the help information about the `net_train` command. | - | - | - |
+| `--help` | Optional | Displays the help information about the `benchmark_train` command. | - | - | - |
| `--warmUpLoopCount=` | Optional | Specifies the number of preheating inference times of the tested model before multiple rounds of the benchmark test are executed. | Integer | 3 | - |
| `--timeProfiling=` | Optional | Specifies whether to use TimeProfiler to print every kernel's cost time. | Boolean | false | true, false |
| `--inDataFile=` | Optional | Specifies the file path of the input data of the tested model. If this parameter is not set, a random value will be used. | String | Null | - |
@@ -59,14 +59,14 @@ The following describes the parameters in detail.
### Example
-When using the `net_train` tool to perform benchmark testing, you can set different parameters to implement different test functions. The testing is classified into performance test and accuracy test.
+When using the `benchmark_train` tool to perform benchmark testing, you can set different parameters to implement different test functions. The testing is classified into performance test and accuracy test.
#### Performance Test
The main test indicator of the performance test performed by the Benchmark tool is the duration of a single forward inference. In a performance test, you do not need to set benchmark data parameters such as `benchmarkDataFile`. But you can set the parameter `timeProfiling` as True or False to decide whether to print the running time of the model at the network layer on a certain device. The default value of `timeProfiling` is False. For example:
```bash
-./net_train --modelFile=./models/test_benchmark.ms --s=10
+./benchmark_train --modelFile=./models/test_benchmark.ms --s=10
```
This command uses a random input, and other parameters use default values. After this command is executed, the following statistics are displayed. The statistics include the minimum duration, maximum duration, and average duration of a single inference after the tested model runs for the specified number of inference rounds.
@@ -76,7 +76,7 @@ Model = test_benchmark.ms, numThreads = 2, MinRunTime = 72.228996 ms, MaxRuntime
```
```bash
-./net_train --modelFile=./models/test_benchmark.ms --timeProfiling=true
+./benchmark_train --modelFile=./models/test_benchmark.ms --timeProfiling=true
```
This command uses a random input, sets the parameter `timeProfiling` to true, and leaves the other parameters at their default values. After this command is executed, the statistics on the running time of the model at the network layer are displayed as follows. In this case, the statistics are grouped by `opName` and `optype`: `opName` indicates the operator name, `optype` indicates the operator type, `avg` indicates the average running time of the operator per single run, `percent` indicates the ratio of the operator running time to the total operator running time, `calledTimes` indicates the number of times that the operator is run, and `opTotalTime` indicates the total time that the operator runs for the specified number of times. Finally, `total time` and `kernel cost` show the average time consumed by a single model inference and the sum of the average time consumed by all operators in the inference, respectively.
@@ -124,7 +124,7 @@ total time : 2.90800 ms, kernel cost : 2.74851 ms
The accuracy test performed by the Benchmark tool aims to verify the accuracy of the MindSpore model output by setting benchmark data (the default input and benchmark data type are float32). In an accuracy test, in addition to the `modelFile` parameter, the `benchmarkDataFile` parameter must be set. For example:
```bash
-./net_train --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --device=CPU --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out
+./benchmark_train --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --device=CPU --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out
```
This command specifies the input data and benchmark data of the tested model, specifies that the model inference program runs on the CPU, and sets the accuracy threshold to 3%. After this command is executed, the following statistics are displayed, including the single input data of the tested model, output result and average deviation rate of the output node, and average deviation rate of all nodes.
@@ -141,5 +141,5 @@ Mean bias of all nodes: 0%
To set specified input shapes (such as 1,32,32,1), use the command as follows:
```bash
-./net_train --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --inputShapes=1,32,32,1 --device=CPU --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out
+./benchmark_train --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --inputShapes=1,32,32,1 --device=CPU --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out
```
diff --git a/tutorials/lite/source_en/use/build.md b/tutorials/lite/source_en/use/build.md
index df53ff963a56480123ac9fbbf1237a649fa9383d..c9abc27f8ce0800ba7cf495e304c05df98b445e6 100644
--- a/tutorials/lite/source_en/use/build.md
+++ b/tutorials/lite/source_en/use/build.md
@@ -22,7 +22,6 @@
- [Output Description](#output-description)
- [Description of Converter's Directory Structure](#description-of-converter-s-directory-structure-1)
- [Description of Benchmark's Directory Structure](#description-of-benchmark-s-directory-structure)
- - [Training Output Description](#training-output-description-1)
@@ -37,19 +36,19 @@ Modules in inference version:
| converter | Linux, Windows | Model Conversion Tool |
| runtime(cpp, java) | Linux, Windows, Android | Model Inference Framework(Windows platform does not support java version runtime) |
| benchmark | Linux, Windows, Android | Benchmarking Tool |
-| cropper | Linux | static library crop tool for libmindspore-lite.a |
+| cropper | Linux | Static library crop tool for libmindspore-lite.a |
| minddata | Linux, Android | Image Processing Library |
Modules in training version:
-| Module | Support Platform | Description |
-| ------------ | ---------------- | -------------------------------------------- |
-| converter | Linux | Model Conversion Tool |
-| runtime(cpp) | Linux, Android | Model Inference/Train Framework(cpp) |
-| benchmark | Linux, Android | Image Processing Library |
-| cropper | Linux | static library crop tool for libmindspore-lite.a |
-| minddata | Linux, Android | Image Processing Library |
-| net_train | Linux, Android | Verify bit exactness |
+| Module | Support Platform | Description |
+| --------------- | ---------------- | ------------------------------------------------ |
+| converter | Linux | Model Conversion Tool |
+| runtime(cpp)    | Linux, Android   | Model Training Framework (Java is not supported)  |
+| benchmark | Linux, Android | Benchmarking Tool |
+| cropper | Linux | Static library crop tool for libmindspore-lite.a |
+| minddata | Linux, Android | Image Processing Library |
+| benchmark_train | Linux, Android | Performance and Accuracy Validation |
## Linux Environment Compilation
@@ -107,17 +106,17 @@ MindSpore Lite provides a compilation script `build.sh` for one-click compilatio
| -n | Specifies to compile the lightweight image processing module. | lite_cv | No |
| -A | Language used by MindSpore Lite, default cpp. If the parameter is set to java, the AAR is compiled. | cpp, java | No |
| -C | If this parameter is set, the converter is compiled, default on. | on, off | No |
-| -o | If this parameter is set, the benchmark and static library crop tool is compiled, default on. | on, off | No |
+| -o | If this parameter is set, the benchmark and static library crop tool are compiled, default on. | on, off | No |
| -t | If this parameter is set, the testcase is compiled, default off. | on, off | No |
-| -T | If this parameter is set, ToD(Train on Device) is compiled, i.e., this option is required when compiling MindSpore ToD, default off. | on, off | No |
+| -T | If this parameter is set, the MindSpore Lite training version is compiled, i.e., this option is required when compiling the training packages, default off. | on, off | No |
> When the `-I` parameter changes, for example, from `-I x86_64` to `-I arm64`, adding `-i` for incremental compilation does not take effect.
>
-> When compiling the AAR package, the `-A java` parameter must be added, and there is no need to add the `-I` parameter.
+> When compiling the AAR package, the `-A java` parameter must be added, and there is no need to add the `-I` parameter. By default, the built-in CPU and GPU operators are compiled at the same time.
>
> The compiler will only generate training packages when `-T` is set to on.
>
-> Any `-e` compilation option, the CPU operator will be compiled into it.
+> With any `-e` compilation option, the CPU operators will be compiled in.
### Compilation Example
@@ -153,19 +152,19 @@ Then, run the following commands in the root directory of the source code to com
bash build.sh -I arm64 -i -j32
```
-- Release version of the ARM 64-bit architecture, with the built-in CPU operator compiled:
+- Release version of the ARM 64-bit architecture, with the built-in CPU operators compiled:
```bash
bash build.sh -I arm64 -e cpu
```
-- Release version of the ARM 64-bit architecture, with the built-in CPU and GPU operator compiled:
+- Release version of the ARM 64-bit architecture, with the built-in CPU and GPU operators compiled:
```bash
bash build.sh -I arm64 -e gpu
```
-- Release version of the ARM 64-bit architecture, with the built-in CPU and NPU operator compiled:
+- Release version of the ARM 64-bit architecture, with the built-in CPU and NPU operators compiled:
```bash
bash build.sh -I arm64 -e npu
@@ -177,23 +176,35 @@ Then, run the following commands in the root directory of the source code to com
bash build.sh -I arm64 -n lite_cv
```
-- Compile MindSpore Lite AAR:
+- Compile MindSpore Lite AAR, with the built-in CPU and GPU operators compiled:
+
+ ```bash
+ bash build.sh -A java
+ ```
+
+- Compile MindSpore Lite AAR, with the built-in CPU operators compiled:
+
+ ```bash
+ bash build.sh -A java -e cpu
+ ```
+
+- Release version of the x86_64 architecture, with the benchmark, cropper and converter compiled:
```bash
- bash build.sh -A java -i
+ bash build.sh -I x86_64
```
- Release version of the x86_64 architecture, with the converter compiled and train on device enabled:
```bash
- bash build.sh -I x86_64 -C on -T on
+ bash build.sh -I x86_64 -T on
```
### Inference Output Description
After the compilation is complete, go to the `mindspore/output` directory of the source code to view the file generated after compilation. The file is divided into the following parts.
-- `mindspore-lite-{version}-converter-{os}-{arch}.tar.gz`: Contains model conversion tool.
+- `mindspore-lite-{version}-converter-{os}-{arch}.tar.gz`: Model converter.
- `mindspore-lite-{version}-inference-{os}-{arch}.tar.gz`: Contains model inference framework, benchmarking tool, performance analysis tool and library crop tool.
- `mindspore-lite-maven-{version}.zip`: Contains the model inference framework AAR package.
@@ -283,6 +294,7 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
```
> 1. Compiling ARM64 gets the inference framework output of cpu/gpu/npu by default; if you add `-e gpu`, you will get the inference framework output of cpu/gpu. ARM32 only supports CPU.
+>
> 2. Before running the tools in the converter and benchmark directories, you need to configure environment variables and add the path where the dynamic libraries of MindSpore Lite are located to the path where the system searches for dynamic libraries.
Configure converter:
@@ -299,14 +311,14 @@ export LD_LIBRARY_PATH= ./output/mindspore-lite-{version}-inference-{os}-{arch}/
### Training Output Description
-If the `-T on` is added to the MindSpore ToD (Train on Device), go to the `mindspore/output` directory of the source code to view the file generated after compilation. The file is divided into the following parts.
+If `-T on` is added when compiling MindSpore Lite, go to the `mindspore/output` directory of the source code to view the file generated after compilation. The file is divided into the following parts.
-- `mindspore-lite-{version}-train-converter-{os}-{arch}.tar.gz`: Contains model conversion tool, Only supports the MindIR model.
+- `mindspore-lite-{version}-train-converter-{os}-{arch}.tar.gz`: Model converter, which only supports model files in MINDIR format.
- `mindspore-lite-{version}-train-{os}-{arch}.tar.gz`: Contains model training framework, performance analysis tool.
> version: Version of the output, consistent with that of MindSpore.
>
-> device: The processor that runs ToD. Currently only build-in CPU is available.
+> device: The processor that runs MindSpore Lite. Currently only the built-in CPU is available.
>
> os: Operating system on which the output will be deployed.
>
@@ -357,35 +369,15 @@ The MindSpore Lite training framework can be obtained under `-I x86_64`, `-I arm
│ ├── lite_mat.h # The Header files of image data class structure
│ └── lib # Image processing dynamic library
│ ├── libminddata-lite.so # The files of image processing dynamic library
- │ └── net_train
- │ ├── net_train # training model benchmark tool
- ```
-
-- When the compilation option is `-I arm64`:
-
- ```text
- |
- ├── mindspore-lite-{version}-train-android-aarch64
- │ └── include # Header files of training framework
- │ └── lib # Training framework library
- │ ├── libmindspore-lite.a # Static library of training framework in MindSpore Lite
- │ ├── libmindspore-lite.so # Dynamic library of training framework in MindSpore Lite
- │ └── minddata # Image processing dynamic library
- │ └── include # Header files
- │ └── lite_cv # The Header files of image processing dynamic library
- │ ├── image_process.h # The Header files of image processing function
- │ ├── lite_mat.h # The Header files of image data class structure
- │ └── lib # Image processing dynamic library
- │ ├── libminddata-lite.so # The files of image processing dynamic library
- │ └── net_train
- │ ├── net_train # training model benchmark tool
+ │ └── benchmark_train
+ │ ├── benchmark_train # training model benchmark tool
```
-- When the compilation option is `-I arm32`:
+- When the compilation option is `-I arm64` or `-I arm32`:
```text
|
- ├── mindspore-lite-{version}-train-android-aarch32
+ ├── mindspore-lite-{version}-train-android-{arch}
│ └── include # Header files of training framework
│ └── lib # Training framework library
│ ├── libmindspore-lite.a # Static library of training framework in MindSpore Lite
@@ -397,11 +389,11 @@ The MindSpore Lite training framework can be obtained under `-I x86_64`, `-I arm
│ ├── lite_mat.h # The Header files of image data class structure
│ └── lib # Image processing dynamic library
│ ├── libminddata-lite.so # The files of image processing dynamic library
- │ └── net_train
- │ ├── net_train # training model benchmark tool
+ │ └── benchmark_train
+ │ ├── benchmark_train # training model benchmark tool
```
-> Before running the tools in the converter and the net_train directory, you need to configure environment variables, and configure the path where the dynamic libraries of MindSpore Lite are located to the path where the system searches for dynamic libraries.
+> Before running the tools in the converter and benchmark_train directories, you need to configure environment variables and add the path where the dynamic libraries of MindSpore Lite are located to the path where the system searches for dynamic libraries.
Configure converter:
@@ -409,7 +401,7 @@ Configure converter:
export LD_LIBRARY_PATH=./output/mindspore-lite-{version}-train-converter-{os}-{arch}/lib:./output/mindspore-lite-{version}-train-converter-{os}-{arch}/third_party/glog/lib:${LD_LIBRARY_PATH}
```
-Configure net_train:
+Configure benchmark_train:
```bash
export LD_LIBRARY_PATH=./output/mindspore-lite-{version}-train-{os}-{arch}/lib:${LD_LIBRARY_PATH}
@@ -510,6 +502,4 @@ The content includes the following parts:
│ └── include # Header files of inference framework
```
-### Training Output Description
-
-Currently, MindSpore ToD (Train on Device) is not supported on Windows.
+> Currently, MindSpore Lite training is not supported on Windows.
\ No newline at end of file
diff --git a/tutorials/lite/source_en/use/converter_train.md b/tutorials/lite/source_en/use/converter_train.md
index 7316b278a43ea4e51c0f2076ebb466b4739df969..0c6bba0ba4fe4123db1205ebadc82015c163888c 100644
--- a/tutorials/lite/source_en/use/converter_train.md
+++ b/tutorials/lite/source_en/use/converter_train.md
@@ -1,10 +1,10 @@
-# Creating MindSpore ToD Models
+# Creating MindSpore Lite Models
`Linux` `Environment Preparation` `Model Export` `Model Converting` `Intermediate` `Expert`
-- [Creating MindSpore ToD Models](#creating-mindspore-tod-model)
+- [Creating MindSpore Lite Models](#creating-mindspore-lite-model)
- [Overview](#overview)
- [Linux Environment](#linux-environment)
- [Environment Preparation](#environment-preparation)
@@ -17,16 +17,16 @@
## Overview
-Creating your MindSpore ToD(Train on Device) model is a two step procedure:
+Creating your MindSpore Lite (Train on Device) model is a two-step procedure:
- In the first step the model is defined and the layers that should be trained must be declared. This is done on the server, using MindSpore-based [Python code](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html#export-mindir-model). The model is then exported into a protobuf format, which is called MINDIR.
-- In the seconde step this `.mindir` model is converted into a `.ms` format that can be loaded onto an embedded device and can be trained using the MindSpore ToD framework. The converted `.ms` models can be used for both training and inference.
+- In the second step this `.mindir` model is converted into a `.ms` format that can be loaded onto an embedded device and can be trained using the MindSpore Lite framework. The converted `.ms` models can be used for both training and inference.
## Linux Environment
### Environment Preparation
-MindSpore ToD model transfer tool (only suppot Linux OS) has provided multiple parameters. The procedure is as follows:
+The MindSpore Lite model transfer tool (supported only on Linux) provides multiple parameters. The procedure is as follows:
- Compile or download the compiled model transfer tool.
@@ -34,7 +34,7 @@ MindSpore ToD model transfer tool (only suppot Linux OS) has provided multiple p
### Parameters Description
-The table below shows the parameters used in the MindSpore ToD model training transfer tool.
+The table below shows the parameters used in the MindSpore Lite model training transfer tool.
| Parameters | required | Parameter Description | Value Range | Default Value |
| --------------------------- | -------- | ------------------------------------------------------------ | ----------- | ------------- |
| `--help` | no | Prints all the help information. | - | - |
diff --git a/tutorials/lite/source_en/use/runtime_cpp.md b/tutorials/lite/source_en/use/runtime_cpp.md
index 840fdd631e16cf8282fecaf5eac8c3eb1e2890dd..d01d94e19c4fb3e75a740f2a4e86352a23337d5b 100644
--- a/tutorials/lite/source_en/use/runtime_cpp.md
+++ b/tutorials/lite/source_en/use/runtime_cpp.md
@@ -75,7 +75,9 @@ When MindSpore Lite is used for inference, sessions are the main entrance of inf
Contexts save some basic configuration parameters required by sessions to guide graph compilation and execution. The definition of [Context](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#context) is as follows:
-MindSpore Lite supports heterogeneous inference. The preferred backend for inference is specified by [device_list_](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#device-list) in [Context](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#context) and is CPU by default. During graph compilation, operator selection and scheduling are performed based on backend configuration information in [device_list_](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#device-list). At present, only CPU, GPU and NPU are supported. When configuring the [DeviceContext](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontext) for GPU, GPU backend is preferred; When configuring the [DeviceContext](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontext) for NPU, NPU backend is preferred
+MindSpore Lite supports heterogeneous inference. The preferred backend for inference is specified by [device_list_](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#device-list) in [Context](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#context) and is CPU by default. During graph compilation, operator selection and scheduling are performed based on backend configuration information in [device_list_](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#device-list). At present, only two heterogeneous configurations are supported: CPU and GPU, or CPU and NPU. When the [DeviceContext](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontext) for GPU is configured, the GPU backend is preferred; when the [DeviceContext](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontext) for NPU is configured, the NPU backend is preferred.
+
+> `device_list_[0]` must be the CPU's `DeviceContext`, and `device_list_[1]` may be either the GPU's `DeviceContext` or the NPU's `DeviceContext`. For now, setting the CPU, GPU and NPU `DeviceContext` at the same time is not supported.
MindSpore Lite has a built-in thread pool shared by processes. During inference, [thread_num_](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#thread-num) is used to specify the maximum number of threads in the thread pool. The default maximum number is 2. It is recommended that the maximum number should be no more than 4. Otherwise, the performance may be affected.
diff --git a/tutorials/lite/source_en/use/runtime_java.md b/tutorials/lite/source_en/use/runtime_java.md
index 7809ce1e90231d16cfd31d8e9768e1da4db91215..8b4eb1c5f1b2cf96c576fe08211df42e98acbb67 100644
--- a/tutorials/lite/source_en/use/runtime_java.md
+++ b/tutorials/lite/source_en/use/runtime_java.md
@@ -76,7 +76,7 @@ if (!model.loadModel(context, "model.ms")) {
[MSConfig](https://www.mindspore.cn/doc/api_java/en/master/msconfig.html#msconfig) saves some basic configuration parameters required by the session, which is used to guide graph compilation and graph execution
-MindSpore Lite supports heterogeneous inference. The preferred backend for inference is specified by `deviceType` in [MSConfig](https://www.mindspore.cn/doc/api_java/en/master/msconfig.html#msconfig) and is CPU by default. During graph compilation, operator selection and scheduling are performed based on the preferred backend.
+MindSpore Lite supports heterogeneous inference. The preferred backend for inference is specified by `deviceType` in [MSConfig](https://www.mindspore.cn/doc/api_java/en/master/msconfig.html#msconfig), and CPU and GPU are supported. During graph compilation, operator selection and scheduling are performed based on the preferred backend.
MindSpore Lite has a built-in thread pool shared by processes. During inference, `threadNum` is used to specify the maximum number of threads in the thread pool. The default maximum number is 2. It is recommended that the maximum number does not exceed 4. Otherwise, the performance may be affected.
diff --git a/tutorials/lite/source_en/use/runtime_train_cpp.md b/tutorials/lite/source_en/use/runtime_train_cpp.md
index 0775e8fea6bf086c0da442bab452fb9d28e13d03..b100da8d808bba9f58dfc99c6e1bee66f66289d6 100644
--- a/tutorials/lite/source_en/use/runtime_train_cpp.md
+++ b/tutorials/lite/source_en/use/runtime_train_cpp.md
@@ -33,16 +33,8 @@
## Overview
-Here we review the operations that can be performed on the already converted MindSpore ToD(Train on Device) model.
-
-The model itself can represent different training schemes, for example:
-
-- A thin neural network that will be fully trained on the embedded device,
-- An already trained network that will only be fine tuned on the device,
-- A Transfer Learning Model in which an already trained "backbone" will be kept static while the "head" of the network will be trained.
-
The exact training scheme is encapsulated within the `.ms` model. The software that we will discuss below is not aware of it but rather performs training and inference in a generic manner.
-Following the conversion of the model on the server to an `.ms` format, the file should be downloaded to the embedded device for the ToD(Train on Device) process.
+Following the conversion of the model on the server to an `.ms` format, the file should be downloaded to the embedded device for the ToD process.
A sequence diagram explaining the train sequence is shown in the image below:
@@ -53,16 +45,16 @@ In this diagram the drawn objects represents:
- `OS`: A software element that is responsible to access storage data.
- `User`: The application/object that performs the training.
- `DataLoader`: An object that is responsible to load data from the storage and perform pre-processing prior to using it in the training (e.g., reading an image, rescaling it to a given size and converting it to bitmap).
-- `TrainSession`: A software module provided by MindSpore ToD(Train on Device), that provides flatbuffer DeSerialization into a network of nodes and interconnecting tensors. It performs graph compilation and calls the graph executor for train and inference.
+- `TrainSession`: A software module provided by MindSpore Lite that deserializes the flatbuffer into a network of nodes and interconnecting tensors. It performs graph compilation and calls the graph executor for training and inference.
## Session Creation
-In MindSpore ToD(Train on Device) framework, `TrainSession` class provides the main API to the system. Here we will see how to interact with a `TrainSession` object.
+In the MindSpore Lite framework, the `TrainSession` class provides the main API to the system. Here we will see how to interact with a `TrainSession` object.
### Reading Models
A model file is a flatbuffer-serialized file which was converted using the [MindSpore Model Converter Tool](https://www.mindspore.cn/tutorial/lite/en/master/use/converter_tool.html). These files have a `.ms` extension. Before model training and/or inference, the model needs to be loaded from the file system and parsed. Related operations are mainly implemented in the [`Model`](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#model) class, which holds the model data such as the network structure, tensor sizes, weight data and operator attributes.
-> Unlike in MindSpore Lite framework, in MindSpore ToD the user is not allowed to access the `Model` object, since it is being used by `TrainSession` during training. All interaction with `Model` including instantiation, Compiling and deletion are handled within `TrainSession`.
+> Unlike in inference, during training the user is not allowed to access the `Model` object, since it is being used by `TrainSession` during training. All interaction with `Model`, including instantiation, compilation and deletion, is handled within `TrainSession`.
### Creating Contexts
@@ -75,7 +67,7 @@ Once the TrainSession is created with the `Context` object, it is no longer need
There are two methods to create a session:
-- The first API allows MindSpore ToD(Train on Device) to access the filesystem and read the model from a file, parse it, compile it and produce a valid TrainSession object. The `Context` described above is passed to the TrainSession as a basic configuration. The static function has the following signature `TrainSession *TrainSession::CreateSession(const string& filename, const Context *context, bool mode)`, where `filename` is the model's file name, context is the `Context` and mode is the initial training mode of the session (Train/Eval). On Success, a fully compiled and ready to use `TrainSession` instance is returned by the function, this instance must be freed using `delete` on the termination of the process.
+- The first API allows MindSpore Lite to access the filesystem and read the model from a file, parse it, compile it and produce a valid TrainSession object. The `Context` described above is passed to the TrainSession as a basic configuration. The static function has the following signature `TrainSession *TrainSession::CreateSession(const string& filename, const Context *context, bool mode)`, where `filename` is the model's file name, `context` is the `Context`, and `mode` is the initial training mode of the session (Train/Eval). On success, a fully compiled and ready-to-use `TrainSession` instance is returned by the function; this instance must be freed using `delete` on termination of the process.
- The second API is similar to the first but uses an in-memory copy of the flatbuffer in order to create the `TrainSession`. The static function has the following signature `TrainSession *TrainSession::CreateSession(const char* buf, size_t size, const Context *context, bool mode)`, where `buf` is a pointer to the in-memory buffer and `size` is its length. On Success, a fully compiled and ready-to-use `TrainSession` instance is returned by the function. If needed, the buf pointer can be freed immediately. The returned `TrainSession` instance must be freed using `delete` when no longer needed.
### Example
@@ -135,12 +127,12 @@ if (ret != RET_OK) {
### Obtaining Input Tensors
Before graph execution, whether during training or inference, the input data must be filled into the model input tensors.
-MindSpore ToD(Train on Device) provides the following methods to obtain model input tensors:
+MindSpore Lite provides the following methods to obtain model input tensors:
1. Use the `GetInputsByTensorName` method to obtain model input tensors that are connected to the model input node based on the tensor name.
```cpp
- /// \brief Get input MindSpore ToD MSTensors of model by tensor name.
+ /// \brief Get input MindSpore Lite MSTensors of model by tensor name.
///
/// \param[in] tensor_name Define tensor name.
///
@@ -233,7 +225,7 @@ if ((in_data == nullptr)|| (in_labels == nullptr)) {
memcpy(in_data, data_ptr, inputs.at(data_index)->Size());
memcpy(in_labels, label_ptr, inputs.at(label_index)->Size());
// After filling the input tensors the data_ptr and label_ptr may be freed
-// The input tensors themselves are managed by MindSpore ToD and users are not allowd to access them or delete them
+// The input tensors themselves are managed by MindSpore Lite and users are not allowed to access them or delete them
```
Note:
diff --git a/tutorials/lite/source_en/use/tools_train.rst b/tutorials/lite/source_en/use/tools_train.rst
index e12c66c01512ab4a82772c0c743698d0a7942380..64e1d1fb2404756d38d62cfbb66c5e9a37fa274b 100644
--- a/tutorials/lite/source_en/use/tools_train.rst
+++ b/tutorials/lite/source_en/use/tools_train.rst
@@ -4,4 +4,4 @@ Other Tools
.. toctree::
:maxdepth: 1
- net_train_tool
\ No newline at end of file
+ benchmark_train_tool
\ No newline at end of file
diff --git a/tutorials/lite/source_zh_cn/images/train_sequence.png b/tutorials/lite/source_zh_cn/images/train_sequence.png
index 16e4af67a46370813760c09a15da756ad87fa643..058f03d3973beab9c8a245d6aa898f938d486315 100644
Binary files a/tutorials/lite/source_zh_cn/images/train_sequence.png and b/tutorials/lite/source_zh_cn/images/train_sequence.png differ
diff --git a/tutorials/lite/source_zh_cn/index.rst b/tutorials/lite/source_zh_cn/index.rst
index f3821352e55e1521065ee2c5a9b5e0c2657a25c9..109e542af619cf816e0081d88076b0a308aebc25 100644
--- a/tutorials/lite/source_zh_cn/index.rst
+++ b/tutorials/lite/source_zh_cn/index.rst
@@ -272,7 +272,7 @@
\n",
+ "\n",
+ " 根据该计算图的结果所示,其中图3为未使能图算融合时的对应计算图,图4为使能图算融合后的对应计算图。可以看到不仅自定义算子`MyOp`中的基本算子进行了融合,并且与其他算子也进行了更大范围融合。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 总结\n",
+ "\n",
+ "以上便完成了图算融合的体验过程,我们通过本次体验全面了解了如何开启图算融合模式,理解了如何生成高性能的融合算子。"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/tutorials/training/source_en/advanced_use/achieve_high_order_differentiation.md b/tutorials/training/source_en/advanced_use/achieve_high_order_differentiation.md
new file mode 100644
index 0000000000000000000000000000000000000000..d28199513cc743ca537b230caa65ca33bb9c9db2
--- /dev/null
+++ b/tutorials/training/source_en/advanced_use/achieve_high_order_differentiation.md
@@ -0,0 +1,5 @@
+# Achieve High Order Differentiation
+
+No English version is available right now; contributions are welcome.
+
+
diff --git a/tutorials/training/source_en/advanced_use/cv.rst b/tutorials/training/source_en/advanced_use/cv.rst
index 6805f048e9a1f14791df47d5b9ee506eb50b77c4..3f3db6d6cfc64e3eb129b5bc57907ed20aa29172 100644
--- a/tutorials/training/source_en/advanced_use/cv.rst
+++ b/tutorials/training/source_en/advanced_use/cv.rst
@@ -5,4 +5,5 @@ Computer Vision
:maxdepth: 1
cv_resnet50
- cv_resnet50_second_order_optimizer
\ No newline at end of file
+ cv_resnet50_second_order_optimizer
+ cv_mobilenetv2_fine_tune
\ No newline at end of file
diff --git a/tutorials/training/source_en/advanced_use/cv_mobilenetv2_fine_tune.md b/tutorials/training/source_en/advanced_use/cv_mobilenetv2_fine_tune.md
new file mode 100644
index 0000000000000000000000000000000000000000..d203617c97e89f698256ad613209be701ba3fb01
--- /dev/null
+++ b/tutorials/training/source_en/advanced_use/cv_mobilenetv2_fine_tune.md
@@ -0,0 +1,409 @@
+# Using MobileNetV2 to Implement Fine-Tuning
+
+`Linux` `Windows` `Ascend` `GPU` `CPU` `Model Development` `Intermediate` `Expert`
+
+
+
+- [Using MobileNetV2 to Implement Fine-Tuning](#using-mobilenetv2-to-implement-fine-tuning)
+ - [Overview](#overview)
+ - [Task Description and Preparations](#task-description-and-preparations)
+ - [Environment Configuration](#environment-configuration)
+ - [Downloading Code](#downloading-code)
+ - [Preparing a Pre-Trained Model](#preparing-a-pre-trained-model)
+ - [Preparing Data](#preparing-data)
+ - [Code for Loading a Pre-Trained Model](#code-for-loading-a-pre-trained-model)
+ - [Parameter Description](#parameter-description)
+ - [Running Python Files](#running-python-files)
+ - [Running Shell Scripts](#running-shell-scripts)
+ - [Loading Fine-Tuning Training](#loading-fine-tuning-training)
+ - [Loading Training on CPU](#loading-training-on-cpu)
+ - [Loading Training on GPU](#loading-training-on-gpu)
+ - [Loading Training on Ascend AI Processor](#loading-training-on-ascend-ai-processor)
+ - [Fine-Tuning Training Result](#fine-tuning-training-result)
+ - [Validating the Fine-Tuning Training Model](#validating-the-fine-tuning-training-model)
+ - [Validating the Model](#validating-the-model)
+ - [Validation Result](#validation-result)
+
+
+
+
+
+## Overview
+
+In a computer vision task, training a network from scratch is time-consuming and requires a large amount of computing power. Pre-trained models are therefore usually trained on large open datasets such as OpenImage, ImageNet, VOC, and COCO, in which the number of images reaches hundreds of thousands or even millions. Most tasks involve a large amount of data, and if a pre-trained model is not used during network model training, training from scratch consumes a large amount of time and computing power; the model is also prone to falling into local minima and overfitting. Therefore, most tasks perform fine-tuning on pre-trained models.
+
+MindSpore is a diversified machine learning framework. It can run on devices such as mobile phones and PCs, or on server clusters on the cloud. Currently, MobileNetV2 supports fine-tuning on a single CPU or on one or more Ascend AI Processors or GPUs on Windows, EulerOS, and Ubuntu systems. This tutorial describes how to perform fine-tuning training and validation with MindSpore on different systems and processors.
+
+Currently, only the CPU is supported on Windows, and the CPU, GPU, and Ascend AI Processor are supported on Ubuntu and EulerOS.
+
+> You can obtain the complete executable sample code at .
+
+## Task Description and Preparations
+
+### Environment Configuration
+
+If running a task in a local environment, install the MindSpore framework and configure the CPU, GPU, or Ascend AI Processor. If running a task in the HUAWEI CLOUD environment, skip this section because the installation and configuration are not required.
+
+On the Windows operating system, backslashes `\` are used to separate directories of different levels in a path address. On the Linux operating system, slashes `/` are used. The following uses `/` by default. If you use Windows operating system, replace `/` in the path address with `\`.
+
+1. Install the MindSpore framework.
+ [Install](https://www.mindspore.cn/install/en) a MindSpore framework based on the processor architecture and the EulerOS, Ubuntu, or Windows system.
+
+2. Configure the CPU environment.
+ Set the following code before calling the CPU to start training or testing:
+
+ ```python
+ if config.platform == "CPU":
+ context.set_context(mode=context.GRAPH_MODE, device_target=config.platform, \
+ save_graphs=False)
+ ```
+
+3. Configure the GPU environment.
+ Set the following code before calling the GPU to start training or testing:
+
+ ```python
+ elif config.platform == "GPU":
+ context.set_context(mode=context.GRAPH_MODE, device_target=config.platform, save_graphs=False)
+ if config.run_distribute:
+ init("nccl")
+ context.set_auto_parallel_context(device_num=get_group_size(),
+ parallel_mode=ParallelMode.DATA_PARALLEL,
+ gradients_mean=True)
+ ```
+
+4. Configure the Ascend environment.
+ The following uses the JSON configuration file `hccl_config.json` in an environment with eight Ascend 910 AI processors as an example. Adjust `"server_count"` and `device` based on the following example to switch between the single-device and multi-device environments:
+
+ ```json
+ {
+ "version": "1.0",
+ "server_count": "1",
+ "server_list": [
+ {
+ "server_id": "10.155.111.140",
+ "device": [
+ {"device_id": "0","device_ip": "192.1.27.6","rank_id": "0"},
+ {"device_id": "1","device_ip": "192.2.27.6","rank_id": "1"},
+ {"device_id": "2","device_ip": "192.3.27.6","rank_id": "2"},
+ {"device_id": "3","device_ip": "192.4.27.6","rank_id": "3"},
+ {"device_id": "4","device_ip": "192.1.27.7","rank_id": "4"},
+ {"device_id": "5","device_ip": "192.2.27.7","rank_id": "5"},
+ {"device_id": "6","device_ip": "192.3.27.7","rank_id": "6"},
+ {"device_id": "7","device_ip": "192.4.27.7","rank_id": "7"}],
+ "host_nic_ip": "reserve"
+ }
+ ],
+ "status": "completed"
+ }
+ ```
+
+ Set the following code before calling the Ascend AI Processor to start training or testing:
+
+ ```python
+ elif config.platform == "Ascend":
+ context.set_context(mode=context.GRAPH_MODE, device_target=config.platform, device_id=config.device_id,
+ save_graphs=False)
+ if config.run_distribute:
+ context.set_auto_parallel_context(device_num=config.rank_size,
+ parallel_mode=ParallelMode.DATA_PARALLEL,
+ gradients_mean=True,
+ all_reduce_fusion_config=[140])
+ init()
+ ...
+ ```
+
+### Downloading Code
+
+Run the following command to clone the [MindSpore open-source project repository](https://gitee.com/mindspore/mindspore.git) on Gitee and go to `./model_zoo/official/cv/mobilenetv2/`.
+
+```bash
+git clone https://gitee.com/mindspore/mindspore.git
+cd ./mindspore/model_zoo/official/cv/mobilenetv2
+```
+
+The code structure is as follows:
+
+```bash
+├─MobileNetV2
+ ├─README.md # descriptions about MobileNetV2
+ ├─scripts
+ │ run_train.sh # Shell script for train with Ascend or GPU
+ │ run_eval.sh # Shell script for evaluation with Ascend or GPU
+ ├─src
+ │ config.py # parameter configuration
+ │ dataset.py # creating dataset
+ │ launch.py # start Python script
+ │ lr_generator.py # learning rate config
+ │ mobilenetV2.py # MobileNetV2 architecture
+ │ mobilenetV2_fusion.py # MobileNetV2 fusion architecture
+ │ models.py # net utils to load ckpt_file, define_net...
+ │ utils.py # net utils to switch precision, set_context and so on
+ ├─train.py # training script
+ └─eval.py # evaluation script
+```
+
+During fine-tuning training and testing, python files `train.py` and `eval.py` can be used on Windows, Ubuntu, and EulerOS, and shell script files `run_train.sh` and `run_eval.sh` can be used on Ubuntu and EulerOS.
+
+If the script file `run_train.sh` is used, it runs `launch.py` with the input parameters, and `launch.py` starts one or more processes to run `train.py` based on the number of allocated CPUs, GPUs, or Ascend AI Processors, with each process allocated one processor (see the simplified sketch below).
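+
+As an illustration only, the following sketch shows the general mechanism such a launcher follows. The environment variable names (`DEVICE_ID`, `RANK_ID`) and the process-spawning details are simplified assumptions, not the actual `launch.py` implementation:
+
+```python
+# Hypothetical, simplified launcher: start one train.py process per device.
+import os
+import subprocess
+import sys
+
+def launch(device_num, visible_devices, train_args):
+    device_ids = visible_devices.split(",")[:device_num]
+    processes = []
+    for rank, dev in enumerate(device_ids):
+        # Bind each process to one device via environment variables (assumed names).
+        env = dict(os.environ, DEVICE_ID=dev, RANK_ID=str(rank))
+        log = open("log{}.log".format(rank), "w")
+        processes.append(subprocess.Popen(
+            [sys.executable, "train.py"] + train_args,
+            env=env, stdout=log, stderr=subprocess.STDOUT))
+    for p in processes:
+        p.wait()
+
+if __name__ == "__main__":
+    launch(2, "0,1", ["--platform", "GPU"])
+```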
+
+### Preparing a Pre-Trained Model
+
+Download a [CPU/GPU pre-trained model](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2_cpu_gpu.ckpt) or [Ascend pre-trained model](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2_ascend.ckpt) to the following directory based on the processor type:
+`./pretrain_checkpoint/`
+
+- CPU/GPU
+
+ ```bash
+ mkdir pretrain_checkpoint
+ wget -P ./pretrain_checkpoint https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2_cpu_gpu.ckpt
+ ```
+
+- Ascend AI Processor
+
+ ```bash
+ mkdir pretrain_checkpoint
+ wget -P ./pretrain_checkpoint https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2_ascend.ckpt
+ ```
+
+### Preparing Data
+
+Prepare a dataset managed in ImageFolder format. Pass its path as the `[DATASET_PATH]` argument when running `run_train.sh`, and as the `--dataset_path [DATASET_PATH]` parameter when running `train.py`.
+
+The dataset structure is as follows:
+
+```bash
+└─ImageFolder
+ ├─train
+ │ class1Folder
+ │ class2Folder
+ │ ......
+ └─eval
+ class1Folder
+ class2Folder
+ ......
+```
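+
+For reference, a dataset laid out this way can be loaded roughly as follows, assuming the MindSpore 1.x `mindspore.dataset` API. This is a minimal sketch; the resize and batch values are illustrative assumptions, not the exact settings in `dataset.py`:
+
+```python
+# Minimal sketch: load an ImageFolder-style dataset with mindspore.dataset.
+import mindspore.dataset as ds
+import mindspore.dataset.vision.c_transforms as C
+
+def create_dataset(dataset_path, batch_size=32, training=True):
+    data = ds.ImageFolderDataset(dataset_path, shuffle=training)
+    trans = [
+        C.Decode(),                # decode raw image bytes
+        C.Resize((224, 224)),      # matches image_height/image_width in config
+        C.HWC2CHW(),               # channel-first layout expected by the network
+    ]
+    data = data.map(operations=trans, input_columns="image")
+    return data.batch(batch_size, drop_remainder=True)
+
+train_dataset = create_dataset("./dataset/train", batch_size=150)
+```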
+
+## Code for Loading a Pre-Trained Model
+
+During fine-tuning, you need to load a pre-trained model. The distribution of the feature extraction layer (convolutional layer) tends to be consistent across different datasets and tasks; however, the combination of feature vectors (fully connected layer) differs, and the number of classes (output_size of the fully connected layer) is usually different as well. During fine-tuning, the pre-trained parameters of the feature extraction layer are loaded for training, while those of the fully connected layer are not loaded. During incremental training, both feature extraction layer parameters and fully connected layer parameters are loaded and trained.
+
+Before training and testing, the first line of the code builds a backbone network and a head network of MobileNetV2 and combines them into a MobileNetV2 network containing the two subnets. Lines 3 to 10 show how to define `backbone_net` and `head_net` and how to add the two subnets to `mobilenet_v2`. Lines 12 to 23 show that in fine-tuning training mode, the pre-trained model needs to be loaded into the `backbone_net` subnet, whose parameters are then frozen and do not participate in training. Lines 21 to 23 show how to freeze network parameters.
+
+```python
+ 1: backbone_net, head_net, net = define_net(config, is_training)
+ 2: ...
+ 3: def define_net(config, is_training):
+ 4: backbone_net = MobileNetV2Backbone()
+ 5: activation = config.activation if not is_training else "None"
+ 6: head_net = MobileNetV2Head(input_channel=backbone_net.out_channels,
+ 7: num_classes=config.num_classes,
+ 8: activation=activation)
+ 9: net = mobilenet_v2(backbone_net, head_net)
+10: return backbone_net, head_net, net
+11: ...
+12: if args_opt.pretrain_ckpt and args_opt.freeze_layer == "backbone":
+13: load_ckpt(backbone_net, args_opt.pretrain_ckpt, trainable=False)
+14: ...
+15: def load_ckpt(network, pretrain_ckpt_path, trainable=True):
+16: """
+17: train the param weight or not
+18: """
+19: param_dict = load_checkpoint(pretrain_ckpt_path)
+20: load_param_into_net(network, param_dict)
+21: if not trainable:
+22: for param in network.get_parameters():
+23: param.requires_grad = False
+```
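+
+Once the backbone is frozen, only the parameters that still require gradients should be handed to the optimizer. Below is a hedged sketch of how the resulting network might then be trained with the high-level `Model` API; it assumes `net` comes from `define_net` above and `train_dataset` from the earlier dataset sketch, and the loss, optimizer, and epoch values are illustrative, not the exact ones in `train.py`:
+
+```python
+# Sketch: train the partially frozen network with the high-level Model API.
+from mindspore import Model
+import mindspore.nn as nn
+
+loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
+# Pass only the parameters left trainable by load_ckpt(..., trainable=False).
+trainable_params = [p for p in net.get_parameters() if p.requires_grad]
+opt = nn.Momentum(trainable_params, learning_rate=0.03, momentum=0.9)
+
+model = Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"})
+model.train(200, train_dataset, dataset_sink_mode=False)
+```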
+
+## Parameter Description
+
+Change the value of each parameter based on the local processor type, data path, and pre-trained model path.
+
+### Running Python Files
+
+When using `train.py` for training on Windows and Linux, input `dataset_path`, `platform`, `pretrain_ckpt`, and `freeze_layer`. When using `eval.py` for validation, input `dataset_path`, `platform`, and `pretrain_ckpt`.
+
+```bash
+# Windows/Linux train with Python file
+python train.py --platform [PLATFORM] --dataset_path [DATASET_PATH] --pretrain_ckpt [PRETRAIN_CHECKPOINT_PATH] --freeze_layer [("none", "backbone")]
+
+# Windows/Linux eval with Python file
+python eval.py --platform [PLATFORM] --dataset_path [DATASET_PATH] --pretrain_ckpt [PRETRAIN_CHECKPOINT_PATH]
+```
+
+- `--dataset_path`: path of the training or validation dataset. There is no default value. This parameter is mandatory for training or validation.
+- `--platform`: processor type. The default value is `Ascend`. You can set it to `CPU` or `GPU`.
+- `--pretrain_ckpt`: path of the `pretrain_checkpoint` file required for loading the weights of a pre-trained model during incremental training or fine-tuning.
+- `--freeze_layer`: frozen network layer. Enter `none` or `backbone`.
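+
+For orientation, the sketch below shows how command-line options of this shape are typically declared with `argparse`. It mirrors the parameter list above and is an illustrative assumption, not the actual `train.py` source:
+
+```python
+# Hypothetical sketch of the command-line interface described above.
+import argparse
+
+parser = argparse.ArgumentParser(description="MobileNetV2 fine-tuning")
+parser.add_argument("--dataset_path", required=True,
+                    help="path of the training or validation dataset")
+parser.add_argument("--platform", default="Ascend",
+                    choices=["CPU", "GPU", "Ascend"], help="processor type")
+parser.add_argument("--pretrain_ckpt", default=None,
+                    help="checkpoint holding the pre-trained weights")
+parser.add_argument("--freeze_layer", default="none",
+                    choices=["none", "backbone"], help="frozen network layer")
+args_opt = parser.parse_args()
+```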
+
+### Running Shell Scripts
+
+You can run the shell scripts `./scripts/run_train.sh` and `./scripts/run_eval.sh` on Linux and input the parameters on the interactive interface.
+
+```bash
+# Windows doesn't support Shell
+# Linux train with Shell script
+sh run_train.sh [PLATFORM] [DEVICE_NUM] [VISIBLE_DEVICES] [RANK_TABLE_FILE] [DATASET_PATH] [CKPT_PATH] [FREEZE_LAYER]
+
+# Linux eval with Shell script for fine tune
+sh run_eval.sh [PLATFORM] [DATASET_PATH] [CKPT_PATH]
+```
+
+- `[PLATFORM]`: processor type. The default value is `Ascend`. You can set it to `CPU` or `GPU`.
+- `[DEVICE_NUM]`: number of processes on each node (equivalent to a server or PC). You are advised to set this parameter to the number of Ascend AI Processors or GPUs on a server.
+- `[VISIBLE_DEVICES]`: device IDs of character string type. During training, a process is bound to the device with the corresponding ID based on `[VISIBLE_DEVICES]`. Multiple device IDs are separated by commas (,). It is recommended that the number of IDs be the same as the number of processes.
+- `[RANK_TABLE_FILE]`: a JSON file configured when `[PLATFORM]` is set to `Ascend`.
+- `[DATASET_PATH]`: path of the training or validation dataset. There is no default value. This parameter is mandatory for training or validation.
+- `[CKPT_PATH]`: path of the checkpoint file required for loading the weights of a pre-trained model during incremental training or fine-tuning.
+- `[FREEZE_LAYER]`: frozen network layer during fine-tuned model validation. Enter `none` or `backbone`.
+
+## Loading Fine-Tuning Training
+
+Only `train.py` can be run on Windows when MobileNetV2 is used for fine-tuning training. On Linux, you can also run the shell script `run_train.sh` with the corresponding [parameters](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/cv_mobilenetv2_fine_tune.html#id8).
+
+The Windows system outputs information to an interactive command line. When running `run_train.sh` on the Linux system, use `&> [LOG_FILE_PATH]` at the end of the command line to write the standard output and error output to the log file. After the fine-tuning succeeds, training starts. The training time and loss of each epoch are continuously written into the `./train/rank*/log*.log` file. If the fine-tuning fails, an error message is recorded in the preceding log file.
+
+### Loading Training on CPU
+
+- Set the number of nodes.
+
+ Currently, `train.py` supports only a single processor. You do not need to adjust the number of processors. When the `run_train.sh` file is run, a single `CPU` is used by default. The number of CPUs cannot be changed.
+
+- Start incremental training.
+
+ Example 1: Use the python file to call a CPU.
+
+ ```bash
+ # Windows or Linux with Python
+ python train.py --platform CPU --dataset_path [DATASET_PATH] --pretrain_ckpt ./pretrain_checkpoint/mobilenetv2_cpu_gpu.ckpt --freeze_layer backbone
+ ```
+
+ Example 2: Use the shell file to call a CPU.
+
+ ```bash
+ # Linux with Shell
+ sh run_train.sh CPU [DATASET_PATH] ../pretrain_checkpoint/mobilenetv2_cpu_gpu.ckpt backbone
+ ```
+
+### Loading Training on GPU
+
+- Set the number of nodes.
+
+ Currently, `train.py` supports only a single processor. You do not need to adjust the number of nodes. When running the `run_train.sh` file, set `[DEVICE_NUM]` to the number of GPUs and `[VISIBLE_DEVICES]` to the IDs of available processors, that is, the GPU IDs. You can select one or more device IDs and separate them with commas (,).
+
+- Start incremental training.
+
+ - Example 1: Use the python file to call a GPU.
+
+ ```bash
+ # Windows or Linux with Python
+ python train.py --platform GPU --dataset_path [DATASET_PATH] --pretrain_ckpt ./pretrain_checkpoint/mobilenetv2_cpu_gpu.ckpt --freeze_layer backbone
+ ```
+
+ - Example 2: Use the shell script to call a GPU whose device ID is `0`.
+
+ ```bash
+ # Linux with Shell
+ sh run_train.sh GPU 1 0 [DATASET_PATH] ../pretrain_checkpoint/mobilenetv2_cpu_gpu.ckpt backbone
+ ```
+
+ - Example 3: Use the shell script to call eight GPUs whose device IDs are `0,1,2,3,4,5,6,7`.
+
+ ```bash
+ # Linux with Shell
+ sh run_train.sh GPU 8 0,1,2,3,4,5,6,7 [DATASET_PATH] ../pretrain_checkpoint/mobilenetv2_cpu_gpu.ckpt backbone
+ ```
+
+### Loading Training on Ascend AI Processor
+
+- Set the number of nodes.
+
+ Currently, `train.py` supports only a single processor. You do not need to adjust the number of nodes. When running the `run_train.sh` file, set `[DEVICE_NUM]` to the number of Ascend AI Processors and `[VISIBLE_DEVICES]` to the IDs of available processors, that is, the Ascend AI Processor IDs. You can select one or more device IDs from 0 to 7 on an 8-device server and separate them with commas (,). Currently, the number of Ascend AI Processors can only be set to 1 or 8.
+
+- Start incremental training.
+
+ - Example 1: Use the python file to call an Ascend AI Processor.
+
+ ```bash
+ # Windows or Linux with Python
+ python train.py --platform Ascend --dataset_path [DATASET_PATH] --pretrain_ckpt ./pretrain_checkpoint/mobilenetv2_ascend.ckpt --freeze_layer backbone
+ ```
+
+ - Example 2: Use the shell script to call an Ascend AI Processor whose device ID is `0`.
+
+ ```bash
+ # Linux with Shell
+ sh run_train.sh Ascend 1 0 ~/rank_table.json [DATASET_PATH] ../pretrain_checkpoint/mobilenetv2_ascend.ckpt backbone
+ ```
+
+ - Example 3: Use the shell script to call eight Ascend AI Processors whose device IDs are `0,1,2,3,4,5,6,7`.
+
+ ```bash
+ # Linux with Shell
+ sh run_train.sh Ascend 8 0,1,2,3,4,5,6,7 ~/rank_table.json [DATASET_PATH] ../pretrain_checkpoint/mobilenetv2_ascend.ckpt backbone
+ ```
+
+### Fine-Tuning Training Result
+
+- View the running result.
+
+ - When running the python file, view the output information in the interactive command line. After running the shell script on `Linux`, run the `cat ./train/rank0/log0.log` command to view the output information. The output is as follows:
+
+ ```bash
+ train args: Namespace(dataset_path='./dataset/train', platform='CPU', \
+ pretrain_ckpt='./pretrain_checkpoint/mobilenetv2_cpu_gpu.ckpt', freeze_layer='backbone')
+ cfg: {'num_classes': 26, 'image_height': 224, 'image_width': 224, 'batch_size': 150, \
+ 'epoch_size': 200, 'warmup_epochs': 0, 'lr_max': 0.03, 'lr_end': 0.03, 'momentum': 0.9, \
+ 'weight_decay': 4e-05, 'label_smooth': 0.1, 'loss_scale': 1024, 'save_checkpoint': True, \
+ 'save_checkpoint_epochs': 1, 'keep_checkpoint_max': 20, 'save_checkpoint_path': './', \
+ 'platform': 'CPU'}
+ Processing batch: 16: 100%|████████████████████████████████████████████████████████████████| 16/16 [00:00, ?it/s]
+ epoch[200], iter[16] cost: 256.030, per step time: 256.030, avg loss: 1.775, total cost: 7.2574 s
+ ```
+
+- Check the saved checkpoint files.
+
+ - On Windows, run the `dir ckpt_0` command to view the saved model files.
+
+ ```bash
+ dir ckpt_0
+ 2020/08/14 11:20 267,727 mobilenetv2_1.ckpt
+ 2020/08/14 11:21 267,727 mobilenetv2_10.ckpt
+ 2020/08/14 11:21 267,727 mobilenetv2_11.ckpt
+ ...
+ 2020/08/14 11:21 267,727 mobilenetv2_7.ckpt
+ 2020/08/14 11:21 267,727 mobilenetv2_8.ckpt
+ 2020/08/14 11:21 267,727 mobilenetv2_9.ckpt
+ ```
+
+ - On Linux, run the `ls ./ckpt_0` command to view the saved model files.
+
+ ```bash
+ ls ./ckpt_0/
+ mobilenetv2_1.ckpt mobilenetv2_2.ckpt
+ mobilenetv2_3.ckpt mobilenetv2_4.ckpt
+ ...
+ ```
+
+## Validating the Fine-Tuning Training Model
+
+### Validating the Model
+
+Set mandatory [parameters](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/cv_mobilenetv2_fine_tune.html#id8) when using the validation set to test model performance. The default value of `--platform` is `Ascend`. You can set it to `CPU` or `GPU`. Finally, the standard output and error output are displayed in the interactive command line or written to the `eval.log` file.
+
+```bash
+# Windows/Linux with Python
+python eval.py --platform CPU --dataset_path [DATASET_PATH] --pretrain_ckpt ./ckpt_0/mobilenetv2_15.ckpt
+
+# Linux with Shell
+sh run_eval.sh CPU [DATASET_PATH] ../ckpt_0/mobilenetv2_15.ckpt
+```
+
+### Validation Result
+
+When the python file is run, the validation result is output in the interactive command line. The shell script writes the information to `./eval.log`. You need to run the `cat ./eval.log` command to view the information. The result is as follows:
+
+```bash
+result: {'acc': 0.9466666666666667}
+pretrain_ckpt = ./ckpt_0/mobilenetv2_15.ckpt
+```
diff --git a/tutorials/training/source_en/advanced_use/dashboard.md b/tutorials/training/source_en/advanced_use/dashboard.md
index c2709a946a74f5da0b17ba53e5d9d8c2720ca100..e8bc866609ff02c38618da4192d3de230e49a85d 100644
--- a/tutorials/training/source_en/advanced_use/dashboard.md
+++ b/tutorials/training/source_en/advanced_use/dashboard.md
@@ -175,15 +175,16 @@ Figure 13 shows tensors recorded by a user in a form of a histogram. Click the u
2. When using the Summary operator to collect data in training, the `HistogramSummary` operator will affect performance, so please use it as sparingly as possible.
3. To limit memory usage, MindInsight limits the number of tags and steps:
- - There are 300 tags at most in each training dashboard. The total number of scalar tags, image tags, computation graph tags, parameter distribution(histogram) tags, tensor tags cannot exceed 300. Specially, there are 10 computation graph tags and 6 tensor tags at most. When tags exceed limit, MindInsight preserves the most recently processed tags.
- - There are 1000 steps at most for each scalar tag in each training dashboard. When steps exceed limit, MindInsight will sample steps randomly to meet this limit.
- - There are 10 steps at most for each image tag in each training dashboard. When steps exceed limit, MindInsight will sample steps randomly to meet this limit.
- - There are 50 steps at most for each parameter distribution(histogram) tag in each training dashboard. When steps exceed limit, MindInsight will sample steps randomly to meet this limit.
- - There are 20 steps at most for each tensor tag in each training dashboard. When steps exceed limit, MindInsight will sample steps randomly to meet this limit.
+ - There are 300 tags at most in each training dashboard. The total number of scalar tags, image tags, computation graph tags, parameter distribution(histogram) tags, tensor tags cannot exceed 300. Specially, there are 10 computation graph tags and 6 tensor tags at most. When the number of tags exceeds the limit, MindInsight preserves the most recently processed tags.
+ - There are 1000 steps at most for each scalar tag in each training dashboard. When the number of steps exceeds the limit, MindInsight will sample steps randomly to meet this limit.
+ - There are 10 steps at most for each image tag in each training dashboard. When the number of steps exceeds the limit, MindInsight will sample steps randomly to meet this limit.
+ - There are 50 steps at most for each parameter distribution(histogram) tag in each training dashboard. When the number of steps exceeds the limit, MindInsight will sample steps randomly to meet this limit.
+ - There are 20 steps at most for each tensor tag in each training dashboard. When the number of steps exceeds the limit, MindInsight will sample steps randomly to meet this limit.
4. Since `TensorSummary` will record complete tensor data, the amount of data is usually relatively large. In order to limit memory usage and ensure performance, MindInsight makes the following restrictions on the tensor size and the number of values returned and displayed on the front end:
- - MindInsight supports loading tensor containing up to 10 million values.
- - After the tensor is loaded, in the tensor-visible table view, you can view the maximum of 100,000 values. If the value obtained by the selected dimension query exceeds this limit, it cannot be displayed.
+ - MindInsight supports loading tensors that contain up to 10 million values.
+ - MindInsight supports displaying up to 1000 tensor columns on the front end for each query.
+ - After the tensor is loaded, in the tensor-visible table view, you can view a maximum of 100,000 values. If the value obtained by the selected dimension query exceeds this limit, it cannot be displayed.
5. Since tensor visualization (`TensorSummary`) records raw tensor data, it requires a large amount of storage space. Before using `TensorSummary` and during training, please check that the system storage space is sufficient.
The storage space occupied by the tensor visualization function can be reduced by the following methods:
@@ -195,6 +196,6 @@ Figure 13 shows tensors recorded by a user in a form of a histogram. Click the u
Remarks: The method of estimating the space usage of `TensorSummary` is as follows:
- The size of a `TensorSummary` data = the number of values in the tensor \* 4 bytes. Assuming that the size of the tensor recorded by `TensorSummary` is 32 \* 1 \* 256 \* 256, then a `TensorSummary` data needs about 32 \* 1 \* 256 \* 256 \* 4 bytes = 8,388,608 bytes = 8MiB. `TensorSummary` will record data of 20 steps by default. Then the required space when recording these 20 sets of data is about 20 * 8 MiB = 160MiB. It should be noted that due to the overhead of data structure and other factors, the actual storage space used will be slightly larger than 160MiB.
+ The size of a `TensorSummary` data = the number of values in the tensor \* 4 bytes. Assuming that the size of the tensor recorded by `TensorSummary` is 32 \* 1 \* 256 \* 256, then a `TensorSummary` data needs about 32 \* 1 \* 256 \* 256 \* 4 bytes = 8,388,608 bytes = 8MiB. `TensorSummary` will record data of 20 steps by default. Then the required space when recording these 20 sets of data is about 20 \* 8 MiB = 160MiB. It should be noted that due to the overhead of data structure and other factors, the actual storage space used will be slightly larger than 160MiB.
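+
+ The same estimate can be reproduced in a few lines of Python (illustrative only):
+
+ ```python
+ # TensorSummary storage estimate: float32 values take 4 bytes each.
+ values = 32 * 1 * 256 * 256                     # values in the recorded tensor
+ per_step_bytes = values * 4                     # 8,388,608 bytes = 8 MiB per step
+ total_mib = 20 * per_step_bytes / (1024 ** 2)   # about 160 MiB for the default 20 steps
+ print(per_step_bytes, total_mib)
+ ```
+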
6. The training log file is large when using `TensorSummary` because the complete tensor data is recorded. MindInsight needs more time to parse the training log file; please be patient.
diff --git a/tutorials/training/source_en/advanced_use/enable_cache.md b/tutorials/training/source_en/advanced_use/enable_cache.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e15bc3bd9483057d5db8bde3adb40a6ef8d4f1e
--- /dev/null
+++ b/tutorials/training/source_en/advanced_use/enable_cache.md
@@ -0,0 +1,5 @@
+# Applying Single-Node Data Cache
+
+No English version is available right now; contributions are welcome.
+
+
diff --git a/tutorials/training/source_en/advanced_use/hpc.rst b/tutorials/training/source_en/advanced_use/hpc.rst
new file mode 100644
index 0000000000000000000000000000000000000000..fa5559449e371004a08b4b8da6e40d84a8381847
--- /dev/null
+++ b/tutorials/training/source_en/advanced_use/hpc.rst
@@ -0,0 +1,7 @@
+High Performance Computing
+=============================
+
+.. toctree::
+ :maxdepth: 1
+
+ hpc_gomo
\ No newline at end of file
diff --git a/tutorials/training/source_en/advanced_use/hpc_gomo.md b/tutorials/training/source_en/advanced_use/hpc_gomo.md
new file mode 100644
index 0000000000000000000000000000000000000000..66f2bfa945538776862543ca55256451aa287bc3
--- /dev/null
+++ b/tutorials/training/source_en/advanced_use/hpc_gomo.md
@@ -0,0 +1,5 @@
+# Realizing the Regional Ocean Model GOMO
+
+No English version is available right now; contributions are welcome.
+
+
diff --git a/tutorials/training/source_en/advanced_use/images/introduce.PNG b/tutorials/training/source_en/advanced_use/images/introduce.PNG
index 9a59eda78c4b9782c6b4f3838ed7f881f7b72d20..2cf02257c7b5f27eff12872d780a3204d12c778d 100644
Binary files a/tutorials/training/source_en/advanced_use/images/introduce.PNG and b/tutorials/training/source_en/advanced_use/images/introduce.PNG differ
diff --git a/tutorials/training/source_en/advanced_use/lineage_and_scalars_comparision.md b/tutorials/training/source_en/advanced_use/lineage_and_scalars_comparision.md
index 73337706653defa72ead334ac953b702dcbb4d48..03ec1bacc3be43550d9084e2ce66078eb421806a 100644
--- a/tutorials/training/source_en/advanced_use/lineage_and_scalars_comparision.md
+++ b/tutorials/training/source_en/advanced_use/lineage_and_scalars_comparision.md
@@ -19,6 +19,8 @@
Model lineage, data lineage, and the comparison dashboard in MindInsight are at the same level as the training dashboard. In the visualization of training data, different scalar trend charts are observed on the comparison dashboard to find problems, and the lineage function is then used to locate the causes, giving users the ability to tune data enhancement and deep neural networks efficiently.
+Access the Training Dashboard by selecting Comparison Dashboard.
+
## Model Lineage
Model lineage visualization is used to display the parameter information of all training models.
Figure 9: Scalars comparison function area
Figure 9 shows the scalars comparison function area, which allows you to view scalar information by selecting different trainings or tags, different dimensions of the horizontal axis, and smoothness.
-- Training Selection: Select or filter the required trainings to view the corresponding scalar information.
+- Training Selection: Click the expand button and select or filter the required trainings to view the corresponding scalar information.
- Tag Selection: Select the required tags to view the corresponding scalar information.
- Horizontal Axis: Select any of Step, Relative Time, and Absolute Time as the horizontal axis of the scalar curve.
- Smoothness: Adjust the smoothness to smooth the scalar curve.
diff --git a/tutorials/training/source_en/advanced_use/migrate_3rd_scripts_mindconverter.md b/tutorials/training/source_en/advanced_use/migrate_3rd_scripts_mindconverter.md
index 21b0dde0bc98c8e9c6dde127418be3762f3d52c8..046a9197759582a9f25a82263b007332b1a2aa62 100644
--- a/tutorials/training/source_en/advanced_use/migrate_3rd_scripts_mindconverter.md
+++ b/tutorials/training/source_en/advanced_use/migrate_3rd_scripts_mindconverter.md
@@ -112,7 +112,7 @@ The AST mode is recommended for the first demand (AST mode is only supported for
For the second demand, the Graph mode is recommended. Because the computational graph is a standard descriptive language, it is not affected by the user's coding style, and this mode can convert more operators, provided those operators are supported by MindConverter.
-Some typical image classification networks such as ResNet and VGG have been tested for the Graph mode. Note that:
+Some typical networks in the computer vision field have been tested for the Graph mode. Note that:
> 1. Currently, the Graph mode does not support models with multiple inputs. Only models with a single input and single output are supported.
> 2. The Dropout operator will be lost after conversion because the inference mode is used to load the PyTorch or TensorFlow model. It must be re-implemented manually.
diff --git a/tutorials/training/source_en/advanced_use/nlp.rst b/tutorials/training/source_en/advanced_use/nlp.rst
index 194ce49f50e47ff9a1255f0eeb86a68871f0d035..9e81949150668904a30036c266a21929b3bb995d 100644
--- a/tutorials/training/source_en/advanced_use/nlp.rst
+++ b/tutorials/training/source_en/advanced_use/nlp.rst
@@ -5,3 +5,4 @@ Natural Language Processing
:maxdepth: 1
nlp_sentimentnet
+ nlp_bert_poetry
diff --git a/tutorials/training/source_en/advanced_use/performance_profiling.md b/tutorials/training/source_en/advanced_use/performance_profiling.md
index fce027e4c738eb4c9bf42fc8b46f78178f1368d4..4d9242608d6f1c2a876c704eb46d29ff9c70b818 100644
--- a/tutorials/training/source_en/advanced_use/performance_profiling.md
+++ b/tutorials/training/source_en/advanced_use/performance_profiling.md
@@ -7,7 +7,6 @@
- [Performance Profiling(Ascend)](#performance-profilingascend)
- [Overview](#overview)
- [Operation Process](#operation-process)
- - [Preparing the Environment](#preparing-the-environment)
- [Preparing the Training Script](#preparing-the-training-script)
- [Launch MindInsight](#launch-mindinsight)
- [Performance Analysis](#performance-analysis)
@@ -28,13 +27,9 @@ Performance data like operator's execution time is recorded in files and can be
## Operation Process
- Prepare a training script, add the profiler APIs to it, and run the script.
-- Start MindInsight and specify the profiler data directory using startup parameters. After MindInsight is started, access the visualization page based on the IP address and port number. The default access IP address is `http://127.0.0.1:8080`.
+- Start MindInsight and specify the summary-base-dir using startup parameters. Note that summary-base-dir is the parent directory of the directory created by the Profiler: for example, if the directory created by the Profiler is `/home/user/code/data/`, the summary-base-dir should be `/home/user/code`. After MindInsight is started, access the visualization page based on the IP address and port number. The default access address is `http://127.0.0.1:8080`.
- Find the training in the list, click the performance profiling link and view the data on the web page.
-## Preparing the Environment
-
-Before using Profiler, ensure that the background tool process (ada) is started correctly. Users are required to start the ada process with the user of the HwHiAiUser user group or root, and run the scripts using the same user. The start command is `/usr/local/Ascend/driver/tools/ada`.
-
## Preparing the Training Script
To enable performance profiling of neural networks, the MindSpore Profiler APIs should be added to the script. First, the MindSpore `Profiler` object needs to be created after `set_context` is called and before the network and HCCL are initialized. Then, at the end of the training, `Profiler.analyse()` should be called to finish profiling and generate the performance analysis results. A minimal sketch of this placement follows.
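+
+For reference, here is a minimal, self-contained sketch of this placement (an illustration, not the tutorial's full script; it assumes the `mindspore.profiler.Profiler` import path and the Profiler's default `./data` output directory):
+
+```python
+import os
+
+from mindspore import context
+from mindspore.profiler import Profiler
+
+
+def main():
+    # The Profiler must be created after set_context and before network/HCCL initialization.
+    context.set_context(mode=context.GRAPH_MODE, device_target='Ascend',
+                        device_id=int(os.environ.get("DEVICE_ID", "0")))
+    profiler = Profiler()  # profiling data is written to ./data by default
+
+    # ... build the network, load the data, and run training here ...
+
+    # Parse the collected data at the end of training.
+    profiler.analyse()
+
+
+if __name__ == '__main__':
+    main()
+    # If this script ran in /home/user/code, visualize the results with:
+    #   mindinsight start --summary-base-dir /home/user/code
+```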
@@ -55,6 +50,8 @@ def test_profiler():
context.set_context(mode=context.GRAPH_MODE, device_target='Ascend', device_id=int(os.environ["DEVICE_ID"]))
# Init Profiler
+ # Note that the 'data' directory is created under the current path by default. To visualize the profiling data
+ # with MindInsight, the 'data' directory should be placed under the summary-base-dir.
profiler = Profiler()
# Init hyperparameter
diff --git a/tutorials/training/source_en/advanced_use/use_on_the_cloud.md b/tutorials/training/source_en/advanced_use/use_on_the_cloud.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ae46ab51b80eba772cc38fc6c3f0a121d15c7fb
--- /dev/null
+++ b/tutorials/training/source_en/advanced_use/use_on_the_cloud.md
@@ -0,0 +1,5 @@
+# Use MindSpore on the Cloud
+
+No English version is available right now; contributions are welcome.
+
+
diff --git a/tutorials/training/source_en/index.rst b/tutorials/training/source_en/index.rst
index 69f97b5e7f966b354e66dd2ecbb8553d74f6793c..47416b6bed71a692c8940d4951508f20f4b12ee8 100644
--- a/tutorials/training/source_en/index.rst
+++ b/tutorials/training/source_en/index.rst
@@ -46,6 +46,7 @@ Train with MindSpore
advanced_use/custom_operator
advanced_use/migrate_script
advanced_use/apply_deep_probability_programming
+ advanced_use/achieve_high_order_differentiation
.. toctree::
:glob:
@@ -69,6 +70,7 @@ Train with MindSpore
advanced_use/enable_mixed_precision
advanced_use/enable_graph_kernel_fusion
advanced_use/apply_gradient_accumulation
+ advanced_use/enable_cache
.. toctree::
:glob:
@@ -97,6 +99,8 @@ Train with MindSpore
advanced_use/cv
advanced_use/nlp
+ advanced_use/hpc
+ advanced_use/use_on_the_cloud
.. raw:: html
@@ -693,7 +697,18 @@ Train with MindSpore