diff --git a/README.md b/README.md index b79bbd2d9b961550b7ded278871cec0fd6e518c5..a340846c25ab655e2ff95cc29ae932ae3fe98ab6 100644 --- a/README.md +++ b/README.md @@ -40,7 +40,7 @@ MindSpore tutorials and API documents can be generated by [Sphinx](https://www.s 1. Download code of the MindSpore Docs repository. ```shell - git clone https://gitee.com/mindspore/docs.git + git clone https://gitee.com/mindspore/docs.git -b r1.1 ``` 2. Go to the api_python directory and install the dependency items in the `requirements.txt` file. diff --git a/README_CN.md b/README_CN.md index ac420699c9536a0188543582b30a64b2d7792f0b..185a6e2198b14b8d48990155ff6f1ee38058ae99 100644 --- a/README_CN.md +++ b/README_CN.md @@ -40,7 +40,7 @@ MindSpore的教程和API文档均可由[Sphinx](https://www.sphinx-doc.org/en/ma 1. 下载MindSpore Docs仓代码。 ```shell - git clone https://gitee.com/mindspore/docs.git + git clone https://gitee.com/mindspore/docs.git -b r1.1 ``` 2. 进入api_python目录,安装该目录下`requirements.txt`文件中的依赖项。 diff --git a/docs/api_cpp/source_en/api.md b/docs/api_cpp/source_en/api.md new file mode 100644 index 0000000000000000000000000000000000000000..2f441f23923e20d351c0db4fc2e4c0ca50548bee --- /dev/null +++ b/docs/api_cpp/source_en/api.md @@ -0,0 +1,390 @@ +# mindspore::api + + + +## Context + +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/context.h)> + +The Context class is used to store environment variables during execution. + +### Static Public Member Function + +#### Instance + +```cpp +static Context &Instance(); +``` + +Obtains the MindSpore Context instance object. + +### Public Member Functions + +#### GetDeviceTarget + +```cpp +const std::string &GetDeviceTarget() const; +``` + +Obtains the target device type. + +- Returns + + Current DeviceTarget type. + +#### GetDeviceID + +```cpp +uint32_t GetDeviceID() const; +``` + +Obtains the device ID. + +- Returns + + Current device ID. 
+ +#### SetDeviceTarget + +```cpp +Context &SetDeviceTarget(const std::string &device_target); +``` + +Configures the target device. + +- Parameters + + - `device_target`: target device to be configured. The options are `kDeviceTypeAscend310` and `kDeviceTypeAscend910`. + +- Returns + + MindSpore Context instance object. + +#### SetDeviceID + +```cpp +Context &SetDeviceID(uint32_t device_id); +``` + +Sets the device ID. + +- Parameters + + - `device_id`: device ID to be configured. + +- Returns + + MindSpore Context instance object. + +## Serialization + +\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/serialization.h)> + +The Serialization class is used to summarize methods for reading and writing model files. + +### Static Public Member Function + +#### LoadModel + +```cpp +static Graph LoadModel(const std::string &file, ModelType model_type); +``` + +Loads a model from a file. + +- Parameters + + - `file`: model file path. + - `model_type`: model file type. The options are `ModelType::kMindIR` and `ModelType::kOM`. + +- Returns + + Object for storing graph data. + +## Model + +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/model.h)> + +A Model class is used to define a MindSpore model, facilitating computational graph management. + +### Constructor and Destructor + +```cpp +Model(const GraphCell &graph); +~Model(); +``` + +`GraphCell` is a derivative of `Cell`. `Cell` is not open for use currently. `GraphCell` can be constructed from `Graph`, for example, `Model model(GraphCell(graph))`. + +### Public Member Functions + +#### Build + +```cpp +Status Build(const std::map<std::string, std::string> &options); +``` + +Builds a model so that it can run on a device. + +- Parameters + + - `options`: model build options. In the following table, Key indicates the option name, and Value indicates the corresponding option. + +| Key | Value | +| --- | --- | +| kModelOptionInsertOpCfgPath | [AIPP](https://support.huaweicloud.com/intl/en-us/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html) configuration file path.
| +| kModelOptionInputFormat | Manually specifies the model input format. The options are `"NCHW"` and `"NHWC"`. | +| kModelOptionInputShape | Manually specifies the model input shape, for example, `"input_op_name1: n1,c2,h3,w4;input_op_name2: n4,c3,h2,w1"` | +| kModelOptionOutputType | Manually specifies the model output type, for example, `"FP16"` or `"UINT8"`. The default value is `"FP32"`. | +| kModelOptionPrecisionMode | Model precision mode. The options are `"force_fp16"`, `"allow_fp32_to_fp16"`, `"must_keep_origin_dtype"`, and `"allow_mix_precision"`. The default value is `"force_fp16"`. | +| kModelOptionOpSelectImplMode | Operator selection mode. The options are `"high_performance"` and `"high_precision"`. The default value is `"high_performance"`. | + +- Returns + + Status code. + +#### Predict + +```cpp +Status Predict(const std::vector<Buffer> &inputs, std::vector<Buffer> *outputs); +``` + +Runs model inference. + +- Parameters + + - `inputs`: a `vector` where model inputs are arranged in sequence. + - `outputs`: output parameter, which is the pointer to a `vector`. The model outputs are filled in the container in sequence. + +- Returns + + Status code. + +#### GetInputsInfo + +```cpp +Status GetInputsInfo(std::vector<std::string> *names, std::vector<std::vector<int64_t>> *shapes, std::vector<DataType> *data_types, std::vector<size_t> *mem_sizes) const; +``` + +Obtains the model input information. + +- Parameters + + - `names`: optional output parameter, which is the pointer to a `vector` where model inputs are arranged in sequence. The input names are filled in the container in sequence. If `nullptr` is input, the attribute is not obtained. + - `shapes`: optional output parameter, which is the pointer to a `vector` where model inputs are arranged in sequence. The input shapes are filled in the container in sequence. If `nullptr` is input, the attribute is not obtained. + - `data_types`: optional output parameter, which is the pointer to a `vector` where model inputs are arranged in sequence.
The input data types are filled in the container in sequence. If `nullptr` is input, the attribute is not obtained. + - `mem_sizes`: optional output parameter, which is the pointer to a `vector` where model inputs are arranged in sequence. The input memory lengths (in bytes) are filled in the container in sequence. If `nullptr` is input, the attribute is not obtained. + +- Returns + + Status code. + +#### GetOutputsInfo + +```cpp +Status GetOutputsInfo(std::vector<std::string> *names, std::vector<std::vector<int64_t>> *shapes, std::vector<DataType> *data_types, std::vector<size_t> *mem_sizes) const; +``` + +Obtains the model output information. + +- Parameters + + - `names`: optional output parameter, which is the pointer to a `vector` where model outputs are arranged in sequence. The output names are filled in the container in sequence. If `nullptr` is input, the attribute is not obtained. + - `shapes`: optional output parameter, which is the pointer to a `vector` where model outputs are arranged in sequence. The output shapes are filled in the container in sequence. If `nullptr` is input, the attribute is not obtained. + - `data_types`: optional output parameter, which is the pointer to a `vector` where model outputs are arranged in sequence. The output data types are filled in the container in sequence. If `nullptr` is input, the attribute is not obtained. + - `mem_sizes`: optional output parameter, which is the pointer to a `vector` where model outputs are arranged in sequence. The output memory lengths (in bytes) are filled in the container in sequence. If `nullptr` is input, the attribute is not obtained. + +- Returns + + Status code.
+ +## Tensor + +\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/types.h)> + +### Constructor and Destructor + +```cpp +Tensor(); +Tensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void *data, size_t data_len); +~Tensor(); +``` + +### Static Public Member Function + +#### GetTypeSize + +```cpp +static int GetTypeSize(api::DataType type); +``` + +Obtains the memory length of a data type, in bytes. + +- Parameters + + - `type`: data type. + +- Returns + + Memory length, in bytes. + +### Public Member Functions + +#### Name + +```cpp +const std::string &Name() const; +``` + +Obtains the name of a tensor. + +- Returns + + Tensor name. + +#### DataType + +```cpp +api::DataType DataType() const; +``` + +Obtains the data type of a tensor. + +- Returns + + Tensor data type. + +#### Shape + +```cpp +const std::vector<int64_t> &Shape() const; +``` + +Obtains the shape of a tensor. + +- Returns + + Tensor shape. + +#### SetName + +```cpp +void SetName(const std::string &name); +``` + +Sets the name of a tensor. + +- Parameters + + - `name`: name to be set. + +#### SetDataType + +```cpp +void SetDataType(api::DataType type); +``` + +Sets the data type of a tensor. + +- Parameters + + - `type`: type to be set. + +#### SetShape + +```cpp +void SetShape(const std::vector<int64_t> &shape); +``` + +Sets the shape of a tensor. + +- Parameters + + - `shape`: shape to be set. + +#### Data + +```cpp +const void *Data() const; +``` + +Obtains the constant pointer to the tensor data. + +- Returns + + Constant pointer to the tensor data. + +#### MutableData + +```cpp +void *MutableData(); +``` + +Obtains the pointer to the tensor data. + +- Returns + + Pointer to the tensor data. + +#### DataSize + +```cpp +size_t DataSize() const; +``` + +Obtains the memory length (in bytes) of the tensor data. + +- Returns + + Memory length of the tensor data, in bytes.
+ +#### ResizeData + +```cpp +bool ResizeData(size_t data_len); +``` + +Adjusts the memory size of the tensor. + +- Parameters + + - `data_len`: number of bytes in the memory after adjustment. + +- Returns + + A **bool** value that indicates whether the operation is successful. + +#### SetData + +```cpp +bool SetData(const void *data, size_t data_len); +``` + +Sets the memory data of the tensor. + +- Parameters + + - `data`: memory address of the source data. + - `data_len`: length of the source data memory. + +- Returns + + A **bool** value that indicates whether the operation is successful. + +#### ElementNum + +```cpp +int64_t ElementNum() const; +``` + +Obtains the number of elements in a tensor. + +- Returns + + Number of elements in a tensor. + +#### Clone + +```cpp +Tensor Clone() const; +``` + +Performs a deep copy of the tensor. + +- Returns + + A deep copy of the tensor. \ No newline at end of file diff --git a/docs/api_cpp/source_en/class_list.md b/docs/api_cpp/source_en/class_list.md index d6f4cd6216606ff0b95bc67f0cc18ebbbb8e4f9d..51ec2313b0f1f4001f5ecd5939b4d0bc2cc1f922 100644 --- a/docs/api_cpp/source_en/class_list.md +++ b/docs/api_cpp/source_en/class_list.md @@ -1,18 +1,18 @@ # Class List - + Here is a list of all classes with links to the namespace documentation for each member: | Namespace | Class Name | Description | | --- | --- | --- | -| mindspore | [KernelCallBack](https://www.mindspore.cn/doc/api_cpp/en/master/mindspore.html#kernelcallback) | KernelCallBack defines the function pointer for callback. | -| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#allocator) | Allocator defines a memory pool for dynamic memory malloc and memory free. | -| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#context) | Context is defined for holding environment variables during runtime.
| -| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#modelimpl) | ModelImpl defines the implement class of Model in MindSpore Lite. | -| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#primitivec) | Primitive is defined as prototype of operator. | -| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#model) | Model defines model in MindSpore Lite for managing graph. | -| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#modelbuilder) | ModelBuilder is defined to build the model. | -| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/en/master/session.html#litesession) | LiteSession defines sessions in MindSpore Lite for compiling Model and forwarding model. | -| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/en/master/tensor.html#mstensor) | MSTensor defines tensor in MindSpore Lite. | -| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/en/master/dataset.html#litemat) |LiteMat is a class used to process images. | +| mindspore | [KernelCallBack](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#kernelcallback) | KernelCallBack defines the function pointer for callback. | +| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#allocator) | Allocator defines a memory pool for dynamic memory malloc and memory free. | +| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#context) | Context is defined for holding environment variables during runtime. | +| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#modelimpl) | ModelImpl defines the implement class of Model in MindSpore Lite. | +| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#primitivec) | Primitive is defined as prototype of operator. 
| +| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#model) | Model defines model in MindSpore Lite for managing graph. | +| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#modelbuilder) | ModelBuilder is defined to build the model. | +| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/en/r1.1/session.html#litesession) | LiteSession defines sessions in MindSpore Lite for compiling Model and forwarding model. | +| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/en/r1.1/tensor.html#mstensor) | MSTensor defines tensor in MindSpore Lite. | +| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/en/r1.1/dataset.html#litemat) |LiteMat is a class used to process images. | diff --git a/docs/api_cpp/source_en/conf.py b/docs/api_cpp/source_en/conf.py index 4787de3f631f53db97bad94ffb7c95441edf0bb7..a44ca580d3d6539a56c49fcaec32c617cb6dc907 100644 --- a/docs/api_cpp/source_en/conf.py +++ b/docs/api_cpp/source_en/conf.py @@ -22,7 +22,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_cpp/source_en/dataset.md b/docs/api_cpp/source_en/dataset.md index 64abbd55cea593857841139316afb38602ad0d1d..61892af43814b2288280c8d3de8605cdb4bf4063 100644 --- a/docs/api_cpp/source_en/dataset.md +++ b/docs/api_cpp/source_en/dataset.md @@ -1,10 +1,10 @@ # mindspore::dataset - + ## ResizeBilinear -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h) @@ -25,7 
+25,7 @@ Resize image by bilinear algorithm, currently the data type only supports uint8, ## InitFromPixel -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m) @@ -48,7 +48,7 @@ Initialize LiteMat from pixel, providing data in RGB or BGR format does not need ## ConvertTo -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0) @@ -68,7 +68,7 @@ Convert the data type, currently it supports converting the data type from uint8 ## Crop -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h) @@ -91,7 +91,7 @@ Crop image, the channel supports 3 and 1. 
## SubStractMeanNormalize -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector &mean, const std::vector &std) @@ -112,7 +112,7 @@ Normalize image, currently the supports data type is float. ## Pad -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int right, PaddBorderType pad_type, uint8_t fill_b_or_gray, uint8_t fill_g, uint8_t fill_r) @@ -139,7 +139,7 @@ Pad image, the channel supports 3 and 1. ## ExtractChannel -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ExtractChannel(const LiteMat &src, LiteMat &dst, int col) @@ -158,7 +158,7 @@ Extract image channel by index. ## Split -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Split(const LiteMat &src, std::vector &mv) @@ -177,7 +177,7 @@ Split image channels to single channel. 
## Merge -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Merge(const std::vector &mv, LiteMat &dst) @@ -196,7 +196,7 @@ Create a multi-channel image out of several single-channel arrays. ## Affine -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C1 borderValue) @@ -216,7 +216,7 @@ Apply affine transformation to the 1-channel image. void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C3 borderValue) ``` -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> Apply affine transformation to the 3-channel image. @@ -230,7 +230,7 @@ Apply affine transformation to the 3-channel image. ## GetDefaultBoxes -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp std::vector> GetDefaultBoxes(BoxesConfig config) @@ -248,7 +248,7 @@ Get default anchor boxes for Faster R-CNN, SSD, YOLO, etc. 
## ConvertBoxes -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp void ConvertBoxes(std::vector> &boxes, std::vector> &default_boxes, BoxesConfig config) @@ -264,7 +264,7 @@ Convert the prediction boxes to the actual boxes with (y, x, h, w). ## ApplyNms -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp std::vector ApplyNms(std::vector> &all_boxes, std::vector &all_scores, float thres, int max_boxes) @@ -285,7 +285,7 @@ Real-size box non-maximum suppression. ## LiteMat -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> LiteMat is a class that processes images. @@ -431,7 +431,7 @@ A **pointer** to the address of the reference counter. ## Subtract -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Subtract(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) @@ -451,7 +451,7 @@ Calculates the difference between the two images for each element. 
## Divide -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Divide(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) @@ -471,7 +471,7 @@ Calculates the division between the two images for each element. ## Multiply -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Multiply(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) diff --git a/docs/api_cpp/source_en/errorcode_and_metatype.md b/docs/api_cpp/source_en/errorcode_and_metatype.md index de25e800ff7ac6ec3d04eba326b1e78cbc01e7b1..b7f8cd51d4b045c9f3373dba5acfee553072c552 100644 --- a/docs/api_cpp/source_en/errorcode_and_metatype.md +++ b/docs/api_cpp/source_en/errorcode_and_metatype.md @@ -1,6 +1,6 @@ # ErrorCode and MetaType - + ## 1.0.1 diff --git a/docs/api_cpp/source_en/index.rst b/docs/api_cpp/source_en/index.rst index 779317bee1f0397ac1c5a78905b31b236f33d4f8..b5f76d3c78fd947026a99ea4a9ba8afd91355ed8 100644 --- a/docs/api_cpp/source_en/index.rst +++ b/docs/api_cpp/source_en/index.rst @@ -12,6 +12,7 @@ MindSpore C++ API class_list mindspore + api dataset vision lite diff --git a/docs/api_cpp/source_en/lite.md b/docs/api_cpp/source_en/lite.md index aacd2963a46835e7e7888c66fdb781f71b82a6f9..4105777415b4c3360b6655616f04119121f26ef2 100644 --- a/docs/api_cpp/source_en/lite.md +++ b/docs/api_cpp/source_en/lite.md @@ -1,16 +1,16 @@ # mindspore::lite - + ## Allocator -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include 
<[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> Allocator defines a memory pool for dynamic memory malloc and memory free. ## Context -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> Context is defined for holding environment variables during runtime. @@ -56,7 +56,7 @@ An **int** value. Defaults to **2**. Thread number config for thread pool. allocator ``` -A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#allocator). +A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#allocator). #### device_list_ @@ -64,19 +64,19 @@ A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/doc/api_cpp/e device_list_ ``` -A [**DeviceContextVector**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontextvector) contains [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontext) variables. +A [**DeviceContextVector**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#devicecontextvector) contains [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#devicecontext) variables. > Only CPU and GPU are supported now. If GPU device context is set, use GPU device first, otherwise use CPU device first. ## PrimitiveC -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> Primitive is defined as prototype of operator. 
## Model -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> Model defines model in MindSpore Lite for managing graph. @@ -130,7 +130,7 @@ Static method to create a Model pointer. ## CpuBindMode -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> An **enum** type. CpuBindMode is defined for holding arguments of the bind CPU strategy. @@ -162,7 +162,7 @@ No bind. ## DeviceType -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> An **enum** type. DeviceType is defined for holding user's preferred backend. @@ -194,7 +194,7 @@ NPU device type, not supported yet. ## Version -\#include <[version.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/version.h)> +\#include <[version.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/version.h)> ```cpp std::string Version() @@ -232,13 +232,13 @@ Global method to get strings from MSTensor. ## DeviceContextVector -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> -A **vector** contains [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontext) variable. +A **vector** contains [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#devicecontext) variable. 
## DeviceContext -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> DeviceContext defines different device contexts. @@ -258,11 +258,11 @@ An **enum** type. Defaults to **DT_CPU**. DeviceType is defined for holding device_info_ ``` - An **union** value, contains [**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#cpudeviceinfo) and [**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#gpudeviceinfo) + An **union** value, contains [**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#cpudeviceinfo) and [**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#gpudeviceinfo) ## DeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> An **union** value. DeviceInfo is defined for backend's configuration information. @@ -274,7 +274,7 @@ An **union** value. DeviceInfo is defined for backend's configuration informatio cpu_device_info_ ``` -[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#cpudeviceinfo) is defined for CPU's configuration information. +[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#cpudeviceinfo) is defined for CPU's configuration information. #### gpu_device_info_ @@ -282,17 +282,17 @@ cpu_device_info_ gpu_device_info_ ``` -[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#gpudeviceinfo) is defined for GPU's configuration information. +[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#gpudeviceinfo) is defined for GPU's configuration information. 
```cpp npu_device_info_ ``` -[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#gpudeviceinfo) is defined for NPU's configuration information. +[**NpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#npudeviceinfo) is defined for NPU's configuration information. ## CpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> CpuDeviceInfo is defined for CPU's configuration information. @@ -314,11 +314,11 @@ A **bool** value. Defaults to **false**. This attribute enables to perform the G cpu_bind_mode_ ``` -A [**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#cpubindmode) **enum** variable. Defaults to **MID_CPU**. +A [**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#cpubindmode) **enum** variable. Defaults to **MID_CPU**. ## GpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> GpuDeviceInfo is defined for GPU's configuration information. @@ -336,7 +336,7 @@ A **bool** value. Defaults to **false**. This attribute enables to perform the G ## NpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> NpuDeviceInfo is defined for NPU's configuration information. @@ -348,7 +348,7 @@ A **int** value. Defaults to **3**.
This attribute is used to set the NPU freque ## TrainModel -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> Inherited from Model, TrainModel defines a class that allows to import and export the MindSpore trainable model. diff --git a/docs/api_cpp/source_en/lite_cpp_example.rst b/docs/api_cpp/source_en/lite_cpp_example.rst index 58b12e4c8761b3766fcd352332ed35339707778c..5a67ea115e776f6169f9d4f0454f4a4aeadecc55 100644 --- a/docs/api_cpp/source_en/lite_cpp_example.rst +++ b/docs/api_cpp/source_en/lite_cpp_example.rst @@ -4,5 +4,5 @@ Example .. toctree:: :maxdepth: 1 - Quick Start - High-level Usage \ No newline at end of file + Quick Start + High-level Usage \ No newline at end of file diff --git a/docs/api_cpp/source_en/mindspore.md b/docs/api_cpp/source_en/mindspore.md index 1106b7c5875ea5928191f55e0d3c821afa131630..edd928e424e2479ade7d0ba74e76794ea386c181 100644 --- a/docs/api_cpp/source_en/mindspore.md +++ b/docs/api_cpp/source_en/mindspore.md @@ -1,8 +1,8 @@ # mindspore - + -\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)> +\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/ms_tensor.h)> ## KernelCallBack diff --git a/docs/api_cpp/source_en/session.md b/docs/api_cpp/source_en/session.md index 9f69ccee5c2ea5119fd0bc1a906cd0314a34a2a9..2fa9059efa019c9a43002739e186ea306252010f 100644 --- a/docs/api_cpp/source_en/session.md +++ b/docs/api_cpp/source_en/session.md @@ -1,10 +1,10 @@ # mindspore::session - + ## LiteSession -\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)> +\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/lite_session.h)> LiteSession defines sessions in MindSpore Lite for 
compiling Model and forwarding inference. @@ -56,7 +56,7 @@ Compile MindSpore Lite model. - Returns - STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h). + STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h). #### GetInputs @@ -97,13 +97,13 @@ Run session with callback. - Parameters - - `before`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/master/mindspore.html#kernelcallback) function. Define a callback function to be called before running each node. + - `before`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#kernelcallback) function. Define a callback function to be called before running each node. - - `after`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/master/mindspore.html#kernelcallback) function. Define a callback function to be called after running each node. + - `after`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#kernelcallback) function. Define a callback function to be called after running each node. - Returns - STATUS as an error code of running graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h). + STATUS as an error code of running graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h). #### GetOutputsByNodeName @@ -178,7 +178,7 @@ Resize inputs shape. - Returns - STATUS as an error code of resize inputs, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h). 
+ STATUS as an error code of resize inputs, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h). ### Static Public Member Functions @@ -218,7 +218,7 @@ Static method to create a LiteSession pointer. The returned LiteSession pointer ## TrainSession -\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)> +\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/lite_session.h)> Inherited from LiteSession, TrainSession defines the class that allows training the MindSpore model. @@ -318,7 +318,7 @@ Set model to train mode. - Returns - STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h) + STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h) #### IsTrain @@ -342,7 +342,7 @@ Set model to eval mode. - Returns - STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h). + STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h). 
#### IsEval diff --git a/docs/api_cpp/source_en/tensor.md b/docs/api_cpp/source_en/tensor.md index 2e7913a146098b902a0f8891cd32fa4690dd1b79..f7cdbad66d4fa88c9390d46a638efb363dff798a 100644 --- a/docs/api_cpp/source_en/tensor.md +++ b/docs/api_cpp/source_en/tensor.md @@ -1,10 +1,10 @@ # mindspore::tensor - + ## MSTensor -\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)> +\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/ms_tensor.h)> MSTensor defined tensor in MindSpore Lite. @@ -40,7 +40,7 @@ virtual TypeId data_type() const Get data type of the MindSpore Lite MSTensor. -> TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types or kObjectTypeString in TypeId enum are applicable for MSTensor. +> TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/core/ir/dtype/type_id.h). Only number types or kObjectTypeString in TypeId enum are applicable for MSTensor. - Returns diff --git a/docs/api_cpp/source_en/vision.md b/docs/api_cpp/source_en/vision.md index 3b8cd99c98863f875280c5487f8337ae41e8521d..1d26ddda27eb8a84212165c924b1c31785b9016a 100644 --- a/docs/api_cpp/source_en/vision.md +++ b/docs/api_cpp/source_en/vision.md @@ -1,10 +1,10 @@ # mindspore::dataset::vision - + ## HWC2CHW -\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision.h)> +\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision.h)> ```cpp std::shared_ptr HWC2CHW() @@ -18,7 +18,7 @@ Convert the channel of the input image from (H, W, C) to (C, H, W). 
## CenterCrop -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr CenterCrop(std::vector size) @@ -36,7 +36,7 @@ Crop the center area of the input image to the given size. ## Crop -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Crop(std::vector coordinates, std::vector size) @@ -55,7 +55,7 @@ Crop an image based on the location and crop size. ## Decode -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Decode(bool rgb = true) @@ -73,7 +73,7 @@ Decode the input image. ## Normalize -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Normalize(std::vector mean, std::vector std) @@ -92,7 +92,7 @@ Normalize the input image with the given mean and standard deviation. 
## Resize -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Resize(std::vector size, InterpolationMode interpolation = InterpolationMode::kLinear) diff --git a/docs/api_cpp/source_zh_cn/api.md b/docs/api_cpp/source_zh_cn/api.md index 7c0af98ec52b620064f95b13b38d80d618a8fe5d..84dedb7d9f2dff53e749b415e38267a07fef8b6c 100644 --- a/docs/api_cpp/source_zh_cn/api.md +++ b/docs/api_cpp/source_zh_cn/api.md @@ -1,10 +1,10 @@ # mindspore::api - + ## Context -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/context.h)> Context类用于保存执行中的环境变量。 @@ -78,7 +78,7 @@ Context &SetDeviceID(uint32_t device_id); ## Serialization -\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/serialization.h)> +\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/serialization.h)> Serialization类汇总了模型文件读写的方法。 @@ -97,7 +97,7 @@ Serialization类汇总了模型文件读写的方法。 ## Model -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/model.h)> Model定义了MindSpore中的模型,便于计算图管理。 @@ -179,7 +179,7 @@ Status GetInputsInfo(std::vector *names, std::vector *names, std::vector> *shapes, std::vector *data_types, std::vector *mem_sizes) const; ``` -获取模型输入信息。 +获取模型输出信息。 - 参数 @@ -194,7 +194,7 @@ Status GetOutputsInfo(std::vector *names, std::vector + MindSpore Lite中的类定义及其所属命名空间和描述: | 命名空间 | 类 | 描述 | | --- | --- | --- | -| mindspore | [KernelCallBack](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/mindspore.html#kernelcallback) | KernelCallBack定义了指向回调函数的指针。 | -| 
mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#allocator) | Allocator定义了一个内存池,用于动态地分配和释放内存。 | -| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#context) | Context用于保存执行期间的环境变量。 | -| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#modelimpl) | ModelImpl定义了MindSpore Lite中的Model的实现类。 | -| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#primitivec) | PrimitiveC定义为算子的原型。 | -| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#model) | Model定义了MindSpore Lite中的模型,便于计算图管理。 | -| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#modelbuilder) | ModelBuilder定义了MindSpore Lite中的模型构建器。 | -| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/session.html#litesession) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 | -| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/tensor.html#mstensor) | MSTensor定义了MindSpore Lite中的张量。 | -| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/dataset.html#litemat) |LiteMat是一个处理图像的类。 | +| mindspore | [KernelCallBack](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#kernelcallback) | KernelCallBack定义了指向回调函数的指针。 | +| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#allocator) | Allocator定义了一个内存池,用于动态地分配和释放内存。 | +| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#context) | Context用于保存执行期间的环境变量。 | +| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#modelimpl) | ModelImpl定义了MindSpore Lite中的Model的实现类。 | +| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#primitivec) | PrimitiveC定义为算子的原型。 | +| mindspore::lite | 
[Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#model) | Model定义了MindSpore Lite中的模型,便于计算图管理。 | +| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#modelbuilder) | ModelBuilder定义了MindSpore Lite中的模型构建器。 | +| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/session.html#litesession) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 | +| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/tensor.html#mstensor) | MSTensor定义了MindSpore Lite中的张量。 | +| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/dataset.html#litemat) |LiteMat是一个处理图像的类。 | MindSpore中的类定义及其所属命名空间和描述: | 命名空间 | 类 | 描述 | | --- | --- | --- | -| mindspore::api | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#context) | Context用于保存执行期间的环境变量。 | -| mindspore::api | [Serialization](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#serialization) | Serialization汇总了模型文件读写的方法。 | -| mindspore::api | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#model) | Model定义了MindSpore中的模型,便于计算图管理。 | -| mindspore::api | [Tensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#tensor) | Tensor定义了MindSpore中的张量。 | -| mindspore::api | [Buffer](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#buffer) | Buffer管理了一段内存空间。 | +| mindspore::api | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/api.html#context) | Context用于保存执行期间的环境变量。 | +| mindspore::api | [Serialization](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/api.html#serialization) | Serialization汇总了模型文件读写的方法。 | +| mindspore::api | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/api.html#model) | Model定义了MindSpore中的模型,便于计算图管理。 | +| mindspore::api | [Tensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/api.html#tensor) | Tensor定义了MindSpore中的张量。 | +| mindspore::api | [Buffer](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/api.html#buffer) 
| Buffer管理了一段内存空间。 | diff --git a/docs/api_cpp/source_zh_cn/conf.py b/docs/api_cpp/source_zh_cn/conf.py index 625e5acd3bde751f170596e75261be4bb2bde60f..2d0cee29dc19d12263e0c6a46bb969ee29f0268f 100644 --- a/docs/api_cpp/source_zh_cn/conf.py +++ b/docs/api_cpp/source_zh_cn/conf.py @@ -23,7 +23,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_cpp/source_zh_cn/dataset.md b/docs/api_cpp/source_zh_cn/dataset.md index 7b1dc332b87cb7728b29e88c921bcdd35e52b40d..12e6252bb0c19f220173035555cd8c615031ae07 100644 --- a/docs/api_cpp/source_zh_cn/dataset.md +++ b/docs/api_cpp/source_zh_cn/dataset.md @@ -1,10 +1,10 @@ # mindspore::dataset - + ## ResizeBilinear -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h) @@ -25,7 +25,7 @@ bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h) ## InitFromPixel -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m) @@ -48,7 +48,7 @@ bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType d ## ConvertTo -\#include 
<[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0) @@ -68,7 +68,7 @@ bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0) ## Crop -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h) @@ -91,7 +91,7 @@ bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h) ## SubStractMeanNormalize -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector &mean, const std::vector &std) @@ -112,7 +112,7 @@ bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector< ## Pad -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int right, PaddBorderType pad_type, uint8_t fill_b_or_gray, uint8_t fill_g, uint8_t fill_r) @@ -139,7 +139,7 @@ bool Pad(const LiteMat &src, 
LiteMat &dst, int top, int bottom, int left, int ri ## ExtractChannel -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ExtractChannel(const LiteMat &src, LiteMat &dst, int col) @@ -158,7 +158,7 @@ bool ExtractChannel(const LiteMat &src, LiteMat &dst, int col) ## Split -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Split(const LiteMat &src, std::vector &mv) @@ -177,7 +177,7 @@ bool Split(const LiteMat &src, std::vector &mv) ## Merge -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Merge(const std::vector &mv, LiteMat &dst) @@ -196,7 +196,7 @@ bool Merge(const std::vector &mv, LiteMat &dst) ## Affine -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C1 borderValue) @@ -216,7 +216,7 @@ void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsi void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C3 
borderValue) ``` -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> 对3通道图像应用仿射变换。 @@ -230,7 +230,7 @@ void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsi ## GetDefaultBoxes -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp std::vector> GetDefaultBoxes(BoxesConfig config) @@ -248,7 +248,7 @@ std::vector> GetDefaultBoxes(BoxesConfig config) ## ConvertBoxes -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp void ConvertBoxes(std::vector> &boxes, std::vector> &default_boxes, BoxesConfig config) @@ -264,7 +264,7 @@ void ConvertBoxes(std::vector> &boxes, std::vector ApplyNms(std::vector> &all_boxes, std::vector &all_scores, float thres, int max_boxes) @@ -285,7 +285,7 @@ std::vector ApplyNms(std::vector> &all_boxes, std::vecto ## LiteMat -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> LiteMat是一个处理图像的类。 @@ -431,7 +431,7 @@ ref_count_ ## Subtract -\#include 
<[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Subtract(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) @@ -451,7 +451,7 @@ bool Subtract(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) ## Divide -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Divide(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) @@ -471,7 +471,7 @@ bool Divide(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) ## Multiply -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Multiply(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) diff --git a/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md b/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md index a116e2dbdeae97584e914b97863d953f0510c895..06ed2acd032ce88e61fb148f775ea669116ded8e 100644 --- a/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md +++ b/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md @@ -1,6 +1,6 @@ # 错误码及元类型 - + ## 1.0.1 diff --git a/docs/api_cpp/source_zh_cn/lite.md b/docs/api_cpp/source_zh_cn/lite.md index f42570091cf8626a63db230113f90c87047939bb..e25e9a31369666205ece89b76ae4f1d93850563a 100644 --- a/docs/api_cpp/source_zh_cn/lite.md +++ b/docs/api_cpp/source_zh_cn/lite.md @@ -1,16 +1,16 @@ # mindspore::lite - + ## Allocator -\#include 
<[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> Allocator类定义了一个内存池,用于动态地分配和释放内存。 ## Context -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> Context类用于保存执行中的环境变量。 @@ -56,7 +56,7 @@ thread_num_ allocator ``` -**pointer**类型,指向内存分配器 [**Allocator**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#allocator) 的指针。 +**pointer**类型,指向内存分配器 [**Allocator**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#allocator) 的指针。 #### device_list_ @@ -64,19 +64,19 @@ allocator device_list_ ``` -[**DeviceContextVector**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#devicecontextvector) 类型, 元素为 [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#devicecontext) 的**vector**. +[**DeviceContextVector**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#devicecontextvector) 类型, 元素为 [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#devicecontext) 的**vector**. 
> 现在只支持CPU和GPU。如果设置了GPU设备环境变量,优先使用GPU设备,否则优先使用CPU设备。 ## PrimitiveC -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> PrimitiveC定义为算子的原型。 ## Model -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> Model定义了MindSpore Lite中的模型,便于计算图管理。 @@ -130,7 +130,7 @@ static Model *Import(const char *model_buf, size_t size) ## CpuBindMode -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> 枚举类型,设置cpu绑定策略。 @@ -162,7 +162,7 @@ NO_BIND = 0 ## DeviceType -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> 枚举类型,设置设备类型。 @@ -194,7 +194,7 @@ DT_NPU = 2 ## Version -\#include <[version.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/version.h)> +\#include <[version.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/version.h)> ```cpp std::string Version() @@ -232,13 +232,13 @@ std::vector MSTensorToStrings(const tensor::MSTensor *tensor) ## DeviceContextVector -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> -元素为[**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#devicecontext) 的**vector**。 +元素为[**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#devicecontext) 的**vector**。 ## DeviceContext 
-\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> DeviceContext类定义不同硬件设备的环境信息。 @@ -250,7 +250,7 @@ DeviceContext类定义不同硬件设备的环境信息。 device_type ``` -[**DeviceType**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#devicetype) 枚举类型。默认为**DT_CPU**,标明设备信息。 +[**DeviceType**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#devicetype) 枚举类型。默认为**DT_CPU**,标明设备信息。 #### device_info_ @@ -258,11 +258,11 @@ device_type device_info_ ``` -**union**类型,包含[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#cpudeviceinfo) 和[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#gpudeviceinfo) 。 +**union**类型,包含[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#cpudeviceinfo) 和[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#gpudeviceinfo) 。 ## DeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> **union**类型,设置不同硬件的环境变量。 @@ -274,7 +274,7 @@ device_info_ cpu_device_info_ ``` -[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#cpudeviceinfo) 类型,配置CPU的环境变量。 +[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#cpudeviceinfo) 类型,配置CPU的环境变量。 #### gpu_device_info_ @@ -282,7 +282,7 @@ cpu_device_info_ gpu_device_info_ ``` -[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#gpudeviceinfo) 类型,配置GPU的环境变量。 +[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#gpudeviceinfo) 类型,配置GPU的环境变量。 #### npu_device_info_ @@ -290,11 +290,11 @@ gpu_device_info_ npu_device_info_ ``` 
-[**NpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#npudeviceinfo) 类型,配置NPU的环境变量。 +[**NpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#npudeviceinfo) 类型,配置NPU的环境变量。 ## CpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> CpuDeviceInfo类,配置CPU的环境变量。 @@ -316,11 +316,11 @@ enable_float16_ cpu_bind_mode_ ``` -[**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#cpubindmode) 枚举类型,默认为**MID_CPU**。 +[**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#cpubindmode) 枚举类型,默认为**MID_CPU**。 ## GpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> GpuDeviceInfo类,用来配置GPU的环境变量。 @@ -338,7 +338,7 @@ enable_float16_ ## NpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> NpuDeviceInfo类,用来配置NPU的环境变量。 @@ -354,7 +354,7 @@ frequency_ ## TrainModel -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> 继承于结构体Model,用于导入或导出训练模型。 diff --git a/docs/api_cpp/source_zh_cn/lite_cpp_example.rst b/docs/api_cpp/source_zh_cn/lite_cpp_example.rst index bcc092dcf08208782beb9373e1a4e39e087134e1..c5ea73e4aa3e68a8446d08e8be5b2a08e6f408e4 100644 --- a/docs/api_cpp/source_zh_cn/lite_cpp_example.rst +++ b/docs/api_cpp/source_zh_cn/lite_cpp_example.rst @@ -4,5 +4,5 @@ .. 
toctree:: :maxdepth: 1 - 快速入门 - 高阶用法 \ No newline at end of file + 快速入门 + 高阶用法 \ No newline at end of file diff --git a/docs/api_cpp/source_zh_cn/mindspore.md b/docs/api_cpp/source_zh_cn/mindspore.md index f6195d8368dad856bb40c6c9e5e8f38646716c09..d8883557a70c8f9df071042239a7b3934c0ca6e5 100644 --- a/docs/api_cpp/source_zh_cn/mindspore.md +++ b/docs/api_cpp/source_zh_cn/mindspore.md @@ -1,8 +1,8 @@ # mindspore - + -\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)> +\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/ms_tensor.h)> ## KernelCallBack diff --git a/docs/api_cpp/source_zh_cn/session.md b/docs/api_cpp/source_zh_cn/session.md index 0fbc07e90b7a6c4e94fd90db609b6b7ece4ca993..c3cdaf310e2b4dd2a1b061049f4299b2b0e62bb6 100644 --- a/docs/api_cpp/source_zh_cn/session.md +++ b/docs/api_cpp/source_zh_cn/session.md @@ -1,10 +1,10 @@ # mindspore::session - + ## LiteSession -\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)> +\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/lite_session.h)> LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 @@ -56,7 +56,7 @@ virtual int CompileGraph(lite::Model *model) - 返回值 - STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。 + STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h)中定义。 #### GetInputs @@ -97,13 +97,13 @@ virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBac - 参数 - - `before`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/mindspore.html#kernelcallback) 结构体。定义了运行每个节点之前调用的回调函数。 + - `before`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#kernelcallback) 
结构体。定义了运行每个节点之前调用的回调函数。 - - `after`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/mindspore.html#kernelcallback) 结构体。定义了运行每个节点之后调用的回调函数。 + - `after`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#kernelcallback) 结构体。定义了运行每个节点之后调用的回调函数。 - 返回值 - STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。 + STATUS ,即运行图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h)中定义。 #### GetOutputsByNodeName @@ -176,7 +176,7 @@ virtual int Resize(const std::vector &inputs, const std::ve - 返回值 - STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。 + STATUS ,即调整输入形状的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h)中定义。 ### 静态公有成员函数 @@ -216,7 +216,7 @@ static LiteSession *CreateSession(const char *model_buf, size_t size, const lite ## TrainSession -\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)> +\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/lite_session.h)> 继承于类 LiteSession,用于训练模型。 diff --git a/docs/api_cpp/source_zh_cn/tensor.md b/docs/api_cpp/source_zh_cn/tensor.md index 21a32c86ca85c7678ac606aa9d4c0ffa7be2c788..c3624c7107e6736b8570e90d6fe98081d65b28c1 100644 --- a/docs/api_cpp/source_zh_cn/tensor.md +++ b/docs/api_cpp/source_zh_cn/tensor.md @@ -1,10 +1,10 @@ # mindspore::tensor - + ## MSTensor -\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)> +\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/ms_tensor.h)> MSTensor定义了MindSpore Lite中的张量。 @@ -40,7 +40,7 @@ virtual TypeId data_type() const 获取MindSpore Lite MSTensor的数据类型。 -> 
TypeId在[mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h)中定义。只有TypeId枚举中的数字类型或kObjectTypeString可用于MSTensor。 +> TypeId在[mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/core/ir/dtype/type_id.h)中定义。只有TypeId枚举中的数字类型或kObjectTypeString可用于MSTensor。 - 返回值 diff --git a/docs/api_cpp/source_zh_cn/vision.md b/docs/api_cpp/source_zh_cn/vision.md index a8073f22b79cf6bc8dbaad929faedb16527c1ae9..258f5468d2fd8dd43f3f0b026d90f90ae33db681 100644 --- a/docs/api_cpp/source_zh_cn/vision.md +++ b/docs/api_cpp/source_zh_cn/vision.md @@ -1,10 +1,10 @@ # mindspore::dataset::vision - + ## HWC2CHW -\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision.h)> +\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision.h)> ```cpp std::shared_ptr HWC2CHW() @@ -18,7 +18,7 @@ std::shared_ptr HWC2CHW() ## CenterCrop -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr CenterCrop(std::vector size) @@ -36,7 +36,7 @@ std::shared_ptr CenterCrop(std::vector size) ## Crop -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Crop(std::vector coordinates, std::vector size) @@ -55,7 +55,7 @@ std::shared_ptr Crop(std::vector coordinates, std::vecto ## Decode -\#include 
<[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Decode(bool rgb = true) @@ -73,7 +73,7 @@ std::shared_ptr Decode(bool rgb = true) ## Normalize -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Normalize(std::vector mean, std::vector std) @@ -92,7 +92,7 @@ std::shared_ptr Normalize(std::vector mean, std::vect ## Resize -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Resize(std::vector size, InterpolationMode interpolation = InterpolationMode::kLinear) diff --git a/docs/api_java/source_en/class_list.md b/docs/api_java/source_en/class_list.md index a8073d9d47c1aa0434e41b637b0bd3e23d6f49a6..7260b4fba43437e138539e1f66eb94a31de0c53a 100644 --- a/docs/api_java/source_en/class_list.md +++ b/docs/api_java/source_en/class_list.md @@ -1,14 +1,14 @@ # Class List - + | Package | Class Name | Description | | ------------------------- | -------------- | ------------------------------------------------------------ | -| com.mindspore.lite.config | [MSConfig](https://www.mindspore.cn/doc/api_java/en/master/msconfig.html) | MSConfig defines for holding environment variables during runtime. 
| -| com.mindspore.lite.config | [CpuBindMode](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) | CpuBindMode defines the CPU binding mode. | -| com.mindspore.lite.config | [DeviceType](https://www.mindspore.cn/doc/api_java/zh-CN/master/mstensor.html) | DeviceType defines the back-end device type. | -| com.mindspore.lite | [LiteSession](https://www.mindspore.cn/doc/api_java/en/master/lite_session.html) | LiteSession defines session in MindSpore Lite for compiling Model and forwarding model. | -| com.mindspore.lite | [Model](https://www.mindspore.cn/doc/api_java/en/master/model.html) | Model defines the model in MindSpore Lite for managing graph. | -| com.mindspore.lite | [MSTensor](https://www.mindspore.cn/doc/api_java/en/master/mstensor.html) | MSTensor defines the tensor in MindSpore Lite. | -| com.mindspore.lite | [DataType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DataType defines the supported data types. | -| com.mindspore.lite | [Version](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/Version.java) | Version is used to obtain the version information of MindSpore Lite. | +| com.mindspore.lite.config | [MSConfig](https://www.mindspore.cn/doc/api_java/en/r1.1/msconfig.html) | MSConfig is used for holding environment variables during runtime. | +| com.mindspore.lite.config | [CpuBindMode](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) | CpuBindMode defines the CPU binding mode. | +| com.mindspore.lite.config | [DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DeviceType defines the back-end device type. 
| +| com.mindspore.lite | [LiteSession](https://www.mindspore.cn/doc/api_java/en/r1.1/lite_session.html) | LiteSession defines session in MindSpore Lite for compiling Model and forwarding model. | +| com.mindspore.lite | [Model](https://www.mindspore.cn/doc/api_java/en/r1.1/model.html) | Model defines the model in MindSpore Lite for managing graph. | +| com.mindspore.lite | [MSTensor](https://www.mindspore.cn/doc/api_java/en/r1.1/mstensor.html) | MSTensor defines the tensor in MindSpore Lite. | +| com.mindspore.lite | [DataType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java) | DataType defines the supported data types. | +| com.mindspore.lite | [Version](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/Version.java) | Version is used to obtain the version information of MindSpore Lite. | diff --git a/docs/api_java/source_en/conf.py b/docs/api_java/source_en/conf.py index 4020d50f7b5f7a90b26785749cb1d41046b4723c..71b7386f7a6c58c6814685c843069d827171b488 100644 --- a/docs/api_java/source_en/conf.py +++ b/docs/api_java/source_en/conf.py @@ -23,7 +23,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_java/source_en/index.rst b/docs/api_java/source_en/index.rst index 935aa0a5d22565b2d51fc919a8f81c00d9702b02..1a531e3f3a89cec32b3882e59a74980552ef0864 100644 --- a/docs/api_java/source_en/index.rst +++ b/docs/api_java/source_en/index.rst @@ -14,4 +14,5 @@ MindSpore Java API lite_session model msconfig - mstensor \ No newline at end of file + mstensor + lite_java_example \ No newline at end of file diff --git a/docs/api_java/source_en/lite_java_example.rst b/docs/api_java/source_en/lite_java_example.rst new file mode 100644 index 
0000000000000000000000000000000000000000..35e8f359f470841a62f4beee329183f5d8610296 --- /dev/null +++ b/docs/api_java/source_en/lite_java_example.rst @@ -0,0 +1,7 @@ +Example +======== + +.. toctree:: + :maxdepth: 1 + + Quick Start \ No newline at end of file diff --git a/docs/api_java/source_en/lite_session.md b/docs/api_java/source_en/lite_session.md index df13e94ca209a0fafecdf2a977ea72cc9818fdee..30eda6e20c5e3566e5bc62bb7c58d505a4bf3900 100644 --- a/docs/api_java/source_en/lite_session.md +++ b/docs/api_java/source_en/lite_session.md @@ -1,6 +1,6 @@ # LiteSession - + ```java import com.mindspore.lite.LiteSession; diff --git a/docs/api_java/source_en/model.md b/docs/api_java/source_en/model.md index c0928bc1b861f62c4d819b71f19d8e8fac86fc24..aa4f9903ca1c856791eca442c2fd70a6c3312195 100644 --- a/docs/api_java/source_en/model.md +++ b/docs/api_java/source_en/model.md @@ -1,6 +1,6 @@ # Model - + ```java import com.mindspore.lite.Model; diff --git a/docs/api_java/source_en/msconfig.md b/docs/api_java/source_en/msconfig.md index 21b02746a03490919ca63f256203a5f309db556f..70acf659795e8e4733040128930eed872bdf235d 100644 --- a/docs/api_java/source_en/msconfig.md +++ b/docs/api_java/source_en/msconfig.md @@ -1,6 +1,6 @@ # MSConfig - + ```java import com.mindspore.lite.config.MSConfig; @@ -29,10 +29,10 @@ Initialize MSConfig. - Parameters - - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. - - `threadNum`: Thread number config for thread pool. - - `cpuBindMode`: A [**CpuBindMode**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) **enum** variable. - - `enable_float16`:Whether to use float16 operator for priority. 
+ - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. + - `threadNum`: Thread number config for thread pool. + - `cpuBindMode`: A [**CpuBindMode**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) **enum** variable. + - `enable_float16`: Whether to use float16 operator for priority. - Returns @@ -46,9 +46,9 @@ Initialize MSConfig. - Parameters - - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. + - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. - `threadNum`: Thread number config for thread pool. - - `cpuBindMode`: A [**CpuBindMode**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) **enum** variable. + - `cpuBindMode`: A [**CpuBindMode**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) **enum** variable. - Returns @@ -62,7 +62,7 @@ Initialize MSConfig, `cpuBindMode` defaults to `CpuBindMode.MID_CPU`. - Parameters - - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. + - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. - `threadNum`: Thread number config for thread pool. 
- Returns @@ -77,7 +77,7 @@ Initialize MSConfig,`cpuBindMode` defaults to `CpuBindMode.MID_CPU`, `threadNu - Parameters - - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. + - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. - Returns diff --git a/docs/api_java/source_en/mstensor.md b/docs/api_java/source_en/mstensor.md index 4d4f19ac60fabb96f159740e7226d94ab3c2f8aa..5c0e34a6c052331050c1ca466f0bcace202ace7f 100644 --- a/docs/api_java/source_en/mstensor.md +++ b/docs/api_java/source_en/mstensor.md @@ -1,6 +1,6 @@ # MSTensor - + ```java import com.mindspore.lite.MSTensor; @@ -42,7 +42,7 @@ Get the shape of the MindSpore Lite MSTensor. public int getDataType() ``` -> DataType is defined in [com.mindspore.lite.DataType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java). +> DataType is defined in [com.mindspore.lite.DataType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java). 
- Returns diff --git a/docs/api_java/source_zh_cn/class_list.md b/docs/api_java/source_zh_cn/class_list.md index 6c2d18a87f5910b6e00aa06a1debe90aa3719f95..f9534fdb20b17e6b0ad10df8f3283a0411b5a3d8 100644 --- a/docs/api_java/source_zh_cn/class_list.md +++ b/docs/api_java/source_zh_cn/class_list.md @@ -1,14 +1,14 @@ # 类列表 - + | 包 | 类 | 描述 | | ------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -| com.mindspore.lite.config | [MSConfig](https://www.mindspore.cn/doc/api_java/zh-CN/master/msconfig.html) | MSConfig用于保存执行期间的配置变量。 | -| com.mindspore.lite.config | [CpuBindMode](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) | CpuBindMode定义了CPU绑定模式。 | -| com.mindspore.lite.config | [DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DeviceType定义了后端设备类型。 | -| com.mindspore.lite | [LiteSession](https://www.mindspore.cn/doc/api_java/zh-CN/master/lite_session.html) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 | -| com.mindspore.lite | [Model](https://www.mindspore.cn/doc/api_java/zh-CN/master/model.html) | Model定义了MindSpore Lite中的模型,便于计算图管理。 | -| com.mindspore.lite | [MSTensor](https://www.mindspore.cn/doc/api_java/zh-CN/master/mstensor.html) | MSTensor定义了MindSpore Lite中的张量。 | -| com.mindspore.lite | [DataType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DataType定义了所支持的数据类型。 | -| com.mindspore.lite | [Version](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/Version.java) | Version用于获取MindSpore Lite的版本信息。 | +| com.mindspore.lite.config | [MSConfig](https://www.mindspore.cn/doc/api_java/zh-CN/r1.1/msconfig.html) | 
MSConfig用于保存执行期间的配置变量。 | +| com.mindspore.lite.config | [CpuBindMode](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) | CpuBindMode定义了CPU绑定模式。 | +| com.mindspore.lite.config | [DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DeviceType定义了后端设备类型。 | +| com.mindspore.lite | [LiteSession](https://www.mindspore.cn/doc/api_java/zh-CN/r1.1/lite_session.html) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 | +| com.mindspore.lite | [Model](https://www.mindspore.cn/doc/api_java/zh-CN/r1.1/model.html) | Model定义了MindSpore Lite中的模型,便于计算图管理。 | +| com.mindspore.lite | [MSTensor](https://www.mindspore.cn/doc/api_java/zh-CN/r1.1/mstensor.html) | MSTensor定义了MindSpore Lite中的张量。 | +| com.mindspore.lite | [DataType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java) | DataType定义了所支持的数据类型。 | +| com.mindspore.lite | [Version](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/Version.java) | Version用于获取MindSpore Lite的版本信息。 | diff --git a/docs/api_java/source_zh_cn/conf.py b/docs/api_java/source_zh_cn/conf.py index e3dfb2a0a9fc6653113e7b2bb878a5497ceb4a2b..d68b7e7966909b7631790f148a864c950696ec0c 100644 --- a/docs/api_java/source_zh_cn/conf.py +++ b/docs/api_java/source_zh_cn/conf.py @@ -22,7 +22,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_java/source_zh_cn/lite_java_example.rst b/docs/api_java/source_zh_cn/lite_java_example.rst index 905bf7d9bb71ee8e7d108990155a454a6e80ab92..19f6b218c4c53be61e43cd7c27c08723eed06940 100644 --- 
a/docs/api_java/source_zh_cn/lite_java_example.rst +++ b/docs/api_java/source_zh_cn/lite_java_example.rst @@ -4,4 +4,4 @@ .. toctree:: :maxdepth: 1 - 快速入门 \ No newline at end of file + 快速入门 \ No newline at end of file diff --git a/docs/api_java/source_zh_cn/lite_session.md b/docs/api_java/source_zh_cn/lite_session.md index 9d3d0fca6357adbf015f09219aa2e86a821258ff..5bd9c1c05f974439453505011866270471676a47 100644 --- a/docs/api_java/source_zh_cn/lite_session.md +++ b/docs/api_java/source_zh_cn/lite_session.md @@ -1,6 +1,6 @@ # LiteSession - + ```java import com.mindspore.lite.LiteSession; @@ -62,7 +62,7 @@ public boolean compileGraph(Model model) - 参数 - - `Model`: 需要被编译的模型。 + - `Model`: 需要被编译的模型。 - 返回值 @@ -102,7 +102,7 @@ public MSTensor getInputByTensorName(String tensorName) - 参数 - - `tensorName`: 张量名。 + - `tensorName`: 张量名。 - 返回值 @@ -118,7 +118,7 @@ public List getOutputsByNodeName(String nodeName) - 参数 - - `nodeName`: 节点名。 + - `nodeName`: 节点名。 - 返回值 diff --git a/docs/api_java/source_zh_cn/model.md b/docs/api_java/source_zh_cn/model.md index 7fbd94321689c1f082ec239c20d8fd02627770e9..373ec68acbb2f9ee90b76ffc82c48e94824136d4 100644 --- a/docs/api_java/source_zh_cn/model.md +++ b/docs/api_java/source_zh_cn/model.md @@ -1,6 +1,6 @@ # Model - + ```java import com.mindspore.lite.Model; diff --git a/docs/api_java/source_zh_cn/msconfig.md b/docs/api_java/source_zh_cn/msconfig.md index 3b2da52153de453f360fe736b0af6417ea047bfb..76759a3472bd31b207aff3f15c2a966015b14080 100644 --- a/docs/api_java/source_zh_cn/msconfig.md +++ b/docs/api_java/source_zh_cn/msconfig.md @@ -1,6 +1,6 @@ # MSConfig - + ```java import com.mindspore.lite.config.MSConfig; @@ -29,9 +29,9 @@ public boolean init(int deviceType, int threadNum, int cpuBindMode, boolean enab - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 + - `deviceType`: 
设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 - `threadNum`: 线程数。 - - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.lite.config.CpuBindMode](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java)中定义。 + - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.lite.config.CpuBindMode](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java)中定义。 - `enable_float16`:是否优先使用float16算子。 - 返回值 @@ -46,9 +46,9 @@ public boolean init(int deviceType, int threadNum, int cpuBindMode) - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 + - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 - `threadNum`: 线程数。 - - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.lite.config.CpuBindMode](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java)中定义。 + - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.lite.config.CpuBindMode](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java)中定义。 - 返回值 @@ -62,7 +62,7 @@ public boolean init(int deviceType, int threadNum) - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 + - `deviceType`: 
设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 - `threadNum`: 线程数。 - 返回值 @@ -77,7 +77,7 @@ public boolean init(int deviceType) - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 + - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 - 返回值 diff --git a/docs/api_java/source_zh_cn/mstensor.md b/docs/api_java/source_zh_cn/mstensor.md index 056cd1dd94514a508c9fecdd243befdedd95e740..8139f3d585857831cf6e05f774c0e8e268be2e6b 100644 --- a/docs/api_java/source_zh_cn/mstensor.md +++ b/docs/api_java/source_zh_cn/mstensor.md @@ -1,6 +1,6 @@ # MSTensor - + ```java import com.mindspore.lite.MSTensor; @@ -42,7 +42,7 @@ public int[] getShape() public int getDataType() ``` -> DataType在[com.mindspore.lite.DataType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java)中定义。 +> DataType在[com.mindspore.lite.DataType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java)中定义。 - 返回值 diff --git a/docs/api_python/source_en/conf.py b/docs/api_python/source_en/conf.py index c88194339c838fa4ef46289d8f6643a0f135fd53..50815c0e73cb6b6c98920579502e46598d77c262 100644 --- a/docs/api_python/source_en/conf.py +++ b/docs/api_python/source_en/conf.py @@ -32,7 +32,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git 
a/docs/api_python/source_en/mindspore/mindspore.ops.rst b/docs/api_python/source_en/mindspore/mindspore.ops.rst index 779103f524e5fba8fede87ba25f6bd58a6756850..7f0d5f5e61ef48d91bbaeddc3327c8a1288b5d7d 100644 --- a/docs/api_python/source_en/mindspore/mindspore.ops.rst +++ b/docs/api_python/source_en/mindspore/mindspore.ops.rst @@ -29,6 +29,7 @@ The composite operators are the pre-defined combination of operators. mindspore.ops.normal mindspore.ops.poisson mindspore.ops.repeat_elements + mindspore.ops.sequence_mask mindspore.ops.tensor_dot mindspore.ops.uniform diff --git a/docs/api_python/source_en/mindspore/mindspore.rst b/docs/api_python/source_en/mindspore/mindspore.rst index 0b8c7204849fbcccf421d496294cdff7a00434dc..82325677874a0b3de48590277110f4eaa03558a2 100644 --- a/docs/api_python/source_en/mindspore/mindspore.rst +++ b/docs/api_python/source_en/mindspore/mindspore.rst @@ -40,8 +40,8 @@ mindspore ============================ ================= Type Description ============================ ================= - ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. - ``MetaTensor`` A tensor only has data type and shape. For details, see `MetaTensor `_. + ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. + ``MetaTensor`` A tensor only has data type and shape. For details, see `MetaTensor `_. ``bool_`` Boolean ``True`` or ``False``. ``int_`` Integer scalar. ``uint`` Unsigned integer scalar. 
diff --git a/docs/api_python/source_zh_cn/conf.py b/docs/api_python/source_zh_cn/conf.py index d1220b8f461bd09a54464c8b09042cfa4577d0be..6eca0f0e635ee5b74e813547cad0fec70ff648ae 100644 --- a/docs/api_python/source_zh_cn/conf.py +++ b/docs/api_python/source_zh_cn/conf.py @@ -32,7 +32,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst index 779103f524e5fba8fede87ba25f6bd58a6756850..7f0d5f5e61ef48d91bbaeddc3327c8a1288b5d7d 100644 --- a/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst +++ b/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst @@ -29,6 +29,7 @@ The composite operators are the pre-defined combination of operators. mindspore.ops.normal mindspore.ops.poisson mindspore.ops.repeat_elements + mindspore.ops.sequence_mask mindspore.ops.tensor_dot mindspore.ops.uniform diff --git a/docs/api_python/source_zh_cn/mindspore/mindspore.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.rst index 879b8dbb4e6dd454549dd564c0b574a755b9d97d..0444fafe6df08219dbc652b503b06ce65967a407 100644 --- a/docs/api_python/source_zh_cn/mindspore/mindspore.rst +++ b/docs/api_python/source_zh_cn/mindspore/mindspore.rst @@ -40,8 +40,8 @@ mindspore ============================ ================= Type Description ============================ ================= - ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. - ``MetaTensor`` A tensor only has data type and shape. For details, see `MetaTensor `_. + ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. + ``MetaTensor`` A tensor only has data type and shape. For details, see `MetaTensor `_. ``bool_`` Boolean ``True`` or ``False``. 
``int_`` Integer scalar. ``uint`` Unsigned integer scalar. diff --git a/docs/faq/source_en/backend_running.md b/docs/faq/source_en/backend_running.md index 6223b8cf48a80f6a680e9c5bd57d350cc61476ac..0d71c7e6aede314baca3b86740c2892e0585b579 100644 --- a/docs/faq/source_en/backend_running.md +++ b/docs/faq/source_en/backend_running.md @@ -2,7 +2,7 @@ `Ascend` `GPU` `CPU` `Environmental Setup` `Operation Mode` `Model Training` `Beginner` `Intermediate` `Expert` - + **Q: What can I do if the network performance is abnormal and weight initialization takes a long time during training after MindSpore is installed?** @@ -88,7 +88,7 @@ A: The problem is that the Graph mode is selected but the PyNative mode is used. - PyNative mode: dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model. - Graph mode: static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution. This mode uses technologies such as graph optimization to improve the running performance and facilitates large-scale deployment and cross-platform running. -You can select a proper mode and writing method to complete the training by referring to the official website [tutorial](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/debug_in_pynative_mode.html). +You can select a proper mode and writing method to complete the training by referring to the official website [tutorial](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/debug_in_pynative_mode.html).
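The FAQ hunk above contrasts PyNative (dynamic graph) and Graph (static graph) modes. The following is a toy, framework-free sketch of that distinction — it is not MindSpore code, just an illustration of eager execution versus record-then-execute:

```python
# Toy illustration (NOT MindSpore API) of the two execution styles described
# in the FAQ: PyNative-style eager execution runs each operation immediately,
# while Graph-style execution records the whole pipeline first and then runs
# it as one unit, which is what enables whole-graph optimization.

def eager_square_sum(xs):
    # Eager: every step runs as soon as it is written, easy to debug.
    squares = [x * x for x in xs]
    return sum(squares)

class TinyGraph:
    """Records operations, then executes them all at once."""

    def __init__(self):
        self.ops = []

    def add_op(self, fn):
        self.ops.append(fn)
        return self  # allow chaining

    def run(self, value):
        # The full pipeline is known before execution starts, so a real
        # framework could fuse or reorder the ops at this point.
        for fn in self.ops:
            value = fn(value)
        return value

graph = TinyGraph().add_op(lambda xs: [x * x for x in xs]).add_op(sum)
print(eager_square_sum([1, 2, 3]), graph.run([1, 2, 3]))  # both compute 1+4+9
```

Both paths compute the same result; the difference is only when each operation runs, which is exactly the trade-off between debuggability and optimization the FAQ answer describes.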
diff --git a/docs/faq/source_en/conf.py b/docs/faq/source_en/conf.py index a1fd767271ac159540440ed65bd0d676163366a9..a2abcc9090f480f4504ca43ff682a2e762a5a89f 100644 --- a/docs/faq/source_en/conf.py +++ b/docs/faq/source_en/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/faq/source_en/installation.md b/docs/faq/source_en/installation.md index 24c5b09dd209b8852a8d80906e2d8fa751de0997..da352c948dc603c8e318f70691dfea70b302ae07 100644 --- a/docs/faq/source_en/installation.md +++ b/docs/faq/source_en/installation.md @@ -13,7 +13,7 @@ - + ## Installing Using pip diff --git a/docs/faq/source_en/mindinsight_use.md b/docs/faq/source_en/mindinsight_use.md index 1c95a1df32a66f56797bd5ab301ad74b4994f064..59d96a742383f206c383d41652593a0c687c8254 100644 --- a/docs/faq/source_en/mindinsight_use.md +++ b/docs/faq/source_en/mindinsight_use.md @@ -2,7 +2,7 @@ `Linux` `Ascend` `GPU` `Environment Preparation` - + **Q: What can I do if the error message `ImportError: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory` is displayed in the MindInsight running logs after MindInsight failed to start?** diff --git a/docs/faq/source_en/mindspore_lite.md b/docs/faq/source_en/mindspore_lite.md index f48da6347d452427a2f3fbb984545738bb2246ad..d408d900bd6bf677615d7742f9452b5ac255e41d 100644 --- a/docs/faq/source_en/mindspore_lite.md +++ b/docs/faq/source_en/mindspore_lite.md @@ -1,6 +1,6 @@ # MindSpore Lite Use - + **Q: What are the limitations of NPU?** @@ -8,4 +8,4 @@ A: Currently NPU only supports system ROM version EMUI>=11. 
Chip support inclu **Q: Why does the static library after cutting with the cropper tool fail to compile during integration?** -A: Currently the cropper tool only supports CPU libraries, that is, `-e CPU` is specified in the compilation command. For details, please refer to [Use clipping tool to reduce library file size](https://www.mindspore.cn/tutorial/lite/en/master/use/cropper_tool.html) document. +A: Currently the cropper tool only supports CPU libraries, that is, `-e CPU` is specified in the compilation command. For details, please refer to [Use clipping tool to reduce library file size](https://www.mindspore.cn/tutorial/lite/en/r1.1/use/cropper_tool.html) document. diff --git a/docs/faq/source_en/network_models.md b/docs/faq/source_en/network_models.md index edc5609767bdd0fc7f73b0dc58aeb188d6eabd74..99df3e742715ae85f4add228c06187388c8ef9ee 100644 --- a/docs/faq/source_en/network_models.md +++ b/docs/faq/source_en/network_models.md @@ -2,7 +2,7 @@ `Data Processing` `Environmental Setup` `Model Export` `Model Training` `Beginner` `Intermediate` `Expert` - + **Q: After a model is trained, how do I save the model output in text or `npy` format?** @@ -18,11 +18,11 @@ np.save("output.npy", out.asnumpy()) **Q: Must data be converted into MindRecords when MindSpore is used for segmentation training?** -A: [build_seg_data.py](https://github.com/mindspore-ai/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/data/build_seg_data.py)is used to generate MindRecords based on a dataset. You can directly use or adapt it to your dataset. Alternatively, you can use `GeneratorDataset` if you want to read the dataset by yourself. +A: [build_seg_data.py](https://github.com/mindspore-ai/mindspore/blob/r1.1/model_zoo/official/cv/deeplabv3/src/data/build_seg_data.py) is used to generate MindRecords based on a dataset. You can directly use or adapt it to your dataset. Alternatively, you can use `GeneratorDataset` if you want to read the dataset by yourself. 
-[GenratorDataset example](https://www.mindspore.cn/doc/programming_guide/en/master/dataset_loading.html#loading-user-defined-dataset) +[GeneratorDataset example](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dataset_loading.html#loading-user-defined-dataset) -[GeneratorDataset API description](https://www.mindspore.cn/doc/api_python/en/master/mindspore/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) +[GeneratorDataset API description](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset)
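The `GeneratorDataset` answer above relies on a user-defined Python source. A minimal, self-contained sketch of such a source follows — the sample values and column names are invented purely for illustration, and the MindSpore wrapping step is shown only as a comment since it requires MindSpore to be installed:

```python
# Minimal sketch of the kind of user-defined source GeneratorDataset consumes:
# any Python iterable/generator that yields one sample per step.
# The data values below are made up for illustration only.

def my_dataset():
    samples = [
        ([0.0, 1.0], 0),
        ([1.0, 0.0], 1),
        ([0.5, 0.5], 0),
    ]
    for feature, label in samples:
        # One (data, label) pair per iteration step.
        yield feature, label

# With MindSpore installed, this source would typically be wrapped as:
#   ds = mindspore.dataset.GeneratorDataset(my_dataset, column_names=["data", "label"])
rows = list(my_dataset())
print(len(rows))  # 3 samples
```

Any iterable with this shape works as a `GeneratorDataset` source, which is why the FAQ suggests it as the alternative to converting data into MindRecords.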
@@ -46,7 +46,7 @@ A: MindSpore uses protocol buffers (protobuf) to store training parameters and c
**Q: How do I use models trained by MindSpore on Ascend 310? Can they be converted to models used by HiLens Kit?**

-A: Yes. HiLens Kit uses Ascend 310 as the inference core. Therefore, the two questions are essentially the same. Ascend 310 requires a dedicated OM model. Use MindSpore to export the ONNX or AIR model and convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_ascend_310.html).
+A: Yes. HiLens Kit uses Ascend 310 as the inference core. Therefore, the two questions are essentially the same. Ascend 310 requires a dedicated OM model. Use MindSpore to export the ONNX or AIR model and convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_310.html).
@@ -58,19 +58,19 @@ A: When building a network, use `if self.training: x = dropout(x)`. During verif
**Q: Where can I view the sample code or tutorial of MindSpore training and inference?**

-A: Please visit the [MindSpore official website training](https://www.mindspore.cn/tutorial/training/en/master/index.html) and [MindSpore official website inference](https://www.mindspore.cn/tutorial/inference/en/master/index.html).
+A: Please visit the [MindSpore official website training](https://www.mindspore.cn/tutorial/training/en/r1.1/index.html) and [MindSpore official website inference](https://www.mindspore.cn/tutorial/inference/en/r1.1/index.html).
**Q: What types of models are currently supported by MindSpore for training?**

-A: MindSpore has basic support for common training scenarios, please refer to [Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md#) for detailed information.
+A: MindSpore has basic support for common training scenarios. For details, please refer to the [Release note](https://gitee.com/mindspore/mindspore/blob/r1.1/RELEASE.md#).
**Q: What are the available recommendation or text generation networks or models provided by MindSpore?**

-A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
+A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo).
@@ -141,7 +141,7 @@ if __name__ == "__main__":
**Q: How do I use MindSpore to fit quadratic functions such as $f(x)=ax^2+bx+c$?**

-A: The following code is referenced from the official [MindSpore tutorial code](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/linear_regression.py).
+A: The following code is referenced from the official [MindSpore tutorial code](https://gitee.com/mindspore/docs/blob/r1.1/tutorials/tutorial_code/linear_regression.py).

Modify the following items to fit $f(x) = ax^2 + bx + c$:
diff --git a/docs/faq/source_en/platform_and_system.md b/docs/faq/source_en/platform_and_system.md
index 754a9208fcc1e9b02d52d6d9c60fd70389c1cf38..9853e7eae1a3df0af6cacd365113955afff815ce 100644
--- a/docs/faq/source_en/platform_and_system.md
+++ b/docs/faq/source_en/platform_and_system.md
@@ -2,7 +2,7 @@
`Linux` `Windows` `Ascend` `GPU` `CPU` `Hardware Support` `Beginner` `Intermediate`
-
+
**Q: Does MindSpore run only on Huawei `NPUs`?**
@@ -30,7 +30,7 @@ A: Ascend 310 can only be used for inference. MindSpore supports training on Asc
**Q: Does MindSpore require computing units such as GPUs and NPUs? What hardware support is required?**
-A: MindSpore currently supports CPU, GPU, Ascend, and NPU. Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs. Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [Road Map](https://www.mindspore.cn/doc/note/en/master/roadmap.html) and [project release notes](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md#).
+A: MindSpore currently supports CPU, GPU, Ascend, and NPU. Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs.
Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [Road Map](https://www.mindspore.cn/doc/note/en/r1.1/roadmap.html) and [project release notes](https://gitee.com/mindspore/mindspore/blob/r1.1/RELEASE.md#).
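Circling back to the quadratic-fitting question a few hunks above: before adapting the linear-regression tutorial script, the target $f(x)=ax^2+bx+c$ can be sanity-checked with a closed-form least-squares fit. The coefficients below are made-up example values, not anything from the tutorial:

```python
import numpy as np

# Hypothetical ground truth: f(x) = 2x^2 - 3x + 1, sampled without noise.
x = np.linspace(-5, 5, 101)
y = 2 * x**2 - 3 * x + 1

# Design matrix with columns [x^2, x, 1]; lstsq solves min ||A @ w - y||^2,
# the closed-form counterpart of training a small network on (x, y) pairs.
A = np.stack([x**2, x, np.ones_like(x)], axis=1)
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
```

On noiseless data the fit recovers the coefficients exactly, which gives a reference to compare a gradient-descent version against.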
diff --git a/docs/faq/source_en/programming_language_extensions.md b/docs/faq/source_en/programming_language_extensions.md index 7686964b62803147f6dff587d1e42020a3abcfc1..7afc8e66d3c1d667ca9928475ce88a6a65642c9a 100644 --- a/docs/faq/source_en/programming_language_extensions.md +++ b/docs/faq/source_en/programming_language_extensions.md @@ -2,7 +2,7 @@ `Python` `Support Plan` - + **Q: The recent announced programming language such as taichi got Python extensions that could be directly used as `import taichi as ti`. Does MindSpore have similar support?** diff --git a/docs/faq/source_en/supported_features.md b/docs/faq/source_en/supported_features.md index 0323a48d07bc029fa6d3321beefd68d8df45cd1c..3c2043e1e549abd563f2bb8b0bda0f180a6d9019 100644 --- a/docs/faq/source_en/supported_features.md +++ b/docs/faq/source_en/supported_features.md @@ -2,7 +2,7 @@ `Characteristic Advantages` `On-device Inference` `Functional Module` `Reasoning Tools` - + **Q: How do I change hyperparameters for calculating loss values during neural network training?** @@ -12,7 +12,7 @@ A: Sorry, this function is not available yet. You can find the optimal hyperpara **Q: Can you introduce the dedicated data processing framework?** -A: MindData provides the heterogeneous hardware acceleration function for data processing. The high-concurrency data processing pipeline supports NPUs, GPUs, and CPUs. The CPU usage is reduced by 30%. For details, see [Optimizing Data Processing](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/optimize_data_processing.html). +A: MindData provides the heterogeneous hardware acceleration function for data processing. The high-concurrency data processing pipeline supports NPUs, GPUs, and CPUs. The CPU usage is reduced by 30%. For details, see [Optimizing Data Processing](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/optimize_data_processing.html).
@@ -54,7 +54,7 @@ A: In addition to data parallelism, MindSpore distributed training also supports **Q: Has MindSpore implemented the anti-pooling operation similar to `nn.MaxUnpool2d`?** -A: Currently, MindSpore does not provide anti-pooling APIs but you can customize the operator to implement the operation. For details, click [here](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_operator_ascend.html). +A: Currently, MindSpore does not provide anti-pooling APIs but you can customize the operator to implement the operation. For details, click [here](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_operator_ascend.html).
@@ -90,7 +90,7 @@ A: The TensorFlow's object detection pipeline API belongs to the TensorFlow's Mo **Q: How do I migrate scripts or models of other frameworks to MindSpore?** -A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). +A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/migrate_3rd_scripts.html).
diff --git a/docs/faq/source_en/supported_operators.md b/docs/faq/source_en/supported_operators.md
index f8d2e9a96cc4823b6c788c49e046389291305e1e..bb83bb414f86ddf186e69bb547a6bb595903913d 100644
--- a/docs/faq/source_en/supported_operators.md
+++ b/docs/faq/source_en/supported_operators.md
@@ -2,7 +2,7 @@
`Ascend` `GPU` `CPU` `Environmental Setup` `Beginner` `Intermediate` `Expert`
-
+
**Q: Why is data loading abnormal when MindSpore 1.0.1 is used in graph data offload mode?**
@@ -38,7 +38,7 @@ In MindSpore, you can manually initialize the weight corresponding to the `paddi
**Q: What can I do if the LSTM example on the official website cannot run on Ascend?**
-A: Currently, the LSTM runs only on a GPU or CPU and does not support the hardware environment. You can click [here](https://www.mindspore.cn/doc/note/en/master/operator_list_ms.html) to view the supported operators.
+A: Currently, the LSTM runs only on a GPU or CPU and does not yet support the Ascend hardware environment. You can click [here](https://www.mindspore.cn/doc/note/en/r1.1/operator_list_ms.html) to view the supported operators.
diff --git a/docs/faq/source_zh_cn/backend_running.md b/docs/faq/source_zh_cn/backend_running.md index 890dd7a3f3f6b99ffd07749a834c253cd6f18187..c8b1b67b1c71e815998a4fe65bc3b3155bd1a24a 100644 --- a/docs/faq/source_zh_cn/backend_running.md +++ b/docs/faq/source_zh_cn/backend_running.md @@ -2,7 +2,7 @@ `Ascend` `GPU` `CPU` `环境准备` `运行模式` `模型训练` `初级` `中级` `高级` - + **Q:MindSpore安装完成,执行训练时发现网络性能异常,权重初始化耗时过长,怎么办?** @@ -83,7 +83,7 @@ A:这边的问题是选择了Graph模式却使用了PyNative的写法,所以 - Graph模式:也称静态图模式或者图模式,将神经网络模型编译成一整张图,然后下发执行。该模式利用图优化等技术提高运行性能,同时有助于规模部署和跨平台运行。 -用户可以参考[官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/debug_in_pynative_mode.html)选择合适、统一的模式和写法来完成训练。 +用户可以参考[官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/debug_in_pynative_mode.html)选择合适、统一的模式和写法来完成训练。
diff --git a/docs/faq/source_zh_cn/conf.py b/docs/faq/source_zh_cn/conf.py index 95d7701759707ab95a3c199cd8a22e2e2cc1194d..7be5f453c21b75703c763a14c8180127aed60e6b 100644 --- a/docs/faq/source_zh_cn/conf.py +++ b/docs/faq/source_zh_cn/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/faq/source_zh_cn/installation.md b/docs/faq/source_zh_cn/installation.md index 16bd43f42f12581dbfb2dce732a6eba113495aa7..9d7e2106285c43ad18de0793cdb1ccee817a3a86 100644 --- a/docs/faq/source_zh_cn/installation.md +++ b/docs/faq/source_zh_cn/installation.md @@ -13,7 +13,7 @@ - + ## pip安装 diff --git a/docs/faq/source_zh_cn/mindinsight_use.md b/docs/faq/source_zh_cn/mindinsight_use.md index 32d259e88ce696310ba19ae269198d9277a47312..36a495c7e69572a031d069d2267c45944556e6a3 100644 --- a/docs/faq/source_zh_cn/mindinsight_use.md +++ b/docs/faq/source_zh_cn/mindinsight_use.md @@ -2,7 +2,7 @@ `Linux` `Ascend` `GPU` `环境准备` - + **Q:MindInsight启动失败并且提示:`ImportError: libcrypto.so.1.0.0: cannnot open shared object file: No such file or directory` 如何处理?** diff --git a/docs/faq/source_zh_cn/mindspore_lite.md b/docs/faq/source_zh_cn/mindspore_lite.md index 7a1c25be35eb07f7ea9dc2855ab560c3c87eea7e..f9f7e83107cc65a2450b5792662b1e70c0dfd645 100644 --- a/docs/faq/source_zh_cn/mindspore_lite.md +++ b/docs/faq/source_zh_cn/mindspore_lite.md @@ -1,6 +1,6 @@ # 端侧使用类 - + **Q:NPU推理存在什么限制?** @@ -8,5 +8,5 @@ A:目前NPU仅支持在系统ROM版本EMUI>=11、芯片支持包括Kirin 9000 **Q:为什么使用裁剪工具裁剪后的静态库在集成时存在编译失败情况?** -A:目前裁剪工具仅支持CPU的库,即编译命令中指定了`-e CPU`,具体使用请查看[使用裁剪工具降低库文件大小](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/cropper_tool.html)文档。 +A:目前裁剪工具仅支持CPU的库,即编译命令中指定了`-e CPU`,具体使用请查看[使用裁剪工具降低库文件大小](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.1/use/cropper_tool.html)文档。 diff --git a/docs/faq/source_zh_cn/network_models.md 
b/docs/faq/source_zh_cn/network_models.md
index 741d946538f3f6a31adc34c4b606d5ffe5a56047..cc16e853f54027bdc152d08a7c95e926254043fc 100644
--- a/docs/faq/source_zh_cn/network_models.md
+++ b/docs/faq/source_zh_cn/network_models.md
@@ -2,7 +2,7 @@
`数据处理` `环境准备` `模型导出` `模型训练` `初级` `中级` `高级`
-
+
**Q:模型已经训练好,如何将模型的输出结果保存为文本或者`npy`的格式?**
@@ -18,11 +18,11 @@ np.save("output.npy", out.asnumpy())
**Q:使用MindSpore做分割训练,必须将数据转为MindRecords吗?**
-A:[build_seg_data.py](https://github.com/mindspore-ai/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/data/build_seg_data.py)是将数据集生成MindRecord的脚本,可以直接使用/适配下你的数据集。或者如果你想尝试自己实现数据集的读取,可以使用`GeneratorDataset`自定义数据集加载。
+A:[build_seg_data.py](https://github.com/mindspore-ai/mindspore/blob/r1.1/model_zoo/official/cv/deeplabv3/src/data/build_seg_data.py)是将数据集生成MindRecord的脚本,可以直接使用/适配下你的数据集。或者如果你想尝试自己实现数据集的读取,可以使用`GeneratorDataset`自定义数据集加载。
-[GenratorDataset 示例](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html#id5)
+[GeneratorDataset 示例](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html#id5)
-[GenratorDataset API说明](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset)
+[GeneratorDataset API说明](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset)
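The `np.save("output.npy", out.asnumpy())` call visible in the hunk context above round-trips like this. The array is a stand-in for a real network output, and the paths are throwaway temp files, so the sketch runs without MindSpore:

```python
import os
import tempfile
import numpy as np

out = np.arange(6, dtype=np.float32).reshape(2, 3)  # stand-in for net(x).asnumpy()

tmpdir = tempfile.mkdtemp()
npy_path = os.path.join(tmpdir, "output.npy")
txt_path = os.path.join(tmpdir, "output.txt")

np.save(npy_path, out)     # binary .npy preserves dtype and shape
np.savetxt(txt_path, out)  # plain-text alternative (1-D/2-D arrays only)

restored = np.load(npy_path)
```

The `.npy` route is lossless; the text route is human-readable but drops the dtype.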
@@ -34,7 +34,7 @@ A:MindSpore的`ckpt`和TensorFlow的`ckpt`格式是不通用的,虽然都是 **Q:如何不将数据处理为MindRecord格式,直接进行训练呢?** -A:可以使用自定义的数据加载方式 `GeneratorDataset`,具体可以参考[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html)文档中的自定义数据集加载。 +A:可以使用自定义的数据加载方式 `GeneratorDataset`,具体可以参考[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html)文档中的自定义数据集加载。
@@ -46,7 +46,7 @@ A: MindSpore采用protobuf存储训练参数,无法直接读取其他框架
**Q:用MindSpore训练出的模型如何在Ascend 310上使用?可以转换成适用于HiLens Kit用的吗?**

-A:Ascend 310需要运行专用的OM模型,先使用MindSpore导出ONNX或AIR模型,再转化为Ascend 310支持的OM模型。具体可参考[多平台推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_ascend_310.html)。可以,HiLens Kit是以Ascend 310为推理核心,所以前后两个问题本质上是一样的,需要转换为OM模型.
+A:可以,HiLens Kit是以Ascend 310为推理核心,所以前后两个问题本质上是一样的。Ascend 310需要运行专用的OM模型,先使用MindSpore导出ONNX或AIR模型,再转化为Ascend 310支持的OM模型。具体可参考[多平台推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310.html)。
@@ -58,19 +58,19 @@ A:在构造网络的时候可以通过 `if self.training: x = dropout(x)`,
**Q:从哪里可以查看MindSpore训练及推理的样例代码或者教程?**

-A:可以访问[MindSpore官网教程训练](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)和[MindSpore官网教程推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/index.html)。
+A:可以访问[MindSpore官网教程训练](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/index.html)和[MindSpore官网教程推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/index.html)。
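The train/eval dropout toggle mentioned at the top of this hunk can be sketched without any framework. `TinyNet` and its `training` flag are illustrative stand-ins, not MindSpore API:

```python
import numpy as np

class TinyNet:
    """Minimal sketch: dropout fires during training, is skipped at inference."""
    def __init__(self, keep_prob=0.8, seed=0):
        self.keep_prob = keep_prob
        self.training = True                 # flip to False before inference
        self._rng = np.random.default_rng(seed)

    def dropout(self, x):
        mask = self._rng.random(x.shape) < self.keep_prob
        # Inverted dropout: rescale so the expected activation is unchanged.
        return x * mask / self.keep_prob

    def construct(self, x):
        if self.training:                    # the `if self.training:` pattern
            x = self.dropout(x)
        return x

net = TinyNet()
net.training = False                         # inference: identity pass-through
y = net.construct(np.ones((2, 2)))
```

With `training = False` the input passes through unchanged, which is exactly the behavior the FAQ answer relies on at verification time.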
**Q:MindSpore支持哪些模型的训练?** -A:MindSpore针对典型场景均有模型训练支持,支持情况详见[Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md#)。 +A:MindSpore针对典型场景均有模型训练支持,支持情况详见[Release note](https://gitee.com/mindspore/mindspore/blob/r1.1/RELEASE.md#)。
**Q:MindSpore有哪些现成的推荐类或生成类网络或模型可用?** -A:目前正在开发Wide & Deep、DeepFM、NCF等推荐类模型,NLP领域已经支持Bert_NEZHA,正在开发MASS等模型,用户可根据场景需要改造为生成类网络,可以关注[MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)。 +A:目前正在开发Wide & Deep、DeepFM、NCF等推荐类模型,NLP领域已经支持Bert_NEZHA,正在开发MASS等模型,用户可根据场景需要改造为生成类网络,可以关注[MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo)。
@@ -141,7 +141,7 @@ if __name__ == "__main__": **Q:如何使用MindSpore拟合$f(x)=ax^2+bx+c$这类的二次函数?** -A:以下代码引用自MindSpore的官方教程的[代码仓](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/linear_regression.py) +A:以下代码引用自MindSpore的官方教程的[代码仓](https://gitee.com/mindspore/docs/blob/r1.1/tutorials/tutorial_code/linear_regression.py) 在以下几处修改即可很好的拟合$f(x)=ax^2+bx+c$: diff --git a/docs/faq/source_zh_cn/platform_and_system.md b/docs/faq/source_zh_cn/platform_and_system.md index 050ef99b749b1ec2bfa4673aef5ec7f819dcb7d2..a27b65a0b8477676addde12e02c7c19a318160ef 100644 --- a/docs/faq/source_zh_cn/platform_and_system.md +++ b/docs/faq/source_zh_cn/platform_and_system.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `硬件支持` `初级` `中级` - + **Q:MindSpore只能在华为自己的`NPU`上跑么?** @@ -30,7 +30,7 @@ A:Ascend 310只能用作推理,MindSpore支持在Ascend 910训练,训练 **Q:安装运行MindSpore时,是否要求平台有GPU、NPU等计算单元?需要什么硬件支持?** -A:MindSpore当前支持CPU/GPU/Ascend /NPU。目前笔记本电脑或者有GPU的环境,都可以通过Docker镜像来试用。当前MindSpore Model Zoo中有部分模型已经支持GPU的训练和推理,其他模型也在不断地进行完善。在分布式并行训练方面,MindSpore当前支持GPU多卡训练。你可以通过[RoadMap](https://www.mindspore.cn/doc/note/zh-CN/master/roadmap.html)和项目[Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md#)获取最新信息。 +A:MindSpore当前支持CPU/GPU/Ascend /NPU。目前笔记本电脑或者有GPU的环境,都可以通过Docker镜像来试用。当前MindSpore Model Zoo中有部分模型已经支持GPU的训练和推理,其他模型也在不断地进行完善。在分布式并行训练方面,MindSpore当前支持GPU多卡训练。你可以通过[RoadMap](https://www.mindspore.cn/doc/note/zh-CN/r1.1/roadmap.html)和项目[Release note](https://gitee.com/mindspore/mindspore/blob/r1.1/RELEASE.md#)获取最新信息。
@@ -42,7 +42,7 @@ A:MindSpore提供了可插拔式的设备管理接口,其他计算单元( **Q:MindSpore与ModelArts是什么关系,在ModelArts中能使用MindSpore吗?** -A:ModelArts是华为公有云线上训练及推理平台,MindSpore是华为深度学习框架,可以查阅[MindSpore官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/use_on_the_cloud.html),教程中详细展示了用户如何使用ModelArts来做MindSpore的模型训练。 +A:ModelArts是华为公有云线上训练及推理平台,MindSpore是华为深度学习框架,可以查阅[MindSpore官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/use_on_the_cloud.html),教程中详细展示了用户如何使用ModelArts来做MindSpore的模型训练。
diff --git a/docs/faq/source_zh_cn/programming_language_extensions.md b/docs/faq/source_zh_cn/programming_language_extensions.md index 1622c8990131a4d12673ad71f26e9a830ee893a3..304e2401372dfd5c47097f966e1c090c119d28fa 100644 --- a/docs/faq/source_zh_cn/programming_language_extensions.md +++ b/docs/faq/source_zh_cn/programming_language_extensions.md @@ -2,7 +2,7 @@ `Python` `支持计划` - + **Q:最近出来的taichi编程语言有Python扩展,类似`import taichi as ti`就能直接用了,MindSpore是否也支持?** diff --git a/docs/faq/source_zh_cn/supported_features.md b/docs/faq/source_zh_cn/supported_features.md index f66be5ea326f5c19a43df002084ff47a494c84cf..6224c6eb55aa204f38da73c332a17eb69c2a420f 100644 --- a/docs/faq/source_zh_cn/supported_features.md +++ b/docs/faq/source_zh_cn/supported_features.md @@ -2,7 +2,7 @@ `特性优势` `端侧推理` `功能模块` `推理工具` - + **Q:如何在训练神经网络过程中对计算损失的超参数进行改变?** @@ -12,7 +12,7 @@ A:您好,很抱歉暂时还未有这样的功能。目前只能通过训练- **Q:第一次看到有专门的数据处理框架,能介绍下么?** -A:MindData提供数据处理异构硬件加速功能,高并发数据处理`pipeline`同时支持`NPU/GPU/CPU`,`CPU`占用降低30%,[点击查询](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/optimize_data_processing.html)。 +A:MindData提供数据处理异构硬件加速功能,高并发数据处理`pipeline`同时支持`NPU/GPU/CPU`,`CPU`占用降低30%,[点击查询](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/optimize_data_processing.html)。
@@ -54,7 +54,7 @@ A:MindSpore分布式训练除了支持数据并行,还支持算子级模型 **Q:请问MindSpore实现了反池化操作了吗?类似于`nn.MaxUnpool2d` 这个反池化操作?** -A:目前 MindSpore 还没有反池化相关的接口。如果用户想自己实现的话,可以通过自定义算子的方式自行开发算子,自定义算子[详见这里](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_operator_ascend.html)。 +A:目前 MindSpore 还没有反池化相关的接口。如果用户想自己实现的话,可以通过自定义算子的方式自行开发算子,自定义算子[详见这里](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_operator_ascend.html)。
@@ -90,7 +90,7 @@ A:TensorFlow的对象检测Pipeline接口属于TensorFlow Model模块。待Min **Q:其他框架的脚本或者模型怎么迁移到MindSpore?** -A:关于脚本或者模型迁移,可以查询MindSpore官网中关于[网络迁移](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/migrate_3rd_scripts.html)的介绍。 +A:关于脚本或者模型迁移,可以查询MindSpore官网中关于[网络迁移](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/migrate_3rd_scripts.html)的介绍。
diff --git a/docs/faq/source_zh_cn/supported_operators.md b/docs/faq/source_zh_cn/supported_operators.md index c090f559a14ec20c8eba16d1357c7282913be45d..2d78f9eeed66ba2c41d9b3919ec9a5247368023f 100644 --- a/docs/faq/source_zh_cn/supported_operators.md +++ b/docs/faq/source_zh_cn/supported_operators.md @@ -2,7 +2,7 @@ `Ascend` `CPU` `GPU` `环境准备` `初级` `中级` `高级` - + **Q:使用MindSpore-1.0.1版本在图数据下沉模式加载数据异常是什么原因?** @@ -32,13 +32,13 @@ A:在PyTorch中`padding_idx`的作用是将embedding矩阵中`padding_idx`位 **Q:Operations中`Tile`算子执行到`__infer__`时`value`值为`None`,丢失了数值是怎么回事?** A:`Tile`算子的`multiples input`必须是一个常量(该值不能直接或间接来自于图的输入)。否则构图的时候会拿到一个`None`的数据,因为图的输入是在图执行的时候才传下去的,构图的时候拿不到图的输入数据。 -相关的资料可以看[相关文档](https://www.mindspore.cn/doc/note/zh-CN/master/static_graph_syntax_support.html)的“其他约束”。 +相关的资料可以看[相关文档](https://www.mindspore.cn/doc/note/zh-CN/r1.1/static_graph_syntax_support.html)的“其他约束”。
**Q:官网的LSTM示例在Ascend上跑不通。**

-A:目前LSTM只支持在GPU和CPU上运行,暂不支持硬件环境,您可以[点击这里](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list_ms.html)查看算子支持情况。
+A:目前LSTM只支持在GPU和CPU上运行,暂不支持Ascend硬件环境,您可以[点击这里](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list_ms.html)查看算子支持情况。
diff --git a/docs/note/source_en/benchmark.md b/docs/note/source_en/benchmark.md index 9a715e46ceeffbf7a5319b36667743a6dccabbc9..e1e8eb3c4b3291e9e7151f0757f837da0e46d6d5 100644 --- a/docs/note/source_en/benchmark.md +++ b/docs/note/source_en/benchmark.md @@ -13,10 +13,10 @@ - + This document describes the MindSpore benchmarks. -For details about the MindSpore networks, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). +For details about the MindSpore networks, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo). ## Training Performance diff --git a/docs/note/source_en/conf.py b/docs/note/source_en/conf.py index a1fd767271ac159540440ed65bd0d676163366a9..a2abcc9090f480f4504ca43ff682a2e762a5a89f 100644 --- a/docs/note/source_en/conf.py +++ b/docs/note/source_en/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/note/source_en/design/mindarmour/differential_privacy_design.md b/docs/note/source_en/design/mindarmour/differential_privacy_design.md index 7038864e653ba412238865bae8c9e12e72f7a735..1b9ab9636881e5fb5c88f93c31333fd8ff147609 100644 --- a/docs/note/source_en/design/mindarmour/differential_privacy_design.md +++ b/docs/note/source_en/design/mindarmour/differential_privacy_design.md @@ -14,7 +14,7 @@ - + ## Overall Design @@ -54,10 +54,10 @@ Compared with traditional differential privacy, ZCDP and RDP provide stricter pr ## Code Implementation -- [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): implements the noise generation mechanism required by differential privacy training, including simple Gaussian noise, adaptive Gaussian noise, and adaptive clipping Gaussian noise. 
-- [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): implements the fundamental logic of using the noise generation mechanism to add noise during backward propagation. -- [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py): implements the callback function for computing the differential privacy budget. During model training, the current differential privacy budget is returned. -- [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py): implements the logic of computing the loss and gradient as well as the gradient truncation logic of differential privacy training, which is the entry for users to use the differential privacy training capability. +- [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): implements the noise generation mechanism required by differential privacy training, including simple Gaussian noise, adaptive Gaussian noise, and adaptive clipping Gaussian noise. +- [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): implements the fundamental logic of using the noise generation mechanism to add noise during backward propagation. +- [monitor.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/monitor/monitor.py): implements the callback function for computing the differential privacy budget. During model training, the current differential privacy budget is returned. +- [model.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/train/model.py): implements the logic of computing the loss and gradient as well as the gradient truncation logic of differential privacy training, which is the entry for users to use the differential privacy training capability. 
## References diff --git a/docs/note/source_en/design/mindarmour/fuzzer_design.md b/docs/note/source_en/design/mindarmour/fuzzer_design.md index 34cfc563dc10d75662950114a1db2337fe5f9596..b4a1db0bf5723ad6b51d68da6bd3a05d2adcfd51 100644 --- a/docs/note/source_en/design/mindarmour/fuzzer_design.md +++ b/docs/note/source_en/design/mindarmour/fuzzer_design.md @@ -13,7 +13,7 @@ - + ## Background @@ -61,10 +61,10 @@ Through multiple rounds of mutations, you can obtain a series of variant data in ## Code Implementation -1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py): overall fuzz testing process. -2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py): neuron coverage rate metrics, including KMNC, NBC, and SNAC. -3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py): image mutation methods, including methods based on image pixel value changes and affine transformation methods. -4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks): methods for generating adversarial examples based on white-box and black-box attacks. +1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/fuzzing.py): overall fuzz testing process. +2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/model_coverage_metrics.py): neuron coverage rate metrics, including KMNC, NBC, and SNAC. +3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/image_transform.py): image mutation methods, including methods based on image pixel value changes and affine transformation methods. +4. 
[adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/r1.1/mindarmour/adv_robustness/attacks): methods for generating adversarial examples based on white-box and black-box attacks. ## References diff --git a/docs/note/source_en/design/mindinsight/graph_visual_design.md b/docs/note/source_en/design/mindinsight/graph_visual_design.md index 8633d64951454033d95e05a8302c4cdeb825a59d..8f3e7e712d1459f9b4403ea137a740630e974dad 100644 --- a/docs/note/source_en/design/mindinsight/graph_visual_design.md +++ b/docs/note/source_en/design/mindinsight/graph_visual_design.md @@ -15,7 +15,7 @@ - + ## Background @@ -71,4 +71,4 @@ RESTful API is used for data interaction between the MindInsight frontend and ba #### File API Design Data interaction between MindSpore and MindInsight uses the data format defined by [Protocol Buffer](https://developers.google.cn/protocol-buffers/docs/pythontutorial). -The main entry is the [summary.proto file](https://gitee.com/mindspore/mindinsight/blob/master/mindinsight/datavisual/proto_files/mindinsight_summary.proto). A message object of a computational graph is defined as `GraphProto`. For details about `GraphProto`, see the [anf_ir.proto file](https://gitee.com/mindspore/mindinsight/blob/master/mindinsight/datavisual/proto_files/mindinsight_anf_ir.proto). +The main entry is the [summary.proto file](https://gitee.com/mindspore/mindinsight/blob/r1.1/mindinsight/datavisual/proto_files/mindinsight_summary.proto). A message object of a computational graph is defined as `GraphProto`. For details about `GraphProto`, see the [anf_ir.proto file](https://gitee.com/mindspore/mindinsight/blob/r1.1/mindinsight/datavisual/proto_files/mindinsight_anf_ir.proto). 
diff --git a/docs/note/source_en/design/mindinsight/tensor_visual_design.md b/docs/note/source_en/design/mindinsight/tensor_visual_design.md index 86a364148c6002864ea62d9c5b38bda03775674c..b94fb0c06576241898e997f873c9520a94979603 100644 --- a/docs/note/source_en/design/mindinsight/tensor_visual_design.md +++ b/docs/note/source_en/design/mindinsight/tensor_visual_design.md @@ -14,7 +14,7 @@ - + ## Background @@ -55,7 +55,7 @@ Figure 2 shows tensors recorded by a user in a form of a histogram. ### API Design -In tensor visualization, there are file API and RESTful API. The file API is the [summary.proto](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/utils/summary.proto) file, which is used for data interconnection between MindInsight and MindSpore. RESTful API is an internal API used for data interaction between the MindInsight frontend and backend. +In tensor visualization, there are file API and RESTful API. The file API is the [summary.proto](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/utils/summary.proto) file, which is used for data interconnection between MindInsight and MindSpore. RESTful API is an internal API used for data interaction between the MindInsight frontend and backend. #### File API Design @@ -102,4 +102,4 @@ The `summary.proto` file is the main entry. TensorProto data is stored in the su } ``` -TensorProto is defined in the [anf_ir.proto](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/utils/anf_ir.proto) file. +TensorProto is defined in the [anf_ir.proto](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/utils/anf_ir.proto) file. 
diff --git a/docs/note/source_en/design/mindinsight/training_visual_design.md b/docs/note/source_en/design/mindinsight/training_visual_design.md index 05cadfe220ab59d397cc4e2342d2fbf6d43325b6..40380e07483da6422999b31cf82af36691fd5ee8 100644 --- a/docs/note/source_en/design/mindinsight/training_visual_design.md +++ b/docs/note/source_en/design/mindinsight/training_visual_design.md @@ -18,7 +18,7 @@ - + [MindInsight](https://gitee.com/mindspore/mindinsight) is a visualized debugging and tuning component of MindSpore. MindInsight can be used to complete tasks such as training visualization, performance tuning, and precision tuning. @@ -40,11 +40,11 @@ The training information collection function in MindSpore consists of training i Training information collection APIs include: -- Training information collection API based on the summary operator. This API contains four summary operators, that is, the ScalarSummary operator for recording scalar data, the ImageSummary operator for recording image data, the HistogramSummary operator for recording parameter distribution histogram data, and the TensorSummary operator for recording tensor data. For details about the operators, see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list.html). +- Training information collection API based on the summary operator. This API contains four summary operators, that is, the ScalarSummary operator for recording scalar data, the ImageSummary operator for recording image data, the HistogramSummary operator for recording parameter distribution histogram data, and the TensorSummary operator for recording tensor data. For details about the operators, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list.html). -- Training information collection API based on the Python API. 
You can use the [SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value) method to collect training information in Python code. +- Training information collection API based on the Python API. You can use the [SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value) method to collect training information in Python code. -- Easy-to-use training information collection callback. The [SummaryCollector](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector) callback function can be used to conveniently collect common training information to training logs. +- Easy-to-use training information collection callback. The [SummaryCollector](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector) callback function can be used to conveniently collect common training information to training logs. The training information persistence module mainly includes a summary_record module used to manage a cache and a write_pool module used to process data in parallel and write data into a file. After the training information is made persistent, it is stored in the training log file (summary file). 
diff --git a/docs/note/source_en/design/mindspore/architecture.md b/docs/note/source_en/design/mindspore/architecture.md index 1ad9274690e89603abd59a6f2da73af93d9679f7..0c0a4b97f446253938a5814b6697ec5c94d778dd 100644 --- a/docs/note/source_en/design/mindspore/architecture.md +++ b/docs/note/source_en/design/mindspore/architecture.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `On Device` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor` - + The MindSpore framework consists of the Frontend Expression layer, Graph Engine layer, and Backend Runtime layer. diff --git a/docs/note/source_en/design/mindspore/architecture_lite.md b/docs/note/source_en/design/mindspore/architecture_lite.md index 5b26ac7ec999c60db6f0062efa15355c15020e84..7a99d70fd2b84671a18c5b6d2d2f096a5903fbb0 100644 --- a/docs/note/source_en/design/mindspore/architecture_lite.md +++ b/docs/note/source_en/design/mindspore/architecture_lite.md @@ -2,7 +2,7 @@ `Linux` `Windows` `On Device` `Inference Application` `Intermediate` `Expert` `Contributor` - + The overall architecture of MindSpore Lite is as follows: diff --git a/docs/note/source_en/design/mindspore/distributed_training_design.md b/docs/note/source_en/design/mindspore/distributed_training_design.md index cf963d8a9f819eeecf08184300edf060361f3834..d07526820e27946e90d9b202b53b5da036eef0e2 100644 --- a/docs/note/source_en/design/mindspore/distributed_training_design.md +++ b/docs/note/source_en/design/mindspore/distributed_training_design.md @@ -18,7 +18,7 @@ - + ## Background @@ -66,12 +66,12 @@ This section describes how the data parallel mode `ParallelMode.DATA_PARALLEL` w 1. Collective communication - - [management.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/communication/management.py): This file covers the `helper` function APIs commonly used during the collective communication process, for example, the APIs for obtaining the number of clusters and device ID. 
When collective communication is executed on the Ascend chip, the framework loads the `libhccl.so` library file in the environment and uses it to call the communication APIs from the Python layer to the underlying layer. - - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/operations/comm_ops.py): MindSpore encapsulates supported collective communication operations as operators and stores the operators in this file. The operators include `AllReduce`, `AllGather`, `ReduceScatter`, and `Broadcast`. `PrimitiveWithInfer` defines the attributes required by the operators, as well as the `shape` and `dtype` inference methods from the input to the output during graph composition. + - [management.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/communication/management.py): This file covers the `helper` function APIs commonly used during the collective communication process, for example, the APIs for obtaining the number of clusters and device ID. When collective communication is executed on the Ascend chip, the framework loads the `libhccl.so` library file in the environment and uses it to call the communication APIs from the Python layer to the underlying layer. + - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ops/operations/comm_ops.py): MindSpore encapsulates supported collective communication operations as operators and stores the operators in this file. The operators include `AllReduce`, `AllGather`, `ReduceScatter`, and `Broadcast`. `PrimitiveWithInfer` defines the attributes required by the operators, as well as the `shape` and `dtype` inference methods from the input to the output during graph composition. 2. Gradient aggregation - - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/nn/wrap/grad_reducer.py): This file implements the gradient aggregation process. After the input parameter `grads` is expanded by using `HyperMap`, the `AllReduce` operator is inserted. 
The global communication group is used. You can also perform custom development by referring to this section based on your network requirements. In MindSpore, standalone and distributed execution shares a set of network encapsulation APIs. In the `Cell`, `ParallelMode` is used to determine whether to perform gradient aggregation. For details about the network encapsulation APIs, see the `TrainOneStepCell` code implementation. + - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/nn/wrap/grad_reducer.py): This file implements the gradient aggregation process. After the input parameter `grads` is expanded by using `HyperMap`, the `AllReduce` operator is inserted. The global communication group is used. You can also perform custom development by referring to this section based on your network requirements. In MindSpore, standalone and distributed execution share a set of network encapsulation APIs. In the `Cell`, `ParallelMode` is used to determine whether to perform gradient aggregation. For details about the network encapsulation APIs, see the `TrainOneStepCell` code implementation. ## Automatic Parallelism @@ -122,19 +122,19 @@ As a key feature of MindSpore, automatic parallelism is used to implement hybrid ### Automatic Parallel Code 1. Tensor layout model - - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/tensor_layout): This directory contains the definitions and implementation of functions related to the tensor distribution model. `tensor_layout.h` declares the member variables `tensor_map_origin_`, `tensor_shape_`, and `device_arrangement_` required by a tensor distribution model. In `tensor_redistribution.h`, the related methods for implementing the `from_origin_` and `to_origin_` transformation between tensor distributions are declared. 
The deduced redistribution operation is stored in `operator_list_` and returned, in addition, the communication cost `comm_cost_`,, memory cost `memory_cost_`, and calculation cost `computation_cost_` required for redistribution are calculated. + - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/tensor_layout): This directory contains the definitions and implementation of functions related to the tensor distribution model. `tensor_layout.h` declares the member variables `tensor_map_origin_`, `tensor_shape_`, and `device_arrangement_` required by a tensor distribution model. In `tensor_redistribution.h`, the related methods for implementing the `from_origin_` and `to_origin_` transformation between tensor distributions are declared. The deduced redistribution operation is stored in `operator_list_` and returned; in addition, the communication cost `comm_cost_`, memory cost `memory_cost_`, and calculation cost `computation_cost_` required for redistribution are calculated. 2. Distributed operators - - [ops_info](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/ops_info): This directory contains the implementation of distributed operators. In `operator_info.h`, the base class `OperatorInfo` of distributed operator implementation is defined. A distributed operator to be developed shall inherit the base class and explicitly implement related imaginary functions. The `InferTensorInfo`, `InferTensorMap`, and `InferDevMatrixShape` functions define the algorithms for deriving the input and output tensor distribution model of the operator. The `InferForwardCommunication` and `InferMirrorOps` functions define the extra calculation and communication operations to be inserted for operator sharding. The `CheckStrategy` and `GenerateStrategies` functions define the parallel strategy validation and generation for the operator. 
According to the parallel strategy `SetCostUnderStrategy`, the parallel cost `operator_cost_` of the distributed operator is generated. + - [ops_info](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/ops_info): This directory contains the implementation of distributed operators. In `operator_info.h`, the base class `OperatorInfo` of distributed operator implementation is defined. A distributed operator to be developed shall inherit the base class and explicitly implement the related virtual functions. The `InferTensorInfo`, `InferTensorMap`, and `InferDevMatrixShape` functions define the algorithms for deriving the input and output tensor distribution model of the operator. The `InferForwardCommunication` and `InferMirrorOps` functions define the extra calculation and communication operations to be inserted for operator sharding. The `CheckStrategy` and `GenerateStrategies` functions define the parallel strategy validation and generation for the operator. Based on the parallel strategy, the `SetCostUnderStrategy` function generates the parallel cost `operator_cost_` of the distributed operator. 3. Strategy search algorithm - - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/auto_parallel): The shard strategy search algorithm is implemented in this directory. `graph_costmodel.h` defines the graph composition information. Each point indicates an operator `OperatorInfo`. The directed edge `edge_costmodel.h` indicates the input and output relationship of operators and the redistribution cost. `operator_costmodel.h` defines the cost model of each operator, including the calculation cost, communication cost, and memory cost. `dp_algorithm_costmodel.h` describes the main process of the dynamic planning algorithm, which consists of a series of graph operations. `costmodel.h` defines the data structures of cost and graph operations. 
+ - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/auto_parallel): The shard strategy search algorithm is implemented in this directory. `graph_costmodel.h` defines the graph composition information. Each point indicates an operator `OperatorInfo`. The directed edge `edge_costmodel.h` indicates the input and output relationship of operators and the redistribution cost. `operator_costmodel.h` defines the cost model of each operator, including the calculation cost, communication cost, and memory cost. `dp_algorithm_costmodel.h` describes the main process of the dynamic planning algorithm, which consists of a series of graph operations. `costmodel.h` defines the data structures of cost and graph operations. 4. Device management - - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/device_manager.h): This file is used to create and manage cluster device communication groups. The device matrix model is defined by `device_matrix.h`, and the communication domain is managed by `group_manager.h`. + - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/device_manager.h): This file is used to create and manage cluster device communication groups. The device matrix model is defined by `device_matrix.h`, and the communication domain is managed by `group_manager.h`. 5. Entire graph sharding - - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h), and [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_parallel.h): The two files contain the core implementation of the automatic parallel process. `step_auto_parallel.h` calls the strategy search process and generates the `OperatorInfo` of the distributed operator. 
Then in `step_parallel.h`, processes such as operator sharding and tensor redistribution are processed to reconstruct the standalone computing graph in distributed mode. + - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h) and [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/step_parallel.h): The two files contain the core implementation of the automatic parallel process. `step_auto_parallel.h` calls the strategy search process and generates the `OperatorInfo` of the distributed operator. Then in `step_parallel.h`, operations such as operator sharding and tensor redistribution are performed to reconstruct the standalone computing graph in distributed mode. 6. Backward propagation of communication operators - - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/_grad/grad_comm_ops.py): This file defines the backward propagation of communication operators, such as `AllReduce` and `AllGather`. + - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ops/_grad/grad_comm_ops.py): This file defines the backward propagation of communication operators, such as `AllReduce` and `AllGather`. diff --git a/docs/note/source_en/design/mindspore/mindir.md b/docs/note/source_en/design/mindspore/mindir.md index 59f55e31952a36ce34cce402f9a8f328a3f835b3..e6ac2ecc9195a047839e95ecf5401ec4061ab626 100644 --- a/docs/note/source_en/design/mindspore/mindir.md +++ b/docs/note/source_en/design/mindspore/mindir.md @@ -18,7 +18,7 @@ - + ## Overview @@ -88,7 +88,7 @@ lambda (x, y) c end ``` -The corresponding MindIR is [ir.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_en/design/mindspore/images/ir/ir.dot). +The corresponding MindIR is [ir.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_en/design/mindspore/images/ir/ir.dot). 
![image](./images/ir/ir.png) @@ -122,7 +122,7 @@ def hof(x): return res ``` -The corresponding MindIR is [hof.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_en/design/mindspore/images/ir/hof.dot). +The corresponding MindIR is [hof.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_en/design/mindspore/images/ir/hof.dot). ![image](./images/ir/hof.png) In the actual network training scripts, the automatic derivation generic function `GradOperation` and `Partial` and `HyperMap` that are commonly used in the optimizer are typical high-order functions. Higher-order semantics greatly improve the flexibility and simplicity of MindSpore representations. @@ -144,7 +144,7 @@ def fibonacci(n): return fibonacci(n-1) + fibonacci(n-2) ``` -The corresponding MindIR is [cf.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_en/design/mindspore/images/ir/cf.dot). +The corresponding MindIR is [cf.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_en/design/mindspore/images/ir/cf.dot). ![image](./images/ir/cf.png) `fibonacci` is a top-level function graph. Two function graphs at the top level are selected and called by `switch`. `✓fibonacci` is the True branch of the first `if`, and `✗fibonacci` is the False branch of the first `if`. `✓✗fibonacci` called in `✗fibonacci` is the True branch of `elif`, and `✗✗fibonacci` is the False branch of `elif`. The key is, in a MindIR, conditional jumps and recursion are represented in the form of higher-order control flows. For example, `✓✗fibonacci` and `✗fibonacci` are transferred in as parameters of the `switch` operator. `switch` selects a function as the return value based on the condition parameter. In this way, `switch` performs a binary selection operation on the input functions as common values and does not call the functions. The real function call is completed on CNode following `switch`. 
@@ -170,7 +170,7 @@ def ms_closure(): return out1, out2 ``` -The corresponding MindIR is [closure.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_en/design/mindspore/images/ir/closure.dot). +The corresponding MindIR is [closure.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_en/design/mindspore/images/ir/closure.dot). ![image](./images/ir/closure.png) In the example, `a` and `b` are free variables because the variables `a` and `b` in `func_inner` are parameters defined in the referenced parent graph `func_outer`. The variable `closure` is a closure, which is the combination of the function `func_inner` and its context `func_outer(1, 2)`. Therefore, the result of `out1` is 4, which is equivalent to `1+2+1`, and the result of `out2` is 5, which is equivalent to `1+2+2`. diff --git a/docs/note/source_en/design/mindspore/profiler_design.md b/docs/note/source_en/design/mindspore/profiler_design.md index d50f623c90860053e06ea22d64b6c08fa3e52d24..9d15dc4ec78822332633614d148c428311cfc7e2 100644 --- a/docs/note/source_en/design/mindspore/profiler_design.md +++ b/docs/note/source_en/design/mindspore/profiler_design.md @@ -26,7 +26,7 @@ - + ## Background diff --git a/docs/note/source_en/design/overall.rst b/docs/note/source_en/design/overall.rst index bec96d2c15254cf9a888536a6cab4aff59ef9c00..5aeb51194e95a4155161c9c0475c7f23654863c2 100644 --- a/docs/note/source_en/design/overall.rst +++ b/docs/note/source_en/design/overall.rst @@ -4,5 +4,6 @@ Overall Design .. toctree:: :maxdepth: 1 + technical_white_paper mindspore/architecture mindspore/architecture_lite diff --git a/docs/note/source_en/design/technical_white_paper.md b/docs/note/source_en/design/technical_white_paper.md new file mode 100644 index 0000000000000000000000000000000000000000..7f7d956d6c05073bc9dc2febe163ad643100fbcb --- /dev/null +++ b/docs/note/source_en/design/technical_white_paper.md @@ -0,0 +1,5 @@ +# Technical White Paper + +Please stay tuned... 
+ + diff --git a/docs/note/source_en/env_var_list.md b/docs/note/source_en/env_var_list.md new file mode 100644 index 0000000000000000000000000000000000000000..37cbdbd9219cd3fcf4111cb52d6b1141c161b6ea --- /dev/null +++ b/docs/note/source_en/env_var_list.md @@ -0,0 +1,5 @@ +# Environment Variables List + +No English version available right now, welcome to contribute. + + diff --git a/docs/note/source_en/glossary.md b/docs/note/source_en/glossary.md index 6ec11c85d39df9af4669d847255544882ac78be4..0600bdf05fbff97a53c6c7e82568ecad9c087d85 100644 --- a/docs/note/source_en/glossary.md +++ b/docs/note/source_en/glossary.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Beginner` `Intermediate` `Expert` - + | Acronym and Abbreviation | Description | | ----- | ----- | diff --git a/docs/note/source_en/help_seeking_path.md b/docs/note/source_en/help_seeking_path.md index 9ac8c6bb6da04e502a89729e32b1e8644c82db51..c4e51e8c6fd9548433f2ae18ca90c61e0ed0c25f 100644 --- a/docs/note/source_en/help_seeking_path.md +++ b/docs/note/source_en/help_seeking_path.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Beginner` `Intermediate` `Expert` - + This document describes how to seek help and support when you encounter problems in using MindSpore. The following flowchart shows the overall help-seeking process which starts from users encountering a problem in using MindSpore and ends with they finding a proper solution. Help-seeking methods are introduced based on the flowchart. @@ -25,5 +25,5 @@ This document describes how to seek help and support when you encounter problems - If you want a detailed solution, start a help post on the [Ascend forum](https://forum.huawei.com/enterprise/en/forum-100504.html). - After the post is sent, a forum moderator collects the question and contacts technical experts to answer the question. The question will be resolved within three working days. 
- Resolve the problem by referring to solutions provided by technical experts. - + If the expert test result shows that the MindSpore function needs to be improved, you are advised to submit an issue in the [MindSpore repository](https://gitee.com/mindspore). Issues will be resolved in later versions. diff --git a/docs/note/source_en/image_classification_lite.md b/docs/note/source_en/image_classification_lite.md index 0ca49c2c89032753fb2731b4ae5860936f91faeb..ec55fb91168e62da0aeff7645a1f8dac95575cbe 100644 --- a/docs/note/source_en/image_classification_lite.md +++ b/docs/note/source_en/image_classification_lite.md @@ -1,6 +1,6 @@ # Image Classification Model Support (Lite) - + ## Image classification introduction @@ -15,7 +15,7 @@ Image classification is to identity what an image represents, to predict the obj | tree | 0.8584 | | houseplant | 0.7867 | -Using MindSpore Lite to realize image classification [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification). +Using MindSpore Lite to realize image classification [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/image_classification). ## Image classification model list diff --git a/docs/note/source_en/image_segmentation_lite.md b/docs/note/source_en/image_segmentation_lite.md index 2bd3c91b2ddd1b0f72de93ec773db29850e7f1eb..39f4a40700f50b9a7d246cade970d96b14e97483 100644 --- a/docs/note/source_en/image_segmentation_lite.md +++ b/docs/note/source_en/image_segmentation_lite.md @@ -1,12 +1,12 @@ # Image Segmentation Model Support (Lite) - + ## Image Segmentation introduction Image segmentation is used to detect the position of the object in the picture or a pixel belongs to which object. -Using MindSpore Lite to perform image segmentation [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_segmentation). 
+Using MindSpore Lite to perform image segmentation [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/image_segmentation). ## Image segmentation model list diff --git a/docs/note/source_en/index.rst b/docs/note/source_en/index.rst index e3aa74528572fe3a544fd4f85dd3b04f5502852e..f49f2d63f9a44196cf6027532fbfced153506b38 100644 --- a/docs/note/source_en/index.rst +++ b/docs/note/source_en/index.rst @@ -25,7 +25,9 @@ MindSpore Design And Specification benchmark network_list operator_list + syntax_list model_lite + env_var_list .. toctree:: :glob: diff --git a/docs/note/source_en/network_list_ms.md b/docs/note/source_en/network_list_ms.md index 95416313e766dae7ddaafd22da645f65861c3683..23c2ab896c2e34a2c8288ad4e06905f949f09532 100644 --- a/docs/note/source_en/network_list_ms.md +++ b/docs/note/source_en/network_list_ms.md @@ -9,70 +9,74 @@ - + ## Model Zoo | Domain | Sub Domain | Network | Ascend (Graph) | Ascend (PyNative) | GPU (Graph) | GPU (PyNative)| CPU (Graph) | CPU (PyNative) |:------ |:------| :----------- |:------ |:------ |:------ |:------ |:----- |:----- -|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported | Supported -| Computer Vision (CV) | Image Classification | [LeNet(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet_quant/src/lenet_fusion.py) | Supported | Doing | Supported | Doing | Doing | 
Doing -| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [ResNet-50(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet50_quant/models/resnet_quant.py) | Supported | Doing | Doing | Doing | Doing | Doing -|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | 
Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [MobileNetV2(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [NASNET](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ghostnet/src/ghostnet.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [ResNet50-0.65x](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/resnet50_adv_pruning/src/resnet_imgnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [TinyNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/tinynet/src/tinynet.py) | 
Supported | Doing | Doing | Doing | Doing | Doing
- Computer Vision(CV) | Image Classification | [FaceAttributes](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision(CV) | Image Classification | [FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision(CV) | Image Classificationn | [FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Image Classification | [SqueezeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-|Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported | Supported | Supported | Supported | Supported
-| Computer Vision (CV) | Object Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53_quant/src/darknet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Object Detection | [MaskRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn/src/maskrcnn/mask_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| Computer Vision(CV) | Object Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| Computer Vision(CV) | Object Detection | [Retinaface-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing
-| Computer Vision(CV) | Object Detection | [CenterFace](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision(CV) | Object Detection | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Object Detection | [MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Object Detection | [SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Object Detection | [YoloV4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Text Detection | [PSENet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Text Recognition | [CNNCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| Computer Vision (CV) | Semantic Segmentation | [UNet2D-Medical](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Supported | Doing | Doing | Doing | Doing
- Computer Vision (CV) | Keypoint Detection | [Openpose](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported | Supported
-| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported| Doing | Doing
-| Recommender | Recommender System, Search, Ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| Graph Neural Networks (GNN) | Recommender System | [BGCF](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| Audio | Auto Tagging | [FCN-4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| High Performance Computing | Molecular Dynamics | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| High Performance Computing | Ocean Model | [GOMO](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Doing | Doing | Doing
+|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported | Supported
+| Computer Vision (CV) | Image Classification | [LeNet(Quantization)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/lenet_quant/src/lenet_fusion.py) | Supported | Doing | Supported | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [ResNet-50(Quantization)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet50_quant/models/resnet_quant.py) | Supported | Doing | Doing | Doing | Doing | Doing
+|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing
+|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
+|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [InceptionV4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/inceptionv4/src/inceptionv4.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [MobileNetV1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv1/src/mobilenet_v1.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [MobileNetV2(Quantization)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [Shufflenetv1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/shufflenetv1/src/shufflenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [NASNET](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [ResNet50-0.65x](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/resnet50_adv_pruning/src/resnet_imgnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision(CV) | Image Classification | [FaceAttributes](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision(CV) | Image Classification | [FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision(CV) | Image Classification | [FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [SqueezeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Supported | Doing | Doing | Doing | Doing
+|Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported | Supported | Supported | Supported | Supported
+| Computer Vision (CV) | Object Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53(Quantization)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_darknet53_quant/src/darknet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [MaskRCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/maskrcnn/src/maskrcnn/mask_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision(CV) | Object Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/warpctc/src/warpctc.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Computer Vision(CV) | Object Detection | [Retinaface-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing
+| Computer Vision(CV) | Object Detection | [CenterFace](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision(CV) | Object Detection | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Text Detection | [PSENet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Text Recognition | [CNNCTC](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Semantic Segmentation | [UNet2D-Medical](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Keypoint Detection | [Openpose](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Optical Character Recognition | [CRNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/crnn/src/crnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported | Supported
+| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [TextCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/textcnn/src/textcnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported | Supported | Doing
+| Recommender | Recommender System, Search, Ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing
+| Recommender | Recommender System | [NCF](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/ncf/src/ncf.py) | Supported | Doing | Supported | Doing | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| Graph Neural Networks (GNN) | Recommender System | [BGCF](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| Audio | Auto Tagging | [FCN-4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| High Performance Computing | Molecular Dynamics | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| High Performance Computing | Ocean Model | [GOMO](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Supported | Doing | Doing
 
-> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate classic network scripts.
+> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/r1.1/mindinsight/wizard/) to quickly generate classic network scripts.
diff --git a/docs/note/source_en/object_detection_lite.md b/docs/note/source_en/object_detection_lite.md
index b3c7e2d9b3ecd91d3158f71d2f09dc52e7f1aaef..3f20272a0bafb302f5d069c2d6a427db9e7f84e5 100644
--- a/docs/note/source_en/object_detection_lite.md
+++ b/docs/note/source_en/object_detection_lite.md
@@ -1,6 +1,6 @@
 # Object Detection Model Support (Lite)
 
-
+
 
 ## Object dectectin introduction
@@ -12,7 +12,7 @@ Object detection can identify the object in the image and its position in the im
 
 | -------- | ----------- | ---------------- |
 | mouse | 0.78 | [10, 25, 35, 43] |
 
-Using MindSpore Lite to implement object detection [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection).
+Using MindSpore Lite to implement object detection [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/object_detection).
 ## Object detection model list
diff --git a/docs/note/source_en/operator_list_implicit.md b/docs/note/source_en/operator_list_implicit.md
index d74f3e143afeca92bbdfa0f48f797dfdd6a1ff39..68d81ceb6dfb349f99ec59132f9367ec0e9b524b 100644
--- a/docs/note/source_en/operator_list_implicit.md
+++ b/docs/note/source_en/operator_list_implicit.md
@@ -12,7 +12,7 @@
 
-
+
 
 ## Implicit Type Conversion
@@ -38,68 +38,68 @@
 
 | op name |
 |:--------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Assign.html) |
-| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.AssignSub.html) |
-| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyMomentum.html) |
-| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FusedSparseAdam.html) |
-| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FusedSparseLazyAdam.html) |
-| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FusedSparseFtrl.html) |
-| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FusedSparseProximalAdagrad.html) |
-| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAdaMax.html) |
-| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAdadelta.html) |
-| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAdagrad.html) |
-| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAdagradV2.html) |
-| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyAdagrad.html) |
-| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyAdagradV2.html) |
-| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyProximalAdagrad.html) |
-| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyProximalAdagrad.html) |
-| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAddSign.html) |
-| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyPowerSign.html) |
-| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyGradientDescent.html) |
-| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyProximalGradientDescent.html) |
-| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyFtrl.html) |
-| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyFtrlV2.html) |
-| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BitwiseAnd.html) |
-| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BitwiseOr.html) |
-| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BitwiseXor.html) |
-| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TensorAdd.html) |
-| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sub.html) |
-| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Mul.html) |
-| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Pow.html) |
-| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Minimum.html) |
-| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Maximum.html) |
-| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.RealDiv.html) |
-| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Div.html) |
-| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.DivNoNan.html) |
-| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FloorDiv.html) |
-| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TruncateDiv.html) |
-| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TruncateMod.html) |
-| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Mod.html) |
-| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FloorMod.html) |
-| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Atan2.html) |
-| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SquaredDifference.html) |
-| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Xdivy.html) |
-| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Xlogy.html) |
-| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Equal.html) |
-| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApproximateEqual.html) |
-| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.NotEqual.html) |
-| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Greater.html) |
-| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GreaterEqual.html) |
-| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Less.html) |
-| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LessEqual.html) |
-| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalAnd.html) |
-| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalOr.html) |
-| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterNdUpdate.html) |
-| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterNdAdd.html) |
-| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterNdSub.html) |
-| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterNonAliasingAdd.html) |
-| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterUpdate.html) |
-| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterMax.html) |
-| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterMin.html) |
-| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterAdd.html) |
-| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterSub.html) |
-| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterMul.html) |
-| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterDiv.html) |
-| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.AssignAdd.html) |
+| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Assign.html) |
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.AssignSub.html) |
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyMomentum.html) |
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FusedSparseAdam.html) |
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FusedSparseLazyAdam.html) |
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FusedSparseFtrl.html) |
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FusedSparseProximalAdagrad.html) |
+| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAdaMax.html) |
+| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAdadelta.html) |
+| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAdagrad.html) |
+| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAdagradV2.html) |
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyAdagrad.html) |
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyAdagradV2.html) |
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyProximalAdagrad.html) |
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyProximalAdagrad.html) |
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAddSign.html) |
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyPowerSign.html) |
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyGradientDescent.html) |
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyProximalGradientDescent.html) |
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyFtrl.html) |
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyFtrlV2.html) |
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BitwiseAnd.html) |
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BitwiseOr.html) |
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BitwiseXor.html) |
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TensorAdd.html) |
+| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sub.html) |
+| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Mul.html) |
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Pow.html) |
+| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Minimum.html) |
+| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Maximum.html) |
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.RealDiv.html) |
+| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Div.html) |
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.DivNoNan.html) |
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FloorDiv.html) |
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TruncateDiv.html) |
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TruncateMod.html) |
+| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Mod.html) |
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FloorMod.html) |
+| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Atan2.html) |
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SquaredDifference.html) |
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Xdivy.html) |
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Xlogy.html) |
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Equal.html) |
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApproximateEqual.html) |
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.NotEqual.html) |
+| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Greater.html) |
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GreaterEqual.html) |
+| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Less.html) |
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LessEqual.html) |
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalAnd.html) |
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalOr.html) |
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterNdUpdate.html) |
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterNdAdd.html) |
+| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterNdSub.html) |
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterNonAliasingAdd.html) |
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterUpdate.html) |
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterMax.html) |
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterMin.html) |
+| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterAdd.html) |
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterSub.html) |
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterMul.html) |
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterDiv.html) |
+| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.AssignAdd.html) |
 >
\ No newline at end of file
diff --git a/docs/note/source_en/operator_list_lite.md b/docs/note/source_en/operator_list_lite.md
index 600963b57213f840e2d0dac94e0aabefc255f9d8..e3d5a1f12fb7f5489d1852b087fe8bf9951da5bb 100644
--- a/docs/note/source_en/operator_list_lite.md
+++ b/docs/note/source_en/operator_list_lite.md
@@ -2,7 +2,7 @@
 
 `Linux` `On Device` `Inference Application` `Beginner` `Intermediate` `Expert`
 
-
+
 
| Operation | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | NPU | Supported TensorFlow
Lite operators | Supported Caffe
Lite operators | Supported ONNX
Lite operators | | --------------------- | ------------ | ------------ | ------------ | ------------- | ------------ | ------------ | --------- | ------------------------------- | ------------------------ | ----------------------------------------------- | diff --git a/docs/note/source_en/operator_list_ms.md b/docs/note/source_en/operator_list_ms.md index 8611a4282297771d8e5838aa48b862ce3ad85bc9..ba0a5a8399c291c23c6638ae152efcb2f25f98d4 100644 --- a/docs/note/source_en/operator_list_ms.md +++ b/docs/note/source_en/operator_list_ms.md @@ -2,9 +2,9 @@ `Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert` - + You can choose the operators that are suitable for your hardware platform for building the network model according to your needs. -- Supported operator lists in module `mindspore.nn` could be checked on [API page of mindspore.nn](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.nn.html). -- Supported operator lists in module `mindspore.ops` could be checked on [API page of mindspore.ops](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.ops.html). +- Supported operator lists in module `mindspore.nn` can be checked on the [API page of mindspore.nn](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.nn.html). +- Supported operator lists in module `mindspore.ops` can be checked on the [API page of mindspore.ops](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.ops.html). 
diff --git a/docs/note/source_en/operator_list_parallel.md b/docs/note/source_en/operator_list_parallel.md index 038f087317d222b155f134c46c08459cc58980ba..938211cc9a11fda4d1c0561a51f2485f1c24e7ff 100644 --- a/docs/note/source_en/operator_list_parallel.md +++ b/docs/note/source_en/operator_list_parallel.md @@ -9,117 +9,117 @@ - + ## Distributed Operator | op name | constraints | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Abs.html) | None | -| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ACos.html) | None | -| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Acosh.html) | None | -| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApproximateEqual.html) | None | -| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ArgMaxWithValue.html) | When the input_x is splited on the axis dimension, the distributed result may be inconsistent with that on the single machine. | -| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ArgMinWithValue.html) | When the input_x is splited on the axis dimension, the distributed result may be inconsistent with that on the single machine. 
| -| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Asin.html) | None | -| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Asinh.html) | None | -| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Assign.html) | None | -| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.AssignAdd.html) | None | -| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.AssignSub.html) | None | -| [mindspore.ops.Atan](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Atan.html) | None | -| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Atan2.html) | None | -| [mindspore.ops.Atanh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Atanh.html) | None | -| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BatchMatMul.html) | `transpore_a=True` is not supported. | -| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BesselI0e.html) | None | -| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BesselI1e.html) | None | -| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BiasAdd.html) | None | -| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BroadcastTo.html) | None | -| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Cast.html) | The shard strategy is ignored in the Auto Parallel and Semi Auto Parallel mode. 
| -| [mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Ceil.html) | None | -| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Concat.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Cos.html) | None | -| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Cosh.html) | None | -| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Div.html) | None | -| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.DivNoNan.html) | None | -| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.DropoutDoMask.html) | Need to be used in conjunction with `DropoutGenMask`,configuring shard strategy is not supported. | -| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.DropoutGenMask.html) | Need to be used in conjunction with `DropoutDoMask`. | -| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Elu.html) | None | -| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.EmbeddingLookup.html) | The same as GatherV2. 
| -| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Equal.html) | None | -| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Erf.html) | None | -| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Erfc.html) | None | -| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Exp.html) | None | -| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ExpandDims.html) | None | -| [mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Expm1.html) | None | -| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Floor.html) | None | -| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FloorDiv.html) | None | -| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FloorMod.html) | None | -| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GatherV2.html) | Only support 1-dim and 2-dim parameters and the last dimension of the input_params should be 32-byte aligned; Scalar input_indices is not supported; Repeated calculation is not supported when the parameters are split in the dimension of the axis; Split input_indices and input_params at the same time is not supported. 
| -| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Gelu.html) | None | -| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Greater.html) | None | -| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GreaterEqual.html) | None | -| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Inv.html) | None | -| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.L2Normalize.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Less.html) | None | -| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LessEqual.html) | None | -| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalAnd.html) | None | -| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalNot.html) | None | -| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalOr.html) | None | -| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Log.html) | None | -| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Log1p.html) | None | -| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogSoftmax.html) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. 
| -| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.MatMul.html) | `transpose_a=True` is not supported. | -| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Maximum.html) | None | -| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Minimum.html) | None | -| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Mod.html) | None | -| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Mul.html) | None | -| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Neg.html) | None | -| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.NotEqual.html) | None | -| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.OneHot.html) | Only support 1-dim indices. Must configure strategy for the output and the first and second inputs. | -| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.OnesLike.html) | None | -| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Pack.html) | None | -| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Pow.html) | None | -| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.PReLU.html) | When the shape of weight is not [1], the shard strategy in channel dimension of input_x should be consistent with weight. 
| -| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.RealDiv.html) | None | -| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Reciprocal.html) | None | -| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReduceMax.html) | When the input_x is splited on the axis dimension, the distributed result may be inconsistent with that on the single machine. | -| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReduceMin.html) | When the input_x is splited on the axis dimension, the distributed result may be inconsistent with that on the single machine. | -| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReduceSum.html) | None | -| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReduceMean.html) | None | -| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReLU.html) | None | -| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReLU6.html) | None | -| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReLUV2.html) | None | -| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Reshape.html) | Configuring shard strategy is not supported. In auto parallel mode, if multiple operators are followed by the reshape operator, different shard strategys are not allowed to be configured for these operators. 
| -| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Round.html) | None | -| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Rsqrt.html) | None | -| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sigmoid.html) | None | -| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | None | -| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sign.html) | None | -| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sin.html) | None | -| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sinh.html) | None | -| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Softmax.html) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | The last dimension of logits and labels can't be splited; Only supports using output[0]. | -| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Softplus.html) | None | -| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Softsign.html) | None | -| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseGatherV2.html) | The same as GatherV2. 
| -| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Split.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sqrt.html) | None | -| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Square.html) | None | -| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Squeeze.html) | None | -| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.StridedSlice.html) | Only support mask with all 0 values; The dimension needs to be split should be all extracted; Split is supported when the strides of dimension is 1. | -| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Slice.html) | The dimension needs to be split should be all extracted. | -| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sub.html) | None | -| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Tan.html) | None | -| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Tanh.html) | None | -| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TensorAdd.html) | None | -| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Tile.html) | Only support configuring shard strategy for multiples. 
| -| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TopK.html) | The input_x can't be split into the last dimension, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Transpose.html) | None | -| [mindspore.ops.Unique](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Unique.html) | Only support the repeat calculate shard strategy (1,). | -| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.UnsortedSegmentSum.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. | -| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.UnsortedSegmentMin.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. Note that if the segment id i is missing, then the output[i] will be filled with the maximum of the input type. The user needs to mask the maximum value to avoid value overflow. The communication operation such as AllReudce will raise an Run Task Error due to overflow. | -| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.UnsortedSegmentMax.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. Note that if the segment id i is missing, then the output[i] will be filled with the minimum of the input type. The user needs to mask the minimum value to avoid value overflow. The communication operation such as AllReudce will raise an Run Task Error due to overflow. 
| -| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ZerosLike.html) | None | +| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Abs.html) | None | +| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ACos.html) | None | +| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Acosh.html) | None | +| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApproximateEqual.html) | None | +| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ArgMaxWithValue.html) | When the input_x is split along the axis dimension, the distributed result may be inconsistent with that on the single machine. | +| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ArgMinWithValue.html) | When the input_x is split along the axis dimension, the distributed result may be inconsistent with that on the single machine. 
| +| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Asin.html) | None | +| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Asinh.html) | None | +| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Assign.html) | None | +| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.AssignAdd.html) | None | +| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.AssignSub.html) | None | +| [mindspore.ops.Atan](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Atan.html) | None | +| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Atan2.html) | None | +| [mindspore.ops.Atanh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Atanh.html) | None | +| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BatchMatMul.html) | `transpose_a=True` is not supported. | +| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BesselI0e.html) | None | +| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BesselI1e.html) | None | +| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BiasAdd.html) | None | +| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BroadcastTo.html) | None | +| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Cast.html) | The shard strategy is ignored in the Auto Parallel and Semi Auto Parallel modes. 
| +| [mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Ceil.html) | None | +| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Concat.html) | The input_x can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with that on a single machine. | +| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Cos.html) | None | +| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Cosh.html) | None | +| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Div.html) | None | +| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.DivNoNan.html) | None | +| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.DropoutDoMask.html) | Must be used together with `DropoutGenMask`; configuring a shard strategy is not supported. | +| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.DropoutGenMask.html) | Must be used together with `DropoutDoMask`. | +| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Elu.html) | None | +| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.EmbeddingLookup.html) | The same as GatherV2. 
| +| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Equal.html) | None | +| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Erf.html) | None | +| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Erfc.html) | None | +| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Exp.html) | None | +| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ExpandDims.html) | None | +| [mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Expm1.html) | None | +| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Floor.html) | None | +| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FloorDiv.html) | None | +| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FloorMod.html) | None | +| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GatherV2.html) | Only supports 1-dim and 2-dim parameters, and the last dimension of input_params should be 32-byte aligned; scalar input_indices is not supported; repeated calculation is not supported when the parameters are split along the axis dimension; splitting input_indices and input_params at the same time is not supported. 
| +| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Gelu.html) | None | +| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Greater.html) | None | +| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GreaterEqual.html) | None | +| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Inv.html) | None | +| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.L2Normalize.html) | The input_x can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with that on a single machine. | +| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Less.html) | None | +| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LessEqual.html) | None | +| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalAnd.html) | None | +| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalNot.html) | None | +| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalOr.html) | None | +| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Log.html) | None | +| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Log1p.html) | None | +| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogSoftmax.html) | The logits can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with that on a single machine. 
| +| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.MatMul.html) | `transpose_a=True` is not supported. | +| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Maximum.html) | None | +| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Minimum.html) | None | +| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Mod.html) | None | +| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Mul.html) | None | +| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Neg.html) | None | +| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.NotEqual.html) | None | +| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.OneHot.html) | Only supports 1-dim indices. A strategy must be configured for the output and for the first and second inputs. | +| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.OnesLike.html) | None | +| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Pack.html) | None | +| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Pow.html) | None | +| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.PReLU.html) | When the shape of weight is not [1], the shard strategy of input_x in the channel dimension should be consistent with that of weight. 
| +| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.RealDiv.html) | None | +| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Reciprocal.html) | None | +| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReduceMax.html) | When the input_x is split along the axis dimension, the distributed result may be inconsistent with that on the single machine. | +| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReduceMin.html) | When the input_x is split along the axis dimension, the distributed result may be inconsistent with that on the single machine. | +| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReduceSum.html) | None | +| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReduceMean.html) | None | +| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReLU.html) | None | +| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReLU6.html) | None | +| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReLUV2.html) | None | +| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Reshape.html) | Configuring shard strategy is not supported. In auto parallel mode, if multiple operators are followed by the reshape operator, different shard strategies are not allowed to be configured for these operators. 
| +| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Round.html) | None | +| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Rsqrt.html) | None | +| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sigmoid.html) | None | +| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | None | +| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sign.html) | None | +| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sin.html) | None | +| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sinh.html) | None | +| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Softmax.html) | The logits can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result. | +| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | The last dimension of logits and labels can't be split; only using output[0] is supported. | +| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Softplus.html) | None | +| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Softsign.html) | None | +| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseGatherV2.html) | The same as GatherV2. 
| +| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Split.html) | The input_x can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result. | +| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sqrt.html) | None | +| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Square.html) | None | +| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Squeeze.html) | None | +| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.StridedSlice.html) | Only masks with all 0 values are supported; a dimension to be split must be fully extracted; splitting is supported only when the stride of the dimension is 1. | +| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Slice.html) | A dimension to be split must be fully extracted. | +| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sub.html) | None | +| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Tan.html) | None | +| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Tanh.html) | None | +| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TensorAdd.html) | None | +| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Tile.html) | Only configuring the shard strategy for multiples is supported. | +| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TopK.html) | The input_x can't be split along the last dimension; otherwise, the result is mathematically inconsistent with the single-machine result. 
| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Transpose.html) | None | +| [mindspore.ops.Unique](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Unique.html) | Only the repeated-calculation shard strategy (1,) is supported. | +| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentSum.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. | +| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentMin.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. Note that if the segment id i is missing, then output[i] will be filled with the maximum of the input type. The user needs to mask the maximum value to avoid value overflow; otherwise, a communication operation such as AllReduce will raise a Run Task Error due to overflow. | +| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentMax.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. Note that if the segment id i is missing, then output[i] will be filled with the minimum of the input type. The user needs to mask the minimum value to avoid value overflow; otherwise, a communication operation such as AllReduce will raise a Run Task Error due to overflow. | +| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ZerosLike.html) | None | > Repeated calculation means that the devices are not fully used. For example, if a cluster has 8 devices for distributed training but the splitting strategy only cuts the input into 4 copies, repeated calculation occurs. 
> diff --git a/docs/note/source_en/paper_list.md b/docs/note/source_en/paper_list.md index e4efdfd25275f2fc8331aa248f48b80bd99c52e7..92a9dee6423ed5fc80ff22f686cd1f889251917c 100644 --- a/docs/note/source_en/paper_list.md +++ b/docs/note/source_en/paper_list.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Framework Development` `Intermediate` `Expert` `Contributor` - + | Title | Author | Field | Journal/Conference | Link | | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------- | -------------- | ------------------------------------------------------------ | diff --git a/docs/note/source_en/posenet_lite.md b/docs/note/source_en/posenet_lite.md index 74468800acadb2bb14448368fbad592463968c97..bead3bfbef35046615ab4a347c3c7766559277f1 100644 --- a/docs/note/source_en/posenet_lite.md +++ b/docs/note/source_en/posenet_lite.md @@ -1,6 +1,6 @@ # Posenet Model Support (Lite) - + ## Posenet introduction @@ -12,4 +12,4 @@ The blue marking points detect the distribution of facial features of the human ![image_posenet](images/posenet_detection.png) -Using MindSpore Lite to realize posenet [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/posenet). +Using MindSpore Lite to realize posenet [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/posenet). diff --git a/docs/note/source_en/roadmap.md b/docs/note/source_en/roadmap.md index ca68dc0a654e245ad069c8c7da176dd2f0c2b6c7..2d17431f7e4d195b6eba2a057e6da64b9f7ee646 100644 --- a/docs/note/source_en/roadmap.md +++ b/docs/note/source_en/roadmap.md @@ -14,7 +14,7 @@ - + MindSpore's top priority plans in the year are displayed as follows. We will continuously adjust the priority based on user feedback. 
diff --git a/docs/note/source_en/scene_detection_lite.md b/docs/note/source_en/scene_detection_lite.md index 0bd910475c637ea7b6380ce984670df44eb56aeb..673fea50ab7d7d07b9236895a9efb292ae142ac7 100644 --- a/docs/note/source_en/scene_detection_lite.md +++ b/docs/note/source_en/scene_detection_lite.md @@ -1,12 +1,12 @@ # Scene Detection Model Support (Lite) - + ## Scene detection introduction Scene detection can identify the type of scene in the device's camera. -Using MindSpore Lite to implement scene detection [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/scene_detection). +Using MindSpore Lite to implement scene detection [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/scene_detection). ## Scene detection model list diff --git a/docs/note/source_en/static_graph_syntax_support.md b/docs/note/source_en/static_graph_syntax_support.md new file mode 100644 index 0000000000000000000000000000000000000000..bd826e2d17fb0ea273deeca1f57f57bb23db9fd8 --- /dev/null +++ b/docs/note/source_en/static_graph_syntax_support.md @@ -0,0 +1,5 @@ +# Static Graph Syntax Support + +No English version is available right now; contributions are welcome. + + diff --git a/docs/note/source_en/style_transfer_lite.md b/docs/note/source_en/style_transfer_lite.md index dcf7100c5933bd53c0af2fb49e1d6dbd484a2525..88c0f56fc896533db7fffc3e9239fad17b4a297a 100644 --- a/docs/note/source_en/style_transfer_lite.md +++ b/docs/note/source_en/style_transfer_lite.md @@ -1,6 +1,6 @@ # Style Transfer Model Support (Lite) - + ## Style transfer introduction @@ -14,4 +14,4 @@ Selecting the first standard image from left to perform the style transfer, as s ![image_after_transfer](images/after_transfer.png) -Using MindSpore Lite to realize style transfer [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/style_transfer). 
+Using MindSpore Lite to realize style transfer [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/style_transfer). diff --git a/docs/note/source_en/syntax_list.rst b/docs/note/source_en/syntax_list.rst new file mode 100644 index 0000000000000000000000000000000000000000..597c59c2b324118dffe760c9e087fd773f644493 --- /dev/null +++ b/docs/note/source_en/syntax_list.rst @@ -0,0 +1,7 @@ +Syntax Support +================ + +.. toctree:: + :maxdepth: 1 + + static_graph_syntax_support \ No newline at end of file diff --git a/docs/note/source_zh_cn/benchmark.md b/docs/note/source_zh_cn/benchmark.md index e0455d68326fff42797c288cdaf151753529afe6..a6768dde33a871397254b843b811e0f77a66c7df 100644 --- a/docs/note/source_zh_cn/benchmark.md +++ b/docs/note/source_zh_cn/benchmark.md @@ -13,9 +13,9 @@ - + -本文介绍MindSpore的基准性能。MindSpore网络定义可参考[Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)。 +本文介绍MindSpore的基准性能。MindSpore网络定义可参考[Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo)。 ## 训练性能 diff --git a/docs/note/source_zh_cn/conf.py b/docs/note/source_zh_cn/conf.py index 95d7701759707ab95a3c199cd8a22e2e2cc1194d..7be5f453c21b75703c763a14c8180127aed60e6b 100644 --- a/docs/note/source_zh_cn/conf.py +++ b/docs/note/source_zh_cn/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md b/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md index 256d719bcf7dc1e789d9895d3841598691da4672..e7d2bedd711a4d1ae8c802b43fb549b44668644a 100644 --- a/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md +++ b/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md @@ -14,7 +14,7 @@ - + ## 总体设计 @@ -54,10 +54,10 @@ 
Monitor提供RDP、ZCDP等回调函数,用于监测模型的差分隐私预算 ## 代码实现 -- [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py):这个文件实现了差分隐私训练所需的噪声生成机制,包括简单高斯噪声、自适应高斯噪声、自适应裁剪高斯噪声等。 -- [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py):这个文件实现了使用噪声生成机制在反向传播时添加噪声的根本逻辑。 -- [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py):实现了计算差分隐私预算的回调函数,模型训练过程中,会反馈当前的差分隐私预算。 -- [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py):这个文件实现了计算损失和梯度的逻辑,差分隐私训练的梯度截断逻辑在此文件中实现,且model.py是用户使用差分隐私训练能力的入口。 +- [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py):这个文件实现了差分隐私训练所需的噪声生成机制,包括简单高斯噪声、自适应高斯噪声、自适应裁剪高斯噪声等。 +- [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/optimizer/optimizer.py):这个文件实现了使用噪声生成机制在反向传播时添加噪声的根本逻辑。 +- [monitor.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/monitor/monitor.py):实现了计算差分隐私预算的回调函数,模型训练过程中,会反馈当前的差分隐私预算。 +- [model.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/train/model.py):这个文件实现了计算损失和梯度的逻辑,差分隐私训练的梯度截断逻辑在此文件中实现,且model.py是用户使用差分隐私训练能力的入口。 ## 参考文献 diff --git a/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md b/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md index 0d753d7bb92744cb7bc2ae96a665ac09fafec77b..9bd10b6fb8493284757db011821ba858483706b4 100644 --- a/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md +++ b/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md @@ -3,6 +3,7 @@ `Linux` `Ascend` `GPU` `CPU` `数据准备` `模型开发` `模型训练` `模型调优` `企业` `高级` + - [AI模型安全测试](#ai模型安全测试) - [背景](#背景) - [Fuzz Testing设计图](#fuzz-testing设计图) @@ -12,7 +13,7 @@ - + ## 背景 @@ -60,10 +61,10 @@ Fuzz Testing架构主要包括三个模块: ## 代码实现 -1. 
[fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py):Fuzzer总体流程。 -2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py):神经元覆盖率指标,包括KMNC,NBC,SNAC。 -3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py):图像变异方法,包括基于像素值的变化方法和仿射变化方法。 -4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks):对抗样本攻击方法,包含多种黑盒、白盒攻击方法。 +1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/fuzzing.py):Fuzzer总体流程。 +2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/model_coverage_metrics.py):神经元覆盖率指标,包括KMNC,NBC,SNAC。 +3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/image_transform.py):图像变异方法,包括基于像素值的变化方法和仿射变化方法。 +4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/r1.1/mindarmour/adv_robustness/attacks):对抗样本攻击方法,包含多种黑盒、白盒攻击方法。 ## 参考文献 diff --git a/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md index be8a8c686bb95ba565506ded1940d8b87281ca51..46e43d52cfdad248e9d315905ae44631b53efa78 100644 --- a/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md +++ b/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md @@ -15,7 +15,7 @@ - + ## 特性背景 @@ -71,4 +71,4 @@ RESTful API接口是MindInsight前后端进行数据交互的接口。 #### 文件接口设计 MindSpore与MindInsight之间的数据交互,采用[protobuf](https://developers.google.cn/protocol-buffers/docs/pythontutorial?hl=zh-cn)定义数据格式。 -[summary.proto文件](https://gitee.com/mindspore/mindinsight/blob/master/mindinsight/datavisual/proto_files/mindinsight_summary.proto)为总入口,计算图的消息对象定义为 
`GraphProto`。`GraphProto`的详细定义可以参考[anf_ir.proto文件](https://gitee.com/mindspore/mindinsight/blob/master/mindinsight/datavisual/proto_files/mindinsight_anf_ir.proto)。 +[summary.proto文件](https://gitee.com/mindspore/mindinsight/blob/r1.1/mindinsight/datavisual/proto_files/mindinsight_summary.proto)为总入口,计算图的消息对象定义为 `GraphProto`。`GraphProto`的详细定义可以参考[anf_ir.proto文件](https://gitee.com/mindspore/mindinsight/blob/r1.1/mindinsight/datavisual/proto_files/mindinsight_anf_ir.proto)。 diff --git a/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md index 44d4db5b12ddc5dc04e3ed2cedfb16dd69bb382d..18d9114f6b2b1a8d272799eb11b6a0eec53e8b36 100644 --- a/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md +++ b/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md @@ -14,7 +14,7 @@ - + ## 特性背景 @@ -55,7 +55,7 @@ Tensor可视支持1-N维的Tensor以表格或直方图的形式展示,对于0 ### 接口设计 -在张量可视中,主要有文件接口和RESTful API接口,其中文件接口为[summary.proto](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/utils/summary.proto)文件,是MindInsight和MindSpore进行数据对接的接口。 RESTful API接口是MindInsight前后端进行数据交互的接口,是内部接口。 +在张量可视中,主要有文件接口和RESTful API接口,其中文件接口为[summary.proto](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/utils/summary.proto)文件,是MindInsight和MindSpore进行数据对接的接口。 RESTful API接口是MindInsight前后端进行数据交互的接口,是内部接口。 #### 文件接口设计 @@ -102,4 +102,4 @@ Tensor可视支持1-N维的Tensor以表格或直方图的形式展示,对于0 } ``` -而TensorProto的定义在[anf_ir.proto](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/utils/anf_ir.proto)文件中。 +而TensorProto的定义在[anf_ir.proto](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/utils/anf_ir.proto)文件中。 diff --git a/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md index 3a3e33f58a303088a8d7474c6ab192d5805afe5d..ed0b180cb4832bc3102ffaa579b7bd2c8f2b77a1 100644 --- 
a/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md +++ b/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md @@ -18,7 +18,7 @@ - + [MindInsight](https://gitee.com/mindspore/mindinsight)是MindSpore的可视化调试调优组件。通过MindInsight可以完成训练可视、性能调优、精度调优等任务。 @@ -40,11 +40,11 @@ 训练信息收集API包括: -- 基于summary算子的训练信息收集API。这部分API主要包括4个summary算子,即用于记录标量数据的ScalarSummary算子,用于记录图片数据的ImageSummary算子,用于记录参数分布图(直方图)数据的HistogramSummary算子和用于记录张量数据的TensorSummary算子。请访问[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list.html)以获取关于这些算子的信息。 +- 基于summary算子的训练信息收集API。这部分API主要包括4个summary算子,即用于记录标量数据的ScalarSummary算子,用于记录图片数据的ImageSummary算子,用于记录参数分布图(直方图)数据的HistogramSummary算子和用于记录张量数据的TensorSummary算子。请访问[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list.html)以获取关于这些算子的信息。 -- 基于Python API的训练信息收集API。通过[SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value)方法,可以在Python代码中完成训练信息的收集。 +- 基于Python API的训练信息收集API。通过[SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value)方法,可以在Python代码中完成训练信息的收集。 -- 易用的训练信息收集callback。通过[SummaryCollector](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector)这一callback可以方便地收集常用训练信息到训练日志中。 +- 易用的训练信息收集callback。通过[SummaryCollector](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector)这一callback可以方便地收集常用训练信息到训练日志中。 训练信息持久化模块主要包括用于管理缓存的summary_record模块和用于并行处理数据、写入文件的write_pool模块。训练信息持久化后,存储在训练日志文件(summary文件中)。 diff --git a/docs/note/source_zh_cn/design/mindspore/architecture.md b/docs/note/source_zh_cn/design/mindspore/architecture.md index 36a14407ccb2387053ca1823be8132623ec700f6..6b9d8f414839c632fc2cea8dbcdbd1ef5460f44f 100644 --- 
a/docs/note/source_zh_cn/design/mindspore/architecture.md +++ b/docs/note/source_zh_cn/design/mindspore/architecture.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `端侧` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者` - + MindSpore框架架构总体分为MindSpore前端表示层、MindSpore计算图引擎和MindSpore后端运行时三层。 diff --git a/docs/note/source_zh_cn/design/mindspore/architecture_lite.md b/docs/note/source_zh_cn/design/mindspore/architecture_lite.md index c25e4442923dedc94a42b065bcf886a30fc9cb92..2d8cecfd16e5a8fbc503921e51a46d1287c28cdc 100644 --- a/docs/note/source_zh_cn/design/mindspore/architecture_lite.md +++ b/docs/note/source_zh_cn/design/mindspore/architecture_lite.md @@ -2,7 +2,7 @@ `Linux` `Windows` `端侧` `推理应用` `中级` `高级` `贡献者` - + MindSpore Lite框架的总体架构如下所示: diff --git a/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md b/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md index 97a9a328b99dba77ff968775ef848096a0c995fc..9dc80e56d6da10cea318decbf2b6cd6ee41a596f 100644 --- a/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md +++ b/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md @@ -18,7 +18,7 @@ - + ## 背景 @@ -66,12 +66,12 @@ 1. 
集合通信 - - [management.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/communication/management.py):这个文件中涵盖了集合通信过程中常用的`helper`函数接口,例如获取集群数量和卡的序号等。当在Ascend芯片上执行时,框架会加载环境上的`libhccl.so`库文件,通过它来完成从Python层到底层的通信接口调用。 - - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/operations/comm_ops.py):MindSpore将支持的集合通信操作都封装为算子的形式放在这个文件下,包括`AllReduce`、`AllGather`、`ReduceScatter`和`Broadcast`等。`PrimitiveWithInfer`中除了定义算子所需属性外,还包括构图过程中输入到输出的`shape`和`dtype`推导。 + - [management.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/communication/management.py):这个文件中涵盖了集合通信过程中常用的`helper`函数接口,例如获取集群数量和卡的序号等。当在Ascend芯片上执行时,框架会加载环境上的`libhccl.so`库文件,通过它来完成从Python层到底层的通信接口调用。 + - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ops/operations/comm_ops.py):MindSpore将支持的集合通信操作都封装为算子的形式放在这个文件下,包括`AllReduce`、`AllGather`、`ReduceScatter`和`Broadcast`等。`PrimitiveWithInfer`中除了定义算子所需属性外,还包括构图过程中输入到输出的`shape`和`dtype`推导。 2. 梯度聚合 - - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/nn/wrap/grad_reducer.py):这个文件实现了梯度聚合的过程。对入参`grads`用`HyperMap`展开后插入`AllReduce`算子,这里采用的是全局通信组,用户也可以根据自己网络的需求仿照这个模块进行自定义开发。MindSpore中单机和分布式执行共用一套网络封装接口,在`Cell`内部通过`ParallelMode`来区分是否要对梯度做聚合操作,网络封装接口建议参考`TrainOneStepCell`代码实现。 + - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/nn/wrap/grad_reducer.py):这个文件实现了梯度聚合的过程。对入参`grads`用`HyperMap`展开后插入`AllReduce`算子,这里采用的是全局通信组,用户也可以根据自己网络的需求仿照这个模块进行自定义开发。MindSpore中单机和分布式执行共用一套网络封装接口,在`Cell`内部通过`ParallelMode`来区分是否要对梯度做聚合操作,网络封装接口建议参考`TrainOneStepCell`代码实现。 ## 自动并行 @@ -121,19 +121,19 @@ ### 自动并行代码 1. 
张量排布模型 - - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/tensor_layout):这个目录下包含了张量排布模型相关功能的定义及实现。其中`tensor_layout.h`中声明了一个张量排布模型需要具备的成员变量`tensor_map_origin_`,`tensor_shape_`和`device_arrangement_`等。在`tensor_redistribution.h`中声明了实现张量排布间`from_origin_`和`to_origin_`变换的相关方法,将推导得到的重排布操作保存在`operator_list_`中返回,并计算得到重排布所需的通信开销`comm_cost_`, 内存开销`memory_cost_`及计算开销`computation_cost_`。 + - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/tensor_layout):这个目录下包含了张量排布模型相关功能的定义及实现。其中`tensor_layout.h`中声明了一个张量排布模型需要具备的成员变量`tensor_map_origin_`,`tensor_shape_`和`device_arrangement_`等。在`tensor_redistribution.h`中声明了实现张量排布间`from_origin_`和`to_origin_`变换的相关方法,将推导得到的重排布操作保存在`operator_list_`中返回,并计算得到重排布所需的通信开销`comm_cost_`, 内存开销`memory_cost_`及计算开销`computation_cost_`。 2. 分布式算子 - - [ops_info](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/ops_info):这个目录下包含了分布式算子的具体实现。在`operator_info.h`中定义了分布式算子实现的基类`OperatorInfo`,开发一个分布式算子需要继承于这个基类并显式实现相关的虚函数。其中`InferTensorInfo`,`InferTensorMap`和`InferDevMatrixShape`函数定义了推导该算子输入、输出张量排布模型的算法。`InferForwardCommunication`,`InferMirrorOps`等函数定义了切分该算子需要插入的额外计算、通信操作。`CheckStrategy`和`GenerateStrategies`函数定义了算子切分策略校验和生成。根据切分策略`SetCostUnderStrategy`将会产生该策略下分布式算子的并行开销值`operator_cost_`。 + - [ops_info](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/ops_info):这个目录下包含了分布式算子的具体实现。在`operator_info.h`中定义了分布式算子实现的基类`OperatorInfo`,开发一个分布式算子需要继承于这个基类并显式实现相关的虚函数。其中`InferTensorInfo`,`InferTensorMap`和`InferDevMatrixShape`函数定义了推导该算子输入、输出张量排布模型的算法。`InferForwardCommunication`,`InferMirrorOps`等函数定义了切分该算子需要插入的额外计算、通信操作。`CheckStrategy`和`GenerateStrategies`函数定义了算子切分策略校验和生成。根据切分策略`SetCostUnderStrategy`将会产生该策略下分布式算子的并行开销值`operator_cost_`。 3. 
策略搜索算法 - - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/auto_parallel):这个目录下实现了切分策略搜索的算法。`graph_costmodel.h`定义了构图信息,其中每个点表示一个算子`OperatorInfo`,有向边`edge_costmodel.h`表示算子的输入输出关系及重排布的代价。`operator_costmodel.h`中定义了每个算子的代价模型,包括计算代价、通信代价和内存代价。`dp_algorithm_costmodel.h`主要描述了动态规划算法的主要流程,由一系列图操作组成。在`costmodel.h`中定义了cost和图操作的数据结构。 + - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/auto_parallel):这个目录下实现了切分策略搜索的算法。`graph_costmodel.h`定义了构图信息,其中每个点表示一个算子`OperatorInfo`,有向边`edge_costmodel.h`表示算子的输入输出关系及重排布的代价。`operator_costmodel.h`中定义了每个算子的代价模型,包括计算代价、通信代价和内存代价。`dp_algorithm_costmodel.h`主要描述了动态规划算法的主要流程,由一系列图操作组成。在`costmodel.h`中定义了cost和图操作的数据结构。 4. 设备管理 - - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/device_manager.h):这个文件实现了集群设备通信组的创建及管理。其中设备矩阵模型由`device_matrix.h`定义,通信域由`group_manager.h`管理。 + - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/device_manager.h):这个文件实现了集群设备通信组的创建及管理。其中设备矩阵模型由`device_matrix.h`定义,通信域由`group_manager.h`管理。 5. 整图切分 - - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h), [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_parallel.h):这两个文件包含了自动并行流程的核心实现。首先由`step_auto_parallel.h`调用策略搜索流程并产生分布式算子的`OperatorInfo`,然后在`step_parallel.h`中处理算子切分和张量重排布等流程,对单机计算图进行分布式改造。 + - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h), [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/step_parallel.h):这两个文件包含了自动并行流程的核心实现。首先由`step_auto_parallel.h`调用策略搜索流程并产生分布式算子的`OperatorInfo`,然后在`step_parallel.h`中处理算子切分和张量重排布等流程,对单机计算图进行分布式改造。 6. 
通信算子反向 - - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/_grad/grad_comm_ops.py):这个文件定义了`AllReduce`和`AllGather`等通信算子的反向操作。 + - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ops/_grad/grad_comm_ops.py):这个文件定义了`AllReduce`和`AllGather`等通信算子的反向操作。 diff --git a/docs/note/source_zh_cn/design/mindspore/mindir.md b/docs/note/source_zh_cn/design/mindspore/mindir.md index 01fd8b8ab770c1db0a3749607f368199f41e36bc..db6ffb70beadc6f7b2642d8b01d73cb21b254ffd 100644 --- a/docs/note/source_zh_cn/design/mindspore/mindir.md +++ b/docs/note/source_zh_cn/design/mindspore/mindir.md @@ -17,7 +17,7 @@ - + ## 简介 @@ -87,7 +87,7 @@ lambda (x, y) c end ``` -对应的MindIR为[ir.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_zh_cn/design/mindspore/images/ir/ir.dot): +对应的MindIR为[ir.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_zh_cn/design/mindspore/images/ir/ir.dot): ![image](./images/ir/ir.png) @@ -121,7 +121,7 @@ def hof(x): return res ``` -对应的MindIR为[hof.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_zh_cn/design/mindspore/images/ir/hof.dot): +对应的MindIR为[hof.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_zh_cn/design/mindspore/images/ir/hof.dot): ![image](./images/ir/hof.png) @@ -144,7 +144,7 @@ def fibonacci(n): return fibonacci(n-1) + fibonacci(n-2) ``` -对应的MindIR为[cf.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_zh_cn/design/mindspore/images/ir/cf.dot): +对应的MindIR为[cf.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_zh_cn/design/mindspore/images/ir/cf.dot): ![image](./images/ir/cf.png) @@ -171,7 +171,7 @@ def ms_closure(): return out1, out2 ``` -对应的MindIR为[closure.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_zh_cn/design/mindspore/images/ir/closure.dot): +对应的MindIR为[closure.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_zh_cn/design/mindspore/images/ir/closure.dot): 
![image](./images/ir/closure.png) diff --git a/docs/note/source_zh_cn/design/mindspore/profiler_design.md b/docs/note/source_zh_cn/design/mindspore/profiler_design.md index 4b5bcb51685a791e50c6cd25d2c20e4366ae8eb6..b9be2d1272f0c8bdd7bdfa48a6154842d278f8b3 100644 --- a/docs/note/source_zh_cn/design/mindspore/profiler_design.md +++ b/docs/note/source_zh_cn/design/mindspore/profiler_design.md @@ -25,7 +25,7 @@ - + ## 背景 diff --git a/docs/note/source_zh_cn/design/technical_white_paper.md b/docs/note/source_zh_cn/design/technical_white_paper.md index c3ec41c35159513d72d40770a3fdbe593ce3bbf9..e76d2e7298528901b55a4b4f54a6c97884eae8b1 100644 --- a/docs/note/source_zh_cn/design/technical_white_paper.md +++ b/docs/note/source_zh_cn/design/technical_white_paper.md @@ -10,7 +10,7 @@ - + ## 引言 diff --git a/docs/note/source_zh_cn/env_var_list.md b/docs/note/source_zh_cn/env_var_list.md index 97632f65bd85acf74934782520ec2b6b443e9662..8c919663965d1ceaee90153917624fc01b79a4f9 100644 --- a/docs/note/source_zh_cn/env_var_list.md +++ b/docs/note/source_zh_cn/env_var_list.md @@ -2,7 +2,7 @@ `Linux` `Ascend` `GPU` `CPU` `初级` `中级` `高级` - + 本文介绍MindSpore的环境变量。 @@ -20,7 +20,7 @@ |RANK_TABLE_FILE|MindSpore|路径指向文件,包含指定多Ascend AI处理器环境中Ascend AI处理器的"device_id"对应的"device_ip"。|String|文件路径,支持相对路径与绝对路径|与RANK_SIZE配合使用|必选(使用Ascend AI处理器时)| |RANK_SIZE|MindSpore|指定深度学习时调用Ascend AI处理器的数量|Integer|1~8,调用Ascend AI处理器的数量|与RANK_TABLE_FILE配合使用|必选(使用Ascend AI处理器时)| |RANK_ID|MindSpore|指定深度学习时调用Ascend AI处理器的逻辑ID|Integer|0~7,多机并行时不同server中DEVICE_ID会有重复,使用RANK_ID可以避免这个问题(多机并行时 RANK_ID = SERVER_ID * DEVICE_NUM + DEVICE_ID|无|可选| -|MS_SUBMODULE_LOG_v|MindSpore|[MS_SUBMODULE_LOG_v功能与用法]()|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModual: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|无|可选 +|MS_SUBMODULE_LOG_v|MindSpore|[MS_SUBMODULE_LOG_v功能与用法]()|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModual: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|无|可选 |OPTION_PROTO_LIB_PATH|MindSpore|RPOTO依赖库库路径|String|文件路径,支持相对路径与绝对路径|无|可选| |GE_USE_STATIC_MEMORY|GraphEngine|当网络模型层数过大时,特征图中间计算数据可能超过25G,例如BERT24网络。多卡场景下为保证通信内存高效协同,需要配置为1,表示使用内存静态分配方式,其他网络暂时无需配置,默认使用内存动态分配方式。
静态内存默认配置为31G,如需要调整可以通过网络运行参数graph_memory_max_size和variable_memory_max_size的总和指定;动态内存是动态申请,最大不会超过graph_memory_max_size和variable_memory_max_size的总和。|Integer|1:使用内存静态分配方式
0:使用内存动态分配方式|无|可选| |DUMP_GE_GRAPH|GraphEngine|把整个流程中各个阶段的图描述信息打印到文件中,此环境变量控制dump图的内容多少|Integer|1:全量dump
2:不含有权重等数据的基本版dump
3:只显示节点关系的精简版dump|无|可选| diff --git a/docs/note/source_zh_cn/glossary.md b/docs/note/source_zh_cn/glossary.md index 630fec2ea6cbb0bfe4cbbd913419853a056adc57..4e90525a691013b4d39acb2103203db3ab3e71aa 100644 --- a/docs/note/source_zh_cn/glossary.md +++ b/docs/note/source_zh_cn/glossary.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `全流程` `初级` `中级` `高级` - + | 术语/缩略语 | 说明 | | ----- | ----- | diff --git a/docs/note/source_zh_cn/help_seeking_path.md b/docs/note/source_zh_cn/help_seeking_path.md index ac3338260cf2cc17cbd838c5e7fc101da5021cf1..9798ffaf87e65b4bac4a31e2262da74eedcc50a1 100644 --- a/docs/note/source_zh_cn/help_seeking_path.md +++ b/docs/note/source_zh_cn/help_seeking_path.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `全流程` `初级` `中级` `高级` - + 本文将简述用户在使用MindSpore遇到问题时,如何使用官方提供的问题求助路径解决问题。MindSpore问题求助整体流程如图中所示,从用户使用MindSpore发现问题开始,直至选择到合适的问题解决方法。下面我们基于问题求助流程图对各种求助方法做解释说明。 @@ -28,5 +28,5 @@ - 为提高问题解决速度与质量,发帖前请参考[发帖建议](https://bbs.huaweicloud.com/forum/thread-69695-1-1.html),按照建议格式发帖。 - 帖子发出后会有论坛版主负责将问题收录,并联系技术专家进行解答,问题将在三个工作日内解决。 - 参考技术专家的解决方案,解决当前遇到的问题。 - + 如果在专家测试后确定是MindSpore功能有待完善,推荐用户在[MindSpore仓](https://gitee.com/mindspore)中创建ISSUE,所提问题会在后续的版本中得到修复完善。 diff --git a/docs/note/source_zh_cn/image_classification_lite.md b/docs/note/source_zh_cn/image_classification_lite.md index 6a17c2517a56db85c8658248a5bc691a04492a67..f6de498338e9fc281bb89b4592d6240935257f0a 100644 --- a/docs/note/source_zh_cn/image_classification_lite.md +++ b/docs/note/source_zh_cn/image_classification_lite.md @@ -1,6 +1,6 @@ # 图像分类模型支持(Lite) - + ## 图像分类介绍 @@ -15,7 +15,7 @@ | tree | 0.8584 | | houseplant | 0.7867 | -使用MindSpore Lite实现图像分类的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification)。 +使用MindSpore Lite实现图像分类的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/image_classification)。 ## 图像分类模型列表 diff --git a/docs/note/source_zh_cn/image_segmentation_lite.md 
b/docs/note/source_zh_cn/image_segmentation_lite.md index 4aa2bd2fa140e975e1cb0a5a04aedb0bbb1f22a1..089ec594b81b7f24e34ea1c3d408e598d6cd31ac 100644 --- a/docs/note/source_zh_cn/image_segmentation_lite.md +++ b/docs/note/source_zh_cn/image_segmentation_lite.md @@ -1,12 +1,12 @@ # 图像分割模型支持(Lite) - + ## 图像分割介绍 图像分割是用于检测目标在图片中的位置或者图片中某一像素是属于何种对象的。 -使用MindSpore Lite实现图像分割的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_segmentation)。 +使用MindSpore Lite实现图像分割的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/image_segmentation)。 ## 图像分割模型列表 diff --git a/docs/note/source_zh_cn/network_list_ms.md b/docs/note/source_zh_cn/network_list_ms.md index 67d9e26b7a4b1c0987703d087c4b5c03d7cf213a..e749e2614ddd85e0ea24bc6e069814d359806ab0 100644 --- a/docs/note/source_zh_cn/network_list_ms.md +++ b/docs/note/source_zh_cn/network_list_ms.md @@ -9,70 +9,74 @@ - + ## Model Zoo | 领域 | 子领域 | 网络 | Ascend (Graph) | Ascend (PyNative) | GPU (Graph) | GPU (PyNative) | CPU (Graph) | CPU (PyNative) |:---- |:------- |:---- |:---- |:---- |:---- |:---- |:---- |:---- -|计算机视觉(CV) | 图像分类(Image Classification) | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zooo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported | Supported -| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet_quant/src/lenet_fusion.py) | Supported | Doing | Supported | Doing | Doing | Doing -| 计算机视觉(CV) | 
图像分类(Image Classification) | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-50(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet50_quant/models/resnet_quant.py) | Supported | Doing | Doing | Doing | Doing | Doing -|计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -|计算机视觉(CV) | 图像分类(Image Classification) | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -|计算机视觉(CV) | 图像分类(Image Classification) | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | 
[MobileNetV2(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [NASNET](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ghostnet/src/ghostnet.py) | Doing | Doing | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet50-0.65x](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/resnet50_adv_pruning/src/resnet_imgnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [TinyNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/tinynet/src/tinynet.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) 
|[FaceAttributes](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) |[SqueezeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Doing | Doing | Doing | Doing | Doing -|计算机视觉(CV) | 目标检测(Object Detection) | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported |Supported |Supported | Supported | Supported -| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-DarkNet53(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53_quant/src/darknet.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) 
| [MaskRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn/src/maskrcnn/mask_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) | [Retinaface-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) | [CenterFace](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) |[MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) |[SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 目标检测(Object Detection) |[YoloV4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 计算机视觉 (CV) | 文本检测 (Text Detection) | [PSENet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉 (CV) | 文本识别 (Text Recognition) | 
[CNNCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [UNet2D-Medical](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 语义分割(Semantic Segmentation) |[Openpose](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported | Supported -| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | 
Doing -| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Doing | Doing | Doing | Doing | Doing -| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported| Doing | Doing -| 推荐(Recommender) | 推荐系统、搜索、排序(Recommender System, Search, Ranking) | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 图神经网络(GNN) | 文本分类(Text Classification) | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 图神经网络(GNN) | 文本分类(Text Classification) | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 图神经网络(GNN) | 推荐系统(Recommender System) | [BGCF](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing -|语音(Audio) | 音频标注(Audio Tagging) | [FCN-4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Supported | Doing | Doing | Doing | Doing -|高性能计算(HPC) | 分子动力学(Molecular Dynamics) | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Doing | Doing | Doing | Doing | Doing -|高性能计算(HPC) | 海洋模型(Ocean Model) | 
[GOMO](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Doing | Doing | Doing +|计算机视觉(CV) | 图像分类(Image Classification) | [AlexNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported | Supported +| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet(量化)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/lenet_quant/src/lenet_fusion.py) | Supported | Doing | Supported | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-50(量化)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet50_quant/models/resnet_quant.py) | Supported | Doing | Doing | Doing | Doing | Doing +|计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing +|计算机视觉(CV) | 图像分类(Image Classification) | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing | Doing | Doing | Doing +|计算机视觉(CV) | 图像分类(Image Classification) | 
[ResNext50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [VGG16](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/inceptionv4/src/inceptionv4.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv1/src/mobilenet_v1.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2(量化)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | 
[Shufflenetv1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/shufflenetv1/src/shufflenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [NASNET](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet50-0.65x](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/resnet50_adv_pruning/src/resnet_imgnet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceAttributes](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image 
Classification) |[SqueezeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Supported | Doing | Doing | Doing | Doing +|计算机视觉(CV) | 目标检测(Object Detection) | [SSD](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported |Supported |Supported | Supported | Supported +| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-DarkNet53(量化)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_darknet53_quant/src/darknet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [MaskRCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/maskrcnn/src/maskrcnn/mask_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/warpctc/src/warpctc.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [Retinaface-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | 
[CenterFace](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) |[MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) |[SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) |[YoloV4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉 (CV) | 文本检测 (Text Detection) | [PSENet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉 (CV) | 文本识别 (Text Recognition) | [CNNCTC](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [UNet2D-Medical](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 关键点检测(Keypoint Detection) 
|[Openpose](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 光学字符识别(Optical Character Recognition) |[CRNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/crnn/src/crnn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [BERT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [Transformer](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported | Supported +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [MASS](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TinyBert](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 自然语言处理(NLP) | 
自然语言理解(Natural Language Understanding) | [TextCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/textcnn/src/textcnn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported| Supported | Doing +| 推荐(Recommender) | 推荐系统、搜索、排序(Recommender System, Search, Ranking) | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 推荐(Recommender) | 推荐系统(Recommender System) | [NCF](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/ncf/src/ncf.py) | Supported | Doing | Supported | Doing| Doing | Doing +| 图神经网络(GNN) | 文本分类(Text Classification) | [GCN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 图神经网络(GNN) | 文本分类(Text Classification) | [GAT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 图神经网络(GNN) | 推荐系统(Recommender System) | [BGCF](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing +|语音(Audio) | 音频标注(Audio Tagging) | [FCN-4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Supported | Doing | Doing | Doing | Doing +|高性能计算(HPC) | 分子动力学(Molecular Dynamics) | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Supported| Doing | Doing | Doing | Doing +|高性能计算(HPC) | 海洋模型(Ocean Model) | 
[GOMO](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Supported | Doing | Doing -> 你也可以使用 [MindWizard工具](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) 快速生成经典网络脚本。 +> 你也可以使用 [MindWizard工具](https://gitee.com/mindspore/mindinsight/tree/r1.1/mindinsight/wizard/) 快速生成经典网络脚本。 diff --git a/docs/note/source_zh_cn/object_detection_lite.md b/docs/note/source_zh_cn/object_detection_lite.md index 38855ad7eb2071f2fb8097198ae97ef0644d292c..e23056139183aab9a599209cd693a545da8ec1fa 100644 --- a/docs/note/source_zh_cn/object_detection_lite.md +++ b/docs/note/source_zh_cn/object_detection_lite.md @@ -1,6 +1,6 @@ # 目标检测模型支持(Lite) - + ## 目标检测介绍 @@ -12,7 +12,7 @@ | ----- | ---- | ---------------- | | mouse | 0.78 | [10, 25, 35, 43] | -使用MindSpore Lite实现目标检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection)。 +使用MindSpore Lite实现目标检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/object_detection)。 ## 目标检测模型列表 diff --git a/docs/note/source_zh_cn/operator_list_implicit.md b/docs/note/source_zh_cn/operator_list_implicit.md index 5a3c7fade7bab75a20ade9ac1b332b17cb36f3ad..ace1519f4569203a97e1c11415191841e8267d71 100644 --- a/docs/note/source_zh_cn/operator_list_implicit.md +++ b/docs/note/source_zh_cn/operator_list_implicit.md @@ -12,7 +12,7 @@ - + ## 隐式类型转换 @@ -38,68 +38,68 @@ | 算子名 | |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Assign.html) | -| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.AssignSub.html) | -| 
[mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyMomentum.html) | -| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FusedSparseAdam.html) | -| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FusedSparseLazyAdam.html) | -| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FusedSparseFtrl.html) | -| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FusedSparseProximalAdagrad.html) | -| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAdaMax.html) | -| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAdadelta.html) | -| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAdagrad.html) | -| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAdagradV2.html) | -| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyAdagrad.html) | -| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyAdagradV2.html) | -| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyProximalAdagrad.html) | -| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyProximalAdagrad.html) | -| 
[mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAddSign.html) | -| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyPowerSign.html) | -| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyGradientDescent.html) | -| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyProximalGradientDescent.html) | -| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyFtrl.html) | -| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyFtrlV2.html) | -| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BitwiseAnd.html) | -| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BitwiseOr.html) | -| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BitwiseXor.html) | -| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TensorAdd.html) | -| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sub.html) | -| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Mul.html) | -| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Pow.html) | -| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Minimum.html) | -| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Maximum.html) | -| 
[mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.RealDiv.html) | -| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Div.html) | -| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.DivNoNan.html) | -| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FloorDiv.html) | -| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TruncateDiv.html) | -| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TruncateMod.html) | -| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Mod.html) | -| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FloorMod.html) | -| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Atan2.html) | -| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SquaredDifference.html) | -| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Xdivy.html) | -| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Xlogy.html) | -| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Equal.html) | -| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApproximateEqual.html) | -| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.NotEqual.html) | -| 
[mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Greater.html) | -| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GreaterEqual.html) | -| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Less.html) | -| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LessEqual.html) | -| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalAnd.html) | -| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalOr.html) | -| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterNdUpdate.html) | -| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterNdAdd.html) | -| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterNdSub.html) | -| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterNonAliasingAdd.html) | -| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterUpdate.html) | -| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterMax.html) | -| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterMin.html) | -| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterAdd.html) | -| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterSub.html) | -| 
[mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterMul.html) | -| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterDiv.html) | -| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.AssignAdd.html) | +| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Assign.html) | +| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.AssignSub.html) | +| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyMomentum.html) | +| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FusedSparseAdam.html) | +| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FusedSparseLazyAdam.html) | +| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FusedSparseFtrl.html) | +| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FusedSparseProximalAdagrad.html) | +| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAdaMax.html) | +| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAdadelta.html) | +| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAdagrad.html) | +| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAdagradV2.html) | +| 
[mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyAdagrad.html) | +| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyAdagradV2.html) | +| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyProximalAdagrad.html) | +| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyProximalAdagrad.html) | +| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAddSign.html) | +| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyPowerSign.html) | +| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyGradientDescent.html) | +| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyProximalGradientDescent.html) | +| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyFtrl.html) | +| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyFtrlV2.html) | +| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BitwiseAnd.html) | +| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BitwiseOr.html) | +| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BitwiseXor.html) | +| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TensorAdd.html) | +| 
[mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sub.html) | +| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Mul.html) | +| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Pow.html) | +| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Minimum.html) | +| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Maximum.html) | +| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.RealDiv.html) | +| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Div.html) | +| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.DivNoNan.html) | +| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FloorDiv.html) | +| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TruncateDiv.html) | +| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TruncateMod.html) | +| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Mod.html) | +| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FloorMod.html) | +| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Atan2.html) | +| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SquaredDifference.html) | +| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Xdivy.html) | +| 
[mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Xlogy.html) | +| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Equal.html) | +| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApproximateEqual.html) | +| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.NotEqual.html) | +| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Greater.html) | +| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GreaterEqual.html) | +| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Less.html) | +| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LessEqual.html) | +| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalAnd.html) | +| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalOr.html) | +| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterNdUpdate.html) | +| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterNdAdd.html) | +| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterNdSub.html) | +| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterNonAliasingAdd.html) | +| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterUpdate.html) | +| 
[mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterMax.html) | +| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterMin.html) | +| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterAdd.html) | +| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterSub.html) | +| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterMul.html) | +| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterDiv.html) | +| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.AssignAdd.html) | > \ No newline at end of file diff --git a/docs/note/source_zh_cn/operator_list_lite.md b/docs/note/source_zh_cn/operator_list_lite.md index c5665c3d3414f29e667fa9df486964d13505b6ee..89cbcc0af62a16850a4751cae1001cb0756994cb 100644 --- a/docs/note/source_zh_cn/operator_list_lite.md +++ b/docs/note/source_zh_cn/operator_list_lite.md @@ -2,7 +2,7 @@ `Linux` `Ascend` `端侧` `推理应用` `初级` `中级` `高级` - + | 操作名 | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | NPU | 支持的Tensorflow
Lite算子 | 支持的Caffe
Lite算子 | 支持的Onnx
Lite算子 | |-----------------------|----------|----------|----------|-----------|----------|-------------------|----------|----------|---------|---------| diff --git a/docs/note/source_zh_cn/operator_list_ms.md b/docs/note/source_zh_cn/operator_list_ms.md index 8a3104db96a43ca98ccd0245602a08014df7dea5..ee61872844c10db36ffe2e7fb467975b8519d224 100644 --- a/docs/note/source_zh_cn/operator_list_ms.md +++ b/docs/note/source_zh_cn/operator_list_ms.md @@ -2,9 +2,9 @@ `Linux` `Ascend` `GPU` `CPU` `模型开发` `初级` `中级` `高级` - + 您可根据需要,选择适用于您硬件平台的算子,构建网络模型。 -- `mindspore.nn`模块支持的算子列表可在[mindspore.nn模块的API页面](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.nn.html)进行查阅。 -- `mindspore.ops`模块支持的算子列表可在[mindspore.ops模块的API页面](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.ops.html)进行查阅。 +- `mindspore.nn`模块支持的算子列表可在[mindspore.nn模块的API页面](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.nn.html)进行查阅。 +- `mindspore.ops`模块支持的算子列表可在[mindspore.ops模块的API页面](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.ops.html)进行查阅。 diff --git a/docs/note/source_zh_cn/operator_list_parallel.md b/docs/note/source_zh_cn/operator_list_parallel.md index 5ada19844e05f433e635c9f85749629573e66ccd..7a96a616ffed8f6b8cf9ea5fbfe0cfdff73e0268 100644 --- a/docs/note/source_zh_cn/operator_list_parallel.md +++ b/docs/note/source_zh_cn/operator_list_parallel.md @@ -9,116 +9,116 @@ - + ## 分布式算子 | 操作名 | 约束 | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Abs.html) | 无 | -| 
[mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ACos.html) | 无 | -| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Acosh.html) | 无 | -| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApproximateEqual.html) | 无 | -| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ArgMaxWithValue.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 | -| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ArgMinWithValue.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 | -| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Asin.html) | 无 | -| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Asinh.html) | 无 | -| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Assign.html) | 无 | -| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.AssignAdd.html) | 无 | -| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.AssignSub.html) | 无 | -| [mindspore.ops.Atan](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Atan.html) | 无 | -| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Atan2.html) | 无 | -| [mindspore.ops.Atanh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Atanh.html) | 无 | -| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BatchMatMul.html) | 不支持`transpose_a=True` | -| 
[mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BesselI0e.html) | 无 | -| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BesselI1e.html) | 无 | -| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BiasAdd.html) | 无 | -| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BroadcastTo.html) | 无 | -| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Cast.html) | Auto Parallel和Semi Auto Parallel模式下,配置策略不生效 | -| [mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Ceil.html) | 无 | -| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Concat.html) | 输入(input_x)在轴(axis)所对应的维度不能切分,切分后,在数学逻辑上和单机不等价 | -| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Cos.html) | 无 | -| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Cosh.html) | 无 | -| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Div.html) | 无 | -| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.DivNoNan.html) | 无 | -| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.DropoutDoMask.html) | 需和`DropoutGenMask`联合使用,不支持配置切分策略 | -| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.DropoutGenMask.html) | 需和`DropoutDoMask`联合使用 | -| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Elu.html) | 无 | -| 
[mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.EmbeddingLookup.html) | 同GatherV2 | -| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Equal.html) | 无 | -| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Erf.html) | 无 | -| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Erfc.html) | 无 | -| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Exp.html) | 无 | -| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ExpandDims.html) | 无 | -| [mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Expm1.html) | 无 | -| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Floor.html) | 无 | -| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FloorDiv.html) | 无 | -| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FloorMod.html) | 无 | -| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GatherV2.html) | 仅支持1维和2维的input_params,并且input_params的最后一维要32字节对齐(出于性能考虑);不支持标量input_indices;参数在轴(axis)所在维度切分时,不支持重复计算;不支持input_indices和input_params同时进行切分 | -| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Gelu.html) | 无 | -| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Greater.html) | 无 | -| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GreaterEqual.html) | 无 | -| 
[mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Inv.html) | 无 | -| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.L2Normalize.html) | 输入(input_x)在轴(axis)对应的维度不能切,切分后,在数学逻辑上和单机不等价 | -| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Less.html) | 无 | -| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LessEqual.html) | 无 | -| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalAnd.html) | 无 | -| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalNot.html) | 无 | -| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalOr.html) | 无 | -| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Log.html) | 无 | -| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Log1p.html) | 无 | -| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogSoftmax.html) | 输入(logits)在轴(axis)对应的维度不可切分,切分后,在数学逻辑上和单机不等价 | -| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.MatMul.html) | 不支持`transpose_a=True` | -| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Maximum.html) | 无 | -| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Minimum.html) | 无 | -| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Mod.html) | 无 | -| 
[mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Mul.html) | 无 | -| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Neg.html) | 无 | -| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.NotEqual.html) | 无 | -| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.OneHot.html) | 仅支持输入(indices)是1维的Tensor,切分策略要配置输出的切分策略,以及第1和第2个输入的切分策略 | -| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.OnesLike.html) | 无 | -| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Pack.html) | 无 | -| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Pow.html) | 无 | -| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.PReLU.html) | weight的shape在非[1]的情况下,输入(input_x)的Channel维要和weight的切分方式一致 | -| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.RealDiv.html) | 无 | -| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Reciprocal.html) | 无 | -| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReduceMax.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 | -| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReduceMin.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 | -| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReduceSum.html) | 无 | -| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReduceMean.html) | 无 | -| 
[mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReLU.html) | 无 | -| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReLU6.html) | 无 | -| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReLUV2.html) | 无 | -| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Reshape.html) | 不支持配置切分策略,并且,在自动并行模式下,当reshape算子后接有多个算子,不允许对这些算子配置不同的切分策略 | -| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Round.html) | 无 | -| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Rsqrt.html) | 无 | -| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sigmoid.html) | 无 | -| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | 无 | -| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sign.html) | 无 | -| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sin.html) | 无 | -| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sinh.html) | 无 | -| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Softmax.html) | 输入(logits)在轴(axis)对应的维度不可切分,切分后,在数学逻辑上和单机不等价 | -| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | 输入(logits、labels)的最后一维不能切分;有两个输出,正向的loss只支持取[0] | -| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Softplus.html) | 无 | -| 
[mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Softsign.html) | 无 | -| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseGatherV2.html) | 同GatherV2 | -| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Split.html) | 轴(axis)所对应的维度不能切分,切分后,在数学逻辑上和单机不等价 | -| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sqrt.html) | 无 | -| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Square.html) | 无 | -| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Squeeze.html) | 无 | -| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.StridedSlice.html) | 仅支持值为全0的mask;需要切分的维度必须全部提取;输入在strides不为1对应的维度不支持切分 | -| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Slice.html) | 需要切分的维度必须全部提取 | -| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sub.html) | 无 | -| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Tan.html) | 无 | -| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Tanh.html) | 无 | -| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TensorAdd.html) | 无 | -| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Tile.html) | 仅支持对multiples配置切分策略 | -| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TopK.html) | 最后一维不支持切分,切分后,在数学逻辑上和单机不等价 | -| 
[mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Transpose.html) | 无 | -| [mindspore.ops.Unique](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Unique.html) | 只支持重复计算的策略(1,) | -| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.UnsortedSegmentSum.html) | 输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致 | -| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.UnsortedSegmentMin.html) | 输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致。注意:在segment id为空时,输出向量的对应位置会填充为输入类型的最大值。需要用户进行掩码处理,将最大值转换成0。否则容易造成数值溢出,导致通信算子上溢错误,从而引发Run Task Error | -| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.UnsortedSegmentMax.html) | 输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致。注意:在segment id为空时,输出向量的对应位置会填充为输入类型的最小值。需要用户进行掩码处理,将最小值转换成0。否则容易造成数值溢出,导致通信算子上溢错误,从而引发Run Task Error | -| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ZerosLike.html) | 无 | +| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Abs.html) | 无 | +| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ACos.html) | 无 | +| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Acosh.html) | 无 | +| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApproximateEqual.html) | 无 | +| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ArgMaxWithValue.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 | +| 
[mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ArgMinWithValue.html) | If the input is split along the axis dimension, the distributed result may be inconsistent with the standalone result | +| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Asin.html) | None | +| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Asinh.html) | None | +| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Assign.html) | None | +| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.AssignAdd.html) | None | +| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.AssignSub.html) | None | +| [mindspore.ops.Atan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Atan.html) | None | +| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Atan2.html) | None | +| [mindspore.ops.Atanh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Atanh.html) | None | +| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BatchMatMul.html) | `transpose_a=True` is not supported | +| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BesselI0e.html) | None | +| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BesselI1e.html) | None | +| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BiasAdd.html) | None | +| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BroadcastTo.html) | None | +| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Cast.html) | In Auto Parallel and Semi Auto Parallel mode, the configured strategy does not take effect | +|
[mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Ceil.html) | None | +| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Concat.html) | The dimension of the input (input_x) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the standalone version | +| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Cos.html) | None | +| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Cosh.html) | None | +| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Div.html) | None | +| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.DivNoNan.html) | None | +| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.DropoutDoMask.html) | Must be used together with `DropoutGenMask`; configuring a sharding strategy is not supported | +| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.DropoutGenMask.html) | Must be used together with `DropoutDoMask` | +| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Elu.html) | None | +| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.EmbeddingLookup.html) | Same as GatherV2 | +| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Equal.html) | None | +| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Erf.html) | None | +| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Erfc.html) | None | +| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Exp.html) | None | +| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ExpandDims.html) | None | +|
[mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Expm1.html) | None | +| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Floor.html) | None | +| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FloorDiv.html) | None | +| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FloorMod.html) | None | +| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GatherV2.html) | Only 1-D and 2-D input_params are supported, and the last dimension of input_params must be 32-byte aligned (for performance reasons); a scalar input_indices is not supported; duplicate computation is not supported when the parameter is split along the axis dimension; splitting input_indices and input_params at the same time is not supported | +| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Gelu.html) | None | +| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Greater.html) | None | +| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GreaterEqual.html) | None | +| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Inv.html) | None | +| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.L2Normalize.html) | The dimension of the input (input_x) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the standalone version | +| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Less.html) | None | +| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LessEqual.html) | None | +| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalAnd.html) | None | +| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalNot.html) | None | +|
[mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalOr.html) | None | +| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Log.html) | None | +| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Log1p.html) | None | +| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogSoftmax.html) | The dimension of the input (logits) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the standalone version | +| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.MatMul.html) | `transpose_a=True` is not supported | +| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Maximum.html) | None | +| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Minimum.html) | None | +| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Mod.html) | None | +| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Mul.html) | None | +| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Neg.html) | None | +| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.NotEqual.html) | None | +| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.OneHot.html) | Only a 1-D Tensor input (indices) is supported; the sharding strategy must configure the strategy of the output as well as those of the first and second inputs | +| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.OnesLike.html) | None | +| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Pack.html) | None | +| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Pow.html) | None | +|
[mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.PReLU.html) | When the shape of weight is not [1], the channel dimension of the input (input_x) must be split in the same way as weight | +| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.RealDiv.html) | None | +| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Reciprocal.html) | None | +| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReduceMax.html) | If the input is split along the axis dimension, the distributed result may be inconsistent with the standalone result | +| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReduceMin.html) | If the input is split along the axis dimension, the distributed result may be inconsistent with the standalone result | +| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReduceSum.html) | None | +| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReduceMean.html) | None | +| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReLU.html) | None | +| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReLU6.html) | None | +| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReLUV2.html) | None | +| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Reshape.html) | Configuring a sharding strategy is not supported; in addition, in auto parallel mode, when multiple operators follow the reshape operator, these operators are not allowed to be configured with different sharding strategies | +| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Round.html) | None | +| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Rsqrt.html) | None | +| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sigmoid.html) | None | +|
[mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | None | +| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sign.html) | None | +| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sin.html) | None | +| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sinh.html) | None | +| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Softmax.html) | The dimension of the input (logits) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the standalone version | +| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | The last dimension of the inputs (logits, labels) cannot be split; there are two outputs, and only [0] of the forward loss can be taken | +| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Softplus.html) | None | +| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Softsign.html) | None | +| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseGatherV2.html) | Same as GatherV2 | +| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Split.html) | The dimension corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the standalone version | +| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sqrt.html) | None | +| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Square.html) | None | +| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Squeeze.html) | None | +| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.StridedSlice.html) |
Only masks whose value is all 0 are supported; the dimensions to be split must be fully fetched; splitting is not supported for the dimensions where strides is not 1 | +| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Slice.html) | The dimensions to be split must be fully fetched | +| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sub.html) | None | +| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Tan.html) | None | +| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Tanh.html) | None | +| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TensorAdd.html) | None | +| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Tile.html) | Only configuring a sharding strategy for multiples is supported | +| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TopK.html) | The last dimension cannot be split; after splitting, the result is mathematically inequivalent to the standalone version | +| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Transpose.html) | None | +| [mindspore.ops.Unique](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Unique.html) | Only the duplicate-computation strategy (1,) is supported | +| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentSum.html) | The sharding configurations of the inputs input_x and segment_ids must be consistent in the dimensions of segment_ids | +| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentMin.html) | The sharding configurations of the inputs input_x and segment_ids must be consistent in the dimensions of segment_ids. Note: If a segment ID is empty, the corresponding position of the output vector is filled with the maximum value of the input type. The user needs to apply a mask to convert this maximum value to 0; otherwise, numerical overflow may easily occur, causing an overflow error in the communication operators and triggering a Run Task Error | +| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentMax.html) | The sharding configurations of the inputs input_x and segment_ids must be consistent in the dimensions of segment_ids. Note: If a segment ID is empty, the corresponding position of the output vector is filled with the minimum value of the input type. The user needs to apply a mask to convert this minimum value to 0; otherwise, numerical overflow may easily occur, causing an overflow error in the communication operators and triggering a Run
Task Error | +| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ZerosLike.html) | None | > Duplicate computation means that the machines are not fully used. For example, a cluster has 8 devices running distributed training, but the sharding strategy splits the input into only 4 slices. In this case, duplicate computation occurs. diff --git a/docs/note/source_zh_cn/paper_list.md b/docs/note/source_zh_cn/paper_list.md index 26bb3dbaf93daa91263c8e3b6ad4234ec6d6dad7..2c3530452c3d971f34a11df514385266ed247d15 100644 --- a/docs/note/source_zh_cn/paper_list.md +++ b/docs/note/source_zh_cn/paper_list.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Framework Development` `Intermediate` `Advanced` `Contributor` - + | Paper Title | Authors | Field | Journal/Conference | Link | | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------- | -------------- | ------------------------------------------------------------ | diff --git a/docs/note/source_zh_cn/posenet_lite.md b/docs/note/source_zh_cn/posenet_lite.md index cf910548b8b397e766cd95546199c45db0b17d9a..901b18f69cbfad75b86d7d457b3cd5cd6eb96bb3 100644 --- a/docs/note/source_zh_cn/posenet_lite.md +++ b/docs/note/source_zh_cn/posenet_lite.md @@ -1,6 +1,6 @@ # Skeleton Detection Model Support (Lite) - + ## Introduction to Skeleton Detection @@ -12,4 +12,4 @@ ![image_posenet](images/posenet_detection.png) -See the [sample code](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/posenet) for implementing skeleton detection using MindSpore Lite. +See the [sample code](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/posenet) for implementing skeleton detection using MindSpore Lite. diff --git a/docs/note/source_zh_cn/roadmap.md b/docs/note/source_zh_cn/roadmap.md index 62c7657b6f592cb4632dbc4a31653fdd3643a19a..6f8a76c2cc4ac2197928eae3f193d494d6932f33 100644 --- a/docs/note/source_zh_cn/roadmap.md +++ b/docs/note/source_zh_cn/roadmap.md @@ -15,7 +15,7 @@ - + The following shows MindSpore's high-level plan for the coming year. We will continuously adjust the priorities of the plan based on user feedback. diff --git a/docs/note/source_zh_cn/scene_detection_lite.md index 19b3d7db410944cf9a4d1e14e10ed4a5c828cf76..9acb0a21e382437a28cca9f7ca3f23654cd318d2 100644 ---
a/docs/note/source_zh_cn/scene_detection_lite.md +++ b/docs/note/source_zh_cn/scene_detection_lite.md @@ -1,12 +1,12 @@ # Scene Detection Model Support (Lite) - + ## Introduction to Scene Detection Scene detection can identify the type of scene captured by the device camera. -See the [sample code](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/scene_detection) for implementing scene detection using MindSpore Lite. +See the [sample code](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/scene_detection) for implementing scene detection using MindSpore Lite. ## Scene Detection Model List diff --git a/docs/note/source_zh_cn/static_graph_syntax_support.md b/docs/note/source_zh_cn/static_graph_syntax_support.md index 996674bbabc106c51b27b395ba3a985bd54abf15..bc59ea9df39ca6824de6439dbe63f99964e573b0 100644 --- a/docs/note/source_zh_cn/static_graph_syntax_support.md +++ b/docs/note/source_zh_cn/static_graph_syntax_support.md @@ -55,20 +55,20 @@ - + ## Overview In Graph mode, Python code is not executed by the Python interpreter; instead, the code is compiled into a static computational graph, and the static computational graph is then executed. - For details about Graph mode and computational graphs, see: + For details about Graph mode and computational graphs, see: Currently, only functions decorated with the `@ms_function` decorator, and instances of Cell and its subclasses, can be compiled. For a function, the function definition is compiled; for a network, the `construct` method and the other methods or functions it calls are compiled. - For the usage rules of `ms_function`, see: + For the usage rules of `ms_function`, see: - For the definition of `Cell`, see: + For the definition of `Cell`, see: Due to restrictions of syntax parsing, the data types, syntax, and related operations supported during graph compilation are not fully consistent with Python syntax, and some usage is restricted. @@ -256,7 +256,7 @@ A function can be decorated with the `@constexpr` decorator to generate a `Tensor` inside the function. -For the usage of `@constexpr`, see: +For the usage of `@constexpr`, see: A constant `Tensor` needed in the network can be defined as a network attribute in `init`, that is, `self.x = Tensor(args...)`, and then used in `construct`. @@ -667,9 +667,9 @@ def generate_tensor(): Currently, calling the attributes and interfaces of `Primitive` and its subclasses in the network is not supported. -For the definition of `Primitive`, see: +For the definition of `Primitive`, see: -For the currently defined `Primitive` operators, see: +For the currently defined `Primitive` operators, see: #### Cell @@ -679,9 +679,9 @@ Currently, calling the attributes and interfaces of `Cell` and its subclasses in the network is not supported, unless they are called through `self` in `Cell`'s own `construct`. -For the definition of `Cell`, see: +For the definition of `Cell`, see: -For the currently defined `Cell` classes, see: +For the currently defined `Cell` classes, see: ## Operators @@ -689,7 +689,7 @@ These are supported because such operators are converted into operators of the same name for computation, and those operators support implicit type conversion. -For the rules, see: +For the rules, see: ### Arithmetic Operators @@ -1265,20 +1265,20 @@ result Tensor(shape=[3], dtype=Int64, value= [1, 2, 3])) ### Whole Network Instance Types -
A common Python function decorated with the [@ms_function](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.html#mindspore.ms_function) decorator. +- A common Python function decorated with the [@ms_function](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.html#mindspore.ms_function) decorator. -- A Cell subclass inheriting from [nn.Cell](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Cell.html). +- A Cell subclass inheriting from [nn.Cell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Cell.html). ### Network Construction Components | Category | Content | :----------- |:-------- -| `Cell` instance |[mindspore/nn/*](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.nn.html) and user-defined [Cell](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Cell.html). +| `Cell` instance |[mindspore/nn/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.nn.html) and user-defined [Cell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Cell.html). | Member functions of a `Cell` instance | Other class member functions can be called in Cell's construct. | `dataclass` instance | A class decorated with @dataclass. -| `Primitive` operator |[mindspore/ops/operations/*](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.ops.html) -| `Composite` operator |[mindspore/ops/composite/*](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.ops.html) -| `constexpr`-generated operator |A value-computation operator generated with [@constexpr](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.constexpr.html). +| `Primitive` operator |[mindspore/ops/operations/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.ops.html) +| `Composite` operator |[mindspore/ops/composite/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.ops.html) +| `constexpr`-generated operator |A value-computation operator generated with [@constexpr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.constexpr.html). | Function | User-defined Python functions and the system functions listed above. ### Network Usage Constraints diff --git a/docs/note/source_zh_cn/style_transfer_lite.md index
bad44095535a9f1a9707e16cd0f6e5392fda10b9..ff9c14728200c720c50da3adfc21a9ff567d3cc8 100644 --- a/docs/note/source_zh_cn/style_transfer_lite.md +++ b/docs/note/source_zh_cn/style_transfer_lite.md @@ -1,6 +1,6 @@ # Style Transfer Model Support (Lite) - + ## Introduction to Style Transfer @@ -14,4 +14,4 @@ ![image_after_transfer](images/after_transfer.png) -See the [sample code](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/style_transfer) for implementing style transfer using MindSpore Lite. +See the [sample code](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/style_transfer) for implementing style transfer using MindSpore Lite. diff --git a/docs/programming_guide/source_en/api_structure.md b/docs/programming_guide/source_en/api_structure.md index e7cc4f2649e29e89ad7abc537446d728b993b7e0..5486017da068c3d905e63d36fc9af1d4eaac897b 100644 --- a/docs/programming_guide/source_en/api_structure.md +++ b/docs/programming_guide/source_en/api_structure.md @@ -9,19 +9,19 @@ - + ## Overall Architecture MindSpore is a deep learning framework in all scenarios, aiming to achieve easy development, efficient execution, and all-scenario coverage. Easy development features include API friendliness and low debugging difficulty. Efficient execution includes computing efficiency, data preprocessing efficiency, and distributed training efficiency. All-scenario coverage means that the framework supports cloud, edge, and device scenarios. -The overall architecture of MindSpore consists of the Mind Expression (ME), Graph Engine (GE), and backend runtime. ME provides user-level APIs for scientific computing, building and training neural networks, and converting Python code of users into graphs. GE is a manager of operators and hardware resources, and is responsible for controlling execution of graphs received from ME. Backend runtime includes efficient running environments, such as the CPU, GPU, Ascend AI processors, and Android/iOS, on the cloud, edge, and device.
For more information about the overall architecture, see [Overall Architecture](https://www.mindspore.cn/doc/note/en/master/design/mindspore/architecture.html). +The overall architecture of MindSpore consists of the Mind Expression (ME), Graph Engine (GE), and backend runtime. ME provides user-level APIs for scientific computing, building and training neural networks, and converting Python code of users into graphs. GE is a manager of operators and hardware resources, and is responsible for controlling execution of graphs received from ME. Backend runtime includes efficient running environments, such as the CPU, GPU, Ascend AI processors, and Android/iOS, on the cloud, edge, and device. For more information about the overall architecture, see [Overall Architecture](https://www.mindspore.cn/doc/note/en/r1.1/design/mindspore/architecture.html). ## Design Concept MindSpore originates from the best practices of the entire industry and provides unified model training, inference, and export APIs for data scientists and algorithm engineers. It supports flexible deployment in different scenarios such as the device, edge, and cloud, and promotes the prosperity of domains such as deep learning and scientific computing. -MindSpore provides the Python programming paradigm. Users can use the native control logic of Python to build complex neural network models, simplifying AI programming. For details, see [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html). +MindSpore provides the Python programming paradigm. Users can use the native control logic of Python to build complex neural network models, simplifying AI programming. For details, see [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html). Currently, there are two execution modes of a mainstream deep learning framework: a static graph mode and a dynamic graph mode. 
The static graph mode has a relatively high training performance, but is difficult to debug. On the contrary, the dynamic graph mode is easy to debug, but is difficult to execute efficiently. MindSpore provides an encoding mode that unifies dynamic and static graphs, which greatly improves the compatibility between static and dynamic graphs. Instead of developing multiple sets of code, users can switch between the dynamic and static graph modes by changing only one line of code. For example, set `context.set_context(mode=context.PYNATIVE_MODE)` to switch to the dynamic graph mode, or set `context.set_context(mode=context.GRAPH_MODE)` to switch to the static graph mode, which facilitates development and debugging, and improves performance experience. @@ -56,11 +56,11 @@ In the first step, a function (computational graph) is defined. In the second st In addition, the SCT can convert Python code into an intermediate representation (IR) of a MindSpore function. The IR constructs a computational graph that can be parsed and executed on different devices. Before the computational graph is executed, a plurality of software and hardware collaborative optimization technologies are used, and performance and efficiency in different scenarios such as device, edge, and cloud, are improved. -Improving the data processing capability to match the computing power of AI chips is the key to ensure the ultimate performance of AI chips. MindSpore provides multiple data processing operators and uses automatic data acceleration technology to implement high-performance pipelines, including data loading, data demonstration, and data conversion. It supports data processing capabilities in all scenarios, such as CV, NLP, and GNN. MindRecord is a self-developed data format of MindSpore. It features efficient read and write and easy distributed processing. Users can convert non-standard and common datasets to the MindRecord format to obtain better performance experience. 
For details about the conversion, see [MindSpore Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/en/master/dataset_conversion.html). MindSpore supports the loading of common datasets and datasets in multiple data storage formats. For example, users can use `dataset=dataset.Cifar10Dataset("Cifar10Data/")` to load the CIFAR-10 dataset. `Cifar10Data/` indicates the local directory of the dataset, and users can also use `GeneratorDataset` to customize the dataset loading mode. Data augmentation is a method of generating new data based on (limited) data, which can reduce the overfitting phenomenon of network model and improve the generalization ability of the model. In addition to user-defined data augmentation, MindSpore provides automatic data augmentation, making data augmentation more flexible. For details, see [Automatic Data Augmentation](https://www.mindspore.cn/doc/programming_guide/en/master/auto_augmentation.html). +Improving the data processing capability to match the computing power of AI chips is the key to ensure the ultimate performance of AI chips. MindSpore provides multiple data processing operators and uses automatic data acceleration technology to implement high-performance pipelines, including data loading, data demonstration, and data conversion. It supports data processing capabilities in all scenarios, such as CV, NLP, and GNN. MindRecord is a self-developed data format of MindSpore. It features efficient read and write and easy distributed processing. Users can convert non-standard and common datasets to the MindRecord format to obtain better performance experience. For details about the conversion, see [MindSpore Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dataset_conversion.html). MindSpore supports the loading of common datasets and datasets in multiple data storage formats. For example, users can use `dataset=dataset.Cifar10Dataset("Cifar10Data/")` to load the CIFAR-10 dataset. 
`Cifar10Data/` indicates the local directory of the dataset, and users can also use `GeneratorDataset` to customize the dataset loading mode. Data augmentation is a method of generating new data based on (limited) data, which can reduce the overfitting phenomenon of network model and improve the generalization ability of the model. In addition to user-defined data augmentation, MindSpore provides automatic data augmentation, making data augmentation more flexible. For details, see [Automatic Data Augmentation](https://www.mindspore.cn/doc/programming_guide/en/r1.1/auto_augmentation.html). -The deep learning neural network model usually contains many hidden layers for feature extraction. However, the feature extraction is random and the debugging process is invisible, which limits the trustworthiness and optimization of the deep learning technology. MindSpore supports visualized debugging and optimization (MindInsight) and provides functions such as training dashboard, lineage, performance analysis, and debugger to help users detect deviations during model training and easily debug and optimize models. For example, before initializing the network, users can use `profiler=Profiler()` to initialize the `Profiler` object, automatically collect information such as the operator time consumption during training, and record the information in a file. After the training is complete, call `profiler.analyse()` to stop collecting data and generate performance analysis results. Users can view and analyze the visualized results to more efficiently debug network performance. For details about debugging and optimization, see [Training Process Visualization](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/visualization_tutorials.html). +The deep learning neural network model usually contains many hidden layers for feature extraction. 
However, the feature extraction is random and the debugging process is invisible, which limits the trustworthiness and optimization of the deep learning technology. MindSpore supports visualized debugging and optimization (MindInsight) and provides functions such as training dashboard, lineage, performance analysis, and debugger to help users detect deviations during model training and easily debug and optimize models. For example, before initializing the network, users can use `profiler=Profiler()` to initialize the `Profiler` object, automatically collect information such as the operator time consumption during training, and record the information in a file. After the training is complete, call `profiler.analyse()` to stop collecting data and generate performance analysis results. Users can view and analyze the visualized results to more efficiently debug network performance. For details about debugging and optimization, see [Training Process Visualization](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/visualization_tutorials.html). -As a scale of neural network models and datasets continuously increases, parallel distributed training becomes a common practice of neural network training. However, policy selection and compilation of parallel distributed training are very complex, which severely restricts training efficiency of a deep learning model and hinders development of deep learning. MindSpore unifies the encoding methods of standalone and distributed training. Developers do not need to write complex distributed policies. Instead, they can implement distributed training by adding a small amount of codes to the standalone code. For example, after `context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)` is set, a cost model can be automatically established, and a better parallel mode can be selected for users. 
This improves the training efficiency of neural networks, greatly decreases the AI development difficulty, and enables users to quickly implement model. For more information, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +As the scale of neural network models and datasets continuously increases, parallel distributed training becomes a common practice of neural network training. However, policy selection and compilation of parallel distributed training are very complex, which severely restricts the training efficiency of a deep learning model and hinders the development of deep learning. MindSpore unifies the encoding methods of standalone and distributed training. Developers do not need to write complex distributed policies. Instead, they can implement distributed training by adding a small amount of code to the standalone code. For example, after `context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)` is set, a cost model can be automatically established, and a better parallel mode can be selected for users. This improves the training efficiency of neural networks, greatly decreases the AI development difficulty, and enables users to quickly implement models. For more information, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html). ## Level Structure diff --git a/docs/programming_guide/source_en/augmentation.md b/docs/programming_guide/source_en/augmentation.md index 93d3d00940d082d27f48f78ec2710105af94c027..e9c569744a1efeaf68df2baa378cc86607337206 100644 --- a/docs/programming_guide/source_en/augmentation.md +++ b/docs/programming_guide/source_en/augmentation.md @@ -16,7 +16,7 @@ - + ## Overview @@ -29,7 +29,7 @@ MindSpore provides the `c_transforms` and `py_transforms` modules for data augme | c_transforms | Implemented based on C++. | This module provides high performance.
| | py_transforms | Implemented based on Python PIL | This module provides multiple image augmentation methods and can convert PIL images to NumPy arrays. | -The following table lists the common data augmentation operators supported by MindSpore. For details about more data augmentation operators, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.vision.html). +The following table lists the common data augmentation operators supported by MindSpore. For details about more data augmentation operators, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.vision.html). | Module | Operator | Description | | ---- | ---- | ---- | diff --git a/docs/programming_guide/source_en/auto_augmentation.md b/docs/programming_guide/source_en/auto_augmentation.md index d2ce902d1d7ff5e90649fedda3d2874243a39963..6bcf28bb835c1ed7668fc8a7bc7454c6b899a074 100644 --- a/docs/programming_guide/source_en/auto_augmentation.md +++ b/docs/programming_guide/source_en/auto_augmentation.md @@ -12,7 +12,7 @@ - + ## Overview @@ -24,7 +24,7 @@ Auto augmentation can be implemented based on probability or callback parameters MindSpore provides a series of probability-based auto augmentation APIs. You can randomly select and combine various data augmentation operations to make data augmentation more flexible. -For details about APIs, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.transforms.html). +For details about APIs, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.transforms.html). 
### RandomApply diff --git a/docs/programming_guide/source_en/auto_parallel.md b/docs/programming_guide/source_en/auto_parallel.md index fa4001c4a70e515482cc1b033f8f088af52968bc..ec9223aa35d4cc91c9b2ffe93de1e27063ba5aed 100644 --- a/docs/programming_guide/source_en/auto_parallel.md +++ b/docs/programming_guide/source_en/auto_parallel.md @@ -33,7 +33,7 @@ - + ## Overview @@ -101,7 +101,7 @@ context.get_auto_parallel_context("gradients_mean") - `semi_auto_parallel`: semi-automatic parallel mode. In this mode, you can use the `shard` method to configure a segmentation policy for an operator. If no policy is configured, the data parallel policy is used by default. - `auto_parallel`: automatic parallel mode. In this mode, the framework automatically creates a cost model and selects the optimal segmentation policy for users. -The complete examples of `auto_parallel` and `data_parallel` are provided in [Distributed Training](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html). +The complete examples of `auto_parallel` and `data_parallel` are provided in [Distributed Training](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/distributed_training_tutorials.html). The following is a code example: @@ -338,10 +338,10 @@ x = Parameter(Tensor(np.ones([2, 2])), layerwise_parallel=True) Data parallel refers to the parallel mode in which data is segmented. Generally, data is segmented by batch and distributed to each computing unit (worker) for model calculation. In data parallel mode, datasets must be imported in data parallel mode, and `parallel_mode` must be set to `data_parallel`. -For details about the test cases, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +For details about the test cases, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html). 
## Automatic Parallel Automatic parallel is a distributed parallel mode that integrates data parallel, model parallel, and hybrid parallel. It can automatically establish a cost model and select a parallel mode for users. The cost model refers to modeling the training time based on the memory computing overhead and the communication overhead, and designing an efficient algorithm to find a parallel policy with a relatively short training time. In automatic parallel mode, datasets must be imported in data parallel mode, and `parallel_mode` must be set to `auto_parallel`. -For details about the test cases, see the [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +For details about the test cases, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html). diff --git a/docs/programming_guide/source_en/cache.md b/docs/programming_guide/source_en/cache.md new file mode 100644 index 0000000000000000000000000000000000000000..0d307652d883b052c1e8b2d6f15fe6a97e7cf3b1 --- /dev/null +++ b/docs/programming_guide/source_en/cache.md @@ -0,0 +1,5 @@ +# Single Node Data Cache + +No English version is available right now; contributions are welcome. + + diff --git a/docs/programming_guide/source_en/callback.md b/docs/programming_guide/source_en/callback.md index 793d1637fa558241b0cd2d650935d75f108350ae..b8f8cfc00e48faeac83859fb374218ce6c9274f1 100644 --- a/docs/programming_guide/source_en/callback.md +++ b/docs/programming_guide/source_en/callback.md @@ -9,7 +9,7 @@ - + ## Overview @@ -23,19 +23,19 @@ In MindSpore, the callback mechanism is generally used in the network training p This function is combined with the model training process, and saves the model and network parameters after training to facilitate re-inference or re-training. `ModelCheckpoint` is generally used together with `CheckpointConfig`.
`CheckpointConfig` is a parameter configuration class that can be used to customize the checkpoint storage policy. - For details, see [Saving Models](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html). + For details, see [Saving Models](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html). - SummaryCollector This function collects common information, such as loss, learning rate, computational graph, and parameter weight, helping you visualize the training process and view information. In addition, you can perform the summary operation to collect data from the summary file. - For details, see [Collecting Summary Record](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/summary_record.html). + For details, see [Collecting Summary Record](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/summary_record.html). - LossMonitor This function monitors the loss change during training. When the loss is NAN or INF, the training is terminated in advance. Loss information can be recorded in logs for you to view. - For details, see the [Custom Debugging Information](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#mindsporecallback). + For details, see the [Custom Debugging Information](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_debugging_info.html#mindsporecallback). - TimeMonitor @@ -51,6 +51,6 @@ The following examples are used to introduce the custom callback functions: 2. Save the checkpoint file with the highest accuracy during training. You can customize the function to save a model with the highest accuracy after each epoch. -For details, see [Custom Callback](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#custom-callback). +For details, see [Custom Callback](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_debugging_info.html#custom-callback). 
According to the tutorial, you can easily customize other callback functions. For example, customize a function that outputs detailed training information, including the training progress, training step, training name, and loss value, after each training is complete; or set a loss or model accuracy threshold so that training is terminated in advance once the loss or model accuracy reaches the threshold. diff --git a/docs/programming_guide/source_en/cell.md b/docs/programming_guide/source_en/cell.md index 8b42e2b8d45ae402670cfdb8369693f02843d133..76ffdf862fe8a23f9722f51935e04237ebc8a66f 100644 --- a/docs/programming_guide/source_en/cell.md +++ b/docs/programming_guide/source_en/cell.md @@ -21,7 +21,7 @@ - + ## Overview @@ -64,7 +64,7 @@ class Net(nn.Cell): The `parameters_dict` method is used to identify all parameters in the network structure and return `OrderedDict` with key as the parameter name and value as the parameter value. -There are many other methods for returning parameters in the `Cell` class, such as `get_parameters` and `trainable_params`. For details, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/nn/mindspore.nn.Cell.html). +There are many other methods for returning parameters in the `Cell` class, such as `get_parameters` and `trainable_params`. For details, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/nn/mindspore.nn.Cell.html). A code example is as follows: @@ -338,7 +338,7 @@ In this case, two pieces of tensor data are built. The `nn.L1Loss` API is used t ## Optimization Algorithms -`mindspore.nn.optim` is a module that implements various optimization algorithms in the MindSpore framework.
For details, see [Optimization Algorithms](https://www.mindspore.cn/doc/programming_guide/en/master/optim.html) +`mindspore.nn.optim` is a module that implements various optimization algorithms in the MindSpore framework. For details, see [Optimization Algorithms](https://www.mindspore.cn/doc/programming_guide/en/r1.1/optim.html) ## Building a Customized Network diff --git a/docs/programming_guide/source_en/conf.py b/docs/programming_guide/source_en/conf.py index a1fd767271ac159540440ed65bd0d676163366a9..a2abcc9090f480f4504ca43ff682a2e762a5a89f 100644 --- a/docs/programming_guide/source_en/conf.py +++ b/docs/programming_guide/source_en/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/programming_guide/source_en/context.md b/docs/programming_guide/source_en/context.md index 709e994e1e140c79786e5e2d62d14c7757f8e92b..aa777e4b80a3f7695f00bb4edf3178588c9314cc 100644 --- a/docs/programming_guide/source_en/context.md +++ b/docs/programming_guide/source_en/context.md @@ -16,7 +16,7 @@ - + ## Overview @@ -106,7 +106,7 @@ from mindspore.context import ParallelMode context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, gradients_mean=True) ``` -For details about distributed parallel training, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +For details about distributed parallel training, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html). ## Maintenance and Test Management @@ -118,13 +118,25 @@ The system can collect profiling data during training and use the profiling tool - `enable_profiling`: indicates whether to enable the profiling function. 
If this parameter is set to True, the profiling function is enabled, and profiling options are read from enable_options. If this parameter is set to False, the profiling function is disabled and only training_trace is collected. -- `profiling_options`: profiling collection options. The values are as follows. Multiple data items can be collected. training_trace: collects step trace data, that is, software information about training tasks and AI software stacks, to analyze the performance of training tasks. It focuses on data argumentation, forward and backward computation, and gradient aggregation update. task_trace: collects task trace data, that is, hardware information of the Ascend 910 processor HWTS/AICore and analysis of task start and end information. op_trace: collects performance data of a single operator. Format: ['op_trace','task_trace','training_trace'] +- `profiling_options`: profiling collection options. The values are as follows. Multiple data items can be collected. + result_path: the path for saving the profiling result files. The directory specified by this parameter must be created in advance in the training environment (container or host side), and the running user configured during installation must have read and write permissions on it. Both absolute and relative paths (relative to the current directory when the command is executed) are supported. An absolute path starts with '/', for example: /home/data/output. A relative path starts directly with the directory name, for example: output; + training_trace: collects iteration trace data, that is, software information about the training tasks and the AI software stack, to analyze the performance of training tasks, focusing on data augmentation, forward and backward computation, gradient aggregation update, and other related data.
The value is on/off; + task_trace: collects task trace data, that is, hardware information of the HWTS/AICore of the Ascend 910 processor, and analyzes the start and end information of tasks. The value is on/off; + aicpu_trace: collects profiling data enhanced by AI CPU data. The value is on/off; + fp_point: specifies the start position of the forward operator in the iteration trace of the training network, which is used to record the start timestamp of the forward computation. The value is the name of the specified first forward operator. When the value is empty, the system automatically obtains the forward operator name; + bp_point: specifies the end position of the backward operator in the iteration trace of the training network, which is used to record the end timestamp of the backward computation. The value is the name of the specified last backward operator. When the value is empty, the system automatically obtains the backward operator name; + ai_core_metrics: the values are as follows: + - ArithmeticUtilization: percentage statistics of various computation metrics; + - PipeUtilization: the time-consuming ratio of the computing units and the transfer units; this is the default value; + - Memory: percentage of external memory read and write instructions; + - MemoryL0: percentage of internal memory read and write instructions; + - ResourceConflictRatio: proportion of pipeline queue instructions.
A code example is as follows: ```python from mindspore import context -context.set_context(enable_profiling=True, profiling_options="training_trace") +context.set_context(enable_profiling=True, profiling_options='{"result_path":"/home/data/output","training_trace":"on"}') ``` ### Saving MindIR @@ -142,13 +154,13 @@ from mindspore import context context.set_context(save_graphs=True) ``` -> For details about the debugging method, see [Asynchronous Dump](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#asynchronous-dump). +> For details about the debugging method, see [Asynchronous Dump](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_debugging_info.html#asynchronous-dump). ### Print Operator Disk Flushing By default, the MindSpore self-developed print operator can output the tensor or character string information entered by users. Multiple character string inputs, multiple tensor inputs, and hybrid inputs of character strings and tensors are supported. The input parameters are separated by commas (,). -> For details about the print function, see [MindSpore Print Operator](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#mindspore-print-operator). +> For details about the print function, see [MindSpore Print Operator](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_debugging_info.html#mindspore-print-operator). - `print_file_path`: saves the print operator data to a file and disables the screen printing function. If the file to be saved exists, a timestamp suffix is added to the file. Saving data to a file can solve the problem that the data displayed on the screen is lost when the data volume is large. 
@@ -159,4 +171,4 @@ from mindspore import context context.set_context(print_file_path="print.pb") ``` -> For details about the context API, see [mindspore.context](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.context.html). +> For details about the context API, see [mindspore.context](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.context.html). diff --git a/docs/programming_guide/source_en/customized.rst b/docs/programming_guide/source_en/customized.rst index 69d484707e05903fe9e29d6546a10189930a3dab..617a24057f563cb73ccb7f8507c510a3240f98a7 100644 --- a/docs/programming_guide/source_en/customized.rst +++ b/docs/programming_guide/source_en/customized.rst @@ -4,6 +4,6 @@ Custom Operators .. toctree:: :maxdepth: 1 - Custom Operators(Ascend) - Custom Operators(GPU) - Custom Operators(CPU) + Custom Operators(Ascend) + Custom Operators(GPU) + Custom Operators(CPU) diff --git a/docs/programming_guide/source_en/data_pipeline.rst b/docs/programming_guide/source_en/data_pipeline.rst index 75d7846d2d8692dc3031b80737d5daaee0c487d4..0e52d9ddf432e0ea22730d34e8ccf448f617c014 100644 --- a/docs/programming_guide/source_en/data_pipeline.rst +++ b/docs/programming_guide/source_en/data_pipeline.rst @@ -11,3 +11,4 @@ Data Pipeline tokenizer dataset_conversion auto_augmentation + cache diff --git a/docs/programming_guide/source_en/dataset_conversion.md b/docs/programming_guide/source_en/dataset_conversion.md index 84320bf41642fbbd4e18c4f2d6e1a50cd7277aa6..fbe20fbfeeb1fdf69e49ff69d29a635a97c705e1 100644 --- a/docs/programming_guide/source_en/dataset_conversion.md +++ b/docs/programming_guide/source_en/dataset_conversion.md @@ -15,7 +15,7 @@ - + ## Overview @@ -180,7 +180,7 @@ MindSpore provides tool classes for converting common datasets to MindRecord. 
Th | TFRecord | TFRecordToMR | | CSV File | CsvToMR | -For details about dataset conversion, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.mindrecord.html). +For details about dataset conversion, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.mindrecord.html). ### Converting the CIFAR-10 Dataset diff --git a/docs/programming_guide/source_en/dataset_loading.md b/docs/programming_guide/source_en/dataset_loading.md index 993718c7c61e1be5d0452805ab1b5c7c142886f7..dbde222ba6b899c46c87d3740c17d6eff63c49d7 100644 --- a/docs/programming_guide/source_en/dataset_loading.md +++ b/docs/programming_guide/source_en/dataset_loading.md @@ -21,7 +21,7 @@ - + ## Overview @@ -50,7 +50,7 @@ MindSpore can also load datasets in different data storage formats. You can dire MindSpore also supports user-defined dataset loading using `GeneratorDataset`. You can implement your own dataset classes as required. -> For details about the API for dataset loading, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.html). +> For details about the API for dataset loading, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.html). ## Loading Common Dataset @@ -205,7 +205,7 @@ The following describes how to load dataset files in specific formats. MindRecord is a data format defined by MindSpore. Using MindRecord can improve performance. -> For details about how to convert a dataset into the MindRecord data format, see [Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/en/master/dataset_conversion.html). +> For details about how to convert a dataset into the MindRecord data format, see [Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dataset_conversion.html). The following example uses the `MindDataset` API to load MindRecord files, and displays labels of the loaded data. 
diff --git a/docs/programming_guide/source_en/dtype.md b/docs/programming_guide/source_en/dtype.md index 29437cc5de0d6dfbf696f4548167d62f04f3d682..2f922384f37ec6d3ee8b74d17ef879367df7a687 100644 --- a/docs/programming_guide/source_en/dtype.md +++ b/docs/programming_guide/source_en/dtype.md @@ -8,7 +8,7 @@ - + ## Overview @@ -16,7 +16,7 @@ MindSpore tensors support different data types, including `int8`, `int16`, `int3 In the computation process of MindSpore, the `int` data type in Python is converted into the defined `int64` type, and the `float` data type is converted into the defined `float32` type. -For details about the supported types, see . +For details about the supported types, see . In the following code, the data type of MindSpore is int32. diff --git a/docs/programming_guide/source_en/infer.md b/docs/programming_guide/source_en/infer.md index 84ade7f462316d52e379492e024dbbbc1ab3867d..4fabbef22f6c55976c9927b344a72720e1b42ad8 100644 --- a/docs/programming_guide/source_en/infer.md +++ b/docs/programming_guide/source_en/infer.md @@ -6,12 +6,12 @@ - + Based on the model trained by MindSpore, it supports the execution of inferences on various platforms such as Ascend 910 AI processor, Ascend 310 AI processor, GPU, CPU, and device side. 
For more details, please refer to the following tutorials: -- [Inference on the Ascend 910 AI processor](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_ascend_910.html) -- [Inference on the Ascend 310 AI processor](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_ascend_310.html) -- [Inference on a GPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_gpu.html) -- [Inference on a CPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_cpu.html) -- [Inference on the device side](https://www.mindspore.cn/tutorial/lite/en/master/quick_start/quick_start.html) +- [Inference on the Ascend 910 AI processor](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_910.html) +- [Inference on the Ascend 310 AI processor](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_310.html) +- [Inference on a GPU](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_gpu.html) +- [Inference on a CPU](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_cpu.html) +- [Inference on the device side](https://www.mindspore.cn/tutorial/lite/en/r1.1/quick_start/quick_start.html) diff --git a/docs/programming_guide/source_en/network_component.md b/docs/programming_guide/source_en/network_component.md index 102931b1b8cc131df26e1b18207db03cfb48bebe..84d0c47b2a2eceb93305d793a811391f865c9189 100644 --- a/docs/programming_guide/source_en/network_component.md +++ b/docs/programming_guide/source_en/network_component.md @@ -10,7 +10,7 @@ - + ## Overview @@ -22,7 +22,7 @@ The following describes three network components, `GradOperation`, `WithLossCell ## GradOperation -GradOperation is used to generate the gradient of the input function. The `get_all`, `get_by_list`, and `sens_param` parameters are used to control the gradient calculation method. 
For details, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GradOperation.html) +GradOperation is used to generate the gradient of the input function. The `get_all`, `get_by_list`, and `sens_param` parameters are used to control the gradient calculation method. For details, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GradOperation.html) The following is an example of using GradOperation: ```python diff --git a/docs/programming_guide/source_en/network_list.rst b/docs/programming_guide/source_en/network_list.rst index 5118f160b0b99ba8edf4e7cc9f3aba11a24a15a9..a1ef6c0ae242fbf3f5cb6a4f2dc2f585ac7fc2ab 100644 --- a/docs/programming_guide/source_en/network_list.rst +++ b/docs/programming_guide/source_en/network_list.rst @@ -4,4 +4,4 @@ Network List .. toctree:: :maxdepth: 1 - MindSpore Network List \ No newline at end of file + MindSpore Network List \ No newline at end of file diff --git a/docs/programming_guide/source_en/operator_list.rst b/docs/programming_guide/source_en/operator_list.rst index d2c966c71941f94f84667fa994506bf1dac82440..5a4bf5a86fee49a81d812d121473dd5b12855d40 100644 --- a/docs/programming_guide/source_en/operator_list.rst +++ b/docs/programming_guide/source_en/operator_list.rst @@ -4,7 +4,7 @@ Operator List .. 
toctree:: :maxdepth: 1 - MindSpore Operator List - MindSpore Implicit Type Conversion - MindSpore Distributed Operator List - MindSpore Lite Operator List \ No newline at end of file + MindSpore Operator List + MindSpore Implicit Type Conversion + MindSpore Distributed Operator List + MindSpore Lite Operator List \ No newline at end of file diff --git a/docs/programming_guide/source_en/operators.md b/docs/programming_guide/source_en/operators.md index 12c255d8a5f4bb74c7b2a4bf7e295b6b2d47ccb1..440226969ee66cefd0dcc487e7afdc71c5817ad4 100644 --- a/docs/programming_guide/source_en/operators.md +++ b/docs/programming_guide/source_en/operators.md @@ -40,7 +40,7 @@ - + ## Overview @@ -56,7 +56,7 @@ APIs related to operators include operations, functional, and composite. Operato ### mindspore.ops.operations -The operations API provides all primitive operator APIs, which are the lowest-order operator APIs open to users. For details about the supported operators, see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list.html). +The operations API provides all primitive operator APIs, which are the lowest-order operator APIs open to users. For details about the supported operators, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list.html). Primitive operators directly encapsulate the implementation of operators at bottom layers such as Ascend, GPU, AICPU, and CPU, providing basic operator capabilities for users. @@ -85,7 +85,7 @@ output = [ 1. 8. 64.] ### mindspore.ops.functional -To simplify the calling process of operators without attributes, MindSpore provides the functional version of some operators. For details about the input parameter requirements, see the input and output requirements of the original operator. For details about the supported operators, see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list_ms.html#mindspore-ops-functional). 
+To simplify the calling process of operators without attributes, MindSpore provides the functional version of some operators. For details about the input parameter requirements, see the input and output requirements of the original operator. For details about the supported operators, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list_ms.html#mindspore-ops-functional). For example, the functional version of the `P.Pow` operator is `F.tensor_pow`. @@ -168,7 +168,7 @@ tensor [[2.4, 4.2] scalar 3 ``` -In addition, the high-order function `GradOperation` provides the method of computing the gradient function corresponding to the input function. For details, see [mindspore.ops](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GradOperation.html). +In addition, the high-order function `GradOperation` provides the method of computing the gradient function corresponding to the input function. For details, see [mindspore.ops](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GradOperation.html). ### Combination usage of operations/functional/composite three types of operators @@ -190,7 +190,7 @@ pow = ops.Pow() ## Operator Functions -Operators can be classified into seven functional modules: tensor operations, network operations, array operations, image operations, encoding operations, debugging operations, and quantization operations. For details about the supported operators on the Ascend AI processors, GPU, and CPU, see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list.html). +Operators can be classified into seven functional modules: tensor operations, network operations, array operations, image operations, encoding operations, debugging operations, and quantization operations. For details about the supported operators on the Ascend AI processors, GPU, and CPU, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list.html). 
### Tensor Operations diff --git a/docs/programming_guide/source_en/optim.md b/docs/programming_guide/source_en/optim.md index 7a697d2df242ef9cb06b369230f312f81b29426a..af3de158463a35f0599ee70a5b12bd0c9484f2d5 100644 --- a/docs/programming_guide/source_en/optim.md +++ b/docs/programming_guide/source_en/optim.md @@ -13,7 +13,7 @@ - + ## Overview diff --git a/docs/programming_guide/source_en/parameter.md b/docs/programming_guide/source_en/parameter.md index 1d3f7e3327130ccb009f749e4621f995b786c86d..752d171a00d19431aa1d404f6fbe15bc54279d65 100644 --- a/docs/programming_guide/source_en/parameter.md +++ b/docs/programming_guide/source_en/parameter.md @@ -11,7 +11,7 @@ - + ## Overview @@ -37,7 +37,7 @@ To update a parameter, set `requires_grad` to `True`. When `layerwise_parallel` is set to True, this parameter will be filtered out during parameter broadcast and parameter gradient aggregation. -For details about the configuration of distributed parallelism, see . +For details about the configuration of distributed parallelism, see . In the following example, `Parameter` objects are built using three different data types. All the three `Parameter` objects need to be updated, and layerwise parallelism is not used. @@ -121,7 +121,7 @@ data: Parameter (name=x) - `set_data`: sets the data saved by `Parameter`. `Tensor`, `Initializer`, `int`, and `float` can be input for setting. When the input parameter `slice_shape` of the method is set to True, the shape of `Parameter` can be changed. Otherwise, the configured shape must be the same as the original shape of `Parameter`. -- `set_param_ps`: controls whether training parameters are trained by using the [Parameter Server](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/apply_parameter_server_training.html). +- `set_param_ps`: controls whether training parameters are trained by using the [Parameter Server](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/apply_parameter_server_training.html). 
- `clone`: clones `Parameter`. You can specify the parameter name after cloning.
diff --git a/docs/programming_guide/source_en/performance_optimization.md b/docs/programming_guide/source_en/performance_optimization.md
index 8df9d479158c6e361d005f150dd10339ec7f6500..843acbd9085682cfa1c4653ff73be289a7b1dad6 100644
--- a/docs/programming_guide/source_en/performance_optimization.md
+++ b/docs/programming_guide/source_en/performance_optimization.md
@@ -6,13 +6,13 @@
 
-
+
 
 MindSpore provides a variety of performance optimization methods, users can use them to improve the performance of training and inference according to the actual situation.
 
 | Optimization Stage | Optimization Method | Supported |
 | --- | --- | --- |
-| Training | [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html) | Ascend, GPU |
-| | [Mixed Precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) | Ascend, GPU |
-| | [Graph Kernel Fusion](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_graph_kernel_fusion.html) | Ascend |
-| | [Gradient Accumulation](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/apply_gradient_accumulation.html) | Ascend, GPU |
+| Training | [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html) | Ascend, GPU |
+| | [Mixed Precision](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/enable_mixed_precision.html) | Ascend, GPU |
+| | [Graph Kernel Fusion](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/enable_graph_kernel_fusion.html) | Ascend |
+| | [Gradient Accumulation](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/apply_gradient_accumulation.html) | Ascend, GPU |
diff --git a/docs/programming_guide/source_en/pipeline.md b/docs/programming_guide/source_en/pipeline.md
index 178ad149f7cebe5a9389f84ee332a6d2f1e0c411..231811e211676addda197c3a8e9a06b258533bd3 100644
--- a/docs/programming_guide/source_en/pipeline.md
+++ b/docs/programming_guide/source_en/pipeline.md
@@ -14,7 +14,7 @@
 
-
+
 
 ## Overview
@@ -22,7 +22,7 @@ Data is the basis of deep learning. Good data input can play a positive role in
 
 Each dataset class of MindSpore provides multiple data processing operators. You can build a data processing pipeline to define the data processing operations to be used. In this way, data can be continuously transferred to the training system through the data processing pipeline during the training process.
 
-The following table lists part of the common data processing operators supported by MindSpore. For more data processing operations, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.html).
+The following table lists part of the common data processing operators supported by MindSpore. For more data processing operations, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.html).
 
 | Data Processing Operator | Description |
 | ---- | ---- |
@@ -76,7 +76,7 @@ The output is as follows:
 
 Applies a specified function or operator to specified columns in a dataset to implement data mapping. You can customize the mapping function or use operators in c_transforms or py_transforms to augment image and text data.
 
-> For details about how to use data augmentation, see [Data Augmentation](https://www.mindspore.cn/doc/programming_guide/en/master/augmentation.html) in the Programming Guide.
+> For details about how to use data augmentation, see [Data Augmentation](https://www.mindspore.cn/doc/programming_guide/en/r1.1/augmentation.html) in the Programming Guide.
 
 ![map](./images/map.png)
diff --git a/docs/programming_guide/source_en/probability.md b/docs/programming_guide/source_en/probability.md
index 56aa7ea8333d8896d3f5a1740b304123ccf68ac7..75ae4ab08b6f195f792bc5b6e8055e5d0fa44c9b 100644
--- a/docs/programming_guide/source_en/probability.md
+++ b/docs/programming_guide/source_en/probability.md
@@ -47,7 +47,7 @@
 
-
+
 
 MindSpore deep probabilistic programming is to combine Bayesian learning with deep learning, including probability distribution, probability distribution mapping, deep probability network, probability inference algorithm, Bayesian layer, Bayesian conversion, and Bayesian toolkit. For professional Bayesian learning users, it provides probability sampling, inference algorithms, and model build libraries. On the other hand, advanced APIs are provided for users who are unfamiliar with Bayesian deep learning, so that they can use Bayesian models without changing the deep learning programming logic.
@@ -361,23 +361,28 @@ mean_b = Tensor(1.0, dtype=mstype.float32)
 sd_b = Tensor(2.0, dtype=mstype.float32)
 kl = my_normal.kl_loss('Normal', mean_b, sd_b)
 
+# get the distribution args as a tuple
+dist_arg = my_normal.get_dist_args()
+
 print("mean: ", mean)
 print("var: ", var)
 print("entropy: ", entropy)
 print("prob: ", prob)
 print("cdf: ", cdf)
 print("kl: ", kl)
+print("dist_arg: ", dist_arg)
 ```
 
 The output is as follows:
 
 ```python
-mean: 0.0
-var: 1.0
-entropy: 1.4189385
-prob: [0.35206532, 0.3989423, 0.35206532]
-cdf: [0.3085482, 0.5, 0.6914518]
-kl: 0.44314718
+mean:  0.0
+var:  1.0
+entropy:  1.4189385
+prob:  [0.35206532 0.3989423  0.35206532]
+cdf:  [0.30853754 0.5        0.69146246]
+kl:  0.44314718
+dist_arg: (Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1))
 ```
 
 ### Probability Distribution Class Application in Graph Mode
@@ -463,7 +468,7 @@ tx = Tensor(x, dtype=dtype.float32)
 cdf = LogNormal.cdf(tx)
 
 # generate samples from the distribution
-shape = ((3, 2))
+shape = (3, 2)
 sample = LogNormal.sample(shape)
 
 # get information of the distribution
@@ -473,26 +478,24 @@ print("underlying distribution:\n", LogNormal.distribution)
 print("bijector:\n", LogNormal.bijector)
 # get the computation results
 print("cdf:\n", cdf)
-print("sample:\n", sample)
+print("sample shape:\n", sample.shape)
 ```
 
 The output is as follows:
 
 ```python
 TransformedDistribution<
-  (_bijector): Exp
-  (_distribution): Normal
-  >
+  (_bijector): Exp
+  (_distribution): Normal
+  >
 underlying distribution:
- Normal
+ Normal
 bijector:
- Exp
+ Exp
 cdf:
- [0.7558914 0.9462397 0.9893489]
-sample:
- [[ 3.451917 0.645654 ]
- [ 0.86533326 1.2023963 ]
- [ 2.3343778 11.053896 ]]
+ [0.7558914 0.9462397 0.9893489]
+sample shape:
+(3, 2)
 ```
 
 When the `TransformedDistribution` is constructed to map the transformed `is_constant_jacobian = true` (for example, `ScalarAffine`), the constructed `TransformedDistribution` instance can use the `mean` API to calculate the average value. For example:
@@ -544,15 +547,14 @@ x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32)
 tx = Tensor(x, dtype=dtype.float32)
 cdf, sample = net(tx)
 print("cdf: ", cdf)
-print("sample: ", sample)
+print("sample shape: ", sample.shape)
 ```
 
 The output is as follows:
 
 ```python
 cdf: [0.7558914 0.86403143 0.9171715 0.9462397 ]
-sample: [[0.5361498 0.26627186 2.766659 ]
- [1.5831033 0.4096472 2.008679 ]]
+sample shape: (2, 3)
 ```
 
 ## Probability Distribution Mapping
@@ -694,11 +696,11 @@ print("inverse_log_jacobian: ", inverse_log_jaco)
 
 The output is as follows:
 
 ```python
-PowerTransform
-forward: [2.23606801e+00, 2.64575124e+00, 3.00000000e+00, 3.31662488e+00]
-inverse: [1.50000000e+00, 4.00000048e+00, 7.50000000e+00, 1.20000010e+01]
-forward_log_jacobian: [-8.04718971e-01, -9.72955048e-01, -1.09861231e+00, -1.19894767e+00]
-inverse_log_jacobian: [6.93147182e-01 1.09861231e+00 1.38629436e+00 1.60943794e+00]
+PowerTransform
+forward:  [2.236068  2.6457515 3.        3.3166249]
+inverse:  [ 1.5       4.        7.5      12.000001]
+forward_log_jacobian:  [-0.804719  -0.9729551 -1.0986123 -1.1989477]
+inverse_log_jacobian:  [0.6931472 1.0986123 1.3862944 1.609438 ]
 ```
 
 ### Invoking a Bijector Instance in Graph Mode
@@ -740,10 +742,10 @@ print("inverse_log_jaco: ", inverse_log_jaco)
 
 The output is as follows:
 
 ```python
-forward: [2.236068 2.6457512 3. 3.3166249]
-inverse: [ 1.5 4.0000005 7.5 12.000001 ]
-forward_log_jaco: [-0.804719 -0.97295505 -1.0986123 -1.1989477 ]
-inverse_log_jaco: [0.6931472 1.0986123 1.3862944 1.609438 ]
+forward:  [2.236068  2.6457515 3.        3.3166249]
+inverse:  [ 1.5       4.        7.5      12.000001]
+forward_log_jacobian:  [-0.804719  -0.9729551 -1.0986123 -1.1989477]
+inverse_log_jacobian:  [0.6931472 1.0986123 1.3862944 1.609438 ]
 ```
 
 ## Deep Probabilistic Network
@@ -849,7 +851,7 @@ decoder = Decoder()
 cvae = ConditionalVAE(encoder, decoder, hidden_size=400, latent_size=20, num_classes=10)
 ```
 
-Load a dataset, for example, Mnist. For details about the data loading and preprocessing process, see [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html). The create_dataset function is used to create a data iterator.
+Load a dataset, for example, Mnist. For details about the data loading and preprocessing process, see [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html). The create_dataset function is used to create a data iterator.
 
 ```python
 ds_train = create_dataset(image_path, 128, 1)
@@ -913,7 +915,7 @@ If you want the generated sample to be better and clearer, you can define a more
 
 The following uses the APIs in `nn.probability.bnn_layers` of MindSpore to implement the BNN image classification model. The APIs in `nn.probability.bnn_layers` of MindSpore include `NormalPrior`, `NormalPosterior`, `ConvReparam`, `DenseReparam`, `DenseLocalReparam` and `WithBNNLossCell`. The biggest difference between BNN and DNN is that the weight and bias of the BNN layer are not fixed values, but follow a distribution. `NormalPrior` and `NormalPosterior` are respectively used to generate a prior distribution and a posterior distribution that follow a normal distribution. `ConvReparam` and `DenseReparam` are the Bayesian convolutional layer and fully connected layers implemented by using the reparameterization method, respectively. `DenseLocalReparam` is the Bayesian fully connected layers implemented by using the local reparameterization method. `WithBNNLossCell` is used to encapsulate the BNN and loss function.
 
-For details about how to use the APIs in `nn.probability.bnn_layers` to build a Bayesian neural network and classify images, see [Applying the Bayesian Network](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_deep_probability_programming.html#id3).
+For details about how to use the APIs in `nn.probability.bnn_layers` to build a Bayesian neural network and classify images, see [Applying the Bayesian Network](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_deep_probability_programming.html#id3).
 
 ## Bayesian Conversion
@@ -969,7 +971,7 @@ The `trainable_bnn` parameter is a trainable DNN model packaged by `TrainOneStep
 ```
 
- `get_dense_args` specifies the parameters to be obtained from the fully connected layer of the DNN model. The default value is the common parameters of the fully connected layers of the DNN and BNN models. For details about the parameters, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/nn/mindspore.nn.Dense.html). `get_conv_args` specifies the parameters to be obtained from the convolutional layer of the DNN model. The default value is the common parameters of the convolutional layers of the DNN and BNN models. For details about the parameters, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/nn/mindspore.nn.Conv2d.html). `add_dense_args` and `add_conv_args` specify the new parameter values to be specified for the BNN layer. Note that the parameters in `add_dense_args` cannot be the same as those in `get_dense_args`. The same rule applies to `add_conv_args` and `get_conv_args`.
+ `get_dense_args` specifies the parameters to be obtained from the fully connected layer of the DNN model. The default value is the common parameters of the fully connected layers of the DNN and BNN models. For details about the parameters, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/nn/mindspore.nn.Dense.html). `get_conv_args` specifies the parameters to be obtained from the convolutional layer of the DNN model. The default value is the common parameters of the convolutional layers of the DNN and BNN models. For details about the parameters, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/nn/mindspore.nn.Conv2d.html). `add_dense_args` and `add_conv_args` specify the new parameter values to be specified for the BNN layer. Note that the parameters in `add_dense_args` cannot be the same as those in `get_dense_args`. The same rule applies to `add_conv_args` and `get_conv_args`.
 
 - Function 2: Convert a specific layer.
@@ -995,7 +997,7 @@ The `trainable_bnn` parameter is a trainable DNN model packaged by `TrainOneStep
 
 `Dnn_layer` specifies a DNN layer to be converted into a BNN layer, and `bnn_layer` specifies a BNN layer to be converted into a DNN layer, and `get_args` and `add_args` specify the parameters obtained from the DNN layer and the parameters to be re-assigned to the BNN layer, respectively.
 
-For details about how to use `TransformToBNN` in MindSpore, see [DNN-to-BNN Conversion with One Click](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_deep_probability_programming.html#dnnbnn).
+For details about how to use `TransformToBNN` in MindSpore, see [DNN-to-BNN Conversion with One Click](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_deep_probability_programming.html#dnnbnn).
 
 ## Bayesian Toolbox
diff --git a/docs/programming_guide/source_en/run.md b/docs/programming_guide/source_en/run.md
index bfca862a1c7b3635cd5df1f823e9088c44e13dd4..22054dfa93c44b058d4c543e74fda5ecf01e8f4d 100644
--- a/docs/programming_guide/source_en/run.md
+++ b/docs/programming_guide/source_en/run.md
@@ -12,7 +12,7 @@
 
-
+
 
 ## Overview
@@ -99,7 +99,7 @@ The output is as follows:
 
 ## Executing a Network Model
 
-The [Model API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.html#mindspore.Model) of MindSpore is an advanced API used for training and validation. Layers with the training or inference function can be combined into an object. The training, inference, and prediction functions can be implemented by calling the train, eval, and predict APIs, respectively.
+The [Model API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.html#mindspore.Model) of MindSpore is an advanced API used for training and validation. Layers with the training or inference function can be combined into an object. The training, inference, and prediction functions can be implemented by calling the train, eval, and predict APIs, respectively.
 
 You can transfer the initialized Model APIs such as the network, loss function, and optimizer as required. You can also configure amp_level to implement mixed precision and configure metrics to implement model evaluation.
@@ -237,7 +237,7 @@ if __name__ == "__main__":
     model.train(1, ds_train, callbacks=[LossMonitor()], dataset_sink_mode=True)
 ```
 
-> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html#downloading-the-dataset).
+> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html#downloading-the-dataset).
 
 The output is as follows:
@@ -251,7 +251,7 @@ epoch: 1 step: 1874, loss is 0.0346688
 epoch: 1 step: 1875, loss is 0.017264696
 ```
 
-> Use the PyNative mode for debugging, including the execution of single operator, common function, and network training model. For details, see [Debugging in PyNative Mode](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/debug_in_pynative_mode.html).
+> Use the PyNative mode for debugging, including the execution of single operator, common function, and network training model. For details, see [Debugging in PyNative Mode](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/debug_in_pynative_mode.html).
 
 ### Executing an Inference Model
@@ -385,7 +385,7 @@ In the preceding information:
 
 - `checkpoint_lenet-1_1875.ckpt`: name of the saved checkpoint model file.
 - `load_param_into_net`: loads parameters to the network.
 
-> For details about how to save the `checkpoint_lenet-1_1875.ckpt` file, see [Training the Network](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html#training-the-network).
+> For details about how to save the `checkpoint_lenet-1_1875.ckpt` file, see [Training the Network](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html#training-the-network).
 
 The output is as follows:
diff --git a/docs/programming_guide/source_en/sampler.md b/docs/programming_guide/source_en/sampler.md
index 3ac5379564309c9f0d436aecac2a6213b9dacf9e..adadf8eed932604220a136e77f23ae2a40b5cd59 100644
--- a/docs/programming_guide/source_en/sampler.md
+++ b/docs/programming_guide/source_en/sampler.md
@@ -14,13 +14,13 @@
 
-
+
 
 ## Overview
 
 MindSpore provides multiple samplers to help you sample datasets for various purposes to meet training requirements and solve problems such as oversized datasets and uneven distribution of sample categories. You only need to import the sampler object when loading the dataset for sampling the data.
 
-The following table lists part of the common samplers supported by MindSpore. In addition, you can define your own sampler class as required. For more samplers, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.html).
+The following table lists part of the common samplers supported by MindSpore. In addition, you can define your own sampler class as required. For more samplers, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.html).
 
 | Sampler | Description |
 | ---- | ---- |
diff --git a/docs/programming_guide/source_en/security_and_privacy.md b/docs/programming_guide/source_en/security_and_privacy.md
index 0af5c87751d4b1ca583acd54d35a957946483790..6ea8579954ad3e3150ac9e4e5be1f4dd994a4519 100644
--- a/docs/programming_guide/source_en/security_and_privacy.md
+++ b/docs/programming_guide/source_en/security_and_privacy.md
@@ -17,7 +17,7 @@
 
-
+
 
 ## Overview
@@ -37,7 +37,7 @@ The `Defense` base class defines the interface for adversarial training. Its sub
 
 The `Detector` base class defines the interface for adversarial sample detection. Its subclasses implement various specific detection algorithms to enhance the adversarial robustness of the models.
 
-For details, see [Improving Model Security with NAD Algorithm](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/improve_model_security_nad.html).
+For details, see [Improving Model Security with NAD Algorithm](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/improve_model_security_nad.html).
 
 ## Model Security Test
@@ -45,7 +45,7 @@ For details, see [Improving Model Security with NAD Algorithm](https://www.minds
 
 The `Fuzzer` class controls the fuzzing process based on the neuron coverage gain. It uses natural perturbation and adversarial sample generation methods as the mutation policy to activate more neurons to explore different types of model output results and error behavior, helping users enhance model robustness.
 
-For details, see [Testing Model Security Using Fuzz Testing](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/test_model_security_fuzzing.html).
+For details, see [Testing Model Security Using Fuzz Testing](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/test_model_security_fuzzing.html).
 
 ## Differential Privacy Training
@@ -53,7 +53,7 @@ For details, see [Testing Model Security Using Fuzz Testing](https://www.mindspo
 
 `DPModel` inherits `mindspore.Model` and provides the entry function for differential privacy training.
 
-For details, see [Protecting User Privacy with Differential Privacy Mechanism](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/protect_user_privacy_with_differential_privacy.html).
+For details, see [Protecting User Privacy with Differential Privacy Mechanism](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/protect_user_privacy_with_differential_privacy.html).
 
 ## Privacy Breach Risk Assessment
@@ -61,4 +61,4 @@ For details, see [Protecting User Privacy with Differential Privacy Mechanism](h
 
 The `MembershipInference` class provides a reverse analysis method. It can infer whether a sample is in the training set of a model based on the prediction information of the model on the sample to evaluate the privacy breach risk of the model.
 
-For details, see [Testing Model Security with Membership Inference](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/test_model_security_membership_inference.html).
+For details, see [Testing Model Security with Membership Inference](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/test_model_security_membership_inference.html).
diff --git a/docs/programming_guide/source_en/tensor.md b/docs/programming_guide/source_en/tensor.md
index 6ca70599626ed667af8fcdac9710e0c527d74449..51f7c33f614f2dfe4403a0b48d5af3ef8df5b232 100644
--- a/docs/programming_guide/source_en/tensor.md
+++ b/docs/programming_guide/source_en/tensor.md
@@ -11,11 +11,11 @@
 
-
+
 
 ## Overview
 
-Tensor is a basic data structure in the MindSpore network computing. For details about data types in tensors, see [dtype](https://www.mindspore.cn/doc/programming_guide/en/master/dtype.html).
+Tensor is a basic data structure in the MindSpore network computing. For details about data types in tensors, see [dtype](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dtype.html).
 
 Tensors of different dimensions represent different data. For example, a 0-dimensional tensor represents a scalar, a 1-dimensional tensor represents a vector, a 2-dimensional tensor represents a matrix, and a 3-dimensional tensor may represent the three channels of RGB images.
diff --git a/docs/programming_guide/source_en/tokenizer.md b/docs/programming_guide/source_en/tokenizer.md
index f3d000874bc4646c8cc48dd3a7f0b79b6610f845..52b59210cd51dcb600ecd354b3c4ec45ba60e909 100644
--- a/docs/programming_guide/source_en/tokenizer.md
+++ b/docs/programming_guide/source_en/tokenizer.md
@@ -14,7 +14,7 @@
 
-
+
 
 ## Overview
@@ -36,7 +36,7 @@ MindSpore provides the following tokenizers. In addition, you can customize tokenizers
 | WhitespaceTokenizer | Performs tokenization on scalar text data based on spaces. |
 | WordpieceTokenizer | Performs tokenization on scalar text data based on the word set. |
 
-For details about tokenizers, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.text.html).
+For details about tokenizers, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.text.html).
 
 ## MindSpore Tokenizers
@@ -157,7 +157,7 @@ print("------------------------before tokenization----------------------------")
 for data in dataset.create_dict_iterator(output_numpy=True):
     print(text.to_str(data['text']))
 
-# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/test_sentencepiece/botchan.txt
+# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/r1.1/tests/ut/data/dataset/test_sentencepiece/botchan.txt
 vocab_file = "botchan.txt"
 vocab = text.SentencePieceVocab.from_file([vocab_file], 5000, 0.9995, SentencePieceModel.UNIGRAM, {})
 tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING)
diff --git a/docs/programming_guide/source_en/train.md b/docs/programming_guide/source_en/train.md
index 1de44dbf01f2baae7e1ebc8fcea8cb4c85c66a9b..b5909077adc274733ecf34b1b357cde91bcdd366 100644
--- a/docs/programming_guide/source_en/train.md
+++ b/docs/programming_guide/source_en/train.md
@@ -13,7 +13,7 @@
 
-
+
 
 ## Overview
@@ -23,13 +23,13 @@ MindSpore provides a large number of network models such as object detection and
 
 Before customizing a training network, you need to understand the network support of MindSpore, constraints on network construction using Python, and operator support.
 
-- Network support: Currently, MindSpore supports multiple types of networks, including computer vision, natural language processing, recommender, and graph neural network. For details, see [Network List](https://www.mindspore.cn/doc/note/en/master/network_list.html). If the existing networks cannot meet your requirements, you can define your own network as required.
+- Network support: Currently, MindSpore supports multiple types of networks, including computer vision, natural language processing, recommender, and graph neural network. For details, see [Network List](https://www.mindspore.cn/doc/note/en/r1.1/network_list.html). If the existing networks cannot meet your requirements, you can define your own network as required.
 
 - Constraints on network construction using Python: MindSpore does not support the conversion of any Python source code into computational graphs. Therefore, the source code has the syntax and network definition constraints. These constraints may change as MindSpore evolves.
 
-- Operator support: As the name implies, the network is based on operators. Therefore, before customizing a training network, you need to understand the operators supported by MindSpore. For details about operator implementation on different backends (Ascend, GPU, and CPU), see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list.html).
+- Operator support: As the name implies, the network is based on operators. Therefore, before customizing a training network, you need to understand the operators supported by MindSpore. For details about operator implementation on different backends (Ascend, GPU, and CPU), see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list.html).
 
-> When the built-in operators of the network cannot meet the requirements, you can refer to [Custom Operators(Ascend)](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_operator_ascend.html) to quickly expand the custom operators of the Ascend AI processor.
+> When the built-in operators of the network cannot meet the requirements, you can refer to [Custom Operators(Ascend)](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_operator_ascend.html) to quickly expand the custom operators of the Ascend AI processor.
 
 The following is a code example:
@@ -246,7 +246,7 @@ if __name__ == "__main__":
         print("epoch: {0}/{1}, losses: {2}".format(step + 1, epoch, output.asnumpy(), flush=True))
 ```
 
-> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html#downloading-the-dataset).
+> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html#downloading-the-dataset).
 
 The output is as follows:
@@ -263,11 +263,11 @@ epoch: 9/10, losses: 2.305952548980713
 epoch: 10/10, losses: 1.4282708168029785
 ```
 
-> The typical application scenario is gradient accumulation. For details, see [Applying Gradient Accumulation Algorithm](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/apply_gradient_accumulation.html).
+> The typical application scenario is gradient accumulation. For details, see [Applying Gradient Accumulation Algorithm](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/apply_gradient_accumulation.html).
 
 ## Conducting Inference While Training
 
-For some complex networks with a large data volume and a relatively long training time, to learn the change of model accuracy in different training phases, the model accuracy may be traced in a manner of inference while training. For details, see [Evaluating the Model during Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/evaluate_the_model_during_training.html).
+For some complex networks with a large data volume and a relatively long training time, to learn the change of model accuracy in different training phases, the model accuracy may be traced in a manner of inference while training. For details, see [Evaluating the Model during Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/evaluate_the_model_during_training.html).
 
 ## On-Device Execution
diff --git a/docs/programming_guide/source_zh_cn/api_structure.md b/docs/programming_guide/source_zh_cn/api_structure.md
index ca2f2f4f004e39338f90e5d36514b8dbc215d29b..90bdae4ce40819a0fc50fba5425defc06edc3745 100644
--- a/docs/programming_guide/source_zh_cn/api_structure.md
+++ b/docs/programming_guide/source_zh_cn/api_structure.md
@@ -9,9 +9,9 @@
 
-
+
 
-
+
 
@@ -19,13 +19,13 @@
 MindSpore是一个全场景深度学习框架,旨在实现易开发、高效执行、全场景覆盖三大目标,其中易开发表现为API友好、调试难度低,高效执行包括计算效率、数据预处理效率和分布式训练效率,全场景则指框架同时支持云、边缘以及端侧场景。
 
-MindSpore总体架构分为前端表示层(Mind Expression,ME)、计算图引擎(Graph Engine,GE)和后端运行时三个部分。ME提供了用户级应用软件编程接口(Application Programming Interface,API),用于科学计算以及构建和训练神经网络,并将用户的Python代码转换为数据流图。GE是算子和硬件资源的管理器,负责控制从ME接收的数据流图的执行。后端运行时包含云、边、端上不同环境中的高效运行环境,例如CPU、GPU、Ascend AI处理器、 Android/iOS等。更多总体架构的相关内容请参见[总体架构](https://www.mindspore.cn/doc/note/zh-CN/master/design/mindspore/architecture.html)。
+MindSpore总体架构分为前端表示层(Mind Expression,ME)、计算图引擎(Graph Engine,GE)和后端运行时三个部分。ME提供了用户级应用软件编程接口(Application Programming Interface,API),用于科学计算以及构建和训练神经网络,并将用户的Python代码转换为数据流图。GE是算子和硬件资源的管理器,负责控制从ME接收的数据流图的执行。后端运行时包含云、边、端上不同环境中的高效运行环境,例如CPU、GPU、Ascend AI处理器、 Android/iOS等。更多总体架构的相关内容请参见[总体架构](https://www.mindspore.cn/doc/note/zh-CN/r1.1/design/mindspore/architecture.html)。
 
 ## 设计理念
 
 MindSpore源于全产业的最佳实践,向数据科学家和算法工程师提供了统一的模型训练、推理和导出等接口,支持端、边、云等不同场景下的灵活部署,推动深度学习和科学计算等领域繁荣发展。
 
-MindSpore目前提供了Python编程范式,用户使用Python原生控制逻辑即可构建复杂的神经网络模型,AI编程变得简单,具体示例请参见[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)。
+MindSpore目前提供了Python编程范式,用户使用Python原生控制逻辑即可构建复杂的神经网络模型,AI编程变得简单,具体示例请参见[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)。
 
 目前主流的深度学习框架的执行模式有两种,分别为静态图模式和动态图模式。静态图模式拥有较高的训练性能,但难以调试。动态图模式相较于静态图模式虽然易于调试,但难以高效执行。MindSpore提供了动态图和静态图统一的编码方式,大大增加了静态图和动态图的可兼容性,用户无需开发多套代码,仅变更一行代码便可切换动态图/静态图模式,例如设置`context.set_context(mode=context.PYNATIVE_MODE)`切换成动态图模式,设置`context.set_context(mode=context.GRAPH_MODE)`即可切换成静态图模式,用户可拥有更轻松的开发调试及性能体验。
@@ -60,11 +60,11 @@ if __name__ == "__main__":
 
 此外,SCT能够将Python代码转换为MindSpore函数中间表达(Intermediate Representation,IR),该函数中间表达构造出能够在不同设备解析和执行的计算图,并且在执行该计算图前,应用了多种软硬件协同优化技术,端、边、云等不同场景下的性能和效率得到针对性的提升。
 
-如何提高数据处理能力以匹配人工智能芯片的算力,是保证人工智能芯片发挥极致性能的关键。MindSpore为用户提供了多种数据处理算子,通过自动数据加速技术实现了高性能的流水线,包括数据加载、数据论证、数据转换等,支持CV/NLP/GNN等全场景的数据处理能力。MindRecord是MindSpore的自研数据格式,具有读写高效、易于分布式处理等优点,用户可将非标准的数据集和常用的数据集转换为MindRecord格式,从而获得更好的性能体验,转换详情请参见[MindSpore数据格式转换](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_conversion.html)。MindSpore支持加载常用的数据集和多种数据存储格式下的数据集,例如通过`dataset=dataset.Cifar10Dataset("Cifar10Data/")`即可完成CIFAR-10数据集的加载,其中`Cifar10Data/`为数据集本地所在目录,用户也可通过`GeneratorDataset`自定义数据集的加载方式。数据增强是一种基于(有限)数据生成新数据的方法,能够减少网络模型过拟合的现象,从而提高模型的泛化能力。MindSpore除了支持用户自定义数据增强外,还提供了自动数据增强方式,使得数据增强更加灵活,详情请见[自动数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/auto_augmentation.html)。
+如何提高数据处理能力以匹配人工智能芯片的算力,是保证人工智能芯片发挥极致性能的关键。MindSpore为用户提供了多种数据处理算子,通过自动数据加速技术实现了高性能的流水线,包括数据加载、数据论证、数据转换等,支持CV/NLP/GNN等全场景的数据处理能力。MindRecord是MindSpore的自研数据格式,具有读写高效、易于分布式处理等优点,用户可将非标准的数据集和常用的数据集转换为MindRecord格式,从而获得更好的性能体验,转换详情请参见[MindSpore数据格式转换](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_conversion.html)。MindSpore支持加载常用的数据集和多种数据存储格式下的数据集,例如通过`dataset=dataset.Cifar10Dataset("Cifar10Data/")`即可完成CIFAR-10数据集的加载,其中`Cifar10Data/`为数据集本地所在目录,用户也可通过`GeneratorDataset`自定义数据集的加载方式。数据增强是一种基于(有限)数据生成新数据的方法,能够减少网络模型过拟合的现象,从而提高模型的泛化能力。MindSpore除了支持用户自定义数据增强外,还提供了自动数据增强方式,使得数据增强更加灵活,详情请见[自动数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/auto_augmentation.html)。
 
-深度学习神经网络模型通常含有较多的隐藏层进行特征提取,但特征提取随机化、调试过程不可视限制了深度学习技术的可信和调优。MindSpore支持可视化调试调优(MindInsight),提供训练看板、溯源、性能分析和调试器等功能,帮助用户发现模型训练过程中出现的偏差,轻松进行模型调试和性能调优。例如用户可在初始化网络前,通过`profiler=Profiler()`初始化`Profiler`对象,自动收集训练过程中的算子耗时等信息并记录到文件中,在训练结束后调用`profiler.analyse()`停止收集并生成性能分析结果,以可视化形式供用户查看分析,从而更高效地调试网络性能,更多调试调优相关内容请见[训练过程可视化](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/visualization_tutorials.html)。
+深度学习神经网络模型通常含有较多的隐藏层进行特征提取,但特征提取随机化、调试过程不可视限制了深度学习技术的可信和调优。MindSpore支持可视化调试调优(MindInsight),提供训练看板、溯源、性能分析和调试器等功能,帮助用户发现模型训练过程中出现的偏差,轻松进行模型调试和性能调优。例如用户可在初始化网络前,通过`profiler=Profiler()`初始化`Profiler`对象,自动收集训练过程中的算子耗时等信息并记录到文件中,在训练结束后调用`profiler.analyse()`停止收集并生成性能分析结果,以可视化形式供用户查看分析,从而更高效地调试网络性能,更多调试调优相关内容请见[训练过程可视化](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/visualization_tutorials.html)。
 
-随着神经网络模型和数据集的规模不断增加,分布式并行训练成为了神经网络训练的常见做法,但分布式并行训练的策略选择和编写十分复杂,这严重制约着深度学习模型的训练效率,阻碍深度学习的发展。MindSpore统一了单机和分布式训练的编码方式,开发者无需编写复杂的分布式策略,在单机代码中添加少量代码即可实现分布式训练,例如设置`context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)`便可自动建立代价模型,为用户选择一种较优的并行模式,提高神经网络训练效率,大大降低了AI开发门槛,使用户能够快速实现模型思路,更多内容请见[分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html)。
+随着神经网络模型和数据集的规模不断增加,分布式并行训练成为了神经网络训练的常见做法,但分布式并行训练的策略选择和编写十分复杂,这严重制约着深度学习模型的训练效率,阻碍深度学习的发展。MindSpore统一了单机和分布式训练的编码方式,开发者无需编写复杂的分布式策略,在单机代码中添加少量代码即可实现分布式训练,例如设置`context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)`便可自动建立代价模型,为用户选择一种较优的并行模式,提高神经网络训练效率,大大降低了AI开发门槛,使用户能够快速实现模型思路,更多内容请见[分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/distributed_training_tutorials.html)。
 
 ## 层次结构
diff --git a/docs/programming_guide/source_zh_cn/augmentation.md b/docs/programming_guide/source_zh_cn/augmentation.md
index 3a478abf204d9d53e27b9b6032eccb372421bc4d..2e36ad251bd121abd16f7a7b294153c36eb2ccd0 100644
--- a/docs/programming_guide/source_zh_cn/augmentation.md
+++ b/docs/programming_guide/source_zh_cn/augmentation.md
@@ -16,9 +16,9 @@
 
-
+
 
-
+
 
@@ -33,7 +33,7 @@ MindSpore提供了`c_transforms`模块和`py_transforms`模块供用户进行数
 | c_transforms | 基于C++的OpenCV实现 | 具有较高的性能。 |
 | py_transforms | 基于Python的PIL实现 | 该模块提供了多种图像增强功能,并提供了PIL Image和NumPy数组之间的传输方法。|
 
-MindSpore目前支持的常用数据增强算子如下表所示,更多数据增强算子参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.vision.html)。
+MindSpore目前支持的常用数据增强算子如下表所示,更多数据增强算子参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.vision.html)。
 
 | 模块 | 算子 | 说明 |
 | ---- | ---- | ---- |
diff --git a/docs/programming_guide/source_zh_cn/auto_augmentation.md b/docs/programming_guide/source_zh_cn/auto_augmentation.md
index 44f3d92df027365d3df83348e575feb6f488917c..270848a598b5f76ef7c61b1d7b9d12735fafa6ad 100644
--- a/docs/programming_guide/source_zh_cn/auto_augmentation.md
+++ b/docs/programming_guide/source_zh_cn/auto_augmentation.md
@@ -12,9 +12,9 @@
 
-
+
 
-
+
 
 ## 概述
@@ -26,7 +26,7 @@ MindSpore除了可以让用户自定义数据增强的使用,还提供了一
 
 MindSpore提供了一系列基于概率的自动数据增强API,用户可以对各种数据增强操作进行随机选择与组合,使数据增强更加灵活。
 
-关于API的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.transforms.html)。
+关于API的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.transforms.html)。
 
 ### RandomApply
diff --git a/docs/programming_guide/source_zh_cn/auto_parallel.md b/docs/programming_guide/source_zh_cn/auto_parallel.md
index b05e755f9740fe8cf8ca764e6d66312331b28a3d..584753bf280ac843cc43994d9c28b0441ffe002c 100644
--- a/docs/programming_guide/source_zh_cn/auto_parallel.md
+++ b/docs/programming_guide/source_zh_cn/auto_parallel.md
@@ -33,7 +33,7 @@
 
-
+
 
 ## 概述
@@ -103,7 +103,7 @@ context.get_auto_parallel_context("gradients_mean")
 
 其中`auto_parallel`和`data_parallel`在MindSpore教程中有完整样例:
 
-。
+。
 
 代码样例如下:
@@ -341,7 +341,7 @@ x = Parameter(Tensor(np.ones([2, 2])), layerwise_parallel=True)
 
 具体用例请参考MindSpore分布式并行训练教程:
 
-。
+。
 
 ## 自动并行
@@ -349,4 +349,4 @@ x = Parameter(Tensor(np.ones([2, 2])), layerwise_parallel=True)
 
 具体用例请参考MindSpore分布式并行训练教程:
 
-。
+。
diff --git a/docs/programming_guide/source_zh_cn/cache.md b/docs/programming_guide/source_zh_cn/cache.md
index a319efd5957e666740b1c24dba8ce6f470285852..6d9015a3ae8ae65d9bc77861f9167c73c15c47d5 100644
--- a/docs/programming_guide/source_zh_cn/cache.md
+++ b/docs/programming_guide/source_zh_cn/cache.md
@@ -11,7 +11,7 @@
 
-
+
 
 ## 概述
@@ -146,7 +146,7 @@
 
 需要注意的是,两个例子均需要按照步骤4中的方法分别创建一个缓存实例,并在数据集加载或map算子中将所创建的`test_cache`作为`cache`参数分别传入。
 
- 下面两个样例中使用到CIFAR-10数据集。运行样例前,需参照[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html#cifar-10-100)中的方法下载并存放CIFAR-10数据集。
+ 下面两个样例中使用到CIFAR-10数据集。运行样例前,需参照[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html#cifar-10-100)中的方法下载并存放CIFAR-10数据集。
 
 - 缓存原始数据集加载的数据。
@@ -305,11 +305,11 @@
 done
 ```
 
- > 直接获取完整样例代码:[cache.sh](https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/cache/cache.sh)
+ > 直接获取完整样例代码:[cache.sh](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/cache/cache.sh)
 
 4. 创建并应用缓存实例。
 
- 下面样例中使用到CIFAR-10数据集。运行样例前,需参照[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html#cifar-10-100)中的方法下载并存放CIFAR-10数据集。目录结构如下:
+ 下面样例中使用到CIFAR-10数据集。运行样例前,需参照[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html#cifar-10-100)中的方法下载并存放CIFAR-10数据集。目录结构如下:
 
 ```text
 ├─cache.sh
@@ -348,7 +348,7 @@
     print("Got {} samples on device {}".format(num_iter, args_opt.device))
 ```
 
- > 直接获取完整样例代码:[my_training_script.py](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/cache/my_training_script.py)
+ > 直接获取完整样例代码:[my_training_script.py](https://gitee.com/mindspore/docs/blob/r1.1/tutorials/tutorial_code/cache/my_training_script.py)
 
 5. 运行训练脚本。
diff --git a/docs/programming_guide/source_zh_cn/callback.md b/docs/programming_guide/source_zh_cn/callback.md
index 15d1e9a391e00a18c0111030f2fd17457b39f4e4..a9451c5b7a4992ba2a8020c796215de8a6263d6f 100644
--- a/docs/programming_guide/source_zh_cn/callback.md
+++ b/docs/programming_guide/source_zh_cn/callback.md
@@ -9,7 +9,7 @@
 
-
+
 
 ## 概述
@@ -23,19 +23,19 @@ Callback回调函数在MindSpore中被实现为一个类,Callback机制类似
 
- ModelCheckpoint
 
  与模型训练过程相结合,保存训练后的模型和网络参数,方便进行再推理或再训练。`ModelCheckpoint`一般与`CheckpointConfig`配合使用,`CheckpointConfig`是一个参数配置类,可自定义配置checkpoint的保存策略。
 
-  详细内容,请参考[Checkpoint官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html)。
+  详细内容,请参考[Checkpoint官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html)。
 
- SummaryCollector
 
  帮助收集一些常见信息,如loss、learning rate、计算图、参数权重等,方便用户将训练过程可视化和查看信息,并且可以允许summary操作从summary文件中收集数据。
 
-  详细内容,请参考[Summary官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/summary_record.html)。
+  详细内容,请参考[Summary官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/summary_record.html)。
 
- LossMonitor
 
  监控训练过程中的loss变化情况,当loss为NAN或INF时,提前终止训练。可以在日志中输出loss,方便用户查看。
 
-  详细内容,请参考[LossMonitor官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_debugging_info.html#mindsporecallback)。
+  详细内容,请参考[LossMonitor官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_debugging_info.html#mindsporecallback)。
 
- TimeMonitor
 
@@ -51,6 +51,6 @@ MindSpore不但有功能强大的内置回调函数,还可以支持用户自
 
 2. 实现保存训练过程中精度最高的checkpoint文件,用户可以自定义在每一轮迭代后都保存当前精度最高的模型。
 
-详细内容,请参考[自定义Callback官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_debugging_info.html#id3)。
+详细内容,请参考[自定义Callback官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_debugging_info.html#id3)。
 
 根据教程,用户可以很容易实现具有其他功能的自定义回调函数,如实现在每一轮训练结束后都输出相应的详细训练信息,包括训练进度、训练轮次、训练名称、loss值等;如实现在loss或模型精度达到一定值后停止训练,用户可以设定loss或模型精度的阈值,当loss或模型精度达到该阈值后就提前终止训练等。
diff --git a/docs/programming_guide/source_zh_cn/cell.md b/docs/programming_guide/source_zh_cn/cell.md
index ce5021e70db8fbbcc456c3724c6f9164843699f7..650a013cb5342982fae1d802874456b65f832d04 100644
--- a/docs/programming_guide/source_zh_cn/cell.md
+++ b/docs/programming_guide/source_zh_cn/cell.md
@@ -21,8 +21,8 @@
 
-
-
+
+
 
@@ -67,7 +67,7 @@ class Net(nn.Cell):
 
 `parameters_dict`方法识别出网络结构中所有的参数,返回一个以key为参数名,value为参数值的`OrderedDict`。
 
-`Cell`类中返回参数的方法还有许多,例如`get_parameters`、`trainable_params`等,具体使用方法可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Cell.html)。
+`Cell`类中返回参数的方法还有许多,例如`get_parameters`、`trainable_params`等,具体使用方法可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Cell.html)。
 
 代码样例如下:
@@ -342,7 +342,7 @@ print(loss(input_data, target_data))
 
 ## 优化算法
 
-`mindspore.nn.optim`是MindSpore框架中实现各种优化算法的模块,详细说明参见[优化算法](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/optim.html)。
+`mindspore.nn.optim`是MindSpore框架中实现各种优化算法的模块,详细说明参见[优化算法](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/optim.html)。
 
 ## 构建自定义网络
diff --git a/docs/programming_guide/source_zh_cn/conf.py
b/docs/programming_guide/source_zh_cn/conf.py index 95d7701759707ab95a3c199cd8a22e2e2cc1194d..7be5f453c21b75703c763a14c8180127aed60e6b 100644 --- a/docs/programming_guide/source_zh_cn/conf.py +++ b/docs/programming_guide/source_zh_cn/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/programming_guide/source_zh_cn/context.md b/docs/programming_guide/source_zh_cn/context.md index 40723c9363079be090f52bad4e4ced5e2e7130e9..d425cf3779e007df13803b1941ff59ab15f493d9 100644 --- a/docs/programming_guide/source_zh_cn/context.md +++ b/docs/programming_guide/source_zh_cn/context.md @@ -16,9 +16,9 @@ - +    - +    @@ -110,7 +110,7 @@ from mindspore.context import ParallelMode context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, gradients_mean=True) ``` -> 分布式并行训练详细介绍可以查看[分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html)。 +> 分布式并行训练详细介绍可以查看[分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/distributed_training_tutorials.html)。 ## 维测管理 @@ -122,13 +122,25 @@ context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, grad - `enable_profiling`:是否开启profiling功能。设置为True,表示开启profiling功能,从enable_options读取profiling的采集选项;设置为False,表示关闭profiling功能,仅采集training_trace。 -- `profiling_options`:profiling采集选项,取值如下,支持采集多项数据。training_trace:采集迭代轨迹数据,即训练任务及AI软件栈的软件信息,实现对训练任务的性能分析,重点关注数据增强、前后向计算、梯度聚合更新等相关数据;task_trace:采集任务轨迹数据,即昇腾910处理器HWTS/AICore的硬件信息,分析任务开始、结束等信息;op_trace:采集单算子性能数据。 +- `profiling_options`:profiling采集选项,取值如下,支持采集多项数据。 + result_path: Profiling采集结果文件保存路径。该参数指定的目录需要在启动训练的环境上(容器或Host侧)提前创建且确保安装时配置的运行用户具有读写权限,支持配置绝对路径或相对路径(相对执行命令时的当前路径); + training_trace:采集迭代轨迹数据,即训练任务及AI软件栈的软件信息,实现对训练任务的性能分析,重点关注数据增强、前后向计算、梯度聚合更新等相关数据,取值on/off。 + 
task_trace:采集任务轨迹数据,即昇腾910处理器HWTS/AICore的硬件信息,分析任务开始、结束等信息,取值on/off; + aicpu_trace: 采集aicpu数据增强的profiling数据。取值on/off; + fp_point: training_trace为on时需要配置。指定训练网络迭代轨迹正向算子的开始位置,用于记录前向算子开始时间戳。配置值为指定的正向第一个算子名字。当该值为空时,系统自动获取正向第一个算子名字; + bp_point: training_trace为on时需要配置。指定训练网络迭代轨迹反向算子的结束位置,用于记录反向算子结束时间戳。配置值为指定的反向最后一个算子名字。当该值为空时,系统自动获取反向最后一个算子名字; + ai_core_metrics: 取值如下: + - ArithmeticUtilization: 各种计算类指标占比统计。 + - PipeUtilization: 计算单元和搬运单元耗时占比,该项为默认值。 + - Memory: 外部内存读写类指令占比。 + - MemoryL0: 内部内存读写类指令占比。 + - ResourceConflictRatio: 流水线队列类指令占比。 代码样例如下: ```python from mindspore import context -context.set_context(enable_profiling=True, profiling_options="training_trace") +context.set_context(enable_profiling=True, profiling_options= '{"result_path":"/home/data/output","training_trace":"on"}') ``` ### 保存MindIR @@ -146,13 +158,13 @@ from mindspore import context context.set_context(save_graphs=True) ``` -> MindIR详细介绍可以查看[MindSpore IR(MindIR)](https://www.mindspore.cn/doc/note/zh-CN/master/design/mindspore/mindir.html)。 +> MindIR详细介绍可以查看[MindSpore IR(MindIR)](https://www.mindspore.cn/doc/note/zh-CN/r1.1/design/mindspore/mindir.html)。 ### print算子落盘 默认情况下,MindSpore的自研print算子可以将用户输入的Tensor或字符串信息打印出来,支持多字符串输入,多Tensor输入和字符串与Tensor的混合输入,输入参数以逗号隔开。 -> Print打印功能可以查看[Print算子功能介绍](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_debugging_info.html#print)。 +> Print打印功能可以查看[Print算子功能介绍](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_debugging_info.html#print)。 - `print_file_path`:可以将print算子数据保存到文件,同时关闭屏幕打印功能。如果保存的文件已经存在,则会给文件添加时间戳后缀。数据保存到文件可以解决数据量较大时屏幕打印数据丢失的问题。 @@ -163,4 +175,4 @@ from mindspore import context context.set_context(print_file_path="print.pb") ``` -> context接口详细介绍可以查看[mindspore.context](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.context.html)。 +> context接口详细介绍可以查看[mindspore.context](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.context.html)。 diff --git 
a/docs/programming_guide/source_zh_cn/customized.rst b/docs/programming_guide/source_zh_cn/customized.rst index 129b147956d9fc0e702dc68cc1e0add0f7e6d2d0..a86ddb8601664e529c4a2f4d4c8d5c3ed04b295d 100644 --- a/docs/programming_guide/source_zh_cn/customized.rst +++ b/docs/programming_guide/source_zh_cn/customized.rst @@ -4,6 +4,6 @@ .. toctree:: :maxdepth: 1 - 自定义算子(Ascend) - 自定义算子(GPU) - 自定义算子(CPU) + 自定义算子(Ascend) + 自定义算子(GPU) + 自定义算子(CPU) diff --git a/docs/programming_guide/source_zh_cn/dataset_conversion.md b/docs/programming_guide/source_zh_cn/dataset_conversion.md index 253c5b0ad5bfbd2be4b7b0f9e49ea3522fefd455..918e9c7a6f6961f4a4ad8a16630506798913f570 100644 --- a/docs/programming_guide/source_zh_cn/dataset_conversion.md +++ b/docs/programming_guide/source_zh_cn/dataset_conversion.md @@ -15,9 +15,9 @@ - +    - +    @@ -185,7 +185,7 @@ MindSpore提供转换常用数据集的工具类,能够将常用的数据集 | TFRecord | TFRecordToMR | | CSV File | CsvToMR | -更多数据集转换的详细说明可参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.mindrecord.html)。 +更多数据集转换的详细说明可参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.mindrecord.html)。 ### 转换CIFAR-10数据集 diff --git a/docs/programming_guide/source_zh_cn/dataset_loading.md b/docs/programming_guide/source_zh_cn/dataset_loading.md index 0bde657f68bddc2652ae9b9283afa8a6bd976e43..d16da17f5376636a9677d40a63965202ba25591b 100644 --- a/docs/programming_guide/source_zh_cn/dataset_loading.md +++ b/docs/programming_guide/source_zh_cn/dataset_loading.md @@ -21,9 +21,9 @@ - +    - +    @@ -54,7 +54,7 @@ MindSpore还支持加载多种数据存储格式下的数据集,用户可以 MindSpore也同样支持使用`GeneratorDataset`自定义数据集的加载方式,用户可以根据需要实现自己的数据集类。 -> 更多详细的数据集加载接口说明,参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.html)。 +> 更多详细的数据集加载接口说明,参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.html)。 ## 常用数据集加载 @@ -209,7 +209,7 @@ Panoptic: dict_keys(['image', 'bbox', 'category_id', 'iscrowd', 
'area']) MindRecord是MindSpore定义的一种数据格式,使用MindRecord能够获得更好的性能提升。 -> 阅读[数据格式转换](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_conversion.html)章节,了解如何将数据集转化为MindSpore数据格式。 +> 阅读[数据格式转换](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_conversion.html)章节,了解如何将数据集转化为MindSpore数据格式。 下面的样例通过`MindDataset`接口加载MindRecord文件,并展示已加载数据的标签。 diff --git a/docs/programming_guide/source_zh_cn/dtype.md b/docs/programming_guide/source_zh_cn/dtype.md index 7d667329d3f9c2f8ae8649fe5b6caf388dc6f87d..3ac4943735c88d71c78697d7f16cb6d9244f862e 100644 --- a/docs/programming_guide/source_zh_cn/dtype.md +++ b/docs/programming_guide/source_zh_cn/dtype.md @@ -8,9 +8,9 @@ - +    - +    @@ -20,7 +20,7 @@ MindSpore张量支持不同的数据类型,包含`int8`、`int16`、`int32`、 在MindSpore的运算处理流程中,Python中的`int`数会被转换为定义的int64类型,`float`数会被转换为定义的`float32`类型。 -详细的类型支持情况请参考。 +详细的类型支持情况请参考。 以下代码,打印MindSpore的数据类型int32。 diff --git a/docs/programming_guide/source_zh_cn/infer.md b/docs/programming_guide/source_zh_cn/infer.md index 8dc0564bd1b199ac68a15a69d421060876660bff..7b03c8eef961e8ba512a9bcb8d3aa609adfb2296 100644 --- a/docs/programming_guide/source_zh_cn/infer.md +++ b/docs/programming_guide/source_zh_cn/infer.md @@ -6,14 +6,14 @@ - + 基于MindSpore训练后的模型,支持在Ascend 910 AI处理器、Ascend 310 AI处理器、GPU、CPU、端侧等多种不同的平台上执行推理。使用方法可参考如下教程: -- [在Ascend 910 AI处理器上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_ascend_910.html) -- [在Ascend 310 AI处理器上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_ascend_310.html) -- [在GPU上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_gpu.html) -- [在CPU上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_cpu.html) -- [在端侧执行推理](https://www.mindspore.cn/tutorial/lite/zh-CN/master/quick_start/quick_start.html) +- [在Ascend 910 
AI处理器上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_910.html) +- [在Ascend 310 AI处理器上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310.html) +- [在GPU上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_gpu.html) +- [在CPU上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_cpu.html) +- [在端侧执行推理](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.1/quick_start/quick_start.html) -同时,MindSpore提供了一个轻量级、高性能的服务模块,称为MindSpore Serving,可帮助MindSpore开发者在生产环境中高效部署在线推理服务,使用方法可参考[部署推理服务](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_example.html)。 +同时,MindSpore提供了一个轻量级、高性能的服务模块,称为MindSpore Serving,可帮助MindSpore开发者在生产环境中高效部署在线推理服务,使用方法可参考[部署推理服务](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_example.html)。 diff --git a/docs/programming_guide/source_zh_cn/network_component.md b/docs/programming_guide/source_zh_cn/network_component.md index fcbfdaa013e7607b220020c57eaf669140a05dad..c13bf4e4af675aff4c7c32a46d1247d020368483 100644 --- a/docs/programming_guide/source_zh_cn/network_component.md +++ b/docs/programming_guide/source_zh_cn/network_component.md @@ -10,9 +10,9 @@ - +    - +    @@ -26,7 +26,7 @@ MindSpore封装了一些常用的网络组件,用于网络的训练、推理 ## GradOperation -GradOperation组件用于生成输入函数的梯度,利用`get_all`、`get_by_list`和`sens_param`参数控制梯度的计算方式,细节内容详见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GradOperation.html)。 +GradOperation组件用于生成输入函数的梯度,利用`get_all`、`get_by_list`和`sens_param`参数控制梯度的计算方式,细节内容详见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GradOperation.html)。 GradOperation的使用实例如下: diff --git a/docs/programming_guide/source_zh_cn/network_list.rst b/docs/programming_guide/source_zh_cn/network_list.rst index 0086283c5f999b6131593dd0be63ce852df01927..f6ce3af4aaaa3d987a618af4a0e6737cc5e74037 100644 --- 
a/docs/programming_guide/source_zh_cn/network_list.rst +++ b/docs/programming_guide/source_zh_cn/network_list.rst @@ -4,4 +4,4 @@ .. toctree:: :maxdepth: 1 - MindSpore网络支持 \ No newline at end of file + MindSpore网络支持 \ No newline at end of file diff --git a/docs/programming_guide/source_zh_cn/operator_list.rst b/docs/programming_guide/source_zh_cn/operator_list.rst index 6fc28fa3bdea8f865f0f1702724bb434e885ec45..bf2121c2efc84dd6a009b6d185cffde7234e388a 100644 --- a/docs/programming_guide/source_zh_cn/operator_list.rst +++ b/docs/programming_guide/source_zh_cn/operator_list.rst @@ -4,7 +4,7 @@ .. toctree:: :maxdepth: 1 - MindSpore算子支持 - MindSpore隐式类型转换的算子支持 - MindSpore分布式算子支持 - MindSpore Lite算子支持 \ No newline at end of file + MindSpore算子支持 + MindSpore隐式类型转换的算子支持 + MindSpore分布式算子支持 + MindSpore Lite算子支持 \ No newline at end of file diff --git a/docs/programming_guide/source_zh_cn/operators.md b/docs/programming_guide/source_zh_cn/operators.md index 77baf54b3bb6aee6ff35a4c46e9d5eec6ef05aed..f7ad76fe4a3a46b2b0f83c81505159fe0644a83c 100644 --- a/docs/programming_guide/source_zh_cn/operators.md +++ b/docs/programming_guide/source_zh_cn/operators.md @@ -40,9 +40,9 @@ - +    - +    @@ -60,7 +60,7 @@ MindSpore的算子组件,可从算子使用方式和算子功能两种维度 ### mindspore.ops.operations -operations提供了所有的Primitive算子接口,是开放给用户的最低阶算子接口。算子支持情况可查询[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list.html)。 +operations提供了所有的Primitive算子接口,是开放给用户的最低阶算子接口。算子支持情况可查询[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list.html)。 Primitive算子也称为算子原语,它直接封装了底层的Ascend、GPU、AICPU、CPU等多种算子的具体实现,为用户提供基础算子能力。 @@ -89,7 +89,7 @@ output = [ 1. 8. 64.] 
### mindspore.ops.functional -为了简化没有属性的算子的调用流程,MindSpore提供了一些算子的functional版本。入参要求参考原算子的输入输出要求。算子支持情况可以查询[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list_ms.html#mindspore-ops-functional)。 +为了简化没有属性的算子的调用流程,MindSpore提供了一些算子的functional版本。入参要求参考原算子的输入输出要求。算子支持情况可以查询[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list_ms.html#mindspore-ops-functional)。 例如`P.Pow`算子,我们提供了functional版本的`F.tensor_pow`算子。 @@ -172,7 +172,7 @@ tensor [[2.4, 4.2] scalar 3 ``` -此外,高阶函数`GradOperation`提供了根据输入的函数,求这个函数对应的梯度函数的方式,详细可以参阅[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GradOperation.html)。 +此外,高阶函数`GradOperation`提供了根据输入的函数,求这个函数对应的梯度函数的方式,详细可以参阅[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GradOperation.html)。 ### operations/functional/composite三类算子合并用法 @@ -194,7 +194,7 @@ pow = ops.Pow() ## 算子功能 -算子按功能可分为张量操作、网络操作、数组操作、图像操作、编码操作、调试操作和量化操作七个功能模块。所有的算子在Ascend AI处理器、GPU和CPU的支持情况,参见[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list.html)。 +算子按功能可分为张量操作、网络操作、数组操作、图像操作、编码操作、调试操作和量化操作七个功能模块。所有的算子在Ascend AI处理器、GPU和CPU的支持情况,参见[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list.html)。 ### 张量操作 diff --git a/docs/programming_guide/source_zh_cn/optim.md b/docs/programming_guide/source_zh_cn/optim.md index 1e2845cb0233ce3d1868fdc3e5b32eeac062e623..530b15a4933f3141ccf840a327dfe20b4597e44a 100644 --- a/docs/programming_guide/source_zh_cn/optim.md +++ b/docs/programming_guide/source_zh_cn/optim.md @@ -13,9 +13,9 @@ - +    - +    diff --git a/docs/programming_guide/source_zh_cn/parameter.md b/docs/programming_guide/source_zh_cn/parameter.md index 110d793f396ac9f953d5bbedcdc41a9b46c5073d..ef728c4666a2f38a8a3837769329d9b7b2e825d1 100644 --- a/docs/programming_guide/source_zh_cn/parameter.md +++ b/docs/programming_guide/source_zh_cn/parameter.md @@ -11,9 +11,9 @@ - +    - +    @@ -41,7 +41,7 @@ mindspore.Parameter(default_input, name=None, 
requires_grad=True, layerwise_para 当`layerwise_parallel`(混合并行)配置为True时,参数广播和参数梯度聚合时会过滤掉该参数。 -有关分布式并行的相关配置,可以参考文档:。 +有关分布式并行的相关配置,可以参考文档:。 下例通过三种不同的数据类型构造了`Parameter`,三个`Parameter`都需要更新,都不采用layerwise并行。如下: @@ -126,7 +126,7 @@ data: Parameter (name=x) - `set_data`:设置`Parameter`保存的数据,支持传入`Tensor`、`Initializer`、`int`和`float`进行设置, 将方法的入参`slice_shape`设置为True时,可改变`Parameter`的shape,反之,设置的数据shape必须与`Parameter`原来的shape保持一致。 -- `set_param_ps`:控制训练参数是否通过[Parameter Server](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_parameter_server_training.html)进行训练。 +- `set_param_ps`:控制训练参数是否通过[Parameter Server](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_parameter_server_training.html)进行训练。 - `clone`:克隆`Parameter`,克隆完成后可以给新Parameter指定新的名字。 diff --git a/docs/programming_guide/source_zh_cn/performance_optimization.md b/docs/programming_guide/source_zh_cn/performance_optimization.md index 6cf6a8e188187a5881719367adf6ab0452452150..9ae02d93961f0000e50a9eab449488f6703234a8 100644 --- a/docs/programming_guide/source_zh_cn/performance_optimization.md +++ b/docs/programming_guide/source_zh_cn/performance_optimization.md @@ -6,14 +6,14 @@ - + MindSpore提供了多种性能优化方法,用户可根据实际情况,利用它们来提升训练和推理的性能。 | 优化阶段 | 优化方法 | 支持情况 | | --- | --- | --- | -| 训练 | [分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html) | Ascend、GPU | -| | [混合精度](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/enable_mixed_precision.html) | Ascend、GPU | -| | [图算融合](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/enable_graph_kernel_fusion.html) | Ascend | -| | [梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_gradient_accumulation.html) | Ascend、GPU | -| 推理 | [训练后量化](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/post_training_quantization.html) | Lite | +| 训练 | 
[分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/distributed_training_tutorials.html) | Ascend、GPU | +| | [混合精度](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/enable_mixed_precision.html) | Ascend、GPU | +| | [图算融合](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/enable_graph_kernel_fusion.html) | Ascend | +| | [梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_gradient_accumulation.html) | Ascend、GPU | +| 推理 | [训练后量化](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.1/use/post_training_quantization.html) | Lite | diff --git a/docs/programming_guide/source_zh_cn/pipeline.md b/docs/programming_guide/source_zh_cn/pipeline.md index 729a82e641ad19ce47407043ff545f63e9d744e2..71942fb69d94193dd532127a0bcaa6f4538c5d8c 100644 --- a/docs/programming_guide/source_zh_cn/pipeline.md +++ b/docs/programming_guide/source_zh_cn/pipeline.md @@ -14,9 +14,9 @@ - +    - +    @@ -26,7 +26,7 @@ MindSpore的各个数据集类都为用户提供了多种数据处理算子,用户可以构建数据处理pipeline定义需要使用的数据处理操作,数据即可在训练过程中像水一样源源不断地经过数据处理pipeline流向训练系统。 -MindSpore目前支持的部分常用数据处理算子如下表所示,更多数据处理操作参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.html)。 +MindSpore目前支持的部分常用数据处理算子如下表所示,更多数据处理操作参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.html)。 | 数据处理算子 | 算子说明 | | ---- | ---- | @@ -80,7 +80,7 @@ for data in dataset1.create_dict_iterator(): 将指定的函数或算子作用于数据集的指定列数据,实现数据映射操作。用户可以自定义映射函数,也可以直接使用c_transforms或py_transforms中的算子针对图像、文本数据进行数据增强。 ->更多数据增强的使用说明,参见编程指南中[数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/augmentation.html)章节。 +>更多数据增强的使用说明,参见编程指南中[数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/augmentation.html)章节。 ![map](./images/map.png) diff --git a/docs/programming_guide/source_zh_cn/probability.md b/docs/programming_guide/source_zh_cn/probability.md index ea6cd8e22217e580648faa1be465ab41cd1c9e20..7bedffe0fa5d0d9706c9c66022da70841cf2265f 
100644 --- a/docs/programming_guide/source_zh_cn/probability.md +++ b/docs/programming_guide/source_zh_cn/probability.md @@ -47,7 +47,7 @@ - + MindSpore深度概率编程的目标是将深度学习和贝叶斯学习结合,包括概率分布、概率分布映射、深度概率网络、概率推断算法、贝叶斯层、贝叶斯转换和贝叶斯工具箱,面向不同的开发者。对于专业的贝叶斯学习用户,提供概率采样、推理算法和模型构建库;另一方面,为不熟悉贝叶斯深度学习的用户提供了高级的API,从而不用更改深度学习编程逻辑,即可利用贝叶斯模型。 @@ -361,23 +361,28 @@ mean_b = Tensor(1.0, dtype=mstype.float32) sd_b = Tensor(2.0, dtype=mstype.float32) kl = my_normal.kl_loss('Normal', mean_b, sd_b) +# get the distribution args as a tuple +dist_arg = my_normal.get_dist_args() + print("mean: ", mean) print("var: ", var) print("entropy: ", entropy) print("prob: ", prob) print("cdf: ", cdf) print("kl: ", kl) +print("dist_arg: ", dist_arg) ``` 输出为: ```text -mean: 0.0 -var: 1.0 -entropy: 1.4189385 -prob: [0.35206532, 0.3989423, 0.35206532] -cdf: [0.3085482, 0.5, 0.6914518] -kl: 0.44314718 +mean:  0.0 +var:  1.0 +entropy:  1.4189385 +prob:  [0.35206532 0.3989423  0.35206532] +cdf:  [0.30853754 0.5        0.69146246] +kl:  0.44314718 +dist_arg: (Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1)) ``` ### 概率分布类在图模式下的应用 @@ -465,7 +470,7 @@ tx = Tensor(x, dtype=dtype.float32) cdf = LogNormal.cdf(tx) # generate samples from the distribution -shape = ((3, 2)) +shape = (3, 2) sample = LogNormal.sample(shape) # get information of the distribution @@ -475,26 +480,24 @@ print("underlying distribution:\n", LogNormal.distribution) print("bijector:\n", LogNormal.bijector) # get the computation results print("cdf:\n", cdf) -print("sample:\n", sample) +print("sample shape:\n", sample.shape) ``` 输出为: ```text TransformedDistribution< - (_bijector): Exp - (_distribution): Normal - > +  (_bijector): Exp +  (_distribution): Normal +  > underlying distribution: -Normal -bijector -Exp + Normal +bijector: + Exp cdf: -[7.55891383e-01, 9.46239710e-01, 9.89348888e-01] -sample: -[[7.64315844e-01, 3.01435232e-01], - [1.17166102e+00, 2.60277224e+00], - [7.02699006e-01, 3.91564220e-01]] + [0.7558914 
0.9462397 0.9893489] +sample shape: +(3, 2) ``` 当构造 `TransformedDistribution` 映射变换的 `is_constant_jacobian = true` 时(如 `ScalarAffine`),构造的 `TransformedDistribution` 实例可以使用直接使用 `mean` 接口计算均值,例如: @@ -546,15 +549,14 @@ x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32) tx = Tensor(x, dtype=dtype.float32) cdf, sample = net(tx) print("cdf: ", cdf) -print("sample: ", sample) +print("sample shape: ", sample.shape) ``` 输出为: ```text cdf: [0.7558914 0.86403143 0.9171715 0.9462397 ] -sample: [[0.5361498 0.26627186 2.766659 ] - [1.5831033 0.4096472 2.008679 ]] +sample shape: (2, 3) ``` ## 概率分布映射 @@ -695,11 +697,11 @@ print("inverse_log_jacobian: ", inverse_log_jaco) 输出: ```text -PowerTransform -forward: [2.23606801e+00, 2.64575124e+00, 3.00000000e+00, 3.31662488e+00] -inverse: [1.50000000e+00, 4.00000048e+00, 7.50000000e+00, 1.20000010e+01] -forward_log_jacobian: [-8.04718971e-01, -9.72955048e-01, -1.09861231e+00, -1.19894767e+00] -inverse_log_jacobian: [6.93147182e-01 1.09861231e+00 1.38629436e+00 1.60943794e+00] +PowerTransform +forward:  [2.236068  2.6457515 3.        3.3166249] +inverse:  [ 1.5       4.        7.5      12.000001] +forward_log_jacobian:  [-0.804719  -0.9729551 -1.0986123 -1.1989477] +inverse_log_jacobian:  [0.6931472 1.0986123 1.3862944 1.609438 ] ``` ### 图模式下调用Bijector实例 @@ -741,10 +743,10 @@ print("inverse_log_jacobian: ", inverse_log_jaco) 输出为: ```text -forward: [2.236068 2.6457515 3. 3.3166249] -inverse: [ 1.5 4. 7.5 12.000001] -forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477] -inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ] +forward:  [2.236068  2.6457515 3.        3.3166249] +inverse:  [ 1.5       4.        
7.5      12.000001] +forward_log_jacobian:  [-0.804719  -0.9729551 -1.0986123 -1.1989477] +inverse_log_jacobian:  [0.6931472 1.0986123 1.3862944 1.609438 ] ``` ## 深度概率网络 @@ -850,7 +852,7 @@ decoder = Decoder() cvae = ConditionalVAE(encoder, decoder, hidden_size=400, latent_size=20, num_classes=10) ``` -加载数据集,我们可以使用Mnist数据集,具体的数据加载和预处理过程可以参考这里[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html),这里会用到create_dataset函数创建数据迭代器。 +加载数据集,我们可以使用Mnist数据集,具体的数据加载和预处理过程可以参考这里[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html),这里会用到create_dataset函数创建数据迭代器。 ```python ds_train = create_dataset(image_path, 128, 1) @@ -914,7 +916,7 @@ The shape of the generated sample is (64, 1, 32, 32) 下面的范例使用MindSpore的`nn.probability.bnn_layers`中的API实现BNN图片分类模型。MindSpore的`nn.probability.bnn_layers`中的API包括`NormalPrior`,`NormalPosterior`,`ConvReparam`,`DenseReparam`,`DenseLocalReparam`和`WithBNNLossCell`。BNN与DNN的最大区别在于,BNN层的weight和bias不再是确定的值,而是服从一个分布。其中,`NormalPrior`,`NormalPosterior`分别用来生成服从正态分布的先验分布和后验分布;`ConvReparam`和`DenseReparam`分别是使用reparameterization方法实现的贝叶斯卷积层和全连接层;`DenseLocalReparam`是使用Local Reparameterization方法实现的贝叶斯全连接层;`WithBNNLossCell`是用来封装BNN和损失函数的。 -如何使用`nn.probability.bnn_layers`中的API构建贝叶斯神经网络并实现图片分类,可以参考教程[使用贝叶斯网络](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_deep_probability_programming.html#id3)。 +如何使用`nn.probability.bnn_layers`中的API构建贝叶斯神经网络并实现图片分类,可以参考教程[使用贝叶斯网络](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_deep_probability_programming.html#id3)。 ## 贝叶斯转换 @@ -970,7 +972,7 @@ API`TransformToBNN`主要实现了两个功能: ``` - 
参数`get_dense_args`指定从DNN模型的全连接层中获取哪些参数,默认值是DNN模型的全连接层和BNN的全连接层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Dense.html);`get_conv_args`指定从DNN模型的卷积层中获取哪些参数,默认值是DNN模型的卷积层和BNN的卷积层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Conv2d.html);参数`add_dense_args`和`add_conv_args`分别指定了要为BNN层指定哪些新的参数值。需要注意的是,`add_dense_args`中的参数不能与`get_dense_args`重复,`add_conv_args`和`get_conv_args`也是如此。 + 参数`get_dense_args`指定从DNN模型的全连接层中获取哪些参数,默认值是DNN模型的全连接层和BNN的全连接层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Dense.html);`get_conv_args`指定从DNN模型的卷积层中获取哪些参数,默认值是DNN模型的卷积层和BNN的卷积层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Conv2d.html);参数`add_dense_args`和`add_conv_args`分别指定了要为BNN层指定哪些新的参数值。需要注意的是,`add_dense_args`中的参数不能与`get_dense_args`重复,`add_conv_args`和`get_conv_args`也是如此。 - 功能二:转换指定类型的层 @@ -996,7 +998,7 @@ API`TransformToBNN`主要实现了两个功能: 参数`dnn_layer`指定将哪个类型的DNN层转换成BNN层,`bnn_layer`指定DNN层将转换成哪个类型的BNN层,`get_args`和`add_args`分别指定从DNN层中获取哪些参数和要为BNN层的哪些参数重新赋值。 -如何在MindSpore中使用API`TransformToBNN`可以参考教程[DNN一键转换成BNN](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_deep_probability_programming.html#dnnbnn) +如何在MindSpore中使用API`TransformToBNN`可以参考教程[DNN一键转换成BNN](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_deep_probability_programming.html#dnnbnn) ## 贝叶斯工具箱 diff --git a/docs/programming_guide/source_zh_cn/run.md b/docs/programming_guide/source_zh_cn/run.md index e1989388d076787c4fb80105b1c247249ec3d790..d7938bebd1d03ad4f8f8a8c7e445baaab0ccbb6e 100644 --- a/docs/programming_guide/source_zh_cn/run.md +++ b/docs/programming_guide/source_zh_cn/run.md @@ -12,9 +12,9 @@ - +    - +    @@ -105,7 +105,7 @@ print(output.asnumpy()) ## 执行网络模型 
-MindSpore的[Model接口](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.html#mindspore.Model)是用于训练和验证的高级接口。可以将有训练或推理功能的layers组合成一个对象,通过调用train、eval、predict接口可以分别实现训练、推理和预测功能。 +MindSpore的[Model接口](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.html#mindspore.Model)是用于训练和验证的高级接口。可以将有训练或推理功能的layers组合成一个对象,通过调用train、eval、predict接口可以分别实现训练、推理和预测功能。 用户可以根据实际需要传入网络、损失函数和优化器等初始化Model接口,还可以通过配置amp_level实现混合精度,配置metrics实现模型评估。 @@ -243,7 +243,7 @@ if __name__ == "__main__": model.train(1, ds_train, callbacks=[LossMonitor()], dataset_sink_mode=True) ``` -> 示例中用到的MNIST数据集的获取方法,可以参照[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)的下载数据集部分,下同。 +> 示例中用到的MNIST数据集的获取方法,可以参照[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)的下载数据集部分,下同。 输出如下: @@ -257,7 +257,7 @@ epoch: 1 step: 1874, loss is 0.0346688 epoch: 1 step: 1875, loss is 0.017264696 ``` -> 使用PyNative模式调试, 请参考[使用PyNative模式调试](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/debug_in_pynative_mode.html), 包括单算子、普通函数和网络训练模型的执行。 +> 使用PyNative模式调试, 请参考[使用PyNative模式调试](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/debug_in_pynative_mode.html), 包括单算子、普通函数和网络训练模型的执行。 ### 执行推理模型 @@ -391,7 +391,7 @@ if __name__ == "__main__": - `checkpoint_lenet-1_1875.ckpt`:保存的CheckPoint模型文件名称。 - `load_param_into_net`:通过该接口把参数加载到网络中。 -> `checkpoint_lenet-1_1875.ckpt`文件的保存方法,可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)的训练网络部分。 +> `checkpoint_lenet-1_1875.ckpt`文件的保存方法,可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)的训练网络部分。 输出如下: diff --git a/docs/programming_guide/source_zh_cn/sampler.md b/docs/programming_guide/source_zh_cn/sampler.md index 00edcb9d90d4c52888766689ff8234cd1a039cb7..9015d511ff259e892160a2c0f6abcc20a55ad71d 100644 --- 
a/docs/programming_guide/source_zh_cn/sampler.md +++ b/docs/programming_guide/source_zh_cn/sampler.md @@ -14,9 +14,9 @@ - +    - +    @@ -24,7 +24,7 @@ MindSpore提供了多种用途的采样器(Sampler),帮助用户对数据集进行不同形式的采样,以满足训练需求,能够解决诸如数据集过大或样本类别分布不均等问题。只需在加载数据集时传入采样器对象,即可实现数据的采样。 -MindSpore目前提供的部分采样器类别如下表所示。此外,用户也可以根据需要实现自定义的采样器类。更多采样器的使用方法参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.html)。 +MindSpore目前提供的部分采样器类别如下表所示。此外,用户也可以根据需要实现自定义的采样器类。更多采样器的使用方法参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.html)。 | 采样器名称 | 采样器说明 | | ---- | ---- | diff --git a/docs/programming_guide/source_zh_cn/security_and_privacy.md b/docs/programming_guide/source_zh_cn/security_and_privacy.md index ec57b333cb1f9e62aa44047e010286f635b4af81..66a9666d6d250b57463332635caa54a9cb29c86b 100644 --- a/docs/programming_guide/source_zh_cn/security_and_privacy.md +++ b/docs/programming_guide/source_zh_cn/security_and_privacy.md @@ -17,7 +17,7 @@ - + ## 概述 @@ -37,7 +37,7 @@ `Detector`基类定义了对抗样本检测的使用接口,其子类实现了各种具体的检测算法,增强模型的对抗鲁棒性。 -详细内容,请参考[对抗鲁棒性官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/improve_model_security_nad.html)。 +详细内容,请参考[对抗鲁棒性官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/improve_model_security_nad.html)。 ## 模型安全测试 @@ -45,7 +45,7 @@ `Fuzzer`类基于神经元覆盖率增益控制fuzzing流程,采用自然扰动和对抗样本生成方法作为变异策略,激活更多的神经元,从而探索不同类型的模型输出结果、错误行为,指导用户增强模型鲁棒性。 -详细内容,请参考[模型安全测试官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/test_model_security_fuzzing.html)。 +详细内容,请参考[模型安全测试官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/test_model_security_fuzzing.html)。 ## 差分隐私训练 @@ -53,7 +53,7 @@ `DPModel`继承了`mindspore.Model`,提供了差分隐私训练的入口函数。 -详细内容,请参考[差分隐私官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/protect_user_privacy_with_differential_privacy.html)。 
+详细内容,请参考[差分隐私官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/protect_user_privacy_with_differential_privacy.html)。 ## 隐私泄露风险评估 @@ -61,4 +61,4 @@ `MembershipInference`类提供了一种模型逆向分析方法,能够基于模型对样本的预测信息,推测某个样本是否在模型的训练集中,以此评估模型的隐私泄露风险。 -详细内容,请参考[隐私泄露风险评估官方教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/test_model_security_membership_inference.html)。 +详细内容,请参考[隐私泄露风险评估官方教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/test_model_security_membership_inference.html)。 diff --git a/docs/programming_guide/source_zh_cn/syntax_list.rst b/docs/programming_guide/source_zh_cn/syntax_list.rst index ee6c9218ca1be9856d50de5d3d40b71e0f8f57df..c31e6ede9c5328f23b1f2e7d08a0cda7d68c513a 100644 --- a/docs/programming_guide/source_zh_cn/syntax_list.rst +++ b/docs/programming_guide/source_zh_cn/syntax_list.rst @@ -4,4 +4,4 @@ .. toctree:: :maxdepth: 1 - 静态图语法支持 \ No newline at end of file + 静态图语法支持 \ No newline at end of file diff --git a/docs/programming_guide/source_zh_cn/tensor.md b/docs/programming_guide/source_zh_cn/tensor.md index fd4467012aad1f81c963763786af388bde83e47e..c65c1b1e660ae529663554cc2fba86a9ee84a5b5 100644 --- a/docs/programming_guide/source_zh_cn/tensor.md +++ b/docs/programming_guide/source_zh_cn/tensor.md @@ -11,15 +11,15 @@ - +    - +    ## 概述 -张量(Tensor)是MindSpore网络运算中的基本数据结构。张量中的数据类型可参考[dtype](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dtype.html)。 +张量(Tensor)是MindSpore网络运算中的基本数据结构。张量中的数据类型可参考[dtype](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dtype.html)。 不同维度的张量分别表示不同的数据,0维张量表示标量,1维张量表示向量,2维张量表示矩阵,3维张量可以表示彩色图像的RGB三通道等等。 diff --git a/docs/programming_guide/source_zh_cn/tokenizer.md b/docs/programming_guide/source_zh_cn/tokenizer.md index 0dcaca69a1049364974db4362519c3614778dc2e..12bb566fde083d50fae113c533b2a37a606c39f0 100644 --- a/docs/programming_guide/source_zh_cn/tokenizer.md +++ b/docs/programming_guide/source_zh_cn/tokenizer.md @@ -14,9 +14,9 @@ - +    - +    
@@ -40,7 +40,7 @@ MindSpore目前提供的分词器如下表所示。此外,用户也可以根 | WhitespaceTokenizer | 根据空格符对标量文本数据进行分词。 | | WordpieceTokenizer | 根据单词集对标量文本数据进行分词。 | -更多分词器的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.text.html)。 +更多分词器的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.text.html)。 ## MindSpore分词器 @@ -161,7 +161,7 @@ print("------------------------before tokenization----------------------------") for data in dataset.create_dict_iterator(output_numpy=True): print(text.to_str(data['text'])) -# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/test_sentencepiece/botchan.txt +# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/r1.1/tests/ut/data/dataset/test_sentencepiece/botchan.txt vocab_file = "botchan.txt" vocab = text.SentencePieceVocab.from_file([vocab_file], 5000, 0.9995, SentencePieceModel.UNIGRAM, {}) tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING) diff --git a/docs/programming_guide/source_zh_cn/train.md b/docs/programming_guide/source_zh_cn/train.md index 22683a4206459c78b69f79a3c79916e79eb0f570..f0d06e7d4dfd8c3bea10246c96cd77d531bfc68a 100644 --- a/docs/programming_guide/source_zh_cn/train.md +++ b/docs/programming_guide/source_zh_cn/train.md @@ -13,9 +13,9 @@ - +    - +    @@ -27,13 +27,13 @@ MindSpore在Model_zoo也已经提供了大量的目标检测、自然语言处 在自定义训练网络前,需要先了解下MindSpore的网络支持、Python源码构造网络约束和算子支持情况。 -- 网络支持:当前MindSpore已经支持多种网络,按类型分为计算机视觉、自然语言处理、推荐和图神经网络,可以通过[网络支持](https://www.mindspore.cn/doc/note/zh-CN/master/network_list.html)查看具体支持的网络情况。如果现有网络无法满足用户需求,用户可以根据实际需要定义自己的网络。 +- 网络支持:当前MindSpore已经支持多种网络,按类型分为计算机视觉、自然语言处理、推荐和图神经网络,可以通过[网络支持](https://www.mindspore.cn/doc/note/zh-CN/r1.1/network_list.html)查看具体支持的网络情况。如果现有网络无法满足用户需求,用户可以根据实际需要定义自己的网络。 -- 
Python源码构造网络约束:MindSpore暂不支持将任意Python源码转换成计算图,所以对于用户源码支持的写法有所限制,主要包括语法约束和网络定义约束两方面。详细情况可以查看[静态图语法支持](https://www.mindspore.cn/doc/note/zh-CN/master/static_graph_syntax_support.html)了解。随着MindSpore的演进,这些约束可能会发生变化。 +- Python源码构造网络约束:MindSpore暂不支持将任意Python源码转换成计算图,所以对于用户源码支持的写法有所限制,主要包括语法约束和网络定义约束两方面。详细情况可以查看[静态图语法支持](https://www.mindspore.cn/doc/note/zh-CN/r1.1/static_graph_syntax_support.html)了解。随着MindSpore的演进,这些约束可能会发生变化。 -- 算子支持:顾名思义,网络的基础是算子,所以用户自定义训练网络前要对MindSpore当前支持的算子有所了解,可以通过查看[算子支持](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list.html)了解不同的后端(Ascend、GPU和CPU)的算子实现情况。 +- 算子支持:顾名思义,网络的基础是算子,所以用户自定义训练网络前要对MindSpore当前支持的算子有所了解,可以通过查看[算子支持](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list.html)了解不同的后端(Ascend、GPU和CPU)的算子实现情况。 -> 当开发网络遇到内置算子不足以满足需求时,用户也可以参考[自定义算子](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_operator_ascend.html),方便快捷地扩展昇腾AI处理器的自定义算子。 +> 当开发网络遇到内置算子不足以满足需求时,用户也可以参考[自定义算子](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_operator_ascend.html),方便快捷地扩展昇腾AI处理器的自定义算子。 代码样例如下: @@ -248,7 +248,7 @@ if __name__ == "__main__": print("epoch: {0}/{1}, losses: {2}".format(step + 1, epoch, output.asnumpy(), flush=True)) ``` -> 示例中用到的MNIST数据集的获取方法,可以参照[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)的下载数据集部分,下同。 +> 示例中用到的MNIST数据集的获取方法,可以参照[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)的下载数据集部分,下同。 输出如下: @@ -265,11 +265,11 @@ epoch: 9/10, losses: 2.305952548980713 epoch: 10/10, losses: 1.4282708168029785 ``` -> 典型的使用场景是梯度累积,详细查看[梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_gradient_accumulation.html)。 +> 典型的使用场景是梯度累积,详细查看[梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_gradient_accumulation.html)。 ## 边训练边推理 
-对于某些数据量较大、训练时间较长的复杂网络,为了能掌握训练的不同阶段模型精度的指标变化情况,可以通过边训练边推理的方式跟踪精度的变化情况。具体可以参考[同步训练和验证模型](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/evaluate_the_model_during_training.html)。 +对于某些数据量较大、训练时间较长的复杂网络,为了能掌握训练的不同阶段模型精度的指标变化情况,可以通过边训练边推理的方式跟踪精度的变化情况。具体可以参考[同步训练和验证模型](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/evaluate_the_model_during_training.html)。 ## on-device执行 diff --git a/install/mindspore_ascend310_install_pip.md b/install/mindspore_ascend310_install_pip.md index 31b43e5a4adba665e47ceb3472b3abad3126fc12..7afe564f85438468b46c95d4f8e780016823cb2e 100644 --- a/install/mindspore_ascend310_install_pip.md +++ b/install/mindspore_ascend310_install_pip.md @@ -11,7 +11,7 @@ - + 本文档介绍如何在Ascend 310环境的Linux系统上,使用pip方式快速安装MindSpore。 @@ -42,8 +42,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 - `{system}`表示系统版本,例如使用的欧拉系统ARM架构,`{system}`应写为`euleros_aarch64`,目前Ascend 310版本可支持以下系统`euleros_aarch64`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`。 @@ -118,4 +118,4 @@ make 当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 -具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_ascend310_install_pip_en.md b/install/mindspore_ascend310_install_pip_en.md index 
340b98c4c12345e8591ad23e8af0046c8a36a891..fb65d4a479faf5516f0e79c33900bc57bc75199d 100644 --- a/install/mindspore_ascend310_install_pip_en.md +++ b/install/mindspore_ascend310_install_pip_en.md @@ -1 +1,121 @@ -# Installing MindSpore in Ascend 310 by pip +# Installing MindSpore in Ascend 310 by pip + + + +- [Installing MindSpore in Ascend 310 by pip](#installing-mindspore-in-ascend-310-by-pip) + - [Checking System Environment Information](#checking-system-environment-information) + - [Installing MindSpore](#installing-mindspore) + - [Configuring Environment Variables](#configuring-environment-variables) + - [Verifying the Installation](#verifying-the-installation) + - [Installing MindSpore Serving](#installing-mindspore-serving) + + + + + +The following describes how to quickly install MindSpore by pip on Linux in the Ascend 310 environment. + +## Checking System Environment Information + +- Ensure that the 64-bit Ubuntu 18.04, CentOS 7.6, or EulerOS 2.8 is installed. +- Ensure that [GCC 7.3.0](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz) is installed. +- Ensure that [GMP 6.1.2](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz) is installed. +- Ensure that [CMake 3.18.3 or later](https://cmake.org/download/) is installed. + - After installation, add the path of CMake to the system environment variables. +- Ensure that Python 3.7.5 is installed. + - If Python 3.7.5 (64-bit) is not installed, download it from the [Python official website](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) or [HUAWEI CLOUD](https://mirrors.huaweicloud.com/python/3.7.5/Python-3.7.5.tgz) and install it. 
+- Ensure that the Ascend 310 AI Processor software packages (Atlas Data Center Solution V100R020C10: [A300-3000 1.0.7.SPC103 (aarch64)](https://support.huawei.com/enterprise/en/ascend-computing/a300-3000-pid-250702915/software/251999079?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702915), [A300-3010 1.0.7.SPC103 (x86_64)](https://support.huawei.com/enterprise/en/ascend-computing/a300-3010-pid-251560253/software/251894987?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251560253), [CANN V100R020C10](https://support.huawei.com/enterprise/en/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373)) are installed. + - Ensure that you have permissions to access the installation path `/usr/local/Ascend` of the Ascend 310 AI Processor software package. If not, ask the user root to add you to a user group to which `/usr/local/Ascend` belongs. For details about the configuration, see the description document in the software package. + - Ensure that the Ascend 310 AI Processor software package that matches GCC 7.3 is installed. + - Install the .whl package provided with the Ascend 310 AI Processor software package. The .whl package is released with the software package. After the software package is upgraded, you need to reinstall the .whl package. 
+ + ```bash + pip install /usr/local/Ascend/atc/lib64/topi-{version}-py3-none-any.whl + pip install /usr/local/Ascend/atc/lib64/te-{version}-py3-none-any.whl + ``` + +## Installing MindSpore + +```bash +pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/ascend/{system}/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple +``` + +In the preceding information: + +- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the .whl package installation. For details about dependencies, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt). In other cases, install the dependencies by yourself. +- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0. +- `{arch}` specifies the system architecture. For example, if a Linux OS architecture is x86_64, set `{arch}` to `x86_64`. If the system architecture is ARM64, set `{arch}` to `aarch64`. +- `{system}` specifies the system version. For example, if EulerOS ARM64 is used, set `{system}` to `euleros_aarch64`. Currently, Ascend 310 supports the following systems: `euleros_aarch64`, `centos_aarch64`, `centos_x86`, `ubuntu_aarch64`, and `ubuntu_x86`. + +## Configuring Environment Variables + +After MindSpore is installed, export runtime environment variables. In the following command, `/usr/local/Ascend` in `LOCAL_ASCEND=/usr/local/Ascend` indicates the installation path of the software package. Change it to the actual installation path. + +```bash +# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING. 
+export GLOG_v=2 + +# Conda environmental options +LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package + +# lib libraries that the run package depends on +export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} + +# lib libraries that the mindspore depends on +export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH} + +# Environment variables that must be configured +export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path +export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path +export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path +export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on +``` + +## Verifying the Installation + +Create a directory to store the sample code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample`. You can obtain the code from the [official website](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/sample_resources/ascend310_single_op_sample.zip). A simple example of adding `[1, 2, 3, 4]` to `[2, 3, 4, 5]` is used and the code project directory structure is as follows: + +```text + +└─ascend310_single_op_sample + ├── CMakeLists.txt // Build script + ├── README.md // Usage description + ├── main.cc // Main function + └── tensor_add.mindir // MindIR model file +``` + +Go to the directory of the sample project and change the path based on the actual requirements. 
+ +```bash +cd /home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample +``` + +Build a project by referring to `README.md`. + +```bash +cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath` +make +``` + +After the build is successful, execute the case. + +```bash +./tensor_add_sample +``` + +The following information is displayed: + +```text +3 +5 +7 +9 +``` + +The preceding information indicates that MindSpore is successfully installed. + +## Installing MindSpore Serving + +If you want to quickly experience the MindSpore online inference service, you can install MindSpore Serving. + +For details, see [MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README.md). diff --git a/install/mindspore_ascend310_install_source.md b/install/mindspore_ascend310_install_source.md index 1eef13b06700ddd4e4d36f254fbb6d1adb82a450..6fd08a5c3cecb350b899e51fae4419a4bd52cfe4 100644 --- a/install/mindspore_ascend310_install_source.md +++ b/install/mindspore_ascend310_install_source.md @@ -13,7 +13,7 @@ - + 本文档介绍如何在Ascend 310环境的Linux系统上,使用源码编译方式快速安装MindSpore。 @@ -51,7 +51,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -75,8 +75,8 @@ pip install output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl -i htt 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 ## 配置环境变量 
@@ -150,4 +150,4 @@ make 当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 -具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_ascend310_install_source_en.md b/install/mindspore_ascend310_install_source_en.md index 4827c91e89727c6d3cc3d430ecf786c06fc0fb1a..f22401c99fadb01f35d8b3ff0f5020fc48717cc1 100644 --- a/install/mindspore_ascend310_install_source_en.md +++ b/install/mindspore_ascend310_install_source_en.md @@ -1 +1,153 @@ -# Installing MindSpore in Ascend 310 by Source Code +# Installing MindSpore in Ascend 310 by Source Code Compilation + + + +- [Installing MindSpore in Ascend 310 by Source Code Compilation](#installing-mindspore-in-ascend-310-by-source-code-compilation) + - [Checking System Environment Information](#checking-system-environment-information) + - [Downloading Source Code from the Code Repository](#downloading-source-code-from-the-code-repository) + - [Building MindSpore](#building-mindspore) + - [Installing MindSpore](#installing-mindspore) + - [Configuring Environment Variables](#configuring-environment-variables) + - [Verifying the Installation](#verifying-the-installation) + - [Installing MindSpore Serving](#installing-mindspore-serving) + + + + + +The following describes how to quickly install MindSpore by compiling the source code on Linux in the Ascend 310 environment. + +## Checking System Environment Information + +- Ensure that the 64-bit Ubuntu 18.04, CentOS 7.6, or EulerOS 2.8 is installed. +- Ensure that [GCC 7.3.0](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz) is installed. +- Ensure that [GMP 6.1.2](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz) is installed. +- Ensure that [Python 3.7.5](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) is installed. +- Ensure that [OpenSSL 1.1.1 or later](https://github.com/openssl/openssl.git) is installed. 
+ - After installation, set the environment variable `export OPENSSL_ROOT_DIR="OpenSSL installation directory"`. +- Ensure that [CMake 3.18.3 or later](https://cmake.org/download/) is installed. + - After installation, add the path of CMake to the system environment variables. +- Ensure that [patch 2.5 or later](http://ftp.gnu.org/gnu/patch/) is installed. + - After installation, add the patch path to the system environment variables. +- Ensure that [wheel 0.32.0 or later](https://pypi.org/project/wheel/) is installed. +- Ensure that the Ascend 310 AI Processor software packages (Atlas Data Center Solution V100R020C10: [A300-3000 1.0.7.SPC103 (aarch64)](https://support.huawei.com/enterprise/en/ascend-computing/a300-3000-pid-250702915/software/251999079?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702915), [A300-3010 1.0.7.SPC103 (x86_64)](https://support.huawei.com/enterprise/en/ascend-computing/a300-3010-pid-251560253/software/251894987?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251560253), [CANN V100R020C10](https://support.huawei.com/enterprise/en/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373)) are installed. + - Ensure that you have permissions to access the installation path `/usr/local/Ascend` of the Ascend 310 AI Processor software package. If not, ask the user root to add you to a user group to which `/usr/local/Ascend` belongs. For details about the configuration, see the description document in the software package. + - Ensure that the Ascend 310 AI Processor software package that matches GCC 7.3 is installed. + - Install the .whl package provided with the Ascend 310 AI Processor software package. The .whl package is released with the software package. After the software package is upgraded, you need to reinstall the .whl package. 
+ + ```bash + pip install /usr/local/Ascend/atc/lib64/topi-{version}-py3-none-any.whl + pip install /usr/local/Ascend/atc/lib64/te-{version}-py3-none-any.whl + ``` + +- Ensure that the git tool is installed. + If not, run the following command to download and install it: + + ```bash + apt-get install git # ubuntu and so on + yum install git # centos and so on + ``` + +## Downloading Source Code from the Code Repository + +```bash +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 +``` + +## Building MindSpore + +Run the following command in the root directory of the source code. + +```bash +bash build.sh -e ascend -V 310 +``` + +In the preceding information: + +The default number of build threads is 8 in `build.sh`. If the build machine performs poorly, build errors may occur. You can add `-j{Number of threads}` to the script to reduce the number of threads. For example, `bash build.sh -e ascend -V 310 -j4`. + +## Installing MindSpore + +```bash +chmod +x output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl +pip install output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl -i https://pypi.tuna.tsinghua.edu.cn/simple +``` + +In the preceding information: + +- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the .whl package installation. For details about dependencies, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt). In other cases, install the dependencies by yourself. +- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0. +- `{arch}` specifies the system architecture. For example, if a Linux OS architecture is x86_64, set `{arch}` to `x86_64`. If the system architecture is ARM64, set `{arch}` to `aarch64`. + +## Configuring Environment Variables + +After MindSpore is installed, export runtime environment variables. 
In the following command, `/usr/local/Ascend` in `LOCAL_ASCEND=/usr/local/Ascend` indicates the installation path of the software package. Change it to the actual installation path. + +```bash +# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING. +export GLOG_v=2 + +# Conda environmental options +LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package + +# lib libraries that the run package depends on +export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} + +# lib libraries that the mindspore depends on +export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH} + +# Environment variables that must be configured +export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path +export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path +export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path +export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on +``` + +## Verifying the Installation + +Create a directory to store the sample code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample`. You can obtain the code from the [official website](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/sample_resources/ascend310_single_op_sample.zip). 
A simple example of adding `[1, 2, 3, 4]` to `[2, 3, 4, 5]` is used and the code project directory structure is as follows: + +```text + +└─ascend310_single_op_sample + ├── CMakeLists.txt // Build script + ├── README.md // Usage description + ├── main.cc // Main function + └── tensor_add.mindir // MindIR model file +``` + +Go to the directory of the sample project and change the path based on the actual requirements. + +```bash +cd /home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample +``` + +Build a project by referring to `README.md`. + +```bash +cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath` +make +``` + +After the build is successful, execute the case. + +```bash +./tensor_add_sample +``` + +The following information is displayed: + +```text +3 +5 +7 +9 +``` + +The preceding information indicates that MindSpore is successfully installed. + +## Installing MindSpore Serving + +If you want to quickly experience the MindSpore online inference service, you can install MindSpore Serving. + +For details, see [MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README.md). 
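Every hunk in this patch makes the same two substitutions: documentation links move from the `master` branch path to `r1.1`, and `git clone` commands gain `-b r1.1`. A minimal sketch of that rewrite on a hypothetical local file (the `/tmp/docs_demo` path and the sample content are assumptions for illustration; the actual patch edits each file individually):

```shell
# Create a throwaway doc containing the two patterns this patch rewrites.
mkdir -p /tmp/docs_demo
cat > /tmp/docs_demo/sample.md <<'EOF'
See the [Model API](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.html#mindspore.Model).
git clone https://gitee.com/mindspore/docs.git
EOF

# Pin doc links to the release branch and add the branch flag to git clone,
# mirroring the substitutions made throughout the hunks above.
sed -i -e 's|zh-CN/master|zh-CN/r1.1|g' \
       -e 's|docs\.git$|docs.git -b r1.1|' /tmp/docs_demo/sample.md

# Count the lines that now reference the release branch.
grep -c 'r1\.1' /tmp/docs_demo/sample.md   # → 2
```

After a version bump like this one, a repo-wide `grep -rn 'blob/master\|zh-CN/master' --include='*.md'` is a quick way to catch any links the patch missed.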
diff --git a/install/mindspore_ascend_install_conda.md b/install/mindspore_ascend_install_conda.md index 9a51ce0891cb3ffc7395657bb12d96f819549107..d78fcbd206187b7eec493d3c9263e3e74c4b3322 100644 --- a/install/mindspore_ascend_install_conda.md +++ b/install/mindspore_ascend_install_conda.md @@ -18,7 +18,7 @@ - + 本文档介绍如何在Ascend 910环境的Linux系统上,使用Conda方式快速安装MindSpore。 @@ -70,8 +70,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 - `{system}`表示系统,例如使用的欧拉系统ARM架构,`{system}`应写为`euleros_aarch64`,目前可支持以下系统`euleros_aarch64`/`euleros_x86`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`。 @@ -140,22 +140,22 @@ pip install --upgrade mindspore-ascend 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 ## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 ## 安装MindSpore Serving 当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 -具体安装步骤参见[MindSpore 
Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_ascend_install_pip.md b/install/mindspore_ascend_install_pip.md index bab51e4049fcfcfd926b85970e3ed9e01d102752..15f71bd76e2e3c19ba70db53923284b3501b12eb 100644 --- a/install/mindspore_ascend_install_pip.md +++ b/install/mindspore_ascend_install_pip.md @@ -15,7 +15,7 @@ - + 本文档介绍如何在Ascend 910环境的Linux系统上,使用pip方式快速安装MindSpore。 @@ -44,8 +44,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 - `{system}`表示系统版本,例如使用的欧拉系统ARM架构,`{system}`应写为`euleros_aarch64`,目前Ascend版本可支持以下系统`euleros_aarch64`/`euleros_x86`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`。 @@ -114,22 +114,22 @@ pip install --upgrade mindspore-ascend 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 ## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 
+具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 ## 安装MindSpore Serving 当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 -具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_ascend_install_pip_en.md b/install/mindspore_ascend_install_pip_en.md index 60d48e276d24dab98436b75084a740bbdec1e900..bb7c054d80e55b6705199e061d0c73c26e74f723 100644 --- a/install/mindspore_ascend_install_pip_en.md +++ b/install/mindspore_ascend_install_pip_en.md @@ -15,7 +15,7 @@ - + This document describes how to quickly install MindSpore in a Linux system with an Ascend 910 environment by pip. @@ -44,8 +44,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. 
- `{system}` denotes the system version. For example, if you are using EulerOS ARM architecture, `{system}` should be `euleros_aarch64`. Currently, the following systems are supported by Ascend: `euleros_aarch64`/`euleros_x86`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`. @@ -117,22 +117,22 @@ pip install --upgrade mindspore-ascend If you need to analyze information such as model scalars, graphs, computation graphs and model traceback, you can install MindInsight. -For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README.md). +For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README.md). ## Installing MindArmour If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). ## Installing MindSpore Serving If you need to access and experience MindSpore online inference services quickly, you can install MindSpore Serving. -For more details, please refer to [MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README.md). +For more details, please refer to [MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README.md). 
diff --git a/install/mindspore_ascend_install_source.md b/install/mindspore_ascend_install_source.md index 721e2ea9e71a91ecda3d962a6c7447b9490cede0..632885df3655306ede0ee75d88648ecf2a729cf3 100644 --- a/install/mindspore_ascend_install_source.md +++ b/install/mindspore_ascend_install_source.md @@ -17,7 +17,7 @@ - + 本文档介绍如何在Ascend 910环境的Linux系统上,使用源码编译方式快速安装MindSpore。 @@ -74,7 +74,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -97,8 +97,8 @@ pip install build/package/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 ## 配置环境变量 @@ -176,22 +176,22 @@ print(ops.tensor_add(x, y)) 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 ## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 ## 安装MindSpore Serving 当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 
-具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_ascend_install_source_en.md b/install/mindspore_ascend_install_source_en.md index 7135e8f030b7cb562c0c42be061a1746cef504e8..cc4b13f3b040f8ecad1784908da72b4fce3f9bd4 100644 --- a/install/mindspore_ascend_install_source_en.md +++ b/install/mindspore_ascend_install_source_en.md @@ -17,7 +17,7 @@ - + This document describes how to quickly install MindSpore in a Linux system with an Ascend 910 environment by source code. @@ -75,7 +75,7 @@ This document describes how to quickly install MindSpore in a Linux system with ## Downloading Source Code from Code Repository ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## Compiling MindSpore @@ -99,8 +99,8 @@ pip install build/package/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. - `{arch}` denotes the system architecture. 
For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. ## Configuring Environment Variables @@ -180,22 +180,22 @@ Using the following command if you need to update the MindSpore version. If you need to analyze information such as model scalars, graphs, computation graphs and model traceback, you can install MindInsight. -For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README.md). +For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README.md). ## Installing MindArmour If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). ## Installing MindSpore Serving If you need to access and experience MindSpore online inference services quickly, you can install MindSpore Serving. -For more details, please refer to [MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README.md). +For more details, please refer to [MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README.md). 
diff --git a/install/mindspore_cpu_install_conda.md b/install/mindspore_cpu_install_conda.md index 27d191004f359d55ac46008f27ba5c2d07ca7e9f..6b666b82591b94b2bb0d35a4c9d51b5f501fe412 100644 --- a/install/mindspore_cpu_install_conda.md +++ b/install/mindspore_cpu_install_conda.md @@ -15,7 +15,7 @@ - + 本文档介绍如何在CPU环境的Linux系统上,使用Conda方式快速安装MindSpore。 @@ -57,8 +57,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 - `{system}`表示系统,例如使用的Ubuntu系统X86架构,`{system}`应写为`ubuntu_x86`,目前CPU版本可支持以下系统`ubuntu_aarch64`/`ubuntu_x86`。 @@ -82,10 +82,10 @@ pip install --upgrade mindspore 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_cpu_install_pip.md b/install/mindspore_cpu_install_pip.md index 80d2d21c99702197a87a86c105350639924786e2..9156494e83ffe07178d5b8aa63df0e78c1cbf2f6 100644 --- a/install/mindspore_cpu_install_pip.md +++ b/install/mindspore_cpu_install_pip.md @@ -12,7 +12,7 @@ - + 本文档介绍如何在CPU环境的Linux系统上,使用pip方式快速安装MindSpore。 @@ -31,8 +31,8 @@ pip install 
https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 - `{system}`表示系统,例如使用的Ubuntu系统X86架构,`{system}`应写为`ubuntu_x86`,目前CPU版本可支持以下系统`ubuntu_aarch64`/`ubuntu_x86`。 @@ -56,10 +56,10 @@ pip install --upgrade mindspore 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_cpu_install_pip_en.md b/install/mindspore_cpu_install_pip_en.md index 1174728459384c640ee89d22b9577790d320136e..58f225a2454eaf78a7db6e9894120326049b6747 100644 --- a/install/mindspore_cpu_install_pip_en.md +++ b/install/mindspore_cpu_install_pip_en.md @@ -12,7 +12,7 @@ - + This document describes how to quickly install MindSpore by pip in a Linux system with a CPU environment. @@ -31,8 +31,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). 
In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. - `{system}` denotes the system version. For example, if you are using Ubuntu x86 architecture, `{system}` should be `ubuntu_x86`. Currently, the following systems are supported by CPU: `ubuntu_aarch64`/`ubuntu_x86`. @@ -56,10 +56,10 @@ pip install --upgrade mindspore If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). 
diff --git a/install/mindspore_cpu_install_source.md b/install/mindspore_cpu_install_source.md index 2e26d4cc416bd423bf36e7f367895816a2043062..a043d2a6437dbec4f0e9d802248996f9a319568b 100644 --- a/install/mindspore_cpu_install_source.md +++ b/install/mindspore_cpu_install_source.md @@ -14,7 +14,7 @@ - + 本文档介绍如何在CPU环境的Linux系统上,使用源码编译方式快速安装MindSpore。 @@ -47,7 +47,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -70,8 +70,8 @@ pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl -i htt 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARMv8架构64位,则写为`aarch64`。 ## 验证安装是否成功 @@ -104,10 +104,10 @@ python -c 'import mindspore;print(mindspore.__version__)' 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_cpu_install_source_en.md b/install/mindspore_cpu_install_source_en.md index 80957491277cf9a1d2ae5c217121e139c61d9d78..9e11557c06c5148b09961bc5fe67eb779faa4734 100644 --- a/install/mindspore_cpu_install_source_en.md +++ b/install/mindspore_cpu_install_source_en.md @@ -14,7 +14,7 @@ 
- + This document describes how to quickly install MindSpore by source code in a Linux system with a CPU environment. @@ -47,7 +47,7 @@ This document describes how to quickly install MindSpore by source code in a Lin ## Downloading Source Code from Code Repository ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## Compiling MindSpore @@ -71,8 +71,8 @@ pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl -i htt Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. ## Installation Verification @@ -105,10 +105,10 @@ Using the following command if you need to update the MindSpore version: If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). 
+For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). diff --git a/install/mindspore_cpu_macos_install_conda.md b/install/mindspore_cpu_macos_install_conda.md index 1cee10f453ccb46d4545b61faeb7f160d3701175..8aa916954124a37074bda5f96fb222aef09fd381 100644 --- a/install/mindspore_cpu_macos_install_conda.md +++ b/install/mindspore_cpu_macos_install_conda.md @@ -13,7 +13,7 @@ - + 本文档介绍如何在CPU环境的macOS系统上,使用Conda方式快速安装MindSpore。 @@ -52,8 +52,8 @@ conda activate mindspore 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否安装成功 diff --git a/install/mindspore_cpu_macos_install_pip.md b/install/mindspore_cpu_macos_install_pip.md index 7292ebd303976d06a6d077df3df50bef43290aaf..4b9148e8e0c0c3806f810e2419da3da742dc4153 100644 --- a/install/mindspore_cpu_macos_install_pip.md +++ b/install/mindspore_cpu_macos_install_pip.md @@ -10,7 +10,7 @@ - + 本文档介绍如何在CPU环境的macOS系统上,使用pip方式快速安装MindSpore。 @@ -28,8 +28,8 @@ 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 
在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否安装成功 diff --git a/install/mindspore_cpu_macos_install_pip_en.md b/install/mindspore_cpu_macos_install_pip_en.md index a611e95c980ce0ce2e4655a47ff0a8a8519f4320..9f78ec3a61f6c32d92f90bf9830e424d2346e332 100644 --- a/install/mindspore_cpu_macos_install_pip_en.md +++ b/install/mindspore_cpu_macos_install_pip_en.md @@ -10,7 +10,7 @@ - + This document describes how to quickly install MindSpore by pip in a macOS system with a CPU environment. @@ -27,8 +27,8 @@ This document describes how to quickly install MindSpore by pip in a macOS syste Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. 
## Installation Verification diff --git a/install/mindspore_cpu_macos_install_source.md b/install/mindspore_cpu_macos_install_source.md index ecd185cd7947054444803728dbe10fe30659e315..85d0edb6213e7e92d032293c1258de2fc2559ae4 100644 --- a/install/mindspore_cpu_macos_install_source.md +++ b/install/mindspore_cpu_macos_install_source.md @@ -12,7 +12,7 @@ - + 本文档介绍如何在CPU环境的macOS系统上,使用源码编译方法快速安装MindSpore。 @@ -37,7 +37,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -56,8 +56,8 @@ pip install build/package/mindspore-{version}-py37-none-any.whl -i https://pypi. 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否安装成功 diff --git a/install/mindspore_cpu_macos_install_source_en.md b/install/mindspore_cpu_macos_install_source_en.md index e680ba9cf9b97c3f85f56cd84baca505012ab19e..0bea73f05243185e1de6b2e6aa8db0f0de3ff9b3 100644 --- a/install/mindspore_cpu_macos_install_source_en.md +++ b/install/mindspore_cpu_macos_install_source_en.md @@ -12,7 +12,7 @@ - + This document describes how to quickly install MindSpore by source code in a macOS system with a CPU environment. @@ -37,7 +37,7 @@ This document describes how to quickly install MindSpore by source code in a mac ## Downloading Source Code from Code Repository ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## Compiling MindSpore @@ -56,8 +56,8 @@ pip install build/package/mindspore-{version}-py37-none-any.whl -i https://pypi. 
Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. ## Installation Verification diff --git a/install/mindspore_cpu_win_install_conda.md b/install/mindspore_cpu_win_install_conda.md index 3d913dee797aa56817f567b0aa56c9f1b4113dde..ee099b811d4117e47e8ef5dedb8bf32f2ca6a00d 100644 --- a/install/mindspore_cpu_win_install_conda.md +++ b/install/mindspore_cpu_win_install_conda.md @@ -14,7 +14,7 @@ - + 本文档介绍如何在CPU环境的Windows系统上,使用Conda方式快速安装MindSpore。 @@ -58,8 +58,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否安装成功 diff --git a/install/mindspore_cpu_win_install_pip.md b/install/mindspore_cpu_win_install_pip.md index 
eeadbd9231adaf3d1ad771eb7a2509f8a73142c9..44453d34bc5633ede536f08c4fa57ce4105826a3 100644 --- a/install/mindspore_cpu_win_install_pip.md +++ b/install/mindspore_cpu_win_install_pip.md @@ -10,7 +10,7 @@ - + 本文档介绍如何在CPU环境的Windows系统上,使用pip方式快速安装MindSpore。 @@ -31,8 +31,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否安装成功 diff --git a/install/mindspore_cpu_win_install_pip_en.md b/install/mindspore_cpu_win_install_pip_en.md index 24a0d86bb38546a8261b8cae6dc7c41226437f0a..7682105e8f96a8c0d4fbe1ea7bedcf8747eb5fb6 100644 --- a/install/mindspore_cpu_win_install_pip_en.md +++ b/install/mindspore_cpu_win_install_pip_en.md @@ -10,7 +10,7 @@ - + This document describes how to quickly install MindSpore by pip in a Windows system with a CPU environment. @@ -31,8 +31,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. 
(For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. ## Installation Verification diff --git a/install/mindspore_cpu_win_install_source.md b/install/mindspore_cpu_win_install_source.md index c6bfbfd9ec5799a8599961199f7db5d5d753ae9e..2d6e244ff5e708cbbffe3d6550fbc17e792e58ff 100644 --- a/install/mindspore_cpu_win_install_source.md +++ b/install/mindspore_cpu_win_install_source.md @@ -12,7 +12,7 @@ - + 本文档介绍如何在CPU环境的Windows系统上,使用源码编译方法快速安装MindSpore。 @@ -21,7 +21,7 @@ - 确认安装Windows 10是x86架构64位操作系统。 - 确认安装[Visual C++ Redistributable for Visual Studio 2015](https://www.microsoft.com/zh-CN/download/details.aspx?id=48145)。 - 确认安装了[git](https://github.com/git-for-windows/git/releases/download/v2.29.2.windows.2/Git-2.29.2.2-64-bit.exe)工具。 - - 如果git没有安装在`ProgramFiles`,在执行上述命令前,需设置环境变量指定`patch.exe`的位置,例如git安装在`D:\git`时,需设置`set MS_PATCH_PATH=D:\git\usr\bin`。 + - 如果git没有安装在`ProgramFiles`,需设置环境变量指定`patch.exe`的位置,例如git安装在`D:\git`时,需设置`set MS_PATCH_PATH=D:\git\usr\bin`。 - 确认安装[MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z)。 - 安装路径中不能出现中文和日文,安装完成后将安装路径下的`MinGW\bin`添加到系统环境变量。例如安装在`D:\gcc`,则需要将`D:\gcc\MinGW\bin`添加到系统环境变量Path中。 - 确认安装[CMake 3.18.3版本](https://github.com/Kitware/Cmake/releases/tag/v3.18.3)。 @@ -34,7 +34,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -53,8 +53,8 @@ pip install build/package/mindspore-{version}-cp37-cp37m-win_amd64.whl -i https: 其中: -- 
在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否安装成功 diff --git a/install/mindspore_cpu_win_install_source_en.md b/install/mindspore_cpu_win_install_source_en.md index 5bfdcbb9f2394f8b0b5cd6a9fe1427fb6f36da01..fbc2082ee4db18a203359be6fe474c7be677c03f 100644 --- a/install/mindspore_cpu_win_install_source_en.md +++ b/install/mindspore_cpu_win_install_source_en.md @@ -12,7 +12,7 @@ - + This document describes how to quickly install MindSpore by source code in a Windows system with a CPU environment. @@ -33,7 +33,7 @@ This document describes how to quickly install MindSpore by source code in a Win ## Downloading Source Code from Code Repository ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## Compiling MindSpore @@ -52,8 +52,8 @@ pip install build/package/mindspore-{version}-cp37-cp37m-win_amd64.whl -i https: Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). 
In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. ## Installation Verification diff --git a/install/mindspore_gpu_install_conda.md b/install/mindspore_gpu_install_conda.md index 0e0965dd76409b488bd0009d1dd71cd57f9f65e9..04b401991ecaa82611b822e0d200f31148084cf2 100644 --- a/install/mindspore_gpu_install_conda.md +++ b/install/mindspore_gpu_install_conda.md @@ -16,7 +16,7 @@ - + 本文档介绍如何在GPU环境的Linux系统上,使用Conda方式快速安装MindSpore。 @@ -63,8 +63,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 ## 验证是否成功安装 @@ -111,16 +111,16 @@ pip install --upgrade mindspore-gpu 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 ## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git 
a/install/mindspore_gpu_install_pip.md b/install/mindspore_gpu_install_pip.md index d9316e4879e5a739fc22bb82b50726b7d530c718..2eb933c6be28803687f339b6507318a1adf8f6f0 100644 --- a/install/mindspore_gpu_install_pip.md +++ b/install/mindspore_gpu_install_pip.md @@ -13,7 +13,7 @@ - + 本文档介绍如何在GPU环境的Linux系统上,使用pip方式快速安装MindSpore。 @@ -38,8 +38,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 ## 验证是否成功安装 @@ -86,16 +86,16 @@ pip install --upgrade mindspore-gpu 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 ## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_gpu_install_pip_en.md b/install/mindspore_gpu_install_pip_en.md index 687428753629f087e2a9ed1eedec366b3d6466ca..6070a60d1a9168c47b69f9e0b3087391a5850486 100644 --- a/install/mindspore_gpu_install_pip_en.md +++ b/install/mindspore_gpu_install_pip_en.md @@ 
-13,7 +13,7 @@ - + This document describes how to quickly install MindSpore by pip in a Linux system with a GPU environment. @@ -38,8 +38,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. ## Installation Verification @@ -86,16 +86,16 @@ pip install --upgrade mindspore-gpu If you need to analyze information such as model scalars, graphs, computation graphs and model traceback, you can install MindInsight. -For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README.md). +For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README.md). ## Installing MindArmour If you need to conduct AI model security research or enhance the security of the model in your applications, you can install MindArmour. 
-For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). diff --git a/install/mindspore_gpu_install_source.md b/install/mindspore_gpu_install_source.md index 1a8ac68847785e17e9c1c64fcc57072274e89e0a..4a746e40c33cc751de34a780a37747593c87c0c6 100644 --- a/install/mindspore_gpu_install_source.md +++ b/install/mindspore_gpu_install_source.md @@ -15,7 +15,7 @@ - + 本文档介绍如何在GPU环境的Linux系统上,使用源码编译方式快速安装MindSpore。 @@ -58,7 +58,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -82,8 +82,8 @@ pip install build/package/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl -i 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 ## 验证是否成功安装 @@ -140,16 +140,16 @@ print(ops.tensor_add(x, y)) 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 
## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_gpu_install_source_en.md b/install/mindspore_gpu_install_source_en.md index 35f41e5e3b7ef8619cd6bd1623abcfb716fa0e7a..3c45dbaf3aeaf782118d9e37fce6ecb6c10fe8a8 100644 --- a/install/mindspore_gpu_install_source_en.md +++ b/install/mindspore_gpu_install_source_en.md @@ -14,7 +14,7 @@ - + This document describes how to quickly install MindSpore by source code in a Linux system with a GPU environment. @@ -57,7 +57,7 @@ This document describes how to quickly install MindSpore by source code in a Lin ## Downloading Source Code from Code Repository ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## Compiling MindSpore @@ -81,8 +81,8 @@ pip install build/package/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl -i Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). 
In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. ## Installation Verification @@ -139,16 +139,16 @@ Use the following command if you need to update the MindSpore version. If you need to analyze information such as model scalars, graphs, computation graphs and model traceback, you can install MindInsight. -For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README.md). +For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README.md). ## Installing MindArmour If you need to conduct AI model security research or enhance the security of the model in your applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). diff --git a/lite/lite.md b/lite/lite.md index 7bd847cadc11002c875ca9e7231cb019dac01f79..8f2425991090b9ded86fd8787f194c6acbfb1b53 100644 --- a/lite/lite.md +++ b/lite/lite.md @@ -1,7 +1,7 @@

快速入门

- +
- +
训练一个LeNet模型 @@ -29,7 +29,7 @@

获取MindSpore Lite

- +
- +
编译MindSpore Lite @@ -57,7 +57,7 @@

端侧推理

- +
- +
- +
- +
- +
其他工具 @@ -121,7 +121,7 @@

端侧训练

- +
- +
执行训练 @@ -149,7 +149,7 @@

其它文档

- +
- +
- +
- +
- +
- +
- +
- +
- +
风格迁移模型 diff --git a/lite/lite_en.md b/lite/lite_en.md index e7eab0a1283e2e9f46dba1ec3a71ab8a0dd97c9f..5f477f9fb7658a7d4a4b8525871039a42d49a367 100644 --- a/lite/lite_en.md +++ b/lite/lite_en.md @@ -1,7 +1,7 @@

Quick Start

- +
- +
Training a LeNet Model @@ -29,7 +29,7 @@

Obtain MindSpore Lite

- +
- +
Building MindSpore Lite @@ -57,7 +57,7 @@

Inference on Devices

- +
- +
- +
- +
Other Tools @@ -109,7 +109,7 @@

Training on Devices

- +
- +
Executing Model Training @@ -137,7 +137,7 @@

Other Documents

- +
- +
- +
- +
- +
- +
- +
- +
- +
Style Transfer Model diff --git a/resource/release/release_list_en.md b/resource/release/release_list_en.md index aa3f9cca5d3d4a4caaa8baa5d73154cdad16a361..c6630800cad935cdef54a124602a603b5dede177 100644 --- a/resource/release/release_list_en.md +++ b/resource/release/release_list_en.md @@ -3,48 +3,68 @@ - [Release List](#release-list) - - [1.0.1](#101) + - [1.1.0](#110) - [Releasenotes and API Updates](#releasenotes-and-api-updates) - [Downloads](#downloads) - [Related Documents](#related-documents) - - [1.0.0](#100) + - [1.0.1](#101) - [Releasenotes and API Updates](#releasenotes-and-api-updates-1) - [Downloads](#downloads-1) - [Related Documents](#related-documents-1) - - [0.7.0-beta](#070-beta) + - [1.0.0](#100) - [Releasenotes and API Updates](#releasenotes-and-api-updates-2) - [Downloads](#downloads-2) - [Related Documents](#related-documents-2) - - [0.6.0-beta](#060-beta) + - [0.7.0-beta](#070-beta) - [Releasenotes and API Updates](#releasenotes-and-api-updates-3) - [Downloads](#downloads-3) - [Related Documents](#related-documents-3) - - [0.5.2-beta](#052-beta) + - [0.6.0-beta](#060-beta) - [Releasenotes and API Updates](#releasenotes-and-api-updates-4) - [Downloads](#downloads-4) - [Related Documents](#related-documents-4) - - [0.5.0-beta](#050-beta) + - [0.5.2-beta](#052-beta) - [Releasenotes and API Updates](#releasenotes-and-api-updates-5) - [Downloads](#downloads-5) - [Related Documents](#related-documents-5) - - [0.3.0-alpha](#030-alpha) + - [0.5.0-beta](#050-beta) - [Releasenotes and API Updates](#releasenotes-and-api-updates-6) - [Downloads](#downloads-6) - [Related Documents](#related-documents-6) - - [0.2.0-alpha](#020-alpha) + - [0.3.0-alpha](#030-alpha) - [Releasenotes and API Updates](#releasenotes-and-api-updates-7) - [Downloads](#downloads-7) - [Related Documents](#related-documents-7) - - [0.1.0-alpha](#010-alpha) - - [Releasenotes](#releasenotes) + - [0.2.0-alpha](#020-alpha) + - [Releasenotes and API 
Updates](#releasenotes-and-api-updates-8) - [Downloads](#downloads-8) - [Related Documents](#related-documents-8) - - [master(unstable)](#masterunstable) + - [0.1.0-alpha](#010-alpha) + - [Releasenotes](#releasenotes) + - [Downloads](#downloads-9) - [Related Documents](#related-documents-9) + - [master(unstable)](#masterunstable) + - [Related Documents](#related-documents-10) - + + +## 1.1.0 + +### Releasenotes and API Updates + + + +### Downloads + +### Related Documents + +| Category | URL | +| --- | --- | +| Installation | | +| Tutorials | Training
Inference
Mobile Phone&IoT | +| Docs | Python API
C++ API
Java API
FAQ
Design&Specification | ## 1.0.1 @@ -384,4 +404,4 @@ | --- | --- | | Installation | | | Tutorials | Training
Inference
Mobile Phone&IoT | -| Docs | Python API
C++ API
FAQ
Other Note | +| Docs | Python API
C++ API
Java API
FAQ
Design&Specification | diff --git a/resource/release/release_list_zh_cn.md b/resource/release/release_list_zh_cn.md index d7986da3f194a22b2d3dba6fc28e27d4ce0c8152..d213b8af967af8e965d744897759a70fd796cb82 100644 --- a/resource/release/release_list_zh_cn.md +++ b/resource/release/release_list_zh_cn.md @@ -3,48 +3,68 @@ - [发布版本列表](#发布版本列表) - - [1.0.1](#101) + - [1.1.0](#110) - [版本说明和接口变更](#版本说明和接口变更) - [下载地址](#下载地址) - [配套资料](#配套资料) - - [1.0.0](#100) + - [1.0.1](#101) - [版本说明和接口变更](#版本说明和接口变更-1) - [下载地址](#下载地址-1) - [配套资料](#配套资料-1) - - [0.7.0-beta](#070-beta) + - [1.0.0](#100) - [版本说明和接口变更](#版本说明和接口变更-2) - [下载地址](#下载地址-2) - [配套资料](#配套资料-2) - - [0.6.0-beta](#060-beta) + - [0.7.0-beta](#070-beta) - [版本说明和接口变更](#版本说明和接口变更-3) - [下载地址](#下载地址-3) - [配套资料](#配套资料-3) - - [0.5.2-beta](#052-beta) + - [0.6.0-beta](#060-beta) - [版本说明和接口变更](#版本说明和接口变更-4) - [下载地址](#下载地址-4) - [配套资料](#配套资料-4) - - [0.5.0-beta](#050-beta) + - [0.5.2-beta](#052-beta) - [版本说明和接口变更](#版本说明和接口变更-5) - [下载地址](#下载地址-5) - [配套资料](#配套资料-5) - - [0.3.0-alpha](#030-alpha) + - [0.5.0-beta](#050-beta) - [版本说明和接口变更](#版本说明和接口变更-6) - [下载地址](#下载地址-6) - [配套资料](#配套资料-6) - - [0.2.0-alpha](#020-alpha) + - [0.3.0-alpha](#030-alpha) - [版本说明和接口变更](#版本说明和接口变更-7) - [下载地址](#下载地址-7) - [配套资料](#配套资料-7) - - [0.1.0-alpha](#010-alpha) - - [版本说明](#版本说明) + - [0.2.0-alpha](#020-alpha) + - [版本说明和接口变更](#版本说明和接口变更-8) - [下载地址](#下载地址-8) - [配套资料](#配套资料-8) - - [master(unstable)](#masterunstable) + - [0.1.0-alpha](#010-alpha) + - [版本说明](#版本说明) + - [下载地址](#下载地址-9) - [配套资料](#配套资料-9) + - [master(unstable)](#masterunstable) + - [配套资料](#配套资料-10) - + + +## 1.1.0 + +### 版本说明和接口变更 + + + +### 下载地址 + +### 配套资料 + +| 类别 | 网址 | +| --- | --- | +|安装 | | +| 教程 | 训练
推理
手机&IoT | +| 文档 | 编程指南
Python API
C++ API
Java API
FAQ
设计和规格 | ## 1.0.1 @@ -384,4 +404,4 @@ | --- | --- | |安装 | | | 教程 | 训练
推理
手机&IoT | -| 文档 | 编程指南
Python API
C++ API
FAQ
其他说明 | +| 文档 | 编程指南
Python API
C++ API
Java API
FAQ
设计和规格 | diff --git a/tools/link_detection/README_CN.md b/tools/link_detection/README_CN.md index c2be9e6e409f7926daaf6e5034c5525da6b120c1..053413b0ef37f1b595cb02f31331e02f324173e9 100644 --- a/tools/link_detection/README_CN.md +++ b/tools/link_detection/README_CN.md @@ -15,7 +15,7 @@ 1. 打开Git Bash,下载MindSpore Docs仓代码。 ```shell - git clone https://gitee.com/mindspore/docs.git + git clone https://gitee.com/mindspore/docs.git -b r1.1 ``` 2. 进入`tools/link_detection`目录,安装执行所需的第三方库。 diff --git a/tools/pic_detection/README_CN.md b/tools/pic_detection/README_CN.md index a3cf658bc44bc75dede5f6d86a1f649209912092..b52f9314f4adaec06757c172d6711feb05638838 100644 --- a/tools/pic_detection/README_CN.md +++ b/tools/pic_detection/README_CN.md @@ -11,7 +11,7 @@ 1. 打开Git Bash,下载MindSpore Docs仓代码。 ```shell - git clone https://gitee.com/mindspore/docs.git + git clone https://gitee.com/mindspore/docs.git -b r1.1 ``` 2. 进入`tools/pic_detection`目录。 diff --git a/tutorials/inference/source_en/conf.py b/tutorials/inference/source_en/conf.py index 0a00ad8da18607c9f0ac88017972211d04c763c0..425ae737d4e83fc89afc1d341cc266c2f72ca089 100644 --- a/tutorials/inference/source_en/conf.py +++ b/tutorials/inference/source_en/conf.py @@ -21,7 +21,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/tutorials/inference/source_en/index.rst b/tutorials/inference/source_en/index.rst index 77e779bf19d5c2c7227a3267093afa288ff086fd..352af34cc80aac119b0da4e3afba8ae9bfd25269 100644 --- a/tutorials/inference/source_en/index.rst +++ b/tutorials/inference/source_en/index.rst @@ -24,3 +24,6 @@ Inference Using MindSpore :caption: Inference Service serving_example + serving_grpc + serving_restful + serving_model diff --git a/tutorials/inference/source_en/multi_platform_inference.md b/tutorials/inference/source_en/multi_platform_inference.md 
index 2879428aaf758850a2ce2d535d0e0bcb5b8ec170..869ba617199ee0a6bf2d96b80d9ae8d0089c47b3 100644 --- a/tutorials/inference/source_en/multi_platform_inference.md +++ b/tutorials/inference/source_en/multi_platform_inference.md @@ -8,7 +8,7 @@ - + Models trained by MindSpore support the inference on different hardware platforms. This document describes the inference process on each platform. diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst b/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst index d16b94a6134bb498484d11cc4a9535cfddc6f39a..1544dd6a232ca90820288d832336763cff2b3774 100644 --- a/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst +++ b/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst @@ -5,3 +5,4 @@ Inference on Ascend 310 :maxdepth: 1 multi_platform_inference_ascend_310_air + multi_platform_inference_ascend_310_mindir \ No newline at end of file diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_310_air.md b/tutorials/inference/source_en/multi_platform_inference_ascend_310_air.md index cf0d0656ea5b9dda1c0743f0fc24db8b8a637634..29761ae03be366813609c5b7105838b72617906b 100644 --- a/tutorials/inference/source_en/multi_platform_inference_ascend_310_air.md +++ b/tutorials/inference/source_en/multi_platform_inference_ascend_310_air.md @@ -21,7 +21,7 @@ - + ## Overview @@ -39,7 +39,7 @@ This tutorial describes how to use MindSpore to perform inference on the Atlas 2 5. Load the saved OM model, perform inference, and view the result. -> You can obtain the complete executable sample code at . +> You can obtain the complete executable sample code at . 
## Preparing the Development Environment @@ -91,7 +91,7 @@ Install the development kit software package `Ascend-Toolkit-*{version}*-arm64-l ## Inference Directory Structure -Create a directory to store the inference code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/acl_resnet50_sample`. The `inc`, `src`, and `test_data` directory code can be obtained from the [official website](https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/acl_resnet50_sample), and the `model` directory stores the exported `AIR` model file and the converted `OM` model file. The `out` directory stores the executable file generated after building and the output result directory. The directory structure of the inference code project is as follows: +Create a directory to store the inference code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/acl_resnet50_sample`. The `inc`, `src`, and `test_data` directory code can be obtained from the [official website](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/acl_resnet50_sample), and the `model` directory stores the exported `AIR` model file and the converted `OM` model file. The `out` directory stores the executable file generated after building and the output result directory. The directory structure of the inference code project is as follows: ```text └─acl_resnet50_sample @@ -121,7 +121,7 @@ Create a directory to store the inference code project, for example, `/home/HwHi ## Exporting the AIR Model -Train the target network on the Ascend 910 AI Processor, save it as a checkpoint file, and export the model file in AIR format through the network and checkpoint file. For details about the export process, see [Export AIR Model](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html#export-air-model). 
+Train the target network on the Ascend 910 AI Processor, save it as a checkpoint file, and export the model file in AIR format through the network and checkpoint file. For details about the export process, see [Export AIR Model](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html#export-air-model). > The [resnet50_export.air](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com:443/sample_resources/acl_resnet50_sample/resnet50_export.air) is a sample AIR file exported using the ResNet-50 model. diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md b/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md new file mode 100644 index 0000000000000000000000000000000000000000..14cfec147e06256de893a3547af3036a3ed163dd --- /dev/null +++ b/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md @@ -0,0 +1,5 @@ +# Inference on the Ascend 310 AI Processor Using MindIR Model + +No English version available right now, welcome to contribute. + + diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_910.md b/tutorials/inference/source_en/multi_platform_inference_ascend_910.md index 7b6afa002a202a9d888f693525a67b73d60d58fa..cdf9486a4b142122fd9e660e02789fe119cac5be 100644 --- a/tutorials/inference/source_en/multi_platform_inference_ascend_910.md +++ b/tutorials/inference/source_en/multi_platform_inference_ascend_910.md @@ -10,7 +10,7 @@ - + ## Inference Using a Checkpoint File with Single Device @@ -37,8 +37,8 @@ ``` In the preceding information: - `model.eval` is an API for model validation. For details about the API, see . - > Inference sample code: . + `model.eval` is an API for model validation. For details about the API, see . + > Inference sample code: . 1.2 Remote Storage @@ -61,7 +61,7 @@ In the preceding information: - `mindpsore_hub.load` is an API for loading model parameters. Please check the details in . 
+ `mindspore_hub.load` is an API for loading model parameters. Please check the details in . 2. Use the `model.predict` API to perform inference. @@ -70,7 +70,7 @@ ``` In the preceding information: - `model.predict` is an API for inference. For details about the API, see . + `model.predict` is an API for inference. For details about the API, see . ## Distributed Inference With Multi Devices @@ -80,13 +80,13 @@ This tutorial would focus on the process that the model slices are saved on each > Distributed inference sample code: > -> +> The process of distributed inference is as follows: 1. Execute training, generate the checkpoint file and the model strategy file. - > - The distributed training tutorial and sample code can be referred to the link: . + > - The distributed training tutorial and sample code can be referred to the link: . > - In the distributed inference scenario, during the training phase, the `integrated_save` of `CheckpointConfig` interface should be set to `False`, which means that each device only saves the slice of model instead of the full model. > - `parallel_mode` of `set_auto_parallel_context` interface should be set to `auto_parallel` or `semi_auto_parallel`. > - In addition, you need to specify `strategy_ckpt_save_file` to indicate the path of the strategy file. @@ -122,7 +122,7 @@ The process of distributed inference is as follows: - `load_distributed_checkpoint`: merges model slices, then splits it according to the prediction strategy, and loads it into the network. > The `load_distributed_checkpoint` interface supports that predict_strategy is `None`, which is single device inference, and the process is different from distributed inference. The detailed usage can be referred to the link: - > . + > . 4. Execute inference. 
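The merge-then-resplit behavior that `load_distributed_checkpoint` performs — merge the per-device checkpoint slices saved with `integrated_save=False` into full parameters, then split them again per the prediction strategy — can be pictured with a toy, stdlib-only sketch (plain lists stand in for parameter tensors; this illustrates the idea, not the MindSpore API):

```python
def merge_slices(shards):
    """Merge per-device parameter slices into full parameters (first step of loading)."""
    full = {}
    for shard in shards:                      # one dict per device checkpoint
        for name, part in shard.items():
            full.setdefault(name, []).extend(part)
    return full

def reslice(full, n_devices):
    """Split each full parameter across devices (toy 'prediction strategy': even split)."""
    out = [{} for _ in range(n_devices)]
    for name, values in full.items():
        step = len(values) // n_devices
        for i in range(n_devices):
            out[i][name] = values[i * step:(i + 1) * step]
    return out

# Saved with integrated_save=False: each training device holds only its slice of "w".
shards = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
full = merge_slices(shards)        # full parameter reassembled from the slices
print(reslice(full, 4))            # resplit for a 4-device inference strategy
```

With `predict_strategy=None` the resplit step is skipped and the merged full parameters are loaded onto a single device, which matches the single-device case described above.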
diff --git a/tutorials/inference/source_en/multi_platform_inference_cpu.md b/tutorials/inference/source_en/multi_platform_inference_cpu.md index 8d00afd56a67f27869dd0f68bec43c43437d8c2e..0576c5a802f2ce316c8eda45947bcec9e5c4a219 100644 --- a/tutorials/inference/source_en/multi_platform_inference_cpu.md +++ b/tutorials/inference/source_en/multi_platform_inference_cpu.md @@ -10,7 +10,7 @@ - + ## Inference Using a Checkpoint File @@ -20,6 +20,6 @@ The inference is the same as that on the Ascend 910 AI processor. Similar to the inference on a GPU, the following steps are required: -1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html#export-onnx-model). +1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html#export-onnx-model). 2. Perform inference on a CPU by referring to the runtime or SDK document. For details about how to use the ONNX Runtime, see the [ONNX Runtime document](https://github.com/microsoft/onnxruntime). diff --git a/tutorials/inference/source_en/multi_platform_inference_gpu.md b/tutorials/inference/source_en/multi_platform_inference_gpu.md index 0c3de8af6ba83965679f63f5719233bf1b982100..7ce07c133a3d8b54720a5505d156e9edf10e29e0 100644 --- a/tutorials/inference/source_en/multi_platform_inference_gpu.md +++ b/tutorials/inference/source_en/multi_platform_inference_gpu.md @@ -10,7 +10,7 @@ - + ## Inference Using a Checkpoint File @@ -18,6 +18,6 @@ The inference is the same as that on the Ascend 910 AI processor. ## Inference Using an ONNX File -1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html#export-onnx-model). +1. Generate a model in ONNX format on the training platform. 
For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html#export-onnx-model). 2. Perform inference on a GPU by referring to the runtime or SDK document. For example, use TensorRT to perform inference on the NVIDIA GPU. For details, see [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt). diff --git a/tutorials/inference/source_en/serving_grpc.md b/tutorials/inference/source_en/serving_grpc.md new file mode 100644 index 0000000000000000000000000000000000000000..4d26d1f7347a1976f756f41ef00b44ad17068828 --- /dev/null +++ b/tutorials/inference/source_en/serving_grpc.md @@ -0,0 +1,5 @@ +# Access MindSpore Serving service based on gRPC interface + +No English version available right now, welcome to contribute. + + diff --git a/tutorials/inference/source_en/serving_model.md b/tutorials/inference/source_en/serving_model.md new file mode 100644 index 0000000000000000000000000000000000000000..9da9378f3fc33e3b8771fed235215b607a9a997d --- /dev/null +++ b/tutorials/inference/source_en/serving_model.md @@ -0,0 +1,5 @@ +# Servable provided by configuration model + +No English version available right now, welcome to contribute. + + diff --git a/tutorials/inference/source_en/serving_restful.md b/tutorials/inference/source_en/serving_restful.md new file mode 100644 index 0000000000000000000000000000000000000000..29045371bda7381b05bab51c0b5fdf5eaba2f55f --- /dev/null +++ b/tutorials/inference/source_en/serving_restful.md @@ -0,0 +1,5 @@ +# Access MindSpore Serving service based on RESTful interface + +No English version available right now, welcome to contribute. 
+ + diff --git a/tutorials/inference/source_zh_cn/conf.py b/tutorials/inference/source_zh_cn/conf.py index 0c819a8b0622e1914ff199e5bd29a591595470b3..1a8eda1de573595f3cbeabce9c3fd1913cd377aa 100644 --- a/tutorials/inference/source_zh_cn/conf.py +++ b/tutorials/inference/source_zh_cn/conf.py @@ -21,7 +21,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference.md b/tutorials/inference/source_zh_cn/multi_platform_inference.md index 0556b845255b55e06909a20966e6ea9eacbe99ea..e9f73e065472bbd778dbb547036a922d131f45c2 100644 --- a/tutorials/inference/source_zh_cn/multi_platform_inference.md +++ b/tutorials/inference/source_zh_cn/multi_platform_inference.md @@ -8,7 +8,7 @@ - + 基于MindSpore训练后的模型,支持在不同的硬件平台上执行推理。本文介绍各平台上的推理流程。 diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_air.md b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_air.md index 71347fdb25503f12a79fd2fc35607de0b892c4ed..1a4777fe6d0ba3d0592110a9b4fbda9523065394 100644 --- a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_air.md +++ b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_air.md @@ -21,7 +21,7 @@ - + ## 概述 @@ -39,7 +39,7 @@ Ascend 310是面向边缘场景的高能效高集成度AI处理器。Atlas 200 5. 
加载保存的OM模型,执行推理并查看结果。 -> 你可以在这里找到完整可运行的样例代码: 。 +> 你可以在这里找到完整可运行的样例代码: 。 ## 开发环境准备 @@ -91,7 +91,7 @@ Atlas 200 DK开发者板支持通过USB端口或者网线与Ubuntu服务器进 ## 推理目录结构介绍 -创建目录放置推理代码工程,例如`/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/acl_resnet50_sample`,其中`inc`、`src`、`test_data`目录代码可以从[官网示例下载](https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/acl_resnet50_sample)获取,`model`目录用于存放接下来导出的`AIR`模型文件和转换后的`OM`模型文件,`out`目录用于存放执行编译生成的可执行文件和输出结果目录,推理代码工程目录结构如下: +创建目录放置推理代码工程,例如`/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/acl_resnet50_sample`,其中`inc`、`src`、`test_data`目录代码可以从[官网示例下载](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/acl_resnet50_sample)获取,`model`目录用于存放接下来导出的`AIR`模型文件和转换后的`OM`模型文件,`out`目录用于存放执行编译生成的可执行文件和输出结果目录,推理代码工程目录结构如下: ```text └─acl_resnet50_sample @@ -121,7 +121,7 @@ Atlas 200 DK开发者板支持通过USB端口或者网线与Ubuntu服务器进 ## 导出AIR模型文件 -在Ascend 910的机器上训练好目标网络,并保存为CheckPoint文件,通过网络和CheckPoint文件导出对应的AIR格式模型文件,导出流程参见[导出AIR格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html#air)。 +在Ascend 910的机器上训练好目标网络,并保存为CheckPoint文件,通过网络和CheckPoint文件导出对应的AIR格式模型文件,导出流程参见[导出AIR格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html#air)。 > 这里提供使用ResNet-50模型导出的示例AIR文件[resnet50_export.air](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com:443/sample_resources/acl_resnet50_sample/resnet50_export.air)。 diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_mindir.md b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_mindir.md index 5ba0fe9cf7c795aac253d9661a8c583e843f3947..68b3e24218375358f440f0907a375c1e7fa39285 100644 --- a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_mindir.md +++ b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_mindir.md @@ -14,7 +14,7 @@ - + ## 概述 @@ -30,15 +30,15 @@ Ascend 310是面向边缘场景的高能效高集成度AI处理器。Atlas 200 
4. 加载保存的MindIR模型,执行推理并查看结果。

-> 你可以在这里找到完整可运行的样例代码: 。
+> 你可以在这里找到完整可运行的样例代码: 。

## 开发环境准备

-参考[Ascend 310 AI处理器上使用AIR进行推理#开发环境准备](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_ascend_310_air.html#id2)
+参考[Ascend 310 AI处理器上使用AIR进行推理#开发环境准备](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310_air.html#id2)

## 推理目录结构介绍

-创建目录放置推理代码工程,例如`/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_resnet50_preprocess_sample`,目录代码可以从[官网示例下载](https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/ascend310_resnet50_preprocess_sample)获取,`model`目录用于存放接下来导出的`MindIR`模型文件,`test_data`目录用于存放待分类的图片,推理代码工程目录结构如下:
+创建目录放置推理代码工程,例如`/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_resnet50_preprocess_sample`,目录代码可以从[官网示例下载](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/ascend310_resnet50_preprocess_sample)获取,`model`目录用于存放接下来导出的`MindIR`模型文件,`test_data`目录用于存放待分类的图片,推理代码工程目录结构如下:

```text
└─ascend310_resnet50_preprocess_sample

@@ -55,7 +55,7 @@ Ascend 310是面向边缘场景的高能效高集成度AI处理器。Atlas 200

## 导出MindIR模型文件

-在Ascend 910的机器上训练好目标网络,并保存为CheckPoint文件,通过网络和CheckPoint文件导出对应的MindIR格式模型文件,导出流程参见[导出MindIR格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html#mindir)。
+在Ascend 910的机器上训练好目标网络,并保存为CheckPoint文件,通过网络和CheckPoint文件导出对应的MindIR格式模型文件,导出流程参见[导出MindIR格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html#mindir)。

> 这里提供使用ResNet-50模型导出的示例MindIR文件[resnet50_imagenet.mindir](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/sample_resources/ascend310_resnet50_preprocess_sample/resnet50_imagenet.mindir)。

diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_910.md b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_910.md
index 95fe1dc2825677c9e52ba7f4bc0bd21d13917b5a..4ea024c441bf2d49174e4c907faba897b93ef514 100644
--- a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_910.md
+++ b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_910.md
@@ -10,7 +10,7 @@

-
+

## 使用checkpoint格式文件单卡推理

@@ -37,8 +37,8 @@
```

其中,
- `model.eval`为模型验证接口,对应接口说明:。
- > 推理样例代码:。
+ `model.eval`为模型验证接口,对应接口说明:。
+ > 推理样例代码:。

1.2 使用MindSpore Hub从华为云加载模型

@@ -60,7 +60,7 @@
```

其中,
- `mindspore_hub.load`为加载模型参数接口,对应接口说明:。
+ `mindspore_hub.load`为加载模型参数接口,对应接口说明:。

2. 使用`model.predict`接口来进行推理操作。

@@ -69,7 +69,7 @@
```

其中,
- `model.predict`为推理接口,对应接口说明:。
+ `model.predict`为推理接口,对应接口说明:。

## 分布式推理

@@ -79,13 +79,13 @@

> 分布式推理样例代码:
>
->
+>

分布式推理流程如下:

1. 执行训练,生成checkpoint文件和模型参数切分策略文件。

- > - 分布式训练教程和样例代码可参考链接:.
+ > - 分布式训练教程和样例代码可参考链接:.
> - 在分布式推理场景中,训练阶段的`CheckpointConfig`接口的`integrated_save`参数需设定为`False`,表示每卡仅保存模型切片而不是全量模型。
> - `set_auto_parallel_context`接口的`parallel_mode`参数需设定为`auto_parallel`或者`semi_auto_parallel`,并行模式为自动并行或者半自动并行。
> - 此外还需指定`strategy_ckpt_save_file`参数,即生成的策略文件的地址。

@@ -121,7 +121,7 @@
- `load_distributed_checkpoint`:对模型切片进行合并,再根据推理策略进行切分,加载至网络中。

> `load_distributed_checkpoint`接口支持predict_strategy为`None`,此时为单卡推理,其过程与分布式推理有所不同,详细用法请参考链接:
- > .
+ > .

4. 进行推理,得到推理结果。

diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_cpu.md b/tutorials/inference/source_zh_cn/multi_platform_inference_cpu.md
index 82d7141468788164b7c18d166d19f40206d33be6..c3df856d011d3dd1182f0ceb383a84abd93342c9 100644
--- a/tutorials/inference/source_zh_cn/multi_platform_inference_cpu.md
+++ b/tutorials/inference/source_zh_cn/multi_platform_inference_cpu.md
@@ -10,7 +10,7 @@

-
+

## 使用checkpoint格式文件推理

@@ -20,6 +20,6 @@

与在GPU上进行推理类似,需要以下几个步骤:

-1. 在训练平台上生成ONNX格式模型,具体步骤请参考[导出ONNX格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html#onnx)。
+1. 在训练平台上生成ONNX格式模型,具体步骤请参考[导出ONNX格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html#onnx)。
2. 在CPU上进行推理,具体可以参考推理使用runtime/SDK的文档。如使用ONNX Runtime,可以参考[ONNX Runtime说明文档](https://github.com/microsoft/onnxruntime)。

diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_gpu.md b/tutorials/inference/source_zh_cn/multi_platform_inference_gpu.md
index ea96a12c1ce5e620f6c2700aa5c26088b9e8f534..3bbc9a3ee63f9189b5a612cafb3ad959999eea25 100644
--- a/tutorials/inference/source_zh_cn/multi_platform_inference_gpu.md
+++ b/tutorials/inference/source_zh_cn/multi_platform_inference_gpu.md
@@ -10,7 +10,7 @@

-
+

## 使用checkpoint格式文件推理

@@ -18,6 +18,6 @@

## 使用ONNX格式文件推理

-1. 在训练平台上生成ONNX格式模型,具体步骤请参考[导出ONNX格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html#onnx)。
+1. 在训练平台上生成ONNX格式模型,具体步骤请参考[导出ONNX格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html#onnx)。
2. 在GPU上进行推理,具体可以参考推理使用runtime/SDK的文档。如在Nvidia GPU上进行推理,使用常用的TensorRT,可参考[TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt)。

diff --git a/tutorials/inference/source_zh_cn/serving_example.md b/tutorials/inference/source_zh_cn/serving_example.md
index 165e05ad159936a859b99eff5c7745306fc5f947..649316d6bd22cb1f27197fc9774d2fc763dc4941 100644
--- a/tutorials/inference/source_zh_cn/serving_example.md
+++ b/tutorials/inference/source_zh_cn/serving_example.md
@@ -15,7 +15,7 @@

-
+

## 概述

@@ -29,7 +29,7 @@ MindSpore Serving是一个轻量级、高性能的服务模块,旨在帮助Min

### 导出模型

-使用[add_model.py](https://gitee.com/mindspore/serving/blob/master/example/add/export_model/add_model.py),构造一个只有Add算子的网络,并导出MindSpore推理部署模型。
+使用[add_model.py](https://gitee.com/mindspore/serving/blob/r1.1/example/add/export_model/add_model.py),构造一个只有Add算子的网络,并导出MindSpore推理部署模型。

```python
import os

@@ -83,7 +83,7 @@ if __name__ == "__main__":
```

使用MindSpore定义神经网络需要继承`mindspore.nn.Cell`。Cell是所有神经网络的基类。神经网络的各层需要预先在`__init__`方法中定义,然后通过定义`construct`方法来完成神经网络的前向构造。使用`mindspore`模块的`export`即可导出模型文件。
-更为详细完整的示例可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)。
+更为详细完整的示例可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)。

执行`add_model.py`脚本,生成`tensor_add.mindir`文件,该模型的输入为两个shape为[2,2]的二维Tensor,输出结果是两个输入Tensor之和。

@@ -103,7 +103,7 @@ test_dir

- `master_with_worker.py`为启动服务脚本文件。
- `add`为模型文件夹,文件夹名即为模型名。
- `tensor_add.mindir`为上一步网络生成的模型文件,放置在文件夹1下,1为版本号,不同的版本放置在不同的文件夹下,版本号需以纯数字串命名,默认配置下启动最大数值的版本号的模型文件。
-[servable_config.py](https://gitee.com/mindspore/serving/blob/master/example/add/add/servable_config.py)为[模型配置文件](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_model.html),其定义了模型的处理函数,包括`add_common`和`add_cast`两个方法,`add_common`定义了输入为两个普通float32类型的加法操作,`add_cast`定义输入类型为其他类型,经过输入类型转换float32后的加法操作。
+[servable_config.py](https://gitee.com/mindspore/serving/blob/r1.1/example/add/add/servable_config.py)为[模型配置文件](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_model.html),其定义了模型的处理函数,包括`add_common`和`add_cast`两个方法,`add_common`定义了输入为两个普通float32类型的加法操作,`add_cast`定义输入类型为其他类型,经过输入类型转换float32后的加法操作。

模型配置文件内容如下:

@@ -145,7 +145,7 @@ MindSpore Serving提供两种部署方式,轻量级部署和集群部署。轻

#### 轻量级部署

服务端调用Python接口直接启动推理进程(master和worker共进程),客户端直接连接推理服务后下发推理任务。
-执行[master_with_worker.py](https://gitee.com/mindspore/serving/blob/master/example/add/master_with_worker.py),完成轻量级部署服务如下:
+执行[master_with_worker.py](https://gitee.com/mindspore/serving/blob/r1.1/example/add/master_with_worker.py),完成轻量级部署服务如下:

```python
import os

@@ -201,8 +201,8 @@ if __name__ == "__main__":

### 执行推理

-客户端提供两种方式访问推理服务,一种是通过[gRPC方式](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_grpc.html),一种是通过[RESTful方式](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_restful.html),本文以gRPC方式为例。
-使用[client.py](https://gitee.com/mindspore/serving/blob/master/example/add/client.py),启动Python客户端。
+客户端提供两种方式访问推理服务,一种是通过[gRPC方式](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_grpc.html),一种是通过[RESTful方式](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_restful.html),本文以gRPC方式为例。
+使用[client.py](https://gitee.com/mindspore/serving/blob/r1.1/example/add/client.py),启动Python客户端。

```python
import numpy as np

diff --git a/tutorials/inference/source_zh_cn/serving_grpc.md b/tutorials/inference/source_zh_cn/serving_grpc.md
index fd388bf0ea72ae812634ff092d66b94c9bad9a4e..b2e3ee870bdb591b21db46d89b320d25cd4e0c50 100644
--- a/tutorials/inference/source_zh_cn/serving_grpc.md
+++ b/tutorials/inference/source_zh_cn/serving_grpc.md
@@ -11,15 +11,15 @@

-
+

## 概述

-MindSpore Serving提供gRPC接口访问Serving服务。在Python环境下,我们提供[mindspore_serving.client](https://gitee.com/mindspore/serving/blob/master/mindspore_serving/client/python/client.py) 模块用于填写请求、解析回复。gRPC服务端(worker节点)当前仅支持Ascend平台,客户端运行不依赖特定硬件环境。接下来我们通过`add`和`ResNet-50`样例来详细说明gRPC Python客户端接口的使用。
+MindSpore Serving提供gRPC接口访问Serving服务。在Python环境下,我们提供[mindspore_serving.client](https://gitee.com/mindspore/serving/blob/r1.1/mindspore_serving/client/python/client.py) 模块用于填写请求、解析回复。gRPC服务端(worker节点)当前仅支持Ascend平台,客户端运行不依赖特定硬件环境。接下来我们通过`add`和`ResNet-50`样例来详细说明gRPC Python客户端接口的使用。

## add样例

-样例来源于[add example](https://gitee.com/mindspore/serving/blob/master/example/add/client.py) ,`add` Servable提供的`add_common`方法提供两个2x2 Tensor相加功能。其中gRPC Python客户端代码如下所示,一次gRPC请求包括了三对独立的2x2 Tensor:
+样例来源于[add example](https://gitee.com/mindspore/serving/blob/r1.1/example/add/client.py) ,`add` Servable提供的`add_common`方法提供两个2x2 Tensor相加功能。其中gRPC Python客户端代码如下所示,一次gRPC请求包括了三对独立的2x2 Tensor:

```python
from mindspore_serving.client import Client

@@ -54,7 +54,7 @@ if __name__ == '__main__':
    run_add_common()
```

-按照[入门流程](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_example.html) 导出模型、启动Serving服务器,并执行上述客户端代码。当运行正常后,将打印以下结果,为了展示方便,格式作了调整:
+按照[入门流程](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_example.html) 导出模型、启动Serving服务器,并执行上述客户端代码。当运行正常后,将打印以下结果,为了展示方便,格式作了调整:

```python
[{'y': array([[2., 2.], [2., 2.]], dtype=float32)},

@@ -124,7 +124,7 @@ if __name__ == '__main__':

## ResNet-50样例

-样例来源于[ResNet-50 example](https://gitee.com/mindspore/serving/blob/master/example/resnet/client.py),`ResNet-50` Servable提供的`classify_top1`方法提供对图像进行识别的服务。`classify_top1`方法输入为图像数据,输出为字符串,方法中预处理对图像进行解码、Resize等操作,接着进行推理,并通过后处理返回得分最大的分类标签。
+样例来源于[ResNet-50 example](https://gitee.com/mindspore/serving/blob/r1.1/example/resnet/client.py),`ResNet-50` Servable提供的`classify_top1`方法提供对图像进行识别的服务。`classify_top1`方法输入为图像数据,输出为字符串,方法中预处理对图像进行解码、Resize等操作,接着进行推理,并通过后处理返回得分最大的分类标签。

```python
import os

diff --git a/tutorials/inference/source_zh_cn/serving_model.md b/tutorials/inference/source_zh_cn/serving_model.md
index b4edcd3e562054659f90a20a821f48f7cb788ede..277f432d6a2f8028a07123c107383082ead92af5 100644
--- a/tutorials/inference/source_zh_cn/serving_model.md
+++ b/tutorials/inference/source_zh_cn/serving_model.md
@@ -17,7 +17,7 @@

-
+

## 概述

@@ -27,7 +27,7 @@ MindSpore Serving的Servable提供推理服务,包含两种类型。一种是

本文将说明如何对单模型进行配置以提供Servable,以下所有Servable配置说明针对的是单模型Servable,Serving客户端简称客户端。

-本文以ResNet-50作为样例介绍如何配置模型提供Servable。样例代码可参考[ResNet-50样例](https://gitee.com/mindspore/serving/tree/master/example/resnet/) 。
+本文以ResNet-50作为样例介绍如何配置模型提供Servable。样例代码可参考[ResNet-50样例](https://gitee.com/mindspore/serving/tree/r1.1/example/resnet/) 。

## 相关概念

@@ -136,7 +136,7 @@ def postprocess_top5(score):

预处理和后处理定义格式相同,入参为每个实例的输入数据。输入数据为文本时,入参为str对象;输入数据为其他数据类型,包括Tensor、Scalar number、Bool、Bytes时,入参为**numpy对象**。通过`return`返回实例的处理结果,`return`返回的数据可为**numpy、Python的bool、int、float、str、或bytes**单个数据对象或者由它们组成的tuple。

-预处理和后处理输入的来源和输出的使用由[方法定义](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_model.html#id9)决定。
+预处理和后处理输入的来源和输出的使用由[方法定义](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_model.html#id9)决定。

### 模型声明

diff --git a/tutorials/inference/source_zh_cn/serving_restful.md b/tutorials/inference/source_zh_cn/serving_restful.md
index 017d25a5ff0604f5151a0eaf60f5d80615dbef86..4ae334c606982f4d69e4fb3ffc8847ef8871ed9d 100644
--- a/tutorials/inference/source_zh_cn/serving_restful.md
+++ b/tutorials/inference/source_zh_cn/serving_restful.md
@@ -12,7 +12,7 @@

-
+

## 概述

@@ -20,7 +20,7 @@ MindSpore Serving支持`gPRC`和`RESTful`两种请求方式。本章节介绍`RE

`RESTful`是一种基于`HTTP`协议的网络应用程序的设计风格和开发方式,通过`URI`实现对资源的管理及访问,具有扩展性强、结构清晰的特点。基于其轻量级以及通过`HTTP`直接传输数据的特性,`RESTful`已经成为最常见的`Web`服务访问方式。用户通过`RESTful`方式,能够简单直接的与服务进行交互。

-部署`Serving`参考[快速入门](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_example.html) 章节。
+部署`Serving`参考[快速入门](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_example.html) 章节。

与通过`master.start_grpc_server("127.0.0.1", 5500)`启动`gRPC`服务不同的是,`RESTful`服务需要通过`master.start_restful_server("0.0.0.0", 1500)`方式来启动。

diff --git a/tutorials/lite/source_en/conf.py b/tutorials/lite/source_en/conf.py
index b472aa71f0899d61ef358f7388dcadfe8a2c7706..c87330ab041202db0eb914846d1bfc70d7438774 100644
--- a/tutorials/lite/source_en/conf.py
+++ b/tutorials/lite/source_en/conf.py
@@ -21,7 +21,7 @@ copyright = '2020, MindSpore Lite'
author = 'MindSpore Lite'

# The full version, including alpha/beta/rc tags
-release = 'master'
+release = 'r1.1'

# -- General configuration ---------------------------------------------------

diff --git a/tutorials/lite/source_en/images/side_train_sequence.png b/tutorials/lite/source_en/images/side_train_sequence.png
index 16e4af67a46370813760c09a15da756ad87fa643..058f03d3973beab9c8a245d6aa898f938d486315 100644
Binary files a/tutorials/lite/source_en/images/side_train_sequence.png and b/tutorials/lite/source_en/images/side_train_sequence.png differ

diff --git a/tutorials/lite/source_en/index.rst b/tutorials/lite/source_en/index.rst
index fcddb633dca596106919215c815e3ac44b6e86e5..8b9f9f69cc001273b90995f5594678fec8c9c104 100644
--- a/tutorials/lite/source_en/index.rst
+++ b/tutorials/lite/source_en/index.rst
@@ -112,7 +112,7 @@ Using MindSpore on Mobile and IoT