diff --git a/README.md b/README.md index b79bbd2d9b961550b7ded278871cec0fd6e518c5..a340846c25ab655e2ff95cc29ae932ae3fe98ab6 100644 --- a/README.md +++ b/README.md @@ -40,7 +40,7 @@ MindSpore tutorials and API documents can be generated by [Sphinx](https://www.s 1. Download code of the MindSpore Docs repository. ```shell - git clone https://gitee.com/mindspore/docs.git + git clone https://gitee.com/mindspore/docs.git -b r1.1 ``` 2. Go to the api_python directory and install the dependency items in the `requirements.txt` file. diff --git a/README_CN.md b/README_CN.md index ac420699c9536a0188543582b30a64b2d7792f0b..6676a0ba31caaec9dcd3e7c78763fc5e25aa60e3 100644 --- a/README_CN.md +++ b/README_CN.md @@ -6,7 +6,7 @@ ## 简介 -此工程提供MindSpore官方网站所呈现安装指南、教程、文档的源文件以及API的相关配置。 +此工程提供MindSpore官方网站所呈现安装指南、教程、文档的源文件以及API的相关配置。 ## 贡献 @@ -40,7 +40,7 @@ MindSpore的教程和API文档均可由[Sphinx](https://www.sphinx-doc.org/en/ma 1. 下载MindSpore Docs仓代码。 ```shell - git clone https://gitee.com/mindspore/docs.git + git clone https://gitee.com/mindspore/docs.git -b r1.1 ``` 2. 进入api_python目录,安装该目录下`requirements.txt`文件中的依赖项。 diff --git a/docs/api_cpp/source_en/class_list.md b/docs/api_cpp/source_en/class_list.md index d6f4cd6216606ff0b95bc67f0cc18ebbbb8e4f9d..057bed893ac4a2f626ef93074f9344ce6770c0c2 100644 --- a/docs/api_cpp/source_en/class_list.md +++ b/docs/api_cpp/source_en/class_list.md @@ -1,18 +1,27 @@ # Class List - + -Here is a list of all classes with links to the namespace documentation for each member: +Here is a list of all classes with links to the namespace documentation for each member in MindSpore Lite: -| Namespace | Class Name | Description | -| --- | --- | --- | -| mindspore | [KernelCallBack](https://www.mindspore.cn/doc/api_cpp/en/master/mindspore.html#kernelcallback) | KernelCallBack defines the function pointer for callback. 
| -| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#allocator) | Allocator defines a memory pool for dynamic memory malloc and memory free. | -| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#context) | Context is defined for holding environment variables during runtime. | -| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#modelimpl) | ModelImpl defines the implement class of Model in MindSpore Lite. | -| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#primitivec) | Primitive is defined as prototype of operator. | -| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#model) | Model defines model in MindSpore Lite for managing graph. | -| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#modelbuilder) | ModelBuilder is defined to build the model. | -| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/en/master/session.html#litesession) | LiteSession defines sessions in MindSpore Lite for compiling Model and forwarding model. | -| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/en/master/tensor.html#mstensor) | MSTensor defines tensor in MindSpore Lite. | -| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/en/master/dataset.html#litemat) |LiteMat is a class used to process images. | +| Namespace | Class Name | Description | +| ------------------ | -------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- | +| mindspore | [KernelCallBack](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#kernelcallback) | KernelCallBack defines the function pointer for callback. 
| +| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#allocator) | Allocator defines a memory pool for dynamic memory malloc and memory free. | +| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#context) | Context is defined for holding environment variables during runtime. | +| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#modelimpl) | ModelImpl defines the implement class of Model in MindSpore Lite. | +| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#primitivec) | Primitive is defined as prototype of operator. | +| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#model) | Model defines model in MindSpore Lite for managing graph. | +| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#modelbuilder) | ModelBuilder is defined to build the model. | +| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/en/r1.1/session.html#litesession) | LiteSession defines sessions in MindSpore Lite for compiling Model and forwarding inference. | +| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/en/r1.1/tensor.html#mstensor) | MSTensor defines tensor in MindSpore Lite. | +| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/en/r1.1/dataset.html#litemat) | LiteMat is a class used to process images. | + +The definitions and namespaces of classes in MindSpore are as follows: + +| Namespace | Class Name | Description | +| --------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------- | +| mindspore | [Context](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#context) | The Context class is used to store environment variables during execution. 
| +| mindspore | [Serialization](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#serialization) | The Serialization class is used to summarize methods for reading and writing model files. | +| mindspore | [Model](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#model) | The Model class is used to define a MindSpore model, facilitating computational graph management. | +| mindspore | [MSTensor](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#mstensor) | The MSTensor class defines a tensor in MindSpore. | diff --git a/docs/api_cpp/source_en/conf.py b/docs/api_cpp/source_en/conf.py index 4787de3f631f53db97bad94ffb7c95441edf0bb7..a44ca580d3d6539a56c49fcaec32c617cb6dc907 100644 --- a/docs/api_cpp/source_en/conf.py +++ b/docs/api_cpp/source_en/conf.py @@ -22,7 +22,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_cpp/source_en/dataset.md b/docs/api_cpp/source_en/dataset.md index 64abbd55cea593857841139316afb38602ad0d1d..bd80349c40c126a2b1f48b86144e84c62f9c3b88 100644 --- a/docs/api_cpp/source_en/dataset.md +++ b/docs/api_cpp/source_en/dataset.md @@ -1,10 +1,42 @@ # mindspore::dataset - + + +## Execute + +\#include <[execute.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/execute.h)> + +```cpp +Execute(std::shared_ptr<TensorOperation> op); + +Execute(std::vector<std::shared_ptr<TensorOperation>> ops); +``` + +Class to run Tensor operations (cv, text) in eager mode. + +- Parameters + + - `op`: Single transform operation to be used. + - `ops`: A list of transform operations to be used. + +```cpp +Status operator()(const mindspore::MSTensor &input, mindspore::MSTensor *output); +``` + +Callable function to execute the TensorOperation in eager mode. + +- Parameters + + - `input`: Tensor to be transformed. + - `output`: Transformed tensor.
+ +- Returns + + Return Status code to indicate transform successful or not. ## ResizeBilinear -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h) @@ -25,7 +57,7 @@ Resize image by bilinear algorithm, currently the data type only supports uint8, ## InitFromPixel -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m) @@ -48,7 +80,7 @@ Initialize LiteMat from pixel, providing data in RGB or BGR format does not need ## ConvertTo -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0) @@ -68,7 +100,7 @@ Convert the data type, currently it supports converting the data type from uint8 ## Crop -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h) @@ -91,7 
+123,7 @@ Crop image, the channel supports 3 and 1. ## SubStractMeanNormalize -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector &mean, const std::vector &std) @@ -112,7 +144,7 @@ Normalize image, currently the supports data type is float. ## Pad -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int right, PaddBorderType pad_type, uint8_t fill_b_or_gray, uint8_t fill_g, uint8_t fill_r) @@ -139,7 +171,7 @@ Pad image, the channel supports 3 and 1. ## ExtractChannel -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ExtractChannel(const LiteMat &src, LiteMat &dst, int col) @@ -158,7 +190,7 @@ Extract image channel by index. ## Split -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Split(const LiteMat &src, std::vector &mv) @@ -177,7 +209,7 @@ Split image channels to single channel. 
## Merge -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Merge(const std::vector &mv, LiteMat &dst) @@ -196,7 +228,7 @@ Create a multi-channel image out of several single-channel arrays. ## Affine -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C1 borderValue) @@ -216,7 +248,7 @@ Apply affine transformation to the 1-channel image. void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C3 borderValue) ``` -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> Apply affine transformation to the 3-channel image. @@ -230,7 +262,7 @@ Apply affine transformation to the 3-channel image. ## GetDefaultBoxes -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp std::vector> GetDefaultBoxes(BoxesConfig config) @@ -248,7 +280,7 @@ Get default anchor boxes for Faster R-CNN, SSD, YOLO, etc. 
## ConvertBoxes -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp void ConvertBoxes(std::vector> &boxes, std::vector> &default_boxes, BoxesConfig config) @@ -264,7 +296,7 @@ Convert the prediction boxes to the actual boxes with (y, x, h, w). ## ApplyNms -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp std::vector ApplyNms(std::vector> &all_boxes, std::vector &all_scores, float thres, int max_boxes) @@ -285,7 +317,7 @@ Real-size box non-maximum suppression. ## LiteMat -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> LiteMat is a class that processes images. @@ -431,7 +463,7 @@ A **pointer** to the address of the reference counter. ## Subtract -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Subtract(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) @@ -451,7 +483,7 @@ Calculates the difference between the two images for each element. 
## Divide -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Divide(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) @@ -471,7 +503,7 @@ Calculates the division between the two images for each element. ## Multiply -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Multiply(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) diff --git a/docs/api_cpp/source_en/errorcode_and_metatype.md b/docs/api_cpp/source_en/errorcode_and_metatype.md index de25e800ff7ac6ec3d04eba326b1e78cbc01e7b1..b7f8cd51d4b045c9f3373dba5acfee553072c552 100644 --- a/docs/api_cpp/source_en/errorcode_and_metatype.md +++ b/docs/api_cpp/source_en/errorcode_and_metatype.md @@ -1,6 +1,6 @@ # ErrorCode and MetaType - + ## 1.0.1 diff --git a/docs/api_cpp/source_en/lite.md b/docs/api_cpp/source_en/lite.md index aacd2963a46835e7e7888c66fdb781f71b82a6f9..79ca59a131aae15590cfbf3fb181ae26925a844c 100644 --- a/docs/api_cpp/source_en/lite.md +++ b/docs/api_cpp/source_en/lite.md @@ -1,16 +1,16 @@ # mindspore::lite - + ## Allocator -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> Allocator defines a memory pool for dynamic memory malloc and memory free. 
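The Allocator above is described only as a memory pool for dynamic memory malloc and free. As a rough standalone illustration of the idea — this is not MindSpore Lite's implementation, and the `SimplePool` class here is invented for the sketch:

```cpp
#include <cstddef>
#include <unordered_map>
#include <vector>

// Toy fixed-size-block pool: Free() recycles blocks into a per-size free list
// instead of returning them to the system, so repeated Malloc/Free cycles of
// the same size reuse memory. The pool owns every block it ever handed out
// and releases them all in its destructor.
class SimplePool {
 public:
  void *Malloc(std::size_t size) {
    std::vector<void *> &bucket = free_blocks_[size];
    if (!bucket.empty()) {
      void *p = bucket.back();  // reuse a previously freed block
      bucket.pop_back();
      return p;
    }
    void *p = ::operator new(size);  // first request of this size
    all_blocks_.push_back(p);
    sizes_[p] = size;
    return p;
  }

  void Free(void *p) { free_blocks_[sizes_[p]].push_back(p); }

  ~SimplePool() {
    for (void *p : all_blocks_) ::operator delete(p);
  }

 private:
  std::unordered_map<std::size_t, std::vector<void *>> free_blocks_;
  std::unordered_map<void *, std::size_t> sizes_;
  std::vector<void *> all_blocks_;
};
```

Recycling freed blocks avoids repeated system allocations between inference runs, which is the usual reason an inference runtime ships its own allocator.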
## Context -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> Context is defined for holding environment variables during runtime. @@ -40,7 +40,7 @@ Destructor of MindSpore Lite Context. vendor_name_ ``` -A **string** value. Describes the vendor information. This attribute is used to distinguish from different venders. +A **string** value. Describes the vendor information. This attribute is used to distinguish from different vendors. #### thread_num_ @@ -56,7 +56,7 @@ An **int** value. Defaults to **2**. Thread number config for thread pool. allocator ``` -A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#allocator). +A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#allocator). #### device_list_ @@ -64,19 +64,19 @@ A **pointer** pointing to [**Allocator**](https://www.mindspore.cn/doc/api_cpp/e device_list_ ``` -A [**DeviceContextVector**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontextvector) contains [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontext) variables. +A [**DeviceContextVector**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#devicecontextvector) contains [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#devicecontext) variables. -> Only CPU and GPU are supported now. If GPU device context is set, use GPU device first, otherwise use CPU device first. +> CPU, GPU and NPU are supported now. If GPU device context is set and GPU is supported in the current device, use GPU device first, otherwise use CPU device first. If NPU device context is set and GPU is supported in the current device, use NPU device first, otherwise use CPU device first. 
## PrimitiveC -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> Primitive is defined as prototype of operator. ## Model -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> Model defines model in MindSpore Lite for managing graph. @@ -130,7 +130,7 @@ Static method to create a Model pointer. ## CpuBindMode -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> An **enum** type. CpuBindMode is defined for holding arguments of the bind CPU strategy. @@ -162,7 +162,7 @@ No bind. ## DeviceType -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> An **enum** type. DeviceType is defined for holding user's preferred backend. @@ -190,11 +190,11 @@ GPU device type. DT_NPU = 2 ``` -NPU device type, not supported yet. +NPU device type. ## Version -\#include <[version.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/version.h)> +\#include <[version.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/version.h)> ```cpp std::string Version() @@ -232,13 +232,13 @@ Global method to get strings from MSTensor. 
## DeviceContextVector -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> -A **vector** contains [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#devicecontext) variable. +A **vector** contains [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#devicecontext) variable. ## DeviceContext -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> DeviceContext defines different device contexts. @@ -258,11 +258,11 @@ An **enum** type. Defaults to **DT_CPU**. DeviceType is defined for holding device_info_ ``` - An **union** value, contains [**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#cpudeviceinfo) and [**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#gpudeviceinfo) +An **union** value, contains [**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#cpudeviceinfo) , [**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#gpudeviceinfo) and [**NpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#npudeviceinfo) . ## DeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> An **union** value. DeviceInfo is defined for backend's configuration information. @@ -274,7 +274,7 @@ An **union** value. DeviceInfo is defined for backend's configuration informatio cpu_device_info_ ``` -[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#cpudeviceinfo) is defined for CPU's configuration information. 
+[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#cpudeviceinfo) is defined for CPU's configuration information. #### gpu_device_info_ @@ -282,17 +282,17 @@ cpu_device_info_ gpu_device_info_ ``` -[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#gpudeviceinfo) is defined for GPU's configuration information. +[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#gpudeviceinfo) is defined for GPU's configuration information. ```cpp npu_device_info_ ``` -[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#gpudeviceinfo) is defined for NPU's configuration information. +[**NpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#npudeviceinfo) is defined for NPU's configuration information. ## CpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> CpuDeviceInfo is defined for CPU's configuration information. @@ -314,11 +314,11 @@ A **bool** value. Defaults to **false**. This attribute enables to perform the G cpu_bind_mode_ ``` -A [**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/en/master/lite.html#cpubindmode) **enum** variable. Defaults to **MID_CPU**. +A [**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/lite.html#cpubindmode) **enum** variable. Defaults to **MID_CPU**. ## GpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> GpuDeviceInfo is defined for GPU's configuration information. @@ -336,7 +336,7 @@ A **bool** value. Defaults to **false**.
This attribute enables to perform the G ## NpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> NpuDeviceInfo is defined for NPU's configuration information. @@ -348,7 +348,7 @@ A **int** value. Defaults to **3**. This attribute is used to set the NPU freque ## TrainModel -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> Inherited from Model, TrainModel defines a class that allows to import and export the MindSpore trainable model. diff --git a/docs/api_cpp/source_en/lite_cpp_example.rst b/docs/api_cpp/source_en/lite_cpp_example.rst index 58b12e4c8761b3766fcd352332ed35339707778c..5a67ea115e776f6169f9d4f0454f4a4aeadecc55 100644 --- a/docs/api_cpp/source_en/lite_cpp_example.rst +++ b/docs/api_cpp/source_en/lite_cpp_example.rst @@ -4,5 +4,5 @@ Example .. toctree:: :maxdepth: 1 - Quick Start - High-level Usage \ No newline at end of file + Quick Start + High-level Usage \ No newline at end of file diff --git a/docs/api_cpp/source_en/mindspore.md b/docs/api_cpp/source_en/mindspore.md index 1106b7c5875ea5928191f55e0d3c821afa131630..4a56b18e331d665b25dff76ee2b42a518688dc02 100644 --- a/docs/api_cpp/source_en/mindspore.md +++ b/docs/api_cpp/source_en/mindspore.md @@ -1,10 +1,434 @@ # mindspore - + -\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)> +## Context -## KernelCallBack +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/context.h)> + +The Context class is used to store environment variables during execution, which has two derived classes: `GlobalContext` and `ModelContext`. 
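GlobalContext, documented below, behaves as a process-wide singleton accessed through static setters and getters. A minimal standalone sketch of that access pattern — illustrative only, with `ToyGlobalContext` as an invented stand-in rather than the MindSpore class:

```cpp
#include <cstdint>
#include <memory>
#include <string>

// One shared instance holds process-wide settings such as the target device
// and the device ID; all accessors are static and route through it.
class ToyGlobalContext {
 public:
  static std::shared_ptr<ToyGlobalContext> GetGlobalContext() {
    static std::shared_ptr<ToyGlobalContext> instance(new ToyGlobalContext);
    return instance;  // every caller sees the same object
  }
  static void SetGlobalDeviceTarget(const std::string &device_target) {
    GetGlobalContext()->device_target_ = device_target;
  }
  static std::string GetGlobalDeviceTarget() {
    return GetGlobalContext()->device_target_;
  }
  static void SetGlobalDeviceID(uint32_t device_id) {
    GetGlobalContext()->device_id_ = device_id;
  }
  static uint32_t GetGlobalDeviceID() {
    return GetGlobalContext()->device_id_;
  }

 private:
  ToyGlobalContext() = default;  // construction only via GetGlobalContext()
  std::string device_target_;
  uint32_t device_id_ = 0;
};
```

A private constructor plus a static accessor is what makes settings written anywhere in the process visible everywhere else.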
+ +## GlobalContext : Context + +GlobalContext is used to store global environment variables during execution. + +### Static Public Member Function + +#### GetGlobalContext + +```cpp +static std::shared_ptr<Context> GetGlobalContext(); +``` + +Obtains the single instance of GlobalContext. + +- Returns + + The single instance of GlobalContext. + +#### SetGlobalDeviceTarget + +```cpp +static void SetGlobalDeviceTarget(const std::string &device_target); +``` + +Configures the target device. + +- Parameters + + - `device_target`: target device to be configured, options are `kDeviceTypeAscend310`, `kDeviceTypeAscend910`. + +#### GetGlobalDeviceTarget + +```cpp +static std::string GetGlobalDeviceTarget(); +``` + +Obtains the configured target device. + +- Returns + + The configured target device. + +#### SetGlobalDeviceID + +```cpp +static void SetGlobalDeviceID(const uint32_t &device_id); +``` + +Configures the device ID. + +- Parameters + + - `device_id`: the device ID to configure. + +#### GetGlobalDeviceID + +```cpp +static uint32_t GetGlobalDeviceID(); +``` + +Obtains the configured device ID. + +- Returns + + The configured device ID. + +## ModelContext : Context + +### Static Public Member Function + +| Function | Notes | +| ----------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `void SetInsertOpConfigPath(const std::shared_ptr<Context> &context, const std::string &cfg_path)` | Set [AIPP](https://support.huaweicloud.com/intl/en-us/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html) configuration file path

- `context`: context to be set

 - `cfg_path`: [AIPP](https://support.huaweicloud.com/intl/en-us/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html) configuration file path | +| `std::string GetInsertOpConfigPath(const std::shared_ptr<Context> &context)` | - Returns: The set [AIPP](https://support.huaweicloud.com/intl/en-us/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html) configuration file path | +| `void SetInputFormat(const std::shared_ptr<Context> &context, const std::string &format)` | Set format of model inputs

- `context`: context to be set

 - `format`: Optional `"NCHW"`, `"NHWC"`, etc. | +| `std::string GetInputFormat(const std::shared_ptr<Context> &context)` | - Returns: The set format of model inputs | +| `void SetInputShape(const std::shared_ptr<Context> &context, const std::string &shape)` | Set shape of model inputs

- `context`: context to be set

 - `shape`: e.g., `"input_op_name1: 1,2,3,4;input_op_name2: 4,3,2,1"` | +| `std::string GetInputShape(const std::shared_ptr<Context> &context)` | - Returns: The set shape of model inputs | +| `void SetOutputType(const std::shared_ptr<Context> &context, enum DataType output_type)` | Set type of model outputs

- `context`: context to be set

 - `output_type`: Only uint8, fp16 and fp32 are supported | +| `enum DataType GetOutputType(const std::shared_ptr<Context> &context)` | - Returns: The set type of model outputs | +| `void SetPrecisionMode(const std::shared_ptr<Context> &context, const std::string &precision_mode)` | Set precision mode of model

- `context`: context to be set

 - `precision_mode`: Optional `"force_fp16"`, `"allow_fp32_to_fp16"`, `"must_keep_origin_dtype"` and `"allow_mix_precision"`, `"force_fp16"` is set as default | +| `std::string GetPrecisionMode(const std::shared_ptr<Context> &context)` | - Returns: The set precision mode | +| `void SetOpSelectImplMode(const std::shared_ptr<Context> &context, const std::string &op_select_impl_mode)` | Set op select implementation mode

- `context`: context to be set

 - `op_select_impl_mode`: Optional `"high_performance"` and `"high_precision"`, `"high_performance"` is set as default | +| `std::string GetOpSelectImplMode(const std::shared_ptr<Context> &context)` | - Returns: The set op select implementation mode | + +## Serialization + +\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/serialization.h)> + +The Serialization class is used to summarize methods for reading and writing model files. + +### Static Public Member Function + +#### LoadModel + +```cpp +static Graph LoadModel(const std::string &file, ModelType model_type); +``` + +Loads a model file from a path. + +- Parameters + + - `file`: the path of the model file. + - `model_type`: the type of the model file, options are `ModelType::kMindIR`, `ModelType::kOM`. + +- Returns + + An instance of `Graph`, used for storing graph data. + +```cpp +static Graph LoadModel(const void *model_data, size_t data_size, ModelType model_type); +``` + +Loads a model file from a memory buffer. + +- Parameters + + - `model_data`: a buffer filled by the model file. + - `data_size`: the size of the buffer. + - `model_type`: the type of the model file, options are `ModelType::kMindIR`, `ModelType::kOM`. + +- Returns + + An instance of `Graph`, used for storing graph data. + +## Model + +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/model.h)> + +The Model class is used to define a MindSpore model, facilitating computational graph management. + +### Constructor and Destructor + +```cpp +explicit Model(const GraphCell &graph, const std::shared_ptr<Context> &model_context); +explicit Model(const std::vector<Output> &network, const std::shared_ptr<Context> &model_context); +~Model(); +``` + +`GraphCell` is a derivative of `Cell`. `Cell` is not available currently. `GraphCell` can be constructed from `Graph`, for example, `Model model(GraphCell(graph))`. + +`Context` is used to store the [model options](#modelcontext-contextfor-mindspore) during execution.
+ +### Public Member Functions + +#### Build + +```cpp +Status Build(); +``` + +Builds a model so that it can run on a device. + +- Returns + + Status code. + +#### Predict + +```cpp +Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs); +``` + +Runs inference on the model. + +- Parameters + + - `inputs`: a `vector` where model inputs are arranged in sequence. + - `outputs`: output parameter, which is a pointer to a `vector`. The model outputs are filled into the container in sequence. + +- Returns + + Status code. + +#### GetInputs + +```cpp +std::vector<MSTensor> GetInputs(); +``` + +Obtains all input tensors of the model. + +- Returns + + A `vector` that includes all input tensors. + +#### GetOutputs + +```cpp +std::vector<MSTensor> GetOutputs(); +``` + +Obtains all output tensors of the model. + +- Returns + + A `vector` that includes all output tensors. + +#### Resize + +```cpp +Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims); +``` + +Resizes the shapes of inputs. + +- Parameters + + - `inputs`: a `vector` that includes all input tensors in order. + - `dims`: defines the new shapes of the inputs; it should be consistent with `inputs`. + +- Returns + + Status code. + +#### CheckModelSupport + +```cpp +static bool CheckModelSupport(const std::string &device_type, ModelType model_type); +``` + +Checks whether the type of device supports the type of model. + +- Parameters + + - `device_type`: device type, options are `Ascend310`, `Ascend910`. + - `model_type`: the type of the model file, options are `ModelType::kMindIR`, `ModelType::kOM`. + +- Returns + + A bool value that indicates whether the model type is supported on the device. + +## MSTensor + +\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/types.h)> + +The MSTensor class defines a tensor in MindSpore. 
+ +### Constructor and Destructor + +```cpp +MSTensor(); +explicit MSTensor(const std::shared_ptr<Impl> &impl); +MSTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void *data, size_t data_len); +~MSTensor(); +``` + +### Static Public Member Function + +#### CreateTensor + +```cpp +static MSTensor CreateTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, + const void *data, size_t data_len) noexcept; +``` + +Creates an `MSTensor` object, whose data must be copied before being accessed by `Model`. + +- Parameters + + - `name`: the name of the `MSTensor`. + - `type`: the data type of the `MSTensor`. + - `shape`: the shape of the `MSTensor`. + - `data`: the data pointer that points to allocated memory. + - `data_len`: the length of the memory, in bytes. + +- Returns + + An instance of `MSTensor`. + +#### CreateRefTensor + +```cpp +static MSTensor CreateRefTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, void *data, + size_t data_len) noexcept; +``` + +Creates an `MSTensor` object, whose data can be accessed by `Model` directly. + +- Parameters + + - `name`: the name of the `MSTensor`. + - `type`: the data type of the `MSTensor`. + - `shape`: the shape of the `MSTensor`. + - `data`: the data pointer that points to allocated memory. + - `data_len`: the length of the memory, in bytes. + +- Returns + + An instance of `MSTensor`. + +### Public Member Functions + +#### Name + +```cpp +const std::string &Name() const; +``` + +Obtains the name of the `MSTensor`. + +- Returns + + The name of the `MSTensor`. + +#### DataType + +```cpp +enum DataType DataType() const; +``` + +Obtains the data type of the `MSTensor`. + +- Returns + + The data type of the `MSTensor`. + +#### Shape + +```cpp +const std::vector<int64_t> &Shape() const; +``` + +Obtains the shape of the `MSTensor`. + +- Returns + + A `vector` that contains the shape of the `MSTensor`. 
+ +#### ElementNum + +```cpp +int64_t ElementNum() const; +``` + +Obtains the number of elements of the `MSTensor`. + +- Returns + + The number of elements of the `MSTensor`. + +#### Data + +```cpp +std::shared_ptr<const void> Data() const; +``` + +Obtains a shared pointer to a copy of the data of the `MSTensor`. + +- Returns + + A shared pointer to a copy of the data of the `MSTensor`. + +#### MutableData + +```cpp +void *MutableData(); +``` + +Obtains the pointer to the data of the `MSTensor`. + +- Returns + + The pointer to the data of the `MSTensor`. + +#### DataSize + +```cpp +size_t DataSize() const; +``` + +Obtains the length of the data of the `MSTensor`, in bytes. + +- Returns + + The length of the data of the `MSTensor`, in bytes. + +#### IsDevice + +```cpp +bool IsDevice() const; +``` + +Gets the boolean value that indicates whether the memory of the `MSTensor` is on a device. + +- Returns + + The boolean value that indicates whether the memory of the `MSTensor` is on a device. + +#### Clone + +```cpp +MSTensor Clone() const; +``` + +Gets a deep copy of the `MSTensor`. + +- Returns + + A deep copy of the `MSTensor`. + +#### operator==(std::nullptr_t) + +```cpp +bool operator==(std::nullptr_t) const; +``` + +Gets the boolean value that indicates whether the `MSTensor` is valid. + +- Returns + + The boolean value that indicates whether the `MSTensor` is valid. + +## CallBack + +\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/ms_tensor.h)> + +The CallBack struct defines the callback function in MindSpore Lite. + +### KernelCallBack ```cpp using KernelCallBack = std::function<bool (std::vector<tensor::MSTensor *> inputs, std::vector<tensor::MSTensor *> outputs, const CallBackParam &opInfo)> @@ -12,11 +436,11 @@ using KernelCallBack = std::function inputs A function wrapper. KernelCallBack defines the pointer for callback function. -## CallBackParam +### CallBackParam A **struct**. CallBackParam defines input arguments for callback function. 
-### Public Attributes +#### Public Attributes #### node_name diff --git a/docs/api_cpp/source_en/session.md b/docs/api_cpp/source_en/session.md index 9f69ccee5c2ea5119fd0bc1a906cd0314a34a2a9..2fa9059efa019c9a43002739e186ea306252010f 100644 --- a/docs/api_cpp/source_en/session.md +++ b/docs/api_cpp/source_en/session.md @@ -1,10 +1,10 @@ # mindspore::session - + ## LiteSession -\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)> +\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/lite_session.h)> LiteSession defines sessions in MindSpore Lite for compiling Model and forwarding inference. @@ -56,7 +56,7 @@ Compile MindSpore Lite model. - Returns - STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h). + STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h). #### GetInputs @@ -97,13 +97,13 @@ Run session with callback. - Parameters - - `before`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/master/mindspore.html#kernelcallback) function. Define a callback function to be called before running each node. + - `before`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#kernelcallback) function. Define a callback function to be called before running each node. - - `after`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/master/mindspore.html#kernelcallback) function. Define a callback function to be called after running each node. + - `after`: A [**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/en/r1.1/mindspore.html#kernelcallback) function. Define a callback function to be called after running each node. 
- Returns - STATUS as an error code of running graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h). + STATUS as an error code of running graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h). #### GetOutputsByNodeName @@ -178,7 +178,7 @@ Resize inputs shape. - Returns - STATUS as an error code of resize inputs, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h). + STATUS as an error code of resize inputs, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h). ### Static Public Member Functions @@ -218,7 +218,7 @@ Static method to create a LiteSession pointer. The returned LiteSession pointer ## TrainSession -\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)> +\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/lite_session.h)> Inherited from LiteSession, TrainSession defines the class that allows training the MindSpore model. @@ -318,7 +318,7 @@ Set model to train mode. - Returns - STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h) + STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h) #### IsTrain @@ -342,7 +342,7 @@ Set model to eval mode. - Returns - STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h). 
+ STATUS as an error code of compiling graph, STATUS is defined in [errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h). #### IsEval diff --git a/docs/api_cpp/source_en/tensor.md b/docs/api_cpp/source_en/tensor.md index 2e7913a146098b902a0f8891cd32fa4690dd1b79..f7cdbad66d4fa88c9390d46a638efb363dff798a 100644 --- a/docs/api_cpp/source_en/tensor.md +++ b/docs/api_cpp/source_en/tensor.md @@ -1,10 +1,10 @@ # mindspore::tensor - + ## MSTensor -\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)> +\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/ms_tensor.h)> MSTensor defined tensor in MindSpore Lite. @@ -40,7 +40,7 @@ virtual TypeId data_type() const Get data type of the MindSpore Lite MSTensor. -> TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types or kObjectTypeString in TypeId enum are applicable for MSTensor. +> TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/core/ir/dtype/type_id.h). Only number types or kObjectTypeString in TypeId enum are applicable for MSTensor. - Returns diff --git a/docs/api_cpp/source_en/vision.md b/docs/api_cpp/source_en/vision.md index 3b8cd99c98863f875280c5487f8337ae41e8521d..95d3be0440fd2e48f10403050c1ab904ef7d3eeb 100644 --- a/docs/api_cpp/source_en/vision.md +++ b/docs/api_cpp/source_en/vision.md @@ -1,24 +1,10 @@ # mindspore::dataset::vision - - -## HWC2CHW - -\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision.h)> - -```cpp -std::shared_ptr HWC2CHW() -``` - -Convert the channel of the input image from (H, W, C) to (C, H, W). - -- Returns - - Return a HwcToChw operator. 
+ ## CenterCrop -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr CenterCrop(std::vector size) @@ -28,7 +14,7 @@ Crop the center area of the input image to the given size. - Parameters - - `size`: The output size of the resized image. If the size is a single value, the image will be resized to this value with the same image aspect ratio. If the size has 2 values, it should be (height, width). + - `size`: The output size of the cropped image. If the size is a single value, a square crop of size (size, size) is returned. If the size has 2 values, it should be (height, width). - Returns @@ -36,7 +22,7 @@ Crop the center area of the input image to the given size. ## Crop -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Crop(std::vector coordinates, std::vector size) @@ -46,8 +32,8 @@ Crop an image based on the location and crop size. - Parameters - - `coordinates`: The starting location of the crop. - - `size`: Size of the cropped area. + - `coordinates`: Starting location of crop. + - `size`: The output size of the cropped image. If the size is a single value, a square crop of size (size, size) is returned. If the size has 2 values, it should be (height, width). - Returns @@ -55,7 +41,7 @@ Crop an image based on the location and crop size. 
## Decode -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Decode(bool rgb = true) @@ -73,7 +59,7 @@ Decode the input image. ## Normalize -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Normalize(std::vector mean, std::vector std) @@ -92,7 +78,7 @@ Normalize the input image with the given mean and standard deviation. ## Resize -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Resize(std::vector size, InterpolationMode interpolation = InterpolationMode::kLinear) @@ -108,3 +94,60 @@ Resize the input image to the given size. - Returns Return a Resize operator. + +## HWC2CHW + +\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision.h)> + +```cpp +std::shared_ptr HWC2CHW() +``` + +Convert the channel of the input image from (H, W, C) to (C, H, W). + +- Returns + + Return a HwcToChw operator. + +## Pad + +\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision.h)> + +```cpp +std::shared_ptr Pad(std::vector padding, std::vector fill_value = {0}, BorderType padding_mode = BorderType::kConstant) +``` + +Pad the image according to padding parameters. 
+ +- Parameters + + - `padding`: A vector representing the number of pixels to pad the image. If the vector has a single value, it pads all sides of the image with that value. If the vector has two values, it pads left and right with the first value, and pads top and bottom with the second value. If the vector has four values, it pads left, top, right, and bottom with those values respectively. + - `fill_value`: A vector representing the pixel intensity of the borders if the padding_mode is BorderType.kConstant. If 3 values are provided, they are used to fill the R, G, B channels respectively. + - `padding_mode`: The method of padding. Can be any of BorderType.kConstant, BorderType.kEdge, BorderType.kReflect, BorderType.kSymmetric. + - BorderType.kConstant: fills the border with constant values. + - BorderType.kEdge: pads with the last value on the edge. + - BorderType.kReflect: reflects the values on the edge, omitting the last value of the edge. + - BorderType.kSymmetric: reflects the values on the edge, repeating the last value of the edge. + +- Returns + + Return a Pad operator. + +## Rescale + +\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision.h)> + +```cpp +std::shared_ptr Rescale(float rescale, float shift) +``` + +Apply the `y = αx + β` transform to the pixels of the input image. + +- Parameters + + - `rescale`: parameter α. + - `shift`: parameter β. + +- Returns + + Return a Rescale operator. 
diff --git a/docs/api_cpp/source_zh_cn/api.md b/docs/api_cpp/source_zh_cn/api.md deleted file mode 100644 index 7c0af98ec52b620064f95b13b38d80d618a8fe5d..0000000000000000000000000000000000000000 --- a/docs/api_cpp/source_zh_cn/api.md +++ /dev/null @@ -1,390 +0,0 @@ -# mindspore::api - - - -## Context - -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/context.h)> - -Context类用于保存执行中的环境变量。 - -### 静态公有成员函数 - -#### Instance - -```cpp -static Context &Instance(); -``` - -获取MindSpore Context实例对象。 - -### 公有成员函数 - -#### GetDeviceTarget - -```cpp -const std::string &GetDeviceTarget() const; -``` - -获取当前目标Device类型。 - -- 返回值 - - 当前DeviceTarget的类型。 - -#### GetDeviceID - -```cpp -uint32_t GetDeviceID() const; -``` - -获取当前Device ID。 - -- 返回值 - - 当前Device ID。 - -#### SetDeviceTarget - -```cpp -Context &SetDeviceTarget(const std::string &device_target); -``` - -配置目标Device。 - -- 参数 - - - `device_target`: 将要配置的目标Device,可选有`kDeviceTypeAscend310`、`kDeviceTypeAscend910`。 - -- 返回值 - - 该MindSpore Context实例对象。 - -#### SetDeviceID - -```cpp -Context &SetDeviceID(uint32_t device_id); -``` - -获取当前Device ID。 - -- 参数 - - - `device_id`: 将要配置的Device ID。 - -- 返回值 - - 该MindSpore Context实例对象。 - -## Serialization - -\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/serialization.h)> - -Serialization类汇总了模型文件读写的方法。 - -### 静态公有成员函数 - -#### LoadModel - -- 参数 - - - `file`: 模型文件路径。 - - `model_type`:模型文件类型,可选有`ModelType::kMindIR`、`ModelType::kOM`。 - -- 返回值 - - 保存图数据的对象。 - -## Model - -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/model.h)> - -Model定义了MindSpore中的模型,便于计算图管理。 - -### 构造函数和析构函数 - -```cpp -Model(const GraphCell &graph); -~Model(); -``` - -`GraphCell`是`Cell`的一个派生,`Cell`目前没有开放使用。`GraphCell`可以由`Graph`构造,如`Model model(GraphCell(graph))`。 - -### 公有成员函数 - -#### Build - -```cpp -Status Build(const std::map &options); -``` - -将模型编译至可在Device上运行的状态。 - -- 参数 - - - `options`: 
模型编译选项,key为选项名,value为对应选项,支持的options有: - -| Key | Value | -| --- | --- | -| kModelOptionInsertOpCfgPath | [AIPP](https://support.huaweicloud.com/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html)配置文件路径 | -| kModelOptionInputFormat | 手动指定模型输入format,可选有`"NCHW"`,`"NHWC"`等 | -| kModelOptionInputShape | 手动指定模型输入shape,如`"input_op_name1: n1,c2,h3,w4;input_op_name2: n4,c3,h2,w1"` | -| kModelOptionOutputType | 手动指定模型输出type,如`"FP16"`,`"UINT8"`等,默认为`"FP32"` | -| kModelOptionPrecisionMode | 模型精度模式,可选有`"force_fp16"`,`"allow_fp32_to_fp16"`,`"must_keep_origin_dtype"`或者`"allow_mix_precision"`,默认为`"force_fp16"` | -| kModelOptionOpSelectImplMode | 算子选择模式,可选有`"high_performance"`和`"high_precision"`,默认为`"high_performance"` | - -- 返回值 - - 状态码。 - -#### Predict - -```cpp -Status Predict(const std::vector &inputs, std::vector *outputs); -``` - -推理模型。 - -- 参数 - - - `inputs`: 模型输入按顺序排列的`vector`。 - - `outputs`: 输出参数,按顺序排列的`vector`的指针,模型输出会按顺序填入该容器。 - -- 返回值 - - 状态码。 - -#### GetInputsInfo - -```cpp -Status GetInputsInfo(std::vector *names, std::vector> *shapes, std::vector *data_types, std::vector *mem_sizes) const; -``` - -获取模型输入信息。 - -- 参数 - - - `names`: 可选输出参数,模型输入按顺序排列的`vector`的指针,模型输入的name会按顺序填入该容器,传入`nullptr`则表示不获取该属性。 - - `shapes`: 可选输出参数,模型输入按顺序排列的`vector`的指针,模型输入的shape会按顺序填入该容器,传入`nullptr`则表示不获取该属性。 - - `data_types`: 可选输出参数,模型输入按顺序排列的`vector`的指针,模型输入的数据类型会按顺序填入该容器,传入`nullptr`则表示不获取该属性。 - - `mem_sizes`: 可选输出参数,模型输入按顺序排列的`vector`的指针,模型输入的以字节为单位的内存长度会按顺序填入该容器,传入`nullptr`则表示不获取该属性。 - -- 返回值 - - 状态码。 - -#### GetOutputsInfo - -```cpp -Status GetOutputsInfo(std::vector *names, std::vector> *shapes, std::vector *data_types, std::vector *mem_sizes) const; -``` - -获取模型输入信息。 - -- 参数 - - - `names`: 可选输出参数,模型输出按顺序排列的`vector`的指针,模型输出的name会按顺序填入该容器,传入`nullptr`则表示不获取该属性。 - - `shapes`: 可选输出参数,模型输出按顺序排列的`vector`的指针,模型输出的shape会按顺序填入该容器,传入`nullptr`则表示不获取该属性。 - - `data_types`: 可选输出参数,模型输出按顺序排列的`vector`的指针,模型输出的数据类型会按顺序填入该容器,传入`nullptr`则表示不获取该属性。 - - `mem_sizes`: 
可选输出参数,模型输出按顺序排列的`vector`的指针,模型输出的以字节为单位的内存长度会按顺序填入该容器,传入`nullptr`则表示不获取该属性。 - -- 返回值 - - 状态码。 - -## Tensor - -\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/master/include/api/types.h)> - -### 构造函数和析构函数 - -```cpp -Tensor(); -Tensor(const std::string &name, DataType type, const std::vector &shape, const void *data, size_t data_len); -~Tensor(); -``` - -### 静态公有成员函数 - -#### GetTypeSize - -```cpp -static int GetTypeSize(api::DataType type); -``` - -获取数据类型的内存长度,以字节为单位。 - -- 参数 - - - `type`: 数据类型。 - -- 返回值 - - 内存长度,单位是字节。 - -### 公有成员函数 - -#### Name - -```cpp -const std::string &Name() const; -``` - -获取Tensor的名字。 - -- 返回值 - - Tensor的名字。 - -#### DataType - -```cpp -api::DataType DataType() const; -``` - -获取Tensor的数据类型。 - -- 返回值 - - Tensor的数据类型。 - -#### Shape - -```cpp -const std::vector &Shape() const; -``` - -获取Tensor的Shape。 - -- 返回值 - - Tensor的Shape。 - -#### SetName - -```cpp -void SetName(const std::string &name); -``` - -设置Tensor的名字。 - -- 参数 - - - `name`: 将要设置的name。 - -#### SetDataType - -```cpp -void SetDataType(api::DataType type); -``` - -设置Tensor的数据类型。 - -- 参数 - - - `type`: 将要设置的type。 - -#### SetShape - -```cpp -void SetShape(const std::vector &shape); -``` - -设置Tensor的Shape。 - -- 参数 - - - `shape`: 将要设置的shape。 - -#### Data - -```cpp -const void *Data() const; -``` - -获取Tensor中的数据的const指针。 - -- 返回值 - - 指向Tensor中的数据的const指针。 - -#### MutableData - -```cpp -void *MutableData(); -``` - -获取Tensor中的数据的指针。 - -- 返回值 - - 指向Tensor中的数据的指针。 - -#### DataSize - -```cpp -size_t DataSize() const; -``` - -获取Tensor中的数据的以字节为单位的内存长度。 - -- 返回值 - - Tensor中的数据的以字节为单位的内存长度。 - -#### ResizeData - -```cpp -bool ResizeData(size_t data_len); -``` - -重新调整Tensor的内存大小。 - -- 参数 - - - `data_len`: 调整后的内存字节数。 - -- 返回值 - - bool值表示是否成功。 - -#### SetData - -```cpp -bool SetData(const void *data, size_t data_len); -``` - -重新调整Tensor的内存数据。 - -- 参数 - - - `data`: 源数据内存地址。 - - `data_len`: 源数据内存长度。 - -- 返回值 - - bool值表示是否成功。 - -#### ElementNum - -```cpp -int64_t ElementNum() const; -``` - 
-获取Tensor中元素的个数。 - -- 返回值 - - Tensor中的元素个数 - -#### Clone - -```cpp -Tensor Clone() const; -``` - -拷贝一份自身的副本。 - -- 返回值 - - 深拷贝的副本。 diff --git a/docs/api_cpp/source_zh_cn/class_list.md b/docs/api_cpp/source_zh_cn/class_list.md index 2934485fc023b42c63b20fca44a70cfe310e96c3..7b64349c7b0aa845092a6ccb8cb0ebd4f309896f 100644 --- a/docs/api_cpp/source_zh_cn/class_list.md +++ b/docs/api_cpp/source_zh_cn/class_list.md @@ -1,28 +1,27 @@ # 类列表 - + MindSpore Lite中的类定义及其所属命名空间和描述: -| 命名空间 | 类 | 描述 | -| --- | --- | --- | -| mindspore | [KernelCallBack](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/mindspore.html#kernelcallback) | KernelCallBack定义了指向回调函数的指针。 | -| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#allocator) | Allocator定义了一个内存池,用于动态地分配和释放内存。 | -| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#context) | Context用于保存执行期间的环境变量。 | -| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#modelimpl) | ModelImpl定义了MindSpore Lite中的Model的实现类。 | -| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#primitivec) | PrimitiveC定义为算子的原型。 | -| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#model) | Model定义了MindSpore Lite中的模型,便于计算图管理。 | -| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#modelbuilder) | ModelBuilder定义了MindSpore Lite中的模型构建器。 | -| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/session.html#litesession) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 | -| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/tensor.html#mstensor) | MSTensor定义了MindSpore Lite中的张量。 | -| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/dataset.html#litemat) |LiteMat是一个处理图像的类。 | +| 命名空间 | 类 | 描述 | +| ------------------ | 
----------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------ | +| mindspore | [KernelCallBack](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#kernelcallback) | KernelCallBack定义了指向回调函数的指针。 | +| mindspore::lite | [Allocator](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#allocator) | Allocator定义了一个内存池,用于动态地分配和释放内存。 | +| mindspore::lite | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#context) | Context用于保存执行期间的环境变量。 | +| mindspore::lite | [ModelImpl](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#modelimpl) | ModelImpl定义了MindSpore Lite中的Model的实现类。 | +| mindspore::lite | [PrimitiveC](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#primitivec) | PrimitiveC定义为算子的原型。 | +| mindspore::lite | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#model) | Model定义了MindSpore Lite中的模型,便于计算图管理。 | +| mindspore::lite | [ModelBuilder](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#modelbuilder) | ModelBuilder定义了MindSpore Lite中的模型构建器。 | +| mindspore::session | [LiteSession](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/session.html#litesession) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 | +| mindspore::tensor | [MSTensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/tensor.html#mstensor) | MSTensor定义了MindSpore Lite中的张量。 | +| mindspore::dataset | [LiteMat](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/dataset.html#litemat) | LiteMat是一个处理图像的类。 | MindSpore中的类定义及其所属命名空间和描述: -| 命名空间 | 类 | 描述 | -| --- | --- | --- | -| mindspore::api | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#context) | Context用于保存执行期间的环境变量。 | -| mindspore::api | [Serialization](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#serialization) | Serialization汇总了模型文件读写的方法。 | -| mindspore::api | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#model) | 
Model定义了MindSpore中的模型,便于计算图管理。 | -| mindspore::api | [Tensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#tensor) | Tensor定义了MindSpore中的张量。 | -| mindspore::api | [Buffer](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/api.html#buffer) | Buffer管理了一段内存空间。 | +| 命名空间 | 类 | 描述 | +| --------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------- | +| mindspore | [Context](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#context) | Context用于保存执行期间的环境变量。 | +| mindspore | [Serialization](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#serialization) | Serialization汇总了模型文件读写的方法。 | +| mindspore | [Model](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#model) | Model定义了MindSpore中的模型,便于计算图管理。 | +| mindspore | [MSTensor](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#mstensor) | Tensor定义了MindSpore中的张量。 | diff --git a/docs/api_cpp/source_zh_cn/conf.py b/docs/api_cpp/source_zh_cn/conf.py index 625e5acd3bde751f170596e75261be4bb2bde60f..2d0cee29dc19d12263e0c6a46bb969ee29f0268f 100644 --- a/docs/api_cpp/source_zh_cn/conf.py +++ b/docs/api_cpp/source_zh_cn/conf.py @@ -23,7 +23,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_cpp/source_zh_cn/dataset.md b/docs/api_cpp/source_zh_cn/dataset.md index 7b1dc332b87cb7728b29e88c921bcdd35e52b40d..cd68a12c755a73d20f832cf3e4494ce86fe55beb 100644 --- a/docs/api_cpp/source_zh_cn/dataset.md +++ b/docs/api_cpp/source_zh_cn/dataset.md @@ -1,10 +1,42 @@ # mindspore::dataset - + + +## Execute + +\#include <[execute.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/execute.h)> + +```cpp +Execute(std::shared_ptr op); + +Execute(std::vector> ops); +``` + 
+Transform(图像、文本)变换算子Eager模式执行类。 + +- 参数 + + - `op`: 指定单个使用的变换算子。 + - `ops`: 指定一个列表,包含多个使用的变换算子。 + +```cpp +Status operator()(const mindspore::MSTensor &input, mindspore::MSTensor *output); +``` + +Eager模式执行接口。 + +- 参数 + + - `input`: 待变换的Tensor张量。 + - `output`: 变换后的Tensor张量。 + +- 返回值 + + 返回一个状态码指示执行变换是否成功。 ## ResizeBilinear -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h) @@ -25,7 +57,7 @@ bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h) ## InitFromPixel -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m) @@ -48,7 +80,7 @@ bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType d ## ConvertTo -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0) @@ -68,7 +100,7 @@ bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0) ## Crop -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include 
<[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h) @@ -91,7 +123,7 @@ bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h) ## SubStractMeanNormalize -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector &mean, const std::vector &std) @@ -112,7 +144,7 @@ bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector< ## Pad -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int right, PaddBorderType pad_type, uint8_t fill_b_or_gray, uint8_t fill_g, uint8_t fill_r) @@ -139,7 +171,7 @@ bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int ri ## ExtractChannel -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool ExtractChannel(const LiteMat &src, LiteMat &dst, int col) @@ -158,7 +190,7 @@ bool ExtractChannel(const LiteMat &src, LiteMat &dst, int col) ## Split -\#include 
<[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Split(const LiteMat &src, std::vector &mv) @@ -177,7 +209,7 @@ bool Split(const LiteMat &src, std::vector &mv) ## Merge -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp bool Merge(const std::vector &mv, LiteMat &dst) @@ -196,7 +228,7 @@ bool Merge(const std::vector &mv, LiteMat &dst) ## Affine -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C1 borderValue) @@ -216,7 +248,7 @@ void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsi void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsize, UINT8_C3 borderValue) ``` -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> 对3通道图像应用仿射变换。 @@ -230,7 +262,7 @@ void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsi ## GetDefaultBoxes -\#include 
<[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp std::vector> GetDefaultBoxes(BoxesConfig config) @@ -248,7 +280,7 @@ std::vector> GetDefaultBoxes(BoxesConfig config) ## ConvertBoxes -\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> +\#include <[image_process.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/image_process.h)> ```cpp void ConvertBoxes(std::vector> &boxes, std::vector> &default_boxes, BoxesConfig config) @@ -264,7 +296,7 @@ void ConvertBoxes(std::vector> &boxes, std::vector ApplyNms(std::vector> &all_boxes, std::vector &all_scores, float thres, int max_boxes) @@ -285,7 +317,7 @@ std::vector ApplyNms(std::vector> &all_boxes, std::vecto ## LiteMat -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> LiteMat是一个处理图像的类。 @@ -431,7 +463,7 @@ ref_count_ ## Subtract -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Subtract(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) @@ -451,7 +483,7 @@ bool Subtract(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) ## Divide -\#include 
<[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Divide(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) @@ -471,7 +503,7 @@ bool Divide(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) ## Multiply -\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> +\#include <[lite_mat.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/kernels/image/lite_cv/lite_mat.h)> ```cpp bool Multiply(const LiteMat &src_a, const LiteMat &src_b, LiteMat *dst) diff --git a/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md b/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md index a116e2dbdeae97584e914b97863d953f0510c895..06ed2acd032ce88e61fb148f775ea669116ded8e 100644 --- a/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md +++ b/docs/api_cpp/source_zh_cn/errorcode_and_metatype.md @@ -1,6 +1,6 @@ # 错误码及元类型 - + ## 1.0.1 diff --git a/docs/api_cpp/source_zh_cn/index.rst b/docs/api_cpp/source_zh_cn/index.rst index b5f76d3c78fd947026a99ea4a9ba8afd91355ed8..779317bee1f0397ac1c5a78905b31b236f33d4f8 100644 --- a/docs/api_cpp/source_zh_cn/index.rst +++ b/docs/api_cpp/source_zh_cn/index.rst @@ -12,7 +12,6 @@ MindSpore C++ API class_list mindspore - api dataset vision lite diff --git a/docs/api_cpp/source_zh_cn/lite.md b/docs/api_cpp/source_zh_cn/lite.md index f42570091cf8626a63db230113f90c87047939bb..5e6a4710fc1a6ddb6c10167a5f02e7867c3baa79 100644 --- a/docs/api_cpp/source_zh_cn/lite.md +++ b/docs/api_cpp/source_zh_cn/lite.md @@ -1,16 +1,16 @@ # mindspore::lite - + ## Allocator -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include 
<[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> Allocator类定义了一个内存池,用于动态地分配和释放内存。 ## Context -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> Context类用于保存执行中的环境变量。 @@ -56,7 +56,7 @@ thread_num_ allocator ``` -**pointer**类型,指向内存分配器 [**Allocator**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#allocator) 的指针。 +**pointer**类型,指向内存分配器 [**Allocator**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#allocator) 的指针。 #### device_list_ @@ -64,19 +64,19 @@ allocator device_list_ ``` -[**DeviceContextVector**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#devicecontextvector) 类型, 元素为 [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#devicecontext) 的**vector**. +[**DeviceContextVector**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#devicecontextvector) 类型, 元素为 [**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#devicecontext) 的**vector**. 
-> 现在只支持CPU和GPU。如果设置了GPU设备环境变量,优先使用GPU设备,否则优先使用CPU设备。 +> 现在支持CPU、GPU和NPU。如果设置了GPU设备环境变量并且设备支持GPU,优先使用GPU设备,否则优先使用CPU设备。如果设置了NPU设备环境变量并且设备支持NPU,优先使用NPU设备,否则优先使用CPU设备。 ## PrimitiveC -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> PrimitiveC定义为算子的原型。 ## Model -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> Model定义了MindSpore Lite中的模型,便于计算图管理。 @@ -130,7 +130,7 @@ static Model *Import(const char *model_buf, size_t size) ## CpuBindMode -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> 枚举类型,设置cpu绑定策略。 @@ -162,7 +162,7 @@ NO_BIND = 0 ## DeviceType -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> 枚举类型,设置设备类型。 @@ -190,11 +190,11 @@ DT_GPU = 1 DT_NPU = 2 ``` -设备为NPU,暂不支持。 +设备为NPU。 ## Version -\#include <[version.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/version.h)> +\#include <[version.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/version.h)> ```cpp std::string Version() @@ -232,13 +232,13 @@ std::vector MSTensorToStrings(const tensor::MSTensor *tensor) ## DeviceContextVector -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> 
-元素为[**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#devicecontext) 的**vector**。 +元素为[**DeviceContext**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#devicecontext) 的**vector**。 ## DeviceContext -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> DeviceContext类定义不同硬件设备的环境信息。 @@ -250,7 +250,7 @@ DeviceContext类定义不同硬件设备的环境信息。 device_type ``` -[**DeviceType**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#devicetype) 枚举类型。默认为**DT_CPU**,标明设备信息。 +[**DeviceType**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#devicetype) 枚举类型。默认为**DT_CPU**,标明设备信息。 #### device_info_ @@ -258,11 +258,11 @@ device_type device_info_ ``` -**union**类型,包含[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#cpudeviceinfo) 和[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#gpudeviceinfo) 。 +**union**类型,包含 [**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#cpudeviceinfo) 、 [**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#gpudeviceinfo) 和 [**NpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#npudeviceinfo) 。 ## DeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> **union**类型,设置不同硬件的环境变量。 @@ -274,7 +274,7 @@ device_info_ cpu_device_info_ ``` -[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#cpudeviceinfo) 类型,配置CPU的环境变量。 +[**CpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#cpudeviceinfo) 类型,配置CPU的环境变量。 #### gpu_device_info_ @@ -282,7 +282,7 @@ cpu_device_info_ gpu_device_info_ ``` 
-[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#gpudeviceinfo) 类型,配置GPU的环境变量。 +[**GpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#gpudeviceinfo) 类型,配置GPU的环境变量。 #### npu_device_info_ @@ -290,11 +290,11 @@ gpu_device_info_ npu_device_info_ ``` -[**NpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#npudeviceinfo) 类型,配置NPU的环境变量。 +[**NpuDeviceInfo**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#npudeviceinfo) 类型,配置NPU的环境变量。 ## CpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> CpuDeviceInfo类,配置CPU的环境变量。 @@ -316,11 +316,11 @@ enable_float16_ cpu_bind_mode_ ``` -[**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/lite.html#cpubindmode) 枚举类型,默认为**MID_CPU**。 +[**CpuBindMode**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/lite.html#cpubindmode) 枚举类型,默认为**MID_CPU**。 ## GpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> GpuDeviceInfo类,用来配置GPU的环境变量。 @@ -338,7 +338,7 @@ enable_float16_ ## NpuDeviceInfo -\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/context.h)> +\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/context.h)> NpuDeviceInfo类,用来配置NPU的环境变量。 @@ -354,7 +354,7 @@ frequency_ ## TrainModel -\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/model.h)> +\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/model.h)> 继承于结构体Model,用于导入或导出训练模型。 diff --git a/docs/api_cpp/source_zh_cn/lite_cpp_example.rst 
b/docs/api_cpp/source_zh_cn/lite_cpp_example.rst
index bcc092dcf08208782beb9373e1a4e39e087134e1..c5ea73e4aa3e68a8446d08e8be5b2a08e6f408e4 100644
--- a/docs/api_cpp/source_zh_cn/lite_cpp_example.rst
+++ b/docs/api_cpp/source_zh_cn/lite_cpp_example.rst
@@ -4,5 +4,5 @@
 .. toctree::
    :maxdepth: 1

-   快速入门
-   高阶用法
\ No newline at end of file
+   快速入门
+   高阶用法
\ No newline at end of file
diff --git a/docs/api_cpp/source_zh_cn/mindspore.md b/docs/api_cpp/source_zh_cn/mindspore.md
index f6195d8368dad856bb40c6c9e5e8f38646716c09..e0772012a4569af615f28eb48b5f3f3f7b5432d0 100644
--- a/docs/api_cpp/source_zh_cn/mindspore.md
+++ b/docs/api_cpp/source_zh_cn/mindspore.md
@@ -1,10 +1,430 @@
 # mindspore

- 
+ 

-\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)>
+## Context

-## KernelCallBack
+
+\#include <[context.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/context.h)>
+
+Context类用于保存执行中的环境变量。包含GlobalContext与ModelContext两个派生类。
+
+## GlobalContext : Context
+
+GlobalContext定义了执行时的全局变量。
+
+### 静态公有成员函数
+
+#### GetGlobalContext
+
+```cpp
+static std::shared_ptr<Context> GetGlobalContext();
+```
+
+返回GlobalContext单例。
+
+- 返回值
+
+  指向GlobalContext单例的智能指针。
+
+#### SetGlobalDeviceTarget
+
+```cpp
+static void SetGlobalDeviceTarget(const std::string &device_target);
+```
+
+配置目标Device。
+
+- 参数
+  `device_target`: 将要配置的目标Device,可选有`kDeviceTypeAscend310`、`kDeviceTypeAscend910`。
+
+#### GetGlobalDeviceTarget
+
+```cpp
+static std::string GetGlobalDeviceTarget();
+```
+
+获取已配置的Device。
+
+- 返回值
+  已配置的目标Device。
+
+#### SetGlobalDeviceID
+
+```cpp
+static void SetGlobalDeviceID(const uint32_t &device_id);
+```
+
+配置Device ID。
+
+- 参数
+  `device_id`: 将要配置的Device ID。
+
+#### GetGlobalDeviceID
+
+```cpp
+static uint32_t GetGlobalDeviceID();
+```
+
+获取已配置的Device ID。
+
+- 返回值
+  已配置的Device ID。
+
+## ModelContext : Context
+
+### 静态公有成员函数
+
+| 函数 | 说明 |
+| 
----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `void SetInsertOpConfigPath(const std::shared_ptr &context, const std::string &cfg_path)` | 模型插入[AIPP](https://support.huaweicloud.com/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html)算子

- `context`: 将要修改的context

- `cfg_path`: [AIPP](https://support.huaweicloud.com/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html)配置文件路径 | +| `std::string GetInsertOpConfigPath(const std::shared_ptr &context)` | - 返回值: 已配置的[AIPP](https://support.huaweicloud.com/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html) | +| `void SetInputFormat(const std::shared_ptr &context, const std::string &format)` | 指定模型输入format

- `context`: 将要修改的context

- `format`: 可选有`"NCHW"`,`"NHWC"`等 | +| `std::string GetInputFormat(const std::shared_ptr &context)` | - 返回值: 已配置模型输入format | +| `void SetInputShape(const std::shared_ptr &context, const std::string &shape)` | 指定模型输入shape

- `context`: 将要修改的context

- `shape`: 如`"input_op_name1:1,2,3,4;input_op_name2:4,3,2,1"` | +| `std::string GetInputShape(const std::shared_ptr &context)` | - 返回值: 已配置模型输入shape | +| `void SetOutputType(const std::shared_ptr &context, enum DataType output_type)` | 指定模型输出type

- `context`: 将要修改的context

- `output_type`: 仅支持uint8、fp16和fp32 | +| `enum DataType GetOutputType(const std::shared_ptr &context)` | - 返回值: 已配置模型输出type | +| `void SetPrecisionMode(const std::shared_ptr &context, const std::string &precision_mode)` | 配置模型精度模式

- `context`: 将要修改的context

- `precision_mode`: 可选有`"force_fp16"`,`"allow_fp32_to_fp16"`,`"must_keep_origin_dtype"`或者`"allow_mix_precision"`,默认为`"force_fp16"` | +| `std::string GetPrecisionMode(const std::shared_ptr &context)` | - 返回值: 已配置模型精度模式 | +| `void SetOpSelectImplMode(const std::shared_ptr &context, const std::string &op_select_impl_mode)` | 配置算子选择模式

- `context`: 将要修改的context

- `op_select_impl_mode`: 可选有`"high_performance"`和`"high_precision"`,默认为`"high_performance"` |
+| `std::string GetOpSelectImplMode(const std::shared_ptr<Context> &context)` | - 返回值: 已配置算子选择模式 |
+
+## Serialization
+
+\#include <[serialization.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/serialization.h)>
+
+Serialization类汇总了模型文件读写的方法。
+
+### 静态公有成员函数
+
+#### LoadModel
+
+```cpp
+static Graph LoadModel(const std::string &file, ModelType model_type);
+```
+
+从文件加载模型。MindSpore Lite未提供此功能。
+
+- 参数
+
+  - `file`:模型文件路径。
+  - `model_type`:模型文件类型,可选有`ModelType::kMindIR`,`ModelType::kOM`。
+
+- 返回值
+
+  保存图数据的`Graph`实例。
+
+```cpp
+static Graph LoadModel(const void *model_data, size_t data_size, ModelType model_type);
+```
+
+从内存缓冲区加载模型。
+
+- 参数
+
+  - `model_data`:已读取模型文件的缓存区。
+  - `data_size`:缓存区大小。
+  - `model_type`:模型文件类型,可选有`ModelType::kMindIR`、`ModelType::kOM`。
+
+- 返回值
+
+  保存图数据的`Graph`实例。
+
+## Model
+
+\#include <[model.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/model.h)>
+
+Model定义了MindSpore中的模型,便于计算图管理。
+
+### 构造函数和析构函数
+
+```cpp
+explicit Model(const GraphCell &graph, const std::shared_ptr<Context> &model_context);
+explicit Model(const std::vector<Output> &network, const std::shared_ptr<Context> &model_context);
+~Model();
+```
+
+`GraphCell`是`Cell`的一个派生,`Cell`目前没有开放使用。`GraphCell`可以由`Graph`构造,如`Model model(GraphCell(graph))`。
+
+`Context`表示运行时的[模型配置](#modelcontext-contextfor-mindspore)。
+
+### 公有成员函数
+
+#### Build
+
+```cpp
+Status Build();
+```
+
+将模型编译至可在Device上运行的状态。
+
+- 返回值
+
+  状态码类`Status`对象,可以使用其公有函数`StatusCode`或`ToString`函数来获取具体错误码及错误信息。
+
+#### Predict
+
+```cpp
+Status Predict(const std::vector<MSTensor> &inputs, std::vector<MSTensor> *outputs);
+```
+
+执行推理。
+
+- 参数
+
+  - `inputs`: 模型输入按顺序排列的`vector`。
+  - `outputs`: 输出参数,按顺序排列的`vector`的指针,模型输出会按顺序填入该容器。
+
+- 返回值
+
+  状态码类`Status`对象,可以使用其公有函数`StatusCode`或`ToString`函数来获取具体错误码及错误信息。
+
+#### GetInputs
+
+```cpp
+std::vector<MSTensor> GetInputs();
+```
+
+获取模型所有输入张量。
+
+- 返回值
+
+  包含模型所有输入张量的容器类型变量。
+
+#### GetOutputs
+
+```cpp
+std::vector<MSTensor> 
GetOutputs();
+```
+
+获取模型所有输出张量。
+
+- 返回值
+
+  包含模型所有输出张量的容器类型变量。
+
+#### Resize
+
+```cpp
+Status Resize(const std::vector<MSTensor> &inputs, const std::vector<std::vector<int64_t>> &dims);
+```
+
+调整已编译模型的输入形状。
+
+- 参数
+
+  - `inputs`: 模型输入按顺序排列的`vector`。
+  - `dims`: 输入形状,按输入顺序排列的由形状组成的`vector`,模型会按顺序依次调整张量形状。
+
+- 返回值
+
+  状态码类`Status`对象,可以使用其公有函数`StatusCode`或`ToString`函数来获取具体错误码及错误信息。
+
+#### CheckModelSupport
+
+```cpp
+static bool CheckModelSupport(const std::string &device_type, ModelType model_type);
+```
+
+检查设备是否支持该模型。
+
+- 参数
+
+  - `device_type`: 设备名称,例如`Ascend310`。
+  - `model_type`: 模型类型,例如`MindIR`。
+
+- 返回值
+
+  布尔值,表示设备是否支持该模型。
+
+## MSTensor
+
+\#include <[types.h](https://gitee.com/mindspore/mindspore/blob/r1.1/include/api/types.h)>
+
+`MSTensor`定义了MindSpore中的Tensor。
+
+### 构造函数和析构函数
+
+```cpp
+MSTensor();
+explicit MSTensor(const std::shared_ptr<Impl> &impl);
+MSTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, const void *data, size_t data_len);
+~MSTensor();
+```
+
+### 静态公有成员函数
+
+#### CreateTensor
+
+```cpp
+static MSTensor CreateTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape,
+                             const void *data, size_t data_len) noexcept;
+```
+
+创建一个`MSTensor`对象,其数据需复制后才能由`Model`访问。
+
+- 参数
+
+  - `name`: 名称。
+  - `type`:数据类型。
+  - `shape`:形状。
+  - `data`:数据指针,指向一段已开辟的内存。
+  - `data_len`:数据长度,以字节为单位。
+
+- 返回值
+
+  `MSTensor`实例。
+
+#### CreateRefTensor
+
+```cpp
+static MSTensor CreateRefTensor(const std::string &name, DataType type, const std::vector<int64_t> &shape, void *data,
+                                size_t data_len) noexcept;
+```
+
+创建一个`MSTensor`对象,其数据可以直接由`Model`访问。
+
+- 参数
+
+  - `name`: 名称。
+  - `type`:数据类型。
+  - `shape`:形状。
+  - `data`:数据指针,指向一段已开辟的内存。
+  - `data_len`:数据长度,以字节为单位。
+
+- 返回值
+
+  `MSTensor`实例。
+
+### 公有成员函数
+
+#### Name
+
+```cpp
+const std::string &Name() const;
+```
+
+获取`MSTensor`的名字。
+
+- 返回值
+
+  `MSTensor`的名字。
+
+#### DataType
+
+```cpp
+enum DataType DataType() const;
+```
+
+获取`MSTensor`的数据类型。
+
+- 返回值
+
+  `MSTensor`的数据类型。
+
+#### Shape
+
+```cpp
+const std::vector<int64_t> &Shape() const; 
+```
+
+获取`MSTensor`的Shape。
+
+- 返回值
+
+  `MSTensor`的Shape。
+
+#### ElementNum
+
+```cpp
+int64_t ElementNum() const;
+```
+
+获取`MSTensor`的元素个数。
+
+- 返回值
+
+  `MSTensor`的元素个数。
+
+#### Data
+
+```cpp
+std::shared_ptr<const void> Data() const;
+```
+
+获取指向`MSTensor`中的数据拷贝的智能指针。
+
+- 返回值
+
+  指向`MSTensor`中的数据拷贝的智能指针。
+
+#### MutableData
+
+```cpp
+void *MutableData();
+```
+
+获取`MSTensor`中的数据的指针。
+
+- 返回值
+
+  指向`MSTensor`中的数据的指针。
+
+#### DataSize
+
+```cpp
+size_t DataSize() const;
+```
+
+获取`MSTensor`中的数据的以字节为单位的内存长度。
+
+- 返回值
+
+  `MSTensor`中的数据的以字节为单位的内存长度。
+
+#### IsDevice
+
+```cpp
+bool IsDevice() const;
+```
+
+判断`MSTensor`中的数据是否位于设备侧。
+
+- 返回值
+
+  `MSTensor`中的数据是否位于设备侧。
+
+#### Clone
+
+```cpp
+MSTensor Clone() const;
+```
+
+拷贝一份自身的副本。
+
+- 返回值
+
+  深拷贝的副本。
+
+#### operator==(std::nullptr_t)
+
+```cpp
+bool operator==(std::nullptr_t) const;
+```
+
+判断`MSTensor`是否合法。
+
+- 返回值
+
+  `MSTensor`是否合法。
+
+## CallBack
+
+\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/ms_tensor.h)>
+
+CallBack定义了MindSpore Lite中的回调函数。
+
+### KernelCallBack

```cpp
using KernelCallBack = std::function<bool(std::vector<tensor::MSTensor *> inputs, std::vector<tensor::MSTensor *> outputs, const CallBackParam &opInfo)>
@@ -12,11 +12,11 @@ using KernelCallBack = std::function inputs

一个函数包装器。KernelCallBack 定义了指向回调函数的指针。

-## CallBackParam
+### CallBackParam

一个结构体。CallBackParam定义了回调函数的输入参数。

-### 公有属性
+#### 公有属性

#### node_name

diff --git a/docs/api_cpp/source_zh_cn/session.md b/docs/api_cpp/source_zh_cn/session.md
index 0fbc07e90b7a6c4e94fd90db609b6b7ece4ca993..c3cdaf310e2b4dd2a1b061049f4299b2b0e62bb6 100644
--- a/docs/api_cpp/source_zh_cn/session.md
+++ b/docs/api_cpp/source_zh_cn/session.md
@@ -1,10 +1,10 @@
 # mindspore::session

- 
+ 

 ## LiteSession

-\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)>
+\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/lite_session.h)>

LiteSession定义了MindSpore 
Lite中的会话,用于进行Model的编译和前向推理。 @@ -56,7 +56,7 @@ virtual int CompileGraph(lite::Model *model) - 返回值 - STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。 + STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h)中定义。 #### GetInputs @@ -97,13 +97,13 @@ virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBac - 参数 - - `before`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/mindspore.html#kernelcallback) 结构体。定义了运行每个节点之前调用的回调函数。 + - `before`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#kernelcallback) 结构体。定义了运行每个节点之前调用的回调函数。 - - `after`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/master/mindspore.html#kernelcallback) 结构体。定义了运行每个节点之后调用的回调函数。 + - `after`: 一个[**KernelCallBack**](https://www.mindspore.cn/doc/api_cpp/zh-CN/r1.1/mindspore.html#kernelcallback) 结构体。定义了运行每个节点之后调用的回调函数。 - 返回值 - STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。 + STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h)中定义。 #### GetOutputsByNodeName @@ -176,7 +176,7 @@ virtual int Resize(const std::vector &inputs, const std::ve - 返回值 - STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/errorcode.h)中定义。 + STATUS ,即编译图的错误码。STATUS在[errorcode.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/errorcode.h)中定义。 ### 静态公有成员函数 @@ -216,7 +216,7 @@ static LiteSession *CreateSession(const char *model_buf, size_t size, const lite ## TrainSession -\#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)> +\#include 
<[lite_session.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/lite_session.h)> 继承于类 LiteSession,用于训练模型。 diff --git a/docs/api_cpp/source_zh_cn/tensor.md b/docs/api_cpp/source_zh_cn/tensor.md index 21a32c86ca85c7678ac606aa9d4c0ffa7be2c788..c3624c7107e6736b8570e90d6fe98081d65b28c1 100644 --- a/docs/api_cpp/source_zh_cn/tensor.md +++ b/docs/api_cpp/source_zh_cn/tensor.md @@ -1,10 +1,10 @@ # mindspore::tensor - + ## MSTensor -\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/ms_tensor.h)> +\#include <[ms_tensor.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/include/ms_tensor.h)> MSTensor定义了MindSpore Lite中的张量。 @@ -40,7 +40,7 @@ virtual TypeId data_type() const 获取MindSpore Lite MSTensor的数据类型。 -> TypeId在[mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h)中定义。只有TypeId枚举中的数字类型或kObjectTypeString可用于MSTensor。 +> TypeId在[mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/core/ir/dtype/type_id.h)中定义。只有TypeId枚举中的数字类型或kObjectTypeString可用于MSTensor。 - 返回值 diff --git a/docs/api_cpp/source_zh_cn/vision.md b/docs/api_cpp/source_zh_cn/vision.md index a8073f22b79cf6bc8dbaad929faedb16527c1ae9..351da2028710fd4602895c6301468ae10555c984 100644 --- a/docs/api_cpp/source_zh_cn/vision.md +++ b/docs/api_cpp/source_zh_cn/vision.md @@ -1,34 +1,20 @@ # mindspore::dataset::vision - - -## HWC2CHW - -\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision.h)> - -```cpp -std::shared_ptr HWC2CHW() -``` - -将输入图像的通道顺序从(H,W,C)转换成(C,H,W)。 - -- 返回值 - - 返回一个HwcToChw的算子。 + ## CenterCrop -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include 
<[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr CenterCrop(std::vector size) ``` -将输入的PIL图像的中心区域裁剪到给定的大小。 +从输入的图像的中心区域裁剪出给定尺寸的区域。 - 参数 - - `size`: 表示调整大小后的图像的输出大小。如果size为单个值,则将以相同的图像纵横比将图像调整为该值, 如果size具有2个值,则应为(高度,宽度)。 + - `size`: 输出裁剪区域的尺寸。如果size为单个值,则会生成一个正方形的裁剪区域,如果size具有2个值,则分别对应裁剪区域的高度、宽度。 - 返回值 @@ -36,18 +22,18 @@ std::shared_ptr CenterCrop(std::vector size) ## Crop -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Crop(std::vector coordinates, std::vector size) ``` -根据位置和尺寸裁切图像。 +根据起始位置和裁剪尺寸,从输入图像中裁切出指定区域。 - 参数 - - `coordinates`: 裁剪的起始位置。 - - `size`: 裁剪区域的大小。 + - `coordinates`: 裁剪区域在图像中的起始位置。 + - `size`: 输出裁剪区域的尺寸。如果size为单个值,则会生成一个正方形的裁剪区域;如果size具有2个值,则分别对应裁剪区域的高度、宽度。 - 返回值 @@ -55,7 +41,7 @@ std::shared_ptr Crop(std::vector coordinates, std::vecto ## Decode -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Decode(bool rgb = true) @@ -65,7 +51,7 @@ std::shared_ptr Decode(bool rgb = true) - 参数 - - `rgb`: 表示是否执行RGB模式解码。 + - `rgb`: 表示是否执行RGB模式解码。 - 返回值 @@ -73,7 +59,7 @@ std::shared_ptr Decode(bool rgb = true) ## Normalize -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Normalize(std::vector mean, std::vector std) @@ -92,19 +78,72 @@ std::shared_ptr 
Normalize(std::vector mean, std::vect ## Resize -\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> +\#include <[vision_lite.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision_lite.h)> ```cpp std::shared_ptr Resize(std::vector size, InterpolationMode interpolation = InterpolationMode::kLinear) ``` -通过给定的大小对输入的PIL图像进行调整。 +对输入的图像的长、宽尺寸进行调整。 - 参数 - - `size`: 表示调整大小后的图像的输出大小。如果size为单个值,则将以相同的图像纵横比将图像调整为该值,如果size具有2个值,则应为(高度,宽度)。 + - `size`: 表示调整后的图像的输出尺寸大小。如果size为单个值,图像的短边会调整到此值,另一边则将以相同的纵横比进行调整;如果size具有2个值,则对应输出图像的高度、宽度。 - `interpolation`: 插值模式的枚举。 - 返回值 返回一个Resize的算子。 + +## HWC2CHW + +\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision.h)> + +```cpp +std::shared_ptr HWC2CHW() +``` + +将输入图像的通道顺序从(H,W,C)转换成(C,H,W)。 + +- 返回值 + + 返回一个HwcToChw的算子。 + +## Pad + +\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision.h)> + +```cpp +std::shared_ptr Pad(std::vector padding, std::vector fill_value = {0}, BorderType padding_mode = BorderType::kConstant) +``` + +根据给定的填充参数对图像进行填充。 + +- 参数 + + - `padding`: 图像的上、下、左、右边需要填充的像素个数。如果padding为单个值P,则四个边都填充P个像素;如果padding为两个值(P, Q),则左、右边填充P个像素,上、下边填充Q个像素;如果padding为四个值,则分别对应左、上、右、下四个边的填充像素个数。 + - `fill_value`: 需要填充的像素值。 + - `padding_mode`: 填充的模式,可以为常量模式、边界模式、反射模式、对称模式。 + +- 返回值 + + 返回一个Pad的算子。 + +## Rescale + +\#include <[vision.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/minddata/dataset/include/vision.h)> + +```cpp +std::shared_ptr Rescale(float rescale, float shift) +``` + +对输入图像的像素进行`y = αx + β`变换。 + +- 参数 + + - `rescale`: 变换的α参数。 + - `shift`: 变换的β参数。 + +- 返回值 + + 返回一个Rescale的算子。 \ No newline at end of file diff --git a/docs/api_java/source_en/class_list.md b/docs/api_java/source_en/class_list.md index 
a8073d9d47c1aa0434e41b637b0bd3e23d6f49a6..7260b4fba43437e138539e1f66eb94a31de0c53a 100644 --- a/docs/api_java/source_en/class_list.md +++ b/docs/api_java/source_en/class_list.md @@ -1,14 +1,14 @@ # Class List - + | Package | Class Name | Description | | ------------------------- | -------------- | ------------------------------------------------------------ | -| com.mindspore.lite.config | [MSConfig](https://www.mindspore.cn/doc/api_java/en/master/msconfig.html) | MSConfig defines for holding environment variables during runtime. | -| com.mindspore.lite.config | [CpuBindMode](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) | CpuBindMode defines the CPU binding mode. | -| com.mindspore.lite.config | [DeviceType](https://www.mindspore.cn/doc/api_java/zh-CN/master/mstensor.html) | DeviceType defines the back-end device type. | -| com.mindspore.lite | [LiteSession](https://www.mindspore.cn/doc/api_java/en/master/lite_session.html) | LiteSession defines session in MindSpore Lite for compiling Model and forwarding model. | -| com.mindspore.lite | [Model](https://www.mindspore.cn/doc/api_java/en/master/model.html) | Model defines the model in MindSpore Lite for managing graph. | -| com.mindspore.lite | [MSTensor](https://www.mindspore.cn/doc/api_java/en/master/mstensor.html) | MSTensor defines the tensor in MindSpore Lite. | -| com.mindspore.lite | [DataType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DataType defines the supported data types. | -| com.mindspore.lite | [Version](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/Version.java) | Version is used to obtain the version information of MindSpore Lite. 
|
+| com.mindspore.lite.config | [MSConfig](https://www.mindspore.cn/doc/api_java/en/r1.1/msconfig.html) | MSConfig is defined for holding environment variables during runtime. |
+| com.mindspore.lite.config | [CpuBindMode](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) | CpuBindMode defines the CPU binding mode. |
+| com.mindspore.lite.config | [DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DeviceType defines the back-end device type. |
+| com.mindspore.lite | [LiteSession](https://www.mindspore.cn/doc/api_java/en/r1.1/lite_session.html) | LiteSession defines a session in MindSpore Lite for compiling a Model and performing inference. |
+| com.mindspore.lite | [Model](https://www.mindspore.cn/doc/api_java/en/r1.1/model.html) | Model defines the model in MindSpore Lite for managing the computational graph. |
+| com.mindspore.lite | [MSTensor](https://www.mindspore.cn/doc/api_java/en/r1.1/mstensor.html) | MSTensor defines the tensor in MindSpore Lite. |
+| com.mindspore.lite | [DataType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java) | DataType defines the supported data types. |
+| com.mindspore.lite | [Version](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/Version.java) | Version is used to obtain the version information of MindSpore Lite.
| diff --git a/docs/api_java/source_en/conf.py b/docs/api_java/source_en/conf.py index 4020d50f7b5f7a90b26785749cb1d41046b4723c..71b7386f7a6c58c6814685c843069d827171b488 100644 --- a/docs/api_java/source_en/conf.py +++ b/docs/api_java/source_en/conf.py @@ -23,7 +23,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_java/source_en/index.rst b/docs/api_java/source_en/index.rst index 935aa0a5d22565b2d51fc919a8f81c00d9702b02..1a531e3f3a89cec32b3882e59a74980552ef0864 100644 --- a/docs/api_java/source_en/index.rst +++ b/docs/api_java/source_en/index.rst @@ -14,4 +14,5 @@ MindSpore Java API lite_session model msconfig - mstensor \ No newline at end of file + mstensor + lite_java_example \ No newline at end of file diff --git a/docs/api_java/source_en/lite_java_example.rst b/docs/api_java/source_en/lite_java_example.rst new file mode 100644 index 0000000000000000000000000000000000000000..35e8f359f470841a62f4beee329183f5d8610296 --- /dev/null +++ b/docs/api_java/source_en/lite_java_example.rst @@ -0,0 +1,7 @@ +Example +======== + +.. 
toctree:: + :maxdepth: 1 + + Quick Start \ No newline at end of file diff --git a/docs/api_java/source_en/lite_session.md b/docs/api_java/source_en/lite_session.md index df13e94ca209a0fafecdf2a977ea72cc9818fdee..30eda6e20c5e3566e5bc62bb7c58d505a4bf3900 100644 --- a/docs/api_java/source_en/lite_session.md +++ b/docs/api_java/source_en/lite_session.md @@ -1,6 +1,6 @@ # LiteSession - + ```java import com.mindspore.lite.LiteSession; diff --git a/docs/api_java/source_en/model.md b/docs/api_java/source_en/model.md index c0928bc1b861f62c4d819b71f19d8e8fac86fc24..aa4f9903ca1c856791eca442c2fd70a6c3312195 100644 --- a/docs/api_java/source_en/model.md +++ b/docs/api_java/source_en/model.md @@ -1,6 +1,6 @@ # Model - + ```java import com.mindspore.lite.Model; diff --git a/docs/api_java/source_en/msconfig.md b/docs/api_java/source_en/msconfig.md index 21b02746a03490919ca63f256203a5f309db556f..70acf659795e8e4733040128930eed872bdf235d 100644 --- a/docs/api_java/source_en/msconfig.md +++ b/docs/api_java/source_en/msconfig.md @@ -1,6 +1,6 @@ # MSConfig - + ```java import com.mindspore.lite.config.MSConfig; @@ -29,10 +29,10 @@ Initialize MSConfig. - Parameters - - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. - - `threadNum`: Thread number config for thread pool. - - `cpuBindMode`: A [**CpuBindMode**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) **enum** variable. - - `enable_float16`:Whether to use float16 operator for priority. + - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. + - `threadNum`: Thread number config for thread pool. 
+ - `cpuBindMode`: A [**CpuBindMode**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) **enum** variable. + - `enable_float16`:Whether to use float16 operator for priority. - Returns @@ -46,9 +46,9 @@ Initialize MSConfig. - Parameters - - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. + - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. - `threadNum`: Thread number config for thread pool. - - `cpuBindMode`: A [**CpuBindMode**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) **enum** variable. + - `cpuBindMode`: A [**CpuBindMode**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) **enum** variable. - Returns @@ -62,7 +62,7 @@ Initialize MSConfig, `cpuBindMode` defaults to `CpuBindMode.MID_CPU`. - Parameters - - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. + - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. - `threadNum`: Thread number config for thread pool. - Returns @@ -77,7 +77,7 @@ Initialize MSConfig,`cpuBindMode` defaults to `CpuBindMode.MID_CPU`, `threadNu - Parameters - - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. 
+ - `deviceType`: A [**DeviceType**](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) **enum** type. - Returns diff --git a/docs/api_java/source_en/mstensor.md b/docs/api_java/source_en/mstensor.md index 4d4f19ac60fabb96f159740e7226d94ab3c2f8aa..5c0e34a6c052331050c1ca466f0bcace202ace7f 100644 --- a/docs/api_java/source_en/mstensor.md +++ b/docs/api_java/source_en/mstensor.md @@ -1,6 +1,6 @@ # MSTensor - + ```java import com.mindspore.lite.MSTensor; @@ -42,7 +42,7 @@ Get the shape of the MindSpore Lite MSTensor. public int getDataType() ``` -> DataType is defined in [com.mindspore.lite.DataType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java). +> DataType is defined in [com.mindspore.lite.DataType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java). - Returns diff --git a/docs/api_java/source_zh_cn/class_list.md b/docs/api_java/source_zh_cn/class_list.md index 6c2d18a87f5910b6e00aa06a1debe90aa3719f95..f9534fdb20b17e6b0ad10df8f3283a0411b5a3d8 100644 --- a/docs/api_java/source_zh_cn/class_list.md +++ b/docs/api_java/source_zh_cn/class_list.md @@ -1,14 +1,14 @@ # 类列表 - + | 包 | 类 | 描述 | | ------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | -| com.mindspore.lite.config | [MSConfig](https://www.mindspore.cn/doc/api_java/zh-CN/master/msconfig.html) | MSConfig用于保存执行期间的配置变量。 | -| com.mindspore.lite.config | [CpuBindMode](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) | CpuBindMode定义了CPU绑定模式。 | -| com.mindspore.lite.config | 
[DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DeviceType定义了后端设备类型。 | -| com.mindspore.lite | [LiteSession](https://www.mindspore.cn/doc/api_java/zh-CN/master/lite_session.html) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 | -| com.mindspore.lite | [Model](https://www.mindspore.cn/doc/api_java/zh-CN/master/model.html) | Model定义了MindSpore Lite中的模型,便于计算图管理。 | -| com.mindspore.lite | [MSTensor](https://www.mindspore.cn/doc/api_java/zh-CN/master/mstensor.html) | MSTensor定义了MindSpore Lite中的张量。 | -| com.mindspore.lite | [DataType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DataType定义了所支持的数据类型。 | -| com.mindspore.lite | [Version](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/Version.java) | Version用于获取MindSpore Lite的版本信息。 | +| com.mindspore.lite.config | [MSConfig](https://www.mindspore.cn/doc/api_java/zh-CN/r1.1/msconfig.html) | MSConfig用于保存执行期间的配置变量。 | +| com.mindspore.lite.config | [CpuBindMode](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java) | CpuBindMode定义了CPU绑定模式。 | +| com.mindspore.lite.config | [DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DeviceType定义了后端设备类型。 | +| com.mindspore.lite | [LiteSession](https://www.mindspore.cn/doc/api_java/zh-CN/r1.1/lite_session.html) | LiteSession定义了MindSpore Lite中的会话,用于进行Model的编译和前向推理。 | +| com.mindspore.lite | [Model](https://www.mindspore.cn/doc/api_java/zh-CN/r1.1/model.html) | Model定义了MindSpore Lite中的模型,便于计算图管理。 | +| com.mindspore.lite | [MSTensor](https://www.mindspore.cn/doc/api_java/zh-CN/r1.1/mstensor.html) | MSTensor定义了MindSpore Lite中的张量。 | +| com.mindspore.lite | 
[DataType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java) | DataType定义了所支持的数据类型。 | +| com.mindspore.lite | [Version](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/Version.java) | Version用于获取MindSpore Lite的版本信息。 | diff --git a/docs/api_java/source_zh_cn/conf.py b/docs/api_java/source_zh_cn/conf.py index e3dfb2a0a9fc6653113e7b2bb878a5497ceb4a2b..d68b7e7966909b7631790f148a864c950696ec0c 100644 --- a/docs/api_java/source_zh_cn/conf.py +++ b/docs/api_java/source_zh_cn/conf.py @@ -22,7 +22,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_java/source_zh_cn/lite_java_example.rst b/docs/api_java/source_zh_cn/lite_java_example.rst index 905bf7d9bb71ee8e7d108990155a454a6e80ab92..19f6b218c4c53be61e43cd7c27c08723eed06940 100644 --- a/docs/api_java/source_zh_cn/lite_java_example.rst +++ b/docs/api_java/source_zh_cn/lite_java_example.rst @@ -4,4 +4,4 @@ .. 
toctree:: :maxdepth: 1 - 快速入门 \ No newline at end of file + 快速入门 \ No newline at end of file diff --git a/docs/api_java/source_zh_cn/lite_session.md b/docs/api_java/source_zh_cn/lite_session.md index 9d3d0fca6357adbf015f09219aa2e86a821258ff..5bd9c1c05f974439453505011866270471676a47 100644 --- a/docs/api_java/source_zh_cn/lite_session.md +++ b/docs/api_java/source_zh_cn/lite_session.md @@ -1,6 +1,6 @@ # LiteSession - + ```java import com.mindspore.lite.LiteSession; @@ -62,7 +62,7 @@ public boolean compileGraph(Model model) - 参数 - - `Model`: 需要被编译的模型。 + - `Model`: 需要被编译的模型。 - 返回值 @@ -102,7 +102,7 @@ public MSTensor getInputByTensorName(String tensorName) - 参数 - - `tensorName`: 张量名。 + - `tensorName`: 张量名。 - 返回值 @@ -118,7 +118,7 @@ public List getOutputsByNodeName(String nodeName) - 参数 - - `nodeName`: 节点名。 + - `nodeName`: 节点名。 - 返回值 diff --git a/docs/api_java/source_zh_cn/model.md b/docs/api_java/source_zh_cn/model.md index 7fbd94321689c1f082ec239c20d8fd02627770e9..373ec68acbb2f9ee90b76ffc82c48e94824136d4 100644 --- a/docs/api_java/source_zh_cn/model.md +++ b/docs/api_java/source_zh_cn/model.md @@ -1,6 +1,6 @@ # Model - + ```java import com.mindspore.lite.Model; diff --git a/docs/api_java/source_zh_cn/msconfig.md b/docs/api_java/source_zh_cn/msconfig.md index 3b2da52153de453f360fe736b0af6417ea047bfb..76759a3472bd31b207aff3f15c2a966015b14080 100644 --- a/docs/api_java/source_zh_cn/msconfig.md +++ b/docs/api_java/source_zh_cn/msconfig.md @@ -1,6 +1,6 @@ # MSConfig - + ```java import com.mindspore.lite.config.MSConfig; @@ -29,9 +29,9 @@ public boolean init(int deviceType, int threadNum, int cpuBindMode, boolean enab - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 + - `deviceType`: 
设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 - `threadNum`: 线程数。 - - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.lite.config.CpuBindMode](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java)中定义。 + - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.lite.config.CpuBindMode](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java)中定义。 - `enable_float16`:是否优先使用float16算子。 - 返回值 @@ -46,9 +46,9 @@ public boolean init(int deviceType, int threadNum, int cpuBindMode) - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 + - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 - `threadNum`: 线程数。 - - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.lite.config.CpuBindMode](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java)中定义。 + - `cpuBindMode`: CPU绑定模式,`cpuBindMode`在[com.mindspore.lite.config.CpuBindMode](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/CpuBindMode.java)中定义。 - 返回值 @@ -62,7 +62,7 @@ public boolean init(int deviceType, int threadNum) - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 + - `deviceType`: 
设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 - `threadNum`: 线程数。 - 返回值 @@ -77,7 +77,7 @@ public boolean init(int deviceType) - 参数 - - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 + - `deviceType`: 设备类型,`deviceType`在[com.mindspore.lite.config.DeviceType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/config/DeviceType.java)中定义。 - 返回值 diff --git a/docs/api_java/source_zh_cn/mstensor.md b/docs/api_java/source_zh_cn/mstensor.md index 056cd1dd94514a508c9fecdd243befdedd95e740..8139f3d585857831cf6e05f774c0e8e268be2e6b 100644 --- a/docs/api_java/source_zh_cn/mstensor.md +++ b/docs/api_java/source_zh_cn/mstensor.md @@ -1,6 +1,6 @@ # MSTensor - + ```java import com.mindspore.lite.MSTensor; @@ -42,7 +42,7 @@ public int[] getShape() public int getDataType() ``` -> DataType在[com.mindspore.lite.DataType](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java)中定义。 +> DataType在[com.mindspore.lite.DataType](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/lite/java/java/app/src/main/java/com/mindspore/lite/DataType.java)中定义。 - 返回值 diff --git a/docs/api_python/source_en/_templates/classtemplate.rst b/docs/api_python/source_en/_templates/classtemplate.rst index d232f3978ec365474320569e3954a709c1a766a0..476014a3c6968eaf7f270fc9ce00c1efc9001f1c 100644 --- a/docs/api_python/source_en/_templates/classtemplate.rst +++ b/docs/api_python/source_en/_templates/classtemplate.rst @@ -3,7 +3,11 @@ .. 
currentmodule:: {{ module }} -{% if objname[0].istitle() %} +{% if objname in ["FastGelu", "GatherV2", "TensorAdd", "Gelu"] %} +{{ fullname | underline }} + +.. autofunction:: {{ fullname }} +{% elif objname[0].istitle() %} {{ fullname | underline }} .. autoclass:: {{ name }} diff --git a/docs/api_python/source_en/conf.py b/docs/api_python/source_en/conf.py index c88194339c838fa4ef46289d8f6643a0f135fd53..50815c0e73cb6b6c98920579502e46598d77c262 100644 --- a/docs/api_python/source_en/conf.py +++ b/docs/api_python/source_en/conf.py @@ -32,7 +32,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_python/source_en/mindspore/mindspore.dataset.transforms.rst b/docs/api_python/source_en/mindspore/mindspore.dataset.transforms.rst index d6e7d76dd5558286a2a7c781487625fde870a124..8de6f0a34218bd64730c8ea9e9138014fc178b04 100644 --- a/docs/api_python/source_en/mindspore/mindspore.dataset.transforms.rst +++ b/docs/api_python/source_en/mindspore/mindspore.dataset.transforms.rst @@ -20,7 +20,6 @@ mindspore.dataset.transforms.c_transforms mindspore.dataset.transforms.c_transforms.PadEnd mindspore.dataset.transforms.c_transforms.RandomApply mindspore.dataset.transforms.c_transforms.RandomChoice - mindspore.dataset.transforms.c_transforms.Relational mindspore.dataset.transforms.c_transforms.Slice mindspore.dataset.transforms.c_transforms.TypeCast mindspore.dataset.transforms.c_transforms.Unique diff --git a/docs/api_python/source_en/mindspore/mindspore.ops.rst b/docs/api_python/source_en/mindspore/mindspore.ops.rst index 779103f524e5fba8fede87ba25f6bd58a6756850..77157779ebdc1ff0d7ece87d4cf655852de99f75 100644 --- a/docs/api_python/source_en/mindspore/mindspore.ops.rst +++ b/docs/api_python/source_en/mindspore/mindspore.ops.rst @@ -10,7 +10,7 @@ composite The composite operators are the 
pre-defined combination of operators. -.. autosummary:: +.. msplatformautosummary:: :toctree: ops :nosignatures: :template: classtemplate.rst @@ -29,6 +29,7 @@ The composite operators are the pre-defined combination of operators. mindspore.ops.normal mindspore.ops.poisson mindspore.ops.repeat_elements + mindspore.ops.sequence_mask mindspore.ops.tensor_dot mindspore.ops.uniform @@ -78,7 +79,7 @@ The functional operators are the pre-instantiated Primitive operators, which can * - mindspore.ops.fill - :class:`mindspore.ops.Fill` * - mindspore.ops.gather - - :class:`mindspore.ops.GatherV2` + - :class:`mindspore.ops.Gather` * - mindspore.ops.gather_nd - :class:`mindspore.ops.GatherNd` * - mindspore.ops.hastype @@ -204,7 +205,7 @@ The functional operators are the pre-instantiated Primitive operators, which can * - mindspore.ops.string_eq - :class:`mindspore.ops.Primitive` ('string_equal') * - mindspore.ops.tensor_add - - :class:`mindspore.ops.TensorAdd` + - :class:`mindspore.ops.Add` * - mindspore.ops.tensor_div - :class:`mindspore.ops.RealDiv` * - mindspore.ops.tensor_floordiv diff --git a/docs/api_python/source_en/mindspore/mindspore.rst b/docs/api_python/source_en/mindspore/mindspore.rst index 0b8c7204849fbcccf421d496294cdff7a00434dc..f3a3fb42539f6eac34874cce96ed75612c4397b7 100644 --- a/docs/api_python/source_en/mindspore/mindspore.rst +++ b/docs/api_python/source_en/mindspore/mindspore.rst @@ -40,8 +40,8 @@ mindspore ============================ ================= Type Description ============================ ================= - ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. - ``MetaTensor`` A tensor only has data type and shape. For details, see `MetaTensor `_. + ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. + ``MetaTensor`` A tensor only has data type and shape. For details, see `MetaTensor `_. ``bool_`` Boolean ``True`` or ``False``. ``int_`` Integer scalar. 
``uint`` Unsigned integer scalar. diff --git a/docs/api_python/source_en/mindspore/operations.rst b/docs/api_python/source_en/mindspore/operations.rst index ce7c0d2396377881583a47349c25a51919c2d269..2f88d1d5a14dcd2d6d67eeb96753b4b95979ddf3 100644 --- a/docs/api_python/source_en/mindspore/operations.rst +++ b/docs/api_python/source_en/mindspore/operations.rst @@ -46,6 +46,7 @@ Neural Network Operators mindspore.ops.DynamicRNN mindspore.ops.Elu mindspore.ops.FastGelu + mindspore.ops.FastGeLU mindspore.ops.Flatten mindspore.ops.FloorMod mindspore.ops.FusedBatchNorm @@ -54,6 +55,7 @@ Neural Network Operators mindspore.ops.FusedSparseLazyAdam mindspore.ops.FusedSparseProximalAdagrad mindspore.ops.Gelu + mindspore.ops.GeLU mindspore.ops.GetNext mindspore.ops.HSigmoid mindspore.ops.HSwish @@ -90,9 +92,11 @@ Neural Network Operators mindspore.ops.SparseApplyAdagradV2 mindspore.ops.SparseApplyProximalAdagrad mindspore.ops.SparseSoftmaxCrossEntropyWithLogits + mindspore.ops.Stack mindspore.ops.Tanh mindspore.ops.TopK mindspore.ops.Unpack + mindspore.ops.Unstack Math Operators ^^^^^^^^^^^^^^ @@ -105,6 +109,7 @@ Math Operators mindspore.ops.Abs mindspore.ops.AccumulateNV2 mindspore.ops.ACos + mindspore.ops.Add mindspore.ops.AddN mindspore.ops.ApproximateEqual mindspore.ops.Asin @@ -211,8 +216,6 @@ Array Operators mindspore.ops.Cast mindspore.ops.Concat mindspore.ops.DepthToSpace - mindspore.ops.Diag - mindspore.ops.DiagPart mindspore.ops.DType mindspore.ops.DynamicShape mindspore.ops.EditDistance @@ -224,6 +227,7 @@ Array Operators mindspore.ops.GatherD mindspore.ops.GatherNd mindspore.ops.GatherV2 + mindspore.ops.Gather mindspore.ops.Identity mindspore.ops.InplaceUpdate mindspore.ops.InvertPermutation @@ -315,7 +319,6 @@ Debug Operators :nosignatures: :template: classtemplate.rst - mindspore.ops.Assert mindspore.ops.HistogramSummary mindspore.ops.ImageSummary mindspore.ops.InsertGradientOf diff --git a/docs/api_python/source_zh_cn/_templates/classtemplate.rst 
b/docs/api_python/source_zh_cn/_templates/classtemplate.rst index d232f3978ec365474320569e3954a709c1a766a0..476014a3c6968eaf7f270fc9ce00c1efc9001f1c 100644 --- a/docs/api_python/source_zh_cn/_templates/classtemplate.rst +++ b/docs/api_python/source_zh_cn/_templates/classtemplate.rst @@ -3,7 +3,11 @@ .. currentmodule:: {{ module }} -{% if objname[0].istitle() %} +{% if objname in ["FastGelu", "GatherV2", "TensorAdd", "Gelu"] %} +{{ fullname | underline }} + +.. autofunction:: {{ fullname }} +{% elif objname[0].istitle() %} {{ fullname | underline }} .. autoclass:: {{ name }} diff --git a/docs/api_python/source_zh_cn/conf.py b/docs/api_python/source_zh_cn/conf.py index d1220b8f461bd09a54464c8b09042cfa4577d0be..6eca0f0e635ee5b74e813547cad0fec70ff648ae 100644 --- a/docs/api_python/source_zh_cn/conf.py +++ b/docs/api_python/source_zh_cn/conf.py @@ -32,7 +32,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.transforms.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.transforms.rst index d6e7d76dd5558286a2a7c781487625fde870a124..8de6f0a34218bd64730c8ea9e9138014fc178b04 100644 --- a/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.transforms.rst +++ b/docs/api_python/source_zh_cn/mindspore/mindspore.dataset.transforms.rst @@ -20,7 +20,6 @@ mindspore.dataset.transforms.c_transforms mindspore.dataset.transforms.c_transforms.PadEnd mindspore.dataset.transforms.c_transforms.RandomApply mindspore.dataset.transforms.c_transforms.RandomChoice - mindspore.dataset.transforms.c_transforms.Relational mindspore.dataset.transforms.c_transforms.Slice mindspore.dataset.transforms.c_transforms.TypeCast mindspore.dataset.transforms.c_transforms.Unique diff --git a/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst 
b/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst index 779103f524e5fba8fede87ba25f6bd58a6756850..77157779ebdc1ff0d7ece87d4cf655852de99f75 100644 --- a/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst +++ b/docs/api_python/source_zh_cn/mindspore/mindspore.ops.rst @@ -10,7 +10,7 @@ composite The composite operators are the pre-defined combination of operators. -.. autosummary:: +.. msplatformautosummary:: :toctree: ops :nosignatures: :template: classtemplate.rst @@ -29,6 +29,7 @@ The composite operators are the pre-defined combination of operators. mindspore.ops.normal mindspore.ops.poisson mindspore.ops.repeat_elements + mindspore.ops.sequence_mask mindspore.ops.tensor_dot mindspore.ops.uniform @@ -78,7 +79,7 @@ The functional operators are the pre-instantiated Primitive operators, which can * - mindspore.ops.fill - :class:`mindspore.ops.Fill` * - mindspore.ops.gather - - :class:`mindspore.ops.GatherV2` + - :class:`mindspore.ops.Gather` * - mindspore.ops.gather_nd - :class:`mindspore.ops.GatherNd` * - mindspore.ops.hastype @@ -204,7 +205,7 @@ The functional operators are the pre-instantiated Primitive operators, which can * - mindspore.ops.string_eq - :class:`mindspore.ops.Primitive` ('string_equal') * - mindspore.ops.tensor_add - - :class:`mindspore.ops.TensorAdd` + - :class:`mindspore.ops.Add` * - mindspore.ops.tensor_div - :class:`mindspore.ops.RealDiv` * - mindspore.ops.tensor_floordiv diff --git a/docs/api_python/source_zh_cn/mindspore/mindspore.rst b/docs/api_python/source_zh_cn/mindspore/mindspore.rst index 879b8dbb4e6dd454549dd564c0b574a755b9d97d..6857ed4c05bf88a0fe5988f86d3139e22724b331 100644 --- a/docs/api_python/source_zh_cn/mindspore/mindspore.rst +++ b/docs/api_python/source_zh_cn/mindspore/mindspore.rst @@ -40,8 +40,8 @@ mindspore ============================ ================= Type Description ============================ ================= - ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. 
- ``MetaTensor`` A tensor only has data type and shape. For details, see `MetaTensor `_. + ``tensor`` MindSpore's ``tensor`` type. Data format uses NCHW. For details, see `tensor `_. + ``MetaTensor`` A tensor only has data type and shape. For details, see `MetaTensor `_. ``bool_`` Boolean ``True`` or ``False``. ``int_`` Integer scalar. ``uint`` Unsigned integer scalar. diff --git a/docs/api_python/source_zh_cn/mindspore/operations.rst b/docs/api_python/source_zh_cn/mindspore/operations.rst index ce7c0d2396377881583a47349c25a51919c2d269..2f88d1d5a14dcd2d6d67eeb96753b4b95979ddf3 100644 --- a/docs/api_python/source_zh_cn/mindspore/operations.rst +++ b/docs/api_python/source_zh_cn/mindspore/operations.rst @@ -46,6 +46,7 @@ Neural Network Operators mindspore.ops.DynamicRNN mindspore.ops.Elu mindspore.ops.FastGelu + mindspore.ops.FastGeLU mindspore.ops.Flatten mindspore.ops.FloorMod mindspore.ops.FusedBatchNorm @@ -54,6 +55,7 @@ Neural Network Operators mindspore.ops.FusedSparseLazyAdam mindspore.ops.FusedSparseProximalAdagrad mindspore.ops.Gelu + mindspore.ops.GeLU mindspore.ops.GetNext mindspore.ops.HSigmoid mindspore.ops.HSwish @@ -90,9 +92,11 @@ Neural Network Operators mindspore.ops.SparseApplyAdagradV2 mindspore.ops.SparseApplyProximalAdagrad mindspore.ops.SparseSoftmaxCrossEntropyWithLogits + mindspore.ops.Stack mindspore.ops.Tanh mindspore.ops.TopK mindspore.ops.Unpack + mindspore.ops.Unstack Math Operators ^^^^^^^^^^^^^^ @@ -105,6 +109,7 @@ Math Operators mindspore.ops.Abs mindspore.ops.AccumulateNV2 mindspore.ops.ACos + mindspore.ops.Add mindspore.ops.AddN mindspore.ops.ApproximateEqual mindspore.ops.Asin @@ -211,8 +216,6 @@ Array Operators mindspore.ops.Cast mindspore.ops.Concat mindspore.ops.DepthToSpace - mindspore.ops.Diag - mindspore.ops.DiagPart mindspore.ops.DType mindspore.ops.DynamicShape mindspore.ops.EditDistance @@ -224,6 +227,7 @@ Array Operators mindspore.ops.GatherD mindspore.ops.GatherNd mindspore.ops.GatherV2 + mindspore.ops.Gather 
mindspore.ops.Identity mindspore.ops.InplaceUpdate mindspore.ops.InvertPermutation @@ -315,7 +319,6 @@ Debug Operators :nosignatures: :template: classtemplate.rst - mindspore.ops.Assert mindspore.ops.HistogramSummary mindspore.ops.ImageSummary mindspore.ops.InsertGradientOf diff --git a/docs/faq/source_en/backend_running.md b/docs/faq/source_en/backend_running.md index 6223b8cf48a80f6a680e9c5bd57d350cc61476ac..b01e572a464d92c5ca0bf457cea0e9aa93526caf 100644 --- a/docs/faq/source_en/backend_running.md +++ b/docs/faq/source_en/backend_running.md @@ -2,12 +2,73 @@ `Ascend` `GPU` `CPU` `Environmental Setup` `Operation Mode` `Model Training` `Beginner` `Intermediate` `Expert` - + + +**Q: How do I view the number of model parameters?** + +A: You can load the checkpoint to count the parameter number. Variables in the momentum and optimizer may be counted, so you need to filter them out. +You can refer to the following APIs to collect the number of network parameters: + +```python +def count_params(net): + """Count number of parameters in the network + Args: + net (mindspore.nn.Cell): Mindspore network instance + Returns: + total_params (int): Total number of trainable params + """ + total_params = 0 + for param in net.trainable_params(): + total_params += np.prod(param.shape) + return total_params +``` + +[Script Link](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/tinynet/src/utils.py). + +
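Note that the `count_params` snippet above relies on `import numpy as np`. The counting logic can be sanity-checked without MindSpore, since it only touches `trainable_params()` and `param.shape` — `FakeParam`/`FakeNet` below are hypothetical stand-ins, not MindSpore classes:

```python
import numpy as np

class FakeParam:
    """Stand-in for a MindSpore parameter: only .shape is used by the counter."""
    def __init__(self, shape):
        self.shape = shape

class FakeNet:
    """Stand-in network exposing trainable_params() like mindspore.nn.Cell."""
    def trainable_params(self):
        # e.g. a 3x3 conv kernel (16 output, 3 input channels) plus its bias
        return [FakeParam((16, 3, 3, 3)), FakeParam((16,))]

def count_params(net):
    total_params = 0
    for param in net.trainable_params():
        total_params += np.prod(param.shape)
    return total_params

print(count_params(FakeNet()))  # 16*3*3*3 + 16 = 448
```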
+
+**Q: How do I build a multi-label MindRecord dataset for images?**
+
+A: The data schema can be defined as follows: `cv_schema_json = {"label": {"type": "int32", "shape": [-1]}, "data": {"type": "bytes"}}`
+
+Note: The label is a numpy array in which the label values 1, 1, 0, 1, 0, 1 are stored. These label values correspond to the same piece of data, that is, the binary value of the same image.
+For details, see [Converting Dataset to MindRecord](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/convert_dataset.html#id3).
+
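As an illustration of that schema, one sample pairs a variable-length `int32` label array with raw image bytes. This is plain Python only: the byte string is a dummy placeholder rather than a real encoded image, and actually writing the file would go through `mindspore.mindrecord.FileWriter`:

```python
import numpy as np

# Schema: a variable-length int32 label array plus raw image bytes.
cv_schema_json = {"label": {"type": "int32", "shape": [-1]},
                  "data": {"type": "bytes"}}

# One sample: six label values attached to the same image.
sample = {"label": np.array([1, 1, 0, 1, 0, 1], dtype=np.int32),
          "data": b"\x89PNG..."}  # dummy placeholder, not a real image

print(sample["label"].tolist())  # [1, 1, 0, 1, 0, 1]
```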
+
+**Q: How do I monitor the loss during training and save the training parameters when the `loss` is the lowest?**
+
+A: You can customize a `callback`. For details, see the writing method of `ModelCheckpoint`. In addition, the logic for determining the minimum loss is added.
+
+```python
+class EarlyStop(Callback):
+    def __init__(self):
+        self.loss = None
+
+    def step_end(self, run_context):
+        loss = ****(get current loss)
+        if self.loss is None or loss < self.loss:
+            self.loss = loss
+            # do save ckpt
+```
+
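Stripped of the `Callback` machinery, the decision made in `step_end` is just a running minimum. A minimal sketch in plain Python, with the actual checkpoint saving left to the caller:

```python
class LossTracker:
    """Keeps the lowest loss seen so far and reports when a new best appears."""
    def __init__(self):
        self.best = None

    def update(self, loss):
        if self.best is None or loss < self.best:
            self.best = loss
            return True   # a new minimum: the caller should save a checkpoint
        return False

tracker = LossTracker()
print([tracker.update(l) for l in [0.9, 0.7, 0.8, 0.5]])  # [True, True, False, True]
```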
+
+**Q: How do I execute a single `ut` case in `mindspore/tests`?**
+
+A: `ut` cases are usually based on the MindSpore package of the debug version, which is not provided on the official website. You can run `sh build.sh` to compile the source code and then run the `pytest` command. The compilation in debug mode does not depend on the backend; run the `sh build.sh -t on` command. For details about how to execute cases, see the `tests/runtest.sh` script.
+
+ +**Q: How do I obtain the expected `feature map` when `nn.Conv2d` is used?** + +A: For details about how to derive the `Conv2d` output shape, see [mindspore.nn.Conv2d](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/nn/mindspore.nn.Conv2d.html#mindspore.nn.Conv2d). Set `pad_mode` of `Conv2d` to `same`, or calculate the `pad` based on the `Conv2d` shape derivation formula to keep the `shape` unchanged; generally, the pad is `(kernel_size-1)//2`. + +
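The derivation above can be checked with a small helper; this is a hedged stand-in for the formula in the `Conv2d` documentation, covering one spatial dimension with explicit padding:

```python
import math

def conv2d_out_size(size, kernel_size, stride=1, pad=0, dilation=1):
    """Output size of a convolution along one spatial dimension."""
    return math.floor((size + 2 * pad - dilation * (kernel_size - 1) - 1) / stride) + 1

# pad = (kernel_size - 1) // 2 keeps the size unchanged for stride 1 and an odd kernel:
assert conv2d_out_size(32, kernel_size=3, stride=1, pad=(3 - 1) // 2) == 32
assert conv2d_out_size(224, kernel_size=7, stride=2, pad=3) == 112
```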
**Q: What can I do if the network performance is abnormal and weight initialization takes a long time during training after MindSpore is installed?** A: The `SciPy 1.4` series versions may be used in the environment. Run the `pip list | grep scipy` command to view the `SciPy` version and change the `SciPy` version to that required by MindSpore. You can view the third-party library dependency in the `requirement.txt` file. - + > Replace version with the specific version branch of MindSpore.
@@ -68,7 +129,27 @@ A: Currently, the PyNative mode supports only Ascend and GPU and does not suppor **Q: For Ascend users, how do I get more detailed logs when `run task error` is reported?** -A: More detailed logs info can be obtained by modify slog config file. You can get different level by modify `/var/log/npu/conf/slog/slog.conf`. The values are as follows: 0:debug、1:info、2:warning、3:error、4:null(no output log), default 1. +A: Use the msnpureport tool to set the device-side log level. The tool is stored in `/usr/local/Ascend/driver/tools/msnpureport`. + +- Global level: + +```bash +/usr/local/Ascend/driver/tools/msnpureport -g info +``` + +- Module level: + +```bash +/usr/local/Ascend/driver/tools/msnpureport -m SLOG:error +``` + +- Event level: + +```bash +/usr/local/Ascend/driver/tools/msnpureport -e disable/enable +``` + +- Multi-device ID level: + +```bash +/usr/local/Ascend/driver/tools/msnpureport -d 1 -g warning +``` + +Assume that the value range of deviceID is [0, 7], and `devices 0–3` and `devices 4–7` are on the same OS. `Devices 0–3` share one log configuration file, and `devices 4–7` share another. Changing the log level of any device in a group (for example, device 0) also changes the level of the other devices in that group (`devices 1–3`); the same applies to `devices 4–7`. + +After the driver package is installed (assuming the installation path is /usr/local/HiAI; on Windows, the executable `msnpureport.exe` is in the C:\ProgramFiles\Huawei\Ascend\Driver\tools\ directory), if you run the command in a directory such as /home/shihangbo/, device-side logs are exported to the current directory and stored in a folder named after the timestamp.
@@ -88,7 +169,7 @@ A: The problem is that the Graph mode is selected but the PyNative mode is used. - PyNative mode: dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model. - Graph mode: static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution. This mode uses technologies such as graph optimization to improve the running performance and facilitates large-scale deployment and cross-platform running. -You can select a proper mode and writing method to complete the training by referring to the official website [tutorial](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/debug_in_pynative_mode.html). +You can select a proper mode and writing method to complete the training by referring to the official website [tutorial](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/debug_in_pynative_mode.html).
diff --git a/docs/faq/source_en/conf.py b/docs/faq/source_en/conf.py index a1fd767271ac159540440ed65bd0d676163366a9..a2abcc9090f480f4504ca43ff682a2e762a5a89f 100644 --- a/docs/faq/source_en/conf.py +++ b/docs/faq/source_en/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/faq/source_en/index.rst b/docs/faq/source_en/index.rst index 6891b201504e3cc797ffef3f1128233addaeb335..9df80bfc87879ee96bfa01e47fa93b869ec642f7 100644 --- a/docs/faq/source_en/index.rst +++ b/docs/faq/source_en/index.rst @@ -15,7 +15,9 @@ MindSpore FAQ network_models platform_and_system backend_running + usage_migrate_3rd programming_language_extensions supported_features mindinsight_use - mindspore_lite \ No newline at end of file + mindspore_lite + mindspore_cpp_library \ No newline at end of file diff --git a/docs/faq/source_en/installation.md b/docs/faq/source_en/installation.md index 24c5b09dd209b8852a8d80906e2d8fa751de0997..da352c948dc603c8e318f70691dfea70b302ae07 100644 --- a/docs/faq/source_en/installation.md +++ b/docs/faq/source_en/installation.md @@ -13,7 +13,7 @@ - + ## Installing Using pip diff --git a/docs/faq/source_en/mindinsight_use.md b/docs/faq/source_en/mindinsight_use.md index 1c95a1df32a66f56797bd5ab301ad74b4994f064..59d96a742383f206c383d41652593a0c687c8254 100644 --- a/docs/faq/source_en/mindinsight_use.md +++ b/docs/faq/source_en/mindinsight_use.md @@ -2,7 +2,7 @@ `Linux` `Ascend` `GPU` `Environment Preparation` - + **Q: What can I do if the error message `ImportError: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory` is displayed in the MindInsight running logs after MindInsight failed to start?** diff --git a/docs/faq/source_en/mindspore_cpp_library.md b/docs/faq/source_en/mindspore_cpp_library.md new file mode 100644 index 
0000000000000000000000000000000000000000..2582f73007acb6856dbeb36a1a3846736400f2e8 --- /dev/null +++ b/docs/faq/source_en/mindspore_cpp_library.md @@ -0,0 +1,19 @@ +# MindSpore C++ Library Use + + + +**Q: What should I do when the error `/usr/bin/ld: warning: libxxx.so, needed by libmindspore.so, not found` is reported during application compilation?** + +A: Find the directory where the missing dynamic library file is located, add the path to the environment variable `LD_LIBRARY_PATH`, and refer to [Inference Using the MindIR Model on Ascend 310 AI Processors#Building Inference Code](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_310_mindir.html#building-inference-code) for the environment variable settings. + +**Q: What should I do when the error `undefined reference to mindspore::GlobalContext::SetGlobalDeviceTarget(std::__cxx11::basic_string, std::allocator> const &)` is reported during application compilation?** + +A: MindSpore is built with the old C++ ABI, so applications must use the same ABI and add the compile definition `-D_GLIBCXX_USE_CXX11_ABI=0`; otherwise, compilation fails. Refer to [Inference Using the MindIR Model on Ascend 310 AI Processors#Introduce to Building Script](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_310_mindir.html#introduce-to-building-script) for the cmake script. + +**Q: What should I do when the error `ModuleNotFoundError: No module named 'te'` is reported during application running?** + +A: First confirm that the system environment is installed correctly and that the whl packages such as `te` and `topi` are installed correctly. If there are multiple Python versions in the user environment, such as a Conda virtual environment, run `ldd name_of_your_executable_app` to confirm whether the `libpython3.7m.so.1.0` linked by the application is consistent with the current Python directory; if not, adjust the order of paths in the environment variable `LD_LIBRARY_PATH`. 
+ +**Q: What should I do when the error `error while loading shared libraries: libge_compiler.so: cannot open shared object file: No such file or directory` is reported during application running?** + +A: When installing the Ascend 310 AI Processor software packages, install the full-featured `toolkit` version of the `CANN` package instead of the `nnrt` version. diff --git a/docs/faq/source_en/mindspore_lite.md b/docs/faq/source_en/mindspore_lite.md index f48da6347d452427a2f3fbb984545738bb2246ad..d408d900bd6bf677615d7742f9452b5ac255e41d 100644 --- a/docs/faq/source_en/mindspore_lite.md +++ b/docs/faq/source_en/mindspore_lite.md @@ -1,6 +1,6 @@ # MindSpore Lite Use - + **Q: What are the limitations of NPU?** @@ -8,4 +8,4 @@ A: Currently NPU only supports system ROM version EMUI>=11. Chip support inclu **Q: Why does the static library after cutting with the cropper tool fail to compile during integration?** -A: Currently the cropper tool only supports CPU libraries, that is, `-e CPU` is specified in the compilation command. For details, please refer to [Use clipping tool to reduce library file size](https://www.mindspore.cn/tutorial/lite/en/master/use/cropper_tool.html) document. +A: Currently the cropper tool only supports CPU libraries, that is, `-e CPU` is specified in the compilation command. For details, please refer to [Use clipping tool to reduce library file size](https://www.mindspore.cn/tutorial/lite/en/r1.1/use/cropper_tool.html) document. 
diff --git a/docs/faq/source_en/network_models.md b/docs/faq/source_en/network_models.md index edc5609767bdd0fc7f73b0dc58aeb188d6eabd74..c5e54a9c6a8a9ec0df802af4bcf94ddb7c0118b3 100644 --- a/docs/faq/source_en/network_models.md +++ b/docs/faq/source_en/network_models.md @@ -2,7 +2,19 @@ `Data Processing` `Environmental Setup` `Model Export` `Model Training` `Beginner` `Intermediate` `Expert` - + + +**Q: When MindSpore is used for model training, there are four input parameters for `CTCLoss`: `inputs`, `labels_indices`, `labels_values`, and `sequence_length`. How do I use `CTCLoss` for model training?** + +A: The `dataset` received by the defined `model.train` API can consist of multiple pieces of data, for example, (`data1`, `data2`, `data3`, ...). Therefore, the `dataset` can contain `inputs`, `labels_indices`, `labels_values`, and `sequence_length` information. You only need to define the dataset in the corresponding format and transfer it to `model.train`. For details, see [Data Processing API](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dataset_loading.html). + +
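A minimal sketch of such a four-column dataset (the shapes and the generator below are illustrative assumptions; the `GeneratorDataset` and `model.train` calls are commented out because they require a MindSpore installation):

```python
import numpy as np

def ctc_data_generator(num_samples=4, max_time=50, feat_dim=16, num_labels=5):
    """Yield one row with the four fields CTCLoss consumes."""
    rng = np.random.default_rng(0)
    for _ in range(num_samples):
        inputs = rng.standard_normal((max_time, feat_dim)).astype(np.float32)
        labels_indices = np.array([[0, i] for i in range(num_labels)], dtype=np.int64)
        labels_values = rng.integers(0, 10, size=num_labels).astype(np.int32)
        sequence_length = np.array([max_time], dtype=np.int32)
        yield inputs, labels_indices, labels_values, sequence_length

# With MindSpore installed, the generator could be wrapped and trained on as:
# import mindspore.dataset as ds
# dataset = ds.GeneratorDataset(
#     ctc_data_generator(),
#     column_names=["inputs", "labels_indices", "labels_values", "sequence_length"])
# model.train(epoch=1, train_dataset=dataset)

row = next(ctc_data_generator())
```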
+ +**Q: How do I load the PyTorch weight to MindSpore during model transfer?** + +A: First, load the PyTorch `PTH` file. Take `ResNet-18` as an example: the network structure of MindSpore is the same as that of PyTorch, so after the parameter names are converted, the weights can be loaded directly into the network. Only `BN` and `Conv2D` layers are involved during loading; if the layer names of MindSpore and PyTorch differ at other layers, change the names to be the same. + +
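A hedged sketch of the renaming step (the BN parameter names below follow the usual PyTorch and MindSpore BatchNorm conventions; the layer names and the commented `save_checkpoint` call are illustrative assumptions):

```python
# PyTorch BatchNorm stores weight/bias/running_mean/running_var, while MindSpore
# BatchNorm expects gamma/beta/moving_mean/moving_variance.
BN_RENAMES = {
    "weight": "gamma",
    "bias": "beta",
    "running_mean": "moving_mean",
    "running_var": "moving_variance",
}

def torch_to_ms_name(name, bn_layer_names):
    """Rename one PyTorch parameter to its MindSpore counterpart."""
    prefix, _, leaf = name.rpartition(".")
    if prefix in bn_layer_names and leaf in BN_RENAMES:
        return prefix + "." + BN_RENAMES[leaf]
    return name  # e.g. Conv2D weights keep the same name

# With PyTorch and MindSpore installed, the converted weights could be saved as:
# import mindspore as ms
# params = [{"name": torch_to_ms_name(k, bn_names), "data": ms.Tensor(v.numpy())}
#           for k, v in torch_state_dict.items()]
# ms.save_checkpoint(params, "resnet18_from_torch.ckpt")
```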
**Q: After a model is trained, how do I save the model output in text or `npy` format?** @@ -18,11 +30,11 @@ np.save("output.npy", out.asnumpy()) **Q: Must data be converted into MindRecords when MindSpore is used for segmentation training?** -A: [build_seg_data.py](https://github.com/mindspore-ai/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/data/build_seg_data.py)is used to generate MindRecords based on a dataset. You can directly use or adapt it to your dataset. Alternatively, you can use `GeneratorDataset` if you want to read the dataset by yourself. +A: [build_seg_data.py](https://github.com/mindspore-ai/mindspore/blob/r1.1/model_zoo/official/cv/deeplabv3/src/data/build_seg_data.py) is used to generate MindRecords based on a dataset. You can directly use or adapt it to your dataset. Alternatively, you can use `GeneratorDataset` if you want to read the dataset by yourself. -[GenratorDataset example](https://www.mindspore.cn/doc/programming_guide/en/master/dataset_loading.html#loading-user-defined-dataset) +[GeneratorDataset example](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dataset_loading.html#loading-user-defined-dataset) -[GeneratorDataset API description](https://www.mindspore.cn/doc/api_python/en/master/mindspore/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) +[GeneratorDataset API description](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset)
@@ -46,7 +58,7 @@ A: MindSpore uses protocol buffers (protobuf) to store training parameters and c **Q: How do I use models trained by MindSpore on Ascend 310? Can they be converted to models used by HiLens Kit?** -A: Yes. HiLens Kit uses Ascend 310 as the inference core. Therefore, the two questions are essentially the same. Ascend 310 requires a dedicated OM model. Use MindSpore to export the ONNX or AIR model and convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_ascend_310.html). +A: Yes. HiLens Kit uses Ascend 310 as the inference core. Therefore, the two questions are essentially the same. Ascend 310 requires a dedicated OM model. Use MindSpore to export the ONNX or AIR model and convert it into an OM model supported by Ascend 310. For details, see [Multi-platform Inference](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_310.html).
@@ -58,19 +70,19 @@ A: When building a network, use `if self.training: x = dropput(x)`. During verif **Q: Where can I view the sample code or tutorial of MindSpore training and inference?** -A: Please visit the [MindSpore official website training](https://www.mindspore.cn/tutorial/training/en/master/index.html) and [MindSpore official website inference](https://www.mindspore.cn/tutorial/inference/en/master/index.html). +A: Please visit the [MindSpore official website training](https://www.mindspore.cn/tutorial/training/en/r1.1/index.html) and [MindSpore official website inference](https://www.mindspore.cn/tutorial/inference/en/r1.1/index.html).
**Q: What types of model is currently supported by MindSpore for training?** -A: MindSpore has basic support for common training scenarios, please refer to [Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md#) for detailed information. +A: MindSpore has basic support for common training scenarios, please refer to [Release note](https://gitee.com/mindspore/mindspore/blob/r1.1/RELEASE.md#) for detailed information.
**Q: What are the available recommendation or text generation networks or models provided by MindSpore?** -A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). +A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo).
@@ -141,7 +153,7 @@ if __name__ == "__main__": **Q: How do I use MindSpore to fit quadratic functions such as $f(x)=ax^2+bx+c$?** -A: The following code is referenced from the official [MindSpore tutorial code](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/linear_regression.py). +A: The following code is referenced from the official [MindSpore tutorial code](https://gitee.com/mindspore/docs/blob/r1.1/tutorials/tutorial_code/linear_regression.py). Modify the following items to fit $f(x) = ax^2 + bx + c$: diff --git a/docs/faq/source_en/platform_and_system.md b/docs/faq/source_en/platform_and_system.md index 754a9208fcc1e9b02d52d6d9c60fd70389c1cf38..5c3cdabe332991f619db953e1b27214f64c70c5f 100644 --- a/docs/faq/source_en/platform_and_system.md +++ b/docs/faq/source_en/platform_and_system.md @@ -2,7 +2,25 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `Hardware Support` `Beginner` `Intermediate` - + + +**Q: What is the difference between the PyNative and Graph modes?** + +A: In terms of efficiency, operators used in the two modes are the same. Therefore, when the same network and operators are executed in the two modes, the accuracy is the same. The network execution performance varies according to the execution mechanism. Theoretically, operators provided by MindSpore support both the PyNative and Graph modes. + +In terms of application scenarios, Graph mode requires the network structure to be built at the beginning, and then the framework performs entire graph optimization and execution. This mode is suitable for scenarios where the network is fixed and high performance is required. + +The two modes are supported on different hardware (such as `Ascend`, `GPU`, and `CPU`). + +In terms of code debugging, in PyNative mode operators are executed line by line. Therefore, you can directly debug the Python code and view the output or execution result of the corresponding operator at any breakpoint in the code. 
In Graph mode, the network is built in the `construct` function but not executed there. Therefore, you cannot obtain the output of the corresponding operator at breakpoints in the `construct` function; the output can be viewed only after the network execution is complete. + +
+ +**Q: How do I perform transfer learning in PyNative mode?** + +A: PyNative mode is compatible with transfer learning. For more tutorial information, see [Code for Loading a Pre-Trained Model](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/cv_mobilenetv2_fine_tune.html#code-for-loading-a-pre-trained-model). + +
**Q: Does MindSpore run only on Huawei `NPUs`?** @@ -30,7 +48,7 @@ A: Ascend 310 can only be used for inference. MindSpore supports training on Asc **Q: Does MindSpore require computing units such as GPUs and NPUs? What hardware support is required?** -A: MindSpore currently supports CPU, GPU, Ascend, and NPU. Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs. Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [Road Map](https://www.mindspore.cn/doc/note/en/master/roadmap.html) and [project release notes](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md#). +A: MindSpore currently supports CPU, GPU, Ascend, and NPU. Currently, you can try out MindSpore through Docker images on laptops or in environments with GPUs. Some models in MindSpore Model Zoo support GPU-based training and inference, and other models are being improved. For distributed parallel training, MindSpore supports multi-GPU training. You can obtain the latest information from [Road Map](https://www.mindspore.cn/doc/note/en/r1.1/roadmap.html) and [project release notes](https://gitee.com/mindspore/mindspore/blob/r1.1/RELEASE.md#).
diff --git a/docs/faq/source_en/programming_language_extensions.md b/docs/faq/source_en/programming_language_extensions.md index 7686964b62803147f6dff587d1e42020a3abcfc1..7afc8e66d3c1d667ca9928475ce88a6a65642c9a 100644 --- a/docs/faq/source_en/programming_language_extensions.md +++ b/docs/faq/source_en/programming_language_extensions.md @@ -2,7 +2,7 @@ `Python` `Support Plan` - + **Q: The recent announced programming language such as taichi got Python extensions that could be directly used as `import taichi as ti`. Does MindSpore have similar support?** diff --git a/docs/faq/source_en/supported_features.md b/docs/faq/source_en/supported_features.md index 0323a48d07bc029fa6d3321beefd68d8df45cd1c..6f93fdbe3692048273fae11a83eb7889ff205ee9 100644 --- a/docs/faq/source_en/supported_features.md +++ b/docs/faq/source_en/supported_features.md @@ -2,7 +2,19 @@ `Characteristic Advantages` `On-device Inference` `Functional Module` `Reasoning Tools` - + + +**Q: Does MindSpore Serving support hot loading to avoid inference service interruption?** + +A: MindSpore does not support hot loading. It is recommended that you run multiple Serving services and restart some of them when switching the version. + +
+ +**Q: Does MindSpore support truncated gradient?** + +A: Yes. For details, see [Definition and Usage of Truncated Gradient](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/transformer/src/transformer_for_train.py#L35). + +
**Q: How do I change hyperparameters for calculating loss values during neural network training?** @@ -12,13 +24,13 @@ A: Sorry, this function is not available yet. You can find the optimal hyperpara **Q: Can you introduce the dedicated data processing framework?** -A: MindData provides the heterogeneous hardware acceleration function for data processing. The high-concurrency data processing pipeline supports NPUs, GPUs, and CPUs. The CPU usage is reduced by 30%. For details, see [Optimizing Data Processing](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/optimize_data_processing.html). +A: MindData provides the heterogeneous hardware acceleration function for data processing. The high-concurrency data processing `pipeline` supports `NPU`, `GPU` and `CPU`. The `CPU` usage is reduced by 30%. For details, see [Optimizing Data Processing](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/optimize_data_processing.html).
**Q: What is the MindSpore IR design concept?** -A: Function expression: All expressions are functions, and differentiation and automatic parallel analysis are easy to implement without side effect. `JIT compilation capability`: The graph-based IR, control flow dependency, and data flow are combined to balance the universality and usability. `Turing-complete IR`: More flexible syntaxes are provided for converting `Python`, such as recursion. +A: Function expression: all expressions are functions, and differentiation and automatic parallel analysis are easy to implement without side effects. `JIT` compilation capability: the graph-based IR, control flow dependency, and data flow are combined to balance universality and usability. Turing-complete IR: more flexible syntax is provided for converting `Python`, such as recursion.
@@ -36,7 +48,7 @@ A: If you cooperate with MindSpore in papers and scientific research, you can ob **Q: How do I visualize the MindSpore Lite offline model (.ms file) to view the network structure?** -A: MindSpore Lite code is being submitted to the open-source repository Netron. Later, the MS model visualization will be implemented using Netron. While there are still some issues to be resolved in the Netron open-source repository, we have a Netron version for internal use, which can be [downloaded](https://github.com/lutzroeder/netron/releases). +A: MindSpore Lite code is being submitted to the open-source repository Netron. Later, the MS model visualization will be implemented using Netron. While there are still some issues to be resolved in the Netron open-source repository, we have a Netron version for internal use, which can be downloaded in the [`netron` releases](https://github.com/lutzroeder/netron/releases).
@@ -54,7 +66,7 @@ A: In addition to data parallelism, MindSpore distributed training also supports **Q: Has MindSpore implemented the anti-pooling operation similar to `nn.MaxUnpool2d`?** -A: Currently, MindSpore does not provide anti-pooling APIs but you can customize the operator to implement the operation. For details, click [here](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_operator_ascend.html). +A: Currently, MindSpore does not provide anti-pooling APIs but you can customize the operator to implement the operation. For details, refer to [Custom Operators](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_operator.html).
@@ -90,10 +102,16 @@ A: The TensorFlow's object detection pipeline API belongs to the TensorFlow's Mo **Q: How do I migrate scripts or models of other frameworks to MindSpore?** -A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). +A: For details about script or model migration, please visit the [MindSpore official website](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/migrate_3rd_scripts.html).
**Q: Does MindSpore provide open-source e-commerce datasets?** A: No. Please stay tuned for updates on the [MindSpore official website](https://www.mindspore.cn/en). + +
+ +**Q: Can I encapsulate the Tensor data of MindSpore using a numpy array?** + +A: No, all sorts of problems could arise. For example, `numpy.array(Tensor(1)).astype(numpy.float32)` will raise "ValueError: setting an array element with a sequence.". diff --git a/docs/faq/source_en/supported_operators.md b/docs/faq/source_en/supported_operators.md index f8d2e9a96cc4823b6c788c49e046389291305e1e..1280d013d7ccd509e6ec35f875ba06c0bf171ade 100644 --- a/docs/faq/source_en/supported_operators.md +++ b/docs/faq/source_en/supported_operators.md @@ -2,7 +2,59 @@ `Ascend` `GPU` `CPU` `Environmental Setup` `Beginner` `Intermediate` `Expert` - + + +**Q: What is the function of the `TransData` operator? Can the performance be optimized?** + +A: The `TransData` operator is used in the scenario where the data formats (such as NC1HWC0) used by interconnected operators on the network are inconsistent. In this case, the framework automatically inserts the `TransData` operator to convert the data formats into the same format and then performs computation. You can consider using `amp` mixed-precision training; in this way, some `FP32` operations and the invocation of some `TransData` operators can be reduced. + +
+ +**Q: An error occurs when the `Concat` operator concatenates a tuple that contains 192 or more tensors. What is a better solution (running in dynamic mode) for `Concat` to concatenate tuples containing multiple tensors?** + +A: The number of tensors to be concatenated at a time cannot exceed 192 according to the bottom-layer specifications of the Ascend operator. You can concatenate them in two passes. + +
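The two-pass workaround can be sketched as follows; `numpy.concatenate` stands in for `ops.Concat`, and the 192 limit is the Ascend constraint quoted above:

```python
import numpy as np

ASCEND_CONCAT_LIMIT = 192  # maximum tensors per single Concat call on Ascend

def chunked_concat(tensors, axis=0, limit=ASCEND_CONCAT_LIMIT):
    """Concatenate a long tensor list without exceeding the per-call limit."""
    while len(tensors) > 1:
        # Each pass merges at most `limit` tensors per call.
        tensors = [np.concatenate(tensors[i:i + limit], axis=axis)
                   for i in range(0, len(tensors), limit)]
    return tensors[0]

parts = [np.ones((1, 4)) for _ in range(300)]  # 300 inputs would exceed one call
merged = chunked_concat(parts)
```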
+ +**Q: When `Conv2D` is used to define convolution, the `group` parameter is used. Is it necessary to ensure that the value of `group` can be exactly divided by the input and output dimensions? How is the group parameter transferred?** + +A: The `Conv2d` operator has the following constraint: When the value of `group` is greater than 1, the value must be the same as the number of input and output channels. Do not use `ops.Conv2D`. Currently, this operator does not support a value of `group` that is greater than 1. Currently, only the `nn.Conv2d` API of MindSpore supports `group` convolution. However, the number of groups must be the same as the number of input and output channels. +The `Conv2D` operator function is as follows: + +```python +def __init__(self, + out_channel, + kernel_size, + mode=1, + pad_mode="valid", + pad=0, + stride=1, + dilation=1, + group=1, + data_format="NCHW"): +``` + +If the function contains a `group` parameter, the parameter will be transferred to the C++ layer by default. + +
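The constraint described above can be expressed as a small check (a hypothetical helper for illustration, not a MindSpore API):

```python
def is_valid_conv2d_group(in_channels, out_channels, group):
    """nn.Conv2d constraint: group is 1, or equals both channel counts (depthwise)."""
    return group == 1 or group == in_channels == out_channels

assert is_valid_conv2d_group(32, 32, 32)      # depthwise convolution
assert not is_valid_conv2d_group(32, 64, 2)   # grouped case not supported
```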
+ +**Q: Does MindSpore provide 3D convolutional layers?** + +A: 3D convolutional layers on Ascend are coming soon. Go to the [Operator List](https://www.mindspore.cn/doc/programming_guide/en/r1.1/operator_list.html) on the official website to view the operators that are supported. + +
+ +**Q: Does MindSpore support matrix transposition?** + +A: Yes. For details, see [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Transpose.html#mindspore.ops.Transpose). + +
+ +**Q: Can MindSpore calculate the variance of any tensor?** + +A: Currently, MindSpore does not have APIs or operators similar to variance which can directly calculate the variance of a `tensor`. However, MindSpore has sufficient small operators to support such operations. For details, see [class Moments(Cell)](https://www.mindspore.cn/doc/api_python/en/r1.1/_modules/mindspore/nn/layer/math.html#Moments). + +
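The composition used by `Moments` can be sketched with elementary reductions; the numpy operations below stand in for the `ReduceMean`, `Sub`, and `Square` small operators:

```python
import numpy as np

def moments(x):
    """Mean and (biased) variance built only from small reduce/elementwise ops."""
    mean = x.mean()                    # ReduceMean
    centered = x - mean                # Sub
    variance = (centered ** 2).mean()  # Square + ReduceMean
    return mean, variance

m, v = moments(np.array([1.0, 2.0, 3.0, 4.0]))
```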
**Q: Why is data loading abnormal when MindSpore1.0.1 is used in graph data offload mode?** @@ -38,7 +90,7 @@ In MindSpore, you can manually initialize the weight corresponding to the `paddi **Q: What can I do if the LSTM example on the official website cannot run on Ascend?** -A: Currently, the LSTM runs only on a GPU or CPU and does not support the hardware environment. You can click [here](https://www.mindspore.cn/doc/note/en/master/operator_list_ms.html) to view the supported operators. +A: Currently, the LSTM runs only on a GPU or CPU and does not support the hardware environment. You can click [MindSpore Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list_ms.html) to view the supported operators.
diff --git a/docs/faq/source_en/usage_migrate_3rd.md new file mode 100644 index 0000000000000000000000000000000000000000..9346cd5694552f6b0688ce6ce2e0bea064e69d6c --- /dev/null +++ b/docs/faq/source_en/usage_migrate_3rd.md @@ -0,0 +1,34 @@ +# Migration from a Third-party Framework + + + +**Q: How do I load a pre-trained PyTorch model for fine-tuning on MindSpore?** + +A: Map the parameters of PyTorch and MindSpore one by one. No unified conversion script is provided because network definitions are flexible. +Customize scripts based on your scenario. For details, see [Advanced Usage of Checkpoint](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/advanced_usage_of_checkpoint.html). + +
+ +**Q: How do I convert a PyTorch `dataset` to a MindSpore `dataset`?** + +A: The custom dataset logic of MindSpore is similar to that of PyTorch. You need to define a `dataset` class containing `__init__`, `__getitem__`, and `__len__` to read your dataset, instantiate the class into an object (for example, `dataset/dataset_generator`), and transfer the instantiated object to `GeneratorDataset` (on MindSpore) or `DataLoader` (on PyTorch). Then, you are ready to load the custom dataset. MindSpore provides further `map`->`batch` operations based on `GeneratorDataset`: you can easily add other custom operations in `map` and then invoke `batch`. +The custom dataset of MindSpore is loaded as follows: + +```python +# 1. Perform operations such as data augmentation, shuffle, and sampler. +class Mydata: +    def __init__(self): +        np.random.seed(58) +        self.__data = np.random.sample((5, 2)) +        self.__label = np.random.sample((5, 1)) +    def __getitem__(self, index): +        return (self.__data[index], self.__label[index]) +    def __len__(self): +        return len(self.__data) +dataset_generator = Mydata() +dataset = ds.GeneratorDataset(dataset_generator, ["data", "label"], shuffle=False) +# 2. Customize data augmentation. +dataset = dataset.map(operations=pyFunc, …) +# 3. 
batch +dataset = dataset.batch(batch_size, drop_remainder=True) +``` \ No newline at end of file diff --git a/docs/faq/source_zh_cn/backend_running.md b/docs/faq/source_zh_cn/backend_running.md index 890dd7a3f3f6b99ffd07749a834c253cd6f18187..7371faa3a3f62494d4f5d93bee201561cf6a7451 100644 --- a/docs/faq/source_zh_cn/backend_running.md +++ b/docs/faq/source_zh_cn/backend_running.md @@ -1,13 +1,74 @@ -# 后端运行类 +# 后端运行类 `Ascend` `GPU` `CPU` `环境准备` `运行模式` `模型训练` `初级` `中级` `高级` - + + +**Q:如何查看模型参数量?** + +A:可以直接加载CheckPoint统计,可能额外统计了动量和optimizer中的变量,需要过滤下相关变量。 +您可以参考如下接口统计网络参数量: + +```python +def count_params(net): + """Count number of parameters in the network + Args: + net (mindspore.nn.Cell): Mindspore network instance + Returns: + total_params (int): Total number of trainable params + """ + total_params = 0 + for param in net.trainable_params(): + total_params += np.prod(param.shape) + return total_params +``` + +具体[脚本链接](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/tinynet/src/utils.py)。 + +
+
+**Q:如何构建图像的多标签MindRecord格式数据集?**
+
+A:数据Schema可以按如下方式定义:`cv_schema_json = {"label": {"type": "int32", "shape": [-1]}, "data": {"type": "bytes"}}`。
+
+说明:label是一个numpy类型的数组,其中可以存放多个label值(如1, 1, 0, 1, 0, 1),这些label值对应同一个data,即同一个图像的二进制值。
+可以参考[将数据集转换为MindRecord](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/convert_dataset.html#id3)教程。
+
+
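下面用一个仅依赖NumPy的小例子演示样本字典与上述Schema的对应关系(示意代码:实际写入时需使用`mindspore.mindrecord.FileWriter`,此处为不依赖MindSpore环境的简化验证):

```python
import numpy as np

# 与上文一致的Schema定义:label为变长int32数组,data为图像二进制
cv_schema_json = {"label": {"type": "int32", "shape": [-1]}, "data": {"type": "bytes"}}

# 构造一条多标签样本:6个label值对应同一张图像的二进制数据
sample = {
    "label": np.array([1, 1, 0, 1, 0, 1], dtype=np.int32),
    "data": bytes([0x89, 0x50, 0x4E, 0x47]),  # 此处仅用几个字节示意图像内容
}

# 校验样本与Schema的类型约定是否一致
assert sample["label"].dtype == np.int32
assert isinstance(sample["data"], bytes)
print(sample["label"].shape)  # (6,),shape为[-1]表示变长一维数组
```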
+
+**Q:如何在训练过程中监控`loss`在最低的时候并保存训练参数?**
+
+A:可以自定义一个`Callback`。参考`ModelCheckpoint`的写法,此外再增加判断`loss`的逻辑:
+
+```python
+class EarlyStop(Callback):
+    def __init__(self):
+        self.loss = None
+    def step_end(self, run_context):
+        # 从RunContext中获取当前step的loss值
+        loss = run_context.original_args().net_outputs
+        if self.loss is None or loss < self.loss:
+            self.loss = loss
+            # 在此保存CheckPoint
+```
+
+
+ +**Q:`mindspore/tests`下怎样执行单个`ut`用例?** + +A:`ut`用例通常需要基于debug版本的MindSpore包,官网并没有提供。可以基于源码使用`sh build.sh`编译,然后通过`pytest`指令执行,debug模式编包不依赖后端。编译选项`sh build.sh -t on`,用例执行可以参考`tests/runtest.sh`脚本。 + +
+
+**Q:使用`nn.Conv2d`时,怎样获取期望大小的`feature map`?**
+
+A:`Conv2d shape`推导方法可以参考[Conv2d算子文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Conv2d.html#mindspore.nn.Conv2d),将`Conv2d`的`pad_mode`改成`same`,或者根据`Conv2d shape`推导公式自行计算`pad`;想要使得`shape`不变,一般pad为`(kernel_size-1)//2`。
+
+
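上述`shape`推导可以用下面这个与MindSpore无关的小函数来验证(基于通用的卷积输出尺寸公式,仅作示意):

```python
def conv2d_out_size(in_size, kernel_size, stride=1, pad=0, dilation=1):
    """按通用卷积公式计算单个维度上feature map的输出尺寸。"""
    return (in_size + 2 * pad - dilation * (kernel_size - 1) - 1) // stride + 1

# stride=1时取pad=(kernel_size-1)//2,输出尺寸与输入保持一致
k = 3
pad = (k - 1) // 2
print(conv2d_out_size(32, k, stride=1, pad=pad))  # 32
print(conv2d_out_size(32, k, stride=1, pad=0))    # 30
```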
**Q:MindSpore安装完成,执行训练时发现网络性能异常,权重初始化耗时过长,怎么办?** A:可能与环境中使用了`scipy 1.4`系列版本有关,通过`pip list | grep scipy`命令可查看scipy版本,建议改成MindSpore要求的`scipy`版本。版本第三方库依赖可以在`requirement.txt`中查看。 - + > 其中version替换为MindSpore具体的版本分支。
@@ -62,7 +123,35 @@ A:首先MindSpore训练使用的灰度图MNIST数据集。所以模型使用

**Q:在Ascend平台上,执行用例有时候会报错run task error,如何获取更详细的日志帮助问题定位?**

-A:可以通过开启slog获取更详细的日志信息以便于问题定位,修改`/var/log/npu/conf/slog/slog.conf`中的配置,可以控制不同的日志级别,对应关系为:0:debug、1:info、2:warning、3:error、4:null(no output log),默认值为1。
+A:使用msnpureport工具设置device侧日志级别,工具位置在:`/usr/local/Ascend/driver/tools/msnpureport`。
+
+- 全局级别:
+
+```bash
+/usr/local/Ascend/driver/tools/msnpureport -g info
+```
+
+- 模块级别:
+
+```bash
+/usr/local/Ascend/driver/tools/msnpureport -m SLOG:error
+```
+
+- Event级别:
+
+```bash
+/usr/local/Ascend/driver/tools/msnpureport -e disable/enable
+```
+
+- 多device id级别:
+
+```bash
+/usr/local/Ascend/driver/tools/msnpureport -d 1 -g warning
+```
+
+假设deviceID的取值范围是[0-7],`device0`-`device3`和`device4`-`device7`分别在一个os上。其中`device0`-`device3`共用一个日志配置文件;`device4`-`device7`共用一个配置文件。如果修改了`device0`-`device3`中的任意一个日志级别,其他`device`的日志级别也会被修改。如果修改了`device4`-`device7`中的任意一个日志级别,其他`device`的日志级别也会被修改。
+
+`Driver`包安装以后(假设安装路径为/usr/local/HiAI,在Windows环境下,`msnpureport.exe`执行文件在C:\ProgramFiles\Huawei\Ascend\Driver\tools\目录下),假设用户在/home/shihangbo/目录下直接执行命令行,则Device侧日志被导出到当前目录下,并以时间戳命名文件夹进行存放。
@@ -83,7 +172,7 @@ A:这边的问题是选择了Graph模式却使用了PyNative的写法,所以 - Graph模式:也称静态图模式或者图模式,将神经网络模型编译成一整张图,然后下发执行。该模式利用图优化等技术提高运行性能,同时有助于规模部署和跨平台运行。 -用户可以参考[官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/debug_in_pynative_mode.html)选择合适、统一的模式和写法来完成训练。 +用户可以参考[官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/debug_in_pynative_mode.html)选择合适、统一的模式和写法来完成训练。
diff --git a/docs/faq/source_zh_cn/conf.py b/docs/faq/source_zh_cn/conf.py index 95d7701759707ab95a3c199cd8a22e2e2cc1194d..7be5f453c21b75703c763a14c8180127aed60e6b 100644 --- a/docs/faq/source_zh_cn/conf.py +++ b/docs/faq/source_zh_cn/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/faq/source_zh_cn/index.rst b/docs/faq/source_zh_cn/index.rst index 6891b201504e3cc797ffef3f1128233addaeb335..9df80bfc87879ee96bfa01e47fa93b869ec642f7 100644 --- a/docs/faq/source_zh_cn/index.rst +++ b/docs/faq/source_zh_cn/index.rst @@ -15,7 +15,9 @@ MindSpore FAQ network_models platform_and_system backend_running + usage_migrate_3rd programming_language_extensions supported_features mindinsight_use - mindspore_lite \ No newline at end of file + mindspore_lite + mindspore_cpp_library \ No newline at end of file diff --git a/docs/faq/source_zh_cn/installation.md b/docs/faq/source_zh_cn/installation.md index 16bd43f42f12581dbfb2dce732a6eba113495aa7..e520d601a9a962fe594b4cc202db67462a73bac2 100644 --- a/docs/faq/source_zh_cn/installation.md +++ b/docs/faq/source_zh_cn/installation.md @@ -1,4 +1,4 @@ -# 安装类 +# 安装类 `Linux` `Windows` `Ascend` `GPU` `CPU` `环境准备` `初级` `中级` @@ -13,7 +13,7 @@ - + ## pip安装 @@ -74,7 +74,7 @@ A:目前MindSpore支持的情况是GPU+Linux与CPU+Windows的组合配置,Wi docker run -it --runtime=nvidia mindspore/mindspore-gpu:1.0.0 /bin/bash ``` -详细步骤可以参考社区提供的实践[张小白GPU安装MindSpore给你看(Ubuntu 18.04.5)](https://bbs.huaweicloud.com/blogs/198357)。 +详细步骤可以参考社区提供的实践[张小白教你安装Windows10的GPU驱动(CUDA和cuDNN)](https://bbs.huaweicloud.com/blogs/212446)。 在此感谢社区成员[张辉](https://bbs.huaweicloud.com/community/usersnew/id_1552550689252345)的分享。
@@ -140,7 +140,7 @@ A:常用的环境变量设置写入到`~/.bash_profile` 或 `~/.bashrc`中,

```python
import numpy as np
from mindspore import Tensor
-imort mindspore.ops as ops
+import mindspore.ops as ops
import mindspore.context as context

context.set_context(device_target="Ascend")
diff --git a/docs/faq/source_zh_cn/mindinsight_use.md b/docs/faq/source_zh_cn/mindinsight_use.md
index 32d259e88ce696310ba19ae269198d9277a47312..428241f4fa108809d00f4ac1e6a232430e3ef0e6 100644
--- a/docs/faq/source_zh_cn/mindinsight_use.md
+++ b/docs/faq/source_zh_cn/mindinsight_use.md
@@ -2,9 +2,9 @@

`Linux` `Ascend` `GPU` `环境准备`

-
+

-**Q:MindInsight启动失败并且提示:`ImportError: libcrypto.so.1.0.0: cannnot open shared object file: No such file or directory` 如何处理?**
+**Q:MindInsight启动失败并且提示:`ImportError: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory` 如何处理?**

A:需要在命令行中使用”export LD_LIBRARY_PATH=dir:$LD_LIBRARY_PATH”来导入LD_LIBRARY_PATH变量。
diff --git a/docs/faq/source_zh_cn/mindspore_cpp_library.md b/docs/faq/source_zh_cn/mindspore_cpp_library.md
new file mode 100644
index 0000000000000000000000000000000000000000..8dc48f2be7c289a8df676785c58589bbc68ee389
--- /dev/null
+++ b/docs/faq/source_zh_cn/mindspore_cpp_library.md
@@ -0,0 +1,19 @@
+# C++接口使用类
+
+
+
+**Q:编译应用时报错`/usr/bin/ld: warning: libxxx.so, needed by libmindspore.so, not found`怎么办?**
+
+A:寻找缺少的动态库文件所在目录,添加该路径到环境变量`LD_LIBRARY_PATH`中,环境变量设置参考[Ascend 310 AI处理器上使用MindIR模型进行推理#编译推理代码](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310_mindir.html#id6)。
+
+**Q:编译应用时出现`undefined reference to mindspore::GlobalContext::SetGlobalDeviceTarget(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char>> const &)`怎么办?**
+
+A:MindSpore使用旧版的C++ ABI,因此用户程序需与MindSpore一致,添加编译选项`-D_GLIBCXX_USE_CXX11_ABI=0`,否则编译链接会失败,CMake脚本编写参考[Ascend 310 AI处理器上使用MindIR模型进行推理#构建脚本介绍](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310_mindir.html#id5)。
+
+**Q:运行应用时出现`ModuleNotFoundError: No module named 'te'`怎么办?**
+
+A:首先确认环境安装是否正确,`te`、`topi`等whl包是否正确安装。如果用户环境中有多个Python版本,如Conda虚拟环境中,需`ldd name_of_your_executable_app`确认应用所链接的`libpython3.7m.so.1.0`是否与当前Python路径一致,如果不一致需要调整环境变量`LD_LIBRARY_PATH`顺序。
+
+**Q:运行应用时报错`error while loading shared libraries: libge_compiler.so: cannot open shared object file: No such file or directory`怎么办?**
+
+A:安装MindSpore所依赖的Ascend 310 AI处理器软件配套包时,`CANN`包不能安装`nnrt`版本,而是需要安装功能完整的`toolkit`版本。
\ No newline at end of file
diff --git a/docs/faq/source_zh_cn/mindspore_lite.md b/docs/faq/source_zh_cn/mindspore_lite.md
index 7a1c25be35eb07f7ea9dc2855ab560c3c87eea7e..f9f7e83107cc65a2450b5792662b1e70c0dfd645 100644
--- a/docs/faq/source_zh_cn/mindspore_lite.md
+++ b/docs/faq/source_zh_cn/mindspore_lite.md
@@ -1,6 +1,6 @@
# 端侧使用类

-
+

**Q:NPU推理存在什么限制?**

@@ -8,5 +8,5 @@ A:目前NPU仅支持在系统ROM版本EMUI>=11、芯片支持包括Kirin 9000

**Q:为什么使用裁剪工具裁剪后的静态库在集成时存在编译失败情况?**

-A:目前裁剪工具仅支持CPU的库,即编译命令中指定了`-e CPU`,具体使用请查看[使用裁剪工具降低库文件大小](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/cropper_tool.html)文档。
+A:目前裁剪工具仅支持CPU的库,即编译命令中指定了`-e CPU`,具体使用请查看[使用裁剪工具降低库文件大小](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.1/use/cropper_tool.html)文档。

diff --git a/docs/faq/source_zh_cn/network_models.md b/docs/faq/source_zh_cn/network_models.md
index 741d946538f3f6a31adc34c4b606d5ffe5a56047..449d7235124c77d120e4a8d34a9fd5192fe4cacb 100644
--- a/docs/faq/source_zh_cn/network_models.md
+++ b/docs/faq/source_zh_cn/network_models.md
@@ -2,7 +2,19 @@

`数据处理` `环境准备` `模型导出` `模型训练` `初级` `中级` `高级`

-
+
+
+**Q:使用MindSpore进行模型训练时,`CTCLoss`的输入参数有四个:`inputs`, `labels_indices`, `labels_values`, `sequence_length`,如何使用`CTCLoss`进行训练?**
+
+A:定义的`model.train`接口里接收的`dataset`可以是多个数据组成,形如(`data1`, `data2`, `data3`, ...),所以`dataset`是可以包含`inputs`,`labels_indices`,`labels_values`,`sequence_length`的信息的。只需要定义好相应形式的`dataset`,传入`model.train`里就可以。具体的可以了解下相应的[数据处理接口](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html)。
+
+
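`dataset`返回多列数据的形式可以先用纯Python生成器验证。以下为示意代码,不依赖MindSpore环境,其中各列的形状、取值均为笔者假设;实际训练时将该生成器传入`GeneratorDataset`并指定对应的`column_names`即可:

```python
import numpy as np

def ctc_data_generator(num_samples=2):
    """每次产出一条样本:(inputs, labels_indices, labels_values, sequence_length)。"""
    np.random.seed(0)
    for _ in range(num_samples):
        inputs = np.random.randn(10, 1, 26).astype(np.float32)       # [T, N, C]
        labels_indices = np.array([[0, 0], [0, 1]], dtype=np.int64)  # 稀疏标签下标
        labels_values = np.array([3, 7], dtype=np.int32)             # 稀疏标签取值
        sequence_length = np.array([10], dtype=np.int32)             # 每条序列的长度
        yield inputs, labels_indices, labels_values, sequence_length

for sample in ctc_data_generator(1):
    assert len(sample) == 4  # 与CTCLoss的四个输入一一对应
```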
+
+**Q:模型转移时如何把PyTorch的权重加载到MindSpore中?**
+
+A:首先读取PyTorch的`pth`文件,以`ResNet-18`为例,MindSpore的网络结构需与PyTorch保持一致,参数逐一转换命名后即可直接加载进网络。本例中参数只涉及`BN`和`Conv2D`;若还有其他层在MindSpore和PyTorch中名称不一致,需按同样的方式修改参数名称。
+
+
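参数名的对应关系可以用一个简单的重命名函数来示意。以下映射以`BatchNorm`常见的参数名对应为例,属于笔者假设的简化写法(假设`BatchNorm`层名含`bn`),实际转换时请以两边网络的真实参数名为准:

```python
# PyTorch与MindSpore中BatchNorm参数名的常见对应关系(假设的示例映射)
PT2MS_BN = {
    "weight": "gamma",
    "bias": "beta",
    "running_mean": "moving_mean",
    "running_var": "moving_variance",
}

def convert_name(pt_name):
    """将形如'bn1.running_mean'的PyTorch参数名转成MindSpore风格的参数名。"""
    prefix, _, leaf = pt_name.rpartition(".")
    # 仅对BatchNorm层(此处假设层名含'bn')做叶子名替换,Conv等其他层保持原名
    if "bn" in prefix and leaf in PT2MS_BN:
        leaf = PT2MS_BN[leaf]
    return prefix + "." + leaf if prefix else leaf

pt_names = ["conv1.weight", "bn1.weight", "bn1.bias", "bn1.running_mean", "bn1.running_var"]
ms_names = [convert_name(n) for n in pt_names]
print(ms_names)
# ['conv1.weight', 'bn1.gamma', 'bn1.beta', 'bn1.moving_mean', 'bn1.moving_variance']
```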
**Q:模型已经训练好,如何将模型的输出结果保存为文本或者`npy`的格式?** @@ -18,11 +30,11 @@ np.save("output.npy", out.asnumpy()) **Q:使用MindSpore做分割训练,必须将数据转为MindRecords吗?** -A:[build_seg_data.py](https://github.com/mindspore-ai/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/data/build_seg_data.py)是将数据集生成MindRecord的脚本,可以直接使用/适配下你的数据集。或者如果你想尝试自己实现数据集的读取,可以使用`GeneratorDataset`自定义数据集加载。 +A:[build_seg_data.py](https://github.com/mindspore-ai/mindspore/blob/r1.1/model_zoo/official/cv/deeplabv3/src/data/build_seg_data.py)是将数据集生成MindRecord的脚本,可以直接使用/适配下你的数据集。或者如果你想尝试自己实现数据集的读取,可以使用`GeneratorDataset`自定义数据集加载。 -[GenratorDataset 示例](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html#id5) +[GenratorDataset 示例](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html#id5) -[GenratorDataset API说明](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset) +[GenratorDataset API说明](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/dataset/mindspore.dataset.GeneratorDataset.html#mindspore.dataset.GeneratorDataset)
@@ -34,7 +46,7 @@ A:MindSpore的`ckpt`和TensorFlow的`ckpt`格式是不通用的,虽然都是 **Q:如何不将数据处理为MindRecord格式,直接进行训练呢?** -A:可以使用自定义的数据加载方式 `GeneratorDataset`,具体可以参考[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html)文档中的自定义数据集加载。 +A:可以使用自定义的数据加载方式 `GeneratorDataset`,具体可以参考[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html)文档中的自定义数据集加载。
@@ -46,7 +58,7 @@ A: MindSpore采用protbuf存储训练参数,无法直接读取其他框架 **Q:用MindSpore训练出的模型如何在Ascend 310上使用?可以转换成适用于HiLens Kit用的吗?** -A:Ascend 310需要运行专用的OM模型,先使用MindSpore导出ONNX或AIR模型,再转化为Ascend 310支持的OM模型。具体可参考[多平台推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_ascend_310.html)。可以,HiLens Kit是以Ascend 310为推理核心,所以前后两个问题本质上是一样的,需要转换为OM模型. +A:Ascend 310需要运行专用的OM模型,先使用MindSpore导出ONNX或AIR模型,再转化为Ascend 310支持的OM模型。具体可参考[多平台推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310.html)。可以,HiLens Kit是以Ascend 310为推理核心,所以前后两个问题本质上是一样的,需要转换为OM模型.
@@ -58,19 +70,19 @@ A:在构造网络的时候可以通过 `if self.training: x = dropput(x)`, **Q:从哪里可以查看MindSpore训练及推理的样例代码或者教程?** -A:可以访问[MindSpore官网教程训练](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)和[MindSpore官网教程推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/index.html)。 +A:可以访问[MindSpore官网教程训练](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/index.html)和[MindSpore官网教程推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/index.html)。
**Q:MindSpore支持哪些模型的训练?** -A:MindSpore针对典型场景均有模型训练支持,支持情况详见[Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md#)。 +A:MindSpore针对典型场景均有模型训练支持,支持情况详见[Release note](https://gitee.com/mindspore/mindspore/blob/r1.1/RELEASE.md#)。
**Q:MindSpore有哪些现成的推荐类或生成类网络或模型可用?** -A:目前正在开发Wide & Deep、DeepFM、NCF等推荐类模型,NLP领域已经支持Bert_NEZHA,正在开发MASS等模型,用户可根据场景需要改造为生成类网络,可以关注[MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)。 +A:目前正在开发Wide & Deep、DeepFM、NCF等推荐类模型,NLP领域已经支持Bert_NEZHA,正在开发MASS等模型,用户可根据场景需要改造为生成类网络,可以关注[MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo)。
@@ -141,7 +153,7 @@ if __name__ == "__main__": **Q:如何使用MindSpore拟合$f(x)=ax^2+bx+c$这类的二次函数?** -A:以下代码引用自MindSpore的官方教程的[代码仓](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/linear_regression.py) +A:以下代码引用自MindSpore的官方教程的[代码仓](https://gitee.com/mindspore/docs/blob/r1.1/tutorials/tutorial_code/linear_regression.py) 在以下几处修改即可很好的拟合$f(x)=ax^2+bx+c$: diff --git a/docs/faq/source_zh_cn/platform_and_system.md b/docs/faq/source_zh_cn/platform_and_system.md index 050ef99b749b1ec2bfa4673aef5ec7f819dcb7d2..9f839ab0daf61f45c72a471266c5f56a1341e2db 100644 --- a/docs/faq/source_zh_cn/platform_and_system.md +++ b/docs/faq/source_zh_cn/platform_and_system.md @@ -1,8 +1,26 @@ -# 平台系统类 +# 平台系统类 `Linux` `Windows` `Ascend` `GPU` `CPU` `硬件支持` `初级` `中级` - + + +**Q:PyNative模式和Graph模式的区别?** + +A: 在使用效率上,两个模式使用的算子是一致的,因此相同的网络和算子,分别在两个模式下执行时,精度效果是一致的。由于执行机理的差异,网络的执行性能是会不同的,并且在理论上,MindSpore提供的算子同时支持PyNative模式和Graph模式; + +在场景使用方面,Graph模式需要一开始就构建好网络结构,然后框架做整图优化和执行,对于网络固定没有变化,且需要高性能的场景比较适合; + +在不同硬件(`Ascend`、`GPU`和`CPU`)资源上都支持这两种模式; + +代码调试方面,由于是逐行执行算子,因此用户可以直接调试Python代码,在代码中任意位置打断点查看对应算子`/api`的输出或执行结果。而Graph模式由于在构造函数里只是完成网络构造,实际没有执行,因此在`construct`函数里打断点是无法获取对应算子的输出,而只能等整网执行中指定对应算子的输出打印,在网络执行完成后进行查看。 + +
+
+**Q:使用PyNative模式能否进行迁移学习?**
+
+A:PyNative模式兼容迁移学习,更多的教程信息可以参考[预训练模型加载代码详解](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/cv_mobilenetv2_fine_tune.html#id7)。
+
+
**Q:MindSpore只能在华为自己的`NPU`上跑么?** @@ -30,7 +48,7 @@ A:Ascend 310只能用作推理,MindSpore支持在Ascend 910训练,训练 **Q:安装运行MindSpore时,是否要求平台有GPU、NPU等计算单元?需要什么硬件支持?** -A:MindSpore当前支持CPU/GPU/Ascend /NPU。目前笔记本电脑或者有GPU的环境,都可以通过Docker镜像来试用。当前MindSpore Model Zoo中有部分模型已经支持GPU的训练和推理,其他模型也在不断地进行完善。在分布式并行训练方面,MindSpore当前支持GPU多卡训练。你可以通过[RoadMap](https://www.mindspore.cn/doc/note/zh-CN/master/roadmap.html)和项目[Release note](https://gitee.com/mindspore/mindspore/blob/master/RELEASE.md#)获取最新信息。 +A:MindSpore当前支持CPU/GPU/Ascend /NPU。目前笔记本电脑或者有GPU的环境,都可以通过Docker镜像来试用。当前MindSpore Model Zoo中有部分模型已经支持GPU的训练和推理,其他模型也在不断地进行完善。在分布式并行训练方面,MindSpore当前支持GPU多卡训练。你可以通过[RoadMap](https://www.mindspore.cn/doc/note/zh-CN/r1.1/roadmap.html)和项目[Release note](https://gitee.com/mindspore/mindspore/blob/r1.1/RELEASE.md#)获取最新信息。
@@ -42,7 +60,7 @@ A:MindSpore提供了可插拔式的设备管理接口,其他计算单元( **Q:MindSpore与ModelArts是什么关系,在ModelArts中能使用MindSpore吗?** -A:ModelArts是华为公有云线上训练及推理平台,MindSpore是华为深度学习框架,可以查阅[MindSpore官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/use_on_the_cloud.html),教程中详细展示了用户如何使用ModelArts来做MindSpore的模型训练。 +A:ModelArts是华为公有云线上训练及推理平台,MindSpore是华为深度学习框架,可以查阅[MindSpore官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/use_on_the_cloud.html),教程中详细展示了用户如何使用ModelArts来做MindSpore的模型训练。
diff --git a/docs/faq/source_zh_cn/programming_language_extensions.md b/docs/faq/source_zh_cn/programming_language_extensions.md index 1622c8990131a4d12673ad71f26e9a830ee893a3..304e2401372dfd5c47097f966e1c090c119d28fa 100644 --- a/docs/faq/source_zh_cn/programming_language_extensions.md +++ b/docs/faq/source_zh_cn/programming_language_extensions.md @@ -2,7 +2,7 @@ `Python` `支持计划` - + **Q:最近出来的taichi编程语言有Python扩展,类似`import taichi as ti`就能直接用了,MindSpore是否也支持?** diff --git a/docs/faq/source_zh_cn/supported_features.md b/docs/faq/source_zh_cn/supported_features.md index f66be5ea326f5c19a43df002084ff47a494c84cf..f9e99c25fbd44a3d941e1f10f7baaefa3ed08848 100644 --- a/docs/faq/source_zh_cn/supported_features.md +++ b/docs/faq/source_zh_cn/supported_features.md @@ -1,8 +1,20 @@ -# 特性支持类 +# 特性支持类 `特性优势` `端侧推理` `功能模块` `推理工具` - + + +**Q:MindSpore serving是否支持热加载,避免推理服务中断?** + +A:很抱歉,MindSpore当前还不支持热加载,需要重启。建议您可以跑多个Serving服务,切换版本时,重启部分。 + +
+ +**Q:请问MindSpore支持梯度截断吗?** + +A:支持,可以参考[梯度截断的定义和使用](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/transformer/src/transformer_for_train.py#L35)。 + +
**Q:如何在训练神经网络过程中对计算损失的超参数进行改变?** @@ -12,7 +24,7 @@ A:您好,很抱歉暂时还未有这样的功能。目前只能通过训练- **Q:第一次看到有专门的数据处理框架,能介绍下么?** -A:MindData提供数据处理异构硬件加速功能,高并发数据处理`pipeline`同时支持`NPU/GPU/CPU`,`CPU`占用降低30%,[点击查询](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/optimize_data_processing.html)。 +A:MindData提供数据处理异构硬件加速功能,高并发数据处理`pipeline`同时支持`NPU/GPU/CPU`,`CPU`占用降低30%,点击查询[优化数据处理](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/optimize_data_processing.html)。
@@ -36,7 +48,7 @@ A:当前如果与MindSpore展开论文、科研合作是可以获得免费云 **Q:MindSpore Lite的离线模型MS文件如何进行可视化,看到网络结构?** -A:MindSpore Lite正在往开源仓库`netron`上提交代码,后面MS模型会首先使用`netron`实现可视化。现在上`netron`开源仓还有一些问题需要解决,不过我们有内部使用的`netron`版本,可以在[这个链接](https://github.com/lutzroeder/netron/releases)里下载到。 +A:MindSpore Lite正在往开源仓库`netron`上提交代码,后面MS模型会首先使用`netron`实现可视化。现在上`netron`开源仓还有一些问题需要解决,不过我们有内部使用的`netron`版本,可以在[`netron`版本发布](https://github.com/lutzroeder/netron/releases)里下载到。
@@ -54,7 +66,7 @@ A:MindSpore分布式训练除了支持数据并行,还支持算子级模型 **Q:请问MindSpore实现了反池化操作了吗?类似于`nn.MaxUnpool2d` 这个反池化操作?** -A:目前 MindSpore 还没有反池化相关的接口。如果用户想自己实现的话,可以通过自定义算子的方式自行开发算子,自定义算子[详见这里](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_operator_ascend.html)。 +A:目前 MindSpore 还没有反池化相关的接口。如果用户想自己实现的话,可以通过自定义算子的方式自行开发算子,详情请见[自定义算子](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_operator.html)。
@@ -90,10 +102,16 @@ A:TensorFlow的对象检测Pipeline接口属于TensorFlow Model模块。待Min **Q:其他框架的脚本或者模型怎么迁移到MindSpore?** -A:关于脚本或者模型迁移,可以查询MindSpore官网中关于[网络迁移](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/migrate_3rd_scripts.html)的介绍。 +A:关于脚本或者模型迁移,可以查询MindSpore官网中关于[网络迁移](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/migrate_3rd_scripts.html)的介绍。
**Q:MindSpore是否附带开源电商类数据集?** A:暂时还没有,可以持续关注[MindSpore官网](https://www.mindspore.cn)。 + +
+
+**Q:能否使用第三方库numpy array封装MindSpore的Tensor数据?**
+
+A:不能,可能出现各种问题。例如:`numpy.array(Tensor(1)).astype(numpy.float32)`的报错信息为"ValueError: setting an array element with a sequence."。
diff --git a/docs/faq/source_zh_cn/supported_operators.md b/docs/faq/source_zh_cn/supported_operators.md
index c090f559a14ec20c8eba16d1357c7282913be45d..401faac9f05d135912f65fb87f8749be5ce695e8 100644
--- a/docs/faq/source_zh_cn/supported_operators.md
+++ b/docs/faq/source_zh_cn/supported_operators.md
@@ -1,8 +1,60 @@
-# 算子支持类
+# 算子支持类 `Ascend` `CPU` `GPU` `环境准备` `初级` `中级` `高级`

-
+
+
+**Q:`TransData`算子的功能是什么,能否优化性能?**
+
+A:`TransData`算子出现的场景是:如果网络中相互连接的算子使用的数据格式不一致(如NC1HWC0),框架就会自动插入`transdata`算子使其转换成一致的数据格式,然后再进行计算。可以考虑训练的时候用我们的`amp`做混合精度,这样能减少一些`fp32`的运算,应该能减少一些`transdata`算子的调用。
+
+
+
+**Q:算子`Concat`拼接包含多个Tensor的元组出错,似乎传入的`tensor list`元素个数>=192就会报错。如果要`Concat`包含多个Tensor的元组,有什么较好的解决方案?**
+
+A:这是昇腾`Concat`算子的底层规格限制:一次拼接的Tensor个数不能超过192个,可以尝试分开两次进行拼接。
+
+
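分开两次拼接的思路可以用NumPy示意如下:将超过上限的Tensor列表按每组不超过192个先行拼接,再对中间结果做一次拼接。示意代码,实际使用时把`np.concatenate`换成对应的`ops.Concat`调用即可:

```python
import numpy as np

MAX_CONCAT_NUM = 192  # 昇腾Concat算子单次可拼接的Tensor个数上限

def chunked_concat(tensors, axis=0):
    """按每组不超过MAX_CONCAT_NUM个分组拼接,再拼接中间结果。"""
    if len(tensors) <= MAX_CONCAT_NUM:
        return np.concatenate(tensors, axis=axis)
    partial = [
        np.concatenate(tensors[i:i + MAX_CONCAT_NUM], axis=axis)
        for i in range(0, len(tensors), MAX_CONCAT_NUM)
    ]
    return chunked_concat(partial, axis=axis)

arrays = [np.ones((1, 4)) for _ in range(400)]  # 400 > 192,直接拼接会超限
result = chunked_concat(arrays)
print(result.shape)  # (400, 4)
```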
+
+**Q:在使用`Conv2D`进行卷积定义的时候使用到了`group`的参数,`group`的值不是只需要保证可以被输入输出的维度整除即可了吗?`group`参数的传递方式是怎样的呢?**
+
+A:`Conv2D`算子是有这个约束条件的:当`group`大于1时,其值必须要与输入输出的通道数相等。不要使用`ops.Conv2D`,这个算子目前不支持`group`>1。目前MindSpore只有`nn.Conv2D`接口支持组卷积,但是有`group`要与输入输出的通道数相等的约束。
+`Conv2D`算子的`__init__`函数定义如下:
+
+```python
+def __init__(self,
+             out_channel,
+             kernel_size,
+             mode=1,
+             pad_mode="valid",
+             pad=0,
+             stride=1,
+             dilation=1,
+             group=1,
+             data_format="NCHW"):
+```
+
+其中带有`group`参数,这个参数默认就会被传到C++层。
+
+
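`group`参数的约束可以用一个简单的校验函数表达(示意代码,校验规则来自上文描述的约束):

```python
def check_conv2d_group(in_channel, out_channel, group):
    """校验组卷积的group约束:group>1时须与输入输出通道数相等。"""
    if group == 1:
        return True
    return group == in_channel and group == out_channel

assert check_conv2d_group(64, 128, 1)      # 普通卷积
assert check_conv2d_group(32, 32, 32)      # 深度卷积:group与输入输出通道数相等
assert not check_conv2d_group(64, 128, 2)  # 不满足约束
```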
+ +**Q:Convolution Layers有没有提供3D卷积?** + +A:目前MindSpore在Ascend上有支持3D卷积的计划。您可以关注官网的[支持列表](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/operator_list.html),等到算子支持后会在表中展示。 + +
+ +**Q:MindSpore支持矩阵转置吗?** + +A:支持,请参考`mindspore.ops.Transpose`的[算子教程](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Transpose.html#mindspore.ops.Transpose)。 + +
+
+**Q:请问MindSpore能算给定任意一个`tensor`的方差吗?**
+
+A:MindSpore目前暂无可以直接求出`tensor`方差的算子或接口。不过MindSpore有足够多的小算子可以支持用户实现这样的操作,你可以参考[class Moments(Cell)](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/_modules/mindspore/nn/layer/math.html#Moments)来实现。
+
+
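`Moments`的组合思路(先求均值,再求偏差平方的均值)可以用NumPy等价地表示。示意代码,实际实现时将对应操作替换为MindSpore的`ReduceMean`、`Square`等小算子即可:

```python
import numpy as np

def moments(x, axis=None):
    """仿照Moments的组合方式,由均值等基础操作组合出均值和方差。"""
    mean = np.mean(x, axis=axis, keepdims=True)
    variance = np.mean(np.square(x - mean), axis=axis)
    return np.squeeze(mean), variance

x = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
mean, var = moments(x)
print(mean, var)  # 与np.mean(x)、np.var(x)一致
```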
**Q:使用MindSpore-1.0.1版本在图数据下沉模式加载数据异常是什么原因?** @@ -32,13 +84,13 @@ A:在PyTorch中`padding_idx`的作用是将embedding矩阵中`padding_idx`位 **Q:Operations中`Tile`算子执行到`__infer__`时`value`值为`None`,丢失了数值是怎么回事?** A:`Tile`算子的`multiples input`必须是一个常量(该值不能直接或间接来自于图的输入)。否则构图的时候会拿到一个`None`的数据,因为图的输入是在图执行的时候才传下去的,构图的时候拿不到图的输入数据。 -相关的资料可以看[相关文档](https://www.mindspore.cn/doc/note/zh-CN/master/static_graph_syntax_support.html)的“其他约束”。 +相关的资料可以看[静态图语法支持](https://www.mindspore.cn/doc/note/zh-CN/r1.1/static_graph_syntax_support.html)。
**Q:官网的LSTM示例在Ascend上跑不通。** -A:目前LSTM只支持在GPU和CPU上运行,暂不支持硬件环境,您可以[点击这里](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list_ms.html)查看算子支持情况。 +A:目前LSTM只支持在GPU和CPU上运行,暂不支持硬件环境,您可以通过[MindSpore算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list_ms.html)查看算子支持情况。
diff --git a/docs/faq/source_zh_cn/usage_migrate_3rd.md b/docs/faq/source_zh_cn/usage_migrate_3rd.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f5dd1d3f6324ee9688837083eb861658be50744
--- /dev/null
+++ b/docs/faq/source_zh_cn/usage_migrate_3rd.md
@@ -0,0 +1,34 @@
+# 第三方框架迁移使用类
+
+
+
+**Q:请问想加载PyTorch预训练好的模型用于MindSpore模型finetune有什么方法?**
+
+A:需要把PyTorch和MindSpore的参数进行一一对应,因为网络定义的灵活性,所以没办法提供统一的转化脚本。
+需要根据场景书写定制化脚本,可参考[checkpoint高级用法](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/advanced_usage_of_checkpoint.html)。
+
+
+ +**Q:怎么将PyTorch的`dataset`转换成MindSpore的`dataset`?** + +A:MindSpore和PyTorch的自定义数据集逻辑是比较类似的,需要用户先定义一个自己的`dataset`类,该类负责定义`__init__`,`__getitem__`,`__len__`来读取自己的数据集,然后将该类实例化为一个对象(如:`dataset/dataset_generator`),最后将这个实例化对象传入`GeneratorDataset`(mindspore用法)/`DataLoader`(pytorch用法),至此即可以完成自定义数据集加载了。而mindspore在`GeneratorDataset`的基础上提供了进一步的`map`->`batch`操作,可以很方便的让用户在`map`内添加一些其他的自定义操作,并将其`batch`起来。 +对应的MindSpore的自定义数据集加载如下: + +```python +#1 Data enhancement,shuffle,sampler. +class Mydata: + def __init__(self): + np.random.seed(58) + self.__data = np.random.sample((5, 2)) + self.__label = np.random.sample((5, 1)) + def __getitem__(self, index): + return (self.__data[index], self.__label[index]) + def __len__(self): + return len(self.__data) +dataset_generator = Mydata() +dataset = ds.GeneratorDataset(dataset_generator, ["data", "label"], shuffle=False) +#2 Custom data enhancement +dataset = dataset.map(operations=pyFunc, …) +#3 batch +dataset = dataset.batch(batch_size, drop_remainder=True) +``` \ No newline at end of file diff --git a/docs/note/source_en/benchmark.md b/docs/note/source_en/benchmark.md index 9a715e46ceeffbf7a5319b36667743a6dccabbc9..e1e8eb3c4b3291e9e7151f0757f837da0e46d6d5 100644 --- a/docs/note/source_en/benchmark.md +++ b/docs/note/source_en/benchmark.md @@ -13,10 +13,10 @@ - + This document describes the MindSpore benchmarks. -For details about the MindSpore networks, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo). +For details about the MindSpore networks, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo). 
## Training Performance diff --git a/docs/note/source_en/conf.py b/docs/note/source_en/conf.py index a1fd767271ac159540440ed65bd0d676163366a9..a2abcc9090f480f4504ca43ff682a2e762a5a89f 100644 --- a/docs/note/source_en/conf.py +++ b/docs/note/source_en/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/note/source_en/design/mindarmour/differential_privacy_design.md b/docs/note/source_en/design/mindarmour/differential_privacy_design.md index 7038864e653ba412238865bae8c9e12e72f7a735..1b9ab9636881e5fb5c88f93c31333fd8ff147609 100644 --- a/docs/note/source_en/design/mindarmour/differential_privacy_design.md +++ b/docs/note/source_en/design/mindarmour/differential_privacy_design.md @@ -14,7 +14,7 @@ - + ## Overall Design @@ -54,10 +54,10 @@ Compared with traditional differential privacy, ZCDP and RDP provide stricter pr ## Code Implementation -- [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): implements the noise generation mechanism required by differential privacy training, including simple Gaussian noise, adaptive Gaussian noise, and adaptive clipping Gaussian noise. -- [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): implements the fundamental logic of using the noise generation mechanism to add noise during backward propagation. -- [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py): implements the callback function for computing the differential privacy budget. During model training, the current differential privacy budget is returned. 
-- [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py): implements the logic of computing the loss and gradient as well as the gradient truncation logic of differential privacy training, which is the entry for users to use the differential privacy training capability. +- [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): implements the noise generation mechanism required by differential privacy training, including simple Gaussian noise, adaptive Gaussian noise, and adaptive clipping Gaussian noise. +- [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): implements the fundamental logic of using the noise generation mechanism to add noise during backward propagation. +- [monitor.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/monitor/monitor.py): implements the callback function for computing the differential privacy budget. During model training, the current differential privacy budget is returned. +- [model.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/train/model.py): implements the logic of computing the loss and gradient as well as the gradient truncation logic of differential privacy training, which is the entry for users to use the differential privacy training capability. ## References diff --git a/docs/note/source_en/design/mindarmour/fuzzer_design.md b/docs/note/source_en/design/mindarmour/fuzzer_design.md index 34cfc563dc10d75662950114a1db2337fe5f9596..b4a1db0bf5723ad6b51d68da6bd3a05d2adcfd51 100644 --- a/docs/note/source_en/design/mindarmour/fuzzer_design.md +++ b/docs/note/source_en/design/mindarmour/fuzzer_design.md @@ -13,7 +13,7 @@ - + ## Background @@ -61,10 +61,10 @@ Through multiple rounds of mutations, you can obtain a series of variant data in ## Code Implementation -1. 
[fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py): overall fuzz testing process. -2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py): neuron coverage rate metrics, including KMNC, NBC, and SNAC. -3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py): image mutation methods, including methods based on image pixel value changes and affine transformation methods. -4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks): methods for generating adversarial examples based on white-box and black-box attacks. +1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/fuzzing.py): overall fuzz testing process. +2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/model_coverage_metrics.py): neuron coverage rate metrics, including KMNC, NBC, and SNAC. +3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/image_transform.py): image mutation methods, including methods based on image pixel value changes and affine transformation methods. +4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/r1.1/mindarmour/adv_robustness/attacks): methods for generating adversarial examples based on white-box and black-box attacks. 
## References diff --git a/docs/note/source_en/design/mindinsight/graph_visual_design.md b/docs/note/source_en/design/mindinsight/graph_visual_design.md index 8633d64951454033d95e05a8302c4cdeb825a59d..8f3e7e712d1459f9b4403ea137a740630e974dad 100644 --- a/docs/note/source_en/design/mindinsight/graph_visual_design.md +++ b/docs/note/source_en/design/mindinsight/graph_visual_design.md @@ -15,7 +15,7 @@ - + ## Background @@ -71,4 +71,4 @@ RESTful API is used for data interaction between the MindInsight frontend and ba #### File API Design Data interaction between MindSpore and MindInsight uses the data format defined by [Protocol Buffer](https://developers.google.cn/protocol-buffers/docs/pythontutorial). -The main entry is the [summary.proto file](https://gitee.com/mindspore/mindinsight/blob/master/mindinsight/datavisual/proto_files/mindinsight_summary.proto). A message object of a computational graph is defined as `GraphProto`. For details about `GraphProto`, see the [anf_ir.proto file](https://gitee.com/mindspore/mindinsight/blob/master/mindinsight/datavisual/proto_files/mindinsight_anf_ir.proto). +The main entry is the [summary.proto file](https://gitee.com/mindspore/mindinsight/blob/r1.1/mindinsight/datavisual/proto_files/mindinsight_summary.proto). A message object of a computational graph is defined as `GraphProto`. For details about `GraphProto`, see the [anf_ir.proto file](https://gitee.com/mindspore/mindinsight/blob/r1.1/mindinsight/datavisual/proto_files/mindinsight_anf_ir.proto). 
diff --git a/docs/note/source_en/design/mindinsight/tensor_visual_design.md b/docs/note/source_en/design/mindinsight/tensor_visual_design.md index 86a364148c6002864ea62d9c5b38bda03775674c..b94fb0c06576241898e997f873c9520a94979603 100644 --- a/docs/note/source_en/design/mindinsight/tensor_visual_design.md +++ b/docs/note/source_en/design/mindinsight/tensor_visual_design.md @@ -14,7 +14,7 @@ - + ## Background @@ -55,7 +55,7 @@ Figure 2 shows tensors recorded by a user in a form of a histogram. ### API Design -In tensor visualization, there are file API and RESTful API. The file API is the [summary.proto](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/utils/summary.proto) file, which is used for data interconnection between MindInsight and MindSpore. RESTful API is an internal API used for data interaction between the MindInsight frontend and backend. +In tensor visualization, there are file API and RESTful API. The file API is the [summary.proto](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/utils/summary.proto) file, which is used for data interconnection between MindInsight and MindSpore. RESTful API is an internal API used for data interaction between the MindInsight frontend and backend. #### File API Design @@ -102,4 +102,4 @@ The `summary.proto` file is the main entry. TensorProto data is stored in the su } ``` -TensorProto is defined in the [anf_ir.proto](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/utils/anf_ir.proto) file. +TensorProto is defined in the [anf_ir.proto](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/utils/anf_ir.proto) file. 
diff --git a/docs/note/source_en/design/mindinsight/training_visual_design.md b/docs/note/source_en/design/mindinsight/training_visual_design.md index 05cadfe220ab59d397cc4e2342d2fbf6d43325b6..40380e07483da6422999b31cf82af36691fd5ee8 100644 --- a/docs/note/source_en/design/mindinsight/training_visual_design.md +++ b/docs/note/source_en/design/mindinsight/training_visual_design.md @@ -18,7 +18,7 @@ - + [MindInsight](https://gitee.com/mindspore/mindinsight) is a visualized debugging and tuning component of MindSpore. MindInsight can be used to complete tasks such as training visualization, performance tuning, and precision tuning. @@ -40,11 +40,11 @@ The training information collection function in MindSpore consists of training i Training information collection APIs include: -- Training information collection API based on the summary operator. This API contains four summary operators, that is, the ScalarSummary operator for recording scalar data, the ImageSummary operator for recording image data, the HistogramSummary operator for recording parameter distribution histogram data, and the TensorSummary operator for recording tensor data. For details about the operators, see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list.html). +- Training information collection API based on the summary operator. This API contains four summary operators, that is, the ScalarSummary operator for recording scalar data, the ImageSummary operator for recording image data, the HistogramSummary operator for recording parameter distribution histogram data, and the TensorSummary operator for recording tensor data. For details about the operators, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list.html). -- Training information collection API based on the Python API. 
You can use the [SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value) method to collect training information in Python code. +- Training information collection API based on the Python API. You can use the [SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value) method to collect training information in Python code. -- Easy-to-use training information collection callback. The [SummaryCollector](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector) callback function can be used to conveniently collect common training information to training logs. +- Easy-to-use training information collection callback. The [SummaryCollector](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector) callback function can be used to conveniently collect common training information to training logs. The training information persistence module mainly includes a summary_record module used to manage a cache and a write_pool module used to process data in parallel and write data into a file. After the training information is made persistent, it is stored in the training log file (summary file). 
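The collect-then-persist flow described above (a `summary_record` module managing a cache, plus a `write_pool` module writing the cached data to the summary file) can be sketched in plain Python. This is an illustrative toy only, not MindSpore's implementation; apart from the `add_value` method name, every class, field, and file name below is invented:

```python
import json
import tempfile
from pathlib import Path

class ToySummaryRecord:
    """Illustrative stand-in for the summary_record cache described above."""

    def __init__(self, log_dir):
        self._cache = []  # in-memory cache managed by the recorder
        self._file = Path(log_dir) / "events.summary.json"

    def add_value(self, plugin, tag, value):
        # Mirrors the role of SummaryRecord.add_value: stage one training datum.
        self._cache.append({"plugin": plugin, "tag": tag, "value": value})

    def flush(self):
        # Mirrors the write_pool's job: persist cached data to the summary file.
        self._file.write_text(json.dumps(self._cache))
        return self._file

record = ToySummaryRecord(tempfile.mkdtemp())
record.add_value("scalar", "loss", 0.42)
record.add_value("scalar", "accuracy", 0.91)
path = record.flush()
print(json.loads(path.read_text())[0]["tag"])  # loss
```

In the real framework the cache is flushed asynchronously by a worker pool rather than by an explicit `flush()` call.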
diff --git a/docs/note/source_en/design/mindspore/architecture.md b/docs/note/source_en/design/mindspore/architecture.md index 1ad9274690e89603abd59a6f2da73af93d9679f7..0c0a4b97f446253938a5814b6697ec5c94d778dd 100644 --- a/docs/note/source_en/design/mindspore/architecture.md +++ b/docs/note/source_en/design/mindspore/architecture.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `On Device` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor` - + The MindSpore framework consists of the Frontend Expression layer, Graph Engine layer, and Backend Runtime layer. diff --git a/docs/note/source_en/design/mindspore/architecture_lite.md b/docs/note/source_en/design/mindspore/architecture_lite.md index 5b26ac7ec999c60db6f0062efa15355c15020e84..7a99d70fd2b84671a18c5b6d2d2f096a5903fbb0 100644 --- a/docs/note/source_en/design/mindspore/architecture_lite.md +++ b/docs/note/source_en/design/mindspore/architecture_lite.md @@ -2,7 +2,7 @@ `Linux` `Windows` `On Device` `Inference Application` `Intermediate` `Expert` `Contributor` - + The overall architecture of MindSpore Lite is as follows: diff --git a/docs/note/source_en/design/mindspore/distributed_training_design.md b/docs/note/source_en/design/mindspore/distributed_training_design.md index cf963d8a9f819eeecf08184300edf060361f3834..d07526820e27946e90d9b202b53b5da036eef0e2 100644 --- a/docs/note/source_en/design/mindspore/distributed_training_design.md +++ b/docs/note/source_en/design/mindspore/distributed_training_design.md @@ -18,7 +18,7 @@ - + ## Background @@ -66,12 +66,12 @@ This section describes how the data parallel mode `ParallelMode.DATA_PARALLEL` w 1. Collective communication - - [management.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/communication/management.py): This file covers the `helper` function APIs commonly used during the collective communication process, for example, the APIs for obtaining the number of clusters and device ID. 
When collective communication is executed on the Ascend chip, the framework loads the `libhccl.so` library file in the environment and uses it to call the communication APIs from the Python layer to the underlying layer. - - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/operations/comm_ops.py): MindSpore encapsulates supported collective communication operations as operators and stores the operators in this file. The operators include `AllReduce`, `AllGather`, `ReduceScatter`, and `Broadcast`. `PrimitiveWithInfer` defines the attributes required by the operators, as well as the `shape` and `dtype` inference methods from the input to the output during graph composition. + - [management.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/communication/management.py): This file covers the `helper` function APIs commonly used during the collective communication process, for example, the APIs for obtaining the number of clusters and device ID. When collective communication is executed on the Ascend chip, the framework loads the `libhccl.so` library file in the environment and uses it to call the communication APIs from the Python layer to the underlying layer. + - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ops/operations/comm_ops.py): MindSpore encapsulates supported collective communication operations as operators and stores the operators in this file. The operators include `AllReduce`, `AllGather`, `ReduceScatter`, and `Broadcast`. `PrimitiveWithInfer` defines the attributes required by the operators, as well as the `shape` and `dtype` inference methods from the input to the output during graph composition. 2. Gradient aggregation - - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/nn/wrap/grad_reducer.py): This file implements the gradient aggregation process. After the input parameter `grads` is expanded by using `HyperMap`, the `AllReduce` operator is inserted. 
The global communication group is used. You can also perform custom development by referring to this section based on your network requirements. In MindSpore, standalone and distributed execution shares a set of network encapsulation APIs. In the `Cell`, `ParallelMode` is used to determine whether to perform gradient aggregation. For details about the network encapsulation APIs, see the `TrainOneStepCell` code implementation. + - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/nn/wrap/grad_reducer.py): This file implements the gradient aggregation process. After the input parameter `grads` is expanded by using `HyperMap`, the `AllReduce` operator is inserted. The global communication group is used. You can also perform custom development by referring to this section based on your network requirements. In MindSpore, standalone and distributed execution share a set of network encapsulation APIs. In the `Cell`, `ParallelMode` is used to determine whether to perform gradient aggregation. For details about the network encapsulation APIs, see the `TrainOneStepCell` code implementation. ## Automatic Parallelism @@ -122,19 +122,19 @@ As a key feature of MindSpore, automatic parallelism is used to implement hybrid ### Automatic Parallel Code 1. Tensor layout model - - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/tensor_layout): This directory contains the definitions and implementation of functions related to the tensor distribution model. `tensor_layout.h` declares the member variables `tensor_map_origin_`, `tensor_shape_`, and `device_arrangement_` required by a tensor distribution model. In `tensor_redistribution.h`, the related methods for implementing the `from_origin_` and `to_origin_` transformation between tensor distributions are declared. 
The deduced redistribution operation is stored in `operator_list_` and returned, in addition, the communication cost `comm_cost_`,, memory cost `memory_cost_`, and calculation cost `computation_cost_` required for redistribution are calculated. + - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/tensor_layout): This directory contains the definitions and implementation of functions related to the tensor distribution model. `tensor_layout.h` declares the member variables `tensor_map_origin_`, `tensor_shape_`, and `device_arrangement_` required by a tensor distribution model. In `tensor_redistribution.h`, the related methods for implementing the `from_origin_` and `to_origin_` transformation between tensor distributions are declared. The deduced redistribution operation is stored in `operator_list_` and returned; in addition, the communication cost `comm_cost_`, memory cost `memory_cost_`, and calculation cost `computation_cost_` required for redistribution are calculated. 2. Distributed operators - - [ops_info](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/ops_info): This directory contains the implementation of distributed operators. In `operator_info.h`, the base class `OperatorInfo` of distributed operator implementation is defined. A distributed operator to be developed shall inherit the base class and explicitly implement related imaginary functions. The `InferTensorInfo`, `InferTensorMap`, and `InferDevMatrixShape` functions define the algorithms for deriving the input and output tensor distribution model of the operator. The `InferForwardCommunication` and `InferMirrorOps` functions define the extra calculation and communication operations to be inserted for operator sharding. The `CheckStrategy` and `GenerateStrategies` functions define the parallel strategy validation and generation for the operator. 
According to the parallel strategy `SetCostUnderStrategy`, the parallel cost `operator_cost_` of the distributed operator is generated. + - [ops_info](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/ops_info): This directory contains the implementation of distributed operators. In `operator_info.h`, the base class `OperatorInfo` of distributed operator implementation is defined. A distributed operator to be developed must inherit the base class and implement the related virtual functions. The `InferTensorInfo`, `InferTensorMap`, and `InferDevMatrixShape` functions define the algorithms for deriving the input and output tensor distribution model of the operator. The `InferForwardCommunication` and `InferMirrorOps` functions define the extra calculation and communication operations to be inserted for operator sharding. The `CheckStrategy` and `GenerateStrategies` functions define the parallel strategy validation and generation for the operator. Based on the parallel strategy, `SetCostUnderStrategy` generates the parallel cost `operator_cost_` of the distributed operator. 3. Strategy search algorithm - - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/auto_parallel): The shard strategy search algorithm is implemented in this directory. `graph_costmodel.h` defines the graph composition information. Each point indicates an operator `OperatorInfo`. The directed edge `edge_costmodel.h` indicates the input and output relationship of operators and the redistribution cost. `operator_costmodel.h` defines the cost model of each operator, including the calculation cost, communication cost, and memory cost. `dp_algorithm_costmodel.h` describes the main process of the dynamic planning algorithm, which consists of a series of graph operations. `costmodel.h` defines the data structures of cost and graph operations. 
+ - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/auto_parallel): The shard strategy search algorithm is implemented in this directory. `graph_costmodel.h` defines the graph composition information. Each node represents an operator `OperatorInfo`. The directed edge `edge_costmodel.h` indicates the input and output relationship of operators and the redistribution cost. `operator_costmodel.h` defines the cost model of each operator, including the calculation cost, communication cost, and memory cost. `dp_algorithm_costmodel.h` describes the main process of the dynamic programming algorithm, which consists of a series of graph operations. `costmodel.h` defines the data structures of cost and graph operations. 4. Device management - - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/device_manager.h): This file is used to create and manage cluster device communication groups. The device matrix model is defined by `device_matrix.h`, and the communication domain is managed by `group_manager.h`. + - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/device_manager.h): This file is used to create and manage cluster device communication groups. The device matrix model is defined by `device_matrix.h`, and the communication domain is managed by `group_manager.h`. 5. Entire graph sharding - - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h), and [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_parallel.h): The two files contain the core implementation of the automatic parallel process. `step_auto_parallel.h` calls the strategy search process and generates the `OperatorInfo` of the distributed operator. 
Then in `step_parallel.h`, processes such as operator sharding and tensor redistribution are processed to reconstruct the standalone computing graph in distributed mode. + - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h) and [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/step_parallel.h): The two files contain the core implementation of the automatic parallel process. `step_auto_parallel.h` calls the strategy search process and generates the `OperatorInfo` of the distributed operator. Then, in `step_parallel.h`, steps such as operator sharding and tensor redistribution are performed to reconstruct the standalone computing graph in distributed mode. 6. Backward propagation of communication operators - - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/_grad/grad_comm_ops.py): This file defines the backward propagation of communication operators, such as `AllReduce` and `AllGather`. + - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ops/_grad/grad_comm_ops.py): This file defines the backward propagation of communication operators, such as `AllReduce` and `AllGather`. 
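The data-parallel gradient aggregation that `grad_reducer.py` performs can be illustrated with a small pure-Python simulation of an `AllReduce` mean over the global communication group. This is a sketch under the assumption of synchronous devices and list-valued gradients; the helper name is invented and no MindSpore API is used:

```python
def all_reduce_mean(per_device_grads):
    """Simulate AllReduce(mean) over the global communication group:
    every device ends up with the element-wise mean of all devices' gradients."""
    num_devices = len(per_device_grads)
    reduced = [sum(g) / num_devices for g in zip(*per_device_grads)]
    # Each device receives an identical copy of the reduced gradients.
    return [list(reduced) for _ in range(num_devices)]

# Two simulated devices computed gradients on different data slices.
grads_dev0 = [2.0, -4.0, 1.0]
grads_dev1 = [6.0, 0.0, -1.0]
synced = all_reduce_mean([grads_dev0, grads_dev1])
print(synced[0])  # [4.0, -2.0, 0.0]
```

After this step every simulated device holds the same averaged gradients, which is what keeps the data-parallel replicas' weights identical after the optimizer update.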
diff --git a/docs/note/source_en/design/mindspore/images/auto_parallel.png b/docs/note/source_en/design/mindspore/images/auto_parallel.png index 800b3b2536c739dcc48a1e46b5f65fc327e4ce8d..d0135541eb76cedfcb22f2eb3e470a9d5d913957 100644 Binary files a/docs/note/source_en/design/mindspore/images/auto_parallel.png and b/docs/note/source_en/design/mindspore/images/auto_parallel.png differ diff --git a/docs/note/source_en/design/mindspore/mindir.md b/docs/note/source_en/design/mindspore/mindir.md index 59f55e31952a36ce34cce402f9a8f328a3f835b3..e6ac2ecc9195a047839e95ecf5401ec4061ab626 100644 --- a/docs/note/source_en/design/mindspore/mindir.md +++ b/docs/note/source_en/design/mindspore/mindir.md @@ -18,7 +18,7 @@ - + ## Overview @@ -88,7 +88,7 @@ lambda (x, y) c end ``` -The corresponding MindIR is [ir.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_en/design/mindspore/images/ir/ir.dot). +The corresponding MindIR is [ir.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_en/design/mindspore/images/ir/ir.dot). ![image](./images/ir/ir.png) @@ -122,7 +122,7 @@ def hof(x): return res ``` -The corresponding MindIR is [hof.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_en/design/mindspore/images/ir/hof.dot). +The corresponding MindIR is [hof.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_en/design/mindspore/images/ir/hof.dot). ![image](./images/ir/hof.png) In the actual network training scripts, the automatic derivation generic function `GradOperation` and `Partial` and `HyperMap` that are commonly used in the optimizer are typical high-order functions. Higher-order semantics greatly improve the flexibility and simplicity of MindSpore representations. @@ -144,7 +144,7 @@ def fibonacci(n): return fibonacci(n-1) + fibonacci(n-2) ``` -The corresponding MindIR is [cf.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_en/design/mindspore/images/ir/cf.dot). 
+The corresponding MindIR is [cf.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_en/design/mindspore/images/ir/cf.dot). ![image](./images/ir/cf.png) `fibonacci` is a top-level function graph. Two function graphs at the top level are selected and called by `switch`. `✓fibonacci` is the True branch of the first `if`, and `✗fibonacci` is the False branch of the first `if`. `✓✗fibonacci` called in `✗fibonacci` is the True branch of `elif`, and `✗✗fibonacci` is the False branch of `elif`. The key is, in a MindIR, conditional jumps and recursion are represented in the form of higher-order control flows. For example, `✓✗fibonacci` and `✗fibonacci` are transferred in as parameters of the `switch` operator. `switch` selects a function as the return value based on the condition parameter. In this way, `switch` performs a binary selection operation on the input functions as common values and does not call the functions. The real function call is completed on CNode following `switch`. @@ -170,7 +170,7 @@ def ms_closure(): return out1, out2 ``` -The corresponding MindIR is [closure.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_en/design/mindspore/images/ir/closure.dot). +The corresponding MindIR is [closure.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_en/design/mindspore/images/ir/closure.dot). ![image](./images/ir/closure.png) In the example, `a` and `b` are free variables because the variables `a` and `b` in `func_inner` are parameters defined in the referenced parent graph `func_outer`. The variable `closure` is a closure, which is the combination of the function `func_inner` and its context `func_outer(1, 2)`. Therefore, the result of `out1` is 4, which is equivalent to `1+2+1`, and the result of `out2` is 5, which is equivalent to `1+2+2`. 
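Because the closure semantics captured in the MindIR mirror Python's, the `ms_closure` example above can be re-run as ordinary Python (outside MindSpore) to confirm the stated results, `out1` being 4 and `out2` being 5:

```python
def func_outer(a, b):
    def func_inner(c):
        # a and b are free variables captured from the parent scope func_outer.
        return a + b + c
    return func_inner

def ms_closure():
    closure = func_outer(1, 2)  # the function func_inner plus its context
    out1 = closure(1)  # 1 + 2 + 1
    out2 = closure(2)  # 1 + 2 + 2
    return out1, out2

print(ms_closure())  # (4, 5)
```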
diff --git a/docs/note/source_en/design/mindspore/profiler_design.md b/docs/note/source_en/design/mindspore/profiler_design.md index d50f623c90860053e06ea22d64b6c08fa3e52d24..9d15dc4ec78822332633614d148c428311cfc7e2 100644 --- a/docs/note/source_en/design/mindspore/profiler_design.md +++ b/docs/note/source_en/design/mindspore/profiler_design.md @@ -26,7 +26,7 @@ - + ## Background diff --git a/docs/note/source_en/design/overall.rst b/docs/note/source_en/design/overall.rst index bec96d2c15254cf9a888536a6cab4aff59ef9c00..5aeb51194e95a4155161c9c0475c7f23654863c2 100644 --- a/docs/note/source_en/design/overall.rst +++ b/docs/note/source_en/design/overall.rst @@ -4,5 +4,6 @@ Overall Design .. toctree:: :maxdepth: 1 + technical_white_paper mindspore/architecture mindspore/architecture_lite diff --git a/docs/note/source_en/design/technical_white_paper.md b/docs/note/source_en/design/technical_white_paper.md new file mode 100644 index 0000000000000000000000000000000000000000..7f7d956d6c05073bc9dc2febe163ad643100fbcb --- /dev/null +++ b/docs/note/source_en/design/technical_white_paper.md @@ -0,0 +1,5 @@ +# Technical White Paper + +Please stay tuned... + + diff --git a/docs/note/source_en/env_var_list.md b/docs/note/source_en/env_var_list.md new file mode 100644 index 0000000000000000000000000000000000000000..37cbdbd9219cd3fcf4111cb52d6b1141c161b6ea --- /dev/null +++ b/docs/note/source_en/env_var_list.md @@ -0,0 +1,5 @@ +# Environment Variables List + +No English version available right now, welcome to contribute. 
+ + diff --git a/docs/note/source_en/glossary.md b/docs/note/source_en/glossary.md index 6ec11c85d39df9af4669d847255544882ac78be4..b4f6ac83b79060b6f1fc6dab24a3fcb43f102501 100644 --- a/docs/note/source_en/glossary.md +++ b/docs/note/source_en/glossary.md @@ -2,11 +2,12 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Beginner` `Intermediate` `Expert` - + | Acronym and Abbreviation | Description | | ----- | ----- | -| ACL | Ascend Computer Language, for users to develop deep neural network applications, which provides the C++ API library including device management, context management, stream management, memory management, model loading and execution, operator loading and execution, media data processing, etc. | +| ACL | Ascend Computer Language, for users to develop deep neural network applications, which provides the C++ API library including device management, context management, stream management, memory management, model loading and execution, operator loading and execution, media data processing, etc. | +| AIR | Ascend Intermediate Representation. Similar to ONNX, it is an open file format for machine learning models; it is defined by Huawei and is better suited to the Ascend AI Processor. | | Ascend | Name of Huawei Ascend series chips. | | CCE | Cube-based Computing Engine, which is an operator development tool oriented to hardware architecture programming. | | CCE-C | Cube-based Computing Engine C, which is C code developed by the CCE. | @@ -21,7 +22,6 @@ | FP16 | 16-bit floating point, which is a half-precision floating point arithmetic format, consuming less memory. | | FP32 | 32-bit floating point, which is a single-precision floating point arithmetic format. | | GE | Graph Engine, MindSpore computational graph execution engine, which is responsible for optimizing hardware (such as operator fusion and memory overcommitment) based on the front-end computational graph and starting tasks on the device side. 
| -| AIR | Ascend Intermediate Representation, such as ONNX, it is an open file format for machine learning. It is defined by Huawei and is better suited to Ascend AI processor.| | GHLO | Graph High Level Optimization. GHLO includes optimization irrelevant to hardware (such as dead code elimination), auto parallel, and auto differentiation. | | GLLO | Graph Low Level Optimization. GLLO includes hardware-related optimization and in-depth optimization related to the combination of hardware and software, such as operator fusion and buffer fusion. | | Graph Mode | MindSpore static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution, featuring high performance. | @@ -35,11 +35,12 @@ | MindArmour | The security module of MindSpore, which improves the confidentiality, integrity and usability of the model through technical means such as differential privacy and adversarial attack and defense. MindArmour prevents attackers from maliciously modifying the model or cracking the internal components of the model to steal the parameters of the model. | | MindData | MindSpore data framework, which provides data loading, enhancement, dataset management, and visualization. | | MindInsight | MindSpore visualization component, which visualizes information such as scalars, images, computational graphs, and model hyperparameters. | +| MindIR | MindSpore IR, a functional IR based on graph representation, defines a scalable graph structure and operator IR representation, and stores the basic data structure of MindSpore. | | MindRecord | A data format defined by MindSpore, and a module for reading, writing, searching, and converting datasets in MindSpore format. | | MindSpore | Huawei-led open-source deep learning framework. | | MindSpore Lite | A lightweight deep neural network inference engine that provides the inference function for models trained by MindSpore on the device side. 
| | MNIST database | Modified National Institute of Standards and Technology database, a large handwritten digit database, which is usually used to train various image processing systems. | -| ONNX | Open Neural Network Exchange, is an open format built to represent machine learning models.| +| ONNX | Open Neural Network Exchange, an open format built to represent machine learning models. | | PyNative Mode | MindSpore dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model. | | ResNet-50 | Residual Neural Network 50, a residual neural network proposed by Kaiming He et al. from Microsoft Research. | | Schema | Data set structure definition file, which defines the fields contained in a dataset and the field types. | diff --git a/docs/note/source_en/help_seeking_path.md b/docs/note/source_en/help_seeking_path.md index 9ac8c6bb6da04e502a89729e32b1e8644c82db51..c4e51e8c6fd9548433f2ae18ca90c61e0ed0c25f 100644 --- a/docs/note/source_en/help_seeking_path.md +++ b/docs/note/source_en/help_seeking_path.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Beginner` `Intermediate` `Expert` - + This document describes how to seek help and support when you encounter problems in using MindSpore. The following flowchart shows the overall help-seeking process which starts from users encountering a problem in using MindSpore and ends with them finding a proper solution. Help-seeking methods are introduced based on the flowchart. @@ -25,5 +25,5 @@ This document describes how to seek help and support when you encounter problems - If you want a detailed solution, start a help post on the [Ascend forum](https://forum.huawei.com/enterprise/en/forum-100504.html). - After the post is sent, a forum moderator collects the question and contacts technical experts to answer the question. 
The question will be resolved within three working days. - Resolve the problem by referring to solutions provided by technical experts. - + If the expert test result shows that the MindSpore function needs to be improved, you are advised to submit an issue in the [MindSpore repository](https://gitee.com/mindspore). Issues will be resolved in later versions. diff --git a/docs/note/source_en/image_classification_lite.md b/docs/note/source_en/image_classification_lite.md index 0ca49c2c89032753fb2731b4ae5860936f91faeb..28c6373b4c111eb2563e2921453bb2e75547b4b6 100644 --- a/docs/note/source_en/image_classification_lite.md +++ b/docs/note/source_en/image_classification_lite.md @@ -1,10 +1,10 @@ # Image Classification Model Support (Lite) - + ## Image classification introduction -Image classification is to identity what an image represents, to predict the object list and the probabilites. For example,the following tabel shows the classification results after mode inference. +Image classification is to identify what an image represents and to predict the object list and the probabilities. For example, the following table shows the classification results after model inference. ![image_classification](images/image_classification_result.png) @@ -15,7 +15,7 @@ Image classification is to identity what an image represents, to predict the obj | tree | 0.8584 | | houseplant | 0.7867 | -Using MindSpore Lite to realize image classification [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification). +Using MindSpore Lite to realize image classification [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/image_classification). 
## Image classification model list @@ -35,6 +35,6 @@ The following table shows the data of some image classification models using Min | [GhostNet_int8](https://download.mindspore.cn/model_zoo/official/lite/ghostnet_lite/ghostnet_int8.ms) | 15.3 | 73.6% | - | - | 31.452 | | [VGG-Small-low_bit](https://download.mindspore.cn/model_zoo/official/lite/low_bit_quant/low_bit_quant_bs_1.ms) | 17.8 | 93.7% | - | - | 9.082 | | [ResNet50-0.65x](https://download.mindspore.cn/model_zoo/official/lite/adversarial_pruning_lite/adversarial_pruning.ms) | 48.6 | 80.2% | - | - | 89.816 | -| [plain-CNN-ResNet18](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_disstill_res18_cifar10_bs_1_update.ms) | 97.3 | 95.4% | - | - | 63.227 | -| [plain-CNN-ResNet34](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_disstill_res34_cifar10_bs_1_update.ms) | 80.5 | 95.0% | - | - | 20.652 | -| [plain-CNN-ResNet50](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_disstill_res50_cifar10_bs_1_update.ms) | 89.6 | 94.5% | - | - | 24.561 | +| [plain-CNN-ResNet18](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_distill_res18_cifar10_bs_1_update.ms) | 97.3 | 95.4% | - | - | 63.227 | +| [plain-CNN-ResNet34](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_distill_res34_cifar10_bs_1_update.ms) | 80.5 | 95.0% | - | - | 20.652 | +| [plain-CNN-ResNet50](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_distill_res50_cifar10_bs_1_update.ms) | 89.6 | 94.5% | - | - | 24.561 | diff --git a/docs/note/source_en/image_segmentation_lite.md b/docs/note/source_en/image_segmentation_lite.md index 2bd3c91b2ddd1b0f72de93ec773db29850e7f1eb..39f4a40700f50b9a7d246cade970d96b14e97483 100644 --- a/docs/note/source_en/image_segmentation_lite.md +++ b/docs/note/source_en/image_segmentation_lite.md @@ -1,12 +1,12 @@ # Image 
Segmentation Model Support (Lite) - + ## Image Segmentation introduction Image segmentation is used to detect the position of the object in the picture or a pixel belongs to which object. -Using MindSpore Lite to perform image segmentation [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_segmentation). +Using MindSpore Lite to perform image segmentation [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/image_segmentation). ## Image segmentation model list diff --git a/docs/note/source_en/index.rst b/docs/note/source_en/index.rst index e3aa74528572fe3a544fd4f85dd3b04f5502852e..3aee92c4f224a296c542dae07d7a2f2d5ae9025f 100644 --- a/docs/note/source_en/index.rst +++ b/docs/note/source_en/index.rst @@ -25,7 +25,9 @@ MindSpore Design And Specification benchmark network_list operator_list + syntax_list model_lite + env_var_list .. toctree:: :glob: @@ -34,7 +36,6 @@ MindSpore Design And Specification glossary roadmap - paper_list help_seeking_path community diff --git a/docs/note/source_en/network_list_ms.md b/docs/note/source_en/network_list_ms.md index 95416313e766dae7ddaafd22da645f65861c3683..cbfc42b51ab6094c8c1d3da044c7c615f54a0886 100644 --- a/docs/note/source_en/network_list_ms.md +++ b/docs/note/source_en/network_list_ms.md @@ -9,70 +9,93 @@ - + ## Model Zoo -| Domain | Sub Domain | Network | Ascend (Graph) | Ascend (PyNative) | GPU (Graph) | GPU (PyNative)| CPU (Graph) | CPU (PyNative) -|:------ |:------| :----------- |:------ |:------ |:------ |:------ |:----- |:----- -|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | 
Doing | Doing -| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported | Supported -| Computer Vision (CV) | Image Classification | [LeNet(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet_quant/src/lenet_fusion.py) | Supported | Doing | Supported | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [ResNet-50(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet50_quant/models/resnet_quant.py) | Supported | Doing | Doing | Doing | Doing | Doing -|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | 
Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [MobileNetV2(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [NASNET](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | [GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ghostnet/src/ghostnet.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Image Classification | 
[ResNet50-0.65x](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/resnet50_adv_pruning/src/resnet_imgnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [TinyNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/tinynet/src/tinynet.py) | Supported | Doing | Doing | Doing | Doing | Doing - Computer Vision(CV) | Image Classification | [FaceAttributes](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision(CV) | Image Classification | [FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision(CV) | Image Classificationn | [FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Image Classification | [SqueezeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Doing | Doing | Doing | Doing | Doing -|Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported | Supported | Supported | Supported | Supported -| Computer Vision (CV) | Object Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | 
Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53(Quantization)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53_quant/src/darknet.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Object Detection | [MaskRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn/src/maskrcnn/mask_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision(CV) | Object Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Computer Vision(CV) | Object Detection | [Retinaface-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing -| Computer Vision(CV) | Object Detection | [CenterFace](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision(CV) | Object Detection | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Object Detection | [MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | 
Doing | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Object Detection | [SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Object Detection | [YoloV4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Text Detection | [PSENet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Text Recognition | [CNNCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Computer Vision (CV) | Semantic Segmentation | [UNet2D-Medical](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Supported | Doing | Doing | Doing | Doing - Computer Vision (CV) | Keypoint Detection | [Openpose](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported 
| Supported | Supported | Doing | Doing -| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported | Supported -| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing -| Natural Language Processing (NLP) | Natural Language Understanding | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Natural Language Processing (NLP) | Natural Language Understanding | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported| Doing | Doing -| Recommender | Recommender System, Search, Ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing -| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Graph Neural Networks (GNN) | Text Classification | 
[GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing -| Graph Neural Networks (GNN) | Recommender System | [BGCF](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing -| Audio | Auto Tagging | [FCN-4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Supported | Doing | Doing | Doing | Doing -| High Performance Computing | Molecular Dynamics | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Doing | Doing | Doing | Doing | Doing -| High Performance Computing | Ocean Model | [GOMO](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Doing | Doing | Doing +### Official -> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate classic network scripts. 
+| Domain | Sub Domain | Network | Ascend (Graph) | Ascend (PyNative) | GPU (Graph) | GPU (PyNative) | CPU (Graph) | CPU (PyNative) +|:------ |:------| :----------- |:------ |:------ |:------ |:------ |:----- |:----- +|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing | Doing +| Computer Vision (CV) | Image Classification | [CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/cnn_direction_model/src/cnn_direction_model.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing +| Computer Vision (CV) | Image Classification | [GoogLeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing | Doing +| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [InceptionV4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/inceptionv4/src/inceptionv4.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported | Supported +| Computer 
Vision (CV) | Image Classification | [LeNet (Quantization)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/lenet_quant/src/lenet_fusion.py) | Supported | Doing | Supported | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [MobileNetV1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv1/src/mobilenet_v1.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Supported | Doing +| Computer Vision (CV) | Image Classification | [MobileNetV2 (Quantization)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing +| Computer Vision (CV) | Image Classification | [NASNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing +| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Supported | Doing +| Computer Vision (CV) | Image Classification | [ResNet-50 (Quantization)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet50_quant/models/resnet_quant.py) | Supported | Doing | Doing | Doing | Doing | Doing +|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | 
Supported | Doing | Doing +|Computer Vision (CV) | Image Classification | [ResNeXt50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Doing | Supported | Supported | Doing | Doing +|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [ShuffleNetV1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/shufflenetv1/src/shufflenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing +| Computer Vision (CV) | Image Classification | [SqueezeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [Tiny-DarkNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/tinydarknet/src/tinydarknet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing +| Computer Vision (CV) | Image Classification | [Xception](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/xception/src/Xception.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [CenterFace](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | 
Object Detection | [CTPN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ctpn/src/ctpn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [Faster R-CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [Mask R-CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/maskrcnn/src/maskrcnn/mask_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [Mask R-CNN (MobileNetV1)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [RetinaFace-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing +|Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Doing | Supported | Supported | Supported | Doing +| Computer Vision (CV) | Object Detection | [SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/warpctc/src/warpctc.py) | Supported | Doing | Supported | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [YOLOv3-ResNet18](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | 
[YOLOv3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing +| Computer Vision (CV) | Object Detection | [YOLOv3-DarkNet53 (Quantization)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_darknet53_quant/src/darknet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [YOLOv4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Text Detection | [DeepText](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/deeptext/src/Deeptext/deeptext_vgg16.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Text Detection | [PSENet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Text Recognition | [CNN+CTC](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Semantic Segmentation | [DeepLabV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Doing | Doing | Doing | Supported | Doing +| Computer Vision (CV) | Semantic Segmentation | [U-Net2D (Medical)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Keypoint Detection | [OpenPose](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Keypoint Detection | 
[SimplePoseNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/simple_pose/src/model.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Optical Character Recognition | [CRNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/crnn/src/crnn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [FastText](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/fasttext/src/fasttext_model.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [GRU](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/gru/src/seq2seq.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/lstm/src/lstm.py) | Supported | Doing | Supported | Supported | Supported | Supported +| Natural Language Processing (NLP) | Natural Language Understanding | 
[Transformer](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBERT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [TextCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/textcnn/src/textcnn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported | Supported | Doing +| Recommender | Recommender System, Search, Ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing +| Recommender | Recommender System | [NCF](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/ncf/src/ncf.py) | Supported | Doing | Supported | Doing | Doing | Doing +| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Graph Neural Networks (GNN) | Recommender System | [BGCF](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing + +### Research + +| Domain | Sub Domain | Network | Ascend 
(Graph) | Ascend (PyNative) | GPU (Graph) | GPU (PyNative) | CPU (Graph) | CPU (PyNative) +|:------ |:------| :----------- |:------ |:------ |:------ |:------ |:----- |:----- +| Computer Vision (CV) | Image Classification | [FaceAttributes](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [FaceRecognition](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceRecognition/src/init_network.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Classification | [FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Object Detection | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Keypoint Detection | [CenterNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/centernet/src/centernet_pose.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Computer Vision (CV) | Image Style Transfer | [CycleGAN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/cycle_gan/src/models/cycle_gan.py) | Doing | Doing | Doing | Supported | Supported | Doing +| 
Natural Language Processing (NLP) | Natural Language Understanding | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Supported | Doing | Doing | Doing | Doing +| Natural Language Processing (NLP) | Natural Language Understanding | [TextRCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/nlp/textrcnn/src/textrcnn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Recommender | Recommender System, CTR prediction | [AutoDis](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/recommend/autodis/src/autodis.py) | Supported | Doing | Doing | Doing | Doing | Doing +| Audio | Audio Tagging | [FCN-4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Doing | Doing | Doing | Doing | Doing +| High Performance Computing | Molecular Dynamics | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Supported | Doing | Doing | Doing | Doing +| High Performance Computing | Ocean Model | [GOMO](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Doing | Doing | Doing + +> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/r1.1/mindinsight/wizard/) to quickly generate classic network scripts. 
diff --git a/docs/note/source_en/object_detection_lite.md b/docs/note/source_en/object_detection_lite.md index b3c7e2d9b3ecd91d3158f71d2f09dc52e7f1aaef..3f20272a0bafb302f5d069c2d6a427db9e7f84e5 100644 --- a/docs/note/source_en/object_detection_lite.md +++ b/docs/note/source_en/object_detection_lite.md @@ -1,6 +1,6 @@ # Object Detection Model Support (Lite) - + ## Object detection introduction @@ -12,7 +12,7 @@ Object detection can identify the object in the image and its position in the im | -------- | ----------- | ---------------- | | mouse | 0.78 | [10, 25, 35, 43] | -Using MindSpore Lite to implement object detection [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection). +See the [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/object_detection) of using MindSpore Lite to implement object detection. ## Object detection model list diff --git a/docs/note/source_en/operator_list_implicit.md b/docs/note/source_en/operator_list_implicit.md index d74f3e143afeca92bbdfa0f48f797dfdd6a1ff39..115648639d672ad17b365f4e3ee63140e437dac5 100644 --- a/docs/note/source_en/operator_list_implicit.md +++ b/docs/note/source_en/operator_list_implicit.md @@ -12,7 +12,7 @@ - + ## Implicit Type Conversion @@ -38,68 +38,69 @@ | op name | |:--------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Assign.html) | -| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.AssignSub.html) | -| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyMomentum.html) | -| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FusedSparseAdam.html) | -| 
[mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FusedSparseLazyAdam.html) | -| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FusedSparseFtrl.html) | -| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FusedSparseProximalAdagrad.html) | -| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAdaMax.html) | -| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAdadelta.html) | -| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAdagrad.html) | -| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAdagradV2.html) | -| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyAdagrad.html) | -| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyAdagradV2.html) | -| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyProximalAdagrad.html) | -| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyProximalAdagrad.html) | -| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyAddSign.html) | -| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyPowerSign.html) | -| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyGradientDescent.html) | -| 
[mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApplyProximalGradientDescent.html) | -| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyFtrl.html) | -| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseApplyFtrlV2.html) | -| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BitwiseAnd.html) | -| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BitwiseOr.html) | -| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BitwiseXor.html) | -| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TensorAdd.html) | -| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sub.html) | -| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Mul.html) | -| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Pow.html) | -| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Minimum.html) | -| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Maximum.html) | -| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.RealDiv.html) | -| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Div.html) | -| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.DivNoNan.html) | -| 
[mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FloorDiv.html) | -| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TruncateDiv.html) | -| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TruncateMod.html) | -| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Mod.html) | -| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FloorMod.html) | -| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Atan2.html) | -| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SquaredDifference.html) | -| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Xdivy.html) | -| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Xlogy.html) | -| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Equal.html) | -| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApproximateEqual.html) | -| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.NotEqual.html) | -| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Greater.html) | -| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GreaterEqual.html) | -| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Less.html) | -| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LessEqual.html) | -| 
[mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalAnd.html) | -| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalOr.html) | -| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterNdUpdate.html) | -| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterNdAdd.html) | -| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterNdSub.html) | -| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterNonAliasingAdd.html) | -| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterUpdate.html) | -| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterMax.html) | -| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterMin.html) | -| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterAdd.html) | -| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterSub.html) | -| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterMul.html) | -| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ScatterDiv.html) | -| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.AssignAdd.html) | +| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Assign.html) | +| 
[mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.AssignSub.html) | +| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyMomentum.html) | +| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FusedSparseAdam.html) | +| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FusedSparseLazyAdam.html) | +| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FusedSparseFtrl.html) | +| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FusedSparseProximalAdagrad.html) | +| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAdaMax.html) | +| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAdadelta.html) | +| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAdagrad.html) | +| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAdagradV2.html) | +| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyAdagrad.html) | +| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyAdagradV2.html) | +| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyProximalAdagrad.html) | +| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyProximalAdagrad.html) | +| 
[mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyAddSign.html) | +| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyPowerSign.html) | +| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyGradientDescent.html) | +| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApplyProximalGradientDescent.html) | +| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyFtrl.html) | +| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseApplyFtrlV2.html) | +| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BitwiseAnd.html) | +| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BitwiseOr.html) | +| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BitwiseXor.html) | +| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TensorAdd.html) | +| [mindspore.ops.Add](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Add.html) | +| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sub.html) | +| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Mul.html) | +| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Pow.html) | +| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Minimum.html) | +| 
[mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Maximum.html) | +| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.RealDiv.html) | +| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Div.html) | +| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.DivNoNan.html) | +| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FloorDiv.html) | +| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TruncateDiv.html) | +| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TruncateMod.html) | +| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Mod.html) | +| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FloorMod.html) | +| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Atan2.html) | +| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SquaredDifference.html) | +| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Xdivy.html) | +| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Xlogy.html) | +| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Equal.html) | +| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApproximateEqual.html) | +| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.NotEqual.html) | +| 
[mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Greater.html) | +| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GreaterEqual.html) | +| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Less.html) | +| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LessEqual.html) | +| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalAnd.html) | +| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalOr.html) | +| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterNdUpdate.html) | +| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterNdAdd.html) | +| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterNdSub.html) | +| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterNonAliasingAdd.html) | +| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterUpdate.html) | +| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterMax.html) | +| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterMin.html) | +| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterAdd.html) | +| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterSub.html) | +| 
[mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterMul.html) | +| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ScatterDiv.html) | +| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.AssignAdd.html) | -> \ No newline at end of file +> diff --git a/docs/note/source_en/operator_list_lite.md b/docs/note/source_en/operator_list_lite.md index 600963b57213f840e2d0dac94e0aabefc255f9d8..f6c509410c1220a062de58df7a487e099b6c35d4 100644 --- a/docs/note/source_en/operator_list_lite.md +++ b/docs/note/source_en/operator_list_lite.md @@ -2,123 +2,131 @@ `Linux` `On Device` `Inference Application` `Beginner` `Intermediate` `Expert` - + -| Operation | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | NPU | 支持的Tensorflow
Lite算子 | 支持的Caffe
Lite算子 | 支持的Onnx
Lite算子 | -| --------------------- | ------------ | ------------ | ------------ | ------------- | ------------ | ------------ | --------- | ------------------------------- | ------------------------ | ----------------------------------------------- | -| Abs | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Abs | | Abs | -| Add | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Add | | Add, Int8Add | -| AddN | | Supported | | | | | | AddN | | | -| Argmax | | Supported | Supported | Supported | | | | Argmax | ArgMax | ArgMax | -| Argmin | | Supported | Supported | Supported | | | | Argmin | | | -| AvgPool | Supported | Supported | Supported | Supported | Supported | Supported | Supported | MeanPooling | Pooling | AveragePool, GlobalAveragePool, Int8AveragePool | -| BatchNorm | Supported | Supported | Supported | Supported | Supported | Supported | | | BatchNorm | BatchNormalization | -| BatchToSpace | | Supported | Supported | Supported | Supported | Supported | | BatchToSpace, BatchToSpaceND | | | -| BiasAdd | | Supported | Supported | Supported | Supported | Supported | | | | BiasAdd | -| Broadcast | | Supported | | | | | | BroadcastTo | | Expand | -| Cast | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Cast, QUANTIZE, DEQUANTIZE | | Cast | -| Ceil | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Ceil | | Ceil | -| Concat | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Concat | Concat | Concat | -| ConstantOfShape | | Supported | | | | | | | | ConstantOfShape | -| Conv2d | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Conv2D | Convolution | Conv, Int8Conv, ConvRelu, Int8ConvRelu | -| Conv2dTranspose | Supported | Supported | Supported | Supported | Supported | Supported | Supported | DeConv2D | Deconvolution | ConvTranspose | -| Cos | Supported | 
Supported | Supported | Supported | Supported | Supported | Supported | Cos | | Cos | -| Crop | Supported | Supported | Supported | Supported | | | | | Crop | | -| CustomExtractFeatures | | Supported | | | | | | ExtractFeatures | | | -| CustomNormalize | | Supported | | | | | | Normalize | | | -| CustomPredict | | Supported | | | | | | Predict | | | -| DeDepthwiseConv2D | | Supported | Supported | Supported | | | | | Deconvolution | | -| DepthToSpace | | Supported | Supported | Supported | Supported | Supported | | DepthToSpace | | DepthToSpace | -| DepthwiseConv2dNative | Supported | Supported | Supported | Supported | Supported | Supported | Supported | DepthwiseConv2D | Convolution | | -| DetectionPostProcess | | Supported | Supported | Supported | | | | Custom | | | -| Div | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Div, RealDiv | | Div | -| Eltwise | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | Eltwise | Sum, Max[3] | -| Elu | | Supported | | | | | | | Elu | Elu, NonMaxSuppression | -| Equal | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Equal | | Equal | -| Exp | | Supported | | | Supported | Supported | | Exp | Exp | Exp | -| ExpandDims | | Supported | Supported | Supported | | | | ExpandDims | | | -| Fill | | Supported | | | | | | Fill | | | -| Flatten | | Supported | | | | | | | Flatten | | -| Floor | Supported | Supported | Supported | Supported | Supported | Supported | Supported | flOOR | | Floor | -| FloorDiv | Supported | Supported | | | Supported | Supported | Supported | FloorDiv | | | -| FloorMod | Supported | Supported | | | Supported | Supported | Supported | FloorMod | | | -| FullConnection | Supported | Supported | Supported | Supported | Supported | Supported | | FullyConnected | InnerProduct | | -| FusedBatchNorm | Supported | Supported | Supported | Supported | | | Supported | FusedBatchNorm | | | -| GatherNd | | 
Supported | Supported | Supported | | | | GatherND | | | -| GatherV2 | | Supported | Supported | Supported | Supported | Supported | | Gather | | Gather | -| Greater | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Greater | | Greater | -| GreaterEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | GreaterEqual | | | -| HashtableLookup | | Supported | | | | | | HashtableLookup | | | -| Hswish | Supported | Supported | Supported | Supported | Supported | Supported | Supported | HardSwish | | | -| InstanceNorm | | Supported | | | | | | InstanceNorm | | | -| L2Norm | | Supported | | | | | | L2_NORMALIZATION | | | -| LeakyReLU | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LeakyRelu | | LeakyRelu | -| Less | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Less | | Less | -| LessEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LessEqual | | | -| LRN | | Supported | | | | | | LocalResponseNorm | | Lrn, LRN | -| Log | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Log | | Log | -| LogicalAnd | Supported | Supported | | | Supported | Supported | Supported | LogicalAnd | | And | -| LogicalNot | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LogicalNot | | Not | -| LogicalOr | Supported | Supported | | | Supported | Supported | Supported | LogicalOr | | Or | -| LshProjection | | Supported | | | | | | LshProjection | | | -| LSTM | | Supported | | | | | | | | LSTM | -| MatMul | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | | MatMul | -| Maximum | Supported | Supported | | | Supported | Supported | Supported | Maximum | | | -| MaxPool | Supported | Supported | Supported | Supported | Supported | Supported | Supported | MaxPooling | Pooling | MaxPool, GlobalMaxPool | -| 
Minimum | Supported | Supported | | | Supported | Supported | Supported | Minimum | | Min | -| Mul | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Mul | | Mul | -| Neg | Supported | Supported | | | Supported | Supported | Supported | Neg | | Neg | -| NotEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | NotEqual | | | -| OneHot | | Supported | | | | | | OneHot | | OneHot | -| Pad | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Pad, MirrorPad | | Pad | -| Pow | | Supported | Supported | Supported | | | | Pow | Power | Pow[2] | -| PReLU | | Supported | | | Supported | Supported | | PRELU | PReLU | PRelu | -| Range | | Supported | | | | | | Range | | | -| Rank | | Supported | | | | | | Rank | | | -| ReduceASum | | Supported | | | | | | | Reduction | | -| ReduceMax | Supported | Supported | Supported | Supported | | | | ReduceMax | | ReduceMax | -| ReduceMean | Supported | Supported | Supported | Supported | Supported | Supported | | Mean | Reduction | ReduceMean | -| ReduceMin | Supported | Supported | Supported | Supported | | | | ReduceMin | | ReduceMin | -| ReduceProd | Supported | Supported | Supported | Supported | | | | ReduceProd | | ReduceProd | -| ReduceSum | Supported | Supported | Supported | Supported | Supported | Supported | | Sum | Reduction | ReduceSum | -| ReduceSumSquare | Supported | Supported | Supported | Supported | | | | | Reduction | ReduceSumSquare | -| ReLU | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Relu | ReLU | Relu | -| ReLU6 | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Relu6 | ReLU6 | Clip[1] | -| Reshape | Supported | Supported | Supported | Supported | Supported | Supported | | Reshape | Reshape | Reshape,Flatten | -| Resize | | Supported | Supported | Supported | Supported | Supported | Supported | ResizeBilinear, NearestNeighbor | Interp 
| | -| Reverse | | Supported | | | | | | reverse | | | -| ReverseSequence | | Supported | | | | | | ReverseSequence | | | -| Round | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Round | | Round | -| Rsqrt | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Rsqrt | | | -| Scale | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | Scale | | -| ScatterNd | | Supported | | | | | | ScatterNd | | | -| Shape | | Supported | Supported | Supported | | | Supported | Shape | | Shape | -| Sigmoid | Supported | Supported | Supported | Supported | Supported | Supported | | Logistic | Sigmoid | Sigmoid | -| Sin | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sin | | Sin | -| Slice | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Slice | Slice | Slice | -| SkipGram | | Supported | | | | | | SKipGram | | | -| Softmax | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Softmax | Softmax | Softmax | -| SpaceToBatch | | Supported | Supported | Supported | Supported | Supported | | SpaceToBatch | | | -| SpaceToBatchND | | Supported | Supported | Supported | Supported | Supported | | SpaceToBatchND | | | -| SpaceToDepth | | Supported | | | | | | SpaceToDepth | | SpaceToDepth | -| SparseToDense | | Supported | | | | | | SpareToDense | | | -| Split | Supported | Supported | Supported | Supported | | | Supported | Split, SplitV | | Split | -| Sqrt | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sqrt | | Sqrt | -| Square | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Square | | | -| SquaredDifference | Supported | Supported | | | Supported | Supported | Supported | SquaredDifference | | | -| Squeeze | | Supported | Supported | Supported | Supported | Supported | | Squeeze | | Squeeze | -| StridedSlice 
| | Supported | Supported | Supported | | | Supported | StridedSlice | | | -| Stack | Supported | Supported | | | Supported | Supported | | Stack | | | -| Sub | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sub | | Sub | -| Tanh | Supported | Supported | | | Supported | Supported | | Tanh | TanH | Tanh, Sign | -| Tile | | Supported | | | | | | Tile | Tile | Tile | -| TopK | | Supported | Supported | Supported | | | | TopKV2 | | TopK | -| Transpose | Supported | Supported | | | Supported | Supported | Supported | Transpose | Permute | Transpose | -| Unique | | Supported | | | | | | Unique | | | -| Unsqueeze | | Supported | Supported | Supported | | | Supported | | | Unsqueeze | -| Unstack | | Supported | | | | | | Unstack | | | -| Where | | Supported | | | | | | Where | | | -| ZerosLike | | Supported | | | | | | ZerosLike | | | +| Operation
  | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | NPU
  | TensorFlow
Lite operators supported | Caffe
operators supported | Onnx
operators supported | TensorFlow
operators supported | +| --------------------- | ------------ | ------------ | ------------ | ------------- | ------------ | ------------ | --------- | ------------------------------- | ------------------------ | ----------------------------------------------- | ----------------------------------------------- | +| Abs | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Abs | | Abs | | +| Add | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Add | | Add, Int8Add | Add, AddV2 | +| AddN | | Supported | | | | | | AddN | | | | +| Assert | | Supported | | | | | | | | | Assert | +| Argmax | | Supported | Supported | Supported | Supported | Supported | | Argmax | ArgMax | ArgMax | | +| Argmin | | Supported | Supported | Supported | Supported | Supported | | Argmin | | | | +| AvgPool | Supported | Supported | Supported | Supported | Supported | Supported | Supported | MeanPooling | Pooling | AveragePool, GlobalAveragePool, Int8AveragePool | | +| BatchNorm | Supported | Supported | Supported | Supported | Supported | Supported | | | BatchNorm | BatchNormalization | | +| BatchToSpace | | Supported | Supported | Supported | Supported | Supported | | BatchToSpace, BatchToSpaceND | | | | +| BiasAdd | | Supported | Supported | Supported | Supported | Supported | | | | BiasAdd | BiasAdd | +| Broadcast | | Supported | | | | | | BroadcastTo | | Expand | | +| Cast | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Cast, QUANTIZE, DEQUANTIZE | | Cast | Cast | +| Ceil | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Ceil | | Ceil | | +| Concat | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Concat | Concat | Concat | ConcatV2 | +| ConstantOfShape | | Supported | | | | | | | | ConstantOfShape | | +| Conv2d | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Conv2D | 
Convolution | Conv, Int8Conv, ConvRelu, Int8ConvRelu | Conv2D | +| Conv2dTranspose | Supported | Supported | Supported | Supported | Supported | Supported | Supported | DeConv2D | Deconvolution | ConvTranspose | | +| Cos | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Cos | | Cos | | +| Crop | Supported | Supported | Supported | Supported | | | | | Crop | | | +| CustomExtractFeatures | | Supported | | | | | | ExtractFeatures | | | | +| CustomNormalize | | Supported | | | | | | Normalize | | | | +| CustomPredict | | Supported | | | | | | Predict | | | | +| DeDepthwiseConv2D | | Supported | Supported | Supported | | | | | Deconvolution | | | +| DepthToSpace | | Supported | Supported | Supported | Supported | Supported | | DepthToSpace | | DepthToSpace | | +| DepthwiseConv2dNative | Supported | Supported | Supported | Supported | Supported | Supported | Supported | DepthwiseConv2D | Convolution | | | +| DetectionPostProcess | | Supported | Supported | Supported | | | | Custom | | | | +| Div | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Div, RealDiv | | Div | Div, RealDiv | +| Eltwise | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | Eltwise | Sum, Max[3] | | +| Elu | | Supported | | | | | | | Elu | Elu, NonMaxSuppression | | +| Equal | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Equal | | Equal | Equal | +| Exp | | Supported | | | Supported | Supported | | Exp | Exp | Exp | | +| ExpandDims | | Supported | Supported | Supported | | | | ExpandDims | | | ExpandDims | +| Fill | | Supported | | | | | | Fill | | | | +| Flatten | | Supported | | | | | | | Flatten | | | +| Floor | Supported | Supported | Supported | Supported | Supported | Supported | Supported | flOOR | | Floor | | +| FloorDiv | Supported | Supported | | | Supported | Supported | Supported | FloorDiv | | | | +| FloorMod | Supported | Supported | | | 
Supported | Supported | Supported | FloorMod | | | | +| FullConnection | Supported | Supported | Supported | Supported | Supported | Supported | | FullyConnected | InnerProduct | | | +| FusedBatchNorm | Supported | Supported | Supported | Supported | | | Supported | FusedBatchNorm | | | | +| GatherNd | | Supported | Supported | Supported | Supported | Supported | | GatherND | | | | +| Gather | | Supported | Supported | Supported | Supported | Supported | | Gather | | Gather | GatherV2 | +| Greater | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Greater | | Greater | Greater | +| GreaterEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | GreaterEqual | | | GreaterEqual | +| HashtableLookup | | Supported | | | | | | HashtableLookup | | | | +| Hswish | Supported | Supported | Supported | Supported | Supported | Supported | Supported | HardSwish | | | | +| InstanceNorm | | Supported | | | | | | InstanceNorm | | | | +| L2Norm | | Supported | | | | | | L2_NORMALIZATION | | | | +| LeakyReLU | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LeakyRelu | | LeakyRelu | | +| Less | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Less | | Less | Less | +| LessEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LessEqual | | | LessEqual | +| LRN | | Supported | | | | | | LocalResponseNorm | | Lrn, LRN | | +| Log | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Log | | Log | | +| LogicalAnd | Supported | Supported | | | Supported | Supported | Supported | LogicalAnd | | And | LogicalAnd | +| LogicalNot | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LogicalNot | | Not | | +| LogicalOr | Supported | Supported | | | Supported | Supported | Supported | LogicalOr | | Or | | +| LshProjection | | Supported | | | | | | 
LshProjection | | | | +| LSTM | | Supported | | | | | | | | LSTM | | +| MatMul | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | | MatMul | MatMul | +| Maximum | Supported | Supported | | | Supported | Supported | Supported | Maximum | | | Maximum | +| MaxPool | Supported | Supported | Supported | Supported | Supported | Supported | Supported | MaxPooling | Pooling | MaxPool, GlobalMaxPool | | +| Minimum | Supported | Supported | | | Supported | Supported | Supported | Minimum | | Min | Minimum | +| Mul | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Mul | | Mul | Mul | +| Neg | Supported | Supported | | | Supported | Supported | Supported | Neg | | Neg | | +| NotEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | NotEqual | | |NotEqual | +| OneHot | | Supported | | | Supported | Supported | | OneHot | | OneHot | | +| Pad | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Pad, MirrorPad | | Pad | | +| Pow | | Supported | Supported | Supported | Supported | Supported | | Pow | Power | Pow[2] | | +| PReLU | | Supported | | | Supported | Supported | | PRELU | PReLU | PRelu | | +| Range | | Supported | | | | | | Range | | | Range, RaggedRange | +| Rank | | Supported | | | | | | Rank | | | | +| ReduceAll | | Supported | | | | | | | | | All | +| ReduceASum | | Supported | | | Supported | Supported | | | Reduction | | | +| ReduceMax | Supported | Supported | Supported | Supported | Supported | Supported | | ReduceMax | | ReduceMax | Max | +| ReduceMean | Supported | Supported | Supported | Supported | Supported | Supported | | Mean | Reduction | ReduceMean | Mean | +| ReduceMin | Supported | Supported | Supported | Supported | Supported | Supported | | ReduceMin | | ReduceMin | Min | +| ReduceProd | Supported | Supported | Supported | Supported | Supported | Supported | | ReduceProd | | ReduceProd | Prod | +| ReduceSum | 
Supported | Supported | Supported | Supported | Supported | Supported | | Sum | Reduction | ReduceSum | Sum | +| ReduceSumSquare | Supported | Supported | Supported | Supported | | | | | Reduction | ReduceSumSquare | | +| ReLU | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Relu | ReLU | Relu | Relu | +| ReLU6 | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Relu6 | ReLU6 | Clip[1] | Relu6 | +| Reshape | Supported | Supported | Supported | Supported | Supported | Supported | | Reshape | Reshape | Reshape,Flatten | Reshape | +| Resize | | Supported | Supported | Supported | Supported | Supported | Supported | ResizeBilinear, NearestNeighbor | Interp | | | +| Reverse | | Supported | | | | | | reverse | | | | +| ReverseSequence | | Supported | | | | | | ReverseSequence | | | ReverseSequence | +| Round | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Round | | Round | Round | +| Rsqrt | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Rsqrt | | | | +| Scale | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | Scale | | | +| ScatterNd | | Supported | | | | | | ScatterNd | | | | +| Shape | | Supported | Supported | Supported | Supported | Supported | Supported | Shape | | Shape | Shape | +| Sigmoid | Supported | Supported | Supported | Supported | Supported | Supported | | Logistic | Sigmoid | Sigmoid | Sigmoid | +| Sin | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sin | | Sin | | +| Slice | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Slice | Slice | Slice | | +| SkipGram | | Supported | | | | | | SKipGram | | | | +| Softmax | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Softmax | Softmax | Softmax | | +| SpaceToBatch | | Supported | Supported | Supported | Supported | 
Supported | | SpaceToBatch | | | | +| SpaceToBatchND | | Supported | Supported | Supported | Supported | Supported | | SpaceToBatchND | | | | +| SpaceToDepth | | Supported | | | Supported | Supported | | SpaceToDepth | | SpaceToDepth | | +| SparseToDense | | Supported | | | Supported | Supported | | SpareToDense | | | | +| Split | Supported | Supported | Supported | Supported | | | Supported | Split, SplitV | | Split | Split, SplitV | +| Sqrt | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sqrt | | Sqrt | | +| Square | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Square | | | | +| SquaredDifference | Supported | Supported | | | Supported | Supported | Supported | SquaredDifference | | | | +| Squeeze | | Supported | Supported | Supported | Supported | Supported | | Squeeze | | Squeeze | Squeeze | +| StridedSlice | | Supported | Supported | Supported | Supported | Supported | Supported | StridedSlice | | | StridedSlice | +| Stack | Supported | Supported | | | Supported | Supported | | Stack | | | Pack | +| Sub | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sub | | Sub | Sub | +| Tanh | Supported | Supported | | | Supported | Supported | | Tanh | TanH | Tanh, Sign | Tanh | +| TensorListFromTensor | | Supported | | | | | | | | | TensorListFromTensor | +| TensorListGetItem | | Supported | | | | | | | | | TensorListGetItem | +| TensorListReserve | | Supported | | | | | | | | | TensorListReserve | +| TensorListSetItem | | Supported | | | | | | | | | TensorListSetItem | +| TensorListStack | | Supported | | | | | | | | | TensorListStack | +| Tile | | Supported | | | | | | Tile | Tile | Tile | Tile | +| TopK | | Supported | Supported | Supported | | | | TopKV2 | | TopK | | +| Transpose | Supported | Supported | | | Supported | Supported | Supported | Transpose | Permute | Transpose | Transpose | +| Unique | | Supported | | | | | | Unique | | | | +| Unsqueeze | 
| Supported | Supported | Supported | | | Supported | | | Unsqueeze | | +| Unstack | | Supported | | | | | | Unstack | | | | +| Where | | Supported | | | | | | Where | | | | +| While | | Supported | | | | | | | | | While, StatelessWhile | +| ZerosLike | | Supported | | | | | | ZerosLike | | | | [1] Clip: Only support converting clip(0, 6) to Relu6. diff --git a/docs/note/source_en/operator_list_ms.md b/docs/note/source_en/operator_list_ms.md index 8611a4282297771d8e5838aa48b862ce3ad85bc9..ba0a5a8399c291c23c6638ae152efcb2f25f98d4 100644 --- a/docs/note/source_en/operator_list_ms.md +++ b/docs/note/source_en/operator_list_ms.md @@ -2,9 +2,9 @@ `Linux` `Ascend` `GPU` `CPU` `Model Development` `Beginner` `Intermediate` `Expert` - + You can choose the operators that are suitable for your hardware platform for building the network model according to your needs. -- Supported operator lists in module `mindspore.nn` could be checked on [API page of mindspore.nn](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.nn.html). -- Supported operator lists in module `mindspore.ops` could be checked on [API page of mindspore.ops](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.ops.html). +- Supported operator lists in module `mindspore.nn` could be checked on [API page of mindspore.nn](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.nn.html). +- Supported operator lists in module `mindspore.ops` could be checked on [API page of mindspore.ops](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.ops.html). 
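The `Clip[1]` footnote in the Lite operator table above says the converter only maps `clip(0, 6)` to `Relu6`. The mapping is lossless because clipping to `[0, 6]` computes exactly the ReLU6 function; any other clip bounds would change the result and so cannot be rewritten. A minimal, framework-free Python sketch of this equivalence (the helper names are illustrative, not MindSpore APIs):

```python
def relu6(x):
    """ReLU6 activation: min(max(x, 0), 6)."""
    return min(max(x, 0.0), 6.0)

def clip(x, lo, hi):
    """Generic clip, as exported by source-framework Clip operators."""
    return min(max(x, lo), hi)

# clip(0, 6) and ReLU6 agree for every input, so the rewrite preserves
# the network's output; clip with other bounds does not.
samples = [-3.0, 0.0, 2.5, 6.0, 9.1]
for x in samples:
    assert clip(x, 0.0, 6.0) == relu6(x)
assert clip(7.0, 0.0, 5.0) != relu6(7.0)  # clip(0, 5) differs, hence not converted
```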
diff --git a/docs/note/source_en/operator_list_parallel.md b/docs/note/source_en/operator_list_parallel.md index 038f087317d222b155f134c46c08459cc58980ba..07fe8dbabefebb2758c43709f6a8b8d30f9677ef 100644 --- a/docs/note/source_en/operator_list_parallel.md +++ b/docs/note/source_en/operator_list_parallel.md @@ -9,117 +9,120 @@ - + ## Distributed Operator | op name | constraints | |:----------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Abs.html) | None | -| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ACos.html) | None | -| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Acosh.html) | None | -| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ApproximateEqual.html) | None | -| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ArgMaxWithValue.html) | When the input_x is splited on the axis dimension, the distributed result may be inconsistent with that on the single machine. | -| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ArgMinWithValue.html) | When the input_x is splited on the axis dimension, the distributed result may be inconsistent with that on the single machine. 
| -| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Asin.html) | None | -| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Asinh.html) | None | -| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Assign.html) | None | -| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.AssignAdd.html) | None | -| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.AssignSub.html) | None | -| [mindspore.ops.Atan](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Atan.html) | None | -| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Atan2.html) | None | -| [mindspore.ops.Atanh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Atanh.html) | None | -| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BatchMatMul.html) | `transpore_a=True` is not supported. | -| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BesselI0e.html) | None | -| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BesselI1e.html) | None | -| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BiasAdd.html) | None | -| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.BroadcastTo.html) | None | -| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Cast.html) | The shard strategy is ignored in the Auto Parallel and Semi Auto Parallel mode. 
| -| [mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Ceil.html) | None | -| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Concat.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Cos.html) | None | -| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Cosh.html) | None | -| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Div.html) | None | -| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.DivNoNan.html) | None | -| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.DropoutDoMask.html) | Need to be used in conjunction with `DropoutGenMask`,configuring shard strategy is not supported. | -| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.DropoutGenMask.html) | Need to be used in conjunction with `DropoutDoMask`. | -| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Elu.html) | None | -| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.EmbeddingLookup.html) | The same as GatherV2. 
| -| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Equal.html) | None | -| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Erf.html) | None | -| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Erfc.html) | None | -| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Exp.html) | None | -| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ExpandDims.html) | None | -| [mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Expm1.html) | None | -| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Floor.html) | None | -| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FloorDiv.html) | None | -| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.FloorMod.html) | None | -| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GatherV2.html) | Only support 1-dim and 2-dim parameters and the last dimension of the input_params should be 32-byte aligned; Scalar input_indices is not supported; Repeated calculation is not supported when the parameters are split in the dimension of the axis; Split input_indices and input_params at the same time is not supported. 
| -| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Gelu.html) | None | -| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Greater.html) | None | -| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GreaterEqual.html) | None | -| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Inv.html) | None | -| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.L2Normalize.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Less.html) | None | -| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LessEqual.html) | None | -| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalAnd.html) | None | -| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalNot.html) | None | -| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogicalOr.html) | None | -| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Log.html) | None | -| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Log1p.html) | None | -| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.LogSoftmax.html) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. 
| -| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.MatMul.html) | `transpose_a=True` is not supported. | -| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Maximum.html) | None | -| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Minimum.html) | None | -| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Mod.html) | None | -| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Mul.html) | None | -| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Neg.html) | None | -| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.NotEqual.html) | None | -| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.OneHot.html) | Only support 1-dim indices. Must configure strategy for the output and the first and second inputs. | -| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.OnesLike.html) | None | -| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Pack.html) | None | -| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Pow.html) | None | -| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.PReLU.html) | When the shape of weight is not [1], the shard strategy in channel dimension of input_x should be consistent with weight. 
| -| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.RealDiv.html) | None | -| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Reciprocal.html) | None | -| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReduceMax.html) | When the input_x is splited on the axis dimension, the distributed result may be inconsistent with that on the single machine. | -| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReduceMin.html) | When the input_x is splited on the axis dimension, the distributed result may be inconsistent with that on the single machine. | -| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReduceSum.html) | None | -| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReduceMean.html) | None | -| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReLU.html) | None | -| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReLU6.html) | None | -| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ReLUV2.html) | None | -| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Reshape.html) | Configuring shard strategy is not supported. In auto parallel mode, if multiple operators are followed by the reshape operator, different shard strategys are not allowed to be configured for these operators. 
| -| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Round.html) | None | -| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Rsqrt.html) | None | -| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sigmoid.html) | None | -| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | None | -| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sign.html) | None | -| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sin.html) | None | -| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sinh.html) | None | -| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Softmax.html) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | The last dimension of logits and labels can't be splited; Only supports using output[0]. | -| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Softplus.html) | None | -| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Softsign.html) | None | -| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.SparseGatherV2.html) | The same as GatherV2. 
| -| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Split.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sqrt.html) | None | -| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Square.html) | None | -| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Squeeze.html) | None | -| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.StridedSlice.html) | Only support mask with all 0 values; The dimension needs to be split should be all extracted; Split is supported when the strides of dimension is 1. | -| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Slice.html) | The dimension needs to be split should be all extracted. | -| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Sub.html) | None | -| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Tan.html) | None | -| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Tanh.html) | None | -| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TensorAdd.html) | None | -| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Tile.html) | Only support configuring shard strategy for multiples. 
| -| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.TopK.html) | The input_x can't be split into the last dimension, otherwise it's inconsistent with the single machine in the mathematical logic. | -| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Transpose.html) | None | -| [mindspore.ops.Unique](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.Unique.html) | Only support the repeat calculate shard strategy (1,). | -| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.UnsortedSegmentSum.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. | -| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.UnsortedSegmentMin.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. Note that if the segment id i is missing, then the output[i] will be filled with the maximum of the input type. The user needs to mask the maximum value to avoid value overflow. The communication operation such as AllReudce will raise an Run Task Error due to overflow. | -| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.UnsortedSegmentMax.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. Note that if the segment id i is missing, then the output[i] will be filled with the minimum of the input type. The user needs to mask the minimum value to avoid value overflow. The communication operation such as AllReudce will raise an Run Task Error due to overflow. 
| -| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.ZerosLike.html) | None | +| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Abs.html) | None | +| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ACos.html) | None | +| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Acosh.html) | None | +| [mindspore.ops.Add](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Add.html) | None | +| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ApproximateEqual.html) | None | +| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ArgMaxWithValue.html) | When the input_x is split on the axis dimension, the distributed result may be inconsistent with that on the single machine. | +| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ArgMinWithValue.html) | When the input_x is split on the axis dimension, the distributed result may be inconsistent with that on the single machine. 
| +| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Asin.html) | None | +| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Asinh.html) | None | +| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Assign.html) | None | +| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.AssignAdd.html) | None | +| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.AssignSub.html) | None | +| [mindspore.ops.Atan](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Atan.html) | None | +| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Atan2.html) | None | +| [mindspore.ops.Atanh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Atanh.html) | None | +| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BatchMatMul.html) | `transpose_a=True` is not supported. | +| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BesselI0e.html) | None | +| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BesselI1e.html) | None | +| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BiasAdd.html) | None | +| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.BroadcastTo.html) | None | +| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Cast.html) | The shard strategy is ignored in the Auto Parallel and Semi Auto Parallel mode. 
| +| [mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Ceil.html) | None | +| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Concat.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | +| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Cos.html) | None | +| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Cosh.html) | None | +| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Div.html) | None | +| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.DivNoNan.html) | None | +| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.DropoutDoMask.html) | Need to be used in conjunction with `DropoutGenMask`; configuring shard strategy is not supported. | +| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.DropoutGenMask.html) | Need to be used in conjunction with `DropoutDoMask`. | +| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Elu.html) | None | +| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.EmbeddingLookup.html) | The same as GatherV2. 
| +| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Equal.html) | None | +| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Erf.html) | None | +| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Erfc.html) | None | +| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Exp.html) | None | +| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ExpandDims.html) | None | +| [mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Expm1.html) | None | +| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Floor.html) | None | +| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FloorDiv.html) | None | +| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.FloorMod.html) | None | +| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GatherV2.html) | Only support 1-dim and 2-dim parameters and the last dimension of the input_params should be 32-byte aligned; Scalar input_indices is not supported; Repeated calculation is not supported when the parameters are split in the dimension of the axis; Split input_indices and input_params at the same time is not supported. | +| [mindspore.ops.Gather](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Gather.html) | Only support 1-dim and 2-dim parameters and the last dimension of the input_params should be 32-byte aligned; Scalar input_indices is not supported; Repeated calculation is not supported when the parameters are split in the dimension of the axis; Split input_indices and input_params at the same time is not supported. 
| +| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Gelu.html) | None | +| [mindspore.ops.GeLU](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GeLU.html) | None | +| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Greater.html) | None | +| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GreaterEqual.html) | None | +| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Inv.html) | None | +| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.L2Normalize.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | +| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Less.html) | None | +| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LessEqual.html) | None | +| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalAnd.html) | None | +| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalNot.html) | None | +| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogicalOr.html) | None | +| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Log.html) | None | +| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Log1p.html) | None | +| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.LogSoftmax.html) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine 
in the mathematical logic. | +| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.MatMul.html) | `transpose_a=True` is not supported. | +| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Maximum.html) | None | +| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Minimum.html) | None | +| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Mod.html) | None | +| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Mul.html) | None | +| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Neg.html) | None | +| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.NotEqual.html) | None | +| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.OneHot.html) | Only support 1-dim indices. Must configure strategy for the output and the first and second inputs. | +| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.OnesLike.html) | None | +| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Pack.html) | None | +| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Pow.html) | None | +| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.PReLU.html) | When the shape of weight is not [1], the shard strategy in channel dimension of input_x should be consistent with weight. 
| +| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.RealDiv.html) | None | +| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Reciprocal.html) | None | +| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReduceMax.html) | When the input_x is split on the axis dimension, the distributed result may be inconsistent with that on the single machine. | +| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReduceMin.html) | When the input_x is split on the axis dimension, the distributed result may be inconsistent with that on the single machine. | +| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReduceSum.html) | None | +| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReduceMean.html) | None | +| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReLU.html) | None | +| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReLU6.html) | None | +| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ReLUV2.html) | None | +| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Reshape.html) | Configuring shard strategy is not supported. In auto parallel mode, if multiple operators are followed by the reshape operator, different shard strategies are not allowed to be configured for these operators. 
| +| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Round.html) | None | +| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Rsqrt.html) | None | +| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sigmoid.html) | None | +| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | None | +| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sign.html) | None | +| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sin.html) | None | +| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sinh.html) | None | +| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Softmax.html) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | +| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | The last dimension of logits and labels can't be split; Only supports using output[0]. | +| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Softplus.html) | None | +| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Softsign.html) | None | +| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.SparseGatherV2.html) | The same as GatherV2. 
| +| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Split.html) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic. | +| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sqrt.html) | None | +| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Square.html) | None | +| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Squeeze.html) | None | +| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.StridedSlice.html) | Only support mask with all 0 values; The dimension that needs to be split should be fully extracted; Split is supported when the stride of the dimension is 1. | +| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Slice.html) | The dimension that needs to be split should be fully extracted. | +| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Sub.html) | None | +| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Tan.html) | None | +| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Tanh.html) | None | +| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TensorAdd.html) | None | +| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Tile.html) | Only support configuring shard strategy for multiples. | +| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.TopK.html) | The input_x can't be split into the last dimension, otherwise it's inconsistent with the single machine in the mathematical logic. 
| +| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Transpose.html) | None | +| [mindspore.ops.Unique](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.Unique.html) | Only support the repeat calculate shard strategy (1,). | +| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentSum.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. | +| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentMin.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. Note that if the segment id i is missing, then the output[i] will be filled with the maximum of the input type. The user needs to mask the maximum value to avoid value overflow. A communication operation such as AllReduce will raise a Run Task Error due to overflow. | +| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentMax.html) | The shard of input_x and segment_ids must be the same as the dimension of segment_ids. Note that if the segment id i is missing, then the output[i] will be filled with the minimum of the input type. The user needs to mask the minimum value to avoid value overflow. A communication operation such as AllReduce will raise a Run Task Error due to overflow. | +| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.ZerosLike.html) | None | > Repeated calculation means that the device is not fully used. For example, if the cluster has 8 devices to run distributed training but the splitting strategy only cuts the input into 4 copies, repeated calculation will occur. 
> diff --git a/docs/note/source_en/paper_list.md b/docs/note/source_en/paper_list.md deleted file mode 100644 index e4efdfd25275f2fc8331aa248f48b80bd99c52e7..0000000000000000000000000000000000000000 --- a/docs/note/source_en/paper_list.md +++ /dev/null @@ -1,13 +0,0 @@ -# Paper List - -`Linux` `Windows` `Ascend` `GPU` `CPU` `Whole Process` `Framework Development` `Intermediate` `Expert` `Contributor` - - - -| Title | Author | Field | Journal/Conference | Link | -| ------------------------------------------------------------ | ------------------------------------------------------------ | ---------- | -------------- | ------------------------------------------------------------ | -| A Representation Learning Framework for Property Graphs | Yifan Hou, Hongzhi Chen, Changji Li, James Cheng, Ming-Chang Yang, Fan Yu | Graph Neural Networks | KDD | | -| Optimizing the Memory Hierarchy by Compositing Automatic Transformations on Computations and Data | JieZhao,Peng Di | Micro-Architecture | MICRO2020 | | -| Masked Face Recognition with Latent Part Detection | Feifei Ding,Peixi Peng,Yangru Huang,Mengyue Geng,Yonghong Tian | Object Detection | ACMMM | | -| Model Rubik’s Cube: Twisting Resolution, Depth and Width for TinyNets | Kai Han,Yunhe Wang,Qiulin Zhang,Wei Zhang,Chunjing XU,Tong Zhang | Optimization | NeurIPS 2020 | | -| SCOP: Scientific Control for Reliable Neural Network Pruning | Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu | Optimization | NeurIPS 2020 | | diff --git a/docs/note/source_en/posenet_lite.md b/docs/note/source_en/posenet_lite.md index 74468800acadb2bb14448368fbad592463968c97..bead3bfbef35046615ab4a347c3c7766559277f1 100644 --- a/docs/note/source_en/posenet_lite.md +++ b/docs/note/source_en/posenet_lite.md @@ -1,6 +1,6 @@ # Posenet Model Support (Lite) - + ## Posenet introduction @@ -12,4 +12,4 @@ The blue marking points detect the distribution of facial features of the human 
![image_posenet](images/posenet_detection.png) -Using MindSpore Lite to realize posenet [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/posenet). +Using MindSpore Lite to realize posenet [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/posenet). diff --git a/docs/note/source_en/roadmap.md b/docs/note/source_en/roadmap.md index ca68dc0a654e245ad069c8c7da176dd2f0c2b6c7..2d17431f7e4d195b6eba2a057e6da64b9f7ee646 100644 --- a/docs/note/source_en/roadmap.md +++ b/docs/note/source_en/roadmap.md @@ -14,7 +14,7 @@ - + MindSpore's top priority plans in the year are displayed as follows. We will continuously adjust the priority based on user feedback. diff --git a/docs/note/source_en/scene_detection_lite.md b/docs/note/source_en/scene_detection_lite.md index 0bd910475c637ea7b6380ce984670df44eb56aeb..673fea50ab7d7d07b9236895a9efb292ae142ac7 100644 --- a/docs/note/source_en/scene_detection_lite.md +++ b/docs/note/source_en/scene_detection_lite.md @@ -1,12 +1,12 @@ # Scene Detection Model Support (Lite) - + ## Scene dectectin introduction Scene detection can identify the type of scene in the device's camera. -Using MindSpore Lite to implement scene detection [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/scene_detection). +Using MindSpore Lite to implement scene detection [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/scene_detection). ## Scene detection model list diff --git a/docs/note/source_en/static_graph_syntax_support.md b/docs/note/source_en/static_graph_syntax_support.md new file mode 100644 index 0000000000000000000000000000000000000000..bd826e2d17fb0ea273deeca1f57f57bb23db9fd8 --- /dev/null +++ b/docs/note/source_en/static_graph_syntax_support.md @@ -0,0 +1,5 @@ +# Static Graph Syntax Support + +No English version available right now, welcome to contribute. 
+ + diff --git a/docs/note/source_en/style_transfer_lite.md b/docs/note/source_en/style_transfer_lite.md index dcf7100c5933bd53c0af2fb49e1d6dbd484a2525..88c0f56fc896533db7fffc3e9239fad17b4a297a 100644 --- a/docs/note/source_en/style_transfer_lite.md +++ b/docs/note/source_en/style_transfer_lite.md @@ -1,6 +1,6 @@ # Style Transfer Model Support (Lite) - + ## Style transfer introduction @@ -14,4 +14,4 @@ Selecting the first standard image from left to perform the style transfer, as s ![image_after_transfer](images/after_transfer.png) -Using MindSpore Lite to realize style transfer [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/style_transfer). +Using MindSpore Lite to realize style transfer [example](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/style_transfer). diff --git a/docs/note/source_en/syntax_list.rst b/docs/note/source_en/syntax_list.rst new file mode 100644 index 0000000000000000000000000000000000000000..597c59c2b324118dffe760c9e087fd773f644493 --- /dev/null +++ b/docs/note/source_en/syntax_list.rst @@ -0,0 +1,7 @@ +Syntax Support +================ + +.. 
toctree:: + :maxdepth: 1 + + static_graph_syntax_support \ No newline at end of file diff --git a/docs/note/source_zh_cn/benchmark.md b/docs/note/source_zh_cn/benchmark.md index e0455d68326fff42797c288cdaf151753529afe6..a6768dde33a871397254b843b811e0f77a66c7df 100644 --- a/docs/note/source_zh_cn/benchmark.md +++ b/docs/note/source_zh_cn/benchmark.md @@ -13,9 +13,9 @@ - + -本文介绍MindSpore的基准性能。MindSpore网络定义可参考[Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)。 +本文介绍MindSpore的基准性能。MindSpore网络定义可参考[Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo)。 ## 训练性能 diff --git a/docs/note/source_zh_cn/conf.py b/docs/note/source_zh_cn/conf.py index 95d7701759707ab95a3c199cd8a22e2e2cc1194d..7be5f453c21b75703c763a14c8180127aed60e6b 100644 --- a/docs/note/source_zh_cn/conf.py +++ b/docs/note/source_zh_cn/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md b/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md index 256d719bcf7dc1e789d9895d3841598691da4672..b753f3b0b0149778f4cab651e756a53eae3053dd 100644 --- a/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md +++ b/docs/note/source_zh_cn/design/mindarmour/differential_privacy_design.md @@ -14,7 +14,7 @@ - + ## 总体设计 @@ -32,7 +32,7 @@ MindArmour的Differential-Privacy模块实现了差分隐私训练的能力。 - 固定高斯优化器,是一种非自适应高斯噪声的差分隐私优化器。其优势在于可以严格控制差分隐私预算ϵ,缺点是在模型训练过程中,每个Step添加的噪声量固定,若迭代次数过大,训练后期的噪声使得模型收敛困难,甚至导致性能大幅下跌,模型可用性差。 - 自适应高斯优化器,通过自适应调整标准差,来调整高斯分布噪声的大小,在模型训练初期,添加的噪声量较大,随着模型逐渐收敛,噪声量逐渐减小,噪声对于模型可用性的影响减小。自适应高斯噪声的缺点是不能严格控制差分隐私预算。 -- 自适应裁剪优化器,是一种自适应调整调整裁剪粒度的差分隐私优化器,梯度裁剪是差分隐私训练的一个重要操作,自适应裁剪优化器能够自适应的控制梯度裁剪的的比例在给定的范围波动,控制迭代训练过程中梯度裁剪的粒度。 +- 
自适应裁剪优化器,是一种自适应调整裁剪粒度的差分隐私优化器,梯度裁剪是差分隐私训练的一个重要操作,自适应裁剪优化器能够自适应地控制梯度裁剪的比例在给定的范围波动,控制迭代训练过程中梯度裁剪的粒度。 ### 差分隐私的噪声机制 @@ -54,10 +54,10 @@ Monitor提供RDP、ZCDP等回调函数,用于监测模型的差分隐私预算 ## 代码实现 -- [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py):这个文件实现了差分隐私训练所需的噪声生成机制,包括简单高斯噪声、自适应高斯噪声、自适应裁剪高斯噪声等。 -- [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py):这个文件实现了使用噪声生成机制在反向传播时添加噪声的根本逻辑。 -- [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py):实现了计算差分隐私预算的回调函数,模型训练过程中,会反馈当前的差分隐私预算。 -- [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py):这个文件实现了计算损失和梯度的逻辑,差分隐私训练的梯度截断逻辑在此文件中实现,且model.py是用户使用差分隐私训练能力的入口。 +- [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py):这个文件实现了差分隐私训练所需的噪声生成机制,包括简单高斯噪声、自适应高斯噪声、自适应裁剪高斯噪声等。 +- [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/optimizer/optimizer.py):这个文件实现了使用噪声生成机制在反向传播时添加噪声的根本逻辑。 +- [monitor.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/monitor/monitor.py):实现了计算差分隐私预算的回调函数,模型训练过程中,会反馈当前的差分隐私预算。 +- [model.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/privacy/diff_privacy/train/model.py):这个文件实现了计算损失和梯度的逻辑,差分隐私训练的梯度截断逻辑在此文件中实现,且model.py是用户使用差分隐私训练能力的入口。 ## 参考文献 diff --git a/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md b/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md index 0d753d7bb92744cb7bc2ae96a665ac09fafec77b..9bd10b6fb8493284757db011821ba858483706b4 100644 --- a/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md +++ b/docs/note/source_zh_cn/design/mindarmour/fuzzer_design.md @@ -3,6 +3,7 @@ `Linux` `Ascend` `GPU` `CPU` `数据准备` `模型开发` `模型训练` `模型调优` `企业` `高级` + - [AI模型安全测试](#ai模型安全测试) - 
[背景](#背景) - [Fuzz Testing设计图](#fuzz-testing设计图) @@ -12,7 +13,7 @@ - + ## 背景 @@ -60,10 +61,10 @@ Fuzz Testing架构主要包括三个模块: ## 代码实现 -1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py):Fuzzer总体流程。 -2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py):神经元覆盖率指标,包括KMNC,NBC,SNAC。 -3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py):图像变异方法,包括基于像素值的变化方法和仿射变化方法。 -4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks):对抗样本攻击方法,包含多种黑盒、白盒攻击方法。 +1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/fuzzing.py):Fuzzer总体流程。 +2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/model_coverage_metrics.py):神经元覆盖率指标,包括KMNC,NBC,SNAC。 +3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/r1.1/mindarmour/fuzz_testing/image_transform.py):图像变异方法,包括基于像素值的变化方法和仿射变化方法。 +4. 
[adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/r1.1/mindarmour/adv_robustness/attacks):对抗样本攻击方法,包含多种黑盒、白盒攻击方法。 ## 参考文献 diff --git a/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md index be8a8c686bb95ba565506ded1940d8b87281ca51..46e43d52cfdad248e9d315905ae44631b53efa78 100644 --- a/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md +++ b/docs/note/source_zh_cn/design/mindinsight/graph_visual_design.md @@ -15,7 +15,7 @@ - + ## 特性背景 @@ -71,4 +71,4 @@ RESTful API接口是MindInsight前后端进行数据交互的接口。 #### 文件接口设计 MindSpore与MindInsight之间的数据交互,采用[protobuf](https://developers.google.cn/protocol-buffers/docs/pythontutorial?hl=zh-cn)定义数据格式。 -[summary.proto文件](https://gitee.com/mindspore/mindinsight/blob/master/mindinsight/datavisual/proto_files/mindinsight_summary.proto)为总入口,计算图的消息对象定义为 `GraphProto`。`GraphProto`的详细定义可以参考[anf_ir.proto文件](https://gitee.com/mindspore/mindinsight/blob/master/mindinsight/datavisual/proto_files/mindinsight_anf_ir.proto)。 +[summary.proto文件](https://gitee.com/mindspore/mindinsight/blob/r1.1/mindinsight/datavisual/proto_files/mindinsight_summary.proto)为总入口,计算图的消息对象定义为 `GraphProto`。`GraphProto`的详细定义可以参考[anf_ir.proto文件](https://gitee.com/mindspore/mindinsight/blob/r1.1/mindinsight/datavisual/proto_files/mindinsight_anf_ir.proto)。 diff --git a/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md index 44d4db5b12ddc5dc04e3ed2cedfb16dd69bb382d..18d9114f6b2b1a8d272799eb11b6a0eec53e8b36 100644 --- a/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md +++ b/docs/note/source_zh_cn/design/mindinsight/tensor_visual_design.md @@ -14,7 +14,7 @@ - + ## 特性背景 @@ -55,7 +55,7 @@ Tensor可视支持1-N维的Tensor以表格或直方图的形式展示,对于0 ### 接口设计 -在张量可视中,主要有文件接口和RESTful 
API接口,其中文件接口为[summary.proto](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/utils/summary.proto)文件,是MindInsight和MindSpore进行数据对接的接口。 RESTful API接口是MindInsight前后端进行数据交互的接口,是内部接口。 +在张量可视中,主要有文件接口和RESTful API接口,其中文件接口为[summary.proto](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/utils/summary.proto)文件,是MindInsight和MindSpore进行数据对接的接口。 RESTful API接口是MindInsight前后端进行数据交互的接口,是内部接口。 #### 文件接口设计 @@ -102,4 +102,4 @@ Tensor可视支持1-N维的Tensor以表格或直方图的形式展示,对于0 } ``` -而TensorProto的定义在[anf_ir.proto](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/utils/anf_ir.proto)文件中。 +而TensorProto的定义在[anf_ir.proto](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/utils/anf_ir.proto)文件中。 diff --git a/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md b/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md index 3a3e33f58a303088a8d7474c6ab192d5805afe5d..ed0b180cb4832bc3102ffaa579b7bd2c8f2b77a1 100644 --- a/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md +++ b/docs/note/source_zh_cn/design/mindinsight/training_visual_design.md @@ -18,7 +18,7 @@ - + [MindInsight](https://gitee.com/mindspore/mindinsight)是MindSpore的可视化调试调优组件。通过MindInsight可以完成训练可视、性能调优、精度调优等任务。 @@ -40,11 +40,11 @@ 训练信息收集API包括: -- 基于summary算子的训练信息收集API。这部分API主要包括4个summary算子,即用于记录标量数据的ScalarSummary算子,用于记录图片数据的ImageSummary算子,用于记录参数分布图(直方图)数据的HistogramSummary算子和用于记录张量数据的TensorSummary算子。请访问[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list.html)以获取关于这些算子的信息。 +- 基于summary算子的训练信息收集API。这部分API主要包括4个summary算子,即用于记录标量数据的ScalarSummary算子,用于记录图片数据的ImageSummary算子,用于记录参数分布图(直方图)数据的HistogramSummary算子和用于记录张量数据的TensorSummary算子。请访问[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list.html)以获取关于这些算子的信息。 -- 基于Python 
API的训练信息收集API。通过[SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value)方法,可以在Python代码中完成训练信息的收集。 +- 基于Python API的训练信息收集API。通过[SummaryRecord.add_value](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.train.html#mindspore.train.summary.SummaryRecord.add_value)方法,可以在Python代码中完成训练信息的收集。 -- 易用的训练信息收集callback。通过[SummaryCollector](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector)这一callback可以方便地收集常用训练信息到训练日志中。 +- 易用的训练信息收集callback。通过[SummaryCollector](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.train.html#mindspore.train.callback.SummaryCollector)这一callback可以方便地收集常用训练信息到训练日志中。 训练信息持久化模块主要包括用于管理缓存的summary_record模块和用于并行处理数据、写入文件的write_pool模块。训练信息持久化后,存储在训练日志文件(summary文件中)。 diff --git a/docs/note/source_zh_cn/design/mindspore/architecture.md b/docs/note/source_zh_cn/design/mindspore/architecture.md index 36a14407ccb2387053ca1823be8132623ec700f6..6b9d8f414839c632fc2cea8dbcdbd1ef5460f44f 100644 --- a/docs/note/source_zh_cn/design/mindspore/architecture.md +++ b/docs/note/source_zh_cn/design/mindspore/architecture.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `端侧` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者` - + MindSpore框架架构总体分为MindSpore前端表示层、MindSpore计算图引擎和MindSpore后端运行时三层。 diff --git a/docs/note/source_zh_cn/design/mindspore/architecture_lite.md b/docs/note/source_zh_cn/design/mindspore/architecture_lite.md index c25e4442923dedc94a42b065bcf886a30fc9cb92..2d8cecfd16e5a8fbc503921e51a46d1287c28cdc 100644 --- a/docs/note/source_zh_cn/design/mindspore/architecture_lite.md +++ b/docs/note/source_zh_cn/design/mindspore/architecture_lite.md @@ -2,7 +2,7 @@ `Linux` `Windows` `端侧` `推理应用` `中级` `高级` `贡献者` - + MindSpore Lite框架的总体架构如下所示: diff --git a/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md 
b/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md index 97a9a328b99dba77ff968775ef848096a0c995fc..c21a234c8d7990080a3f0813cd9631d3772c9d83 100644 --- a/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md +++ b/docs/note/source_zh_cn/design/mindspore/distributed_training_design.md @@ -18,7 +18,7 @@ - + ## 背景 @@ -52,7 +52,7 @@ 3. 网络构图 - 数据并行网络的书写方式与单机网络没有差别,这是因为在正反向传播(Forward propogation & Backword Propogation)过程中各卡的模型间是独立执行的,只是保持了相同的网络结构。唯一需要特别注意的是为了保证各卡间训练同步,相应的网络参数初始化值应当是一致的,这里建议通过`numpy.random.seed`在每张卡上设置相同的随机数种子达到模型广播的目的。 + 数据并行网络的书写方式与单机网络没有差别,这是因为在正反向传播(Forward propagation & Backward Propagation)过程中各卡的模型间是独立执行的,只是保持了相同的网络结构。唯一需要特别注意的是为了保证各卡间训练同步,相应的网络参数初始化值应当是一致的,这里建议通过`numpy.random.seed`在每张卡上设置相同的随机数种子达到模型广播的目的。 4. 梯度聚合(Gradient aggregation) @@ -66,12 +66,12 @@ 1. 集合通信 - - [management.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/communication/management.py):这个文件中涵盖了集合通信过程中常用的`helper`函数接口,例如获取集群数量和卡的序号等。当在Ascend芯片上执行时,框架会加载环境上的`libhccl.so`库文件,通过它来完成从Python层到底层的通信接口调用。 - - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/operations/comm_ops.py):MindSpore将支持的集合通信操作都封装为算子的形式放在这个文件下,包括`AllReduce`、`AllGather`、`ReduceScatter`和`Broadcast`等。`PrimitiveWithInfer`中除了定义算子所需属性外,还包括构图过程中输入到输出的`shape`和`dtype`推导。 + - [management.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/communication/management.py):这个文件中涵盖了集合通信过程中常用的`helper`函数接口,例如获取集群数量和卡的序号等。当在Ascend芯片上执行时,框架会加载环境上的`libhccl.so`库文件,通过它来完成从Python层到底层的通信接口调用。 + - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ops/operations/comm_ops.py):MindSpore将支持的集合通信操作都封装为算子的形式放在这个文件下,包括`AllReduce`、`AllGather`、`ReduceScatter`和`Broadcast`等。`PrimitiveWithInfer`中除了定义算子所需属性外,还包括构图过程中输入到输出的`shape`和`dtype`推导。 2. 
梯度聚合 - - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/nn/wrap/grad_reducer.py):这个文件实现了梯度聚合的过程。对入参`grads`用`HyperMap`展开后插入`AllReduce`算子,这里采用的是全局通信组,用户也可以根据自己网络的需求仿照这个模块进行自定义开发。MindSpore中单机和分布式执行共用一套网络封装接口,在`Cell`内部通过`ParallelMode`来区分是否要对梯度做聚合操作,网络封装接口建议参考`TrainOneStepCell`代码实现。 + - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/nn/wrap/grad_reducer.py):这个文件实现了梯度聚合的过程。对入参`grads`用`HyperMap`展开后插入`AllReduce`算子,这里采用的是全局通信组,用户也可以根据自己网络的需求仿照这个模块进行自定义开发。MindSpore中单机和分布式执行共用一套网络封装接口,在`Cell`内部通过`ParallelMode`来区分是否要对梯度做聚合操作,网络封装接口建议参考`TrainOneStepCell`代码实现。 ## 自动并行 @@ -121,19 +121,19 @@ ### 自动并行代码 1. 张量排布模型 - - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/tensor_layout):这个目录下包含了张量排布模型相关功能的定义及实现。其中`tensor_layout.h`中声明了一个张量排布模型需要具备的成员变量`tensor_map_origin_`,`tensor_shape_`和`device_arrangement_`等。在`tensor_redistribution.h`中声明了实现张量排布间`from_origin_`和`to_origin_`变换的相关方法,将推导得到的重排布操作保存在`operator_list_`中返回,并计算得到重排布所需的通信开销`comm_cost_`, 内存开销`memory_cost_`及计算开销`computation_cost_`。 + - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/tensor_layout):这个目录下包含了张量排布模型相关功能的定义及实现。其中`tensor_layout.h`中声明了一个张量排布模型需要具备的成员变量`tensor_map_origin_`,`tensor_shape_`和`device_arrangement_`等。在`tensor_redistribution.h`中声明了实现张量排布间`from_origin_`和`to_origin_`变换的相关方法,将推导得到的重排布操作保存在`operator_list_`中返回,并计算得到重排布所需的通信开销`comm_cost_`, 内存开销`memory_cost_`及计算开销`computation_cost_`。 2. 
分布式算子 - - [ops_info](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/ops_info):这个目录下包含了分布式算子的具体实现。在`operator_info.h`中定义了分布式算子实现的基类`OperatorInfo`,开发一个分布式算子需要继承于这个基类并显式实现相关的虚函数。其中`InferTensorInfo`,`InferTensorMap`和`InferDevMatrixShape`函数定义了推导该算子输入、输出张量排布模型的算法。`InferForwardCommunication`,`InferMirrorOps`等函数定义了切分该算子需要插入的额外计算、通信操作。`CheckStrategy`和`GenerateStrategies`函数定义了算子切分策略校验和生成。根据切分策略`SetCostUnderStrategy`将会产生该策略下分布式算子的并行开销值`operator_cost_`。 + - [ops_info](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/ops_info):这个目录下包含了分布式算子的具体实现。在`operator_info.h`中定义了分布式算子实现的基类`OperatorInfo`,开发一个分布式算子需要继承于这个基类并显式实现相关的虚函数。其中`InferTensorInfo`,`InferTensorMap`和`InferDevMatrixShape`函数定义了推导该算子输入、输出张量排布模型的算法。`InferForwardCommunication`,`InferMirrorOps`等函数定义了切分该算子需要插入的额外计算、通信操作。`CheckStrategy`和`GenerateStrategies`函数定义了算子切分策略校验和生成。根据切分策略`SetCostUnderStrategy`将会产生该策略下分布式算子的并行开销值`operator_cost_`。 3. 策略搜索算法 - - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/auto_parallel):这个目录下实现了切分策略搜索的算法。`graph_costmodel.h`定义了构图信息,其中每个点表示一个算子`OperatorInfo`,有向边`edge_costmodel.h`表示算子的输入输出关系及重排布的代价。`operator_costmodel.h`中定义了每个算子的代价模型,包括计算代价、通信代价和内存代价。`dp_algorithm_costmodel.h`主要描述了动态规划算法的主要流程,由一系列图操作组成。在`costmodel.h`中定义了cost和图操作的数据结构。 + - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/r1.1/mindspore/ccsrc/frontend/parallel/auto_parallel):这个目录下实现了切分策略搜索的算法。`graph_costmodel.h`定义了构图信息,其中每个点表示一个算子`OperatorInfo`,有向边`edge_costmodel.h`表示算子的输入输出关系及重排布的代价。`operator_costmodel.h`中定义了每个算子的代价模型,包括计算代价、通信代价和内存代价。`dp_algorithm_costmodel.h`主要描述了动态规划算法的主要流程,由一系列图操作组成。在`costmodel.h`中定义了cost和图操作的数据结构。 4. 
设备管理 - - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/device_manager.h):这个文件实现了集群设备通信组的创建及管理。其中设备矩阵模型由`device_matrix.h`定义,通信域由`group_manager.h`管理。 + - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/device_manager.h):这个文件实现了集群设备通信组的创建及管理。其中设备矩阵模型由`device_matrix.h`定义,通信域由`group_manager.h`管理。 5. 整图切分 - - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h), [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_parallel.h):这两个文件包含了自动并行流程的核心实现。首先由`step_auto_parallel.h`调用策略搜索流程并产生分布式算子的`OperatorInfo`,然后在`step_parallel.h`中处理算子切分和张量重排布等流程,对单机计算图进行分布式改造。 + - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h), [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ccsrc/frontend/parallel/step_parallel.h):这两个文件包含了自动并行流程的核心实现。首先由`step_auto_parallel.h`调用策略搜索流程并产生分布式算子的`OperatorInfo`,然后在`step_parallel.h`中处理算子切分和张量重排布等流程,对单机计算图进行分布式改造。 6. 
通信算子反向 - - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/_grad/grad_comm_ops.py):这个文件定义了`AllReduce`和`AllGather`等通信算子的反向操作。 + - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/r1.1/mindspore/ops/_grad/grad_comm_ops.py):这个文件定义了`AllReduce`和`AllGather`等通信算子的反向操作。 diff --git a/docs/note/source_zh_cn/design/mindspore/images/auto_parallel.png b/docs/note/source_zh_cn/design/mindspore/images/auto_parallel.png index 800b3b2536c739dcc48a1e46b5f65fc327e4ce8d..d0135541eb76cedfcb22f2eb3e470a9d5d913957 100644 Binary files a/docs/note/source_zh_cn/design/mindspore/images/auto_parallel.png and b/docs/note/source_zh_cn/design/mindspore/images/auto_parallel.png differ diff --git a/docs/note/source_zh_cn/design/mindspore/mindir.md b/docs/note/source_zh_cn/design/mindspore/mindir.md index 01fd8b8ab770c1db0a3749607f368199f41e36bc..db6ffb70beadc6f7b2642d8b01d73cb21b254ffd 100644 --- a/docs/note/source_zh_cn/design/mindspore/mindir.md +++ b/docs/note/source_zh_cn/design/mindspore/mindir.md @@ -17,7 +17,7 @@ - + ## 简介 @@ -87,7 +87,7 @@ lambda (x, y) c end ``` -对应的MindIR为[ir.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_zh_cn/design/mindspore/images/ir/ir.dot): +对应的MindIR为[ir.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_zh_cn/design/mindspore/images/ir/ir.dot): ![image](./images/ir/ir.png) @@ -121,7 +121,7 @@ def hof(x): return res ``` -对应的MindIR为[hof.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_zh_cn/design/mindspore/images/ir/hof.dot): +对应的MindIR为[hof.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_zh_cn/design/mindspore/images/ir/hof.dot): ![image](./images/ir/hof.png) @@ -144,7 +144,7 @@ def fibonacci(n): return fibonacci(n-1) + fibonacci(n-2) ``` -对应的MindIR为[cf.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_zh_cn/design/mindspore/images/ir/cf.dot): 
+对应的MindIR为[cf.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_zh_cn/design/mindspore/images/ir/cf.dot): ![image](./images/ir/cf.png) @@ -171,7 +171,7 @@ def ms_closure(): return out1, out2 ``` -对应的MindIR为[closure.dot](https://gitee.com/mindspore/docs/blob/master/docs/note/source_zh_cn/design/mindspore/images/ir/closure.dot): +对应的MindIR为[closure.dot](https://gitee.com/mindspore/docs/blob/r1.1/docs/note/source_zh_cn/design/mindspore/images/ir/closure.dot): ![image](./images/ir/closure.png) diff --git a/docs/note/source_zh_cn/design/mindspore/profiler_design.md b/docs/note/source_zh_cn/design/mindspore/profiler_design.md index 4b5bcb51685a791e50c6cd25d2c20e4366ae8eb6..b9be2d1272f0c8bdd7bdfa48a6154842d278f8b3 100644 --- a/docs/note/source_zh_cn/design/mindspore/profiler_design.md +++ b/docs/note/source_zh_cn/design/mindspore/profiler_design.md @@ -25,7 +25,7 @@ - + ## 背景 diff --git a/docs/note/source_zh_cn/design/technical_white_paper.md b/docs/note/source_zh_cn/design/technical_white_paper.md index c3ec41c35159513d72d40770a3fdbe593ce3bbf9..3b9f3bc1f7bf892fbfbc97d6006ea365b62c72db 100644 --- a/docs/note/source_zh_cn/design/technical_white_paper.md +++ b/docs/note/source_zh_cn/design/technical_white_paper.md @@ -10,7 +10,7 @@ - + ## 引言 @@ -22,4 +22,4 @@ MindSpore作为新一代深度学习框架,是源于全产业的最佳实践,最佳匹配昇腾处理器算力,支持终端、边缘、云全场景灵活部署,开创全新的AI编程范式,降低AI开发门槛。MindSpore是一种全新的深度学习计算框架,旨在实现易开发、高效执行、全场景覆盖三大目标。为了实现易开发的目标,MindSpore采用基于源码转换(Source Code Transformation,SCT)的自动微分(Automatic Differentiation,AD)机制,该机制可以用控制流表示复杂的组合。函数被转换成函数中间表达(Intermediate Representation,IR),中间表达构造出一个能够在不同设备上解析和执行的计算图。在执行前,计算图上应用了多种软硬件协同优化技术,以提升端、边、云等不同场景下的性能和效率。MindSpore支持动态图,更易于检查运行模式。由于采用了基于源码转换的自动微分机制,所以动态图和静态图之间的模式切换非常简单。为了在大型数据集上有效训练大模型,通过高级手动配置策略,MindSpore可以支持数据并行、模型并行和混合并行训练,具有很强的灵活性。此外,MindSpore还有“自动并行”能力,它通过在庞大的策略空间中进行高效搜索来找到一种快速的并行策略。MindSpore框架的具体优势,请查看详细介绍。 -[查看技术白皮书](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com:443/white_paper/MindSpore_white_paper.pdf) 
+[查看技术白皮书](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com:443/white_paper/MindSpore_white_paperV1.1.pdf) diff --git a/docs/note/source_zh_cn/env_var_list.md b/docs/note/source_zh_cn/env_var_list.md index 97632f65bd85acf74934782520ec2b6b443e9662..a02c3896116d53ac20c313e1f8de5315623e0071 100644 --- a/docs/note/source_zh_cn/env_var_list.md +++ b/docs/note/source_zh_cn/env_var_list.md @@ -2,7 +2,7 @@ `Linux` `Ascend` `GPU` `CPU` `初级` `中级` `高级` - + 本文介绍MindSpore的环境变量。 @@ -15,12 +15,13 @@ |MINDDATA_PROFILING_DIR|MindData|系统路径,保存dataset profiling结果路径|String|系统路径,支持相对路径|与PROFILING_MODE=true配合使用|可选| |OPTIMIZE|MindData|是否执行dataset数据处理 pipeline 树优化,在适合数据处理算子融合的场景下,可以提升数据处理效率|String|true: 开启pipeline树优化
false: 关闭pipeline树优化|无|可选| |ENABLE_MS_DEBUGGER|Debugger|是否在训练中启动Debugger|Boolean|1:开启Debugger
0:关闭Debugger|无|可选| +|MS_BUILD_PROCESS_NUM|MindSpore|Ascend后端编译时,指定并行编译进程数|Integer|1~24:允许设置并行进程数取值范围|无|可选| |MS_DEBUGGER_PORT|Debugger|连接MindInsight Debugger Server的端口|Integer|1~65536,连接MindInsight Debugger Server的端口|无|可选 |MS_DEBUGGER_PARTIAL_MEM|Debugger|是否开启部分内存复用(只有在Debugger选中的节点才会关闭这些节点的内存复用)|Boolean|1:开启Debugger选中节点的内存复用
0:关闭Debugger选中节点的内存复用|无|可选| |RANK_TABLE_FILE|MindSpore|路径指向文件,包含指定多Ascend AI处理器环境中Ascend AI处理器的"device_id"对应的"device_ip"。|String|文件路径,支持相对路径与绝对路径|与RANK_SIZE配合使用|必选(使用Ascend AI处理器时)| |RANK_SIZE|MindSpore|指定深度学习时调用Ascend AI处理器的数量|Integer|1~8,调用Ascend AI处理器的数量|与RANK_TABLE_FILE配合使用|必选(使用Ascend AI处理器时)| |RANK_ID|MindSpore|指定深度学习时调用Ascend AI处理器的逻辑ID|Integer|0~7,多机并行时不同server中DEVICE_ID会有重复,使用RANK_ID可以避免这个问题(多机并行时 RANK_ID = SERVER_ID * DEVICE_NUM + DEVICE_ID|无|可选| -|MS_SUBMODULE_LOG_v|MindSpore|[MS_SUBMODULE_LOG_v功能与用法]()|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModual: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|无|可选 +|MS_SUBMODULE_LOG_v|MindSpore|[MS_SUBMODULE_LOG_v功能与用法]()|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModule: COMMON, MD, DEBUG, DEVICE, IR...|无|可选| |OPTION_PROTO_LIB_PATH|MindSpore|PROTO依赖库路径|String|文件路径,支持相对路径与绝对路径|无|可选| |GE_USE_STATIC_MEMORY|GraphEngine|当网络模型层数过大时,特征图中间计算数据可能超过25G,例如BERT24网络。多卡场景下为保证通信内存高效协同,需要配置为1,表示使用内存静态分配方式,其他网络暂时无需配置,默认使用内存动态分配方式。
静态内存默认配置为31G,如需要调整可以通过网络运行参数graph_memory_max_size和variable_memory_max_size的总和指定;动态内存是动态申请,最大不会超过graph_memory_max_size和variable_memory_max_size的总和。|Integer|1:使用内存静态分配方式
0:使用内存动态分配方式|无|可选| |DUMP_GE_GRAPH|GraphEngine|把整个流程中各个阶段的图描述信息打印到文件中,此环境变量控制dump图的内容多少|Integer|1:全量dump
2:不含有权重等数据的基本版dump
3:只显示节点关系的精简版dump|无|可选| diff --git a/docs/note/source_zh_cn/glossary.md b/docs/note/source_zh_cn/glossary.md index 630fec2ea6cbb0bfe4cbbd913419853a056adc57..16136f2d6428994519aa140571a2eb876e093ce2 100644 --- a/docs/note/source_zh_cn/glossary.md +++ b/docs/note/source_zh_cn/glossary.md @@ -2,11 +2,12 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `全流程` `初级` `中级` `高级` - + | 术语/缩略语 | 说明 | | ----- | ----- | -| ACL | Ascend Computer Language,提供Device管理、Context管理、Stream管理、内存管理、模型加载与执行、算子加载与执行、媒体数据处理等C++ API库,供用户开发深度神经网络应用。| +| ACL | Ascend Computer Language,提供Device管理、Context管理、Stream管理、内存管理、模型加载与执行、算子加载与执行、媒体数据处理等C++ API库,供用户开发深度神经网络应用。| +| AIR | Ascend Intermediate Representation,类似ONNX,是华为定义的针对机器学习所设计的开放式的文件格式,能更好地适配Ascend AI处理器。 | | Ascend | 华为昇腾系列芯片的系列名称。 | | CCE | Cube-based Computing Engine,面向硬件架构编程的算子开发工具。 | | CCE-C | Cube-based Computing Engine C,使用CCE开发的C代码。 | @@ -21,7 +22,6 @@ | FP16 | 16位浮点,半精度浮点算术,消耗更小内存。 | | FP32 | 32位浮点,单精度浮点算术。 | | GE | Graph Engine,MindSpore计算图执行引擎,主要负责根据前端的计算图完成硬件相关的优化(算子融合、内存复用等等)、device侧任务启动。 | -| AIR | Ascend Intermediate Representation,类似ONNX,是华为定义的针对机器学习所设计的开放式的文件格式,能更好地适配Ascend AI处理器。| | GHLO | Graph High Level Optimization,计算图高级别优化。GHLO包含硬件无关的优化(如死代码消除等)、自动并行和自动微分等功能。 | | GLLO | Graph Low Level Optimization,计算图低级别优化。GLLO包含硬件相关的优化,以及算子融合、Buffer融合等软硬件结合相关的深度优化。 | | Graph Mode | MindSpore的静态图模式,将神经网络模型编译成一整张图,然后下发执行,性能高。 | @@ -35,11 +35,12 @@ | MindArmour | MindSpore安全模块,通过差分隐私、对抗性攻防等技术手段,提升模型的保密性、完整性和可用性,阻止攻击者对模型进行恶意修改或是破解模型的内部构件,窃取模型的参数。 | | MindData | MindSpore数据框架,提供数据加载、增强、数据集管理以及可视化。 | | MindInsight | MindSpore可视化组件,可视化标量、图像、计算图以及模型超参等信息。 | +| MindIR | MindSpore IR,一种基于图表示的函数式IR,定义了可扩展的图结构以及算子IR表示,存储了MindSpore基础数据结构。 | | MindRecord | MindSpore定义的一种数据格式,是一个执行读取、写入、搜索和转换MindSpore格式数据集的模块。 | | MindSpore | 华为主导开源的深度学习框架。 | | MindSpore Lite | 一个轻量级的深度神经网络推理引擎,提供了将MindSpore训练出的模型在端侧进行推理的功能。 | | MNIST database | Modified National Institute of Standards and Technology database,一个大型手写数字数据库,通常用于训练各种图像处理系统。 | -| ONNX | Open Neural Network 
Exchange,是一种针对机器学习所设计的开放式的文件格式,用于存储训练好的模型。| +| ONNX | Open Neural Network Exchange,是一种针对机器学习所设计的开放式的文件格式,用于存储训练好的模型。| | PyNative Mode | MindSpore的动态图模式,将神经网络中的各个算子逐一下发执行,方便用户编写和调试神经网络模型。 | | ResNet-50 | Residual Neural Network 50,由微软研究院的Kaiming He等四名华人提出的残差神经网络。 | | Schema | 数据集结构定义文件,用于定义数据集包含哪些字段以及字段的类型。 | diff --git a/docs/note/source_zh_cn/help_seeking_path.md b/docs/note/source_zh_cn/help_seeking_path.md index ac3338260cf2cc17cbd838c5e7fc101da5021cf1..9798ffaf87e65b4bac4a31e2262da74eedcc50a1 100644 --- a/docs/note/source_zh_cn/help_seeking_path.md +++ b/docs/note/source_zh_cn/help_seeking_path.md @@ -2,7 +2,7 @@ `Linux` `Windows` `Ascend` `GPU` `CPU` `全流程` `初级` `中级` `高级` - + 本文将简述用户在使用MindSpore遇到问题时,如何使用官方提供的问题求助路径解决问题。MindSpore问题求助整体流程如图中所示,从用户使用MindSpore发现问题开始,直至选择到合适的问题解决方法。下面我们基于问题求助流程图对各种求助方法做解释说明。 @@ -28,5 +28,5 @@ - 为提高问题解决速度与质量,发帖前请参考[发帖建议](https://bbs.huaweicloud.com/forum/thread-69695-1-1.html),按照建议格式发帖。 - 帖子发出后会有论坛版主负责将问题收录,并联系技术专家进行解答,问题将在三个工作日内解决。 - 参考技术专家的解决方案,解决当前遇到的问题。 - + 如果在专家测试后确定是MindSpore功能有待完善,推荐用户在[MindSpore仓](https://gitee.com/mindspore)中创建ISSUE,所提问题会在后续的版本中得到修复完善。 diff --git a/docs/note/source_zh_cn/image_classification_lite.md b/docs/note/source_zh_cn/image_classification_lite.md index 6a17c2517a56db85c8658248a5bc691a04492a67..9884d155f141468091ec30b0b44651f2814566ca 100644 --- a/docs/note/source_zh_cn/image_classification_lite.md +++ b/docs/note/source_zh_cn/image_classification_lite.md @@ -1,6 +1,6 @@ # 图像分类模型支持(Lite) - + ## 图像分类介绍 @@ -15,7 +15,7 @@ | tree | 0.8584 | | houseplant | 0.7867 | -使用MindSpore Lite实现图像分类的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification)。 +使用MindSpore Lite实现图像分类的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/image_classification)。 ## 图像分类模型列表 @@ -35,6 +35,6 @@ | [GhostNet_int8](https://download.mindspore.cn/model_zoo/official/lite/ghostnet_lite/ghostnet_int8.ms) | 15.3 | 73.6% | - | - | 31.452 | | 
[VGG-Small-low_bit](https://download.mindspore.cn/model_zoo/official/lite/low_bit_quant/low_bit_quant_bs_1.ms) | 17.8 | 93.7% | - | - | 9.082 | | [ResNet50-0.65x](https://download.mindspore.cn/model_zoo/official/lite/adversarial_pruning_lite/adversarial_pruning.ms) | 48.6 | 80.2% | - | - | 89.816 | -| [plain-CNN-ResNet18](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_disstill_res18_cifar10_bs_1_update.ms) | 97.3 | 95.4% | - | - | 63.227 | -| [plain-CNN-ResNet34](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_disstill_res34_cifar10_bs_1_update.ms) | 80.5 | 95.0% | - | - | 20.652 | -| [plain-CNN-ResNet50](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_disstill_res50_cifar10_bs_1_update.ms) | 89.6 | 94.5% | - | - | 24.561 | +| [plain-CNN-ResNet18](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_distill_res18_cifar10_bs_1_update.ms) | 97.3 | 95.4% | - | - | 63.227 | +| [plain-CNN-ResNet34](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_distill_res34_cifar10_bs_1_update.ms) | 80.5 | 95.0% | - | - | 20.652 | +| [plain-CNN-ResNet50](https://download.mindspore.cn/model_zoo/official/lite/residual_distill_lite/residual_distill_res50_cifar10_bs_1_update.ms) | 89.6 | 94.5% | - | - | 24.561 | diff --git a/docs/note/source_zh_cn/image_segmentation_lite.md b/docs/note/source_zh_cn/image_segmentation_lite.md index 4aa2bd2fa140e975e1cb0a5a04aedb0bbb1f22a1..089ec594b81b7f24e34ea1c3d408e598d6cd31ac 100644 --- a/docs/note/source_zh_cn/image_segmentation_lite.md +++ b/docs/note/source_zh_cn/image_segmentation_lite.md @@ -1,12 +1,12 @@ # 图像分割模型支持(Lite) - + ## 图像分割介绍 图像分割是用于检测目标在图片中的位置或者图片中某一像素是输入何种对象的。 -使用MindSpore Lite实现图像分割的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_segmentation)。 +使用MindSpore 
Lite实现图像分割的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/image_segmentation)。 ## 图像分割模型列表 diff --git a/docs/note/source_zh_cn/index.rst b/docs/note/source_zh_cn/index.rst index 96bc1961d8e244cca441079cdbaa2b952c1f67b4..316a97305d024865feb345c5962685c1fbfa3d9a 100644 --- a/docs/note/source_zh_cn/index.rst +++ b/docs/note/source_zh_cn/index.rst @@ -35,7 +35,6 @@ MindSpore设计和规格 glossary roadmap - paper_list help_seeking_path community \ No newline at end of file diff --git a/docs/note/source_zh_cn/network_list_ms.md b/docs/note/source_zh_cn/network_list_ms.md index 67d9e26b7a4b1c0987703d087c4b5c03d7cf213a..219c7c0814d8fcf87e6ed930c38c1f8d0ee9213e 100644 --- a/docs/note/source_zh_cn/network_list_ms.md +++ b/docs/note/source_zh_cn/network_list_ms.md @@ -9,70 +9,93 @@ - + ## Model Zoo -| 领域 | 子领域 | 网络 | Ascend (Graph) | Ascend (PyNative) | GPU (Graph) | GPU (PyNaitve) | CPU (Graph) | CPU (PyNaitve) +### 标准网络 + +| 领域 | 子领域 | 网络 | Ascend(Graph) | Ascend(PyNative) | GPU(Graph) | GPU(PyNative) | CPU(Graph) | CPU(PyNative) +|:---- |:------- |:---- |:---- |:---- |:---- |:---- |:---- |:---- +|计算机视觉(CV) | 图像分类(Image Classification) | [AlexNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/cnn_direction_model/src/cnn_direction_model.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | 
Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [GoogLeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/inceptionv4/src/inceptionv4.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported | Supported +| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet(量化)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/lenet_quant/src/lenet_fusion.py) | Supported | Doing | Supported | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv1/src/mobilenet_v1.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Supported | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2(量化)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image 
Classification) | [NASNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Supported | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-50(量化)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet50_quant/models/resnet_quant.py) | Supported | Doing | Doing | Doing | Doing | Doing +|计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing +|计算机视觉(CV) | 图像分类(Image Classification) | [ResNeXt50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Doing | Supported | Supported | Doing | Doing +|计算机视觉(CV) | 图像分类(Image Classification) | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [ShuffleNetV1](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/shufflenetv1/src/shufflenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) |[SqueezeNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | 
[Tiny-DarkNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/tinydarknet/src/tinydarknet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [VGG16](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [Xception](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/xception/src/Xception.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [CenterFace](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [CTPN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ctpn/src/ctpn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [Faster R-CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [Mask R-CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/maskrcnn/src/maskrcnn/mask_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) |[Mask R-CNN (MobileNetV1)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [RetinaFace-ResNet50](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing +|计算机视觉(CV) | 目标检测(Object Detection) | 
[SSD](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Doing |Supported |Supported | Supported | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) |[SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/warpctc/src/warpctc.py) | Supported | Doing | Supported | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [YOLOv3-ResNet18](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [YOLOv3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) | [YOLOv3-DarkNet53(量化)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov3_darknet53_quant/src/darknet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 目标检测(Object Detection) |[YOLOv4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 文本检测(Text Detection) | [DeepText](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/deeptext/src/Deeptext/deeptext_vgg16.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 文本检测(Text Detection) | [PSENet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 文本识别(Text Recognition) | [CNN+CTC](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | 
Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [DeepLabV3](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Doing | Doing | Doing | Supported | Doing +| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [U-Net2D (Medical)](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 关键点检测(Keypoint Detection) |[OpenPose](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 关键点检测(Keypoint Detection) |[SimplePoseNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/simple_pose/src/model.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 计算机视觉(CV) | 光学字符识别(Optical Character Recognition) |[CRNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/cv/crnn/src/crnn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [BERT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [FastText](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/fasttext/src/fasttext_model.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [GRU](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/gru/src/seq2seq.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural 
Language Understanding) | [MASS](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/lstm/src/lstm.py) | Supported | Doing | Supported | Supported | Supported | Supported +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [Transformer](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TinyBERT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing +| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TextCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/nlp/textcnn/src/textcnn.py) | Supported | Doing | Doing | Doing | Doing | Doing +| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported| Supported | Doing +| 推荐(Recommender) | 推荐系统、搜索、排序(Recommender System, Search, Ranking) | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing +| 推荐(Recommender) | 推荐系统(Recommender System) | [NCF](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/recommend/ncf/src/ncf.py) | Supported | Doing | Supported | Doing| Doing | Doing +| 图神经网络(GNN) | 文本分类(Text Classification) | [GCN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | 
Supported | Doing | Doing | Doing | Doing +| 图神经网络(GNN) | 文本分类(Text Classification) | [GAT](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing +| 图神经网络(GNN) | 推荐系统(Recommender System) | [BGCF](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing + +### 研究网络 + +| 领域 | 子领域 | 网络 | Ascend(Graph) | Ascend(PyNative) | GPU(Graph) | GPU(PyNative) | CPU(Graph) | CPU(PyNative) |:---- |:------- |:---- |:---- |:---- |:---- |:---- |:---- |:---- -|计算机视觉(CV) | 图像分类(Image Classification) | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported | Supported -| 计算机视觉(CV) | 图像分类(Image Classification) | [LeNet(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet_quant/src/lenet_fusion.py) | Supported | Doing | Supported | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-50(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet50_quant/models/resnet_quant.py) | Supported | Doing | Doing | Doing | Doing | Doing -|计算机视觉(CV) | 图像分类(Image Classification) | 
[ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing | Doing -|计算机视觉(CV) | 图像分类(Image Classification) | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing | Doing | Doing | Doing -|计算机视觉(CV) | 图像分类(Image Classification) | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [DenseNet121](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/densenet121/src/network/densenet.py) | Supported | Supported | Doing | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV2(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2_quant/src/mobilenetV2.py) | Supported | Doing | Supported | Doing | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing | Doing -| 计算机视觉(CV) | 图像分类(Image Classification) | 
[NASNET](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/nasnet/src/nasnet_a_mobile.py) | Doing | Doing | Supported | Supported | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) | [ShuffleNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/shufflenetv2/src/shufflenetv2.py) | Doing | Doing | Supported | Supported | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) | [EfficientNet-B0](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/efficientnet/src/efficientnet.py) | Doing | Doing | Supported | Supported | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) | [GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ghostnet/src/ghostnet.py) | Doing | Doing | Supported | Supported | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) | [ResNet50-0.65x](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/resnet50_adv_pruning/src/resnet_imgnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) | [TinyNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/tinynet/src/tinynet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceAttributes](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 图像分类(Image Classification) |[SqueezeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/squeezenet/src/squeezenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-|计算机视觉(CV) | 目标检测(Object Detection) | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported |Supported |Supported | Supported | Supported
-| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) | [YoloV3-DarkNet53(量化)](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53_quant/src/darknet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) | [MaskRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn/src/maskrcnn/mask_rcnn_r50.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) | [Retinaface-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/retinaface_resnet50/src/network.py) | Doing | Doing | Supported | Supported | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) | [CenterFace](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/centerface/src/centerface.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) |[MaskRCNN-MobileNetV1](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/maskrcnn_mobilenetv1/src/maskrcnn_mobilenetv1/mobilenetv1.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) |[SSD-MobileNetV1-FPN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/mobilenet_v1_fpn.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 目标检测(Object Detection) |[YoloV4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov4/src/yolo.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 计算机视觉 (CV) | 文本检测 (Text Detection) | [PSENet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/psenet/src/ETSNET/etsnet.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉 (CV) | 文本识别 (Text Recognition) | [CNNCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/cnnctc/src/cnn_ctc.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [UNet2D-Medical](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/unet/src/unet/unet_model.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 计算机视觉(CV) | 语义分割(Semantic Segmentation) |[Openpose](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/openpose/src/openposenet.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported | Supported
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing | Doing
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [GNMT v2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/gnmt_v2/src/gnmt_model/gnmt.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
-| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Supported| Doing | Doing
-| 推荐(Recommender) | 推荐系统、搜索、排序(Recommender System, Search, Ranking) | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Supported | Doing | Doing
-| 图神经网络(GNN) | 文本分类(Text Classification) | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 图神经网络(GNN) | 文本分类(Text Classification) | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Supported | Doing | Doing | Doing | Doing
-| 图神经网络(GNN) | 推荐系统(Recommender System) | [BGCF](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/bgcf/src/bgcf.py) | Supported | Doing | Doing | Doing | Doing | Doing
-|语音(Audio) | 音频标注(Audio Tagging) | [FCN-4](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Supported | Doing | Doing | Doing | Doing
-|高性能计算(HPC) | 分子动力学(Molecular Dynamics) | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Doing | Doing | Doing | Doing | Doing
-|高性能计算(HPC) | 海洋模型(Ocean Model) | [GOMO](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceAttributes](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceAttribute/src/FaceAttribute/resnet18.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 目标检测(Object Detection) | [FaceDetection](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceDetection/src/FaceDetection/yolov3.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceQualityAssessment](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceQualityAssessment/src/face_qa.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceRecognition](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceRecognition/src/init_network.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像分类(Image Classification) |[FaceRecognitionForTracking](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/FaceRecognitionForTracking/src/reid.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 目标检测(Object Detection) | [SSD-GhostNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/ssd_ghostnet/src/ssd_ghostnet.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 关键点检测(Key Point Detection) | [CenterNet](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/centernet/src/centernet_pose.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 计算机视觉(CV) | 图像风格迁移(Image Style Transfer) | [CycleGAN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/cv/cycle_gan/src/models/cycle_gan.py) | Doing | Doing | Doing | Supported | Supported | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [DS-CNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/nlp/dscnn/src/ds_cnn.py) | Supported | Supported | Doing | Doing | Doing | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TextRCNN](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/nlp/textrcnn/src/textrcnn.py) | Supported | Doing | Doing | Doing | Doing | Doing
+| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [AutoDis](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/recommend/autodis/src/autodis.py) | Supported | Doing | Doing | Doing | Doing | Doing
+|语音(Audio) | 音频标注(Audio Tagging) | [FCN-4](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/audio/fcn-4/src/musictagger.py) | Supported | Doing | Doing | Doing | Doing | Doing
+|高性能计算(HPC) | 分子动力学(Molecular Dynamics) | [DeepPotentialH2O](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/hpc/molecular_dynamics/src/network.py) | Supported | Supported| Doing | Doing | Doing | Doing
+|高性能计算(HPC) | 海洋模型(Ocean Model) | [GOMO](https://gitee.com/mindspore/mindspore/blob/r1.1/model_zoo/research/hpc/ocean_model/src/GOMO.py) | Doing | Doing | Supported | Doing | Doing | Doing

-> 你也可以使用 [MindWizard工具](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) 快速生成经典网络脚本。
+> 你也可以使用 [MindWizard工具](https://gitee.com/mindspore/mindinsight/tree/r1.1/mindinsight/wizard/) 快速生成经典网络脚本。
diff --git a/docs/note/source_zh_cn/object_detection_lite.md b/docs/note/source_zh_cn/object_detection_lite.md
index 38855ad7eb2071f2fb8097198ae97ef0644d292c..e23056139183aab9a599209cd693a545da8ec1fa 100644
--- a/docs/note/source_zh_cn/object_detection_lite.md
+++ b/docs/note/source_zh_cn/object_detection_lite.md
@@ -1,6 +1,6 @@
 # 目标检测模型支持(Lite)

- 
+ 

 ## 目标检测介绍
@@ -12,7 +12,7 @@
 | ----- | ---- | ---------------- |
 | mouse | 0.78 | [10, 25, 35, 43] |

-使用MindSpore Lite实现目标检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection)。
+使用MindSpore Lite实现目标检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/object_detection)。

 ## 目标检测模型列表
diff --git a/docs/note/source_zh_cn/operator_list_implicit.md b/docs/note/source_zh_cn/operator_list_implicit.md
index 5a3c7fade7bab75a20ade9ac1b332b17cb36f3ad..ba50614ea271246d8166367631785e68f72bd4f8 100644
--- a/docs/note/source_zh_cn/operator_list_implicit.md
+++ 
b/docs/note/source_zh_cn/operator_list_implicit.md
@@ -12,7 +12,7 @@

- 
+ 

 ## 隐式类型转换
@@ -38,68 +38,69 @@
 | 算子名 |
 |:-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Assign.html) |
-| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.AssignSub.html) |
-| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyMomentum.html) |
-| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FusedSparseAdam.html) |
-| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FusedSparseLazyAdam.html) |
-| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FusedSparseFtrl.html) |
-| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FusedSparseProximalAdagrad.html) |
-| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAdaMax.html) |
-| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAdadelta.html) |
-| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAdagrad.html) |
-| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAdagradV2.html) |
-| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyAdagrad.html) |
-| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyAdagradV2.html) |
-| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyProximalAdagrad.html) |
-| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyProximalAdagrad.html) |
-| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyAddSign.html) |
-| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyPowerSign.html) |
-| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyGradientDescent.html) |
-| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApplyProximalGradientDescent.html) |
-| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyFtrl.html) |
-| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseApplyFtrlV2.html) |
-| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BitwiseAnd.html) |
-| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BitwiseOr.html) |
-| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BitwiseXor.html) |
-| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TensorAdd.html) |
-| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sub.html) |
-| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Mul.html) |
-| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Pow.html) |
-| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Minimum.html) |
-| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Maximum.html) |
-| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.RealDiv.html) |
-| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Div.html) |
-| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.DivNoNan.html) |
-| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FloorDiv.html) |
-| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TruncateDiv.html) |
-| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TruncateMod.html) |
-| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Mod.html) |
-| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FloorMod.html) |
-| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Atan2.html) |
-| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SquaredDifference.html) |
-| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Xdivy.html) |
-| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Xlogy.html) |
-| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Equal.html) |
-| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApproximateEqual.html) |
-| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.NotEqual.html) |
-| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Greater.html) |
-| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GreaterEqual.html) |
-| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Less.html) |
-| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LessEqual.html) |
-| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalAnd.html) |
-| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalOr.html) |
-| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterNdUpdate.html) |
-| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterNdAdd.html) |
-| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterNdSub.html) |
-| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterNonAliasingAdd.html) |
-| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterUpdate.html) |
-| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterMax.html) |
-| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterMin.html) |
-| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterAdd.html) |
-| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterSub.html) |
-| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterMul.html) |
-| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ScatterDiv.html) |
-| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.AssignAdd.html) |
+| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Assign.html) |
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.AssignSub.html) |
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyMomentum.html) |
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FusedSparseAdam.html) |
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FusedSparseLazyAdam.html) |
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FusedSparseFtrl.html) |
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FusedSparseProximalAdagrad.html) |
+| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAdaMax.html) |
+| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAdadelta.html) |
+| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAdagrad.html) |
+| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAdagradV2.html) |
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyAdagrad.html) |
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyAdagradV2.html) |
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyProximalAdagrad.html) |
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyProximalAdagrad.html) |
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyAddSign.html) |
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyPowerSign.html) |
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyGradientDescent.html) |
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApplyProximalGradientDescent.html) |
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyFtrl.html) |
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseApplyFtrlV2.html) |
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BitwiseAnd.html) |
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BitwiseOr.html) |
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BitwiseXor.html) |
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TensorAdd.html) |
+| [mindspore.ops.Add](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Add.html) |
+| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sub.html) |
+| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Mul.html) |
+| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Pow.html) |
+| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Minimum.html) |
+| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Maximum.html) |
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.RealDiv.html) |
+| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Div.html) |
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.DivNoNan.html) |
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FloorDiv.html) |
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TruncateDiv.html) |
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TruncateMod.html) |
+| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Mod.html) |
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FloorMod.html) |
+| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Atan2.html) |
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SquaredDifference.html) |
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Xdivy.html) |
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Xlogy.html) |
+| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Equal.html) |
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApproximateEqual.html) |
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.NotEqual.html) |
+| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Greater.html) |
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GreaterEqual.html) |
+| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Less.html) |
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LessEqual.html) |
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalAnd.html) |
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalOr.html) |
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterNdUpdate.html) |
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterNdAdd.html) |
+| 
[mindspore.ops.ScatterNdSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterNdSub.html) |
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterNonAliasingAdd.html) |
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterUpdate.html) |
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterMax.html) |
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterMin.html) |
+| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterAdd.html) |
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterSub.html) |
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterMul.html) |
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ScatterDiv.html) |
+| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.AssignAdd.html) |

->
\ No newline at end of file
+>
diff --git a/docs/note/source_zh_cn/operator_list_lite.md b/docs/note/source_zh_cn/operator_list_lite.md
index c5665c3d3414f29e667fa9df486964d13505b6ee..98626afe4d54648308d2f41467f4e393aa243b66 100644
--- a/docs/note/source_zh_cn/operator_list_lite.md
+++ b/docs/note/source_zh_cn/operator_list_lite.md
@@ -2,123 +2,131 @@
 `Linux` `Ascend` `端侧` `推理应用` `初级` `中级` `高级`

- 
+ 

-| 操作名 | CPU<br>FP16 | CPU<br>FP32 | CPU<br>Int8 | CPU<br>UInt8 | GPU<br>FP16 | GPU<br>FP32 | NPU | 支持的Tensorflow<br>Lite算子 | 支持的Caffe<br>Lite算子 | 支持的Onnx<br>Lite算子 |
-|-----------------------|----------|----------|----------|-----------|----------|-------------------|----------|----------|---------|---------|
-| Abs | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Abs | | Abs |
-| Add | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Add | | Add, Int8Add |
-| AddN | | Supported | | | | | | AddN | | |
-| Argmax | | Supported | Supported | Supported | | | | Argmax | ArgMax | ArgMax |
-| Argmin | | Supported | Supported | Supported | | | | Argmin | | |
-| AvgPool | Supported | Supported | Supported | Supported | Supported | Supported | Supported | MeanPooling | Pooling | AveragePool, GlobalAveragePool, Int8AveragePool |
-| BatchNorm | Supported | Supported | Supported | Supported | Supported | Supported | | | BatchNorm | BatchNormalization |
-| BatchToSpace | | Supported | Supported | Supported | Supported | Supported | | BatchToSpace, BatchToSpaceND | | |
-| BiasAdd | | Supported | Supported | Supported | Supported | Supported | | | | BiasAdd |
-| Broadcast | | Supported | | | | | | BroadcastTo | | Expand |
-| Cast | Supported | Supported | Supported| Supported | Supported | Supported | Supported | Cast, QUANTIZE, DEQUANTIZE | | Cast |
-| Ceil | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Ceil | | Ceil |
-| Concat | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Concat | Concat | Concat |
-| ConstantOfShape | | Supported | | | | | | | | ConstantOfShape |
-| Conv2d | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Conv2D | Convolution | Conv, Int8Conv, ConvRelu, Int8ConvRelu |
-| Conv2dTranspose | Supported | Supported | Supported | Supported | Supported | Supported | Supported | DeConv2D | Deconvolution | ConvTranspose |
-| Cos | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Cos | | Cos |
-| Crop | Supported | Supported | Supported | Supported | | | | | Crop | |
-| CustomExtractFeatures | | Supported | | | | | | ExtractFeatures | | |
-| CustomNormalize | | Supported | | | | | | Normalize | | |
-| CustomPredict | | Supported | | | | | | Predict | | |
-| DeDepthwiseConv2D | | Supported | Supported | Supported | | | | | Deconvolution| |
-| DepthToSpace | | Supported | Supported | Supported | Supported | Supported | | DepthToSpace| | DepthToSpace |
-| DepthwiseConv2dNative | Supported | Supported | Supported | Supported | Supported | Supported | Supported | DepthwiseConv2D | Convolution | |
-| DetectionPostProcess | | Supported | Supported | Supported | | | | Custom | | |
-| Div | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Div, RealDiv | | Div |
-| Eltwise | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | Eltwise | Sum, Max[3] |
-| Elu | | Supported | | | | | | | Elu | Elu, NonMaxSuppression |
-| Equal | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Equal | | Equal |
-| Exp | | Supported | | | Supported | Supported | | Exp | Exp | Exp |
-| ExpandDims | | Supported | Supported | Supported | | | |ExpandDims | | |
-| Fill | | Supported | | | | | | Fill | | |
-| Flatten | | Supported | | | | | | | Flatten | |
-| Floor | Supported | Supported | Supported | Supported | Supported | Supported | Supported | flOOR | | Floor |
-| FloorDiv | Supported | Supported | | | Supported | Supported | Supported | FloorDiv | | |
-| FloorMod | Supported | Supported | | | Supported | Supported | Supported | FloorMod | | |
-| FullConnection | Supported | Supported | Supported | Supported | Supported | Supported | | FullyConnected | InnerProduct | |
-| FusedBatchNorm | Supported | Supported | Supported | Supported | | | Supported | FusedBatchNorm | | |
-| GatherNd | | Supported | Supported | Supported | | | | GatherND | | |
-| GatherV2 | | Supported | Supported | Supported | Supported | Supported | | Gather | | Gather |
-| Greater | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Greater | | Greater |
-| GreaterEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | GreaterEqual| | |
-| HashtableLookup | | Supported | | | | | | HashtableLookup | | |
-| Hswish | Supported | Supported | Supported | Supported | Supported | Supported | Supported | HardSwish | | |
-| InstanceNorm | | Supported | | | | | | InstanceNorm | | |
-| L2Norm | | Supported | | | | | | L2_NORMALIZATION | | |
-| LeakyReLU | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LeakyRelu | | LeakyRelu |
-| Less | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Less | | Less |
-| LessEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LessEqual | | |
-| LRN | | Supported | | | | | | LocalResponseNorm | | Lrn, LRN |
-| Log | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Log | | Log |
-| LogicalAnd | Supported | Supported | | | Supported | Supported | Supported | LogicalAnd | | And |
-| LogicalNot | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LogicalNot | | Not |
-| LogicalOr | Supported | Supported | | | Supported | Supported | Supported | LogicalOr | | Or |
-| LshProjection | | Supported | | | | | | LshProjection | | |
-| LSTM | | Supported | | | | | | | | LSTM |
-| MatMul | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | | MatMul |
-| Maximum | Supported | Supported | | | Supported | Supported | Supported | Maximum | | |
-| MaxPool | Supported | Supported | Supported | Supported | Supported | Supported | Supported | MaxPooling | Pooling | MaxPool, GlobalMaxPool |
-| Minimum | Supported | Supported | | | Supported | Supported | Supported | Minimum | | Min |
-| Mul | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Mul | | Mul |
-| Neg | Supported | Supported | | | Supported | Supported | Supported | Neg | | Neg |
-| NotEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | NotEqual | | |
-| OneHot | | Supported | | | | | | OneHot | | OneHot |
-| Pad | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Pad, MirrorPad | | Pad |
-| Pow | | Supported | Supported | Supported | | | | Pow | Power | Pow[2] |
-| PReLU | | Supported | | | Supported | Supported | | PRELU | PReLU | PRelu |
-| Range | | Supported | | | | | | Range | | |
-| Rank | | Supported | | | | | | Rank | | |
-| ReduceASum | | Supported | | | | | | | Reduction | |
-| ReduceMax | Supported | Supported | Supported | Supported | | | | ReduceMax | | ReduceMax |
-| ReduceMean | Supported | Supported | Supported | Supported | Supported | Supported | | Mean | Reduction | ReduceMean |
-| ReduceMin | Supported | Supported | Supported | Supported | | | | ReduceMin | | ReduceMin |
-| ReduceProd | Supported | Supported | Supported | Supported | | | | ReduceProd | | ReduceProd |
-| ReduceSum | Supported | Supported | Supported | Supported | Supported | Supported | | Sum | Reduction | ReduceSum |
-| ReduceSumSquare | Supported | Supported | Supported | Supported | | | | | Reduction | ReduceSumSquare |
-| ReLU | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Relu | ReLU | Relu |
-| ReLU6 | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Relu6 | ReLU6 | Clip[1] |
-| Reshape | Supported | Supported | Supported | Supported | Supported | Supported | | Reshape | Reshape | Reshape,Flatten |
-| Resize | | Supported | Supported | Supported | Supported | Supported | Supported | ResizeBilinear, NearestNeighbor | Interp | |
-| Reverse | | Supported | | | | | | reverse | | |
-| ReverseSequence | | Supported | | | | | | ReverseSequence | | |
-| Round | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Round | | Round |
-| Rsqrt | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Rsqrt | | |
-| Scale | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | Scale | |
-| ScatterNd | | Supported | | | | | | ScatterNd | | |
-| Shape | | Supported | Supported | Supported | | | Supported | Shape | | Shape |
-| Sigmoid | Supported | Supported | Supported | Supported | Supported | Supported | | Logistic | Sigmoid | Sigmoid |
-| Sin | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sin | | Sin |
-| Slice | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Slice | Slice | Slice |
-| SkipGram | | Supported | | | | | | SKipGram | | |
-| Softmax | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Softmax | Softmax | Softmax |
-| SpaceToBatch | | Supported | Supported | Supported | Supported | Supported | | SpaceToBatch | | |
-| SpaceToBatchND | | Supported | Supported | Supported | Supported | Supported | | SpaceToBatchND | | |
-| SpaceToDepth | | Supported | | | | | | SpaceToDepth | | SpaceToDepth |
-| SparseToDense | | Supported | | | | | | SpareToDense | | |
-| Split | Supported | Supported | Supported | Supported | | | Supported | Split, SplitV | | Split |
-| Sqrt | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sqrt | | Sqrt |
-| Square | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Square | | |
-| SquaredDifference | Supported | Supported | | | Supported | Supported | Supported | SquaredDifference | | |
-| Squeeze | | Supported | Supported | Supported | Supported | Supported | | Squeeze | | Squeeze |
-| StridedSlice | | Supported | Supported | Supported | | | Supported | StridedSlice| | |
-| Stack | Supported | 
Supported | | | Supported | Supported | | Stack | | | -| Sub | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sub | | Sub | -| Tanh | Supported | Supported | | | Supported | Supported | | Tanh | TanH | Tanh, Sign | -| Tile | | Supported | | | | | | Tile | Tile | Tile | -| TopK | | Supported | Supported | Supported | | | | TopKV2 | | TopK | -| Transpose | Supported | Supported | | | Supported | Supported | Supported | Transpose | Permute | Transpose | -| Unique | | Supported | | | | | | Unique | | | -| Unsqueeze | | Supported | Supported | Supported | | | Supported | | | Unsqueeze | -| Unstack | | Supported | | | | | | Unstack | | | -| Where | | Supported | | | | | | Where | | | -| ZerosLike | | Supported | | | | | | ZerosLike | | | +| 操作名
  | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | NPU
  | 支持的TensorFlow
Lite算子 | 支持的Caffe
Lite算子 | 支持的Onnx
Lite算子 |支持的TensorFlow
算子 | +|-----------------------|----------|----------|----------|-----------|----------|-------------------|----------|----------|---------|---------|---------| +| Abs | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Abs | | Abs | | +| Add | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Add | | Add, Int8Add | Add, AddV2 | +| AddN | | Supported | | | | | | AddN | | | | +| Assert | | Supported | | | | | | | | | Assert | +| Argmax | | Supported | Supported | Supported | Supported | Supported | | Argmax | ArgMax | ArgMax | | +| Argmin | | Supported | Supported | Supported | Supported | Supported | | Argmin | | | | +| AvgPool | Supported | Supported | Supported | Supported | Supported | Supported | Supported | MeanPooling | Pooling | AveragePool, GlobalAveragePool, Int8AveragePool | | +| BatchNorm | Supported | Supported | Supported | Supported | Supported | Supported | | | BatchNorm | BatchNormalization | | +| BatchToSpace | | Supported | Supported | Supported | Supported | Supported | | BatchToSpace, BatchToSpaceND | | | | +| BiasAdd | | Supported | Supported | Supported | Supported | Supported | | | | BiasAdd | BiasAdd | +| Broadcast | | Supported | | | | | | BroadcastTo | | Expand | | +| Cast | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Cast, QUANTIZE, DEQUANTIZE | | Cast | Cast | +| Ceil | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Ceil | | Ceil | | +| Concat | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Concat | Concat | Concat | ConcatV2 | +| ConstantOfShape | | Supported | | | | | | | | ConstantOfShape | | +| Conv2d | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Conv2D | Convolution | Conv, Int8Conv, ConvRelu, Int8ConvRelu | Conv2D | +| Conv2dTranspose | Supported | Supported | Supported | Supported | Supported | Supported | 
Supported | DeConv2D | Deconvolution | ConvTranspose | | +| Cos | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Cos | | Cos | | +| Crop | Supported | Supported | Supported | Supported | | | | | Crop | | | +| CustomExtractFeatures | | Supported | | | | | | ExtractFeatures | | | | +| CustomNormalize | | Supported | | | | | | Normalize | | | | +| CustomPredict | | Supported | | | | | | Predict | | | | +| DeDepthwiseConv2D | | Supported | Supported | Supported | | | | | Deconvolution | | | +| DepthToSpace | | Supported | Supported | Supported | Supported | Supported | | DepthToSpace | | DepthToSpace | | +| DepthwiseConv2dNative | Supported | Supported | Supported | Supported | Supported | Supported | Supported | DepthwiseConv2D | Convolution | | | +| DetectionPostProcess | | Supported | Supported | Supported | | | | Custom | | | | +| Div | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Div, RealDiv | | Div | Div, RealDiv | +| Eltwise | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | Eltwise | Sum, Max[3] | | +| Elu | | Supported | | | | | | | Elu | Elu, NonMaxSuppression | | +| Equal | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Equal | | Equal | Equal | +| Exp | | Supported | | | Supported | Supported | | Exp | Exp | Exp | | +| ExpandDims | | Supported | Supported | Supported | | | | ExpandDims | | | ExpandDims | +| Fill | | Supported | | | | | | Fill | | | | +| Flatten | | Supported | | | | | | | Flatten | | | +| Floor | Supported | Supported | Supported | Supported | Supported | Supported | Supported | flOOR | | Floor | | +| FloorDiv | Supported | Supported | | | Supported | Supported | Supported | FloorDiv | | | | +| FloorMod | Supported | Supported | | | Supported | Supported | Supported | FloorMod | | | | +| FullConnection | Supported | Supported | Supported | Supported | Supported | Supported | | FullyConnected 
| InnerProduct | | | +| FusedBatchNorm | Supported | Supported | Supported | Supported | | | Supported | FusedBatchNorm | | | | +| GatherNd | | Supported | Supported | Supported | Supported | Supported | | GatherND | | | | +| Gather | | Supported | Supported | Supported | Supported | Supported | | Gather | | Gather | GatherV2 | +| Greater | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Greater | | Greater | Greater | +| GreaterEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | GreaterEqual | | | GreaterEqual | +| HashtableLookup | | Supported | | | | | | HashtableLookup | | | | +| Hswish | Supported | Supported | Supported | Supported | Supported | Supported | Supported | HardSwish | | | | +| InstanceNorm | | Supported | | | | | | InstanceNorm | | | | +| L2Norm | | Supported | | | | | | L2_NORMALIZATION | | | | +| LeakyReLU | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LeakyRelu | | LeakyRelu | | +| Less | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Less | | Less | Less | +| LessEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LessEqual | | | LessEqual | +| LRN | | Supported | | | | | | LocalResponseNorm | | Lrn, LRN | | +| Log | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Log | | Log | | +| LogicalAnd | Supported | Supported | | | Supported | Supported | Supported | LogicalAnd | | And | LogicalAnd | +| LogicalNot | Supported | Supported | Supported | Supported | Supported | Supported | Supported | LogicalNot | | Not | | +| LogicalOr | Supported | Supported | | | Supported | Supported | Supported | LogicalOr | | Or | | +| LshProjection | | Supported | | | | | | LshProjection | | | | +| LSTM | | Supported | | | | | | | | LSTM | | +| MatMul | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | | 
MatMul | MatMul | +| Maximum | Supported | Supported | | | Supported | Supported | Supported | Maximum | | | Maximum | +| MaxPool | Supported | Supported | Supported | Supported | Supported | Supported | Supported | MaxPooling | Pooling | MaxPool, GlobalMaxPool | | +| Minimum | Supported | Supported | | | Supported | Supported | Supported | Minimum | | Min | Minimum | +| Mul | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Mul | | Mul | Mul | +| Neg | Supported | Supported | | | Supported | Supported | Supported | Neg | | Neg | | +| NotEqual | Supported | Supported | Supported | Supported | Supported | Supported | Supported | NotEqual | | |NotEqual | +| OneHot | | Supported | | | Supported | Supported | | OneHot | | OneHot | | +| Pad | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Pad, MirrorPad | | Pad | | +| Pow | | Supported | Supported | Supported | Supported | Supported | | Pow | Power | Pow[2] | | +| PReLU | | Supported | | | Supported | Supported | | PRELU | PReLU | PRelu | | +| Range | | Supported | | | | | | Range | | | Range, RaggedRange | +| Rank | | Supported | | | | | | Rank | | | | +| ReduceAll | | Supported | | | | | | | | | All | +| ReduceASum | | Supported | | | Supported | Supported | | | Reduction | | | +| ReduceMax | Supported | Supported | Supported | Supported | Supported | Supported | | ReduceMax | | ReduceMax | Max | +| ReduceMean | Supported | Supported | Supported | Supported | Supported | Supported | | Mean | Reduction | ReduceMean | Mean | +| ReduceMin | Supported | Supported | Supported | Supported | Supported | Supported | | ReduceMin | | ReduceMin | Min | +| ReduceProd | Supported | Supported | Supported | Supported | Supported | Supported | | ReduceProd | | ReduceProd | Prod | +| ReduceSum | Supported | Supported | Supported | Supported | Supported | Supported | | Sum | Reduction | ReduceSum | Sum | +| ReduceSumSquare | Supported | Supported | Supported | 
Supported | | | | | Reduction | ReduceSumSquare | | +| ReLU | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Relu | ReLU | Relu | Relu | +| ReLU6 | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Relu6 | ReLU6 | Clip[1] | Relu6 | +| Reshape | Supported | Supported | Supported | Supported | Supported | Supported | | Reshape | Reshape | Reshape,Flatten | Reshape | +| Resize | | Supported | Supported | Supported | Supported | Supported | Supported | ResizeBilinear, NearestNeighbor | Interp | | | +| Reverse | | Supported | | | | | | reverse | | | | +| ReverseSequence | | Supported | | | | | | ReverseSequence | | | ReverseSequence | +| Round | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Round | | Round | Round | +| Rsqrt | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Rsqrt | | | | +| Scale | Supported | Supported | Supported | Supported | Supported | Supported | Supported | | Scale | | | +| ScatterNd | | Supported | | | | | | ScatterNd | | | | +| Shape | | Supported | Supported | Supported | Supported | Supported | Supported | Shape | | Shape | Shape | +| Sigmoid | Supported | Supported | Supported | Supported | Supported | Supported | | Logistic | Sigmoid | Sigmoid | Sigmoid | +| Sin | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sin | | Sin | | +| Slice | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Slice | Slice | Slice | | +| SkipGram | | Supported | | | | | | SKipGram | | | | +| Softmax | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Softmax | Softmax | Softmax | | +| SpaceToBatch | | Supported | Supported | Supported | Supported | Supported | | SpaceToBatch | | | | +| SpaceToBatchND | | Supported | Supported | Supported | Supported | Supported | | SpaceToBatchND | | | | +| SpaceToDepth | | Supported | 
| | Supported | Supported | | SpaceToDepth | | SpaceToDepth | | +| SparseToDense | | Supported | | | Supported | Supported | | SpareToDense | | | | +| Split | Supported | Supported | Supported | Supported | | | Supported | Split, SplitV | | Split | Split, SplitV | +| Sqrt | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sqrt | | Sqrt | | +| Square | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Square | | | | +| SquaredDifference | Supported | Supported | | | Supported | Supported | Supported | SquaredDifference | | | | +| Squeeze | | Supported | Supported | Supported | Supported | Supported | | Squeeze | | Squeeze | Squeeze | +| StridedSlice | | Supported | Supported | Supported | Supported | Supported | Supported | StridedSlice | | | StridedSlice | +| Stack | Supported | Supported | | | Supported | Supported | | Stack | | | Pack | +| Sub | Supported | Supported | Supported | Supported | Supported | Supported | Supported | Sub | | Sub | Sub | +| Tanh | Supported | Supported | | | Supported | Supported | | Tanh | TanH | Tanh, Sign | Tanh | +| TensorListFromTensor | | Supported | | | | | | | | | TensorListFromTensor | +| TensorListGetItem | | Supported | | | | | | | | | TensorListGetItem | +| TensorListReserve | | Supported | | | | | | | | | TensorListReserve | +| TensorListSetItem | | Supported | | | | | | | | | TensorListSetItem | +| TensorListStack | | Supported | | | | | | | | | TensorListStack | +| Tile | | Supported | | | | | | Tile | Tile | Tile | Tile | +| TopK | | Supported | Supported | Supported | | | | TopKV2 | | TopK | | +| Transpose | Supported | Supported | | | Supported | Supported | Supported | Transpose | Permute | Transpose | Transpose | +| Unique | | Supported | | | | | | Unique | | | | +| Unsqueeze | | Supported | Supported | Supported | | | Supported | | | Unsqueeze | | +| Unstack | | Supported | | | | | | Unstack | | | | +| Where | | Supported | | | | | | Where | | | | 
+| While | | Supported | | | | | | | | | While, StatelessWhile |
+| ZerosLike | | Supported | | | | | | ZerosLike | | | |
 
 [1] Clip:仅支持将clip(0, 6)转换为Relu6。
diff --git a/docs/note/source_zh_cn/operator_list_ms.md b/docs/note/source_zh_cn/operator_list_ms.md
index 8a3104db96a43ca98ccd0245602a08014df7dea5..ee61872844c10db36ffe2e7fb467975b8519d224 100644
--- a/docs/note/source_zh_cn/operator_list_ms.md
+++ b/docs/note/source_zh_cn/operator_list_ms.md
@@ -2,9 +2,9 @@
 `Linux` `Ascend` `GPU` `CPU` `模型开发` `初级` `中级` `高级`
 
-
+
 
 您可根据需要,选择适用于您硬件平台的算子,构建网络模型。
 
-- `mindspore.nn`模块支持的算子列表可在[mindspore.nn模块的API页面](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.nn.html)进行查阅。
-- `mindspore.ops`模块支持的算子列表可在[mindspore.ops模块的API页面](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.ops.html)进行查阅。
+- `mindspore.nn`模块支持的算子列表可在[mindspore.nn模块的API页面](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.nn.html)进行查阅。
+- `mindspore.ops`模块支持的算子列表可在[mindspore.ops模块的API页面](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.ops.html)进行查阅。
diff --git a/docs/note/source_zh_cn/operator_list_parallel.md b/docs/note/source_zh_cn/operator_list_parallel.md
index 5ada19844e05f433e635c9f85749629573e66ccd..d652b9bd7b62e865243dd6cf0f31bdfc1d1b6664 100644
--- a/docs/note/source_zh_cn/operator_list_parallel.md
+++ b/docs/note/source_zh_cn/operator_list_parallel.md
@@ -9,116 +9,119 @@
 
-
+
 
 ## 分布式算子
 
 | 操作名 | 约束 |
 |:--------------------------------------|:--------------------------------------|
-| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Abs.html) | 无 |
-| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ACos.html) | 无 |
-| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Acosh.html) | 无 |
-| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ApproximateEqual.html) | 无 |
-| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ArgMaxWithValue.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 |
-| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ArgMinWithValue.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 |
-| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Asin.html) | 无 |
-| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Asinh.html) | 无 |
-| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Assign.html) | 无 |
-| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.AssignAdd.html) | 无 |
-| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.AssignSub.html) | 无 |
-| [mindspore.ops.Atan](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Atan.html) | 无 |
-| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Atan2.html) | 无 |
-| [mindspore.ops.Atanh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Atanh.html) | 无 |
-| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BatchMatMul.html) | 不支持`transpose_a=True` |
-| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BesselI0e.html) | 无 |
-| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BesselI1e.html) | 无 |
-| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BiasAdd.html) | 无 |
-| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.BroadcastTo.html) | 无 |
-| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Cast.html) | Auto Parallel和Semi Auto Parallel模式下,配置策略不生效 |
-| [mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Ceil.html) | 无 |
-| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Concat.html) | 输入(input_x)在轴(axis)所对应的维度不能切分,切分后,在数学逻辑上和单机不等价 |
-| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Cos.html) | 无 |
-| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Cosh.html) | 无 |
-| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Div.html) | 无 |
-| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.DivNoNan.html) | 无 |
-| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.DropoutDoMask.html) | 需和`DropoutGenMask`联合使用,不支持配置切分策略 |
-| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.DropoutGenMask.html) | 需和`DropoutDoMask`联合使用 |
-| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Elu.html) | 无 |
-| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.EmbeddingLookup.html) | 同GatherV2 |
-| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Equal.html) | 无 |
-| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Erf.html) | 无 |
-| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Erfc.html) | 无 |
-| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Exp.html) | 无 |
-| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ExpandDims.html) | 无 |
-| [mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Expm1.html) | 无 |
-| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Floor.html) | 无 |
-| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FloorDiv.html) | 无 |
-| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.FloorMod.html) | 无 |
-| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GatherV2.html) | 仅支持1维和2维的input_params,并且input_params的最后一维要32字节对齐(出于性能考虑);不支持标量input_indices;参数在轴(axis)所在维度切分时,不支持重复计算;不支持input_indices和input_params同时进行切分 |
-| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Gelu.html) | 无 |
-| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Greater.html) | 无 |
-| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GreaterEqual.html) | 无 |
-| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Inv.html) | 无 |
-| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.L2Normalize.html) | 输入(input_x)在轴(axis)对应的维度不能切,切分后,在数学逻辑上和单机不等价 |
-| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Less.html) | 无 |
-| [mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LessEqual.html) | 无 |
-| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalAnd.html) | 无 |
-| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalNot.html) | 无 |
-| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogicalOr.html) | 无 |
-| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Log.html) | 无 |
-| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Log1p.html) | 无 |
-| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.LogSoftmax.html) | 输入(logits)在轴(axis)对应的维度不可切分,切分后,在数学逻辑上和单机不等价 |
-| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.MatMul.html) | 不支持`transpose_a=True` |
-| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Maximum.html) | 无 |
-| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Minimum.html) | 无 |
-| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Mod.html) | 无 |
-| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Mul.html) | 无 |
-| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Neg.html) | 无 |
-| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.NotEqual.html) | 无 |
-| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.OneHot.html) | 仅支持输入(indices)是1维的Tensor,切分策略要配置输出的切分策略,以及第1和第2个输入的切分策略 |
-| [mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.OnesLike.html) | 无 |
-| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Pack.html) | 无 |
-| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Pow.html) | 无 |
-| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.PReLU.html) | weight的shape在非[1]的情况下,输入(input_x)的Channel维要和weight的切分方式一致 |
-| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.RealDiv.html) | 无 |
-| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Reciprocal.html) | 无 |
-| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReduceMax.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 |
-| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReduceMin.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 |
-| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReduceSum.html) | 无 |
-| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReduceMean.html) | 无 |
-| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReLU.html) | 无 |
-| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReLU6.html) | 无 |
-| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ReLUV2.html) | 无 |
-| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Reshape.html) | 不支持配置切分策略,并且,在自动并行模式下,当reshape算子后接有多个算子,不允许对这些算子配置不同的切分策略 |
-| [mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Round.html) | 无 |
-| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Rsqrt.html) | 无 |
-| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sigmoid.html) | 无 |
-| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | 无 |
-| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sign.html) | 无 |
-| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sin.html) | 无 |
-| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sinh.html) | 无 |
-| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Softmax.html) | 输入(logits)在轴(axis)对应的维度不可切分,切分后,在数学逻辑上和单机不等价 |
-| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | 输入(logits、labels)的最后一维不能切分;有两个输出,正向的loss只支持取[0] |
-| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Softplus.html) | 无 |
-| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Softsign.html) | 无 |
-| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.SparseGatherV2.html) | 同GatherV2 |
-| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Split.html) | 轴(axis)所对应的维度不能切分,切分后,在数学逻辑上和单机不等价 |
-| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sqrt.html) | 无 |
-| [mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Square.html) | 无 |
-| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Squeeze.html) | 无 |
-| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.StridedSlice.html) | 仅支持值为全0的mask;需要切分的维度必须全部提取;输入在strides不为1对应的维度不支持切分 |
-| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Slice.html) | 需要切分的维度必须全部提取 |
-| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Sub.html) | 无 |
-| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Tan.html) | 无 |
-| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Tanh.html) | 无 |
-| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TensorAdd.html) | 无 |
-| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Tile.html) | 仅支持对multiples配置切分策略 |
-| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.TopK.html) | 最后一维不支持切分,切分后,在数学逻辑上和单机不等价 |
-| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Transpose.html) | 无 |
-| [mindspore.ops.Unique](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.Unique.html) | 只支持重复计算的策略(1,) |
-| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.UnsortedSegmentSum.html) | 输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致 |
-| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.UnsortedSegmentMin.html) | 输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致。注意:在segment id为空时,输出向量的对应位置会填充为输入类型的最大值。需要用户进行掩码处理,将最大值转换成0。否则容易造成数值溢出,导致通信算子上溢错误,从而引发Run Task Error |
-| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.UnsortedSegmentMax.html) | 输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致。注意:在segment id为空时,输出向量的对应位置会填充为输入类型的最小值。需要用户进行掩码处理,将最小值转换成0。否则容易造成数值溢出,导致通信算子上溢错误,从而引发Run Task Error |
-| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.ZerosLike.html) | 无 |
+| [mindspore.ops.Abs](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Abs.html) | 无 |
+| [mindspore.ops.ACos](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ACos.html) | 无 |
+| [mindspore.ops.Acosh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Acosh.html) | 无 |
+| [mindspore.ops.Add](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Add.html) | 无 |
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ApproximateEqual.html) | 无 |
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ArgMaxWithValue.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 |
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ArgMinWithValue.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 |
+| [mindspore.ops.Asin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Asin.html) | 无 |
+| [mindspore.ops.Asinh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Asinh.html) | 无 |
+| [mindspore.ops.Assign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Assign.html) | 无 |
+| [mindspore.ops.AssignAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.AssignAdd.html) | 无 |
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.AssignSub.html) | 无 |
+| [mindspore.ops.Atan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Atan.html) | 无 |
+| [mindspore.ops.Atan2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Atan2.html) | 无 |
+| [mindspore.ops.Atanh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Atanh.html) | 无 |
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BatchMatMul.html) | 不支持`transpose_a=True` |
+| [mindspore.ops.BesselI0e](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BesselI0e.html) | 无 |
+| [mindspore.ops.BesselI1e](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BesselI1e.html) | 无 |
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BiasAdd.html) | 无 |
+| [mindspore.ops.BroadcastTo](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.BroadcastTo.html) | 无 |
+| [mindspore.ops.Cast](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Cast.html) | Auto Parallel和Semi Auto Parallel模式下,配置策略不生效 |
+|
[mindspore.ops.Ceil](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Ceil.html) | 无 | +| [mindspore.ops.Concat](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Concat.html) | 输入(input_x)在轴(axis)所对应的维度不能切分,切分后,在数学逻辑上和单机不等价 | +| [mindspore.ops.Cos](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Cos.html) | 无 | +| [mindspore.ops.Cosh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Cosh.html) | 无 | +| [mindspore.ops.Div](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Div.html) | 无 | +| [mindspore.ops.DivNoNan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.DivNoNan.html) | 无 | +| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.DropoutDoMask.html) | 需和`DropoutGenMask`联合使用,不支持配置切分策略 | +| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.DropoutGenMask.html) | 需和`DropoutDoMask`联合使用 | +| [mindspore.ops.Elu](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Elu.html) | 无 | +| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.EmbeddingLookup.html) | 同GatherV2 | +| [mindspore.ops.Equal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Equal.html) | 无 | +| [mindspore.ops.Erf](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Erf.html) | 无 | +| [mindspore.ops.Erfc](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Erfc.html) | 无 | +| [mindspore.ops.Exp](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Exp.html) | 无 | +| [mindspore.ops.ExpandDims](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ExpandDims.html) | 无 | +| 
[mindspore.ops.Expm1](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Expm1.html) | 无 | +| [mindspore.ops.Floor](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Floor.html) | 无 | +| [mindspore.ops.FloorDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FloorDiv.html) | 无 | +| [mindspore.ops.FloorMod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.FloorMod.html) | 无 | +| [mindspore.ops.GatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GatherV2.html) | 仅支持1维和2维的input_params,并且input_params的最后一维要32字节对齐(出于性能考虑);不支持标量input_indices;参数在轴(axis)所在维度切分时,不支持重复计算;不支持input_indices和input_params同时进行切分 | +| [mindspore.ops.Gather](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Gather.html) | 仅支持1维和2维的input_params,并且input_params的最后一维要32字节对齐(出于性能考虑);不支持标量input_indices;参数在轴(axis)所在维度切分时,不支持重复计算;不支持input_indices和input_params同时进行切分 | +| [mindspore.ops.Gelu](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Gelu.html) | 无 | +| [mindspore.ops.GeLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GeLU.html) | 无 | +| [mindspore.ops.Greater](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Greater.html) | 无 | +| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GreaterEqual.html) | 无 | +| [mindspore.ops.Inv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Inv.html) | 无 | +| [mindspore.ops.L2Normalize](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.L2Normalize.html) | 输入(input_x)在轴(axis)对应的维度不能切,切分后,在数学逻辑上和单机不等价 | +| [mindspore.ops.Less](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Less.html) | 无 | +| 
[mindspore.ops.LessEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LessEqual.html) | 无 | +| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalAnd.html) | 无 | +| [mindspore.ops.LogicalNot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalNot.html) | 无 | +| [mindspore.ops.LogicalOr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogicalOr.html) | 无 | +| [mindspore.ops.Log](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Log.html) | 无 | +| [mindspore.ops.Log1p](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Log1p.html) | 无 | +| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.LogSoftmax.html) | 输入(logits)在轴(axis)对应的维度不可切分,切分后,在数学逻辑上和单机不等价 | +| [mindspore.ops.MatMul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.MatMul.html) | 不支持`transpose_a=True` | +| [mindspore.ops.Maximum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Maximum.html) | 无 | +| [mindspore.ops.Minimum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Minimum.html) | 无 | +| [mindspore.ops.Mod](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Mod.html) | 无 | +| [mindspore.ops.Mul](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Mul.html) | 无 | +| [mindspore.ops.Neg](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Neg.html) | 无 | +| [mindspore.ops.NotEqual](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.NotEqual.html) | 无 | +| [mindspore.ops.OneHot](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.OneHot.html) | 仅支持输入(indices)是1维的Tensor,切分策略要配置输出的切分策略,以及第1和第2个输入的切分策略 | +| 
[mindspore.ops.OnesLike](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.OnesLike.html) | 无 | +| [mindspore.ops.Pack](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Pack.html) | 无 | +| [mindspore.ops.Pow](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Pow.html) | 无 | +| [mindspore.ops.PReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.PReLU.html) | weight的shape在非[1]的情况下,输入(input_x)的Channel维要和weight的切分方式一致 | +| [mindspore.ops.RealDiv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.RealDiv.html) | 无 | +| [mindspore.ops.Reciprocal](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Reciprocal.html) | 无 | +| [mindspore.ops.ReduceMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReduceMax.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 | +| [mindspore.ops.ReduceMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReduceMin.html) | 输入在轴(axis)的维度进行切分时,分布式结果可能会和单机不一致 | +| [mindspore.ops.ReduceSum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReduceSum.html) | 无 | +| [mindspore.ops.ReduceMean](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReduceMean.html) | 无 | +| [mindspore.ops.ReLU](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReLU.html) | 无 | +| [mindspore.ops.ReLU6](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReLU6.html) | 无 | +| [mindspore.ops.ReLUV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ReLUV2.html) | 无 | +| [mindspore.ops.Reshape](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Reshape.html) | 不支持配置切分策略,并且,在自动并行模式下,当reshape算子后接有多个算子,不允许对这些算子配置不同的切分策略 | +| 
[mindspore.ops.Round](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Round.html) | 无 | +| [mindspore.ops.Rsqrt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Rsqrt.html) | 无 | +| [mindspore.ops.Sigmoid](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sigmoid.html) | 无 | +| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SigmoidCrossEntropyWithLogits.html) | 无 | +| [mindspore.ops.Sign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sign.html) | 无 | +| [mindspore.ops.Sin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sin.html) | 无 | +| [mindspore.ops.Sinh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sinh.html) | 无 | +| [mindspore.ops.Softmax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Softmax.html) | 输入(logits)在轴(axis)对应的维度不可切分,切分后,在数学逻辑上和单机不等价 | +| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SoftmaxCrossEntropyWithLogits.html) | 输入(logits、labels)的最后一维不能切分;有两个输出,正向的loss只支持取[0] | +| [mindspore.ops.Softplus](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Softplus.html) | 无 | +| [mindspore.ops.Softsign](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Softsign.html) | 无 | +| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.SparseGatherV2.html) | 同GatherV2 | +| [mindspore.ops.Split](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Split.html) | 轴(axis)所对应的维度不能切分,切分后,在数学逻辑上和单机不等价 | +| [mindspore.ops.Sqrt](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sqrt.html) | 无 | +| 
[mindspore.ops.Square](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Square.html) | 无 | +| [mindspore.ops.Squeeze](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Squeeze.html) | 无 | +| [mindspore.ops.StridedSlice](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.StridedSlice.html) | 仅支持值为全0的mask;需要切分的维度必须全部提取;输入在strides不为1对应的维度不支持切分 | +| [mindspore.ops.Slice](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Slice.html) | 需要切分的维度必须全部提取 | +| [mindspore.ops.Sub](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Sub.html) | 无 | +| [mindspore.ops.Tan](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Tan.html) | 无 | +| [mindspore.ops.Tanh](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Tanh.html) | 无 | +| [mindspore.ops.TensorAdd](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TensorAdd.html) | 无 | +| [mindspore.ops.Tile](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Tile.html) | 仅支持对multiples配置切分策略 | +| [mindspore.ops.TopK](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.TopK.html) | 最后一维不支持切分,切分后,在数学逻辑上和单机不等价 | +| [mindspore.ops.Transpose](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Transpose.html) | 无 | +| [mindspore.ops.Unique](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.Unique.html) | 只支持重复计算的策略(1,) | +| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentSum.html) | 输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致 | +| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentMin.html) | 
输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致。注意:在segment id为空时,输出向量的对应位置会填充为输入类型的最大值。需要用户进行掩码处理,将最大值转换成0。否则容易造成数值溢出,导致通信算子上溢错误,从而引发Run Task Error | +| [mindspore.ops.UnsortedSegmentMax](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.UnsortedSegmentMax.html) | 输入input_x和segment_ids的切分配置必须在segment_ids的维度上保持一致。注意:在segment id为空时,输出向量的对应位置会填充为输入类型的最小值。需要用户进行掩码处理,将最小值转换成0。否则容易造成数值溢出,导致通信算子上溢错误,从而引发Run Task Error | +| [mindspore.ops.ZerosLike](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.ZerosLike.html) | 无 | > 重复计算是指,机器没有用满,比如:集群有8张卡跑分布式训练,切分策略只对输入切成了4份。这种情况下会发生重复计算。 diff --git a/docs/note/source_zh_cn/paper_list.md b/docs/note/source_zh_cn/paper_list.md deleted file mode 100644 index 26bb3dbaf93daa91263c8e3b6ad4234ec6d6dad7..0000000000000000000000000000000000000000 --- a/docs/note/source_zh_cn/paper_list.md +++ /dev/null @@ -1,13 +0,0 @@ -# 论文列表 - -`Linux` `Windows` `Ascend` `GPU` `CPU` `全流程` `框架开发` `中级` `高级` `贡献者` - - - -| 论文标题 | 论文作者 | 领域 | 期刊/会议名称 | 论文链接 | -| ------------------------------------------------------------ | ------------------------------------------------------------ | ---------- | -------------- | ------------------------------------------------------------ | -| A Representation Learning Framework for Property Graphs | Yifan Hou, Hongzhi Chen, Changji Li, James Cheng, Ming-Chang Yang, Fan Yu | 图神经网络 | KDD | | -| Optimizing the Memory Hierarchy by Compositing Automatic Transformations on Computations and Data | JieZhao,Peng Di | 微体系结构 | MICRO2020 | | -| Masked Face Recognition with Latent Part Detection | Feifei Ding,Peixi Peng,Yangru Huang,Mengyue Geng,Yonghong Tian | 目标检测 | ACMMM | | -| Model Rubik’s Cube: Twisting Resolution, Depth and Width for TinyNets | Kai Han,Yunhe Wang,Qiulin Zhang,Wei Zhang,Chunjing XU,Tong Zhang | 网络优化 | NeurIPS 2020 | | -| SCOP: Scientific Control for Reliable Neural Network Pruning | Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, 
Chang Xu | 网络优化 | NeurIPS 2020 | | diff --git a/docs/note/source_zh_cn/posenet_lite.md b/docs/note/source_zh_cn/posenet_lite.md index cf910548b8b397e766cd95546199c45db0b17d9a..901b18f69cbfad75b86d7d457b3cd5cd6eb96bb3 100644 --- a/docs/note/source_zh_cn/posenet_lite.md +++ b/docs/note/source_zh_cn/posenet_lite.md @@ -1,6 +1,6 @@ # 骨骼检测模型支持(Lite) - + ## 骨骼检测介绍 @@ -12,4 +12,4 @@ ![image_posenet](images/posenet_detection.png) -使用MindSpore Lite实现骨骼检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/posenet)。 +使用MindSpore Lite实现骨骼检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/posenet)。 diff --git a/docs/note/source_zh_cn/roadmap.md b/docs/note/source_zh_cn/roadmap.md index 62c7657b6f592cb4632dbc4a31653fdd3643a19a..6f8a76c2cc4ac2197928eae3f193d494d6932f33 100644 --- a/docs/note/source_zh_cn/roadmap.md +++ b/docs/note/source_zh_cn/roadmap.md @@ -15,7 +15,7 @@ - + 以下将展示MindSpore近一年的高阶计划,我们会根据用户的反馈诉求,持续调整计划的优先级。 diff --git a/docs/note/source_zh_cn/scene_detection_lite.md b/docs/note/source_zh_cn/scene_detection_lite.md index 19b3d7db410944cf9a4d1e14e10ed4a5c828cf76..9acb0a21e382437a28cca9f7ca3f23654cd318d2 100644 --- a/docs/note/source_zh_cn/scene_detection_lite.md +++ b/docs/note/source_zh_cn/scene_detection_lite.md @@ -1,12 +1,12 @@ # 场景检测模型支持(Lite) - + ## 场景检测介绍 场景检测可以识别设备摄像头中场景的类型。 -使用MindSpore Lite实现场景检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/scene_detection)。 +使用MindSpore Lite实现场景检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/scene_detection)。 ## 场景检测模型列表 diff --git a/docs/note/source_zh_cn/static_graph_syntax_support.md b/docs/note/source_zh_cn/static_graph_syntax_support.md index 996674bbabc106c51b27b395ba3a985bd54abf15..59979fc4b3606403e712e31307c8435aa896db99 100644 --- a/docs/note/source_zh_cn/static_graph_syntax_support.md +++ b/docs/note/source_zh_cn/static_graph_syntax_support.md @@ -42,6 +42,7 @@ - 
[isinstance](#isinstance) - [partial](#partial) - [map](#map) + - [zip](#zip) - [range](#range) - [enumerate](#enumerate) - [super](#super) @@ -55,20 +56,20 @@ - + ## 概述 在Graph模式下,Python代码并不是由Python解释器去执行,而是将代码编译成静态计算图,然后执行静态计算图。 - 关于Graph模式和计算图,可参考文档: + 关于Graph模式和计算图,可参考文档: 当前仅支持编译`@ms_function`装饰器修饰的函数、Cell及其子类的实例。 对于函数,则编译函数定义;对于网络,则编译`construct`方法及其调用的其他方法或者函数。 - `ms_function`使用规则可参考文档: + `ms_function`使用规则可参考文档: - `Cell`定义可参考文档: + `Cell`定义可参考文档: 由于语法解析的限制,当前在编译构图时,支持的数据类型、语法以及相关操作并没有完全与Python语法保持一致,部分使用受限。 @@ -145,7 +146,7 @@ ```text y: 2 - x: ([1, 88], Tensor(shape=[3], dtype=Int64, value= [1, 2, 3]), 'ok', (1, 2, 3)) + x: ([1, 88], Tensor(shape=[3], dtype=Int64, value=[1, 2, 3]), 'ok', (1, 2, 3)) ``` #### Tuple @@ -181,7 +182,7 @@ ```text y: 3 - z: Tensor(shape=[3], dtype=Int64, value= [1, 2, 3]) + z: Tensor(shape=[3], dtype=Int64, value=[1, 2, 3]) m: (2, 3, 4), 3, 4) ``` @@ -224,7 +225,7 @@ ```text y: ("a", "b", "c") - z: (Tensor(shape=[3], dtype=Int64, value= [1, 2, 3]), Tensor(shape=[3], dtype=Int64, value= [4, 5, 6]), Tensor(shape=[3], dtype=Int64, value= [7, 8, 9])) + z: (Tensor(shape=[3], dtype=Int64, value=[1, 2, 3]), Tensor(shape=[3], dtype=Int64, value=[4, 5, 6]), Tensor(shape=[3], dtype=Int64, value=[7, 8, 9])) ``` - 支持索引取值和赋值 @@ -242,8 +243,8 @@ 结果如下: ```text - y: Tensor(shape=[3], dtype=Int64, value= [4, 5, 6]) - x: {"a": (2, 3, 4), Tensor(shape=[3], dtype=Int64, value= [4, 5, 6]), Tensor(shape=[3], dtype=Int64, value= [7, 8, 9])} + y: Tensor(shape=[3], dtype=Int64, value=[4, 5, 6]) + x: {"a": (2, 3, 4), Tensor(shape=[3], dtype=Int64, value=[4, 5, 6]), Tensor(shape=[3], dtype=Int64, value=[7, 8, 9])} ``` ### MindSpore自定义数据类型 @@ -256,7 +257,7 @@ 可以通过`@constexpr`装饰器修饰函数,在函数里生成`Tensor`。 -关于`@constexpr`的用法可参考: +关于`@constexpr`的用法可参考: 对于网络中需要用到的常量`Tensor`,可以作为网络的属性,在`init`的时候定义,即`self.x = Tensor(args...)`,然后在`construct`里使用。 @@ -278,36 +279,39 @@ def generate_tensor(): - 支持接口: - `all`:对`Tensor`通过`all`操作进行归约, 仅支持`Bool`类型的`Tensor`。 + 
`all`:对`Tensor`通过`all`操作进行归约,仅支持`Bool`类型的`Tensor`。 - `any`:对`Tensor`通过`any`操作进行归约。仅支持`Bool`类型的`Tensor`。 - - `expand_as`:将`Tensor`按照广播规则扩展成与另一个`Tensor`相同的`shape`。 + `any`:对`Tensor`通过`any`操作进行归约,仅支持`Bool`类型的`Tensor`。 `view`:将`Tensor`reshape成输入的`shape`。 + `expand_as`:将`Tensor`按照广播规则扩展成与另一个`Tensor`相同的`shape`。 + 示例如下: ```python x = Tensor(np.array([[True, False, True], [False, True, False]])) - y = Tensor(np.array([[1, 2], [3, 4], [5, 6]])) x_shape = x.shape x_dtype = x.dtype x_all = x.all() x_any = x.any() - x_as = x.expand_as(y) x_view = x.view((1, 6)) + + y = Tensor(np.ones((2, 3), np.float32)) + z = Tensor(np.ones((2, 2, 3))) + y_as_z = y.expand_as(z) ``` 结果如下: ```text x_shape: (2, 3) - x_dtype: Int64 - x_all: Tensor(shape=[], dtype=Bool, value= False) - x_any: Tensor(shape=[], dtype=Bool, value= True) - x_as: Tensor(shape=[2, 3], dtype=Bool, value= [[True, False], [True, False], [True, False]]) - x_view: Tensor(shape=[1, 6], dtype=Bool, value= [[True, False, True, False, True, False]]) + x_dtype: Bool + x_all: Tensor(shape=[], dtype=Bool, value=False) + x_any: Tensor(shape=[], dtype=Bool, value=True) + x_view: Tensor(shape=[1, 6], dtype=Bool, value=[[True, False, True, False, True, False]]) + + y_as_z: Tensor(shape=[2, 2, 3], dtype=Float32, value=[[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]], [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]]) ``` - 索引取值 @@ -335,8 +339,8 @@ def generate_tensor(): 结果如下: ```text - data_single: Tensor(shape=[3, 2], dtype=Int64, value= [[0, 1], [2, 3], [4, 5]]) - data_multi: Tensor(shape=[2], dtype=Int64, value= [2, 3]) + data_single: Tensor(shape=[3, 2], dtype=Int64, value=[[0, 1], [2, 3], [4, 5]]) + data_multi: Tensor(shape=[2], dtype=Int64, value=[2, 3]) ``` - `True`索引取值 @@ -360,8 +364,8 @@ def generate_tensor(): 结果如下: ```text - data_single: Tensor(shape=[1, 2, 3], dtype=Int64, value= [[[0, 1, 2], [3, 4, 5]]]) - data_multi: Tensor(shape=[1, 1, 2, 3], dtype=Int64, value= [[[[0, 1, 2], [3, 4, 5]]]]) + data_single: Tensor(shape=[1, 2, 3], dtype=Int64, 
value=[[[0, 1, 2], [3, 4, 5]]]) + data_multi: Tensor(shape=[1, 1, 2, 3], dtype=Int64, value=[[[[0, 1, 2], [3, 4, 5]]]]) ``` - `None`索引取值 @@ -387,8 +391,8 @@ def generate_tensor(): 结果如下: ```text - data_single: Tensor(shape=[2, 3], dtype=Int64, value= [[0, 1, 2], [3, 4, 5]]) - data_multi: Tensor(shape=[2, 3], dtype=Int64, value= [[0, 1, 2], [3, 4, 5]]) + data_single: Tensor(shape=[2, 3], dtype=Int64, value=[[0, 1, 2], [3, 4, 5]]) + data_multi: Tensor(shape=[2, 3], dtype=Int64, value=[[0, 1, 2], [3, 4, 5]]) ``` - `slice`索引取值 @@ -416,8 +420,8 @@ def generate_tensor(): 结果如下: ```text - data_single: Tensor(shape=[2, 2, 2], dtype=Int64, value= [[[4, 5], [6, 7]], [[12, 13], [14, 15]]]) - data_multi: Tensor(shape=[1, 2, 2], dtype=Int64, value= [[[12, 13], [14, 15]]]) + data_single: Tensor(shape=[2, 2, 2], dtype=Int64, value=[[[4, 5], [6, 7]], [[12, 13], [14, 15]]]) + data_multi: Tensor(shape=[1, 2, 2], dtype=Int64, value=[[[12, 13], [14, 15]]]) ``` - `Tensor`索引取值 @@ -437,7 +441,7 @@ def generate_tensor(): 示例如下: ```python - tensor_x = Tensor(np.arange(4 * 2 * 3).reshape((4, 2, 2))) + tensor_x = Tensor(np.arange(4 * 2 * 3).reshape((4, 2, 3))) tensor_index0 = Tensor(np.array([[1, 2], [0, 3]]), mstype.int32) tensor_index1 = Tensor(np.array([[0, 0]]), mstype.int32) data_single = tensor_x[tensor_index0] @@ -447,8 +451,8 @@ def generate_tensor(): 结果如下: ```text - data_single: Tensor(shape=[2, 2, 2, 3], dtype=Int64, value= [[[[4, 5], [6, 7]], [[8, 9], [10, 11]]], [[[0, 1], [2, 3]], [[12, 13], [14, 15]]]]) - data_multi: Tensor(shape=[1, 2, 2, 2, 3], dtype=Int64, value= [[[[[4, 5], [6, 7]], [[8, 9], [10, 11]]], [[[4, 5], [6, 7]], [[8, 9], [10, 11]]]]]) + data_single: Tensor(shape=[2, 2, 2, 3], dtype=Int64, value=[[[[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]]], [[[0, 1, 2], [3, 4, 5]], [[18, 19, 20], [21, 22, 23]]]]) + data_multi: Tensor(shape=[1, 2, 2, 2, 3], dtype=Int64, value=[[[[[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]]], [[[6, 7, 8], [9, 10, 11]], [[12, 13, 14], [15, 16, 17]]]]]) ``` - `Tuple`索引取值 @@ -478,7 +482,7 @@ def 
generate_tensor(): 结果如下: ```text - data: Tensor(shape=[2, 3, 1], dtype=Int64, value= [[[13], [14], [13]], [[12], [15], [14]]]) + data: Tensor(shape=[2, 3, 1], dtype=Int64, value=[[[13], [14], [13]], [[12], [15], [14]]]) ``` - 索引赋值 @@ -504,20 +508,20 @@ def generate_tensor(): 示例如下: ```python - tensor_x = Tensor(np.arange(2 * 3).reshape((2, 3))) - tensor_y = Tensor(np.arange(2 * 3).reshape((2, 3))) - tensor_z = Tensor(np.arange(2 * 3).reshape((2, 3))) - tensor_x[1] = 88 - tensor_y[1][1] = 88 - tensor_y[1]= Tensor(np.array([66, 88, 99])) + tensor_x = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32)) + tensor_y = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32)) + tensor_z = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32)) + tensor_x[1] = 88.0 + tensor_y[1][1] = 88.0 + tensor_z[1]= Tensor(np.array([66, 88, 99]).astype(np.float32)) ``` 结果如下: ```text - tensor_x: Tensor(shape=[2, 3], dtype=Int64, value= [[0, 1, 2], [88, 88, 88]]) - tensor_y: Tensor(shape=[2, 3], dtype=Int64, value= [[0, 1, 2], [3, 88, 5]]) - tensor_z: Tensor(shape=[2, 3], dtype=Int64, value= [[0, 1, 2], [66, 88, 99]]) + tensor_x: Tensor(shape=[2, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [88.0, 88.0, 88.0]]) + tensor_y: Tensor(shape=[2, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [3.0, 88.0, 5.0]]) + tensor_z: Tensor(shape=[2, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [66.0, 88.0, 99.0]]) ``` - `ellipsis`索引赋值 @@ -528,25 +532,26 @@ def generate_tensor(): 当所赋值为`Number`时,可以理解为将所有元素都更新为`Number`。 - 当所赋值为`Tensor`时,`Tensor`的`shape`必须等于或者可广播为原`Tensor`的`shape`,在保持二者`shape`一致后,将赋值`Tensor`元素更新到原`Tensor`对应位置。 + 当所赋值为`Tensor`时,`Tensor`里元素个数必须为1或者等于原`Tensor`里元素个数,元素为1时进行广播,个数相等`shape`不一致时进行`reshape`, + 在保证二者`shape`一致后,将赋值`Tensor`元素按照位置逐一更新到原`Tensor`里。 例如,对`shape = (2, 3, 4)`的`Tensor`,通过`...`索引赋值为100,更新后的`Tensor`shape仍为`(2, 3, 4)`,所有元素都变为100。 示例如下: ```python - tensor_x = Tensor(np.arange(2 * 3).reshape((2, 3))) - tensor_y = Tensor(np.arange(2 * 3).reshape((2, 3))) - tensor_z = 
Tensor(np.arange(2 * 3).reshape((2, 3))) - tensor_x[...] = 88 - tensor_y[...]= Tensor(np.array([22, 44, 55])) + tensor_x = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32)) + tensor_y = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32)) + tensor_z = Tensor(np.arange(2 * 3).reshape((2, 3)).astype(np.float32)) + tensor_x[...] = 88.0 + tensor_y[...] = Tensor(np.array([22, 44, 55, 22, 44, 55]).astype(np.float32)) ``` 结果如下: ```text - tensor_x: Tensor(shape=[2, 3], dtype=Int64, value= [[88, 88, 88], [88, 88, 88]]) - tensor_y: Tensor(shape=[2, 3], dtype=Int64, value= [[22, 44, 55], [22, 44, 55]]) + tensor_x: Tensor(shape=[2, 3], dtype=Float32, value=[[88.0, 88.0, 88.0], [88.0, 88.0, 88.0]]) + tensor_y: Tensor(shape=[2, 3], dtype=Float32, value=[[22.0, 44.0, 55.0], [22.0, 44.0, 55.0]]) ``` - `slice`索引赋值 @@ -557,32 +562,33 @@ def generate_tensor(): 当所赋值为`Number`时,可以理解为将`slice`索引取到位置元素都更新为`Number`。 - 当所赋值为`Tensor`时,`Tensor`的`shape`必须等于或者可广播为`slice`索引取到结果的`shape`,在保持二者`shape`一致后,然后将赋值`Tensor`元素更新到索引取出结果对应元素的原`Tensor`位置。 + 当所赋值为`Tensor`时,`Tensor`里元素个数必须为1或者等于`slice`索引取到`Tensor`里元素个数,元素为1时进行广播,个数相等`shape`不一致时进行`reshape`, + 在保证二者`shape`一致后,将赋值`Tensor`元素按照位置逐一更新到原`Tensor`里。 例如,对`shape = (2, 3, 4)`的`Tensor`,通过`0:1:1`索引赋值为100,更新后的`Tensor`shape仍为`(2, 3, 4)`,但第0维位置为0的所有元素,值都更新为100。 示例如下: ```python - tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_z = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_x[0:1] = 88 - tensor_y[0:2][0:2] = 88 - tensor_z[0:2] = Tensor(np.array([11, 12, 13])) + tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32)) + tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32)) + tensor_z = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32)) + tensor_x[0:1] = 88.0 + tensor_y[0:2][0:2] = 88.0 + tensor_z[0:2] = Tensor(np.array([11, 12, 13, 11, 12, 13]).astype(np.float32)) ``` 结果如下: ```text - tensor_x: Tensor(shape=[3, 3], dtype=Int64, value= 
[[88, 88, 88], [3, 4, 5], [6, 7, 8]]) - tensor_y: Tensor(shape=[3, 3], dtype=Int64, value= [[88, 88, 88], [88, 88, 88], [6, 7, 8]]) - tensor_z: Tensor(shape=[3, 3], dtype=Int64, value= [[11, 12, 13], [11, 12, 13], [6, 7, 8]]) + tensor_x: Tensor(shape=[3, 3], dtype=Float32, value=[[88.0, 88.0, 88.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]) + tensor_y: Tensor(shape=[3, 3], dtype=Float32, value=[[88.0, 88.0, 88.0], [88.0, 88.0, 88.0], [6.0, 7.0, 8.0]]) + tensor_z: Tensor(shape=[3, 3], dtype=Float32, value=[[11.0, 12.0, 13.0], [11.0, 12.0, 13.0], [6.0, 7.0, 8.0]]) ``` - `Tensor`索引赋值 - 支持单层和多层`Tensor`索引赋值,单层`Tensor`索引赋值:`tensor_x[tensor_index] = u`,多层`Tensor`索引赋值:`tensor_x[tensor_index0][tensor_index1]... = u`。 + 仅支持单层`Tensor`索引赋值,即`tensor_x[tensor_index] = u`。 索引`Tensor`支持`int32`和`bool`类型。 @@ -598,34 +604,31 @@ def generate_tensor(): 当全是`Tensor`的时候,这些`Tensor`在`axis=0`轴上打包之后成为一个新的赋值`Tensor`,这时按照所赋值为`Tensor`的规则进行赋值。 - 例如,对一个`shape`为`(6, 4, 5)`、`dtype`为`int64`的tensor通过`shape`为`(2, 3)`的tensor进行索引赋值,如果所赋值为`Number`,则`Number`必须是`int`; - 如果所赋值为`Tuple`,则`tuple`里的元素都得是`int`,且个数为5;如果所赋值为`Tensor`,则`Tensor`的`dtype`必须为`int64`,且`shape`可广播为`(2, 3, 4, 5)`。 + 例如,对一个`shape`为`(6, 4, 5)`、`dtype`为`float32`的tensor通过`shape`为`(2, 3)`的tensor进行索引赋值,如果所赋值为`Number`,则`Number`必须是`float`; + 如果所赋值为`Tuple`,则`tuple`里的元素都得是`float`,且个数为5;如果所赋值为`Tensor`,则`Tensor`的`dtype`必须为`float32`,且`shape`可广播为`(2, 3, 4, 5)`。 示例如下: ```python - tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_z = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_index = Tensor(np.array([[2, 0, 2], [0, 2, 0], [0, 2, 0]])) - tensor_x[tensor_index] = 88 - tensor_y[tensor_index][tensor_index] = 88 - tensor_z[tensor_index] = Tensor(np.array([11, 12, 13])) + tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32)) + tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32)) + tensor_index = Tensor(np.array([[2, 0, 2], [0, 2, 0], [0, 2, 0]], np.int32)) + 
tensor_x[tensor_index] = 88.0 + tensor_y[tensor_index] = Tensor(np.array([11.0, 12.0, 13.0]).astype(np.float32)) ``` 结果如下: ```text - tensor_x: Tensor(shape=[3, 3], dtype=Int64, value= [[88, 88, 88], [6, 7, 8], [88, 88, 88]]) - tensor_y: Tensor(shape=[3, 3], dtype=Int64, value= [[0, 1, 2], [3, 4, 5], [6, 7, 8]]) - tensor_z: Tensor(shape=[3, 3], dtype=Int64, value= [[11, 12, 13], [6, 7, 8], [11, 12, 13]]) + tensor_x: Tensor(shape=[3, 3], dtype=Float32, value=[[88.0, 88.0, 88.0], [3.0, 4.0, 5.0], [88.0, 88.0, 88.0]]) + tensor_y: Tensor(shape=[3, 3], dtype=Float32, value=[[11.0, 12.0, 13.0], [3.0, 4.0, 5.0], [11.0, 12.0, 13.0]]) ``` - `Tuple`索引赋值 支持单层和多层`Tuple`索引赋值,单层`Tuple`索引赋值:`tensor_x[tuple_index] = u`,多层`Tuple`索引赋值:`tensor_x[tuple_index0][tuple_index1]... = u`。 - `Tuple`索引赋值和`Tuple`索引取值对索引的支持一致。 + `Tuple`索引赋值和`Tuple`索引取值对索引的支持一致, 但多层`Tuple`索引赋值不支持`Tuple`里包含`Tensor`。 所赋值支持`Number`、`Tuple`和`Tensor`,`Number`、`Tuple`和`Tensor`里的值必须与原`Tensor`数据类型一致。 @@ -642,21 +645,21 @@ def generate_tensor(): 示例如下: ```python - tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3))) - tensor_index = Tensor(np.array([[0, 1], [1, 0]])) - tensor_x[1, 1:3] = 88 - tensor_y[1:3, tensor_index] = 88 - tensor_z[1:3, tensor_index] = Tensor(np.array([11, 12])) + tensor_x = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32)) + tensor_y = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32)) + tensor_z = Tensor(np.arange(3 * 3).reshape((3, 3)).astype(np.float32)) + tensor_index = Tensor(np.array([[0, 1], [1, 0]]).astype(np.int32)) + tensor_x[1, 1:3] = 88.0 + tensor_y[1:3, tensor_index] = 88.0 + tensor_z[1:3, tensor_index] = Tensor(np.array([11, 12]).astype(np.float32)) ``` 结果如下: - ```python - tensor_x: Tensor(shape=[3, 3], dtype=Int64, value= [[0, 1, 2], [3, 88, 88], [6, 7, 8]]) - tensor_y: Tensor(shape=[3, 3], dtype=Int64, value= [[0, 1, 2], [88, 88, 5], [88, 88, 8]]) - tensor_z: 
Tensor(shape=[3, 3], dtype=Int64, value= [[0, 1, 2], [12, 11, 5], [12, 11, 8]]) + ```text + tensor_x: Tensor(shape=[3, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [3.0, 88.0, 88.0], [6.0, 7.0, 8.0]]) + tensor_y: Tensor(shape=[3, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [88.0, 88.0, 5.0], [88.0, 88.0, 8.0]]) + tensor_z: Tensor(shape=[3, 3], dtype=Float32, value=[[0.0, 1.0, 2.0], [12.0, 11.0, 5.0], [12.0, 11.0, 8.0]]) ``` #### Primitive @@ -667,9 +670,9 @@ def generate_tensor(): 当前不支持在网络调用`Primitive`及其子类相关属性和接口。 -`Primitive`定义可参考文档: +`Primitive`定义可参考文档: -当前已定义的`Primitive`可参考文档: +当前已定义的`Primitive`可参考文档: #### Cell @@ -679,9 +682,9 @@ def generate_tensor(): 当前不支持在网络调用`Cell`及其子类相关属性和接口,除非是在`Cell`自己的`contrcut`中通过`self`调用。 -`Cell`定义可参考文档: +`Cell`定义可参考文档: -当前已定义的`Cell`可参考文档: +当前已定义的`Cell`可参考文档: ## 运算符 @@ -689,7 +692,7 @@ def generate_tensor(): 之所以支持,是因为这些运算符会转换成同名算子进行运算,这些算子支持了隐式类型转换。 -规则可参考文档: +规则可参考文档: ### 算术运算符 @@ -802,7 +805,7 @@ return z 结果如下: ```text -z: Tensor(shape=[2, 3], dtype=Int64, value= [[7, 7], [7, 7], [7, 7]]) +z: Tensor(shape=[2, 3], dtype=Int64, value=[[7, 7], [7, 7], [7, 7]]) ``` 参数:`sequence` -- 遍历序列(`Tuple`、`List`) @@ -965,7 +968,7 @@ z_len: 6 #### isinstance -功能:判断对象是否为类的实例。 +功能:判断对象是否为类的实例。区别于算子`Isinstance`,该算子的第二个入参是MindSpore的`dtype`模块下定义的类型。 调用:`isinstance(obj, type)` @@ -983,16 +986,16 @@ z_len: 6 x = (2, 3, 4) y = [2, 3, 4] z = Tensor(np.ones((6, 4, 5))) -x_is_list = isinstance(x, mstype.list_) -y_is_tuple= isinstance(y, mstype.tuple_) +x_is_tuple = isinstance(x, mstype.tuple_) +y_is_list= isinstance(y, mstype.list_) z_is_tensor = isinstance(z, mstype.tensor) ``` 结果如下: ```text -x_is_list: True -y_is_tuple: True +x_is_tuple: True +y_is_list: True z_is_tensor: True ``` @@ -1082,7 +1085,7 @@ ret = zip(elements_a, elements_b) 结果如下: ```text -ret: (1, 4), (2, 5), (3, 6)) +ret: ((1, 4), (2, 5), (3, 6)) ``` #### range @@ -1154,7 +1157,7 @@ n = enumerate(y) ```text m: ((3, 100), (4, 200), (5, 300), (5, 400)) -n: ((0, Tensor(shape=[2], dtype=Int64, value= 
[1, 2])), (0, Tensor(shape=[2], dtype=Int64, value= [3, 4])), (0, Tensor(shape=[2], dtype=Int64, value= [5, 6]))) +n: ((0, Tensor(shape=[2], dtype=Int64, value=[1, 2])), (1, Tensor(shape=[2], dtype=Int64, value=[3, 4])), (2, Tensor(shape=[2], dtype=Int64, value=[5, 6]))) ``` #### super @@ -1225,7 +1228,7 @@ ret = pow(x, y) 结果如下: ```text -ret: Tensor(shape=[3], dtype=Int64, value= [1, 4, 9])) +ret: Tensor(shape=[3], dtype=Int64, value=[1, 4, 27])) ``` #### print @@ -1248,7 +1251,7 @@ print("result", x) 结果如下: ```text -result Tensor(shape=[3], dtype=Int64, value= [1, 2, 3])) +result Tensor(shape=[3], dtype=Int64, value=[1, 2, 3])) ``` ### 函数参数 @@ -1265,20 +1268,20 @@ result Tensor(shape=[3], dtype=Int64, value= [1, 2, 3])) ### 整网实例类型 -- 带[@ms_function](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.html#mindspore.ms_function)装饰器的普通Python函数。 +- 带[@ms_function](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.html#mindspore.ms_function)装饰器的普通Python函数。 -- 继承自[nn.Cell](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Cell.html)的Cell子类。 +- 继承自[nn.Cell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Cell.html)的Cell子类。 ### 网络构造组件 | 类别 | 内容 | :----------- |:-------- -| `Cell`实例 |[mindspore/nn/*](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.nn.html)、自定义[Cell](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Cell.html)。 +| `Cell`实例 |[mindspore/nn/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.nn.html)、自定义[Cell](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Cell.html)。 | `Cell`实例的成员函数 | Cell的construct中可以调用其他类成员函数。 | `dataclass`实例 | 使用@dataclass装饰的类。 -| `Primitive`算子 |[mindspore/ops/operations/*](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.ops.html) -| `Composite`算子 
|[mindspore/ops/composite/*](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.ops.html) -| `constexpr`生成算子 |使用[@constexpr](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.constexpr.html)生成的值计算算子。 +| `Primitive`算子 |[mindspore/ops/operations/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.ops.html) +| `Composite`算子 |[mindspore/ops/composite/*](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.ops.html) +| `constexpr`生成算子 |使用[@constexpr](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.constexpr.html)生成的值计算算子。 | 函数 | 自定义Python函数、前文中列举的系统函数。 ### 网络使用约束 diff --git a/docs/note/source_zh_cn/style_transfer_lite.md b/docs/note/source_zh_cn/style_transfer_lite.md index bad44095535a9f1a9707e16cd0f6e5392fda10b9..ff9c14728200c720c50da3adfc21a9ff567d3cc8 100644 --- a/docs/note/source_zh_cn/style_transfer_lite.md +++ b/docs/note/source_zh_cn/style_transfer_lite.md @@ -1,6 +1,6 @@ # 风格迁移模型支持(Lite) - + ## 风格迁移介绍 @@ -14,4 +14,4 @@ ![image_after_transfer](images/after_transfer.png) -使用MindSpore Lite实现风格迁移的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/style_transfer)。 +使用MindSpore Lite实现风格迁移的[示例代码](https://gitee.com/mindspore/mindspore/tree/r1.1/model_zoo/official/lite/style_transfer)。 diff --git a/docs/programming_guide/source_en/api_structure.md b/docs/programming_guide/source_en/api_structure.md index e7cc4f2649e29e89ad7abc537446d728b993b7e0..5486017da068c3d905e63d36fc9af1d4eaac897b 100644 --- a/docs/programming_guide/source_en/api_structure.md +++ b/docs/programming_guide/source_en/api_structure.md @@ -9,19 +9,19 @@ - + ## Overall Architecture MindSpore is a deep learning framework in all scenarios, aiming to achieve easy development, efficient execution, and all-scenario coverage. Easy development features include API friendliness and low debugging difficulty. 
Efficient execution includes computing efficiency, data preprocessing efficiency, and distributed training efficiency. All-scenario coverage means that the framework supports cloud, edge, and device scenarios. -The overall architecture of MindSpore consists of the Mind Expression (ME), Graph Engine (GE), and backend runtime. ME provides user-level APIs for scientific computing, building and training neural networks, and converting Python code of users into graphs. GE is a manager of operators and hardware resources, and is responsible for controlling execution of graphs received from ME. Backend runtime includes efficient running environments, such as the CPU, GPU, Ascend AI processors, and Android/iOS, on the cloud, edge, and device. For more information about the overall architecture, see [Overall Architecture](https://www.mindspore.cn/doc/note/en/master/design/mindspore/architecture.html). +The overall architecture of MindSpore consists of the Mind Expression (ME), Graph Engine (GE), and backend runtime. ME provides user-level APIs for scientific computing, building and training neural networks, and converting Python code of users into graphs. GE is a manager of operators and hardware resources, and is responsible for controlling execution of graphs received from ME. Backend runtime includes efficient running environments, such as the CPU, GPU, Ascend AI processors, and Android/iOS, on the cloud, edge, and device. For more information about the overall architecture, see [Overall Architecture](https://www.mindspore.cn/doc/note/en/r1.1/design/mindspore/architecture.html). ## Design Concept MindSpore originates from the best practices of the entire industry and provides unified model training, inference, and export APIs for data scientists and algorithm engineers. It supports flexible deployment in different scenarios such as the device, edge, and cloud, and promotes the prosperity of domains such as deep learning and scientific computing. 
-MindSpore provides the Python programming paradigm. Users can use the native control logic of Python to build complex neural network models, simplifying AI programming. For details, see [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html). +MindSpore provides the Python programming paradigm. Users can use the native control logic of Python to build complex neural network models, simplifying AI programming. For details, see [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html). Currently, there are two execution modes of a mainstream deep learning framework: a static graph mode and a dynamic graph mode. The static graph mode has a relatively high training performance, but is difficult to debug. On the contrary, the dynamic graph mode is easy to debug, but is difficult to execute efficiently. MindSpore provides an encoding mode that unifies dynamic and static graphs, which greatly improves the compatibility between static and dynamic graphs. Instead of developing multiple sets of code, users can switch between the dynamic and static graph modes by changing only one line of code. For example, set `context.set_context(mode=context.PYNATIVE_MODE)` to switch to the dynamic graph mode, or set `context.set_context(mode=context.GRAPH_MODE)` to switch to the static graph mode, which facilitates development and debugging, and improves performance experience. @@ -56,11 +56,11 @@ In the first step, a function (computational graph) is defined. In the second st In addition, the SCT can convert Python code into an intermediate representation (IR) of a MindSpore function. The IR constructs a computational graph that can be parsed and executed on different devices. 
Before the computational graph is executed, a plurality of software and hardware collaborative optimization technologies are used, and performance and efficiency in different scenarios such as device, edge, and cloud, are improved. -Improving the data processing capability to match the computing power of AI chips is the key to ensure the ultimate performance of AI chips. MindSpore provides multiple data processing operators and uses automatic data acceleration technology to implement high-performance pipelines, including data loading, data demonstration, and data conversion. It supports data processing capabilities in all scenarios, such as CV, NLP, and GNN. MindRecord is a self-developed data format of MindSpore. It features efficient read and write and easy distributed processing. Users can convert non-standard and common datasets to the MindRecord format to obtain better performance experience. For details about the conversion, see [MindSpore Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/en/master/dataset_conversion.html). MindSpore supports the loading of common datasets and datasets in multiple data storage formats. For example, users can use `dataset=dataset.Cifar10Dataset("Cifar10Data/")` to load the CIFAR-10 dataset. `Cifar10Data/` indicates the local directory of the dataset, and users can also use `GeneratorDataset` to customize the dataset loading mode. Data augmentation is a method of generating new data based on (limited) data, which can reduce the overfitting phenomenon of network model and improve the generalization ability of the model. In addition to user-defined data augmentation, MindSpore provides automatic data augmentation, making data augmentation more flexible. For details, see [Automatic Data Augmentation](https://www.mindspore.cn/doc/programming_guide/en/master/auto_augmentation.html). 
+Improving the data processing capability to match the computing power of AI chips is the key to ensure the ultimate performance of AI chips. MindSpore provides multiple data processing operators and uses automatic data acceleration technology to implement high-performance pipelines, including data loading, data demonstration, and data conversion. It supports data processing capabilities in all scenarios, such as CV, NLP, and GNN. MindRecord is a self-developed data format of MindSpore. It features efficient read and write and easy distributed processing. Users can convert non-standard and common datasets to the MindRecord format to obtain better performance experience. For details about the conversion, see [MindSpore Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dataset_conversion.html). MindSpore supports the loading of common datasets and datasets in multiple data storage formats. For example, users can use `dataset=dataset.Cifar10Dataset("Cifar10Data/")` to load the CIFAR-10 dataset. `Cifar10Data/` indicates the local directory of the dataset, and users can also use `GeneratorDataset` to customize the dataset loading mode. Data augmentation is a method of generating new data based on (limited) data, which can reduce the overfitting phenomenon of network model and improve the generalization ability of the model. In addition to user-defined data augmentation, MindSpore provides automatic data augmentation, making data augmentation more flexible. For details, see [Automatic Data Augmentation](https://www.mindspore.cn/doc/programming_guide/en/r1.1/auto_augmentation.html). -The deep learning neural network model usually contains many hidden layers for feature extraction. However, the feature extraction is random and the debugging process is invisible, which limits the trustworthiness and optimization of the deep learning technology. 
MindSpore supports visualized debugging and optimization (MindInsight) and provides functions such as training dashboard, lineage, performance analysis, and debugger to help users detect deviations during model training and easily debug and optimize models. For example, before initializing the network, users can use `profiler=Profiler()` to initialize the `Profiler` object, automatically collect information such as the operator time consumption during training, and record the information in a file. After the training is complete, call `profiler.analyse()` to stop collecting data and generate performance analysis results. Users can view and analyze the visualized results to more efficiently debug network performance. For details about debugging and optimization, see [Training Process Visualization](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/visualization_tutorials.html). +The deep learning neural network model usually contains many hidden layers for feature extraction. However, the feature extraction is random and the debugging process is invisible, which limits the trustworthiness and optimization of the deep learning technology. MindSpore supports visualized debugging and optimization (MindInsight) and provides functions such as training dashboard, lineage, performance analysis, and debugger to help users detect deviations during model training and easily debug and optimize models. For example, before initializing the network, users can use `profiler=Profiler()` to initialize the `Profiler` object, automatically collect information such as the operator time consumption during training, and record the information in a file. After the training is complete, call `profiler.analyse()` to stop collecting data and generate performance analysis results. Users can view and analyze the visualized results to more efficiently debug network performance. 
For details about debugging and optimization, see [Training Process Visualization](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/visualization_tutorials.html). -As a scale of neural network models and datasets continuously increases, parallel distributed training becomes a common practice of neural network training. However, policy selection and compilation of parallel distributed training are very complex, which severely restricts training efficiency of a deep learning model and hinders development of deep learning. MindSpore unifies the encoding methods of standalone and distributed training. Developers do not need to write complex distributed policies. Instead, they can implement distributed training by adding a small amount of codes to the standalone code. For example, after `context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)` is set, a cost model can be automatically established, and a better parallel mode can be selected for users. This improves the training efficiency of neural networks, greatly decreases the AI development difficulty, and enables users to quickly implement model. For more information, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +As a scale of neural network models and datasets continuously increases, parallel distributed training becomes a common practice of neural network training. However, policy selection and compilation of parallel distributed training are very complex, which severely restricts training efficiency of a deep learning model and hinders development of deep learning. MindSpore unifies the encoding methods of standalone and distributed training. Developers do not need to write complex distributed policies. Instead, they can implement distributed training by adding a small amount of codes to the standalone code. 
For example, after `context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)` is set, a cost model can be automatically established, and a better parallel mode can be selected for users. This improves the training efficiency of neural networks, greatly decreases the AI development difficulty, and enables users to quickly implement model. For more information, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html). ## Level Structure diff --git a/docs/programming_guide/source_en/augmentation.md b/docs/programming_guide/source_en/augmentation.md index 93d3d00940d082d27f48f78ec2710105af94c027..e9c569744a1efeaf68df2baa378cc86607337206 100644 --- a/docs/programming_guide/source_en/augmentation.md +++ b/docs/programming_guide/source_en/augmentation.md @@ -16,7 +16,7 @@ - + ## Overview @@ -29,7 +29,7 @@ MindSpore provides the `c_transforms` and `py_transforms` modules for data augme | c_transforms | Implemented based on C++. | This module provides high performance. | | py_transforms | Implemented based on Python PIL | This module provides multiple image augmentation methods and can convert PIL images to NumPy arrays. | -The following table lists the common data augmentation operators supported by MindSpore. For details about more data augmentation operators, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.vision.html). +The following table lists the common data augmentation operators supported by MindSpore. For details about more data augmentation operators, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.vision.html). 
| Module | Operator | Description | | ---- | ---- | ---- | diff --git a/docs/programming_guide/source_en/auto_augmentation.md b/docs/programming_guide/source_en/auto_augmentation.md index d2ce902d1d7ff5e90649fedda3d2874243a39963..6bcf28bb835c1ed7668fc8a7bc7454c6b899a074 100644 --- a/docs/programming_guide/source_en/auto_augmentation.md +++ b/docs/programming_guide/source_en/auto_augmentation.md @@ -12,7 +12,7 @@ - + ## Overview @@ -24,7 +24,7 @@ Auto augmentation can be implemented based on probability or callback parameters MindSpore provides a series of probability-based auto augmentation APIs. You can randomly select and combine various data augmentation operations to make data augmentation more flexible. -For details about APIs, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.transforms.html). +For details about APIs, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.transforms.html). ### RandomApply diff --git a/docs/programming_guide/source_en/auto_parallel.md b/docs/programming_guide/source_en/auto_parallel.md index fa4001c4a70e515482cc1b033f8f088af52968bc..4a40cbaee4cbe05ccdc83e0a9b59ab9f2b0c5668 100644 --- a/docs/programming_guide/source_en/auto_parallel.md +++ b/docs/programming_guide/source_en/auto_parallel.md @@ -33,7 +33,7 @@ - + ## Overview @@ -101,7 +101,7 @@ context.get_auto_parallel_context("gradients_mean") - `semi_auto_parallel`: semi-automatic parallel mode. In this mode, you can use the `shard` method to configure a segmentation policy for an operator. If no policy is configured, the data parallel policy is used by default. - `auto_parallel`: automatic parallel mode. In this mode, the framework automatically creates a cost model and selects the optimal segmentation policy for users. 
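The `auto_parallel` bullet above says the framework builds a cost model and selects the optimal strategy for the user. That selection step can be sketched in plain Python; this is a toy illustration only, not MindSpore's implementation, and the strategy names and cost numbers are hypothetical.

```python
# Toy sketch of cost-model-based strategy selection, as described for
# `auto_parallel`: model the training time of each candidate parallel
# strategy (computation plus communication) and choose the cheapest.
# All names and numbers below are illustrative.
candidate_strategies = {
    "data_parallel":  {"compute": 4.0, "communication": 1.0},
    "model_parallel": {"compute": 3.0, "communication": 2.5},
    "hybrid":         {"compute": 3.2, "communication": 1.5},
}

def modeled_cost(costs):
    # A real cost model also weighs memory overhead; a plain sum of
    # compute and communication time is enough to show the idea.
    return costs["compute"] + costs["communication"]

best = min(candidate_strategies, key=lambda name: modeled_cost(candidate_strategies[name]))
print(best)  # -> hybrid (4.7 beats 5.0 and 5.5)
```

The same trade-off drives the real cost model: a strategy with the lowest compute time alone is not chosen if its communication overhead outweighs the saving.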
-The complete examples of `auto_parallel` and `data_parallel` are provided in [Distributed Training](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html).
+The complete examples of `auto_parallel` and `data_parallel` are provided in [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html).

The following is a code example:

@@ -150,7 +150,7 @@ context.get_auto_parallel_context("enable_parallel_optimizer")

#### parameter_broadcast

-Parameter broadcast shares the value of data parallel weights among devices, in the purpose of synchronization of weights.
+Parameter broadcast shares the value of data parallel weights among devices for the purpose of weight synchronization. The default value is False, and only graph mode is supported.

The following is a code example:

@@ -298,7 +298,7 @@ rank_id = get_rank()

### cross_batch

-In specific scenarios, the calculation logic of `data_parallel` is different from that of `stand_alone`. The calculation logic of `auto_parallel` is the same as that of `stand_alone` in any scenario. The convergence effect of `data_parallel` may be better. Therefore, MindSpore provides the `cross_barch` parameter to ensure that the calculation logic of `auto_parallel` is consistent with that of `data_parallel`. You can use the `add_prim_attr` method to configure the logic. The default value is False.
+In specific scenarios, the calculation logic of `data_parallel` is different from that of `stand_alone`. The calculation logic of `auto_parallel` is the same as that of `stand_alone` in any scenario. The convergence effect of `data_parallel` may be better. Therefore, MindSpore provides the `cross_batch` parameter to ensure that the calculation logic of `auto_parallel` is consistent with that of `data_parallel`. You can use the `add_prim_attr` method to configure the logic. The default value is False.
The following is a code example: @@ -338,10 +338,10 @@ x = Parameter(Tensor(np.ones([2, 2])), layerwise_parallel=True) Data parallel refers to the parallel mode in which data is segmented. Generally, data is segmented by batch and distributed to each computing unit (worker) for model calculation. In data parallel mode, datasets must be imported in data parallel mode, and `parallel_mode` must be set to `data_parallel`. -For details about the test cases, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +For details about the test cases, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html). ## Automatic Parallel Automatic parallel is a distributed parallel mode that integrates data parallel, model parallel, and hybrid parallel. It can automatically establish a cost model and select a parallel mode for users. The cost model refers to modeling the training time based on the memory computing overhead and the communication overhead, and designing an efficient algorithm to find a parallel policy with a relatively short training time. In automatic parallel mode, datasets must be imported in data parallel mode, and `parallel_mode` must be set to `auto_parallel`. -For details about the test cases, see the [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +For details about the test cases, see the [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html). 
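The data-parallel semantics described above — split the batch across devices, compute local gradients, then aggregate them — can be illustrated without MindSpore. Below is a framework-agnostic pure-Python sketch; the scalar model and the `local_grad` helper are invented for illustration.

```python
from statistics import mean

def local_grad(w, xs, ys):
    # Gradient of the mean squared error 0.5 * (w*x - y)**2 w.r.t. a scalar weight w.
    return mean((w * x - y) * x for x, y in zip(xs, ys))

w = 0.5
xs = list(range(8))          # the full batch
ys = [2.0 * x for x in xs]   # targets produced by the "true" weight 2.0

# Split the batch across 4 "devices"; each one sees only its own shard.
shards = [(xs[i:i + 2], ys[i:i + 2]) for i in range(0, len(xs), 2)]
grads = [local_grad(w, sx, sy) for sx, sy in shards]

# Averaging the per-device gradients (the allreduce-mean aggregation step)
# reproduces the gradient a single device would compute on the full batch.
global_grad = mean(grads)
full_grad = local_grad(w, xs, ys)
```

With equal shard sizes, the mean of the shard gradients equals the full-batch gradient, which is why mean aggregation keeps data-parallel training consistent with standalone training.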
diff --git a/docs/programming_guide/source_en/cache.md b/docs/programming_guide/source_en/cache.md new file mode 100644 index 0000000000000000000000000000000000000000..0d307652d883b052c1e8b2d6f15fe6a97e7cf3b1 --- /dev/null +++ b/docs/programming_guide/source_en/cache.md @@ -0,0 +1,5 @@ +# Single Node Data Cache + +No English version available right now, welcome to contribute. + + diff --git a/docs/programming_guide/source_en/callback.md b/docs/programming_guide/source_en/callback.md index 793d1637fa558241b0cd2d650935d75f108350ae..b8f8cfc00e48faeac83859fb374218ce6c9274f1 100644 --- a/docs/programming_guide/source_en/callback.md +++ b/docs/programming_guide/source_en/callback.md @@ -9,7 +9,7 @@ - + ## Overview @@ -23,19 +23,19 @@ In MindSpore, the callback mechanism is generally used in the network training p This function is combined with the model training process, and saves the model and network parameters after training to facilitate re-inference or re-training. `ModelCheckpoint` is generally used together with `CheckpointConfig`. `CheckpointConfig` is a parameter configuration class that can be used to customize the checkpoint storage policy. - For details, see [Saving Models](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html). + For details, see [Saving Models](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html). - SummaryCollector This function collects common information, such as loss, learning rate, computational graph, and parameter weight, helping you visualize the training process and view information. In addition, you can perform the summary operation to collect data from the summary file. - For details, see [Collecting Summary Record](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/summary_record.html). + For details, see [Collecting Summary Record](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/summary_record.html). 
- LossMonitor This function monitors the loss change during training. When the loss is NAN or INF, the training is terminated in advance. Loss information can be recorded in logs for you to view. - For details, see the [Custom Debugging Information](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#mindsporecallback). + For details, see the [Custom Debugging Information](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_debugging_info.html#mindsporecallback). - TimeMonitor @@ -51,6 +51,6 @@ The following examples are used to introduce the custom callback functions: 2. Save the checkpoint file with the highest accuracy during training. You can customize the function to save a model with the highest accuracy after each epoch. -For details, see [Custom Callback](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#custom-callback). +For details, see [Custom Callback](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_debugging_info.html#custom-callback). According to the tutorial, you can easily customize other callback functions. For example, customize a function to output the detailed training information, including the training progress, training step, training name, and loss value, after each training is complete; terminate training when the loss or model accuracy reaches a certain value by setting the loss or model accuracy threshold. When the loss or model accuracy reaches the threshold, the training is terminated in advance. 
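Custom callbacks like those described above (terminate when the loss is NAN/INF or crosses a threshold) follow a simple hook pattern. The sketch below is framework-agnostic Python, not the MindSpore `Callback` API; the class and method names are invented for illustration.

```python
import math

class StopOnBadLoss:
    """Requests early stopping when loss is NaN/INF or below a threshold."""
    def __init__(self, loss_threshold=0.05):
        self.loss_threshold = loss_threshold
        self.stopped_at = None          # step at which training stopped

    def step_end(self, step, loss):
        if math.isnan(loss) or math.isinf(loss) or loss < self.loss_threshold:
            self.stopped_at = step
            return True                 # ask the loop to terminate
        return False

def train(losses, callbacks):
    # A minimal training loop that fires the step_end hook after each step,
    # mirroring how a framework invokes callbacks during training.
    for step, loss in enumerate(losses):
        if any(cb.step_end(step, loss) for cb in callbacks):
            break

cb = StopOnBadLoss(loss_threshold=0.05)
train([0.9, 0.4, 0.2, 0.04, 0.01], [cb])
print(cb.stopped_at)  # -> 3 (loss 0.04 fell below the 0.05 threshold)
```

The same hook shape covers the other cases mentioned above: a checkpoint callback would compare accuracy in its hook and save the best model instead of stopping.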
diff --git a/docs/programming_guide/source_en/cell.md b/docs/programming_guide/source_en/cell.md index 8b42e2b8d45ae402670cfdb8369693f02843d133..a5617424e983fa4e1cd6a1cece78476335327363 100644 --- a/docs/programming_guide/source_en/cell.md +++ b/docs/programming_guide/source_en/cell.md @@ -21,7 +21,7 @@ - + ## Overview @@ -64,7 +64,7 @@ class Net(nn.Cell): The `parameters_dict` method is used to identify all parameters in the network structure and return `OrderedDict` with key as the parameter name and value as the parameter value. -There are many other methods for returning parameters in the `Cell` class, such as `get_parameters` and `trainable_params`. For details, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/nn/mindspore.nn.Cell.html). +There are many other methods for returning parameters in the `Cell` class, such as `get_parameters` and `trainable_params`. For details, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/nn/mindspore.nn.Cell.html). A code example is as follows: @@ -75,17 +75,13 @@ print(result.keys()) print(result['weight']) ``` -In the example, `Net` uses the preceding network building case to print names of all parameters on the network and the result of the `conv.weight` parameter. +In the example, `Net` uses the preceding network building case to print names of all parameters on the network and the result of the `weight` parameter. The following information is displayed: ```text odict_keys(['weight']) -Parameter (name=weight, value=[[[[-3.95042636e-03 1.08830128e-02 -6.51786150e-03] - [ 8.66129529e-03 7.36288540e-03 -4.32638079e-03] - [-1.47628486e-02 8.24100431e-03 -2.71035335e-03]] - ...... 
- [ 1.58852488e-02 -1.03505487e-02 1.72988791e-02]]]]) +Parameter (name=weight) ``` ### cells_and_names @@ -123,9 +119,9 @@ The following information is displayed: ```text ('', Net1< - (conv): Conv2d + (conv): Conv2d >) -('conv', Conv2d) +('conv', Conv2d) -------names------- ['conv'] ``` @@ -338,7 +334,7 @@ In this case, two pieces of tensor data are built. The `nn.L1Loss` API is used t ## Optimization Algorithms -`mindspore.nn.optim` is a module that implements various optimization algorithms in the MindSpore framework. For details, see [Optimization Algorithms](https://www.mindspore.cn/doc/programming_guide/en/master/optim.html) +`mindspore.nn.optim` is a module that implements various optimization algorithms in the MindSpore framework. For details, see [Optimization Algorithms](https://www.mindspore.cn/doc/programming_guide/en/r1.1/optim.html) ## Building a Customized Network diff --git a/docs/programming_guide/source_en/conf.py b/docs/programming_guide/source_en/conf.py index a1fd767271ac159540440ed65bd0d676163366a9..a2abcc9090f480f4504ca43ff682a2e762a5a89f 100644 --- a/docs/programming_guide/source_en/conf.py +++ b/docs/programming_guide/source_en/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/programming_guide/source_en/context.md b/docs/programming_guide/source_en/context.md index 709e994e1e140c79786e5e2d62d14c7757f8e92b..47384862d91469754ff0049fbedb61903aafdf2e 100644 --- a/docs/programming_guide/source_en/context.md +++ b/docs/programming_guide/source_en/context.md @@ -16,7 +16,7 @@ - + ## Overview @@ -86,27 +86,7 @@ context.set_context(device_target="Ascend", device_id=6) The context contains the context.set_auto_parallel_context API that is used to configure parallel training parameters. 
This API must be called before the network is initialized. -- `parallel_mode`: parallel distributed mode. The default value is `ParallelMode.STAND_ALONE`. The options are `ParallelMode.DATA_PARALLEL` and `ParallelMode.AUTO_PARALLEL`. - -- `gradients_mean`: During backward computation, the framework collects gradients of parameters in data parallel mode across multiple hosts, obtains the global gradient value, and transfers the global gradient value to the optimizer for update. The default value is `False`, which indicates that the `allreduce_sum` operation is applied. The value `True` indicates that the `allreduce_mean` operation is applied. - -- `enable_parallel_optimizer`: This feature is being developed. It enables the model parallelism of an optimizer and splits the weight to each device for update and synchronization to improve performance. This parameter is valid only in data parallel mode and when the number of parameters is greater than the number of hosts. The `Lamb` and `Adam` optimizers are supported. - -- `device_num`: indicates the number of available device. Its value is int type and must be in the range of 1~4096. - -- `global_rank`: indicates the logical sequence number of the current device, its value is int type and must be in the range of 0~4095. - -> You are advised to set `device_num` and `global_rank` to their default values. The framework calls the HCCL API to obtain the values. - -A code example is as follows: - -```python -from mindspore import context -from mindspore.context import ParallelMode -context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, gradients_mean=True) -``` - -For details about distributed parallel training, see [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html). +> For details about distributed management, see [Parallel Distributed Training](https://www.mindspore.cn/doc/programming_guide/en/r1.1/auto_parallel.html). 
## Maintenance and Test Management @@ -118,13 +98,25 @@ The system can collect profiling data during training and use the profiling tool - `enable_profiling`: indicates whether to enable the profiling function. If this parameter is set to True, the profiling function is enabled, and profiling options are read from enable_options. If this parameter is set to False, the profiling function is disabled and only training_trace is collected. -- `profiling_options`: profiling collection options. The values are as follows. Multiple data items can be collected. training_trace: collects step trace data, that is, software information about training tasks and AI software stacks, to analyze the performance of training tasks. It focuses on data argumentation, forward and backward computation, and gradient aggregation update. task_trace: collects task trace data, that is, hardware information of the Ascend 910 processor HWTS/AICore and analysis of task start and end information. op_trace: collects performance data of a single operator. Format: ['op_trace','task_trace','training_trace'] +- `profiling_options`: profiling collection options. The values are as follows. Multiple data items can be collected. + result_path: the path where the profiling result files are saved. The directory specified by this parameter must be created in advance on the training environment (container or host side), and the user configured during installation must have read and write permissions on it. Both absolute and relative paths (relative to the current directory when the command is executed) are supported. An absolute path starts with '/', for example: /home/data/output.
A relative path starts directly with the directory name, for example: output; + training_trace: collects step trace data, that is, software information about the training task and the AI software stack, for performance analysis of the training task; it focuses on data augmentation, forward and backward computation, and gradient aggregation update. The value is on/off; + task_trace: collects task trace data, that is, hardware information of the HWTS/AICore of the Ascend 910 processor, and analyzes the start and end of each task. The value is on/off; + aicpu_trace: collects profiling data enhanced with AI CPU data. The value is on/off; + fp_point: specifies the start position of the forward operators in the training network step trace, used to record the start timestamp of the forward computation. The configured value is the name of the first forward operator. When the value is empty, the system automatically obtains the forward operator name; + bp_point: specifies the end position of the backward operators in the training network step trace, used to record the end timestamp of the backward computation. The configured value is the name of the last backward operator. When the value is empty, the system automatically obtains the backward operator name; + ai_core_metrics: the values are as follows: + - ArithmeticUtilization: percentage statistics of various computation metrics; + - PipeUtilization: time-consumption ratio of the computation units and data-movement units; this is the default value; + - Memory: percentage of external memory read and write instructions; + - MemoryL0: percentage of internal memory read and write instructions; + - ResourceConflictRatio: proportion of pipeline queue instructions.
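Since `profiling_options` now takes a JSON string rather than a list of names, it can be assembled with the standard `json` module. A minimal sketch using the option names documented above (`build_profiling_options` is a hypothetical helper, not a MindSpore API):

```python
import json

# Hypothetical helper (not a MindSpore API): assemble the JSON string that
# context.set_context(enable_profiling=True, profiling_options=...) expects,
# using the option names documented above.
def build_profiling_options(result_path, training_trace="on", task_trace="off"):
    options = {
        "result_path": result_path,        # must already exist and be writable
        "training_trace": training_trace,  # step trace data, "on"/"off"
        "task_trace": task_trace,          # HWTS/AICore task data, "on"/"off"
    }
    return json.dumps(options)

print(build_profiling_options("/home/data/output"))
```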
A code example is as follows: ```python from mindspore import context -context.set_context(enable_profiling=True, profiling_options="training_trace") +context.set_context(enable_profiling=True, profiling_options='{"result_path":"/home/data/output","training_trace":"on"}') ``` ### Saving MindIR @@ -142,13 +134,13 @@ from mindspore import context context.set_context(save_graphs=True) ``` -> For details about the debugging method, see [Asynchronous Dump](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#asynchronous-dump). +> For details about the debugging method, see [Asynchronous Dump](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_debugging_info.html#asynchronous-dump). ### Print Operator Disk Flushing By default, the MindSpore self-developed print operator can output the tensor or character string information entered by users. Multiple character string inputs, multiple tensor inputs, and hybrid inputs of character strings and tensors are supported. The input parameters are separated by commas (,). -> For details about the print function, see [MindSpore Print Operator](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_debugging_info.html#mindspore-print-operator). +> For details about the print function, see [MindSpore Print Operator](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_debugging_info.html#mindspore-print-operator). - `print_file_path`: saves the print operator data to a file and disables the screen printing function. If the file to be saved exists, a timestamp suffix is added to the file. Saving data to a file can solve the problem that the data displayed on the screen is lost when the data volume is large. 
@@ -159,4 +151,4 @@ from mindspore import context context.set_context(print_file_path="print.pb") ``` -> For details about the context API, see [mindspore.context](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.context.html). +> For details about the context API, see [mindspore.context](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.context.html). diff --git a/docs/programming_guide/source_en/customized.rst b/docs/programming_guide/source_en/customized.rst index 69d484707e05903fe9e29d6546a10189930a3dab..bfc40ae74e2ba03f873635b0123c58d6193bcbd3 100644 --- a/docs/programming_guide/source_en/customized.rst +++ b/docs/programming_guide/source_en/customized.rst @@ -4,6 +4,6 @@ Custom Operators .. toctree:: :maxdepth: 1 - Custom Operators(Ascend) - Custom Operators(GPU) - Custom Operators(CPU) + Custom Operators (Ascend) + Custom Operators (GPU) + Custom Operators (CPU) diff --git a/docs/programming_guide/source_en/data_pipeline.rst b/docs/programming_guide/source_en/data_pipeline.rst index 75d7846d2d8692dc3031b80737d5daaee0c487d4..0e52d9ddf432e0ea22730d34e8ccf448f617c014 100644 --- a/docs/programming_guide/source_en/data_pipeline.rst +++ b/docs/programming_guide/source_en/data_pipeline.rst @@ -11,3 +11,4 @@ Data Pipeline tokenizer dataset_conversion auto_augmentation + cache diff --git a/docs/programming_guide/source_en/dataset_conversion.md b/docs/programming_guide/source_en/dataset_conversion.md index 84320bf41642fbbd4e18c4f2d6e1a50cd7277aa6..fbe20fbfeeb1fdf69e49ff69d29a635a97c705e1 100644 --- a/docs/programming_guide/source_en/dataset_conversion.md +++ b/docs/programming_guide/source_en/dataset_conversion.md @@ -15,7 +15,7 @@ - + ## Overview @@ -180,7 +180,7 @@ MindSpore provides tool classes for converting common datasets to MindRecord. 
Th | TFRecord | TFRecordToMR | | CSV File | CsvToMR | -For details about dataset conversion, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.mindrecord.html). +For details about dataset conversion, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.mindrecord.html). ### Converting the CIFAR-10 Dataset diff --git a/docs/programming_guide/source_en/dataset_loading.md b/docs/programming_guide/source_en/dataset_loading.md index 993718c7c61e1be5d0452805ab1b5c7c142886f7..cac7ae7a47b1e4b21b68e8bd30a1242d9f194a5c 100644 --- a/docs/programming_guide/source_en/dataset_loading.md +++ b/docs/programming_guide/source_en/dataset_loading.md @@ -21,7 +21,7 @@ - + ## Overview @@ -50,7 +50,7 @@ MindSpore can also load datasets in different data storage formats. You can dire MindSpore also supports user-defined dataset loading using `GeneratorDataset`. You can implement your own dataset classes as required. -> For details about the API for dataset loading, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.html). +> For details about the API for dataset loading, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.html). ## Loading Common Dataset @@ -205,7 +205,7 @@ The following describes how to load dataset files in specific formats. MindRecord is a data format defined by MindSpore. Using MindRecord can improve performance. -> For details about how to convert a dataset into the MindRecord data format, see [Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/en/master/dataset_conversion.html). +> For details about how to convert a dataset into the MindRecord data format, see [Data Format Conversion](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dataset_conversion.html). The following example uses the `MindDataset` API to load MindRecord files, and displays labels of the loaded data. 
@@ -390,7 +390,7 @@ For the datasets that cannot be directly loaded by MindSpore, you can construct ### Constructing Dataset Generator Function -Construct a generator function that defines the data return method, and then use this function to construct the user-defined dataset object. This method is appliable for simple scenarios. +Construct a generator function that defines the data return method, and then use this function to construct the user-defined dataset object. This method is applicable for simple scenarios. ```python import numpy as np @@ -444,6 +444,7 @@ class IterDatasetGenerator: return item def __iter__(self): + self.__index = 0 return self def __len__(self): @@ -468,7 +469,7 @@ The output is as follows: ### Constructing Random Accessible Dataset Class -Construct a dataset class to implement the `__getitem__` method, and then use the object of this class to construct a user-defined dataset object. This method is appliable for achieving distributed training. +Construct a dataset class to implement the `__getitem__` method, and then use the object of this class to construct a user-defined dataset object. This method is applicable for achieving distributed training. ```python import numpy as np diff --git a/docs/programming_guide/source_en/dtype.md b/docs/programming_guide/source_en/dtype.md index 29437cc5de0d6dfbf696f4548167d62f04f3d682..2f922384f37ec6d3ee8b74d17ef879367df7a687 100644 --- a/docs/programming_guide/source_en/dtype.md +++ b/docs/programming_guide/source_en/dtype.md @@ -8,7 +8,7 @@ - + ## Overview @@ -16,7 +16,7 @@ MindSpore tensors support different data types, including `int8`, `int16`, `int3 In the computation process of MindSpore, the `int` data type in Python is converted into the defined `int64` type, and the `float` data type is converted into the defined `float32` type. -For details about the supported types, see . +For details about the supported types, see . In the following code, the data type of MindSpore is int32. 
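The `self.__index = 0` line added to `IterDatasetGenerator.__iter__` above is what makes the generator reusable across epochs; without it, a second pass starts at the exhausted index and yields nothing, so only the first epoch sees data. A plain-Python sketch of the same pattern (no MindSpore needed):

```python
import numpy as np

# Minimal iterable-dataset sketch mirroring IterDatasetGenerator above.
class IterGenerator:
    def __init__(self, data):
        self.__data = data
        self.__index = 0

    def __next__(self):
        if self.__index >= len(self.__data):
            raise StopIteration
        item = self.__data[self.__index]
        self.__index += 1
        return item

    def __iter__(self):
        self.__index = 0  # the added line: rewind so each epoch restarts
        return self

    def __len__(self):
        return len(self.__data)

gen = IterGenerator(np.arange(4))
epoch1 = [int(x) for x in gen]
epoch2 = [int(x) for x in gen]  # non-empty only because __iter__ rewinds
print(epoch1, epoch2)
```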
diff --git a/docs/programming_guide/source_en/infer.md b/docs/programming_guide/source_en/infer.md index 84ade7f462316d52e379492e024dbbbc1ab3867d..4fabbef22f6c55976c9927b344a72720e1b42ad8 100644 --- a/docs/programming_guide/source_en/infer.md +++ b/docs/programming_guide/source_en/infer.md @@ -6,12 +6,12 @@ - + Based on the model trained by MindSpore, it supports the execution of inferences on various platforms such as Ascend 910 AI processor, Ascend 310 AI processor, GPU, CPU, and device side. For more details, please refer to the following tutorials: -- [Inference on the Ascend 910 AI processor](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_ascend_910.html) -- [Inference on the Ascend 310 AI processor](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_ascend_310.html) -- [Inference on a GPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_gpu.html) -- [Inference on a CPU](https://www.mindspore.cn/tutorial/inference/en/master/multi_platform_inference_cpu.html) -- [Inference on the device side](https://www.mindspore.cn/tutorial/lite/en/master/quick_start/quick_start.html) +- [Inference on the Ascend 910 AI processor](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_910.html) +- [Inference on the Ascend 310 AI processor](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_310.html) +- [Inference on a GPU](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_gpu.html) +- [Inference on a CPU](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_cpu.html) +- [Inference on the device side](https://www.mindspore.cn/tutorial/lite/en/r1.1/quick_start/quick_start.html) diff --git a/docs/programming_guide/source_en/network_component.md b/docs/programming_guide/source_en/network_component.md index 
102931b1b8cc131df26e1b18207db03cfb48bebe..49e9f7b56210286f90499f66ad8c8054cfdeed93 100644 --- a/docs/programming_guide/source_en/network_component.md +++ b/docs/programming_guide/source_en/network_component.md @@ -10,7 +10,7 @@ - + ## Overview @@ -22,12 +22,11 @@ The following describes three network components, `GradOperation`, `WithLossCell ## GradOperation -GradOperation is used to generate the gradient of the input function. The `get_all`, `get_by_list`, and `sens_param` parameters are used to control the gradient calculation method. For details, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GradOperation.html) +GradOperation is used to generate the gradient of the input function. The `get_all`, `get_by_list`, and `sens_param` parameters are used to control the gradient calculation method. For details, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GradOperation.html) The following is an example of using GradOperation: ```python import numpy as np - import mindspore.nn as nn from mindspore import Tensor, Parameter from mindspore import dtype as mstype @@ -64,9 +63,8 @@ The preceding example is used to calculate the gradient value of `Net` to x. You The following information is displayed: ```text -Tensor(shape=[2, 3], dtype=Float32, [[1.4100001 1.5999999 6.6 ] - [1.4100001 1.5999999 6.6 ]]) + [1.4100001 1.5999999 6.6 ]] ``` All other components, such as `WithGradCell` and `TrainOneStepCell`, involved in gradient calculation use `GradOperation`. @@ -80,7 +78,6 @@ The following uses an example to describe how to use this function. 
First, you n ```python import numpy as np - import mindspore.context as context import mindspore.nn as nn from mindspore import Tensor diff --git a/docs/programming_guide/source_en/network_list.rst b/docs/programming_guide/source_en/network_list.rst index 5118f160b0b99ba8edf4e7cc9f3aba11a24a15a9..a1ef6c0ae242fbf3f5cb6a4f2dc2f585ac7fc2ab 100644 --- a/docs/programming_guide/source_en/network_list.rst +++ b/docs/programming_guide/source_en/network_list.rst @@ -4,4 +4,4 @@ Network List .. toctree:: :maxdepth: 1 - MindSpore Network List \ No newline at end of file + MindSpore Network List \ No newline at end of file diff --git a/docs/programming_guide/source_en/operator_list.rst b/docs/programming_guide/source_en/operator_list.rst index d2c966c71941f94f84667fa994506bf1dac82440..5a4bf5a86fee49a81d812d121473dd5b12855d40 100644 --- a/docs/programming_guide/source_en/operator_list.rst +++ b/docs/programming_guide/source_en/operator_list.rst @@ -4,7 +4,7 @@ Operator List .. toctree:: :maxdepth: 1 - MindSpore Operator List - MindSpore Implicit Type Conversion - MindSpore Distributed Operator List - MindSpore Lite Operator List \ No newline at end of file + MindSpore Operator List + MindSpore Implicit Type Conversion + MindSpore Distributed Operator List + MindSpore Lite Operator List \ No newline at end of file diff --git a/docs/programming_guide/source_en/operators.md b/docs/programming_guide/source_en/operators.md index 12c255d8a5f4bb74c7b2a4bf7e295b6b2d47ccb1..7223e37b6648559ae61435f10fdb114d8464abdd 100644 --- a/docs/programming_guide/source_en/operators.md +++ b/docs/programming_guide/source_en/operators.md @@ -40,7 +40,7 @@ - + ## Overview @@ -56,7 +56,7 @@ APIs related to operators include operations, functional, and composite. Operato ### mindspore.ops.operations -The operations API provides all primitive operator APIs, which are the lowest-order operator APIs open to users. 
For details about the supported operators, see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list.html). +The operations API provides all primitive operator APIs, which are the lowest-order operator APIs open to users. For details about the supported operators, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list.html). Primitive operators directly encapsulate the implementation of operators at bottom layers such as Ascend, GPU, AICPU, and CPU, providing basic operator capabilities for users. @@ -85,7 +85,7 @@ output = [ 1. 8. 64.] ### mindspore.ops.functional -To simplify the calling process of operators without attributes, MindSpore provides the functional version of some operators. For details about the input parameter requirements, see the input and output requirements of the original operator. For details about the supported operators, see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list_ms.html#mindspore-ops-functional). +To simplify the calling process of operators without attributes, MindSpore provides the functional version of some operators. For details about the input parameter requirements, see the input and output requirements of the original operator. For details about the supported operators, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list_ms.html#mindspore-ops-functional). For example, the functional version of the `P.Pow` operator is `F.tensor_pow`. @@ -123,7 +123,7 @@ from mindspore import Tensor mean = Tensor(1.0, mstype.float32) stddev = Tensor(1.0, mstype.float32) output = C.normal((2, 3), mean, stddev, seed=5) -print("ouput =", output) +print("output =", output) ``` The following information is displayed: @@ -168,7 +168,7 @@ tensor [[2.4, 4.2] scalar 3 ``` -In addition, the high-order function `GradOperation` provides the method of computing the gradient function corresponding to the input function. 
For details, see [mindspore.ops](https://www.mindspore.cn/doc/api_python/en/master/mindspore/ops/mindspore.ops.GradOperation.html). +In addition, the high-order function `GradOperation` provides the method of computing the gradient function corresponding to the input function. For details, see [mindspore.ops](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/ops/mindspore.ops.GradOperation.html). ### Combination usage of operations/functional/composite three types of operators @@ -190,7 +190,7 @@ pow = ops.Pow() ## Operator Functions -Operators can be classified into seven functional modules: tensor operations, network operations, array operations, image operations, encoding operations, debugging operations, and quantization operations. For details about the supported operators on the Ascend AI processors, GPU, and CPU, see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list.html). +Operators can be classified into seven functional modules: tensor operations, network operations, array operations, image operations, encoding operations, debugging operations, and quantization operations. For details about the supported operators on the Ascend AI processors, GPU, and CPU, see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list.html). ### Tensor Operations @@ -358,9 +358,10 @@ from mindspore import Tensor import mindspore.ops as ops import numpy as np -input_ = Tensor(np.ones([2, 8]).astype(np.float32)) -broadcast = ops.Broadcast(1) -output = broadcast((input_,)) +shape = (2, 3) +input_x = Tensor(np.array([1, 2, 3]).astype(np.float32)) +broadcast_to = ops.BroadcastTo(shape) +output = broadcast_to(input_x) print(output) ``` @@ -368,8 +369,8 @@ print(output) The following information is displayed: ```text -[[1.0, 1.0, 1.0 ... 1.0, 1.0, 1.0], - [1.0, 1.0, 1.0 ... 1.0, 1.0, 1.0]] +[[1. 2. 3.] + [1. 2. 3.]] ``` ### Network Operations @@ -529,7 +530,7 @@ print(result) The following information is displayed: ```text -[0. 0. 0. 
0.] +(Tensor(shape=[4], dtype=Float32, value= [ 1.98989999e+00, -4.90300000e-01, 1.69520009e+00, 3.98009992e+00]),) ``` ### Array Operations @@ -607,7 +608,7 @@ print(output) The following information is displayed: ```text -[3, 2, 1] +(3, 2, 1) ``` ### Image Operations @@ -677,8 +678,8 @@ from mindspore import Tensor import mindspore.ops as ops import mindspore -anchor_box = Tensor([[4,1,2,1],[2,2,2,3]],mindspore.float32) -groundtruth_box = Tensor([[3,1,2,2],[1,2,1,4]],mindspore.float32) +anchor_box = Tensor([[2, 2, 2, 3], [2, 2, 2, 3]],mindspore.float32) +groundtruth_box = Tensor([[1, 2, 1, 4], [1, 2, 1, 4]],mindspore.float32) boundingbox_encode = ops.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0)) res = boundingbox_encode(anchor_box, groundtruth_box) print(res) @@ -687,8 +688,8 @@ print(res) The following information is displayed: ```text -[[5.0000000e-01 5.0000000e-01 -6.5504000e+04 6.9335938e-01] - [-1.0000000e+00 2.5000000e-01 0.0000000e+00 4.0551758e-01]] +[[ -1. 0.25 0. 0.40551758] + [ -1. 0.25 0. 0.40551758]] ``` #### BoundingBoxDecode diff --git a/docs/programming_guide/source_en/optim.md b/docs/programming_guide/source_en/optim.md index 7a697d2df242ef9cb06b369230f312f81b29426a..af3de158463a35f0599ee70a5b12bd0c9484f2d5 100644 --- a/docs/programming_guide/source_en/optim.md +++ b/docs/programming_guide/source_en/optim.md @@ -13,7 +13,7 @@ - + ## Overview diff --git a/docs/programming_guide/source_en/parameter.md b/docs/programming_guide/source_en/parameter.md index 1d3f7e3327130ccb009f749e4621f995b786c86d..fd3a04dbb156b14c64a3ba04856e2a72c29d8027 100644 --- a/docs/programming_guide/source_en/parameter.md +++ b/docs/programming_guide/source_en/parameter.md @@ -11,7 +11,7 @@ - + ## Overview @@ -37,7 +37,7 @@ To update a parameter, set `requires_grad` to `True`. When `layerwise_parallel` is set to True, this parameter will be filtered out during parameter broadcast and parameter gradient aggregation. 
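As a side note on the `BoundingBoxEncode` correction above: the new expected output follows the standard delta encoding of corner-format boxes. A NumPy sketch of that math, under the assumption of zero means, unit stds, and the +1 pixel width/height convention (this is not the MindSpore kernel; the on-device result differs only in low-order digits due to reduced-precision arithmetic):

```python
import numpy as np

# Assumed convention: boxes are (x1, y1, x2, y2); means=(0,0,0,0), stds=(1,1,1,1).
def bbox_encode(anchor, gt):
    aw = anchor[:, 2] - anchor[:, 0] + 1.0   # +1 pixel width convention
    ah = anchor[:, 3] - anchor[:, 1] + 1.0
    ax = anchor[:, 0] + 0.5 * (aw - 1.0)     # anchor centers
    ay = anchor[:, 1] + 0.5 * (ah - 1.0)
    gw = gt[:, 2] - gt[:, 0] + 1.0
    gh = gt[:, 3] - gt[:, 1] + 1.0
    gx = gt[:, 0] + 0.5 * (gw - 1.0)
    gy = gt[:, 1] + 0.5 * (gh - 1.0)
    # deltas: center offsets normalized by anchor size, log size ratios
    return np.stack([(gx - ax) / aw, (gy - ay) / ah,
                     np.log(gw / aw), np.log(gh / ah)], axis=1)

anchor_box = np.array([[2, 2, 2, 3], [2, 2, 2, 3]], dtype=np.float32)
groundtruth_box = np.array([[1, 2, 1, 4], [1, 2, 1, 4]], dtype=np.float32)
print(bbox_encode(anchor_box, groundtruth_box))
# each row ≈ [-1, 0.25, 0, 0.4055]
```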
-For details about the configuration of distributed parallelism, see . +For details about the configuration of distributed parallelism, see . In the following example, `Parameter` objects are built using three different data types. All the three `Parameter` objects need to be updated, and layerwise parallelism is not used. @@ -111,7 +111,7 @@ inited_param: None requires_grad: True layerwise_parallel: False -data: Parameter (name=x) +data: Parameter (name=Parameter) ``` ## Methods @@ -121,7 +121,7 @@ data: Parameter (name=x) - `set_data`: sets the data saved by `Parameter`. `Tensor`, `Initializer`, `int`, and `float` can be input for setting. When the input parameter `slice_shape` of the method is set to True, the shape of `Parameter` can be changed. Otherwise, the configured shape must be the same as the original shape of `Parameter`. -- `set_param_ps`: controls whether training parameters are trained by using the [Parameter Server](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/apply_parameter_server_training.html). +- `set_param_ps`: controls whether training parameters are trained by using the [Parameter Server](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/apply_parameter_server_training.html). - `clone`: clones `Parameter`. You can specify the parameter name after cloning. diff --git a/docs/programming_guide/source_en/performance_optimization.md b/docs/programming_guide/source_en/performance_optimization.md index 8df9d479158c6e361d005f150dd10339ec7f6500..843acbd9085682cfa1c4653ff73be289a7b1dad6 100644 --- a/docs/programming_guide/source_en/performance_optimization.md +++ b/docs/programming_guide/source_en/performance_optimization.md @@ -6,13 +6,13 @@ - + MindSpore provides a variety of performance optimization methods, users can use them to improve the performance of training and inference according to the actual situation. 
| Optimization Stage | Optimization Method | Supported | | --- | --- | --- | -| Training | [Distributed Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_tutorials.html) | Ascend, GPU | -| | [Mixed Precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) | Ascend, GPU | -| | [Graph Kernel Fusion](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_graph_kernel_fusion.html) | Ascend | -| | [Gradient Accumulation](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/apply_gradient_accumulation.html) | Ascend, GPU | +| Training | [Distributed Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/distributed_training_tutorials.html) | Ascend, GPU | +| | [Mixed Precision](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/enable_mixed_precision.html) | Ascend, GPU | +| | [Graph Kernel Fusion](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/enable_graph_kernel_fusion.html) | Ascend | +| | [Gradient Accumulation](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/apply_gradient_accumulation.html) | Ascend, GPU | diff --git a/docs/programming_guide/source_en/pipeline.md b/docs/programming_guide/source_en/pipeline.md index 178ad149f7cebe5a9389f84ee332a6d2f1e0c411..231811e211676addda197c3a8e9a06b258533bd3 100644 --- a/docs/programming_guide/source_en/pipeline.md +++ b/docs/programming_guide/source_en/pipeline.md @@ -14,7 +14,7 @@ - + ## Overview @@ -22,7 +22,7 @@ Data is the basis of deep learning. Good data input can play a positive role in Each dataset class of MindSpore provides multiple data processing operators. You can build a data processing pipeline to define the data processing operations to be used. In this way, data can be continuously transferred to the training system through the data processing pipeline during the training process. 
-The following table lists part of the common data processing operators supported by MindSpore. For more data processing operations, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.html). +The following table lists part of the common data processing operators supported by MindSpore. For more data processing operations, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.html). | Data Processing Operator | Description | | ---- | ---- | @@ -76,7 +76,7 @@ The output is as follows: Applies a specified function or operator to specified columns in a dataset to implement data mapping. You can customize the mapping function or use operators in c_transforms or py_transforms to augment image and text data. -> For details about how to use data augmentation, see [Data Augmentation](https://www.mindspore.cn/doc/programming_guide/en/master/augmentation.html) in the Programming Guide. +> For details about how to use data augmentation, see [Data Augmentation](https://www.mindspore.cn/doc/programming_guide/en/r1.1/augmentation.html) in the Programming Guide. ![map](./images/map.png) diff --git a/docs/programming_guide/source_en/probability.md b/docs/programming_guide/source_en/probability.md index 56aa7ea8333d8896d3f5a1740b304123ccf68ac7..8ffb10921dbddc90af0f0b77e7e6f991d66d7e84 100644 --- a/docs/programming_guide/source_en/probability.md +++ b/docs/programming_guide/source_en/probability.md @@ -47,7 +47,7 @@ - + MindSpore deep probabilistic programming is to combine Bayesian learning with deep learning, including probability distribution, probability distribution mapping, deep probability network, probability inference algorithm, Bayesian layer, Bayesian conversion, and Bayesian toolkit. For professional Bayesian learning users, it provides probability sampling, inference algorithms, and model build libraries. 
On the other hand, advanced APIs are provided for users who are unfamiliar with Bayesian deep learning, so that they can use Bayesian models without changing the deep learning programming logic. @@ -254,8 +254,8 @@ Properties are described as follows: The `Distribution` base class invokes the private API in the `Gumbel` and `TransformedDistribution` to implement the public APIs in the base class. `Gumbel` supports the following public APIs: -- `mean`,`mode`,`var`, and `sd`:No paramter. -- `entropy`: No paramter. +- `mean`,`mode`,`var`, and `sd`:No parameter. +- `entropy`: No parameter. - `cross_entropy` and `kl_loss`: The input parameters *dist*, *loc_b*, and *scale_b* are mandatory. *dist* indicates the name of another distribution type. Currently, only *'Gumbel'* is supported. *loc_b* and *scale_b* indicate the location and scale of distribution *b*. - `prob`, `log_prob`, `cdf`, `log_cdf`, `survival_function`, and `log_survival`: The input parameter *value* is mandatory. - `sample`: Input parameters sample shape *shape* is optional. 
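For the Normal case these distribution APIs have simple closed forms. A NumPy/`math` sketch of the quantities printed in the Normal example below (the evaluation points −0.5, 0.0, 0.5 are an assumption consistent with the printed `prob`/`cdf` values; this illustrates the math, not the MindSpore implementation):

```python
import math
import numpy as np

# Closed forms for Normal(mean=0, sd=1); evaluation points are assumed.
mean, sd = 0.0, 1.0
value = np.array([-0.5, 0.0, 0.5])

entropy = 0.5 * math.log(2.0 * math.pi * math.e * sd ** 2)
prob = np.exp(-0.5 * ((value - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))
cdf = np.array([0.5 * (1.0 + math.erf((v - mean) / (sd * math.sqrt(2.0))))
                for v in value])

# KL(N(0, 1) || N(mean_b=1, sd_b=2)), the kl_loss call in the example
mean_b, sd_b = 1.0, 2.0
kl = math.log(sd_b / sd) + (sd ** 2 + (mean - mean_b) ** 2) / (2.0 * sd_b ** 2) - 0.5

print(round(entropy, 7))  # 1.4189385
print(round(kl, 7))       # 0.4431472
print(prob, cdf)
```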
@@ -361,23 +361,28 @@ mean_b = Tensor(1.0, dtype=mstype.float32) sd_b = Tensor(2.0, dtype=mstype.float32) kl = my_normal.kl_loss('Normal', mean_b, sd_b) +# get the distribution args as a tuple +dist_arg = my_normal.get_dist_args() + print("mean: ", mean) print("var: ", var) print("entropy: ", entropy) print("prob: ", prob) print("cdf: ", cdf) print("kl: ", kl) +print("dist_arg: ", dist_arg) ``` The output is as follows: ```python -mean: 0.0 -var: 1.0 -entropy: 1.4189385 -prob: [0.35206532, 0.3989423, 0.35206532] -cdf: [0.3085482, 0.5, 0.6914518] -kl: 0.44314718 +mean:  0.0 +var:  1.0 +entropy:  1.4189385 +prob:  [0.35206532 0.3989423  0.35206532] +cdf:  [0.30853754 0.5        0.69146246] +kl:  0.44314718 +dist_arg: (Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1)) ``` ### Probability Distribution Class Application in Graph Mode @@ -463,36 +468,34 @@ tx = Tensor(x, dtype=dtype.float32) cdf = LogNormal.cdf(tx) # generate samples from the distribution -shape = ((3, 2)) +shape = (3, 2) sample = LogNormal.sample(shape) # get information of the distribution print(LogNormal) -# get information of the underyling distribution and the bijector separately +# get information of the underlying distribution and the bijector separately print("underlying distribution:\n", LogNormal.distribution) print("bijector:\n", LogNormal.bijector) # get the computation results print("cdf:\n", cdf) -print("sample:\n", sample) +print("sample shape:\n", sample.shape) ``` The output is as follows: ```python TransformedDistribution< - (_bijector): Exp - (_distribution): Normal - > +  (_bijector): Exp +  (_distribution): Normal +  > underlying distribution: - Normal + Normal bijector: - Exp + Exp cdf: - [0.7558914 0.9462397 0.9893489] -sample: - [[ 3.451917 0.645654 ] - [ 0.86533326 1.2023963 ] - [ 2.3343778 11.053896 ]] + [0.7558914 0.9462397 0.9893489] +sample shape: +(3, 2) ``` When the `TransformedDistribution` is constructed to map the transformed 
`is_constant_jacobian = true` (for example, `ScalarAffine`), the constructed `TransformedDistribution` instance can use the `mean` API to calculate the average value. For example: @@ -544,15 +547,14 @@ x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32) tx = Tensor(x, dtype=dtype.float32) cdf, sample = net(tx) print("cdf: ", cdf) -print("sample: ", sample) +print("sample shape: ", sample.shape) ``` The output is as follows: ```python cdf: [0.7558914 0.86403143 0.9171715 0.9462397 ] -sample: [[0.5361498 0.26627186 2.766659 ] - [1.5831033 0.4096472 2.008679 ]] +sample shape: (2, 3) ``` ## Probability Distribution Mapping @@ -694,11 +696,11 @@ print("inverse_log_jacobian: ", inverse_log_jaco) The output is as follows: ```python -PowerTransform -forward: [2.23606801e+00, 2.64575124e+00, 3.00000000e+00, 3.31662488e+00] -inverse: [1.50000000e+00, 4.00000048e+00, 7.50000000e+00, 1.20000010e+01] -forward_log_jacobian: [-8.04718971e-01, -9.72955048e-01, -1.09861231e+00, -1.19894767e+00] -inverse_log_jacobian: [6.93147182e-01 1.09861231e+00 1.38629436e+00 1.60943794e+00] +PowerTransform +forward:  [2.236068  2.6457515 3.        3.3166249] +inverse:  [ 1.5       4.        7.5      12.000001] +forward_log_jacobian:  [-0.804719  -0.9729551 -1.0986123 -1.1989477] +inverse_log_jacobian:  [0.6931472 1.0986123 1.3862944 1.609438 ] ``` ### Invoking a Bijector Instance in Graph Mode @@ -740,10 +742,10 @@ print("inverse_log_jaco: ", inverse_log_jaco) The output is as follows: ```python -forward: [2.236068 2.6457512 3. 3.3166249] -inverse: [ 1.5 4.0000005 7.5 12.000001 ] -forward_log_jaco: [-0.804719 -0.97295505 -1.0986123 -1.1989477 ] -inverse_log_jaco: [0.6931472 1.0986123 1.3862944 1.609438 ] +forward:  [2.236068  2.6457515 3.        3.3166249] +inverse:  [ 1.5       4.        
7.5      12.000001] +forward_log_jaco:  [-0.804719  -0.9729551 -1.0986123 -1.1989477] +inverse_log_jaco:  [0.6931472 1.0986123 1.3862944 1.609438 ] ``` ## Deep Probabilistic Network @@ -849,7 +851,7 @@ decoder = Decoder() cvae = ConditionalVAE(encoder, decoder, hidden_size=400, latent_size=20, num_classes=10) ``` -Load a dataset, for example, Mnist. For details about the data loading and preprocessing process, see [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html). The create_dataset function is used to create a data iterator. +Load a dataset, for example, MNIST. For details about the data loading and preprocessing process, see [Implementing an Image Classification Application](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html). The `create_dataset` function is used to create a data iterator. ```python ds_train = create_dataset(image_path, 128, 1) @@ -913,7 +915,7 @@ If you want the generated sample to be better and clearer, you can define a more The following uses the APIs in `nn.probability.bnn_layers` of MindSpore to implement the BNN image classification model. The APIs in `nn.probability.bnn_layers` of MindSpore include `NormalPrior`, `NormalPosterior`, `ConvReparam`, `DenseReparam`, `DenseLocalReparam` and `WithBNNLossCell`. The biggest difference between BNN and DNN is that the weight and bias of the BNN layer are not fixed values, but follow a distribution. `NormalPrior` and `NormalPosterior` are respectively used to generate a prior distribution and a posterior distribution that follow a normal distribution. `ConvReparam` and `DenseReparam` are the Bayesian convolutional layer and fully connected layers implemented by using the reparameterization method, respectively. `DenseLocalReparam` is the Bayesian fully connected layers implemented by using the local reparameterization method.
`WithBNNLossCell` is used to encapsulate the BNN and loss function. -For details about how to use the APIs in `nn.probability.bnn_layers` to build a Bayesian neural network and classify images, see [Applying the Bayesian Network](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_deep_probability_programming.html#id3). +For details about how to use the APIs in `nn.probability.bnn_layers` to build a Bayesian neural network and classify images, see [Applying the Bayesian Network](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_deep_probability_programming.html#id3). ## Bayesian Conversion @@ -969,7 +971,7 @@ The `trainable_bnn` parameter is a trainable DNN model packaged by `TrainOneStep ``` - `get_dense_args` specifies the parameters to be obtained from the fully connected layer of the DNN model. The default value is the common parameters of the fully connected layers of the DNN and BNN models. For details about the parameters, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/nn/mindspore.nn.Dense.html). `get_conv_args` specifies the parameters to be obtained from the convolutional layer of the DNN model. The default value is the common parameters of the convolutional layers of the DNN and BNN models. For details about the parameters, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/nn/mindspore.nn.Conv2d.html). `add_dense_args` and `add_conv_args` specify the new parameter values to be specified for the BNN layer. Note that the parameters in `add_dense_args` cannot be the same as those in `get_dense_args`. The same rule applies to `add_conv_args` and `get_conv_args`. + `get_dense_args` specifies the parameters to be obtained from the fully connected layer of the DNN model. The default value is the common parameters of the fully connected layers of the DNN and BNN models. 
For details about the parameters, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/nn/mindspore.nn.Dense.html). `get_conv_args` specifies the parameters to be obtained from the convolutional layer of the DNN model. The default value is the common parameters of the convolutional layers of the DNN and BNN models. For details about the parameters, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/nn/mindspore.nn.Conv2d.html). `add_dense_args` and `add_conv_args` specify the new parameter values to be specified for the BNN layer. Note that the parameters in `add_dense_args` cannot be the same as those in `get_dense_args`. The same rule applies to `add_conv_args` and `get_conv_args`. - Function 2: Convert a specific layer. @@ -995,7 +997,7 @@ The `trainable_bnn` parameter is a trainable DNN model packaged by `TrainOneStep `Dnn_layer` specifies a DNN layer to be converted into a BNN layer, and `bnn_layer` specifies a BNN layer to be converted into a DNN layer, and `get_args` and `add_args` specify the parameters obtained from the DNN layer and the parameters to be re-assigned to the BNN layer, respectively. -For details about how to use `TransformToBNN` in MindSpore, see [DNN-to-BNN Conversion with One Click](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_deep_probability_programming.html#dnnbnn). +For details about how to use `TransformToBNN` in MindSpore, see [DNN-to-BNN Conversion with One Click](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_deep_probability_programming.html#dnnbnn). 
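The reparameterized Bayesian layers described in this section (`DenseReparam`, `ConvReparam`, `DenseLocalReparam`) all rely on the same trick: a weight is sampled as `w = mu + sigma * eps` with `eps ~ N(0, 1)`, so gradients can flow through the distribution parameters `mu` and `sigma`. A minimal NumPy sketch of one stochastic dense layer, independent of the MindSpore API (all names here are illustrative, not MindSpore identifiers):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparam_dense(x, mu_w, rho_w):
    """One stochastic forward pass of a 'reparameterized' dense layer.

    Instead of a fixed weight matrix, the weight is drawn as
    w = mu + sigma * eps, with sigma = softplus(rho) to keep it positive.
    """
    sigma_w = np.log1p(np.exp(rho_w))      # softplus, so sigma > 0
    eps = rng.standard_normal(mu_w.shape)  # eps ~ N(0, 1)
    w = mu_w + sigma_w * eps               # reparameterization trick
    return x @ w

x = np.ones((2, 4))
mu_w = np.zeros((4, 3))
rho_w = np.full((4, 3), -7.0)              # very small sigma: output stays near x @ mu_w

y = reparam_dense(x, mu_w, rho_w)
print(y.shape)                             # (2, 3)
print(np.abs(y).max() < 0.1)               # weights stay close to their means
```

Because `w` is a deterministic function of `mu_w`, `rho_w`, and the noise `eps`, gradients with respect to the distribution parameters are well defined, which is exactly what makes such layers trainable.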
## Bayesian Toolbox diff --git a/docs/programming_guide/source_en/run.md b/docs/programming_guide/source_en/run.md index bfca862a1c7b3635cd5df1f823e9088c44e13dd4..a04fff34d68b0718823bcf5366935c5b8e8f7d90 100644 --- a/docs/programming_guide/source_en/run.md +++ b/docs/programming_guide/source_en/run.md @@ -12,7 +12,7 @@ - + ## Overview @@ -99,11 +99,11 @@ The output is as follows: ## Executing a Network Model -The [Model API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.html#mindspore.Model) of MindSpore is an advanced API used for training and validation. Layers with the training or inference function can be combined into an object. The training, inference, and prediction functions can be implemented by calling the train, eval, and predict APIs, respectively. +The [Model API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.html#mindspore.Model) of MindSpore is an advanced API used for training and validation. Layers with the training or inference function can be combined into an object. The training, inference, and prediction functions can be implemented by calling the train, eval, and predict APIs, respectively. You can transfer the initialized Model APIs such as the network, loss function, and optimizer as required. You can also configure amp_level to implement mixed precision and configure metrics to implement model evaluation. -> Excecuting a network model will produce a `kernel_meta` directory in the current directory, and all the cache of operations compiled during the executing process will be stored in it. If user executes the same model again, or a model with some differences, MindSpore will automaticlly call the reusable cache in `kernel_meta` to reduce the compilation time of the whole model. It has a significant improvement in performance. The cache usually cannot be shared between different situations, for example, single device and mutiple devices, training and inference, etc. 
+> Executing a network model produces a `kernel_meta` directory in the current directory, which stores the cache of all operators compiled during execution. If the user executes the same model again, or a model with only some differences, MindSpore automatically reuses the cache in `kernel_meta` to reduce the compilation time of the whole model, which significantly improves performance. The cache usually cannot be shared between different situations, for example, single device and multiple devices, or training and inference. > > Please note that, if users only delete the cache on part of the devices when executing models on several devices, may lead to a timeout of the waiting time between devices, because only some of them need to recompile the operations. To avoid this situation, users could set the environment variable `HCCL_CONNECT_TIMEOUT` to a reasonable waiting time. However, in this way, the time consuming is the same as deleting all the cache and recompiling. If users interrupt the process of compilation, there is a possibility that the cache file in `kernel_meta` will be generated incorrectly, and the subsequent re-execution process will fail. In this case, users need to delete the `kernel_mata` folder and recompile the network. @@ -234,10 +234,10 @@ if __name__ == "__main__": model = Model(network, net_loss, net_opt) print("============== Starting Training ==============") - model.train(1, ds_train, callbacks=[LossMonitor()], dataset_sink_mode=True) + model.train(1, ds_train, callbacks=[LossMonitor()], dataset_sink_mode=False) ``` -> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html#downloading-the-dataset).
+> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html#downloading-the-dataset). The output is as follows: @@ -251,11 +251,11 @@ epoch: 1 step: 1874, loss is 0.0346688 epoch: 1 step: 1875, loss is 0.017264696 ``` -> Use the PyNative mode for debugging, including the execution of single operator, common function, and network training model. For details, see [Debugging in PyNative Mode](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/debug_in_pynative_mode.html). +> Use the PyNative mode for debugging, including the execution of single operator, common function, and network training model. For details, see [Debugging in PyNative Mode](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/debug_in_pynative_mode.html). ### Executing an Inference Model -Call the train API of Model to implement inference. To facilitate model evaluation, you can set metrics when the Model API is initialized. +Call the eval API of Model to implement inference. To facilitate model evaluation, you can set metrics when the Model API is initialized. Metrics are used to evaluate models. Common metrics include Accuracy, Fbeta, Precision, Recall, and TopKCategoricalAccuracy. Generally, the comprehensive model quality cannot be evaluated by one model metric. Therefore, multiple metrics are often used together to evaluate the model. 
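The point about combining metrics can be made concrete with a toy confusion: on an imbalanced label set, accuracy alone can look acceptable while precision and recall expose the problem. A plain-Python sketch (not the MindSpore `Accuracy`/`Precision` classes):

```python
# Toy binary predictions: the model mostly predicts the majority class.
labels = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
preds  = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

correct = sum(p == y for p, y in zip(preds, labels))
accuracy = correct / len(labels)

true_pos = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
pred_pos = sum(p == 1 for p in preds)
precision = true_pos / pred_pos
recall = true_pos / sum(y == 1 for y in labels)

print(f"accuracy:  {accuracy:.2f}")   # 0.80 -- looks acceptable
print(f"precision: {precision:.2f}")  # 0.50 -- half the positive calls are wrong
print(f"recall:    {recall:.2f}")     # 0.50 -- half the positives are missed
```

This is why passing several entries in the `metrics` dictionary, as the example below does with `Accuracy` and `Precision`, gives a more faithful picture of model quality than any single number.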
@@ -367,14 +367,14 @@ if __name__ == "__main__": network = LeNet5(10) net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") - repeat_size = 10 + repeat_size = 1 net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9) model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy(), "Precision": Precision()}) print("============== Starting Testing ==============") param_dict = load_checkpoint("./ckpt/checkpoint_lenet-1_1875.ckpt") load_param_into_net(network, param_dict) - ds_eval = create_dataset(os.path.join("/home/workspace/mindspore_dataset/MNIST_Data", "test"), 32, 1) + ds_eval = create_dataset(os.path.join("/home/workspace/mindspore_dataset/MNIST_Data", "test"), 32, repeat_size) acc = model.eval(ds_eval, dataset_sink_mode=True) print("============== {} ==============".format(acc)) ``` @@ -385,7 +385,7 @@ In the preceding information: - `checkpoint_lenet-1_1875.ckpt`: name of the saved checkpoint model file. - `load_param_into_net`: loads parameters to the network. -> For details about how to save the `checkpoint_lenet-1_1875.ckpt` file, see [Training the Network](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html#training-the-network). +> For details about how to save the `checkpoint_lenet-1_1875.ckpt` file, see [Training the Network](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html#training-the-network). The output is as follows: diff --git a/docs/programming_guide/source_en/sampler.md b/docs/programming_guide/source_en/sampler.md index 3ac5379564309c9f0d436aecac2a6213b9dacf9e..bbcb1318e61a59ed8f1b11bd145d5a269e391bf5 100644 --- a/docs/programming_guide/source_en/sampler.md +++ b/docs/programming_guide/source_en/sampler.md @@ -14,13 +14,13 @@ - + ## Overview MindSpore provides multiple samplers to help you sample datasets for various purposes to meet training requirements and solve problems such as oversized datasets and uneven distribution of sample categories. 
You only need to import the sampler object when loading the dataset for sampling the data. -The following table lists part of the common samplers supported by MindSpore. In addition, you can define your own sampler class as required. For more samplers, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.html). +The following table lists part of the common samplers supported by MindSpore. In addition, you can define your own sampler class as required. For more samplers, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.html). | Sampler | Description | | ---- | ---- | @@ -59,13 +59,15 @@ ds.config.set_seed(0) DATA_DIR = "cifar-10-batches-bin/" +print("------ Without Replacement ------") + sampler = ds.RandomSampler(num_samples=5) dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler) for data in dataset1.create_dict_iterator(): print("Image shape:", data['image'].shape, ", Label:", data['label']) -print("------------") +print("------ With Replacement ------") sampler = ds.RandomSampler(replacement=True, num_samples=5) dataset2 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler) @@ -77,12 +79,13 @@ for data in dataset2.create_dict_iterator(): The output is as follows: ```text +------ Without Replacement ------ Image shape: (32, 32, 3) , Label: 1 Image shape: (32, 32, 3) , Label: 6 Image shape: (32, 32, 3) , Label: 7 Image shape: (32, 32, 3) , Label: 0 Image shape: (32, 32, 3) , Label: 4 ------------- +------ With Replacement ------ Image shape: (32, 32, 3) , Label: 4 Image shape: (32, 32, 3) , Label: 6 Image shape: (32, 32, 3) , Label: 9 @@ -200,7 +203,7 @@ Image shape: (32, 32, 3) , Label: 9 Samples dataset shards in distributed training. -The following example uses a distributed sampler to divide a generated dataset into three shards, obtains three data samples in each shard, and displays the loaded data. 
+The following example uses a distributed sampler to divide a generated dataset into three shards, obtains no more than three data samples in each shard, and displays the loaded data on shard number 0. ```python import numpy as np diff --git a/docs/programming_guide/source_en/security_and_privacy.md b/docs/programming_guide/source_en/security_and_privacy.md index 0af5c87751d4b1ca583acd54d35a957946483790..6ea8579954ad3e3150ac9e4e5be1f4dd994a4519 100644 --- a/docs/programming_guide/source_en/security_and_privacy.md +++ b/docs/programming_guide/source_en/security_and_privacy.md @@ -17,7 +17,7 @@ - + ## Overview @@ -37,7 +37,7 @@ The `Defense` base class defines the interface for adversarial training. Its sub The `Detector` base class defines the interface for adversarial sample detection. Its subclasses implement various specific detection algorithms to enhance the adversarial robustness of the models. -For details, see [Improving Model Security with NAD Algorithm](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/improve_model_security_nad.html). +For details, see [Improving Model Security with NAD Algorithm](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/improve_model_security_nad.html). ## Model Security Test @@ -45,7 +45,7 @@ For details, see [Improving Model Security with NAD Algorithm](https://www.minds The `Fuzzer` class controls the fuzzing process based on the neuron coverage gain. It uses natural perturbation and adversarial sample generation methods as the mutation policy to activate more neurons to explore different types of model output results and error behavior, helping users enhance model robustness. -For details, see [Testing Model Security Using Fuzz Testing](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/test_model_security_fuzzing.html). +For details, see [Testing Model Security Using Fuzz Testing](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/test_model_security_fuzzing.html). 
## Differential Privacy Training @@ -53,7 +53,7 @@ For details, see [Testing Model Security Using Fuzz Testing](https://www.mindspo `DPModel` inherits `mindspore.Model` and provides the entry function for differential privacy training. -For details, see [Protecting User Privacy with Differential Privacy Mechanism](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/protect_user_privacy_with_differential_privacy.html). +For details, see [Protecting User Privacy with Differential Privacy Mechanism](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/protect_user_privacy_with_differential_privacy.html). ## Privacy Breach Risk Assessment @@ -61,4 +61,4 @@ For details, see [Protecting User Privacy with Differential Privacy Mechanism](h The `MembershipInference` class provides a reverse analysis method. It can infer whether a sample is in the training set of a model based on the prediction information of the model on the sample to evaluate the privacy breach risk of the model. -For details, see [Testing Model Security with Membership Inference](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/test_model_security_membership_inference.html). +For details, see [Testing Model Security with Membership Inference](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/test_model_security_membership_inference.html). diff --git a/docs/programming_guide/source_en/tensor.md b/docs/programming_guide/source_en/tensor.md index 6ca70599626ed667af8fcdac9710e0c527d74449..51f7c33f614f2dfe4403a0b48d5af3ef8df5b232 100644 --- a/docs/programming_guide/source_en/tensor.md +++ b/docs/programming_guide/source_en/tensor.md @@ -11,11 +11,11 @@ - + ## Overview -Tensor is a basic data structure in the MindSpore network computing. For details about data types in tensors, see [dtype](https://www.mindspore.cn/doc/programming_guide/en/master/dtype.html). +Tensor is a basic data structure in the MindSpore network computing. 
For details about data types in tensors, see [dtype](https://www.mindspore.cn/doc/programming_guide/en/r1.1/dtype.html). Tensors of different dimensions represent different data. For example, a 0-dimensional tensor represents a scalar, a 1-dimensional tensor represents a vector, a 2-dimensional tensor represents a matrix, and a 3-dimensional tensor may represent the three channels of RGB images. diff --git a/docs/programming_guide/source_en/tokenizer.md b/docs/programming_guide/source_en/tokenizer.md index f3d000874bc4646c8cc48dd3a7f0b79b6610f845..52b59210cd51dcb600ecd354b3c4ec45ba60e909 100644 --- a/docs/programming_guide/source_en/tokenizer.md +++ b/docs/programming_guide/source_en/tokenizer.md @@ -14,7 +14,7 @@ - + ## Overview @@ -36,7 +36,7 @@ MindSpore provides the following tokenizers. In addition, you can customize toke | WhitespaceTokenizer | Performs tokenization on scalar text data based on spaces. | | WordpieceTokenizer | Performs tokenization on scalar text data based on the word set. | -For details about tokenizers, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/master/mindspore/mindspore.dataset.text.html). +For details about tokenizers, see [MindSpore API](https://www.mindspore.cn/doc/api_python/en/r1.1/mindspore/mindspore.dataset.text.html). 
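As a rough illustration of what the simplest entries in the table above do (plain Python, not the MindSpore `text` operators): `WhitespaceTokenizer` amounts to splitting on runs of whitespace, and a word-set-driven tokenizer such as `WordpieceTokenizer` greedily matches the longest known piece. The sketch below simplifies the wordpiece scheme (no continuation markers), so it only conveys the idea:

```python
def whitespace_tokenize(s):
    # Roughly what a whitespace tokenizer does: split on runs of whitespace.
    return s.split()

def greedy_wordpiece(word, vocab, unk="[UNK]"):
    # Greedy longest-match-first segmentation against a word set:
    # the core idea behind wordpiece-style tokenizers, simplified.
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start and word[start:end] not in vocab:
            end -= 1
        if end == start:          # no known piece matches here
            return [unk]
        pieces.append(word[start:end])
        start = end
    return pieces

vocab = {"token", "izer", "un", "believ", "able"}
print(whitespace_tokenize("Welcome to Beijing!"))  # ['Welcome', 'to', 'Beijing!']
print(greedy_wordpiece("tokenizer", vocab))        # ['token', 'izer']
print(greedy_wordpiece("unbelievable", vocab))     # ['un', 'believ', 'able']
```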
## MindSpore Tokenizers @@ -157,7 +157,7 @@ print("------------------------before tokenization----------------------------") for data in dataset.create_dict_iterator(output_numpy=True): print(text.to_str(data['text'])) -# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/test_sentencepiece/botchan.txt +# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/r1.1/tests/ut/data/dataset/test_sentencepiece/botchan.txt vocab_file = "botchan.txt" vocab = text.SentencePieceVocab.from_file([vocab_file], 5000, 0.9995, SentencePieceModel.UNIGRAM, {}) tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING) diff --git a/docs/programming_guide/source_en/train.md b/docs/programming_guide/source_en/train.md index 1de44dbf01f2baae7e1ebc8fcea8cb4c85c66a9b..b5909077adc274733ecf34b1b357cde91bcdd366 100644 --- a/docs/programming_guide/source_en/train.md +++ b/docs/programming_guide/source_en/train.md @@ -13,7 +13,7 @@ - + ## Overview @@ -23,13 +23,13 @@ MindSpore provides a large number of network models such as object detection and Before customizing a training network, you need to understand the network support of MindSpore, constraints on network construction using Python, and operator support. -- Network support: Currently, MindSpore supports multiple types of networks, including computer vision, natural language processing, recommender, and graph neural network. For details, see [Network List](https://www.mindspore.cn/doc/note/en/master/network_list.html). If the existing networks cannot meet your requirements, you can define your own network as required. +- Network support: Currently, MindSpore supports multiple types of networks, including computer vision, natural language processing, recommender, and graph neural network. For details, see [Network List](https://www.mindspore.cn/doc/note/en/r1.1/network_list.html). 
If the existing networks cannot meet your requirements, you can define your own network as required. - Constraints on network construction using Python: MindSpore does not support the conversion of any Python source code into computational graphs. Therefore, the source code has the syntax and network definition constraints. These constraints may change as MindSpore evolves. -- Operator support: As the name implies, the network is based on operators. Therefore, before customizing a training network, you need to understand the operators supported by MindSpore. For details about operator implementation on different backends (Ascend, GPU, and CPU), see [Operator List](https://www.mindspore.cn/doc/note/en/master/operator_list.html). +- Operator support: As the name implies, the network is based on operators. Therefore, before customizing a training network, you need to understand the operators supported by MindSpore. For details about operator implementation on different backends (Ascend, GPU, and CPU), see [Operator List](https://www.mindspore.cn/doc/note/en/r1.1/operator_list.html). -> When the built-in operators of the network cannot meet the requirements, you can refer to [Custom Operators(Ascend)](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/custom_operator_ascend.html) to quickly expand the custom operators of the Ascend AI processor. +> When the built-in operators of the network cannot meet the requirements, you can refer to [Custom Operators(Ascend)](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/custom_operator_ascend.html) to quickly expand the custom operators of the Ascend AI processor. 
The following is a code example: @@ -246,7 +246,7 @@ if __name__ == "__main__": print("epoch: {0}/{1}, losses: {2}".format(step + 1, epoch, output.asnumpy(), flush=True)) ``` -> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorial/training/en/master/quick_start/quick_start.html#downloading-the-dataset). +> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorial/training/en/r1.1/quick_start/quick_start.html#downloading-the-dataset). The output is as follows: @@ -263,11 +263,11 @@ epoch: 9/10, losses: 2.305952548980713 epoch: 10/10, losses: 1.4282708168029785 ``` -> The typical application scenario is gradient accumulation. For details, see [Applying Gradient Accumulation Algorithm](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/apply_gradient_accumulation.html). +> The typical application scenario is gradient accumulation. For details, see [Applying Gradient Accumulation Algorithm](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/apply_gradient_accumulation.html). ## Conducting Inference While Training -For some complex networks with a large data volume and a relatively long training time, to learn the change of model accuracy in different training phases, the model accuracy may be traced in a manner of inference while training. For details, see [Evaluating the Model during Training](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/evaluate_the_model_during_training.html). +For some complex networks with a large data volume and a relatively long training time, to learn the change of model accuracy in different training phases, the model accuracy may be traced in a manner of inference while training. For details, see [Evaluating the Model during Training](https://www.mindspore.cn/tutorial/training/en/r1.1/advanced_use/evaluate_the_model_during_training.html). 
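The gradient-accumulation scenario referenced above can be stated in framework-free terms: instead of applying an update per mini-batch, gradients from N micro-batches are summed and applied once, which is numerically equivalent to one step on the combined batch (for losses that average over samples, the accumulated gradient is divided by N). A NumPy sketch on a linear least-squares loss (illustrative only, not the MindSpore implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)
X = rng.standard_normal((8, 3))
y = X @ np.array([1.0, -2.0, 0.5])

def grad(w, xb, yb):
    # Gradient of the mean-squared-error loss 0.5 * mean((x @ w - y)^2).
    return xb.T @ (xb @ w - yb) / len(yb)

# One step on the full batch...
w_full = w - 0.1 * grad(w, X, y)

# ...equals one step with gradients accumulated over 4 micro-batches.
accum = np.zeros_like(w)
for xb, yb in zip(np.split(X, 4), np.split(y, 4)):
    accum += grad(w, xb, yb)        # accumulate, do not update yet
w_accum = w - 0.1 * accum / 4       # divide by the number of micro-batches

print(np.allclose(w_full, w_accum))  # True
```

The same equivalence is what lets a memory-constrained device train with an effectively larger batch size at the cost of extra forward/backward passes.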
## On-Device Execution diff --git a/docs/programming_guide/source_zh_cn/api_structure.md b/docs/programming_guide/source_zh_cn/api_structure.md index ca2f2f4f004e39338f90e5d36514b8dbc215d29b..90bdae4ce40819a0fc50fba5425defc06edc3745 100644 --- a/docs/programming_guide/source_zh_cn/api_structure.md +++ b/docs/programming_guide/source_zh_cn/api_structure.md @@ -9,9 +9,9 @@ - +    - +    @@ -19,13 +19,13 @@ MindSpore是一个全场景深度学习框架,旨在实现易开发、高效执行、全场景覆盖三大目标,其中易开发表现为API友好、调试难度低,高效执行包括计算效率、数据预处理效率和分布式训练效率,全场景则指框架同时支持云、边缘以及端侧场景。 -MindSpore总体架构分为前端表示层(Mind Expression,ME)、计算图引擎(Graph Engine,GE)和后端运行时三个部分。ME提供了用户级应用软件编程接口(Application Programming Interface,API),用于科学计算以及构建和训练神经网络,并将用户的Python代码转换为数据流图。GE是算子和硬件资源的管理器,负责控制从ME接收的数据流图的执行。后端运行时包含云、边、端上不同环境中的高效运行环境,例如CPU、GPU、Ascend AI处理器、 Android/iOS等。更多总体架构的相关内容请参见[总体架构](https://www.mindspore.cn/doc/note/zh-CN/master/design/mindspore/architecture.html)。 +MindSpore总体架构分为前端表示层(Mind Expression,ME)、计算图引擎(Graph Engine,GE)和后端运行时三个部分。ME提供了用户级应用软件编程接口(Application Programming Interface,API),用于科学计算以及构建和训练神经网络,并将用户的Python代码转换为数据流图。GE是算子和硬件资源的管理器,负责控制从ME接收的数据流图的执行。后端运行时包含云、边、端上不同环境中的高效运行环境,例如CPU、GPU、Ascend AI处理器、 Android/iOS等。更多总体架构的相关内容请参见[总体架构](https://www.mindspore.cn/doc/note/zh-CN/r1.1/design/mindspore/architecture.html)。 ## 设计理念 MindSpore源于全产业的最佳实践,向数据科学家和算法工程师提供了统一的模型训练、推理和导出等接口,支持端、边、云等不同场景下的灵活部署,推动深度学习和科学计算等领域繁荣发展。 -MindSpore目前提供了Python编程范式,用户使用Python原生控制逻辑即可构建复杂的神经网络模型,AI编程变得简单,具体示例请参见[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)。 +MindSpore目前提供了Python编程范式,用户使用Python原生控制逻辑即可构建复杂的神经网络模型,AI编程变得简单,具体示例请参见[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)。 
目前主流的深度学习框架的执行模式有两种,分别为静态图模式和动态图模式。静态图模式拥有较高的训练性能,但难以调试。动态图模式相较于静态图模式虽然易于调试,但难以高效执行。MindSpore提供了动态图和静态图统一的编码方式,大大增加了静态图和动态图的可兼容性,用户无需开发多套代码,仅变更一行代码便可切换动态图/静态图模式,例如设置`context.set_context(mode=context.PYNATIVE_MODE)`切换成动态图模式,设置`context.set_context(mode=context.GRAPH_MODE)`即可切换成静态图模式,用户可拥有更轻松的开发调试及性能体验。 @@ -60,11 +60,11 @@ if __name__ == "__main__": 此外,SCT能够将Python代码转换为MindSpore函数中间表达(Intermediate Representation,IR),该函数中间表达构造出能够在不同设备解析和执行的计算图,并且在执行该计算图前,应用了多种软硬件协同优化技术,端、边、云等不同场景下的性能和效率得到针对性的提升。 -如何提高数据处理能力以匹配人工智能芯片的算力,是保证人工智能芯片发挥极致性能的关键。MindSpore为用户提供了多种数据处理算子,通过自动数据加速技术实现了高性能的流水线,包括数据加载、数据论证、数据转换等,支持CV/NLP/GNN等全场景的数据处理能力。MindRecord是MindSpore的自研数据格式,具有读写高效、易于分布式处理等优点,用户可将非标准的数据集和常用的数据集转换为MindRecord格式,从而获得更好的性能体验,转换详情请参见[MindSpore数据格式转换](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_conversion.html)。MindSpore支持加载常用的数据集和多种数据存储格式下的数据集,例如通过`dataset=dataset.Cifar10Dataset("Cifar10Data/")`即可完成CIFAR-10数据集的加载,其中`Cifar10Data/`为数据集本地所在目录,用户也可通过`GeneratorDataset`自定义数据集的加载方式。数据增强是一种基于(有限)数据生成新数据的方法,能够减少网络模型过拟合的现象,从而提高模型的泛化能力。MindSpore除了支持用户自定义数据增强外,还提供了自动数据增强方式,使得数据增强更加灵活,详情请见[自动数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/auto_augmentation.html)。 +如何提高数据处理能力以匹配人工智能芯片的算力,是保证人工智能芯片发挥极致性能的关键。MindSpore为用户提供了多种数据处理算子,通过自动数据加速技术实现了高性能的流水线,包括数据加载、数据论证、数据转换等,支持CV/NLP/GNN等全场景的数据处理能力。MindRecord是MindSpore的自研数据格式,具有读写高效、易于分布式处理等优点,用户可将非标准的数据集和常用的数据集转换为MindRecord格式,从而获得更好的性能体验,转换详情请参见[MindSpore数据格式转换](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_conversion.html)。MindSpore支持加载常用的数据集和多种数据存储格式下的数据集,例如通过`dataset=dataset.Cifar10Dataset("Cifar10Data/")`即可完成CIFAR-10数据集的加载,其中`Cifar10Data/`为数据集本地所在目录,用户也可通过`GeneratorDataset`自定义数据集的加载方式。数据增强是一种基于(有限)数据生成新数据的方法,能够减少网络模型过拟合的现象,从而提高模型的泛化能力。MindSpore除了支持用户自定义数据增强外,还提供了自动数据增强方式,使得数据增强更加灵活,详情请见[自动数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/auto_augmentation.html)。 
-深度学习神经网络模型通常含有较多的隐藏层进行特征提取,但特征提取随机化、调试过程不可视限制了深度学习技术的可信和调优。MindSpore支持可视化调试调优(MindInsight),提供训练看板、溯源、性能分析和调试器等功能,帮助用户发现模型训练过程中出现的偏差,轻松进行模型调试和性能调优。例如用户可在初始化网络前,通过`profiler=Profiler()`初始化`Profiler`对象,自动收集训练过程中的算子耗时等信息并记录到文件中,在训练结束后调用`profiler.analyse()`停止收集并生成性能分析结果,以可视化形式供用户查看分析,从而更高效地调试网络性能,更多调试调优相关内容请见[训练过程可视化](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/visualization_tutorials.html)。 +深度学习神经网络模型通常含有较多的隐藏层进行特征提取,但特征提取随机化、调试过程不可视限制了深度学习技术的可信和调优。MindSpore支持可视化调试调优(MindInsight),提供训练看板、溯源、性能分析和调试器等功能,帮助用户发现模型训练过程中出现的偏差,轻松进行模型调试和性能调优。例如用户可在初始化网络前,通过`profiler=Profiler()`初始化`Profiler`对象,自动收集训练过程中的算子耗时等信息并记录到文件中,在训练结束后调用`profiler.analyse()`停止收集并生成性能分析结果,以可视化形式供用户查看分析,从而更高效地调试网络性能,更多调试调优相关内容请见[训练过程可视化](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/visualization_tutorials.html)。 -随着神经网络模型和数据集的规模不断增加,分布式并行训练成为了神经网络训练的常见做法,但分布式并行训练的策略选择和编写十分复杂,这严重制约着深度学习模型的训练效率,阻碍深度学习的发展。MindSpore统一了单机和分布式训练的编码方式,开发者无需编写复杂的分布式策略,在单机代码中添加少量代码即可实现分布式训练,例如设置`context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)`便可自动建立代价模型,为用户选择一种较优的并行模式,提高神经网络训练效率,大大降低了AI开发门槛,使用户能够快速实现模型思路,更多内容请见[分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html)。 +随着神经网络模型和数据集的规模不断增加,分布式并行训练成为了神经网络训练的常见做法,但分布式并行训练的策略选择和编写十分复杂,这严重制约着深度学习模型的训练效率,阻碍深度学习的发展。MindSpore统一了单机和分布式训练的编码方式,开发者无需编写复杂的分布式策略,在单机代码中添加少量代码即可实现分布式训练,例如设置`context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL)`便可自动建立代价模型,为用户选择一种较优的并行模式,提高神经网络训练效率,大大降低了AI开发门槛,使用户能够快速实现模型思路,更多内容请见[分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/distributed_training_tutorials.html)。 ## 层次结构 diff --git a/docs/programming_guide/source_zh_cn/augmentation.md b/docs/programming_guide/source_zh_cn/augmentation.md index 3a478abf204d9d53e27b9b6032eccb372421bc4d..2e36ad251bd121abd16f7a7b294153c36eb2ccd0 100644 --- a/docs/programming_guide/source_zh_cn/augmentation.md +++ 
b/docs/programming_guide/source_zh_cn/augmentation.md @@ -16,9 +16,9 @@ - +    - +    @@ -33,7 +33,7 @@ MindSpore提供了`c_transforms`模块和`py_transforms`模块供用户进行数 | c_transforms | 基于C++的OpenCV实现 | 具有较高的性能。 | | py_transforms | 基于Python的PIL实现 | 该模块提供了多种图像增强功能,并提供了PIL Image和NumPy数组之间的传输方法。| -MindSpore目前支持的常用数据增强算子如下表所示,更多数据增强算子参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.vision.html)。 +MindSpore目前支持的常用数据增强算子如下表所示,更多数据增强算子参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.vision.html)。 | 模块 | 算子 | 说明 | | ---- | ---- | ---- | diff --git a/docs/programming_guide/source_zh_cn/auto_augmentation.md b/docs/programming_guide/source_zh_cn/auto_augmentation.md index 44f3d92df027365d3df83348e575feb6f488917c..270848a598b5f76ef7c61b1d7b9d12735fafa6ad 100644 --- a/docs/programming_guide/source_zh_cn/auto_augmentation.md +++ b/docs/programming_guide/source_zh_cn/auto_augmentation.md @@ -12,9 +12,9 @@ - +    - + ## 概述 @@ -26,7 +26,7 @@ MindSpore除了可以让用户自定义数据增强的使用,还提供了一 MindSpore提供了一系列基于概率的自动数据增强API,用户可以对各种数据增强操作进行随机选择与组合,使数据增强更加灵活。 -关于API的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.transforms.html)。 +关于API的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.transforms.html)。 ### RandomApply diff --git a/docs/programming_guide/source_zh_cn/auto_parallel.md b/docs/programming_guide/source_zh_cn/auto_parallel.md index b05e755f9740fe8cf8ca764e6d66312331b28a3d..d59fbe62df28c9cbfcb15472cf13a57a4886c52e 100644 --- a/docs/programming_guide/source_zh_cn/auto_parallel.md +++ b/docs/programming_guide/source_zh_cn/auto_parallel.md @@ -33,7 +33,7 @@ - + ## 概述 @@ -103,7 +103,7 @@ context.get_auto_parallel_context("gradients_mean") 其中`auto_parallel`和`data_parallel`在MindSpore教程中有完整样例: -。 +。 代码样例如下: @@ -152,7 +152,7 @@ context.get_auto_parallel_context("enable_parallel_optimizer") #### parameter_broadcast 
-`parameter_broadcast`将数据并行参数0号卡上的权值广播到其他卡上,达到同步初始化权重的目的。 +`parameter_broadcast`将数据并行参数在0号卡上的权值广播到其他卡上,达到同步初始化权重的目的。参数默认值是False,当前仅支持图模式。 代码样例如下: @@ -299,7 +299,7 @@ rank_id = get_rank() ### cross_batch -在特定场景下,`data_parallel`的计算逻辑和`stand_alone`是不一样的,`auto_parallel`在任何场景下都是和`stand_alone`的计算逻辑保持一致。而`data_parallel`的收敛效果可能更好,因此MindSpore提供了`cross_barch`这个参数,可以使`auto_parallel`的计算逻辑和`data_parallel`保持一致,用户可通过`add_prim_attr`方法进行配置,默认值是False。 +在特定场景下,`data_parallel`的计算逻辑和`stand_alone`是不一样的,`auto_parallel`在任何场景下都是和`stand_alone`的计算逻辑保持一致。而`data_parallel`的收敛效果可能更好,因此MindSpore提供了`cross_batch`这个参数,可以使`auto_parallel`的计算逻辑和`data_parallel`保持一致,用户可通过`add_prim_attr`方法进行配置,默认值是False。 代码样例如下: @@ -341,7 +341,7 @@ x = Parameter(Tensor(np.ones([2, 2])), layerwise_parallel=True) 具体用例请参考MindSpore分布式并行训练教程: -。 +。 ## 自动并行 @@ -349,4 +349,4 @@ x = Parameter(Tensor(np.ones([2, 2])), layerwise_parallel=True) 具体用例请参考MindSpore分布式并行训练教程: -。 +。 diff --git a/docs/programming_guide/source_zh_cn/cache.md b/docs/programming_guide/source_zh_cn/cache.md index a319efd5957e666740b1c24dba8ce6f470285852..a96c315b3d8b0d44e70a1d810954dee45fe7aaa5 100644 --- a/docs/programming_guide/source_zh_cn/cache.md +++ b/docs/programming_guide/source_zh_cn/cache.md @@ -11,7 +11,7 @@ - + ## 概述 @@ -69,7 +69,7 @@ `cache_admin`支持以下命令和参数: - `--start`:启动缓存服务器,支持通过以下参数进行配置: - `--workers`或`-w`:设置缓存服务器的工作线程数量,默认情况下工作线程数量为机器CPU个数的一半。 - - `--spilldir`或`-s`:设置若缓存数据的大小超过内存空间,则溢出至磁盘的数据文件路径,默认为`/tmp/mindspore/cache`。 + - `--spilldir`或`-s`:设置若缓存数据的大小超过内存空间,则溢出至磁盘的数据文件路径,默认为空(表示不启用数据溢出功能)。 - `--hostname`或`-h`:缓存服务器的ip地址,默认为127.0.0.1。 - `--port`或`-p`:缓存服务器的端口号,默认为50052。 - `--loglevel`或`-l`:设置日志等级,默认为1(WARNING级别)。若设置为0(INFO级别),会输出过多日志,导致性能劣化。 @@ -83,7 +83,8 @@ 用户可通过`ps -ef|grep cache_server`命令来检查服务器是否已启动以及查询服务器参数。 - > 设置cache_server初始化参数时,要先确认系统可用内存和待加载数据集大小,cache_server初始化容量或待加载数据集空间占耗超过系统可用内存时,都有可能导致机器宕机/重启、cache_server自动关闭、训练流程执行失败等问题。 + > - 
设置cache_server初始化参数时,要先确认系统可用内存和待加载数据集大小,cache_server初始化容量或待加载数据集空间占耗超过系统可用内存时,都有可能导致机器宕机/重启、cache_server自动关闭、训练流程执行失败等问题。 + > - 若要启用数据溢出功能,则用户在启动缓存服务器时必须使用`-s`参数对溢出路径进行设置,否则该功能默认关闭。 3. 创建缓存会话。 @@ -135,7 +136,7 @@ > - 在实际使用中,通常应当首先使用`cache_admin -g`命令从缓存服务器处获得一个缓存会话id并作为`session_id`的参数,防止发生缓存会话不存在而报错的情况。 > - 设置`size=0`代表不限制缓存所使用的内存空间,但不超过系统总内存的80%。注意,设置`size`为0可能会存在机器“out of memory”的风险,因此建议用户根据机器本身的空闲内存大小,给`size`参数设置一个合理的取值。 - > - 若设置`spilling=True`,则当内存空间不足时,多余数据将写入磁盘中。因此,用户需确保所设置的磁盘路径具有写入权限以及足够的磁盘空间,以存储溢出至磁盘的缓存数据。 + > - 若设置`spilling=True`,则当内存空间不足时,多余数据将写入磁盘中。因此,用户需确保所设置的磁盘路径具有写入权限以及足够的磁盘空间,以存储溢出至磁盘的缓存数据。注意,若启动服务器时未指定溢出路径,则在调用API时设置`spilling=True`将会导致报错。 > - 若设置`spilling=False`,则缓存服务器在耗尽所设置的内存空间后将不再写入新的数据。 > - 当使用不支持随机访问的数据集(如`TFRecordDataset`)进行数据加载并启用缓存服务时,需要保证整个数据集均存放于本地。在该场景下,若本地内存空间不足以存放所有数据,则必须启用溢出,将数据溢出至磁盘。 > - `num_connections`和`prefetch_size`为内部性能调优参数,一般情况下,用户无需设置这两个参数。 @@ -146,7 +147,7 @@ 需要注意的是,两个例子均需要按照步骤4中的方法分别创建一个缓存实例,并在数据集加载或map算子中将所创建的`test_cache`作为`cache`参数分别传入。 - 下面两个样例中使用到CIFAR-10数据集。运行样例前,需参照[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html#cifar-10-100)中的方法下载并存放CIFAR-10数据集。 + 下面两个样例中使用到CIFAR-10数据集。运行样例前,需参照[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html#cifar-10-100)中的方法下载并存放CIFAR-10数据集。 - 缓存原始数据集加载的数据。 @@ -305,11 +306,11 @@ done ``` - > 直接获取完整样例代码:[cache.sh](https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/cache/cache.sh) + > 直接获取完整样例代码:[cache.sh](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/cache/cache.sh) 4. 
创建并应用缓存实例。 - 下面样例中使用到CIFAR-10数据集。运行样例前,需参照[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_loading.html#cifar-10-100)中的方法下载并存放CIFAR-10数据集。目录结构如下: + 下面样例中使用到CIFAR-10数据集。运行样例前,需参照[数据集加载](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_loading.html#cifar-10-100)中的方法下载并存放CIFAR-10数据集。目录结构如下: ```text ├─cache.sh @@ -348,7 +349,7 @@ print("Got {} samples on device {}".format(num_iter, args_opt.device)) ``` - > 直接获取完整样例代码:[my_training_script.py](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/cache/my_training_script.py) + > 直接获取完整样例代码:[my_training_script.py](https://gitee.com/mindspore/docs/blob/r1.1/tutorials/tutorial_code/cache/my_training_script.py) 5. 运行训练脚本。 diff --git a/docs/programming_guide/source_zh_cn/callback.md b/docs/programming_guide/source_zh_cn/callback.md index 15d1e9a391e00a18c0111030f2fd17457b39f4e4..a9451c5b7a4992ba2a8020c796215de8a6263d6f 100644 --- a/docs/programming_guide/source_zh_cn/callback.md +++ b/docs/programming_guide/source_zh_cn/callback.md @@ -9,7 +9,7 @@ - + ## 概述 @@ -23,19 +23,19 @@ Callback回调函数在MindSpore中被实现为一个类,Callback机制类似 与模型训练过程相结合,保存训练后的模型和网络参数,方便进行再推理或再训练。`ModelCheckpoint`一般与`CheckpointConfig`配合使用,`CheckpointConfig`是一个参数配置类,可自定义配置checkpoint的保存策略。 - 详细内容,请参考[Checkpoint官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html)。 + 详细内容,请参考[Checkpoint官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html)。 - SummaryCollector 帮助收集一些常见信息,如loss、learning rate、计算图、参数权重等,方便用户将训练过程可视化和查看信息,并且可以允许summary操作从summary文件中收集数据。 - 详细内容,请参考[Summary官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/summary_record.html)。 + 详细内容,请参考[Summary官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/summary_record.html)。 - LossMonitor 监控训练过程中的loss变化情况,当loss为NAN或INF时,提前终止训练。可以在日志中输出loss,方便用户查看。 - 
详细内容,请参考[LossMonitor官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_debugging_info.html#mindsporecallback)。 + 详细内容,请参考[LossMonitor官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_debugging_info.html#mindsporecallback)。 - TimeMonitor @@ -51,6 +51,6 @@ MindSpore不但有功能强大的内置回调函数,还可以支持用户自 2. 实现保存训练过程中精度最高的checkpoint文件,用户可以自定义在每一轮迭代后都保存当前精度最高的模型。 -详细内容,请参考[自定义Callback官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_debugging_info.html#id3)。 +详细内容,请参考[自定义Callback官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_debugging_info.html#id3)。 根据教程,用户可以很容易实现具有其他功能的自定义回调函数,如实现在每一轮训练结束后都输出相应的详细训练信息,包括训练进度、训练轮次、训练名称、loss值等;如实现在loss或模型精度达到一定值后停止训练,用户可以设定loss或模型精度的阈值,当loss或模型精度达到该阈值后就提前终止训练等。 diff --git a/docs/programming_guide/source_zh_cn/cell.md b/docs/programming_guide/source_zh_cn/cell.md index ce5021e70db8fbbcc456c3724c6f9164843699f7..7bca63caf6f4a41996c4158e13814fd09a52be23 100644 --- a/docs/programming_guide/source_zh_cn/cell.md +++ b/docs/programming_guide/source_zh_cn/cell.md @@ -21,14 +21,14 @@ - - + +    ## 概述 -MindSpore的`Cell`类是构建所有网络的基类,也是网络的基本单元。当用户需要自定义网络时,需要继承`Cell`类,并重写`__init__`方法和`contruct`方法。 +MindSpore的`Cell`类是构建所有网络的基类,也是网络的基本单元。当用户需要自定义网络时,需要继承`Cell`类,并重写`__init__`方法和`construct`方法。 损失函数、优化器和模型层等本质上也属于网络结构,也需要继承`Cell`类才能实现功能,同样用户也可以根据业务需求自定义这部分内容。 @@ -67,7 +67,7 @@ class Net(nn.Cell): `parameters_dict`方法识别出网络结构中所有的参数,返回一个以key为参数名,value为参数值的`OrderedDict`。 -`Cell`类中返回参数的方法还有许多,例如`get_parameters`、`trainable_params`等,具体使用方法可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Cell.html)。 +`Cell`类中返回参数的方法还有许多,例如`get_parameters`、`trainable_params`等,具体使用方法可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Cell.html)。 代码样例如下: @@ -78,17 +78,13 @@ print(result.keys()) print(result['weight']) ``` -样例中的`Net`采用上文构造网络的用例,打印了网络中所有参数的名字和`conv.weight`参数的结果。 
+样例中的`Net`采用上文构造网络的用例,打印了网络中所有参数的名字和`weight`参数的结果。 输出如下: ```text odict_keys(['weight']) -Parameter (name=weight, value=[[[[-3.95042636e-03 1.08830128e-02 -6.51786150e-03] - [ 8.66129529e-03 7.36288540e-03 -4.32638079e-03] - [-1.47628486e-02 8.24100431e-03 -2.71035335e-03]] - ...... - [ 1.58852488e-02 -1.03505487e-02 1.72988791e-02]]]]) +Parameter (name=weight) ``` ### cells_and_names @@ -126,9 +122,9 @@ print(names) ```text ('', Net1< - (conv): Conv2d + (conv): Conv2d >) -('conv', Conv2d) +('conv', Conv2d) -------names------- ['conv'] ``` @@ -342,7 +338,7 @@ print(loss(input_data, target_data)) ## 优化算法 -`mindspore.nn.optim`是MindSpore框架中实现各种优化算法的模块,详细说明参见[优化算法](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/optim.html)。 +`mindspore.nn.optim`是MindSpore框架中实现各种优化算法的模块,详细说明参见[优化算法](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/optim.html)。 ## 构建自定义网络 diff --git a/docs/programming_guide/source_zh_cn/conf.py b/docs/programming_guide/source_zh_cn/conf.py index 95d7701759707ab95a3c199cd8a22e2e2cc1194d..7be5f453c21b75703c763a14c8180127aed60e6b 100644 --- a/docs/programming_guide/source_zh_cn/conf.py +++ b/docs/programming_guide/source_zh_cn/conf.py @@ -20,7 +20,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/docs/programming_guide/source_zh_cn/context.md b/docs/programming_guide/source_zh_cn/context.md index 40723c9363079be090f52bad4e4ced5e2e7130e9..932397d7ecf69dd649dcb26b2b926cff567b3427 100644 --- a/docs/programming_guide/source_zh_cn/context.md +++ b/docs/programming_guide/source_zh_cn/context.md @@ -16,9 +16,9 @@ - +    - +    @@ -90,27 +90,7 @@ context.set_context(device_target="Ascend", device_id=6) context中有专门用于配置并行训练参数的接口:context.set_auto_parallel_context,该接口必须在初始化网络之前调用。 -- 
`parallel_mode`:分布式并行模式,默认为单机模式`ParallelMode.STAND_ALONE`。可选数据并行`ParallelMode.DATA_PARALLEL`及自动并行`ParallelMode.AUTO_PARALLEL`。 - -- `gradients_mean`:反向计算时,框架内部会将数据并行参数分散在多台机器的梯度值进行收集,得到全局梯度值后再传入优化器中更新。默认值为`False`,设置为True对应`allreduce_mean`操作,False对应`allreduce_sum`操作。 - -- `enable_parallel_optimizer`:开发中特性。打开优化器模型并行开关,通过拆分权重到各卡分别进行更新再同步的方式以提升性能。该参数目前只在数据并行模式和参数量大于机器数时有效,支持`Lamb`和`Adam`优化器。 - -- `device_num`:表示可用的机器数,其值为int型,且必须在1~4096范围内。 - -- `global_rank`:表示当前卡的逻辑序号,其值为int型,且必须在0~4095范围内。 - -> `device_num`和`global_rank`建议采用默认值,框架内会调用HCCL接口获取。 - -代码样例如下: - -```python -from mindspore import context -from mindspore.context import ParallelMode -context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, gradients_mean=True) -``` - -> 分布式并行训练详细介绍可以查看[分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html)。 +> 分布式管理详细介绍可以查看[分布式并行](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/auto_parallel.html)。 ## 维测管理 @@ -122,13 +102,25 @@ context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, grad - `enable_profiling`:是否开启profiling功能。设置为True,表示开启profiling功能,从enable_options读取profiling的采集选项;设置为False,表示关闭profiling功能,仅采集training_trace。 -- `profiling_options`:profiling采集选项,取值如下,支持采集多项数据。training_trace:采集迭代轨迹数据,即训练任务及AI软件栈的软件信息,实现对训练任务的性能分析,重点关注数据增强、前后向计算、梯度聚合更新等相关数据;task_trace:采集任务轨迹数据,即昇腾910处理器HWTS/AICore的硬件信息,分析任务开始、结束等信息;op_trace:采集单算子性能数据。 +- `profiling_options`:profiling采集选项,取值如下,支持采集多项数据。 + result_path: Profiling采集结果文件保存路径。该参数指定的目录需要在启动训练的环境上(容器或Host侧)提前创建且确保安装时配置的运行用户具有读写权限,支持配置绝对路径或相对路径(相对执行命令时的当前路径); + training_trace:采集迭代轨迹数据,即训练任务及AI软件栈的软件信息,实现对训练任务的性能分析,重点关注数据增强、前后向计算、梯度聚合更新等相关数据,取值on/off。 + task_trace:采集任务轨迹数据,即昇腾910处理器HWTS/AICore的硬件信息,分析任务开始、结束等信息,取值on/off; + aicpu_trace: 采集aicpu数据增强的profiling数据。取值on/off; + fp_point: training_trace为on时需要配置。指定训练网络迭代轨迹正向算子的开始位置,用于记录前向算子开始时间戳。配置值为指定的正向第一个算子名字。当该值为空时,系统自动获取正向第一个算子名字; + bp_point: 
training_trace为on时需要配置。指定训练网络迭代轨迹反向算子的结束位置,用于记录反向算子结束时间戳。配置值为指定的反向最后一个算子名字。当该值为空时,系统自动获取反向最后一个算子名字;
+  ai_core_metrics: 取值如下:
+  - ArithmeticUtilization: 各种计算类指标占比统计。
+  - PipeUtilization: 计算单元和搬运单元耗时占比,该项为默认值。
+  - Memory: 外部内存读写类指令占比。
+  - MemoryL0: 内部内存读写类指令占比。
+  - ResourceConflictRatio: 流水线队列类指令占比。
 
 代码样例如下:
 
 ```python
 from mindspore import context
-context.set_context(enable_profiling=True, profiling_options="training_trace")
+context.set_context(enable_profiling=True, profiling_options='{"result_path":"/home/data/output","training_trace":"on"}')
 ```
 
 ### 保存MindIR
 
@@ -146,13 +138,13 @@ from mindspore import context
 context.set_context(save_graphs=True)
 ```
 
-> MindIR详细介绍可以查看[MindSpore IR(MindIR)](https://www.mindspore.cn/doc/note/zh-CN/master/design/mindspore/mindir.html)。
+> MindIR详细介绍可以查看[MindSpore IR(MindIR)](https://www.mindspore.cn/doc/note/zh-CN/r1.1/design/mindspore/mindir.html)。
 
 ### print算子落盘
 
 默认情况下,MindSpore的自研print算子可以将用户输入的Tensor或字符串信息打印出来,支持多字符串输入,多Tensor输入和字符串与Tensor的混合输入,输入参数以逗号隔开。
 
-> Print打印功能可以查看[Print算子功能介绍](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_debugging_info.html#print)。
+> Print打印功能可以查看[Print算子功能介绍](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_debugging_info.html#print)。
 
- `print_file_path`:可以将print算子数据保存到文件,同时关闭屏幕打印功能。如果保存的文件已经存在,则会给文件添加时间戳后缀。数据保存到文件可以解决数据量较大时屏幕打印数据丢失的问题。
 
@@ -163,4 +155,4 @@ from mindspore import context
 context.set_context(print_file_path="print.pb")
 ```
 
-> context接口详细介绍可以查看[mindspore.context](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.context.html)。
+> context接口详细介绍可以查看[mindspore.context](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.context.html)。
diff --git a/docs/programming_guide/source_zh_cn/customized.rst b/docs/programming_guide/source_zh_cn/customized.rst
index 129b147956d9fc0e702dc68cc1e0add0f7e6d2d0..a86ddb8601664e529c4a2f4d4c8d5c3ed04b295d 100644
--- 
a/docs/programming_guide/source_zh_cn/customized.rst +++ b/docs/programming_guide/source_zh_cn/customized.rst @@ -4,6 +4,6 @@ .. toctree:: :maxdepth: 1 - 自定义算子(Ascend) - 自定义算子(GPU) - 自定义算子(CPU) + 自定义算子(Ascend) + 自定义算子(GPU) + 自定义算子(CPU) diff --git a/docs/programming_guide/source_zh_cn/dataset_conversion.md b/docs/programming_guide/source_zh_cn/dataset_conversion.md index 253c5b0ad5bfbd2be4b7b0f9e49ea3522fefd455..00a2d3f32dd1b9dc0e612c31264c906ea885c946 100644 --- a/docs/programming_guide/source_zh_cn/dataset_conversion.md +++ b/docs/programming_guide/source_zh_cn/dataset_conversion.md @@ -15,9 +15,9 @@ - +    - +    @@ -34,7 +34,7 @@ 本示例主要介绍用户如何将自己的CV类数据集转换成MindRecord,并使用`MindDataset`读取。 本示例首先创建一个包含100条记录的MindRecord文件,其样本包含`file_name`(字符串)、 -`label`(整形)、 `data`(二进制)三个字段,然后使用`MindDataset`读取该MindRecord文件。 +`label`(整型)、 `data`(二进制)三个字段,然后使用`MindDataset`读取该MindRecord文件。 1. 导入相关模块。 @@ -105,7 +105,7 @@ 本示例主要介绍用户如何将自己的NLP类数据集转换成MindRecord,并使用`MindDataset`读取。为了方便展示,此处略去了将文本转换成字典序的预处理过程。 -本示例首先创建一个包含100条记录的MindRecord文件,其样本包含八个字段,均为整形数组,然后使用`MindDataset`读取该MindRecord文件。 +本示例首先创建一个包含100条记录的MindRecord文件,其样本包含八个字段,均为整型数组,然后使用`MindDataset`读取该MindRecord文件。 1. 
导入相关模块。 @@ -185,7 +185,7 @@ MindSpore提供转换常用数据集的工具类,能够将常用的数据集 | TFRecord | TFRecordToMR | | CSV File | CsvToMR | -更多数据集转换的详细说明可参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.mindrecord.html)。 +更多数据集转换的详细说明可参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.mindrecord.html)。 ### 转换CIFAR-10数据集 diff --git a/docs/programming_guide/source_zh_cn/dataset_loading.md b/docs/programming_guide/source_zh_cn/dataset_loading.md index 0bde657f68bddc2652ae9b9283afa8a6bd976e43..177192ef617a975d9b9d551989fcae6e9d5af9ff 100644 --- a/docs/programming_guide/source_zh_cn/dataset_loading.md +++ b/docs/programming_guide/source_zh_cn/dataset_loading.md @@ -21,9 +21,9 @@ - +    - +    @@ -54,7 +54,7 @@ MindSpore还支持加载多种数据存储格式下的数据集,用户可以 MindSpore也同样支持使用`GeneratorDataset`自定义数据集的加载方式,用户可以根据需要实现自己的数据集类。 -> 更多详细的数据集加载接口说明,参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.html)。 +> 更多详细的数据集加载接口说明,参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.html)。 ## 常用数据集加载 @@ -209,7 +209,7 @@ Panoptic: dict_keys(['image', 'bbox', 'category_id', 'iscrowd', 'area']) MindRecord是MindSpore定义的一种数据格式,使用MindRecord能够获得更好的性能提升。 -> 阅读[数据格式转换](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dataset_conversion.html)章节,了解如何将数据集转化为MindSpore数据格式。 +> 阅读[数据格式转换](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dataset_conversion.html)章节,了解如何将数据集转化为MindSpore数据格式。 下面的样例通过`MindDataset`接口加载MindRecord文件,并展示已加载数据的标签。 @@ -448,6 +448,7 @@ class IterDatasetGenerator: return item def __iter__(self): + self.__index = 0 return self def __len__(self): diff --git a/docs/programming_guide/source_zh_cn/dtype.md b/docs/programming_guide/source_zh_cn/dtype.md index 7d667329d3f9c2f8ae8649fe5b6caf388dc6f87d..3ac4943735c88d71c78697d7f16cb6d9244f862e 100644 --- a/docs/programming_guide/source_zh_cn/dtype.md +++ b/docs/programming_guide/source_zh_cn/dtype.md @@ -8,9 +8,9 @@ - +    - + 
   @@ -20,7 +20,7 @@ MindSpore张量支持不同的数据类型,包含`int8`、`int16`、`int32`、 在MindSpore的运算处理流程中,Python中的`int`数会被转换为定义的int64类型,`float`数会被转换为定义的`float32`类型。 -详细的类型支持情况请参考。 +详细的类型支持情况请参考。 以下代码,打印MindSpore的数据类型int32。 diff --git a/docs/programming_guide/source_zh_cn/infer.md b/docs/programming_guide/source_zh_cn/infer.md index 8dc0564bd1b199ac68a15a69d421060876660bff..7b03c8eef961e8ba512a9bcb8d3aa609adfb2296 100644 --- a/docs/programming_guide/source_zh_cn/infer.md +++ b/docs/programming_guide/source_zh_cn/infer.md @@ -6,14 +6,14 @@ - + 基于MindSpore训练后的模型,支持在Ascend 910 AI处理器、Ascend 310 AI处理器、GPU、CPU、端侧等多种不同的平台上执行推理。使用方法可参考如下教程: -- [在Ascend 910 AI处理器上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_ascend_910.html) -- [在Ascend 310 AI处理器上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_ascend_310.html) -- [在GPU上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_gpu.html) -- [在CPU上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_cpu.html) -- [在端侧执行推理](https://www.mindspore.cn/tutorial/lite/zh-CN/master/quick_start/quick_start.html) +- [在Ascend 910 AI处理器上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_910.html) +- [在Ascend 310 AI处理器上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310.html) +- [在GPU上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_gpu.html) +- [在CPU上执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_cpu.html) +- [在端侧执行推理](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.1/quick_start/quick_start.html) -同时,MindSpore提供了一个轻量级、高性能的服务模块,称为MindSpore Serving,可帮助MindSpore开发者在生产环境中高效部署在线推理服务,使用方法可参考[部署推理服务](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_example.html)。 +同时,MindSpore提供了一个轻量级、高性能的服务模块,称为MindSpore 
Serving,可帮助MindSpore开发者在生产环境中高效部署在线推理服务,使用方法可参考[部署推理服务](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_example.html)。 diff --git a/docs/programming_guide/source_zh_cn/network_component.md b/docs/programming_guide/source_zh_cn/network_component.md index fcbfdaa013e7607b220020c57eaf669140a05dad..7146cbea92d25739062d104a20c6c20c10f4b3ee 100644 --- a/docs/programming_guide/source_zh_cn/network_component.md +++ b/docs/programming_guide/source_zh_cn/network_component.md @@ -10,9 +10,9 @@ - +    - +    @@ -26,13 +26,12 @@ MindSpore封装了一些常用的网络组件,用于网络的训练、推理 ## GradOperation -GradOperation组件用于生成输入函数的梯度,利用`get_all`、`get_by_list`和`sens_param`参数控制梯度的计算方式,细节内容详见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GradOperation.html)。 +GradOperation组件用于生成输入函数的梯度,利用`get_all`、`get_by_list`和`sens_param`参数控制梯度的计算方式,细节内容详见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GradOperation.html)。 GradOperation的使用实例如下: ```python import numpy as np - import mindspore.nn as nn from mindspore import Tensor, Parameter from mindspore import dtype as mstype @@ -68,9 +67,8 @@ GradNetWrtX(Net())(x, y) 输出如下: ```text -Tensor(shape=[2, 3], dtype=Float32, [[1.4100001 1.5999999 6.6 ] - [1.4100001 1.5999999 6.6 ]]) + [1.4100001 1.5999999 6.6 ]] ``` MindSpore涉及梯度计算的其他组件,例如`WithGradCell`和`TrainOneStepCell`等,都用到了`GradOperation`, @@ -84,7 +82,6 @@ MindSpore涉及梯度计算的其他组件,例如`WithGradCell`和`TrainOneSte ```python import numpy as np - import mindspore.context as context import mindspore.nn as nn from mindspore import Tensor diff --git a/docs/programming_guide/source_zh_cn/network_list.rst b/docs/programming_guide/source_zh_cn/network_list.rst index 0086283c5f999b6131593dd0be63ce852df01927..f6ce3af4aaaa3d987a618af4a0e6737cc5e74037 100644 --- a/docs/programming_guide/source_zh_cn/network_list.rst +++ b/docs/programming_guide/source_zh_cn/network_list.rst @@ -4,4 +4,4 @@ .. 
toctree:: :maxdepth: 1 - MindSpore网络支持 \ No newline at end of file + MindSpore网络支持 \ No newline at end of file diff --git a/docs/programming_guide/source_zh_cn/operator_list.rst b/docs/programming_guide/source_zh_cn/operator_list.rst index 6fc28fa3bdea8f865f0f1702724bb434e885ec45..bf2121c2efc84dd6a009b6d185cffde7234e388a 100644 --- a/docs/programming_guide/source_zh_cn/operator_list.rst +++ b/docs/programming_guide/source_zh_cn/operator_list.rst @@ -4,7 +4,7 @@ .. toctree:: :maxdepth: 1 - MindSpore算子支持 - MindSpore隐式类型转换的算子支持 - MindSpore分布式算子支持 - MindSpore Lite算子支持 \ No newline at end of file + MindSpore算子支持 + MindSpore隐式类型转换的算子支持 + MindSpore分布式算子支持 + MindSpore Lite算子支持 \ No newline at end of file diff --git a/docs/programming_guide/source_zh_cn/operators.md b/docs/programming_guide/source_zh_cn/operators.md index 77baf54b3bb6aee6ff35a4c46e9d5eec6ef05aed..c0c8a805ca84438f1d2a21982e87c5aadca1e89e 100644 --- a/docs/programming_guide/source_zh_cn/operators.md +++ b/docs/programming_guide/source_zh_cn/operators.md @@ -40,9 +40,9 @@ - +    - +    @@ -54,13 +54,13 @@ MindSpore的算子组件,可从算子使用方式和算子功能两种维度 算子相关接口主要包括operations、functional和composite,可通过ops直接获取到这三类算子。 -- operations提供单个的Primtive算子。一个算子对应一个原语,是最小的执行对象,需要实例化之后使用。 +- operations提供单个的Primitive算子。一个算子对应一个原语,是最小的执行对象,需要实例化之后使用。 - composite提供一些预定义的组合算子,以及复杂的涉及图变换的算子,如`GradOperation`。 - functional提供operations和composite实例化后的对象,简化算子的调用流程。 ### mindspore.ops.operations -operations提供了所有的Primitive算子接口,是开放给用户的最低阶算子接口。算子支持情况可查询[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list.html)。 +operations提供了所有的Primitive算子接口,是开放给用户的最低阶算子接口。算子支持情况可查询[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list.html)。 Primitive算子也称为算子原语,它直接封装了底层的Ascend、GPU、AICPU、CPU等多种算子的具体实现,为用户提供基础算子能力。 @@ -89,7 +89,7 @@ output = [ 1. 8. 64.] 
### mindspore.ops.functional -为了简化没有属性的算子的调用流程,MindSpore提供了一些算子的functional版本。入参要求参考原算子的输入输出要求。算子支持情况可以查询[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list_ms.html#mindspore-ops-functional)。 +为了简化没有属性的算子的调用流程,MindSpore提供了一些算子的functional版本。入参要求参考原算子的输入输出要求。算子支持情况可以查询[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list_ms.html#mindspore-ops-functional)。 例如`P.Pow`算子,我们提供了functional版本的`F.tensor_pow`算子。 @@ -127,7 +127,7 @@ from mindspore import Tensor mean = Tensor(1.0, mstype.float32) stddev = Tensor(1.0, mstype.float32) output = C.normal((2, 3), mean, stddev, seed=5) -print("ouput =", output) +print("output =", output) ``` 输出如下: @@ -172,7 +172,7 @@ tensor [[2.4, 4.2] scalar 3 ``` -此外,高阶函数`GradOperation`提供了根据输入的函数,求这个函数对应的梯度函数的方式,详细可以参阅[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/ops/mindspore.ops.GradOperation.html)。 +此外,高阶函数`GradOperation`提供了根据输入的函数,求这个函数对应的梯度函数的方式,详细可以参阅[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/ops/mindspore.ops.GradOperation.html)。 ### operations/functional/composite三类算子合并用法 @@ -194,7 +194,7 @@ pow = ops.Pow() ## 算子功能 -算子按功能可分为张量操作、网络操作、数组操作、图像操作、编码操作、调试操作和量化操作七个功能模块。所有的算子在Ascend AI处理器、GPU和CPU的支持情况,参见[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list.html)。 +算子按功能可分为张量操作、网络操作、数组操作、图像操作、编码操作、调试操作和量化操作七个功能模块。所有的算子在Ascend AI处理器、GPU和CPU的支持情况,参见[算子支持列表](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list.html)。 ### 张量操作 @@ -362,9 +362,10 @@ from mindspore import Tensor import mindspore.ops as ops import numpy as np -input_ = Tensor(np.ones([2, 8]).astype(np.float32)) -broadcast = ops.Broadcast(1) -output = broadcast((input_,)) +shape = (2, 3) +input_x = Tensor(np.array([1, 2, 3]).astype(np.float32)) +broadcast_to = ops.BroadcastTo(shape) +output = broadcast_to(input_x) print(output) ``` @@ -372,8 +373,8 @@ print(output) 输出如下: ```text -[[1.0, 1.0, 1.0 ... 1.0, 1.0, 1.0], - [1.0, 1.0, 1.0 ... 1.0, 1.0, 1.0]] +[[1. 2. 3.] + [1. 2. 
3.]] ``` ### 网络操作 @@ -529,7 +530,7 @@ print(result) 输出如下: ```text -[0. 0. 0. 0.] +(Tensor(shape=[4], dtype=Float32, value= [ 1.98989999e+00, -4.90300000e-01, 1.69520009e+00, 3.98009992e+00]),) ``` ### 数组操作 @@ -607,7 +608,7 @@ print(output) 输出如下: ```text -[3, 2, 1] +(3, 2, 1) ``` ### 图像操作 @@ -677,8 +678,8 @@ from mindspore import Tensor import mindspore.ops as ops import mindspore -anchor_box = Tensor([[4,1,2,1],[2,2,2,3]],mindspore.float32) -groundtruth_box = Tensor([[3,1,2,2],[1,2,1,4]],mindspore.float32) +anchor_box = Tensor([[2, 2, 2, 3], [2, 2, 2, 3]],mindspore.float32) +groundtruth_box = Tensor([[1, 2, 1, 4], [1, 2, 1, 4]],mindspore.float32) boundingbox_encode = ops.BoundingBoxEncode(means=(0.0, 0.0, 0.0, 0.0), stds=(1.0, 1.0, 1.0, 1.0)) res = boundingbox_encode(anchor_box, groundtruth_box) print(res) @@ -687,8 +688,8 @@ print(res) 输出如下: ```text -[[5.0000000e-01 5.0000000e-01 -6.5504000e+04 6.9335938e-01] - [-1.0000000e+00 2.5000000e-01 0.0000000e+00 4.0551758e-01]] +[[ -1. 0.25 0. 0.40551758] + [ -1. 0.25 0. 
0.40551758]] ``` #### BoundingBoxDecode diff --git a/docs/programming_guide/source_zh_cn/optim.md b/docs/programming_guide/source_zh_cn/optim.md index 1e2845cb0233ce3d1868fdc3e5b32eeac062e623..930824bcf3b7fec2218b6f055c2ec927a17cb0d4 100644 --- a/docs/programming_guide/source_zh_cn/optim.md +++ b/docs/programming_guide/source_zh_cn/optim.md @@ -13,9 +13,9 @@ - +    - +    @@ -23,7 +23,7 @@ `mindspore.nn.optim`是MindSpore框架中实现各种优化算法的模块,包含常用的优化器、学习率等,并且接口具备足够的通用性,可以将以后更新、更复杂的方法集成到模块里。 -`mindspore.nn.optim`为模型提供常用的优化器,如`SGD`、`ADAM`、`Momentum`。优化器用于计算和更新梯度,模型优化算法的选择直接关系到最终模型的性能,如果有时候效果不好,未必是特征或者模型设计的问题,很有可能是优化算法的问题;同时还有`mindspore.nn`提供的学习率的模块,学习率分为`dynamic_lr`和`learning_rate_schedule`,都是动态学习率,但是实现方式不同,学习率是监督学习以及深度学习中最为重要的参数,其决定着目标函数是否能收敛到局部最小值以及何时能收敛到最小值。合适的学习率能够使目标函数在合适的的时间内收敛到局部最小值。 +`mindspore.nn.optim`为模型提供常用的优化器,如`SGD`、`ADAM`、`Momentum`。优化器用于计算和更新梯度,模型优化算法的选择直接关系到最终模型的性能,如果有时候效果不好,未必是特征或者模型设计的问题,很有可能是优化算法的问题;同时还有`mindspore.nn`提供的学习率的模块,学习率分为`dynamic_lr`和`learning_rate_schedule`,都是动态学习率,但是实现方式不同,学习率是监督学习以及深度学习中最为重要的参数,其决定着目标函数是否能收敛到局部最小值以及何时能收敛到最小值。合适的学习率能够使目标函数在合适的时间内收敛到局部最小值。 > 本文档中的所有示例,支持CPU,GPU,Ascend环境。 diff --git a/docs/programming_guide/source_zh_cn/parameter.md b/docs/programming_guide/source_zh_cn/parameter.md index 110d793f396ac9f953d5bbedcdc41a9b46c5073d..901717496dfd901cb2d9df593bf9b7b526a0bf2a 100644 --- a/docs/programming_guide/source_zh_cn/parameter.md +++ b/docs/programming_guide/source_zh_cn/parameter.md @@ -11,9 +11,9 @@ - +    - +    @@ -41,7 +41,7 @@ mindspore.Parameter(default_input, name=None, requires_grad=True, layerwise_para 当`layerwise_parallel`(混合并行)配置为True时,参数广播和参数梯度聚合时会过滤掉该参数。 -有关分布式并行的相关配置,可以参考文档:。 +有关分布式并行的相关配置,可以参考文档:。 下例通过三种不同的数据类型构造了`Parameter`,三个`Parameter`都需要更新,都不采用layerwise并行。如下: @@ -115,7 +115,7 @@ inited_param: None requires_grad: True layerwise_parallel: False -data: Parameter (name=x) +data: Parameter (name=Parameter) ``` ## 方法 @@ -126,7 +126,7 @@ data: Parameter (name=x) - 
`set_data`:设置`Parameter`保存的数据,支持传入`Tensor`、`Initializer`、`int`和`float`进行设置, 将方法的入参`slice_shape`设置为True时,可改变`Parameter`的shape,反之,设置的数据shape必须与`Parameter`原来的shape保持一致。 -- `set_param_ps`:控制训练参数是否通过[Parameter Server](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_parameter_server_training.html)进行训练。 +- `set_param_ps`:控制训练参数是否通过[Parameter Server](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_parameter_server_training.html)进行训练。 - `clone`:克隆`Parameter`,克隆完成后可以给新Parameter指定新的名字。 diff --git a/docs/programming_guide/source_zh_cn/performance_optimization.md b/docs/programming_guide/source_zh_cn/performance_optimization.md index 6cf6a8e188187a5881719367adf6ab0452452150..9ae02d93961f0000e50a9eab449488f6703234a8 100644 --- a/docs/programming_guide/source_zh_cn/performance_optimization.md +++ b/docs/programming_guide/source_zh_cn/performance_optimization.md @@ -6,14 +6,14 @@ - + MindSpore提供了多种性能优化方法,用户可根据实际情况,利用它们来提升训练和推理的性能。 | 优化阶段 | 优化方法 | 支持情况 | | --- | --- | --- | -| 训练 | [分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/distributed_training_tutorials.html) | Ascend、GPU | -| | [混合精度](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/enable_mixed_precision.html) | Ascend、GPU | -| | [图算融合](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/enable_graph_kernel_fusion.html) | Ascend | -| | [梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_gradient_accumulation.html) | Ascend、GPU | -| 推理 | [训练后量化](https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/post_training_quantization.html) | Lite | +| 训练 | [分布式并行训练](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/distributed_training_tutorials.html) | Ascend、GPU | +| | [混合精度](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/enable_mixed_precision.html) | Ascend、GPU | +| | 
[图算融合](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/enable_graph_kernel_fusion.html) | Ascend | +| | [梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_gradient_accumulation.html) | Ascend、GPU | +| 推理 | [训练后量化](https://www.mindspore.cn/tutorial/lite/zh-CN/r1.1/use/post_training_quantization.html) | Lite | diff --git a/docs/programming_guide/source_zh_cn/pipeline.md b/docs/programming_guide/source_zh_cn/pipeline.md index 729a82e641ad19ce47407043ff545f63e9d744e2..71942fb69d94193dd532127a0bcaa6f4538c5d8c 100644 --- a/docs/programming_guide/source_zh_cn/pipeline.md +++ b/docs/programming_guide/source_zh_cn/pipeline.md @@ -14,9 +14,9 @@ - +    - +    @@ -26,7 +26,7 @@ MindSpore的各个数据集类都为用户提供了多种数据处理算子,用户可以构建数据处理pipeline定义需要使用的数据处理操作,数据即可在训练过程中像水一样源源不断地经过数据处理pipeline流向训练系统。 -MindSpore目前支持的部分常用数据处理算子如下表所示,更多数据处理操作参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.html)。 +MindSpore目前支持的部分常用数据处理算子如下表所示,更多数据处理操作参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.html)。 | 数据处理算子 | 算子说明 | | ---- | ---- | @@ -80,7 +80,7 @@ for data in dataset1.create_dict_iterator(): 将指定的函数或算子作用于数据集的指定列数据,实现数据映射操作。用户可以自定义映射函数,也可以直接使用c_transforms或py_transforms中的算子针对图像、文本数据进行数据增强。 ->更多数据增强的使用说明,参见编程指南中[数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/augmentation.html)章节。 +>更多数据增强的使用说明,参见编程指南中[数据增强](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/augmentation.html)章节。 ![map](./images/map.png) diff --git a/docs/programming_guide/source_zh_cn/probability.md b/docs/programming_guide/source_zh_cn/probability.md index ea6cd8e22217e580648faa1be465ab41cd1c9e20..82db56bcf3d32a0a93f372e062c48e2613723b51 100644 --- a/docs/programming_guide/source_zh_cn/probability.md +++ b/docs/programming_guide/source_zh_cn/probability.md @@ -47,7 +47,7 @@ - + 
MindSpore深度概率编程的目标是将深度学习和贝叶斯学习结合,包括概率分布、概率分布映射、深度概率网络、概率推断算法、贝叶斯层、贝叶斯转换和贝叶斯工具箱,面向不同的开发者。对于专业的贝叶斯学习用户,提供概率采样、推理算法和模型构建库;另一方面,为不熟悉贝叶斯深度学习的用户提供了高级的API,从而不用更改深度学习编程逻辑,即可利用贝叶斯模型。 @@ -361,23 +361,28 @@ mean_b = Tensor(1.0, dtype=mstype.float32) sd_b = Tensor(2.0, dtype=mstype.float32) kl = my_normal.kl_loss('Normal', mean_b, sd_b) +# get the distribution args as a tuple +dist_arg = my_normal.get_dist_args() + print("mean: ", mean) print("var: ", var) print("entropy: ", entropy) print("prob: ", prob) print("cdf: ", cdf) print("kl: ", kl) +print("dist_arg: ", dist_arg) ``` 输出为: ```text -mean: 0.0 -var: 1.0 -entropy: 1.4189385 -prob: [0.35206532, 0.3989423, 0.35206532] -cdf: [0.3085482, 0.5, 0.6914518] -kl: 0.44314718 +mean:  0.0 +var:  1.0 +entropy:  1.4189385 +prob:  [0.35206532 0.3989423  0.35206532] +cdf:  [0.30853754 0.5        0.69146246] +kl:  0.44314718 +dist_arg: (Tensor(shape=[], dtype=Float32, value= 0), Tensor(shape=[], dtype=Float32, value= 1)) ``` ### 概率分布类在图模式下的应用 @@ -465,36 +470,34 @@ tx = Tensor(x, dtype=dtype.float32) cdf = LogNormal.cdf(tx) # generate samples from the distribution -shape = ((3, 2)) +shape = (3, 2) sample = LogNormal.sample(shape) # get information of the distribution print(LogNormal) -# get information of the underyling distribution and the bijector separately +# get information of the underlying distribution and the bijector separately print("underlying distribution:\n", LogNormal.distribution) print("bijector:\n", LogNormal.bijector) # get the computation results print("cdf:\n", cdf) -print("sample:\n", sample) +print("sample shape:\n", sample.shape) ``` 输出为: ```text TransformedDistribution< - (_bijector): Exp - (_distribution): Normal - > +  (_bijector): Exp +  (_distribution): Normal +  > underlying distribution: -Normal -bijector -Exp + Normal +bijector: + Exp cdf: -[7.55891383e-01, 9.46239710e-01, 9.89348888e-01] -sample: -[[7.64315844e-01, 3.01435232e-01], - [1.17166102e+00, 2.60277224e+00], - [7.02699006e-01, 3.91564220e-01]] + 
[0.7558914 0.9462397 0.9893489] +sample shape: +(3, 2) ``` 当构造 `TransformedDistribution` 映射变换的 `is_constant_jacobian = true` 时(如 `ScalarAffine`),构造的 `TransformedDistribution` 实例可以直接使用 `mean` 接口计算均值,例如: @@ -546,15 +549,14 @@ x = np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32) tx = Tensor(x, dtype=dtype.float32) cdf, sample = net(tx) print("cdf: ", cdf) -print("sample: ", sample) +print("sample shape: ", sample.shape) ``` 输出为: ```text cdf: [0.7558914 0.86403143 0.9171715 0.9462397 ] -sample: [[0.5361498 0.26627186 2.766659 ] - [1.5831033 0.4096472 2.008679 ]] +sample shape: (2, 3) ``` ## 概率分布映射 @@ -695,11 +697,11 @@ print("inverse_log_jacobian: ", inverse_log_jaco) 输出: ```text -PowerTransform -forward: [2.23606801e+00, 2.64575124e+00, 3.00000000e+00, 3.31662488e+00] -inverse: [1.50000000e+00, 4.00000048e+00, 7.50000000e+00, 1.20000010e+01] -forward_log_jacobian: [-8.04718971e-01, -9.72955048e-01, -1.09861231e+00, -1.19894767e+00] -inverse_log_jacobian: [6.93147182e-01 1.09861231e+00 1.38629436e+00 1.60943794e+00] +PowerTransform +forward:  [2.236068  2.6457515 3.        3.3166249] +inverse:  [ 1.5       4.        7.5      12.000001] +forward_log_jacobian:  [-0.804719  -0.9729551 -1.0986123 -1.1989477] +inverse_log_jacobian:  [0.6931472 1.0986123 1.3862944 1.609438 ] ``` ### 图模式下调用Bijector实例 @@ -741,10 +743,10 @@ print("inverse_log_jacobian: ", inverse_log_jaco) 输出为: ```text -forward: [2.236068 2.6457515 3. 3.3166249] -inverse: [ 1.5 4. 7.5 12.000001] -forward_log_jacobian: [-0.804719 -0.9729551 -1.0986123 -1.1989477] -inverse_log_jacobian: [0.6931472 1.0986123 1.3862944 1.609438 ] +forward:  [2.236068  2.6457515 3.        3.3166249] +inverse:  [ 1.5       4.        
7.5      12.000001] +forward_log_jacobian:  [-0.804719  -0.9729551 -1.0986123 -1.1989477] +inverse_log_jacobian:  [0.6931472 1.0986123 1.3862944 1.609438 ] ``` ## 深度概率网络 @@ -850,7 +852,7 @@ decoder = Decoder() cvae = ConditionalVAE(encoder, decoder, hidden_size=400, latent_size=20, num_classes=10) ``` -加载数据集,我们可以使用Mnist数据集,具体的数据加载和预处理过程可以参考这里[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html),这里会用到create_dataset函数创建数据迭代器。 +加载数据集,我们可以使用Mnist数据集,具体的数据加载和预处理过程可以参考这里[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html),这里会用到create_dataset函数创建数据迭代器。 ```python ds_train = create_dataset(image_path, 128, 1) @@ -914,7 +916,7 @@ The shape of the generated sample is (64, 1, 32, 32) 下面的范例使用MindSpore的`nn.probability.bnn_layers`中的API实现BNN图片分类模型。MindSpore的`nn.probability.bnn_layers`中的API包括`NormalPrior`,`NormalPosterior`,`ConvReparam`,`DenseReparam`,`DenseLocalReparam`和`WithBNNLossCell`。BNN与DNN的最大区别在于,BNN层的weight和bias不再是确定的值,而是服从一个分布。其中,`NormalPrior`,`NormalPosterior`分别用来生成服从正态分布的先验分布和后验分布;`ConvReparam`和`DenseReparam`分别是使用reparameterization方法实现的贝叶斯卷积层和全连接层;`DenseLocalReparam`是使用Local Reparameterization方法实现的贝叶斯全连接层;`WithBNNLossCell`是用来封装BNN和损失函数的。 -如何使用`nn.probability.bnn_layers`中的API构建贝叶斯神经网络并实现图片分类,可以参考教程[使用贝叶斯网络](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_deep_probability_programming.html#id3)。 +如何使用`nn.probability.bnn_layers`中的API构建贝叶斯神经网络并实现图片分类,可以参考教程[使用贝叶斯网络](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_deep_probability_programming.html#id3)。 ## 贝叶斯转换 @@ -970,7 +972,7 @@ API`TransformToBNN`主要实现了两个功能: ``` - 
参数`get_dense_args`指定从DNN模型的全连接层中获取哪些参数,默认值是DNN模型的全连接层和BNN的全连接层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Dense.html);`get_conv_args`指定从DNN模型的卷积层中获取哪些参数,默认值是DNN模型的卷积层和BNN的卷积层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/nn/mindspore.nn.Conv2d.html);参数`add_dense_args`和`add_conv_args`分别指定了要为BNN层指定哪些新的参数值。需要注意的是,`add_dense_args`中的参数不能与`get_dense_args`重复,`add_conv_args`和`get_conv_args`也是如此。 + 参数`get_dense_args`指定从DNN模型的全连接层中获取哪些参数,默认值是DNN模型的全连接层和BNN的全连接层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Dense.html);`get_conv_args`指定从DNN模型的卷积层中获取哪些参数,默认值是DNN模型的卷积层和BNN的卷积层所共有的参数,参数具体的含义可以参考[API说明文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/nn/mindspore.nn.Conv2d.html);参数`add_dense_args`和`add_conv_args`分别指定了要为BNN层指定哪些新的参数值。需要注意的是,`add_dense_args`中的参数不能与`get_dense_args`重复,`add_conv_args`和`get_conv_args`也是如此。 - 功能二:转换指定类型的层 @@ -996,7 +998,7 @@ API`TransformToBNN`主要实现了两个功能: 参数`dnn_layer`指定将哪个类型的DNN层转换成BNN层,`bnn_layer`指定DNN层将转换成哪个类型的BNN层,`get_args`和`add_args`分别指定从DNN层中获取哪些参数和要为BNN层的哪些参数重新赋值。 -如何在MindSpore中使用API`TransformToBNN`可以参考教程[DNN一键转换成BNN](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_deep_probability_programming.html#dnnbnn) +如何在MindSpore中使用API`TransformToBNN`可以参考教程[DNN一键转换成BNN](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_deep_probability_programming.html#dnnbnn) ## 贝叶斯工具箱 diff --git a/docs/programming_guide/source_zh_cn/run.md b/docs/programming_guide/source_zh_cn/run.md index e1989388d076787c4fb80105b1c247249ec3d790..34a4f7e53b9b53691bd468e42eb98f4e374bf238 100644 --- a/docs/programming_guide/source_zh_cn/run.md +++ b/docs/programming_guide/source_zh_cn/run.md @@ -12,9 +12,9 @@ - +    - +    @@ -105,7 +105,7 @@ print(output.asnumpy()) ## 执行网络模型 
-MindSpore的[Model接口](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.html#mindspore.Model)是用于训练和验证的高级接口。可以将有训练或推理功能的layers组合成一个对象,通过调用train、eval、predict接口可以分别实现训练、推理和预测功能。 +MindSpore的[Model接口](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.html#mindspore.Model)是用于训练和验证的高级接口。可以将有训练或推理功能的layers组合成一个对象,通过调用train、eval、predict接口可以分别实现训练、推理和预测功能。 用户可以根据实际需要传入网络、损失函数和优化器等初始化Model接口,还可以通过配置amp_level实现混合精度,配置metrics实现模型评估。 @@ -240,14 +240,15 @@ if __name__ == "__main__": model = Model(network, net_loss, net_opt) print("============== Starting Training ==============") - model.train(1, ds_train, callbacks=[LossMonitor()], dataset_sink_mode=True) + model.train(1, ds_train, callbacks=[LossMonitor()], dataset_sink_mode=False) ``` -> 示例中用到的MNIST数据集的获取方法,可以参照[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)的下载数据集部分,下同。 +> 示例中用到的MNIST数据集的获取方法,可以参照[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)的下载数据集部分,下同。 输出如下: -```python +```text +============== Starting Training ============== epoch: 1 step: 1, loss is 2.300784 epoch: 1 step: 2, loss is 2.3076947 epoch: 1 step: 3, loss is 2.2993166 @@ -257,11 +258,11 @@ epoch: 1 step: 1874, loss is 0.0346688 epoch: 1 step: 1875, loss is 0.017264696 ``` -> 使用PyNative模式调试, 请参考[使用PyNative模式调试](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/debug_in_pynative_mode.html), 包括单算子、普通函数和网络训练模型的执行。 +> 使用PyNative模式调试, 请参考[使用PyNative模式调试](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/debug_in_pynative_mode.html), 包括单算子、普通函数和网络训练模型的执行。 ### 执行推理模型 -通过调用Model的train接口可以实现推理。为了方便评估模型的好坏,可以在Model接口初始化的时候设置评估指标Metric。 +通过调用Model的eval接口可以实现推理。为了方便评估模型的好坏,可以在Model接口初始化的时候设置评估指标Metric。 Metric是用于评估模型好坏的指标。常见的主要有Accuracy、Fbeta、Precision、Recall和TopKCategoricalAccuracy等,通常情况下,一种模型指标无法全面的评估模型的好坏,一般会结合多个指标共同作用对模型进行评估。 @@ -373,14 +374,14 @@ if __name__ == "__main__": network = LeNet5(10) 
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean") - repeat_size = 10 + repeat_size = 1 net_opt = nn.Momentum(network.trainable_params(), 0.01, 0.9) model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy(), "Precision": Precision()}) print("============== Starting Testing ==============") param_dict = load_checkpoint("./ckpt/checkpoint_lenet-1_1875.ckpt") load_param_into_net(network, param_dict) - ds_eval = create_dataset(os.path.join("/home/workspace/mindspore_dataset/MNIST_Data", "test"), 32, 1) + ds_eval = create_dataset(os.path.join("/home/workspace/mindspore_dataset/MNIST_Data", "test"), 32, repeat_size) acc = model.eval(ds_eval, dataset_sink_mode=True) print("============== {} ==============".format(acc)) ``` @@ -391,7 +392,7 @@ if __name__ == "__main__": - `checkpoint_lenet-1_1875.ckpt`:保存的CheckPoint模型文件名称。 - `load_param_into_net`:通过该接口把参数加载到网络中。 -> `checkpoint_lenet-1_1875.ckpt`文件的保存方法,可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)的训练网络部分。 +> `checkpoint_lenet-1_1875.ckpt`文件的保存方法,可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)的训练网络部分。 输出如下: diff --git a/docs/programming_guide/source_zh_cn/sampler.md b/docs/programming_guide/source_zh_cn/sampler.md index 00edcb9d90d4c52888766689ff8234cd1a039cb7..8fc623d949c9e9c2ef6b7c84ae5e43bc8768fb30 100644 --- a/docs/programming_guide/source_zh_cn/sampler.md +++ b/docs/programming_guide/source_zh_cn/sampler.md @@ -14,9 +14,9 @@ - +    - +    @@ -24,7 +24,7 @@ MindSpore提供了多种用途的采样器(Sampler),帮助用户对数据集进行不同形式的采样,以满足训练需求,能够解决诸如数据集过大或样本类别分布不均等问题。只需在加载数据集时传入采样器对象,即可实现数据的采样。 -MindSpore目前提供的部分采样器类别如下表所示。此外,用户也可以根据需要实现自定义的采样器类。更多采样器的使用方法参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.html)。 +MindSpore目前提供的部分采样器类别如下表所示。此外,用户也可以根据需要实现自定义的采样器类。更多采样器的使用方法参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.html)。 
| 采样器名称 | 采样器说明 | | ---- | ---- | @@ -63,13 +63,15 @@ ds.config.set_seed(0) DATA_DIR = "cifar-10-batches-bin/" +print("------ Without Replacement ------") + sampler = ds.RandomSampler(num_samples=5) dataset1 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler) for data in dataset1.create_dict_iterator(): print("Image shape:", data['image'].shape, ", Label:", data['label']) -print("------------") +print("------ With Replacement ------") sampler = ds.RandomSampler(replacement=True, num_samples=5) dataset2 = ds.Cifar10Dataset(DATA_DIR, sampler=sampler) @@ -81,12 +83,13 @@ for data in dataset2.create_dict_iterator(): 输出结果如下: ```text +------ Without Replacement ------ Image shape: (32, 32, 3) , Label: 1 Image shape: (32, 32, 3) , Label: 6 Image shape: (32, 32, 3) , Label: 7 Image shape: (32, 32, 3) , Label: 0 Image shape: (32, 32, 3) , Label: 4 ------------- +------ With Replacement ------ Image shape: (32, 32, 3) , Label: 4 Image shape: (32, 32, 3) , Label: 6 Image shape: (32, 32, 3) , Label: 9 @@ -204,7 +207,7 @@ Image shape: (32, 32, 3) , Label: 9 在分布式训练中,对数据集分片进行采样。 -下面的样例使用分布式采样器将构建的数据集分为3片,在每个分片中采样3个数据样本,并展示已读取的数据。 +下面的样例使用分布式采样器将构建的数据集分为3片,在每个分片中采样不多于3个数据样本,并展示第0个分片读取到的数据。 ```python import numpy as np diff --git a/docs/programming_guide/source_zh_cn/security_and_privacy.md b/docs/programming_guide/source_zh_cn/security_and_privacy.md index ec57b333cb1f9e62aa44047e010286f635b4af81..66a9666d6d250b57463332635caa54a9cb29c86b 100644 --- a/docs/programming_guide/source_zh_cn/security_and_privacy.md +++ b/docs/programming_guide/source_zh_cn/security_and_privacy.md @@ -17,7 +17,7 @@ - + ## 概述 @@ -37,7 +37,7 @@ `Detector`基类定义了对抗样本检测的使用接口,其子类实现了各种具体的检测算法,增强模型的对抗鲁棒性。 -详细内容,请参考[对抗鲁棒性官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/improve_model_security_nad.html)。 +详细内容,请参考[对抗鲁棒性官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/improve_model_security_nad.html)。 ## 模型安全测试 @@ -45,7 +45,7 @@ 
`Fuzzer`类基于神经元覆盖率增益控制fuzzing流程,采用自然扰动和对抗样本生成方法作为变异策略,激活更多的神经元,从而探索不同类型的模型输出结果、错误行为,指导用户增强模型鲁棒性。 -详细内容,请参考[模型安全测试官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/test_model_security_fuzzing.html)。 +详细内容,请参考[模型安全测试官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/test_model_security_fuzzing.html)。 ## 差分隐私训练 @@ -53,7 +53,7 @@ `DPModel`继承了`mindspore.Model`,提供了差分隐私训练的入口函数。 -详细内容,请参考[差分隐私官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/protect_user_privacy_with_differential_privacy.html)。 +详细内容,请参考[差分隐私官网教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/protect_user_privacy_with_differential_privacy.html)。 ## 隐私泄露风险评估 @@ -61,4 +61,4 @@ `MembershipInference`类提供了一种模型逆向分析方法,能够基于模型对样本的预测信息,推测某个样本是否在模型的训练集中,以此评估模型的隐私泄露风险。 -详细内容,请参考[隐私泄露风险评估官方教程](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/test_model_security_membership_inference.html)。 +详细内容,请参考[隐私泄露风险评估官方教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/test_model_security_membership_inference.html)。 diff --git a/docs/programming_guide/source_zh_cn/syntax_list.rst b/docs/programming_guide/source_zh_cn/syntax_list.rst index ee6c9218ca1be9856d50de5d3d40b71e0f8f57df..c31e6ede9c5328f23b1f2e7d08a0cda7d68c513a 100644 --- a/docs/programming_guide/source_zh_cn/syntax_list.rst +++ b/docs/programming_guide/source_zh_cn/syntax_list.rst @@ -4,4 +4,4 @@ .. 
toctree:: :maxdepth: 1 - 静态图语法支持 \ No newline at end of file + 静态图语法支持 \ No newline at end of file diff --git a/docs/programming_guide/source_zh_cn/tensor.md b/docs/programming_guide/source_zh_cn/tensor.md index fd4467012aad1f81c963763786af388bde83e47e..c65c1b1e660ae529663554cc2fba86a9ee84a5b5 100644 --- a/docs/programming_guide/source_zh_cn/tensor.md +++ b/docs/programming_guide/source_zh_cn/tensor.md @@ -11,15 +11,15 @@ - +    - +    ## 概述 -张量(Tensor)是MindSpore网络运算中的基本数据结构。张量中的数据类型可参考[dtype](https://www.mindspore.cn/doc/programming_guide/zh-CN/master/dtype.html)。 +张量(Tensor)是MindSpore网络运算中的基本数据结构。张量中的数据类型可参考[dtype](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.1/dtype.html)。 不同维度的张量分别表示不同的数据,0维张量表示标量,1维张量表示向量,2维张量表示矩阵,3维张量可以表示彩色图像的RGB三通道等等。 diff --git a/docs/programming_guide/source_zh_cn/tokenizer.md b/docs/programming_guide/source_zh_cn/tokenizer.md index 0dcaca69a1049364974db4362519c3614778dc2e..12bb566fde083d50fae113c533b2a37a606c39f0 100644 --- a/docs/programming_guide/source_zh_cn/tokenizer.md +++ b/docs/programming_guide/source_zh_cn/tokenizer.md @@ -14,9 +14,9 @@ - +    - +    @@ -40,7 +40,7 @@ MindSpore目前提供的分词器如下表所示。此外,用户也可以根 | WhitespaceTokenizer | 根据空格符对标量文本数据进行分词。 | | WordpieceTokenizer | 根据单词集对标量文本数据进行分词。 | -更多分词器的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.text.html)。 +更多分词器的详细说明,可以参见[API文档](https://www.mindspore.cn/doc/api_python/zh-CN/r1.1/mindspore/mindspore.dataset.text.html)。 ## MindSpore分词器 @@ -161,7 +161,7 @@ print("------------------------before tokenization----------------------------") for data in dataset.create_dict_iterator(output_numpy=True): print(text.to_str(data['text'])) -# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/test_sentencepiece/botchan.txt +# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/r1.1/tests/ut/data/dataset/test_sentencepiece/botchan.txt vocab_file = "botchan.txt" vocab 
= text.SentencePieceVocab.from_file([vocab_file], 5000, 0.9995, SentencePieceModel.UNIGRAM, {}) tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING) diff --git a/docs/programming_guide/source_zh_cn/train.md b/docs/programming_guide/source_zh_cn/train.md index 22683a4206459c78b69f79a3c79916e79eb0f570..f0d06e7d4dfd8c3bea10246c96cd77d531bfc68a 100644 --- a/docs/programming_guide/source_zh_cn/train.md +++ b/docs/programming_guide/source_zh_cn/train.md @@ -13,9 +13,9 @@ - +    - +    @@ -27,13 +27,13 @@ MindSpore在Model_zoo也已经提供了大量的目标检测、自然语言处 在自定义训练网络前,需要先了解下MindSpore的网络支持、Python源码构造网络约束和算子支持情况。 -- 网络支持:当前MindSpore已经支持多种网络,按类型分为计算机视觉、自然语言处理、推荐和图神经网络,可以通过[网络支持](https://www.mindspore.cn/doc/note/zh-CN/master/network_list.html)查看具体支持的网络情况。如果现有网络无法满足用户需求,用户可以根据实际需要定义自己的网络。 +- 网络支持:当前MindSpore已经支持多种网络,按类型分为计算机视觉、自然语言处理、推荐和图神经网络,可以通过[网络支持](https://www.mindspore.cn/doc/note/zh-CN/r1.1/network_list.html)查看具体支持的网络情况。如果现有网络无法满足用户需求,用户可以根据实际需要定义自己的网络。 -- Python源码构造网络约束:MindSpore暂不支持将任意Python源码转换成计算图,所以对于用户源码支持的写法有所限制,主要包括语法约束和网络定义约束两方面。详细情况可以查看[静态图语法支持](https://www.mindspore.cn/doc/note/zh-CN/master/static_graph_syntax_support.html)了解。随着MindSpore的演进,这些约束可能会发生变化。 +- Python源码构造网络约束:MindSpore暂不支持将任意Python源码转换成计算图,所以对于用户源码支持的写法有所限制,主要包括语法约束和网络定义约束两方面。详细情况可以查看[静态图语法支持](https://www.mindspore.cn/doc/note/zh-CN/r1.1/static_graph_syntax_support.html)了解。随着MindSpore的演进,这些约束可能会发生变化。 -- 算子支持:顾名思义,网络的基础是算子,所以用户自定义训练网络前要对MindSpore当前支持的算子有所了解,可以通过查看[算子支持](https://www.mindspore.cn/doc/note/zh-CN/master/operator_list.html)了解不同的后端(Ascend、GPU和CPU)的算子实现情况。 +- 算子支持:顾名思义,网络的基础是算子,所以用户自定义训练网络前要对MindSpore当前支持的算子有所了解,可以通过查看[算子支持](https://www.mindspore.cn/doc/note/zh-CN/r1.1/operator_list.html)了解不同的后端(Ascend、GPU和CPU)的算子实现情况。 -> 当开发网络遇到内置算子不足以满足需求时,用户也可以参考[自定义算子](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/custom_operator_ascend.html),方便快捷地扩展昇腾AI处理器的自定义算子。 +> 
当开发网络遇到内置算子不足以满足需求时,用户也可以参考[自定义算子](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/custom_operator_ascend.html),方便快捷地扩展昇腾AI处理器的自定义算子。 代码样例如下: @@ -248,7 +248,7 @@ if __name__ == "__main__": print("epoch: {0}/{1}, losses: {2}".format(step + 1, epoch, output.asnumpy(), flush=True)) ``` -> 示例中用到的MNIST数据集的获取方法,可以参照[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)的下载数据集部分,下同。 +> 示例中用到的MNIST数据集的获取方法,可以参照[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)的下载数据集部分,下同。 输出如下: @@ -265,11 +265,11 @@ epoch: 9/10, losses: 2.305952548980713 epoch: 10/10, losses: 1.4282708168029785 ``` -> 典型的使用场景是梯度累积,详细查看[梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_gradient_accumulation.html)。 +> 典型的使用场景是梯度累积,详细查看[梯度累积](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/apply_gradient_accumulation.html)。 ## 边训练边推理 -对于某些数据量较大、训练时间较长的复杂网络,为了能掌握训练的不同阶段模型精度的指标变化情况,可以通过边训练边推理的方式跟踪精度的变化情况。具体可以参考[同步训练和验证模型](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/evaluate_the_model_during_training.html)。 +对于某些数据量较大、训练时间较长的复杂网络,为了能掌握训练的不同阶段模型精度的指标变化情况,可以通过边训练边推理的方式跟踪精度的变化情况。具体可以参考[同步训练和验证模型](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/advanced_use/evaluate_the_model_during_training.html)。 ## on-device执行 diff --git a/install/mindspore_ascend310_install_pip.md b/install/mindspore_ascend310_install_pip.md index 31b43e5a4adba665e47ceb3472b3abad3126fc12..d98abe82571369ec35f1eda5311a40111108ab55 100644 --- a/install/mindspore_ascend310_install_pip.md +++ b/install/mindspore_ascend310_install_pip.md @@ -11,39 +11,39 @@ - + -本文档介绍如何在Ascend 310环境的Linux系统上,使用pip方式快速安装MindSpore。 +本文档介绍如何在Ascend 310环境的Linux系统上,使用pip方式快速安装MindSpore,Ascend 310版本仅支持推理。 ## 确认系统环境信息 -- 确认安装Ubuntu 18.04/CentOS 7.6/EulerOS 2.8是64位操作系统。 -- 确认安装[GCC 7.3.0版本](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz)。 +- 确认安装Ubuntu 18.04/CentOS 8.2/EulerOS 
2.8是64位操作系统。 +- 确认安装正确的[GCC版本](http://ftp.gnu.org/gnu/gcc/):Ubuntu 18.04/EulerOS 2.8用户,GCC>=7.3.0;CentOS 8.2用户,GCC>=8.3.1。 - 确认安装[gmp 6.1.2版本](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz)。 - 确认安装[CMake 3.18.3及以上版本](https://cmake.org/download/)。 - 安装完成后将CMake所在路径添加到系统环境变量。 - 确认安装Python 3.7.5版本。 - 如果未安装或者已安装其他版本的Python,可从[官网](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz)或者[华为云](https://mirrors.huaweicloud.com/python/3.7.5/Python-3.7.5.tgz)下载Python 3.7.5版本 64位,进行安装。 -- 确认安装Ascend 310 AI处理器软件配套包(Atlas Data Center Solution V100R020C10:[A300-3000 1.0.7.SPC103 (aarch64)](https://support.huawei.com/enterprise/zh/ascend-computing/a300-3000-pid-250702915/software/251999079?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702915), [A300-3010 1.0.7.SPC103 (x86_64)](https://support.huawei.com/enterprise/zh/ascend-computing/a300-3010-pid-251560253/software/251894987?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251560253),[CANN V100R020C10](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373))。 +- 确认安装Ascend 310 AI处理器软件配套包([Atlas Intelligent Edge Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-intelligent-edge-solution-pid-251167903/software/251687140))。 - 确认当前用户有权限访问Ascend 310 AI处理器配套软件包的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。 - 需要安装配套GCC 7.3版本的Ascend 310 AI处理器软件配套包。 - 安装Ascend 310 AI处理器配套软件包提供的whl包,whl包随配套软件包发布,升级配套软件包之后需要重新安装。 ```bash - pip install /usr/local/Ascend/atc/lib64/topi-{version}-py3-none-any.whl - pip install /usr/local/Ascend/atc/lib64/te-{version}-py3-none-any.whl + pip install /usr/local/Ascend/ascend-toolkit/latest/atc/lib64/topi-{version}-py3-none-any.whl + pip install /usr/local/Ascend/ascend-toolkit/latest/atc/lib64/te-{version}-py3-none-any.whl ``` ## 安装MindSpore ```bash -pip install 
https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/ascend/{system}/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple +pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/ascend/ascend310/{system}/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple ``` 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 - `{system}`表示系统版本,例如使用的欧拉系统ARM架构,`{system}`应写为`euleros_aarch64`,目前Ascend 310版本可支持以下系统`euleros_aarch64`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`。 @@ -61,13 +61,13 @@ LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package # lib libraries that the run package depends on export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} -# lib libraries that the mindspore depends on +# lib libraries that the mindspore depends on, modify "pip3" according to the actual situation export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH} # Environment variables that must be configured export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE 
operator implementation tool path export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path -export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path +export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/atc/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on ``` @@ -90,7 +90,7 @@ export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} cd /home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample ``` -参照`README.md`说明,构建工程。 +参照`README.md`说明,构建工程,其中`pip3`需要按照实际情况修改。 ```bash cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath` @@ -118,4 +118,4 @@ make 当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 -具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_ascend310_install_pip_en.md b/install/mindspore_ascend310_install_pip_en.md index 340b98c4c12345e8591ad23e8af0046c8a36a891..aac0ab3bfba91fa7feaa1ca79636818ecdaf4781 100644 --- a/install/mindspore_ascend310_install_pip_en.md +++ b/install/mindspore_ascend310_install_pip_en.md @@ -1 +1,121 @@ # Installing MindSpore in Ascend 310 by pip + + + +- [Installing MindSpore in Ascend 310 by pip](#installing-mindspore-in-ascend-310-by-pip) + - [Checking System Environment Information](#checking-system-environment-information) + - [Installing MindSpore](#installing-mindspore) + - [Configuring Environment Variables](#configuring-environment-variables) + - [Verifying the Installation](#verifying-the-installation) + - [Installing MindSpore Serving](#installing-mindspore-serving) + + + + + +The following describes how to quickly install MindSpore by pip on Linux in the 
Ascend 310 environment. MindSpore on Ascend 310 only supports inference. + +## Checking System Environment Information + +- Ensure that the 64-bit Ubuntu 18.04, CentOS 8.2, or EulerOS 2.8 is installed. +- Ensure that the correct [GCC](http://ftp.gnu.org/gnu/gcc/) version is installed: for Ubuntu 18.04 and EulerOS 2.8 users, GCC>=7.3.0; for CentOS 8.2 users, GCC>=8.3.1. +- Ensure that [GMP 6.1.2](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz) is installed. +- Ensure that [CMake 3.18.3 or later](https://cmake.org/download/) is installed. + - After installation, add the path of CMake to the system environment variables. +- Ensure that Python 3.7.5 is installed. + - If Python 3.7.5 (64-bit) is not installed, download it from the [Python official website](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) or [HUAWEI CLOUD](https://mirrors.huaweicloud.com/python/3.7.5/Python-3.7.5.tgz) and install it. +- Ensure that the Ascend 310 AI Processor software packages ([Atlas Intelligent Edge Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-intelligent-edge-solution-pid-251167903/software/251687140)) are installed. + - Ensure that you have permissions to access the installation path `/usr/local/Ascend` of the Ascend 310 AI Processor software package. If not, ask the user root to add you to a user group to which `/usr/local/Ascend` belongs. For details about the configuration, see the description document in the software package. + - Ensure that the Ascend 310 AI Processor software package that matches GCC 7.3 is installed. + - Install the .whl package provided with the Ascend 310 AI Processor software package. The .whl package is released with the software package. After the software package is upgraded, you need to reinstall the .whl package. 
+ + ```bash + pip install /usr/local/Ascend/ascend-toolkit/latest/atc/lib64/topi-{version}-py3-none-any.whl + pip install /usr/local/Ascend/ascend-toolkit/latest/atc/lib64/te-{version}-py3-none-any.whl + ``` + +## Installing MindSpore + +```bash +pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/ascend/ascend310/{system}/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple +``` + +In the preceding information: + +- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the .whl package installation. For details about dependencies, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt). In other cases, install the dependencies by yourself. +- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0. +- `{arch}` specifies the system architecture. For example, if a Linux OS architecture is x86_64, set `{arch}` to `x86_64`. If the system architecture is ARM64, set `{arch}` to `aarch64`. +- `{system}` specifies the system version. For example, if EulerOS ARM64 is used, set `{system}` to `euleros_aarch64`. Currently, Ascend 310 supports the following systems: `euleros_aarch64`, `centos_aarch64`, `centos_x86`, `ubuntu_aarch64`, and `ubuntu_x86`. + +## Configuring Environment Variables + +After MindSpore is installed, export runtime environment variables. In the following command, `/usr/local/Ascend` in `LOCAL_ASCEND=/usr/local/Ascend` indicates the installation path of the software package. Change it to the actual installation path. + +```bash +# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING. 
+export GLOG_v=2 + +# Conda environmental options +LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package + +# lib libraries that the run package depends on +export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} + +# lib libraries that the mindspore depends on, modify "pip3" according to the actual situation +export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH} + +# Environment variables that must be configured +export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path +export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path +export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/atc/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path +export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on +``` + +## Verifying the Installation + +Create a directory to store the sample code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample`. You can obtain the code from the [official website](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/sample_resources/ascend310_single_op_sample.zip). A simple example of adding `[1, 2, 3, 4]` to `[2, 3, 4, 5]` is used and the code project directory structure is as follows: + +```text + +└─ascend310_single_op_sample + ├── CMakeLists.txt // Build script + ├── README.md // Usage description + ├── main.cc // Main function + └── tensor_add.mindir // MindIR model file +``` + +Go to the directory of the sample project and change the path based on the actual requirements. 
+ +```bash +cd /home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample +``` + +Build the project by referring to `README.md`; modify `pip3` according to the actual situation. + +```bash +cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath` +make +``` + +After the build is successful, run the sample. + +```bash +./tensor_add_sample +``` + +The following information is displayed: + +```text +3 +5 +7 +9 +``` + +The preceding information indicates that MindSpore is successfully installed. + +## Installing MindSpore Serving + +If you want to quickly experience the MindSpore online inference service, you can install MindSpore Serving. + +For details, see [MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README.md). diff --git a/install/mindspore_ascend310_install_source.md b/install/mindspore_ascend310_install_source.md index 1eef13b06700ddd4e4d36f254fbb6d1adb82a450..34591eacb8af7b3e430200922e08a7cd0bf3551d 100644 --- a/install/mindspore_ascend310_install_source.md +++ b/install/mindspore_ascend310_install_source.md @@ -13,14 +13,14 @@ - + -本文档介绍如何在Ascend 310环境的Linux系统上,使用源码编译方式快速安装MindSpore。 +本文档介绍如何在Ascend 310环境的Linux系统上,使用源码编译方式快速安装MindSpore,Ascend 310版本仅支持推理。 ## 确认系统环境信息 -- 确认安装Ubuntu 18.04/CentOS 7.6/EulerOS 2.8是64位操作系统。 -- 确认安装[GCC 7.3.0版本](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz)。 +- 确认安装Ubuntu 18.04/CentOS 8.2/EulerOS 2.8是64位操作系统。 +- 确认安装正确的[GCC版本](http://ftp.gnu.org/gnu/gcc/):Ubuntu 18.04/EulerOS 2.8用户,GCC>=7.3.0;CentOS 8.2用户,GCC>=8.3.1。 - 确认安装[gmp 6.1.2版本](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz)。 - 确认安装[Python 3.7.5版本](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz)。 - 确认安装[OpenSSL 1.1.1及以上版本](https://github.com/openssl/openssl.git)。 @@ -30,14 +30,14 @@ - 确认安装[patch 2.5及以上版本](http://ftp.gnu.org/gnu/patch/)。 - 安装完成后将patch所在路径添加到系统环境变量中。 - 确认安装[wheel 
0.32.0及以上版本](https://pypi.org/project/wheel/)。 -- 确认安装Ascend 310 AI处理器软件配套包(Atlas Data Center Solution V100R020C10:[A300-3000 1.0.7.SPC103 (aarch64)](https://support.huawei.com/enterprise/zh/ascend-computing/a300-3000-pid-250702915/software/251999079?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702915), [A300-3010 1.0.7.SPC103 (x86_64)](https://support.huawei.com/enterprise/zh/ascend-computing/a300-3010-pid-251560253/software/251894987?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251560253),[CANN V100R020C10](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373))。 +- 确认安装Ascend 310 AI处理器软件配套包([Atlas Intelligent Edge Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-intelligent-edge-solution-pid-251167903/software/251687140))。 - 确认当前用户有权限访问Ascend 310 AI处理器配套软件包的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。 - 需要安装配套GCC 7.3版本的Ascend 310 AI处理器软件配套包。 - 安装Ascend 310 AI处理器配套软件包提供的whl包,whl包随配套软件包发布,升级配套软件包之后需要重新安装。 ```bash - pip install /usr/local/Ascend/atc/lib64/topi-{version}-py3-none-any.whl - pip install /usr/local/Ascend/atc/lib64/te-{version}-py3-none-any.whl + pip install /usr/local/Ascend/ascend-toolkit/latest/atc/lib64/topi-{version}-py3-none-any.whl + pip install /usr/local/Ascend/ascend-toolkit/latest/atc/lib64/te-{version}-py3-none-any.whl ``` - 确认安装git工具。 @@ -51,7 +51,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -75,8 +75,8 @@ pip install output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl -i htt 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 
在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 ## 配置环境变量 @@ -93,13 +93,13 @@ LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package # lib libraries that the run package depends on export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} -# lib libraries that the mindspore depends on +# lib libraries that the mindspore depends on, modify "pip3" according to the actual situation export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH} # Environment variables that must be configured export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path -export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path +export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/atc/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on ``` @@ -122,7 +122,7 @@ export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} cd /home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample ``` -参照`README.md`说明,构建工程。 +参照`README.md`说明,构建工程,其中`pip3`需要按照实际情况修改。 ```bash cmake . 
-DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath` @@ -150,4 +150,4 @@ make 当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 -具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_ascend310_install_source_en.md b/install/mindspore_ascend310_install_source_en.md index 4827c91e89727c6d3cc3d430ecf786c06fc0fb1a..bd0ff05b67c212b29997d8aea7d84a5b8330b4ac 100644 --- a/install/mindspore_ascend310_install_source_en.md +++ b/install/mindspore_ascend310_install_source_en.md @@ -1 +1,153 @@ -# Installing MindSpore in Ascend 310 by Source Code +# Installing MindSpore in Ascend 310 by Source Code Compilation + + + +- [Installing MindSpore in Ascend 310 by Source Code Compilation](#installing-mindspore-in-ascend-310-by-source-code-compilation) + - [Checking System Environment Information](#checking-system-environment-information) + - [Downloading Source Code from the Code Repository](#downloading-source-code-from-the-code-repository) + - [Building MindSpore](#building-mindspore) + - [Installing MindSpore](#installing-mindspore) + - [Configuring Environment Variables](#configuring-environment-variables) + - [Verifying the Installation](#verifying-the-installation) + - [Installing MindSpore Serving](#installing-mindspore-serving) + + + + + +The following describes how to quickly install MindSpore by compiling the source code on Linux in the Ascend 310 environment, MindSpore in Ascend 310 only supports inference. + +## Checking System Environment Information + +- Ensure that the 64-bit Ubuntu 18.04, CentOS 8.2, or EulerOS 2.8 is installed. +- Ensure that right version [GCC](http://ftp.gnu.org/gnu/gcc/) is installed, for Ubuntu 18.04, EulerOS 2.8 users, GCC>=7.3.0; for CentOS 8.2 users, GCC>=8.3.1 . 
+- Ensure that [GMP 6.1.2](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz) is installed. +- Ensure that [Python 3.7.5](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) is installed. +- Ensure that [OpenSSL 1.1.1 or later](https://github.com/openssl/openssl.git) is installed. + - After installation, set the environment variable `export OPENSSL_ROOT_DIR= "OpenSSL installation directory"`. +- Ensure that [CMake 3.18.3 or later](https://cmake.org/download/) is installed. + - After installation, add the path of CMake to the system environment variables. +- Ensure that [patch 2.5 or later](http://ftp.gnu.org/gnu/patch/) is installed. + - After installation, add the patch path to the system environment variables. +- Ensure that [wheel 0.32.0 or later](https://pypi.org/project/wheel/) is installed. +- Ensure that the Ascend 310 AI Processor software packages ([Atlas Intelligent Edge Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-intelligent-edge-solution-pid-251167903/software/251687140)) are installed. + - Ensure that you have permissions to access the installation path `/usr/local/Ascend` of the Ascend 310 AI Processor software package. If not, ask the user root to add you to a user group to which `/usr/local/Ascend` belongs. For details about the configuration, see the description document in the software package. + - Ensure that the Ascend 310 AI Processor software package that matches GCC 7.3 is installed. + - Install the .whl package provided with the Ascend 310 AI Processor software package. The .whl package is released with the software package. After the software package is upgraded, you need to reinstall the .whl package. + + ```bash + pip install /usr/local/Ascend/ascend-toolkit/latest/atc/lib64/topi-{version}-py3-none-any.whl + pip install /usr/local/Ascend/ascend-toolkit/latest/atc/lib64/te-{version}-py3-none-any.whl + ``` + +- Ensure that the git tool is installed. 
+ If not, run the following command to download and install it: + + ```bash + apt-get install git # ubuntu and so on + yum install git # centos and so on + ``` + +## Downloading Source Code from the Code Repository + +```bash +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 +``` + +## Building MindSpore + +Run the following command in the root directory of the source code. + +```bash +bash build.sh -e ascend -V 310 +``` + +In the preceding information: + +The default number of build threads is 8 in `build.sh`. If the compiler performance is poor, build errors may occur. You can add -j{Number of threads} to script to reduce the number of threads. For example, `bash build.sh -e ascend -V 310 -j4`. + +## Installing MindSpore + +```bash +chmod +x output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl +pip install output/mindspore-ascend-{version}-cp37-cp37m-linux_{arch}.whl -i https://pypi.tuna.tsinghua.edu.cn/simple +``` + +In the preceding information: + +- When the network is connected, dependencies of the MindSpore installation package are automatically downloaded during the .whl package installation. For details about dependencies, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt). In other cases, install the dependencies by yourself. +- `{version}` specifies the MindSpore version number. For example, when installing MindSpore 1.1.0, set `{version}` to 1.1.0. +- `{arch}` specifies the system architecture. For example, if a Linux OS architecture is x86_64, set `{arch}` to `x86_64`. If the system architecture is ARM64, set `{arch}` to `aarch64`. + +## Configuring Environment Variables + +After MindSpore is installed, export runtime environment variables. In the following command, `/usr/local/Ascend` in `LOCAL_ASCEND=/usr/local/Ascend` indicates the installation path of the software package. Change it to the actual installation path. + +```bash +# control log level. 
0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING. +export GLOG_v=2 + +# Conda environmental options +LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package + +# lib libraries that the run package depends on +export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} + +# lib libraries that the mindspore depends on, modify "pip3" according to the actual situation +export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH} + +# Environment variables that must be configured +export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path +export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path +export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/atc/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path +export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on +``` + +## Verifying the Installation + +Create a directory to store the sample code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample`. You can obtain the code from the [official website](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/sample_resources/ascend310_single_op_sample.zip). 
A simple example that adds `[1, 2, 3, 4]` and `[2, 3, 4, 5]` is used, and the code project directory structure is as follows:
+
+```text
+
+└─ascend310_single_op_sample
+    ├── CMakeLists.txt                    // Build script
+    ├── README.md                         // Usage description
+    ├── main.cc                           // Main function
+    └── tensor_add.mindir                 // MindIR model file
+```
+
+Go to the directory of the sample project and change the path based on the actual requirements.
+
+```bash
+cd /home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_single_op_sample
+```
+
+Build the project by referring to `README.md`. Modify `pip3` according to the actual situation.
+
+```bash
+cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`
+make
+```
+
+After the build succeeds, run the sample.
+
+```bash
+./tensor_add_sample
+```
+
+The following information is displayed:
+
+```text
+3
+5
+7
+9
+```
+
+The preceding information indicates that MindSpore is successfully installed.
+
+## Installing MindSpore Serving
+
+If you want to quickly experience the MindSpore online inference service, you can install MindSpore Serving.
+
+For details, see [MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README.md).
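As an aside to the verification step above: the bundled `tensor_add.mindir` model performs an elementwise addition, so the expected console output of `./tensor_add_sample` can be reproduced in plain Python. This stand-in is illustrative only and is not part of the sample project:

```python
# Illustrative stand-in (not part of the sample project): tensor_add.mindir
# performs an elementwise addition, so the expected output of
# ./tensor_add_sample can be reproduced without MindSpore installed.
def tensor_add(a, b):
    """Elementwise addition of two equal-length sequences."""
    return [x + y for x, y in zip(a, b)]

for value in tensor_add([1, 2, 3, 4], [2, 3, 4, 5]):
    print(value)  # prints 3, 5, 7, 9 on separate lines
```

If the sample prints anything other than these four values, the installation or the runtime environment variables should be rechecked.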
diff --git a/install/mindspore_ascend_install_conda.md b/install/mindspore_ascend_install_conda.md deleted file mode 100644 index 9a51ce0891cb3ffc7395657bb12d96f819549107..0000000000000000000000000000000000000000 --- a/install/mindspore_ascend_install_conda.md +++ /dev/null @@ -1,161 +0,0 @@ -# Conda方式安装MindSpore Ascend版本 - - - -- [Conda方式安装MindSpore Ascend版本](#conda方式安装mindspore-ascend版本) - - [确认系统环境信息](#确认系统环境信息) - - [安装Conda](#安装conda) - - [添加Conda镜像源](#添加conda镜像源) - - [创建并激活Conda环境](#创建并激活conda环境) - - [安装MindSpore](#安装mindspore) - - [配置环境变量](#配置环境变量) - - [验证是否成功安装](#验证是否成功安装) - - [升级MindSpore版本](#升级mindspore版本) - - [安装MindInsight](#安装mindinsight) - - [安装MindArmour](#安装mindarmour) - - [安装MindSpore Hub](#安装mindspore-hub) - - [安装MindSpore Serving](#安装mindspore-serving) - - - - - -本文档介绍如何在Ascend 910环境的Linux系统上,使用Conda方式快速安装MindSpore。 - -## 确认系统环境信息 - -- 确认安装Ubuntu 18.04/CentOS 7.6/EulerOS 2.8是64位操作系统。 -- 确认安装Ascend 910 AI处理器软件配套包(Atlas Data Center Solution V100R020C10:[A800-9000 1.0.8 (aarch64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9000-pid-250702818/software/252069004?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702818),[A800-9010 1.0.8 (x86_64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9010-pid-250702809/software/252062130?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702809),[CANN V100R020C10](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373))。 - - 确认当前用户有权限访问Ascend 910 AI处理器配套软件包的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。 - - 安装Ascend 910 AI处理器配套软件包提供的whl包,whl包随配套软件包发布,升级配套软件包之后需要重新安装。 - - ```bash - pip install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/topi-{version}-py3-none-any.whl - pip install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/te-{version}-py3-none-any.whl - pip install 
/usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/hccl-{version}-py3-none-any.whl - ``` - -## 安装Conda - -下载并安装对应架构的Conda安装包。 - -- X86架构 - - 官网下载地址:[X86 Anaconda](https://www.anaconda.com/distribution/) 或 [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html) - - 清华镜像源下载地址:[X86 Anaconda](https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-2020.02-Linux-x86_64.sh) -- ARM架构 - - [ARM Anaconda](https://github.com/Archiconda/build-tools/releases/download/0.2.3/Archiconda3-0.2.3-Linux-aarch64.sh) - -## 添加Conda镜像源 - -从清华源镜像源下载Conda安装包的可跳过此步操作。 - -```bash -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ -conda config --set show_channel_urls yes -``` - -## 创建并激活Conda环境 - -```bash -conda create -n mindspore python=3.7.5 -conda activate mindspore -``` - -## 安装MindSpore - -```bash -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/ascend/{system}/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -其中: - -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 -- `{arch}`表示系统架构,例如使用的系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 -- `{system}`表示系统,例如使用的欧拉系统ARM架构,`{system}`应写为`euleros_aarch64`,目前可支持以下系统`euleros_aarch64`/`euleros_x86`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`。 - -## 配置环境变量 - -**如果Ascend 910 AI处理器配套软件包没有安装在默认路径**,安装好MindSpore之后,需要导出Runtime相关环境变量,下述命令中`LOCAL_ASCEND=/usr/local/Ascend`的`/usr/local/Ascend`表示配套软件包的安装路径,需注意将其改为配套软件包的实际安装路径。 - -```bash -# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING. 
-export GLOG_v=2 - -# Conda environmental options -LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package - -# lib libraries that the run package depends on -export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} - -# Environment variables that must be configured -export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path -export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path -export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path -export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on -``` - -## 验证是否成功安装 - -```python -import numpy as np -from mindspore import Tensor -import mindspore.ops as ops -import mindspore.context as context - -context.set_context(device_target="Ascend") -x = Tensor(np.ones([1,3,3,4]).astype(np.float32)) -y = Tensor(np.ones([1,3,3,4]).astype(np.float32)) -print(ops.tensor_add(x, y)) -``` - -如果输出: - -```text -[[[ 2. 2. 2. 2.], - [ 2. 2. 2. 2.], - [ 2. 2. 2. 2.]], - - [[ 2. 2. 2. 2.], - [ 2. 2. 2. 2.], - [ 2. 2. 2. 2.]], - - [[ 2. 2. 2. 2.], - [ 2. 2. 2. 2.], - [ 2. 2. 2. 
2.]]] -``` - -说明MindSpore安装成功了。 - -## 升级MindSpore版本 - -当需要升级MindSpore版本时,可执行如下命令: - -```bash -pip install --upgrade mindspore-ascend -``` - -## 安装MindInsight - -当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 - -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 - -## 安装MindArmour - -当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 - -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 - -## 安装MindSpore Hub - -当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 - -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 - -## 安装MindSpore Serving - -当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 - -具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 diff --git a/install/mindspore_ascend_install_docker.md b/install/mindspore_ascend_install_docker.md new file mode 100644 index 0000000000000000000000000000000000000000..a0acdc7895d80b682d30a9518fe71fad217b6c52 --- /dev/null +++ b/install/mindspore_ascend_install_docker.md @@ -0,0 +1,130 @@ +# Docker方式安装MindSpore Ascend 910版本 + + + +- [Docker方式安装MindSpore Ascend 910版本](#docker方式安装mindspore-ascend-910版本) + - [确认系统环境信息](#确认系统环境信息) + - [获取MindSpore镜像](#获取mindspore镜像) + - [运行MindSpore镜像](#运行mindspore镜像) + - [验证是否安装成功](#验证是否安装成功) + - [升级MindSpore版本](#升级mindspore版本) + + + + + +[Docker](https://docs.docker.com/get-docker/)是一个开源的应用容器引擎,让开发者打包他们的应用以及依赖包到一个轻量级、可移植的容器中。通过使用Docker,可以实现MindSpore的快速部署,并与系统环境隔离。 + +本文档介绍如何在Ascend 910环境的Linux系统上,使用Docker方式快速安装MindSpore。 + +MindSpore的Ascend 910镜像托管在[Ascend Hub](https://ascend.huawei.com/ascendhub/#/main)上。 + +目前容器化构建选项支持情况如下: + +| 硬件平台 | Docker镜像仓库 | 标签 | 说明 | +| :----- | :------------------------ | :----------------------- | :--------------------------------------- | +| Ascend | `public-ascendhub/ascend-mindspore-arm` | `x.y.z` | 已经预安装与Ascend Data Center Solution `x.y.z` 版本共同发布的MindSpore的生产环境。 | + +> `x.y.z`对应Atlas Data Center Solution版本号,可以在Ascend Hub页面获取。 + 
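The table above names only the repository (`public-ascendhub/ascend-mindspore-{arch}`) and the `x.y.z` tag; the `docker pull` command in the next section uses the full registry path. As an illustration (not part of the original docs; the tag `20.2.0` below is a placeholder, not a real release), the full image reference can be composed as:

```python
# Illustrative helper (not from the docs): compose the full image reference
# used by the documented docker pull command. The registry host and repository
# prefix come from that command; "20.2.0" is a placeholder tag.
REGISTRY = "swr.cn-south-1.myhuaweicloud.com/public-ascendhub"

def image_reference(arch, tag):
    """Build 'ascend-mindspore-{arch}:{tag}' under the public Ascend Hub registry."""
    if arch not in ("x86", "arm"):
        raise ValueError("arch must be 'x86' or 'arm'")
    return f"{REGISTRY}/ascend-mindspore-{arch}:{tag}"

print(image_reference("arm", "20.2.0"))
# swr.cn-south-1.myhuaweicloud.com/public-ascendhub/ascend-mindspore-arm:20.2.0
```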
+## 确认系统环境信息 + +- 确认安装Ubuntu 18.04/CentOS 8.2是64位操作系统。 +- 确认安装[Docker 18.03或更高版本](https://docs.docker.com/get-docker/)。 +- 确认安装Ascend 910 AI处理器软件配套包([Atlas Data Center Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251826872))。 + - 确认当前用户有权限访问Ascend 910 AI处理器配套软件包的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。 + - 在完成安装基础驱动与配套软件包的基础上,确认安装CANN软件包中的toolbox实用工具包,即Ascend-cann-toolbox-{version}.run,该工具包提供了Ascend NPU容器化支持的Ascend Docker runtime工具。 + +## 获取MindSpore镜像 + +1. 登录[Ascend Hub镜像中心](https://ascend.huawei.com/ascendhub/#/home),注册并激活账号,获取登录指令和拉取指令。 +2. 获取下载权限后,进入MindSpore镜像下载页面([x86版本](https://ascend.huawei.com/ascendhub/#/detail?name=ascend-mindspore-x86),[arm版本](https://ascend.huawei.com/ascendhub/#/detail?name=ascend-mindspore-arm)),获取登录与下载指令并执行: + + ```bash + docker login -u {username} -p {password} {url} + docker pull swr.cn-south-1.myhuaweicloud.com/public-ascendhub/ascend-mindspore-{arch}:{tag} + ``` + + 其中: + + - `{username}` `{password}` `{url}` 代表用户的登录信息与镜像服务器信息,均为注册并激活账号后自动生成,在对应MindSpore镜像页面复制登录命令即可获取。 + - `{arch}` 表示系统架构,例如使用的Linux系统是x86架构64位时,{arch}应写为x86。如果系统是ARM架构64位,则写为arm。 + - `{tag}` 对应Atlas Data Center Solution版本号,同样可以在MindSpore镜像下载页面复制下载命令获取。 + +## 运行MindSpore镜像 + +执行以下命令启动Docker容器实例: + +```bash +docker run -it -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \ + -v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons/ \ + -v /var/log/npu/:/usr/slog \ + --device=/dev/davinci0 \ + --device=/dev/davinci1 \ + --device=/dev/davinci2 \ + --device=/dev/davinci3 \ + --device=/dev/davinci4 \ + --device=/dev/davinci5 \ + --device=/dev/davinci6 \ + --device=/dev/davinci7 \ + --device=/dev/davinci_manager \ + --device=/dev/devmm_svm \ + --device=/dev/hisi_hdc \ + swr.cn-south-1.myhuaweicloud.com/public-ascendhub/ascend-mindspore-{arch}:{tag} \ + /bin/bash +``` + +其中: + +- `{arch}` 
表示系统架构,例如使用的Linux系统是x86架构64位时,{arch}应写为x86。如果系统是ARM架构64位,则写为arm。 +- `{tag}`对应Atlas Data Center Solution版本号,在MindSpore镜像下载页面自动获取。 + +## 验证是否安装成功 + +按照上述步骤进入MindSpore容器后,测试Docker容器是否正常工作,请运行下面的Python代码并检查输出: + +```python +import numpy as np +from mindspore import Tensor +import mindspore.ops as ops +import mindspore.context as context + +context.set_context(device_target="Ascend") +x = Tensor(np.ones([1,3,3,4]).astype(np.float32)) +y = Tensor(np.ones([1,3,3,4]).astype(np.float32)) +print(ops.tensor_add(x, y)) +``` + +代码成功运行时会输出: + +```text +[[[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]]] +``` + +至此,你已经成功通过Docker方式安装了MindSpore Ascend 910版本。 + +## 升级MindSpore版本 + +当需要升级MindSpore版本时: + +- 根据需要升级的MindSpore版本,升级对应的Ascend 910 AI处理器软件配套包。 +- 再次登录[Ascend Hub镜像中心](https://ascend.huawei.com/ascendhub/#/home)获取最新docker版本的下载命令,并执行: + + ```bash + docker pull swr.cn-south-1.myhuaweicloud.com/public-ascendhub/ascend-mindspore-{arch}:{tag} + ``` + + 其中: + + - `{arch}` 表示系统架构,例如使用的Linux系统是x86架构64位时,{arch}应写为x86。如果系统是ARM架构64位,则写为arm。 + - `{tag}`对应Atlas Data Center Solution版本号,同样可以在MindSpore镜像下载页面自动获取。 diff --git a/install/mindspore_ascend_install_docker_en.md b/install/mindspore_ascend_install_docker_en.md new file mode 100644 index 0000000000000000000000000000000000000000..f4d192e2c0c231f3c537bc92d9598773c6876c08 --- /dev/null +++ b/install/mindspore_ascend_install_docker_en.md @@ -0,0 +1,130 @@ +# Installing MindSpore in Ascend 910 by Docker + + + +- [Installing MindSpore in Ascend 910 by Docker](#installing-mindspore-in-ascend-910-by-docker) + - [System Environment Information Confirmation](#system-environment-information-confirmation) + - [Obtaining MindSpore Image](#obtaining-mindspore-image) + - [Running MindSpore Image](#running-mindspore-image) + - [Installation Verification](#installation-verification) + - [Version Update](#version-update) + + + + + 
+[Docker](https://docs.docker.com/get-docker/) is an open-source application container engine; with it, developers can package their applications and dependencies into a lightweight, portable container. By using Docker, MindSpore can be rapidly deployed and isolated from the system environment.
+
+This document describes how to quickly install MindSpore in a Linux system with an Ascend 910 environment by Docker.
+
+The Ascend 910 image of MindSpore is hosted on the [Ascend Hub](https://ascend.huawei.com/ascendhub/#/main).
+
+The current support for containerized build options is as follows:
+
+| Hardware | Docker Image Hub | Label | Note |
+| :----- | :------------------------ | :----------------------- | :--------------------------------------- |
+| Ascend | `public-ascendhub/ascend-mindspore-arm` | `x.y.z` | The production environment of MindSpore released together with the Ascend Data Center Solution `x.y.z` version is pre-installed. |
+
+> `x.y.z` corresponds to the version number of Atlas Data Center Solution, which can be obtained on the Ascend Hub page.
+
+## System Environment Information Confirmation
+
+- Confirm that 64-bit Ubuntu 18.04 or CentOS 8.2 is installed.
+- Confirm that [Docker 18.03 or later](https://docs.docker.com/get-docker/) is installed.
+- Confirm that the Ascend 910 AI processor software package ([Atlas Data Center Solution V100R020C20](https://support.huawei.com/enterprise/en/ascend-computing/atlas-data-center-solution-pid-251167910/software/251826872)) is installed.
+    - Confirm that the current user has the right to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package. If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, please refer to the software package instruction document.
+ - After installing basic driver and corresponding software packages, confirm that the toolbox utility package in the CANN software package is installed, namely Ascend-cann-toolbox-{version}.run. The toolbox provides Ascend Docker runtime tools supported by Ascend NPU containerization. + +## Obtaining MindSpore Image + +1. Log in to [Ascend Hub Image Center](https://ascend.huawei.com/ascendhub/#/home), register and activate an account, get login instructions and pull instructions. +2. After obtaining the download permission, enter the MindSpore image download page ([x86 version](https://ascend.huawei.com/ascendhub/#/detail?name=ascend-mindspore-x86), [arm version](https://ascend.huawei.com/ascendhub/#/detail?name=ascend-mindspore-arm)). Get login and download commands and execute: + + ```bash + docker login -u {username} -p {password} {url} + docker pull swr.cn-south-1.myhuaweicloud.com/public-ascendhub/ascend-mindspore-{arch}:{tag} + ``` + + of which, + + - `{username}` `{password}` `{url}` represents the user's login information and image server information, which are automatically generated after registering and activating the account, and can be obtained by copying the login command on the corresponding MindSpore image page. + - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, {arch} should be x86. If the system is ARM architecture 64-bit, then it should be arm. + - `{tag}` corresponds to the version number of Atlas Data Center Solution, which can also be obtained by copying the download command on the MindSpore image download page. 
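The `{arch}` placeholder above follows a fixed convention: an x86 architecture 64-bit host uses `x86`, and an ARM architecture 64-bit host uses `arm`. A small sketch of that mapping (the `platform.machine()` default is an assumption for local convenience, not something the docs prescribe):

```python
import platform

# Sketch: derive the {arch} placeholder used in the image name from the host
# machine type, per the rule above: x86-64 -> "x86", ARM64 -> "arm".
# The AMD64/arm64 aliases are assumptions to cover common platform spellings.
_ARCH_MAP = {"x86_64": "x86", "AMD64": "x86", "aarch64": "arm", "arm64": "arm"}

def docker_arch(machine=None):
    machine = machine or platform.machine()
    if machine not in _ARCH_MAP:
        raise ValueError(f"unsupported machine type: {machine}")
    return _ARCH_MAP[machine]

print(docker_arch("x86_64"))   # x86
print(docker_arch("aarch64"))  # arm
```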
+
+## Running MindSpore Image
+
+Execute the following command to start the Docker container instance:
+
+```bash
+docker run -it -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
+               -v /usr/local/Ascend/add-ons/:/usr/local/Ascend/add-ons/ \
+               -v /var/log/npu/:/usr/slog \
+               --device=/dev/davinci0 \
+               --device=/dev/davinci1 \
+               --device=/dev/davinci2 \
+               --device=/dev/davinci3 \
+               --device=/dev/davinci4 \
+               --device=/dev/davinci5 \
+               --device=/dev/davinci6 \
+               --device=/dev/davinci7 \
+               --device=/dev/davinci_manager \
+               --device=/dev/devmm_svm \
+               --device=/dev/hisi_hdc \
+               swr.cn-south-1.myhuaweicloud.com/public-ascendhub/ascend-mindspore-{arch}:{tag} \
+               /bin/bash
+```
+
+of which,
+
+- `{arch}` denotes the system architecture. For example, if the Linux system you are using is x86 architecture 64-bit, {arch} should be x86. If the system is ARM architecture 64-bit, then it should be arm.
+- `{tag}` corresponds to the version number of Atlas Data Center Solution, which can be automatically obtained on the MindSpore image download page.
+
+## Installation Verification
+
+After entering the MindSpore container by following the preceding steps, run the following Python code and check the output to verify that the Docker container is working properly:
+
+```python
+import numpy as np
+from mindspore import Tensor
+import mindspore.ops as ops
+import mindspore.context as context
+
+context.set_context(device_target="Ascend")
+x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
+y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
+print(ops.tensor_add(x, y))
+```
+
+The outputs should be the same as:
+
+```text
+[[[ 2. 2. 2. 2.],
+  [ 2. 2. 2. 2.],
+  [ 2. 2. 2. 2.]],
+
+ [[ 2. 2. 2. 2.],
+  [ 2. 2. 2. 2.],
+  [ 2. 2. 2. 2.]],
+
+ [[ 2. 2. 2. 2.],
+  [ 2. 2. 2. 2.],
+  [ 2. 2. 2. 2.]]]
+```
+
+This means that MindSpore has been successfully installed in the Docker container.
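The verification snippet adds two all-ones `float32` tensors of shape `[1, 3, 3, 4]`, so every one of the 36 printed elements must equal 2. A stdlib-only sanity sketch of what that check amounts to (no MindSpore or Ascend hardware required; this only mirrors the arithmetic, not the runtime):

```python
# Stdlib-only sanity sketch (no MindSpore or Ascend hardware needed): adding
# two all-ones tensors of shape [1, 3, 3, 4] must yield 36 elements equal to 2.0.
def ones(shape):
    """Nested-list stand-in for np.ones(shape)."""
    return 1.0 if not shape else [ones(shape[1:]) for _ in range(shape[0])]

def add(a, b):
    """Elementwise addition over nested lists, mirroring ops.tensor_add."""
    return [add(x, y) for x, y in zip(a, b)] if isinstance(a, list) else a + b

def flatten(t):
    if isinstance(t, list):
        for item in t:
            yield from flatten(item)
    else:
        yield t

result = add(ones([1, 3, 3, 4]), ones([1, 3, 3, 4]))
values = list(flatten(result))
assert len(values) == 1 * 3 * 3 * 4 and all(v == 2.0 for v in values)
print("OK: all", len(values), "elements equal 2.0")
```

If the container prints values other than 2, the device mounts or the Ascend driver mapping in the `docker run` command should be rechecked.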
+ +## Version Update + +When you need to update the MindSpore version: + +- update Ascend 910 AI processor software package according to MindSpore package version of which you wish to update. +- log in to [Ascend Hub Image Center](https://ascend.huawei.com/ascendhub/#/home) again to obtain the download command of the latest docker version and execute: + + ```bash + docker pull swr.cn-south-1.myhuaweicloud.com/public-ascendhub/ascend-mindspore-{arch}:{tag} + ``` + + of which, + + - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, {arch} should be x86. If the system is ARM architecture 64-bit, then it should be arm. + - `{tag}` corresponds to the version number of Atlas Data Center Solution, which can also be obtained by copying the download command on the MindSpore image download page. diff --git a/install/mindspore_ascend_install_pip.md b/install/mindspore_ascend_install_pip.md index bab51e4049fcfcfd926b85970e3ed9e01d102752..0b33218da55600c258ee411d4f25e53d6fa5e1e2 100644 --- a/install/mindspore_ascend_install_pip.md +++ b/install/mindspore_ascend_install_pip.md @@ -1,8 +1,8 @@ -# pip方式安装MindSpore Ascend版本 +# pip方式安装MindSpore Ascend 910版本 -- [pip方式安装MindSpore Ascend版本](#pip方式安装mindspore-ascend版本) +- [pip方式安装MindSpore Ascend 910版本](#pip方式安装mindspore-ascend-910版本) - [确认系统环境信息](#确认系统环境信息) - [安装MindSpore](#安装mindspore) - [配置环境变量](#配置环境变量) @@ -15,20 +15,20 @@ - + 本文档介绍如何在Ascend 910环境的Linux系统上,使用pip方式快速安装MindSpore。 ## 确认系统环境信息 -- 确认安装Ubuntu 18.04/CentOS 7.6/EulerOS 2.8是64位操作系统。 -- 确认安装[GCC 7.3.0版本](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz)。 +- 确认安装Ubuntu 18.04/CentOS 8.2/EulerOS 2.8是64位操作系统。 +- 确认安装正确[GCC 版本](http://ftp.gnu.org/gnu/gcc/),Ubuntu 18.04/EulerOS 2.8用户,GCC>=7.3.0;CentOS 8.2用户 GCC>=8.3.1。 - 确认安装[gmp 6.1.2版本](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz)。 -- 确认安装Python 3.7.5版本。 +- 确认安装Python 3.7.5版本。 - 
如果未安装或者已安装其他版本的Python,可从[官网](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz)或者[华为云](https://mirrors.huaweicloud.com/python/3.7.5/Python-3.7.5.tgz)下载Python 3.7.5版本 64位,进行安装。 -- 确认安装Ascend 910 AI处理器软件配套包(Atlas Data Center Solution V100R020C10:[A800-9000 1.0.8 (aarch64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9000-pid-250702818/software/252069004?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702818), [A800-9010 1.0.8 (x86_64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9010-pid-250702809/software/252062130?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702809),[CANN V100R020C10](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373))。 +- 确认安装Ascend 910 AI处理器软件配套包([Atlas Data Center Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251826872))。 - 确认当前用户有权限访问Ascend 910 AI处理器配套软件包的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。 - - 安装Ascend 910 AI处理器配套软件包提供的whl包,whl包随配套软件包发布,升级配套软件包之后需要重新安装。 + - 安装Ascend 910 AI处理器配套软件包提供的whl包,whl包随配套软件包发布,参考如下命令完成安装。 ```bash pip install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/topi-{version}-py3-none-any.whl @@ -36,6 +36,12 @@ pip install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/hccl-{version}-py3-none-any.whl ``` + - 如果升级了Ascend 910 AI处理器配套软件包,配套的whl包也需要重新安装,先将原来的安装包卸载,再参考上述命令重新安装。 + + ```bash + pip uninstall te topi hccl -y + ``` + ## 安装MindSpore ```bash @@ -44,10 +50,10 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 
在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 -- `{system}`表示系统版本,例如使用的欧拉系统ARM架构,`{system}`应写为`euleros_aarch64`,目前Ascend版本可支持以下系统`euleros_aarch64`/`euleros_x86`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`。 +- `{system}`表示系统版本,例如使用的欧拉系统ARM架构,`{system}`应写为`euleros_aarch64`,目前Ascend版本可支持以下系统`euleros_aarch64`/`centos_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`。 ## 配置环境变量 @@ -61,7 +67,7 @@ export GLOG_v=2 LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package # lib libraries that the run package depends on -export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} +export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} # Environment variables that must be configured export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path @@ -114,22 +120,22 @@ pip install --upgrade mindspore-ascend 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 ## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore 
Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。
+具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。

## 安装MindSpore Serving

当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。

-具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。
+具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。
diff --git a/install/mindspore_ascend_install_pip_en.md b/install/mindspore_ascend_install_pip_en.md
index 60d48e276d24dab98436b75084a740bbdec1e900..80f5b396a707f3fbc3e838ffdad6caf8a2902c3d 100644
--- a/install/mindspore_ascend_install_pip_en.md
+++ b/install/mindspore_ascend_install_pip_en.md
@@ -1,8 +1,8 @@
-# Installing MindSpore in Ascend by pip
+# Installing MindSpore in Ascend 910 by pip

-- [Installing MindSpore in Ascend by pip](#installing-mindspore-in-ascend-by-pip)
+- [Installing MindSpore in Ascend 910 by pip](#installing-mindspore-in-ascend-910-by-pip)
    - [System Environment Information Confirmation](#system-environment-information-confirmation)
    - [Installing MindSpore](#installing-mindspore)
    - [Configuring Environment Variables](#configuring-environment-variables)
@@ -15,18 +15,18 @@
-
+

This document describes how to quickly install MindSpore in a Linux system with an Ascend 910 environment by pip.

## System Environment Information Confirmation

-- Confirm that Ubuntu 18.04/CentOS 7.6/EulerOS 2.8 is installed with the 64-bit operating system.
-- Confirm that [GCC 7.3.0](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz) is installed.
+- Confirm that Ubuntu 18.04/CentOS 8.2/EulerOS 2.8 is installed with the 64-bit operating system.
+- Ensure that the right [GCC](http://ftp.gnu.org/gnu/gcc/) version is installed: GCC>=7.3.0 for Ubuntu 18.04/EulerOS 2.8 users, and GCC>=8.3.1 for CentOS 8.2 users.
- Confirm that [gmp 6.1.2](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz) is installed.
- Confirm that Python 3.7.5 is installed.
- If you didn't install Python or you have installed other versions, please download the Python 3.7.5 64-bit from [Python](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) or [Huaweicloud](https://mirrors.huaweicloud.com/python/3.7.5/Python-3.7.5.tgz) to install.
-- Confirm that the Ascend 910 AI processor software package (Atlas Data Center Solution V100R020C10:[A800-9000 1.0.8 (aarch64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9000-pid-250702818/software/252069004?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702818), [A800-9010 1.0.8 (x86_64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9010-pid-250702809/software/252062130?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702809), [CANN V100R020C10](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373)) are installed.
+- Confirm that the Ascend 910 AI processor software package ([Atlas Data Center Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251826872)) is installed.
- Confirm that the current user has the right to access the installation path `/usr/local/Ascend`of Ascend 910 AI processor software package, If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, please refer to the software package instruction document.
    - Install the .whl package provided in Ascend 910 AI processor software package. The .whl package is released with the software package. After software package is upgraded, reinstall the .whl package.
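Before reinstalling the toolkit-provided wheels mentioned above, it can help to see which of them are already present in the current Python environment. A minimal sketch; the `toolkit_pkgs` helper is hypothetical, not part of the Ascend package:

```bash
# toolkit_pkgs: filter a `pip list`-style listing down to the wheels
# shipped with the Ascend toolkit (te, topi, hccl).
toolkit_pkgs() {
  grep -Ei '^(te|topi|hccl) ' || true
}

# typical use: pip list | toolkit_pkgs
```

Piping `pip list` through it before an upgrade shows exactly which of the three wheels will need to be uninstalled and reinstalled.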
@@ -36,6 +36,12 @@ This document describes how to quickly install MindSpore in a Linux system with
 pip install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/hccl-{version}-py3-none-any.whl
 ```

+    - If the Ascend 910 AI processor software package is upgraded, the .whl packages need to be updated as well: uninstall them first with the following command, and then reinstall them as shown above.
+
+      ```bash
+      pip uninstall te topi hccl -y
+      ```
+
 ## Installing MindSpore

 ```bash
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp

Of which,

-- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`.
-- `{system}` denotes the system version. For example, if you are using EulerOS ARM architecture, `{system}` should be `euleros_aarch64`. Currently, the following systems are supported by Ascend: `euleros_aarch64`/`euleros_x86`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`.
+- `{system}` denotes the system version. For example, if you are using EulerOS ARM architecture, `{system}` should be `euleros_aarch64`. Currently, the following systems are supported by Ascend: `euleros_aarch64`/`centos_x86`/`ubuntu_aarch64`/`ubuntu_x86`. ## Configuring Environment Variables @@ -61,7 +67,7 @@ Of which, LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package # lib libraries that the run package depends on - export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} + export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} # Environment variables that must be configured export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path @@ -117,22 +123,22 @@ pip install --upgrade mindspore-ascend If you need to analyze information such as model scalars, graphs, computation graphs and model traceback, you can install MindInsight. -For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README.md). +For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README.md). ## Installing MindArmour If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. 
-For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). ## Installing MindSpore Serving If you need to access and experience MindSpore online inference services quickly, you can install MindSpore Serving. -For more details, please refer to [MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README.md). +For more details, please refer to [MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README.md). diff --git a/install/mindspore_ascend_install_source.md b/install/mindspore_ascend_install_source.md index 721e2ea9e71a91ecda3d962a6c7447b9490cede0..28b3f1d3f9ffa4502b5e235143512da4ddc9c34d 100644 --- a/install/mindspore_ascend_install_source.md +++ b/install/mindspore_ascend_install_source.md @@ -1,8 +1,8 @@ -# 源码编译方式安装MindSpore Ascend版本 +# 源码编译方式安装MindSpore Ascend 910版本 -- [源码编译方式安装MindSpore Ascend版本](#源码编译方式安装mindspore-ascend版本) +- [源码编译方式安装MindSpore Ascend 910版本](#源码编译方式安装mindspore-ascend-910版本) - [确认系统环境信息](#确认系统环境信息) - [从代码仓下载源码](#从代码仓下载源码) - [编译MindSpore](#编译mindspore) @@ -17,14 +17,14 @@ - + 本文档介绍如何在Ascend 910环境的Linux系统上,使用源码编译方式快速安装MindSpore。 ## 确认系统环境信息 -- 确认安装Ubuntu 18.04/CentOS 7.6/EulerOS 2.8是64位操作系统。 -- 确认安装[GCC 7.3.0版本](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz)。 +- 确认安装Ubuntu 18.04/CentOS 8.2/EulerOS 2.8是64位操作系统。 +- 确认安装正确[GCC 版本](http://ftp.gnu.org/gnu/gcc/),Ubuntu 18.04/EulerOS 2.8用户,GCC>=7.3.0;CentOS 8.2用户 GCC>=8.3.1。 - 确认安装[gmp 6.1.2版本](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz)。 - 确认安装[Python 3.7.5版本](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz)。 - 确认安装[OpenSSL 1.1.1及以上版本](https://github.com/openssl/openssl.git)。 @@ -34,9 +34,9 @@ - 确认安装[patch 2.5及以上版本](http://ftp.gnu.org/gnu/patch/)。 - 安装完成后将patch所在路径添加到系统环境变量中。 - 确认安装[wheel 0.32.0及以上版本](https://pypi.org/project/wheel/)。 -- 确认安装Ascend 910 AI处理器软件配套包(Atlas Data Center Solution 
V100R020C10:[A800-9000 1.0.8 (aarch64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9000-pid-250702818/software/252069004?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702818),[A800-9010 1.0.8 (x86_64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9010-pid-250702809/software/252062130?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702809),[CANN V100R020C10](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373))。 +- 确认安装Ascend 910 AI处理器软件配套包([Atlas Data Center Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251826872))。 - 确认当前用户有权限访问Ascend 910 AI处理器配套软件包的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。 - - 安装Ascend 910 AI处理器配套软件包提供的whl包,whl包随配套软件包发布,升级配套软件包之后需要重新安装。 + - 安装Ascend 910 AI处理器配套软件包提供的whl包,whl包随配套软件包发布,参考如下命令完成安装。 ```bash pip install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/topi-{version}-py3-none-any.whl @@ -44,6 +44,12 @@ pip install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/hccl-{version}-py3-none-any.whl ``` + - 如果升级了Ascend 910 AI处理器配套软件包,配套的whl包也需要重新安装,先将原来的安装包卸载,再参考上述命令重新安装。 + + ```bash + pip uninstall te topi hccl -y + ``` + - 确认安装[NUMA 2.0.11及以上版本](https://github.com/numactl/numactl)。 Ubuntu系统用户,如果未安装,使用如下命令下载安装: @@ -74,7 +80,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -97,8 +103,8 @@ pip install build/package/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 
在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 ## 配置环境变量 @@ -113,7 +119,7 @@ export GLOG_v=2 LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package # lib libraries that the run package depends on -export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} +export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} # Environment variables that must be configured export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path @@ -176,22 +182,22 @@ print(ops.tensor_add(x, y)) 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 ## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 ## 安装MindSpore Serving 当您想要快速体验MindSpore在线推理服务时,可以选装MindSpore Serving。 -具体安装步骤参见[MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore 
Serving](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md)。
diff --git a/install/mindspore_ascend_install_source_en.md b/install/mindspore_ascend_install_source_en.md
index 7135e8f030b7cb562c0c42be061a1746cef504e8..2bf40dc084de5411015ac4b31cccbd0754e4a788 100644
--- a/install/mindspore_ascend_install_source_en.md
+++ b/install/mindspore_ascend_install_source_en.md
@@ -1,8 +1,8 @@
-# Installing MindSpore in Ascend by Source Code
+# Installing MindSpore in Ascend 910 by Source Code

-- [Installing MindSpore in Ascend by Source Code](#installing-mindspore-in-ascend-by-source-code)
+- [Installing MindSpore in Ascend 910 by Source Code](#installing-mindspore-in-ascend-910-by-source-code)
    - [System Environment Information Confirmation](#system-environment-information-confirmation)
    - [Downloading Source Code from Code Repository](#downloading-source-code-from-code-repository)
    - [Compiling MindSpore](#compiling-mindspore)
@@ -17,14 +17,14 @@
-
+

This document describes how to quickly install MindSpore in a Linux system with an Ascend 910 environment by source code.

## System Environment Information Confirmation

-- Confirm that Ubuntu 18.04/CentOS 7.6/EulerOS 2.8 is installed with the 64-bit operating system.
-- Confirm that [GCC 7.3.0](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz) is installed.
+- Confirm that Ubuntu 18.04/CentOS 8.2/EulerOS 2.8 is installed with the 64-bit operating system.
+- Ensure that the right [GCC](http://ftp.gnu.org/gnu/gcc/) version is installed: GCC>=7.3.0 for Ubuntu 18.04/EulerOS 2.8 users, and GCC>=8.3.1 for CentOS 8.2 users.
- Confirm that [gmp 6.1.2](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz) is installed.
- Confirm that [Python 3.7.5](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) is installed.
- Confirm that [OpenSSL 1.1.1 or later](https://github.com/openssl/openssl.git) is installed.
@@ -34,7 +34,7 @@ This document describes how to quickly install MindSpore in a Linux system with
 - Confirm that [patch 2.5 or later](http://ftp.gnu.org/gnu/patch/) is installed.
    - Add the path where the executable file `patch` stores to the environment variable PATH.
 - Confirm that [wheel 0.32.0 or later](https://pypi.org/project/wheel/) is installed.
-- Confirm that the Ascend 910 AI processor software package (Atlas Data Center Solution V100R020C10:[A800-9000 1.0.8 (aarch64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9000-pid-250702818/software/252069004?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702818), [A800-9010 1.0.8 (x86_64)](https://support.huawei.com/enterprise/zh/ascend-computing/a800-9010-pid-250702809/software/252062130?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C250702809), [CANN V100R020C10](https://support.huawei.com/enterprise/zh/ascend-computing/cann-pid-251168373/software/251174283?idAbsPath=fixnode01%7C23710424%7C251366513%7C22892968%7C251168373)) are installed.
+- Confirm that the Ascend 910 AI processor software package ([Atlas Data Center Solution V100R020C20](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251826872)) is installed.
- Confirm that the current user has the right to access the installation path `/usr/local/Ascend`of Ascend 910 AI processor software package, If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, please refer to the software package instruction document.
    - Install the .whl package provided in Ascend 910 AI processor software package. The .whl package is released with the software package. After software package is upgraded, reinstall the .whl package.
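The distro-specific GCC minimums this document requires (7.3.0 on Ubuntu 18.04/EulerOS 2.8, 8.3.1 on CentOS 8.2) can be checked mechanically with a version-aware comparison. A sketch, assuming GNU `sort -V` is available; the `gcc_ok` helper is illustrative, not an official tool:

```bash
# gcc_ok INSTALLED MINIMUM -> exit 0 when INSTALLED >= MINIMUM.
# sort -V orders dotted version strings numerically, so if the minimum
# sorts first, the installed version is at least the minimum.
gcc_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# e.g. on CentOS 8.2:  gcc_ok "$(gcc -dumpfullversion)" 8.3.1 && echo "GCC OK"
```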
@@ -44,6 +44,12 @@ This document describes how to quickly install MindSpore in a Linux system with
 pip install /usr/local/Ascend/ascend-toolkit/latest/fwkacllib/lib64/hccl-{version}-py3-none-any.whl
 ```

+    - If the Ascend 910 AI processor software package is upgraded, the .whl packages need to be updated as well: uninstall them first with the following command, and then reinstall them as shown above.
+
+      ```bash
+      pip uninstall te topi hccl -y
+      ```
+
 - Confirm that [NUMA 2.0.11 or later](https://github.com/numactl/numactl) is installed.
    If not, for Ubuntu users, use the following command to install it:
@@ -75,7 +81,7 @@ This document describes how to quickly install MindSpore in a Linux system with
 ## Downloading Source Code from Code Repository

 ```bash
-git clone https://gitee.com/mindspore/mindspore.git
+git clone https://gitee.com/mindspore/mindspore.git -b r1.1
 ```

 ## Compiling MindSpore
@@ -99,8 +105,8 @@ pip install build/package/mindspore_ascend-{version}-cp37-cp37m-linux_{arch}.whl

 Of which,

-- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items.
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1.
+- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items.
+- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0.
- `{arch}` denotes the system architecture. 
For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. ## Configuring Environment Variables @@ -115,7 +121,7 @@ Of which, LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package # lib libraries that the run package depends on - export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} + export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH} # Environment variables that must be configured export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path @@ -180,22 +186,22 @@ Using the following command if you need to update the MindSpore version. If you need to analyze information such as model scalars, graphs, computation graphs and model traceback, you can install MindInsight. -For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README.md). +For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README.md). ## Installing MindArmour If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. 
-For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). ## Installing MindSpore Serving If you need to access and experience MindSpore online inference services quickly, you can install MindSpore Serving. -For more details, please refer to [MindSpore Serving](https://gitee.com/mindspore/serving/blob/master/README.md). +For more details, please refer to [MindSpore Serving](https://gitee.com/mindspore/serving/blob/r1.1/README.md). diff --git a/install/mindspore_cpu_install_conda.md b/install/mindspore_cpu_install_conda.md deleted file mode 100644 index 27d191004f359d55ac46008f27ba5c2d07ca7e9f..0000000000000000000000000000000000000000 --- a/install/mindspore_cpu_install_conda.md +++ /dev/null @@ -1,91 +0,0 @@ -# Conda方式安装MindSpore CPU版本 - - - -- [Conda方式安装MindSpore CPU版本](#conda方式安装mindspore-cpu版本) - - [确认系统环境信息](#确认系统环境信息) - - [安装Conda](#安装conda) - - [添加Conda镜像源](#添加conda镜像源) - - [创建并激活Conda环境](#创建并激活conda环境) - - [安装MindSpore](#安装mindspore) - - [验证安装是否成功](#验证安装是否成功) - - [升级MindSpore版本](#升级mindspore版本) - - [安装MindArmour](#安装mindarmour) - - [安装MindSpore Hub](#安装mindspore-hub) - - - - - -本文档介绍如何在CPU环境的Linux系统上,使用Conda方式快速安装MindSpore。 - -## 确认系统环境信息 - -- 确认安装Ubuntu 18.04是64位操作系统。 -- 确认安装[gmp 6.1.2版本](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz)。 - -## 安装Conda - -下载并安装对应架构的Conda安装包。 - -- 官网下载地址:[X86 Anaconda](https://www.anaconda.com/distribution/) 或 [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html) - -- 清华镜像源下载地址:[X86 Anaconda](https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-2020.02-Linux-x86_64.sh) - -## 添加Conda镜像源 - -从清华源镜像源下载Conda安装包的可跳过此步操作。 - -```bash -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ -conda config --set show_channel_urls yes -``` - -## 
创建并激活Conda环境 - -```bash -conda create -n mindspore python=3.7.5 -conda activate mindspore -``` - -## 安装MindSpore - -```bash -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/cpu/{system}/mindspore-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -其中: - -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 -- `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 -- `{system}`表示系统,例如使用的Ubuntu系统X86架构,`{system}`应写为`ubuntu_x86`,目前CPU版本可支持以下系统`ubuntu_aarch64`/`ubuntu_x86`。 - -## 验证安装是否成功 - -```bash -python -c "import mindspore;print(mindspore.__version__)" -``` - -如果输出MindSpore版本号,说明MindSpore安装成功了,如果输出`No module named 'mindspore'`说明未成功安装。 - -## 升级MindSpore版本 - -当需要升级MindSpore版本时,可执行如下命令: - -```bash -pip install --upgrade mindspore -``` - -## 安装MindArmour - -当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 - -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 - -## 安装MindSpore Hub - -当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 - -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 diff --git a/install/mindspore_cpu_install_docker.md b/install/mindspore_cpu_install_docker.md new file mode 100644 index 0000000000000000000000000000000000000000..dbeae41c1842fe2ea27a833891d2a3c9e6232dde --- /dev/null +++ b/install/mindspore_cpu_install_docker.md @@ -0,0 +1,105 @@ +# Docker方式安装MindSpore CPU版本 + + + +- [Docker方式安装MindSpore CPU版本](#docker方式安装mindspore-cpu版本) + - [确认系统环境信息](#确认系统环境信息) + - [获取MindSpore镜像](#获取mindspore镜像) + - [运行MindSpore镜像](#运行mindspore镜像) + - [验证是否安装成功](#验证是否安装成功) + + + + + 
+[Docker](https://docs.docker.com/get-docker/)是一个开源的应用容器引擎,让开发者打包他们的应用以及依赖包到一个轻量级、可移植的容器中。通过使用Docker,可以实现MindSpore的快速部署,并与系统环境隔离。 + +本文档介绍如何在CPU环境的Linux系统上,使用Docker方式快速安装MindSpore。 + +MindSpore的Docker镜像托管在[Huawei SWR](https://support.huaweicloud.com/swr/index.html)上。 + +目前容器化构建选项支持情况如下: + +| 硬件平台 | Docker镜像仓库 | 标签 | 说明 | +| :----- | :------------------------ | :----------------------- | :--------------------------------------- | +| CPU | `mindspore/mindspore-cpu` | `x.y.z` | 已经预安装MindSpore `x.y.z` CPU版本的生产环境。 | +| | | `devel` | 提供开发环境从源头构建MindSpore(`CPU`后端)。安装详情请参考 。 | +| | | `runtime` | 提供运行时环境,未安装MindSpore二进制包(`CPU`后端)。 | + +> `x.y.z`对应MindSpore版本号,例如安装1.1.0版本MindSpore时,`x.y.z`应写为1.1.0。 + +## 确认系统环境信息 + +- 确认安装Ubuntu 18.04是基于x86架构的64位操作系统。 +- 确认安装[Docker 18.03或者更高版本](https://docs.docker.com/get-docker/)。 + +## 获取MindSpore镜像 + +对于`CPU`后端,可以直接使用以下命令获取最新的稳定镜像: + +```bash +docker pull swr.cn-south-1.myhuaweicloud.com/mindspore/mindspore-cpu:{tag} +``` + +其中: + +- `{tag}`对应上述表格中的标签。 + +## 运行MindSpore镜像 + +执行以下命令启动Docker容器实例: + +```bash +docker run -it swr.cn-south-1.myhuaweicloud.com/mindspore/mindspore-cpu:{tag} /bin/bash +``` + +其中: + +- `{tag}`对应上述表格中的标签。 + +## 验证是否安装成功 + +- 如果你安装的是指定版本`x.y.z`的容器。 + + 按照上述步骤进入MindSpore容器后,测试Docker是否正常工作,请运行下面的Python代码并检查输出: + + ```python + import numpy as np + import mindspore.context as context + import mindspore.ops as ops + from mindspore import Tensor + + context.set_context(mode=context.PYNATIVE_MODE, device_target="CPU") + + x = Tensor(np.ones([1,3,3,4]).astype(np.float32)) + y = Tensor(np.ones([1,3,3,4]).astype(np.float32)) + print(ops.tensor_add(x, y)) + ``` + + 代码成功运行时会输出: + + ```text + [[[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 
2.]]]
+    ```
+
+    至此,你已经成功通过Docker方式安装了MindSpore CPU版本。
+
+- 如果你安装的是`runtime`标签的容器,需要自行安装MindSpore。
+
+    进入[MindSpore安装指南页面](https://www.mindspore.cn/install),选择CPU硬件平台、Ubuntu-x86操作系统和pip的安装方式,获得安装指南。运行容器后参考安装指南,通过pip方式安装MindSpore CPU版本,并进行验证。
+
+- 如果你安装的是`devel`标签的容器,需要自行编译并安装MindSpore。
+
+    进入[MindSpore安装指南页面](https://www.mindspore.cn/install),选择CPU硬件平台、Ubuntu-x86操作系统和Source的安装方式,获得安装指南。运行容器后,下载MindSpore代码仓并参考安装指南,通过源码编译方式安装MindSpore CPU版本,并进行验证。
+
+如果您想了解更多关于MindSpore Docker镜像的构建过程,请查看[docker repo](https://gitee.com/mindspore/mindspore/blob/r1.1/docker/README.md)了解详细信息。
diff --git a/install/mindspore_cpu_install_docker_en.md b/install/mindspore_cpu_install_docker_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..fbecb98cb7fe42da55266bad11a9aa4a15b68f34
--- /dev/null
+++ b/install/mindspore_cpu_install_docker_en.md
@@ -0,0 +1,105 @@
+# Installing MindSpore in CPU by Docker
+
+
+
+- [Installing MindSpore in CPU by Docker](#installing-mindspore-in-cpu-by-docker)
+  - [System Environment Information Confirmation](#system-environment-information-confirmation)
+  - [Obtaining MindSpore Image](#obtaining-mindspore-image)
+  - [Running MindSpore Image](#running-mindspore-image)
+  - [Installation Verification](#installation-verification)
+
+
+
+
+
+[Docker](https://docs.docker.com/get-docker/) is an open source application container engine that lets developers package their applications and dependencies into a lightweight, portable container. By using Docker, MindSpore can be rapidly deployed and separated from the system environment.
+
+This document describes how to quickly install MindSpore by Docker in a Linux system with a CPU environment.
+
+The Docker image of MindSpore is hosted on [Huawei SWR](https://support.huaweicloud.com/swr/index.html).
+
+The current support for containerized build is as follows:
+
+| Hardware | Docker Image Hub | Label | Note |
+| :----- | :------------------------ | :----------------------- | :--------------------------------------- |
+| CPU | `mindspore/mindspore-cpu` | `x.y.z` | A production environment with the MindSpore `x.y.z` CPU version pre-installed. |
+| | | `devel` | Provides a development environment to build MindSpore from source (`CPU` backend). For installation details, please refer to . |
+| | | `runtime` | Provides a runtime environment; the MindSpore binary package (`CPU` backend) is not installed. |
+
+> `x.y.z` corresponds to the MindSpore version number. For example, when installing MindSpore version 1.1.0, `x.y.z` should be written as 1.1.0.
+
+## System Environment Information Confirmation
+
+- Confirm that Ubuntu 18.04 is installed with the 64-bit operating system based on the x86 architecture.
+- Confirm that [Docker 18.03 or later version](https://docs.docker.com/get-docker/) is installed.
+
+## Obtaining MindSpore Image
+
+For the `CPU` backend, you can directly use the following command to obtain the latest stable image:
+
+```bash
+docker pull swr.cn-south-1.myhuaweicloud.com/mindspore/mindspore-cpu:{tag}
+```
+
+of which,
+
+- `{tag}` corresponds to the label in the above table.
+
+## Running MindSpore Image
+
+Execute the following command to start the Docker container instance:
+
+```bash
+docker run -it swr.cn-south-1.myhuaweicloud.com/mindspore/mindspore-cpu:{tag} /bin/bash
+```
+
+of which,
+
+- `{tag}` corresponds to the label in the above table.
+
+## Installation Verification
+
+- If you are installing the container of the specified version `x.y.z`.
+ + After entering the MindSpore container according to the above steps, to test whether the Docker container is working properly, please run the following Python code and check the output: + + ```python + import numpy as np + import mindspore.context as context + import mindspore.ops as ops + from mindspore import Tensor + + context.set_context(mode=context.PYNATIVE_MODE, device_target="CPU") + + x = Tensor(np.ones([1,3,3,4]).astype(np.float32)) + y = Tensor(np.ones([1,3,3,4]).astype(np.float32)) + print(ops.tensor_add(x, y)) + ``` + + The output should be the same as: + + ```text + [[[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]]] + ``` + + This means MindSpore has been successfully installed by Docker. + +- If you install a container with the label of `runtime`, you need to install MindSpore yourself. + + Go to the [MindSpore Installation Guide Page](https://www.mindspore.cn/install/en), choose the CPU hardware platform, Ubuntu-x86 operating system and pip installation method to get the installation guide. After running the container, refer to the installation guide to install the MindSpore CPU version by pip, and verify it. + +- If you install a container with the label of `devel`, you need to compile and install MindSpore yourself. + + Go to the [MindSpore Installation Guide Page](https://www.mindspore.cn/install/en), choose the CPU hardware platform, Ubuntu-x86 operating system and Source installation method to get the installation guide. After running the container, download the MindSpore code repository, refer to the installation guide, install the MindSpore CPU version through source code compilation, and verify it. + +If you want to know more about the MindSpore Docker image building process, please check [docker repo](https://gitee.com/mindspore/mindspore/blob/r1.1/docker/README.md) for details. 
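The expected verification output above can be sanity-checked without a MindSpore installation: `tensor_add` on two all-ones tensors of shape (1, 3, 3, 4) simply produces a tensor of twos. A minimal plain-Python sketch of that elementwise addition (illustrative only; it does not exercise MindSpore itself):

```python
# Build nested-list "tensors" of ones and add them elementwise,
# mirroring the ops.tensor_add verification above.
def ones(shape):
    if not shape:
        return 1.0
    return [ones(shape[1:]) for _ in range(shape[0])]

def add(a, b):
    if isinstance(a, list):
        return [add(x, y) for x, y in zip(a, b)]
    return a + b

x = ones((1, 3, 3, 4))
y = ones((1, 3, 3, 4))
z = add(x, y)

# Every one of the 1*3*3*4 = 36 elements is 2.0, matching the printed tensor of twos.
print(z[0][0][0])  # → [2.0, 2.0, 2.0, 2.0]
```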
diff --git a/install/mindspore_cpu_install_pip.md b/install/mindspore_cpu_install_pip.md index 80d2d21c99702197a87a86c105350639924786e2..9156494e83ffe07178d5b8aa63df0e78c1cbf2f6 100644 --- a/install/mindspore_cpu_install_pip.md +++ b/install/mindspore_cpu_install_pip.md @@ -12,7 +12,7 @@ - + 本文档介绍如何在CPU环境的Linux系统上,使用pip方式快速安装MindSpore。 @@ -31,8 +31,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 - `{system}`表示系统,例如使用的Ubuntu系统X86架构,`{system}`应写为`ubuntu_x86`,目前CPU版本可支持以下系统`ubuntu_aarch64`/`ubuntu_x86`。 @@ -56,10 +56,10 @@ pip install --upgrade mindspore 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_cpu_install_pip_en.md b/install/mindspore_cpu_install_pip_en.md index 1174728459384c640ee89d22b9577790d320136e..58f225a2454eaf78a7db6e9894120326049b6747 100644 --- a/install/mindspore_cpu_install_pip_en.md +++ b/install/mindspore_cpu_install_pip_en.md @@ -12,7 +12,7 @@ - + This document describes how to quickly install MindSpore by pip in a Linux system with a CPU environment. 
@@ -31,8 +31,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. - `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. - `{system}` denotes the system version. For example, if you are using Ubuntu x86 architecture, `{system}` should be `ubuntu_x86`. Currently, the following systems are supported by CPU: `ubuntu_aarch64`/`ubuntu_x86`. @@ -56,10 +56,10 @@ pip install --upgrade mindspore If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. 
-For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). diff --git a/install/mindspore_cpu_install_source.md b/install/mindspore_cpu_install_source.md index 2e26d4cc416bd423bf36e7f367895816a2043062..5bf3aeaeefa320d0c973bf857c5b0cdca904575a 100644 --- a/install/mindspore_cpu_install_source.md +++ b/install/mindspore_cpu_install_source.md @@ -14,10 +14,12 @@ - + 本文档介绍如何在CPU环境的Linux系统上,使用源码编译方式快速安装MindSpore。 +详细步骤可以参考社区提供的实践——[在Ubuntu(CPU)上进行源码编译安装MindSpore](https://www.mindspore.cn/news/newschildren?id=365),在此感谢社区成员[damon0626](https://gitee.com/damon0626)的分享。 + ## 确认系统环境信息 - 确认安装Ubuntu 18.04是64位操作系统。 @@ -47,7 +49,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -70,8 +72,8 @@ pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl -i htt 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 - `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARMv8架构64位,则写为`aarch64`。 ## 验证安装是否成功 @@ -104,10 +106,10 @@ python -c 'import mindspore;print(mindspore.__version__)' 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore 
Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_cpu_install_source_en.md b/install/mindspore_cpu_install_source_en.md index 80957491277cf9a1d2ae5c217121e139c61d9d78..9e11557c06c5148b09961bc5fe67eb779faa4734 100644 --- a/install/mindspore_cpu_install_source_en.md +++ b/install/mindspore_cpu_install_source_en.md @@ -14,7 +14,7 @@ - + This document describes how to quickly install MindSpore by source code in a Linux system with a CPU environment. @@ -47,7 +47,7 @@ This document describes how to quickly install MindSpore by source code in a Lin ## Downloading Source Code from Code Repository ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## Compiling MindSpore @@ -71,8 +71,8 @@ pip install build/package/mindspore-{version}-cp37-cp37m-linux_{arch}.whl -i htt Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. - `{arch}` denotes the system architecture. 
For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. ## Installation Verification @@ -105,10 +105,10 @@ Using the following command if you need to update the MindSpore version: If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). diff --git a/install/mindspore_cpu_macos_install_conda.md b/install/mindspore_cpu_macos_install_conda.md deleted file mode 100644 index 1cee10f453ccb46d4545b61faeb7f160d3701175..0000000000000000000000000000000000000000 --- a/install/mindspore_cpu_macos_install_conda.md +++ /dev/null @@ -1,72 +0,0 @@ -# Conda方式安装MindSpore CPU版本(macOS) - - - -- [Conda方式安装MindSpore CPU版本(macOS)](#conda方式安装mindspore-cpu版本macOS) - - [确认系统环境信息](#确认系统环境信息) - - [安装Conda](#安装conda) - - [添加Conda镜像源](#添加conda镜像源) - - [创建并激活Conda环境](#创建并激活conda环境) - - [安装MindSpore](#安装mindspore) - - [验证是否安装成功](#验证是否安装成功) - - [升级MindSpore版本](#升级mindspore版本) - - - - - -本文档介绍如何在CPU环境的macOS系统上,使用Conda方式快速安装MindSpore。 - -## 确认系统环境信息 - -- 确认安装macOS Catalina是64位操作系统。 - -## 安装Conda - -下载并安装对应架构的Conda安装包。 - -- 官方源下载[X86 Anaconda](https://www.anaconda.com/distribution/) 或 [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html) -- 清华镜像源下载地址:[X86 
Anaconda](https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-5.3.1-macOSX-x86_64.sh) - -## 添加Conda镜像源 - -从清华源镜像源下载Conda安装包的可忽略此步操作。 - -```shell -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ -conda config --set show_channel_urls yes -``` - -## 创建并激活Conda环境 - -```bash -conda create -n mindspore python=3.7.5 -conda activate mindspore -``` - -## 安装MindSpore - -```bash -``` - -其中: - -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 - -## 验证是否安装成功 - -```bash -python -c "import mindspore;mindspore.__version__" -``` - -如果输出MindSpore版本号,说明MindSpore安装成功了,如果输出`No module named 'mindspore'`说明未安装成功。 - -## 升级MindSpore版本 - -当需要升级MindSpore版本时,可执行如下命令: - -```bash -pip install --upgrade mindspore -``` diff --git a/install/mindspore_cpu_macos_install_pip.md b/install/mindspore_cpu_macos_install_pip.md deleted file mode 100644 index 7292ebd303976d06a6d077df3df50bef43290aaf..0000000000000000000000000000000000000000 --- a/install/mindspore_cpu_macos_install_pip.md +++ /dev/null @@ -1,48 +0,0 @@ -# pip方式安装MindSpore CPU版本(macOS) - - - -- [pip方式安装MindSpore CPU版本(macOS)](#pip方式安装mindspore-cpu版本macOS) - - [确认系统环境信息](#确认系统环境信息) - - [安装MindSpore](#安装mindspore) - - [验证是否安装成功](#验证是否安装成功) - - [升级MindSpore版本](#升级mindspore版本) - - - - - -本文档介绍如何在CPU环境的macOS系统上,使用pip方式快速安装MindSpore。 - -## 确认系统环境信息 - -- 确认安装macOS Catalina是64位操作系统。 -- 确认安装[Python 3.7.5](https://www.python.org/ftp/python/3.7.5/python-3.7.5-macosx10.9.pkg)版本。 -- 安装Python完毕后,将Python添加到系统环境变量。 - - 将Python路径添加到系统环境变量中即可。 - -## 安装MindSpore - -```bash -``` - -其中: - -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- 
`{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 - -## 验证是否安装成功 - -```bash -python -c "import mindspore;print(mindspore.__version__)" -``` - -如果输出MindSpore版本号,说明MindSpore安装成功了,如果输出`No module named 'mindspore'`说明未安装成功。 - -## 升级MindSpore版本 - -当需要升级MindSpore版本时,可执行如下命令: - -```bash -pip install --upgrade mindspore -``` diff --git a/install/mindspore_cpu_macos_install_pip_en.md b/install/mindspore_cpu_macos_install_pip_en.md deleted file mode 100644 index a611e95c980ce0ce2e4655a47ff0a8a8519f4320..0000000000000000000000000000000000000000 --- a/install/mindspore_cpu_macos_install_pip_en.md +++ /dev/null @@ -1,47 +0,0 @@ -# Installing MindSpore in CPU by pip (macOS) - - - -- [Installing MindSpore in CPU by pip (macOS)](#installing-mindspore-in-cpu-by-pip-macos) - - [System Environment Information Confirmation](#system-environment-information-confirmation) - - [Installing MindSpore](#installing-mindspore) - - [Installation Verification](#installation-verification) - - [Version Update](#version-update) - - - - - -This document describes how to quickly install MindSpore by pip in a macOS system with a CPU environment. - -## System Environment Information Confirmation - -- Confirm that macOS Cata is installed with the 64-bit operating system. -- Confirm that [Python 3.7.5](https://www.python.org/ftp/python/3.7.5/python-3.7.5-macosx10.9.pkg) is installed. - - After installing, add the path of `python` to the environment variable PATH. - -## Installing MindSpore - -```bash -``` - -Of which, - -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. 
- -## Installation Verification - -```bash -python -c "import mindspore;print(mindspore.__version__)" -``` - -If the MindSpore version number is displayed, it means that MindSpore is installed successfully, and if the output is `No module named 'mindspore'`, it means that the installation was not successful. - -## Version Update - -Using the following command if you need to update MindSpore version: - -```bash -pip install --upgrade mindspore -``` diff --git a/install/mindspore_cpu_macos_install_source.md b/install/mindspore_cpu_macos_install_source.md deleted file mode 100644 index ecd185cd7947054444803728dbe10fe30659e315..0000000000000000000000000000000000000000 --- a/install/mindspore_cpu_macos_install_source.md +++ /dev/null @@ -1,86 +0,0 @@ -# 源码编译方式安装MindSpore CPU版本(macOS) - - - -- [源码编译方式安装MindSpore CPU版本(macOS)](#源码编译方式安装mindspore-cpu版本macOS) - - [确认系统环境信息](#确认系统环境信息) - - [从代码仓下载源码](#从代码仓下载源码) - - [编译MindSpore](#编译mindspore) - - [安装MindSpore](#安装mindspore) - - [验证是否安装成功](#验证是否安装成功) - - [升级MindSpore版本](#升级mindspore版本) - - - - - -本文档介绍如何在CPU环境的macOS系统上,使用源码编译方法快速安装MindSpore。 - -## 确认系统环境信息 - -- 确认安装macOS Catalina是x86架构64位操作系统。 -- 确认安装Xcode并配置clang version 11.0.0。 -- 确认安装[CMake 3.18.3版本](https://github.com/Kitware/Cmake/releases/tag/v3.18.3)。 - - 安装完成后将CMake添加到系统环境变量。 -- 确认安装[Python 3.7.5版本](https://www.python.org/ftp/python/3.7.5/python-3.7.5-macosx10.9.pkg)。 - - 安装完成后需要将Python添加到系统环境变量Path中。 -- 确认安装[OpenSSL 1.1.1及以上版本](https://github.com/openssl/openssl.git)。 - - 安装完成后将Openssl添加到环境变量。 -- 确认安装[wheel 0.32.0及以上版本](https://pypi.org/project/wheel/)。 -- 确认安装git工具。 - 如果未安装,使用如下命令下载安装: - - ```bash - brew install git - ``` - -## 从代码仓下载源码 - -```bash -git clone https://gitee.com/mindspore/mindspore.git -``` - -## 编译MindSpore - -在源码根目录下执行如下命令: - -```bash -bash build.sh -e cpu -``` - -## 安装MindSpore - -```bash -pip install build/package/mindspore-{version}-py37-none-any.whl -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -其中: - -- 
在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 - -## 验证是否安装成功 - -```bash -python -c "import mindspore;print(mindspore.__version__)" -``` - -如果输出MindSpore版本号,说明MindSpore安装成功了,如果输出`No module named 'mindspore'`说明未安装成功。 - -## 升级MindSpore版本 - -当需要升级MindSpore版本时,可执行如下命令: - -- 直接在线升级 - - ```bash - pip install --upgrade mindspore - ``` - -- 本地源码编译升级 - - 在源码根目录下执行编译脚本`build.sh`成功后,在`build/package`目录下找到编译生成的whl安装包,然后执行命令进行升级。 - - ```bash - pip install --upgrade mindspore-{version}-py37-none-any.whl - ``` diff --git a/install/mindspore_cpu_macos_install_source_en.md b/install/mindspore_cpu_macos_install_source_en.md deleted file mode 100644 index e680ba9cf9b97c3f85f56cd84baca505012ab19e..0000000000000000000000000000000000000000 --- a/install/mindspore_cpu_macos_install_source_en.md +++ /dev/null @@ -1,86 +0,0 @@ -# Installing MindSpore in CPU by Source Code (macOS) - - - -- [Installing MindSpore in CPU by Source Code (macOS)](#installing-mindspore-in-cpu-by-source-code-macOS) - - [System Environment Information Confirmation](#system-environment-information-confirmation) - - [Downloading Source Code from Code Repository](#downloading-source-code-from-code-repository) - - [Compiling MindSpore](#compiling-mindspore) - - [Installing MindSpore](#installing-mindspore) - - [Installation Verification](#installing-verification) - - [Version Update](#version-update) - - - - - -This document describes how to quickly install MindSpore by source code in a macOS system with a CPU environment. - -## System Environment Information Confirmation - -- Confirm that macOS Catalina is installed with the x86 architecture 64-bit operating system. -- Confirm that the Xcode and Clang 11.0.0 is installed. -- Confirm that [CMake 3.18.3](https://github.com/Kitware/Cmake/releases/tag/v3.18.3) is installed. 
- - After installing, add the path of `cmake` to the environment variable PATH. -- Confirm that [Python 3.7.5](https://www.python.org/ftp/python/3.7.5/python-3.7.5-macosx10.9.pkg) is installed. - - After installing, add the path of `python` to the environment variable PATH. -- Confirm that [OpenSSL 1.1.1 or later](https://github.com/openssl/openssl.git) is installed. - - After installing, add the path of `Openssl` to the environment variable PATH. -- Confirm that [wheel 0.32.0 or later](https://pypi.org/project/wheel/) is installed. -- Confirm that the git tool is installed. - If not, use the following command to install it: - - ```bash - brew install git - ``` - -## Downloading Source Code from Code Repository - -```bash -git clone https://gitee.com/mindspore/mindspore.git -``` - -## Compiling MindSpore - -Run the following command in the root directory of the source code to compile MindSpore: - -```bash -bash build.sh -e cpu -``` - -## Installing MindSpore - -```bash -pip install build/package/mindspore-{version}-py37-none-any.whl -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -Of which, - -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. - -## Installation Verification - -```bash -python -c "import mindspore;print(mindspore.__version__)" -``` - -If the MindSpore version number is displayed, it means that MindSpore is installed successfully, and if the output is `No module named 'mindspore'`, it means that the installation was not successful. 
- -## Version Update - -Using the following command if you need to update MindSpore version: - -- Update online - - ```bash - pip install --upgrade mindspore - ``` - -- Update after source code compilation - - After successfully executing the compile script `build.bat` in the root path of the source code, find the whl package in path `build/package`, use the following command to update your version. - -```bash -pip install --upgrade mindspore-{version}-cp37-cp37m-win_amd64.whl -``` diff --git a/install/mindspore_cpu_win_install_conda.md b/install/mindspore_cpu_win_install_conda.md deleted file mode 100644 index 3d913dee797aa56817f567b0aa56c9f1b4113dde..0000000000000000000000000000000000000000 --- a/install/mindspore_cpu_win_install_conda.md +++ /dev/null @@ -1,78 +0,0 @@ -# Conda方式安装MindSpore CPU版本(Windows) - - - -- [Conda方式安装MindSpore CPU版本(Windows)](#conda方式安装mindspore-cpu版本windows) - - [确认系统环境信息](#确认系统环境信息) - - [安装Conda](#安装conda) - - [启动Anaconda Prompt](#启动anaconda-prompt) - - [添加Conda镜像源](#添加conda镜像源) - - [创建并激活Conda环境](#创建并激活conda环境) - - [安装MindSpore](#安装mindspore) - - [验证是否安装成功](#验证是否安装成功) - - [升级MindSpore版本](#升级mindspore版本) - - - - - -本文档介绍如何在CPU环境的Windows系统上,使用Conda方式快速安装MindSpore。 - -## 确认系统环境信息 - -- 确认安装Windows 10是x86架构64位操作系统。 - -## 安装Conda - -下载并安装对应架构的Conda安装包。 - -- 官方源下载[X86 Anaconda](https://www.anaconda.com/distribution/) 或 [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html) -- 清华镜像源下载地址:[X86 Anaconda](https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-2020.02-Windows-x86_64.exe) - -## 启动Anaconda Prompt - -安装Conda后,从Windows“开始”菜单打开“Anaconda Prompt”。 - -## 添加Conda镜像源 - -从清华源镜像源下载Conda安装包的可忽略此步操作。 - -```shell -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ -conda config --set show_channel_urls yes -``` - -## 创建并激活Conda环境 - -```bash -conda create -n mindspore python=3.7.5 -conda activate mindspore -``` - 
-## 安装MindSpore - -```bash -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/cpu/windows_x64/mindspore-{version}-cp37-cp37m-win_amd64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -其中: - -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 - -## 验证是否安装成功 - -```bash -python -c "import mindspore;mindspore.__version__" -``` - -如果输出MindSpore版本号,说明MindSpore安装成功了,如果输出`No module named 'mindspore'`说明未安装成功。 - -## 升级MindSpore版本 - -当需要升级MindSpore版本时,可执行如下命令: - -```bash -pip install --upgrade mindspore -``` diff --git a/install/mindspore_cpu_win_install_pip.md b/install/mindspore_cpu_win_install_pip.md index eeadbd9231adaf3d1ad771eb7a2509f8a73142c9..44453d34bc5633ede536f08c4fa57ce4105826a3 100644 --- a/install/mindspore_cpu_win_install_pip.md +++ b/install/mindspore_cpu_win_install_pip.md @@ -10,7 +10,7 @@ - + 本文档介绍如何在CPU环境的Windows系统上,使用pip方式快速安装MindSpore。 @@ -31,8 +31,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否安装成功 diff --git a/install/mindspore_cpu_win_install_pip_en.md b/install/mindspore_cpu_win_install_pip_en.md index 24a0d86bb38546a8261b8cae6dc7c41226437f0a..7682105e8f96a8c0d4fbe1ea7bedcf8747eb5fb6 100644 --- a/install/mindspore_cpu_win_install_pip_en.md +++ b/install/mindspore_cpu_win_install_pip_en.md @@ -10,7 +10,7 @@ - + This 
document describes how to quickly install MindSpore by pip in a Windows system with a CPU environment. @@ -31,8 +31,8 @@ pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSp Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. 
## Installation Verification diff --git a/install/mindspore_cpu_win_install_source.md b/install/mindspore_cpu_win_install_source.md index c6bfbfd9ec5799a8599961199f7db5d5d753ae9e..7587afe64da857696185fa750759a338ba30edcf 100644 --- a/install/mindspore_cpu_win_install_source.md +++ b/install/mindspore_cpu_win_install_source.md @@ -12,16 +12,18 @@ - + 本文档介绍如何在CPU环境的Windows系统上,使用源码编译方法快速安装MindSpore。 +详细步骤可以参考社区提供的实践——[在Windows(CPU)上进行源码编译安装MindSpore](https://www.mindspore.cn/news/newschildren?id=364),在此感谢社区成员[lvmingfu](https://gitee.com/lvmingfu)的分享。 + ## 确认系统环境信息 - 确认安装Windows 10是x86架构64位操作系统。 - 确认安装[Visual C++ Redistributable for Visual Studio 2015](https://www.microsoft.com/zh-CN/download/details.aspx?id=48145)。 - 确认安装了[git](https://github.com/git-for-windows/git/releases/download/v2.29.2.windows.2/Git-2.29.2.2-64-bit.exe)工具。 - - 如果git没有安装在`ProgramFiles`,在执行上述命令前,需设置环境变量指定`patch.exe`的位置,例如git安装在`D:\git`时,需设置`set MS_PATCH_PATH=D:\git\usr\bin`。 + - 如果git没有安装在`ProgramFiles`,需设置环境变量指定`patch.exe`的位置,例如git安装在`D:\git`时,需设置`set MS_PATCH_PATH=D:\git\usr\bin`。 - 确认安装[MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z)。 - 安装路径中不能出现中文和日文,安装完成后将安装路径下的`MinGW\bin`添加到系统环境变量。例如安装在`D:\gcc`,则需要将`D:\gcc\MinGW\bin`添加到系统环境变量Path中。 - 确认安装[CMake 3.18.3版本](https://github.com/Kitware/Cmake/releases/tag/v3.18.3)。 @@ -34,7 +36,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -53,8 +55,8 @@ pip install build/package/mindspore-{version}-cp37-cp37m-win_amd64.whl -i https: 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 +- 
在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否安装成功 diff --git a/install/mindspore_cpu_win_install_source_en.md b/install/mindspore_cpu_win_install_source_en.md index 5bfdcbb9f2394f8b0b5cd6a9fe1427fb6f36da01..fbc2082ee4db18a203359be6fe474c7be677c03f 100644 --- a/install/mindspore_cpu_win_install_source_en.md +++ b/install/mindspore_cpu_win_install_source_en.md @@ -12,7 +12,7 @@ - + This document describes how to quickly install MindSpore by source code in a Windows system with a CPU environment. @@ -33,7 +33,7 @@ This document describes how to quickly install MindSpore by source code in a Win ## Downloading Source Code from Code Repository ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## Compiling MindSpore @@ -52,8 +52,8 @@ pip install build/package/mindspore-{version}-cp37-cp37m-win_amd64.whl -i https: Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. 
## Installation Verification diff --git a/install/mindspore_gpu_install_conda.md b/install/mindspore_gpu_install_conda.md deleted file mode 100644 index 0e0965dd76409b488bd0009d1dd71cd57f9f65e9..0000000000000000000000000000000000000000 --- a/install/mindspore_gpu_install_conda.md +++ /dev/null @@ -1,126 +0,0 @@ -# Conda方式安装MindSpore GPU版本 - - - -- [Conda方式安装MindSpore GPU版本](#conda方式安装mindspore-gpu版本) - - [确认系统环境信息](#确认系统环境信息) - - [安装Conda](#安装conda) - - [添加Conda镜像源](#添加conda镜像源) - - [创建并激活Conda环境](#创建并激活conda环境) - - [安装MindSpore](#安装mindspore) - - [验证是否成功安装](#验证是否成功安装) - - [升级MindSpore版本](#升级mindspore版本) - - [安装MindInsight](#安装mindinsight) - - [安装MindArmour](#安装mindarmour) - - [安装MindSpore Hub](#安装mindspore-hub) - - - - - -本文档介绍如何在GPU环境的Linux系统上,使用Conda方式快速安装MindSpore。 - -## 确认系统环境信息 - -- 确认安装Ubuntu 18.04是64位操作系统。 -- 确认安装[GCC 7.3.0版本](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz)。 -- 确认安装[CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base)。 - - CUDA安装后,若CUDA没有安装在默认位置,需要设置环境变量PATH(如:`export PATH=/usr/local/cuda-${version}/bin:$PATH`)和`LD_LIBRARY_PATH`(如:`export LD_LIBRARY_PATH=/usr/local/cuda-${version}/lib64:$LD_LIBRARY_PATH`),详细安装后的设置可参考[CUDA安装手册](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions)。 -- 确认安装[cuDNN 7.6.X版本](https://developer.nvidia.com/rdp/cudnn-archive)。 -- 确认安装[OpenMPI 4.0.3版本](https://www.open-mpi.org/faq/?category=building#easy-build)(可选,单机多卡/多机多卡训练需要)。 -- 确认安装[NCCL 2.7.6-1版本](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian)(可选,单机多卡/多机多卡训练需要)。 -- 确认安装[gmp 6.1.2版本](https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz)。 - -## 安装Conda - -下载并安装对应架构的Conda安装包。 - -- 官网下载地址:[X86 Anaconda](https://www.anaconda.com/distribution/) 或 [X86 Miniconda](https://docs.conda.io/en/latest/miniconda.html)。 -- 清华镜像源下载地址:[X86 Anaconda](https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-2020.02-Linux-x86_64.sh)。 - -## 添加Conda镜像源 - 
-从清华源镜像源下载Conda安装包的可跳过此步操作。 - -```bash -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/ -conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/ -conda config --set show_channel_urls yes -``` - -## 创建并激活Conda环境 - -```bash -conda create -n mindspore python=3.7.5 -conda activate mindspore -``` - -## 安装MindSpore - -```bash -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple -``` - -其中: - -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 -- `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 - -## 验证是否成功安装 - -```python -import numpy as np -from mindspore import Tensor -import mindspore.ops as ops -import mindspore.context as context - -context.set_context(device_target="GPU") -x = Tensor(np.ones([1,3,3,4]).astype(np.float32)) -y = Tensor(np.ones([1,3,3,4]).astype(np.float32)) -print(ops.tensor_add(x, y)) -``` - -如果输出: - -```text -[[[ 2. 2. 2. 2.], - [ 2. 2. 2. 2.], - [ 2. 2. 2. 2.]], - - [[ 2. 2. 2. 2.], - [ 2. 2. 2. 2.], - [ 2. 2. 2. 2.]], - - [[ 2. 2. 2. 2.], - [ 2. 2. 2. 2.], - [ 2. 2. 2. 
2.]]] -``` - -说明MindSpore安装成功了。 - -## 升级MindSpore版本 - -当需要升级MindSpore版本时,可执行如下命令: - -```bash -pip install --upgrade mindspore-gpu -``` - -## 安装MindInsight - -当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 - -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 - -## 安装MindArmour - -当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 - -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 - -## 安装MindSpore Hub - -当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 - -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 diff --git a/install/mindspore_gpu_install_docker.md b/install/mindspore_gpu_install_docker.md new file mode 100644 index 0000000000000000000000000000000000000000..deeff36042393db00a2b3f4696f5f69eea9fed36 --- /dev/null +++ b/install/mindspore_gpu_install_docker.md @@ -0,0 +1,145 @@ +# Docker方式安装MindSpore GPU版本 + + + +- [Docker方式安装MindSpore GPU版本](#docker方式安装mindspore-gpu版本) + - [确认系统环境信息](#确认系统环境信息) + - [nvidia-container-toolkit安装](#nvidia-container-toolkit安装) + - [获取MindSpore镜像](#获取mindspore镜像) + - [运行MindSpore镜像](#运行mindspore镜像) + - [验证是否安装成功](#验证是否安装成功) + + + + + +[Docker](https://docs.docker.com/get-docker/)是一个开源的应用容器引擎,让开发者打包他们的应用以及依赖包到一个轻量级、可移植的容器中。通过使用Docker,可以实现MindSpore的快速部署,并与系统环境隔离。 + +本文档介绍如何在GPU环境的Linux系统上,使用Docker方式快速安装MindSpore。 + +MindSpore的Docker镜像托管在[Huawei SWR](https://support.huaweicloud.com/swr/index.html)上。 + +目前容器化构建选项支持情况如下: + +| 硬件平台 | Docker镜像仓库 | 标签 | 说明 | +| :----- | :------------------------ | :----------------------- | :--------------------------------------- | +| GPU | `mindspore/mindspore-gpu` | `x.y.z` | 已经预安装MindSpore `x.y.z` GPU版本的生产环境。 | +| | | `devel` | 提供开发环境从源头构建MindSpore(`GPU CUDA10.1`后端)。安装详情请参考 。 | +| | | `runtime` | 提供运行时环境,未安装MindSpore二进制包(`GPU CUDA10.1`后端)。 | + +> **注意:** 不建议从源头构建GPU `devel` Docker镜像后直接安装whl包。我们强烈建议您在GPU `runtime` Docker镜像中传输并安装whl包。 +> `x.y.z`对应MindSpore版本号,例如安装1.1.0版本MindSpore时,`x.y.z`应写为1.1.0。 + +## 
确认系统环境信息 + +- 确认安装Ubuntu 18.04是基于x86架构的64位操作系统。 +- 确认安装[Docker 18.03或者更高版本](https://docs.docker.com/get-docker/)。 + +## nvidia-container-toolkit安装 + +对于`GPU`后端,请确保`nvidia-container-toolkit`已经提前安装,以下是`Ubuntu`用户的`nvidia-container-toolkit`安装指南: + +```bash +# Acquire version of operating system version +DISTRIBUTION=$(. /etc/os-release; echo $ID$VERSION_ID) +curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add - +curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list + +sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit nvidia-docker2 +sudo systemctl restart docker +``` + +daemon.json是Docker的配置文件,编辑文件daemon.json配置容器运行时,让Docker可以使用nvidia-container-runtime: + +```bash +$ vim /etc/docker/daemon.json +{ + "runtimes": { + "nvidia": { + "path": "nvidia-container-runtime", + "runtimeArgs": [] + } + } +} +``` + +再次重启Docker: + +```bash +sudo systemctl daemon-reload +sudo systemctl restart docker +``` + +## 获取MindSpore镜像 + +对于`GPU`后端,可以直接使用以下命令获取最新的稳定镜像: + +```bash +docker pull swr.cn-south-1.myhuaweicloud.com/mindspore/mindspore-gpu:{tag} +``` + +其中: + +- `{tag}`对应上述表格中的标签。 + +## 运行MindSpore镜像 + +执行以下命令启动Docker容器实例: + +```bash +docker run -it -v /dev/shm:/dev/shm --runtime=nvidia --privileged=true swr.cn-south-1.myhuaweicloud.com/mindspore/mindspore-gpu:{tag} /bin/bash +``` + +其中: + +- `-v /dev/shm:/dev/shm` 将NCCL共享内存段所在目录挂载至容器内部; +- `--runtime=nvidia` 用于指定容器运行时为`nvidia-container-runtime`; +- `--privileged=true` 赋予容器扩展的能力; +- `{tag}`对应上述表格中的标签。 + +## 验证是否安装成功 + +- 如果你安装的是指定版本`x.y.z`的容器。 + + 按照上述步骤进入MindSpore容器后,测试Docker是否正常工作,请运行下面的Python代码并检查输出: + + ```python + import numpy as np + import mindspore.context as context + import mindspore.ops as ops + from mindspore import Tensor + + context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU") + + x = Tensor(np.ones([1,3,3,4]).astype(np.float32)) + y = Tensor(np.ones([1,3,3,4]).astype(np.float32)) + 
print(ops.tensor_add(x, y)) + ``` + + 代码成功运行时会输出: + + ```text + [[[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]]] + ``` + + 至此,你已经成功通过Docker方式安装了MindSpore GPU版本。 + +- 如果你安装的是`runtime`标签的容器,需要自行安装MindSpore。 + + 进入[MindSpore安装指南页面](https://www.mindspore.cn/install),选择GPU硬件平台、Ubuntu-x86操作系统和pip的安装方式,获得安装指南。运行容器后参考安装指南,通过pip方式安装MindSpore GPU版本,并进行验证。 + +- 如果你安装的是`devel`标签的容器,需要自行编译并安装MindSpore。 + + 进入[MindSpore安装指南页面](https://www.mindspore.cn/install),选择GPU硬件平台、Ubuntu-x86操作系统和Source的安装方式,获得安装指南。运行容器后,下载MindSpore代码仓并参考安装指南,通过源码编译方式安装MindSpore GPU版本,并进行验证。 + +如果您想了解更多关于MindSpore Docker镜像的构建过程,请查看[docker repo](https://gitee.com/mindspore/mindspore/blob/r1.1/docker/README.md)了解详细信息。 diff --git a/install/mindspore_gpu_install_docker_en.md b/install/mindspore_gpu_install_docker_en.md new file mode 100644 index 0000000000000000000000000000000000000000..1e6b2085a6f5d9d6119dfb502114ec347315940e --- /dev/null +++ b/install/mindspore_gpu_install_docker_en.md @@ -0,0 +1,145 @@ +# Installing MindSpore in GPU by Docker + + + +- [Installing MindSpore in GPU by Docker](#installing-mindspore-in-gpu-by-docker) + - [System Environment Information Confirmation](#system-environment-information-confirmation) + - [nvidia-container-toolkit Installation](#nvidia-container-toolkit-installation) + - [Obtaining MindSpore Image](#obtaining-mindspore-image) + - [Running MindSpore Image](#running-mindspore-image) + - [Installation Verification](#installation-verification) + + + + + +[Docker](https://docs.docker.com/get-docker/) is an open source application container engine; developers can package their applications and dependencies into a lightweight, portable container. By using Docker, MindSpore can be rapidly deployed and separated from the system environment. + +This document describes how to quickly install MindSpore by Docker in a Linux system with a GPU environment.
+ +The Docker image of MindSpore is hosted on [Huawei SWR](https://support.huaweicloud.com/swr/index.html). + +The current support for containerized build is as follows: + +| Hardware | Docker Image Hub | Label | Note | +| :----- | :------------------------ | :----------------------- | :--------------------------------------- | +| GPU | `mindspore/mindspore-gpu` | `x.y.z` | A production environment with the MindSpore `x.y.z` GPU version pre-installed. | +| | | `devel` | Provides a development environment to build MindSpore from the source (`GPU CUDA10.1` backend). For installation details, please refer to . | +| | | `runtime` | Provides a runtime environment; the MindSpore binary package (`GPU CUDA10.1` backend) is not installed. | + +> **Note:** It is not recommended to install the whl package directly after building the GPU `devel` Docker image from the source. We strongly recommend that you transfer and install the `whl` package in the GPU `runtime` Docker image. +> `x.y.z` corresponds to the MindSpore version number. For example, when installing MindSpore version 1.1.0, `x.y.z` should be written as 1.1.0. + +## System Environment Information Confirmation + +- Confirm that the 64-bit Ubuntu 18.04 operating system is installed. +- Confirm that [Docker 18.03 or a later version](https://docs.docker.com/get-docker/) is installed. + +## nvidia-container-toolkit Installation + +For the `GPU` backend, please make sure that `nvidia-container-toolkit` has been installed in advance. The following is the installation guide for `nvidia-container-toolkit` for `Ubuntu` users: + +```bash +# Acquire the operating system version +DISTRIBUTION=$(.
/etc/os-release; echo $ID$VERSION_ID) +curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add - +curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list + +sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit nvidia-docker2 +sudo systemctl restart docker +``` + +daemon.json is the configuration file of Docker. Edit the file daemon.json to configure the container runtime so that Docker can use nvidia-container-runtime: + +```bash +$ vim /etc/docker/daemon.json +{ + "runtimes": { + "nvidia": { + "path": "nvidia-container-runtime", + "runtimeArgs": [] + } + } +} +``` + +Restart Docker: + +```bash +sudo systemctl daemon-reload +sudo systemctl restart docker +``` + +## Obtaining MindSpore Image + +For the `GPU` backend, you can directly use the following command to obtain the latest stable image: + +```bash +docker pull swr.cn-south-1.myhuaweicloud.com/mindspore/mindspore-gpu:{tag} +``` + +of which, + +- `{tag}` corresponds to the label in the above table. + +## Running MindSpore Image + +Execute the following command to start the Docker container instance: + +```bash +docker run -it -v /dev/shm:/dev/shm --runtime=nvidia --privileged=true swr.cn-south-1.myhuaweicloud.com/mindspore/mindspore-gpu:{tag} /bin/bash +``` + +of which, + +- `-v /dev/shm:/dev/shm` mounts the directory where the NCCL shared memory segment is located into the container; +- `--runtime=nvidia` is used to specify the container runtime as `nvidia-container-runtime`; +- `--privileged=true` grants the container extended privileges; +- `{tag}` corresponds to the label in the above table. + +## Installation Verification + +- If you are installing the container of the specified version `x.y.z`.
+ + After entering the MindSpore container according to the above steps, to test whether the Docker container is working properly, please run the following Python code and check the output: + + ```python + import numpy as np + import mindspore.context as context + import mindspore.ops as ops + from mindspore import Tensor + + context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU") + + x = Tensor(np.ones([1,3,3,4]).astype(np.float32)) + y = Tensor(np.ones([1,3,3,4]).astype(np.float32)) + print(ops.tensor_add(x, y)) + ``` + + The outputs should be the same as: + + ```text + [[[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]], + + [[ 2. 2. 2. 2.], + [ 2. 2. 2. 2.], + [ 2. 2. 2. 2.]]] + ``` + + It means MindSpore has been installed by Docker successfully. + +- If you install a container with the label of `runtime`, you need to install MindSpore yourself. + + Go to [MindSpore Installation Guide Page](https://www.mindspore.cn/install/en), choose the GPU hardware platform, Ubuntu-x86 operating system and pip installation method to get the installation guide. Refer to the installation guide after running the container and install the MindSpore GPU version by pip, and verify it. + +- If you install a container with the label of `devel`, you need to compile and install MindSpore yourself. + + Go to [MindSpore Installation Guide Page](https://www.mindspore.cn/install/en), choose the GPU hardware platform, Ubuntu-x86 operating system and Source installation method to get the installation guide. After running the container, download the MindSpore code repository and refer to the installation guide, install the MindSpore GPU version through source code compilation, and verify it. + +If you want to know more about the MindSpore Docker image building process, please check [docker repo](https://gitee.com/mindspore/mindspore/blob/r1.1/docker/README.md) for details.
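The all-2 output in the verification step above is simply the elementwise sum of two all-ones tensors. As a quick sanity sketch (plain NumPy, independent of MindSpore and the container), the same values can be reproduced with:

```python
import numpy as np

# Mirror the shapes used in the verification snippet: two all-ones tensors.
x = np.ones([1, 3, 3, 4], dtype=np.float32)
y = np.ones([1, 3, 3, 4], dtype=np.float32)
z = x + y  # elementwise add, the operation ops.tensor_add performs

print(z.shape)       # (1, 3, 3, 4)
print(np.unique(z))  # [2.]
```

Every element is 2, matching the expected output printed by the container check.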
diff --git a/install/mindspore_gpu_install_pip.md b/install/mindspore_gpu_install_pip.md index d9316e4879e5a739fc22bb82b50726b7d530c718..928b6e13516d5daea8e83166869f68f8ef77e694 100644 --- a/install/mindspore_gpu_install_pip.md +++ b/install/mindspore_gpu_install_pip.md @@ -13,7 +13,7 @@ - + 本文档介绍如何在GPU环境的Linux系统上,使用pip方式快速安装MindSpore。 @@ -33,14 +33,13 @@ ## 安装MindSpore ```bash -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple +pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-{version}-cp37-cp37m-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple ``` 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 -- `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否成功安装 @@ -86,16 +85,16 @@ pip install --upgrade mindspore-gpu 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 ## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 
当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_gpu_install_pip_en.md b/install/mindspore_gpu_install_pip_en.md index 687428753629f087e2a9ed1eedec366b3d6466ca..1f72adb56518a66ddd6ead50dbec0358c6f62882 100644 --- a/install/mindspore_gpu_install_pip_en.md +++ b/install/mindspore_gpu_install_pip_en.md @@ -13,7 +13,7 @@ - + This document describes how to quickly install MindSpore by pip in a Linux system with a GPU environment. @@ -33,14 +33,13 @@ This document describes how to quickly install MindSpore by pip in a Linux syste ## Installing MindSpore ```bash -pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple +pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{version}/MindSpore/gpu/ubuntu_x86/cuda-10.1/mindspore_gpu-{version}-cp37-cp37m-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple ``` Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. -- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. -- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. 
+- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. ## Installation Verification @@ -86,16 +85,16 @@ pip install --upgrade mindspore-gpu If you need to analyze information such as model scalars, graphs, computation graphs and model traceback, you can install MindInsight. -For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README.md). +For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README.md). ## Installing MindArmour If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). +For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). 
diff --git a/install/mindspore_gpu_install_source.md b/install/mindspore_gpu_install_source.md index 1a8ac68847785e17e9c1c64fcc57072274e89e0a..6173ac5d9f29f8084c738b9b8daf7897f3b3495f 100644 --- a/install/mindspore_gpu_install_source.md +++ b/install/mindspore_gpu_install_source.md @@ -15,10 +15,12 @@ - + 本文档介绍如何在GPU环境的Linux系统上,使用源码编译方式快速安装MindSpore。 +详细步骤可以参考社区提供的实践——[在Linux上体验源码编译安装MindSpore GPU版本](https://www.mindspore.cn/news/newschildren?id=401),在此感谢社区成员[飞翔的企鹅](https://gitee.com/zhang_yi2020)的分享。 + ## 确认系统环境信息 - 确认安装Ubuntu 18.04是64位操作系统。 @@ -58,7 +60,7 @@ ## 从代码仓下载源码 ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## 编译MindSpore @@ -76,15 +78,14 @@ bash build.sh -e gpu ## 安装MindSpore ```bash -chmod +x build/package/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl -pip install build/package/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl -i https://pypi.tuna.tsinghua.edu.cn/simple +chmod +x build/package/mindspore_gpu-{version}-cp37-cp37m-linux_x86_64.whl +pip install build/package/mindspore_gpu-{version}-cp37-cp37m-linux_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple ``` 其中: -- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)),其余情况需自行安装。 -- `{version}`表示MindSpore版本号,例如下载1.0.1版本MindSpore时,`{version}`应写为1.0.1。 -- `{arch}`表示系统架构,例如使用的Linux系统是x86架构64位时,`{arch}`应写为`x86_64`。如果系统是ARM架构64位,则写为`aarch64`。 +- 在联网状态下,安装whl包时会自动下载MindSpore安装包的依赖项(依赖项详情参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)),其余情况需自行安装。 +- `{version}`表示MindSpore版本号,例如安装1.1.0版本MindSpore时,`{version}`应写为1.1.0。 ## 验证是否成功安装 @@ -140,16 +141,16 @@ print(ops.tensor_add(x, y)) 当您需要查看训练过程中的标量、图像、计算图以及模型超参等信息时,可以选装MindInsight。 -具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README_CN.md)。 +具体安装步骤参见[MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README_CN.md)。 
## 安装MindArmour 当您进行AI模型安全研究或想要增强AI应用模型的防护能力时,可以选装MindArmour。 -具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README_CN.md)。 +具体安装步骤参见[MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README_CN.md)。 ## 安装MindSpore Hub 当您想要快速体验MindSpore预训练模型时,可以选装MindSpore Hub。 -具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README_CN.md)。 +具体安装步骤参见[MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README_CN.md)。 diff --git a/install/mindspore_gpu_install_source_en.md b/install/mindspore_gpu_install_source_en.md index 35f41e5e3b7ef8619cd6bd1623abcfb716fa0e7a..f1ddee212c6f1b15cb15fd198273cf4e2e4b7d5d 100644 --- a/install/mindspore_gpu_install_source_en.md +++ b/install/mindspore_gpu_install_source_en.md @@ -14,7 +14,7 @@ - + This document describes how to quickly install MindSpore by source code in a Linux system with a GPU environment. @@ -57,7 +57,7 @@ This document describes how to quickly install MindSpore by source code in a Lin ## Downloading Source Code from Code Repository ```bash -git clone https://gitee.com/mindspore/mindspore.git +git clone https://gitee.com/mindspore/mindspore.git -b r1.1 ``` ## Compiling MindSpore @@ -75,15 +75,14 @@ Of which, ## Installing MindSpore ```bash -chmod +x build/package/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl -pip install build/package/mindspore_gpu-{version}-cp37-cp37m-linux_{arch}.whl -i https://pypi.tuna.tsinghua.edu.cn/simple +chmod +x build/package/mindspore_gpu-{version}-cp37-cp37m-linux_x86_64.whl +pip install build/package/mindspore_gpu-{version}-cp37-cp37m-linux_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple ``` Of which, -- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt)). In other cases, you need to manually install dependency items. 
-- `{version}` denotes the version of MindSpore. For example, when you are downloading MindSpore 1.0.1, `{version}` should be 1.0.1. -- `{arch}` denotes the system architecture. For example, the Linux system you are using is x86 architecture 64-bit, `{arch}` should be `x86_64`. If the system is ARM architecture 64-bit, then it should be `aarch64`. +- When the network is connected, dependency items are automatically downloaded during .whl package installation. (For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r1.1/requirements.txt)). In other cases, you need to manually install dependency items. +- `{version}` denotes the version of MindSpore. For example, when you are installing MindSpore 1.1.0, `{version}` should be 1.1.0. ## Installation Verification @@ -139,16 +138,16 @@ Using the following command if you need to update the MindSpore version. If you need to analyze information such as model scalars, graphs, computation graphs and model traceback, you can install MindInsight. -For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/master/README.md). +For more details, please refer to [MindInsight](https://gitee.com/mindspore/mindinsight/blob/r1.1/README.md). ## Installing MindArmour If you need to conduct AI model security research or enhance the security of the model in you applications, you can install MindArmour. -For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/master/README.md). +For more details, please refer to [MindArmour](https://gitee.com/mindspore/mindarmour/blob/r1.1/README.md). ## Installing MindSpore Hub If you need to access and experience MindSpore pre-trained models quickly, you can install MindSpore Hub. -For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/master/README.md). 
+For more details, please refer to [MindSpore Hub](https://gitee.com/mindspore/hub/blob/r1.1/README.md). diff --git a/install/third_party/third_party_cpu_install.md b/install/third_party/third_party_cpu_install.md new file mode 100644 index 0000000000000000000000000000000000000000..9f6a0d815d40d61353cc6810f2603a150597d551 --- /dev/null +++ b/install/third_party/third_party_cpu_install.md @@ -0,0 +1,383 @@ +# 源码编译方式安装MindSpore CPU版本(含第三方依赖) + +作者:[damon0626](https://gitee.com/damon0626) + +本文档介绍如何在```Ubuntu 18.04 64```位操作系统```CPU```环境下,使用源码编译方式安装```MindSpore```。 + +## 确认系统环境信息 + +### 1. 确认安装Ubuntu 18.04是64位操作系统 + +(1)确认系统版本号,在终端输入```lsb_release -a``` + +```shell +ms-sd@mssd:~$ lsb_release -a +No LSB modules are available. +Distributor ID: Ubuntu +Description: Ubuntu 18.04.5 LTS +Release: 18.04 +Codename: bionic +``` + +(2)确认系统位数,在终端输入```uname -a``` + +```shell +ms-sd@mssd:~$ uname -a +Linux mssd 5.4.0-42-generic #46~18.04.1-Ubuntu SMP Fri Jul 10 07:21:24 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux +``` + +### 2. 确认安装GCC 7.3.0版本 + +(1)确认当前系统安装的GCC版本 + +在终端输入```gcc --version```,系统已安装版本为7.5.0 + +```shell +ms-sd@mssd:~/gcc-7.3.0/build$ gcc --version +gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 +Copyright (C) 2017 Free Software Foundation, Inc. +This is free software; see the source for copying conditions. +There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +``` + +(2)如果提示找不到gcc命令,用以下方式安装 + +```shell +ms-sd@mssd:~$ sudo apt-get install gcc +``` + +(3)本地编译安装7.3.0,下载文件 + +[点此下载GCC 7.3.0](http://ftp.gnu.org/gnu/gcc/gcc-7.3.0/gcc-7.3.0.tar.gz) + +(4)解压并进入目录 + +```shell +tar -xvzf gcc-7.3.0.tar.gz +cd gcc-7.3.0 +``` + +(5)运行```download_prerequisites```,运行该脚本的目的是 + +> 1. Download some prerequisites needed by gcc. +> 2. Run this from the top level of the gcc source tree and the gcc build will do the right thing.
+ +```shell +ms-sd@mssd:~/gcc-7.3.0$ ./contrib/download_prerequisites +2020-12-19 09:58:33 URL: ftp://gcc.gnu.org/pub/gcc/infrastructure/gmp-6.1.0.tar.bz2 [2383840] -> "./gmp-6.1.0.tar.bz2" [1] +2020-12-19 10:00:01 URL: ftp://gcc.gnu.org/pub/gcc/infrastructure/mpfr-3.1.4.tar.bz2 [1279284] -> "./mpfr-3.1.4.tar.bz2" [1] +2020-12-19 10:00:50 URL: ftp://gcc.gnu.org/pub/gcc/infrastructure/mpc-1.0.3.tar.gz [669925] -> "./mpc-1.0.3.tar.gz" [1] +2020-12-19 10:03:10 URL: ftp://gcc.gnu.org/pub/gcc/infrastructure/isl-0.16.1.tar.bz2 [1626446] -> "./isl-0.16.1.tar.bz2" [1] +gmp-6.1.0.tar.bz2: 成功 +mpfr-3.1.4.tar.bz2: 成功 +mpc-1.0.3.tar.gz: 成功 +isl-0.16.1.tar.bz2: 成功 +All prerequisites downloaded successfully. +``` + +(6)运行成功后,进行配置 + +```shell +ms-sd@mssd:~/gcc-7.3.0/build$ ../configure --enable-checking=release --enable-languages=c,c++ --disable-multilib +``` + +> 参数解释: +> --enable-checking=release 增加一些检查 +> --enable-languages=c,c++ 需要gcc支持的编程语言 +> --disable-multilib 取消多目标库编译(取消32位库编译) + +(7)编译,根据CPU性能,选择合适的线程数 + +```shell +ms-sd@mssd:~/gcc-7.3.0/build$ make -j 6 +``` + +(8)安装 + +```shell +ms-sd@mssd:~$ sudo make install -j 6 +``` + +(9)验证,看到版本已经变更为7.3.0,安装成功。 + +```shell +ms-sd@mssd:~$ gcc --version +gcc (GCC) 7.3.0 +Copyright © 2017 Free Software Foundation, Inc. +This is free software; see the source for copying conditions. +There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. +``` + +### 3.
确认安装Python 3.7.5版本 + +**注意:** ```Ubuntu 18.04``` 系统自带的 ```Python3```版本为```Python 3.6.9```,系统自带```Python```不要删除,防止依赖错误。```Linux```发行版中,```Debian```系提供了```update-alternatives```工具,用于在多个同功能的软件,或软件的多个不同版本间选择,这里采用```update-alternatives```工具控制多个Python版本。 + +(1)查看系统Python版本 + +```shell +ms-sd@mssd:~$ python3 --version +Python 3.6.9 +``` + +(2)[点此下载Python 3.7.5安装包](https://www.python.org/ftp/python/3.7.5/Python-3.7.5.tgz) + +(3)解压并进入目录 + +```shell +ms-sd@mssd:~$ tar -xvzf Python-3.7.5.tgz +ms-sd@mssd:~$ cd Python-3.7.5/ +``` + +(4)配置文件路径 + +```shell +ms-sd@mssd:~/Python-3.7.5$ ./configure --prefix=/usr/local/python3.7.5 --with-ssl +``` + +> 参数解释: +> --prefix=/usr/local/python3.7.5 +> 可执行文件放在/usr/local/python3.7.5/bin下, +> 库文件放在/usr/local/python3.7.5/lib, +> 配置文件放在/usr/local/python3.7.5/include, +> 其他资源文件放在/usr/local/python3.7.5下 +> +> --with-ssl:确保pip安装库时能找到SSL + +(5)安装必要的依赖 + +```shell +ms-sd@mssd:~/Python-3.7.5$ sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python3-openssl +``` + +(6)编译安装 + +```shell +ms-sd@mssd:~/Python-3.7.5$ make -j 6 +ms-sd@mssd:~/Python-3.7.5$ sudo make install -j 6 +``` + +(7)查看当前系统python/python3的指向 + +```shell +ms-sd@mssd:~$ ls -l /usr/bin/ | grep python +lrwxrwxrwx 1 root root 23 10月 8 20:12 pdb3.6 -> ../lib/python3.6/pdb.py +lrwxrwxrwx 1 root root 31 12月 18 21:44 py3versions -> ../share/python3/py3versions.py +lrwxrwxrwx 1 root root 9 12月 18 21:44 python3 -> python3.6 +-rwxr-xr-x 2 root root 4526456 10月 8 20:12 python3.6 +-rwxr-xr-x 2 root root 4526456 10月 8 20:12 python3.6m +lrwxrwxrwx 1 root root 10 12月 18 21:44 python3m -> python3.6m +``` + +(8)备份原来的python3链接,重新建立新的python3指向以更改python3默认指向 + +```shell +ms-sd@mssd:~/Python-3.7.5$ sudo mv /usr/bin/python3 /usr/bin/python3.bak +ms-sd@mssd:~/Python-3.7.5$ sudo ln -s /usr/local/python3.7.5/bin/python3.7 /usr/bin/python3 +``` + +(9)重新建立pip3指向 +
+```shell +ms-sd@mssd:~/Python-3.7.5$ sudo ln -s /usr/local/python3.7.5/bin/pip3 /usr/bin/pip3 +``` + +(10)输入验证,Python已更改为3.7.5版本 + +```python +ms-sd@mssd:~/Python-3.7.5$ python3 +Python 3.7.5 (default, Dec 19 2020, 11:29:09) +[GCC 7.3.0] on linux +Type "help", "copyright", "credits" or "license" for more information. +>>> +``` + +(11)更新```update-alternatives```的Python列表 + +```shell +sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 100 +sudo update-alternatives --install /usr/bin/python python /usr/bin/python3 150 +sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 110 +``` + +(12)设置Python默认选项,选择2,默认优先级最高的选项 + +```shell +ms-sd@mssd:~$ sudo update-alternatives --config python +There are 3 choices for the alternative python (providing /usr/bin/python). + + Selection Path Priority Status +------------------------------------------------------------ + 0 /usr/bin/python3 150 auto mode + 1 /usr/bin/python2 100 manual mode +* 2 /usr/bin/python3 150 manual mode + 3 /usr/bin/python3.6 110 manual mode + +Press <enter> to keep the current choice[*], or type selection number: +``` + +### 4. 确认安装OpenSSL 1.1.1及以上版本 + +(1)Ubuntu 18.04自带了OpenSSL 1.1.1 + +```shell +ms-sd@mssd:~/Python-3.7.5$ openssl version +OpenSSL 1.1.1 11 Sep 2018 +``` + +(2)本地编译安装请参考[Ubuntu 18.04 安装新版本openssl](https://www.cnblogs.com/thechosenone95/p/10603110.html) + +### 5.
确认安装CMake 3.18.3及以上版本 + +(1)[点此下载CMake 3.18.5](https://github.com/Kitware/CMake/releases/download/v3.18.5/cmake-3.18.5.tar.gz) + +(2)解压并进入文件目录 + +```shell +ms-sd@mssd:~$ tar -zxvf cmake-3.18.5.tar.gz +ms-sd@mssd:~$ cd cmake-3.18.5/ +``` + +(3)编译安装 + +在源码的README.rst中看到如下文字: + +> For example, if you simply want to build and install CMake from source, +> you can build directly in the source tree:: +> +> $ ./bootstrap && make && sudo make install +> +> Or, if you plan to develop CMake or otherwise run the test suite, create +> a separate build tree:: +> +> $ mkdir cmake-build && cd cmake-build +> +> $ ../cmake-source/bootstrap && make + +选择从源码编译安装,根据提示在终端依次输入以下命令: + +```shell +ms-sd@mssd:~/cmake-3.18.5$ ./bootstrap +ms-sd@mssd:~/cmake-3.18.5$ make -j 6 +ms-sd@mssd:~/cmake-3.18.5$ sudo make install -j 6 +``` + +(4)验证,安装成功 + +```shell +ms-sd@mssd:~$ cmake --version +cmake version 3.18.5 + +CMake suite maintained and supported by Kitware (kitware.com/cmake). +``` + +### 6. 确认安装wheel 0.32.0及以上版本 + +(1)更新pip源 + +修改 ~/.pip/pip.conf (如果没有该文件,创建一个), 内容如下: + +```shell +[global] +index-url = https://pypi.tuna.tsinghua.edu.cn/simple +``` + +(2)安装wheel 0.32.0 + +```shell +ms-sd@mssd:~$ sudo pip3 install wheel==0.32.0 +``` + +(3)查看安装情况 + +```shell +ms-sd@mssd:~$ pip3 list +Package Version +---------- ------- +numpy 1.19.4 +pip 20.3.3 +setuptools 41.2.0 +wheel 0.32.0 +``` + +### 7. 确认安装patch 2.5及以上版本 + +(1)查看patch版本,Ubuntu 18.04自带了2.7.6版本 + +```shell +ms-sd@mssd:~$ patch --version +GNU patch 2.7.6 +Copyright (C) 2003, 2009-2012 Free Software Foundation, Inc. +Copyright (C) 1988 Larry Wall + +License GPLv3+: GNU GPL version 3 or later . +This is free software: you are free to change and redistribute it. +There is NO WARRANTY, to the extent permitted by law. + +Written by Larry Wall and Paul Eggert +``` + +### 8. 确认安装NUMA 2.0.11及以上版本 + +(1)如果未安装,使用如下命令下载安装: + +```shell +ms-sd@mssd:~$ sudo apt-get install libnuma-dev +``` + +### 9.
确认安装git工具 + +```shell +ms-sd@mssd:~$ sudo apt-get install git +``` + +## MindSpore源码安装 + +### 10. 下载MindSpore源码 + +(1)从代码仓库下载源码 + +```shell +ms-sd@mssd:~$ git clone https://gitee.com/mindspore/mindspore.git -b r1.1 +``` + +(2)安装依赖(根据编译过程中报错,整理如下) + +```shell +ms-sd@mssd:~$ sudo apt-get install python3.7-dev pybind11 python3-wheel python3-setuptools python3.7-minimal +``` + +(3)编译(内存占用太大,总是超内存线程被杀死,建议4G以上) + +```shell +ms-sd@mssd:~/mindspore$ sudo bash build.sh -e cpu -j 2 +``` + +(4)编译成功 + +大约需要1小时,编译成功,出现如下提示: + +```shell +CPack: - package: /home/ms-sd/mindspore/build/mindspore/mindspore generated. +success building mindspore project! +---------------- mindspore: build end ---------------- +``` + +同时在```/mindspore/build/package/```文件下生成了```mindspore-1.1.0-cp37-cp37m-linux_x86_64.whl```文件。 + +(5)pip3安装MindSpore安装文件 + +```shell +ms-sd@mssd:~/mindspore$ sudo pip3 install /mindspore/build/package/mindspore-1.1.0-cp37-cp37m-linux_x86_64.whl +``` + +(6)导入测试 + +```python3 +ms-sd@mssd:~/mindspore$ sudo python3 +Python 3.7.5 (default, Dec 19 2020, 13:04:49) +[GCC 7.3.0] on linux +Type "help", "copyright", "credits" or "license" for more information. +>>> import mindspore +>>> mindspore.__version__ +'1.1.0' +``` diff --git a/lite/lite.md b/lite/lite.md index 7bd847cadc11002c875ca9e7231cb019dac01f79..8f2425991090b9ded86fd8787f194c6acbfb1b53 100644 --- a/lite/lite.md +++ b/lite/lite.md @@ -1,7 +1,7 @@

快速入门

- +
- +
训练一个LeNet模型 @@ -29,7 +29,7 @@

获取MindSpore Lite

- +
- +
编译MindSpore Lite @@ -57,7 +57,7 @@

端侧推理

- +
- +
- +
- +
- +
其他工具 @@ -121,7 +121,7 @@

端侧训练

- +
- +
执行训练 @@ -149,7 +149,7 @@

其它文档

- +
- +
- +
- +
- +
- +
- +
- +
- +
风格迁移模型 diff --git a/lite/lite_en.md b/lite/lite_en.md index e7eab0a1283e2e9f46dba1ec3a71ab8a0dd97c9f..0935fd93e88f19948cd5a359a77ea8c8535f42c1 100644 --- a/lite/lite_en.md +++ b/lite/lite_en.md @@ -1,7 +1,7 @@

Quick Start

- +
- +
Training a LeNet Model @@ -29,7 +29,7 @@

Obtain MindSpore Lite

- +
- +
Building MindSpore Lite @@ -57,7 +57,7 @@

Inference on Devices

- +
- +
- +
- +
Other Tools @@ -109,7 +109,7 @@

Training on Devices

- +
- +
Executing Model Training @@ -137,7 +137,7 @@

Other Documents

- +
- +
- +
- +
- +
- +
- +
- +
Style Transfer Model diff --git a/resource/api_mapping.md b/resource/api_mapping.md index 6b14db8cd64d438f56b995d1ab1fe5087269065a..33d84c9aaf93799ec980426e153834fad6417e06 100644 --- a/resource/api_mapping.md +++ b/resource/api_mapping.md @@ -6,11 +6,10 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun |------------------------------------------------------|------------------------------------------------------------------------| | torch.abs | mindspore.ops.Abs | | torch.acos | mindspore.ops.ACos | -| torch.add | mindspore.ops.TensorAdd | +| torch.add | mindspore.ops.Add | | torch.argmax | mindspore.ops.Argmax | | torch.argmin | mindspore.ops.Argmin | | torch.asin | mindspore.ops.Asin | -| torch.Assert | mindspore.ops.Assert | | torch.atan | mindspore.ops.Atan | | torch.atan2 | mindspore.ops.Atan2 | | torch.bitwise_and | mindspore.ops.BitwiseAnd | @@ -23,18 +22,16 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun | torch.clamp | mindspore.ops.clip_by_value | | torch.cos | mindspore.ops.Cos | | torch.cosh | mindspore.ops.Cosh | -| torch.count_nonzero | mindspore.ops.count_nonzero | | torch.cuda.device_count | mindspore.communication.get_group_size | | torch.cuda.set_device | mindspore.context.set_context | | torch.cumprod | mindspore.ops.CumProd | | torch.cumsum | mindspore.ops.CumSum | | torch.det | mindspore.nn.MatDet | | torch.diag | mindspore.ops.Diag | -| torch.digamma | mindspore.ops.DiGamma | +| torch.digamma | mindspore.nn.DiGamma | | torch.distributed.all_gather | mindspore.ops.AllGather | | torch.distributed.all_reduce | mindspore.ops.AllReduce | | torch.distributions.gamma.Gamma | mindspore.ops.Gamma | -| torch.distributions.bata.Bata | mindspore.nn.LBeta | | torch.distributed.get_rank | mindspore.communication.get_rank | | torch.distributed.init_process_group | mindspore.communication.init | | torch.div | mindspore.ops.Div | @@ -117,6 +114,7 @@ Mapping between PyTorch APIs and MindSpore 
APIs, which is provided by the commun | torch.nn.Sigmoid | mindspore.nn.Sigmoid | | torch.nn.SmoothL1Loss | mindspore.nn.SmoothL1Loss | | torch.nn.Softmax | mindspore.nn.Softmax | +| torch.nn.SyncBatchNorm.convert_sync_batchnorm | mindspore.nn.GlobalBatchNorm | | torch.nn.Tanh | mindspore.nn.Tanh | | torch.nn.Unfold | mindspore.nn.Unfold | | torch.nn.Upsample | mindspore.ops.ResizeBilinear | @@ -129,13 +127,12 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun | torch.optim.Adam | mindspore.nn.Adam | | torch.optim.Adamax | mindspore.ops.ApplyAdaMax | | torch.optim.AdamW | mindspore.nn.AdamWeightDecay | -| torch.optim.lr_scheduler.CosineAnnealingWarmRestarts | mindspore.nn.dynamic_lr.cosine_decay_lr | -| torch.optim.lr_scheduler.StepLR | mindspore.nn.dynamic_lr.piecewise_constant_lr | +| torch.optim.lr_scheduler.CosineAnnealingWarmRestarts | mindspore.nn.cosine_decay_lr | +| torch.optim.lr_scheduler.StepLR | mindspore.nn.piecewise_constant_lr | | torch.optim.Optimizer.step | mindspore.nn.TrainOneStepCell | | torch.optim.RMSprop | mindspore.nn.RMSProp | | torch.optim.SGD | mindspore.nn.SGD | | torch.pow | mindspore.ops.Pow | -| torch.possion | mindspore.ops.Possion | | torch.prod | mindspore.ops.ReduceProd | | torch.rand | mindspore.ops.UniformReal | | torch.randint | mindspore.ops.UniformInt | @@ -193,7 +190,6 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun | torch.zeros | mindspore.ops.Zeros | | torch.zeros_like | mindspore.ops.ZerosLike | | torchvision.datasets.ImageFolder | mindspore.dataset.ImageFolderDataset | -| torchvision.ops.box_iou | mindspore.ops.IOU | | torchvision.ops.nms | mindspore.ops.NMSWithMask | | torchvision.ops.roi_align | mindspore.ops.ROIAlign | | torchvision.transforms.CenterCrop | mindspore.dataset.vision.py_transforms.CenterCrop | diff --git a/resource/release/release_list_en.md b/resource/release/release_list_en.md index 
aa3f9cca5d3d4a4caaa8baa5d73154cdad16a361..60c8b40a93f7859ff62366f5b189ba5b80949599 100644 --- a/resource/release/release_list_en.md +++ b/resource/release/release_list_en.md @@ -3,48 +3,152 @@ - [Release List](#release-list) - - [1.0.1](#101) + - [1.1.1](#111) - [Releasenotes and API Updates](#releasenotes-and-api-updates) - [Downloads](#downloads) - [Related Documents](#related-documents) - - [1.0.0](#100) + - [1.1.0](#110) - [Releasenotes and API Updates](#releasenotes-and-api-updates-1) - [Downloads](#downloads-1) - [Related Documents](#related-documents-1) - - [0.7.0-beta](#070-beta) + - [1.0.1](#101) - [Releasenotes and API Updates](#releasenotes-and-api-updates-2) - [Downloads](#downloads-2) - [Related Documents](#related-documents-2) - - [0.6.0-beta](#060-beta) + - [1.0.0](#100) - [Releasenotes and API Updates](#releasenotes-and-api-updates-3) - [Downloads](#downloads-3) - [Related Documents](#related-documents-3) - - [0.5.2-beta](#052-beta) + - [0.7.0-beta](#070-beta) - [Releasenotes and API Updates](#releasenotes-and-api-updates-4) - [Downloads](#downloads-4) - [Related Documents](#related-documents-4) - - [0.5.0-beta](#050-beta) + - [0.6.0-beta](#060-beta) - [Releasenotes and API Updates](#releasenotes-and-api-updates-5) - [Downloads](#downloads-5) - [Related Documents](#related-documents-5) - - [0.3.0-alpha](#030-alpha) + - [0.5.2-beta](#052-beta) - [Releasenotes and API Updates](#releasenotes-and-api-updates-6) - [Downloads](#downloads-6) - [Related Documents](#related-documents-6) - - [0.2.0-alpha](#020-alpha) + - [0.5.0-beta](#050-beta) - [Releasenotes and API Updates](#releasenotes-and-api-updates-7) - [Downloads](#downloads-7) - [Related Documents](#related-documents-7) - - [0.1.0-alpha](#010-alpha) - - [Releasenotes](#releasenotes) + - [0.3.0-alpha](#030-alpha) + - [Releasenotes and API Updates](#releasenotes-and-api-updates-8) - [Downloads](#downloads-8) - [Related Documents](#related-documents-8) - - [master(unstable)](#masterunstable) + - 
[0.2.0-alpha](#020-alpha) + - [Releasenotes and API Updates](#releasenotes-and-api-updates-9) + - [Downloads](#downloads-9) - [Related Documents](#related-documents-9) + - [0.1.0-alpha](#010-alpha) + - [Releasenotes](#releasenotes) + - [Downloads](#downloads-10) + - [Related Documents](#related-documents-10) + - [master(unstable)](#masterunstable) + - [Related Documents](#related-documents-11) - + + +## 1.1.1 + +### Releasenotes and API Updates + + + +### Downloads + +| Module Name | Hardware Platform | Operating System | Download Links | SHA-256 | +| --- | --- | --- | --- | --- | +| MindSpore | Ascend 910 | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | EulerOS-aarch64 | | | +| | | CentOS-x86 | | | +| | | CentOS-aarch64 | | | +| | Ascend 310 | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | EulerOS-aarch64 | | | +| | | CentOS-x86 | | | +| | | CentOS-aarch64 | | | +| | GPU CUDA 10.1 | Ubuntu-x86 | | | +| | CPU | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | Windows-x64 | | | +| MindInsight | Ascend 910 | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | EulerOS-aarch64 | | | +| | | CentOS-x86 | | | +| | | CentOS-aarch64 | | | +| | GPU CUDA 10.1 | Ubuntu-x86 | | | +| MindArmour | Ascend 910 | Ubuntu-x86
CentOS-x86 | | | +| | | Ubuntu-aarch64
EulerOS-aarch64
CentOS-aarch64 | | | +| | GPU CUDA 10.1
CPU | Ubuntu-x86 | | | +| MindSpore
Hub | | any | | | +| MindSpore
Serving | Ascend 910
Ascend310 | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | EulerOS-aarch64 | | | +| | | CentOS-x86 | | | +| | | CentOS-aarch64 | | | + +### Related Documents + +| Category | URL | +| --- | --- | +| Installation | | +| Tutorials | Training
Inference
Mobile Phone&IoT | +| Docs | Python API
C++ API
Java API
FAQ
Design&Specification | + +## 1.1.0 + +### Releasenotes and API Updates + + + +### Downloads + +| Module Name | Hardware Platform | Operating System | Download Links | SHA-256 | +| --- | --- | --- | --- | --- | +| MindSpore | Ascend 910 | Ubuntu-x86 | | 8dc45c9c6367a9b59a5893c896b3ebfd929544325c911f48f679b9203165d85d | +| | | Ubuntu-aarch64 | | b49124e793127ac9d55ba8e5df109a17aafb3f09bbc4a9f7bc228bfc5b652042 | +| | | EulerOS-aarch64 | | 1c03e7941a9e247fb0e64f9ba0adbcb4fde3e815cd00dc4bc79e6a81a29e0335 | +| | | CentOS-x86 | | 3affe7f5dc4c7c649221d80bf8a41f54fe64028424c422d3513c11a6507f193f | +| | | CentOS-aarch64 | |051d2fe7fa1fa95e92da9841a1cdad113561da19a5e7f9abe30322ff44d68d2e | +| | Ascend 310 | Ubuntu-x86 | |fe357e5e83130938ad490563fa310e71261683cea08dede8731a915373991d5c | +| | | Ubuntu-aarch64 | |17dc70cdf79f80db0344def06a427c93c5b03f3448a5aeb34a0b41305425e0bd | +| | | EulerOS-aarch64 | |be0881c5848696f67cbf54456babf344317f9509ad0961487588ae5e26ec2f87 | +| | | CentOS-x86 | |fc0c6d3cfd6688f6b7c999a4189cd06a8496ccde45db8528b57439edb12f819e | +| | | CentOS-aarch64 | |2a6856e2a7bd8db106748877bc2b4fa9d9804db265578d2d5f057a4e79073305 | +| | GPU CUDA 10.1 | Ubuntu-x86 | | 11386b0e156f033987f879e3b79f87e7cde0a6881063434f2c84a8564099e858 | +| | CPU | Ubuntu-x86 | | 1a1683e9c30650284f23001a1af0ae570ca854317ec52efc698ce7da604e31b0 | +| | | Ubuntu-aarch64 | | e1fa3cec68aef0e6619408f81d7e9e627704c1bfbf453ed90ee6d3b6c0c8c84f | +| | | Windows-x64 | | ce3f1d4504fd8236113827d435c9aa691b0200e1ffeba3db391e678ad31a7df7 | +| MindInsight | Ascend 910 | Ubuntu-x86 | | 85f4a38ecaf4d6799482e2a982609c46a49471325b47699c5b01b340549ab961 | +| | | Ubuntu-aarch64 | | adb45fa766ff5ca4ef6cbe24335ca7e87c81e9293b60ffe00fec76533115ef4e | +| | | EulerOS-aarch64 | | 78b9a728aecc01ead3687f9469d8af228917eab285f0770316bcc214b4ae3adc | +| | | CentOS-x86 | | a19a126ae1daa210c78aa256262303c9ad20f9cfe2404a5af840d325a471eb30 | +| | | CentOS-aarch64 | | 
f499aa428d754dc36da303f02b6531576e9e86158b213184c392f2302f13da2b | +| | GPU CUDA 10.1 | Ubuntu-x86 | | 85f4a38ecaf4d6799482e2a982609c46a49471325b47699c5b01b340549ab961 | +| MindArmour | Ascend 910 | Ubuntu-x86
CentOS-x86 | | 3d8b05437dca6d648073b85909508377b7cab05f9a6f52ee712592083d611770 | +| | | Ubuntu-aarch64
EulerOS-aarch64
CentOS-aarch64 | | bc724697cf053672198be226193cd0467c5a7f2a700d26a024bcfb318724f34a | +| | GPU CUDA 10.1
CPU | Ubuntu-x86 | | 3d8b05437dca6d648073b85909508377b7cab05f9a6f52ee712592083d611770 | +| MindSpore
Hub | | any | |1f329f35865a4e7014461e485e8a87859160aae6cbe1033973239e26c7dee01f | +| MindSpore
Serving | Ascend 910
Ascend310 | Ubuntu-x86 | | 4bfb3a41b9fbfd77ed09244f08ec98f8e5833e6fa27d7c214b9262c1f3568258 | +| | | Ubuntu-aarch64 | | 095ac95e4c338b17dd192422d8bf342c55441a79eeeeb70441ccc65746b0f2d7 | +| | | EulerOS-aarch64 | | 1695ac7a01fdcb4fad9d47a172767d56fcae4979ecced298f5e33c936e821649 | +| | | CentOS-x86 | | ed0cc466efad7fb717527a511611c1fb2d72db4caf0f66e6fcbde0ecf7d6e525 | +| | | CentOS-aarch64 | |e6ed84cfe0ff9b51b94cd2575f62238c95a73ac386e2d09adf75d3ea74177420 | + +### Related Documents + +| Category | URL | +| --- | --- | +| Installation | | +| Tutorials | Training
Inference
Mobile Phone&IoT | +| Docs | Python API
C++ API
Java API
FAQ
Design&Specification | ## 1.0.1 @@ -384,4 +488,4 @@ | --- | --- | | Installation | | | Tutorials | Training
Inference
Mobile Phone&IoT | -| Docs | Python API
C++ API
FAQ
Other Note | +| Docs | Python API
C++ API
Java API
FAQ
Design&Specification | diff --git a/resource/release/release_list_zh_cn.md b/resource/release/release_list_zh_cn.md index d7986da3f194a22b2d3dba6fc28e27d4ce0c8152..54285934955c82cee3ca1a94b2349d780db328a5 100644 --- a/resource/release/release_list_zh_cn.md +++ b/resource/release/release_list_zh_cn.md @@ -3,48 +3,152 @@ - [发布版本列表](#发布版本列表) - - [1.0.1](#101) + - [1.1.1](#111) - [版本说明和接口变更](#版本说明和接口变更) - [下载地址](#下载地址) - [配套资料](#配套资料) - - [1.0.0](#100) + - [1.1.0](#110) - [版本说明和接口变更](#版本说明和接口变更-1) - [下载地址](#下载地址-1) - [配套资料](#配套资料-1) - - [0.7.0-beta](#070-beta) + - [1.0.1](#101) - [版本说明和接口变更](#版本说明和接口变更-2) - [下载地址](#下载地址-2) - [配套资料](#配套资料-2) - - [0.6.0-beta](#060-beta) + - [1.0.0](#100) - [版本说明和接口变更](#版本说明和接口变更-3) - [下载地址](#下载地址-3) - [配套资料](#配套资料-3) - - [0.5.2-beta](#052-beta) + - [0.7.0-beta](#070-beta) - [版本说明和接口变更](#版本说明和接口变更-4) - [下载地址](#下载地址-4) - [配套资料](#配套资料-4) - - [0.5.0-beta](#050-beta) + - [0.6.0-beta](#060-beta) - [版本说明和接口变更](#版本说明和接口变更-5) - [下载地址](#下载地址-5) - [配套资料](#配套资料-5) - - [0.3.0-alpha](#030-alpha) + - [0.5.2-beta](#052-beta) - [版本说明和接口变更](#版本说明和接口变更-6) - [下载地址](#下载地址-6) - [配套资料](#配套资料-6) - - [0.2.0-alpha](#020-alpha) + - [0.5.0-beta](#050-beta) - [版本说明和接口变更](#版本说明和接口变更-7) - [下载地址](#下载地址-7) - [配套资料](#配套资料-7) - - [0.1.0-alpha](#010-alpha) - - [版本说明](#版本说明) + - [0.3.0-alpha](#030-alpha) + - [版本说明和接口变更](#版本说明和接口变更-8) - [下载地址](#下载地址-8) - [配套资料](#配套资料-8) - - [master(unstable)](#masterunstable) + - [0.2.0-alpha](#020-alpha) + - [版本说明和接口变更](#版本说明和接口变更-9) + - [下载地址](#下载地址-9) - [配套资料](#配套资料-9) + - [0.1.0-alpha](#010-alpha) + - [版本说明](#版本说明) + - [下载地址](#下载地址-10) + - [配套资料](#配套资料-10) + - [master(unstable)](#masterunstable) + - [配套资料](#配套资料-11) - + + +## 1.1.1 + +### 版本说明和接口变更 + + + +### 下载地址 + +| 组件 | 硬件平台 | 操作系统 | 链接 | SHA-256 | +| --- | --- | --- | --- | --- | +| MindSpore | Ascend 910 | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | EulerOS-aarch64 | | | +| | | CentOS-x86 | | | +| | | CentOS-aarch64 | | | +| | Ascend 310 | Ubuntu-x86 | | | +| | | 
Ubuntu-aarch64 | | | +| | | EulerOS-aarch64 | | | +| | | CentOS-x86 | | | +| | | CentOS-aarch64 | | | +| | GPU CUDA 10.1 | Ubuntu-x86 | | | +| | CPU | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | Windows-x64 | | | +| MindInsight | Ascend 910 | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | EulerOS-aarch64 | | | +| | | CentOS-x86 | | | +| | | CentOS-aarch64 | | | +| | GPU CUDA 10.1 | Ubuntu-x86 | | | +| MindArmour | Ascend 910 | Ubuntu-x86
CentOS-x86 | | | +| | | Ubuntu-aarch64
EulerOS-aarch64
CentOS-aarch64 | | | +| | GPU CUDA 10.1
CPU | Ubuntu-x86 | | | +| MindSpore
Hub | | any | | | +| MindSpore
Serving | Ascend 910
Ascend310 | Ubuntu-x86 | | | +| | | Ubuntu-aarch64 | | | +| | | EulerOS-aarch64 | | | +| | | CentOS-x86 | | | +| | | CentOS-aarch64 | | | + +### 配套资料 + +| 类别 | 网址 | +| --- | --- | +|安装 | | +| 教程 | 训练
推理
手机&IoT | +| 文档 | 编程指南
Python API
C++ API
Java API
FAQ
设计和规格 | + +## 1.1.0 + +### 版本说明和接口变更 + + + +### 下载地址 + +| 组件 | 硬件平台 | 操作系统 | 链接 | SHA-256 | +| --- | --- | --- | --- | --- | +| MindSpore | Ascend 910 | Ubuntu-x86 | | 8dc45c9c6367a9b59a5893c896b3ebfd929544325c911f48f679b9203165d85d | +| | | Ubuntu-aarch64 | | b49124e793127ac9d55ba8e5df109a17aafb3f09bbc4a9f7bc228bfc5b652042 | +| | | EulerOS-aarch64 | | 1c03e7941a9e247fb0e64f9ba0adbcb4fde3e815cd00dc4bc79e6a81a29e0335 | +| | | CentOS-x86 | | 3affe7f5dc4c7c649221d80bf8a41f54fe64028424c422d3513c11a6507f193f | +| | | CentOS-aarch64 | |051d2fe7fa1fa95e92da9841a1cdad113561da19a5e7f9abe30322ff44d68d2e | +| | Ascend 310 | Ubuntu-x86 | |fe357e5e83130938ad490563fa310e71261683cea08dede8731a915373991d5c | +| | | Ubuntu-aarch64 | |17dc70cdf79f80db0344def06a427c93c5b03f3448a5aeb34a0b41305425e0bd | +| | | EulerOS-aarch64 | |be0881c5848696f67cbf54456babf344317f9509ad0961487588ae5e26ec2f87 | +| | | CentOS-x86 | |fc0c6d3cfd6688f6b7c999a4189cd06a8496ccde45db8528b57439edb12f819e | +| | | CentOS-aarch64 | |2a6856e2a7bd8db106748877bc2b4fa9d9804db265578d2d5f057a4e79073305 | +| | GPU CUDA 10.1 | Ubuntu-x86 | | 11386b0e156f033987f879e3b79f87e7cde0a6881063434f2c84a8564099e858 | +| | CPU | Ubuntu-x86 | | 1a1683e9c30650284f23001a1af0ae570ca854317ec52efc698ce7da604e31b0 | +| | | Ubuntu-aarch64 | | e1fa3cec68aef0e6619408f81d7e9e627704c1bfbf453ed90ee6d3b6c0c8c84f | +| | | Windows-x64 | | ce3f1d4504fd8236113827d435c9aa691b0200e1ffeba3db391e678ad31a7df7 | +| MindInsight | Ascend 910 | Ubuntu-x86 | | 85f4a38ecaf4d6799482e2a982609c46a49471325b47699c5b01b340549ab961 | +| | | Ubuntu-aarch64 | | adb45fa766ff5ca4ef6cbe24335ca7e87c81e9293b60ffe00fec76533115ef4e | +| | | EulerOS-aarch64 | | 78b9a728aecc01ead3687f9469d8af228917eab285f0770316bcc214b4ae3adc | +| | | CentOS-x86 | | a19a126ae1daa210c78aa256262303c9ad20f9cfe2404a5af840d325a471eb30 | +| | | CentOS-aarch64 | | f499aa428d754dc36da303f02b6531576e9e86158b213184c392f2302f13da2b | +| | GPU CUDA 10.1 | Ubuntu-x86 | | 
85f4a38ecaf4d6799482e2a982609c46a49471325b47699c5b01b340549ab961 | +| MindArmour | Ascend 910 | Ubuntu-x86
CentOS-x86 | | 3d8b05437dca6d648073b85909508377b7cab05f9a6f52ee712592083d611770 | +| | | Ubuntu-aarch64
EulerOS-aarch64
CentOS-aarch64 | | bc724697cf053672198be226193cd0467c5a7f2a700d26a024bcfb318724f34a | +| | GPU CUDA 10.1
CPU | Ubuntu-x86 | | 3d8b05437dca6d648073b85909508377b7cab05f9a6f52ee712592083d611770 | +| MindSpore
Hub | | any | |1f329f35865a4e7014461e485e8a87859160aae6cbe1033973239e26c7dee01f | +| MindSpore
Serving | Ascend 910
Ascend310 | Ubuntu-x86 | | 4bfb3a41b9fbfd77ed09244f08ec98f8e5833e6fa27d7c214b9262c1f3568258 | +| | | Ubuntu-aarch64 | | 095ac95e4c338b17dd192422d8bf342c55441a79eeeeb70441ccc65746b0f2d7 | +| | | EulerOS-aarch64 | | 1695ac7a01fdcb4fad9d47a172767d56fcae4979ecced298f5e33c936e821649 | +| | | CentOS-x86 | | ed0cc466efad7fb717527a511611c1fb2d72db4caf0f66e6fcbde0ecf7d6e525 | +| | | CentOS-aarch64 | |e6ed84cfe0ff9b51b94cd2575f62238c95a73ac386e2d09adf75d3ea74177420 | + +### 配套资料 + +| 类别 | 网址 | +| --- | --- | +|安装 | | +| 教程 | 训练
推理
手机&IoT | +| 文档 | 编程指南
Python API
C++ API
Java API
FAQ
设计和规格 | ## 1.0.1 @@ -111,8 +215,8 @@ | | | CentOS-x86 | | 8eab8881dd585731dfdedaec16b456fe6e80242199efbdc5703e20382b59aeab | | | | CentOS-aarch64 | | 3f76f2ff8c809b638136748348d5860b2ef6f6412ec37db2e02d00a7bc53c91f | | | GPU CUDA 10.1 | Ubuntu-x86 | | dd951904ef10adbb93501c3cbafa6b4d34b1e8e5c4efe4fcaa7af49f0c081041 | -| MindArmour | Ascend 910 | Ubuntu-x86
EulerOS-x86
CentOS 7.6 x86_64 | | a139ded76899e5901889fc4e578165ef78584a127f9c264830e4e2806c30cc82 | -| | | Ubuntu-aarch64
EulerOS-aarch64
CentOS 7.6 aarch64 | | e895ba5a0d207e0cb3e93acdfaaa399a63161443371ef68d626d29542e41d940 | +| MindArmour | Ascend 910 | Ubuntu-x86
EulerOS-x86
CentOS x86_64 | | a139ded76899e5901889fc4e578165ef78584a127f9c264830e4e2806c30cc82 | +| | | Ubuntu-aarch64
EulerOS-aarch64
CentOS aarch64 | | e895ba5a0d207e0cb3e93acdfaaa399a63161443371ef68d626d29542e41d940 | | | GPU CUDA 10.1
CPU | Ubuntu-x86 | | a139ded76899e5901889fc4e578165ef78584a127f9c264830e4e2806c30cc82 | | MindSpore
Hub | | any | |0cb7ea4c8cd81279bc61558e1102da14516d2ea9653269cb0519c7085df8e3c3 | | MindSpore
Lite RT | CPU | Android-aarch32 | |abb28cee1b8a439c51d05a7c4521dc3f76d05ae79db4be781c932ee5f0abc774 | @@ -384,4 +488,4 @@ | --- | --- | |安装 | | | 教程 | 训练
推理
手机&IoT | -| 文档 | 编程指南
Python API
C++ API
FAQ
其他说明 | +| 文档 | 编程指南
Python API
C++ API
Java API
FAQ
设计和规格 | diff --git a/tools/link_detection/README_CN.md b/tools/link_detection/README_CN.md index c2be9e6e409f7926daaf6e5034c5525da6b120c1..053413b0ef37f1b595cb02f31331e02f324173e9 100644 --- a/tools/link_detection/README_CN.md +++ b/tools/link_detection/README_CN.md @@ -15,7 +15,7 @@ 1. 打开Git Bash,下载MindSpore Docs仓代码。 ```shell - git clone https://gitee.com/mindspore/docs.git + git clone https://gitee.com/mindspore/docs.git -b r1.1 ``` 2. 进入`tools/link_detection`目录,安装执行所需的第三方库。 diff --git a/tools/pic_detection/README_CN.md b/tools/pic_detection/README_CN.md index a3cf658bc44bc75dede5f6d86a1f649209912092..b52f9314f4adaec06757c172d6711feb05638838 100644 --- a/tools/pic_detection/README_CN.md +++ b/tools/pic_detection/README_CN.md @@ -11,7 +11,7 @@ 1. 打开Git Bash,下载MindSpore Docs仓代码。 ```shell - git clone https://gitee.com/mindspore/docs.git + git clone https://gitee.com/mindspore/docs.git -b r1.1 ``` 2. 进入`tools/pic_detection`目录。 diff --git a/tutorials/inference/source_en/conf.py b/tutorials/inference/source_en/conf.py index 0a00ad8da18607c9f0ac88017972211d04c763c0..425ae737d4e83fc89afc1d341cc266c2f72ca089 100644 --- a/tutorials/inference/source_en/conf.py +++ b/tutorials/inference/source_en/conf.py @@ -21,7 +21,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/tutorials/inference/source_en/index.rst b/tutorials/inference/source_en/index.rst index 77e779bf19d5c2c7227a3267093afa288ff086fd..352af34cc80aac119b0da4e3afba8ae9bfd25269 100644 --- a/tutorials/inference/source_en/index.rst +++ b/tutorials/inference/source_en/index.rst @@ -24,3 +24,6 @@ Inference Using MindSpore :caption: Inference Service serving_example + serving_grpc + serving_restful + serving_model diff --git a/tutorials/inference/source_en/multi_platform_inference.md b/tutorials/inference/source_en/multi_platform_inference.md 
index 2879428aaf758850a2ce2d535d0e0bcb5b8ec170..869ba617199ee0a6bf2d96b80d9ae8d0089c47b3 100644 --- a/tutorials/inference/source_en/multi_platform_inference.md +++ b/tutorials/inference/source_en/multi_platform_inference.md @@ -8,7 +8,7 @@ - + Models trained by MindSpore support the inference on different hardware platforms. This document describes the inference process on each platform. diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst b/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst index d16b94a6134bb498484d11cc4a9535cfddc6f39a..1544dd6a232ca90820288d832336763cff2b3774 100644 --- a/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst +++ b/tutorials/inference/source_en/multi_platform_inference_ascend_310.rst @@ -5,3 +5,4 @@ Inference on Ascend 310 :maxdepth: 1 multi_platform_inference_ascend_310_air + multi_platform_inference_ascend_310_mindir \ No newline at end of file diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_310_air.md b/tutorials/inference/source_en/multi_platform_inference_ascend_310_air.md index cf0d0656ea5b9dda1c0743f0fc24db8b8a637634..8f31df3ee50cd1e61ab5b00f09c574e629f92298 100644 --- a/tutorials/inference/source_en/multi_platform_inference_ascend_310_air.md +++ b/tutorials/inference/source_en/multi_platform_inference_ascend_310_air.md @@ -21,7 +21,7 @@ - + ## Overview @@ -39,7 +39,7 @@ This tutorial describes how to use MindSpore to perform inference on the Atlas 2 5. Load the saved OM model, perform inference, and view the result. -> You can obtain the complete executable sample code at . +> You can obtain the complete executable sample code at . 
## Preparing the Development Environment @@ -71,7 +71,7 @@ The following five types of scripts and software packages are required for confi In the preceding information: - For details about the first three items, see [Creating an SD Card with a Card Reader](https://support.huaweicloud.com/intl/en-us//usermanual-A200dk_3000/atlas200dk_02_0011.html). -- You are advised to obtain other software packages from [Firmware and Driver](https://www.huaweicloud.com/intl/en-us/ascend/resource/Software). On this page, select `Atlas 200 DK` from the product series and product model and select the required files to download. +- You are advised to obtain other software packages from [Firmware and Driver](https://ascend.huawei.com/en/#/hardware/firmware-drivers). On this page, select `Atlas 200 DK` from the product series and product model and select the required files to download. ### Preparing the SD Card @@ -91,7 +91,7 @@ Install the development kit software package `Ascend-Toolkit-*{version}*-arm64-l ## Inference Directory Structure -Create a directory to store the inference code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/acl_resnet50_sample`. The `inc`, `src`, and `test_data` directory code can be obtained from the [official website](https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/acl_resnet50_sample), and the `model` directory stores the exported `AIR` model file and the converted `OM` model file. The `out` directory stores the executable file generated after building and the output result directory. The directory structure of the inference code project is as follows: +Create a directory to store the inference code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/acl_resnet50_sample`. 
The `inc`, `src`, and `test_data` [sample code](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/acl_resnet50_sample) can be obtained from the official website, and the `model` directory stores the exported `AIR` model file and the converted `OM` model file. The `out` directory stores the executable file generated after building and the output result directory. The directory structure of the inference code project is as follows: ```text └─acl_resnet50_sample @@ -121,7 +121,7 @@ Create a directory to store the inference code project, for example, `/home/HwHi ## Exporting the AIR Model -Train the target network on the Ascend 910 AI Processor, save it as a checkpoint file, and export the model file in AIR format through the network and checkpoint file. For details about the export process, see [Export AIR Model](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html#export-air-model). +Train the target network on the Ascend 910 AI Processor, save it as a checkpoint file, and export the model file in AIR format through the network and checkpoint file. For details about the export process, see [Export AIR Model](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html#export-air-model). > The [resnet50_export.air](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com:443/sample_resources/acl_resnet50_sample/resnet50_export.air) is a sample AIR file exported using the ResNet-50 model. @@ -149,6 +149,8 @@ In the preceding information: - `--output`: path of the converted OM model file - `--input_format`: input image format +For detailed information about ATC tools, please select the corresponding CANN version in the [Developer Documentation(Community Edition)](https://ascend.huawei.com/en/#/document?tag=developer), and then search for the chapter of "ATC Tool Instructions". 
+ ## Building Inference Code Go to the project directory `acl_resnet50_sample` and set the following environment variables: diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md b/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md new file mode 100644 index 0000000000000000000000000000000000000000..dbef96d1625d4e9e9c43ea794fbd81d44674d06a --- /dev/null +++ b/tutorials/inference/source_en/multi_platform_inference_ascend_310_mindir.md @@ -0,0 +1,227 @@ +# Inference Using the MindIR Model on Ascend 310 AI Processors + +`Linux` `Ascend` `Inference Application` `Beginner` `Intermediate` `Expert` + + + +- [Inference Using the MindIR Model on Ascend 310 AI Processors](#inference-using-the-mindir-model-on-ascend-310-ai-processors) + - [Overview](#overview) + - [Preparing the Development Environment](#preparing-the-development-environment) + - [Exporting the MindIR Model](#exporting-the-mindir-model) + - [Inference Directory Structure](#inference-directory-structure) + - [Inference Code](#inference-code) + - [Introduce to Building Script](#introduce-to-building-script) + - [Building Inference Code](#building-inference-code) + - [Performing Inference and Viewing the Result](#performing-inference-and-viewing-the-result) + + + + + +## Overview + +Ascend 310 is a highly efficient and integrated AI processor oriented to edge scenarios. The Atlas 200 Developer Kit (Atlas 200 DK) is a developer board that uses the Atlas 200 AI accelerator module. Integrated with the HiSilicon Ascend 310 AI processor, the Atlas 200 allows data analysis, inference, and computing for various data such as images and videos, and can be widely used in scenarios such as intelligent surveillance, robots, drones, and video servers. + +This tutorial describes how to use MindSpore to perform inference on the Atlas 200 DK. The process is as follows: + +1. 
Prepare the development environment, including creating an SD card for the Atlas 200 DK, configuring the Python environment, and updating the development software package. + +2. Export the MindIR model file. The ResNet-50 model is used as an example. + +3. Build the inference code to generate an executable `main` file. + +4. Load the saved MindIR model, perform inference, and view the result. + +> You can obtain the complete executable sample code at . + +## Preparing the Development Environment + +For details, see [Inference on the Ascend 310 AI Processor](https://www.mindspore.cn/tutorial/inference/en/r1.1/multi_platform_inference_ascend_310_air.html#preparing-the-development-environment). + +## Exporting the MindIR Model + +Train the target network on the Ascend 910 AI Processor, save it as a checkpoint file, and export the model file in MindIR format through the network and checkpoint file. For details about the export process, see [Export MindIR Model](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html#export-mindir-model). + +> The [resnet50_imagenet.mindir](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/sample_resources/ascend310_resnet50_preprocess_sample/resnet50_imagenet.mindir) is a sample MindIR file exported using the ResNet-50 model. + +## Inference Directory Structure + +Create a directory to store the inference code project, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_resnet50_preprocess_sample`. The directory code can be obtained from the [official website](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/ascend310_resnet50_preprocess_sample). The `model` directory stores the exported `MindIR` model files and the `test_data` directory stores the images to be classified. 
The directory structure of the inference code project is as follows: + +```text +└─ascend310_resnet50_preprocess_sample + ├── CMakeLists.txt // Build script + ├── README.md // Usage description + ├── main.cc // Main function + ├── model + │ └── resnet50_imagenet.mindir // MindIR model file + └── test_data + ├── ILSVRC2012_val_00002138.JPEG // Input sample image 1 + ├── ILSVRC2012_val_00003014.JPEG // Input sample image 2 + ├── ... // Input sample image n +``` + +## Inference Code + +Inference sample code: . + +Set global context, device target is `Ascend310` and evice id is `0`: + +```c++ +ms::GlobalContext::SetGlobalDeviceTarget(ms::kDeviceTypeAscend310); +ms::GlobalContext::SetGlobalDeviceID(0); +``` + +Load mindir file: + +```c++ +// Load MindIR model +auto graph =ms::Serialization::LoadModel(resnet_file, ms::ModelType::kMindIR); +// Build model with graph object +ms::Model resnet50((ms::GraphCell(graph))); +ms::Status ret = resnet50.Build({}); +``` + +Get informance of this model: + +```c++ +std::vector model_inputs = resnet50.GetInputs(); +``` + +Load image file: + +```c++ +// Readfile is a function to read images +ms::MSTensor ReadFile(const std::string &file); +auto image = ReadFile(image_file); +``` + +Image preprocess: + +```c++ +// Create the CPU operator provided by MindData to get the function object +ms::dataset::Execute preprocessor({ms::dataset::vision::Decode(), // Decode the input to RGB format + ms::dataset::vision::Resize({256}), // Resize the image to the given size + ms::dataset::vision::Normalize({0.485 * 255, 0.456 * 255, 0.406 * 255}, + {0.229 * 255, 0.224 * 255, 0.225 * 255}), // Normalize the input + ms::dataset::vision::CenterCrop({224, 224}), // Crop the input image at the center + ms::dataset::vision::HWC2CHW(), // shape (H, W, C) to shape(C, H, W) + }); +// Call the function object to get the processed image +ret = preprocessor(image, &image); +``` + +Execute the model: + +```c++ +// Create outputs vector +std::vector outputs; +// 
Create inputs vector
+std::vector<ms::MSTensor> inputs;
+inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
+                    image.Data().get(), image.DataSize());
+// Call the Predict function of Model for inference
+ret = resnet50.Predict(inputs, &outputs);
+```
+
+Print the result:
+
+```c++
+// Output the maximum probability to the screen
+std::cout << "Image: " << image_file << " infer result: " << GetMax(outputs[0]) << std::endl;
+```
+
+## Introduce to Building Script
+
+The build script is used to build the application: .
+
+Since MindSpore uses the [old C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html), applications must use the same ABI as MindSpore; otherwise, the build fails.
+
+```cmake
+add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
+set(CMAKE_CXX_STANDARD 17)
+```
+
+Add header files to the gcc search path:
+
+```cmake
+option(MINDSPORE_PATH "mindspore install path" "")
+include_directories(${MINDSPORE_PATH})
+include_directories(${MINDSPORE_PATH}/include)
+```
+
+Find the shared libraries in MindSpore:
+
+```cmake
+find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
+file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)
+```
+
+Use the source files to generate the target executable file, and link the MindSpore libraries to it:
+
+```cmake
+add_executable(resnet50_sample main.cc)
+target_link_libraries(resnet50_sample ${MS_LIB} ${MD_LIB})
+```
+
+## Building Inference Code
+
+Go to the project directory `ascend310_resnet50_preprocess_sample` and set the following environment variables:
+
+```bash
+# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING. 
+export GLOG_v=2
+
+# Conda environmental options
+LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
+# lib libraries that the run package depends on
+export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
+
+# lib libraries that the mindspore depends on, modify "pip3" according to the actual situation
+export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH}
+
+# Environment variables that must be configured
+export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path
+export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/atc/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+```
+
+Run the `cmake` command, modifying `pip3` according to your actual environment:
+
+```bash
+cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`
+```
+
+Run the `make` command to build the project.
+
+```bash
+make
+```
+
+After building, the executable `main` file is generated in `ascend310_resnet50_preprocess_sample`.
+
+## Performing Inference and Viewing the Result
+
+Log in to the Atlas 200 DK developer board, and create the `model` directory for storing the MindIR file `resnet50_imagenet.mindir`, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_resnet50_preprocess_sample/model`. 
+Create the `test_data` directory to store images, for example, `/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_resnet50_preprocess_sample/test_data`. +Then, perform the inference. + +```bash +./resnet50_sample +``` + +Inference is performed on all images stored in the `test_data` directory. For example, if there are 9 images whose label is 0 in the [ImageNet2012](http://image-net.org/download-images) validation set, the inference result is as follows: + +```text +Image: ./test_data/ILSVRC2012_val_00002138.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00003014.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00006697.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00007197.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009111.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009191.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009346.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009379.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009396.JPEG infer result: 0 +``` diff --git a/tutorials/inference/source_en/multi_platform_inference_ascend_910.md b/tutorials/inference/source_en/multi_platform_inference_ascend_910.md index 7b6afa002a202a9d888f693525a67b73d60d58fa..1621abe5f2054617024c75bd5ba176223dd9c5f0 100644 --- a/tutorials/inference/source_en/multi_platform_inference_ascend_910.md +++ b/tutorials/inference/source_en/multi_platform_inference_ascend_910.md @@ -6,11 +6,17 @@ - [Inference on the Ascend 910 AI processor](#inference-on-the-ascend-910-ai-processor) - [Inference Using a Checkpoint File with Single Device](#inference-using-a-checkpoint-file-with-single-device) - - [Distributed Inference with Multiple Devices](#Distributed-inference-with-multiple-devices) + - [Distributed Inference with Multiple Devices](#distributed-inference-with-multi-devices) + - [Use C++ Interface to Load a MindIR file for 
inferencing](#use-c-interface-to-load-a-mindir-file-for-inferencing)
+        - [Inference Directory Structure](#inference-directory-structure)
+        - [Inference Code](#inference-code)
+        - [Introduce to Building Script](#introduce-to-building-script)
+        - [Building Inference Code](#building-inference-code)
+        - [Performing Inference and Viewing the Result](#performing-inference-and-viewing-the-result)

-
+

## Inference Using a Checkpoint File with Single Device

@@ -37,8 +43,8 @@
    ```

    In the preceding information:
-    `model.eval` is an API for model validation. For details about the API, see .
-    > Inference sample code: .
+    `model.eval` is an API for model validation. For details about the API, see .
+    > Inference sample code: .

    1.2 Remote Storage

@@ -61,7 +67,7 @@

    In the preceding information:

-    `mindpsore_hub.load` is an API for loading model parameters. Please check the details in .
+    `mindspore_hub.load` is an API for loading model parameters. Please check the details in .

2. Use the `model.predict` API to perform inference.

@@ -70,7 +76,7 @@
    ```

    In the preceding information:

-    `model.predict` is an API for inference. For details about the API, see .
+    `model.predict` is an API for inference. For details about the API, see .

## Distributed Inference With Multi Devices

@@ -80,13 +86,13 @@ This tutorial would focus on the process that the model slices are saved on each

> Distributed inference sample code:
>
->
+>

The process of distributed inference is as follows:

1. Execute training, generate the checkpoint file and the model strategy file.

-    > - The distributed training tutorial and sample code can be referred to the link: .
+    > - The distributed training tutorial and sample code can be found at the link: .
    > - In the distributed Inference scenario, during the training phase, the `integrated_save` of `CheckpointConfig` interface should be set to `False`, which means that each device only saves the slice of model instead of the full model. 
> - `parallel_mode` of `set_auto_parallel_context` interface should be set to `auto_parallel` or `semi_auto_parallel`.
    > - In addition, you need to specify `strategy_ckpt_save_file` to indicate the path of the strategy file.

@@ -122,10 +128,196 @@ The process of distributed inference is as follows:

    - `load_distributed_checkpoint`:merges model slices, then splits it according to the predication strategy, and loads it into the network.

    > The `load_distributed_checkpoint` interface supports that predict_strategy is `None`, which is single device inference, and the process is different from distributed inference. The detailed usage can be referred to the link:
-    > .
+    > .

4. Execute inference.

    ```python
    model.predict(predict_data)
    ```
+
+## Use C++ Interface to Load a MindIR File for Inferencing
+
+Users can create C++ applications and call the MindSpore C++ interface to run inference on MindIR models.
+
+### Inference Directory Structure
+
+Create a directory to store the inference code project, for example, `/home/HwHiAiUser/mindspore_sample/ascend910_resnet50_preprocess_sample`. The directory code can be obtained from the [official website](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/ascend910_resnet50_preprocess_sample). The `model` directory stores the exported `MindIR` model files and the `test_data` directory stores the images to be classified. The directory structure of the inference code project is as follows:
+
+```text
+└─ascend910_resnet50_preprocess_sample
+    ├── CMakeLists.txt                   // Build script
+    ├── README.md                        // Usage description
+    ├── main.cc                          // Main function
+    ├── model
+    │   └── resnet50_imagenet.mindir     // MindIR model file
+    └── test_data
+        ├── ILSVRC2012_val_00002138.JPEG // Input sample image 1
+        ├── ILSVRC2012_val_00003014.JPEG // Input sample image 2
+        ├── ...                          // Input sample image n
+```
+
+### Inference Code
+
+Inference sample code: . 
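The walkthrough that follows refers to two small helpers, `ReadFile` and `GetMax`, whose bodies are not reproduced in this tutorial. As a rough, self-contained sketch of what they do — the names `ReadFileBytes` and `ArgMax` and the plain-vector signatures are illustrative assumptions, not the sample's actual API, which operates on `ms::MSTensor`:

```cpp
#include <algorithm>
#include <cstddef>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Illustrative stand-in for the sample's ReadFile: slurp a binary file
// (e.g. a JPEG image) into a byte buffer.
std::vector<char> ReadFileBytes(const std::string &path) {
  std::ifstream in(path, std::ios::binary);
  return std::vector<char>((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
}

// Illustrative stand-in for the sample's GetMax: the predicted class is
// the index of the largest logit in the model output.
std::size_t ArgMax(const std::vector<float> &logits) {
  return static_cast<std::size_t>(
      std::max_element(logits.begin(), logits.end()) - logits.begin());
}
```

In the actual sample, `ReadFile` returns an `ms::MSTensor` holding the raw image bytes, and `GetMax` presumably scans the output tensor's float buffer in the same arg-max fashion.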
+
+Set the global context: the device target is `Ascend910` and the device id is `0`:
+
+```c++
+ms::GlobalContext::SetGlobalDeviceTarget(ms::kDeviceTypeAscend910);
+ms::GlobalContext::SetGlobalDeviceID(0);
+```
+
+Load the MindIR file:
+
+```c++
+// Load MindIR model
+auto graph = ms::Serialization::LoadModel(resnet_file, ms::ModelType::kMindIR);
+// Build model with graph object
+ms::Model resnet50((ms::GraphCell(graph)));
+ms::Status ret = resnet50.Build({});
+```
+
+Get the input information of the model:
+
+```c++
+std::vector<ms::MSTensor> model_inputs = resnet50.GetInputs();
+```
+
+Load the image file:
+
+```c++
+// ReadFile is a function to read images
+ms::MSTensor ReadFile(const std::string &file);
+auto image = ReadFile(image_file);
+```
+
+Preprocess the image:
+
+```c++
+// Create the CPU operator provided by MindData to get the function object
+ms::dataset::Execute preprocessor({ms::dataset::vision::Decode(),       // Decode the input to RGB format
+                                   ms::dataset::vision::Resize({256}),  // Resize the image to the given size
+                                   ms::dataset::vision::Normalize({0.485 * 255, 0.456 * 255, 0.406 * 255},
+                                                                  {0.229 * 255, 0.224 * 255, 0.225 * 255}),  // Normalize the input
+                                   ms::dataset::vision::CenterCrop({224, 224}),  // Crop the input image at the center
+                                   ms::dataset::vision::HWC2CHW(),      // shape (H, W, C) to shape (C, H, W)
+                                  });
+// Call the function object to get the processed image
+ret = preprocessor(image, &image);
+```
+
+Execute the model:
+
+```c++
+// Create outputs vector
+std::vector<ms::MSTensor> outputs;
+// Create inputs vector
+std::vector<ms::MSTensor> inputs;
+inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
+                    image.Data().get(), image.DataSize());
+// Call the Predict function of Model for inference
+ret = resnet50.Predict(inputs, &outputs);
+```
+
+Print the result:
+
+```c++
+// Output the maximum probability to the screen
+std::cout << "Image: " << image_file << " infer result: " << GetMax(outputs[0]) << std::endl;
+```
+
+### Introduce to Building Script
+
+The building
script is used to build the application: .
+
+Since MindSpore uses the [old C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html), applications must use the same ABI as MindSpore; otherwise, the build fails.
+
+```cmake
+add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
+set(CMAKE_CXX_STANDARD 17)
+```
+
+Add header files to the gcc search path:
+
+```cmake
+option(MINDSPORE_PATH "mindspore install path" "")
+include_directories(${MINDSPORE_PATH})
+include_directories(${MINDSPORE_PATH}/include)
+```
+
+Find the shared libraries in MindSpore:
+
+```cmake
+find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
+file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)
+```
+
+Use the source files to generate the target executable file, and link the MindSpore libraries to it:
+
+```cmake
+add_executable(resnet50_sample main.cc)
+target_link_libraries(resnet50_sample ${MS_LIB} ${MD_LIB})
+```
+
+### Building Inference Code
+
+Go to the project directory `ascend910_resnet50_preprocess_sample` and set the following environment variables:
+
+```bash
+# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING. 
+export GLOG_v=2
+
+# Conda environmental options
+LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
+# lib libraries that the run package depends on
+export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64/common:${LOCAL_ASCEND}/driver/lib64/driver:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
+
+# lib libraries that the mindspore depends on, modify "pip3" according to the actual situation
+export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH}
+
+# Environment variables that must be configured
+export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path
+export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+```
+
+Run the `cmake` command, modifying `pip3` according to your actual environment:
+
+```bash
+cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`
+```
+
+Run the `make` command to build the project.
+
+```bash
+make
+```
+
+After building, the executable `main` file is generated in `ascend910_resnet50_preprocess_sample`.
+
+### Performing Inference and Viewing the Result
+
+Log in to the Ascend 910 server, and create the `model` directory for storing the MindIR file `resnet50_imagenet.mindir`, for example, `/home/HwHiAiUser/mindspore_sample/ascend910_resnet50_preprocess_sample/model`.
+Create the `test_data` directory to store images, for example, `/home/HwHiAiUser/mindspore_sample/ascend910_resnet50_preprocess_sample/test_data`.
+Then, perform the inference. 
+ +```bash +./resnet50_sample +``` + +Inference is performed on all images stored in the `test_data` directory. For example, if there are 9 images whose label is 0 in the [ImageNet2012](http://image-net.org/download-images) validation set, the inference result is as follows: + +```text +Image: ./test_data/ILSVRC2012_val_00002138.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00003014.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00006697.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00007197.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009111.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009191.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009346.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009379.JPEG infer result: 0 +Image: ./test_data/ILSVRC2012_val_00009396.JPEG infer result: 0 +``` diff --git a/tutorials/inference/source_en/multi_platform_inference_cpu.md b/tutorials/inference/source_en/multi_platform_inference_cpu.md index 8d00afd56a67f27869dd0f68bec43c43437d8c2e..0576c5a802f2ce316c8eda45947bcec9e5c4a219 100644 --- a/tutorials/inference/source_en/multi_platform_inference_cpu.md +++ b/tutorials/inference/source_en/multi_platform_inference_cpu.md @@ -10,7 +10,7 @@ - + ## Inference Using a Checkpoint File @@ -20,6 +20,6 @@ The inference is the same as that on the Ascend 910 AI processor. Similar to the inference on a GPU, the following steps are required: -1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html#export-onnx-model). +1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html#export-onnx-model). 2. Perform inference on a CPU by referring to the runtime or SDK document. 
For details about how to use the ONNX Runtime, see the [ONNX Runtime document](https://github.com/microsoft/onnxruntime). diff --git a/tutorials/inference/source_en/multi_platform_inference_gpu.md b/tutorials/inference/source_en/multi_platform_inference_gpu.md index 0c3de8af6ba83965679f63f5719233bf1b982100..7ce07c133a3d8b54720a5505d156e9edf10e29e0 100644 --- a/tutorials/inference/source_en/multi_platform_inference_gpu.md +++ b/tutorials/inference/source_en/multi_platform_inference_gpu.md @@ -10,7 +10,7 @@ - + ## Inference Using a Checkpoint File @@ -18,6 +18,6 @@ The inference is the same as that on the Ascend 910 AI processor. ## Inference Using an ONNX File -1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/master/use/save_model.html#export-onnx-model). +1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/training/en/r1.1/use/save_model.html#export-onnx-model). 2. Perform inference on a GPU by referring to the runtime or SDK document. For example, use TensorRT to perform inference on the NVIDIA GPU. For details, see [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt). diff --git a/tutorials/inference/source_en/serving_grpc.md b/tutorials/inference/source_en/serving_grpc.md new file mode 100644 index 0000000000000000000000000000000000000000..67d65de21b09a8cc3f94a17c9b9a6dac0a990381 --- /dev/null +++ b/tutorials/inference/source_en/serving_grpc.md @@ -0,0 +1,5 @@ +# Access MindSpore Serving service based on gRPC interface + +No English version available right now, welcome to contribute. 
+ + diff --git a/tutorials/inference/source_en/serving_model.md b/tutorials/inference/source_en/serving_model.md new file mode 100644 index 0000000000000000000000000000000000000000..fb5ff0d9a1d75645de2b677cc92a1b864e4f3f73 --- /dev/null +++ b/tutorials/inference/source_en/serving_model.md @@ -0,0 +1,5 @@ +# Servable provided by configuration model + +No English version available right now, welcome to contribute. + + diff --git a/tutorials/inference/source_en/serving_restful.md b/tutorials/inference/source_en/serving_restful.md new file mode 100644 index 0000000000000000000000000000000000000000..a8beac851c19e7bef061f943d7ec4b193651413d --- /dev/null +++ b/tutorials/inference/source_en/serving_restful.md @@ -0,0 +1,5 @@ +# Access MindSpore Serving service based on RESTful interface + +No English version available right now, welcome to contribute. + + diff --git a/tutorials/inference/source_zh_cn/conf.py b/tutorials/inference/source_zh_cn/conf.py index 0c819a8b0622e1914ff199e5bd29a591595470b3..1a8eda1de573595f3cbeabce9c3fd1913cd377aa 100644 --- a/tutorials/inference/source_zh_cn/conf.py +++ b/tutorials/inference/source_zh_cn/conf.py @@ -21,7 +21,7 @@ copyright = '2020, MindSpore' author = 'MindSpore' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference.md b/tutorials/inference/source_zh_cn/multi_platform_inference.md index 0556b845255b55e06909a20966e6ea9eacbe99ea..a1ac5d19438a505a07cdb3cfac60bf81b601b430 100644 --- a/tutorials/inference/source_zh_cn/multi_platform_inference.md +++ b/tutorials/inference/source_zh_cn/multi_platform_inference.md @@ -5,31 +5,98 @@ - [推理模型总览](#推理模型总览) + - [模型文件](#模型文件) + - [执行推理](#执行推理) - + -基于MindSpore训练后的模型,支持在不同的硬件平台上执行推理。本文介绍各平台上的推理流程。 +MindSpore可以基于训练好的模型,在不同的硬件平台上执行推理任务。 -按照原理不同,推理可以有两种方式: +## 模型文件 -- 
直接使用checkpoint文件进行推理,即在MindSpore训练环境下,使用推理接口加载数据及checkpoint文件进行推理。 -- 将checkpoint文件转化为通用的模型格式,如ONNX、AIR格式模型文件进行推理,推理环境不需要依赖MindSpore。这样的好处是可以跨硬件平台,只要支持ONNX/AIR推理的硬件平台即可进行推理。譬如在Ascend 910 AI处理器上训练的模型,可以在GPU/CPU上进行推理。 +MindSpore支持保存两种类型的数据:训练参数和网络模型(模型中包含参数信息)。 -MindSpore支持的推理场景,按照硬件平台维度可以分为下面几种: +- 训练参数指的是Checkpoint格式文件。 +- 网络模型包括MindIR、AIR和ONNX三种格式文件。 -硬件平台 | 模型文件格式 | 说明 ---|--|-- -Ascend 910 AI处理器 | checkpoint格式 | 与MindSpore训练环境依赖一致 -Ascend 310 AI处理器 | ONNX、AIR格式 | 搭载了ACL框架,支持OM格式模型,需要使用工具转化模型为OM格式模型。 -GPU | checkpoint格式 | 与MindSpore训练环境依赖一致。 -GPU | ONNX格式 | 支持ONNX推理的runtime/SDK,如TensorRT。 -CPU | checkpoint格式 | 与MindSpore训练环境依赖一致。 -CPU | ONNX格式 | 支持ONNX推理的runtime/SDK,如TensorRT。 +下面介绍一下这几种格式的基本概念及其应用场景。 -> - ONNX,全称Open Neural Network Exchange,是一种针对机器学习所设计的开放式的文件格式,用于存储训练好的模型。它使得不同的人工智能框架(如PyTorch, MXNet)可以采用相同格式存储模型数据并交互。详细了解,请参见ONNX官网。 -> - AIR,全称Ascend Intermediate Representation,类似ONNX,是华为定义的针对机器学习所设计的开放式的文件格式,能更好地适配Ascend AI处理器。 -> - ACL,全称Ascend Computer Language,提供Device管理、Context管理、Stream管理、内存管理、模型加载与执行、算子加载与执行、媒体数据处理等C++ API库,供用户开发深度神经网络应用。它匹配Ascend AI处理器,使能硬件的运行管理、资源管理能力。 -> - OM,全称Offline Model,华为Ascend AI处理器支持的离线模型,实现算子调度的优化,权值数据重排、压缩,内存使用优化等可以脱离设备完成的预处理功能。 -> - TensorRT,NVIDIA 推出的高性能深度学习推理的SDK,包括深度推理优化器和runtime,提高深度学习模型在边缘设备上的推断速度。详细请参见。 +- Checkpoint + - 采用了Protocol Buffers格式,存储了网络中所有的参数值。 + - 一般用于训练任务中断后恢复训练,或训练后的微调(Fine Tune)任务。 +- MindIR + - 全称MindSpore IR,是MindSpore的一种基于图表示的函数式IR,定义了可扩展的图结构以及算子的IR表示。 + - 它消除了不同后端的模型差异,一般用于跨硬件平台执行推理任务。 +- ONNX + - 全称Open Neural Network Exchange,是一种针对机器学习模型的通用表达。 + - 一般用于不同框架间的模型迁移或在推理引擎([TensorRT](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/index.html))上使用。 +- AIR + - 全称Ascend Intermediate Representation,是华为定义的针对机器学习所设计的开放式文件格式。 + - 它能更好地适应华为AI处理器,一般用于Ascend 310上执行推理任务。 + +## 执行推理 + +按照使用环境的不同,推理可以分为以下两种方式。 + +1. 
本机推理
+
+    通过加载网络训练产生的Checkpoint文件,调用`model.predict`接口进行推理验证,具体操作可查看[使用Checkpoint格式文件执行推理](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_910.html#checkpoint)。
+
+2. 跨平台推理
+
+    使用网络定义和Checkpoint文件,调用`export`接口导出模型文件,在不同平台执行推理,目前支持导出MindIR、ONNX和AIR(仅支持Ascend AI处理器)模型,具体操作可查看[保存模型](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html)。
+
+## MindIR介绍
+
+MindSpore通过统一IR定义了网络的逻辑结构和算子的属性,将MindIR格式的模型文件与硬件平台解耦,实现一次训练多次部署。
+
+1. 基本介绍
+
+    MindIR作为MindSpore的统一模型文件,同时存储了网络结构和权重参数值,并支持部署到云端Serving和端侧Lite平台执行推理任务。
+
+    同一个MindIR文件支持多种硬件形态的部署:
+
+    - 云端Serving部署推理:MindSpore训练生成MindIR模型文件后,可直接发给MindSpore Serving加载,执行推理任务,而无需额外的模型转化,做到Ascend、GPU、CPU等多硬件的模型统一。
+    - 端侧Lite推理部署:MindIR可直接供Lite部署使用。同时由于端侧轻量化需求,提供了模型小型化和转换功能,支持将原始MindIR模型文件由Protocol Buffers格式转化为FlatBuffers格式存储,以及网络结构轻量化,以更好地满足端侧性能、内存等要求。
+
+2. 使用场景
+
+    先使用网络定义和Checkpoint文件导出MindIR模型文件,再根据不同需求执行推理任务,如[在Ascend 310上执行推理任务](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310_mindir.html)、[基于MindSpore Serving部署推理服务](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_example.html)、[端侧推理](https://www.mindspore.cn/lite/docs?r1.1)。
+
+### MindIR支持的网络列表
+
+| Model name |
+|-----------------------|
+| AlexNet |
+| BERT |
+| BGCF |
+| CenterFace |
+| CNN&CTC |
+| DeepLabV3 |
+| DenseNet121 |
+| Faster R-CNN |
+| GAT |
+| GCN |
+| GoogLeNet |
+| LeNet |
+| Mask R-CNN |
+| MASS |
+| MobileNetV2 |
+| NCF |
+| PSENet |
+| ResNet |
+| ResNeXt |
+| InceptionV3 |
+| SqueezeNet |
+| SSD |
+| Transformer |
+| TinyBert |
+| UNet2D |
+| VGG16 |
+| Wide&Deep |
+| YOLOv3 |
+| YOLOv4 |
+
+> 
\ No newline at end of file
diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_air.md b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_air.md
index 71347fdb25503f12a79fd2fc35607de0b892c4ed..e917577f9a635ea084f9246d0c2ef26311f29cc6 100644
--- 
a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_air.md +++ b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_air.md @@ -21,7 +21,7 @@ - + ## 概述 @@ -39,7 +39,7 @@ Ascend 310是面向边缘场景的高能效高集成度AI处理器。Atlas 200 5. 加载保存的OM模型,执行推理并查看结果。 -> 你可以在这里找到完整可运行的样例代码: 。 +> 你可以在这里找到完整可运行的样例代码: 。 ## 开发环境准备 @@ -52,9 +52,9 @@ Ascend 310是面向边缘场景的高能效高集成度AI处理器。Atlas 200 配置开发环境需要的脚本和软件包如下5类,共7个文件。 -1. 制卡入口脚本:[make_sd_card.py](https://gitee.com/ascend/tools/blob/master/makesd/for_1.7x.0.0/make_sd_card.py) +1. 制卡入口脚本:[make_sd_card.py](https://gitee.com/ascend/tools/blob/master/makesd/for_20.0/make_sd_card.py) -2. 制作SD卡操作系统脚本:[make_ubuntu_sd.sh](https://gitee.com/ascend/tools/blob/master/makesd/for_1.7x.0.0/make_ubuntu_sd.sh) +2. 制作SD卡操作系统脚本:[make_ubuntu_sd.sh](https://gitee.com/ascend/tools/blob/master/makesd/for_20.0/make_ubuntu_sd.sh) 3. Ubuntu操作系统镜像包:[ubuntu-18.04.xx-server-arm64.iso](http://cdimage.ubuntu.com/ubuntu/releases/18.04/release/ubuntu-18.04.5-server-arm64.iso) @@ -71,7 +71,7 @@ Ascend 310是面向边缘场景的高能效高集成度AI处理器。Atlas 200 其中, - 前3项可以参考[Atlas 200 DK 开发者套件使用指南](https://support.huaweicloud.com/usermanual-A200dk_3000/atlas200dk_02_0011.html)获取。 -- 其余软件包建议从[基础软件下载](https://www.huaweicloud.com/ascend/resource/Software)中获取,在该页面中选择产品系列和产品型号为`Atlas 200 DK`,选中需要的文件,即可下载。 +- 其余软件包建议从[固件与驱动](https://ascend.huawei.com/#/hardware/firmware-drivers)中获取,在该页面中选择产品系列和产品型号为`Atlas 200 DK`,选中需要的文件,即可下载。 ### 制作SD卡 @@ -91,7 +91,7 @@ Atlas 200 DK开发者板支持通过USB端口或者网线与Ubuntu服务器进 ## 推理目录结构介绍 -创建目录放置推理代码工程,例如`/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/acl_resnet50_sample`,其中`inc`、`src`、`test_data`目录代码可以从[官网示例下载](https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/acl_resnet50_sample)获取,`model`目录用于存放接下来导出的`AIR`模型文件和转换后的`OM`模型文件,`out`目录用于存放执行编译生成的可执行文件和输出结果目录,推理代码工程目录结构如下: 
+创建目录放置推理代码工程,例如`/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/acl_resnet50_sample`,其中`inc`、`src`、`test_data`目录代码可以从官网示例下载[样例代码](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/acl_resnet50_sample),`model`目录用于存放接下来导出的`AIR`模型文件和转换后的`OM`模型文件,`out`目录用于存放执行编译生成的可执行文件和输出结果目录,推理代码工程目录结构如下: ```text └─acl_resnet50_sample @@ -121,7 +121,7 @@ Atlas 200 DK开发者板支持通过USB端口或者网线与Ubuntu服务器进 ## 导出AIR模型文件 -在Ascend 910的机器上训练好目标网络,并保存为CheckPoint文件,通过网络和CheckPoint文件导出对应的AIR格式模型文件,导出流程参见[导出AIR格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html#air)。 +在Ascend 910的机器上训练好目标网络,并保存为CheckPoint文件,通过网络和CheckPoint文件导出对应的AIR格式模型文件,导出流程参见[导出AIR格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html#air)。 > 这里提供使用ResNet-50模型导出的示例AIR文件[resnet50_export.air](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com:443/sample_resources/acl_resnet50_sample/resnet50_export.air)。 @@ -149,6 +149,8 @@ export ASCEND_OPP_PATH=${install_path}/opp - `--output`:转换得到的OM模型文件的路径。 - `--input_format`:输入数据格式。 +ATC工具详细资料可在[昇腾社区开发者文档](https://ascend.huawei.com/#/document?tag=developer)中选择相应CANN版本后,查找《ATC工具使用指南》章节查看。 + ## 编译推理代码 进入工程目录`acl_resnet50_sample`,设置如下环境变量: diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_mindir.md b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_mindir.md index 5ba0fe9cf7c795aac253d9661a8c583e843f3947..a71fee74b7f6583d0fad5d32da2238c6d5c3f291 100644 --- a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_mindir.md +++ b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_310_mindir.md @@ -7,14 +7,16 @@ - [Ascend 310 AI处理器上使用MindIR模型进行推理](#ascend-310-ai处理器上使用mindir模型进行推理) - [概述](#概述) - [开发环境准备](#开发环境准备) - - [推理目录结构介绍](#推理目录结构介绍) - [导出MindIR模型文件](#导出mindir模型文件) + - [推理目录结构介绍](#推理目录结构介绍) + - [推理代码介绍](#推理代码介绍) + - [构建脚本介绍](#构建脚本介绍) - [编译推理代码](#编译推理代码) - [执行推理并查看结果](#执行推理并查看结果) - + ## 概述 @@ -30,34 
+32,137 @@ Ascend 310是面向边缘场景的高能效高集成度AI处理器。Atlas 200 4. 加载保存的MindIR模型,执行推理并查看结果。 -> 你可以在这里找到完整可运行的样例代码: 。 +> 你可以在这里找到完整可运行的样例代码: 。 ## 开发环境准备 -参考[Ascend 310 AI处理器上使用AIR进行推理#开发环境准备](https://www.mindspore.cn/tutorial/inference/zh-CN/master/multi_platform_inference_ascend_310_air.html#id2) +参考[Ascend 310 AI处理器上使用AIR进行推理#开发环境准备](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/multi_platform_inference_ascend_310_air.html#id2) + +## 导出MindIR模型文件 + +在Ascend 910的机器上训练好目标网络,并保存为CheckPoint文件,通过网络和CheckPoint文件导出对应的MindIR格式模型文件,导出流程参见[导出MindIR格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html#mindir)。 + +> 这里提供使用ResNet-50模型导出的示例MindIR文件[resnet50_imagenet.mindir](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/sample_resources/ascend310_resnet50_preprocess_sample/resnet50_imagenet.mindir)。 ## 推理目录结构介绍 -创建目录放置推理代码工程,例如`/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_resnet50_preprocess_sample`,目录代码可以从[官网示例下载](https://gitee.com/mindspore/docs/tree/master/tutorials/tutorial_code/ascend310_resnet50_preprocess_sample)获取,`model`目录用于存放接下来导出的`MindIR`模型文件,`test_data`目录用于存放待分类的图片,推理代码工程目录结构如下: +创建目录放置推理代码工程,例如`/home/HwHiAiUser/Ascend/ascend-toolkit/20.0.RC1/acllib_linux.arm64/sample/acl_execute_model/ascend310_resnet50_preprocess_sample`,可以从官网示例下载[样例代码](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/ascend310_resnet50_preprocess_sample),`model`目录用于存放上述导出的`MindIR`模型文件,`test_data`目录用于存放待分类的图片,推理代码工程目录结构如下: ```text └─ascend310_resnet50_preprocess_sample - ├── CMakeLists.txt // 编译脚本 + ├── CMakeLists.txt // 构建脚本 ├── README.md // 使用说明 ├── main.cc // 主函数 ├── model │ └── resnet50_imagenet.mindir // MindIR模型文件 └── test_data - ├── ILSVRC2012_val_00000293.JPEG // 输入样本图片1 - ├── ILSVRC2012_val_00002138.JPEG // 输入样本图片2 + ├── ILSVRC2012_val_00002138.JPEG // 输入样本图片1 + ├── ILSVRC2012_val_00003014.JPEG // 输入样本图片2 ├── ... 
// 输入样本图片n
 ```

-## 导出MindIR模型文件
+## 推理代码介绍

-在Ascend 910的机器上训练好目标网络,并保存为CheckPoint文件,通过网络和CheckPoint文件导出对应的MindIR格式模型文件,导出流程参见[导出MindIR格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html#mindir)。
+推理代码样例: 。

-> 这里提供使用ResNet-50模型导出的示例MindIR文件[resnet50_imagenet.mindir](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/sample_resources/ascend310_resnet50_preprocess_sample/resnet50_imagenet.mindir)。
+环境初始化,指定硬件为Ascend 310,DeviceID为0:
+
+```c++
+ms::GlobalContext::SetGlobalDeviceTarget(ms::kDeviceTypeAscend310);
+ms::GlobalContext::SetGlobalDeviceID(0);
+```
+
+加载模型文件:
+
+```c++
+// Load MindIR model
+auto graph = ms::Serialization::LoadModel(resnet_file, ms::ModelType::kMindIR);
+// Build model with graph object
+ms::Model resnet50((ms::GraphCell(graph)));
+ms::Status ret = resnet50.Build({});
+```
+
+获取模型所需输入信息:
+
+```c++
+std::vector<ms::MSTensor> model_inputs = resnet50.GetInputs();
+```
+
+加载图片文件:
+
+```c++
+// ReadFile is a function to read images
+ms::MSTensor ReadFile(const std::string &file);
+auto image = ReadFile(image_file);
+```
+
+图片预处理:
+
+```c++
+// Create the CPU operator provided by MindData to get the function object
+ms::dataset::Execute preprocessor({ms::dataset::vision::Decode(),       // Decode the input to RGB format
+                                   ms::dataset::vision::Resize({256}),  // Resize the image to the given size
+                                   ms::dataset::vision::Normalize({0.485 * 255, 0.456 * 255, 0.406 * 255},
+                                                                  {0.229 * 255, 0.224 * 255, 0.225 * 255}),  // Normalize the input
+                                   ms::dataset::vision::CenterCrop({224, 224}),  // Crop the input image at the center
+                                   ms::dataset::vision::HWC2CHW(),      // shape (H, W, C) to shape (C, H, W)
+                                  });
+// Call the function object to get the processed image
+ret = preprocessor(image, &image);
+```
+
+执行推理:
+
+```c++
+// Create outputs vector
+std::vector<ms::MSTensor> outputs;
+// Create inputs vector
+std::vector<ms::MSTensor> inputs;
+inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
+                    image.Data().get(), image.DataSize()); 
+// Call the Predict function of Model for inference
+ret = resnet50.Predict(inputs, &outputs);
+```
+
+获取推理结果:
+
+```c++
+// Output the maximum probability to the screen
+std::cout << "Image: " << image_file << " infer result: " << GetMax(outputs[0]) << std::endl;
+```
+
+## 构建脚本介绍
+
+构建脚本用于构建用户程序,样例来自于: 。
+
+由于MindSpore使用[旧版的C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html),因此用户程序需与MindSpore一致,否则编译链接会失败。
+
+```cmake
+add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
+set(CMAKE_CXX_STANDARD 17)
+```
+
+为编译器添加头文件搜索路径:
+
+```cmake
+option(MINDSPORE_PATH "mindspore install path" "")
+include_directories(${MINDSPORE_PATH})
+include_directories(${MINDSPORE_PATH}/include)
+```
+
+在MindSpore中查找所需动态库:
+
+```cmake
+find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
+file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)
+```
+
+使用指定的源文件生成目标可执行文件,并为目标文件链接MindSpore库:
+
+```cmake
+add_executable(resnet50_sample main.cc)
+target_link_libraries(resnet50_sample ${MS_LIB} ${MD_LIB})
+```
 
 ## 编译推理代码
 
@@ -73,17 +178,17 @@ LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
 
 # lib libraries that the run package depends on
 export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/acllib/lib64:${LOCAL_ASCEND}/ascend-toolkit/latest/atc/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
 
-# lib libraries that the mindspore depends on
+# lib libraries that the mindspore depends on, modify "pip3" according to the actual situation
 export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH}
 
 # Environment variables that must be configured
 export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
 export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path
-export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/atc/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
 export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
 ```
 
-执行`cmake`命令:
+执行`cmake`命令,其中`pip3`需要按照实际情况修改:
 
 ```bash
 cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`
@@ -107,10 +212,9 @@ make
 
 ./resnet50_sample
 ```
 
-执行后,会对`test_data`目录下放置的所有图片进行推理,比如放置了10张[ImageNet2012](http://image-net.org/download-images)验证集中label为0的图片,可以看到推理结果如下。
+执行后,会对`test_data`目录下放置的所有图片进行推理,比如放置了9张[ImageNet2012](http://image-net.org/download-images)验证集中label为0的图片,可以看到推理结果如下。
 
 ```text
-Image: ./test_data/ILSVRC2012_val_00000293.JPEG infer result: 0
 Image: ./test_data/ILSVRC2012_val_00002138.JPEG infer result: 0
 Image: ./test_data/ILSVRC2012_val_00003014.JPEG infer result: 0
 Image: ./test_data/ILSVRC2012_val_00006697.JPEG infer result: 0
diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_910.md b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_910.md
index 95fe1dc2825677c9e52ba7f4bc0bd21d13917b5a..b46e8c108207b624f53a320d46d774ce0a7b6e74 100644
--- a/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_910.md
+++ b/tutorials/inference/source_zh_cn/multi_platform_inference_ascend_910.md
@@ -7,16 +7,22 @@
 
 - [Ascend 910 AI处理器上推理](#ascend-910-ai处理器上推理)
     - [使用checkpoint格式文件单卡推理](#使用checkpoint格式文件单卡推理)
     - [分布式推理](#分布式推理)
+    - [使用C++接口推理MindIR格式文件](#使用c接口推理mindir格式文件)
+        - [推理目录结构介绍](#推理目录结构介绍)
+        - [推理代码介绍](#推理代码介绍)
+        - [构建脚本介绍](#构建脚本介绍)
+        - [编译推理代码](#编译推理代码)
+        - [执行推理并查看结果](#执行推理并查看结果)
 
-
+
 
 ## 使用checkpoint格式文件单卡推理
 
 1. 
使用`model.eval`接口来进行模型验证。 - 1.1 模型已保存在本地 + 1.1 模型已保存在本地 首先构建模型,然后使用`mindspore.train.serialization`模块的`load_checkpoint`和`load_param_into_net`从本地加载模型与参数,传入验证数据集后即可进行模型推理,验证数据集的处理方式与训练数据集相同。 @@ -37,8 +43,8 @@ ``` 其中, - `model.eval`为模型验证接口,对应接口说明:。 - > 推理样例代码:。 + `model.eval`为模型验证接口,对应接口说明:。 + > 推理样例代码:。 1.2 使用MindSpore Hub从华为云加载模型 @@ -60,7 +66,7 @@ ``` 其中, - `mindspore_hub.load`为加载模型参数接口,对应接口说明:。 + `mindspore_hub.load`为加载模型参数接口,对应接口说明:。 2. 使用`model.predict`接口来进行推理操作。 @@ -69,7 +75,7 @@ ``` 其中, - `model.predict`为推理接口,对应接口说明:。 + `model.predict`为推理接口,对应接口说明:。 ## 分布式推理 @@ -79,13 +85,13 @@ > 分布式推理样例代码: > -> +> 分布式推理流程如下: 1. 执行训练,生成checkpoint文件和模型参数切分策略文件。 - > - 分布式训练教程和样例代码可参考链接:. + > - 分布式训练教程和样例代码可参考链接:. > - 在分布式推理场景中,训练阶段的`CheckpointConfig`接口的`integrated_save`参数需设定为`False`,表示每卡仅保存模型切片而不是全量模型。 > - `set_auto_parallel_context`接口的`parallel_mode`参数需设定为`auto_parallel`或者`semi_auto_parallel`,并行模式为自动并行或者半自动并行。 > - 此外还需指定`strategy_ckpt_save_file`参数,即生成的策略文件的地址。 @@ -121,10 +127,196 @@ - `load_distributed_checkpoint`:对模型切片进行合并,再根据推理策略进行切分,加载至网络中。 > `load_distributed_checkpoint`接口支持predict_strategy为`None`,此时为单卡推理,其过程与分布式推理有所不同,详细用法请参考链接: - > . + > . 4. 进行推理,得到推理结果。 ```python model.predict(predict_data) ``` + +## 使用C++接口推理MindIR格式文件 + +用户可以创建C++应用程序,调用MindSpore的C++接口推理MindIR模型。 + +### 推理目录结构介绍 + +创建目录放置推理代码工程,例如`/home/HwHiAiUser/mindspore_sample/ascend910_resnet50_preprocess_sample`,可以从官网示例下载[样例代码](https://gitee.com/mindspore/docs/tree/r1.1/tutorials/tutorial_code/ascend910_resnet50_preprocess_sample),`model`目录用于存放上述导出的`MindIR`模型文件,`test_data`目录用于存放待分类的图片,推理代码工程目录结构如下: + +```text +└─ascend910_resnet50_preprocess_sample + ├── CMakeLists.txt // 构建脚本 + ├── README.md // 使用说明 + ├── main.cc // 主函数 + ├── model + │ └── resnet50_imagenet.mindir // MindIR模型文件 + └── test_data + ├── ILSVRC2012_val_00002138.JPEG // 输入样本图片1 + ├── ILSVRC2012_val_00003014.JPEG // 输入样本图片2 + ├── ... 
// 输入样本图片n
+```
+
+### 推理代码介绍
+
+推理代码样例: 。
+
+环境初始化,指定硬件为Ascend 910,DeviceID为0:
+
+```c++
+ms::GlobalContext::SetGlobalDeviceTarget(ms::kDeviceTypeAscend910);
+ms::GlobalContext::SetGlobalDeviceID(0);
+```
+
+加载模型文件:
+
+```c++
+// Load MindIR model
+auto graph = ms::Serialization::LoadModel(resnet_file, ms::ModelType::kMindIR);
+// Build model with graph object
+ms::Model resnet50((ms::GraphCell(graph)));
+ms::Status ret = resnet50.Build({});
+```
+
+获取模型所需输入信息:
+
+```c++
+std::vector<ms::MSTensor> model_inputs = resnet50.GetInputs();
+```
+
+加载图片文件:
+
+```c++
+// ReadFile is a function to read images
+ms::MSTensor ReadFile(const std::string &file);
+auto image = ReadFile(image_file);
+```
+
+图片预处理:
+
+```c++
+// Create the CPU operator provided by MindData to get the function object
+ms::dataset::Execute preprocessor({ms::dataset::vision::Decode(),  // Decode the input to RGB format
+                                   ms::dataset::vision::Resize({256}),  // Resize the image to the given size
+                                   ms::dataset::vision::Normalize({0.485 * 255, 0.456 * 255, 0.406 * 255},
+                                                                  {0.229 * 255, 0.224 * 255, 0.225 * 255}),  // Normalize the input
+                                   ms::dataset::vision::CenterCrop({224, 224}),  // Crop the input image at the center
+                                   ms::dataset::vision::HWC2CHW(),  // shape (H, W, C) to shape (C, H, W)
+                                  });
+// Call the function object to get the processed image
+ret = preprocessor(image, &image);
+```
+
+执行推理:
+
+```c++
+// Create outputs vector
+std::vector<ms::MSTensor> outputs;
+// Create inputs vector
+std::vector<ms::MSTensor> inputs;
+inputs.emplace_back(model_inputs[0].Name(), model_inputs[0].DataType(), model_inputs[0].Shape(),
+                    image.Data().get(), image.DataSize());
+// Call the Predict function of Model for inference
+ret = resnet50.Predict(inputs, &outputs);
+```
+
+获取推理结果:
+
+```c++
+// Output the maximum probability to the screen
+std::cout << "Image: " << image_file << " infer result: " << GetMax(outputs[0]) << std::endl;
+```
+
+### 构建脚本介绍
+
+构建脚本用于构建用户程序,样例来自于: 。
+
+由于MindSpore使用[旧版的C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html),因此用户程序需与MindSpore一致,否则编译链接会失败。
+
+```cmake
+add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)
+set(CMAKE_CXX_STANDARD 17)
+```
+
+为编译器添加头文件搜索路径:
+
+```cmake
+option(MINDSPORE_PATH "mindspore install path" "")
+include_directories(${MINDSPORE_PATH})
+include_directories(${MINDSPORE_PATH}/include)
+```
+
+在MindSpore中查找所需动态库:
+
+```cmake
+find_library(MS_LIB libmindspore.so ${MINDSPORE_PATH}/lib)
+file(GLOB_RECURSE MD_LIB ${MINDSPORE_PATH}/_c_dataengine*)
+```
+
+使用指定的源文件生成目标可执行文件,并为目标文件链接MindSpore库:
+
+```cmake
+add_executable(resnet50_sample main.cc)
+target_link_libraries(resnet50_sample ${MS_LIB} ${MD_LIB})
+```
+
+### 编译推理代码
+
+进入工程目录`ascend910_resnet50_preprocess_sample`,设置如下环境变量:
+
+```bash
+# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
+export GLOG_v=2
+
+# Conda environmental options
+LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
+# lib libraries that the run package depends on
+export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64/common:${LOCAL_ASCEND}/driver/lib64/driver:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
+
+# lib libraries that the mindspore depends on, modify "pip3" according to the actual situation
+export LD_LIBRARY_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore/lib"}' | xargs realpath`:${LD_LIBRARY_PATH}
+
+# Environment variables that must be configured
+export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path
+export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+```
+
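上面环境变量中,`pip3 show` 管道是最容易出错的一步。下面用一段模拟的 `pip3 show` 输出演示该管道如何拼出 MindSpore 的 lib 路径(其中的 `Location` 路径仅为假设的示例值,实际以本机环境为准;真实命令还会额外经 `xargs realpath` 归一化路径):

```shell
# 模拟 pip3 show mindspore-ascend 的输出(Location 为假设路径)
sample="Name: mindspore-ascend
Version: 1.1.0
Location: /usr/local/lib/python3.7/site-packages"

# 与文档中的管道一致:取 Location 行的第二个字段,并拼接 /mindspore/lib
ms_lib=$(printf '%s\n' "$sample" | grep Location | awk '{print $2"/mindspore/lib"}')
echo "$ms_lib"
```

若该管道输出为空,通常说明 `pip3` 对应的Python环境中未安装 `mindspore-ascend`,此时需将命令中的 `pip3` 替换为实际可用的pip命令。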
+执行`cmake`命令,其中`pip3`需要按照实际情况修改:
+
+```bash
+cmake . -DMINDSPORE_PATH=`pip3 show mindspore-ascend | grep Location | awk '{print $2"/mindspore"}' | xargs realpath`
+```
+
+再执行`make`命令编译即可。
+
+```bash
+make
+```
+
+编译完成后,在`ascend910_resnet50_preprocess_sample`下会生成可执行文件`resnet50_sample`。
+
+### 执行推理并查看结果
+
+登录Ascend 910环境,创建`model`目录放置MindIR文件`resnet50_imagenet.mindir`,例如`/home/HwHiAiUser/mindspore_sample/ascend910_resnet50_preprocess_sample/model`。
+创建`test_data`目录放置图片,例如`/home/HwHiAiUser/mindspore_sample/ascend910_resnet50_preprocess_sample/test_data`。
+就可以开始执行推理了:
+
+```bash
+./resnet50_sample
+```
+
+执行后,会对`test_data`目录下放置的所有图片进行推理,比如放置了9张[ImageNet2012](http://image-net.org/download-images)验证集中label为0的图片,可以看到推理结果如下。
+
+```text
+Image: ./test_data/ILSVRC2012_val_00002138.JPEG infer result: 0
+Image: ./test_data/ILSVRC2012_val_00003014.JPEG infer result: 0
+Image: ./test_data/ILSVRC2012_val_00006697.JPEG infer result: 0
+Image: ./test_data/ILSVRC2012_val_00007197.JPEG infer result: 0
+Image: ./test_data/ILSVRC2012_val_00009111.JPEG infer result: 0
+Image: ./test_data/ILSVRC2012_val_00009191.JPEG infer result: 0
+Image: ./test_data/ILSVRC2012_val_00009346.JPEG infer result: 0
+Image: ./test_data/ILSVRC2012_val_00009379.JPEG infer result: 0
+Image: ./test_data/ILSVRC2012_val_00009396.JPEG infer result: 0
+```
diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_cpu.md b/tutorials/inference/source_zh_cn/multi_platform_inference_cpu.md
index 82d7141468788164b7c18d166d19f40206d33be6..c3df856d011d3dd1182f0ceb383a84abd93342c9 100644
--- a/tutorials/inference/source_zh_cn/multi_platform_inference_cpu.md
+++ b/tutorials/inference/source_zh_cn/multi_platform_inference_cpu.md
@@ -10,7 +10,7 @@
 
-
+
 
 ## 使用checkpoint格式文件推理
 
@@ -20,6 +20,6 @@
 
 与在GPU上进行推理类似,需要以下几个步骤:
 
-1. 在训练平台上生成ONNX格式模型,具体步骤请参考[导出ONNX格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html#onnx)。
+1. 
在训练平台上生成ONNX格式模型,具体步骤请参考[导出ONNX格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html#onnx)。 2. 在CPU上进行推理,具体可以参考推理使用runtime/SDK的文档。如使用ONNX Runtime,可以参考[ONNX Runtime说明文档](https://github.com/microsoft/onnxruntime)。 diff --git a/tutorials/inference/source_zh_cn/multi_platform_inference_gpu.md b/tutorials/inference/source_zh_cn/multi_platform_inference_gpu.md index ea96a12c1ce5e620f6c2700aa5c26088b9e8f534..3bbc9a3ee63f9189b5a612cafb3ad959999eea25 100644 --- a/tutorials/inference/source_zh_cn/multi_platform_inference_gpu.md +++ b/tutorials/inference/source_zh_cn/multi_platform_inference_gpu.md @@ -10,7 +10,7 @@ - + ## 使用checkpoint格式文件推理 @@ -18,6 +18,6 @@ ## 使用ONNX格式文件推理 -1. 在训练平台上生成ONNX格式模型,具体步骤请参考[导出ONNX格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/master/use/save_model.html#onnx)。 +1. 在训练平台上生成ONNX格式模型,具体步骤请参考[导出ONNX格式文件](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/use/save_model.html#onnx)。 2. 在GPU上进行推理,具体可以参考推理使用runtime/SDK的文档。如在Nvidia GPU上进行推理,使用常用的TensorRT,可参考[TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt)。 diff --git a/tutorials/inference/source_zh_cn/serving_example.md b/tutorials/inference/source_zh_cn/serving_example.md index 165e05ad159936a859b99eff5c7745306fc5f947..612e67db9147265411b4109cdd547966d8ec1e76 100644 --- a/tutorials/inference/source_zh_cn/serving_example.md +++ b/tutorials/inference/source_zh_cn/serving_example.md @@ -15,7 +15,7 @@ - + ## 概述 @@ -25,11 +25,11 @@ MindSpore Serving是一个轻量级、高性能的服务模块,旨在帮助Min ### 环境准备 -运行示例前,需确保已经正确安装了MindSpore Serving。如果没有,可以通过[MindSpore Serving安装页面](https://gitee.com/mindspore/serving#%E5%AE%89%E8%A3%85serving),将MindSpore Serving正确地安装到你的电脑当中,同时通过[MindSpore Serving环境配置页面](https://gitee.com/mindspore/serving#%E9%85%8D%E7%BD%AE%E7%8E%AF%E5%A2%83%E5%8F%98%E9%87%8F)完成环境变量配置。 +运行示例前,需确保已经正确安装了MindSpore Serving。如果没有,可以通过[MindSpore Serving安装页面](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md#安装),将MindSpore Serving正确地安装到你的电脑当中,同时通过[MindSpore 
Serving环境配置页面](https://gitee.com/mindspore/serving/blob/r1.1/README_CN.md#配置环境变量)完成环境变量配置。 ### 导出模型 -使用[add_model.py](https://gitee.com/mindspore/serving/blob/master/example/add/export_model/add_model.py),构造一个只有Add算子的网络,并导出MindSpore推理部署模型。 +使用[add_model.py](https://gitee.com/mindspore/serving/blob/r1.1/example/add/export_model/add_model.py),构造一个只有Add算子的网络,并导出MindSpore推理部署模型。 ```python import os @@ -49,7 +49,7 @@ class Net(nn.Cell): def __init__(self): super(Net, self).__init__() - self.add = ops.TensorAdd() + self.add = ops.Add() def construct(self, x_, y_): """construct add net""" @@ -83,7 +83,7 @@ if __name__ == "__main__": ``` 使用MindSpore定义神经网络需要继承`mindspore.nn.Cell`。Cell是所有神经网络的基类。神经网络的各层需要预先在`__init__`方法中定义,然后通过定义`construct`方法来完成神经网络的前向构造。使用`mindspore`模块的`export`即可导出模型文件。 -更为详细完整的示例可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/master/quick_start/quick_start.html)。 +更为详细完整的示例可以参考[实现一个图片分类应用](https://www.mindspore.cn/tutorial/training/zh-CN/r1.1/quick_start/quick_start.html)。 执行`add_model.py`脚本,生成`tensor_add.mindir`文件,该模型的输入为两个shape为[2,2]的二维Tensor,输出结果是两个输入Tensor之和。 @@ -103,7 +103,7 @@ test_dir - `master_with_worker.py`为启动服务脚本文件。 - `add`为模型文件夹,文件夹名即为模型名。 - `tensor_add.mindir`为上一步网络生成的模型文件,放置在文件夹1下,1为版本号,不同的版本放置在不同的文件夹下,版本号需以纯数字串命名,默认配置下启动最大数值的版本号的模型文件。 -- [servable_config.py](https://gitee.com/mindspore/serving/blob/master/example/add/add/servable_config.py)为[模型配置文件](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_model.html),其定义了模型的处理函数,包括`add_common`和`add_cast`两个方法,`add_common`定义了输入为两个普通float32类型的加法操作,`add_cast`定义输入类型为其他类型,经过输入类型转换float32后的加法操作。 +- [servable_config.py](https://gitee.com/mindspore/serving/blob/r1.1/example/add/add/servable_config.py)为[模型配置文件](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_model.html),其定义了模型的处理函数,包括`add_common`和`add_cast`两个方法,`add_common`定义了输入为两个普通float32类型的加法操作,`add_cast`定义输入类型为其他类型,经过输入类型转换float32后的加法操作。 模型配置文件内容如下: @@ -113,7 +113,7 @@ from mindspore_serving.worker import 
register
 
 
 def add_trans_datatype(x1, x2):
-    """define preprocess, this example has one input and one output"""
+    """define preprocess, this example has two inputs and two outputs"""
     return x1.astype(np.float32), x2.astype(np.float32)
 
 
@@ -126,7 +126,7 @@ register.declare_servable(servable_file="tensor_add.mindir", model_format="MindI
 
 # register add_common method in add
 @register.register_method(output_names=["y"])
 def add_common(x1, x2): # only support float32 inputs
-    """method add_common data flow definition, only call model servable"""
+    """method add_common data flow definition, only call model inference"""
     y = register.call_servable(x1, x2)
     return y
 
@@ -134,7 +134,7 @@ def add_common(x1, x2): # only support float32 inputs
 
 # register add_cast method in add
 @register.register_method(output_names=["y"])
 def add_cast(x1, x2):
-    """method add_cast data flow definition, only call preprocess and model servable"""
+    """method add_cast data flow definition, only call preprocess and model inference"""
     x1, x2 = register.call_preprocess(add_trans_datatype, x1, x2) # cast input to float32
     y = register.call_servable(x1, x2)
     return y
@@ -145,7 +145,7 @@ MindSpore Serving提供两种部署方式,轻量级部署和集群部署。轻
 
 #### 轻量级部署
 
 服务端调用Python接口直接启动推理进程(master和worker共进程),客户端直接连接推理服务后下发推理任务。
-执行[master_with_worker.py](https://gitee.com/mindspore/serving/blob/master/example/add/master_with_worker.py),完成轻量级部署服务如下:
+执行[master_with_worker.py](https://gitee.com/mindspore/serving/blob/r1.1/example/add/master_with_worker.py),完成轻量级部署服务如下:
 
 ```python
 import os
@@ -201,8 +201,8 @@ if __name__ == "__main__":
 
 ### 执行推理
 
-客户端提供两种方式访问推理服务,一种是通过[gRPC方式](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_grpc.html),一种是通过[RESTful方式](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_restful.html),本文以gRPC方式为例。
-使用[client.py](https://gitee.com/mindspore/serving/blob/master/example/add/client.py),启动Python客户端。
+客户端提供两种方式访问推理服务,一种是通过[gRPC方式](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_grpc.html),一种是通过[RESTful方式](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_restful.html),本文以gRPC方式为例。 +使用[client.py](https://gitee.com/mindspore/serving/blob/r1.1/example/add/client.py),启动Python客户端。 ```python import numpy as np diff --git a/tutorials/inference/source_zh_cn/serving_grpc.md b/tutorials/inference/source_zh_cn/serving_grpc.md index fd388bf0ea72ae812634ff092d66b94c9bad9a4e..b2e3ee870bdb591b21db46d89b320d25cd4e0c50 100644 --- a/tutorials/inference/source_zh_cn/serving_grpc.md +++ b/tutorials/inference/source_zh_cn/serving_grpc.md @@ -11,15 +11,15 @@ - + ## 概述 -MindSpore Serving提供gRPC接口访问Serving服务。在Python环境下,我们提供[mindspore_serving.client](https://gitee.com/mindspore/serving/blob/master/mindspore_serving/client/python/client.py) 模块用于填写请求、解析回复。gRPC服务端(worker节点)当前仅支持Ascend平台,客户端运行不依赖特定硬件环境。接下来我们通过`add`和`ResNet-50`样例来详细说明gRPC Python客户端接口的使用。 +MindSpore Serving提供gRPC接口访问Serving服务。在Python环境下,我们提供[mindspore_serving.client](https://gitee.com/mindspore/serving/blob/r1.1/mindspore_serving/client/python/client.py) 模块用于填写请求、解析回复。gRPC服务端(worker节点)当前仅支持Ascend平台,客户端运行不依赖特定硬件环境。接下来我们通过`add`和`ResNet-50`样例来详细说明gRPC Python客户端接口的使用。 ## add样例 -样例来源于[add example](https://gitee.com/mindspore/serving/blob/master/example/add/client.py) ,`add` Servable提供的`add_common`方法提供两个2x2 Tensor相加功能。其中gRPC Python客户端代码如下所示,一次gRPC请求包括了三对独立的2x2 Tensor: +样例来源于[add example](https://gitee.com/mindspore/serving/blob/r1.1/example/add/client.py) ,`add` Servable提供的`add_common`方法提供两个2x2 Tensor相加功能。其中gRPC Python客户端代码如下所示,一次gRPC请求包括了三对独立的2x2 Tensor: ```python from mindspore_serving.client import Client @@ -54,7 +54,7 @@ if __name__ == '__main__': run_add_common() ``` -按照[入门流程](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_example.html) 导出模型、启动Serving服务器,并执行上述客户端代码。当运行正常后,将打印以下结果,为了展示方便,格式作了调整: +按照[入门流程](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_example.html) 
导出模型、启动Serving服务器,并执行上述客户端代码。当运行正常后,将打印以下结果,为了展示方便,格式作了调整: ```python [{'y': array([[2., 2.], [2., 2.]], dtype=float32)}, @@ -124,7 +124,7 @@ if __name__ == '__main__': ## ResNet-50样例 -样例来源于[ResNet-50 example](https://gitee.com/mindspore/serving/blob/master/example/resnet/client.py),`ResNet-50` Servable提供的`classify_top1`方法提供对图像进行识别的服务。`classify_top1`方法输入为图像数据,输出为字符串,方法中预处理对图像进行解码、Resize等操作,接着进行推理,并通过后处理返回得分最大的分类标签。 +样例来源于[ResNet-50 example](https://gitee.com/mindspore/serving/blob/r1.1/example/resnet/client.py),`ResNet-50` Servable提供的`classify_top1`方法提供对图像进行识别的服务。`classify_top1`方法输入为图像数据,输出为字符串,方法中预处理对图像进行解码、Resize等操作,接着进行推理,并通过后处理返回得分最大的分类标签。 ```python import os diff --git a/tutorials/inference/source_zh_cn/serving_model.md b/tutorials/inference/source_zh_cn/serving_model.md index b4edcd3e562054659f90a20a821f48f7cb788ede..a37c8514e4a2824eacecc11a195e532b596ba137 100644 --- a/tutorials/inference/source_zh_cn/serving_model.md +++ b/tutorials/inference/source_zh_cn/serving_model.md @@ -17,17 +17,17 @@ - + ## 概述 MindSpore Serving当前仅支持Ascend 310和Ascend 910环境。 -MindSpore Serving的Servable提供推理服务,包含两种类型。一种是推理服务来源于单模型,一种是推理服务来源于多模型组合,多模型组合正在开发中。 +MindSpore Serving的Servable提供推理服务,包含两种类型。一种是推理服务来源于单模型,一种是推理服务来源于多模型组合,多模型组合正在开发中。模型需要进行配置以提供Serving推理服务。 本文将说明如何对单模型进行配置以提供Servable,以下所有Servable配置说明针对的是单模型Servable,Serving客户端简称客户端。 -本文以ResNet-50作为样例介绍如何配置模型提供Servable。样例代码可参考[ResNet-50样例](https://gitee.com/mindspore/serving/tree/master/example/resnet/) 。 +本文以ResNet-50作为样例介绍如何配置模型提供Servable。样例代码可参考[ResNet-50样例](https://gitee.com/mindspore/serving/tree/r1.1/example/resnet/) 。 ## 相关概念 @@ -136,7 +136,7 @@ def postprocess_top5(score): 预处理和后处理定义格式相同,入参为每个实例的输入数据。输入数据为文本时,入参为str对象;输入数据为其他数据类型,包括Tensor、Scalar number、Bool、Bytes时,入参为**numpy对象**。通过`return`返回实例的处理结果,`return`返回的数据可为**numpy、Python的bool、int、float、str、或bytes**单个数据对象或者由它们组成的tuple。 -预处理和后处理输入的来源和输出的使用由[方法定义](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_model.html#id9)决定。 
+预处理和后处理输入的来源和输出的使用由[方法定义](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_model.html#id9)决定。 ### 模型声明 @@ -159,6 +159,17 @@ register.declare_servable(servable_file="resnet50_1b_imagenet.mindir", model_for ![image](./images/matmul_without_batch.png) +另外,对于一个模型,假设其中一个输入是数据输入,包括`batch`维度信息,另一个输入为模型配置信息,没有包括`batch`维度信息,此时在设置`with_batch_dim`为`True`基础上,设置额外参数`without_batch_dim_inputs`指定没有包括`batch`维度信息的输入信息。 +例如: + +```python +from mindspore_serving.worker import register +# Input1 indicates the input shape information of the model, without the batch dimension information. +# input0: [N,3,416,416], input1: [2] +register.declare_servable(servable_file="yolov3_darknet53.mindir", model_format="MindIR", + with_batch_dim=True, without_batch_dim_inputs=1) +``` + ### 方法定义 方法定义的例子如下: @@ -186,13 +197,9 @@ def classify_top5(image): return label, score ``` -Python函数和Servable方法对应关系如下表: +上述代码在Servable `resnet50`定义了`classify_top1`和`classify_top5`方法,其中方法`classify_top1`入参为`image`,出参为`label`,方法`classify_top5`入参为`image`,出参为`label`和`score`。即,Servable方法的入参由Python方法的入参指定,Servable方法的出参由`register_method`的`output_names`指定。 -| Python函数 | Servable方法 | -| ---- | ---- | -| 函数名 | 方法名 | -| 入参和入参名称 | 入参和入参名称 | -| `register_method`的`output_names`参数 | 出参和出参名称 | +另外方法定义中: - `call_preprocess`指示了使用的预处理及其输入。 @@ -203,3 +210,31 @@ Python函数和Servable方法对应关系如下表: - `return`指示了方法的返回数据,和`register_method`的`output_names`参数对应。 方法定义不能包括if、for、while等分支结构,预处理和后处理可选,不可重复,模型推理必选,且顺序不能打乱。 + +用户在客户端使用Servable某个方法提供的服务时,需要通过入参名称指定对应输入的值,通过出参名称识别各个输出的值。比如客户端访问方法`classify_top5`: + +```python +from mindspore_serving.client import Client + +def read_images(): + # read image file and return + +def run_classify_top5(): + """Client for servable resnet50 and method classify_top5""" + client = Client("localhost", 5500, "resnet50", "classify_top5") + instances = [] + for image in read_images(): # read multi image + instances.append({"image": image}) # input `image` + result = client.infer(instances) + print(result) + for 
result_item in result:  # result for every image
+        label = result_item["label"]  # result `label`
+        score = result_item["score"]  # result `score`
+        print("label result", label)
+        print("score result", score)
+
+if __name__ == '__main__':
+    run_classify_top5()
+```
+
+另外,一次请求可包括多个实例,且多个排队处理的请求也将有多个实例,如果需要在自定义的预处理或后处理中通过多线程等并发方式处理多个实例,比如在预处理中使用MindData并发能力处理多个输入图片,MindSpore Serving提供了`call_preprocess_pipeline`和`call_postprocess_pipeline`用于注册此类预处理和后处理。详情可参考[ResNet-50样例的模型配置](https://gitee.com/mindspore/serving/blob/r1.1/example/resnet/resnet50/servable_config.py) 。
diff --git a/tutorials/inference/source_zh_cn/serving_restful.md b/tutorials/inference/source_zh_cn/serving_restful.md
index 017d25a5ff0604f5151a0eaf60f5d80615dbef86..589b3480d3487959adbd4a82d7b4cab6d825a829 100644
--- a/tutorials/inference/source_zh_cn/serving_restful.md
+++ b/tutorials/inference/source_zh_cn/serving_restful.md
@@ -8,11 +8,12 @@
 
 - [概述](#概述)
     - [请求方式](#请求方式)
     - [请求输入格式](#请求输入格式)
+        - [base64数据编码](#base64数据编码)
     - [请求应答格式](#请求应答格式)
 
-
+
 
 ## 概述
 
@@ -20,15 +21,15 @@ MindSpore Serving支持`gPRC`和`RESTful`两种请求方式。本章节介绍`RESTful`请求方式。
 
 `RESTful`是一种基于`HTTP`协议的网络应用程序的设计风格和开发方式,通过`URI`实现对资源的管理及访问,具有扩展性强、结构清晰的特点。基于其轻量级以及通过`HTTP`直接传输数据的特性,`RESTful`已经成为最常见的`Web`服务访问方式。用户通过`RESTful`方式,能够简单直接的与服务进行交互。
 
-部署`Serving`参考[快速入门](https://www.mindspore.cn/tutorial/inference/zh-CN/master/serving_example.html) 章节。
+部署`Serving`参考[快速入门](https://www.mindspore.cn/tutorial/inference/zh-CN/r1.1/serving_example.html) 章节。
 
-与通过`master.start_grpc_server("127.0.0.1", 5500)`启动`gRPC`服务不同的是,`RESTful`服务需要通过`master.start_restful_server("0.0.0.0", 1500)`方式来启动。
+通过`master.start_restful_server`接口启动`RESTful`服务;另外,可通过`master.start_grpc_server`启动`gRPC`服务。
 
->`RESTful`服务端(worker节点)当前仅支持`Ascend`硬件,`RESTful`客户端不依赖特定硬件平台。
+> `RESTful`客户端不依赖特定硬件平台,Serving服务端当前仅支持`Ascend310`和`Ascend910`硬件环境。
 
 ## 请求方式
 
-当前支持`POST`类型的RESTful请求,请求格式如下:
+当前仅支持`POST`类型的RESTful请求,请求格式如下:
 
 ```text
 POST http://${HOST}:${PORT}/model/${MODLE_NAME}[/version/${VERSION}]:${METHOD_NAME}
@@ -36,11 +37,11
@@ POST http://${HOST}:${PORT}/model/${MODLE_NAME}[/version/${VERSION}]:${METHOD_NA 其中: -- `HOST`:指定访问的IP地址; -- `PORT`:指定访问的端口号; -- `MODEL_NAME`:请求的模型名称; -- `VERSION`:表示版本号。版本号是可选的,若未指定具体版本号,则默认使用模型的最新版本。 -- `METHOD_NAME`:表示请求模型的具体方法名称。 +- `${HOST}`:指定访问的IP地址; +- `${PORT}`:指定访问的端口号; +- `${MODLE_NAME}`:请求的模型名称; +- `${VERSION}`:表示版本号。版本号是可选的,若未指定具体版本号,则默认使用模型的最新版本。 +- `${METHOD_NAME}`:表示请求模型的具体方法名称。 如果使用`curl`工具,RESTful请求方式如下: @@ -54,13 +55,13 @@ curl -X POST -d '${REQ_JSON_MESSAGE}' http://${HOST}:${PORT}/model/${MODLE_NAME} curl -X POST -d '{"instances":{"image":{"b64":"babe64-encoded-string"}' http://127.0.0.1:1500/model/lenet/version/1:predict ``` -其中:`babe64-encoded-string`是数字`1`图片经过`base64`编码之后的字符串。由于字符串比较长,不显式列出。 +其中:`babe64-encoded-string`表示数字图片经过`base64`编码之后的字符串。由于字符串比较长,不显式列出。 ## 请求输入格式 RESTful支持`Json`请求格式,`key`固定为`instances`,`value`表示多个实例。 -每个实例通过`key-value`格式的`Json`表示。其中: +每个实例通过`key-value`格式的`Json`对象来表示。其中: - `key`:表示输入名称,需要与请求模型提供的方法的输入参数名称一致,若不一致,则请求失败。 @@ -70,7 +71,7 @@ RESTful支持`Json`请求格式,`key`固定为`instances`,`value`表示多 `bytes`:通过`base64`编码方式支持。 - - 张量:`int`、`float`、`bool`。 + - 张量:`int`、`float`、`bool`组成的一级或多级数组。 张量通过数组格式表示数据和维度信息。 @@ -80,19 +81,19 @@ RESTful支持`Json`请求格式,`key`固定为`instances`,`value`表示多 ```text { -"instances":[ - { - "input_name1":||, - "input_name2":||, - ... - }, - { - "input_name1":||, - "input_name2":||, + "instances":[ + { + "input_name1":||, + "input_name2":||, + ... + }, + { + "input_name1":||, + "input_name2":||, + ... + } ... - } - ... 
-] + ] } ``` @@ -103,12 +104,12 @@ RESTful支持`Json`请求格式,`key`固定为`instances`,`value`表示多 "instances":[ { "tag":"one", - "box":[[1,1],[2,3],[3,4]] + "box":[[1,1],[2,3],[3,4]], "image":{"b64":"iVBOR...ggg==="} }, { - "tag":"two" - "box":[[2,2],[5,5],[6,6]] + "tag":"two", + "box":[[2,2],[5,5],[6,6]], "image":{"b64":"iVBOR...QmCC", "type":"bytes"} } ] @@ -117,7 +118,9 @@ RESTful支持`Json`请求格式,`key`固定为`instances`,`value`表示多 其中:`iVBOR...ggg===`是图片数字`0`经过`base64`编码之后的省略字符串。`iVBOR...QmCC`是图片数字`1`经过`base64`编码之后的省略字符串。不同图片编码出来的字符串可能不同,上述是示意说明。 -`bytes`类型需要通过`base64`编码进行表示。`base64`除了支持`bytes`类型,也支持表示其他标量和张量,此时需要通过`type`指定数据类型,通过`shape`指定维度信息。 +### base64数据编码 + +`bytes`类型需要通过`base64`编码进行表示。`base64`除了可以表示`bytes`类型,也可以表示其他标量和张量数据,此时将标量和张量的二进制数据通过`base64`进行编码,并额外通过`type`指定数据类型,通过`shape`指定维度信息: - `type`:可选,如果不指定,默认为`bytes`。 @@ -133,26 +136,24 @@ RESTful支持`Json`请求格式,`key`固定为`instances`,`value`表示多 { "instances":[ { - "tag":"one", - "box":{"b64":"AQACAAIAAwADAAQA", "type":"int16", "shape":[3,2]}, - "image":{"b64":"iVBOR...ggg==="} + "box":{"b64":"AQACAAIAAwADAAQA", "type":"int16", "shape":[3,2]} } ] } ``` -其中`AQACAAIAAwADAAQA`:是`[[1,1],[2,3],[3,4]]`经过`base64`编码字后的字符串。 +其中`AQACAAIAAwADAAQA`:是`[[1,1],[2,3],[3,4]]`的二进制数据格式经过`base64`编码后的字符串。 **支持的类型总结如下:** -| 支持的类型 | 例子 | 备注 | -| :------------------------------------------------------------------------------------------: | ------------------------------------------------------------------------------ | ---------------------------------- | -| `int` | 1,[1,2,3,4] | 默认`int32`表示范围 | -| `float` | 1.0,[[1.2, 2.3], [3.0, 4.5]] | 默认`float32`表示范围 | -| `bool` | true,false,[[true],[false]] | `bool`类型 | -| `string` | "hello"或者
{"b64":"aGVsbG8=", "type":"str"} | 直接表示或者指定`type`方式表示 | -| `bytes` | {"b64":"AQACAAIAAwADAAQA"} 或者
{"b64":"AQACAAIAAwADAAQA", "type":"bytes"} | 如果不填`type`,默认为`bytes` | -| `int8`,`int16`,`int32`,`int64`,`uint8`,`uint16`,`uint32`,`uint64` `float16`,`float32`,`bool` | {"b64":"AQACAAIAAwADAAQA", "type":"int16", "shape":[3,2]} | 利用base64编码,表示指定type的数据 | +| 支持的类型 | 例子 | 备注 | +| :------: | -------- | ---------------- | +| `int` | 1,[1,2,3,4] | 默认`int32`表示范围 | +| `float` | 1.0,[[1.2, 2.3], [3.0, 4.5]] | 默认`float32`表示范围 | +| `bool` | true,false,[[true],[false]] | `bool`类型 | +| `string` | "hello"或者
{"b64":"aGVsbG8=", "type":"str"} | 直接表示或者指定`type`方式表示 | +| `bytes` | {"b64":"AQACAAIAAwADAAQA"} 或者
{"b64":"AQACAAIAAwADAAQA", "type":"bytes"} | 如果不填`type`,默认为`bytes` | +| `int8`,`int16`,`int32`,`int64`,`uint8`,`uint16`,`uint32`,`uint64` `f16`,`f32`,`f64`,`bool` | {"b64":"AQACAAIAAwADAAQA", "type":"int16", "shape":[3,2]} | 利用base64编码,表示指定type的数据 | ## 请求应答格式 @@ -167,7 +168,7 @@ RESTful支持`Json`请求格式,`key`固定为`instances`,`value`表示多 ... }, { - "output_name1":||, + "output_name1":||, "output_name2":||, ... } diff --git a/tutorials/lite/source_en/_static/js/lite.js b/tutorials/lite/source_en/_static/js/lite.js index dba4c4baa03e83dc99a6aec8711a6a85fbf08506..03943ac0175addd579e247c2c47f7196b823f194 100644 --- a/tutorials/lite/source_en/_static/js/lite.js +++ b/tutorials/lite/source_en/_static/js/lite.js @@ -146,7 +146,7 @@ $(function() { // 计算总页数 var len = Math.ceil((all - hidden_num) / curNum); // 生成页码 - var pageList = '
  • ' + '共' + all_article + '条' + '
  • ' + '
  • '; + var pageList = '
  • ' + 'Total ' + all_article + ' Result(s)' + '&#160;
  • ' + '
  • '; // 当前的索引值 var iNum = 0; @@ -158,7 +158,7 @@ $(function() { if (all_article > 0){ $('#pageNav').html(pageList).find('li').eq(2).addClass('active'); }else{ - $('#pageNav').html('
  • ' + '共' + all_article + '条' + '
  • '); + $('#pageNav').html('
  • ' + 'Total ' + all_article + ' Result(s)' + '&#160;
  • '); } // 标签页的点击事件 diff --git a/tutorials/lite/source_en/conf.py b/tutorials/lite/source_en/conf.py index b472aa71f0899d61ef358f7388dcadfe8a2c7706..c87330ab041202db0eb914846d1bfc70d7438774 100644 --- a/tutorials/lite/source_en/conf.py +++ b/tutorials/lite/source_en/conf.py @@ -21,7 +21,7 @@ copyright = '2020, MindSpore Lite' author = 'MindSpore Lite' # The full version, including alpha/beta/rc tags -release = 'master' +release = 'r1.1' # -- General configuration --------------------------------------------------- diff --git a/tutorials/lite/source_en/images/side_train_sequence.png b/tutorials/lite/source_en/images/side_train_sequence.png index 16e4af67a46370813760c09a15da756ad87fa643..058f03d3973beab9c8a245d6aa898f938d486315 100644 Binary files a/tutorials/lite/source_en/images/side_train_sequence.png and b/tutorials/lite/source_en/images/side_train_sequence.png differ diff --git a/tutorials/lite/source_en/index.rst b/tutorials/lite/source_en/index.rst index fcddb633dca596106919215c815e3ac44b6e86e5..8b9f9f69cc001273b90995f5594678fec8c9c104 100644 --- a/tutorials/lite/source_en/index.rst +++ b/tutorials/lite/source_en/index.rst @@ -112,7 +112,7 @@ Using MindSpore on Mobile and IoT