+
+Figure 1 shows the overall design of differential privacy training, which mainly includes differential privacy noise mechanisms (DP mechanisms), a differential privacy optimizer (DP optimizer), and a privacy monitor.
+
+
+### DP Optimizer
+
+The DP optimizer inherits the capabilities of the MindSpore optimizer and uses the DP mechanisms to perturb and protect gradients. Currently, MindArmour provides three types of DP optimizers: the constant Gaussian optimizer, the adaptive Gaussian optimizer, and the adaptive clipping optimizer. Each type of DP optimizer adds differential privacy protection capabilities to common optimizers such as SGD and Momentum from a different perspective.
+
+* The constant Gaussian optimizer is a DP optimizer with non-adaptive Gaussian noise. Its advantage is that the differential privacy budget ϵ can be strictly controlled. Its disadvantage is that the amount of noise added in each step is fixed; if the number of training steps is too large, the noise in the later phase of training makes model convergence difficult, or even degrades performance greatly and leaves the model unusable.
+* The adaptive Gaussian optimizer adaptively adjusts the standard deviation of the Gaussian noise. In the initial phase of model training, a large amount of noise is added; as the model gradually converges, the noise amount gradually decreases, reducing the impact of the noise on model availability. The disadvantage of adaptive Gaussian noise is that the differential privacy budget cannot be strictly controlled.
+* The adaptive clipping optimizer is a DP optimizer that adaptively adjusts the clipping granularity. Gradient clipping is an important operation in differential privacy training. The adaptive clipping optimizer can keep the ratio of clipped gradients fluctuating within a given range and control the gradient clipping granularity across training steps.
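+
+As a minimal usage sketch (assuming the `DPOptimizerClassFactory` interface exported by `mindarmour.privacy.diff_privacy`; parameter names follow current tutorials and may differ across versions, and `network` is a placeholder for any constructed `mindspore.nn.Cell`), a DP optimizer is created by wrapping a common optimizer:
+
+```python
+# A sketch: wrap Momentum with constant Gaussian noise for DP training.
+# Assumes the mindarmour.privacy.diff_privacy factory API; `network` is a
+# placeholder for a constructed mindspore.nn.Cell.
+from mindarmour.privacy.diff_privacy import DPOptimizerClassFactory
+
+factory = DPOptimizerClassFactory(micro_batches=2)
+factory.set_mechanisms('Gaussian',
+                       norm_bound=1.0,                # gradient clipping bound
+                       initial_noise_multiplier=1.5)  # noise std = multiplier * bound
+DPMomentum = factory.create('Momentum')
+optimizer = DPMomentum(params=network.trainable_params(),
+                       learning_rate=0.01, momentum=0.9)
+```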
+
+### DP Mechanisms
+
+The noise mechanism is the basis for building the differential privacy training capability. Different noise mechanisms meet the requirements of different DP optimizers, including constant Gaussian distribution noise, adaptive Gaussian distribution noise, adaptive clipping Gaussian distribution noise, and Laplace distribution noise.
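+
+As a sketch (assuming the `NoiseMechanismsFactory` interface in `mindarmour.privacy.diff_privacy`; the decay parameters are illustrative and may differ by version), noise mechanisms can be created through a factory:
+
+```python
+# A sketch of creating noise mechanisms via the factory, assuming the
+# NoiseMechanismsFactory API; parameter values are placeholders.
+from mindarmour.privacy.diff_privacy import NoiseMechanismsFactory
+
+# Constant Gaussian noise: std stays at norm_bound * initial_noise_multiplier.
+constant_noise = NoiseMechanismsFactory().create('Gaussian',
+                                                 norm_bound=1.0,
+                                                 initial_noise_multiplier=1.5)
+# Adaptive Gaussian noise: std decays as training proceeds.
+adaptive_noise = NoiseMechanismsFactory().create('AdaGaussian',
+                                                 norm_bound=1.0,
+                                                 initial_noise_multiplier=1.5,
+                                                 noise_decay_rate=6e-4,
+                                                 decay_policy='Exp')
+```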
+
+### Monitor
+
+The monitor provides callback functions based on Rényi differential privacy (RDP) and zero-concentrated differential privacy (ZCDP) to monitor the differential privacy budget of the model.
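+
+As a usage sketch (assuming the `PrivacyMonitorFactory` interface in `mindarmour.privacy.diff_privacy`; parameter names and values are illustrative and may differ by version), such a monitor can be created and passed to training as a callback:
+
+```python
+# A sketch of an RDP-based privacy monitor, assuming the PrivacyMonitorFactory
+# API; the numbers are placeholders for an MNIST-like setup.
+from mindarmour.privacy.diff_privacy import PrivacyMonitorFactory
+
+rdp_monitor = PrivacyMonitorFactory.create('rdp',
+                                           num_samples=60000,  # training set size
+                                           batch_size=256,
+                                           initial_noise_multiplier=1.5,
+                                           per_print_times=50)
+# Pass rdp_monitor in the callbacks list of model.train to track the budget.
+```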
+
+* ZCDP[2]
+
+ ZCDP is a relaxed differential privacy definition. It uses the Rényi divergence to measure the difference between the distributions of a random function on adjacent datasets.
+
+* RDP[3]
+
+ RDP is a more general differential privacy definition based on the Rényi divergence. It uses the Rényi divergence to measure the difference between the output distributions of a randomized mechanism on two adjacent datasets.
+
+
+Compared with traditional differential privacy, ZCDP and RDP provide tighter upper-bound guarantees on the privacy budget.
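+
+For intuition, the RDP of a Gaussian mechanism with L2 sensitivity 1 and noise standard deviation σ is ε(α) = α/(2σ²) at order α; composing T queries scales this by T, and an RDP guarantee converts to a classic (ε, δ)-DP guarantee via ε = ε_RDP + log(1/δ)/(α−1) [3]. The following sketch is plain math rather than the MindArmour monitor API, and it ignores the subsampling amplification that DP-SGD accounting additionally uses:
+
+```python
+import math
+
+def rdp_gaussian(alpha, sigma):
+    """Order-alpha RDP budget of one Gaussian query with sensitivity 1."""
+    return alpha / (2.0 * sigma ** 2)
+
+def eps_from_rdp(alpha, rdp_eps, delta):
+    """Convert an RDP guarantee to a classic (eps, delta)-DP guarantee."""
+    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)
+
+sigma, steps, delta = 50.0, 100, 1e-5
+# Compose over all steps, then take the tightest bound over RDP orders.
+eps = min(eps_from_rdp(a, steps * rdp_gaussian(a, sigma), delta)
+          for a in range(2, 256))
+print(f"({eps:.2f}, {delta})-DP after {steps} steps")  # roughly (0.98, 1e-05)
+```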
+
+
+## Code Implementation
+
+* [mechanisms.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/mechanisms/mechanisms.py): implements the noise generation mechanism required by differential privacy training, including simple Gaussian noise, adaptive Gaussian noise, and adaptive clipping Gaussian noise.
+* [optimizer.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/optimizer/optimizer.py): implements the fundamental logic of using the noise generation mechanism to add noise during backward propagation.
+* [monitor.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/monitor/monitor.py): implements the callback function for computing the differential privacy budget. During model training, the current differential privacy budget is returned.
+* [model.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/privacy/diff_privacy/train/model.py): implements the logic of computing the loss and gradient as well as the gradient truncation logic of differential privacy training, which is the entry for users to use the differential privacy training capability.
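+
+As an entry-point sketch (assuming the `DPModel` interface in `mindarmour.privacy.diff_privacy`; all surrounding objects are placeholders supplied by the caller):
+
+```python
+# A sketch of launching DP training through DPModel; network, loss_fn,
+# optimizer, noise_mech, and train_dataset are placeholders.
+from mindarmour.privacy.diff_privacy import DPModel
+
+def dp_train(network, loss_fn, optimizer, noise_mech, train_dataset):
+    model = DPModel(micro_batches=2,       # split each batch for clipping
+                    norm_bound=1.0,        # L2 gradient clipping bound
+                    noise_mech=noise_mech, # e.g. a Gaussian mechanism from above
+                    network=network,
+                    loss_fn=loss_fn,
+                    optimizer=optimizer)   # a common optimizer such as Momentum
+    model.train(10, train_dataset, dataset_sink_mode=False)
+    return model
+```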
+
+## References
+
+[1] Dwork, Cynthia, and Jing Lei. "Differential privacy and robust statistics." *Proceedings of the forty-first annual ACM symposium on Theory of computing*. 2009.
+
+[2] Lee, Jaewoo, and Daniel Kifer. "Concentrated differentially private gradient descent with adaptive per-iteration privacy budget." *Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*. 2018.
+
+[3] Mironov, Ilya. "Rényi differential privacy." *2017 IEEE 30th Computer Security Foundations Symposium (CSF)*. IEEE, 2017.
diff --git a/docs/source_en/design/mindarmour/fuzzer_design.md b/docs/source_en/design/mindarmour/fuzzer_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..2a41c2342eb3ed7fe13804890f7d97f491e2f20e
--- /dev/null
+++ b/docs/source_en/design/mindarmour/fuzzer_design.md
@@ -0,0 +1,74 @@
+# AI Model Security Test
+
+`Linux` `Ascend` `GPU` `CPU` `Data Preparation` `Model Development` `Model Training` `Model Optimization` `Enterprise` `Expert`
+
+
+
+
+- [AI Model Security Test](#ai-model-security-test)
+ - [Background](#background)
+ - [Fuzz Testing Design](#fuzz-testing-design)
+ - [Fuzz Testing Process](#fuzz-testing-process)
+ - [Code Implementation](#code-implementation)
+ - [References](#references)
+
+
+
+
+
+## Background
+
+Different from [fuzzing security tests for traditional programs](https://zhuanlan.zhihu.com/p/43432370), MindArmour provides the AI model security test module fuzz_testing for deep neural networks. Based on neural network features, the concept of neuron coverage rate [1] is introduced to guide fuzz testing: the fuzzer is steered to generate samples that increase the neuron coverage rate, so that more neurons are activated by the inputs and the distribution range of neuron values becomes wider. This tests the DNN more thoroughly and explores the outputs and error behavior of different types of models.
+
+## Fuzz Testing Design
+
+The following figure shows the security test design of the AI model.
+
+
+
+At the user interface layer, users need to provide the original dataset `DataSet`, the tested model `Model`, and the Fuzzer parameters `Fuzzer configuration`. After fuzzing the model and data, the Fuzzer module returns the security report `Security Report`.
+
+The fuzz testing architecture consists of three modules:
+
+1. Natural Threat/Adversarial Example Generator:
+
+ Randomly select a mutation method to mutate the seed data and generate multiple variants. The supported mutation policies include:
+
+ - Image affine transformation methods: Translate, Rotate, Scale, and Shear.
+ - Methods based on image pixel value changes: Contrast, Brightness, Blur, and Noise.
+ - Methods for generating adversarial examples based on white-box and black-box attacks: FGSM, PGD, and MDIIM.
+
+2. Fuzzer Module:
+
+ Perform fuzz testing on the mutated data to observe the change in the neuron coverage rate. If the generated data increases the neuron coverage rate, add it to the mutated seed queue for the next round of data mutation. Currently, the following neuron coverage metrics are supported: KMNC, NBC, and SNAC [2].
+
+3. Evaluation:
+
+ Evaluate the fuzz testing effect, quality of generated data, and strength of mutation methods. Five metrics of three types are supported, including the general evaluation metric (accuracy), neuron coverage rate metrics (kmnc, nbc, and snac), and adversarial attack evaluation metric (attack_success_rate).
+
+## Fuzz Testing Process
+
+
+
+The fuzz testing process is as follows:
+
+1. Select seed A from the seed queue according to the policy.
+2. Randomly select a mutation policy to mutate seed A and generate multiple variants A1, A2, ...
+3. Use the target model to predict the variants. If the semantics of a variant are consistent with those of the seed, the variant enters the Fuzzed Tests set.
+4. If the prediction is correct, use the neuron coverage metric for analysis.
+5. If a variant increases the coverage rate, place the variant in the seed queue for the next round of mutation.
+
+Through multiple rounds of mutation, you can obtain a series of variant data in the Fuzzed Tests set, analyze it further, and produce security reports from multiple perspectives. These reports can be used to analyze defects of the neural network model in depth and to enhance the model, improving its generalization and robustness.
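+
+The loop above can be summarized in a short conceptual sketch; this is not the MindArmour API, and the mutation, semantics-check, and coverage helpers are trivial placeholders for the real components:
+
+```python
+# A conceptual coverage-guided fuzzing loop; all helpers are placeholders.
+import random
+
+def mutate(seed):                          # stands in for affine/pixel/attack mutations
+    return [seed + random.gauss(0, 0.1) for _ in range(4)]
+
+def semantics_preserved(seed, variant):    # stands in for the semantics check
+    return abs(variant - seed) < 0.5
+
+class CoverageTracker:                     # stands in for KMNC/NBC/SNAC tracking
+    def __init__(self):
+        self.seen = set()
+    def update(self, variant):             # cumulative coverage after the variant
+        self.seen.add(round(variant, 1))
+        return len(self.seen)
+
+def fuzz(seed_queue, tracker, max_rounds=100):
+    fuzzed_tests, best = [], 0
+    for _ in range(max_rounds):
+        seed = random.choice(seed_queue)           # 1. select a seed
+        for variant in mutate(seed):               # 2. mutate into variants
+            if not semantics_preserved(seed, variant):
+                continue                           # 3. keep consistent variants only
+            fuzzed_tests.append(variant)           #    variant enters Fuzzed Tests
+            cov = tracker.update(variant)          # 4. neuron-coverage analysis
+            if cov > best:                         # 5. coverage gain: re-queue seed
+                best = cov
+                seed_queue.append(variant)
+    return fuzzed_tests
+
+tests = fuzz([0.0], CoverageTracker())
+```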
+
+## Code Implementation
+
+1. [fuzzing.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/fuzzing.py): overall fuzz testing process.
+2. [model_coverage_metrics.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/model_coverage_metrics.py): neuron coverage rate metrics, including KMNC, NBC, and SNAC.
+3. [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py): image mutation methods, including methods based on image pixel value changes and affine transformation methods.
+4. [adversarial attacks](https://gitee.com/mindspore/mindarmour/tree/master/mindarmour/adv_robustness/attacks): methods for generating adversarial examples based on white-box and black-box attacks.
+
+## References
+
+[1] Pei K, Cao Y, Yang J, et al. Deepxplore: Automated whitebox testing of deep learning systems[C]//Proceedings of the 26th Symposium on Operating Systems Principles. ACM, 2017: 1-18.
+
+[2] Ma L, Juefei-Xu F, Zhang F, et al. Deepgauge: Multi-granularity testing criteria for deep learning systems[C]//Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ACM, 2018: 120-131.
\ No newline at end of file
diff --git a/docs/source_en/design/mindarmour/images/dp_arch.png b/docs/source_en/design/mindarmour/images/dp_arch.png
new file mode 100644
index 0000000000000000000000000000000000000000..c903e4e2acece6c6de882852dc3570126b6fcb05
Binary files /dev/null and b/docs/source_en/design/mindarmour/images/dp_arch.png differ
diff --git a/docs/source_en/design/mindarmour/images/fuzz_architecture.png b/docs/source_en/design/mindarmour/images/fuzz_architecture.png
new file mode 100644
index 0000000000000000000000000000000000000000..d4e8b89bd9a9f4844c59790f5b2114d1d477f927
Binary files /dev/null and b/docs/source_en/design/mindarmour/images/fuzz_architecture.png differ
diff --git a/docs/source_en/design/mindarmour/images/fuzz_process.png b/docs/source_en/design/mindarmour/images/fuzz_process.png
new file mode 100644
index 0000000000000000000000000000000000000000..2e04347f7cfb0819562578a6be1e91b5cc7ce9d5
Binary files /dev/null and b/docs/source_en/design/mindarmour/images/fuzz_process.png differ
diff --git a/docs/source_en/design/mindinsight/images/analyser_class_profiler.png b/docs/source_en/design/mindinsight/images/analyser_class_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..3f785786eb8652e8d1cfc09795e48895da80eef9
Binary files /dev/null and b/docs/source_en/design/mindinsight/images/analyser_class_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/images/context_profiler.png b/docs/source_en/design/mindinsight/images/context_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..f11782ebfe473ddfaec9736055c9012a5129a26f
Binary files /dev/null and b/docs/source_en/design/mindinsight/images/context_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/images/graph_visual_main.png b/docs/source_en/design/mindinsight/images/graph_visual_main.png
index 55ca7d7183c818a15b69a3a6ee2c4ef29655460c..0bc13636b5c84952978469c652c38500e6d34f43 100644
Binary files a/docs/source_en/design/mindinsight/images/graph_visual_main.png and b/docs/source_en/design/mindinsight/images/graph_visual_main.png differ
diff --git a/docs/source_en/design/mindinsight/images/graph_visual_right_side.png b/docs/source_en/design/mindinsight/images/graph_visual_right_side.png
index 90e8d868b5ff9d68ae14d55d8f3ff188db412556..e138bcfbbfda77ff3468442a3e5e169dcd7fed03 100644
Binary files a/docs/source_en/design/mindinsight/images/graph_visual_right_side.png and b/docs/source_en/design/mindinsight/images/graph_visual_right_side.png differ
diff --git a/docs/source_en/design/mindinsight/images/module_profiler.png b/docs/source_en/design/mindinsight/images/module_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..f30582b53e046a37e5d97450b148d4e665ba174d
Binary files /dev/null and b/docs/source_en/design/mindinsight/images/module_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/images/parser_module_profiler.png b/docs/source_en/design/mindinsight/images/parser_module_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..8ef3c927013517e341fbe44c7f96f0be05536b80
Binary files /dev/null and b/docs/source_en/design/mindinsight/images/parser_module_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/images/proposer_class_profiler.png b/docs/source_en/design/mindinsight/images/proposer_class_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..3e2d4363e92821b05cafc330573c981a1ab99bbf
Binary files /dev/null and b/docs/source_en/design/mindinsight/images/proposer_class_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/images/proposer_module_profiler.png b/docs/source_en/design/mindinsight/images/proposer_module_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..909dd42c89715d49a11c35764d84aab231b91fb4
Binary files /dev/null and b/docs/source_en/design/mindinsight/images/proposer_module_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/images/tensor_table.png b/docs/source_en/design/mindinsight/images/tensor_table.png
index 725bd9f8481826d682b593c2224a766854e9b4f8..f2d1ad90b3930f71fa4014d94ae52df909bea434 100644
Binary files a/docs/source_en/design/mindinsight/images/tensor_table.png and b/docs/source_en/design/mindinsight/images/tensor_table.png differ
diff --git a/docs/source_en/design/mindinsight/images/time_order_profiler.png b/docs/source_en/design/mindinsight/images/time_order_profiler.png
new file mode 100644
index 0000000000000000000000000000000000000000..35eef99934ce9d743ebe0294e18ff0b5ea40abab
Binary files /dev/null and b/docs/source_en/design/mindinsight/images/time_order_profiler.png differ
diff --git a/docs/source_en/design/mindinsight/profiler_design.md b/docs/source_en/design/mindinsight/profiler_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..e18497237388c37ddada8552fa01844026926fa6
--- /dev/null
+++ b/docs/source_en/design/mindinsight/profiler_design.md
@@ -0,0 +1,175 @@
+# Profiler Design Document
+
+`Ascend` `GPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor`
+
+
+
+- [Profiler Design Document](#profiler-design-document)
+ - [Background](#background)
+ - [Profiler Architecture Design](#profiler-architecture-design)
+ - [Context](#context)
+ - [Module Structure](#module-structure)
+ - [Internal Module Interaction](#internal-module-interaction)
+ - [Sub-Module Design](#sub-module-design)
+ - [ProfilerAPI and Controller](#profilerapi-and-controller)
+ - [Description](#description)
+ - [Design](#design)
+ - [Parser](#parser)
+ - [Description](#description-1)
+ - [Design](#design-1)
+ - [Analyser](#analyser)
+ - [Description](#description-2)
+ - [Design](#design-2)
+ - [Proposer](#proposer)
+ - [Description](#description-3)
+ - [Design](#design-3)
+
+
+
+
+
+## Background
+
+To support model development and performance debugging in MindSpore, an easy-to-use profiling tool is required to intuitively display the performance information of each dimension of a network model, provide users with easy-to-use and abundant profiling functions, and help users quickly locate network performance faults.
+
+## Profiler Architecture Design
+The Profiler architecture design is introduced from the following three aspects: the overall context interaction relationship of Profiler; the internal structure of Profiler, including the module structure and module layers; the interactive calling relationship between modules.
+
+### Context
+
+Profiler is a part of the MindSpore debugging and optimization tool. The following figure shows the tool context.
+
+
+
+Figure 1 Context relationship
+
+As shown in the preceding figure, the interaction between the Profiler and other components is as follows:
+
+1. In the training script, MindSpore Profiler is called to send a command to the MindSpore ada communication module to start performance data collection. Finally, ada generates the original performance data.
+
+2. MindSpore Profiler parses the original data in the user script and generates the intermediate data results in the specified folder.
+
+3. MindInsight Profiler connects to the intermediate data and provides the visualized Profiler function for users.
+
+### Module Structure
+
+Modules are classified into the following layers:
+
+
+
+Figure 2 Relationships between modules at different layers
+
+
+Module functions are as follows:
+1. ProfilerAPI is the calling entry provided in code, including the performance collection startup API and the analysis API.
+2. Controller is a module at a layer lower than that of ProfilerAPI. It is called by the startup API of ProfilerAPI to start or stop the performance collection function. The original data is written to a fixed position by ada.
+3. Parser is a module for parsing original performance data which is collected on the device and cannot be directly understood by users. Parser parses, combines, and converts the data to generate intermediate results that can be understood by users and analyzed by upper layers.
+4. Analyser obtains the intermediate results parsed by the lower-layer Parser, encapsulates, filters, and sorts the intermediate results, and returns various information to the upper-layer ProfilerAPI and RESTful.
+5. RESTful calls the common API provided by the backend Analyser to obtain target data and connects to the frontend.
+
+### Internal Module Interaction
+Users can use the API or RESTful to complete the internal module interaction process. The following uses the API as an example:
+
+
+
+Figure 3 Module interaction
+
+The interaction process of each module is as follows:
+
+1. ProfilerAPI calls the control function of the lower-layer Controller to control the lower-layer collection module to collect performance information. Currently, the collection module (ada) receives commands in resident process mode and independently collects performance information.
+
+2. After the training is complete, users call the analysis API of ProfilerAPI.
+
+3. The analysis API of ProfilerAPI uses the Parser module to parse performance data, generates intermediate results, calls the Analyser module to analyze the results, and returns various information to users.
+
+## Sub-Module Design
+### ProfilerAPI and Controller
+
+#### Description
+ProfilerAPI provides an entry API in the training script for users to start performance collection and analyze performance data.
+ProfilerAPI delivers commands through Controller to control the startup of ada.
+
+#### Design
+ProfilerAPI belongs to the API layer of the upper-layer application and is integrated by the training script. Its function is divided into two parts:
+
+- Before training, call the bottom-layer Controller API to deliver a command to start a profiling task.
+
+- After training, call the bottom-layer Controller API to deliver commands to stop the profiling task, call the Analyser and Parser APIs to parse data files and generate result data such as operator performance statistics and training trace statistics.
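+
+As a usage sketch (assuming the `mindspore.profiler.Profiler` interface; the output path is a placeholder), the two parts map to two calls in the training script:
+
+```python
+# A sketch of the two-part ProfilerAPI flow: start collection before training,
+# then parse and analyse afterwards. The output path is a placeholder.
+from mindspore.profiler import Profiler
+
+profiler = Profiler(output_path="./profiler_data")  # deliver the start command
+
+# ... build the network and run model.train(...) here ...
+
+profiler.analyse()  # stop collection, parse raw data, generate result files
+```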
+
+
+Controller provides an API for the upper layer, calls the API of the lower-layer performance collection module, and delivers the commands for starting and stopping performance collection.
+
+The generated original performance data includes:
+
+- `hwts.log.data.45.dev.profiler_default_tag` file: stores operator execution information, including the start and end of a task and stream ID.
+- `DATA_PREPROCESS.dev.AICPU` file: specifies AI CPU operator execution time at each stage.
+- `Framework.host.task_desc_info` file: stores the mapping between operator IDs and operator names and the input and output information of each operator.
+- `training_trace.46.dev.profiler_default_tag` file: stores the start and end time of each step and time of step interval, forward and backward propagation, and step tail.
+
+### Parser
+#### Description
+Parser is a module for parsing original performance data which is collected on the device and cannot be directly understood by users. Parser parses, combines, and converts the data to generate intermediate results that can be understood by users and analyzed by upper layers.
+#### Design
+
+
+Figure 4 Parser module
+
+As shown in the preceding figure, there are HWTS Parser, AI CPU Parser, Framework Parser, and Training Trace Parser modules. Each module parses a type of original data to obtain the intermediate file that can be read by users.
+
+- HWTS Parser: parses the `hwts.log.data.45.dev.profiler_default_tag` file to obtain the task-based statistics of the device, such as the start and end of each task and stream ID, which are used to compute the operator execution time.
+- AI CPU Parser: parses the `DATA_PREPROCESS.dev.AICPU` file to obtain the AI CPU operator execution time at each stage.
+- Framework Parser: parses the `Framework.host.task_desc_info` file to obtain the mapping between AI Core operator and task, and key operator information.
+- Training Trace Parser: parses the `training_trace.46.dev.profiler_default_tag` file to analyze the time at each training stage.
+
+### Analyser
+
+#### Description
+Analyser is used to filter, sort, query, and paginate the intermediate results generated at the parsing stage.
+
+#### Design
+
+This module parses the intermediate files generated by Parser, provides a general API for upper-layer data analysis, and returns the analyzed data to the upper layer for display. The various intermediate files have common points that can be abstracted. Therefore, the following figure shows the design of the Analyser class.
+
+
+
+Figure 5 Analyser class
+
+As shown in the preceding figure, multiple Analysers are implemented for the different contents to be queried. Filter, sorting, and pagination conditions can be defined for each Analyser. Each Analyser knows which intermediate files are required to merge, filter, and sort data. Analyser is associated with Parser only through the intermediate files generated by Parser, and no functions are called directly. In this way, Analyser and Parser are decoupled.
+
+Currently, there are two Analysers for operator information:
+
+- AicoreTypeAnalyser: filters the average information of each operator type.
+- AicoreDetailAnalyser: filters the detailed average information of each operator.
+
+To hide the internal implementation of Analyser and facilitate calling, the simple factory mode is used to obtain the specified Analyser through AnalyserFactory.
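+
+An illustrative sketch of this simple factory pattern follows; apart from the two Analysers named above, the class and method names are hypothetical rather than the actual MindInsight code:
+
+```python
+# A generic simple-factory sketch: callers obtain an Analyser by a type string
+# instead of importing concrete classes. Method names are illustrative.
+class BaseAnalyser:
+    def query(self, condition):
+        raise NotImplementedError
+
+class AicoreTypeAnalyser(BaseAnalyser):
+    def query(self, condition):        # filter/sort/paginate type-level stats
+        return {"analyser": "aicore_type", "condition": condition}
+
+class AicoreDetailAnalyser(BaseAnalyser):
+    def query(self, condition):        # filter/sort/paginate per-operator stats
+        return {"analyser": "aicore_detail", "condition": condition}
+
+class AnalyserFactory:
+    _registry = {"aicore_type": AicoreTypeAnalyser,
+                 "aicore_detail": AicoreDetailAnalyser}
+
+    @classmethod
+    def get_analyser(cls, analyser_type):
+        return cls._registry[analyser_type]()   # hide the concrete classes
+
+analyser = AnalyserFactory.get_analyser("aicore_type")
+result = analyser.query({"sort_name": "execution_time", "limit": 10})
+```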
+
+
+### Proposer
+#### Description
+Proposer is a Profiler performance optimization suggestion module. Proposer calls the Analyser module to obtain performance data, analyzes the performance data based on optimization rules, and displays optimization suggestions for users through the UI and API.
+
+#### Design
+
+Modules are classified into the following layers:
+
+
+
+Figure 6 Proposer module
+
+As shown in the preceding figure:
+
+- Proposer provides APIs that the upper-layer API and RESTful call to obtain optimization suggestions.
+- Proposer calls the Analyser API to obtain performance data and derives optimization suggestions based on optimization rules.
+- Proposer calls Analyser factory to obtain the Analyser object.
+
+You can call the query API of the Analyser object to obtain information, including the top N AICore, AICoreType, and AICpu operators sorted by time, and the time information of each training trace stage.
+
+The following figure shows the module class design:
+
+
+
+Figure 7 Proposer class
+
+As shown in the preceding figure:
+
+- Proposers of various types inherit the abstract class Proposer and implement the analyze method.
+- API and CLI call the ProposerFactory to obtain the Proposer and call the Proposer.analyze function to obtain the optimization suggestions of each type of Proposer.
\ No newline at end of file
diff --git a/docs/source_en/design/mindinsight/tensor_visual_design.md b/docs/source_en/design/mindinsight/tensor_visual_design.md
index ce3839d5b9affa269ccf802cf10a697412a82b78..f142f425963bb10ede2b84144bcaa6e4bcb6403d 100644
--- a/docs/source_en/design/mindinsight/tensor_visual_design.md
+++ b/docs/source_en/design/mindinsight/tensor_visual_design.md
@@ -44,7 +44,7 @@ Figure 1: Table view
Figure 1 displays tensors recorded by a user in a form of a table. The following functions are included:
-- The input boxes under the table display the tensor data of the current dimension. The colon (:) indicates all values of the current dimension. You can enter the corresponding index in the box (the meaning is the same as that of the Python index, and negative values are supported) or use `:` to query tensor data in a specific dimension.
+- The input boxes under the table display the tensor data of the current dimension. The colon (`:`) indicates an index range of the current dimension, with basically the same meaning as a Python index: used alone, it indicates all values of the current dimension, while `2:5` indicates the values with indexes from 2 to 5 (excluding 5). You can enter a specific index in the box or use an index range containing `:` to query tensor data in a specific dimension.
- Drag the thumb of the linear slider below the table to query the tensor data of a specific step.

diff --git a/docs/source_en/design/mindspore/distributed_training_design.md b/docs/source_en/design/mindspore/distributed_training_design.md
new file mode 100644
index 0000000000000000000000000000000000000000..14c13e4f3e90e4ee08a8acd14d95f9f7e604220f
--- /dev/null
+++ b/docs/source_en/design/mindspore/distributed_training_design.md
@@ -0,0 +1,144 @@
+# Distributed Training Design
+
+`Linux` `Ascend` `GPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor`
+
+
+
+- [Distributed Training Design](#distributed-training-design)
+ - [Background](#background)
+ - [Concepts](#concepts)
+ - [Collective Communication](#collective-communication)
+ - [Synchronization Mode](#synchronization-mode)
+ - [Data Parallelism](#data-parallelism)
+ - [Principle of Data Parallelism](#principle-of-data-parallelism)
+ - [Data Parallel Code](#data-parallel-code)
+ - [Automatic Parallelism](#automatic-parallelism)
+ - [Principle of Automatic Parallelism](#principle-of-automatic-parallelism)
+ - [Automatic Parallel Code](#automatic-parallel-code)
+
+
+
+
+
+## Background
+
+With the rapid development of deep learning, the size of datasets and the number of parameters are growing exponentially to improve the accuracy and generalization capability of neural networks. Parallel distributed training has become a development trend to resolve the performance bottleneck of ultra-large-scale networks. MindSpore supports the mainstream distributed training paradigms and develops an automatic hybrid parallel solution. The following describes the design principles of several parallel training modes and provides guidance for users to perform custom development.
+
+
+## Concepts
+
+### Collective Communication
+
+Collective communication is defined as communication that involves a group of processes. All processes in the group send and receive data after meeting certain conditions. MindSpore implements data transmission during parallel training through collective communication. On Ascend chips, MindSpore depends on the Huawei Collective Communication Library (`HCCL`) to implement the task. On GPU, MindSpore depends on the NVIDIA Collective Communication Library (`NCCL`) to implement the task.
+
+### Synchronization Mode
+
+In synchronous mode, all devices start training at the same time and update parameter values synchronously after the backward propagation algorithm is executed. Currently, MindSpore uses the synchronous training mode.
+
+## Data Parallelism
+
+This section describes how the data parallel mode `ParallelMode.DATA_PARALLEL` works in MindSpore.
+
+### Principle of Data Parallelism
+
+
+
+1. Environment dependencies
+
+ Each time before parallel training starts, the `mindspore.communication.init` API is called to initialize communication resources and the global communication group `WORLD_COMM_GROUP` is automatically created.
+
+2. Data distribution
+
+ The key of data parallelism is to split datasets along the sample dimension and deliver the split datasets to the different devices. Each dataset loading API provided by the `mindspore.dataset` module has the `num_shards` and `shard_id` parameters, which are used to split a dataset into multiple shards, perform cyclic sampling, and collect one `batch` of data on each device (see the sketch after this list). When the data volume is insufficient, the sampling restarts from the beginning.
+
+3. Network structure
+
+ The scripting method of a data parallel network is the same as that of a standalone network. This is because, although the models on each device execute independently during the forward and backward propagation processes, they maintain the same network structure. To ensure synchronous training between devices, the initial values of the corresponding network parameters must be the same. You are advised to set the same random number seed on each device by using `numpy.random.seed`, which has the same effect as broadcasting the model.
+
+4. Gradient aggregation
+
+ Theoretically, the training effect of a data parallel network should be the same as that of the standalone network. To ensure the consistency of the calculation logic, the `AllReduce` operator is inserted after gradient calculation to implement gradient aggregation between devices. You can enable `mean` to average the summed gradient values, or regard `mean` as a hyperparameter; enabling `mean` is equivalent to scaling down the learning rate by the number of devices.
+
+5. Parameter update
+
+ Because the gradient aggregation operation is introduced, the models on each device perform parameter update with the same gradient values. Therefore, MindSpore implements a synchronous data parallel training mode. Theoretically, the models trained on each device are the same. If a reduce operation over samples is involved in the network, the network outputs may differ; this is determined by the sharded nature of data parallelism.
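+
+A minimal setup sketch for steps 1–3 above (assuming the `mindspore.communication` and `mindspore.dataset` APIs; the dataset path is a placeholder):
+
+```python
+# A sketch of the data-parallel environment and data distribution steps;
+# the MNIST path is a placeholder.
+import numpy as np
+import mindspore.dataset as ds
+from mindspore.communication import init, get_rank, get_group_size
+
+init()                                          # 1. create WORLD_COMM_GROUP
+rank, group_size = get_rank(), get_group_size()
+
+np.random.seed(1)                               # 3. same seed -> same initial weights
+dataset = ds.MnistDataset("/path/to/mnist",
+                          num_shards=group_size,  # 2. split along the sample dimension
+                          shard_id=rank)          #    one shard per device
+dataset = dataset.batch(32)
+```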
+
+### Data Parallel Code
+
+1. Collective communication
+
+ - [management.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/communication/management.py): This file covers the helper function APIs commonly used during the collective communication process, for example, the APIs for obtaining the cluster size and device ID. When collective communication is executed on the Ascend chip, the framework loads the `libhccl.so` library file in the environment and uses it to call the communication APIs from the Python layer to the underlying layer.
+ - [comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/operations/comm_ops.py): MindSpore encapsulates supported collective communication operations as operators and stores the operators in this file. The operators include `AllReduce`, `AllGather`, `ReduceScatter`, and `Broadcast`. `PrimitiveWithInfer` defines the attributes required by the operators, as well as the `shape` and `dtype` inference methods from the input to the output during graph composition.
+
+2. Gradient aggregation
+
+ - [grad_reducer.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/nn/wrap/grad_reducer.py): This file implements the gradient aggregation process. After the input parameter `grads` is expanded by using `HyperMap`, the `AllReduce` operator is inserted, using the global communication group. You can also perform custom development based on your network requirements by referring to this file, as shown in the sketch below. In MindSpore, standalone and distributed execution share a set of network encapsulation APIs. In the `Cell`, `ParallelMode` is used to determine whether to perform gradient aggregation. For details about the network encapsulation APIs, see the `TrainOneStepCell` code implementation.
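+
+For reference, a sketch of applying gradient aggregation in a custom training step, patterned after `TrainOneStepCell` (`network` returns the loss, e.g. an `nn.WithLossCell`, and `optimizer` is a placeholder such as Momentum; `ops.depend` is assumed to be available to enforce execution order):
+
+```python
+# A sketch of custom gradient aggregation with DistributedGradReducer;
+# network and optimizer are placeholders supplied by the caller.
+import mindspore.nn as nn
+import mindspore.ops as ops
+
+class TrainStep(nn.Cell):
+    def __init__(self, network, optimizer):
+        super().__init__()
+        self.network = network
+        self.optimizer = optimizer
+        self.weights = optimizer.parameters
+        self.grad = ops.GradOperation(get_by_list=True)
+        # Insert AllReduce-based aggregation over the global communication group.
+        self.grad_reducer = nn.DistributedGradReducer(optimizer.parameters,
+                                                      mean=True)
+
+    def construct(self, data, label):
+        loss = self.network(data, label)
+        grads = self.grad(self.network, self.weights)(data, label)
+        grads = self.grad_reducer(grads)   # aggregate gradients across devices
+        loss = ops.depend(loss, self.optimizer(grads))
+        return loss
+```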
+
+
+## Automatic Parallelism
+
+As a key feature of MindSpore, automatic parallelism is used to implement hybrid parallel training that combines automatic data parallelism and model parallelism. It aims to help users express the parallel algorithm logic using standalone scripts, reduce the difficulty of distributed training, improve the algorithm R&D efficiency, and maintain the high performance of training. This section describes how the automatic parallel mode `ParallelMode.AUTO_PARALLEL` and semi-automatic parallel mode `ParallelMode.SEMI_AUTO_PARALLEL` work in MindSpore.
+
+### Principle of Automatic Parallelism
+
+
+
+1. Distributed operator and tensor layout
+
+ As shown in the preceding figure, the automatic parallel process traverses the standalone forward ANF graph and models tensor sharding at the granularity of distributed operators, indicating how the input and output tensors of an operator are distributed to the devices of the cluster, that is, the tensor layout. Users do not need to know which device runs which slice of the model; the framework automatically schedules and allocates the model slices.
+
+ To obtain the tensor layout model, each operator has a shard strategy, which indicates the shard status of each input of the operator in the corresponding dimension. Generally, a tensor can be sharded in any dimension as long as the shard number is a power of 2 and the even distribution principle is met. The following figure shows an example of the three-dimensional `BatchMatmul` operation. Its parallel strategy consists of two tuples, indicating the sharding of `input` and `weight`, respectively. Elements in a tuple correspond to tensor dimensions one by one: `2^N` indicates the number of shards, and `1` indicates that the dimension is not sharded. If you want to express a data-parallel shard strategy, that is, only the `batch` dimension of `input` is sharded and the other dimensions are not, you can use `strategy=((2^N, 1, 1),(1, 1, 1))`. If you want to express a model-parallel shard strategy, that is, only a non-`batch` dimension of `weight` is sharded, for example the `channel` dimension, you can use `strategy=((1, 1, 1),(1, 1, 2^N))`. If you want to express a hybrid-parallel shard strategy, one possibility is `strategy=((2^N, 1, 1),(1, 1, 2^N))` (see the sketch after this list).
+
+ 
+
+ Based on the shard strategy of an operator, the framework automatically derives the distribution model of the input and output tensors of the operator. This distribution model consists of `device_matrix`, `tensor_shape`, and `tensor_map`, which indicate the device matrix shape, the tensor shape, and the mapping between devices and tensor dimensions, respectively. Based on the tensor layout model, the distributed operator determines whether to insert extra computation and communication operations in the graph to ensure that the operator computing logic is correct.
+
+2. Tensor Redistribution
+
+ When the output tensor layout of an operator is inconsistent with the input tensor layout of the next operator, computation and communication operations need to be introduced to implement the change between tensor layouts. The automatic parallel process introduces the tensor redistribution algorithm, which can derive the communication conversion operations between arbitrary tensor layouts. The following three examples represent a parallel computing process of the formula `Z=(X×W)×V`, that is, a `MatMul` operation of two two-dimensional matrices, and show how conversion is performed between different parallel modes.
+
+ In example 1, the output of the first data-parallel matrix multiplication is sharded in the row direction, and the input of the second model-parallel matrix multiplication requires the full tensor. The framework automatically inserts the `AllGather` operator to implement redistribution.
+
+ 
+
+ In example 2, the output of the first model-parallel matrix multiplication is sharded in the column direction, and the input of the second model-parallel matrix multiplication is sharded in the row direction. The framework automatically inserts a communication operator equivalent to the `AlltoAll` operation in collective communication to implement redistribution.
+
+ 
+
+ In example 3, the output shard mode of the first hybrid-parallel matrix multiplication is the same as the input shard mode of the second hybrid-parallel matrix multiplication, so redistribution does not need to be introduced. However, in the second matrix multiplication, the related dimensions of the two inputs are sharded, so the `AllReduce` operator needs to be inserted to ensure the correctness of the operation.
+
+ 
+
+ In general, this distributed representation breaks the boundary between data parallelism and model parallelism, making it easy to implement hybrid parallelism. From the perspective of scripts, users only need to construct a standalone network to express the parallel algorithm logic, and the framework automatically shards the entire graph.
+
+3. Efficient parallel strategy search algorithm
+
+ The `SEMI_AUTO_PARALLEL` semi-automatic parallel mode indicates that you manually configure the parallel strategies of operators when you are familiar with the operator sharding representation. This mode is helpful for manual optimization but requires some debugging effort: you need to master the parallel principles and obtain a high-performance parallel solution based on the network structure and cluster topology. To further help users accelerate parallel network training, the automatic parallel mode `AUTO_PARALLEL` introduces automatic search of the parallel strategy on top of the semi-automatic parallel mode. Automatic parallelism builds cost models based on the hardware platform and calculates the computation cost, memory cost, and communication cost of a certain amount of data and specific operators under different parallel strategies. Then, by using the dynamic programming algorithm or the recursive programming algorithm, with the memory upper limit of a single device as a constraint, a parallel strategy with optimal performance is efficiently searched out.
+
+ Strategy search replaces manual model sharding and provides a high-performance sharding solution within a short period of time, greatly reducing the threshold for parallel training.
+
+
+4. Convenient distributed automatic differentiation
+
+ In addition to forward network communication, the traditional manual model sharding needs to consider backward parallel computing. MindSpore encapsulates communication operations into operators and automatically generates backward propagation of communication operators based on the original automatic differentiation operations of the framework. Therefore, even during distributed training, users only need to pay attention to the forward propagation of the network to implement actual automatic parallel training.
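+
+A sketch of manually configuring a shard strategy in `SEMI_AUTO_PARALLEL` mode, matching the strategy tuples discussed in item 1 above (assuming the `shard` interface of MindSpore primitives; the device number 8 and the two-dimensional shapes are illustrative):
+
+```python
+# A sketch of a data-parallel shard strategy for a 2-D MatMul on 8 devices:
+# ((8, 1), (1, 1)) shards the batch dimension of the input and keeps the
+# weight unsharded. Replacing it with ((1, 1), (1, 8)) would shard the
+# weight's output channel instead -- a model-parallel strategy.
+import mindspore.nn as nn
+import mindspore.ops as ops
+
+class ShardedMatMul(nn.Cell):
+    def __init__(self):
+        super().__init__()
+        self.matmul = ops.MatMul().shard(((8, 1), (1, 1)))
+
+    def construct(self, x, w):
+        return self.matmul(x, w)
+```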
+
+### Automatic Parallel Code
+
+1. Tensor layout model
+ - [tensor_layout](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/tensor_layout): This directory contains the definitions and implementation of functions related to the tensor distribution model. `tensor_layout.h` declares the member variables `tensor_map_origin_`, `tensor_shape_`, and `device_arrangement_` required by a tensor distribution model. `tensor_redistribution.h` declares the methods that implement the transformation between the `from_origin_` and `to_origin_` tensor distributions. The deduced redistribution operations are stored in `operator_list_` and returned; in addition, the communication cost `comm_cost_`, memory cost `memory_cost_`, and computation cost `computation_cost_` required for redistribution are calculated.
+
+2. Distributed operators
+ - [ops_info](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/ops_info): This directory contains the implementation of distributed operators. `operator_info.h` defines `OperatorInfo`, the base class of distributed operator implementations. A distributed operator to be developed shall inherit this base class and implement the related virtual functions. The `InferTensorInfo`, `InferTensorMap`, and `InferDevMatrixShape` functions define the algorithms for deriving the input and output tensor distribution models of the operator. The `InferForwardCommunication` and `InferMirrorOps` functions define the extra computation and communication operations to be inserted for operator sharding. The `CheckStrategy` and `GenerateStrategies` functions define parallel strategy validation and generation for the operator. The `SetCostUnderStrategy` function generates the parallel cost `operator_cost_` of the distributed operator under a given parallel strategy.
+
+3. Strategy search algorithm
+ - [auto_parallel](https://gitee.com/mindspore/mindspore/tree/master/mindspore/ccsrc/frontend/parallel/auto_parallel): The shard strategy search algorithm is implemented in this directory. `graph_costmodel.h` defines the graph composition information: each vertex is an operator `OperatorInfo`, and the directed edge `edge_costmodel.h` indicates the input-output relationship between operators and the redistribution cost. `operator_costmodel.h` defines the cost model of each operator, including the computation cost, communication cost, and memory cost. `dp_algorithm_costmodel.h` describes the main process of the dynamic programming algorithm, which consists of a series of graph operations. `costmodel.h` defines the data structures of cost and graph operations.
+
+4. Device management
+ - [device_manager.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/device_manager.h): This file is used to create and manage cluster device communication groups. The device matrix model is defined by `device_matrix.h`, and the communication domain is managed by `group_manager.h`.
+
+5. Entire graph sharding
+ - [step_auto_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_auto_parallel.h) and [step_parallel.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/frontend/parallel/step_parallel.h): The two files contain the core implementation of the automatic parallel process. `step_auto_parallel.h` calls the strategy search process and generates the `OperatorInfo` of the distributed operators. Then `step_parallel.h` handles processes such as operator sharding and tensor redistribution to reconstruct the standalone computing graph in distributed mode.
+
+
+6. Backward propagation of communication operators
+ - [grad_comm_ops.py](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ops/_grad/grad_comm_ops.py): This file defines the backward propagation of communication operators, such as `AllReduce` and `AllGather`.
diff --git a/docs/source_en/design/mindspore/images/auto_parallel.png b/docs/source_en/design/mindspore/images/auto_parallel.png
new file mode 100644
index 0000000000000000000000000000000000000000..800b3b2536c739dcc48a1e46b5f65fc327e4ce8d
Binary files /dev/null and b/docs/source_en/design/mindspore/images/auto_parallel.png differ
diff --git a/docs/source_en/design/mindspore/images/data_parallel.png b/docs/source_en/design/mindspore/images/data_parallel.png
new file mode 100644
index 0000000000000000000000000000000000000000..a92c82aa64615b398e83b9bc2cf0aa2c5db9f904
Binary files /dev/null and b/docs/source_en/design/mindspore/images/data_parallel.png differ
diff --git a/docs/source_en/design/mindspore/images/operator_split.png b/docs/source_en/design/mindspore/images/operator_split.png
new file mode 100644
index 0000000000000000000000000000000000000000..4063170990c6816884361f195db5851cfbdf932e
Binary files /dev/null and b/docs/source_en/design/mindspore/images/operator_split.png differ
diff --git a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png b/docs/source_en/design/mindspore/images/tensor_redistribution.png
similarity index 100%
rename from docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png
rename to docs/source_en/design/mindspore/images/tensor_redistribution.png
diff --git a/docs/source_en/design/mindspore/images/tensor_redistribution1.png b/docs/source_en/design/mindspore/images/tensor_redistribution1.png
new file mode 100644
index 0000000000000000000000000000000000000000..ed4d79416a0a07f8d75e738aa544d214834ae778
Binary files /dev/null and b/docs/source_en/design/mindspore/images/tensor_redistribution1.png differ
diff --git a/docs/source_en/design/mindspore/images/tensor_redistribution2.png b/docs/source_en/design/mindspore/images/tensor_redistribution2.png
new file mode 100644
index 0000000000000000000000000000000000000000..114f984c66ae578722dbcdbb59ab03c44dbcb097
Binary files /dev/null and b/docs/source_en/design/mindspore/images/tensor_redistribution2.png differ
diff --git a/docs/source_en/design/mindspore/images/tensor_redistribution3.png b/docs/source_en/design/mindspore/images/tensor_redistribution3.png
new file mode 100644
index 0000000000000000000000000000000000000000..dd66c9120615f50f2b3f60cfe139954cb4adf307
Binary files /dev/null and b/docs/source_en/design/mindspore/images/tensor_redistribution3.png differ
diff --git a/docs/source_en/design/mindspore/ir.md b/docs/source_en/design/mindspore/ir.md
index 4837ba94baccb0f15638d6bb744ec13f9035bb1b..98743518453e919a3b70d280ef5e72f1f34b9a25 100644
--- a/docs/source_en/design/mindspore/ir.md
+++ b/docs/source_en/design/mindspore/ir.md
@@ -1,7 +1,7 @@
# MindSpore IR (MindIR)
-`Framework Development` `Intermediate` `Expert` `Contributor`
+`Linux` `Windows` `Framework Development` `Intermediate` `Expert` `Contributor`
diff --git a/docs/source_en/glossary.md b/docs/source_en/glossary.md
index ae1fb21e9168f00bd574fdef787b2a7b3a86f831..3f08ac2a4124b14bf6551de670ec44f8eddaffcf 100644
--- a/docs/source_en/glossary.md
+++ b/docs/source_en/glossary.md
@@ -32,9 +32,10 @@
| LSTM | Long short-term memory, an artificial recurrent neural network (RNN) architecture used for processing and predicting an important event with a long interval and delay in a time sequence. |
| Manifest | A data format file. Huawei ModelArt adopts this format. For details, see . |
| ME | Mind Expression, MindSpore frontend, which is used to compile tasks from user source code to computational graphs, control execution during training, maintain contexts (in non-sink mode), and dynamically generate graphs (in PyNative mode). |
-| MindArmour | MindSpore security component, which is used for AI adversarial example management, AI model attack defense and enhancement, and AI model robustness evaluation. |
+| MindArmour | The security module of MindSpore, which improves the confidentiality, integrity, and availability of models through technical means such as differential privacy and adversarial attack and defense. MindArmour prevents attackers from maliciously modifying a model or cracking its internal components to steal its parameters. |
| MindData | MindSpore data framework, which provides data loading, enhancement, dataset management, and visualization. |
| MindInsight | MindSpore visualization component, which visualizes information such as scalars, images, computational graphs, and model hyperparameters. |
+| MindRecord | A data format defined by MindSpore, together with the module for reading, writing, searching, and converting datasets in the MindSpore format. |
| MindSpore | Huawei-leaded open-source deep learning framework. |
| MindSpore Lite | A lightweight deep neural network inference engine that provides the inference function for models trained by MindSpore on the device side. |
| MNIST database | Modified National Handwriting of Images and Technology database, a large handwritten digit database, which is usually used to train various image processing systems. |
@@ -43,5 +44,5 @@
| ResNet-50 | Residual Neural Network 50, a residual neural network proposed by four Chinese people, including Kaiming He from Microsoft Research Institute. |
| Schema | Data set structure definition file, which defines the fields contained in a dataset and the field types. |
| Summary | An operator that monitors the values of tensors on the network. It is a peripheral operation in the figure and does not affect the data flow. |
-| TBE | Tensor Boost Engine, an operator development tool that is extended based on the Tensor Virtual Machine (TVM) framework. |
+| TBE | Tensor Boost Engine, an NPU operator development tool developed by Huawei and extended from the TVM (Tensor Virtual Machine) framework. It provides a set of Python APIs for developing custom operators. |
| TFRecord | Data format defined by TensorFlow. |
diff --git a/docs/source_en/network_list.md b/docs/source_en/network_list.md
index 897111be5078687a3c4b4671c0c9f05904226128..fcad7fc7f16e7d5edc291cb3c801fe403e8e1bef 100644
--- a/docs/source_en/network_list.md
+++ b/docs/source_en/network_list.md
@@ -6,7 +6,6 @@
- [Network List](#network-list)
- [Model Zoo](#model-zoo)
- - [Pre-trained Models](#pre-trained-models)
@@ -14,47 +13,33 @@
## Model Zoo
-| Domain | Sub Domain | Network | Ascend | GPU | CPU
-|:------ |:------| :----------- |:------ |:------ |:-----
-|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing
-|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
-|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing
-|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Mobile Image Classification Image Classification Semantic Tegmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Mobile Image Classification Image Classification Semantic Tegmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
-|Computer Vision (CV) | Targets Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported |Doing | Doing
-| Computer Vision (CV) | Targets Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
-| Computer Vision(CV) | Targets Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
-| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
-| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
-| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
-| Graph Neural Networks(GNN)| Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
-| Graph Neural Networks(GNN)| Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
+| Domain | Sub Domain | Network | Ascend(Graph) | Ascend(PyNative) | GPU(Graph) | GPU(PyNative) | CPU(Graph)
+|:------ |:------| :----------- |:------ |:------ |:------ |:------ |:-----
+|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported
+| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing
+|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing
+|Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Doing |Doing | Doing | Doing
+|Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Mobile Image Classification, Image Classification, Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Mobile Image Classification, Image Classification, Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing
+| Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Doing | Supported | Supported | Doing
+| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported
+| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing
+| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Doing | Doing
+| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing | Doing | Doing
> You can also use [MindWizard Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate classic network scripts.
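+
+The Graph/PyNative columns in the table above refer to MindSpore's two execution modes, selected per backend before a network script is run. As a minimal sketch (not part of this changeset, and independent of any particular network above), the mode and device target are chosen once via `context.set_context`:
+
+```python
+# Minimal sketch: selecting the execution mode and backend that the
+# table columns above refer to. GRAPH_MODE compiles the network ahead
+# of execution; PYNATIVE_MODE runs operators eagerly.
+from mindspore import context
+
+# e.g. the "GPU(PyNative)" column corresponds to:
+context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU")
+
+# and the "Ascend(Graph)" column to:
+# context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
+```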
-
-## Pre-trained Models
-*It refers to the released MindSpore version. The hardware platforms that support model training are CPU, GPU and Ascend. As shown in the table below, ✓ indicates that the pre-trained model run on the selected platform.
-
-| Domain | Sub Domain| Network | Dataset | CPU | GPU | Ascend | 0.5.0-beta*
-|:------ |:------ | :------- |:------ |:------ |:------ |:----- |:-----
-|Computer Vision (CV) | Image Classification| [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | CIFAR-10| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/alexnet/alexnet_ascend_0.5.0_cifar10_official_classification_20200716.tar.gz)
-|Computer Vision (CV) | Image Classification| [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py)| MNIST | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/lenet/lenet_ascend_0.5.0_mnist_official_classification_20200716.tar.gz)
-|Computer Vision (CV) | Image Classification| [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py)| CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/vgg/vgg16_ascend_0.5.0_cifar10_official_classification_20200715.tar.gz)
-|Computer Vision (CV) | Image Classification| [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | CIFAR-10| | | ✓ |[Download](http://download.mindspore.cn/model_zoo/official/cv/resnet/resnet50_v1.5_ascend_0.3.0_cifar10_official_classification_20200718.tar.gz)
-|Computer Vision (CV) | Targets Detection| [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | COCO 2014| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/yolo/yolov3_darknet53_ascend_0.5.0_coco2014_official_object_detection_20200717.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [BERT_Base](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | zhwiki | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_base_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [BERT_NEZHA](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py)| zhwiki| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_nezha_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding| [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py)| WMT English-German| | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/transformer/transformer_ascend_0.5.0_wmtende_official_machine_translation_20200713.tar.gz)
diff --git a/docs/source_en/operator_list.md b/docs/source_en/operator_list.md
index 672de46b5ab7e69e5c8743b03fa3cfd323d899d7..470d6a650efb713e6ac6867ec884d2f2906b7422 100644
--- a/docs/source_en/operator_list.md
+++ b/docs/source_en/operator_list.md
@@ -37,7 +37,7 @@
| [mindspore.nn.Flatten](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Supported |layer/basic
| [mindspore.nn.Dense](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dense) |Supported | Supported | Supported |layer/basic
| [mindspore.nn.ClipByNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ClipByNorm) |Supported | Supported | Doing |layer/basic
-| [mindspore.nn.Norm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Doing | Supported | Doing |layer/basic
+| [mindspore.nn.Norm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Supported | Supported | Doing |layer/basic
| [mindspore.nn.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.OneHot) | Supported | Supported | Supported |layer/basic
| [mindspore.nn.Range](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Range) | Supported | Doing | Doing |layer/basic
| [mindspore.nn.SequentialCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SequentialCell) |Supported | Supported | Doing |layer/container
@@ -65,11 +65,21 @@
| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) | Supported | Supported | Doing |layer/pooling
| [mindspore.nn.DenseBnAct](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseBnAct) |Supported | Doing | Doing |layer/quant
| [mindspore.nn.Conv2dBnAct](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnAct) | Supported | Supported | Doing |layer/quant
+| [mindspore.nn.FakeQuantWithMinMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FakeQuantWithMinMax) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnFoldQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnWithoutFoldQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnWithoutFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.DenseQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.ActQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ActQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.LeakyReLUQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLUQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSwishQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSwishQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSigmoidQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSigmoidQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.TensorAddQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TensorAddQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.MulQuant](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MulQuant) | Supported | Supported | Supported |layer/quant
| [mindspore.nn.L1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Supported |Supported | Doing |loss/loss
| [mindspore.nn.MSELoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MSELoss) | Supported |Doing | Doing |loss/loss
| [mindspore.nn.SmoothL1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SmoothL1Loss) |Supported |Doing | Doing |loss/loss
| [mindspore.nn.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyWithLogits) | Supported | Supported | Supported |loss/loss
-| [mindspore.nn.SoftmaxCrossEntropyExpand](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyExpand) | Supported |Supported | Doing |loss/loss
| [mindspore.nn.CosineEmbeddingLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.CosineEmbeddingLoss) |Supported |Supported | Doing |loss/loss
| [mindspore.nn.ProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ProximalAdagrad) | Supported | Doing | Doing |optim/ProximalAdagrad
| [mindspore.nn.LazyAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LazyAdam) | Supported | Doing | Doing |optim/lazyadam
@@ -84,300 +94,305 @@
| [mindspore.nn.WithLossCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithLossCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.WithGradCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithGradCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.TrainOneStepCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.DataWrapper](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DataWrapper) |Doing | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.GetNextSingleOp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GetNextSingleOp) |Doing | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.WithEvalCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithEvalCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.ParameterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ParameterUpdate) | Supported |Doing | Doing |wrap/cell_wrapper
| [mindspore.nn.DistributedGradReducer](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DistributedGradReducer) | Supported |Doing | Doing |wrap/grad_reducer
-| [mindspore.nn.DynamicLossScaleUpdateCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DynamicLossScaleUpdateCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.FixedLossScaleUpdateCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FixedLossScaleUpdateCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.TrainOneStepWithLossScaleCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepWithLossScaleCell) | Doing |Doing | Doing |wrap/loss_scale
+| [mindspore.nn.DynamicLossScaleUpdateCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DynamicLossScaleUpdateCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.FixedLossScaleUpdateCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FixedLossScaleUpdateCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.TrainOneStepWithLossScaleCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepWithLossScaleCell) | Supported |Supported | Doing |wrap/loss_scale
| [mindspore.nn.Cell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell) | Supported | Supported | Supported |cell
+| [mindspore.nn.EmbeddingLookup](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.EmbeddingLookup) |Supported | Supported | Supported |layer/embedding
+| [mindspore.nn.Pad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Pad) |Supported | Supported | Doing |layer/basic
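+
+The newly listed `mindspore.nn.Pad` layer can be exercised with a short, self-contained sketch (not part of this changeset; shapes and values are illustrative only):
+
+```python
+# Minimal sketch: zero-padding a 2-D tensor with mindspore.nn.Pad.
+import numpy as np
+from mindspore import Tensor, nn
+
+# Pad one row on top/bottom and two columns on left/right.
+pad = nn.Pad(paddings=((1, 1), (2, 2)), mode="CONSTANT")
+x = Tensor(np.ones((2, 3), dtype=np.float32))
+y = pad(x)
+print(y.shape)  # (4, 7)
+```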
## mindspore.ops.operations
| Operation | Ascend | GPU | CPU |Operator Type
| :----------- |:------ |:------ |:-----|:---
-| [mindspore.ops.operations.Flatten](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Flatten) | Supported | Supported |Supported | nn_ops
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Acosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Acosh) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Elu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Elu) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Unpack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Unpack) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.L2Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Loss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.CTCLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CTCLoss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.RNNTLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RNNTLoss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Softplus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softplus) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ReLU6](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU6) | Supported | Supported |Supported | nn_ops
-| [mindspore.ops.operations.HSwish](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSwish) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.HSigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSigmoid) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.BatchNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchNorm) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.LRN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LRN) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.Conv2D](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2D) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNative](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNative) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNativeiBackpropFilter](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropFilter) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.MaxPoolWithArgmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MaxPoolWithArgmax) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.MaxPool](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MaxPool) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.AvgPool](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AvgPool) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Conv2DBackpropInput](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2DBackpropInput) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.TopK](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TopK) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits) | Doing | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ApplyMomentum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyMomentum) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ApplyAddSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAddSign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyPowerSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyPowerSign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyGradientDescent) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyProximalGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalGradientDescent) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyRMSProp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyRMSProp) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.ApplyCenteredRMSProp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyCenteredRMSProp) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyAdagradV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagradV2) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseProximalAdagrad) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.ApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FusedSparseLazyAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseLazyAdam) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.FusedSparseAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseAdam) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.SmoothL1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SmoothL1Loss) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.SGD](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SGD) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.LayerNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LayerNorm) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ResizeBilinear](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeBilinear) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.GetNext](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GetNext) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LSTM](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LSTM) | Doing | Supported | Supported | nn_ops
-| [mindspore.ops.operations.BasicLSTMCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BasicLSTMCell) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Pad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pad) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.ROIAlign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ROIAlign) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Adam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Adam) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.BinaryCrossEntropy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BinaryCrossEntropy) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.KLDivLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.KLDivLoss) | Doing | Supported | Doing | nn_ops
-| [mindspore.ops.operations.LARSUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LARSUpdate) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Softsign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softsign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.AssignAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignAdd) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceAll](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceAll) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.ReduceProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceProd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.CumProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CumProd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.CumSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CumSum) | Supported | Supported| Doing | math_ops
-| [mindspore.ops.operations.AddN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AddN) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.SquareSumAll](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquareSumAll) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Rsqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rsqrt) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Reciprocal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reciprocal) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Log1p](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log1p) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.DivNoNan](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DivNoNan) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Floor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Floor) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.EqualCount](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EqualCount) | Doing | Supported | Supported | math_ops
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.GreaterEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GreaterEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Less](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Less) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Atan2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Atan2) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.LessEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LessEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.BitwiseAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseAnd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BitwiseOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseOr) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Ceil](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Ceil) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Inv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Inv) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Invert](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Invert) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BitwiseXor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseXor) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUAllocFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUAllocFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUGetFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUGetFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUClearFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUClearFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.FloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloatStatus) | Doing | Supported | Doing | math_ops
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Cosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cosh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BesselI0e](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BesselI0e) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BesselI1e](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BesselI1e) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.TruncateDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateDiv) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.TruncateMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateMod) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Tan](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tan) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Asin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Asin) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Asinh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Asinh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Erf](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Erf) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Erfc](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Erfc) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sin) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sinh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sinh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Expm1](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Expm1) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NMSWithMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NMSWithMask) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Abs](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Abs) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Sign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sign) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Round](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Round) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ApproximateEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApproximateEqual) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.InplaceAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceAdd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.InplaceSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceSub) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Mod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mod) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.DType](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DType) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.SameTypeShape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SameTypeShape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.IsSubClass](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsSubClass) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.IsInstance](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsInstance) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Shape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Shape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Split](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Split) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.Rank](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rank) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.TruncatedNormal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncatedNormal) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.Size](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Size) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Fill](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Fill) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.OnesLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OnesLike) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ZerosLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ZerosLike) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.TupleToArray](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TupleToArray) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScalarToArray](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarToArray) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScalarToTensor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarToTensor) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.InvertPermutation](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InvertPermutation) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Argmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Argmax) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Argmin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Argmin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentSum) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentMin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentProd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ParallelConcat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ParallelConcat) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Slice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Slice) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Select](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Select) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.DiagPart](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DiagPart) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.Eye](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Eye) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ResizeNearestNeighbor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeNearestNeighbor) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyFtrl) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.FusedSparseFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseFtrl) | Doing | Doing | Supported | array_ops
-| [mindspore.ops.operations.SparseApplyFtrlV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrlV2) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate) | Supported | Doing | Supported | array_ops
-| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMul) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterDiv) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToDepth](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToDepth) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.DepthToSpace](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthToSpace) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToBatch](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToBatch) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToBatchND](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToBatchND) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.BatchToSpace](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchToSpace) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.BatchToSpaceND](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchToSpaceND) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.IsFinite](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsFinite) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.InplaceUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceUpdate) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterSub) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMax) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdAdd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdSub) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNonAliasingAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNonAliasingAdd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Rint](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rint) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ReverseV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReverseV2) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ReduceOp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceOp) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.AllReduce](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AllReduce) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.AllGather](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AllGather) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.ReduceScatter](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceScatter) | Doing | Supported | Doing | comm_ops
-| [mindspore.ops.operations.Broadcast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Broadcast) | Supported | Doing | Doing | comm_ops
-| [mindspore.ops.operations.ControlDepend](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ControlDepend) | Supported | Supported | Supported | control_ops
-| [mindspore.ops.operations.GeSwitch](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GeSwitch) | Doing | Doing | Doing | control_ops
-| [mindspore.ops.operations.Merge](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Merge) | Doing | Doing | Doing | control_ops
-| [mindspore.ops.operations.ScalarSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.ImageSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ImageSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.TensorSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.HistogramSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.InsertGradientOf](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InsertGradientOf) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.Print](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Print) | Supported | Doing | Doing | debug_ops
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.BoundingBoxEncode](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BoundingBoxEncode) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.BoundingBoxDecode](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BoundingBoxDecode) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.PopulationCount](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PopulationCount) | Supported | Doing | Doing | other_ops
-| [mindspore.ops.operations.CheckValid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CheckValid) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.IOU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IOU) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.MakeRefKey](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MakeRefKey) | Supported | Supported | Supported | other_ops
-| [mindspore.ops.operations.InTopK](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InTopK) | Supported | Doing | Doing | other_ops
-| [mindspore.ops.operations.StandardNormal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StandardNormal) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.Gamma](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gamma) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.Poisson](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Poisson) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.UniformInt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UniformInt) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.UniformReal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UniformReal) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.RandomChoiceWithMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RandomChoiceWithMask) | Doing| Supported | Doing | random_ops
-| [mindspore.ops.operations.RandomCategorical](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RandomCategorical) | Supported| Doing | Doing | random_ops
-| [mindspore.ops.operations.ScalarCast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarCast) | Supported | Supported | Supported | inner_ops
-| [mindspore.ops.operations.ReverseSequence](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReverseSequence) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.CropAndResize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CropAndResize) | Supported | Doing | Doing | image_ops
-| [mindspore.ops.operations.SquaredDifference](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquaredDifference) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Xdivy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xdivy) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Xlogy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xlogy) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.HistogramFixedWidth](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Flatten](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Flatten) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Softmax) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Acosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Acosh) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorMod) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Elu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Elu) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.MirrorPad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MirrorPad) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Unpack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Unpack) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Pack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pack) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.L2Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.L2Loss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.CTCLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CTCLoss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.RNNTLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RNNTLoss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogSoftmax) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Softplus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Softplus) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReLU) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.ReLU6](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReLU6) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.HSwish](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.HSwish) | Doing | Supported | Doing | nn_ops
+| [mindspore.ops.HSigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.HSigmoid) | Doing | Supported | Doing | nn_ops
+| [mindspore.ops.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sigmoid) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tanh) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.BatchNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchNorm) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LRN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LRN) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Conv2D](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Conv2D) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.DepthwiseConv2dNative](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNative) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNativeBackpropInput) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DepthwiseConv2dNativeBackpropFilter](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNativeBackpropFilter) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.MaxPoolWithArgmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MaxPoolWithArgmax) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.MaxPool](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MaxPool) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.AvgPool](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AvgPool) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Conv2DBackpropInput](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Conv2DBackpropInput) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BiasAdd) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.TopK](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TopK) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SoftmaxCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SparseSoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseSoftmaxCrossEntropyWithLogits) | Doing | Supported | Supported | nn_ops
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyMomentum) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAddSign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyPowerSign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyGradientDescent) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalGradientDescent) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyRMSProp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyRMSProp) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.ApplyCenteredRMSProp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyCenteredRMSProp) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagradV2) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseProximalAdagrad) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseLazyAdam) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseAdam) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.SmoothL1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SmoothL1Loss) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SGD](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SGD) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.LayerNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LayerNorm) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.L2Normalize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.L2Normalize) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DropoutGenMask) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DropoutDoMask) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ResizeBilinear](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ResizeBilinear) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.OneHot) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Gelu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Gelu) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.GetNext](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GetNext) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.PReLU) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LSTM](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LSTM) | Doing | Supported | Supported | nn_ops
+| [mindspore.ops.BasicLSTMCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BasicLSTMCell) | Doing | Doing | Doing | nn_ops
+| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Pad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pad) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.ROIAlign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ROIAlign) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Adam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Adam) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.BinaryCrossEntropy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BinaryCrossEntropy) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.KLDivLoss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.KLDivLoss) | Doing | Supported | Doing | nn_ops
+| [mindspore.ops.LARSUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LARSUpdate) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Softsign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Softsign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.AssignAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignAdd) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ReduceMean](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMean) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceSum) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceAll](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceAll) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ReduceMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMax) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMin) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.ReduceProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceProd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.CumProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CumProd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.MatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MatMul) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchMatMul) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.CumSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CumSum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.AddN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AddN) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Neg](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Neg) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sub) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mul) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Square) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.SquareSumAll](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SquareSumAll) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Rsqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Rsqrt) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Reciprocal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Reciprocal) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pow) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Exp) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Log](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Log) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Log1p](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Log1p) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Minimum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Maximum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RealDiv) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Div) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DivNoNan) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Floor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Floor) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Equal) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.EqualCount](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.EqualCount) | Doing | Supported | Supported | math_ops
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Greater) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GreaterEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Less](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Less) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Atan2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Atan2) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LessEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseAnd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseOr) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Ceil](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Ceil) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Inv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Inv) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Invert](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Invert) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseXor) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUAllocFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NPUAllocFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUGetFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NPUGetFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUClearFloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NPUClearFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.FloatStatus](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloatStatus) | Doing | Supported | Doing | math_ops
+| [mindspore.ops.Cos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cos) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Cosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cosh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ACos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ACos) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BesselI0e](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BesselI0e) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BesselI1e](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BesselI1e) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncateDiv) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncateMod) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Tan](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tan) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Asin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Asin) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Asinh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Asinh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Erf](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Erf) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Erfc](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Erfc) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sin) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sinh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sinh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Expm1](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Expm1) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NMSWithMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NMSWithMask) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Abs](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Abs) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Sign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sign) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Round](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Round) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApproximateEqual) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.InplaceAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InplaceAdd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.InplaceSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InplaceSub) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Mod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mod) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.DType](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DType) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.SameTypeShape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SameTypeShape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cast) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.IsSubClass](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.IsSubClass) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.IsInstance](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.IsInstance) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Shape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Shape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Squeeze](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Squeeze) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.Transpose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Transpose) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Split](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Split) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.Rank](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Rank) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.TruncatedNormal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncatedNormal) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.Size](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Size) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Fill](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Fill) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.OnesLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.OnesLike) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ZerosLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ZerosLike) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.TupleToArray](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TupleToArray) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScalarToArray](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScalarToArray) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScalarToTensor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScalarToTensor) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.InvertPermutation](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InvertPermutation) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Argmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Argmax) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Argmin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Argmin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ArgMaxWithValue) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ArgMinWithValue) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tile) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentSum) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentMin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentProd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentProd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Concat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Concat) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ParallelConcat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ParallelConcat) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Slice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Slice) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Select](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Select) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.StridedSlice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.StridedSlice) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Diag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Diag) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.DiagPart](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DiagPart) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.Eye](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Eye) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScatterNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNd) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ResizeNearestNeighbor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ResizeNearestNeighbor) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.GatherNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherNd) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyFtrl) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrl) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseFtrl) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrlV2) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdUpdate) | Supported | Doing | Supported | array_ops
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterUpdate) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMul) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterDiv) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToDepth](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SpaceToDepth) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.DepthToSpace](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DepthToSpace) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToBatch](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SpaceToBatch) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToBatchND](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SpaceToBatchND) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.BatchToSpace](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchToSpace) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.BatchToSpaceND](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchToSpaceND) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.IsFinite](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.IsFinite) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.InplaceUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InplaceUpdate) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterSub) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMax) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdAdd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdSub) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNonAliasingAdd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Rint](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Rint) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ReverseV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReverseV2) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ReduceOp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceOp) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.AllReduce](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AllReduce) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.AllGather](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AllGather) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.ReduceScatter](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceScatter) | Doing | Supported | Doing | comm_ops
+| [mindspore.ops.Broadcast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Broadcast) | Supported | Doing | Doing | comm_ops
+| [mindspore.ops.ControlDepend](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ControlDepend) | Supported | Supported | Supported | control_ops
+| [mindspore.ops.GeSwitch](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GeSwitch) | Doing | Doing | Doing | control_ops
+| [mindspore.ops.Merge](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Merge) | Doing | Doing | Doing | control_ops
+| [mindspore.ops.ScalarSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScalarSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.ImageSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ImageSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.TensorSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.HistogramSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.HistogramSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.InsertGradientOf](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InsertGradientOf) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.Print](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Print) | Supported | Doing | Doing | debug_ops
+| [mindspore.ops.Assign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Assign) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.BoundingBoxEncode](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BoundingBoxEncode) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.BoundingBoxDecode](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BoundingBoxDecode) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.PopulationCount](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.PopulationCount) | Supported | Doing | Doing | other_ops
+| [mindspore.ops.CheckValid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CheckValid) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.IOU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.IOU) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.MakeRefKey](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MakeRefKey) | Supported | Supported | Supported | other_ops
+| [mindspore.ops.InTopK](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InTopK) | Supported | Doing | Doing | other_ops
+| [mindspore.ops.StandardNormal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.StandardNormal) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.Gamma](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Gamma) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.Poisson](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Poisson) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.UniformInt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UniformInt) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.UniformReal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UniformReal) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.RandomChoiceWithMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RandomChoiceWithMask) | Doing | Supported | Doing | random_ops
+| [mindspore.ops.RandomCategorical](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RandomCategorical) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.ScalarCast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScalarCast) | Supported | Supported | Supported | inner_ops
+| [mindspore.ops.ReverseSequence](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReverseSequence) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.CropAndResize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CropAndResize) | Supported | Doing | Doing | image_ops
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SquaredDifference) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Xdivy) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Xlogy) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.HistogramFixedWidth](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Eps](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Eps) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.ReLUV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReLUV2) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.BNTrainingReduce](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BNTrainingReduce) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.BNTrainingUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BNTrainingUpdate) | Supported | Doing | Doing | nn_ops
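
For orientation, here is a minimal sketch of how an operator from the table above is used. It assumes a MindSpore installation whose configured device target implements the operator; a "Doing" cell means the operator is not yet available on that backend.

```python
import numpy as np
import mindspore.ops as ops
from mindspore import Tensor

# ReLU is listed above as Supported on all backends, so this sketch
# should run regardless of the configured device target.
relu = ops.ReLU()
x = Tensor(np.array([[-1.0, 2.0], [3.0, -4.0]], dtype=np.float32))
output = relu(x)  # negative entries are clamped to zero
print(output)
```

Which support column applies is determined by the device target configured for the process, for example via `context.set_context(device_target="GPU")`.
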
## mindspore.ops.functional
| Operation | functional Operation
| :----------- | :-----------
-| [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | pack
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | tensor_add
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | assign_sub
-| [mindspore.ops.operations.AddN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AddN) | addn
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | square
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | sqrt
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | equal
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | not_equal
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | logical_not
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd) | logical_and
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr) | logical_or
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | expand_dims
-| [mindspore.ops.operations.DType](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DType) | dtype
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | cast
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | reshape
-| [mindspore.ops.operations.Shape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Shape) | shape
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | gather
-| [mindspore.ops.operations.Rank](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rank) | rank
-| [mindspore.ops.operations.Size](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Size) | size
-| [mindspore.ops.operations.Fill](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Fill) | fill
-| [mindspore.ops.operations.OnesLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OnesLike) | ones_like
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | tile
-| [mindspore.ops.operations.Select](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Select) | select
-| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | scatter_nd
-| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | gather_nd
-| [mindspore.ops.operations.ControlDepend](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ControlDepend) | control_depend
-| [mindspore.ops.operations.Print](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Print) | print
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign) | assign
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | tensor_pow
+| [mindspore.ops.Pack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pack) | pack
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | tensor_add
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | assign_sub
+| [mindspore.ops.AddN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AddN) | addn
+| [mindspore.ops.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Square) | square
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | sqrt
+| [mindspore.ops.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Equal) | equal
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | not_equal
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | logical_not
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd) | logical_and
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr) | logical_or
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | expand_dims
+| [mindspore.ops.DType](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DType) | dtype
+| [mindspore.ops.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cast) | cast
+| [mindspore.ops.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | reshape
+| [mindspore.ops.Shape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Shape) | shape
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | gather
+| [mindspore.ops.Rank](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Rank) | rank
+| [mindspore.ops.Size](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Size) | size
+| [mindspore.ops.Fill](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Fill) | fill
+| [mindspore.ops.OnesLike](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.OnesLike) | ones_like
+| [mindspore.ops.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tile) | tile
+| [mindspore.ops.Select](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Select) | select
+| [mindspore.ops.ScatterNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNd) | scatter_nd
+| [mindspore.ops.GatherNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherNd) | gather_nd
+| [mindspore.ops.ControlDepend](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ControlDepend) | control_depend
+| [mindspore.ops.Print](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Print) | print
+| [mindspore.ops.Assign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Assign) | assign
+| [mindspore.ops.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pow) | tensor_pow
> At present, functional supports only some operators without attributes; coverage will be extended in the future.
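To make the mapping concrete, here is a minimal usage sketch of the functional forms listed above (the tensor values are illustrative):

```python
import numpy as np
from mindspore.ops import functional as F
from mindspore import Tensor

x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
y = Tensor(np.array([4.0, 5.0, 6.0]).astype(np.float32))

# tensor_add is the functional form of the TensorAdd primitive, so it can be
# called directly without constructing an operator instance first.
print(F.tensor_add(x, y))  # [5. 7. 9.]
print(F.shape(x))          # (3,)
```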
@@ -385,62 +400,61 @@
| op name | constraints
| :----------- | :-----------
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | None
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | None
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | None
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | None
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | None
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic.
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | The logits can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic.
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | None
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | None
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | None
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | None
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | None
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | None
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | None
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | None
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | None
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | None
-| [mindspore.ops.operations.Dropout](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Dropout) | Repeated calculation is not supported.
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | None
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | None
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | None
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | None
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | None
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | None
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | None
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | None
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | None
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | None
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | None
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | None
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | None
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | None
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | None
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic.
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Need to be used in conjunction with `DropoutDoMask`.
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Need to be used in conjunction with `DropoutGenMask`,configuring shard strategy is not supported.
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Only support 1-dim and 2-dim parameters and the last dimension of the input_params should be 32-byte aligned; Scalar input_indices is not supported; Repeated calculation is not supported when the parameters are split in the dimension of the axis; Split input_indices and input_params at the same time is not supported.
-| [mindspore.ops.operations.SparseGatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseGatherV2) | The same as GatherV2.
-| [mindspore.ops.operations.EmbeddingLookup](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EmbeddingLookup) | The same as GatherV2.
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | The input_x can't be split into the dimension of axis, otherwise it's inconsistent with the single machine in the mathematical logic.
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | The last dimension of logits and labels can't be splited; Only supports using output[0].
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | `transpose_a=True` is not supported.
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | `transpore_a=True` is not supported.
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | The shard strategy in channel dimension of input_x should be consistent with weight.
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Only support 1-dim indices.
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | None
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | None
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | None
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | The output index can't be used as the input of other operators.
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | The output index can't be used as the input of other operators.
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | None
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Configuring shard strategy is not supported.
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Only support mask with all 0 values; The dimension needs to be split should be all extracted; Split is not supported when the strides of dimension is 1.
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Only support configuring shard strategy for multiples.
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | None
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Configuring shard strategy is not supported.
+| [mindspore.ops.ACos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ACos) | None
+| [mindspore.ops.Cos](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cos) | None
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | None
+| [mindspore.ops.Log](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Log) | None
+| [mindspore.ops.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Exp) | None
+| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogSoftmax) | The logits can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result.
+| [mindspore.ops.Softmax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Softmax) | The logits can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result.
+| [mindspore.ops.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tanh) | None
+| [mindspore.ops.Gelu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Gelu) | None
+| [mindspore.ops.ReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReLU) | None
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | None
+| [mindspore.ops.Cast](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cast) | None
+| [mindspore.ops.Neg](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Neg) | None
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | None
+| [mindspore.ops.Squeeze](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Squeeze) | None
+| [mindspore.ops.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Square) | None
+| [mindspore.ops.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sigmoid) | None
+| [mindspore.ops.Dropout](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Dropout) | Repeated calculation is not supported.
+| [mindspore.ops.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Div) | None
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | None
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RealDiv) | None
+| [mindspore.ops.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mul) | None
+| [mindspore.ops.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sub) | None
+| [mindspore.ops.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pow) | None
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv) | None
+| [mindspore.ops.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Greater) | None
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | None
+| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SigmoidCrossEntropyWithLogits) | None
+| [mindspore.ops.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Equal) | None
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | None
+| [mindspore.ops.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Maximum) | None
+| [mindspore.ops.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Minimum) | None
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BiasAdd) | None
+| [mindspore.ops.Concat](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Concat) | The input_x can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result.
+| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DropoutGenMask) | Need to be used in conjunction with `DropoutDoMask`.
+| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DropoutDoMask) | Need to be used in conjunction with `DropoutGenMask`; configuring the shard strategy is not supported.
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | Only 1-dim and 2-dim parameters are supported, and the last dimension of input_params should be 32-byte aligned; scalar input_indices is not supported; repeated calculation is not supported when the parameters are split along the axis dimension; splitting input_indices and input_params at the same time is not supported.
+| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseGatherV2) | The same as GatherV2.
+| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.EmbeddingLookup) | The same as GatherV2.
+| [mindspore.ops.L2Normalize](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.L2Normalize) | The input_x can't be split along the axis dimension; otherwise, the result is mathematically inconsistent with the single-machine result.
+| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SoftmaxCrossEntropyWithLogits) | The last dimension of logits and labels can't be split; only using output[0] is supported.
+| [mindspore.ops.MatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MatMul) | `transpose_a=True` is not supported.
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchMatMul) | `transpose_a=True` is not supported.
+| [mindspore.ops.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.PReLU) | When the shape of weight is not [1], the shard strategy of input_x in the channel dimension should be consistent with that of weight.
+| [mindspore.ops.OneHot](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.OneHot) | Only 1-dim indices are supported. The shard strategy must be configured for the output and for the first and second inputs.
+| [mindspore.ops.ReduceSum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceSum) | None
+| [mindspore.ops.ReduceMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMax) | When input_x is split along the axis dimension, the distributed result may be inconsistent with the single-machine result.
+| [mindspore.ops.ReduceMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMin) | When input_x is split along the axis dimension, the distributed result may be inconsistent with the single-machine result.
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ArgMinWithValue) | The output index can't be used as the input of other operators. When input_x is split along the axis dimension, the distributed result may be inconsistent with the single-machine result.
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ArgMaxWithValue) | The output index can't be used as the input of other operators. When input_x is split along the axis dimension, the distributed result may be inconsistent with the single-machine result.
+| [mindspore.ops.ReduceMean](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMean) | None
+| [mindspore.ops.Reshape](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | Configuring shard strategy is not supported.
+| [mindspore.ops.StridedSlice](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.StridedSlice) | Only masks with all-0 values are supported; the dimensions to be split must be fully extracted; split is not supported when the stride of a dimension is 1.
+| [mindspore.ops.Tile](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tile) | Only configuring the shard strategy for multiples is supported.
+| [mindspore.ops.Transpose](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Transpose) | None
> Repeated calculation means that the devices are not fully used. For example, if a cluster has 8 devices running distributed training but the splitting strategy cuts the input into only 4 copies, repeated calculation occurs.
>
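As a hedged illustration of how a shard strategy is attached to one of the operators above (assuming semi-auto parallel mode and the `shard` interface of primitives, whose availability varied across MindSpore versions), the following sketch splits a `MatMul` across 8 devices while respecting the `transpose_a=True` restriction:

```python
import mindspore.ops as ops
from mindspore import context
from mindspore.context import ParallelMode

# Each primitive can carry its own strategy in semi-auto parallel mode.
context.set_auto_parallel_context(parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL,
                                  device_num=8)

# Split the first input by row (2 slices) and the second by column (4 slices);
# 2 * 4 = 8 matches the device number. transpose_a stays False, as required.
matmul = ops.MatMul().shard(((2, 1), (1, 4)))
```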
@@ -470,66 +484,66 @@ when the Tensor of int8 and uint8 data types are operated, they are converted to
| op name
| :-----------
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign)
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub)
-| [mindspore.ops.operations.ApplyMomentum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyMomentum)
-| [mindspore.ops.operations.FusedSparseAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseAdam)
-| [mindspore.ops.operations.FusedSparseLazyAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseLazyAdam)
-| [mindspore.ops.operations.FusedSparseFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseFtrl)
-| [mindspore.ops.operations.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseProximalAdagrad)
-| [mindspore.ops.operations.ApplyAdaMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdaMax)
-| [mindspore.ops.operations.ApplyAdadelta](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdadelta)
-| [mindspore.ops.operations.ApplyAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdagrad)
-| [mindspore.ops.operations.ApplyAdagradV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdagradV2)
-| [mindspore.ops.operations.SparseApplyAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagrad)
-| [mindspore.ops.operations.SparseApplyAdagradV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagradV2)
-| [mindspore.ops.operations.ApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalAdagrad)
-| [mindspore.ops.operations.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyProximalAdagrad)
-| [mindspore.ops.operations.ApplyAddSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAddSign)
-| [mindspore.ops.operations.ApplyPowerSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyPowerSign)
-| [mindspore.ops.operations.ApplyGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyGradientDescent)
-| [mindspore.ops.operations.ApplyProximalGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalGradientDescent)
-| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl)
-| [mindspore.ops.operations.SparseApplyFtrlV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrlV2)
-| [mindspore.ops.operations.BitwiseAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseAnd)
-| [mindspore.ops.operations.BitwiseOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseOr)
-| [mindspore.ops.operations.BitwiseXor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseXor)
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd)
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub)
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul)
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow)
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum)
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum)
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv)
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div)
-| [mindspore.ops.operations.DivNoNan](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DivNoNan)
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv)
-| [mindspore.ops.operations.TruncateDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateDiv)
-| [mindspore.ops.operations.TruncateMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateMod)
-| [mindspore.ops.operations.Mod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mod)
-| [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod)
-| [mindspore.ops.operations.Atan2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Atan2)
-| [mindspore.ops.operations.SquaredDifference](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquaredDifference)
-| [mindspore.ops.operations.Xdivy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xdivy)
-| [mindspore.ops.operations.Xlogy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xlogy)
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal)
-| [mindspore.ops.operations.ApproximateEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApproximateEqual)
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual)
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater)
-| [mindspore.ops.operations.GreaterEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GreaterEqual)
-| [mindspore.ops.operations.Less](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Less)
-| [mindspore.ops.operations.LessEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LessEqual)
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd)
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr)
-| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate)
-| [mindspore.ops.operations.ScatterNdAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdAdd)
-| [mindspore.ops.operations.ScatterNdSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdSub)
-| [mindspore.ops.operations.ScatterNonAliasingAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNonAliasingAdd)
-| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate)
-| [mindspore.ops.operations.ScatterMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMax)
-| [mindspore.ops.operations.ScatterMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMin)
-| [mindspore.ops.operations.ScatterAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterAdd)
-| [mindspore.ops.operations.ScatterSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterSub)
-| [mindspore.ops.operations.ScatterMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMul)
-| [mindspore.ops.operations.ScatterDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterDiv)
+| [mindspore.ops.Assign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Assign)
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignSub)
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyMomentum)
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseAdam)
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseLazyAdam)
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseFtrl)
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseProximalAdagrad)
+| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdaMax)
+| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdadelta)
+| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdagrad)
+| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdagradV2)
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagrad)
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagradV2)
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalAdagrad)
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyProximalAdagrad)
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAddSign)
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyPowerSign)
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyGradientDescent)
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalGradientDescent)
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrl)
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrlV2)
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseAnd)
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseOr)
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseXor)
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd)
+| [mindspore.ops.Sub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sub)
+| [mindspore.ops.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mul)
+| [mindspore.ops.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pow)
+| [mindspore.ops.Minimum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Minimum)
+| [mindspore.ops.Maximum](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Maximum)
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RealDiv)
+| [mindspore.ops.Div](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Div)
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DivNoNan)
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv)
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncateDiv)
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncateMod)
+| [mindspore.ops.Mod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mod)
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorMod)
+| [mindspore.ops.Atan2](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Atan2)
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SquaredDifference)
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Xdivy)
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Xlogy)
+| [mindspore.ops.Equal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Equal)
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApproximateEqual)
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NotEqual)
+| [mindspore.ops.Greater](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Greater)
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GreaterEqual)
+| [mindspore.ops.Less](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Less)
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LessEqual)
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd)
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr)
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdUpdate)
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdAdd)
+| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdSub)
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNonAliasingAdd)
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterUpdate)
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMax)
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMin)
+| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterAdd)
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterSub)
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMul)
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterDiv)
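A minimal sketch of what implicit type conversion means for the operators listed above (the dtype behavior follows the standard promotion rules; values are illustrative):

```python
import numpy as np
import mindspore.ops as ops
from mindspore import Tensor

add = ops.TensorAdd()
x = Tensor(np.array([1, 2, 3]).astype(np.int32))
y = Tensor(np.array([0.5, 0.5, 0.5]).astype(np.float32))

# The int32 input is implicitly converted to float32 before the addition,
# so no manual Cast is needed and the result dtype is float32.
out = add(x, y)
print(out.dtype)  # Float32
```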
diff --git a/docs/source_zh_cn/_static/logo_source.png b/docs/source_zh_cn/_static/logo_source.png
index fc347d271abe082ae8d16242328551648766b6fb..880f2bc87172daf487654c0ba4f1657c672bd2b8 100644
Binary files a/docs/source_zh_cn/_static/logo_source.png and b/docs/source_zh_cn/_static/logo_source.png differ
diff --git a/docs/source_zh_cn/constraints_on_network_construction.md b/docs/source_zh_cn/constraints_on_network_construction.md
index 8b352b2625d65ebdb811e75a84caddc9a71f0b78..75deef1f664fdbe0a8befc405955d66d04935113 100644
--- a/docs/source_zh_cn/constraints_on_network_construction.md
+++ b/docs/source_zh_cn/constraints_on_network_construction.md
@@ -225,40 +225,67 @@ tuple also supports slicing operations, but the slice type cannot be Tensor; supp
| Member functions of a `Cell` instance | Other class member functions can be called in `construct` of the Cell.
| Functions | User-defined Python functions and the system functions listed above.
| dataclass instances | Classes decorated with @dataclass.
-| Primitive operators |[mindspore/ops/operations/*](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html)
-| Composite operators |[mindspore/ops/composite/*](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.composite.html)
+| Primitive operators |[mindspore/ops/operations/*](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html)
+| Composite operators |[mindspore/ops/composite/*](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html)
| constexpr-generated operators | Value computation operators generated with [@constexpr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.constexpr).
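For the last row, a minimal sketch of a @constexpr-generated value computation operator (`generate_ones` is a hypothetical name):

```python
import numpy as np
from mindspore import Tensor
from mindspore.ops import constexpr

@constexpr
def generate_ones(shape):
    # Runs at graph compilation time, so `shape` must be a constant here.
    return Tensor(np.ones(shape).astype(np.float32))
```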
### Other Constraints
-The parameters of the whole network's construct function and the parameters of functions decorated with the ms_function decorator are generalized during graph compilation and cannot be passed to operators as constant inputs. Therefore, in graph mode, the parameters of the entry network are restricted to the Tensor type, as shown in the following example:
-* An incorrect example:
- ```python
- class ExpandDimsTest(Cell):
+1. The parameters of the whole network's `construct` function and the parameters of functions decorated with the `ms_function` decorator are generalized during graph compilation and cannot be passed to operators as constant inputs. Therefore, in graph mode, the parameters of the entry network are restricted to the `Tensor` type, as shown in the following example:
+
+    * An incorrect example:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+
+ def construct(self, input_x, input_axis):
+ return self.expandDims(input_x, input_axis)
+ expand_dim = ExpandDimsTest()
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x, 0)
+ ```
+    In this example, `ExpandDimsTest` is a single-operator network with two inputs, `input_x` and `input_axis`. The second input of the `ExpandDims` operator must be a constant, because it is needed during graph compilation to infer the output dimension of `ExpandDims`, whereas `input_axis`, passed in as a network parameter, is generalized into a variable whose value cannot be determined, so the output dimension cannot be inferred and graph compilation fails. Any input whose value must be inferred at graph compilation time should therefore be a constant input. In the API documentation, such parameters of operators that require constant inputs are documented and annotated with "constant input is needed".
+
+    * The correct way is to fill in the required value, or a class member variable, directly for the operator's constant input inside the construct function, as follows:
+ ```python
+ class ExpandDimsTest(Cell):
+ def __init__(self, axis):
+ super(ExpandDimsTest, self).__init__()
+ self.expandDims = P.ExpandDims()
+ self.axis = axis
+
+ def construct(self, input_x):
+ return self.expandDims(input_x, self.axis)
+ axis = 0
+ expand_dim = ExpandDimsTest(axis)
+ input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
+ expand_dim(input_x)
+ ```
+
+2. Modifying non-`Parameter` data members of the network is not allowed. An example follows; a sketch of the disallowed write is given after the explanation:
+
+    ```python
+ class Net(Cell):
def __init__(self):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
-
- def construct(self, input_x, input_axis):
- return self.expandDims(input_x, input_axis)
- expand_dim = ExpandDimsTest()
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x, 0)
+ super(Net, self).__init__()
+ self.num = 2
+ self.par = Parameter(Tensor(np.ones((2, 3, 4))), name="par")
+
+ def construct(self, x, y):
+ return x + y
```
-    In this example, ExpandDimsTest is a single-operator network with two inputs, input_x and input_axis. The second input of the ExpandDims operator must be a constant, because it is needed during graph compilation to infer the output dimension of ExpandDims, whereas input_axis, passed in as a network parameter, is generalized into a variable whose value cannot be determined, so the output dimension cannot be inferred and graph compilation fails. Any input whose value must be inferred at graph compilation time should therefore be a constant input. In the API documentation, such parameters of operators that require constant inputs are documented and annotated with "constant input is needed".
+    In the network defined above, `self.num` is not a `Parameter` and cannot be modified, whereas `self.par` is a `Parameter` and can be modified.
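+    As an illustration, a minimal sketch of the disallowed write (the class name `BadNet` is hypothetical): assigning to a non-`Parameter` member inside `construct` is rejected at graph compilation time.
+
+    ```python
+    class BadNet(Cell):
+        def __init__(self):
+            super(BadNet, self).__init__()
+            self.num = 2
+
+        def construct(self, x):
+            # Rejected during graph compilation: self.num is not a Parameter.
+            self.num = 3
+            return x + self.num
+    ```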
-* The correct way is to fill in the required value, or a class member variable, directly for the operator's constant input inside the construct function, as follows:
- ```python
- class ExpandDimsTest(Cell):
- def __init__(self, axis):
- super(ExpandDimsTest, self).__init__()
- self.expandDims = P.ExpandDims()
- self.axis = axis
-
- def construct(self, input_x):
- return self.expandDims(input_x, self.axis)
- axis = 0
- expand_dim = ExpandDimsTest(axis)
- input_x = Tensor(np.random.randn(2,2,2,2).astype(np.float32))
- expand_dim(input_x)
+3. When an undefined class member is used in the `construct` function, it is treated as `None` instead of raising an `AttributeError` as the Python interpreter would. For example:
```
+ class Net(Cell):
+ def __init__(self):
+ super(Net, self).__init__()
+
+ def construct(self, x):
+ return x + self.y
+ ```
+    In the network defined above, `construct` uses the undefined class member `self.y`; in this case, `self.y` is treated as `None`.
+
diff --git a/docs/source_zh_cn/design/mindarmour/fuzzer_design.md b/docs/source_zh_cn/design/mindarmour/fuzzer_design.md
index 81a730c30a9ff3804fea82b355544b9894bfa8c1..129496aaa2b874ca4acab1ca69663b5312248500 100644
--- a/docs/source_zh_cn/design/mindarmour/fuzzer_design.md
+++ b/docs/source_zh_cn/design/mindarmour/fuzzer_design.md
@@ -6,8 +6,8 @@
- [AI Model Security Testing](#ai模型安全测试)
  - [Background](#背景)
-  - [Fuzzer Design Diagram](#Fuzzer设计图)
-  - [Fuzzer Process](#Fuzzer流程)
+  - [Fuzz Testing Design Diagram](#fuzz-testing设计图)
+  - [Fuzz Testing Process](#Fuzz-testing流程)
  - [Code Implementation](#代码实现)
  - [References](#参考文献)
@@ -17,9 +17,9 @@
## Background
-Different from [fuzz testing for traditional programs](https://zhuanlan.zhihu.com/p/43432370), MindArmour provides the AI model security testing module Fuzzer for deep neural networks. Based on the characteristics of neural networks, the concept of neuron coverage [1] is introduced as guidance for fuzzing: fuzzing generates samples in the direction of increasing neuron coverage, so that inputs activate more neurons and neuron values spread over a wider range, in order to fully test the DNN and explore different kinds of model outputs and erroneous model behaviors.
+Different from [fuzz testing for traditional programs](https://zhuanlan.zhihu.com/p/43432370), MindArmour provides the AI model security testing module fuzz_testing for deep neural networks. Based on the characteristics of neural networks, the concept of neuron coverage [1] is introduced as guidance for fuzzing: fuzzing generates samples in the direction of increasing neuron coverage, so that inputs activate more neurons and neuron values spread over a wider range, in order to fully test the DNN and explore different kinds of model outputs and erroneous model behaviors.
-## Fuzzer Design Diagram
+## Fuzz Testing Design Diagram
The design of AI model security testing is shown below.
@@ -27,7 +27,7 @@ The design of AI model security testing is shown below.
At the user interface layer, users need to provide the original dataset `DataSet`, the model under test `Model`, and the fuzzing parameter configuration `Fuzzer configuration`. After fuzz testing the model and data, the Fuzzer module returns a security report `Security Report`.
-The Fuzzer architecture consists of three main modules:
+The Fuzz Testing architecture consists of three main modules:
1. Natural Threat/Adversarial Example Generator (data mutation module):
@@ -43,17 +43,17 @@ The Fuzzer architecture consists of three main modules:
3. Evaluation (evaluation module):
-    Evaluates the effect of fuzzing, the quality of the generated data, and the strength of the mutation methods. Five metrics in three categories are supported: the general evaluation metric accuracy; the neuron coverage metrics kmnc, nbc, and snac; and the adversarial attack evaluation metric attack_success_rate.
+    Evaluates the effect of fuzz testing, the quality of the generated data, and the strength of the mutation methods. Five metrics in three categories are supported: the general evaluation metric accuracy; the neuron coverage metrics kmnc, nbc, and snac; and the adversarial attack evaluation metric attack_success_rate.
-## Fuzzer Process
+## Fuzz Testing Process

-The Fuzzer process is as follows:
+The fuzz testing process is as follows (a pseudocode sketch follows the list):
1. Select a seed A from the seed queue according to the strategy.
2. Randomly select mutation strategies and mutate seed A to generate multiple variants A1, A2, ...
-3. Use the target model to predict on the variants A1, A2, ... If a variant causes the target model to predict incorrectly, that variant goes into Failed tests.
+3. Use the target model to predict on the variants A1, A2, ... If a variant's semantics stay consistent with the seed, it goes into Fuzzed Tests.
4. If the target model's prediction for a variant is correct, analyze the variant with the neuron coverage metrics.
5. If a variant increases the coverage, put that variant into the seed queue for the next round of mutation.
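The loop below is a minimal pseudocode sketch of steps 1-5; `seed_queue`, `mutate`, `semantics_consistent`, and `coverage_increases` are assumed helpers, not MindArmour APIs:

```python
def fuzz(seed_queue, model, rounds):
    """A sketch of the fuzz testing loop described above."""
    fuzzed_tests = []
    for _ in range(rounds):
        seed = seed_queue.select()                        # 1. pick seed A
        for variant in mutate(seed):                      # 2. generate A1, A2, ...
            if not semantics_consistent(variant, seed):
                continue
            fuzzed_tests.append(variant)                  # 3. semantics kept -> Fuzzed Tests
            if model.predict(variant) == variant.label:   # 4. prediction is correct
                if coverage_increases(model, variant):    # 5. coverage grew
                    seed_queue.push(variant)              # reuse as a seed next round
    return fuzzed_tests
```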
diff --git a/docs/source_zh_cn/design/mindinsight/images/graph_visual_main.png b/docs/source_zh_cn/design/mindinsight/images/graph_visual_main.png
index 55ca7d7183c818a15b69a3a6ee2c4ef29655460c..0bc13636b5c84952978469c652c38500e6d34f43 100644
Binary files a/docs/source_zh_cn/design/mindinsight/images/graph_visual_main.png and b/docs/source_zh_cn/design/mindinsight/images/graph_visual_main.png differ
diff --git a/docs/source_zh_cn/design/mindinsight/images/graph_visual_right_side.png b/docs/source_zh_cn/design/mindinsight/images/graph_visual_right_side.png
index ea9515857e23d9a55ad56a88a4a21d232734ffb5..1cfab2911877ed6a51097f0e7bac880479143e26 100644
Binary files a/docs/source_zh_cn/design/mindinsight/images/graph_visual_right_side.png and b/docs/source_zh_cn/design/mindinsight/images/graph_visual_right_side.png differ
diff --git a/docs/source_zh_cn/design/mindinsight/images/tensor_table.png b/docs/source_zh_cn/design/mindinsight/images/tensor_table.png
index 725bd9f8481826d682b593c2224a766854e9b4f8..d04dffae59fd6f9e49aede94bae93f8b8621fcb0 100644
Binary files a/docs/source_zh_cn/design/mindinsight/images/tensor_table.png and b/docs/source_zh_cn/design/mindinsight/images/tensor_table.png differ
diff --git a/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md b/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md
index e3af486df552ed81c7f4e4b6fab8bf680c0b2687..e70db7a709cad527a0f67fed78f2b8d75d075250 100644
--- a/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md
+++ b/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md
@@ -44,7 +44,7 @@ Tensor visualization supports displaying tensors of 1-N dimensions as tables or histograms; for 0
Figure 1 displays the tensors recorded by the user in table form, with the following features:
-- The white boxes in the table show which dimension of the tensor data is currently displayed, where the colon `:` denotes all values of the current dimension. You can enter an index in a box (with the same meaning as Python indexing, negative values supported) or `:` to query tensor data of a specific dimension.
+- The white boxes in the table show which dimension of the tensor data is currently displayed, where the colon `:` denotes an index range of the current dimension, with essentially the same meaning as Python indexing: leaving the index unspecified selects all values of the current dimension, and `2:5` selects indices 2 through 5 (excluding 5). You can enter an index, or an index range containing `:`, in a box to query tensor data of a specific dimension.
- Drag the hollow circle below the table to query the tensor data of a specific step.

diff --git a/docs/source_zh_cn/design/mindspore/distributed_training_design.md b/docs/source_zh_cn/design/mindspore/distributed_training_design.md
index 3a73eb8ff66d49500707f959e32d87fb926bf85c..1273d4bfb43a7b7f0e56746cb206d7a24bb34692 100644
--- a/docs/source_zh_cn/design/mindspore/distributed_training_design.md
+++ b/docs/source_zh_cn/design/mindspore/distributed_training_design.md
@@ -47,19 +47,19 @@
每次开始进行并行训练前,通过调用`mindspore.communication.init`接口初始化通信资源,并自动创建全局通信组`WORLD_COMM_GROUP`。
2. Data distribution

   The core of data parallelism is splitting the dataset along the sample dimension and distributing the shards to the different devices. Every dataset loading interface in the `mindspore.dataset` module provides the `num_shards` and `shard_id` parameters, which split the dataset into multiple shards and sample it cyclically, collecting `batch`-sized data onto each device; when the data runs out, sampling restarts from the beginning.

3. Network construction

   A data parallel network is written no differently from a standalone network, because the models on the devices execute independently during forward and backward propagation, only keeping the same network structure. The one point to note is that, to keep training synchronized across devices, the initial values of the corresponding network parameters must be identical; it is recommended to set the same random seed on every device via `numpy.random.seed` to achieve the effect of broadcasting the model.

4. Gradient aggregation

   In theory, data parallel training should achieve the same result as standalone training. To keep the computation logic consistent, an `AllReduce` operator is inserted after gradient computation to aggregate the gradients across devices. A `mean` switch is provided so users can choose whether to average the summed gradient values; it can also be treated as a hyperparameter, since turning the switch on is equivalent to scaling down the learning rate.

5. Parameter update

   Because gradient aggregation is introduced, the models on all devices enter the parameter update step with the same gradient values, so what MindSpore implements is synchronous data parallel training. In theory, the model trained on each device ends up identical, although if the network contains reduction-type operations along the sample dimension the outputs may differ slightly; this is determined by the sharding nature of data parallelism. A minimal sketch of these steps follows the list.
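Below is a minimal end-to-end sketch of these steps, assuming only the interfaces named above (`mindspore.communication.init`, the `num_shards`/`shard_id` dataset parameters, `numpy.random.seed`); the dataset class, data path, and the commented parallel-context call are placeholders, and exact module paths and parameter names may vary across MindSpore versions.

```python
import numpy as np
import mindspore.dataset as ds
from mindspore.communication import init, get_rank, get_group_size

# 1. Collective communication: creates the global group WORLD_COMM_GROUP.
init()
rank, group_size = get_rank(), get_group_size()

# 2. Data distribution: shard the dataset along the sample dimension,
#    one shard per device; sampling wraps around when the data runs out.
dataset = ds.MnistDataset("./MNIST", num_shards=group_size, shard_id=rank)
dataset = dataset.batch(32, drop_remainder=True)

# 3. Network construction: identical seed on every device so the initial
#    parameter values match (the "model broadcast" effect described above).
np.random.seed(1)

# 4./5. Gradient aggregation (AllReduce) and the synchronous parameter
# update are inserted by the framework once data parallel mode is set, e.g.
#   context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
#                                     mirror_mean=True)  # the 'mean' switch
```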
diff --git a/docs/source_zh_cn/design/mindspore/images/data_parallel.png b/docs/source_zh_cn/design/mindspore/images/data_parallel.png
index a926948143fbdfbe323fe661672c0aad824459a0..a92c82aa64615b398e83b9bc2cf0aa2c5db9f904 100644
Binary files a/docs/source_zh_cn/design/mindspore/images/data_parallel.png and b/docs/source_zh_cn/design/mindspore/images/data_parallel.png differ
diff --git a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution1.png b/docs/source_zh_cn/design/mindspore/images/tensor_redistribution1.png
index 2220231387851241c5fa8d514aff00c0f4e3cc49..ed4d79416a0a07f8d75e738aa544d214834ae778 100644
Binary files a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution1.png and b/docs/source_zh_cn/design/mindspore/images/tensor_redistribution1.png differ
diff --git a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution2.png b/docs/source_zh_cn/design/mindspore/images/tensor_redistribution2.png
index 1261cdc28b2f8c3a6f0ccba9adb96e9d0fb5bcfa..114f984c66ae578722dbcdbb59ab03c44dbcb097 100644
Binary files a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution2.png and b/docs/source_zh_cn/design/mindspore/images/tensor_redistribution2.png differ
diff --git a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution3.png b/docs/source_zh_cn/design/mindspore/images/tensor_redistribution3.png
index 70eafae423d9836480a801b6519f85d892bbf19c..dd66c9120615f50f2b3f60cfe139954cb4adf307 100644
Binary files a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution3.png and b/docs/source_zh_cn/design/mindspore/images/tensor_redistribution3.png differ
diff --git a/docs/source_zh_cn/design/mindspore/ir.md b/docs/source_zh_cn/design/mindspore/ir.md
index 77bc45014d5301fa6f76a9505ce38a491c09b9b4..362544c7f113919404560d8b52b838d632eda6e5 100644
--- a/docs/source_zh_cn/design/mindspore/ir.md
+++ b/docs/source_zh_cn/design/mindspore/ir.md
@@ -1,6 +1,6 @@
# MindSpore IR(MindIR)
-`Linux` `Framework Development` `Intermediate` `Advanced` `Contributor`
+`Linux` `Windows` `Framework Development` `Intermediate` `Advanced` `Contributor`
diff --git a/docs/source_zh_cn/glossary.md b/docs/source_zh_cn/glossary.md
index 647c9076f97496a863d4f9ba88e06df4f2beb908..c5721652b0e701aa8e989eea346e307b2c58fd72 100644
--- a/docs/source_zh_cn/glossary.md
+++ b/docs/source_zh_cn/glossary.md
@@ -32,9 +32,10 @@
| LSTM | Long short-term memory. The corresponding network is a type of recurrent neural network suitable for processing and predicting important events with very long intervals and delays in a time series. |
| Manifest | A data format file adopted by Huawei ModelArts; see the ModelArts documentation for details. |
| ME | Mind Expression, the MindSpore frontend, which mainly compiles user source code into computational graphs, controls execution and maintains context during training (in non-sink mode), and supports dynamic graphs (PyNative mode). |
-| MindArmour | MindSpore security component for AI adversarial example management, AI model attack defense and enhancement, and AI model robustness evaluation. |
+| MindArmour | MindSpore security module that uses techniques such as differential privacy and adversarial attack and defense to improve the confidentiality, integrity, and availability of models, preventing attackers from maliciously modifying a model, cracking its internals, or stealing its parameters. |
| MindData | MindSpore data framework, which provides data loading, augmentation, dataset management, and visualization. |
| MindInsight | MindSpore visualization component, which visualizes scalars, images, computational graphs, model hyperparameters, and other information. |
+| MindRecord | A data format defined by MindSpore, and a module for reading, writing, searching, and converting datasets in the MindSpore format. |
| MindSpore | A Huawei-led open-source deep learning framework. |
| MindSpore Lite | A lightweight deep neural network inference engine that runs models trained by MindSpore on devices. |
| MNIST database | Modified National Institute of Standards and Technology database, a large database of handwritten digits commonly used to train various image processing systems. |
@@ -43,5 +44,5 @@
| ResNet-50 | Residual Neural Network 50, a residual neural network proposed by Kaiming He and three colleagues at Microsoft Research. |
| Schema | A dataset structure definition file that defines which fields a dataset contains and the type of each field. |
| Summary | An operator that monitors the values of tensors in the network. It is a "peripheral" operation in the graph and does not affect the data flow itself. |
-| TBE | Tensor Boost Engine, an operator development tool extended from the TVM (Tensor Virtual Machine) framework. |
+| TBE | Tensor Boost Engine, Huawei's in-house NPU operator development tool. Extended from the TVM (Tensor Virtual Machine) framework, it provides a set of Python APIs for developing custom operators. |
| TFRecord | A data format defined by TensorFlow. |
diff --git a/docs/source_zh_cn/network_list.md b/docs/source_zh_cn/network_list.md
index 351a5223c2083d20655f4eac118991dd08da7400..7364a1223b2bec9004bda1fcd75eae22c615475d 100644
--- a/docs/source_zh_cn/network_list.md
+++ b/docs/source_zh_cn/network_list.md
@@ -6,7 +6,6 @@
- [Supported Networks](#网络支持)
- [Model Zoo](#model-zoo)
- - [Pre-trained Models](#预训练模型)
@@ -14,47 +13,33 @@
## Model Zoo
-| Domain | Sub-domain | Network | Ascend | GPU | CPU
-|:---- |:------- |:---- |:---- |:---- |:----
-| Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Mobile Image Classification, Object Detection, Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Mobile Image Classification, Object Detection, Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
-| Computer Vision (CV) | Targets Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Targets Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
-| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
-| Recommender | Recommender System, CTR Prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
-| Recommender | Recommender System, Search Ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
-| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
-| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
+| Domain | Sub-domain | Network | Ascend (Graph) | Ascend (PyNative) | GPU (Graph) | GPU (PyNative) | CPU (Graph)
+|:---- |:------- |:---- |:---- |:---- |:---- |:---- |:----
+| Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/src/googlenet.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | Supported | Supported | Supported | Supported | Supported
+| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [SE-ResNet50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Image Classification | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Mobile Image Classification, Object Detection, Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Supported | Supported | Doing
+| Computer Vision (CV) | Mobile Image Classification, Object Detection, Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Doing | Supported | Supported | Doing
+| Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported | Supported | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing | Doing | Doing
+| Computer Vision (CV) | Object Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Doing | Supported | Supported | Doing
+| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/nets/deeplab_v3/deeplab_v3.py) | Supported | Supported | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | Supported | Supported | Supported | Supported | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Doing | Supported | Supported | Supported
+| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Supported | Doing | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Supported | Doing | Doing
+| Recommender | Recommender System, CTR Prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Supported | Doing | Doing
+| Recommender | Recommender System, Search Ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Supported | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing | Doing | Doing
> You can also use the [MindWizard tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/wizard/) to quickly generate classic network scripts.
-
-## Pre-trained Models
-* denotes the released MindSpore version number. The hardware platforms that support network training are CPU, GPU, and Ascend; a ✓ in the table below means the model was trained on the selected hardware platform.
-
-| Domain | Sub-domain | Network | Dataset | CPU | GPU | Ascend | 0.5.0-beta*
-|:---- |:----- |:---- |:---- |:---- |:---- |:---- |:------
-| Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/alexnet/src/alexnet.py) | CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/alexnet/alexnet_ascend_0.5.0_cifar10_official_classification_20200716.tar.gz)
-| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/src/lenet.py) | MNIST | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/lenet/lenet_ascend_0.5.0_mnist_official_classification_20200716.tar.gz)
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/vgg/vgg16_ascend_0.5.0_cifar10_official_classification_20200715.tar.gz)
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | CIFAR-10 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/resnet/resnet50_v1.5_ascend_0.3.0_cifar10_official_classification_20200718.tar.gz)
-| Computer Vision (CV) | Targets Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/yolov3_darknet53) | COCO 2014 | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/cv/yolo/yolov3_darknet53_ascend_0.5.0_coco2014_official_object_detection_20200717.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT_Base](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | zhwiki | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_base_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding | [BERT_NEZHA](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/bert/src/bert_model.py) | zhwiki | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/bert/bert_nezha_ascend_0.5.0_cn-wiki_official_nlp_20200720.tar.gz)
-| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | WMT English-German | | | ✓ | [Download](http://download.mindspore.cn/model_zoo/official/nlp/transformer/transformer_ascend_0.5.0_wmtende_official_machine_translation_20200713.tar.gz)
diff --git a/docs/source_zh_cn/operator_list.md b/docs/source_zh_cn/operator_list.md
index 016d4b5f8025ca4ffe97b5ff6a836b8c265c1f90..1bac1146d559988a5847e84b6909917586ce45f5 100644
--- a/docs/source_zh_cn/operator_list.md
+++ b/docs/source_zh_cn/operator_list.md
@@ -37,7 +37,7 @@
| [mindspore.nn.Flatten](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Supported |layer/basic
| [mindspore.nn.Dense](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dense) |Supported | Supported | Supported |layer/basic
| [mindspore.nn.ClipByNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ClipByNorm) |Supported | Supported | Doing |layer/basic
-| [mindspore.nn.Norm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Doing | Supported | Doing |layer/basic
+| [mindspore.nn.Norm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Norm) |Supported | Supported | Doing |layer/basic
| [mindspore.nn.OneHot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.OneHot) | Supported | Supported | Supported |layer/basic
| [mindspore.nn.Range](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Range) | Supported | Doing | Doing |layer/basic
| [mindspore.nn.SequentialCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SequentialCell) |Supported | Supported | Doing |layer/container
@@ -63,13 +63,23 @@
| [mindspore.nn.LinSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LinSpace) | Supported | Doing | Doing | layer/normalization
| [mindspore.nn.MaxPool2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MaxPool2d) | Supported | Supported | Supported |layer/pooling
| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) | Supported | Supported | Doing |layer/pooling
-| [mindspore.nn.DenseBnAct](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseBnAct) |Supported | Doing | Doing |layer/quant
-| [mindspore.nn.Conv2dBnAct](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnAct) | Supported | Supported | Doing |layer/quant
+| [mindspore.nn.DenseBnAct](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseBnAct) |Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnAct](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnAct) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.FakeQuantWithMinMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FakeQuantWithMinMax) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnFoldQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dBnWithoutFoldQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dBnWithoutFoldQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.Conv2dQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2dQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.DenseQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DenseQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.ActQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ActQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.LeakyReLUQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLUQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSwishQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSwishQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.HSigmoidQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.HSigmoidQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.TensorAddQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TensorAddQuant) | Supported | Supported | Supported |layer/quant
+| [mindspore.nn.MulQuant](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MulQuant) | Supported | Supported | Supported |layer/quant
| [mindspore.nn.L1Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Supported |Supported | Doing |loss/loss
| [mindspore.nn.MSELoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MSELoss) | Supported |Doing | Doing |loss/loss
| [mindspore.nn.SmoothL1Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SmoothL1Loss) | Supported |Doing | Doing |loss/loss
| [mindspore.nn.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyWithLogits) | Supported | Supported | Supported |loss/loss
-| [mindspore.nn.SoftmaxCrossEntropyExpand](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.SoftmaxCrossEntropyExpand) | Supported |Supported | Doing |loss/loss
| [mindspore.nn.CosineEmbeddingLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.CosineEmbeddingLoss) |Supported |Supported | Doing |loss/loss
| [mindspore.nn.ProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ProximalAdagrad) | Supported |Doing | Doing |optim/ProximalAdagrad
| [mindspore.nn.LazyAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LazyAdam) | Supported |Doing | Doing |optim/lazyadam
@@ -84,300 +94,305 @@
| [mindspore.nn.WithLossCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithLossCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.WithGradCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithGradCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.TrainOneStepCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepCell) | Supported | Supported | Doing |wrap/cell_wrapper
-| [mindspore.nn.DataWrapper](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DataWrapper) |Doing | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.GetNextSingleOp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GetNextSingleOp) |Doing | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.WithEvalCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.WithEvalCell) | Supported | Supported | Doing |wrap/cell_wrapper
| [mindspore.nn.ParameterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.ParameterUpdate) | Supported |Doing | Doing |wrap/cell_wrapper
| [mindspore.nn.DistributedGradReducer](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DistributedGradReducer) | Supported |Doing | Doing |wrap/grad_reducer
-| [mindspore.nn.DynamicLossScaleUpdateCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DynamicLossScaleUpdateCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.FixedLossScaleUpdateCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FixedLossScaleUpdateCell) | Doing |Doing | Doing |wrap/loss_scale
-| [mindspore.nn.TrainOneStepWithLossScaleCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepWithLossScaleCell) | Doing |Doing | Doing |wrap/loss_scale
+| [mindspore.nn.DynamicLossScaleUpdateCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.DynamicLossScaleUpdateCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.FixedLossScaleUpdateCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.FixedLossScaleUpdateCell) | Supported |Supported | Doing |wrap/loss_scale
+| [mindspore.nn.TrainOneStepWithLossScaleCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.TrainOneStepWithLossScaleCell) | Supported |Supported | Doing |wrap/loss_scale
| [mindspore.nn.Cell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Cell) | Supported | Supported | Supported |cell
+| [mindspore.nn.EmbeddingLookup](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.EmbeddingLookup) |Supported | Supported | Supported |layer/embedding
+| [mindspore.nn.Pad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Pad) |Supported | Supported | Doing |layer/basic
## mindspore.ops.operations
| Operation | Ascend | GPU | CPU | Operator Category
| :----------- |:------ |:------ |:-----|:---
-| [mindspore.ops.operations.Flatten](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Flatten) | Supported | Supported |Supported | nn_ops
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Acosh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Acosh) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Elu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Elu) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Unpack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Unpack) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | Supported| Doing | Doing | nn_ops
-| [mindspore.ops.operations.L2Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Loss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.CTCLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CTCLoss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.RNNTLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RNNTLoss) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Softplus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softplus) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ReLU6](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU6) | Supported | Supported |Supported | nn_ops
-| [mindspore.ops.operations.HSwish](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSwish) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.HSigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSigmoid) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.BatchNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchNorm) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.LRN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LRN) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.Conv2D](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2D) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNative](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNative) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropFilter](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropFilter) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.MaxPoolWithArgmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MaxPoolWithArgmax) | Supported | Doing |Doing | nn_ops
-| [mindspore.ops.operations.MaxPool](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MaxPool) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.AvgPool](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AvgPool) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Conv2DBackpropInput](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2DBackpropInput) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.TopK](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TopK) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | Supported | Supported |Doing | nn_ops
-| [mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseSoftmaxCrossEntropyWithLogits) | Doing | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ApplyMomentum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyMomentum) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.ApplyAddSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAddSign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyPowerSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyPowerSign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyGradientDescent) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyProximalGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalGradientDescent) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ApplyRMSProp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyRMSProp) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.ApplyCenteredRMSProp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyCenteredRMSProp) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagradV2) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseProximalAdagrad) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.ApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.FusedSparseLazyAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseLazyAdam) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.FusedSparseAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseAdam) | Doing | Doing | Supported | nn_ops
-| [mindspore.ops.operations.SmoothL1Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SmoothL1Loss) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.SGD](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SGD) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LayerNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LayerNorm) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.ResizeBilinear](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeBilinear) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Supported | Supported | Supported | nn_ops
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.GetNext](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GetNext) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.LSTM](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LSTM) | Doing | Supported | Supported | nn_ops
-| [mindspore.ops.operations.BasicLSTMCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BasicLSTMCell) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Pad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pad) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.ROIAlign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ROIAlign) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.Adam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Adam) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.BinaryCrossEntropy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BinaryCrossEntropy) | Supported | Supported | Doing | nn_ops
-| [mindspore.ops.operations.KLDivLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.KLDivLoss) | Doing | Supported | Doing | nn_ops
-| [mindspore.ops.operations.LARSUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LARSUpdate) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Softsign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softsign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.AssignAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignAdd) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceAll](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceAll) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.ReduceProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceProd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.CumProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CumProd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.CumSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CumSum) | Supported | Supported| Doing | math_ops
-| [mindspore.ops.operations.AddN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AddN) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | Supported | Supported | Supported | math_ops
-| [mindspore.ops.operations.SquareSumAll](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquareSumAll) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Rsqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rsqrt) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Reciprocal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reciprocal) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Log1p](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log1p) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.DivNoNan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DivNoNan) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Floor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Floor) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.EqualCount](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EqualCount) | Doing | Supported | Supported | math_ops
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.GreaterEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GreaterEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Less](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Less) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Atan2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Atan2) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.LessEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LessEqual) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.BitwiseAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseAnd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BitwiseOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseOr) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BitwiseXor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseXor) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Ceil](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Ceil) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Inv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Inv) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Invert](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Invert) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUAllocFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUAllocFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUGetFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUGetFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NPUClearFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NPUClearFloatStatus) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.FloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloatStatus) | Doing | Supported | Doing | math_ops
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Cosh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cosh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BesselI0e](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BesselI0e) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.BesselI1e](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BesselI1e) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.TruncateDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateDiv) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.TruncateMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateMod) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Tan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tan) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Asin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Asin) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Asinh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Asinh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Erf](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Erf) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Erfc](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Erfc) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sin) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Sinh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sinh) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Expm1](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Expm1) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.NMSWithMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NMSWithMask) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Abs](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Abs) | Supported | Supported | Doing | math_ops
-| [mindspore.ops.operations.Sign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sign) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Round](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Round) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ApproximateEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApproximateEqual) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.InplaceAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceAdd) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.InplaceSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceSub) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Mod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mod) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.DType](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DType) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.SameTypeShape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SameTypeShape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.IsSubClass](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsSubClass) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.IsInstance](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsInstance) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Shape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Shape) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Split](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Split) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.Rank](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rank) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.TruncatedNormal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncatedNormal) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.Size](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Size) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Fill](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Fill) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.OnesLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OnesLike) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ZerosLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ZerosLike) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.TupleToArray](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TupleToArray) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScalarToArray](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarToArray) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScalarToTensor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarToTensor) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.InvertPermutation](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InvertPermutation) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Argmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Argmax) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Argmin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Argmin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentSum) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentMin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.UnsortedSegmentProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UnsortedSegmentProd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ParallelConcat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ParallelConcat) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Slice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Slice) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Select](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Select) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.DiagPart](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DiagPart) | Doing | Doing | Doing | array_ops
-| [mindspore.ops.operations.Eye](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Eye) | Supported | Supported | Supported | array_ops
-| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ResizeNearestNeighbor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeNearestNeighbor) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.ApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyFtrl) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.FusedSparseFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseFtrl) | Doing | Doing | Supported | array_ops
-| [mindspore.ops.operations.SparseApplyFtrlV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrlV2) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate) | Supported | Doing | Supported | array_ops
-| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMul) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterDiv) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToDepth](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToDepth) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.DepthToSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthToSpace) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToBatch](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToBatch) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.SpaceToBatchND](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SpaceToBatchND) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.BatchToSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchToSpace) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.BatchToSpaceND](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchToSpaceND) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.IsFinite](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IsFinite) | Supported | Supported | Doing | array_ops
-| [mindspore.ops.operations.InplaceUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InplaceUpdate) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterSub) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMax) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMin) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdAdd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNdSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdSub) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ScatterNonAliasingAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNonAliasingAdd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.Rint](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rint) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ReverseV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReverseV2) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ReduceOp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceOp) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.AllReduce](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AllReduce) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.AllGather](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AllGather) | Supported | Supported | Doing | comm_ops
-| [mindspore.ops.operations.ReduceScatter](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceScatter) | Doing | Supported | Doing | comm_ops
-| [mindspore.ops.operations.Broadcast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Broadcast) | Supported | Doing | Doing | comm_ops
-| [mindspore.ops.operations.ControlDepend](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ControlDepend) | Supported | Supported | Supported | control_ops
-| [mindspore.ops.operations.GeSwitch](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GeSwitch) | Doing | Doing | Doing | control_ops
-| [mindspore.ops.operations.Merge](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Merge) | Doing | Doing | Doing | control_ops
-| [mindspore.ops.operations.ScalarSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.ImageSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ImageSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.TensorSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.HistogramSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramSummary) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.InsertGradientOf](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InsertGradientOf) | Supported | Supported | Supported | debug_ops
-| [mindspore.ops.operations.Print](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Print) | Supported | Doing | Doing | debug_ops
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.BoundingBoxEncode](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BoundingBoxEncode) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.BoundingBoxDecode](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BoundingBoxDecode) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.PopulationCount](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PopulationCount) | Supported | Doing | Doing | other_ops
-| [mindspore.ops.operations.CheckValid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CheckValid) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.IOU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.IOU) | Supported | Supported | Doing | other_ops
-| [mindspore.ops.operations.MakeRefKey](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MakeRefKey) | Supported | Supported | Supported | other_ops
-| [mindspore.ops.operations.InTopK](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.InTopK) | Supported | Doing | Doing | other_ops
-| [mindspore.ops.operations.StandardNormal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StandardNormal) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.Gamma](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gamma) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.Poisson](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Poisson) | Supported | Doing | Doing | random_ops
-| [mindspore.ops.operations.UniformInt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UniformInt) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.UniformReal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.UniformReal) | Supported | Supported | Doing | random_ops
-| [mindspore.ops.operations.RandomChoiceWithMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RandomChoiceWithMask) | Doing| Supported | Doing | random_ops
-| [mindspore.ops.operations.RandomCategorical](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RandomCategorical) | Supported| Doing | Doing | random_ops
-| [mindspore.ops.operations.ScalarCast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScalarCast) | Supported | Supported | Supported | inner_ops
-| [mindspore.ops.operations.ReverseSequence](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReverseSequence) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.CropAndResize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.CropAndResize) | Supported | Doing | Doing | image_ops
-| [mindspore.ops.operations.SquaredDifference](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquaredDifference) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Xdivy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xdivy) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.Xlogy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xlogy) | Supported | Doing | Doing | math_ops
-| [mindspore.ops.operations.HistogramFixedWidth](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Flatten](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Flatten) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Softmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Softmax) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Acosh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Acosh) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorMod) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Elu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Elu) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.MirrorPad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MirrorPad) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Unpack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Unpack) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Pack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pack) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.L2Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.L2Loss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.CTCLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CTCLoss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.RNNTLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RNNTLoss) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogSoftmax) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Softplus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Softplus) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReLU) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.ReLU6](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReLU6) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.HSwish](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.HSwish) | Doing | Supported | Doing | nn_ops
+| [mindspore.ops.HSigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.HSigmoid) | Doing | Supported | Doing | nn_ops
+| [mindspore.ops.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sigmoid) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tanh) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.BatchNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchNorm) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LRN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LRN) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Conv2D](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Conv2D) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.DepthwiseConv2dNative](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNative) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNativeBackpropInput) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DepthwiseConv2dNativeBackpropFilter](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DepthwiseConv2dNativeBackpropFilter) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.MaxPoolWithArgmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MaxPoolWithArgmax) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.MaxPool](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MaxPool) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.AvgPool](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AvgPool) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Conv2DBackpropInput](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Conv2DBackpropInput) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BiasAdd) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.TopK](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TopK) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SoftmaxCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SparseSoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseSoftmaxCrossEntropyWithLogits) | Doing | Supported | Supported | nn_ops
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyMomentum) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAddSign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyPowerSign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyGradientDescent) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalGradientDescent) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ApplyRMSProp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyRMSProp) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.ApplyCenteredRMSProp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyCenteredRMSProp) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagradV2) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseProximalAdagrad) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalAdagrad) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseLazyAdam) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseAdam) | Doing | Doing | Supported | nn_ops
+| [mindspore.ops.SmoothL1Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SmoothL1Loss) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.SGD](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SGD) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LayerNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LayerNorm) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.L2Normalize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.L2Normalize) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DropoutGenMask) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DropoutDoMask) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.ResizeBilinear](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ResizeBilinear) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.OneHot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.OneHot) | Supported | Supported | Supported | nn_ops
+| [mindspore.ops.Gelu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Gelu) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.GetNext](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GetNext) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.PReLU) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.LSTM](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LSTM) | Doing | Supported | Supported | nn_ops
+| [mindspore.ops.BasicLSTMCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BasicLSTMCell) | Doing | Doing | Doing | nn_ops
+| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Pad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pad) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.ROIAlign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ROIAlign) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.Adam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Adam) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.BinaryCrossEntropy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BinaryCrossEntropy) | Supported | Supported | Doing | nn_ops
+| [mindspore.ops.KLDivLoss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.KLDivLoss) | Doing | Supported | Doing | nn_ops
+| [mindspore.ops.LARSUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LARSUpdate) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.Softsign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Softsign) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.AssignAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignAdd) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ReduceMean](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMean) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceSum) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceAll](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceAll) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ReduceMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMax) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.ReduceMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMin) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.ReduceProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceProd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.CumProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CumProd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.MatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MatMul) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchMatMul) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.CumSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CumSum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.AddN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AddN) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Neg](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Neg) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sub) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mul) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Square) | Supported | Supported | Supported | math_ops
+| [mindspore.ops.SquareSumAll](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SquareSumAll) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Rsqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Rsqrt) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Reciprocal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Reciprocal) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pow) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Exp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Exp) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Log](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Log) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Log1p](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Log1p) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Minimum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Maximum) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RealDiv) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Div) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DivNoNan) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Floor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Floor) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Equal) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.EqualCount](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.EqualCount) | Doing | Supported | Supported | math_ops
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Greater) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GreaterEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Less](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Less) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Atan2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Atan2) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LessEqual) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseAnd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseOr) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseXor) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Ceil](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Ceil) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Inv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Inv) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Invert](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Invert) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUAllocFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NPUAllocFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUGetFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NPUGetFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NPUClearFloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NPUClearFloatStatus) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.FloatStatus](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloatStatus) | Doing | Supported | Doing | math_ops
+| [mindspore.ops.Cos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cos) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Cosh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cosh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ACos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ACos) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BesselI0e](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BesselI0e) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.BesselI1e](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BesselI1e) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncateDiv) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncateMod) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Tan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tan) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Asin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Asin) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Asinh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Asinh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Erf](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Erf) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Erfc](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Erfc) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sin) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Sinh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sinh) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Expm1](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Expm1) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.NMSWithMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NMSWithMask) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Abs](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Abs) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.Sign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sign) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Round](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Round) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApproximateEqual) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.InplaceAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InplaceAdd) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.InplaceSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InplaceSub) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Mod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mod) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.DType](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DType) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.SameTypeShape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SameTypeShape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cast) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.IsSubClass](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.IsSubClass) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.IsInstance](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.IsInstance) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Shape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Shape) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Squeeze](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Squeeze) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.Transpose](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Transpose) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Split](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Split) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.Rank](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Rank) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.TruncatedNormal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncatedNormal) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.Size](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Size) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Fill](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Fill) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.OnesLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.OnesLike) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ZerosLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ZerosLike) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.TupleToArray](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TupleToArray) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScalarToArray](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScalarToArray) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScalarToTensor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScalarToTensor) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.InvertPermutation](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InvertPermutation) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Argmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Argmax) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Argmin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Argmin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ArgMaxWithValue) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ArgMinWithValue) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tile) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentSum) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentMin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.UnsortedSegmentProd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UnsortedSegmentProd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Concat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Concat) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ParallelConcat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ParallelConcat) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Slice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Slice) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Select](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Select) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.StridedSlice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.StridedSlice) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.Diag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Diag) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.DiagPart](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DiagPart) | Doing | Doing | Doing | array_ops
+| [mindspore.ops.Eye](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Eye) | Supported | Supported | Supported | array_ops
+| [mindspore.ops.ScatterNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNd) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ResizeNearestNeighbor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ResizeNearestNeighbor) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.GatherNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherNd) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.ApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyFtrl) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseFtrl) | Doing | Doing | Supported | array_ops
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrlV2) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdUpdate) | Supported | Doing | Supported | array_ops
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterUpdate) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMul) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterDiv) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToDepth](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SpaceToDepth) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.DepthToSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DepthToSpace) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToBatch](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SpaceToBatch) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.SpaceToBatchND](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SpaceToBatchND) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.BatchToSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchToSpace) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.BatchToSpaceND](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchToSpaceND) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.IsFinite](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.IsFinite) | Supported | Supported | Doing | array_ops
+| [mindspore.ops.InplaceUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InplaceUpdate) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterSub) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMax) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMin) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdAdd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdSub) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNonAliasingAdd) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.Rint](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Rint) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ReverseV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReverseV2) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.ReduceOp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceOp) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.AllReduce](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AllReduce) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.AllGather](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AllGather) | Supported | Supported | Doing | comm_ops
+| [mindspore.ops.ReduceScatter](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceScatter) | Doing | Supported | Doing | comm_ops
+| [mindspore.ops.Broadcast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Broadcast) | Supported | Doing | Doing | comm_ops
+| [mindspore.ops.ControlDepend](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ControlDepend) | Supported | Supported | Supported | control_ops
+| [mindspore.ops.GeSwitch](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GeSwitch) | Doing | Doing | Doing | control_ops
+| [mindspore.ops.Merge](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Merge) | Doing | Doing | Doing | control_ops
+| [mindspore.ops.ScalarSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScalarSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.ImageSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ImageSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.TensorSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.HistogramSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.HistogramSummary) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.InsertGradientOf](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InsertGradientOf) | Supported | Supported | Supported | debug_ops
+| [mindspore.ops.Print](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Print) | Supported | Doing | Doing | debug_ops
+| [mindspore.ops.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Assign) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.BoundingBoxEncode](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BoundingBoxEncode) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.BoundingBoxDecode](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BoundingBoxDecode) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.PopulationCount](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.PopulationCount) | Supported | Doing | Doing | other_ops
+| [mindspore.ops.CheckValid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CheckValid) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.IOU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.IOU) | Supported | Supported | Doing | other_ops
+| [mindspore.ops.MakeRefKey](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MakeRefKey) | Supported | Supported | Supported | other_ops
+| [mindspore.ops.InTopK](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.InTopK) | Supported | Doing | Doing | other_ops
+| [mindspore.ops.StandardNormal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.StandardNormal) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.Gamma](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Gamma) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.Poisson](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Poisson) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.UniformInt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UniformInt) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.UniformReal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.UniformReal) | Supported | Supported | Doing | random_ops
+| [mindspore.ops.RandomChoiceWithMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RandomChoiceWithMask) | Doing | Supported | Doing | random_ops
+| [mindspore.ops.RandomCategorical](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RandomCategorical) | Supported | Doing | Doing | random_ops
+| [mindspore.ops.ScalarCast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScalarCast) | Supported | Supported | Supported | inner_ops
+| [mindspore.ops.ReverseSequence](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReverseSequence) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.CropAndResize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.CropAndResize) | Supported | Doing | Doing | image_ops
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SquaredDifference) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Xdivy) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Xlogy) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.HistogramFixedWidth](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.HistogramFixedWidth) | Supported | Doing | Doing | math_ops
+| [mindspore.ops.Eps](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Eps) | Supported | Supported | Doing | math_ops
+| [mindspore.ops.ReLUV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReLUV2) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.BNTrainingReduce](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BNTrainingReduce) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.BNTrainingUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BNTrainingUpdate) | Supported | Doing | Doing | nn_ops
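+
+The three support columns above are per backend. As a minimal, hedged sketch (assuming a MindSpore 1.x install; the snippet is illustrative and not taken from this table), an operator marked Supported for a backend can be executed after selecting that backend through the context:
+
+```python
+import numpy as np
+from mindspore import Tensor, context
+import mindspore.ops as ops
+
+# Pick a backend the table marks as Supported for the operator in question.
+context.set_context(mode=context.PYNATIVE_MODE, device_target="CPU")
+
+argmax = ops.Argmax(axis=-1)  # Argmax is listed as Supported on every backend
+x = Tensor(np.array([[1.0, 20.0, 5.0]], dtype=np.float32))
+print(argmax(x))              # -> [1]
+```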
## mindspore.ops.functional
| Operation | Corresponding functional operator
| :----------- | :-----------
-| [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | pack
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | tensor_add
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | assign_sub
-| [mindspore.ops.operations.AddN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AddN) | addn
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | square
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | sqrt
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | equal
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | not_equal
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | logical_not
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd) | logical_and
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr) | logical_or
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | expand_dims
-| [mindspore.ops.operations.DType](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DType) | dtype
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | cast
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | reshape
-| [mindspore.ops.operations.Shape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Shape) | shape
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | gather
-| [mindspore.ops.operations.Rank](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rank) | rank
-| [mindspore.ops.operations.Size](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Size) | size
-| [mindspore.ops.operations.Fill](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Fill) | fill
-| [mindspore.ops.operations.OnesLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OnesLike) | ones_like
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | tile
-| [mindspore.ops.operations.Select](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Select) | select
-| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | scatter_nd
-| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | gather_nd
-| [mindspore.ops.operations.ControlDepend](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ControlDepend) | control_depend
-| [mindspore.ops.operations.Print](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Print) | print
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign) | assign
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | tensor_pow
+| [mindspore.ops.Pack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pack) | pack
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | tensor_add
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | assign_sub
+| [mindspore.ops.AddN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AddN) | addn
+| [mindspore.ops.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Square) | square
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | sqrt
+| [mindspore.ops.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Equal) | equal
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | not_equal
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | logical_not
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd) | logical_and
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr) | logical_or
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | expand_dims
+| [mindspore.ops.DType](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DType) | dtype
+| [mindspore.ops.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cast) | cast
+| [mindspore.ops.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | reshape
+| [mindspore.ops.Shape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Shape) | shape
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | gather
+| [mindspore.ops.Rank](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Rank) | rank
+| [mindspore.ops.Size](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Size) | size
+| [mindspore.ops.Fill](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Fill) | fill
+| [mindspore.ops.OnesLike](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.OnesLike) | ones_like
+| [mindspore.ops.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tile) | tile
+| [mindspore.ops.Select](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Select) | select
+| [mindspore.ops.ScatterNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNd) | scatter_nd
+| [mindspore.ops.GatherNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherNd) | gather_nd
+| [mindspore.ops.ControlDepend](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ControlDepend) | control_depend
+| [mindspore.ops.Print](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Print) | print
+| [mindspore.ops.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Assign) | assign
+| [mindspore.ops.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pow) | tensor_pow
> Currently, functional covers only some of the operators that take no attributes; the remaining ones will be added later.
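+
+As an illustration of the mapping, a minimal sketch (assuming a MindSpore 1.x environment; `TensorAdd`/`tensor_add` come from the table above): each row pairs a primitive class in `mindspore.ops` with a prebuilt functional alias, so the two call styles below should produce the same result.
+
+```python
+import numpy as np
+from mindspore import Tensor
+import mindspore.ops as ops
+from mindspore.ops import functional as F
+
+x = Tensor(np.array([1.0, 2.0], dtype=np.float32))
+y = Tensor(np.array([3.0, 4.0], dtype=np.float32))
+
+# Class form: instantiate the primitive, then call it.
+out1 = ops.TensorAdd()(x, y)
+
+# Functional form: the prebuilt alias listed in the table (tensor_add).
+out2 = F.tensor_add(x, y)
+
+assert np.allclose(out1.asnumpy(), out2.asnumpy())
+```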
@@ -385,62 +400,61 @@
| Operation | Constraint
| :----------- | :-----------
-| [mindspore.ops.operations.ACos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ACos) | None
-| [mindspore.ops.operations.Cos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cos) | None
-| [mindspore.ops.operations.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalNot) | None
-| [mindspore.ops.operations.Log](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Log) | None
-| [mindspore.ops.operations.Exp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | None
-| [mindspore.ops.operations.LogSoftmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogSoftmax) | The dimension of the input (logits) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the single-device computation
-| [mindspore.ops.operations.Softmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Softmax) | The dimension of the input (logits) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the single-device computation
-| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | None
-| [mindspore.ops.operations.Gelu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Gelu) | None
-| [mindspore.ops.operations.ReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU) | None
-| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | None
-| [mindspore.ops.operations.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Cast) | None
-| [mindspore.ops.operations.Neg](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Neg) | None
-| [mindspore.ops.operations.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ExpandDims) | None
-| [mindspore.ops.operations.Squeeze](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Squeeze) | None
-| [mindspore.ops.operations.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | None
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | None
-| [mindspore.ops.operations.Dropout](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Dropout) | Repeated computation is not supported
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div) | None
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | None
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv) | None
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | None
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub) | None
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | None
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv) | None
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater) | None
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub) | None
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | None
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal) | None
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual) | None
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum) | None
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum) | None
-| [mindspore.ops.operations.BiasAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BiasAdd) | None
-| [mindspore.ops.operations.Concat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Concat) | The dimension of the input (input_x) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the single-device computation
-| [mindspore.ops.operations.DropoutGenMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutGenMask) | Must be used together with `DropoutDoMask`
-| [mindspore.ops.operations.DropoutDoMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DropoutDoMask) | Must be used together with `DropoutGenMask`; configuring a sharding strategy is not supported
-| [mindspore.ops.operations.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherV2) | Only 1-D and 2-D input_params are supported, and the last dimension of input_params must be 32-byte aligned (for performance); scalar input_indices is not supported; repeated computation is not supported when the parameter is split along the axis dimension; splitting input_indices and input_params at the same time is not supported
-| [mindspore.ops.operations.SparseGatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseGatherV2) | Same as GatherV2
-| [mindspore.ops.operations.EmbeddingLookup](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.EmbeddingLookup) | Same as GatherV2
-| [mindspore.ops.operations.L2Normalize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Normalize) | The dimension of the input (input_x) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the single-device computation
-| [mindspore.ops.operations.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SoftmaxCrossEntropyWithLogits) | The last dimension of the inputs (logits, labels) cannot be split; there are two outputs, and only element [0] of the forward loss can be used
-| [mindspore.ops.operations.MatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MatMul) | `transpose_a=True` is not supported
-| [mindspore.ops.operations.BatchMatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchMatMul) | `transpose_a=True` is not supported
-| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | The channel dimension of the input (input_x) must be split in the same way as weight
-| [mindspore.ops.operations.OneHot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.OneHot) | Only a 1-D tensor is supported as the input (indices)
-| [mindspore.ops.operations.ReduceSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceSum) | None
-| [mindspore.ops.operations.ReduceMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMax) | None
-| [mindspore.ops.operations.ReduceMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMin) | None
-| [mindspore.ops.operations.ArgMinWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMinWithValue) | The first output (index) cannot be used as the input of other operators
-| [mindspore.ops.operations.ArgMaxWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ArgMaxWithValue) | The first output (index) cannot be used as the input of other operators
-| [mindspore.ops.operations.ReduceMean](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReduceMean) | None
-| [mindspore.ops.operations.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reshape) | Configuring a sharding strategy is not supported
-| [mindspore.ops.operations.StridedSlice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.StridedSlice) | Only masks whose values are all 0 are supported; the dimensions to be split must all be fully extracted; splitting is not supported on dimensions where strides is not 1
-| [mindspore.ops.operations.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tile) | Only configuring a sharding strategy for multiples is supported
-| [mindspore.ops.operations.Transpose](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Transpose) | None
-| [mindspore.ops.operations.Diag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Diag) | Configuring a sharding strategy is not supported
+| [mindspore.ops.ACos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ACos) | None
+| [mindspore.ops.Cos](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cos) | None
+| [mindspore.ops.LogicalNot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalNot) | None
+| [mindspore.ops.Log](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Log) | None
+| [mindspore.ops.Exp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Exp) | None
+| [mindspore.ops.LogSoftmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogSoftmax) | The dimension of the input (logits) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the single-device computation
+| [mindspore.ops.Softmax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Softmax) | The dimension of the input (logits) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the single-device computation
+| [mindspore.ops.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tanh) | None
+| [mindspore.ops.Gelu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Gelu) | None
+| [mindspore.ops.ReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReLU) | None
+| [mindspore.ops.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sqrt) | None
+| [mindspore.ops.Cast](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Cast) | None
+| [mindspore.ops.Neg](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Neg) | None
+| [mindspore.ops.ExpandDims](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ExpandDims) | None
+| [mindspore.ops.Squeeze](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Squeeze) | None
+| [mindspore.ops.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Square) | None
+| [mindspore.ops.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sigmoid) | None
+| [mindspore.ops.Dropout](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Dropout) | Repeated computation is not supported
+| [mindspore.ops.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Div) | None
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd) | None
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RealDiv) | None
+| [mindspore.ops.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mul) | None
+| [mindspore.ops.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sub) | None
+| [mindspore.ops.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pow) | None
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv) | None
+| [mindspore.ops.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Greater) | None
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignSub) | None
+| [mindspore.ops.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SigmoidCrossEntropyWithLogits) | None
+| [mindspore.ops.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Equal) | None
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NotEqual) | None
+| [mindspore.ops.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Maximum) | None
+| [mindspore.ops.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Minimum) | None
+| [mindspore.ops.BiasAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BiasAdd) | None
+| [mindspore.ops.Concat](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Concat) | The dimension of the input (input_x) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the single-device computation
+| [mindspore.ops.DropoutGenMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DropoutGenMask) | Must be used together with `DropoutDoMask`
+| [mindspore.ops.DropoutDoMask](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DropoutDoMask) | Must be used together with `DropoutGenMask`; configuring a sharding strategy is not supported
+| [mindspore.ops.GatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GatherV2) | Only 1-D and 2-D input_params are supported, and the last dimension of input_params must be 32-byte aligned (for performance); scalar input_indices is not supported; repeated computation is not supported when the parameter is split along the axis dimension; splitting input_indices and input_params at the same time is not supported
+| [mindspore.ops.SparseGatherV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseGatherV2) | Same as GatherV2
+| [mindspore.ops.EmbeddingLookup](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.EmbeddingLookup) | Same as GatherV2
+| [mindspore.ops.L2Normalize](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.L2Normalize) | The dimension of the input (input_x) corresponding to the axis cannot be split; after splitting, the result is mathematically inequivalent to the single-device computation
+| [mindspore.ops.SoftmaxCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SoftmaxCrossEntropyWithLogits) | The last dimension of the inputs (logits, labels) cannot be split; there are two outputs, and only element [0] of the forward loss can be used
+| [mindspore.ops.MatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.MatMul) | `transpose_a=True` is not supported
+| [mindspore.ops.BatchMatMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BatchMatMul) | `transpose_a=True` is not supported
+| [mindspore.ops.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.PReLU) | When the shape of weight is not [1], the channel dimension of the input (input_x) must be split in the same way as weight
+| [mindspore.ops.OneHot](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.OneHot) | Only a 1-D tensor is supported as the input (indices); the sharding strategy must configure the strategy of the output as well as the strategies of the first and second inputs
+| [mindspore.ops.ReduceSum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceSum) | None
+| [mindspore.ops.ReduceMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMax) | When the input is split along the axis dimension, the distributed result may be inconsistent with the single-device result
+| [mindspore.ops.ReduceMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMin) | When the input is split along the axis dimension, the distributed result may be inconsistent with the single-device result
+| [mindspore.ops.ArgMinWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ArgMinWithValue) | The first output (index) cannot be used as the input of other operators; when the input is split along the axis dimension, the distributed result may be inconsistent with the single-device result
+| [mindspore.ops.ArgMaxWithValue](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ArgMaxWithValue) | The first output (index) cannot be used as the input of other operators; when the input is split along the axis dimension, the distributed result may be inconsistent with the single-device result
+| [mindspore.ops.ReduceMean](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ReduceMean) | None
+| [mindspore.ops.Reshape](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Reshape) | Configuring a sharding strategy is not supported
+| [mindspore.ops.StridedSlice](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.StridedSlice) | Only masks whose values are all 0 are supported; the dimensions to be split must all be fully extracted; splitting is not supported on dimensions where strides is not 1
+| [mindspore.ops.Tile](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Tile) | Only configuring a sharding strategy for multiples is supported
+| [mindspore.ops.Transpose](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Transpose) | None
> Repeated computation means that the devices are not fully used. For example, when a cluster runs distributed training on 8 cards but the sharding strategy splits the input into only 4 slices, repeated computation occurs.
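+
+A minimal sketch of how a sharding strategy is attached to one of the operators above (an assumption-level example using the primitive `shard` interface in semi-auto parallel mode; not taken from this document). The strategy must respect the per-operator constraints in the table, e.g. MatMul with `transpose_a=True` would be rejected, and a strategy whose slices do not cover all devices triggers the repeated computation described above:
+
+```python
+import mindspore.ops as ops
+from mindspore import context
+from mindspore.context import ParallelMode
+
+# Assumes an 8-device cluster.
+context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
+context.set_auto_parallel_context(parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL,
+                                  device_num=8)
+
+# One strategy tuple per input; 2 x 1 x 1 x 4 = 8 slices uses every device.
+# Splitting into only 4 slices on 8 cards would cause repeated computation.
+matmul = ops.MatMul().shard(((2, 1), (1, 4)))
+```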
@@ -468,66 +482,66 @@
| Operator
| :-----------
-| [mindspore.ops.operations.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Assign)
-| [mindspore.ops.operations.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.AssignSub)
-| [mindspore.ops.operations.ApplyMomentum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyMomentum)
-| [mindspore.ops.operations.FusedSparseAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseAdam)
-| [mindspore.ops.operations.FusedSparseLazyAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseLazyAdam)
-| [mindspore.ops.operations.FusedSparseFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseFtrl)
-| [mindspore.ops.operations.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FusedSparseProximalAdagrad)
-| [mindspore.ops.operations.ApplyAdaMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdaMax)
-| [mindspore.ops.operations.ApplyAdadelta](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdadelta)
-| [mindspore.ops.operations.ApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdagrad)
-| [mindspore.ops.operations.ApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAdagradV2)
-| [mindspore.ops.operations.SparseApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagrad)
-| [mindspore.ops.operations.SparseApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyAdagradV2)
-| [mindspore.ops.operations.ApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalAdagrad)
-| [mindspore.ops.operations.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyProximalAdagrad)
-| [mindspore.ops.operations.ApplyAddSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyAddSign)
-| [mindspore.ops.operations.ApplyPowerSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyPowerSign)
-| [mindspore.ops.operations.ApplyGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyGradientDescent)
-| [mindspore.ops.operations.ApplyProximalGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyProximalGradientDescent)
-| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl)
-| [mindspore.ops.operations.SparseApplyFtrlV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrlV2)
-| [mindspore.ops.operations.BitwiseAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseAnd)
-| [mindspore.ops.operations.BitwiseOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseOr)
-| [mindspore.ops.operations.BitwiseXor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BitwiseXor)
-| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd)
-| [mindspore.ops.operations.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sub)
-| [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul)
-| [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow)
-| [mindspore.ops.operations.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Minimum)
-| [mindspore.ops.operations.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Maximum)
-| [mindspore.ops.operations.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.RealDiv)
-| [mindspore.ops.operations.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Div)
-| [mindspore.ops.operations.DivNoNan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DivNoNan)
-| [mindspore.ops.operations.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorDiv)
-| [mindspore.ops.operations.TruncateDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateDiv)
-| [mindspore.ops.operations.TruncateMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TruncateMod)
-| [mindspore.ops.operations.Mod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mod)
-| [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod)
-| [mindspore.ops.operations.Atan2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Atan2)
-| [mindspore.ops.operations.SquaredDifference](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquaredDifference)
-| [mindspore.ops.operations.Xdivy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xdivy)
-| [mindspore.ops.operations.Xlogy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Xlogy)
-| [mindspore.ops.operations.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Equal)
-| [mindspore.ops.operations.ApproximateEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApproximateEqual)
-| [mindspore.ops.operations.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.NotEqual)
-| [mindspore.ops.operations.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Greater)
-| [mindspore.ops.operations.GreaterEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GreaterEqual)
-| [mindspore.ops.operations.Less](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Less)
-| [mindspore.ops.operations.LessEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LessEqual)
-| [mindspore.ops.operations.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalAnd)
-| [mindspore.ops.operations.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LogicalOr)
-| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate)
-| [mindspore.ops.operations.ScatterNdAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdAdd)
-| [mindspore.ops.operations.ScatterNdSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdSub)
-| [mindspore.ops.operations.ScatterNonAliasingAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNonAliasingAdd)
-| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate)
-| [mindspore.ops.operations.ScatterMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMax)
-| [mindspore.ops.operations.ScatterMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMin)
-| [mindspore.ops.operations.ScatterAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterAdd)
-| [mindspore.ops.operations.ScatterSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterSub)
-| [mindspore.ops.operations.ScatterMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterMul)
-| [mindspore.ops.operations.ScatterDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterDiv)
+| [mindspore.ops.Assign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Assign)
+| [mindspore.ops.AssignSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.AssignSub)
+| [mindspore.ops.ApplyMomentum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyMomentum)
+| [mindspore.ops.FusedSparseAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseAdam)
+| [mindspore.ops.FusedSparseLazyAdam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseLazyAdam)
+| [mindspore.ops.FusedSparseFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseFtrl)
+| [mindspore.ops.FusedSparseProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FusedSparseProximalAdagrad)
+| [mindspore.ops.ApplyAdaMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdaMax)
+| [mindspore.ops.ApplyAdadelta](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdadelta)
+| [mindspore.ops.ApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdagrad)
+| [mindspore.ops.ApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAdagradV2)
+| [mindspore.ops.SparseApplyAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagrad)
+| [mindspore.ops.SparseApplyAdagradV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyAdagradV2)
+| [mindspore.ops.ApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalAdagrad)
+| [mindspore.ops.SparseApplyProximalAdagrad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyProximalAdagrad)
+| [mindspore.ops.ApplyAddSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyAddSign)
+| [mindspore.ops.ApplyPowerSign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyPowerSign)
+| [mindspore.ops.ApplyGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyGradientDescent)
+| [mindspore.ops.ApplyProximalGradientDescent](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApplyProximalGradientDescent)
+| [mindspore.ops.SparseApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrl)
+| [mindspore.ops.SparseApplyFtrlV2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SparseApplyFtrlV2)
+| [mindspore.ops.BitwiseAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseAnd)
+| [mindspore.ops.BitwiseOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseOr)
+| [mindspore.ops.BitwiseXor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.BitwiseXor)
+| [mindspore.ops.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TensorAdd)
+| [mindspore.ops.Sub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Sub)
+| [mindspore.ops.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mul)
+| [mindspore.ops.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Pow)
+| [mindspore.ops.Minimum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Minimum)
+| [mindspore.ops.Maximum](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Maximum)
+| [mindspore.ops.RealDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.RealDiv)
+| [mindspore.ops.Div](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Div)
+| [mindspore.ops.DivNoNan](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.DivNoNan)
+| [mindspore.ops.FloorDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorDiv)
+| [mindspore.ops.TruncateDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncateDiv)
+| [mindspore.ops.TruncateMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.TruncateMod)
+| [mindspore.ops.Mod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Mod)
+| [mindspore.ops.FloorMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.FloorMod)
+| [mindspore.ops.Atan2](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Atan2)
+| [mindspore.ops.SquaredDifference](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.SquaredDifference)
+| [mindspore.ops.Xdivy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Xdivy)
+| [mindspore.ops.Xlogy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Xlogy)
+| [mindspore.ops.Equal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Equal)
+| [mindspore.ops.ApproximateEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ApproximateEqual)
+| [mindspore.ops.NotEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.NotEqual)
+| [mindspore.ops.Greater](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Greater)
+| [mindspore.ops.GreaterEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.GreaterEqual)
+| [mindspore.ops.Less](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.Less)
+| [mindspore.ops.LessEqual](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LessEqual)
+| [mindspore.ops.LogicalAnd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalAnd)
+| [mindspore.ops.LogicalOr](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.LogicalOr)
+| [mindspore.ops.ScatterNdUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdUpdate)
+| [mindspore.ops.ScatterNdAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdAdd)
+| [mindspore.ops.ScatterNdSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNdSub)
+| [mindspore.ops.ScatterNonAliasingAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterNonAliasingAdd)
+| [mindspore.ops.ScatterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterUpdate)
+| [mindspore.ops.ScatterMax](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMax)
+| [mindspore.ops.ScatterMin](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMin)
+| [mindspore.ops.ScatterAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterAdd)
+| [mindspore.ops.ScatterSub](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterSub)
+| [mindspore.ops.ScatterMul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterMul)
+| [mindspore.ops.ScatterDiv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html#mindspore.ops.ScatterDiv)
diff --git a/lite/docs/source_en/_static/logo_source.png b/lite/docs/source_en/_static/logo_source.png
index fc347d271abe082ae8d16242328551648766b6fb..880f2bc87172daf487654c0ba4f1657c672bd2b8 100644
Binary files a/lite/docs/source_en/_static/logo_source.png and b/lite/docs/source_en/_static/logo_source.png differ
diff --git a/lite/docs/source_en/apicc/dataset.md b/lite/docs/source_en/apicc/dataset.md
index 984ffc15eaa2fc44ed5e17c87f89b561083a5eae..6b2313ffb61a19ef45f18309b87f6a68b528e335 100644
--- a/lite/docs/source_en/apicc/dataset.md
+++ b/lite/docs/source_en/apicc/dataset.md
@@ -6,6 +6,8 @@
## Functions of image_process.h
+### ResizeBilinear
+
```
bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
```
@@ -22,6 +24,8 @@ Resize image by bilinear algorithm, currently the data type only supports uint8,
Return True or False.
+### InitFromPixel
+
```
bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m)
```
@@ -40,6 +44,8 @@ Initialize LiteMat from pixel, currently the conversion supports rbgaTorgb and r
Return True or False.
+### ConvertTo
+
```
bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
```
@@ -56,6 +62,8 @@ Convert the data type, currently it supports converting the data type from uint8
Return True or False.
+### Crop
+
```
bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
```
@@ -74,8 +82,10 @@ Crop image, the channel supports is 3 and 1.
Return True or False.
+### SubStractMeanNormalize
+
```
-bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float *norm)
+bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector<float> &mean, const std::vector<float> &std);
```
Normalize image; currently the supported data type is float.
@@ -85,16 +95,18 @@ Normalize image, currently the supports data type is float.
- `src`: Input image data.
- `dst`: Output image data.
- `mean`: Mean of the data set.
- - `norm`: Norm of the data set.
+ - `std`: Standard deviation of the data set.
- Returns
Return True or False.
+### Pad
+
```
-bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int left, const int right, const PaddBorderType pad_type, uint8_t fill_r, uint8_t fill_g, uint8_t fill_b)
+bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int right, PaddBorderType pad_type, uint8_t fill_b_or_gray, uint8_t fill_g, uint8_t fill_r)
```
-Padd image, the channel supports is 3 and 1.
+Pad image; the supported channel counts are 3 and 1.
- Parameters
@@ -105,13 +117,15 @@ Padd image, the channel supports is 3 and 1.
- `left`: The length of left.
- `right`: The length of right.
- `pad_type`: The type of pad.
- - `fill_r`: R.
+ - `fill_b_or_gray`: B or GRAY.
- `fill_g`: G.
- - `fill_b`: B.
+ - `fill_r`: R.
- Returns
Return True or False.
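+
+Below is a minimal pre-processing sketch that chains the functions documented above. It is an editor's illustration rather than part of the API reference: the `mindspore::dataset` namespace, the `LPixelType::RGBA2RGB` enum value, and the size and normalization constants are assumptions.
+
+```
+// Hedged sketch: decode an RGBA buffer into a LiteMat, resize it, convert it
+// to float, and normalize it. The constants are placeholders, not prescribed values.
+#include <vector>
+#include "lite_cv/image_process.h"
+
+using namespace mindspore::dataset;  // assumed namespace of lite_cv
+
+bool Preprocess(const unsigned char *rgba, int w, int h, LiteMat &out) {
+  LiteMat img, resized, f32;
+  if (!InitFromPixel(rgba, LPixelType::RGBA2RGB, LDataType::UINT8, w, h, img)) return false;
+  if (!ResizeBilinear(img, resized, 224, 224)) return false;  // bilinear resize
+  if (!ConvertTo(resized, f32, 1.0 / 255)) return false;      // uint8 -> float32, scaled
+  std::vector<float> mean = {0.485f, 0.456f, 0.406f};
+  std::vector<float> stds = {0.229f, 0.224f, 0.225f};
+  return SubStractMeanNormalize(f32, out, mean, stds);        // per-channel normalize
+}
+```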
+### Affine
+
```
void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector<size_t> dsize, UINT8_C1 borderValue)
```
@@ -140,6 +154,8 @@ Apply affine transformation for 3 channel image.
- `dsize`: The size of the output image.
- `borderValue`: The pixel value is used for filing after the image is captured.
+### GetDefaultBoxes
+
```
std::vector<std::vector<float>> GetDefaultBoxes(BoxesConfig config)
```
@@ -154,6 +170,8 @@ Get default anchor boxes for Faster R-CNN, SSD, YOLO etc.
Return the default boxes.
+### ConvertBoxes
+
```
void ConvertBoxes(std::vector<std::vector<float>> &boxes, std::vector<std::vector<float>> &default_boxes, BoxesConfig config)
```
@@ -166,6 +184,8 @@ Convert the prediction boxes to the actual boxes with (y, x, h, w).
- `default_boxes`: Default box.
- `config`: Objects of BoxesConfig structure.
+### ApplyNms
+
```
std::vector<int> ApplyNms(std::vector<std::vector<float>> &all_boxes, std::vector<float> &all_scores, float thres, int max_boxes)
```
@@ -190,6 +210,7 @@ Class that represents a lite Mat of a Image.
**Constructors & Destructors**
+### LiteMat
```
LiteMat()
@@ -211,6 +232,7 @@ Destructor of MindSpore dataset LiteMat.
**Public Member Functions**
+### Init
```
void Init(int width, LDataType data_type = LDataType::UINT8)
@@ -222,6 +244,8 @@ void Init(int width, int height, int channel, LDataType data_type = LDataType::U
The overloads initialize the channel, width and height of the image; they differ only in their parameters.
+### IsEmpty
+
```
bool IsEmpty() const
```
@@ -232,6 +256,8 @@ A function to determine whether the object is empty.
Return True or False.
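+
+The snippet below sketches the typical life cycle of a `LiteMat` using `Init`, `IsEmpty` and `Release` (documented just below); the `mindspore::dataset` namespace and the `LDataType::FLOAT32` value are assumptions for illustration.
+
+```
+// Hedged example of the LiteMat life cycle described in this section.
+#include "lite_cv/lite_mat.h"
+
+void LiteMatLifeCycle() {
+  mindspore::dataset::LiteMat mat;
+  // width, height, channel, data type
+  mat.Init(224, 224, 3, mindspore::dataset::LDataType::FLOAT32);
+  if (mat.IsEmpty()) {
+    return;  // initialization failed
+  }
+  mat.Release();  // explicitly free the underlying buffer
+}
+```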
+### Release
+
```
void Release()
```
@@ -240,6 +266,8 @@ A function to release memory.
**Private Member Functions**
+### AlignMalloc
+
```
void *AlignMalloc(unsigned int size)
```
@@ -254,6 +282,8 @@ Apply for memory alignment.
Return the size of a pointer.
+### AlignFree
+
```
void AlignFree(void *ptr)
```
@@ -270,6 +300,8 @@ Initialize the value of elem_size_ by data_type.
- `data_type`: Type of data.
+### addRef
+
```
int addRef(int *p, int value)
```
diff --git a/lite/docs/source_en/apicc/errorcode_and_metatype.md b/lite/docs/source_en/apicc/errorcode_and_metatype.md
index df566213408154cd2034eb2932a5f6d1380f89f3..45b4877a858d82df61c1dffa8dc734edddd300a5 100644
--- a/lite/docs/source_en/apicc/errorcode_and_metatype.md
+++ b/lite/docs/source_en/apicc/errorcode_and_metatype.md
@@ -13,6 +13,7 @@ Description of error code and meta type supported in MindSpore Lite.
| RET_NO_CHANGE | -4 | No change. |
| RET_SUCCESS_EXIT | -5 | No error but exit. |
| RET_MEMORY_FAILED | -6 | Fail to create memory. |
+| RET_NOT_SUPPORT | -7 | Not supported yet. |
| RET_OUT_OF_TENSOR_RANGE | -101 | Failed to check range. |
| RET_INPUT_TENSOR_ERROR | -102 | Failed to check input tensor. |
| RET_REENTRANT_ERROR | -103 | Exist executor running. |
@@ -24,6 +25,8 @@ Description of error code and meta type supported in MindSpore Lite.
| RET_FORMAT_ERR | -401 | Failed to check the tensor format. |
| RET_INFER_ERR | -501 | Failed to infer shape. |
| RET_INFER_INVALID | -502 | Invalid infer shape before runtime. |
+| RET_INPUT_PARAM_INVALID | -601 | Invalid input parameter from user. |
+| RET_INPUT_PARAM_LACK | -602 | Missing required input parameter. |
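+
+A short, hedged sketch of how caller code might branch on the newly added return codes; the header path and namespace follow the conventions used elsewhere in these docs and are assumptions here.
+
+```
+#include "include/errorcode.h"
+
+// Illustrative handling of the new codes; not a prescribed pattern.
+void HandleReturn(int ret) {
+  if (ret == mindspore::lite::RET_NOT_SUPPORT) {
+    // the requested feature is not supported yet
+  } else if (ret == mindspore::lite::RET_INPUT_PARAM_INVALID) {
+    // a user-supplied parameter failed validation
+  }
+}
+```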
## MetaType
An **enum** type.
diff --git a/lite/docs/source_en/apicc/lite.md b/lite/docs/source_en/apicc/lite.md
index 93bc93edf0d709c8d227723f921ea39f9a39f3b0..1dbe44a3f99d3b35f2c6a501523ac75d90702ec4 100644
--- a/lite/docs/source_en/apicc/lite.md
+++ b/lite/docs/source_en/apicc/lite.md
@@ -23,23 +23,6 @@ Context()
Constructor of MindSpore Lite Context using default value for parameters.
-```
-Context(int thread_num, std::shared_ptr allocator, DeviceContext device_ctx)
-```
-Constructor of MindSpore Lite Context using input value for parameters.
-
-- Parameters
-
- - `thread_num`: Define the work thread number during the runtime.
-
- - `allocator`: Define the allocator for malloc.
-
- - `device_ctx`: Define device information during the runtime.
-
-- Returns
-
- The instance of MindSpore Lite Context.
-
```
~Context()
```
@@ -52,10 +35,12 @@ float16_priority
```
A **bool** value. Defaults to **false**. When enabled, float16 inference is preferred.
+> Enabling float16 inference may reduce precision, because some variables may exceed the range of float16 during the forward pass.
+
```
-device_ctx_{DT_CPU}
+device_type
```
-A [**DeviceContext**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#devicecontext) struct defined at the bottom of the text. Using to specify the device.
+A [**DeviceType**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#devicetype) **enum** type. Defaults to **DT_CPU**. Used to specify the device.
```
thread_num_
@@ -153,16 +138,6 @@ GPU device type.
DT_NPU = 0
```
NPU device type, not supported yet.
-## DeviceContext
-
-A **struct**. DeviceContext defined for holding DeviceType.
-
-**Attributes**
-```
-type
-```
-A [**DeviceType**](https://www.mindspore.cn/lite/docs/en/master/apicc/lite.html#devicetype) variable. The device type.
-
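+
+With `DeviceContext` removed, a context is now configured through plain attributes. The following is a hedged sketch built from the attributes documented above; the header path and the copy-by-value return are assumptions.
+
+```
+#include "include/context.h"
+
+// Illustrative configuration of the reworked Context.
+mindspore::lite::Context MakeContext() {
+  mindspore::lite::Context context;               // default constructor
+  context.device_type = mindspore::lite::DT_GPU;  // replaces the removed DeviceContext
+  context.thread_num_ = 4;                        // runtime worker threads
+  context.float16_priority = true;                // see the precision note above
+  return context;
+}
+```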
## Version
```
diff --git a/lite/docs/source_en/apicc/tensor.md b/lite/docs/source_en/apicc/tensor.md
index 014929ba12ea2d636478ea7515562559bd9af087..c721fd22d5d8fe14c3da625aa6539431a224c2d1 100644
--- a/lite/docs/source_en/apicc/tensor.md
+++ b/lite/docs/source_en/apicc/tensor.md
@@ -36,19 +36,6 @@ Get data type of the MindSpore Lite MSTensor.
MindSpore Lite TypeId of the MindSpore Lite MSTensor.
-```
-virtual TypeId set_data_type(TypeId data_type)
-```
-Set data type for the MindSpore Lite MSTensor.
-
-- Parameters
-
- - `data_type`: Define MindSpore Lite TypeId to be set in the MindSpore Lite MSTensor.
-
-- Returns
-
- MindSpore Lite TypeId of the MindSpore Lite MSTensor after set.
-
```
virtual std::vector shape() const
```
@@ -59,19 +46,6 @@ Get shape of the MindSpore Lite MSTensor.
A vector of int as the shape of the MindSpore Lite MSTensor.
-```
-virtual size_t set_shape(const std::vector &shape)
-```
-Set shape for the MindSpore Lite MSTensor.
-
-- Parameters
-
- - `shape`: Define a vector of int as shape to be set into the MindSpore Lite MSTensor.
-
-- Returns
-
- Size of shape of the MindSpore Lite MSTensor after set.
-
```
virtual int DimensionSize(size_t index) const
```
@@ -96,16 +70,6 @@ Get number of element in MSTensor.
Number of element in MSTensor.
-```
-virtual std::size_t hash() const
-```
-
-Get hash of the MindSpore Lite MSTensor.
-
-- Returns
-
- Hash of the MindSpore Lite MSTensor.
-
```
virtual size_t Size() const
```
@@ -129,23 +93,3 @@ Get the pointer of data in MSTensor.
- Returns
The pointer points to data in MSTensor.
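+
+A brief, hedged sketch of inspecting a tensor with the getters that remain after this change; how the `MSTensor *` is obtained (for example, from a session) is outside the scope of this page.
+
+```
+#include <vector>
+#include "include/ms_tensor.h"
+
+// Illustrative read-only inspection of an MSTensor.
+void Inspect(mindspore::tensor::MSTensor *tensor) {
+  std::vector<int> shape = tensor->shape();  // dimension values
+  int elements = tensor->ElementsNum();      // number of elements
+  size_t bytes = tensor->Size();             // size in bytes
+  auto *data = static_cast<float *>(tensor->MutableData());  // raw data pointer
+  (void)shape; (void)elements; (void)bytes; (void)data;
+}
+```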
-
-**Static Public Member Functions**
-
-```
-static MSTensor *CreateTensor(TypeId data_type, const std::vector &shape)
-```
-
-Static method to create a MSTensor pointer.
-
-> Note: TypeId is defined in [mindspore/mindspore/core/ir/dtype/type_id.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h). Only number types in TypeId enum are suitable for MSTensor.
-
-- Parameters
-
- - `data_type`: Define the data type of tensor to be created.
-
- - `shape`: Define the shape of tensor to be created.
-
-- Returns
-
- The pointer of MSTensor.
\ No newline at end of file
diff --git a/lite/docs/source_en/image_classification.md b/lite/docs/source_en/image_classification.md
new file mode 100644
index 0000000000000000000000000000000000000000..61e2321e45598f9cd38154dfa3f10838285cc8f5
--- /dev/null
+++ b/lite/docs/source_en/image_classification.md
@@ -0,0 +1,32 @@
+# Image classification
+
+
+
+## Image classification introduction
+
+Image classification identifies what an image represents and predicts the list of objects in the image together with their probabilities. For example, the following table shows the classification results after model inference.
+
+
+
+| Category | Probability |
+| ---------- | ----------- |
+| plant | 0.9359 |
+| flower | 0.8641 |
+| tree | 0.8584 |
+| houseplant | 0.7867 |
+
+See the image classification [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification) implemented with MindSpore Lite; a sketch of the underlying inference flow is shown below.
+
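+The following is a hedged sketch of the inference flow behind such an example, assembled from the session, model and context APIs documented in this repository; the buffer handling and error handling are simplified assumptions.
+
+```
+#include "include/context.h"
+#include "include/lite_session.h"
+#include "include/model.h"
+
+// Illustrative single-image classification run; pre-processing omitted.
+int RunClassification(const char *model_buf, size_t size) {
+  auto *model = mindspore::lite::Model::Import(model_buf, size);  // parse the .ms buffer
+  mindspore::lite::Context context;  // defaults: DT_CPU, 2 threads
+  auto *session = mindspore::session::LiteSession::CreateSession(&context);
+  session->CompileGraph(model);      // must be called before RunGraph
+  auto inputs = session->GetInputs();
+  // ... copy the pre-processed image into inputs[0]->MutableData() ...
+  return session->RunGraph();        // returns an error code; 0 on success
+}
+```
+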
+## Image classification model list
+
+The following table shows the data of some image classification models using MindSpore Lite inference.
+
+> The performance data in the table below was measured on a Mate 30 phone.
+
+| Model name | Size (MB) | Top1 | Top5 | F1 | CPU 4-thread latency (ms) |
+|-----------------------| :----------: | :----------: | :----------: | :----------: | :-----------: |
+| [MobileNetV2](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms) | 11.5 | - | - | 65.5% | 14.595 |
+| [Inceptionv3](https://download.mindspore.cn/model_zoo/official/lite/inceptionv3_lite/inceptionv3.ms) | 90.9 | 78.62% | 94.08% | - | 92.086 |
+| [Shufflenetv2](https://download.mindspore.cn/model_zoo/official/lite/shufflenetv2_lite/shufflenetv2.ms) | 8.8 | 67.74% | 87.62% | - | 8.303 |
+| [GoogleNet](https://download.mindspore.cn/model_zoo/official/lite/googlenet_lite/googlenet.ms) | 25.3 | 72.2% | 90.06% | - | 23.257 |
+| [ResNext50](https://download.mindspore.cn/model_zoo/official/lite/resnext50_lite/resnext50.ms) | 95.8 | 73.1% | 91.21% | - | 138.164 |
diff --git a/lite/docs/source_en/images/image_classification_result.png b/lite/docs/source_en/images/image_classification_result.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7cc49f582440e31b6b5b14dbba5131bfed2a4b4
Binary files /dev/null and b/lite/docs/source_en/images/image_classification_result.png differ
diff --git a/lite/docs/source_en/images/object_detection.png b/lite/docs/source_en/images/object_detection.png
new file mode 100644
index 0000000000000000000000000000000000000000..ad5425c86393a9367701166796df42c9e4702988
Binary files /dev/null and b/lite/docs/source_en/images/object_detection.png differ
diff --git a/lite/docs/source_en/index.rst b/lite/docs/source_en/index.rst
index abecfe957e16896bca6efeb5a1cb376835251fa6..10e8c04337755b302a99f74116e0afc3b938c7fc 100644
--- a/lite/docs/source_en/index.rst
+++ b/lite/docs/source_en/index.rst
@@ -12,5 +12,7 @@ MindSpore Lite Documentation
architecture
apicc/apicc
+ image_classification
+ object_detection
operator_list
glossary
diff --git a/lite/docs/source_en/object_detection.md b/lite/docs/source_en/object_detection.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f2085c5d045ee3654140e34158a098502ce9733
--- /dev/null
+++ b/lite/docs/source_en/object_detection.md
@@ -0,0 +1,26 @@
+# Object detection
+
+
+
+## Object detection introduction
+
+Object detection identifies the objects in an image and their positions in it. For the figure below, the output of the object detection model is shown in the following table: a rectangular box marks the position of each object in the image and is annotated with the probability of the object category. The four numbers in the coordinates are Xmin, Ymin, Xmax, Ymax; the probability indicates the confidence of the detected object.
+
+
+
+| Category | Probability | Coordinate |
+| -------- | ----------- | ---------------- |
+| mouse | 0.78 | [10, 25, 35, 43] |
+
+See the object detection [example](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection) implemented with MindSpore Lite; a sketch of typical box post-processing is shown below.
+
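+The sketch below shows how the `image_process.h` helpers documented earlier could post-process raw model outputs into final boxes; the `mindspore::dataset` namespace, the NMS threshold and the box limit are assumptions.
+
+```
+#include <vector>
+#include "lite_cv/image_process.h"
+
+// Illustrative SSD-style post-processing: decode predictions against the
+// default anchor boxes, then suppress overlapping candidates.
+std::vector<int> PostProcess(std::vector<std::vector<float>> &pred_boxes,
+                             std::vector<float> &scores,
+                             mindspore::dataset::BoxesConfig config) {
+  auto default_boxes = mindspore::dataset::GetDefaultBoxes(config);     // anchors
+  mindspore::dataset::ConvertBoxes(pred_boxes, default_boxes, config);  // to (y, x, h, w)
+  return mindspore::dataset::ApplyNms(pred_boxes, scores, 0.6f, 100);   // kept indices
+}
+```
+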
+## Object detection model list
+
+The following table shows the data of some object detection models using MindSpore Lite inference.
+
+> The performance data in the table below was measured on a Mate 30 phone.
+
+| Model name | Size (MB) | mAP (IoU=0.50:0.95) | CPU 4-thread latency (ms) |
+|-----------------------| :----------: | :----------: | :-----------: |
+| [MobileNetv2-SSD](https://download.mindspore.cn/model_zoo/official/lite/ssd_mobilenetv2_lite/ssd.ms) | 16.7 | 0.22 | 25.4 |
+
diff --git a/lite/docs/source_en/operator_list.md b/lite/docs/source_en/operator_list.md
index 6038b5c95690dd4b30378a7101d828ef9d0cda90..3fbeb8544df8ddeb93636b1e364e0f22be0d36a3 100644
--- a/lite/docs/source_en/operator_list.md
+++ b/lite/docs/source_en/operator_list.md
@@ -5,107 +5,111 @@
> √ The checked items are the operators supported by MindSpore Lite.
| Operation | CPU FP16 | CPU FP32 | CPU Int8 | CPU UInt8 | GPU FP16 | GPU FP32 | Tensorflow Lite op supported | Caffe Lite op supported | Onnx Lite op supported |
-|-----------------------|----------|----------|-----------|----------|----------|------------------|----------|----------|----------|
-| Abs | | √ | √ | √ | | | Abs | | Abs |
-| Add | √ | √ | √ | √ | | √ | Add | | Add |
-| AddN | | √ | | | | | AddN | | |
-| Argmax | | √ | √ | √ | | | Argmax | ArgMax | ArgMax |
-| Argmin | | √ | √ | √ | | | Argmin | | |
-| AvgPool | √ | √ | √ | √ | | √ | MeanPooling| Pooling | AveragePool |
-| BatchNorm | √ | √ | √ | √ | | √ | | BatchNorm | BatchNormalization |
-| BatchToSpace | | √ | √ | √ | | | BatchToSpace, BatchToSpaceND | | |
-| BiasAdd | | √ | √ | √ | | √ | | | BiasAdd |
+|-----------------------|----------|----------|----------|-----------|----------|----------|------------|----------------|--------------------|
+| Abs | | √ | √ | √ | √ | √ | Abs | | Abs |
+| Add | √ | √ | √ | √ | √ | √ | Add | | Add |
+| AddN | | √ | | | | | AddN | | |
+| Argmax | | √ | √ | √ | | | Argmax | ArgMax | ArgMax |
+| Argmin | | √ | √ | √ | | | Argmin | | |
+| AvgPool | √ | √ | √ | √ | √ | √ | MeanPooling| Pooling | AveragePool |
+| BatchNorm | √ | √ | √ | √ | √ | √ | | BatchNorm | BatchNormalization |
+| BatchToSpace | | √ | √ | √ | | | BatchToSpace | | |
+| BatchToSpaceND | | √ | √ | | | | BatchToSpaceND | | |
+| BiasAdd | | √ | √ | √ | √ | √ | | | BiasAdd |
| Broadcast | | √ | | | | | BroadcastTo | | Expand |
-| Cast | √ | √ | | √ | | | Cast, DEQUANTIZE* | | Cast |
-| Ceil | | √ | √ | √ | | | Ceil | | Ceil |
+| Cast | √ | √ | √ | √ | √ | √ | Cast, QUANTIZE, DEQUANTIZE | | Cast |
+| Ceil | | √ | √ | √ | √ | √ | Ceil | | Ceil |
| Concat | √ | √ | √ | √ | √ | √ | Concat | Concat | Concat |
| Conv2d | √ | √ | √ | √ | √ | √ | Conv2D | Convolution | Conv |
| Conv2dTranspose | √ | √ | √ | √ | √ | √ | DeConv2D | Deconvolution | ConvTranspose |
-| Cos | | √ | √ | √ | | | Cos | | Cos |
+| Cos | | √ | √ | √ | √ | √ | Cos | | Cos |
| Crop | | √ | √ | √ | | | | Crop | |
| DeDepthwiseConv2D | | √ | √ | √ | | | | Deconvolution| ConvTranspose |
| DepthToSpace | | √ | √ | √ | | | DepthToSpace| | DepthToSpace |
| DepthwiseConv2dNative | √ | √ | √ | √ | √ | √ | DepthwiseConv2D | Convolution | Convolution |
-| Div | √ | √ | √ | √ | | √ | Div, RealDiv | | Div |
+| DetectionPostProcess | | √ | | | | | DetectionPostProcess | | |
+| Div | √ | √ | √ | √ | √ | √ | Div, RealDiv | | Div |
| Eltwise | √ | √ | | | | | | Eltwise | |
-| Elu | | √ | | | | | Elu | | Elu |
+| Elu | | √ | | | | | Elu | | Elu |
| Equal | √ | √ | √ | √ | | | Equal | | Equal |
-| Exp | | √ | | | | | Exp | | Exp |
-| ExpandDims | | √ | | | | | | | |
+| Exp | | √ | | | √ | √ | Exp | Exp | Exp |
+| ExpandDims | | √ | | | | | ExpandDims | | |
| Fill | | √ | | | | | Fill | | |
| Flatten | | √ | | | | | | Flatten | |
-| Floor | | √ | √ | √ | | | flOOR | | Floor |
+| Floor | | √ | √ | √ | √ | √ | Floor | | Floor |
| FloorDiv | √ | √ | | | | | FloorDiv | | |
| FloorMod | √ | √ | | | | | FloorMod | | |
-| FullConnection | | √ | √ | √ | | | FullyConnected | InnerProduct | |
+| FullConnection | √ | √ | √ | √ | √ | √ | FullyConnected | InnerProduct | |
| GatherNd | | √ | √ | √ | | | GatherND | | |
| GatherV2 | | √ | √ | √ | | | Gather | | Gather |
| Greater | √ | √ | √ | √ | | | Greater | | Greater |
| GreaterEqual | √ | √ | √ | √ | | | GreaterEqual| | |
| Hswish | √ | √ | √ | √ | | | HardSwish | | |
-| LeakyReLU | √ | √ | | | | √ | LeakyRelu | | LeakyRelu |
+| L2Norm | | √ | | | | | L2_NORMALIZATION | | |
+| LeakyReLU | √ | √ | | | √ | √ | LeakyRelu | | LeakyRelu |
| Less | √ | √ | √ | √ | | | Less | | Less |
| LessEqual | √ | √ | √ | √ | | | LessEqual | | |
-| LRN | | √ | | | | | LocalResponseNorm | | Lrn |
-| Log | | √ | √ | √ | | | Log | | Log |
+| LRN | | √ | | | | | LocalResponseNorm | | Lrn, LRN |
+| Log | | √ | √ | √ | √ | √ | Log | | Log |
| LogicalAnd | √ | √ | | | | | LogicalAnd | | |
-| LogicalNot | | √ | √ | √ | | | LogicalNot | | |
+| LogicalNot | | √ | √ | √ | √ | √ | LogicalNot | | |
| LogicalOr | √ | √ | | | | | LogicalOr | | |
| LSTM | | √ | | | | | | | |
| MatMul | | √ | √ | √ | √ | √ | | | MatMul |
| Maximum | √ | √ | | | | | Maximum | | Max |
-| MaxPool | √ | √ | √ | √ | | √ | MaxPooling | Pooling | MaxPool |
+| MaxPool | √ | √ | √ | √ | √ | √ | MaxPooling | Pooling | MaxPool |
| Minimum | √ | √ | | | | | Minimum | | Min |
-| Mul | √ | √ | √ | √ | | √ | Mul | | Mul |
+| Mul | √ | √ | √ | √ | √ | √ | Mul | | Mul |
+| Neg | | √ | | | | | Neg | | Neg |
| NotEqual | √ | √ | √ | √ | | | NotEqual | | |
| OneHot | | √ | | | | | OneHot | | |
-| Pad | | √ | √ | √ | | | Pad | | Pad |
-| Pow | | √ | √ | √ | | | Pow | Power | Power |
-| PReLU | | √ | | | | √ | | PReLU | |
+| Pad | √ | √ | √ | √ | | | Pad, MirrorPad | | Pad |
+| Pow | | √ | √ | √ | | | Pow | Power | Power |
+| PReLU | | √ | | | √ | √ | | PReLU | |
| Range | | √ | | | | | Range | | |
| Rank | | √ | | | | | Rank | | |
+| ReduceASum | | √ | | | | | | Reduction | |
| ReduceMax | √ | √ | √ | √ | | | ReduceMax | | ReduceMax |
-| ReduceMean | √ | √ | √ | √ | | | Mean | | ReduceMean |
+| ReduceMean | √ | √ | √ | √ | | | Mean | Reduction | ReduceMean |
| ReduceMin | √ | √ | √ | √ | | | ReduceMin | | ReduceMin |
| ReduceProd | √ | √ | √ | √ | | | ReduceProd | | |
-| ReduceSum | √ | √ | √ | √ | | | Sum | | ReduceSum |
-| ReduceSumSquare | √ | √ | √ | √ | | | | | |
-| ReLU | √ | √ | √ | √ | | √ | Relu | ReLU | Relu |
-| ReLU6 | √ | √ | √ | √ | | √ | Relu6 | ReLU6 | Clip* |
-| Reshape | √ | √ | √ | √ | | √ | Reshape | Reshape | Reshape,Flatten |
+| ReduceSum | √ | √ | √ | √ | | | Sum | Reduction | ReduceSum |
+| ReduceSumSquare | √ | √ | √ | √ | | | | Reduction | |
+| ReLU | √ | √ | √ | √ | √ | √ | Relu | ReLU | Relu |
+| ReLU6 | √ | √ | √ | √ | √ | √ | Relu6 | ReLU6 | Clip* |
+| Reshape | √ | √ | √ | √ | √ | √ | Reshape | Reshape | Reshape,Flatten |
| Resize | | √ | √ | √ | | | ResizeBilinear, NearestNeighbor | Interp | |
| Reverse | | √ | | | | | reverse | | |
| ReverseSequence | | √ | | | | | ReverseSequence | | |
-| Round | | √ | √ | √ | | | Round | | |
-| Rsqrt | | √ | √ | √ | | | Rsqrt | | |
-| Scale | | √ | | | | | | Scale | |
+| Round | | √ | √ | √ | √ | √ | Round | | |
+| Rsqrt | | √ | √ | √ | √ | √ | Rsqrt | | |
+| Scale | | √ | | | √ | √ | | Scale | |
| ScatterNd | | √ | | | | | ScatterNd | | |
-| Shape | | √ | | | | | Shape | | Shape |
-| Sigmoid | √ | √ | √ | √ | | √ | Logistic | Sigmoid | Sigmoid |
-| Sin | | √ | √ | √ | | | Sin | | Sin |
-| Slice | | √ | √ | √ | √ | √ | Slice | | Slice |
-| Softmax | √ | √ | √ | √ | | √ | Softmax | Softmax | Softmax |
-| SpaceToBatch | | √ | | | | | | | |
-| SpaceToBatchND | | √ | | | | | SpaceToBatchND | | |
+| Shape | | √ | | | | | Shape | | Shape |
+| Sigmoid | √ | √ | √ | √ | √ | √ | Logistic | Sigmoid | Sigmoid |
+| Sin | | √ | √ | √ | √ | √ | Sin | | Sin |
+| Slice | | √ | √ | √ | √ | √ | Slice | Slice | Slice |
+| Softmax | √ | √ | √ | √ | √ | √ | Softmax | Softmax | Softmax |
+| SpaceToBatch | | √ | √ | | | | SpaceToBatch | | |
+| SpaceToBatchND | | √ | √ | | | | SpaceToBatchND | | |
| SpaceToDepth | | √ | | | | | SpaceToDepth | | SpaceToDepth |
| SparseToDense | | √ | | | | | SparseToDense | | |
| Split | √ | √ | √ | √ | | | Split, SplitV | | |
-| Sqrt | | √ | √ | √ | | | Sqrt | | Sqrt |
-| Square | | √ | √ | √ | | | Square | | |
-| SquaredDifference | | √ | | | | | SquaredDifference | | |
+| Sqrt | | √ | √ | √ | √ | √ | Sqrt | | Sqrt |
+| Square | | √ | √ | √ | √ | √ | Square | | |
+| SquaredDifference | | √ | | | | | SquaredDifference | | |
| Squeeze | | √ | √ | √ | | | Squeeze | | Squeeze |
| StridedSlice | | √ | √ | √ | | | StridedSlice| | |
| Stack | | √ | | | | | Stack | | |
-| Sub | √ | √ | √ | √ | | √ | Sub | | Sub |
-| Tanh | √ | √ | | | | | Tanh | TanH | |
-| Tile | | √ | | | | | Tile | | Tile |
+| Sub | √ | √ | √ | √ | √ | √ | Sub | | Sub |
+| Tanh | √ | √ | | | √ | √ | Tanh | TanH | |
+| Tile | | √ | | | | | Tile | Tile | Tile |
| TopK | | √ | √ | √ | | | TopKV2 | | |
-| Transpose | √ | √ | | | | √ | Transpose | Permute | Transpose |
+| Transpose | √ | √ | | | √ | √ | Transpose | Permute | Transpose |
| Unique | | √ | | | | | Unique | | |
| Unsqueeze | | √ | √ | √ | | | | | Unsqueeze |
| Unstack | | √ | | | | | Unstack | | |
| Where | | √ | | | | | Where | | |
-| ZerosLike | | √ | | | | | ZerosLike | | |
+| ZerosLike | | √ | | | | | ZerosLike | | |
* Clip: only supports converting clip(0, 6) to Relu6.
-* DEQUANTIZE: only support to convert fp16 to fp32.
diff --git a/lite/docs/source_zh_cn/_static/logo_source.png b/lite/docs/source_zh_cn/_static/logo_source.png
index fc347d271abe082ae8d16242328551648766b6fb..880f2bc87172daf487654c0ba4f1657c672bd2b8 100644
Binary files a/lite/docs/source_zh_cn/_static/logo_source.png and b/lite/docs/source_zh_cn/_static/logo_source.png differ
diff --git a/lite/docs/source_zh_cn/apicc/dataset.md b/lite/docs/source_zh_cn/apicc/dataset.md
index 379d3e11632327b3075c0f8a56d53c852cdeae80..2e9926063c23ce292b84127c2145517102b5e282 100644
--- a/lite/docs/source_zh_cn/apicc/dataset.md
+++ b/lite/docs/source_zh_cn/apicc/dataset.md
@@ -6,6 +6,8 @@
## image_process.h文件的函数
+### ResizeBilinear
+
```
bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
```
@@ -22,6 +24,8 @@ bool ResizeBilinear(LiteMat &src, LiteMat &dst, int dst_w, int dst_h)
返回True或者False。
+### InitFromPixel
+
```
bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m)
```
@@ -40,6 +44,8 @@ bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType d
返回True或者False。
+### ConvertTo
+
```
bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
```
@@ -56,6 +62,8 @@ bool ConvertTo(LiteMat &src, LiteMat &dst, double scale = 1.0)
返回True或者False。
+### Crop
+
```
bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
```
@@ -74,8 +82,10 @@ bool Crop(LiteMat &src, LiteMat &dst, int x, int y, int w, int h)
返回True或者False。
+### SubStractMeanNormalize
+
```
-bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float *norm)
+bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, const std::vector<float> &mean, const std::vector<float> &std);
```
规一化图像,当前支持的数据类型为float。
@@ -85,13 +95,15 @@ bool SubStractMeanNormalize(LiteMat &src, LiteMat &dst, const float *mean, float
- `src`: 输入的图片数据。
- `dst`: 输出图像数据。
- `mean`: 数据集的均值。
- - `norm`: 数据集的方差。
+ - `std`: 数据集的标准差。
- 返回值
返回True或者False。
+### Pad
+
```
-bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int left, const int right, const PaddBorderType pad_type, uint8_t fill_r, uint8_t fill_g, uint8_t fill_b)
+bool Pad(const LiteMat &src, LiteMat &dst, int top, int bottom, int left, int right, PaddBorderType pad_type, uint8_t fill_b_or_gray, uint8_t fill_g, uint8_t fill_r)
```
填充图像,通道支持为3和1。
@@ -105,13 +117,15 @@ bool Padd(LiteMat &src, LiteMat &dst, const int top, const int bottom, const int
- `left`: 图片左边长度。
- `right`: 图片右边长度。
- `pad_type`: padding的类型。
- - `fill_r`: R.
+ - `fill_b_or_gray`: B或者GRAY.
- `fill_g`: G.
- - `fill_b`: B.
+ - `fill_r`: R.
- 返回值
返回True或者False。
+### Affine
+
```
void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector<size_t> dsize, UINT8_C1 borderValue)
```
@@ -140,6 +154,8 @@ void Affine(LiteMat &src, LiteMat &out_img, double M[6], std::vector dsi
- `dsize`: 输出图像的大小。
- `borderValue`: 采图之后用于填充的像素值。
+### GetDefaultBoxes
+
```
std::vector<std::vector<float>> GetDefaultBoxes(BoxesConfig config)
```
@@ -154,6 +170,8 @@ std::vector> GetDefaultBoxes(BoxesConfig config)
返回默认框。
+### ConvertBoxes
+
```
void ConvertBoxes(std::vector<std::vector<float>> &boxes, std::vector<std::vector<float>> &default_boxes, BoxesConfig config)
```
@@ -166,6 +184,8 @@ void ConvertBoxes(std::vector<std::vector<float>> &boxes, std::vector<std::vector<float>> &default_boxes, BoxesConfig config)
+### ApplyNms
+
```
std::vector<int> ApplyNms(std::vector<std::vector<float>> &all_boxes, std::vector<float> &all_scores, float thres, int max_boxes)
```
@@ -190,6 +210,7 @@ LiteMat是一个处理图像的类。
**构造函数和析构函数**
+### LiteMat
```
LiteMat()
@@ -211,6 +232,7 @@ MindSpore dataset LiteMat的析构函数。
**公有成员函数**
+### Init
```
void Init(int width, LDataType data_type = LDataType::UINT8)
@@ -222,6 +244,8 @@ void Init(int width, int height, int channel, LDataType data_type = LDataType::U
该函数用于初始化图像的通道,宽度和高度,参数不同。
+### IsEmpty
+
```
bool IsEmpty() const
```
@@ -232,6 +256,8 @@ bool IsEmpty() const
返回True或者False。
+### Release
+
```
void Release()
```
@@ -240,6 +266,8 @@ void Release()
**私有成员函数**
+### AlignMalloc
+
```
void *AlignMalloc(unsigned int size)
```
@@ -254,12 +282,17 @@ void *AlignMalloc(unsigned int size)
返回指针的大小。
+### AlignFree
+
```
void AlignFree(void *ptr)
```
释放指针内存大小的方法。
+
+### InitElemSize
+
```
void InitElemSize(LDataType data_type)
```
diff --git a/lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md b/lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md
index 4195eaedcfa2cda8e0470d3db06950e35e2050d8..59f0d81ea4a3a254c7b37e9895c89de1d0357b3d 100644
--- a/lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md
+++ b/lite/docs/source_zh_cn/apicc/errorcode_and_metatype.md
@@ -13,6 +13,7 @@
| RET_NO_CHANGE | -4 | 无改变。 |
| RET_SUCCESS_EXIT | -5 | 无错误退出。 |
| RET_MEMORY_FAILED | -6 | 创建内存失败。 |
+| RET_NOT_SUPPORT | -7 | 尚未支持。 |
| RET_OUT_OF_TENSOR_RANGE | -101 | 输出检查越界。 |
| RET_INPUT_TENSOR_ERROR | -102 | 输入检查越界。 |
| RET_REENTRANT_ERROR | -103 | 存在运行中的执行器。 |
@@ -24,6 +25,8 @@
| RET_FORMAT_ERR | -401 | 张量格式检查失败。 |
| RET_INFER_ERR | -501 | 维度推理失败。 |
| RET_INFER_INVALID | -502 | 无效的维度推理。 |
+| RET_INPUT_PARAM_INVALID | -601 | 无效的用户输入参数。 |
+| RET_INPUT_PARAM_LACK | -602 | 缺少必要的输入参数。 |
## MetaType
diff --git a/lite/docs/source_zh_cn/apicc/lite.md b/lite/docs/source_zh_cn/apicc/lite.md
index 2673487a861f56db5c8b9f6bab8daac555cb7fed..839930645a3fed1484a6fc62945620ab22b8b313 100644
--- a/lite/docs/source_zh_cn/apicc/lite.md
+++ b/lite/docs/source_zh_cn/apicc/lite.md
@@ -21,31 +21,13 @@ Context类用于保存执行中的环境变量。
Context()
```
-用默认参数构造MindSpore Lite Context 对象。
-
-```
-Context(int thread_num, std::shared_ptr allocator, DeviceContext device_ctx)
-```
-
-根据输入参数构造MindSpore Lite Context 对象。
-
-- 参数
-
- - `thread_num`: 定义了执行线程数。
-
- - `allocator`: 定义了内存分配器。
-
- - `device_ctx`: 定义了设备信息。
-
-- 返回值
-
- MindSpore Lite Context 指针。
+用默认参数构造MindSpore Lite Context对象。
```
~Context()
```
-MindSpore Lite Context 的析构函数。
+MindSpore Lite Context的析构函数。
**公有属性**
@@ -53,19 +35,21 @@ MindSpore Lite Context 的析构函数。
float16_priority
```
-**bool** 值,默认为**false**,用于使能float16 推理。
+**bool**值,默认为**false**,用于使能float16推理。
+
+> 使能float16推理可能会导致模型推理精度下降,因为在模型推理的中间过程中,有些变量可能会超出float16的数值范围。
```
-device_ctx_{DT_CPU}
+device_type
```
-[**DeviceContext**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#devicecontext)结构体。用于设置设备信息。
+[**DeviceType**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#devicetype)枚举类型。默认为**DT_CPU**,用于设置设备信息。
```
thread_num_
```
-**int** 值,默认为**2**,设置线程数。
+**int**值,默认为**2**,设置线程数。
```
allocator
@@ -173,18 +157,6 @@ DT_NPU = 0
设备为NPU,暂不支持。
-## DeviceContext
-
-定义设备类型的结构体。
-
-**属性**
-
-```
-type
-```
-
-[**DeviceType**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/lite.html#devicetype) 变量。设备类型。
-
## Version
```
diff --git a/lite/docs/source_zh_cn/apicc/session.md b/lite/docs/source_zh_cn/apicc/session.md
index 86556e1351e97bf4ad435e09db907fdca4e5fefd..e8203d44d6f872f816f211a8a97b80d5b922ff74 100644
--- a/lite/docs/source_zh_cn/apicc/session.md
+++ b/lite/docs/source_zh_cn/apicc/session.md
@@ -1,4 +1,4 @@
-# mindspore::session
+# mindspore::session
#include <[lite_session.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/include/lite_session.h)>
@@ -31,9 +31,9 @@ virtual void BindThread(bool if_bind)
```
virtual int CompileGraph(lite::Model *model)
```
-编译MindSpore Lite模型。
+编译MindSpore Lite模型。
-> 注意: CompileGraph必须在RunGraph方法之后调用。
+> 注意: CompileGraph必须在RunGraph方法之前调用。
- 参数
@@ -64,18 +64,18 @@ std::vector GetInputsByName(const std::string &node_name) c
- 返回值
MindSpore Lite MSTensor向量。
-
+
```
virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBack &after = nullptr)
```
-运行带有回调函数的会话。
+运行带有回调函数的会话。
> 注意: RunGraph必须在CompileGraph方法之后调用。
- 参数
- - `before`: 一个[**KernelCallBack**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#kernelcallback) 结构体。定义了运行每个节点之前调用的回调函数。
+ - `before`: 一个[**KernelCallBack**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#kernelcallback)结构体。定义了运行每个节点之前调用的回调函数。
- - `after`: 一个[**KernelCallBack**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#kernelcallback) 结构体。定义了运行每个节点之后调用的回调函数。
+ - `after`: 一个[**KernelCallBack**](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/session.html#kernelcallback)结构体。定义了运行每个节点之后调用的回调函数。
- 返回值
@@ -159,7 +159,7 @@ static LiteSession *CreateSession(lite::Context *context)
using KernelCallBack = std::function<bool (std::vector<tensor::MSTensor *> inputs, std::vector<tensor::MSTensor *> outputs, const CallBackParam &opInfo)>
```
-一个函数包装器。KernelCallBack 定义了指向回调函数的指针。
+一个函数包装器。KernelCallBack定义了指向回调函数的指针。
## CallBackParam
@@ -174,4 +174,4 @@ name_callback_param
```
type_callback_param
```
-**string** 类型变量。节点类型参数。
\ No newline at end of file
+**string** 类型变量。节点类型参数。
diff --git a/lite/docs/source_zh_cn/apicc/tensor.md b/lite/docs/source_zh_cn/apicc/tensor.md
index e9eae1f0fd9a62aa59e7b578b09a455bab843f1d..32d918604f0fddeabcf064d649dc76cfe9f1baf1 100644
--- a/lite/docs/source_zh_cn/apicc/tensor.md
+++ b/lite/docs/source_zh_cn/apicc/tensor.md
@@ -8,6 +8,7 @@
MSTensor定义了MindSpore Lite中的张量。
**构造函数和析构函数**
+
```
MSTensor()
```
@@ -15,7 +16,7 @@ MindSpore Lite MSTensor的构造函数。
- 返回值
- MindSpore Lite MSTensor 的实例。
+ MindSpore Lite MSTensor的实例。
```
virtual ~MSTensor()
@@ -35,19 +36,6 @@ virtual TypeId data_type() const
MindSpore Lite MSTensor类的MindSpore Lite TypeId。
-```
-virtual TypeId set_data_type(TypeId data_type)
-```
-设置MindSpore Lite MSTensor的数据类型。
-
-- 参数
-
- - `data_type`: 定义了MindSpore Lite MSTensor所需设置的MindSpore Lite TypeId。
-
-- 返回值
-
- 设置后的MindSpore Lite MSTensor的MindSpore Lite TypeI。
-
```
virtual std::vector shape() const
```
@@ -57,23 +45,10 @@ virtual std::vector shape() const
一个包含MindSpore Lite MSTensor形状数值的整型向量。
-```
-virtual size_t set_shape(const std::vector &shape)
-```
-设置MindSpore Lite MSTensor的形状.
-
-- 参数
-
- - `shape`: 定义了一个整型向量,包含了所需设置的MindSpore Lite MSTensor形状数值。
-
-- 返回值
-
- 设置形状后的MindSpore Lite MSTensor的大小。
-
```
virtual int DimensionSize(size_t index) const
```
-Get size of the dimension of the MindSpore Lite MSTensor index by the parameter index.
+通过参数索引获取MindSpore Lite MSTensor的维度的大小。
- 参数
@@ -92,15 +67,6 @@ virtual int ElementsNum() const
MSTensor中的元素个数
-```
-virtual std::size_t hash() const
-```
-获取MindSpore Lite MSTensor的哈希码。
-
-- 返回值
-
- MindSpore Lite MSTensor的哈希码。
-
```
virtual size_t Size() const
```
@@ -121,22 +87,3 @@ virtual void *MutableData() const
- 返回值
指向MSTensor中的数据的指针。
-
-**静态公有成员函数**
-
-```
-static MSTensor *CreateTensor(TypeId data_type, const std::vector &shape)
-```
-创建MSTensor指针的静态方法。
-
-> 注意:TypeId在[mindspore/mindspore/core/ir/dtype/type_id\.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/core/ir/dtype/type_id.h)中定义。只有TypeId枚举中的数字类型可用于MSTensor。
-
-- 参数
-
- - `data_type`: 定义了所要创建的张量的数据类型。
-
- - `shape`: 定义了所要创建的张量的形状。
-
-- 返回值
-
- 指向MSTensor的指针。
\ No newline at end of file
diff --git a/lite/docs/source_zh_cn/image_classification.md b/lite/docs/source_zh_cn/image_classification.md
new file mode 100644
index 0000000000000000000000000000000000000000..18a11ed4be0dd3d3903582518448c2e5781b795e
--- /dev/null
+++ b/lite/docs/source_zh_cn/image_classification.md
@@ -0,0 +1,33 @@
+# 图像分类
+
+
+
+## 图像分类介绍
+
+图像分类模型可以预测图片中出现哪些物体,识别出图片中出现物体列表及其概率。 比如下图经过模型推理的分类结果为下表:
+
+
+
+| 类别 | 概率 |
+| ---------- | ------ |
+| plant | 0.9359 |
+| flower | 0.8641 |
+| tree | 0.8584 |
+| houseplant | 0.7867 |
+
+使用MindSpore Lite实现图像分类的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification)。
+
+## 图像分类模型列表
+
+下表是使用MindSpore Lite推理的部分图像分类模型的数据。
+
+> 下表的性能是在mate30手机上测试的。
+
+| 模型名称 | 大小(Mb) | Top1 | Top5 | F1 | CPU 4线程时延(ms) |
+|-----------------------| :----------: | :----------: | :----------: | :----------: | :-----------: |
+| [MobileNetV2](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms) | 11.5 | - | - | 65.5% | 14.595 |
+| [Inceptionv3](https://download.mindspore.cn/model_zoo/official/lite/inceptionv3_lite/inceptionv3.ms) | 90.9 | 78.62% | 94.08% | - | 92.086 |
+| [Shufflenetv2](https://download.mindspore.cn/model_zoo/official/lite/shufflenetv2_lite/shufflenetv2.ms) | 8.8 | 67.74% | 87.62% | - | 8.303 |
+| [GoogleNet](https://download.mindspore.cn/model_zoo/official/lite/googlenet_lite/googlenet.ms) | 25.3 | 72.2% | 90.06% | - | 23.257 |
+| [ResNext50](https://download.mindspore.cn/model_zoo/official/lite/resnext50_lite/resnext50.ms) | 95.8 | 73.1% | 91.21% | - | 138.164 |
+
diff --git a/lite/docs/source_zh_cn/images/image_classification_result.png b/lite/docs/source_zh_cn/images/image_classification_result.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7cc49f582440e31b6b5b14dbba5131bfed2a4b4
Binary files /dev/null and b/lite/docs/source_zh_cn/images/image_classification_result.png differ
diff --git a/lite/docs/source_zh_cn/images/object_detection.png b/lite/docs/source_zh_cn/images/object_detection.png
new file mode 100644
index 0000000000000000000000000000000000000000..ad5425c86393a9367701166796df42c9e4702988
Binary files /dev/null and b/lite/docs/source_zh_cn/images/object_detection.png differ
diff --git a/lite/docs/source_zh_cn/index.rst b/lite/docs/source_zh_cn/index.rst
index 20ecdbb72c0fe01cbc24c674bda6944504c792ff..53e1d51b2881d493dfd7f14db81fda7ab84d930e 100644
--- a/lite/docs/source_zh_cn/index.rst
+++ b/lite/docs/source_zh_cn/index.rst
@@ -1,4 +1,4 @@
-.. MindSpore documentation master file, created by
+.. MindSpore documentation master file, created by
sphinx-quickstart on Thu Aug 17 10:00:00 2020.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
@@ -12,5 +12,7 @@ MindSpore端侧文档
architecture
apicc/apicc
+ image_classification
+ object_detection
operator_list
glossary
diff --git a/lite/docs/source_zh_cn/object_detection.md b/lite/docs/source_zh_cn/object_detection.md
new file mode 100644
index 0000000000000000000000000000000000000000..70fd2ac5ea87952d8bdfaf09ed75d1d6bede876a
--- /dev/null
+++ b/lite/docs/source_zh_cn/object_detection.md
@@ -0,0 +1,26 @@
+# 对象检测
+
+
+
+## 对象检测介绍
+
+对象检测可以识别出图片中的对象和该对象在图片中的位置。如:对下图使用对象检测模型的输出如下表所示,使用矩形框识别图中对象的位置并且标注出对象类别的概率,其中坐标中的4个数字分别为Xmin,Ymin,Xmax,Ymax;概率表示被检测物体的可信程度。
+
+
+
+| 类别 | 概率 | 坐标 |
+| ----- | ---- | ---------------- |
+| mouse | 0.78 | [10, 25, 35, 43] |
+
+使用MindSpore Lite实现对象检测的[示例代码](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/object_detection)。
+
+## 对象检测模型列表
+
+下表是使用MindSpore Lite推理的部分对象检测模型的数据。
+
+> 下表的性能是在mate30手机上测试的。
+
+| 模型名称 | 大小 | mAP(IoU=0.50:0.95) | CPU 4线程时延(ms) |
+|-----------------------| :----------: | :----------: | :-----------: |
+| [MobileNetv2-SSD](https://download.mindspore.cn/model_zoo/official/lite/ssd_mobilenetv2_lite/ssd.ms) | 16.7 | 0.22 | 25.4 |
+
diff --git a/lite/docs/source_zh_cn/operator_list.md b/lite/docs/source_zh_cn/operator_list.md
index 3384d8baf91b1af92ff4758816790af7b6e241bc..7558d93078cfc81db8e3de8ea82457ceb5face25 100644
--- a/lite/docs/source_zh_cn/operator_list.md
+++ b/lite/docs/source_zh_cn/operator_list.md
@@ -5,107 +5,111 @@
> √勾选的项为MindSpore Lite所支持的算子。
| 操作名 | CPU FP16 | CPU FP32 | CPU Int8 | CPU UInt8 | GPU FP16 | GPU FP32 | 支持的Tensorflow Lite op | 支持的Caffe Lite op | 支持的Onnx Lite op |
-|-----------------------|----------|----------|----------|-----------|----------|-------------------|----------|----------|---------|
-| Abs | | √ | √ | √ | | | Abs | | Abs |
-| Add | √ | √ | √ | √ | | √ | Add | | Add |
-| AddN | | √ | | | | | AddN | | |
-| Argmax | | √ | √ | √ | | | Argmax | ArgMax | ArgMax |
-| Argmin | | √ | √ | √ | | | Argmin | | |
-| AvgPool | √ | √ | √ | √ | | √ | MeanPooling| Pooling | AveragePool |
-| BatchNorm | √ | √ | √ | √ | | √ | | BatchNorm | BatchNormalization |
-| BatchToSpace | | √ | √ | √ | | | BatchToSpace, BatchToSpaceND | | |
-| BiasAdd | | √ | √ | √ | | √ | | | BiasAdd |
+|-----------------------|----------|----------|----------|-----------|----------|----------|------------|----------------|--------------------|
+| Abs | | √ | √ | √ | √ | √ | Abs | | Abs |
+| Add | √ | √ | √ | √ | √ | √ | Add | | Add |
+| AddN | | √ | | | | | AddN | | |
+| Argmax | | √ | √ | √ | | | Argmax | ArgMax | ArgMax |
+| Argmin | | √ | √ | √ | | | Argmin | | |
+| AvgPool | √ | √ | √ | √ | √ | √ | MeanPooling| Pooling | AveragePool |
+| BatchNorm | √ | √ | √ | √ | √ | √ | | BatchNorm | BatchNormalization |
+| BatchToSpace | | √ | √ | √ | | | BatchToSpace | | |
+| BatchToSpaceND | | √ | √ | | | | BatchToSpaceND | | |
+| BiasAdd | | √ | √ | √ | √ | √ | | | BiasAdd |
| Broadcast | | √ | | | | | BroadcastTo | | Expand |
-| Cast | √ | √ | | √ | | | Cast, DEQUANTIZE* | | Cast |
-| Ceil | | √ | √ | √ | | | Ceil | | Ceil |
+| Cast | √ | √ | √ | √ | √ | √ | Cast, QUANTIZE, DEQUANTIZE | | Cast |
+| Ceil | | √ | √ | √ | √ | √ | Ceil | | Ceil |
| Concat | √ | √ | √ | √ | √ | √ | Concat | Concat | Concat |
| Conv2d | √ | √ | √ | √ | √ | √ | Conv2D | Convolution | Conv |
| Conv2dTranspose | √ | √ | √ | √ | √ | √ | DeConv2D | Deconvolution | ConvTranspose |
-| Cos | | √ | √ | √ | | | Cos | | Cos |
+| Cos | | √ | √ | √ | √ | √ | Cos | | Cos |
| Crop | | √ | √ | √ | | | | Crop | |
| DeDepthwiseConv2D | | √ | √ | √ | | | | Deconvolution| ConvTranspose |
| DepthToSpace | | √ | √ | √ | | | DepthToSpace| | DepthToSpace |
| DepthwiseConv2dNative | √ | √ | √ | √ | √ | √ | DepthwiseConv2D | Convolution | Convolution |
-| Div | √ | √ | √ | √ | | √ | Div, RealDiv | | Div |
+| DetectionPostProcess | | √ | | | | | DetectionPostProcess | | |
+| Div | √ | √ | √ | √ | √ | √ | Div, RealDiv | | Div |
| Eltwise | √ | √ | | | | | | Eltwise | |
| Elu | | √ | | | | | Elu | | Elu |
| Equal | √ | √ | √ | √ | | | Equal | | Equal |
-| Exp | | √ | | | | | Exp | | Exp |
-| ExpandDims | | √ | | | | | | | |
+| Exp | | √ | | | √ | √ | Exp | Exp | Exp |
+| ExpandDims | | √ | | | | | ExpandDims | | |
| Fill | | √ | | | | | Fill | | |
| Flatten | | √ | | | | | | Flatten | |
-| Floor | | √ | √ | √ | | | flOOR | | Floor |
+| Floor | | √ | √ | √ | √ | √ | Floor | | Floor |
| FloorDiv | √ | √ | | | | | FloorDiv | | |
| FloorMod | √ | √ | | | | | FloorMod | | |
-| FullConnection | | √ | √ | √ | | | FullyConnected | InnerProduct | |
+| FullConnection | √ | √ | √ | √ | √ | √ | FullyConnected | InnerProduct | |
| GatherNd | | √ | √ | √ | | | GatherND | | |
| GatherV2 | | √ | √ | √ | | | Gather | | Gather |
| Greater | √ | √ | √ | √ | | | Greater | | Greater |
| GreaterEqual | √ | √ | √ | √ | | | GreaterEqual| | |
| Hswish | √ | √ | √ | √ | | | HardSwish | | |
-| LeakyReLU | √ | √ | | | | √ | LeakyRelu | | LeakyRelu |
+| L2Norm | | √ | | | | | L2_NORMALIZATION | | |
+| LeakyReLU | √ | √ | | | √ | √ | LeakyRelu | | LeakyRelu |
| Less | √ | √ | √ | √ | | | Less | | Less |
| LessEqual | √ | √ | √ | √ | | | LessEqual | | |
-| LRN | | √ | | | | | LocalResponseNorm | | Lrn |
-| Log | | √ | √ | √ | | | Log | | Log |
+| LRN | | √ | | | | | LocalResponseNorm | | Lrn, LRN |
+| Log | | √ | √ | √ | √ | √ | Log | | Log |
| LogicalAnd | √ | √ | | | | | LogicalAnd | | |
-| LogicalNot | | √ | √ | √ | | | LogicalNot | | |
+| LogicalNot | | √ | √ | √ | √ | √ | LogicalNot | | |
| LogicalOr | √ | √ | | | | | LogicalOr | | |
| LSTM | | √ | | | | | | | |
| MatMul | | √ | √ | √ | √ | √ | | | MatMul |
| Maximum | √ | √ | | | | | Maximum | | Max |
-| MaxPool | √ | √ | √ | √ | | √ | MaxPooling | Pooling | MaxPool |
+| MaxPool | √ | √ | √ | √ | √ | √ | MaxPooling | Pooling | MaxPool |
| Minimum | √ | √ | | | | | Minimum | | Min |
-| Mul | √ | √ | √ | √ | | √ | Mul | | Mul |
+| Mul | √ | √ | √ | √ | √ | √ | Mul | | Mul |
+| Neg | | √ | | | | | Neg | | Neg |
| NotEqual | √ | √ | √ | √ | | | NotEqual | | |
| OneHot | | √ | | | | | OneHot | | |
-| Pad | | √ | √ | √ | | | Pad | | Pad |
-| Pow | | √ | √ | √ | | | Pow | Power | Power |
-| PReLU | | √ | | | | √ | | PReLU | |
+| Pad | √ | √ | √ | √ | | | Pad, MirrorPad | | Pad |
+| Pow | | √ | √ | √ | | | Pow | Power | Power |
+| PReLU | | √ | | | √ | √ | | PReLU | |
| Range | | √ | | | | | Range | | |
| Rank | | √ | | | | | Rank | | |
+| ReduceASum | | √ | | | | | | Reduction | |
| ReduceMax | √ | √ | √ | √ | | | ReduceMax | | ReduceMax |
-| ReduceMean | √ | √ | √ | √ | | | Mean | | ReduceMean |
+| ReduceMean | √ | √ | √ | √ | | | Mean | Reduction | ReduceMean |
| ReduceMin | √ | √ | √ | √ | | | ReduceMin | | ReduceMin |
| ReduceProd | √ | √ | √ | √ | | | ReduceProd | | |
-| ReduceSum | √ | √ | √ | √ | | | Sum | | ReduceSum |
-| ReduceSumSquare | √ | √ | √ | √ | | | | | |
-| ReLU | √ | √ | √ | √ | | √ | Relu | ReLU | Relu |
-| ReLU6 | √ | √ | √ | √ | | √ | Relu6 | ReLU6 | Clip* |
-| Reshape | √ | √ | √ | √ | | √ | Reshape | Reshape | Reshape,Flatten |
-| Resize | | √ | √ | √ | | | ResizeBilinear, NearestNeighbor | Interp | |
+| ReduceSum | √ | √ | √ | √ | | | Sum | Reduction | ReduceSum |
+| ReduceSumSquare | √ | √ | √ | √ | | | | Reduction | |
+| ReLU | √ | √ | √ | √ | √ | √ | Relu | ReLU | Relu |
+| ReLU6 | √ | √ | √ | √ | √ | √ | Relu6 | ReLU6 | Clip* |
+| Reshape | √ | √ | √ | √ | √ | √ | Reshape | Reshape | Reshape,Flatten |
+| Resize | | √ | √ | √ | | | ResizeBilinear, NearestNeighbor | Interp | |
| Reverse | | √ | | | | | reverse | | |
| ReverseSequence | | √ | | | | | ReverseSequence | | |
-| Round | | √ | √ | √ | | | Round | | |
-| Rsqrt | | √ | √ | √ | | | Rsqrt | | |
-| Scale | | √ | | | | | | Scale | |
+| Round | | √ | √ | √ | √ | √ | Round | | |
+| Rsqrt | | √ | √ | √ | √ | √ | Rsqrt | | |
+| Scale | | √ | | | √ | √ | | Scale | |
| ScatterNd | | √ | | | | | ScatterNd | | |
-| Shape | | √ | | | | | Shape | | Shape |
-| Sigmoid | √ | √ | √ | √ | | √ | Logistic | Sigmoid | Sigmoid |
-| Sin | | √ | √ | √ | | | Sin | | Sin |
-| Slice | | √ | √ | √ | √ | √ | Slice | | Slice |
-| Softmax | √ | √ | √ | √ | | √ | Softmax | Softmax | Softmax |
-| SpaceToBatch | | √ | | | | | | | |
-| SpaceToBatchND | | √ | | | | | SpaceToBatchND | | |
+| Shape | | √ | | | | | Shape | | Shape |
+| Sigmoid | √ | √ | √ | √ | √ | √ | Logistic | Sigmoid | Sigmoid |
+| Sin | | √ | √ | √ | √ | √ | Sin | | Sin |
+| Slice | | √ | √ | √ | √ | √ | Slice | Slice | Slice |
+| Softmax | √ | √ | √ | √ | √ | √ | Softmax | Softmax | Softmax |
+| SpaceToBatch | | √ | √ | | | | SpaceToBatch | | |
+| SpaceToBatchND | | √ | √ | | | | SpaceToBatchND | | |
| SpaceToDepth | | √ | | | | | SpaceToDepth | | SpaceToDepth |
| SparseToDense | | √ | | | | | SparseToDense | | |
| Split | √ | √ | √ | √ | | | Split, SplitV | | |
-| Sqrt | | √ | √ | √ | | | Sqrt | | Sqrt |
-| Square | | √ | √ | √ | | | Square | | |
-| SquaredDifference | | √ | | | | | SquaredDifference | | |
+| Sqrt | | √ | √ | √ | √ | √ | Sqrt | | Sqrt |
+| Square | | √ | √ | √ | √ | √ | Square | | |
+| SquaredDifference | | √ | | | | | SquaredDifference | | |
| Squeeze | | √ | √ | √ | | | Squeeze | | Squeeze |
| StridedSlice | | √ | √ | √ | | | StridedSlice| | |
| Stack | | √ | | | | | Stack | | |
-| Sub | √ | √ | √ | √ | | √ | Sub | | Sub |
-| Tanh | √ | √ | | | | | Tanh | TanH | |
-| Tile | | √ | | | | | Tile | | Tile |
+| Sub | √ | √ | √ | √ | √ | √ | Sub | | Sub |
+| Tanh | √ | √ | | | √ | √ | Tanh | TanH | |
+| Tile | | √ | | | | | Tile | Tile | Tile |
| TopK | | √ | √ | √ | | | TopKV2 | | |
-| Transpose | √ | √ | | | | √ | Transpose | Permute | Transpose |
+| Transpose | √ | √ | | | √ | √ | Transpose | Permute | Transpose |
| Unique | | √ | | | | | Unique | | |
| Unsqueeze | | √ | √ | √ | | | | | Unsqueeze |
| Unstack | | √ | | | | | Unstack | | |
| Where | | √ | | | | | Where | | |
-| ZerosLike | | √ | | | | | ZerosLike | | |
+| ZerosLike | | √ | | | | | ZerosLike | | |
* Clip: 仅支持将clip(0, 6)转换为Relu6.
-* DEQUANTIZE: 仅支持将fp16转换为fp32.
diff --git a/lite/tutorials/source_en/_static/logo_source.png b/lite/tutorials/source_en/_static/logo_source.png
index fc347d271abe082ae8d16242328551648766b6fb..880f2bc87172daf487654c0ba4f1657c672bd2b8 100644
Binary files a/lite/tutorials/source_en/_static/logo_source.png and b/lite/tutorials/source_en/_static/logo_source.png differ
diff --git a/lite/tutorials/source_en/build.md b/lite/tutorials/source_en/build.md
index ef1282a257493900b1c43c9371d083058f2e04de..1a996d30b22c9f7f4d95ef9b259853a9d078eb34 100644
--- a/lite/tutorials/source_en/build.md
+++ b/lite/tutorials/source_en/build.md
@@ -10,11 +10,7 @@
- [Output Description](#output-description)
- [Description of Converter's Directory Structure](#description-of-converters-directory-structure)
- [Description of Runtime and Other tools' Directory Structure](#description-of-runtime-and-other-tools-directory-structure)
- - [Windows Environment Compilation](#windows-environment-compilation)
- - [Environment Requirements](#environment-requirements-1)
- - [Compilation Options](#compilation-options-1)
- - [Compilation Example](#compilation-example-1)
- - [Output Description](#output-description-1)
+ - [Description of Imageprocess's Directory Structure](#description-of-imageprocesss-directory-structure)
@@ -24,10 +20,11 @@ This chapter introduces how to quickly compile MindSpore Lite, which includes th
| Module | Support Platform | Description |
| --- | ---- | ---- |
-| converter | Linux、Windows | Model Conversion Tool |
+| converter | Linux | Model Conversion Tool |
| runtime | Linux, Android | Model Inference Framework |
| benchmark | Linux, Android | Benchmarking Tool |
-| time_profiler | Linux、Android | Performance Analysis Tool |
+| timeprofiler | Linux, Android | Performance Analysis Tool |
+| imageprocess | Linux, Android | Image Processing Library |
## Linux Environment Compilation
@@ -35,7 +32,7 @@ This chapter introduces how to quickly compile MindSpore Lite, which includes th
- The compilation environment supports Linux x86_64 only. Ubuntu 18.04.02 LTS is recommended.
-- Compilation dependencies of runtime、benchmark and time_profiler:
+- Compilation dependencies of runtime, benchmark and timeprofiler:
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0
- [Android_NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip)
@@ -53,6 +50,7 @@ This chapter introduces how to quickly compile MindSpore Lite, which includes th
- [Libevent](https://libevent.org) >= 2.0
- [M4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18
- [OpenSSL](https://www.openssl.org/) >= 1.1.1
+ - [Python](https://www.python.org/) >= 3.7.5
> - To install and use `Android_NDK`, you need to configure environment variables. The command example is `export ANDROID_NDK={$NDK_PATH}/android-ndk-r20b`.
> - In the `build.sh` script, run the `git clone` command to obtain the code in the third-party dependency library. Ensure that the network settings of Git are correct.
@@ -69,6 +67,7 @@ MindSpore Lite provides a compilation script `build.sh` for one-click compilatio
| -j[n] | Sets the number of threads used during compilation. Otherwise, the number of threads is set to 8 by default. | Integer | No |
| -e | In the Arm architecture, select the backend operator and set the `gpu` parameter. The built-in GPU operator of the framework is compiled at the same time. | GPU | No |
| -h | Displays the compilation help information. | None | No |
+| -n | Compiles the lightweight image processing module. | lite_cv | No |
> When the `-I` parameter changes, such as `-I x86_64` is converted to `-I arm64`, adding `-i` for parameter compilation does not take effect.
@@ -102,11 +101,17 @@ Then, run the following commands in the root directory of the source code to com
bash build.sh -I arm64 -e gpu
```
+- Compile ARM64 with image preprocessing module:
+ ```bash
+ bash build.sh -I arm64 -n lite_cv
+ ```
+
### Output Description
-After the compilation is complete, go to the `mindspore/output` directory of the source code to view the file generated after compilation. The file is divided into two parts.
+After the compilation is complete, go to the `mindspore/output` directory of the source code to view the files generated after compilation. The output is divided into three parts.
- `mindspore-lite-{version}-converter-{os}.tar.gz`: Contains the model conversion tool.
- `mindspore-lite-{version}-runtime-{os}-{device}.tar.gz`: Contains the model inference framework, benchmarking tool and performance analysis tool.
+- `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`: Contains the image processing library ImageProcess.
> version: version of the output, consistent with that of the MindSpore.
>
@@ -119,6 +124,7 @@ Execute the decompression command to obtain the compiled output:
```bash
tar -xvf mindspore-lite-{version}-converter-{os}.tar.gz
tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
+tar -xvf mindspore-lite-{version}-minddata-{os}-{device}.tar.gz
```
#### Description of Converter's Directory Structure
@@ -147,7 +153,7 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
@@ -158,74 +164,45 @@ The inference framework can be obtained under `-I x86_64`, `-I arm64` and `-I ar
│ └── benchmark # Benchmarking Tool
│ └── lib # Inference framework dynamic library
│ ├── libmindspore-lite.so # Dynamic library of inference framework in MindSpore Lite
- │ ├── liboptimize.so # Operator performance optimization library in MindSpore Lite
+ │ ├── libmindspore-lite-fp16.so # Operator performance optimization library supporting float16 in MindSpore Lite
+ │ ├── libmindspore-lite-optimize.so # Operator performance optimization library supporting the dotprod instruction in MindSpore Lite
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
- When the compilation option is `-I arm32`:
```
|
- ├── mindspore-lite-{version}-runtime-arm64-cpu
+ ├── mindspore-lite-{version}-runtime-arm32-cpu
│ └── benchmark # Benchmarking Tool
│ └── lib # Inference framework dynamic library
│ ├── libmindspore-lite.so # Dynamic library of inference framework in MindSpore Lite
│ └── third_party # Header files and libraries of third party libraries
│ ├── flatbuffers # Header files of FlatBuffers
│ └── include # Header files of inference framework
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
-> 1. `liboptimize.so` only exists in the output package of runtime-arm64 and is only used on ARMv8.2 and CPUs that support fp16.
-> 2. Compile ARM64 to get the inference framework output of arm64-cpu by default, if you add `-e gpu`, you will get the inference framework output of arm64-gpu, and the package name is `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`, compiling ARM32 is in the same way.
-> 3. Before running the tools in the converter, benchmark or time_profile directory, you need to configure environment variables, and configure the path where the dynamic libraries of MindSpore Lite and Protobuf are located to the path where the system searches for dynamic libraries. Take the compiled under version 0.7.0-beta as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/flatbuffers/lib:${LD_LIBRARY_PATH}`; configure benchmark and timeprofiler: `export LD_LIBRARY_PATH= ./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
-
-## Windows Environment Compilation
-
-### Environment Requirements
-
-- The supported compilation environment is: Windows 10, 64-bit.
-
-- Compilation dependencies are:
- - [CMake](https://cmake.org/download/) >= 3.14.1
- - [MinGW GCC](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z/download) = 7.3.0
- - [Python](https://www.python.org/) >= 3.7.5
-
-> The compilation script will execute `git clone` to obtain the code of the third-party dependent libraries. Please make sure that the git network settings are correct and available in advance.
-
-### Compilation Options
-
-The compilation options of MindSpore Lite are as follows:
+> 1. `libmindspore-lite-optimize.so` exists only in the runtime-arm64 output package and can be used only on ARMv8.2 CPUs that support the dotprod instruction.
+> 2. `libmindspore-lite-fp16.so` exists only in the runtime-arm64 output package and can be used only on ARMv8.2 CPUs that support fp16.
+> 3. Compiling for ARM64 produces the arm64-cpu inference framework output by default. If you add `-e gpu`, you get the arm64-gpu output instead, and the package name is `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`. Compiling for ARM32 works the same way.
+> 4. Before running the tools in the converter, benchmark, or time_profiler directory, you need to configure environment variables so that the paths of the MindSpore Lite and Protobuf dynamic libraries are included in the system's dynamic library search path. Taking the 0.7.0-beta build as an example, configure the converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/flatbuffers/lib:${LD_LIBRARY_PATH}`; configure benchmark and timeprofiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
-| Parameter | Parameter Description | Mandatory or Not |
-| -------- | ----- | ---- |
-| **lite** | **Set this parameter to compile the Mindspore Lite project.** | **Yes** |
-| [n] | Set the number of threads used during compilation, otherwise the default is set to 6 threads. | No |
+#### Description of the Image Processing Library's Directory Structure
-### Compilation Example
+The image processing library is available only under the `-I arm64 -n lite_cv` compilation option and includes the following parts:
-First, use the git tool to download the source code from the MindSpore code repository.
-```bash
-git clone https://gitee.com/mindspore/mindspore.git
```
-
-Then, use the cmd tool to compile MindSpore Lite in the root directory of the source code and execute the following commands.
-
-- Compile the Windows version with the default number of threads (6 threads).
- ```bash
- call build.bat lite
- ```
-- Compile the Windows version with the specified number of threads 8.
- ```bash
- call build.bat lite 8
- ```
-
-### Output Description
-
-After the compilation is complete, enter the `mindspore/output/` directory, unzip the output file `mindspore-lite-{version}-converter-win-cpu.zip`, which contains the conversion tool executable file.
-
-> version: version of the output, consistent with that of the MindSpore.
+|
+├── mindspore-lite-{version}-minddata-{os}-{device}
+│ └── include # Header files
+│ ├── lite_cv # Image processing library header file
+│ └── lib # Dynamic library
+│ ├── libminddata-lite.so # Image processing dynamic library
+│ └── third_party # Third-party library header files and libraries
+│ ├── flatbuffers # Header files of FlatBuffers
+```
diff --git a/lite/tutorials/source_en/index.rst b/lite/tutorials/source_en/index.rst
index 26e9445ec1baace48e64a3418dd12fe1a1ec36a3..8a371e46b1c91f9bd423db181d9577fb251b5aa9 100644
--- a/lite/tutorials/source_en/index.rst
+++ b/lite/tutorials/source_en/index.rst
@@ -21,4 +21,5 @@ MindSpore Lite Tutorials
build
use/converter_tool
use/evaluating_the_model
+ use/image_processing
use/runtime
diff --git a/lite/tutorials/source_en/quick_start/quick_start.md b/lite/tutorials/source_en/quick_start/quick_start.md
index b0712f03d6a6b713fa0b63160f5be2714a3fc8a2..34d8d9aa7fedafd2b5bc3e581464cff1a7f66e87 100644
--- a/lite/tutorials/source_en/quick_start/quick_start.md
+++ b/lite/tutorials/source_en/quick_start/quick_start.md
@@ -43,7 +43,7 @@ After you retrain a model provided by MindSpore, export the model in the [.mindi
Take the mobilenetv2 model as an example. Execute the following script to convert a model into a MindSpore Lite model for on-device inference.
```bash
-./converter_lite --fmk=MS --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
+./converter_lite --fmk=MINDIR --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
```
## Deploying an Application
@@ -54,9 +54,9 @@ The following section describes how to build and execute an on-device image clas
- Android Studio 3.2 or later (Android 4.0 or later is recommended.)
- Native development kit (NDK) 21.3
-- CMake 3.10.2
+- [CMake](https://cmake.org/download) 3.10.2
- Android software development kit (SDK) 26 or later
-- OpenCV 4.0.0 or later (included in the sample code)
+- [JDK](https://www.oracle.com/downloads/otn-pub/java/JDK/) 1.8 or later
### Building and Running
@@ -68,7 +68,7 @@ The following section describes how to build and execute an on-device image clas

- (Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3). Specify the SDK location in `Android NDK location` of `Project Structure`.
+ (Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3). Specify the NDK location in `Android NDK location` of `Project Structure`.

@@ -80,6 +80,8 @@ The following section describes how to build and execute an on-device image clas
For details about how to connect the Android Studio to a device for debugging, see .
+ "USB debugging mode" must be enabled on the mobile phone before Android Studio can recognize it. On Huawei phones, "USB debugging mode" is generally enabled in Settings > System & updates > Developer options > USB debugging.
+
3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by a camera and the inference result.

@@ -95,31 +97,22 @@ This image classification sample program on the Android device includes a Java l
```
app
-|
-├── libs # library files that store MindSpore Lite dependencies
-│ └── arm64-v8a
-│ ├── libopencv_java4.so
-│ └── libmindspore-lite.so
│
-├── opencv # dependency files related to OpenCV
-│ └── ...
-|
├── src/main
│ ├── assets # resource files
-| | └── model.ms # model file
+| | └── mobilenetv2.ms # model file
│ |
│ ├── cpp # main logic encapsulation classes for model loading and prediction
-| | ├── include # header files related to MindSpore calling
-| | | └── ...
-│ | |
+| | ├── ...
+| | ├── mindspore_lite_x.x.x-minddata-arm64-cpu # MindSpore Lite version
| | ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calling
│ | └── MindSporeNetnative.h # header file
│ |
│ ├── java # application code at the Java layer
-│ │ └── com.huawei.himindsporedemo
+│ │ └── com.mindspore.himindsporedemo
│ │ ├── gallery.classify # implementation related to image processing and MindSpore JNI calling
│ │ │ └── ...
-│ │ └── obejctdetect # implementation related to camera enabling and drawing
+│ │ └── widget # implementation related to camera enabling and drawing
│ │ └── ...
│ │
│ ├── res # resource files related to Android
@@ -128,6 +121,7 @@ app
├── CMakeList.txt # CMake compilation entry file
│
├── build.gradle # Other Android configuration file
+├── download.gradle # downloads the MindSpore Lite package
└── ...
```
@@ -156,42 +150,40 @@ android{
Create a link to the `.so` library file in the `app/CMakeLists.txt` file:
```
-# Set MindSpore Lite Dependencies.
-include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/include/MindSpore)
+# ============== Set MindSpore Dependencies. =============
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/third_party/flatbuffers/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/ir/dtype)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/schema)
+
add_library(mindspore-lite SHARED IMPORTED )
-set_target_properties(mindspore-lite PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libmindspore-lite.so")
+add_library(minddata-lite SHARED IMPORTED )
-# Set OpenCV Dependecies.
-include_directories(${CMAKE_SOURCE_DIR}/opencv/sdk/native/jni/include)
-add_library(lib-opencv SHARED IMPORTED )
-set_target_properties(lib-opencv PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libopencv_java4.so")
+set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libmindspore-lite.so)
+set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
+# --------------- MindSpore Lite set End. --------------------
# Link target library.
target_link_libraries(
...
- mindspore-lite
- lib-opencv
+ # --- mindspore ---
+ minddata-lite
+ mindspore-lite
...
)
```
-In this example, the download.gradle File configuration auto download ` libmindspot-lite.so `and `libopencv_ Java4.so` library file, placed in the 'app / libs / arm64-v8a' directory.
+In this example, the `download.gradle` file is configured to automatically download the MindSpore Lite package and place it in the `app/src/main/cpp/mindspore_lite_x.x.x-minddata-arm64-cpu` directory.
Note: If the automatic download fails, manually download the relevant library files and put them in the corresponding location.
-libmindspore-lite.so [libmindspore-lite.so]( https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
-
-libmindspore-lite include [libmindspore-lite include]( https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/include.zip)
-
-libopencv_java4.so [libopencv_java4.so](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so)
-
-libopencv include [libopencv include]( https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/include.zip)
-
-
+MindSpore Lite package: [mindspore-lite-1.0.0-minddata-arm64-cpu.tar.gz](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%201.0/mindspore-lite-1.0.0-minddata-arm64-cpu.tar.gz)
### Downloading and Deploying a Model File
@@ -201,8 +193,6 @@ Note: if the automatic download fails, please manually download the relevant lib
mobilenetv2.ms [mobilenetv2.ms]( https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)
-
-
### Compiling On-Device Inference Code
Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
@@ -225,10 +215,8 @@ The inference code process is as follows. For details about the complete code, s
*labelEnv = labelNet;
// Create context.
- lite::Context *context = new lite::Context;
-
- context->device_ctx_.type = lite::DT_CPU;
- context->thread_num_ = numThread; //Specify the number of threads to run inference
+ mindspore::lite::Context *context = new mindspore::lite::Context;
+ context->thread_num_ = num_thread;
// Create the mindspore session.
labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
@@ -253,7 +241,7 @@ The inference code process is as follows. For details about the complete code, s
```cpp
// Convert the Bitmap image passed in from the JAVA layer to Mat for OpenCV processing
- BitmapToMat(env, srcBitmap, matImageSrc);
+ BitmapToMat(env, srcBitmap, matImageSrc);
// Processing such as zooming the picture size.
matImgPreprocessed = PreProcessImageData(matImageSrc);
@@ -278,7 +266,38 @@ The inference code process is as follows. For details about the complete code, s
delete[] (dataHWC);
```
-3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
+3. Preprocess the input data.
+
+ ```cpp
+ bool PreProcessImageData(const LiteMat &lite_mat_bgr, LiteMat *lite_norm_mat_ptr) {
+ bool ret = false;
+ LiteMat lite_mat_resize;
+ LiteMat &lite_norm_mat_cut = *lite_norm_mat_ptr;
+ // Resize the input image to 256x256 using bilinear interpolation.
+ ret = ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+ if (!ret) {
+ MS_PRINT("ResizeBilinear error");
+ return false;
+ }
+ // Convert uint8 pixel values to float in [0, 1].
+ LiteMat lite_mat_convert_float;
+ ret = ConvertTo(lite_mat_resize, lite_mat_convert_float, 1.0 / 255.0);
+ if (!ret) {
+ MS_PRINT("ConvertTo error");
+ return false;
+ }
+ // Crop a 224x224 region starting at (16, 16).
+ LiteMat lite_mat_cut;
+ ret = Crop(lite_mat_convert_float, lite_mat_cut, 16, 16, 224, 224);
+ if (!ret) {
+ MS_PRINT("Crop error");
+ return false;
+ }
+ // Normalize with per-channel means and reciprocal standard deviations.
+ float means[3] = {0.485, 0.456, 0.406};
+ float vars[3] = {1.0 / 0.229, 1.0 / 0.224, 1.0 / 0.225};
+ SubStractMeanNormalize(lite_mat_cut, lite_norm_mat_cut, means, vars);
+ return true;
+ }
+ ```
+
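+ A minimal call-site sketch (assuming `lite_mat_bgr` was initialized with `InitFromPixel` as in the previous step; `lite_norm_mat` is a hypothetical output variable):
+
+ ```cpp
+ // Preprocess the decoded frame before feeding it to the model.
+ LiteMat lite_norm_mat;
+ if (!PreProcessImageData(lite_mat_bgr, &lite_norm_mat)) {
+ MS_PRINT("Image preprocessing failed");
+ return false;
+ }
+ ```
+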
+4. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
- Perform graph execution and on-device inference.
@@ -289,7 +308,12 @@ The inference code process is as follows. For details about the complete code, s
- Obtain the output data.
```cpp
- auto msOutputs = mSession->GetOutputs();
+ auto names = mSession->GetOutputTensorNames();
+ std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
+ for (const auto &name : names) {
+ auto temp_dat = mSession->GetOutputByTensorName(name);
+ msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *>{name, temp_dat});
+ }
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
@@ -298,39 +322,34 @@ The inference code process is as follows. For details about the complete code, s
std::string ProcessRunnetResult(std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs, int runnetRet) {
- // Get model output results.
- std::unordered_map::iterator iter;
- iter = msOutputs.begin();
- auto brach1_string = iter->first;
- auto branch1_tensor = iter->second;
+ std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
+ iter = msOutputs.begin();
- int OUTPUTS_LEN = branch1_tensor->ElementsNum();
+ // The mobilenetv2.ms model output just one branch.
+ auto outputTensor = iter->second;
+ int tensorNum = outputTensor->ElementsNum();
- float *temp_scores = static_cast(branch1_tensor->MutableData());
+ // Get a pointer to the first score.
+ float *temp_scores = static_cast<float *>(outputTensor->MutableData());
- float scores[RET_CATEGORY_SUM];
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- scores[i] = temp_scores[i];
+ float scores[RET_CATEGORY_SUM];
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ if (temp_scores[i] > 0.5) {
+ MS_PRINT("MindSpore scores[%d] : [%f]", i, temp_scores[i]);
}
+ scores[i] = temp_scores[i];
+ }
- // Converted to text information that needs to be displayed in the APP.
- std::string retStr = "";
- if (runnetRet == 0) {
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- if (scores[i] > 0.3){
- retStr += g_labels_name_map[i];
- retStr += ":";
- std::string score_str = std::to_string(scores[i]);
- retStr += score_str;
- retStr += ";";
- }
- }
- else {
- MS_PRINT("MindSpore run net failed!");
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- retStr += " :0.0;";
- }
- }
- return retStr;
+ // Score for each category.
+ // Converted to text information that needs to be displayed in the APP.
+ std::string categoryScore = "";
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ categoryScore += labels_name_map[i];
+ categoryScore += ":";
+ std::string score_str = std::to_string(scores[i]);
+ categoryScore += score_str;
+ categoryScore += ";";
+ }
+ return categoryScore;
}
```
\ No newline at end of file
diff --git a/lite/tutorials/source_en/use/benchmark_tool.md b/lite/tutorials/source_en/use/benchmark_tool.md
index d6b3a09ae8554a462fd9a464120ee8cfc1f228f1..0dff3079895ef9bf83ff99a6e2d96f0b7c16e837 100644
--- a/lite/tutorials/source_en/use/benchmark_tool.md
+++ b/lite/tutorials/source_en/use/benchmark_tool.md
@@ -64,23 +64,16 @@ Mean bias of all nodes: 0%
```
-When the origin model's input or output data type is uint8, they needs to be reduced by 128 and converted to int8 type before it can be used as benchmark data to verify accuracy. And when the output data type is INT8, you need to specify calibDataType as INT8 in the parameter.
-
-```bash
-./benchmark --modelPath=./models/test_benchmark_int8.ms --inDataPath=./input/test_benchmark_int8.bin --device=CPU --accuracyThreshold=3 --calibDataPath=./output/test_benchmark_int8.out --calibDataType=INT8
-```
-
## Parameter Description
The command used for benchmark testing based on the compiled Benchmark tool is as follows:
```bash
./benchmark [--modelPath=<MODELPATH>] [--accuracyThreshold=<ACCURACYTHRESHOLD>]
- [--calibDataPath=<CALIBDATAPATH>] [--cpuBindMode=<CPUBINDMODE>]
- [--device=<DEVICE>] [--help] [--inDataPath=<INDATAPATH>]
- [--inDataType=<INDATATYPE>] [--loopCount=<LOOPCOUNT>]
- [--numThreads=<NUMTHREADS>] [--omModelPath=<OMMODELPATH>]
- [--resizeDims=<RESIZEDIMS>] [--warmUpLoopCount=<WARMUPLOOPCOUNT>]
+ [--calibDataPath=<CALIBDATAPATH>] [--calibDataType=<CALIBDATATYPE>]
+ [--cpuBindMode=<CPUBINDMODE>] [--device=<DEVICE>] [--help]
+ [--inDataPath=<INDATAPATH>] [--loopCount=<LOOPCOUNT>]
+ [--numThreads=<NUMTHREADS>] [--warmUpLoopCount=<WARMUPLOOPCOUNT>]
[--fp16Priority=<FP16PRIORITY>]
```
@@ -91,7 +84,7 @@ The following describes the parameters in detail.
| `--modelPath=<MODELPATH>` | Mandatory | Specifies the file path of the MindSpore Lite model for benchmark testing. | String | Null | - |
| `--accuracyThreshold=<ACCURACYTHRESHOLD>` | Optional | Specifies the accuracy threshold. | Float | 0.5 | - |
| `--calibDataPath=<CALIBDATAPATH>` | Optional | Specifies the file path of the benchmark data. The benchmark data, as the comparison output of the tested model, is output from the forward inference of the tested model under other deep learning frameworks using the same input. | String | Null | - |
-| `--calibDataType=<CALIBDATATYPE>` | Optional | Specifies the calibration data type. | String | FLOAT | FLOAT or INT8 |
+| `--calibDataType=<CALIBDATATYPE>` | Optional | Specifies the calibration data type. | String | FLOAT | UINT8, FLOAT, or INT8 |
| `--cpuBindMode=<CPUBINDMODE>` | Optional | Specifies the type of the CPU core bound to the model inference program. | Integer | 1 | −1: medium core; 1: large core; 0: not bound |
| `--device=<DEVICE>` | Optional | Specifies the type of the device on which the model inference program runs. | String | CPU | CPU or GPU |
| `--help` | Optional | Displays the help information about the `benchmark` command. | - | - | - |
diff --git a/lite/tutorials/source_en/use/converter_tool.md b/lite/tutorials/source_en/use/converter_tool.md
index 38cd115fb12a93031009cf9f2d12e1ab77045a46..21f89632b3dbd2cb18313f3b869a4119b89fa2d0 100644
--- a/lite/tutorials/source_en/use/converter_tool.md
+++ b/lite/tutorials/source_en/use/converter_tool.md
@@ -53,7 +53,7 @@ The following describes how to use the conversion command by using several commo
The output is as follows:
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
This indicates that the Caffe model is successfully converted into the MindSpore Lite model and the new file `lenet.ms` is generated.
@@ -61,7 +61,7 @@ The following describes how to use the conversion command by using several commo
- MindSpore model `model.mindir`
```bash
- ./converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
+ ./converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite model `model.tflite`
@@ -79,16 +79,18 @@ The following describes how to use the conversion command by using several commo
./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model --quantType=AwareTraining
```
- - TensorFlow Lite aware quantization model `model_quant.tflite` set the input and output data type to be int8
+ - TensorFlow Lite aware quantization model `model_quant.tflite`, setting the input and output data type to float
```bash
- ./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model --quantType=AwareTraining --inputInferenceType=INT8 --inferenceType=INT8
+ ./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model --quantType=AwareTraining --inferenceType=FLOAT
```
In the preceding scenarios, the following information is displayed, indicating that the conversion is successful. In addition, the target file `model.ms` is obtained.
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
+- If the conversion command fails to run, an [error code](https://www.mindspore.cn/lite/docs/en/master/apicc/errorcode_and_metatype.html) will be output.
+
### Parameter Description
MindSpore Lite model conversion tool provides multiple parameters.
@@ -100,13 +102,12 @@ The following describes the parameters in detail.
| Parameter | Mandatory or Not | Parameter Description | Value Range | Default Value |
| -------- | ------- | ----- | --- | ---- |
| `--help` | No | Prints all help information. | - | - |
-| `--fmk=<FMK>` | Yes | Original format of the input model. | MS, CAFFE, TFLITE, or ONNX | - |
+| `--fmk=<FMK>` | Yes | Original format of the input model. | MINDIR, CAFFE, TFLITE, or ONNX | - |
| `--modelFile=<MODELFILE>` | Yes | Path of the input model. | - | - |
| `--outputFile=<OUTPUTFILE>` | Yes | Path of the output model. (If the path does not exist, a directory will be automatically created.) The suffix `.ms` can be automatically generated. | - | - |
| `--weightFile=<WEIGHTFILE>` | Yes (for Caffe models only) | Path of the weight file of the input model. | - | - |
| `--quantType=<QUANTTYPE>` | No | Sets the quant type of the model. | PostTraining: quantization after training; AwareTraining: perceptual quantization | - |
-|`--inputInferenceType=<INPUTINFERENCETYPE>` | No (supported by aware quant models only) | Sets the input data type of the converted model. If the type is different from the origin model, the convert tool will insert data type convert op before the model to make sure the input data type is same as the input of origin model. | FLOAT or INT8 | FLOAT |
-|`--inferenceType=<INFERENCETYPE>`| No (supported by aware quant models only) | Sets the output data type of the converted model. If the type is different from the origin model, the convert tool will insert data type convert op before the model to make sure the output data type is same as the input of origin model. | FLOAT or INT8 | FLOAT |
+|`--inferenceType=<INFERENCETYPE>`| No (supported by aware quant models only) | Sets the input and output data type of the converted model. If the types differ from those of the origin model, the convert tool inserts data type convert ops at the inputs and outputs of the model to make sure the data types are the same as in the origin model. | UINT8, FLOAT, or INT8 | FLOAT |
|`--stdDev=<STDDEV>`| No (supported by aware quant models only) | Sets the standard deviation of the input data. | (0,+∞) | 128 |
|`--mean=<MEAN>`| No (supported by aware quant models only) | Sets the mean value of the input data. | [-128, 127] | -0.5 |
@@ -119,9 +120,7 @@ The following describes the parameters in detail.
To use the MindSpore Lite model conversion tool, the following environment preparations are required.
-- Compile: The model conversion tool code is in the `mindspore/lite/tools/converter` directory of the MindSpore source code, refer to the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/master/build.html#environment-requirements-1) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/master/build.html#compilation-example-1) in the build document.
-
-- Run: Refer to [Output Description](https://www.mindspore.cn/lite/tutorial/en/master/build.html#output-description-1) in the deployment document to obtain the `converter` tool, and set the environment variable of MinGW(Add the bin directory of MinGW in the system variable Path).
+- Get the toolkit: To obtain the `converter` tool, download the zip package of the Windows conversion tool and unzip it to a local directory.
### Parameter Description
@@ -129,12 +128,7 @@ Reference description Linux environment model conversion tool [parameter descrip
### Example
-First, use the cmd tool to enter the command to compile in the root directory of the source code, refer to `build.md`.
-```bash
-call build.bat lite
-```
-
-Then, set the log printing level to INFO.
+Set the log printing level to INFO.
```bash
set MSLOG=INFO
```
@@ -151,7 +145,7 @@ Several common examples are selected below to illustrate the use of conversion c
The result is shown as:
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
This means that the Caffe model has been successfully converted to the MindSpore Lite model and the new file `lenet.ms` has been obtained.
@@ -159,7 +153,7 @@ Several common examples are selected below to illustrate the use of conversion c
- MindSpore model `model.mindir`
```bash
- call converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
+ call converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite model`model.tflite`
@@ -179,5 +173,6 @@ Several common examples are selected below to illustrate the use of conversion c
In the above cases, the following conversion success prompt is displayed, and the `model.ms` target file is obtained at the same time.
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
+- If the conversion command fails to run, an [error code](https://www.mindspore.cn/lite/docs/en/master/apicc/errorcode_and_metatype.html) will be output.
diff --git a/lite/tutorials/source_en/use/image_processing.md b/lite/tutorials/source_en/use/image_processing.md
new file mode 100644
index 0000000000000000000000000000000000000000..cce6863a5b7258473fd3c4e9dc84f2c8bd4ccb8c
--- /dev/null
+++ b/lite/tutorials/source_en/use/image_processing.md
@@ -0,0 +1,149 @@
+# Preprocess image data
+
+
+
+- [Preprocess image data](#preprocess-image-data)
+ - [Overview](#overview)
+ - [Import image preprocessing function library](#import-image-preprocessing-function-library)
+ - [Initialize the image](#initialize-the-image)
+ - [Usage example](#usage-example)
+ - [Optional image preprocessing operator](#optional-image-preprocessing-operator)
+ - [Resize image](#resize-image)
+ - [Usage example](#usage-example-1)
+ - [Convert the image data type](#convert-the-image-data-type)
+ - [Usage example](#usage-example-2)
+ - [Crop image data](#crop-image-data)
+ - [Usage example](#usage-example-3)
+ - [Normalize image data](#normalize-image-data)
+ - [Usage example](#usage-example-4)
+
+
+
+## Overview
+
+The main purposes of image preprocessing are to eliminate irrelevant information from the image, recover the useful information, enhance the detectability of the relevant features, and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching, and recognition. Here, a LiteMat object is created and the image data is processed before inference so that it meets the data format requirements of model inference.
+
+The process is as follows (a combined sketch of the full pipeline appears at the end of this page):
+
+## Import image preprocessing function library
+
+```
+#include "lite_cv/lite_mat.h"
+#include "lite_cv/image_process.h"
+```
+
+## Initialize the image
+
+Here, the [InitFromPixel](https://www.mindspore.cn/lite/docs/en/master/apicc/dataset.html#initfrompixel) function in the `image_process.h` file is used to initialize the image.
+
+```
+bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m);
+```
+
+### Usage example
+
+```
+// Create the data object of the LiteMat object.
+LiteMat lite_mat_bgr;
+
+// Initialize the lite_mat_bgr object.
+// The image data pointer passed in by the user (The data in the Bitmap corresponding to the Android platform).
+InitFromPixel(pixel_ptr, LPixelType::RGBA2GRAY, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+```
+
+## Optional image preprocessing operator
+
+The image processing operators here can be used in any combination according to the actual situation.
+
+### Resize image
+
+Here we use the [ResizeBilinear](https://www.mindspore.cn/lite/docs/en/master/apicc/dataset.html#resizebilinear) function in `image_process.h` to resize the image through a bilinear algorithm. Currently, the supported data type is uint8, and the supported channels are 3 and 1.
+
+```
+bool ResizeBilinear(const LiteMat &src, LiteMat &dst, int dst_w, int dst_h);
+```
+
+#### Usage example
+
+```
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create a resize image data object.
+LiteMat lite_mat_resize;
+
+// Resize the image.
+ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+```
+
+### Convert the image data type
+
+Here we use the [ConvertTo](https://www.mindspore.cn/lite/docs/en/master/apicc/dataset.html#convertto) function in `image_process.h` to convert the image data type. Currently, the supported conversion is to convert uint8 to float.
+
+```
+bool ConvertTo(const LiteMat &src, LiteMat &dst, double scale = 1.0);
+```
+
+#### Usage example
+
+```
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create the converted data type object.
+LiteMat lite_mat_convert_float;
+
+// Perform conversion type operations on the object. Currently, the supported conversion is to convert uint8 to float.
+ConvertTo(lite_mat_bgr, lite_mat_convert_float);
+```
+
+### Crop image data
+
+Here we use the [Crop](https://www.mindspore.cn/lite/docs/en/master/apicc/dataset.html#crop) function in `image_process.h` to crop the image. Currently, channels 3 and 1 are supported.
+
+```
+bool Crop(const LiteMat &src, LiteMat &dst, int x, int y, int w, int h);
+```
+
+#### Usage example
+
+```
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create the cropped object.
+LiteMat lite_mat_cut;
+
+// The image is cropped by the values of x, y, w, h.
+Crop(lite_mat_bgr, lite_mat_cut, 16, 16, 224, 224);
+```
+
+### Normalize image data
+
+To eliminate the influence of different scales among the data dimensions and make them comparable through standardization, the [SubStractMeanNormalize](https://www.mindspore.cn/lite/docs/en/master/apicc/dataset.html#substractmeannormalize) function in `image_process.h` is used here to normalize the image data.
+
+```
+bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, float *mean, float *norm);
+```
+
+#### Usage example
+
+```
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// The mean value of the image data.
+float means[1] = {0.485};
+// The reciprocal of the standard deviation of the image data.
+float norm[1] = {1.0 / 0.229};
+
+// Create a normalized image object.
+LiteMat lite_mat_bgr_norm;
+
+// The image data is normalized by the mean value and variance of the image data.
+SubStractMeanNormalize(lite_mat_bgr, lite_mat_bgr_norm, means, norm);
+```
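+
+The operators above are typically chained into a single pipeline. The following sketch combines them for a mobilenetv2-style 224x224 model input; `pixel_ptr`, `width`, and `height` are hypothetical names for the user's RGBA source buffer and its dimensions:
+
+```
+// Decode the RGBA buffer into a BGR LiteMat.
+LiteMat lite_mat_bgr;
+InitFromPixel(pixel_ptr, LPixelType::RGBA2BGR, LDataType::UINT8, width, height, lite_mat_bgr);
+
+// Resize to 256x256, convert uint8 to float in [0, 1], crop 224x224, then normalize.
+LiteMat lite_mat_resize;
+ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+
+LiteMat lite_mat_float;
+ConvertTo(lite_mat_resize, lite_mat_float, 1.0 / 255.0);
+
+LiteMat lite_mat_cut;
+Crop(lite_mat_float, lite_mat_cut, 16, 16, 224, 224);
+
+float means[3] = {0.485, 0.456, 0.406};
+float norm[3] = {1.0 / 0.229, 1.0 / 0.224, 1.0 / 0.225};
+LiteMat lite_norm_mat;
+SubStractMeanNormalize(lite_mat_cut, lite_norm_mat, means, norm);
+```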
\ No newline at end of file
diff --git a/lite/tutorials/source_en/use/runtime.md b/lite/tutorials/source_en/use/runtime.md
index 748ef39812baddc070e870445719177ee72218b9..a50dbccf1efa6b98490dce9d31b8c7a3d8b31893 100644
--- a/lite/tutorials/source_en/use/runtime.md
+++ b/lite/tutorials/source_en/use/runtime.md
@@ -1,4 +1,4 @@
-# Use Runtime for Model Inference
+# Use Runtime for Model Inference
@@ -28,6 +28,10 @@
- [Example](#example-5)
- [Obtaining Version String](#obtaining-version-string)
- [Example](#example-6)
+ - [Session parallel launch](#session-parallel-launch)
+ - [Single Session parallel launch](#single-session-parallel-launch)
+ - [Multiple Session parallel launch](#multiple-session-parallel-launch)
+ - [Example](#example-7)
@@ -50,7 +54,7 @@ Its components and their functions are described as follows:
- `Operator`: operator prototype, including operator attributes and methods for inferring the shape, data type, and format.
- `Kernel`: operator, which provides specific operator implementation and the operator forwarding function.
- `Tensor`: tensor used by MindSpore Lite, which provides functions and APIs for tensor memory operations.
-
+
## Reading Models
In MindSpore Lite, a model file is an `.ms` file converted using the model conversion tool. During model inference, the model needs to be loaded from the file system and parsed. Related operations are mainly implemented in the Model component. The Model component holds model data such as weight data and operator attributes.
@@ -77,66 +81,16 @@ Contexts save some basic configuration parameters required by sessions to guide
MindSpore Lite supports heterogeneous inference. The preferred backend for inference is specified by `device_ctx_` in `Context` and is CPU by default. During graph compilation, operator selection and scheduling are performed based on the preferred backend.
-```cpp
-/// \brief DeviceType defined for holding user's preferred backend.
-typedef enum {
- DT_CPU, /**< CPU device type */
- DT_GPU, /**< GPU device type */
- DT_NPU /**< NPU device type, not supported yet */
-} DeviceType;
-
-/// \brief DeviceContext defined for holding DeviceType.
-typedef struct {
- DeviceType type; /**< device type */
-} DeviceContext;
-
-DeviceContext device_ctx_{DT_CPU};
-```
-
MindSpore Lite has a built-in thread pool shared by processes. During inference, `thread_num_` is used to specify the maximum number of threads in the thread pool. The default maximum number is 2. It is recommended that the maximum number be no more than 4. Otherwise, the performance may be affected.
-```c++
-int thread_num_ = 2; /**< thread number config for thread pool */
-```
-
MindSpore Lite supports dynamic memory allocation and release. If `allocator` is not specified, a default `allocator` is generated during inference. You can also use the `Context` method to allow multiple `Context` to share the memory allocator.
If users create the `Context` by using `new`, it should be released by using `delete` once it's not required. Usually the `Context` is released after finishing the session creation.
-```cpp
-/// \brief Allocator defined a memory pool for malloc memory and free memory dynamically.
-///
-/// \note List public class and interface for reference.
-class Allocator;
-
-/// \brief Context defined for holding environment variables during runtime.
-class MS_API Context {
- public:
- /// \brief Constructor of MindSpore Lite Context using input value for parameters.
- ///
- /// \param[in] thread_num Define the work thread number during the runtime.
- /// \param[in] allocator Define the allocator for malloc.
- /// \param[in] device_ctx Define device information during the runtime.
- Context(int thread_num, std::shared_ptr allocator, DeviceContext device_ctx);
-
- public:
- std::shared_ptr allocator = nullptr;
-}
-```
-
### Creating Sessions
Use the `Context` created in the previous step to call the static `CreateSession` method of LiteSession to create `LiteSession`. The `LiteSession` instance returned by the function is a pointer, which is created by using `new`. If the pointer is not required, you need to release it by using `delete`.
-```cpp
-/// \brief Static method to create a LiteSession pointer.
-///
-/// \param[in] context Define the context of session to be created.
-///
-/// \return Pointer of MindSpore Lite LiteSession.
-static LiteSession *CreateSession(lite::Context *context);
-```
-
### Example
The following sample code demonstrates how to create a `Context` and how to allow two `LiteSession` to share a memory pool.
@@ -148,13 +102,16 @@ if (context == nullptr) {
return RET_ERROR;
}
// The preferred backend is GPU, which means, if there is a GPU operator, it will run on the GPU first, otherwise it will run on the CPU.
-context->device_ctx_.type = lite::DT_GPU;
+context->device_type_ = lite::DT_GPU;
// The medium core takes priority in thread and core binding methods. This parameter will work in the BindThread interface. For specific binding effect, see the "Run Graph" section.
context->cpu_bind_mode_ = MID_CPU;
-// Configure the number of worker threads in the thread pool to 2, including the main thread.
+// Configure the number of worker threads in the thread pool to 2, including the main thread.
context->thread_num_ = 2;
// Allocators can be shared across multiple Contexts.
-auto *context2 = new Context(context->thread_num_, context->allocator, context->device_ctx_);
+auto *context2 = new Context();
+context2->thread_num_ = context->thread_num_;
+context2->allocator = context->allocator;
+context2->device_type_ = context->device_type_;
context2->cpu_bind_mode_ = context->cpu_bind_mode_;
// Use Context to create Session.
auto session1 = session::LiteSession::CreateSession(context);
@@ -167,7 +124,7 @@ if (session1 == nullptr) {
// session1 and session2 can share one memory pool.
auto session2 = session::LiteSession::CreateSession(context2);
delete (context2);
-if (session == nullptr) {
+if (session2 == nullptr) {
MS_LOG(ERROR) << "CreateSession failed while running %s", modelName.c_str();
return RET_ERROR;
}
@@ -179,6 +136,8 @@ if (session == nullptr) {
When using MindSpore Lite for inference, after the session creation and graph compilation have been completed, if you need to resize the input shape, you can reset the shape of the input tensor, and then call the session's Resize() interface.
+> Not all models support variable dimensions. For example, when a model contains a MatMul operator whose two inputs are a weight tensor and a data tensor, calling the variable dimension interface causes the shapes of the data tensor and the weight tensor to no longer match.
+
```cpp
/// \brief Get input MindSpore Lite MSTensors of model.
///
virtual std::vector<tensor::MSTensor *> GetInputs() const = 0;
/// \brief Resize inputs shape.
///
-/// \param[in] inputs Define the new inputs shape.
+/// \param[in] inputs Define Model inputs.
+/// \param[in] dims Define all inputs new shape.
///
/// \return STATUS as an error code of resize inputs, STATUS is defined in errorcode.h.
-virtual int Resize(const std::vector<tensor::MSTensor *> &inputs) = 0;
+virtual int Resize(const std::vector<tensor::MSTensor *> &inputs, const std::vector<std::vector<int>> &dims) = 0;
```
### Example
@@ -201,9 +161,10 @@ The following code demonstrates how to resize the input of MindSpore Lite:
// Assume we have created a LiteSession instance named session.
auto inputs = session->GetInputs();
std::vector<int> resize_shape = {1, 128, 128, 3};
+std::vector<std::vector<int>> new_shapes;
+new_shapes.push_back(resize_shape);
// Assume the model has only one input; resize the input shape to [1, 128, 128, 3].
-inputs[0]->set_shape(resize_shape);
-session->Resize(inputs);
+session->Resize(inputs, new_shapes);
```
### Compiling Graphs
@@ -324,14 +285,6 @@ Note:
After a MindSpore Lite session performs graph compilation, you can use `RunGraph` of `LiteSession` for model inference.
```cpp
-/// \brief Run session with callback.
-///
-/// \param[in] before Define a call_back_function to be called before running each node.
-/// \param[in] after Define a call_back_function to be called after running each node.
-///
-/// \note RunGraph should be called after CompileGraph.
-///
-/// \return STATUS as an error code of running graph, STATUS is defined in errorcode.h.
virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBack &after = nullptr) = 0;
```
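
A minimal invocation sketch (assuming `session` is a `LiteSession` whose `CompileGraph` has already succeeded; the error handling is illustrative):

```cpp
// Run one forward pass; RET_OK indicates success.
auto ret = session->RunGraph();
if (ret != mindspore::lite::RET_OK) {
  std::cerr << "RunGraph failed: " << ret << std::endl;
}
```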
@@ -506,16 +459,16 @@ virtual void *MutableData() const = 0;
### Example
-The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputMapByNode` method and print the first ten data or all data records of each output `MSTensor`.
+The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputs` method and print the first ten data records (or all data) of each output `MSTensor`.
```cpp
// Assume we have created a LiteSession instance named session before.
-auto output_map = session->GetOutputMapByNode();
+auto output_map = session->GetOutputs();
// Assume that the model has only one output node.
auto out_node_iter = output_map.begin();
std::string name = out_node_iter->first;
// Assume that the unique output node has only one output tensor.
-auto out_tensor = out_node_iter->second.front();
+auto out_tensor = out_node_iter->second;
if (out_tensor == nullptr) {
std::cerr << "Output tensor is nullptr" << std::endl;
return -1;
@@ -530,7 +483,7 @@ if (out_data == nullptr) {
std::cerr << "Data of out_tensor is nullptr" << std::endl;
return -1;
}
-// Print the first 10 float data or all output data of the output tensor.
+// Print the first 10 float data or all output data of the output tensor.
std::cout << "Output data: ";
for (size_t i = 0; i < 10 && i < out_tensor->ElementsNum(); i++) {
std::cout << " " << out_data[i];
@@ -539,7 +492,7 @@ std::cout << std::endl;
// The elements in outputs do not need to be free by users, because outputs are managed by the MindSpore Lite.
```
-Note that the vectors or map returned by the `GetOutputsByNodeName`, `GetOutputMapByNode`, `GetOutputByTensorName` and `GetOutputMapByTensor` methods do not need to be released by users.
+Note that the vectors or map returned by the `GetOutputsByNodeName`, `GetOutputByTensorName` and `GetOutputs` methods do not need to be released by users.
The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputsByNodeName` method.
@@ -555,28 +508,16 @@ if (out_tensor == nullptr) {
}
```
-The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputMapByTensor` method.
-
-```cpp
-// Assume we have created a LiteSession instance named session before.
-auto output_map = session->GetOutputMapByTensor();
-// Assume that output node named output_node_name_0 has only one output tensor.
-auto out_tensor = output_vec.front();
-if (out_tensor == nullptr) {
- std::cerr << "Output tensor is nullptr" << std::endl;
- return -1;
-}
-```
-
The following sample code shows how to obtain the output `MSTensor` from `LiteSession` using the `GetOutputByTensorName` method.
```cpp
+// Assume we have created a LiteSession instance named session.
// We can use GetOutputTensorNames method to get all name of output tensor of model which is in order.
-auto tensor_names = this->GetOutputTensorNames();
+auto tensor_names = session->GetOutputTensorNames();
// Assume we have created a LiteSession instance named session before.
// Use output tensor name returned by GetOutputTensorNames as key
for (auto tensor_name : tensor_names) {
- auto out_tensor = this->GetOutputByTensorName(tensor_name);
+ auto out_tensor = session->GetOutputByTensorName(tensor_name);
if (out_tensor == nullptr) {
std::cerr << "Output tensor is nullptr" << std::endl;
return -1;
@@ -592,5 +533,114 @@ The following sample code shows how to obtain version string using `Version` met
```cpp
#include "include/version.h"
-std::string version = mindspore::lite::Version();
+std::string version = mindspore::lite::Version();
+```
+
+## Session parallel launch
+MindSpore Lite supports parallel inference across multiple `LiteSession` instances, but does not support multiple threads calling the `RunGraph` interface of a single `LiteSession` at the same time.
+
+### Single Session parallel launch
+
+MindSpore Lite does not support multi-threaded parallel calls to the inference interface of a single `LiteSession`; otherwise the following error message is reported:
+```cpp
+ERROR [mindspore/lite/src/lite_session.cc:297] RunGraph] 10 Not support multi-threading
+```
+
+### Multiple Session parallel launch
+
+MindSpore Lite supports multiple `LiteSession` instances doing inference in parallel. The thread pool and memory pool of each `LiteSession` are independent.
+
+### Example
+
+The following code shows how to create multiple `LiteSession` and do inference in parallel:
+```cpp
+#include <thread>
+#include <iostream>
+#include "src/common/file_utils.h"
+#include "include/model.h"
+#include "include/version.h"
+#include "include/context.h"
+#include "include/lite_session.h"
+
+mindspore::session::LiteSession *GenerateSession(mindspore::lite::Model *model) {
+ if (model == nullptr) {
+ std::cerr << "Read model file failed while running" << std::endl;
+ return nullptr;
+ }
+ auto context = new (std::nothrow) mindspore::lite::Context;
+ if (context == nullptr) {
+ std::cerr << "New context failed while running" << std::endl;
+ return nullptr;
+ }
+
+ auto session = mindspore::session::LiteSession::CreateSession(context);
+ delete (context);
+ if (session == nullptr) {
+ std::cerr << "CreateSession failed while running" << std::endl;
+ return nullptr;
+ }
+ auto ret = session->CompileGraph(model);
+ if (ret != mindspore::lite::RET_OK) {
+ std::cout << "CompileGraph failed while running" << std::endl;
+ delete (session);
+ return nullptr;
+ }
+ auto msInputs = session->GetInputs();
+ for (auto msInput : msInputs) {
+ (void)msInput->MutableData();
+ }
+ return session;
+}
+
+int main(int argc, const char **argv) {
+ size_t size = 0;
+ char *graphBuf = mindspore::lite::ReadFile("test.ms", &size);
+ if (graphBuf == nullptr) {
+ std::cerr << "Read model file failed while running" << std::endl;
+ return -1;
+ }
+ auto model = mindspore::lite::Model::Import(graphBuf, size);
+ if (model == nullptr) {
+ std::cerr << "Import model file failed while running" << std::endl;
+ delete[](graphBuf);
+ return -1;
+ }
+ delete[](graphBuf);
+ auto session1 = GenerateSession(model);
+ if (session1 == nullptr) {
+ std::cerr << "Generate session 1 failed" << std::endl;
+ delete(model);
+ return -1;
+ }
+ auto session2 = GenerateSession(model);
+ if (session2 == nullptr) {
+ std::cerr << "Generate session 2 failed" << std::endl;
+ delete(model);
+ return -1;
+ }
+
+ std::thread thread1([&](){
+ auto status = session1->RunGraph();
+ if (status != 0) {
+ std::cerr << "Inference error " << status << std::endl;
+ return;
+ }
+ std::cout << "Session1 inference success" << std::endl;
+ });
+
+ std::thread thread2([&](){
+ auto status = session2->RunGraph();
+ if (status != 0) {
+ std::cerr << "Inference error " << status << std::endl;
+ return;
+ }
+ std::cout << "Session2 inference success" << std::endl;
+ });
+
+ thread1.join();
+ thread2.join();
+ delete (session1);
+ delete (session2);
+ delete (model);
+ return 0;
+}
```
diff --git a/lite/tutorials/source_en/use/timeprofiler_tool.md b/lite/tutorials/source_en/use/timeprofiler_tool.md
index b0e3d35860448974da085d8230d58654bf46868e..1442ecc46d9b1606ee501e4b3b19ae7139eed88d 100644
--- a/lite/tutorials/source_en/use/timeprofiler_tool.md
+++ b/lite/tutorials/source_en/use/timeprofiler_tool.md
@@ -20,16 +20,16 @@ After model conversion and before inference, you can use the TimeProfiler tool t
To use the TimeProfiler tool, you need to prepare the environment as follows:
-- Compilation: Install build dependencies and perform build. The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profile` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/master/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/master/build.html#compilation-example) in the build document.
+- Compilation: Install build dependencies and perform build. The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profiler` directory of the MindSpore source code. For details about the build operations, see the [Environment Requirements](https://www.mindspore.cn/lite/tutorial/en/master/build.html#environment-requirements) and [Compilation Example](https://www.mindspore.cn/lite/tutorial/en/master/build.html#compilation-example) in the build document.
-- Run: Obtain the `timeprofile` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/tutorial/en/master/build.html#output-description) in the build document.
+- Run: Obtain the `timeprofiler` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/tutorial/en/master/build.html#output-description) in the build document.
## Parameter Description
The command used for analyzing the time consumption of forward inference at the network layer based on the compiled TimeProfiler tool is as follows:
```bash
-./timeprofile --modelPath= [--help] [--loopCount=] [--numThreads=] [--cpuBindMode=] [--inDataPath=] [--fp16Priority=]
+./timeprofiler --modelPath= [--help] [--loopCount=] [--numThreads=] [--cpuBindMode=] [--inDataPath=] [--fp16Priority=]
```
The following describes the parameters in detail.
@@ -49,7 +49,7 @@ The following describes the parameters in detail.
Take the `test_timeprofiler.ms` model as an example and set the number of model inference cycles to 10. The command for using TimeProfiler to analyze the time consumption at the network layer is as follows:
```bash
-./timeprofile --modelPath=./models/test_timeprofiler.ms --loopCount=10
+./timeprofiler --modelPath=./models/test_timeprofiler.ms --loopCount=10
```
After this command is executed, the TimeProfiler tool outputs the statistics on the running time of the model at the network layer. In this example, the command output is as follows. The statistics are displayed by `opName` and `optype`: `opName` indicates the operator name, `optype` indicates the operator type, and `avg` indicates the average running time of the operator per single run; `percent` indicates the ratio of the operator running time to the total operator running time; `calledTimess` indicates the number of times that the operator is run; and `opTotalTime` indicates the total time that the operator is run for a specified number of times. Finally, `total time` and `kernel cost` show the average time consumed by a single inference operation of the model and the sum of the average time consumed by all operators in the model inference, respectively.
diff --git a/lite/tutorials/source_zh_cn/_static/logo_source.png b/lite/tutorials/source_zh_cn/_static/logo_source.png
index fc347d271abe082ae8d16242328551648766b6fb..880f2bc87172daf487654c0ba4f1657c672bd2b8 100644
Binary files a/lite/tutorials/source_zh_cn/_static/logo_source.png and b/lite/tutorials/source_zh_cn/_static/logo_source.png differ
diff --git a/lite/tutorials/source_zh_cn/build.md b/lite/tutorials/source_zh_cn/build.md
index a3e60383d37df133bbfc65f5b614311e45119032..71a6d6c612ed53473bafc8c3e26f9b50f3daf471 100644
--- a/lite/tutorials/source_zh_cn/build.md
+++ b/lite/tutorials/source_zh_cn/build.md
@@ -10,11 +10,7 @@
- [Build Output](#编译输出)
- [Description of the Converter's Directory Structure](#模型转换工具converter目录结构说明)
- [Description of the Runtime and Other Tools' Directory Structure](#模型推理框架runtime及其他工具目录结构说明)
- - [Windows Environment Compilation](#windows环境编译)
- - [Environment Requirements](#环境要求-1)
- - [Compilation Options](#编译选项-1)
- - [Compilation Example](#编译示例-1)
- - [Build Output](#编译输出-1)
+ - [Description of the Image Processing Library's Directory Structure](#图像处理库目录结构说明)
@@ -24,10 +20,11 @@
| Module | Supported Platform | Description |
| --- | ---- | ---- |
-| converter | Linux, Windows | Model conversion tool |
+| converter | Linux | Model conversion tool |
| runtime | Linux, Android | Model inference framework |
| benchmark | Linux, Android | Benchmarking tool |
-| time_profiler | Linux, Android | Performance profiling tool |
+| timeprofiler | Linux, Android | Performance profiling tool |
+| imageprocess | Linux, Android | Image processing library |
## Linux Environment Compilation
@@ -35,7 +32,7 @@
- System environment: Linux x86_64; Ubuntu 18.04.02 LTS is recommended
-- Build dependencies for runtime, benchmark, and time_profiler
+- Build dependencies for runtime, benchmark, and timeprofiler
- [CMake](https://cmake.org/download/) >= 3.14.1
- [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0
- [Android_NDK](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip) >= r20
@@ -53,6 +50,7 @@
- [Libevent](https://libevent.org) >= 2.0
- [M4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18
- [OpenSSL](https://www.openssl.org/) >= 1.1.1
+ - [Python](https://www.python.org/) >= 3.7.5
> - After installing the Android_NDK dependency, configure the environment variable: `export ANDROID_NDK={$NDK_PATH}/android-ndk-r20b`.
> - The build script runs `git clone` to fetch the code of third-party dependencies. Make sure the git network settings are correct and available in advance.
@@ -69,6 +67,7 @@ MindSpore Lite provides the build script `build.sh` for one-click compilation, located in the MindSpor
| -j[n] | Sets the number of threads used during compilation; otherwise the default is 8 threads | Integer | No |
| -e | Selects built-in operator types other than CPU; applicable only to the ARM architecture; currently only GPU is supported | GPU | No |
| -h | Displays the build help information | None | No |
+| -n | Specifies compilation of the lightweight image processing module | lite_cv | No |
> When the `-I` parameter changes, for example from `-I x86_64` to `-I arm64`, adding the `-i` parameter for incremental compilation does not take effect.
@@ -102,11 +101,17 @@ git clone https://gitee.com/mindspore/mindspore.git
bash build.sh -I arm64 -e gpu
```
+- Compile ARM64 with the image preprocessing module.
+ ```bash
+ bash build.sh -I arm64 -n lite_cv
+ ```
+
### Build Output
-After the compilation is complete, go to the `mindspore/output/` directory to view the generated files, which fall into two parts:
+After the compilation is complete, go to the `mindspore/output/` directory to view the generated files, which fall into three parts:
- `mindspore-lite-{version}-converter-{os}.tar.gz`: contains the model conversion tool converter.
-- `mindspore-lite-{version}-runtime-{os}-{device}.tar.gz`: contains the model inference framework runtime, the benchmarking tool benchmark, and the performance profiling tool time_profiler.
+- `mindspore-lite-{version}-runtime-{os}-{device}.tar.gz`: contains the model inference framework runtime, the benchmarking tool benchmark, and the performance profiling tool timeprofiler.
+- `mindspore-lite-{version}-minddata-{os}-{device}.tar.gz`: contains the image processing library imageprocess.
> version: version of the output, consistent with the version of the compiled branch code.
>
@@ -119,6 +124,7 @@ git clone https://gitee.com/mindspore/mindspore.git
```bash
tar -xvf mindspore-lite-{version}-converter-{os}.tar.gz
tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
+tar -xvf mindspore-lite-{version}-minddata-{os}-{device}.tar.gz
```
#### Description of the Converter's Directory Structure
@@ -148,7 +154,7 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
│ └── third_party # Third-party library header files and libraries
│ ├── flatbuffers # FlatBuffers header files
│ └── include # Inference framework header files
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
@@ -159,75 +165,45 @@ tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
│ └── benchmark # Benchmarking tool
│ └── lib # Inference framework dynamic libraries
│ ├── libmindspore-lite.so # Dynamic library of the MindSpore Lite inference framework
- │ ├── liboptimize.so # MindSpore Lite operator performance optimization library
+ │ ├── libmindspore-lite-fp16.so # MindSpore Lite float16 operator performance optimization library
+ │ ├── libmindspore-lite-optimize.so # MindSpore Lite quantized operator performance optimization library
│ └── third_party # Third-party library header files and libraries
│ ├── flatbuffers # FlatBuffers header files
│ └── include # Inference framework header files
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
- When the compilation option is `-I arm32`:
```
|
- ├── mindspore-lite-{version}-runtime-arm64-cpu
+ ├── mindspore-lite-{version}-runtime-arm32-cpu
│ └── benchmark # Benchmarking tool
│ └── lib # Inference framework dynamic libraries
│ ├── libmindspore-lite.so # Dynamic library of the MindSpore Lite inference framework
│ └── third_party # Third-party library header files and libraries
│ ├── flatbuffers # FlatBuffers header files
│ └── include # Inference framework header files
- │ └── time_profile # Model network layer time-consuming analysis tool
+ │ └── time_profiler # Model network layer time-consuming analysis tool
```
-> 1. `liboptimize.so` exists only in the runtime-arm64 output package and is used only on ARMv8.2 CPUs that support fp16.
-> 2. Compiling for ARM64 produces the arm64-cpu inference framework output by default; adding `-e gpu` produces the arm64-gpu output, with the package name `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`; compiling for ARM32 works the same way.
-> 3. Before running the tools in the converter, benchmark, or time_profile directory, configure environment variables so that the paths of the MindSpore Lite and Protobuf dynamic libraries are in the system's dynamic library search path. Taking the 0.7.0-beta build as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/flatbuffers/lib:${LD_LIBRARY_PATH}`; configure benchmark and timeprofiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
-
-## Windows Environment Compilation
-
-### Environment Requirements
-
-- Supported compilation environment: Windows 10, 64-bit.
-
-- Compilation dependencies:
- - [CMake](https://cmake.org/download/) >= 3.14.1
- - [MinGW GCC](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z/download) = 7.3.0
- - [Python](https://www.python.org/) >= 3.7.5
-
-> The build script runs `git clone` to fetch the code of third-party dependencies. Make sure the git network settings are correct and available in advance.
-
-### Compilation Options
-
-The compilation options of MindSpore Lite are as follows.
-
-| Parameter | Parameter Description | Mandatory or Not |
-| -------- | ----- | ---- |
-| **lite** | **Set this parameter to compile the MindSpore Lite project.** | **Yes** |
-| [n] | Sets the number of threads used during compilation; otherwise the default is 6 threads | No |
+> 1. `libmindspore-lite-optimize.so` exists only in the runtime-arm64 output package; it is a performance optimization library usable only on CPUs of ARMv8.2 or later that support the dotprod instruction.
+> 2. `libmindspore-lite-fp16.so` exists only in the runtime-arm64 output package; it is a performance optimization library usable only on CPUs of ARMv8.2 or later that support fp16.
+> 3. Compiling for ARM64 produces the arm64-cpu inference framework output by default; adding `-e gpu` produces the arm64-gpu output, with the package name `mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`; compiling for ARM32 works the same way.
+> 4. Before running the tools in the converter, benchmark, or time_profiler directory, configure environment variables so that the paths of the MindSpore Lite and Protobuf dynamic libraries are in the system's dynamic library search path. Taking the 0.7.0-beta build as an example: configure converter: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib:./output/mindspore-lite-0.7.0-converter-ubuntu/third_party/flatbuffers/lib:${LD_LIBRARY_PATH}`; configure benchmark and timeprofiler: `export LD_LIBRARY_PATH=./output/mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}`.
-### Build example
+#### Directory structure of the image processing library
-First, use git to download the source code from the MindSpore code repository.
+The image processing library is obtained with the `-I arm64 -n lite_cv` build options and includes the following parts:
-```bash
-git clone https://gitee.com/mindspore/mindspore.git
```
-
-Then, in the source root directory, run the following commands with the cmd tool to build MindSpore Lite.
-
-- Build the Windows version with the default number of threads (6).
- ```bash
- call build.bat lite
- ```
-- Build the Windows version with 8 threads.
- ```bash
- call build.bat lite 8
- ```
-
-### Build output
-
-After the build completes, go to the `mindspore/output/` directory and decompress the archive to obtain the output artifact `mindspore-lite-{version}-converter-win-cpu.zip`, which contains the converter executable.
-
-> version: version number of the output artifact, matching the branch code used for the build.
+|
+├── mindspore-lite-{version}-minddata-{os}-{device}
+│ └── include # header files
+│ ├── lite_cv # image processing library header files
+│ └── lib # dynamic libraries
+│ ├── libminddata-lite.so # image processing dynamic library
+│ └── third_party # third-party headers and libraries
+│ ├── flatbuffers # FlatBuffers dynamic library
+```
diff --git a/lite/tutorials/source_zh_cn/index.rst b/lite/tutorials/source_zh_cn/index.rst
index 3bfde552d2bec6205ba366d6d30c200bce0904d7..1f3de867e254412ed590e67d0ed725519cbb3b4e 100644
--- a/lite/tutorials/source_zh_cn/index.rst
+++ b/lite/tutorials/source_zh_cn/index.rst
@@ -21,4 +21,5 @@ MindSpore端侧教程
build
use/converter_tool
use/evaluating_the_model
+ use/image_processing
use/runtime
diff --git a/lite/tutorials/source_zh_cn/quick_start/quick_start.md b/lite/tutorials/source_zh_cn/quick_start/quick_start.md
index ef76d900d3bbb15f9e2680656e356f7e9bf71b2a..046ea3cabfe9be898d821c9752c98363b918be37 100644
--- a/lite/tutorials/source_zh_cn/quick_start/quick_start.md
+++ b/lite/tutorials/source_zh_cn/quick_start/quick_start.md
@@ -42,7 +42,7 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
Taking the mobilenetv2 model as an example, the following script converts it into a MindSpore Lite model for on-device inference.
```bash
-./converter_lite --fmk=MS --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
+./converter_lite --fmk=MINDIR --modelFile=mobilenetv2.mindir --outputFile=mobilenetv2.ms
```
## Deploying the application
@@ -53,9 +53,9 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
- Android Studio >= 3.2 (4.0 or later recommended)
- NDK 21.3
-- CMake 3.10.2
+- [CMake](https://cmake.org/download) 3.10.2
- Android SDK >= 26
-- OpenCV >= 4.0.0 (already included in this sample code)
+- [JDK]( https://www.oracle.com/downloads/otn-pub/java/JDK/) >= 1.8
### Building and running
@@ -67,7 +67,7 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds

- (Optional) If an NDK version problem occurs during installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads?hl=zh-cn) (this sample code uses NDK 21.3) and specify the SDK location in the `Android NDK location` setting of `Project Structure`.
+ (Optional) If an NDK version problem occurs during installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads?hl=zh-cn) (this sample code uses NDK 21.3) and specify the NDK location in the `Android NDK location` setting of `Project Structure`.

@@ -79,10 +79,14 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
For connecting a device and debugging with Android Studio, see .
+ "USB debugging" must be enabled on the phone so that Android Studio can recognize it. On Huawei phones, "USB debugging" is usually enabled under `Settings -> System & updates -> Developer options -> USB debugging`.
+
3. On the Android device, tap "Continue installation". After installation, you can view the content captured by the device camera and the inference result.

+
+
The recognition result is shown in the following figure.

@@ -98,29 +102,22 @@ MindSpore Model Zoo中图像分类模型可[在此下载](https://download.minds
```
app
-|
-├── libs # library files that MindSpore Lite depends on
-│ └── arm64-v8a
-│ ├── libopencv_java4.so
-│ └── libmindspore-lite.so
-│
-├── opencv # OpenCV-related dependency files
-│ └── ...
-|
├── src/main
│ ├── assets # resource files
-| | └── model.ms # model file
+| | └── mobilenetv2.ms # model file
│ |
│ ├── cpp # classes wrapping the main model loading and inference logic
| | ├── ..
+| | ├── mindspore_lite_x.x.x-minddata-arm64-cpu # MindSpore Lite release package
| | ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calls
│ | └── MindSporeNetnative.h # header file
+| | └── MsNetWork.cpp # wrapper around the MindSpore interfaces
│ |
│ ├── java # Java-layer application code
-│ │ └── com.huawei.himindsporedemo
+│ │ └── com.mindspore.himindsporedemo
│ │ ├── gallery.classify # image processing and MindSpore JNI call implementation
│ │ │ └── ...
-│ │ └── obejctdetect # camera launch and drawing implementation
+│ │ └── widget # camera launch and drawing implementation
│ │ └── ...
│ │
│ ├── res # Android resource files
@@ -129,6 +126,7 @@ app
├── CMakeList.txt # CMake build entry file
│
├── build.gradle # other Android configuration file
+├── download.gradle # downloads project dependency files
└── ...
```
@@ -136,19 +134,11 @@ app
When the Android JNI layer calls the MindSpore C++ API, the relevant library files are required. The `libmindspore-lite.so` library can be generated by [building MindSpore Lite from source](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html).
-In this sample, the build process automatically downloads the `libmindspore-lite.so` library and OpenCV's `libopencv_java4.so` library as configured in the download.gradle file, and places them in the `app/libs/arm64-v8a` directory.
+In this sample, the build process uses the download.gradle file to automatically download the MindSpore Lite release files from a Huawei server and places them in the `app/src/main/cpp/mindspore_lite_x.x.x-minddata-arm64-cpu` directory.
Note: if the automatic download fails, manually download the relevant library files and put them in the corresponding locations:
-libmindspore-lite.so [download link](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
-
-libmindspore-lite include files [download link](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/include.zip)
-
-libopencv_java4.so [download link](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so)
-
-libopencv include files [download link](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/include.zip)
-
-
+MindSpore Lite release package [download link](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%201.0/mindspore-lite-1.0.0-minddata-arm64-cpu.tar.gz)
```
android{
@@ -169,23 +159,29 @@ android{
Set up the `.so` library links in the `app/CMakeLists.txt` file, as shown below.
```
-# Set MindSpore Lite Dependencies.
-include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/include/MindSpore)
+# ============== Set MindSpore Dependencies. =============
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/third_party/flatbuffers/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION})
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/ir/dtype)
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/include/schema)
+
add_library(mindspore-lite SHARED IMPORTED )
-set_target_properties(mindspore-lite PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libmindspore-lite.so")
+add_library(minddata-lite SHARED IMPORTED )
-# Set OpenCV Dependecies.
-include_directories(${CMAKE_SOURCE_DIR}/opencv/sdk/native/jni/include)
-add_library(lib-opencv SHARED IMPORTED )
-set_target_properties(lib-opencv PROPERTIES
- IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libopencv_java4.so")
+set_target_properties(mindspore-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libmindspore-lite.so)
+set_target_properties(minddata-lite PROPERTIES IMPORTED_LOCATION
+ ${CMAKE_SOURCE_DIR}/src/main/cpp/${MINDSPORELITE_VERSION}/lib/libminddata-lite.so)
+# --------------- MindSpore Lite set End. --------------------
# Link target library.
target_link_libraries(
...
- mindspore-lite
- lib-opencv
+ # --- mindspore ---
+ minddata-lite
+ mindspore-lite
...
)
```
@@ -218,13 +214,12 @@ target_link_libraries(
*labelEnv = labelNet;
// Create context.
- lite::Context *context = new lite::Context;
- context->device_ctx_.type = lite::DT_CPU;
- context->thread_num_ = numThread; //Specify the number of threads to run inference
+ mindspore::lite::Context *context = new mindspore::lite::Context;
+ context->thread_num_ = num_thread;
// Create the mindspore session.
- labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
- delete(context);
+ labelNet->CreateSessionMS(modelBuffer, bufferLen, context);
+ delete (context);
```
@@ -245,7 +240,7 @@ target_link_libraries(
```cpp
// Convert the Bitmap image passed in from the JAVA layer to Mat for OpenCV processing
- BitmapToMat(env, srcBitmap, matImageSrc);
+ BitmapToMat(env, srcBitmap, matImageSrc);
// Processing such as resizing the image.
matImgPreprocessed = PreProcessImageData(matImageSrc);
@@ -270,7 +265,38 @@ target_link_libraries(
delete[] (dataHWC);
```
-3. Run inference on the input tensor according to the model, obtain the output tensor, and post-process it.
+3. Process the input data.
+
+ ```cpp
+ bool PreProcessImageData(const LiteMat &lite_mat_bgr, LiteMat *lite_norm_mat_ptr) {
+ bool ret = false;
+ LiteMat lite_mat_resize;
+ LiteMat &lite_norm_mat_cut = *lite_norm_mat_ptr;
+ ret = ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+ if (!ret) {
+ MS_PRINT("ResizeBilinear error");
+ return false;
+ }
+ LiteMat lite_mat_convert_float;
+ ret = ConvertTo(lite_mat_resize, lite_mat_convert_float, 1.0 / 255.0);
+ if (!ret) {
+ MS_PRINT("ConvertTo error");
+ return false;
+ }
+ LiteMat lite_mat_cut;
+ ret = Crop(lite_mat_convert_float, lite_mat_cut, 16, 16, 224, 224);
+ if (!ret) {
+ MS_PRINT("Crop error");
+ return false;
+ }
+ float means[3] = {0.485, 0.456, 0.406};
+ float vars[3] = {1.0 / 0.229, 1.0 / 0.224, 1.0 / 0.225};
+ SubStractMeanNormalize(lite_mat_cut, lite_norm_mat_cut, means, vars);
+ return true;
+ }
+ ```
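+
+ A hedged sketch of a call site for the function above (`lite_mat_bgr` is assumed to be a `LiteMat` already initialized with `InitFromPixel`, as shown in the image preprocessing tutorial):
+
+ ```cpp
+ // Hypothetical call site: run the preprocessing routine on an initialized
+ // LiteMat and check the returned status.
+ LiteMat lite_norm_mat;
+ if (!PreProcessImageData(lite_mat_bgr, &lite_norm_mat)) {
+ MS_PRINT("Image preprocessing failed");
+ }
+ ```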
+
+4. Run inference on the input tensor according to the model, obtain the output tensor, and post-process it.
- Execute the graph for on-device inference.
@@ -281,7 +307,12 @@ target_link_libraries(
- Obtain the output data.
```cpp
- auto msOutputs = mSession->GetOutputs();
+ auto names = mSession->GetOutputTensorNames();
+ std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
+ for (const auto &name : names) {
+ auto temp_dat = mSession->GetOutputByTensorName(name);
+ msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *>{name, temp_dat});
+ }
std::string retStr = ProcessRunnetResult(msOutputs, ret);
```
@@ -290,39 +321,34 @@ target_link_libraries(
std::string ProcessRunnetResult(std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs, int runnetRet) {
- // Get model output results.
- std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
- iter = msOutputs.begin();
- auto brach1_string = iter->first;
- auto branch1_tensor = iter->second;
+ std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
+ iter = msOutputs.begin();
- int OUTPUTS_LEN = branch1_tensor->ElementsNum();
+ // The mobilenetv2.ms model outputs just one branch.
+ auto outputTensor = iter->second;
+ int tensorNum = outputTensor->ElementsNum();
- float *temp_scores = static_cast<float *>(branch1_tensor->MutableData());
- float scores[RET_CATEGORY_SUM];
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- scores[i] = temp_scores[i];
- }
+ // Get a pointer to the first score.
+ float *temp_scores = static_cast<float *>(outputTensor->MutableData());
- // Converted to text information that needs to be displayed in the APP.
- std::string retStr = "";
- if (runnetRet == 0) {
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- if (scores[i] > 0.3){
- retStr += g_labels_name_map[i];
- retStr += ":";
- std::string score_str = std::to_string(scores[i]);
- retStr += score_str;
- retStr += ";";
- }
- }
- else {
- MS_PRINT("MindSpore run net failed!");
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- retStr += " :0.0;";
- }
- }
+ float scores[RET_CATEGORY_SUM];
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ if (temp_scores[i] > 0.5) {
+ MS_PRINT("MindSpore scores[%d] : [%f]", i, temp_scores[i]);
+ }
+ scores[i] = temp_scores[i];
+ }
- return retStr;
+ // Score for each category.
+ // Converted to text information that needs to be displayed in the APP.
+ std::string categoryScore = "";
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ categoryScore += labels_name_map[i];
+ categoryScore += ":";
+ std::string score_str = std::to_string(scores[i]);
+ categoryScore += score_str;
+ categoryScore += ";";
+ }
+ return categoryScore;
}
```
diff --git a/lite/tutorials/source_zh_cn/use/benchmark_tool.md b/lite/tutorials/source_zh_cn/use/benchmark_tool.md
index 83c6aadc638de8c469b46875c7a1f863e148c539..69d329a0860a786b702463a34342d6634f487274 100644
--- a/lite/tutorials/source_zh_cn/use/benchmark_tool.md
+++ b/lite/tutorials/source_zh_cn/use/benchmark_tool.md
@@ -63,12 +63,6 @@ Mean bias of all nodes: 0%
=======================================================
```
-When the input and output data type of the original model is uint8, subtract 128 and convert the data to int8 before using it as benchmark data for accuracy verification. When the output data type is int8, set calibDataType to INT8 in the parameters.
-
-```bash
-./benchmark --modelPath=./models/test_benchmark_int8.ms --inDataPath=./input/test_benchmark_int8.bin --device=CPU --accuracyThreshold=3 --calibDataPath=./output/test_benchmark_int8.out --calibDataType=INT8
-```
-
## Parameter description
@@ -76,11 +70,10 @@ Mean bias of all nodes: 0%
```bash
./benchmark [--modelPath=<MODELPATH>] [--accuracyThreshold=<ACCURACYTHRESHOLD>]
- [--calibDataPath=<CALIBDATAPATH>] [--cpuBindMode=<CPUBINDMODE>]
- [--device=<DEVICE>] [--help] [--inDataPath=<INDATAPATH>]
- [--inDataType=<INDATATYPE>] [--loopCount=<LOOPCOUNT>]
- [--numThreads=<NUMTHREADS>] [--omModelPath=<OMMODELPATH>]
- [--resizeDims=<RESIZEDIMS>] [--warmUpLoopCount=<WARMUPLOOPCOUNT>]
+ [--calibDataPath=<CALIBDATAPATH>] [--calibDataType=<CALIBDATATYPE>]
+ [--cpuBindMode=<CPUBINDMODE>] [--device=<DEVICE>] [--help]
+ [--inDataPath=<INDATAPATH>] [--loopCount=<LOOPCOUNT>]
+ [--numThreads=<NUMTHREADS>] [--warmUpLoopCount=<WARMUPLOOPCOUNT>]
[--fp16Priority=<FP16PRIORITY>]
```
@@ -91,7 +84,7 @@ Mean bias of all nodes: 0%
| `--modelPath=<MODELPATH>` | Required | Path of the MindSpore Lite model file to benchmark. | String | null | - |
| `--accuracyThreshold=<ACCURACYTHRESHOLD>` | Optional | Accuracy threshold. | Float | 0.5 | - |
| `--calibDataPath=<CALIBDATAPATH>` | Optional | Path of the benchmark data file. The benchmark data, used as the comparison output for the model under test, is produced by forward inference on the same input with another deep learning framework. | String | null | - |
-| `--calibDataType=<CALIBDATATYPE>` | Optional | Benchmark data type. | String | FLOAT | FLOAT, INT8 |
+| `--calibDataType=<CALIBDATATYPE>` | Optional | Benchmark data type. | String | FLOAT | FLOAT, INT8, UINT8 |
| `--cpuBindMode=<CPUBINDMODE>` | Optional | Type of CPU core bound when the inference program runs. | Integer | 1 | -1: medium cores 1: big cores 0: no binding |
| `--device=<DEVICE>` | Optional | Type of device on which the inference program runs. | String | CPU | CPU, GPU |
| `--help` | Optional | Displays help information for the `benchmark` command. | - | - | - |
diff --git a/lite/tutorials/source_zh_cn/use/converter_tool.md b/lite/tutorials/source_zh_cn/use/converter_tool.md
index 1b9ad944df5fa482e4e91a49b80a0234a86cc8f9..122ded8747984115886f42af209ef9847272426a 100644
--- a/lite/tutorials/source_zh_cn/use/converter_tool.md
+++ b/lite/tutorials/source_zh_cn/use/converter_tool.md
@@ -35,7 +35,7 @@ MindSpore Lite提供离线转换模型功能的工具,支持多种类型的模
### Usage example
-First, run the following build command in the source root directory; see `build.md` for reference.
+Run the following build command in the source root directory; see `build.md` for reference.
```bash
bash build.sh -I x86_64
```
@@ -53,7 +53,7 @@ bash build.sh -I x86_64
The result is:
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
This indicates that the Caffe model has been successfully converted into a MindSpore Lite model and the new file `lenet.ms` has been generated.
@@ -61,7 +61,7 @@ bash build.sh -I x86_64
- MindSpore model `model.mindir`
```bash
- ./converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
+ ./converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite model `model.tflite`
@@ -79,16 +79,17 @@ bash build.sh -I x86_64
./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining
```
- - Set both the input and the output of the aware-training quantized model to int8
+ - Set the input and output types of the aware-training quantized model to float
```bash
- ./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining --inputInferenceType=INT8 --inferenceType=INT8
+ ./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining --inferenceType=FLOAT
```
In all of the above cases, the following success message is displayed and the `model.ms` target file is generated.
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
+- If the conversion command fails, the program returns an [error code](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/errorcode_and_metatype.html).
> For a post-training quantization example, see .
@@ -101,15 +102,18 @@ MindSpore Lite模型转换工具提供了多种参数设置,用户可根据需
| Parameter | Required | Description | Value range | Default |
| -------- | ------- | ----- | --- | ---- |
| `--help` | No | Prints all help information. | - | - |
-| `--fmk=<FMK>` | Yes | Original format of the input model. | MS, CAFFE, TFLITE, ONNX | - |
+| `--fmk=<FMK>` | Yes | Original format of the input model. | MINDIR, CAFFE, TFLITE, ONNX | - |
| `--modelFile=<MODELFILE>` | Yes | Path of the input model. | - | - |
| `--outputFile=<OUTPUTFILE>` | Yes | Path of the output model (the directory is created automatically if it does not exist); no suffix is needed, and the `.ms` suffix is generated automatically. | - | - |
| `--weightFile=<WEIGHTFILE>` | Required when converting Caffe models | Path of the weight file of the input model. | - | - |
-| `--quantType=<QUANTTYPE>` | No | Quantization type of the model. | PostTraining: post-training quantization AwareTraining: aware-training quantization. | - |
-| `--inputInferenceType=<INPUTINFERENCETYPE>` | No | Input data type of the aware-training quantized model. If it differs from that of the original model, the conversion tool inserts a conversion operator before the model so that the converted model's input type matches inputInferenceType. | FLOAT, INT8 | FLOAT |
-| `--inferenceType=<INFERENCETYPE>` | No | Output data type of the aware-training quantized model. If it differs from that of the original model, the conversion tool inserts a conversion operator before the model so that the converted model's output type matches inferenceType. | FLOAT, INT8 | FLOAT |
+| `--quantType=<QUANTTYPE>` | No | Quantization type of the model. | WeightQuant: post-training quantization (weight quantization) PostTraining: post-training quantization (full quantization) AwareTraining: aware-training quantization | - |
+| `--inferenceType=<INFERENCETYPE>` | No | Input and output data type of the aware-training quantized model. If it differs from that of the original model, the conversion tool inserts conversion operators before and after the model so that the converted model's input and output types match inferenceType. | UINT8, FLOAT, INT8 | FLOAT |
| `--stdDev=<STDDEV>` | No | Standard deviation of the input data, set when converting an aware-training quantized model. | (0,+∞) | 128 |
| `--mean=<MEAN>` | No | Mean of the input data, set when converting an aware-training quantized model. | [-128, 127] | -0.5 |
+| `--bitNum=<BITNUM>` | No | Bit width for post-training quantization (weight quantization); currently only 8-bit quantization is supported | 8 | 8 |
+| `--quantSize=<QUANTSIZE>` | No | Convolution kernel size threshold for post-training quantization (weight quantization); weights of convolutions whose kernel size exceeds this value are quantized | (0,+∞) | 0 |
+| `--convWeightQuantChannelThreshold=<CONVWEIGHTQUANTCHANNELTHRESHOLD>` | No | Convolution channel count threshold for post-training quantization (weight quantization); weights of convolutions with more channels than this value are quantized | (0,+∞) | 16 |
+| `--config_file=<CONFIGFILE>` | No | Path of the calibration dataset configuration file for post-training quantization (full quantization) | - | - |
> - The parameter name and value are joined by an equals sign with no spaces in between.
> - A Caffe model usually consists of two files: `*.prototxt` (model structure), corresponding to the `--modelFile` parameter, and `*.caffemodel` (model weights), corresponding to the `--weightFile` parameter.
@@ -120,9 +124,7 @@ MindSpore Lite模型转换工具提供了多种参数设置,用户可根据需
To use the MindSpore Lite model conversion tool, prepare the environment as follows.
-- Build: the conversion tool code is in the `mindspore/lite/tools/converter` directory of the MindSpore source code. Build the Windows version by following the [environment requirements](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id5) and [build example](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id7) in the deployment documentation.
-
-- Run: obtain the `converter` tool by following the [build output](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id8) section of the deployment documentation, and configure the MinGW environment variable (add MinGW's bin directory to the system Path variable).
+- Obtain the toolkit: download the Zip package of the Windows conversion tool and decompress it to a local directory to obtain the `converter` tool.
### Parameter description
@@ -130,12 +132,7 @@ MindSpore Lite模型转换工具提供了多种参数设置,用户可根据需
### Usage example
-First, run the following command with the cmd tool in the source root directory to build; see `build.md` for reference.
-```bash
-call build.bat lite
-```
-
-Then, set the log level to INFO.
+First, set the log level to INFO.
```bash
set MSLOG=INFO
```
@@ -152,7 +149,7 @@ set MSLOG=INFO
The result is:
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
This indicates that the Caffe model has been successfully converted into a MindSpore Lite model and the new file `lenet.ms` has been generated.
@@ -160,7 +157,7 @@ set MSLOG=INFO
- MindSpore model `model.mindir`
```bash
- call converter_lite --fmk=MS --modelFile=model.mindir --outputFile=model
+ call converter_lite --fmk=MINDIR --modelFile=model.mindir --outputFile=model
```
- TensorFlow Lite model `model.tflite`
@@ -180,5 +177,6 @@ set MSLOG=INFO
In all of the above cases, the following success message is displayed and the `model.ms` target file is generated.
```
- INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
+ CONVERTER RESULT SUCCESS:0
```
+- If the conversion command fails, the program returns an [error code](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/errorcode_and_metatype.html).
diff --git a/lite/tutorials/source_zh_cn/use/image_processing.md b/lite/tutorials/source_zh_cn/use/image_processing.md
new file mode 100644
index 0000000000000000000000000000000000000000..139ad867a98de40dc0534a9c8ca74c3ce58b6c79
--- /dev/null
+++ b/lite/tutorials/source_zh_cn/use/image_processing.md
@@ -0,0 +1,149 @@
+# Preprocessing image data
+
+
+
+- [Preprocessing image data](#preprocessing-image-data)
+ - [Overview](#overview)
+ - [Importing the image preprocessing library](#importing-the-image-preprocessing-library)
+ - [Initializing the image](#initializing-the-image)
+ - [Usage example](#usage-example)
+ - [Optional image preprocessing operators](#optional-image-preprocessing-operators)
+ - [Resizing the image](#resizing-the-image)
+ - [Usage example](#usage-example-1)
+ - [Converting the image data type](#converting-the-image-data-type)
+ - [Usage example](#usage-example-2)
+ - [Cropping the image](#cropping-the-image)
+ - [Usage example](#usage-example-3)
+ - [Normalizing the image](#normalizing-the-image)
+ - [Usage example](#usage-example-4)
+
+
+
+## Overview
+
+The main purposes of image preprocessing are to eliminate irrelevant information in the image, recover the useful real information, enhance the detectability of relevant information, and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching, and recognition. Here, a LiteMat object is created and the image data is processed before inference so that it meets the data format requirements of model inference.
+
+The flow is as follows: initialize the image, then apply the optional preprocessing operators described below as needed.
+
+## Importing the image preprocessing library
+
+```
+#include "lite_cv/lite_mat.h"
+#include "lite_cv/image_process.h"
+```
+
+## Initializing the image
+
+The [InitFromPixel](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/dataset.html#initfrompixel) function in the `image_process.h` file is used to initialize the image.
+
+```
+bool InitFromPixel(const unsigned char *data, LPixelType pixel_type, LDataType data_type, int w, int h, LiteMat &m);
+```
+
+### Usage example
+
+```
+// Create the data object of the LiteMat object.
+LiteMat lite_mat_bgr;
+
+// Initialize the lite_mat_bgr object.
+// The image data pointer passed in by the user (The data in the Bitmap corresponding to the Android platform).
+InitFromPixel(pixel_ptr, LPixelType::RGBA2GRAY, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+```
+
+## Optional image preprocessing operators
+
+The image processing operators below can be combined freely according to the actual needs.
+
+### Resizing the image
+
+The [ResizeBilinear](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/dataset.html#resizebilinear) function in `image_process.h` resizes the image using the bilinear algorithm. Currently, only the uint8 data type is supported, and only 3-channel and 1-channel images are supported.
+
+```
+bool ResizeBilinear(const LiteMat &src, LiteMat &dst, int dst_w, int dst_h);
+```
+
+#### Usage example
+
+```
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create a resize image data object.
+LiteMat lite_mat_resize;
+
+// Resize the image.
+ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+```
+
+### Converting the image data type
+
+The [ConvertTo](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/dataset.html#convertto) function in `image_process.h` converts the image data type. Currently, only conversion from uint8 to float is supported.
+
+```
+bool ConvertTo(const LiteMat &src, LiteMat &dst, double scale = 1.0);
+```
+
+#### Usage example
+
+```
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create the converted data type object.
+LiteMat lite_mat_convert_float;
+
+// Perform conversion type operations on the object. The currently supported conversion is to convert uint8 to float.
+ConvertTo(lite_mat_bgr, lite_mat_convert_float);
+```
+
+### Cropping the image
+
+The [Crop](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/dataset.html#crop) function in `image_process.h` crops the image. Currently, 3-channel and 1-channel images are supported.
+
+```
+bool Crop(const LiteMat &src, LiteMat &dst, int x, int y, int w, int h);
+```
+
+#### Usage example
+
+```
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// Create the cropped object.
+LiteMat lite_mat_cut;
+
+// The image is cropped by the values of x, y, w, h.
+Crop(lite_mat_bgr, lite_mat_cut, 16, 16, 224, 224);
+```
+
+### Normalizing the image
+
+Normalization is used to eliminate the influence of different scales among data dimensions and make the data comparable. The [SubStractMeanNormalize](https://www.mindspore.cn/lite/docs/zh-CN/master/apicc/dataset.html#substractmeannormalize) function in `image_process.h` normalizes the image data.
+
+```
+bool SubStractMeanNormalize(const LiteMat &src, LiteMat &dst, float *mean, float *norm);
+```
+
+#### Usage example
+
+```
+// Initialize the image data.
+LiteMat lite_mat_bgr;
+InitFromPixel(rgba_mat.data, LPixelType::RGBA2BGR, LDataType::UINT8, rgba_mat.cols, rgba_mat.rows, lite_mat_bgr);
+
+// The mean value of the image data.
+// The variance of the image data.
+float means[1] = {0.485};
+float norm[1] = {1.0 / 0.229};
+
+// Create a normalized image object.
+LiteMat lite_mat_bgr_norm;
+
+// The image data is normalized by the mean value and variance of the image data.
+SubStractMeanNormalize(lite_mat_bgr, lite_mat_bgr_norm, means, norm);
+```
\ No newline at end of file
diff --git a/lite/tutorials/source_zh_cn/use/post_training_quantization.md b/lite/tutorials/source_zh_cn/use/post_training_quantization.md
index 839a7347ac9387f3b7de95852484447a65f1f75c..edf2be8f7910701a6a3a14d055b3dac7ee3ab626 100644
--- a/lite/tutorials/source_zh_cn/use/post_training_quantization.md
+++ b/lite/tutorials/source_zh_cn/use/post_training_quantization.md
@@ -4,9 +4,15 @@
- [Post-training quantization](#post-training-quantization)
- [Overview](#overview)
- - [Usage example](#usage-example)
- - [Accuracy results for selected models](#accuracy-results-for-selected-models)
- - [Parameter description](#parameter-description)
+ - [Weight quantization](#weight-quantization)
+ - [Parameter description](#parameter-description)
+ - [Usage steps](#usage-steps)
+ - [Accuracy results for selected models](#accuracy-results-for-selected-models)
+ - [Full quantization](#full-quantization)
+ - [Parameter description](#parameter-description-1)
+ - [Usage steps](#usage-steps-1)
+ - [Accuracy results for selected models](#accuracy-results-for-selected-models-1)
+
@@ -14,14 +20,84 @@
## Overview
-For a trained `float32` model, post-training quantization converts the model to `int8`, which not only reduces model size but also significantly improves inference performance. In the MindSpore on-device framework, this functionality is integrated into the model conversion tool `conveter_lite`; a quantized model can be obtained by adding command-line parameters.
+For a trained `float32` model, post-training quantization converts it to `int8`, which not only reduces model size but also significantly improves inference performance. In MindSpore Lite, this functionality is integrated into the model conversion tool `converter_lite`; a quantized model can be obtained by adding command-line parameters.
Post-training quantization is currently in the alpha stage (some networks are supported; multi-input models are not) and is being continuously improved.
+MindSpore Lite post-training quantization falls into two categories:
+1. Weight quantization: quantizes only the weights of the model;
+2. Full quantization: quantizes the weights, activations, and bias values of the model together.
+
+The data types and parameter settings required differ between the two cases, but both are configured through the conversion tool. For how to use the conversion tool `converter_lite`, see [Converting into a MindSpore Lite model](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html). Post-training quantization is enabled by adding configuration on top of that.
+
+## Weight quantization
+
+The following describes how to use weight quantization and its effects.
+
+### Parameter description
+
+The general form of the weight quantization conversion command is:
+```
+./converter_lite --fmk=ModelType --modelFile=ModelFilePath --outputFile=ConvertedModelPath --quantType=WeightQuant --bitNum=BitNumValue --quantSize=QuantizationSizeThresholdValue --convWeightQuantChannelThreshold=ConvWeightQuantChannelThresholdValue
+```
+The quantization-related parameters of this command are described below:
+
+| Parameter | Attribute | Description | Type | Default | Value range |
+| -------- | ------- | ----- | ----- |----- | ----- |
+| `--quantType=` | Required | Set to WeightQuant to enable weight quantization | String | - | Must be WeightQuant |
+| `--bitNum=` | Optional | Bit width for weight quantization; currently only 8-bit quantization is supported | Integer | 8 | 8 |
+| `--quantSize=` | Optional | Convolution kernel size threshold for weight quantization; weights of convolutions whose kernel size exceeds this value are quantized. A value of 500 is recommended | Integer | 0 | (0,+∞) |
+| `--convWeightQuantChannelThreshold=` | Optional | Convolution channel count threshold for weight quantization; weights of convolutions with more channels than this value are quantized. A value of 16 is recommended | Integer | 16 | (0,+∞) |
+
+Users can adjust the weight quantization parameters according to the model and their needs.
+
+
+### Usage steps
+
+1. Build the `converter_lite` executable correctly. See the build document [Building MindSpore Lite](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html) to obtain the `converter_lite` tool and configure the environment variables.
+2. Taking a TensorFlow Lite model as an example, run the weight quantization model conversion command:
+ ```
+ ./converter_lite --fmk=TFLITE --modelFile=Inception_v3.tflite --outputFile=Inception_v3.tflite --quantType=WeightQuant --bitNum=8 --quantSize=0 --convWeightQuantChannelThreshold=0
+ ```
+3. After the command succeeds, the quantized model `Inception_v3.tflite.ms` is obtained; the quantized model size usually drops to about 1/4 of the FP32 model.
+
+### Accuracy results for selected models
+
+ | Model | Test dataset | FP32 model accuracy | Weight quantization accuracy |
+ | -------- | ------- | ----- | ----- |
+ | [Inception_V3](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) | [ImageNet](http://image-net.org/) | 77.92% | 77.84% |
+ | [Mobilenet_V1_1.0_224](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) | [ImageNet](http://image-net.org/) | 70.96% | 70.56% |
+
+> All results above were measured in an x86 environment.
+
+## Full quantization
+
+The following describes how to use full quantization and its effects.
+
+### Parameter description
+
+The general form of the full quantization conversion command is:
```
./converter_lite --fmk=ModelType --modelFile=ModelFilePath --outputFile=ConvertedModelPath --quantType=PostTraining --config_file=config.cfg
```
+The quantization-related parameters of this command are described below:
-## Usage example
+| Parameter | Attribute | Description | Type | Default | Value range |
+| -------- | ------- | ----- | ----- |----- | ----- |
+| `--quantType=` | Required | Set to PostTraining to enable full quantization | String | - | Must be PostTraining |
+| `--config_file=` | Required | Path of the calibration dataset configuration file | String | - | - |
+
+To compute the quantization parameters of activations, the user needs to provide a calibration dataset. Ideally, the calibration dataset comes from real inference scenarios and represents the actual inputs of the model; around 100 samples are appropriate.
+The calibration dataset configuration file defines the relevant parameters in `key=value` form. The `key`s to configure are as follows:
+
+| Parameter | Attribute | Description | Type | Default | Value range |
+| -------- | ------- | ----- | ----- | ----- | ----- |
+| image_path | Required | Directory containing the calibration dataset | String | - | The directory holds input data that can be fed directly to inference. Since the framework does not yet support data preprocessing, all data must be converted in advance to meet the inference input requirements. |
+| batch_count | Optional | Number of inputs to use | Integer | 100 | (0,+∞) |
+| method_x | Optional | Quantization algorithm for layer input and output data | String | KL | KL, MAX_MIN. KL: calibrates the data range based on [KL divergence](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf); MAX_MIN: computes quantization parameters from the maximum and minimum values. For relatively simple models and datasets, MAX_MIN is recommended |
+| thread_num | Optional | Number of threads used when running inference on the calibration dataset | Integer | 1 | (0,+∞) |
+
+
+### Usage steps
1. Build the `converter_lite` executable correctly.
2. Prepare the calibration dataset, assumed to be stored in the `/dir/images` directory, and write the configuration file `config.cfg` with the following content:
@@ -32,34 +108,17 @@
thread_num=1
```
The calibration dataset can be a subset of the test dataset. Each file in the `/dir/images` directory must be preprocessed input data that can be used directly as inference input.
-3. Taking a MindSpore model as an example, run the model conversion command with post-training quantization:
+3. Taking a MindSpore model as an example, run the full quantization model conversion command:
```
- ./converter_lite --fmk=MS --modelFile=lenet.ms --outputFile=lenet_quant --quantType=PostTraining --config_file=config.cfg
+ ./converter_lite --fmk=MINDIR --modelFile=lenet.mindir --outputFile=lenet_quant --quantType=PostTraining --config_file=config.cfg
```
-4. After the command succeeds, the quantized model lenet_quant.ms is obtained; the quantized model size usually drops to 1/4 of the FP32 model.
+4. After the command succeeds, the quantized model `lenet_quant.ms` is obtained; the quantized model size usually drops to 1/4 of the FP32 model.
-## Accuracy results for selected models
+### Accuracy results for selected models
- | Model | Test dataset | method_x | FP32 model accuracy | Post-training quantization accuracy | Notes |
+ | Model | Test dataset | method_x | FP32 model accuracy | Full quantization accuracy | Notes |
 | -------- | ------- | ----- | ----- | ----- | ----- |
 | [Inception_V3](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) | [ImageNet](http://image-net.org/) | KL | 77.92% | 77.95% | The calibration dataset is 100 images randomly selected from the ImageNet Validation set |
 | [Mobilenet_V1_1.0_224](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) | [ImageNet](http://image-net.org/) | KL | 70.96% | 70.69% | The calibration dataset is 100 images randomly selected from the ImageNet Validation set |

> All results above were measured in an x86 environment.
-
-## Parameter description
-
-| Parameter | Attribute | Description | Type | Default | Value range |
-| -------- | ------- | ----- | ----- |----- | ----- |
-| --quantType | Required | Set to PostTraining to enable post-training quantization | String | - | Must be PostTraining |
-| --config_file | Required | Path of the calibration dataset configuration file | String | - | - |
-
-To compute the quantization parameters of activations, the user needs to provide a calibration dataset. Ideally, the calibration dataset comes from real inference scenarios and represents the actual inputs of the model; around 100 samples are appropriate.
-The calibration dataset configuration file defines the relevant parameters in `key=value` form. The `key`s to configure are as follows:
-
-| Parameter | Attribute | Description | Type | Default | Value range |
-| -------- | ------- | ----- | ----- | ----- | ----- |
-| image_path | Required | Directory containing the calibration dataset | String | - | The directory holds input data that can be fed directly to inference. Since the framework does not yet support data preprocessing, all data must be converted in advance to meet the inference input requirements. |
-| batch_count | Optional | Number of inputs to use | Integer | 100 | Greater than 0 |
-| method_x | Optional | Quantization algorithm for layer input and output data | String | KL | KL, MAX_MIN. KL: calibrates the data range based on [KL divergence](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf); MAX_MIN: computes quantization parameters from the maximum and minimum values. For relatively simple models and datasets, MAX_MIN is recommended |
-| thread_num | Optional | Number of threads used when running inference on the calibration dataset | Integer | 1 | Greater than 0 |
\ No newline at end of file
diff --git a/lite/tutorials/source_zh_cn/use/runtime.md b/lite/tutorials/source_zh_cn/use/runtime.md
index 2ba5ab7bad1af9f591d3e7c7a2b2f92a18953c25..942784e896b537f2817907c2217ec190b9d55e7d 100644
--- a/lite/tutorials/source_zh_cn/use/runtime.md
+++ b/lite/tutorials/source_zh_cn/use/runtime.md
@@ -28,6 +28,10 @@
- [Usage example](#usage-example-5)
- [Obtaining the version number](#obtaining-the-version-number)
- [Usage example](#usage-example-6)
+ - [Session parallelism](#session-parallelism)
+ - [Single-session parallelism](#single-session-parallelism)
+ - [Multi-session parallelism](#multi-session-parallelism)
+ - [Usage example](#usage-example-7)
@@ -49,7 +53,7 @@ Runtime总体使用流程如下图所示:
- `Operator`: operator prototype, containing the operator's attributes and the methods for inferring shape, data type, and format.
- `Kernel`: the operator library provides the concrete implementation of operators and the forward computation capability.
- `Tensor`: the tensor used by MindSpore Lite, providing functions and interfaces for tensor memory operations.
-
+
## Reading a model
In MindSpore Lite, a model file is a `.ms` file produced by the model conversion tool. For model inference, the model needs to be loaded from the file system and parsed; this part is mainly implemented in Model. Model holds model data such as weight data and operator attributes.
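A minimal sketch of this flow (assuming the `mindspore::lite::ReadFile` helper from `src/common/file_utils.h`, which the session-parallelism example later in this document also uses, and a hypothetical model file name `test.ms`):

```cpp
#include "src/common/file_utils.h"
#include "include/model.h"

// Read the .ms file into a buffer, then parse it into a Model.
size_t size = 0;
char *model_buf = mindspore::lite::ReadFile("test.ms", &size);  // hypothetical file name
if (model_buf == nullptr) {
  std::cerr << "Read model file failed" << std::endl;
  return -1;
}
auto model = mindspore::lite::Model::Import(model_buf, size);
delete[] model_buf;  // the buffer can be released after Import
```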
@@ -76,66 +80,16 @@ static Model *Import(const char *model_buf, size_t size);
MindSpore Lite supports heterogeneous inference. The preferred backend for inference is specified by `device_ctx_` in `Context` and defaults to CPU. During graph compilation, operator selection and scheduling are performed based on the preferred backend.
-```cpp
-/// \brief DeviceType defined for holding user's preferred backend.
-typedef enum {
- DT_CPU, /**< CPU device type */
- DT_GPU, /**< GPU device type */
- DT_NPU /**< NPU device type, not supported yet */
-} DeviceType;
-
-/// \brief DeviceContext defined for holding DeviceType.
-typedef struct {
- DeviceType type; /**< device type */
-} DeviceContext;
-
-DeviceContext device_ctx_{DT_CPU};
-```
-
MindSpore Lite has a built-in process-shared thread pool. During inference, the maximum number of threads in the pool is specified via `thread_num_`; the default is 2 threads, and no more than 4 threads are recommended, otherwise performance may suffer.
-```cpp
-int thread_num_ = 2; /**< thread number config for thread pool */
-```
-
MindSpore Lite supports dynamic memory allocation and release. If no `allocator` is specified, a default `allocator` is created during inference; the memory allocator can also be shared among multiple `Context`s via the `Context` method.
If a `Context` is created with `new`, it must be released with `delete` when no longer needed. Generally, the `Context` can be released once the session has been created.
-```cpp
-/// \brief Allocator defined a memory pool for malloc memory and free memory dynamically.
-///
-/// \note List public class and interface for reference.
-class Allocator;
-
-/// \brief Context defined for holding environment variables during runtime.
-class MS_API Context {
- public:
- /// \brief Constructor of MindSpore Lite Context using input value for parameters.
- ///
- /// \param[in] thread_num Define the work thread number during the runtime.
- /// \param[in] allocator Define the allocator for malloc.
- /// \param[in] device_ctx Define device information during the runtime.
- Context(int thread_num, std::shared_ptr<Allocator> allocator, DeviceContext device_ctx);
-
- public:
- std::shared_ptr<Allocator> allocator = nullptr;
-}
-```
-
### Creating a session
Use the `Context` created in the previous step to call the static `CreateSession` method of LiteSession to create a `LiteSession`. The returned `LiteSession` instance is a pointer created with `new`; when it is no longer needed, the user must release it with `delete`.
-```cpp
-/// \brief Static method to create a LiteSession pointer.
-///
-/// \param[in] context Define the context of session to be created.
-///
-/// \return Pointer of MindSpore Lite LiteSession.
-static LiteSession *CreateSession(lite::Context *context);
-```
-
### Usage example
The following sample code demonstrates creating a `Context` and sharing the memory pool between two `LiteSession`s:
@@ -147,13 +101,16 @@ if (context == nullptr) {
return RET_ERROR;
}
// The preferred backend is GPU, which means, if there is a GPU operator, it will run on the GPU first, otherwise it will run on the CPU.
-context->device_ctx_.type = lite::DT_GPU;
+context->device_type_ = lite::DT_GPU;
// The medium core takes priority in thread and core binding methods. This parameter will work in the BindThread interface. For specific binding effect, see the "Run Graph" section.
context->cpu_bind_mode_ = MID_CPU;
-// Configure the number of worker threads in the thread pool to 2, including the main thread.
+// Configure the number of worker threads in the thread pool to 2, including the main thread.
context->thread_num_ = 2;
// Allocators can be shared across multiple Contexts.
-auto *context2 = new Context(context->thread_num_, context->allocator, context->device_ctx_);
+auto *context2 = new Context();
+context2->thread_num_ = context->thread_num_;
+context2->allocator = context->allocator;
+context2->device_type_ = context->device_type_;
context2->cpu_bind_mode_ = context->cpu_bind_mode_;
// Use Context to create Session.
auto session1 = session::LiteSession::CreateSession(context);
@@ -166,7 +123,7 @@ if (session1 == nullptr) {
// session1 and session2 can share one memory pool.
auto session2 = session::LiteSession::CreateSession(context2);
delete (context2);
-if (session == nullptr) {
+if (session2 == nullptr) {
MS_LOG(ERROR) << "CreateSession failed while running %s", modelName.c_str();
return RET_ERROR;
}
@@ -178,19 +135,7 @@ if (session == nullptr) {
When using MindSpore Lite for inference, after session creation and graph compilation have been completed, the input shape can be resized by resetting the shape of the input tensor and then calling the session's Resize() interface.
-```cpp
-/// \brief Get input MindSpore Lite MSTensors of model.
-///
-/// \return The vector of MindSpore Lite MSTensor.
-virtual std::vector<tensor::MSTensor *> GetInputs() const = 0;
-
-/// \brief Resize inputs shape.
-///
-/// \param[in] inputs Define the new inputs shape.
-///
-/// \return STATUS as an error code of resize inputs, STATUS is defined in errorcode.h.
-virtual int Resize(const std::vector<tensor::MSTensor *> &inputs) = 0;
-```
+> Some networks do not support variable dimensions and will print an error message and exit abnormally. For example, if the model contains a MatMul operator, one of whose input tensors is a weight while the other is an input, calling the variable-dimension interface makes the shapes of the input tensor and the weight tensor mismatch, which eventually causes inference to fail.
### Usage example
@@ -199,9 +144,10 @@ virtual int Resize(const std::vector &inputs) = 0;
// Assume we have created a LiteSession instance named session.
auto inputs = session->GetInputs();
std::vector<int> resize_shape = {1, 128, 128, 3};
+std::vector<std::vector<int>> new_shapes;
+new_shapes.push_back(resize_shape);
// Assume the model has only one input; resize the input shape to [1, 128, 128, 3]
-inputs[0]->set_shape(resize_shape);
-session->Resize(inputs);
+session->Resize(inputs, new_shapes);
```
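
Because of the limitation described in the note above, it is prudent to check the status code that `Resize` returns. A hedged variant of the last call in the example, assuming the same `session`, `inputs`, and `new_shapes`:

```cpp
// Resize returns a status code defined in errorcode.h; a failure here
// typically means the network does not support variable dimensions.
auto resize_ret = session->Resize(inputs, new_shapes);
if (resize_ret != mindspore::lite::RET_OK) {
    std::cerr << "Resize failed " << resize_ret << std::endl;
}
```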
### Graph compilation
@@ -321,14 +267,6 @@ memcpy(in_data, input_buf, data_size);
After graph compilation, the MindSpore Lite session can run model inference using `LiteSession`'s `RunGraph`.
```cpp
-/// \brief Run session with callback.
-///
-/// \param[in] before Define a call_back_function to be called before running each node.
-/// \param[in] after Define a call_back_function to be called after running each node.
-///
-/// \note RunGraph should be called after CompileGraph.
-///
-/// \return STATUS as an error code of running graph, STATUS is defined in errorcode.h.
virtual int RunGraph(const KernelCallBack &before = nullptr, const KernelCallBack &after = nullptr) = 0;
```
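
A minimal sketch of invoking it (assuming `session` has already compiled the graph and its input tensors have been filled):

```cpp
// RunGraph must be called after CompileGraph; the optional before/after
// callbacks keep their nullptr defaults here.
auto run_ret = session->RunGraph();
if (run_ret != mindspore::lite::RET_OK) {
    std::cerr << "RunGraph failed " << run_ret << std::endl;
}
```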
@@ -503,16 +441,16 @@ virtual void *MutableData() const = 0;
### Usage example
-The following sample code demonstrates using the `GetOutputMapByNode` interface to obtain the output `MSTensor`s and printing the first ten values, or all values, of each output `MSTensor`:
+The following sample code demonstrates using the `GetOutputs` interface to obtain the output `MSTensor`s and printing the first ten values, or all values, of each output `MSTensor`:
```cpp
// Assume we have created a LiteSession instance named session before.
-auto output_map = session->GetOutputMapByNode();
+auto output_map = session->GetOutputs();
// Assume that the model has only one output node.
auto out_node_iter = output_map.begin();
std::string name = out_node_iter->first;
// Assume that the unique output node has only one output tensor.
-auto out_tensor = out_node_iter->second.front();
+auto out_tensor = out_node_iter->second;
if (out_tensor == nullptr) {
std::cerr << "Output tensor is nullptr" << std::endl;
return -1;
@@ -527,7 +465,7 @@ if (out_data == nullptr) {
std::cerr << "Data of out_tensor is nullptr" << std::endl;
return -1;
}
-// Print the first 10 float data or all output data of the output tensor.
+// Print the first 10 float data or all output data of the output tensor.
std::cout << "Output data: ";
for (size_t i = 0; i < 10 && i < out_tensor->ElementsNum(); i++) {
std::cout << " " << out_data[i];
@@ -536,7 +474,7 @@ std::cout << std::endl;
// The elements in outputs do not need to be free by users, because outputs are managed by the MindSpore Lite.
```
-Note that the vectors or maps returned by the `GetOutputsByNodeName`, `GetOutputMapByNode`, `GetOutputByTensorName`, and `GetOutputMapByTensor` methods do not need to be released by the user.
+Note that the vectors or maps returned by the `GetOutputsByNodeName`, `GetOutputByTensorName`, and `GetOutputs` methods do not need to be released by the user.
The following sample code demonstrates using the `GetOutputsByNodeName` interface to obtain the output `MSTensor`s:
@@ -552,28 +490,16 @@ if (out_tensor == nullptr) {
}
```
-The following sample code demonstrates using the `GetOutputMapByTensor` interface to obtain the output `MSTensor`s:
-
-```cpp
-// Assume we have created a LiteSession instance named session before.
-auto output_map = session->GetOutputMapByTensor();
-// Assume that output node named output_node_name_0 has only one output tensor.
-auto out_tensor = output_vec.front();
-if (out_tensor == nullptr) {
- std::cerr << "Output tensor is nullptr" << std::endl;
- return -1;
-}
-```
-
The following sample code demonstrates using the `GetOutputByTensorName` interface to obtain the output `MSTensor`s:
```cpp
+// Assume we have created a LiteSession instance named session.
// Use the GetOutputTensorNames method to get the names of all output tensors of the model, in order.
-auto tensor_names = this->GetOutputTensorNames();
+auto tensor_names = session->GetOutputTensorNames();
// Assume we have created a LiteSession instance named session before.
// Use output tensor name returned by GetOutputTensorNames as key
for (auto tensor_name : tensor_names) {
- auto out_tensor = this->GetOutputByTensorName(tensor_name);
+ auto out_tensor = session->GetOutputByTensorName(tensor_name);
if (out_tensor == nullptr) {
std::cerr << "Output tensor is nullptr" << std::endl;
return -1;
@@ -589,5 +515,114 @@ MindSpore Lite提供了`Version`方法可以获取版本号,包含在`include/
The following code demonstrates how to obtain the MindSpore Lite version number:
```cpp
#include "include/version.h"
-std::string version = mindspore::lite::Version();
+std::string version = mindspore::lite::Version();
+```
+
+## Session parallelism
+MindSpore Lite supports running inference on multiple `LiteSession`s in parallel, but does not support multiple threads concurrently calling the `RunGraph` interface of a single `LiteSession`.
+
+### Single-session parallelism
+
+MindSpore Lite does not support multi-threaded parallel execution of inference on a single `LiteSession`; otherwise the following error is reported:
+```cpp
+ERROR [mindspore/lite/src/lite_session.cc:297] RunGraph] 10 Not support multi-threading
+```
+
+### Multi-session parallelism
+
+MindSpore Lite supports multiple `LiteSession`s running inference at the same time; the thread pool and memory pool of each `LiteSession` are independent.
+
+### Usage example
+
+The following code demonstrates how to create multiple `LiteSession`s and run inference in parallel:
+```cpp
+#include <thread>
+#include "src/common/file_utils.h"
+#include "include/model.h"
+#include "include/version.h"
+#include "include/context.h"
+#include "include/lite_session.h"
+
+mindspore::session::LiteSession *GenerateSession(mindspore::lite::Model *model) {
+ if (model == nullptr) {
+ std::cerr << "Read model file failed while running" << std::endl;
+ return nullptr;
+ }
+ auto context = new (std::nothrow) mindspore::lite::Context;
+ if (context == nullptr) {
+ std::cerr << "New context failed while running" << std::endl;
+ return nullptr;
+ }
+
+ auto session = mindspore::session::LiteSession::CreateSession(context);
+ delete (context);
+ if (session == nullptr) {
+ std::cerr << "CreateSession failed while running" << std::endl;
+ return nullptr;
+ }
+ auto ret = session->CompileGraph(model);
+ if (ret != mindspore::lite::RET_OK) {
+ std::cout << "CompileGraph failed while running" << std::endl;
+ delete (session);
+ return nullptr;
+ }
+ auto msInputs = session->GetInputs();
+ for (auto msInput : msInputs) {
+ (void)msInput->MutableData();
+ }
+ return session;
+}
+
+int main(int argc, const char **argv) {
+ size_t size = 0;
+ char *graphBuf = mindspore::lite::ReadFile("test.ms", &size);
+ if (graphBuf == nullptr) {
+ std::cerr << "Read model file failed while running" << std::endl;
+ return -1;
+ }
+ auto model = mindspore::lite::Model::Import(graphBuf, size);
+ if (model == nullptr) {
+ std::cerr << "Import model file failed while running" << std::endl;
+ delete[](graphBuf);
+ return -1;
+ }
+ delete[](graphBuf);
+ auto session1 = GenerateSession(model);
+ if (session1 == nullptr) {
+ std::cerr << "Generate session 1 failed" << std::endl;
+ delete(model);
+ return -1;
+ }
+ auto session2 = GenerateSession(model);
+ if (session2 == nullptr) {
+ std::cerr << "Generate session 2 failed" << std::endl;
+ delete(model);
+ return -1;
+ }
+
+ std::thread thread1([&](){
+ auto status = session1->RunGraph();
+ if (status != 0) {
+ std::cerr << "Inference error " << status << std::endl;
+ return;
+ }
+ std::cout << "Session1 inference success" << std::endl;
+ });
+
+ std::thread thread2([&](){
+ auto status = session2->RunGraph();
+ if (status != 0) {
+ std::cerr << "Inference error " << status << std::endl;
+ return;
+ }
+ std::cout << "Session2 inference success" << std::endl;
+ });
+
+ thread1.join();
+ thread2.join();
+ delete (session1);
+ delete (session2);
+ delete (model);
+ return 0;
+}
```
diff --git a/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md b/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md
index fbe404c17898439bb7659b9d2e5afaf841dbf5be..7c7a60576bf95fb081f42b20dfbecef92646ad02 100644
--- a/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md
+++ b/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md
@@ -20,16 +20,16 @@
To use the TimeProfiler tool, prepare the environment as follows.
-- Build: the TimeProfiler tool code is in the `mindspore/lite/tools/time_profile` directory of the MindSpore source code. Build it by following the [environment requirements](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id1) and [build example](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id3) in the build documentation.
+- Build: the TimeProfiler tool code is in the `mindspore/lite/tools/time_profiler` directory of the MindSpore source code. Build it by following the [environment requirements](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id1) and [build example](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id3) in the build documentation.
-- Run: obtain the `timeprofile` tool by following the [build output](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id4) section of the deployment documentation, and configure the environment variables.
+- Run: obtain the `timeprofiler` tool by following the [build output](https://www.mindspore.cn/lite/tutorial/zh-CN/master/build.html#id4) section of the deployment documentation, and configure the environment variables.
## Usage example
To use TimeProfiler to analyze the per-layer latency of the `test_timeprofiler.ms` model with the number of inference loops set to 10, the command is as follows:
```bash
-./timeprofile --modelPath=./models/test_timeprofiler.ms --loopCount=10
+./timeprofiler --modelPath=./models/test_timeprofiler.ms --loopCount=10
```
After this command runs, the TimeProfiler tool outputs statistics on the runtime of the model's network layers. For this example, the output statistics are shown below, grouped by `opName` and by `optype`, where `opName` is the operator name, `optype` is the operator type, `avg` is the average single-run time of the operator, `percent` is the operator's share of the total runtime of all operators, `calledTimess` is the number of times the operator ran, and `opTotalTime` is the total time the operator took over the specified number of runs. Finally, `total time` and `kernel cost` show the average time of a single inference of the model and the sum of the average times of all operators in the inference, respectively.
@@ -77,7 +77,7 @@ total time : 2.90800 ms, kernel cost : 2.74851 ms
The general command format for analyzing per-layer model latency with the built TimeProfiler tool is as follows.
```bash
-./timeprofile --modelPath=<MODELPATH> [--help] [--loopCount=<LOOPCOUNT>] [--numThreads=<NUMTHREADS>] [--cpuBindMode=<CPUBINDMODE>] [--inDataPath=<INDATAPATH>] [--fp16Priority=<FP16PRIORITY>]
+./timeprofiler --modelPath=<MODELPATH> [--help] [--loopCount=<LOOPCOUNT>] [--numThreads=<NUMTHREADS>] [--cpuBindMode=<CPUBINDMODE>] [--inDataPath=<INDATAPATH>] [--fp16Priority=<FP16PRIORITY>]
```
Detailed parameter descriptions are provided below.
diff --git a/resource/api_mapping.md b/resource/api_mapping.md
index 0eed65d611cd8eaba77c9f0804c892e8c2913d4e..3f4a30cd61ed62dafe076ec22402dadeacff1809 100644
--- a/resource/api_mapping.md
+++ b/resource/api_mapping.md
@@ -36,7 +36,7 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.expm1 | mindspore.ops.operations.Expm1 |
| torch.eye | mindspore.ops.operations.Eye |
| torch.flatten | mindspore.ops.operations.Flatten |
-| torch.flip | mindspore.ops.operations.ReverseV2
+| torch.flip | mindspore.ops.operations.ReverseV2 |
| torch.floor | mindspore.ops.operations.Floor |
| torch.fmod | mindspore.ops.operations.Mod |
| torch.linspace | mindspore.nn.LinSpace |
@@ -167,13 +167,13 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.utils.data.distributed.DistributedSampler | mindspore.dataset.DistributedSampler |
| torch.zeros | mindspore.ops.operations.ZerosLike |
| torch.zeros_like | mindspore.ops.operations.ZerosLike |
-| torchvision.datasets.ImageFolder | mindspore.dataset.ImageFolderDatasetV2 |
+| torchvision.datasets.ImageFolder | mindspore.dataset.ImageFolderDataset |
| torchvision.ops.nms | mindspore.ops.operations.NMSWithMask |
| torchvision.ops.roi_align | mindspore.ops.operations.ROIAlign |
-| torchvision.transforms.CenterCrop | mindspore.dataset.vision.py_transforms.CenterCrop |
-| torchvision.transforms.ColorJitter | mindspore.dataset.vision.py_transforms.RandomColorAdjust |
-| torchvision.transforms.Compose | mindspore.dataset.vision.py_transforms.Compose |
-| torchvision.transforms.Normalize | mindspore.dataset.vision.py_transforms.Normalize |
-| torchvision.transforms.RandomHorizontalFlip | mindspore.dataset.vision.py_transforms.RandomHorizontalFlip |
-| torchvision.transforms.Resize | mindspore.dataset.vision.py_transforms.Resize |
-| torchvision.transforms.ToTensor | mindspore.dataset.vision.py_transforms.ToTensor |
+| torchvision.transforms.CenterCrop | mindspore.dataset.vision.py_transforms.CenterCrop |
+| torchvision.transforms.ColorJitter | mindspore.dataset.vision.py_transforms.RandomColorAdjust |
+| torchvision.transforms.Compose | mindspore.dataset.transforms.py_transforms.Compose |
+| torchvision.transforms.Normalize | mindspore.dataset.vision.py_transforms.Normalize |
+| torchvision.transforms.RandomHorizontalFlip | mindspore.dataset.vision.py_transforms.RandomHorizontalFlip |
+| torchvision.transforms.Resize | mindspore.dataset.vision.py_transforms.Resize |
+| torchvision.transforms.ToTensor | mindspore.dataset.vision.py_transforms.ToTensor |
diff --git a/tutorials/notebook/computer_vision_application.ipynb b/tutorials/notebook/computer_vision_application.ipynb
index 6d8dfd2d87f44f46f8ca5573d295735a4ff30d91..2b2f978b1667398cc02fb7191d73ad3c9d875551 100644
--- a/tutorials/notebook/computer_vision_application.ipynb
+++ b/tutorials/notebook/computer_vision_application.ipynb
@@ -213,7 +213,7 @@
"import mindspore.common.dtype as mstype\n",
"import mindspore.ops.functional as F\n",
"import mindspore.dataset as ds\n",
- "import mindspore.dataset.transforms.vision.c_transforms as C\n",
+ "import mindspore.dataset.vision.c_transforms as C\n",
"import mindspore.dataset.transforms.c_transforms as C2\n",
"\n",
"\n",
@@ -252,8 +252,8 @@
" changeswap_op]\n",
"\n",
" # Apply map operations on images\n",
- " cifar_ds = cifar_ds.map(input_columns=\"label\", operations=type_cast_op)\n",
- " cifar_ds = cifar_ds.map(input_columns=\"image\", operations=c_trans)\n",
+ " cifar_ds = cifar_ds.map(operations=type_cast_op, input_columns=\"label\")\n",
+ " cifar_ds = cifar_ds.map(operations=c_trans, input_columns=\"image\")\n",
"\n",
" # Apply shuffle operations\n",
" cifar_ds = cifar_ds.shuffle(buffer_size=10)\n",
@@ -314,7 +314,7 @@
"import matplotlib.pyplot as plt\n",
"dataset_show = create_dataset()\n",
"iterator_show= dataset_show.create_dict_iterator()\n",
- "images = iterator_show.get_next()[\"image\"]\n",
+ "images = iterator_show.get_next()[\"image\"].asnumpy()\n",
"# Images[0].shape is (3,224,224).We need transpose as (224,224,3) for using in plt.show().\n",
"picture_show = np.transpose(images[0],(1,2,0))\n",
"plt.imshow(picture_show)\n"
diff --git a/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb b/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb
index f34bc0c817f399bc5bdac90a497910d626d24d5f..0fea6b2a76021d054acc0f0e3fc7cc786c25159b 100644
--- a/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb
+++ b/tutorials/notebook/convert_dataset_to_mindspore_data_format/convert_dataset_to_mindspore_data_format.ipynb
@@ -194,7 +194,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'data': array([255, 216, 255, 224, 0, 16, 74, 70, 73, 70, 0, 1, 1,\n",
+ "{'data': Tensor(shape=[803], dtype=UInt8, value= [255, 216, 255, 224, 0, 16, 74, 70, 73, 70, 0, 1, 1,\n",
" 0, 0, 1, 0, 1, 0, 0, 255, 219, 0, 67, 0, 2,\n",
" 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 2, 2, 2,\n",
" 2, 4, 3, 2, 2, 2, 2, 5, 4, 4, 3, 4, 6,\n",
@@ -250,8 +250,7 @@
" 143, 6, 252, 112, 209, 62, 35, 120, 247, 224, 174, 137, 168,\n",
" 77, 241, 3, 92, 240, 206, 167, 29, 245, 142, 155, 115, 114,\n",
" 80, 27, 5, 157, 73, 203, 164, 139, 42, 249, 103, 12, 145,\n",
- " 195, 22, 229, 5, 128, 31, 149, 148, 81, 69, 21, 255, 217],\n",
- " dtype=uint8), 'label': array(3, dtype=int64)}\n"
+ " 195, 22, 229, 5, 128, 31, 149, 148, 81, 69, 21, 255, 217]), 'label': Tensor(shape=[], dtype=Int64, value= 3)}\n"
]
}
],
@@ -376,7 +375,7 @@
"# create MindDataset for reading data\n",
"csv_data_set = ds.MindDataset(dataset_file=csv_mindrecord_path)\n",
"# create a dictionary iterator and read a data record through the iterator\n",
- "print(next(csv_data_set.create_dict_iterator()))"
+ "print(next(csv_data_set.create_dict_iterator(output_numpy=True)))"
]
},
{
@@ -493,7 +492,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'data': array([255, 216, 255, ..., 35, 255, 217], dtype=uint8), 'id': array(30707, dtype=int64), 'label': array(4, dtype=int64)}\n"
+ "{'data': Tensor(shape=[1431], dtype=UInt8, value= [255, 216, 255, ..., 35, 255, 217]), 'id': Tensor(shape=[], dtype=Int64, value= 30707), 'label': Tensor(shape=[], dtype=Int64, value= 4)}\n"
]
}
],
@@ -620,7 +619,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'data': array([255, 216, 255, ..., 127, 255, 217], dtype=uint8), 'fine_label': array(88, dtype=int64), 'coarse_label': array(8, dtype=int64), 'id': array(10349, dtype=int64)}\n"
+ "{'data': Tensor(shape=[1374], dtype=UInt8, value= [255, 216, 255, ..., 127, 255, 217]), 'fine_label': Tensor(shape=[], dtype=Int64, value= 88), 'coarse_label': Tensor(shape=[], dtype=Int64, value= 8), 'id': Tensor(shape=[], dtype=Int64, value= 10349)}\n"
]
}
],
@@ -767,7 +766,7 @@
"# create MindDataset for reading data\n",
"imagenet_data_set = ds.MindDataset(dataset_file=file_name)\n",
"# create a dictionary iterator and read a data record through the iterator\n",
- "print(next(imagenet_data_set.create_dict_iterator()))"
+ "print(next(imagenet_data_set.create_dict_iterator(output_numpy=True)))"
]
},
{
@@ -938,7 +937,7 @@
"# create MindDataset for reading data\n",
"define_data_set = ds.MindDataset(dataset_file=file_name)\n",
"# create a dictionary iterator and read a data record through the iterator\n",
- "print(next(define_data_set.create_dict_iterator()))"
+ "print(next(define_data_set.create_dict_iterator(output_numpy=True)))"
]
},
{
diff --git a/tutorials/notebook/customized_debugging_information.ipynb b/tutorials/notebook/customized_debugging_information.ipynb
index 44be7bd3a753b3c00b1851729badec85be8b4584..95b9ab65083051ebefd3f6d1084c5bcc614ada4f 100644
--- a/tutorials/notebook/customized_debugging_information.ipynb
+++ b/tutorials/notebook/customized_debugging_information.ipynb
@@ -18,7 +18,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "本文将使用[快速入门](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/lenet.py)作为样例,并通过构建自定义调试函数:`Callback`、`metrics`、`Print算子`、日志打印等,同时将构建的自定义调试函数添加进代码中,通过运行效果来展示具体如何使用MindSpore提供给我们的自定义调试能力,帮助快速调试训练网络。\n",
+ "本文将使用[快速入门](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/lenet/lenet.py)作为样例,并通过构建自定义调试函数:`Callback`、`metrics`、`Print算子`、日志打印等,同时将构建的自定义调试函数添加进代码中,通过运行效果来展示具体如何使用MindSpore提供给我们的自定义调试能力,帮助快速调试训练网络。\n",
"体验过程如下:\n",
"1. 数据集准备。\n",
"2. 定义深度学习网络LeNet5。\n",
@@ -84,9 +84,9 @@
"outputs": [],
"source": [
"import mindspore.dataset as ds\n",
- "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.vision.c_transforms as CV\n",
"import mindspore.dataset.transforms.c_transforms as C\n",
- "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.dataset.vision import Inter\n",
"from mindspore.common import dtype as mstype\n",
"\n",
"def create_dataset(data_path, batch_size=32, repeat_size=1,\n",
@@ -116,11 +116,11 @@
" type_cast_op = C.TypeCast(mstype.int32)\n",
"\n",
" # apply map operations on images\n",
- " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=resize_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
"\n",
" # apply DatasetOps\n",
" buffer_size = 10000\n",
@@ -614,4 +614,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
+}
\ No newline at end of file
diff --git a/tutorials/notebook/data_loading_enhance/data_loading_enhancement.ipynb b/tutorials/notebook/data_loading_enhance/data_loading_enhancement.ipynb
index fb562620846ff291b4c5438bb568de8baf6af099..6733e3aa75749174cab117c121e3a18aca7fbc5c 100644
--- a/tutorials/notebook/data_loading_enhance/data_loading_enhancement.ipynb
+++ b/tutorials/notebook/data_loading_enhance/data_loading_enhancement.ipynb
@@ -343,7 +343,7 @@
" print(data[\"data\"])\n",
"\n",
"func = lambda x:x**2\n",
- "ds11 = ds10.map(input_columns=\"data\",operations=func)\n",
+ "ds11 = ds10.map(operations=func, input_columns=\"data\")\n",
"print(\"After map:\")\n",
"for data in ds11.create_dict_iterator():\n",
" print(data[\"data\"])"
@@ -383,11 +383,11 @@
"[ 3 2 1 0 -1]\n",
"[4 3 2 1 0]\n",
"After zip:\n",
- "{'data': array([0, 1, 2, 3, 4], dtype=int32), 'data2': array([ 0, -1, -2, -3, -4], dtype=int32)}\n",
- "{'data': array([1, 2, 3, 4, 5], dtype=int32), 'data2': array([ 1, 0, -1, -2, -3], dtype=int32)}\n",
- "{'data': array([2, 3, 4, 5, 6], dtype=int32), 'data2': array([ 2, 1, 0, -1, -2], dtype=int32)}\n",
- "{'data': array([3, 4, 5, 6, 7], dtype=int32), 'data2': array([ 3, 2, 1, 0, -1], dtype=int32)}\n",
- "{'data': array([4, 5, 6, 7, 8], dtype=int32), 'data2': array([4, 3, 2, 1, 0], dtype=int32)}\n"
+ "{'data': Tensor(shape=[5], dtype=Int64, value= [0, 1, 2, 3, 4]), 'data2': Tensor(shape=[5], dtype=Int64, value= [ 0, -1, -2, -3, -4])}\n",
+ "{'data': Tensor(shape=[5], dtype=Int64, value= [1, 2, 3, 4, 5]), 'data2': Tensor(shape=[5], dtype=Int64, value= [ 1, 0, -1, -2, -3])}\n",
+ "{'data': Tensor(shape=[5], dtype=Int64, value= [2, 3, 4, 5, 6]), 'data2': Tensor(shape=[5], dtype=Int64, value= [ 2, 1, 0, -1, -2])}\n",
+ "{'data': Tensor(shape=[5], dtype=Int64, value= [3, 4, 5, 6, 7]), 'data2': Tensor(shape=[5], dtype=Int64, value= [ 3, 2, 1, 0, -1])}\n",
+ "{'data': Tensor(shape=[5], dtype=Int64, value= [4, 5, 6, 7, 8]), 'data2': Tensor(shape=[5], dtype=Int64, value= [4, 3, 2, 1, 0])}\n"
]
}
],
@@ -449,7 +449,7 @@
"outputs": [],
"source": [
"DATA_DIR = \"./enhance_images\"\n",
- "ds1 = ds.ImageFolderDatasetV2(DATA_DIR, decode=True)"
+ "ds1 = ds.ImageFolderDataset(DATA_DIR, decode=True)"
]
},
{
@@ -465,8 +465,8 @@
"metadata": {},
"outputs": [],
"source": [
- "from mindspore.dataset.transforms.vision import Inter\n",
- "import mindspore.dataset.transforms.vision.c_transforms as transforms"
+ "from mindspore.dataset.vision import Inter\n",
+ "import mindspore.dataset.vision.c_transforms as transforms"
]
},
{
@@ -476,7 +476,7 @@
"outputs": [],
"source": [
"resize_op = transforms.Resize(size=(800,800), interpolation=Inter.LINEAR)\n",
- "ds2 = ds1.map(input_columns=\"image\", operations=resize_op)"
+ "ds2 = ds1.map(operations=resize_op, input_columns=\"image\")"
]
},
{
@@ -518,7 +518,7 @@
],
"source": [
"for data in ds2.create_dict_iterator():\n",
- " imgplot_resized = plt.imshow(data[\"image\"])\n",
+ " imgplot_resized = plt.imshow(data[\"image\"].asnumpy())\n",
" plt.show()"
]
},
@@ -537,14 +537,15 @@
"metadata": {},
"outputs": [],
"source": [
- "import mindspore.dataset.transforms.vision.py_transforms as transforms"
+ "from mindspore.dataset.transforms.py_transforms import Compose\n",
+ "import mindspore.dataset.vision.py_transforms as transforms"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "2. 定义数据增强算子,通过`ComposeOp`接口将多个数据增强组合使用。"
+ "2. 定义数据增强算子,通过`Compose`接口将多个数据增强组合使用。"
]
},
{
@@ -578,17 +579,17 @@
}
],
"source": [
- "ds3 = ds.ImageFolderDatasetV2(DATA_DIR)\n",
+ "ds3 = ds.ImageFolderDataset(DATA_DIR)\n",
"\n",
"transforms_list = [\n",
" transforms.Decode(), # Decode images to PIL format.\n",
" transforms.RandomCrop(size=(800,800)),\n",
" transforms.ToTensor() # Convert PIL images to Numpy ndarray.\n",
"]\n",
- "compose = transforms.ComposeOp(transforms_list)\n",
- "ds4 = ds3.map(input_columns=\"image\", operations=compose())\n",
+ "compose = Compose(transforms_list)\n",
+ "ds4 = ds3.map(operations=compose, input_columns=\"image\")\n",
"for data in ds4.create_dict_iterator():\n",
- " imgplot_resized = plt.imshow(data[\"image\"].transpose(1, 2, 0))\n",
+ " imgplot_resized = plt.imshow(data[\"image\"].asnumpy().transpose(1, 2, 0))\n",
" plt.show()"
]
},
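Taken together, the changes in this notebook follow one migration pattern: `ImageFolderDatasetV2` becomes `ImageFolderDataset`, the vision transforms move from `mindspore.dataset.transforms.vision` to `mindspore.dataset.vision`, and a called `ComposeOp(...)()` becomes a `Compose` object passed directly to `map`. A sketch of the migrated py_transforms pipeline, with the path as a placeholder:

```python
# Post-migration pipeline: the Compose object is passed to map() as-is.
import mindspore.dataset as ds
import mindspore.dataset.vision.py_transforms as transforms
from mindspore.dataset.transforms.py_transforms import Compose

DATA_DIR = "./enhance_images"  # placeholder image folder
dataset = ds.ImageFolderDataset(DATA_DIR)
compose = Compose([
    transforms.Decode(),                   # raw bytes -> PIL image
    transforms.RandomCrop(size=(800, 800)),
    transforms.ToTensor(),                 # PIL image -> CHW float array
])
dataset = dataset.map(operations=compose, input_columns="image")
```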
diff --git a/tutorials/notebook/debugging_in_pynative_mode.ipynb b/tutorials/notebook/debugging_in_pynative_mode.ipynb
index ce3d50557b55592afefaca452b1ecbd56d45521a..82d4d76daedb25fead84f8c41fc11266c2555e9f 100644
--- a/tutorials/notebook/debugging_in_pynative_mode.ipynb
+++ b/tutorials/notebook/debugging_in_pynative_mode.ipynb
@@ -34,7 +34,7 @@
"\n",
"4. 执行神经网络训练,查看网络各参数梯度。\n",
"\n",
- "> 你可以在这里找到完整可运行的样例代码:。"
+ "> 你可以在这里找到完整可运行的样例代码:。"
]
},
{
@@ -92,9 +92,9 @@
"metadata": {},
"outputs": [],
"source": [
- "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.vision.c_transforms as CV\n",
"import mindspore.dataset.transforms.c_transforms as C\n",
- "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.dataset.vision import Inter\n",
"from mindspore.common import dtype as mstype\n",
"import mindspore.dataset as ds\n",
"import numpy as np\n",
@@ -126,11 +126,11 @@
" type_cast_op = C.TypeCast(mstype.int32)\n",
"\n",
" # using map method to apply operations to a dataset\n",
- " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=resize_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
" \n",
" # process the generated dataset\n",
" buffer_size = 10000\n",
@@ -187,8 +187,8 @@
"datas = create_dataset(train_data_path)\n",
"data1 = datas.create_dict_iterator()\n",
"data= data1.get_next()\n",
- "images = data[\"image\"]\n",
- "labels = data[\"label\"]\n",
+ "images = data[\"image\"].asnumpy()\n",
+ "labels = data[\"label\"].asnumpy()\n",
"print(images.shape)\n",
"count = 1\n",
"for i in images:\n",
@@ -600,4 +600,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
+}
\ No newline at end of file
diff --git a/tutorials/notebook/linear_regression.ipynb b/tutorials/notebook/linear_regression.ipynb
index 4e3665dcf1c09e5aba7b1b2ee0527b46788b94d5..25008ff1e34df63dabb852fa6b2e3cee642b080f 100644
--- a/tutorials/notebook/linear_regression.ipynb
+++ b/tutorials/notebook/linear_regression.ipynb
@@ -4,33 +4,23 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "##
使用MindSpore实现简单线性函数拟合"
+ "# 使用MindSpore实现简单线性函数拟合"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 概述"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "回归问题算法通常是利用一系列属性来预测一个值,预测的值是连续的。例如给出一套房子的一些特征数据,如面积、卧室数等等来预测房价,利用最近一周的气温变化和卫星云图来预测未来的气温情况等。如果一套房子实际价格为500万元,通过回归分析的预测值为499万元,则认为这是一个比较好的回归分析。在机器学习问题中,常见的回归分析有线性回归、多项式回归、逻辑回归等。本例子介绍线性回归算法,并通过MindSpore进行线性回归AI训练体验。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "主要流程如下:\n",
+ "## 概述\n",
+ "\n",
+ "回归问题算法通常是利用一系列属性来预测一个值,预测的值是连续的。例如给出一套房子的一些特征数据,如面积、卧室数等等来预测房价,利用最近一周的气温变化和卫星云图来预测未来的气温情况等。如果一套房子实际价格为500万元,通过回归分析的预测值为499万元,则认为这是一个比较好的回归分析。在机器学习问题中,常见的回归分析有线性回归、多项式回归、逻辑回归等。本例子介绍线性回归算法,并通过MindSpore进行线性回归AI训练体验。\n",
+ "\n",
+ "整体流程如下:\n",
"\n",
"1. 生成数据集\n",
- "2. 定义前向传播网络\n",
- "3. 定义反向传播网络\n",
- "4. 定义线性拟合过程的可视化函数\n",
+ "2. 定义训练网络\n",
+ "3. 定义前向传播网络与反向传播网络并关联\n",
+ "4. 拟合过程可视化准备\n",
"5. 执行训练"
]
},
@@ -38,114 +28,90 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 环境准备"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "系统:Ubuntu18.04\n",
- "\n",
- "MindSpore版本:GPU\n",
+ "## 环境准备\n",
"\n",
- "设置MindSpore运行配置\n",
- "\n",
- "第三方支持包:`matplotlib`,未安装此包的,可使用命令`pip install matplotlib`预先安装。"
+ "设置MindSpore运行配置"
]
},
{
"cell_type": "code",
"execution_count": 1,
- "metadata": {},
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.002170Z",
+ "start_time": "2020-09-14T10:38:39.441746Z"
+ }
+ },
"outputs": [],
"source": [
"from mindspore import context\n",
"\n",
- "context.set_context(mode=context.PYNATIVE_MODE, device_target=\"GPU\")"
+ "context.set_context(mode=context.GRAPH_MODE, device_target=\"CPU\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "`PYNATIVE_MODE`:自定义调试模式。\n",
+ "`GRAPH_MODE`:自定义调试模式。\n",
"\n",
- "`device_target`:设置MindSpore的训练硬件为GPU。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 生成数据集"
+ "`device_target`:设置MindSpore的训练硬件为CPU。\n",
+ "\n",
+ "> 本教程代码依赖`matplotlib`第三方支持包,可使用命令`pip install matplotlib`安装。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
+ "## 生成数据集\n",
+ "\n",
"### 定义数据集生成函数\n",
"\n",
- "`get_data`用于生成训练数据集和测试数据集。由于拟合的是线性数据,假定要拟合的目标函数为:$y=2x+3$,那么我们需要的训练数据集应随机分布于函数周边,这里采用了$y=2x+3+noise$的方式生成,其中`noise`为遵循标准正态分布规律的随机数值。"
+ "`get_data`用于生成训练数据集和测试数据集。由于拟合的是线性数据,假定要拟合的目标函数为:$f(x)=2x+3$,那么我们需要的训练数据集应随机分布于函数周边,这里采用了$f(x)=2x+3+noise$的方式生成,其中`noise`为遵循标准正态分布规律的随机数值。"
]
},
{
"cell_type": "code",
"execution_count": 2,
- "metadata": {},
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.007850Z",
+ "start_time": "2020-09-14T10:38:40.003169Z"
+ }
+ },
"outputs": [],
"source": [
"import numpy as np\n",
- "import mindspore as ms\n",
- "from mindspore import Tensor\n",
- " \n",
- "def get_data(num,w=2.0, b=3.0):\n",
- " np_x = np.ones([num, 1])\n",
- " np_y = np.ones([num, 1])\n",
+ "\n",
+ "def get_data(num, w=2.0, b=3.0):\n",
" for i in range(num):\n",
" x = np.random.uniform(-10.0, 10.0)\n",
- " np_x[i] = x\n",
" noise = np.random.normal(0, 1)\n",
" y = x * w + b + noise\n",
- " np_y[i] = y\n",
- " return Tensor(np_x,ms.float32), Tensor(np_y,ms.float32)"
+ " yield np.array([x]).astype(np.float32), np.array([y]).astype(np.float32)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "对于数据生成函数我们将有以下两个作用。\n",
- "\n",
- "1. 生成训练数据,对模型函数进行训练。\n",
- "2. 生成验证数据,在训练结束后,对模型函数进行精度验证。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 生成测试数据"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "使用数据生成函数`get_data`随机生成50组验证数据,并可视化展示。"
+ "使用`get_data`生成50组测试数据,并可视化。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
- "scrolled": true
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.355635Z",
+ "start_time": "2020-09-14T10:38:40.009930Z"
+ }
},
"outputs": [
{
"data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXkAAAEICAYAAAC6fYRZAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAX5ElEQVR4nO3dfZBcVZ3G8edJEFaUVTSTEEhiwALWaNWCdkVejARBBdY1YIkGt3YpdY2olC+lVYtSpZTW1uouirWlC4aSFbeUlxWQLIK8SSBbvMgEA4QNLAFxGRInk6ASXwprMr/94942neZ2pnv63u7bt7+fqq7pvvdOn1N3ep45c+455zoiBACopln9rgAAoDiEPABUGCEPABVGyANAhRHyAFBhhDwAVBghDwAVRsgDKdtrbf99h9+z2HbY3qeoegHdIOQxkGw/ZfsPtn/b8PhGv+u1N7aX2x7rdz0wXGh9YJD9dUTc1u9KAGVGSx6VYXs/27+2/bqGbSNpi3+u7QNt32B7wvav0ucLOixjtu0LbW+3/aSkv2ra/37bm2zvtP2k7Q+n218i6SZJBzf853Gw7aW270nrvdX2N2zvm8PpACQR8qiQiHhe0rWSzmrY/B5Jd0bENiWf93+X9CpJiyT9QVKnXTwfkvQOSUdLqkl6d9P+ben+P5f0fkkX2X59RPxO0qmStkTES9PHFkm7JH1K0hxJx0o6SdJHO6wT0BIhj0H2w7QFXH98SNL3tWfIvy/dpojYERHXRMTvI2KnpH+UdEKHZb5H0tcj4umIeFbSPzXujIgfRcQTkbhT0i2SlrV6s4hYHxH3RsRkRDwl6VszqBPQEn3yGGSnN/fJ254l6cW23yjpl5KOknRdum9/SRdJOkXSgem3HGB7dkTsarPMgyU93fD6F03lnyrpC5KOUNKI2l/Sw63ezPYRkr6m5L+C/ZX8Tq5vsy7AtGjJo1IiYkrS1Upa8++TdEPaapekT0s6UtIbI+LPJb053e4OitgqaWHD60X1J7b3k3SNpAslzYuIl0u6seH9s9b1vljSo5IOT+v0uQ7rA+wVIY8q+r6k90r6m/R53QFK+uF/bfsVSlrcnbpa0sdtL7B9oKTzGvbtK2k/SROSJtNW/dsa9o9LeqXtlzXV6TlJv7X9F5I+MoM6AS0R8hhk/9U0Tv46SYqI+yT9TknXyk0Nx39d0oslbZd0r6Qfz6DMSyXdLOlBSQ8oudCrtNydkj6u5A/Br5T8J7GmYf+jkq6Q9GR6DeFgSZ9Jj9uZvvdVM6gT0JK5MxQAVBcteQCoMEIeaGL7kqZuoPrjkn7XDegU3TUAUGGlGic/Z86cWLx4cb+rAQADZf369dsjYiRrX6lCfvHixRodHe13NQBgoNj+Rat99MkDQIUR8gBQYYQ8AFQYIQ8AFUbIA0CFEfIAUGGEPAD029SUND4uFTA5lZAHgH6ampJOPFFasEBavjx5nSNCHgD6aWJCuvtuaXIy+ToxkevbE/IA0E9z50rHHSfts0/yde7cXN++VMsaAMDQsaU77kha8HPnJq9zRMgDQL/NmiXNm1fMW3f7BrYX2r7D9ibbj9j+RLr9FbZvtf14+vXA7qsLAOhEHn3yk5I+HRGvkXSMpI/ZXqLkBse3R8Thkm7Xnjc8BgD0QNchHxFbI+KB9PlOSZskHSJphaTL08Mul3R6t2UBADqT6+ga24slHS3pPknzImKrlPwhkJR5ydj2Ktujtkcnch46BADDLreQt/1SSddI+mREPNfu90XE6oioRURtZCTzxiYAgBnKJeRtv0hJwH8vIq5NN4/bnp/uny9pWx5lAQDal8foGkv6tqRNEfG1hl1rJJ2dPj9b0vXdlgUA6Ewe4+SPl/S3kh62vSHd9jlJX5Z0te0PSvo/SWfmUBYAoANdh3xE/LekVlO0Tur2/QEAM8faNQBQYYQ8ANS1u657geu/542QBwApCe7ly6VDDpFOOKH1uu4Fr/+eN0IeAKSkZb5unbRrV/J1fDz7uILXf88bIQ8AUrLEb32Z38bnzQpe/z1vhDwASMlSv8uWJeG9bFnrpX/r67+PjUlr1+a+/nveWE8eAKTObt5R4PrveSPkAaBugMK7XXTXAECzARoiOR1CHgAaDdgQyekQ8gDQaMCGSE6HkAeARgM2RHI6XHgFMBymptobOdPJKJsBQEseQPV12s9eH2Uz4AEvEfIABkkno14aj61YP3snCHkAg6GT1njzsXPmVKqfvRP0yQMYDFmt8VYTl5qP3b69Uv3sncjrRt6X2d5me2PDtgtsP2N7Q/o4LY+yAAypTka9zJkj1WrS7Nm7j61QP3sn8mrJf0fSNyR9t2n7RRFxYU5lABhm7Y56mZqS3vIWaXRUWrpU+slPhi7YG+XSko+IuyQ9m8d7AUBL7bTGG7tq7r8/6aoZYkVfeD3X9kNpd86BWQfYXmV71PboxBBd8QZQkIpNZupWkSF/saRXSzpK0lZJX806KCJWR0QtImojIyMFVgfAUBiw9d6LVljIR8R4ROyKiClJl0paWlRZALCHIb3ImqWwkLc9v+HlGZI2tjoWAFCMXEbX2L5C0nJJc2yPSfqCpOW2j5IUkp6S9OE8ygIAtC+XkI+IszI2fzuP9wYAzBzLGgBAhRHyAFBhhDwAVBghDwAVRsgDQIUR8gDKoZMbgqBthDyA/uv09nxoGyEPoP+G+PZ8RSPkAfQfK0cWhtv/Aei/dm8Igo4R8gDKob5yJHJFdw0AVBghDwAVRsgDQIUR8gBQYYQ8gGzMQK0EQh7ACzEDtTJyCXnbl9neZntjw7ZX2L7V9uPp1wPzKAtADzADtTLyasl/R9IpTdvOk3R7RBwu6fb0NYBBwAzUysgl5CPiLknPNm1eIeny9Pnlkk7PoywAOWvse68/l5IZqGNj0tq1zEAdYEX2yc+LiK2SlH7NbArYXmV71PboBP8SAr3V2Pd+wgl79sNLyQxUAn6g9f3Ca0SsjohaRNRGRkb6XR1guDT3vdMPXzlFhvy47fmSlH7dVmBZAGaiue+9k354hlgOhCIXKFsj6WxJX06/Xl9gWQBmonn1x4j2VoKsd/PcfXfyB+GOO5IFxlA6eQ2hvELSPZKOtD1m+4NKwv2tth+X9Nb0NYAyaGyF11d/tPd8vjcMsRwYubTkI+KsFrtOyuP9AeQoj1Z4vZun/h4MsSwt1pMHqmhqqnW3S1YrvNN13LnJx8CgEw2omlZLEtS7aEZG8pno1G7XDvqKljxQNVkt9ZGRPbtobr9d2rGDVvgQoCUPVE3WkgTNwb9jB63wIUFLHqiaen/5+PjuEM/jQune+vlRWrTkgapauVJauDDpl4/obi0alh4eWIQ8UEXj49K6dUn3zLp1yetuLpQyLn5gEfJAFdm7lxuI6L57haWHBxYhD1RF4yzWefOkZcuk2bOTr52Og29W7+dn6eGBQ8gDVdDcZx6RhPEzz0h33plPKDMufiAR8kAVZPWZE8oQIQ9UA33maIFx8kAVsJYMWqAlD5RZJzfmoHsGGQh
5oKyYgIQcEPJAWTEBCTkg5IGy4mIqclD4hVfbT0naKWmXpMmIqBVdJjBwshb/4mIqctCrlvyJEXEUAQ9k2FvfOxdT0SW6a4A8dTIapo6+dxSoFyEfkm6xvd72qh6UB/THTEfD0PeOAvViMtTxEbHF9lxJt9p+NCLuqu9Mg3+VJC1atKgH1QEKMtMbZNP3jgIV3pKPiC3p122SrpO0tGn/6oioRURtZGSk6OoAxemmRU7fOwpSaEve9kskzYqInenzt0n6YpFlAn1DixwlVHR3zTxJ1zn5sO8j6fsR8eOCywT6p94iB0qi0JCPiCcl/WWRZQAAWmMIJQBUGCEPABVGyANAhRHyAFBhhDwAVBghDwAVRsgDQIUR8gBQYYQ8AFQYIQ8AFUbIA0CFEfIYPjO5exMwoAh5DI+pKWnr1uSuTZ3evQkYUL24MxTQf/Vb89Xv3CR1dvcmYEAR8hgOjbfms5N137mfKoYA3TUYDo235lu2TBobk9au5e5NqDxa8hgO3JoPQ4qQx/Dg1nwYQoV319g+xfZjtjfbPq/o8gAAuxUa8rZnS/qmpFMlLZF0lu0lRZYJANit6Jb8UkmbI+LJiPijpCslrSi4TABAquiQP0TS0w2vx9Jtf2J7le1R26MTExMFVwcAhkvRIZ81hGGPueQRsToiahFRGxkZKbg6GAgsOwDkpuiQH5O0sOH1AklbCi4Tg6w+M5VlB4BcFB3y90s63PahtveVtFLSmoLLxCBrnJlaX3YAwIwVGvIRMSnpXEk3S9ok6eqIeKTIMjHgGmem1pcdoPsGmLHCJ0NFxI2Sbiy6HFRE88zUiN0Lix13XLJvFqtxAO3itwXlU5+ZatN9A3SJkEe5ZXXfAGgba9eg3FhYDOgKIY/yY2ExYMborgGACiPkAaDCCHnkh/HsQOkQ8shH43IEJ5wgbd1K2AMlQMgjH43j2detkxYtYu0ZoAQIeeSjPp599uxkmCOTl4BSIOSRj/p49rExadkyJi8BJcE4eeRn1izpoIOYvASUCC155K9x7ZlOMDoHyB0hj3LgZiFAIQh5JJpb0Xm2qtt5L1abBApByOOFrejJyfxa1e220FltEiiEo0T9n7VaLUZHR/tdjeEzPp6E8ORkErI/+5l09NG7X4+NzXyBsOb33tt7TU1xwRaYAdvrI6KWta+wlrztC2w/Y3tD+jitqLLQpeZW9JIl+bWqO2mhz/SCLYCWih5CeVFEXFhwGehW1prteQ2DZD14oK8YJ49E85rtea7hznrwQN8UfeH1XNsP2b7M9oFZB9heZXvU9ugEIyoGC+PagdLrKuRt32Z7Y8ZjhaSLJb1a0lGStkr6atZ7RMTqiKhFRG1kZKSb6qCXGNcODISuumsi4uR2jrN9qaQbuikLJZM1rp0uGaB0ihxdM7/h5RmSNhZVFrqQ1eXSTjcM49qBgVBkn/w/237Y9kOSTpT0qQLLwkxkdbm02w3TuOrk2rWMmgFKislQwyxropLU/uQlAKXQl8lQGABZXS50wwCVwjj5YdZqohKTl4DKoCVfVe2OYc9aSoDlBYDKIOSriDHsAFKEfBXtbW12ZqkCQ4WQr6JWF09p4QNDhwuvVdTqgiqzVIGhQ0u+qrIunjI8Ehg6tOSHCWu7A0OHkB82rO0ODBW6awCgwgh5AKgwQh4AKoyQB4AKI+T7idmnAApGyPcLs08B9AAh3y97W18GAHLSVcjbPtP2I7anbNea9n3W9mbbj9l+e3fVrCBmnwLogW4nQ22U9C5J32rcaHuJpJWSXivpYEm32T4iInZ1WV51MPsUQA901ZKPiE0R8VjGrhWSroyI5yPi55I2S1raTVmVxM05ABSsqD75QyQ93fB6LN32ArZX2R61PTpBvzQA5Gra7hrbt0k6KGPX+RFxfatvy9iWOU4wIlZLWi1JtVqNsYQAkKNpQz4iTp7B+45JWtjweoGkLTN4HwBAF4rqrlkjaaXt/WwfKulwST8tqKzqYHIUgJx1O4TyDNtjko6V9CPbN0tSRDwi6WpJ/yPpx5I+xsiaaTA5CkABHCVqNdZqtRgdHe13NfpjfDwJ+MnJZOz82BjrvgNoi+31EVHL2seM17JgchSAAnBnqLJgchSAAhDyZcKt+QDkjO6aojFiBkAfEfJ5ag50RswA6DNCPi9Zgc5ywgD6jJDPS1agM2IGQJ8R8nnJCvT6iJmxMWntWkbMAOg5RtfkpdUQSEbMAOgjQj5PBDqAkqG7plMMiQQwQAj5TkxOSm96E0MiAQwMumvaNTUlLVsm3Xtv8vruu5MW/axZLEMAoLRoybdrYkK6//7dr2s16b3vpVUPoNQI+XbNnSsdf7w0e7Z0zDHStddK99zDRCcApUZ3Tbuah0hKyXj4u+9mohOA0iLkO9E8RJKlgQGUXLe3/zvT9iO2p2zXGrYvtv0H2xvSxyXdVzUHeQ9/rIc+AQ+gpLrtk98o6V2S7srY90REHJU+zumynO6xIiSAIdRVyEfEpoh4LK/K5K6x5c6KkACGUJGjaw61/TPbd9pe1uog26tsj9oencgzeJtb7nPmsCIkgKEz7YVX27dJOihj1/kRcX2Lb9sqaVFE7LD9Bkk/tP3aiHiu+cCIWC1ptSTVarWZd5bX12+vXwRtbrlv386FUgBDZ9qWfEScHBGvy3i0CnhFxPMRsSN9vl7SE5KOyK/aTbL627OW/uVCKYAhU8gQStsjkp6NiF22D5N0uKQniyhLUnZ/+7x5tNwBDL1uh1CeYXtM0rGSfmT75nTXmyU9ZPtBST+QdE5EPNtdVfei1R2YaLkDGHKOEi2ZW6vVYnR0dGbf3NwnDwBDwvb6iKhl7avO2jX1VnsE670DQKo6IS8x4QkAmlQr5JnwBAB7qFbIt7oACwBDqlqrUDYvB8wFWABDrlohL71wOWAAGGLV6q4BAOyBkAeACiPkAaDCCHkAqDBCHgAqjJAHgAor1QJltick/WKaw+ZI2t6D6nSDOuaj7HUse/0k6piXstfxVRExkrWjVCHfDtujrVZbKwvqmI+y17Hs9ZOoY14GoY6t0F0DABVGyANAhQ1iyK/udwXaQB3zUfY6lr1+EnXMyyDUMdPA9ckDANo3iC15AECbCHkAqLBShrztM20/YnvKdq1p32dtb7b9mO23t/j+Q23fZ/tx21fZ3rfg+l5le0P6eMr2hhbHPWX74fS4Gd6xfMZ1vMD2Mw31PK3Fcaek53az7fN6WL9/sf2o7YdsX2f75S2O6/k5nO6c2N4v/QxsTj93i3tRr4byF9q+w/am9PfmExnHLLf9m4af/+d7Wce0Dnv92Tnxr+l5fMj263tcvyMbzs8G28/Z/mTTMX0/jx2LiNI9JL1G0pGS1kqqNWxfIulBSftJOlTSE5JmZ3z/1ZJWps8vkfSRHtb9q5I+32LfU5Lm9OmcXiDpM9McMzs9p4dJ2jc910t6VL+3Sdonff4VSV8pwzls55xI+qikS9LnKyVd1eOf7XxJr0+fHyDpfzPquFzSDf347LX7s5N0mqSbJFnSMZLu62NdZ0v6pZJJRqU6j50+StmSj4hNEf
FYxq4Vkq6MiOcj4ueSNkta2niAbUt6i6QfpJsul3R6kfVtKvs9kq7oRXkFWCppc0Q8GRF/lHSlknNeuIi4JSIm05f3SlrQi3Lb0M45WaHkcyYln7uT0s9CT0TE1oh4IH2+U9ImSYf0qvwcrZD03UjcK+nltuf3qS4nSXoiIqabgV96pQz5vThE0tMNr8f0wg/zKyX9uiEwso4pyjJJ4xHxeIv9IekW2+ttr+pRnRqdm/4bfJntAzP2t3N+e+EDSlp0WXp9Dts5J386Jv3c/UbJ57Dn0q6ioyXdl7H7WNsP2r7J9mt7WrHEdD+7snz+pOQ/slaNtX6fx4707fZ/tm+TdFDGrvMj4vpW35axrXkMaDvHdKzN+p6lvbfij4+ILbbnSrrV9qMRcVe3dWunjpIulvQlJefiS0q6lT7Q/BYZ35vbGNt2zqHt8yVNSvpei7cp9Bxm6NtnrlO2XyrpGkmfjIjnmnY/oKTr4bfp9ZgfSjq8x1Wc7mdXlvO4r6R3Svpsxu4ynMeO9C3kI+LkGXzbmKSFDa8XSNrSdMx2Jf/m7ZO2qrKO6dh09bW9j6R3SXrDXt5jS/p1m+3rlHQF5BZQ7Z5T25dKuiFjVzvnd8baOIdnS3qHpJMi7QDNeI9Cz2GGds5J/Zix9HPwMknPFlinF7D9IiUB/72IuLZ5f2PoR8SNtv/N9pyI6NmiW2387Ar9/HXgVEkPRMR4844ynMdODVp3zRpJK9PRDIcq+Qv608YD0nC4Q9K7001nS2r1n0GeTpb0aESMZe20/RLbB9SfK7nQuLEH9aqX39i3eUaLsu+XdLiT0Un7KvmXdU2P6neKpH+Q9M6I+H2LY/pxDts5J2uUfM6k5HP3k1Z/pIqQ9v9/W9KmiPhai2MOql8nsL1Uye/+jh7WsZ2f3RpJf5eOsjlG0m8iYmuv6tig5X/k/T6PM9LvK79ZDyUhNCbpeUnjkm5u2He+ktEOj0k6tWH7jZIOTp8fpiT8N0v6T0n79aDO35F0TtO2gyXd2FCnB9PHI0q6KHp5Tv9D0sOSHlLyyzS/uY7p69OUjM54opd1TH9WT0vakD4uaa5fv85h1jmR9EUlf5Ak6c/Sz9nm9HN3WI9/tm9S0q3xUMP5O03SOfXPpKRz03P2oJIL28f1uI6ZP7umOlrSN9Pz/LAaRtb1sJ77KwntlzVsK815nMmDZQ0AoMIGrbsGANABQh4AKoyQB4AKI+QBoMIIeQCoMEIeACqMkAeACvt/V7N/5YFcCdgAAAAASUVORK5CYII=\n",
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXkAAAEICAYAAAC6fYRZAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nO3dd3gU1f7H8fc3CR2kIyUU/YkUCxYEBAsiCoKKvSvqVQlgwYLg9YqC4kXxgqJwESvXAupVvAgo0osoiAgiINKlBgKIIFKSPb8/drKEsOnZks3n9Tx5kpmd3flmsvnk5MyZM+acQ0REYlNcpAsQEZHQUciLiMQwhbyISAxTyIuIxDCFvIhIDFPIi4jEMIW8SCZmNtPM7snltm3NbFOoaxLJL4W8FFlmtt7M/jKzfRk+Xot0XVkxszvNbG6k65DiJSHSBYgU0BXOuamRLkIkWqklLzHHzEqZ2e9mdmqGddW9Vn8NM6tsZhPMbIeZ7fa+Tszla5cxs3e95y0Hzsn0eF8zW2Nme81suZld7a1vAowEzvX+4/jdW9/ZzH40sz/MbKOZPVNYx0EEFPISg5xzB4HPgJszrL4BmOWc247/ff8OUB+oB/wF5Lab52ng/7yPDkDXTI+vAc4HKgL9gffNrJZzbgWQBHzrnCvvnKvkbf8ncAdQCegMdDezq/Lw7YpkSyEvRd3nXqs9/eNeb/2HHB3yt3jrcM7tdM596pzb75zbCwwELszl/m4ABjrndjnnNgLDMj7onPvEObfFOedzzn0ErAJaZPVizrmZzrml3vY/AWPyUItIjtQnL0XdVVn0yU8HyphZS2AbcAYwDsDMygJDgY5AZW/7CmYW75xLy2F/tYGNGZY3ZHzQzO4AHgEaeKvKA9WyejGvvkHAqUBJoBTwSQ41iOSaWvISk5xzPuBj/K35W4AJXqsd4FGgEdDSOXcccIG33nLx0luBuhmW66V/YWb1gTeA+4GqXpfMzxleN9iUrx8C44G6zrmK+Pvtc1OHSK4o5CWWfQjcCNzqfZ2uAv5++N/NrAr+fvbc+hh4wjt5mwg8kOGxcviDfAeAmd2Fv4WeLhlINLOSmWrZ5Zw7YGYt8P9BEik0Cnkp6r7INE5+XPoDzrn5+E9s1ga+zPCcl4EyQArwHfBVHvbXH38XzTrga+C9DPtbDvwL+BZ/oJ8GfJPhudOBZcA2M0vx1vUABpjZXqAf/j8iIoXGdNMQEZHYpZa8iEgMU8iLiMQwhbyISAxTyIuIxLCouhiqWrVqrkGDBpEuQ0SkSPnhhx9SnHPVgz0WVSHfoEEDFi5cGOkyRESKFDPbkNVj6q4REYlhCnkRkRimkBcRiWEKeRGRGKaQFxGJYQp5EZEYppAXEYkEnw+SkyHEk0Qq5EVEws3ng4sugsREaNvWvxwiCnkRkXDbsQPmzYPUVP/nHTtCtiuFvIhIuNWoAa1bQ0KC/3ONGiHbVVRNayAiUiyYwYwZ/hZ8jRr+5RBRyIuIREJcHBx/fOh3E/I9iIhIxCjkRURimEJeRCSGKeRFRGKYQl5EJIKcc8zeMJv9h/eH5PUV8iIiETJ7w2ziBsRx4bsXMn7l+JDsQ0MoRUTCyedj/5YN1B/TgpT9KQDUKl+LqxtfHZLdKeRFRMLF5+OlO06id8N1gVVz7prDefXOC9kuFfIiImGwZtcaTnr1JGjoX75rsfH2yK0hvyBKIS8iEkI+56Pj+x2ZsnZKYN22ofEc36zNkTlrfL6QTXGgE68iIiEyfuV44gfEBwJ+9FWjcU+lcfzKzTBzpj/QQzztsFryIiKF7PcDv1P5hcqB5TNqnsH3935PQpwXuRm7aIJNO1yIXThqyYuIFKI+U/ocFfBLkpbwY7cfjwR8ZiGedlgteRGRgvL5WLJiJmf89+LAqj5t+jCo/aCcnxviaYcV8iIiBZCaeojmfauwpMKfgXW7++ymUulKuX+REE47rO4aEZF8+s+S/1BiYKlAwI//KA6XtC1vAR9iBW7Jm1ld4D9ATcAHjHLOvWJmVYCPgAbAeuAG59zugu5PRCTStu3bRq1/1Qosd9hZmUkj9hDXuk1Ib+WXH4XRkk8FHnXONQFaAT3NrCnQF5jmnGsITPOWRUSKtLv+d9dRAb+65698teQ04izO35/uXASrO1aBQ945t9U5t8j7ei+wAqgDdAFGe5uNBq4q6L5ERAqNzwfJybkO5Tkb5mD9jXcXvwvA4EsG4552/F/acccOgYwihXri1cwaAGcC84HjnXNbwf+HwMyi638YESm+0i9AmjfPP2xxxgz/yc8g/jr8FycOO5Ft+7YBUKNcDdY/tJ4yJcr4N0gfApn+WjHYXQOAmZUHPgV6Oef+yMPz7jOzhWa2cEeU/QUUkRgV7AKkIIbOG0LZ58sGAn7WnbNIfiz5SMDDkSGQmzYduYo1ihRKS97MSuAP+A+cc595q5PNrJbXiq8FbA/2XOfcKGAUQPPmzaOrM0tEYlN2rW+fj3VrFnLihy0Dq+7YWoN3h2/B4uODv14Ih0AWVGGMrjHgLWCFc25IhofGA12BQd7n/xV0XyIihSKLC5BcWhqde1Xny2pHBgJueQlqHdgF/VOiNsizUxjdNW2A24F2ZrbY++iEP9wvMbNVwCXesohIdEhvfXsBP/HXicQ9lxAI+LfHG27yudQ6EJrpBsKlwC1559xcIKtOqIuzWC8iEhX2HNhD5Rcq4/D3Fp+6ryyLhh2kRKs2MH06pKSEZLqBcNEVryISu3IYJvnktCep9EKlQMAvum8RS1/YS4nfvKmA4+OPau0XRZq7RkRiUzbDJJcmL+X0kacHNn3s3McYfOngI88tgn3vWVHIi0hsCjJMMrV6VVq92Yoftv4Q2GzX47uoXMabGjiEd2iKFHXXiEhsyjRP+wfJUyjxbIlAwI+7cRzuaXd0wIfwDk2Ropa8iMQmb5jk9t+Wc/zo02DcbADan9ieybdN9s81k1GI79AUKWrJi0jMundCN3/Ae369/1em3D7l2ICHkN+hKVLUkheR6JaPfvJ5G+fR5u02geVBFw+iz3l9sn9SiO/QFCkKeRGJXnmYSAzgQOoBThp2Epv3bgagcunKbLr1B8rWaZC7/UXx9AT5pe4aEYleuZxIDGDY/GGUGVgmEPDTb5/KrkmnUfaEk2PqRGpeqSUvItErF9P4rv99PSe8ckJg+ZbTbuH9q9/Htm+PyROpeaWQF5HolU0/uXOOLmO78MWvXwTWbXp4E3WOq+NfiPJ53sNFIS8i0S1IP/lXq7/isg8uCyy/ccUb3HPWPUc/L0ZPpOaVQl5Eiow/Dv5BtRercdh3GIDG1RqzJGkJJeNLBn9CDJ5IzSudeBWRIqHfjH5UHFQxEPAL713Iip4rsg54AdSSF5Eot2z7Mk7996mB5V4tezG049AIVlS0KORFJCql+dJo/XZrFmxeEFi38/GdVClTJYJVFT3qrhGRqDNm6RgSnk0IBPx/L30T18+ngM8HteRFJGrs+HMHNV46MtSxbf22THsnjbjnkqD1f3K
84lWOpZAXkajQfUJ3Rv4wMrD8S89faJRWCe5NLPYXNBWEQl5EImr+pvm0eqtVYPm5i57jyQue9C84pwuaCkghLyIRcTD1II1ea8SGPRsAqFCyAlsf3Uq5kuWObKQLmgpMnVsiEnbDFwyn9MDSgYCfevtU/njij6MDPl36BU0K+HxRS15Ewua3Pb9R/+X6geUbTrmBsdeOxRTgIaOQF5GQc85x7cfXMu6XcYF1G3v9RmLFuhGsqnhQd42IHMvng+Rk/4nPAj7n6zVfEzcgLhDwI78A199IvPK2YjvHezgp5EXkaOl3Y0pMzP3NNoI8Z+/BvZQZWIYO73cA4KTjTuDgwDi6/YD/D0EONwGRwqGQF5Gj5eFuTFk9Z8CXfTlu0HEcSD0AwIJ7FrCq1xpKtvLuu2qmIZFhoj55ETlafm624T1nxcpvaNo9FRYOBuD+c+7n1U6vHtlu5kx/l46ZRsyEiUJeRI6Wj7Hpac7HBXekMm9TWmDdjt47qFa22tEbxsVBrVqFXbFkQ901InKsPIxN/3jZxyQ8m8C8TfMA+Oi6j3BPu2MDXiJCLXkRyZeU/SlUH1w9sNymbhtm3TmL+Lj4CFYlmSnkRSTP7p90P8O/Hx5YXtFzBY2rNY5gRZIVhbyI5NqCzQto+WbLwHL/tv3pd2G/CFYkOVHIixQ3Pl+eJ/w6lHaIJsObsHb3WgDKlihL8mPJlC9ZPpSVSiHQiVeR4iQfFzq9vvB1Sj1XKhDwX9/2NX/+/U8FfBGhlrxIcRLsQqcsbsKxcc9G6r1cL7B8TZNr+O/1/9VkYkWMQl6kOMnFhU7OOW787418svyTwLoNvTZQr2K9Y7aV6KeQFylOcrjQadraabR/r31geXin4fQ4p0e4q5RCpJAXKW7SL3TK4M9Df1LzXzXZd2gfACdUOoEVPVdQKqFUJCqUQlQoJ17N7G0z225mP2dYV8XMppjZKu9z5cLYl4gUroGzB1L+n+UDAf/d375j7UNrFfAxorBG17wLdMy0ri8wzTnXEJjmLYtIlFiZshLrb/xjxj8ASDo7Cfe0o2ViyxyeKUVJoXTXOOdmm1mDTKu7AG29r0cDM4E+hbE/Eck/n/Nx0eiLmL1hdmDd9se2U71c9WyeJUVVKMfJH++c2wrgfQ46X6mZ3WdmC81s4Q7dQEAkpD5d/inxA+IDAf/hNR/innbBAz7YnZ7yc8coiaiIXwzlnBvlnGvunGtevbpaEiJ5lovg3bl/J9bfuO6T6wBoWaclqU+lcvNpN2f9mpkvmsrPHaMk4kIZ8slmVgvA+7w9hPsSKZ5yEby9vupFtcFHpv1d1mMZ393zXfazRQa7aCo/d4ySiAtlyI8HunpfdwX+F8J9iRRPwYLXa9n/sHkh1t94Zf4rAPS7oB/uaUfT6k1zft30i6YSEo5cNBVsnUS9QjnxamZj8J9krWZmm4CngUHAx2b2N+A34PrC2JeIZJD5CtZq1TjU7kJOP20uK73Ge8n4kqT0TqFCqQq5f92sLprK4x2jJPIKa3RNFh17XFwYry8iWcgUxm/OGsq9F80NPPxlpw/peE5Wv545CHLRVNB1EtV0xatIURcXx+YyqSQOONL72mWlMW7zeVi/myJYmEQDhbxIUeTNCe+qV+eWcbcy9uexgYfWPbCGBofLqUtFAIW8SNHjjaiZsXku7W4/MppmWMdhPNDygQgWJtFIIS9SxOzfsoE6587m9zL+5brl6/DrQ6spnVA6soVJVIr4xVAiknsvzH2Bcm+dGAj4bxY247dHNirgJUtqyYsUAat2ruLk104OLN975j2Mavmc+t0lRwp5kSjmcz4uee8Spq+bHliX/FgyNcrpQiTJHYW8SJQat2Ic13x8TWD5/avf59bTb41gRVIUKeRFoszuv3ZT5cUqgeXm1ZvxbbfvSYgvEcGqpKjSiVeRKPLY148dFfBL55/N9w8tI6Fde836KPmilrxIpPl8/LhiOmf995LAqr+f93cGnvogPJd49ORjmlJA8kghLxJBhw8f5My/V2FZ+f0AxFkcux7fRcXSFf3zw2ecfEyzPko+qLtGJELe+fEdSj5fOhDwE8bGkdZtiz/g4cjkY5s2wcyZGiop+aKWvEiYbd27ldpDageWO6dU4YuRe7DWbY5trWvWRykghbxImDjnuOPzO3j/p/cD69a+Fs8JTU+F38ZCzZpqrUuhU8iLhMGs9bNoO7ptYHlo6wH06jTgyEnVuDgFvISEQl4khPYf3k/9l+uTsj8FgNoVarPmwTWUji8FrafqpKqEnEJeJEQGfzOYx6c+Hliec9cczqt33pENdCs9CQOFvEghW7NrDSe9elJg+a4z7uLtLm8fu6FOqkoYKORFConP+ej4fkemrJ0SWLft0W0cX15BLpGjcfIi4J8yIDnZfwFSduuyMH7leOIHxAcCfvRVo3FPOwW8RJxa8iLe7fQCJ0FnzPCvz7wu7tg20e8HfqfyC5UDy2fUPIPv7/2ehDj9akl0UEteZMcOf5hnnCMm2LpM+kzpc1TAL0lawo/dflTAS1RRyIvUqOFvrSckHBnOWK0aNG8O8fHHDHFcsm0J1t94cd6LAPRp0wf3tOP040+P1HcgkiU1OUTS54hJH87oHLRrBwsXwjnnwJgxAKT6Ujl71Nn8lPxT4Km7++ymUulKkapcJEcKeRE4ejjj9u1Humrmz4d69VjcoRlntlgU2Hz8TeO5otEVESpWJPfUXSOSkc/nb8mfey4kJLCvhOPRi9No3twf8Jf+36Wk9UtTwEuRoZAXSZc+yqZuXTDj81mv0+SRkgxpDX/bWpOdvVOYfNtk4ky/NlJ0qLtGJJ03omZDuVQerDOH8VNmc1q90/io9fO0Pr2zph6QIkkhL+I5XLUyr9xYl6frr4N448X2g+jV6mFK6AbaUoQp5EWAbzd+S9LEJH5quI4r6l/Kq11ep37lBpEuS6TAFPJSrO3+azd9p/Zl1KJRJB6XyLgbx9GlURdMXTMSIxTyUiw55/hw6Yc88vUjpOxP4eFWD9O/bX8qlKoQ6dJECpVCXoqdX3f+So+JPZi2bhot6rTgq1u/4sxaZ0a6LJGQUMhLsXEg9QCD5g7in3P/SZmEMozoNIL7zr6P+Lj4SJcmEjIKeSkWpq2dRveJ3Vm1axU3nXoTQzsMpWbZGrozk8Q8XdUhMS15XzK3fXYb7d9rj8/5mHzbZMZcO8Yf8BddBImJ0Lat/0IokRiklrzEJJ/z8cYPb9B3Wl/+PPQnT13wFE+c9wRlSpTxbxBsKmHdik9iUMhb8mbW0cxWmtlqM+sb6v2J/JT8E23ebkPSxCTOqNKUn5KWMOCiAUcCHoJPLywSg0Ia8mYWDwwHLgOaAjebWdNQ7lOKIe82fX8e3Efvr3tz1utnsXrXakYvb8T0HvNpfF3Ssd0x6dMLb9oEM2eqT15iVqi7a1oAq51zawHMbCzQBVge4v1KceFNKjZ+x1we6FKC30of5J4z72FQs0eo+vfTITUt6+6YjNMLi8SoUHfX1AE2Zlje5K0LMLP7zGyhmS3cEeQWayLZ2bhuMVfVmUOXG3
0c98dB5nYZzxtXvkHVeo3VHSNC6EM+2P/A7qgF50Y555o755pXr149xOVIrEj1pTLk2yE0+egCvj7JGDQtjkXLzqNNs8v9G6g7RgQIfXfNJqBuhuVEYEuI9ykxbv6m+XSb0I0lyUvo3LAzr3UcRoOe5Y4d767uGJGQh/z3QEMzOwHYDNwE3BLifUpR5PPleGHS7wd+54mpT/D6D69Tu0JtPr3hU65ufLUmExPJRki7a5xzqcD9wGRgBfCxc25ZKPcpRVD6HZmyuDDJOceYpWNo/FpjRi0axYMtH2RFzxVc0+QaBbxIDkJ+MZRzbhIwKdT7kSIsmwuTVu9aTY+JPZiydgrNazdn0q2TOKvWWREuWKTo0LQGEnlBLkw6mHqQAbMGcOqIU5m/eT6vXfYa3/3tu+wD3hsvj3NZbyNSzGhaA4m89JEwXp/8jPUzSZqYxK87f+XGU25kSIch1K5QO/vXSO/ymTfP/4dixgz/iVeRYk6/BRId4uLYXt644/OutPtPO1J9qXx565eMvW5szgEPwbt8REQteYk8n/Px1qK36DO1D/sO7ePJ85/kyfOfPHqumZykd/mkt+R18ZMIoJCXCFuavJSkiUnM2ziPC+pfwMjOI2lSvUneXyhTl48ufhLxU8hLRPx56E8GzBrAkO+GULFURd7p8g5dm3Ut2JBIXfwkcgyFvITdhF8ncP+k+9mwZwN3n3E3L17yIlXLVo10WSIxSSEvYbPpj008+OWDjPtlHE2rN2X2nbM5v/75kS5LJKYp5KVgcjEdQaovldcWvMZTM54i1ZfK8+2e59HWj1IyvmSYixUpfjSEUvIvh+kIABZsXkCLN1rw8OSHOb/e+SzrsYwn2vShZMpuXbQkEgYKeclZVleSZjM2fc+BPfSc2JNWb7Yi+c9kPrn+EybeMpETKzbQDbRFwkghL9nLrrUeZDoC5xxjfx5L4+GNGfnDSO5vcT8req7guqbX+UfO6KIlkbBSn7xkL5vJwzKPTV+zey09JvXg6zVfc3ats/ni5i9oXrv50a+ni5ZEwkohL9nLKZTj4jhYtRKD5wxk4JyBlIgrwbCOw+hxTg/i4+KPfT1dtCQSVgp5yV4OoTxz/Uy6T+zOLym/cH3T6xnaYSh1jquTxYt5dNGSSNgo5CVnQUJ5x5876D2lN6OXjKZBpQZMvGUinRp2ilCBIpIVhbzkic/5eOfHd3h86uP8cfAPnjjvCf5xwT8oW6JspEsTkSAU8pJry7YvI2liEnN/m8v59c7n353/zSk1Tol0WSKSDYW85Gj/4f08O+tZXvr2JY4rdRxvXfkWd55xJ3GmEbgi0U4hL0fLNE3BpFWT6DmpJ+t/X8+dZ9zJ4EsGU61stUhXKSK5pJCXIzLcQm9z27N56J46fPrLZzSp1oSZXWdyYYMLI12hiOSRQl6O2LGDtG+/4bWz0/jHOfNJXVWa5y56jt5temsyMZEiSiEvAQtTf6PbA2VYdNw+OuyszPBHF/B/VU+KdFkiUgA6cybsObCHByY9QIs3W7Kldnk+av86X76ccnTAZzVJmYhENbXkizHnHJ8s/4ReX/Vi275t9DynJ8+1e46KpSsevWGGvnpat/ZfARun9oFIUaCQL6bW7l5Lz0k9+Wr1V5xZ80z+d9P/OKfOOcE3zm6SMhGJamqOFTOH0g7x/JznOWXEKcz9bS4vd3iZBfcuyDrgIeiUwiJSNKglX4zM3jCbpAlJrEhZwbVNruWVjq/kPJkYaOZIkSJMIV8MpOxP4fEpj/PO4neoX7E+E26eQOeTO+ftRTRzpEiRpJCPYc453l38Lr2n9GbPwT30adOHpy54inIly0W6NBEJE4V8jFq+YzlJE5KY89sc2tRtw8jLR3JqjVMjXZaIhJlCPsbsP7yfgbMHMnjeYMqXLM8bV7zB3WfercnERIophXwM+Wr1V/SY2IN1v6+ja7OuDL5kMNXLVY90WSISQQr5GLBl7xZ6fdWLT5Z/QqOqjZjRdQZtG7SNdFkiEgUU8kVYmi+NEd+P4MnpT3Io7RDPXvQsvVv3plRCqUiXJiJRQiFfRC3auohuE7qxcMtCLjnxEkZ0HsFJVTJNJpZpbngRKX50Nq6I+ePgHzz05UOc88Y5bNyzkTHXjmHybZODB/xFF0FiIrRt618WkWJHLfkiwjnHpys+5aGvHmLr3q10b96dgRcPpFLpSsGfoPlmRIQCtuTN7HozW2ZmPjNrnumxJ8xstZmtNLMOBSuzeFu3ex2Xj7mc6z+5nhrlavDt375leOfhWQc8aL4ZEQEK3pL/GbgGeD3jSjNrCtwEnALUBqaa2cnOubQC7q948PrSD1WtxJDvhjJg1gDiLI4hlw7hgZYPkBCXix+b5psREQoY8s65FQB2bIB0AcY65w4C68xsNdAC+LYg+ysWvL70uRu/IenaUiwrv5+rG1/NKx1foW7Funl7Lc03I1LsherEax1gY4blTd66Y5jZfWa20MwW7tixI0TlhFkB7qK0c+NK7qk8h/O7prE3bT/jO/6Hz278LO8BLyJCLkLezKaa2c9BPrpk97Qg64ImnnNulHOuuXOuefXqMXB1Zj5HtTjnGL14NI0/voB3mzl6zzOW/9iGK1rcFtp6RSSm5dhd45xrn4/X3QRkbHomAlvy8TpFTz5GtazYsYLuE7sza8Mszk08l9dvm8ppPWqqL11ECixU3TXjgZvMrJSZnQA0BBaEaF/RJQ+jWv46/Bf/mP4Pmo1sxpLkJYy6fBRz757LabWa+f8wKOBFpIAKdOLVzK4GXgWqAxPNbLFzroNzbpmZfQwsB1KBnsVmZE0uR7VMXj2ZHpN6sHb3Wm4//XZeuvQlapTTMEcRKVwFHV0zDhiXxWMDgYEFef0iK5tRLVv3buXhyQ/z0bKPOLnqyUy7YxrtTmgX5gJFpLjQFa+FKZu5YtJ8aYxcOJK/T/87B1MP0r9tf/q06aPJxEQkpBTyhSV9VM28ef6++Bkz/C164MetP9JtQje+3/I97U9sz4hOI2hYtWGECxaR4kAhX1iCjKrZW6ks/Wb0Y9iCYVQrW40PrvmAm0+9OdjFYyIiIaGQLyzpo2rmzcO1PpdxO+fy4AcPsWXvFrqd3Y3nL36eymUqR7pKESlmNNVwQaVf3QowYwbrl8/jym4VufaT66hatirz/jaPf1/+bwW8iESEWvIFkaEf/nCbVgx97nL6zx6AYbx0yUs81Oqh3E0mJiISIkqggvD64b+plUrSKXP5edpcujTqwrDLhlGvYr1IVyciUoxDvhBujberQgJ97q7Gm7W3UfdAKT6/cSxdGl9VyIWKiORf8eyTL+Ct8ZxzvLfkPRoPb8I7dXbwWLPuLH9mhwJeRKJO8WzJF+DWeCtTVtJ9YndmrJ9Bq8RWTOk8hWY1m4W4YBGR/CmeLfl83BrvQOoB+s3ox+kjT+fHbT8ysvNIvrn7GwW8iES14tmSz+Ot8aasmUKPST1YvWs1t552K/+69F8cX153XBKR6Fc8Qx5ydWu8bfu28cjkRxjz8xgaVmnIlNun0P7E/EyvLyISGcU35LPhcz5eX
/g6T0x7gr9S/+LpC5+m73l9KZ1QOtKliYjkiUI+k8XbFpM0IYn5m+fT7oR2jOg0gkbVGkW6LBGRfFHIe+Pl91Uqy9Mzn+GV+a9QpUwV3rv6PW497dYjk4kVwrh6EZFwK94h742X/zxlLg9cmcCm0oe476z7GNR+0NFzzWQzjbCISDQr1iG/Ye0iHqwzh/HtHKclH+Kjm76gdbPLj92wAOPqRUQiqVg2Rw+nHealeS/R9OMLmXqS8eLUOH5Yfh6tT+8c/An5GFcvIhINil1L/tuN39JtQjeWbl/KFSdfwasdXqF+z7LZ97XncVy9iEi0KDYhv/uv3fSd2pdRi0aReFwi424cR5dGXXJ/l6ZcjKsXEYk2MR/yzjk+WPoBj379KDv37+SRVo/wTNtnqFCqQqRLExEJuZgO+V93/kr3id2Zvm46Leq0YPJtkzmj5hmRLktEJGxiMuQPpB5g0NxB/Jt+pMsAAAb7SURBVHPuPymTUIYRnUZw39n3ER8XH+nSRETCKuZCfuraqfSY2INVu1Zx86k3M6TDEGqWrxnpskREIiJmhlAm70vmts9u45L3LsHnfEy+bTIfXvuhP+DTb7btXKTLFBEJq5gI+UmrJtF4eGM+XvYxT13wFEu7L+XS/7vU/2AB7wIlIlKUxUR3zclVT6ZVYiuGdhhK42qNj35QV6uKSDEWEy35k6qcxJe3fnlswIOuVhWRYi0mWvLZ0tWqIlKMxX7Ig65WFZFiKya6a0REJDiFvIhIDFPIi4jEMIW8iEgMU8iLiMQwhbyISAwzF0XzuZjZDmBDPp9eDUgpxHIKi+rKu2itTXXljerKm4LUVd85Vz3YA1EV8gVhZgudc80jXUdmqivvorU21ZU3qitvQlWXumtERGKYQl5EJIbFUsiPinQBWVBdeRettamuvFFdeROSumKmT15ERI4VSy15ERHJRCEvIhLDilTIm9n1ZrbMzHxm1jzTY0+Y2WozW2lmHbJ4/glmNt/MVpnZR2ZWMgQ1fmRmi72P9Wa2OIvt1pvZUm+7hYVdR5D9PWNmmzPU1imL7Tp6x3C1mfUNQ12DzewXM/vJzMaZWaUstgvL8crp+zezUt7PeLX3XmoQqloy7LOumc0wsxXe+/+hINu0NbM9GX6+/UJdV4Z9Z/uzMb9h3jH7yczOCkNNjTIci8Vm9oeZ9cq0TViOmZm9bWbbzeznDOuqmNkUL4ummFnlLJ7b1dtmlZl1zVcBzrki8wE0ARoBM4HmGdY3BZYApYATgDVAfJDnfwzc5H09Euge4nr/BfTL4rH1QLUwHrtngMdy2CbeO3YnAiW9Y9o0xHVdCiR4X78AvBCp45Wb7x/oAYz0vr4J+CgMP7tawFne1xWAX4PU1RaYEK73U15+NkAn4EvAgFbA/DDXFw9sw3/BUNiPGXABcBbwc4Z1LwJ9va/7BnvfA1WAtd7nyt7XlfO6/yLVknfOrXDOrQzyUBdgrHPuoHNuHbAaaJFxAzMzoB3wX2/VaOCqUNXq7e8GYEyo9hECLYDVzrm1zrlDwFj8xzZknHNfO+dSvcXvgMRQ7i8Hufn+u+B/74D/vXSx97MOGefcVufcIu/rvcAKoE4o91nIugD/cX7fAZXMrFYY938xsMY5l9+r6QvEOTcb2JVpdcb3UVZZ1AGY4pzb5ZzbDUwBOuZ1/0Uq5LNRB9iYYXkTx/4SVAV+zxAowbYpTOcDyc65VVk87oCvzewHM7svhHVkdL/37/LbWfx7mJvjGEp342/xBROO45Wb7z+wjfde2oP/vRUWXvfQmcD8IA+fa2ZLzOxLMzslXDWR888m0u+rm8i6sRWpY3a8c24r+P+IA8FuPl0oxy3qbv9nZlOBmkEeetI597+snhZkXeaxobnZJldyWePNZN+Kb+Oc22JmNYApZvaL9xc/37KrC/g38Cz+7/lZ/F1Jd2d+iSDPLfAY29wcLzN7EkgFPsjiZQr9eAUrNci6kL2P8srMygOfAr2cc39kengR/u6Ifd75ls+BhuGoi5x/NpE8ZiWBK4EngjwcyWOWG4Vy3KIu5J1z7fPxtE1A3QzLicCWTNuk4P83McFrgQXbplBqNLME4Brg7GxeY4v3ebuZjcPfVVCg0MrtsTOzN4AJQR7KzXEs9Lq8E0qXAxc7rzMyyGsU+vEKIjfff/o2m7yfc0WO/Ve80JlZCfwB/4Fz7rPMj2cMfefcJDMbYWbVnHMhn4grFz+bkLyvcukyYJFzLjnzA5E8ZkCymdVyzm31uq62B9lmE/7zBukS8Z+PzJNY6a4ZD9zkjXw4Af9f4wUZN/DCYwZwnbeqK5DVfwYF1R74xTm3KdiDZlbOzCqkf43/5OPPwbYtLJn6QK/OYn/fAw3NPwqpJP5/c8eHuK6OQB/gSufc/iy2Cdfxys33Px7/ewf876XpWf1hKixen/9bwArn3JAstqmZfm7AzFrg/93eGcq6vH3l5mczHrjDG2XTCtiT3lURBln+Rx2pY+bJ+D7KKosmA5eaWWWve/VSb13ehPrMcmF+4A+nTcBBIBmYnOGxJ/GPjFgJXJZh/SSgtvf1ifjDfzXwCVAqRHW+CyRlWlcbmJShjiXexzL83RahPnbvAUuBn7w3WK3MdXnLnfCP3lgTprpW4+93XOx9jMxcVziPV7DvHxiA/48QQGnvvbPaey+dGIZjdB7+f9N/ynCcOgFJ6e8z4H7v2CzBfwK7dajryu5nk6k2A4Z7x3QpGUbGhbi2svhDu2KGdWE/Zvj/yGwFDnv59Tf853GmAau8z1W8bZsDb2Z47t3ee201cFd+9q9pDUREYlisdNeIiEgQCnkRkRimkBcRiWEKeRGRGKaQFxGJYQp5EZEYppAXEYlh/w/U1890TNGtXAAAAABJRU5ErkJggg==\n",
"text/plain": [
""
]
@@ -159,10 +125,14 @@
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
- "eval_x, eval_label = get_data(50)\n",
- "x1, y1 = eval_x.asnumpy(), eval_label.asnumpy()\n",
- "plt.scatter(x1, y1, color=\"red\", s=5)\n",
- "plt.title(\"Eval_data\")\n",
+ "eval_data = list(get_data(50))\n",
+ "x_target_label = np.array([-10, 10, 0.1])\n",
+ "y_target_label = x_target_label * 2 + 3\n",
+ "x_eval_label,y_eval_label = zip(*eval_data)\n",
+ "\n",
+ "plt.scatter(x_eval_label, y_eval_label, color=\"red\", s=5)\n",
+ "plt.plot(x_target_label, y_target_label, color=\"green\")\n",
+ "plt.title(\"Eval data\")\n",
"plt.show()"
]
},
@@ -170,500 +140,426 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 定义前向传播网络"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 初始化网络模型"
+ "上图中绿色线条部分为目标函数,红点部分为验证数据`eval_data`。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "使用`nn.Dense`定义了网络模型,即为线性模型,\n",
+ "### 定义数据增强函数\n",
"\n",
- "$$y=wx+b\\tag{1}$$\n",
+ "使用MindSpore的数据增强函数,将数据进行增强操作,操作解释如下:\n",
"\n",
- "其中,权重值$w$对应`weight`,$b$对应`bias`,并将其打印出来。"
+ "- `ds.GeneratorDataset`:将生成的数据转换为MindSpore的数据集,并且将生成的数据的x,y值存入到`data`和`label`的数组中。\n",
+ "- `batch`:将`batch_size`个数据组合成一个batch。\n",
+ "- `repeat`:将数据集数量倍增。"
]
},
{
"cell_type": "code",
"execution_count": 4,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "weight: -0.00034249047 bias: -0.019308656\n"
- ]
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.390782Z",
+ "start_time": "2020-09-14T10:38:40.356644Z"
}
- ],
+ },
+ "outputs": [],
"source": [
- "from mindspore.common.initializer import TruncatedNormal\n",
- "from mindspore import nn\n",
+ "from mindspore import dataset as ds\n",
"\n",
- "net = nn.Dense(1,1,TruncatedNormal(0.02),TruncatedNormal(0.02))\n",
- "print(\"weight:\", net.weight.set_data([0][0]), \"bias:\", net.bias.set_data([0]))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 查看初始化的网络模型"
+ "def create_dataset(num_data, batch_size=16, repeat_size=1):\n",
+ " input_data = ds.GeneratorDataset(list(get_data(num_data)), column_names=['data','label'])\n",
+ " input_data = input_data.batch(batch_size)\n",
+ " input_data = input_data.repeat(repeat_size)\n",
+ " return input_data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "我们将验证数据集和初始化的模型函数可视化。"
+ "使用数据集增强函数生成训练数据,并查看训练数据的格式。"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
- "scrolled": true
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.435708Z",
+ "start_time": "2020-09-14T10:38:40.391790Z"
+ }
},
"outputs": [
{
- "data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXkAAAEICAYAAAC6fYRZAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAaVUlEQVR4nO3df7QcZZ3n8feHIMiIC8F7EwJJDDj4I7pnUdsoYCACg8g4A3hGN8weZdXdLI7sGXdnzi4uZx1Gd+aoM4575owjG0eEmREVUSSDKARIhFF+3SCEhMgQFOWSeHNDVBh12b253/2jqjeVtvre/lHVP6o/r3P6dHdVdT1PV/f99nO/9TxPKSIwM7NqOqTfFTAzs/I4yJuZVZiDvJlZhTnIm5lVmIO8mVmFOcibmVWYg7wNFUmbJf27FrddI2my7DqVQdITks7udz1s+DnIWynSIPVLSf+cuf1Vv+vVjKR/K+kf+12Pskm6WtL/6Hc9rHcO7XcFrNJ+KyJu63clzEaZW/LWU5IOl/RTSa/KLBtPW/2LJC2UdJOkaUk/SR8vbXHfR6Qt1Z9IegR4XcP6yyQ9LulZSY9IujBd/grgSuCU9D+On6bLf1PSdyU9I+lJSVfMUfac9U7TTB+R9O20/FsljWXWv1PSDyU9Lenyed7n1ZI+Jenr6b7ulfSSzPqXS9ooaZ+kRyW9I12+Dvg3wH9J3+c/tHJcbbg5yFtPRcRzwFeBizKL3wF8KyL2kHwnPwe8GFgO/BJoNc3zR8BL0tubgYsb1j8OrAaOAv4Y+HtJSyJiB3AJcHdEHBkRR6fb/xx4F3A08JvA+yRd0KTsVur9u8C7gUXAYcAfAkhaCXwaeCdwHPAiYL4ftovS97AQ2An8SbqvFwAbgWvTci4C/lrSKyNiPfB54OPp+/ytecqwCnCQtzJ9LW2112//Pl1+LQcH+d9NlxERT0fEVyLiFxHxLEnwOqPF8t4B/ElE7IuIJ4G/zK6MiC9HxK6ImI2ILwGPAaua7SwiNkfEw+n2W4EvNKtLi/X+XET8U0T8ErgOODld/jvATRFxZ/oj+N+B2Xne61cj4r6ImCEJ3PV9vRV4IiI+FxEzEfEA8JW0DBtBzslbmS5okpO/AzhC0uuBH5MEqBsAJP0a8EngXJJWKsALJS2IiP3zlHcc8GTm+Q+zKyW9C/jPwIp00ZHAGE2k9fso8CqSlvfhwJebbNtKvX+ceckv0vJ/pd4R8XNJTzer1zz7ejHw+nrKKXUo8Hfz7M8qyi1567mImCVpyV5E0oq/KW39AvwB8DLg9RHxL4DT0+VqYde7gWWZ58vrDyS9GPgMcCnwojQlsy2z37zpWK8FNgDLIuIokrx9s3oUVu/0B+NFLbwuz5Mkqa+jM7cjI+J96XpPOztiHOStX64F/jXJicBrM8tfSJLP/qmkY0jy7K26DvhgehJ0KfAfM+teQBLgpgEkvZukhV43BSyVdFhDXfZFxP+WtIrkB6mZbup9PfBWSW9My/8wnf9t3gS8ND2R+7z09rr05DIk7/PEDvdtQ8hB3sr0Dw395G+or4iIe0lObB4HfCPzmv8JHAHsBe4BvtlGeX9MkqL5AXArmRRFRDwCfAK4myTQ/Uvg25nX3gFsB34saW+67PeAD0t6FvgQyY9IMx3XOyK2A+8n+bHbDfwE6GgQV/of0TnAWmAXSVrnYySpJoDPAivTcyRf66QMGy7yRUPMzKrLLXkzswpzkDczqzAHeTOzCnOQNzOrsIEaDDU2NhYrVqzodzXMzIbKli1b9kbEeN66gQryK1asYGJiot/VMDMbKpJ+2Gyd0zVmZhXmIG9mVmEO8mZmFeYgb2ZWYQ7yZmYV5iBvZlZhDvJmZkWZnYWpKRigiR8d5M3MijA7C296EyxdCmvWJM8HgIO8mVkRpqfhO9+BmZnkfnq63zUCHOTNzIqxaBGceiocemhyv2hRv2sEDNi0BmZmQ0uCTZuSFvyiRcnzAeAgb2ZWlEMOgcWL+12Lg3SdrpG0TNImSTskbZf0++nyYyRtlPRYer+w++qamVk7isjJzwB/EBGvAN4AvF/SSuAy4PaIOAm4PX1uZmY91HWQj4jdEfFA+vhZYAdwPHA+cE262TXABd2WZWZm7Sm0d42kFcCrgXuBxRGxG5IfAiD3VLOkdZImJE1MD0iXIzOzqigsyEs6EvgK8IGIeKbV10XE+oioRURtfDz3wiZmZtahQoK8pOeRBPjPR8RX08VTkpak65cAe4ooy8zMWldE7xoBnwV2RMRfZFZtAC5OH18M3NhtWWZm1p4i+smfBrwTeFjSg+my/wZ8FLhO0nuBHwFvL6AsMzNrQ9dBPiL+EWg2tOusbvdvZmad89w1ZmYV5iBvZtXWyhzvAzgPfFEc5M2sumZm4LTT4Pjjm8/xPqDzwBfFQd7Mqml2Fk4/He65B/bvh29/O3+O9wGdB74oDvJmVk3T03D//Qeev+51+XO8D+g88EVxkDezasoG71NOSVryeXO81+eBn5yEzZsHZh74ong+eTOrpnYu4jGA88AXxUHezKqrwsG7VU7XmNloqHA3ybk4yJtZ9VW8m+RcHOTNrPoq3k1yLg7yZlZ9Fe8mORefeDWz4TY7O38PmnZ62lSMW/JmNrzaybXXe9qMUIAHB3kzGwSt9nxp3G6Ec+2tcpA3s/5qtTWet90I59pbpRigPqO1Wi0mJib6XQ0z66WpqSRwz8wkwXpyMn8AU7PtWsnJV5ykLRFRy1tX1IW8r5K0R9K2zLIrJD0l6cH0dl4RZZlZxbTaGh8bg1oNFiw4eLsRzbW3qqjeNVcDfwX8bcPyT0bEnxdUhplVUSs9X2Zn4cwzYWICVq2CO+5wUG9RIS35iLgT2FfEvsxsBM3XGs+eYL3/fti7t7f1G2Jln3i9VNLWNJ2zMG8DSeskTUiamPaZcTPL4xOsHSszyH8aeAlwMrAb+ETeRhGxPiJqEVEbHx8vsTpmNrQqPud7mUoL8hExFRH7I2IW+AywqqyyzGwE+ARrR0oL8pKWZJ5eCGxrtq2ZmZWjkN41kr4ArAHGJE0CfwSskXQyEMATwH8ooiwzM2tdIUE+Ii7KWfzZIvZtZmad87QGZmYV5iBvZlZhDvJmZhXmIG9mVmEO8mZmFeYgb2bla/WiIFY4B3kzK1c7l+izwjnIm1m5fIm+vnKQN7NyeQbJvirqoiFmZvlauSiIlcZB3szKV59B0nrO6RozswpzkDczqzAHeTOzCnOQNzOrMAd5s1HlUagjwUHebBR5FOrIKCTIS7pK0h5J2zLLjpG0UdJj6f3CIsoyswJ4FOrIKKolfzVwbsOyy4DbI+Ik4Pb0uZkNAo9CHRmFBPmIuBPY17D4fOCa9PE1wAVFlGVmHcjm32dnYc8euOMOmJyEzZs9CrXCyszJL46I3QDpfW5TQdI6SROSJqb9L6NZ8bL59zPOOPD4zDNhfNwBvuL6fuI1ItZHRC0iauPj4/2ujln1NObfnYsfKWUG+SlJSwDS+z0llmVmzTTm31v
NxbuLZSWUOUHZBuBi4KPp/Y0llmVmzTTOAhkx/4yQ9RTPd76T/Bhs2pRMMmZDp6gulF8A7gZeJmlS0ntJgvtvSHoM+I30uZn1SrYlXp8FUjr4cTPuYlkZhbTkI+KiJqvOKmL/Ztamblvi9RRP/fXuYjm0PJ+82TCbnc1PveS1xNuZz90X+qgMJ9nMhlWzqQlmZ5MUTbeDnVpJ69jAc5A3G1Z5rfV64F+2LAn0P/qRBzuNOAd5s2GVNzVBNvDffXfSGneAH2nOyZsNq3refGrqQCDv5oRps/y+DTW35M2G3dq1SXpmzZokRbNpU/tz0njq4cpykDcbZlNTcNddSXrmrruS552cMHW/+MpykDcbZtKBaQciOk+zeOrhynKQNxsmjfPJLF4Mq1fDggXJfTt94bPq+X1PPVw5DvJmwyIvby4lQfmpp+Bb3+ouOLtffCU5yJsNi2Z5cwdnm4ODvNmwcN7cOuB+8mbDwvPJWAfckjcbFK1cpMOpGWuTg7zZIPBgJCuJg7zZIPBgJCuJg7zZIPBJVStJ6SdeJT0BPAvsB2YiolZ2mWYDLW8iMJ9UtZL0qiX/pog42QHeRt5cuXefVLUSOF1j1o1WesRkOfduPdaLIB/ArZK2SFrXg/LMeqOTHjHOvVuP9WIw1GkRsUvSImCjpO9FxJ31lWngXwewfPnyHlTHrCCdXCzbuXfrsdJb8hGxK73fA9wArGpYvz4iahFRGx8fL7s6ZsXptFXu3Lv1UKkteUkvAA6JiGfTx+cAHy6zTLOecavchkDZ6ZrFwA1KvvyHAtdGxDdLLtOsd+qtcrMBVWqQj4jvA/+qzDLMzKw5d6E0M6swB3kzswpzkDczqzAHeTOzCnOQNzOrMAd5M7MKc5A3M6swB3kzswpzkDczqzAHeTOzCnOQNzOrMAd5Gw3tXsHJrCIc5K3aZmdh9+7kyk3tXMHJrCJ6cWUos/6oX56vfvUmaP0KTmYV4SBv1ZW9PJ+UzP3u66raiHG6xqore3m+1athchI2b/YVnGykuCVv1eXL85k5yFvF+fJ8NuJKT9dIOlfSo5J2Srqs7PLMzOyAUoO8pAXAp4C3ACuBiyStLLNMMzM7oOx0zSpgZ3pBbyR9ETgfeKTIQrZtg7Vri9xj5wYl7et6HMz1OJjrcbBBqMc558Cf/mnx+y07yB8PPJl5Pgm8PruBpHXAOoDly5d3VMjznw8vf3mHNSzQoAymdD0O5noczPU42KDU46ijytlv2UE+7/fxoEMaEeuB9QC1Wq2jw/3rvw7XX9/JK23gzM66N4xZgco+8ToJLMs8XwrsKrlMG1b1EaqefsCsMGUH+fuBkySdIOkwYC2woeQybVhlR6jWpx8ws66UGuQjYga4FLgF2AFcFxHbyyzThlh2hOqpp8LYmGeONOtS6YOhIuJm4Oayy7EKyI5QHRuDM89MWvSnnposP8SzcJi1y381NljqI1T37nXqxqwADvI2mBpTN5450qwjnrvGBpMnFzMrhIO8DS5PLmbWNadrzMwqzEHezKzCHOStOLOz7tduNmAc5K0Y2SkJzjgDdu92sDcbAA7yVozslAR33QXLl3v+GbMB4CBvxaj3a1+wIOnu6EFMZgPBQd6KUe/XPjkJq1d7EJPZgHA/eSvOIYfAscd6EJPZAHFL3opXH8TUToB3zxyzUjjIW//5YiFmpXGQtwMaW9NFta7n248vFmJWGgd5SzS2pmdmimldt9JK94yTZqVRDFAOtFarxcTERL+rMZqmppJAPDOTBNvvfhde/eoDzycnO5ssrHG/zfbjC3ibdUzSloio5a0rrSUv6QpJT0l6ML2dV1ZZVoDG1vTKlcW0rlttpXdystbM5lV2F8pPRsSfl1yGFSFv/vYiukJ6XnizvnI/eTugcf72ouZz97zwZn1T9onXSyVtlXSVpIV5G0haJ2lC0sS0e1UMF/dtNxt4XQV5SbdJ2pZzOx/4NPAS4GRgN/CJvH1ExPqIqEVEbXx8vJvqWC+5b7vZUOgqXRMRZ7eynaTPADd1U5YNmLy+7U7JmA2cMnvXLMk8vRDYVlZZ1qW8tMt8qRj3bTcbCmXm5D8u6WFJW4E3Af+pxLKsU3lpl1ZSMdlZJzdvdq8ZswHlwVCjLm+wErQ2gMnMBkJfBkPZkMhLuzgVY1YZ7ic/6poNVvIAJrNKcEu+6lrpy543pYCnGTCrBAf5KnNfdrOR5yBfZXPN0+7RqmYjwUG+ypqdQHUL32xk+MRrlTU7qerRqmYjwy35qss7geoukmYjwy35UeQ53s1GhoP8qPIc72YjwekaM7MKc5A3M6swB3kzswpzkDczqzAH+UHhEahmVgIH+UHgEahmVhIH+UEw1xwzZmZd6CrIS3q7pO2SZiXVGtZ9UNJOSY9KenN31aw4j0A1s5J0OxhqG/A24H9lF0paCawFXgkcB9wm6aURsb/L8qrJI1DNrCRdteQjYkdEPJqz6nzgixHxXET8ANgJrOqmrMrzRTrMrARl5eSPB57MPJ9Ml/0KSeskTUiamHYu2sysUPOmayTdBhybs+ryiLix2ctyluX2DYyI9cB6gFqt5v6DZmYFmjfIR8TZHex3EliWeb4U2NXBfszMrAtlpWs2AGslHS7pBOAk4L6SyqomD44yswJ024XyQkmTwCnA1yXdAhAR24HrgEeAbwLvd8+aNnhwlJkVRDFALcVarRYTExP9rkb/TU0lAX5mJuk7Pznpud/NrClJWyKilrfOI14HkQdHmVlBfGWoQeTBUWZWEAf5QeXL85lZAZyu6TX3mjGzHnKQL1NjQHevGTPrMQf5suQFdE8pbGY95iBflryA7l4zZtZjDvJlyQvo9V4zk5OwebN7zZhZ6dy7pizNukG614yZ9ZCDfJkc0M2sz5yuKYK7RZrZgHKQ79bMDLzxje4WaWYDyUG+G7OzsHo13H33gV40jzziFr2ZDQwH+W5MT8P99x94fsQRcPLJbtGb2cBwkO/GokVw2mmwYAG89rXw85/D/v0e6GRmA8NBvhv1bpJPPQX33Zfk5j3QycwGiLtQdivbTdLTA5vZgOn28n9vl7Rd0qykWmb5Ckm/lPRgeruy+6qWqKgukPWA7wBvZgOi23TNNuBtwJ056x6PiJPT2yVdllMezwxpZhXWVZCPiB0R8WhRlemZbMvdM0OaWYWVeeL1BEnflfQtSaubbSRpnaQJSRPTvQiwjS33sTHPDGlmlTXviVdJtwHH5qy6PCJubPKy3cDyiHha0muBr0l6ZUQ807hhRKwH1gPUarXiRxHV53GvnwxtbLnv3esTpmZWWfO25CPi7Ih4Vc6tWYAnIp6LiKfTx1uAx4GXFlftFuXl2/OmAPYJUzOrqFK6UEoaB/ZFxH5JJwInAd8vo6w55eXbFy92y93MRka3XSgvlDQJnAJ8XdIt6arTga2SHgKuBy6JiH3dVbUDza7E5Ja7mY0IxQBNplWr1WJiYqLYnTbm5M3MKkbSloio5a2r/rQG9VZ7hOd8N7ORU/0gDx7wZGYjazSCvAc8mdmIGo0g3+wErJlZxY3GLJT1KYF9At
bMRsxoBHk4eEpgM7MRMRrpGjOzEeUgb2ZWYQ7yZmYV5iBvZlZhDvJmZhXmIG9mVmEDNUGZpGngh13sYgzYW1B1iuR6tcf1at+g1s31ak+n9XpxRIznrRioIN8tSRPNZmLrJ9erPa5X+wa1bq5Xe8qol9M1ZmYV5iBvZlZhVQvy6/tdgSZcr/a4Xu0b1Lq5Xu0pvF6VysmbmdnBqtaSNzOzDAd5M7MKG6ogL+ntkrZLmpVUa1j3QUk7JT0q6c1NXn+CpHslPSbpS5IOK6meX5L0YHp7QtKDTbZ7QtLD6XYFX8E8t7wrJD2Vqdt5TbY7Nz2OOyVd1oN6/Zmk70naKukGSUc32a4nx2u+9y/p8PQz3pl+n1aUVZdMmcskbZK0I/0b+P2cbdZI+lnm8/1Q2fXKlD3nZ6PEX6bHbKuk1/SgTi/LHIsHJT0j6QMN2/TkmEm6StIeSdsyy46RtDGNRxslLWzy2ovTbR6TdHHbhUfE0NyAVwAvAzYDtczylcBDwOHACcDjwIKc118HrE0fXwm8rwd1/gTwoSbrngDGenj8rgD+cJ5tFqTH70TgsPS4riy5XucAh6aPPwZ8rF/Hq5X3D/wecGX6eC3wpR58dkuA16SPXwj8U0691gA39er71M5nA5wHfAMQ8Abg3h7XbwHwY5JBQz0/ZsDpwGuAbZllHwcuSx9flve9B44Bvp/eL0wfL2yn7KFqyUfEjoh4NGfV+cAXI+K5iPgBsBNYld1AkoAzgevTRdcAF5RZ37TMdwBfKLOcgq0CdkbE9yPi/wBfJDm+pYmIWyNiJn16D7C0zPLm0cr7P5/k+wPJ9+ms9LMuTUTsjogH0sfPAjuA48sss2DnA38biXuAoyUt6WH5ZwGPR0Q3I+o7FhF3AvsaFme/R83i0ZuBjRGxLyJ+AmwEzm2n7KEK8nM4Hngy83ySX/0DeBHw00wwydumaKuBqYh4rMn6AG6VtEXSupLrUndp+u/yVU3+PWzlWJbpPSQtvjy9OF6tvP//v036ffoZyferJ9L00KuBe3NWnyLpIUnfkPTKXtWJ+T+bfn+v1tK8sdWvY7Y4InZD8iMO5F18uuvjNnCX/5N0G3BszqrLI+LGZi/LWdbYN7SVbVrWYj0vYu5W/GkRsUvSImCjpO+lv/gdm6tewKeBj5C874+QpJLe07iLnNd23c+2leMl6XJgBvh8k90UfrzyqpqzrNTvUjskHQl8BfhARDzTsPoBknTEP6fnW74GnNSLejH/Z9PPY3YY8NvAB3NW9/OYtaLr4zZwQT4izu7gZZPAsszzpcCuhm32kvyLeGja+srbpmXz1VPSocDbgNfOsY9d6f0eSTeQpAq6ClqtHj9JnwFuylnVyrEsvF7pCaW3AmdFmozM2UfhxytHK++/vs1k+jkfxa/+K144Sc8jCfCfj4ivNq7PBv2IuFnSX0sai4jSJ+Jq4bMp5XvVorcAD0TEVOOKfh4zYErSkojYnaau9uRsM0ly3qBuKck5yZZVJV2zAVib9no4geSX+L7sBmng2AT8TrroYqDZfwZFOBv4XkRM5q2U9AJJL6w/Jjn5uC1v26I05EAvbFLe/cBJSnoiHUbyb+6Gkut1LvBfgd+OiF802aZXx6uV97+B5PsDyffpjmY/TEVJc/6fBXZExF802ebY+rkBSatI/r6fLrNeaVmtfDYbgHelvWzeAPysnqrogab/UffrmKWy36Nm8egW4BxJC9P06jnpstaVfVa5yBtJYJoEngOmgFsy6y4n6RXxKPCWzPKbgePSxyeSBP+dwJeBw0us69XAJQ3LjgNuztTlofS2nSRtUfbx+zvgYWBr+gVb0liv9Pl5JL03Hu9RvXaS5B0fTG9XNtarl8cr7/0DHyb5EQJ4fvr92Zl+n07swTF6I8m/6Vszx+k84JL69wy4ND02D5GcwD617HrN9dk01E3Ap9Jj+jCZ3nEl1+3XSIL2UZllPT9mJD8yu4H/m8aw95Kcx7kdeCy9Pybdtgb8Tea170m/azuBd7dbtqc1MDOrsKqka8zMLIeDvJlZhTnIm5lVmIO8mVmFOcibmVWYg7yZWYU5yJuZVdj/A9pJHOWc/ZqnAAAAAElFTkSuQmCC\n",
- "text/plain": [
- ""
- ]
- },
- "metadata": {
- "needs_background": "light"
- },
- "output_type": "display_data"
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "The dataset size of ds_train: 100\n",
+ "dict_keys(['data', 'label'])\n",
+ "The x label value shape: (16, 1)\n",
+ "The y label value shape: (16, 1)\n"
+ ]
}
],
"source": [
- "x = np.arange(-10, 10, 0.1)\n",
- "y = x * (net.weight.set_data([0][0]).asnumpy()) + (net.bias.set_data([0]).asnumpy())\n",
- "plt.scatter(x1, y1, color=\"red\", s=5)\n",
- "plt.plot(x, y, \"blue\")\n",
- "plt.title(\"Eval data and net\")\n",
- "plt.show()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "红色的点:为之前生成的50组验证数据集。\n",
+ "num_data = 1600\n",
+ "batch_size = 16\n",
+ "repeat_size = 1\n",
"\n",
- "蓝色的线:初始化的模型网络。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 定义损失函数"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "我们的网络模型表达式为:\n",
+ "ds_train = create_dataset(num_data, batch_size=batch_size, repeat_size=repeat_size) \n",
+ "print(\"The dataset size of ds_train:\", ds_train.get_dataset_size())\n",
+ "dict_datasets = ds_train.create_dict_iterator().get_next()\n",
"\n",
- "$$h(x)=wx+b\\tag{2}$$"
+ "print(dict_datasets.keys())\n",
+ "print(\"The x label value shape:\", dict_datasets[\"data\"].shape)\n",
+ "print(\"The y label value shape:\", dict_datasets[\"label\"].shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "一般地,数学上对线性回归模型采用均方差的方式来判断模型是否拟合得很好,即均方差的值$J(w)$值越小,函数模型便拟合得越好,验证数据代入后,预测得到的y值就越准确。公式2对应m个数据的均方差公式为:"
+ "通过定义的`create_dataset`将生成的1600个数据增强为了100组shape为16x1的数据集。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "$$J(w)=\\frac{1}{m}\\sum_{i=1}^m(h(x_i)-y^{(i)})^2\\tag{3}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "为了方便后续的计算,我们采用0.5倍的均方差的表达式来进行计算,均方差值整体缩小至0.5倍的计算方式对判断模型拟合的好坏没有影响。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "$$J(w)=\\frac{1}{2m}\\sum_{i=1}^m(h(x_i)-y^{(i)})^2\\tag{4}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "公式4即为网络训练中的损失函数,其中参数:\n",
- "\n",
- "- $J(w)$为均方差。\n",
+ "## 定义训练网络\n",
"\n",
- "- $m$为样本数据的数量。\n",
+ "在MindSpore中使用`nn.Dense`生成单个数据输入,单个数据输出的线性函数模型:\n",
"\n",
- "- $h(x_i)$为第$i$个数据的$x_i$值代入模型网络(公式2)后的预测值。\n",
+ "$$f(x)=wx+b\\tag{1}$$\n",
"\n",
- "- $y^{(i)}$为第$i$个数据中的$y$值(label值)。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "在MindSpore中定义损失函数的方法如下。"
+ "并使用Normal算子随机初始化权重$w$和$b$。"
]
},
{
"cell_type": "code",
"execution_count": 6,
- "metadata": {},
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.441532Z",
+ "start_time": "2020-09-14T10:38:40.436780Z"
+ }
+ },
"outputs": [],
"source": [
- "from mindspore.ops import operations as P\n",
- "\n",
- "class MyLoss(nn.loss.loss._Loss):\n",
- " def __init__(self,reduction='mean'):\n",
- " super().__init__(reduction)\n",
- " self.square = P.Square()\n",
- " def construct(self, data, label):\n",
- " x = self.square(data-label) * 0.5\n",
- " return self.get_loss(x)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "其中:\n",
- "\n",
- "- `nn.loss.loss._Loss`:是MindSpore自定义loss算子的一个基类。\n",
+ "from mindspore.common.initializer import Normal\n",
+ "from mindspore import nn\n",
"\n",
- "- `P.Square`:MindSpore训练的框架中的平方算子,算子需要注册过才能在框架的计算图中使用。"
+ "class LinearNet(nn.Cell):\n",
+ " def __init__(self):\n",
+ " super(LinearNet, self).__init__()\n",
+ " self.fc = nn.Dense(1, 1, Normal(0.02), Normal(0.02))\n",
+ " \n",
+ " def construct(self, x):\n",
+ " x = self.fc(x)\n",
+ " return x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "### 损失函数与网络结合\n",
- "\n",
- "接下来我们需要将loss函数的表达式和网络net关联在一起,在MindSpore中需要`nn.WithLossCell`,实现方法如下:"
+ "调用网络查看初始化的模型参数。"
]
},
{
"cell_type": "code",
"execution_count": 7,
- "metadata": {},
- "outputs": [],
- "source": [
- "criterion = MyLoss()\n",
- "loss_opeartion = nn.WithLossCell(net, criterion) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.456400Z",
+ "start_time": "2020-09-14T10:38:40.442544Z"
+ },
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[Parameter (name=fc.weight, value=Tensor(shape=[1, 1], dtype=Float32,\n",
+ "[[3.68014202e-002]])), Parameter (name=fc.bias, value=Tensor(shape=[1], dtype=Float32, [3.68014202e-002]))]\n"
+ ]
+ }
+ ],
"source": [
- "其中:\n",
- "\n",
- "- `net`:网络模型。\n",
- "\n",
- "- `criterion`:即为实例化的loss函数。"
+ "net = LinearNet()\n",
+ "model_params = net.trainable_params()\n",
+ "print(model_params)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "上述从数据代入到计算出loss值的过程为AI训练中的前向传播过程。"
+ "初始化网络模型后,接下来将初始化的网络函数和训练数据集进行可视化,了解拟合前的模型函数情况。"
]
},
{
- "cell_type": "markdown",
- "metadata": {},
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.607733Z",
+ "start_time": "2020-09-14T10:38:40.457985Z"
+ },
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYQAAAD8CAYAAAB3u9PLAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nO3dd3hU1dbH8e9KQpGiICCGJqKIgAhoriKoFEUQewevXS/iFaXYQFRAsCEi2FAQCza4gooionSQooJ0UfqLtFClt2T2+8dMhhAmdWYyM8nv8zx5mNOXJ+NZ2fvsYs45RERE4iIdgIiIRAclBBERAZQQRETERwlBREQAJQQREfFRQhARESAECcHMqprZFDNbZmZLzayTb30vM9tgZgt8P22CD1dERMLFgu2HYGaJQKJz7nczKw3MA64HbgX2Ouf6Bx+miIiEW0KwJ3DObQI2+T7vMbNlQOVgzysiIvkr6BLCMSczqw5MB84BugL3ALuBucBjzrmdAY5pD7QHKFmy5Plnn312yOIRESmQ/voL9u6FUqWgVi3mzZu3zTlXIdjThiwhmFkpYBrwgnPuKzOrCGwDHNAHb7XSfVmdIykpyc2dOzck8YiIFEjJyVClCqSkQEICrF+PnXrqPOdcUrCnDkkrIzMrAowGPnPOfQXgnEt2zqU65zzAUOCCUFxLRKRQO+UUaNzYmwwaN/Yuh0jQ7xDMzIBhwDLn3IB06xN97xcAbgCWBHstEZFCzwymTIGtW73JwCxkpw46IQBNgDuBxWa2wLfuaaCdmTXAW2W0FngwBNcSEZG4OKhYMeSnDUUro5+BQClqXLDnFhGR/KOeyiIiAighiIiIjxKCiIgASggiIjFr54GdVB9YPWTnC0UrIxERyUfOOdqNbsfIpSNDel6VEEREYoXHw0fTBhH3fJw/GfRs2jNkp1cJQUQkBvyRvIS679bzLzc8tSFzHphD0fii9KZ3SK6hhCAiEsX2Hd5H7bdr8/fuv/3r1rwZT/WlP0B80ZBeS1VGIiJRquO4jpR6qZQ/GXy9qA6ubwLV6zY5OoaRx0OREP1xrxKCiEiUGfPnGK4feb1/ueO/OvJmmzfB4zl2DCOPB5o3px6cG4rrKiGIiESJtf+s5fRBp/uXq55YlWUPL6Nk0ZLeFRnHMNq6FWbNwgIPH5RrSggiIhF2OPUwjd5vxPzN8/3rljy0hLqn1M36QN9Q2G769JBMbKN3CCIikeDxQHIyvaf2oljfYv5k8OF1H+J6uuyTAfiHwl4Mi0IRkkoIIiL5zeNhyg0NaXHe0ef4bXVv44ubvsByO79BXBxHICUUYSkhiIjko+S9yZz62qlwnne5WApsemAZZU+L/HzyQVcZmVlVM5tiZsvMbKmZdfKtP9nMJpjZCt+/ZYMPV0QkNqV6Umn9aWtvMvCZMxQOvmCUvetBbxVShIXiHUIK8JhzrjbQCHjYzOoA3YBJzrmawCTfsohIofPWr2+R0CeBH1f9CMCAKwbg2m/kwuQEcA5mzfK2GIqwUMyYtgnY5Pu8x8yWAZWB64Bmvt0+BqYCTwV7PRGRiMjYByAH5m2cR9LQJP9y8+rN+enOn0iI8yWCxo29yaBx46MdzSIopO8QzKw60BD4BajoSxY45zaZWeT/a0VE8sLXAcz/8J4yxdsnIBO7Du6i6utV2XN4j3/dxq4bSSydeHQnXwuh3CaZcApZs1MzKwWMBjo753bn4rj2ZjbXzOZujYIik4jIcXwdwEhJybJ6xznHHaP/TZlXyviTwYQ7J+B6umOTQZq0jmZRkAwgRAnBzIrgTQafOee+8q1ONrNE3/ZEYEugY51zQ5xzSc65pAoVKoQiHBGR0PJ1ACMh4fjqHV9/gs8WfUrc83F8tuRzAJ5eWxX3bCqX17g8QkHnXtBVRuZtNDsMWOacG5Bu07fA3cDLvn/HBHstEZGIyKx6x+Phz6supHajuf5d6yXDb0OgmG2Cl7ceO9RElAtFCaEJcCfQwswW+H7a4E0ELc1sBdDStywiEpsyVO/sP7KfGgNPPyYZrGw7m0VLL6WYBShJxIBQtDL6mcwHVros2POLiESbLuO7MPCXgf7lUaPiuOnki+G5C6PuRXFuqKeyiAjkqFnp2OVjueaLa/zLD57/IIOvfBvrsO3ocWYxVU2UnhKCiEg2zUrX7VrHaQNP8y9XLFmRlY+upFTRUr4VsZkAMlJCEBEJ1Ky0YkWOpB7h4g8v5tcNv/p3XdhhIedWTDcfTR46rEUrDX8tIhKgWekL01+gaN+i/mQw9JqhuJ7u+GTQvDlUqQLNmkXFeETBUAlBRCRds9IZB/7i0ueP/q18w9k3MOrWUcRZgL+fMylZxColBBERYOuB7Zzy7tGRSOMsjuTHkylfonzmB6WVLKJoPKJgKCGISMGQx7p8j/Nw7RfX8v2K7/3rZt43k8ZVG2d/cBSORxQMvUMQkdiXx7r8d+e+S/zz8f5k8PKFPXDPeXKWDNJE2XhEwVAJQURiXy7r8udvms95Q87zL19S7RImfwwJfV+BxjOyHc20oFJCEJHYl8O6/N2HdlN9YHV2HtzpX/d3l7+pcqAItK9SYF4O51XhS4EiUvCk1eWvXw9Tpx5XfeOc494x93LSyyf5k8EP//4B19NR5cQqWY9mWoiohCAiBUNaXX4GI5aMoN3odv7lJxo/Qb+W/Y7dqYC9HM4rJQQRKZBWbF/BWW+d5V+uVa4WCzosoHhC8cAHZJJQChMlBBEpUA4cOUD9d+uzYscK/7rlHZdTs1zNCEYVG/QOQUQKjCcnPEmJF0v4k8GIm0bgejolgxxSCUFEYt74leO58rMr/cv3NriXYdcOwwrpu4C8CklCMLMPgKuBLc65c3zregH/AdJmo37aOTcuFNcTEQHYsHsDVV6v4l8++YSTWdNpDScWOzGCUcWuUJUQPgLeAoZnWP+6c65/iK4hIgJAiieFZh81Y+bfM/3rfn9gLg0TqkDR0hGMLLaF5B2Cc246sCMU5xIRyUq/mf0o0qeIPxkMvmow7tlUGt7etcAMQx0p4X6H0NHM7gLmAo8553Zm3MHM2gPtAapVqxbmcEQkVs1cN5OLP7zYv3z1WVczpu0Y77DUyckFahjqSAlnK6PBwBlAA2AT8FqgnZxzQ5xzSc65pAoVKoQxHBGJRdv3byeud9wxyWDL41v4rt13R+coUE/jkAhbCcE5l5z22cyGAmPDdS0RKXg8zsNN/7uJb/78xr9u2j3TuPS0S4/fWT2NQyJsCcHMEp1zm3yLNwBLwnUtESlYhs4bSvux7f3LL7R4gacveTrrg9TTOGihanb6BdAMKG9m64GeQDMzawA4YC3wYCiuJSIF16JNC6g/pKF/+cLKFzLj3hkUiS8SwagKj5AkBOdcuwCrh4Xi3CJS8O05tIcz3zyTLfu2+Nf93+2/Uu3MJFX/5CMNXSEiOefxeFv0OBeS45xztP+uPSe+fKI/GXz3ObheUK1OYzUhzWdKCCKSM3mcpjKz40b9MYq45+MY+vtQADpf2Ak
3+VKuXhXvLRWkb0Iq+UJjGYlIzuRymsrMjlu18jfO/KKRf/MZZc9g8UOLOaHICXCFB7Zsgdtuy3b2Mwk9lRBEJGfy2tbfd9yhYvGc07noMclg2cPLWPnoSm8yAG9LoVNPzXL2MwkflRBEJGfy2tbfjB7PNuHFmdOB/QB8csMn3HHuHZkfoyakEaGEICI5l8sH9YRVE7ji0yv8y3ecewfDrx+uYamjlBKCiITcxj0bqTygsn+5dNHSrOuyjjLFy0QwKsmOEoKIhEyKJ4WWn7Rk6tqp/nW//ec3kiolRS4oyTG9VBaRkBgwewBF+hTxJ4M3Wr+B6+mUDGKISggi4u0bkMeB4X5Z/wuNhh1tOdTqjFZ8f/v3xMfFhzpKCTMlBJHCLq3jWFq7/ylTvC+Ps7HjwA4SX0vkcOph/7rNj22mYim1DopVqjISKewCdTjLgnOOW7+8lXL9yvmTweS7JuN6OiWDGKeEIFLY5aLD2UcLPiLu+Ti+/ONLAHo17YXr6Wh+evP8ilbCSFVGIoVdDjqcLd2ylHMGn+NfPj/xfGbdP4ui8UXzM1IJMyUEEcm0w9m+w/uo9VYtNuzZ4F+3ptMaqpepno/BSX4JSZWRmX1gZlvMbEm6dSeb2QQzW+H7t2woriUi+aPjuI6UeqmUPxl8fdvXuJ5OyaAAC9U7hI+A1hnWdQMmOedqApN8yyIS5b758xust/H2b28D0PFfHXE9HdeffX2EI5NwC9WMadPNrHqG1dfhnVYT4GNgKvBUKK4nIqG3ZucaarxRw79c9cSqLHt4GSWLloxgVJKfwvkOoaJzbhOAc26TmQVsumBm7YH2ANWqVQtjOCISyOHUwzR6vxHzN8/3r1vy0BLqnlI38AGZdWILonObRIeINzt1zg1xziU555IqVKgQ6XBEYl8uprnsOaUnxfoW8yeDj677CNfTZZ0MAs2altfZ1CSqhLOEkGxmib7SQSKwJdsjRCQ4Oex1PGXNFFoMb+Ffvq3ubXxx0xfZD0ud2axpeZ1NTaJKOEsI3wJ3+z7fDYwJ47VEBAI/mNOVGDbv3Yz1Nn8yKJ5QnB1P7mDEzSNyNkdBZp3Y8jqbmkSVkJQQzOwLvC+Qy5vZeqAn8DLwPzO7H1gH3BKKa4lIFtIezGklhPLloXlzUmfP5MqHSjPh5H/8u865fw4XVrkwd+fPrBNbXmdTk6gSqlZG7TLZdFkozi8iOZTxwbxlC28e/plHe3gAbzIYcMUAulzUJe/XyGzWNE17GfPUU1mkoPE9mOdunMu/hv7L30OoxY4y/DRgK/Hx+t9eAtM3Q6SA+efgP1QZUIV9R/b51228cyGJp9dTVY5kSQlBpCDweHBbtvDvmV35YskX/tUT7pzA5TUuj2BgEkuUEERincfDJ+3qcFedv/yrelzSg74t+kYwKIlFSggiMWzZ1mXUeacO1PEu10uG3575P4pVUq9/yT0lBJEYtP/Ifs555xzW/LPGv27l2/GcUbsJJFaNYGQSyyI+dIWI5E7n8Z0p+WJJfzIYdcso3LOpnLF4A0ydqhfHkmcqIYjEiLHLx3LNF9f4lx88/0EGXzX4aA9j9QGQICkhiES5dbvWcdrA0/zLFUtWZOWjKylVtFQEo5KCSAlBJEodST1Ckw+a8NvG3/zrFt0yhXq1m6paSMJC7xBEolDf6X0p2reoPxkMvfo93ORLqVe/pYaXlrBRCUEkikxfM5Wmw5v7l2+sfSNf3vIlcVu2wqyHNby0hJUSgkgU2LJvCxX7H33Ax3sg+YktlCvlmzQq4yimGl5awkAJQSSCPM7D1Z9fzQ8rf/CvmzkMGm9KgP96IO29sYaXlnyghCASIYN/G8x/x/3Xv/zKpDieXFQa9u6FJgFKARpeWsIs7AnBzNYCe4BUIMU5lxTua4pEs/mb5nPekPP8y5ckNmJyx99IOJIKCftgwQKoW1elAMl3+VVCaO6c25ZP1xKJSrsO7uK0gaex69Au/7r1XdZTuXQl+KzZ0fcDSgYSIaoyEgkz5xz3jrmXjxd+7F83/t/jaXVmq6M76f2ARIH8SAgO+MnMHPCec25I+o1m1h5oD1CtmkZolIJlxJIRtBt9dIbZJxs/ySstXzl+R70fkCiQHwmhiXNuo5mdAkwwsz+dc9PTNvoSxBCApKQklw/xiITd8u3LqfVWLf9yrXK1WNBhAcUTikcwKpGshT0hOOc2+v7dYmZfAxcA07M+SiQ2HThygPrv1mfFjhX+dcs7LqdmuZoRjEokZ8I6dIWZlTSz0mmfgSuAJeG8pkiueTyQnAzO5Wx9Jp746QlKvFjCnwxG3DQC19MpGUjMCPdYRhWBn81sIfAr8L1zbnyYrymScx4PNG8OVaocO0ZQZusD+GHFD1hvo//s/gDc1+A+PM95uO2c28Ifv0gIhbXKyDm3GqgfzmuIBGXrVm9zz4xjBCUnw8yZkJqa6dhB63evp+rrR2cnK3dCOVZ3Ws2JxU7M7/8KkZDQaKdSuKWNEZSQcHSMII8H2rY9Wiq46KJjeg0fST1C42GNj0kG8x+cz7YntykZSExTPwQp3AKNEbRli7dU4Jw3Ubz9tn/3V35+hW6TuvmXB181mA5JHSIRuUjIKSGIZOwDkH5k0ZIloWFD1rU4n9Oa/Orf5ZqzruGbtt8QZypkS8GhhCCSUVqp4Y8/2J9Un36XeOh3gTcZxFkcmx/bTIWSFSIcpEjoKSGIpOfxwNatuAoVGOkW82SXIvxd/BC3JVfglRd+5bSy1SMdoUjYKCGIpPE1NZ23eiadbi7JzDK7aVi9IZ9d2JNL6l+rMYakwFNCEPHZ/H9L6XHyDD5s7qiwbzfvNx3APZc+SnxcfKRDE8kXeiMmhd6hlEO8OvNVzhrRhE/OhcfmGMvnNeb+pp2VDKRQUQlBCi3nHGOXj6XrT11ZuWMlV591Na9d/ipn/beshqGWQkkJQQqlpVuW0uXHLkxYPYHa5WsfPz+BSCGkhCCFyo4DO+g1tRfv/PYOpYuVZlDrQTyU9BBF4otEOjSRiFNCkEIhxZPCe3Pf47mpz/HPwX948PwHeb7585QvUT7SoYlEDSUEKfAmrZ5Ep/GdWLp1Kc2rN2dQ60HUq1gv0mGJRB0lBCmwVu1YxeMTHuebP7/h9DKn89WtX3H92ddjZv4OaHp5LHKUmp1KgbPn0B66TexGnXfqMGHVBF667CX+ePgPbqh9w9FkkMO5DkQKk7CXEMysNTAIiAfed869HO5rSuHkcR6GLxxO90nd2bx3M3fXv5sXL3uRSqUrHbtjZnMgSKHhnPfXf+SI9+fwYe9P+uVo3hYuYU0IZhYPvA20BNYDv5nZt865P8J5XSl8Zv09i07jOzF341wa7SrNmNHxXFB9DVx76vE7px/NNG0OhAhxLroeNLnZNyUlYrdNwiTcJYQLgJW+mdMwsxHAdYASQpRxzltzkprq/Un/OeNyVttys2/Q10jxsGPPCiac8DyL7XNKuUpctWcYZw/czggXz2d/F8Hzn4OkFiuR4TxGapWpeK45SGqR4qTeYjm6/syZkf4tSSglJECRIt6fokWP/Z
xxOa/bQnWejJ/jMlT2h+o1WLgTQmXg73TL64EL0+9gZu2B9gDVqlXDuQg/ZArpNXI4j3z0SDgAjfvDxS/7prnswYHZ3ZjoKckUO0icSyE+DuK/OYG4OIiPP/rjXTbi4zPbdvzntP8RjxwJw39KQv48SEL9IEtIOP7BJLEt3AkhUN465tHjnBsCDAEwS3Kx/gWLiyPLh0xWD53stqU9mII5ZzDXj4ZrxMU5vl01ih6Tu/D3vg3ctBRenRTP6X88AhVLeX8JnmKwdbdaEInkUrgTwnqgarrlKsDGzHZOTIT27WP3QRYXp+dPOM3fNJ9O4zsxY90M6leszyczytF03B/HvwfIOAOaiORIuBPCb0BNMzsd2AC0BW7PbOdKlaBXrzBHJDFny74tPDP5Gd7//X3KlSjHe1e/x/0N7ye+vakvgUgIhTUhOOdSzKwj8CPeZqcfOOeWhvOaEiNy0DHscOph3vr1LXpP683+I/vp3KgzzzV9jjLFyxzdSSUBkZAJez8E59w4YFy4ryMxJK1jWFqzzylTjnk76Zxj3IpxdP2pK8u3L6dNzTYMuGIAtcrXimDQIgWfhq6Q/JdFx7BlW5fR9aeujF85nlrlavH97d/TpmabCAcsUjjEeJseiUlpHcMSEvwvhHce2Enn8Z05991zmf33bAZcMYBFDy3KWTLweCA5OQbbzopEF5UQJP+ZeauJtm4ltXw5hs57j2cmP8OOAztof357+jTvQ4WSFXJ2rmyqn0Qk55QQJDLi4piy/w86D+3MouRFND2tKQNbD6TBqQ1ydx6NSyQSMvpTSvLdmp1ruOl/N9FieAt2HdzFqFtGMeXuKblPBhCw+klE8kYlBMk3ew/v5aUZL/Ha7NeIj4unb/O+dL2oKycUOSHvJ01X/aT+CCLBUUKQsPM4D58u+pRuE7uxae8m7jz3Tl667CUqn1g5NBdQz2SRkFBCkLD6Zf0vPDr+UX7d8CsXVL6Ar277ikZVGkU6LBEJQAlBwmLD7g10n9SdTxZ9QmKpRD6+/mPuOPcO4kyvrUSilRKC5FwOhps4mHKQAbMH8OKMFzniOUL3Jt3oXusBSlepofp9kSinP9fkqKw6eGUzD7FzjtF/jKb227XpMbkHrc5sxbKHlvJin1mUrnG25i4WiQFKCOKV3cTzgdr7+yzcvJAWw1tw85c3U7poaSbdNYnRt46mRkrpTI8RkeijhCBeWTzwgYDt/bfu20qHsR04b8h5LE5ezDtt3uH3B3+nxektMj1GRKKX3iGIV3YTz6dr73+kXFne/mUQvab2Yu/hvTxywSP0bNqTsieUzfQY9REQiX5KCOKVk4d3XBw/7P6drqO68ue2P2l1RisGtBpAnQp1Mj+v+giIxIywVRmZWS8z22BmC3w/GsM42qU9vAMkg7+2/cVVn19Fm8/bkOpJZWy7sfzw7x+yTgYiElPCXUJ43TnXP8zXkDD65+A/9JnWhzd+fYMSRUrQv2V/HrnwEYrGF410aCISYqoykoBSPakMmz+MZyY/w7b923jgvAfo26Ivp5TUi2GRgircCaGjmd0FzAUec87tzLiDmbUH2gNUq1YtzOFITkxbO41O4zuxMHkhl1S7hEGtB9EwsWGkwxKRMDMXxCxTZjYRODXAph7AHGAb4IA+QKJz7r6szpeUlOTmzp2b53gkOGv/WcuTE57kyz++pNpJ1Xi15avcUucWTK2DRKKamc1zziUFe56gSgjOuctzsp+ZDQXGBnMtCaEMQ1DsO7yPl39+mf6z+2MYvZv15vHGj1OiSIlIRyoi+ShsVUZmluic2+RbvAFYEq5rSS6km3LSNb6Iz9/4D09N6s6GPRu4vd7tvHzZy1Q9qWqkoxSRCAjnO4R+ZtYAb5XRWuDBMF5LcsrXI/m3U1LoVHMGs7+ZwfmJ5zPy5pE0qdYk0tGJSASFLSE45+4M17kl7zadkEr3+8vxcWIyFQ8V4YNr3+XuBvdoWGoRUbPTwuJgykEGzhnICzNe4HCVwzxVryNPt+rLicVPOnbHHAxxLSIFkxJCAeecY8xfY3jsp8dYvXM119W6jv5X9OfMk888fud07xdo3Ng7lEWcSg4ihYUSQgG2OHkxnX/szOQ1k6lboS4T7pzA5TWyaBgWaMRTjUMkUmjoz78CaPv+7Tz8/cM0eK8B8zfN560r32JBhwVZJwPQcNUihZxKCAXIkdQjDJ47mF5Te7H70G7+m/RfejXrRbkS5XJ2Ag1XLVKoKSEUED+t+onO4zuzbNsyLq9xOQNbDaTuKXVzfyINVy1SaCkhxLgV21fw2E+P8d3y7zij7BmMaTuGa866RsNNiEiuKSHEqF0Hd9F3el8G/TKI4gnF6Xd5Px698FGKJRSLdGgiEqOUEGJMqieVjxZ8xNOTn2brvq3c2+BeXrjsBU4tFWiMQRGRnFNCiCE/r/uZTuM78fum32lctTHf3/49SZWCHuBQRARQQogJ63at48kJTzJy6UiqnFiFz2/8nLbntNV7AhEJKSWEKLb/yH76zexHv5n9cDh6Nu3JE42foGTRkpEOTUQKICWEKOScY+TSkTwx4QnW717PbXVvo1/LflQ7Kd2MchpzSERCTD2Vo8y8jfO45MNLaDe6HRVKVGD6PdMZcfOI45NB8+ZQpQo0a+ZdFhEJkkoIUWLz3s30mNSDDxd8SIWSFXj/mve5p8E9xMfFH7+zxhwSkTAIqoRgZreY2VIz85hZUoZt3c1spZn9ZWatgguz4DqUcoh+M/tx1ptn8cmiT3jsosdY3nE59593f+BkABpzSETCItgSwhLgRuC99CvNrA7QFqgLVAImmtlZzrnUIK8X29LV+zvgu+Xf0fXHrqzauYprzrqG1654jZrlamZ/Ho05JCJhEFRCcM4tAwI1f7wOGOGcOwSsMbOVwAXA7GCuF9PSzTWwtGV9urQry4TVE6ldvjbj/z2eVmfmshClMYdEJMTC9Q6hMjAn3fJ637rjmFl7oD1AtWrVAu0SeaFo0bN1Kzt+n0nPlqkM/tc8Sq8/iTdav0GHpA4UiS8S2nhFRPIg24RgZhOBQOMi9HDOjcnssADrXKAdnXNDgCEASUlJAfeJqBDMIpbiSeG9//uS5zoZ/8RDh42J9B64kPIlK4QpaBGR3Ms2ITjnsplVJaD1QNV0y1WAjXk4T+QF2aJn4uqJdB7fmaVbl9KiZgsG/utZ6tVuqnp/EYk64eqH8C3Q1syKmdnpQE3g1zBdK7zy2KJn1Y5VXD/ielp+0pL9R/bz9W1fM/GuidSr00zJQESiUlDvEMzsBuBNoALwvZktcM61cs4tNbP/AX8AKcDDMdvCKJctevYc2sMLM17g9TmvUySuCC9d9hKdG3WmeELxfApYRCRvzLnoqbZPSkpyc+fOjXQYeeJxHoYvHE73Sd3ZvHczd9e/mxcve5FKpStFOjQRKeDMbJ5zLuihj9VTOQRm/T2LTuM7MXfjXBpVacSYtmO4oPIFkQ5LRCRXlBCyk0WT0/W71/PUxKf4fPHnVCpdiU9v+JTb692uYalFJCYpIWQlkyan+4/sp/+s/rwy8xU8z
sMzlzzDUxc/RamipSIdsYhInikhZCVDk1O3ZQtfbp/OExOeYN2uddxS5xb6texH9TLVIx2piEjQlBCyktbkdNYs5l9xLp1+uJUZ62ZQv2J9hl8/nKbVm0Y6QhGRkFFCyCjDO4MtY0fSY9zjDPvzc8ptK8d7V7/H/Q2zGIlURCRGKSGkl+6dweEmjXjzhet4fkYf9h/ZT5dGXXi26bOUKV4m0lGKiISFEkJ6W7fiZs3k+xqpdD33Z1ZM/Jk2Ndsw4IoB1CpfK9LRiYiEVewnhBDOLbzMttPloRP5sdxOau07gXHtRnHlWW1CFKiISHSL7TmVQzS38M4DO+k8vjP13j2XOZU8vN74eRa/+I+SgYgUKrFdQghyJNIUTwpD5w3l2SnPsvPgTv5z3n/o07wPFTQstYgUQrGdENI1C83t3MKT10ym8/jOLN6ymKanNWVQ60HUP7V+GIMVEYlusZ0Q8ri8PoQAAAo/SURBVDC38Oqdq3liwhN8tewrqpepzqhbRnFj7Rs13ISIFHqxnRAgx3ML7zm0h5d+fokBswcQHxdP3+Z96XpRV04ockI+BCkiEv1iPyFkw+M8fLroU7pN7MamvZu489w7eemyl6h8YsApnkVECq2gWhmZ2S1mttTMPGaWlG59dTM7YGYLfD/vBh9q7s1ZP4eLhl3E3d/cTdWTqjL7/tkMv2G4koGISADBlhCWADcC7wXYtso51yDI8+eOr0/ChuJH6Da5O58u+pTEUol8fP3H3HHuHcRZ3HH7hqL/gohIQRBUQnDOLQOi44Wsx8OByy5lgJvNi5dCatEiPH3x03S/pPvxw1JnMqy1iEhhFs53CKeb2XxgN/CMc25GuC7knGP0nA94osFM1paBG5cZr/aZTo2amcxaFmT/BRGRgijbhGBmE4FTA2zq4Zwbk8lhm4BqzrntZnY+8I2Z1XXO7Q5w/vZAe4Bq1arlPHKfhZsX0ml8J6b93zTqJZRk8icHaF75YjjzX5kfFET/BRGRgirbhOCcuzy3J3XOHQIO+T7PM7NVwFnA3AD7DgGGACQlJbmcXmPrvq08O+VZhv4+lLLFyzL4qsE80OA+Eh7fmf17gTz0XxARKejCUmVkZhWAHc65VDOrAdQEVofi3IdTD/P2r2/Te1pv9h3ZxyMXPELPpj0pe0JZ7w45rfrJYf8FEZHCIqiEYGY3AG8CFYDvzWyBc64VcCnwvJmlAKlAB+fcjmCD/WHFD3T5sQt/bf+LVme04vVWr1O7Qu1gTysiIgTfyuhr4OsA60cDo4M5d3p/bfuLrj91ZdyKcdQ8uSZj242lTc020dG6SUSkgIjqnsr/HPyH56c9z5u/vkmJIiXo37I/j1z4CEXji0Y6NBGRAicqE0KqJ5Vh84fRY3IPtu/fzgPnPUDfFn05paRaA4mIhEvUJYRpa6fRaXwnFiYv5JJqlzCo9SAaJjb0blTvYhGRsImq7rmrd66m2cfN2HlwJyNvHsm0e6YdmwxCMDuaiIgEZs7luOl/2MVVjnO9P+/N440fP35Y6uRkbzJISYGEBFi/Xs1GRUQAM5vnnEvKfs+sRVUJ4ZxTzuHZps8GnqMgrXdxQoJ6F4uIhEFUvUPIsvWQeheLiIRVVCWEbKl3sYhI2ERVlZGIiESOEoKIiABKCCIi4qOEICIigBKCiIj4KCGIiAighCAiIj5KCCIiAgSZEMzsVTP708wWmdnXZlYm3bbuZrbSzP4ys1bBhyoiIuEUbAlhAnCOc+5cYDnQHcDM6gBtgbpAa+AdM4sP8loiIhJGQSUE59xPzrkU3+IcoIrv83XACOfcIefcGmAlcEEw1xIRkfAK5VhG9wEjfZ8r400Qadb71h3HzNoD7X2Lh8xsSQhjCpfywLZIB5EDijO0FGfoxEKMEDtx1grFSbJNCGY2ETg1wKYezrkxvn16ACnAZ2mHBdg/4MQLzrkhwBDfeeaGYkzvcFOcoaU4QysW4oyFGCG24gzFebJNCM65y7MJ5G7gauAyd3S2nfVA1XS7VQE25jVIEREJv2BbGbUGngKudc7tT7fpW6CtmRUzs9OBmsCvwVxLRETCK9h3CG8BxYAJ5p2wZo5zroNzbqmZ/Q/4A29V0sPOudQcnG9IkPHkF8UZWooztGIhzliIEQpZnFE1p7KIiESOeiqLiAighCAiIj75nhDM7BYzW2pmHjNLyrAt2+EuzOx0M/vFzFaY2UgzK5oPMY80swW+n7VmtiCT/daa2WLffiFpBpbLOHuZ2YZ0sbbJZL/Wvnu80sy6RSDOTIc8ybBfvt/P7O6Nr6HESN/2X8ysen7ElSGGqmY2xcyW+f5f6hRgn2Zmtivdd+G5/I7TF0eWv0PzesN3PxeZ2XkRiLFWuvu0wMx2m1nnDPtE5H6a2QdmtiV9/ywzO9nMJviegRPMrGwmx97t22eFrzVo9pxz+foD1MbbiWIqkJRufR1gId6X1KcDq4D4AMf/D2jr+/wu8FA+x/8a8Fwm29YC5fP7nqa7fi/g8Wz2iffd2xpAUd89r5PPcV4BJPg+vwK8Eg33Myf3Bvgv8K7vc1tgZAR+z4nAeb7PpfEOG5MxzmbA2PyOLbe/Q6AN8APevkuNgF8iHG88sBk4LRruJ3ApcB6wJN26fkA33+dugf7/AU4GVvv+Lev7XDa76+V7CcE5t8w591eATdkOd2HepkwtgFG+VR8D14cz3gDXvxX4Ir+uGQYXACudc6udc4eBEXjvfb5xmQ95Emk5uTfX4f3egfd7eJnve5FvnHObnHO/+z7vAZaRyUgAMeA6YLjzmgOUMbPECMZzGbDKOfd/EYzBzzk3HdiRYXX672Bmz8BWwATn3A7n3E684861zu560fQOoTLwd7rlQMNdlAP+SfcwyXRIjDC5BEh2zq3IZLsDfjKzeb4hOSKho6/o/UEmRcmc3Of8dB/evxADye/7mZN749/H9z3chfd7GRG+KquGwC8BNl9kZgvN7Aczq5uvgR2V3e8w2r6Pbcn8D75ouJ8AFZ1zm8D7xwFwSoB98nRfQzmWkZ/lYLiLQIcFWJexTWyOh8TIrRzG3I6sSwdNnHMbzewUvH0z/vRl+JDJKk5gMNAH7z3pg7d6676MpwhwbMjbHufkftrxQ55kFPb7mUFEv4O5ZWalgNFAZ+fc7gybf8db7bHX9y7pG7wdRPNbdr/DaLqfRYFr8Y3anEG03M+cytN9DUtCcNkMd5GJnAx3sQ1vkTLB99dZyIbEyC5mM0sAbgTOz+IcG33/bjGzr/FWQYT0AZbTe2tmQ4GxATbly7AiObifgYY8yXiOsN/PDHJyb9L2We/7TpzE8UX6sDOzIniTwWfOua8ybk+fIJxz48zsHTMr75zL14HacvA7jKZhbq4EfnfOJWfcEC330yfZzBKdc5t81WtbAuyzHu97jzRV8L63zVI0VRllO9yF78ExBbjZt+puILMSR6hdDvzpnFsfaKOZlTSz0mmf8b44zdeRWzPUvd6QyfV/A2qat7VWUbxF5G/zI740lvmQJ+n3icT9zMm9+Rbv9w6838PJmSW0cPG9sxgGLHPODchk
n1PT3m2Y2QV4/1/fnn9R5vh3+C1wl6+1USNgV1p1SARkWgMQDfcznfTfwcyegT8CV5hZWV/V8RW+dVmLwFvzG/Bmr0NAMvBjum098Lby+Au4Mt36cUAl3+caeBPFSuBLoFg+xf0R0CHDukrAuHRxLfT9LMVbNZLf9/YTYDGwyPelScwYp2+5Dd6WKasiFOdKvPWbC3w/72aMM1L3M9C9AZ7Hm7wAivu+dyt938MaEbh/F+Mt/i9Kdw/bAB3SvqNAR999W4j3xX3jCMQZ8HeYIU4D3vbd78Wka3mYz7GWwPuAPynduojfT7wJahNwxPfcvB/vO6tJwArfvyf79k0C3k937H2+7+lK4N6cXE9DV4iICBBdVUYiIhJBSggiIgIoIYiIiI8SgoiIAEoIIiLio4QgIiKAEoKIiPj8P/7eCZh1Y1QaAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
"source": [
- "## 定义反向传播网络"
+ "from mindspore import Tensor\n",
+ "\n",
+ "x_model_label = np.array([-10, 10, 0.1])\n",
+ "y_model_label = (x_model_label * Tensor(model_params[0]).asnumpy()[0][0] + \n",
+ " Tensor(model_params[1]).asnumpy()[0])\n",
+ "\n",
+ "plt.axis([-10, 10, -20, 25])\n",
+ "plt.scatter(x_eval_label, y_eval_label, color=\"red\", s=5)\n",
+ "plt.plot(x_model_label, y_model_label, color=\"blue\")\n",
+ "plt.plot(x_target_label, y_target_label, color=\"green\")\n",
+ "plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "有了损失函数后,我们如何使得损失函数最小呢?我们可以将公式1代入到损失函数公式4中展开:\n",
- "\n",
- "$$J(w,b)=\\frac{1}{2m}\\sum_{i=1}^m(wx_i+b-y^{(i)})^2\\tag{5}$$"
+ "从上图中可以看出,蓝色线条的初始化模型函数与绿色线条的目标函数还是有较大的差别的。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "公式5可以将$J(w)$看作为凹函数,对权重值$w$微分可求得:\n",
- "\n",
- "$$\\frac{\\partial{J(w)}}{\\partial{w}}=\\frac{1}{m}\\sum_{i=1}^mx_i(wx_i+b-y^{(i)})\\tag{6}$$\n"
+ "## 定义前向传播网络与反向传播网络并关联"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "由凹函数的特性可以知道,当公式6等于0时,损失函数有最小值:\n",
+ "接下来需要定义模型的损失函数,这里采用均方差的方法用于判断拟合的效果如何,即均方差值越小,拟合的效果越好,其损失损失函数公式为:\n",
"\n",
- "$$\\sum_{i=1}^mx_i(wx_i+b-y^{(i)})=0\\tag{7}$$ \n",
+ "$$J(w)=\\frac{1}{2m}\\sum_{i=1}^m(h(x_i)-y^{(i)})^2\\tag{2}$$\n",
"\n",
- "假设有一个$w_{min}$使得公式7成立。我们如何将初始的权重$w_{s}$逐步的变成$w_{min}$,在这里采取迭代法,也就是梯度下降方法\n",
+ "假设训练数据第$i$个数据为$(x_i,y^{(i)})$,公式2中的参数解释如下:\n",
"\n",
- "当权重$w_{s}w_{min}$,权重值需要左移即权重值变小接近$w_{min}$,才能使得损失函数逐步的变小,由凹函数的性质可知,在$w_{s}$处的导数为正(损失函数在$w_{min}$右边单调上升),公式8的值为正。其权重的更新公式为:\n",
- "\n",
- "$$w_{ud}=w_{s}-\\alpha\\frac{\\partial{J(w_{s})}}{\\partial{w}}\\tag{10}$$\n"
+ "net = LinearNet()\n",
+ "net_loss = nn.loss.MSELoss()"
]
},
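A minimal NumPy sketch of formula (2) on a toy batch (the values below are made up for illustration; note that the formula carries the conventional 1/(2m) factor, while `nn.loss.MSELoss` computes the plain mean of squared errors, so the two differ only by a constant factor):

```python
import numpy as np

h = np.array([2.5, 0.3, 2.1])   # model predictions h(x_i)
y = np.array([3.0, -0.5, 2.0])  # ground-truth labels y^(i)
m = len(y)

# Formula (2): J(w) = 1/(2m) * sum((h(x_i) - y^(i))^2)
loss = np.sum((h - y) ** 2) / (2 * m)
print(loss)  # 0.15 for this toy batch
```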
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "当$w_{s}=w_{min}$时,到$\\frac{\\partial{J(w_{s})}}{\\partial{w}}$=0,即梯度消失,其表达式也可写为公式9的样式。\n",
+ "### 定义反向传播网络\n",
"\n",
- "在考虑了全区间的情况后,可以得出权重$w$的更新公式即为:\n",
+ "反向传播网络的目标是不断变换权重值,使得loss值取得最小值,一般的在线性网络中采用权重更新公式:\n",
"\n",
- "$$w_{ud}=w_{s}-\\alpha\\frac{\\partial{J(w_{s})}}{\\partial{w}}\\tag{11}$$\n",
+ "$$w_{t}=w_{t-1}-\\alpha\\frac{\\partial{J(w_{t-1})}}{\\partial{w}}\\tag{3}$$\n",
"\n",
- "当权重$w$在更新的过程中假如临近$w_{min}$在增加或者减少一个$\\Delta{w}$,从左边或者右边越过了$w_{min}$,公式11都会使权重往反的方向移动,那么最终$w_{s}$的值会在$w_{min}$附近来回迭代,在实际训练中我们也是这样采用迭代的方式取得最优权重$w$,使得损失函数无限逼近局部最小值。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "同理:对于公式5中的另一个权重$b$容易得出其更新公式为:\n",
+ "公式3参数解释:\n",
"\n",
- "$$b_{ud}=b_{s}-\\alpha\\frac{\\partial{J(b_{s})}}{\\partial{b}}\\tag{12}$$\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "当所有的权重更新完成后,将新的权重赋值给初始权重:即$w_{s}$=$w_{ud}$,$b_{s}$=$b_{ud}$。将新的初始权重传递回到模型函数中,这样就完成了反向传播的过程。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "> 当遇到多项式的回归模型时,上述梯度方法也适用,由于权重数量的增加,需要将权重的名称更新为$w_0,w_1,w_2,...,w_n$,引入矩阵的表达方式,公式将会更加简洁,这里就不多介绍了。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 实现梯度函数"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "在MindSpore中的所有要编入计算图的类都需要继承`nn.Cell`算子,MindSpore的梯度计算函数采用如下方式。"
+ "- $w_{t}$为迭代后的权重值。\n",
+ "- $w_{t-1}$为迭代前的权重值。\n",
+ "- $\\alpha$为学习率。\n",
+ "- $\\frac{\\partial{J(w_{t-1}\\ )}}{\\partial{w}}$为损失函数对权重$w_{t-1}$的微分。\n",
+ "\n",
+ "函数中所有的权重值更新完成后,将值传入到模型函数中,这个过程就是反向传播过程,实现此过程需要使用MindSpore中的优化器函数,如下:"
]
},
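A plain-NumPy sketch of the update rule in formula (3) applied to the linear model $h(x)=wx+b$, independent of MindSpore; the synthetic data mirrors the target function $y=2x+3$ used throughout this notebook:

```python
import numpy as np

# Toy data drawn around the target function y = 2x + 3
x = np.random.uniform(-10, 10, 100)
y = 2 * x + 3 + np.random.normal(0, 1, 100)

w, b, alpha = 0.0, 0.0, 0.01  # initial weights and learning rate
for _ in range(500):
    err = w * x + b - y
    dw = np.mean(err * x)  # dJ/dw for the MSE loss
    db = np.mean(err)      # dJ/db
    w -= alpha * dw        # formula (3): w_t = w_{t-1} - alpha * dJ/dw
    b -= alpha * db
print(w, b)  # both should approach 2 and 3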
{
"cell_type": "code",
- "execution_count": 8,
- "metadata": {},
+ "execution_count": 10,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.629217Z",
+ "start_time": "2020-09-14T10:38:40.616392Z"
+ }
+ },
"outputs": [],
"source": [
- "from mindspore.ops import composite as C\n",
- "\n",
- "class GradWrap(nn.Cell):\n",
- " \"\"\" GradWrap definition \"\"\"\n",
- " def __init__(self, network):\n",
- " super().__init__(auto_prefix=False)\n",
- " self.network = network\n",
- " self.weights = ms.ParameterTuple(filter(lambda x: x.requires_grad,\n",
- " network.get_parameters()))\n",
- "\n",
- " def construct(self, data, label):\n",
- " weights = self.weights\n",
- " return C.GradOperation(get_by_list=True) \\\n",
- " (self.network, weights)(data, label)\n"
+ "opt = nn.Momentum(net.trainable_params(), learning_rate=0.005, momentum=0.9)"
]
},
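`nn.Momentum` augments the plain update of formula (3) with an accumulated velocity term. Below is a sketch of the classic momentum formulation; this is the standard textbook rule, and MindSpore's fused kernel may differ in minor details:

```python
# Classic momentum update, sketched per parameter (illustrative, not MindSpore code)
lr, momentum = 0.005, 0.9
v = 0.0                        # accumulated velocity
for grad in [1.0, 0.8, 0.5]:   # stand-in gradient sequence
    v = momentum * v + grad    # velocity accumulates past gradients
    w_delta = -lr * v          # parameter change applied this step
    print(w_delta)
```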
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "上述代码中`GradWrap`实现的是对各个权重的微分$\\frac{\\partial{J(w)}}{\\partial{w}}$,其展开式子参考公式6。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### 反向传播更新权重"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "`nn.RMSProp`为完成权重更新的函数,更新方式大致为公式11,但是考虑的因素更多,具体信息请参考[官网说明](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html?highlight=rmsprop#mindspore.nn.RMSProp)。"
+ "### 关联前向和反向传播网络\n",
+ "\n",
+ "定义完成前向传播和反向传播后,在MindSpore中需要调用`Model`函数,将前面定义的网络,损失函数,优化器函数关联起来,使之变成完整的计算网络。"
]
},
{
"cell_type": "code",
- "execution_count": 9,
- "metadata": {},
+ "execution_count": 11,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.645718Z",
+ "start_time": "2020-09-14T10:38:40.630789Z"
+ }
+ },
"outputs": [],
"source": [
- "train_network = GradWrap(loss_opeartion) \n",
- "train_network.set_train()\n",
- "optim = nn.RMSProp(params=net.trainable_params(),learning_rate=0.02)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "通过以上操作,我们就完成了前向传播网络和反向传播网络的定义,接下来可以加载训练数据进行线性拟合了。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## 定义模型拟合过程可视化函数"
+ "from mindspore.train import Model\n",
+ "\n",
+ "model = Model(net, net_loss, opt)"
]
},
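For readers curious about what `Model` assembles, the association is roughly equivalent to wrapping the network manually with `nn.WithLossCell` and `nn.TrainOneStepCell`. This sketch shows the idea only; the actual `Model` also handles metrics, callbacks, and data sinking:

```python
import mindspore.nn as nn

# Roughly what Model(net, net_loss, opt) wires together (sketch);
# net, net_loss, and opt are the objects defined in the cells above.
net_with_loss = nn.WithLossCell(net, net_loss)       # forward pass ending in the loss
train_net = nn.TrainOneStepCell(net_with_loss, opt)  # one optimizer step per call
train_net.set_train()
# loss = train_net(data, label)  # executing one training step by hand
```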
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "定义一个可视化函数`plot_model_and_datasets`,将模型函数和验证数据集打印出来,观察其变化。"
+ "## 拟合过程可视化准备\n",
+ "\n",
+ "### 定义绘图函数\n",
+ "\n",
+ "为了使得整个训练过程更容易理解,需要将训练过程的测试数据、目标函数和模型网络进行可视化,这里定义了可视化函数,将在每个step训练结束后调用,展示模型网络的拟合过程。"
]
},
{
"cell_type": "code",
- "execution_count": 10,
- "metadata": {},
+ "execution_count": 12,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.680586Z",
+ "start_time": "2020-09-14T10:38:40.646738Z"
+ }
+ },
"outputs": [],
"source": [
- "import time \n",
+ "import matplotlib.pyplot as plt\n",
+ "import time\n",
"\n",
- "def plot_model_and_datasets(weight, bias, data_x, data_y):\n",
+ "def plot_model_and_datasets(net, eval_data):\n",
+ " weight = net.trainable_params()[0]\n",
+ " bias = net.trainable_params()[1]\n",
" x = np.arange(-10, 10, 0.1)\n",
- " y = x * ((weight[0][0]).asnumpy()) + ((bias[0]).asnumpy())\n",
- " plt.scatter(x1,y1,color=\"red\",s=5)\n",
- " plt.scatter(data_x.asnumpy(), data_y.asnumpy(), color=\"black\", s=5)\n",
- " plt.plot(x, y, \"blue\")\n",
+ " y = x * Tensor(weight).asnumpy()[0][0] + Tensor(bias).asnumpy()[0]\n",
+ " x1, y1 = zip(*eval_data)\n",
+ " x_target = x\n",
+ " y_target = x_target * 2 + 3\n",
+ " \n",
" plt.axis([-11, 11, -20, 25])\n",
+ " plt.scatter(x1, y1, color=\"red\", s=5)\n",
+ " plt.plot(x, y, color=\"blue\")\n",
+ " plt.plot(x_target, y_target, color=\"green\")\n",
" plt.show()\n",
- " time.sleep(0.02)"
+ " time.sleep(0.2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "上述函数的参数:\n",
- "\n",
- "- `weight`:模型函数的权重,即$w$。\n",
+ "### 定义回调函数\n",
"\n",
- "- `bias`:模型函数的权重,即$b$。\n",
+ "MindSpore提供的工具,可对模型训练过程进行自定义控制,这里在`step_end`中调用可视化函数,展示拟合过程。更多的使用可参考[官网说明](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/customized_debugging_information.html#callback)\n",
"\n",
- "- `data_x`:训练数据的x值。\n",
- "\n",
- "- `data_y`:训练数据的y值。"
+ "- `display.clear_output`:清除打印内容,实现动态拟合效果。"
]
},
{
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "> 可视化过程中,红色的点是验证数据集,黑色的点是单个batch的训练数据,蓝色的线条是正在训练的回归模型。"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:38:40.706063Z",
+ "start_time": "2020-09-14T10:38:40.681635Z"
+ }
+ },
+ "outputs": [],
"source": [
- "## 执行训练"
+ "from IPython import display\n",
+ "from mindspore.train.callback import Callback\n",
+ "\n",
+ "class ImageShowCallback(Callback):\n",
+ " def __init__(self, net, eval_data):\n",
+ " self.net = net\n",
+ " self.eval_data = eval_data\n",
+ " \n",
+ " def step_end(self, run_context):\n",
+ " plot_model_and_datasets(self.net, self.eval_data)\n",
+ " display.clear_output(wait=True)"
]
},
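The `Callback` base class exposes other hook points as well (`begin`, `epoch_begin`, `epoch_end`, `step_begin`, `end`). As a hedged illustration, a hypothetical callback that prints the loss at the end of each step could look like this:

```python
from mindspore.train.callback import Callback

class LossPrintCallback(Callback):
    """Hypothetical callback: print the running loss after every step."""
    def step_end(self, run_context):
        cb_params = run_context.original_args()  # training state passed by Model
        print("step:", cb_params.cur_step_num, "loss:", cb_params.net_outputs)
```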
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "其训练过程如下:\n",
+ "## 执行训练\n",
"\n",
- "1. 设置训练的迭代次数`step_size`。\n",
- "2. 设置单次迭代的训练数据量`batch_size`。\n",
- "3. 正向传播训练`grads`。\n",
- "4. 反向传播训练`optim`。\n",
- "5. 图形展示模型函数和数据集。\n",
- "6. 清除本轮迭代的输出`display.clear_output`,起到动态可视化效果。\n",
+ "完成以上过程后,可以使用训练数`ds_train`对模型训练,这里调用`model.train`进行,其中参数解释:\n",
"\n",
- "迭代完成后,输出网络模型的权重值$w和b$。"
+ "- `epoch`:训练迭代的整个数据集的次数。\n",
+ "- `ds_train`:训练数据集。\n",
+ "- `callbacks`:训练过程中需要调用的回调函数。\n",
+ "- `dataset_sink_model`:数据集下沉模式,支持Ascend、GPU计算平台,本例为CPU计算平台设置为False。"
]
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": 14,
"metadata": {
- "scrolled": true
+ "ExecuteTime": {
+ "end_time": "2020-09-14T10:47:22.917679Z",
+ "start_time": "2020-09-14T10:38:40.707096Z"
+ }
},
"outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "loss_value: 0.42879593\n"
- ]
- },
{
"data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXkAAAD6CAYAAABEUDf/AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nO3deXxU1f3/8dcniWKLVEEzgbKIWytRkSWgBNCgKJafBbW14lJtQaCodf1ZQani169rxaUqUlBcqRY3QAURMKgQQAOyxwWXliCGYRFUVAz3fP+4N3aIk5BkZjKTyfv5eOSRyb135n5yM/nk5NxzPsecc4iISHrKSHYAIiKSOEryIiJpTEleRCSNKcmLiKQxJXkRkTSmJC8iksZiTvJm1tbMCs2sxMxWm9nlwfYxZrbezJYFH/1jD1dERGrDYh0nb2atgFbOuaVm1gxYApwO/A74yjl3V01f68ADD3Tt27ePKR4RkcZmyZIlm5xz2dH2ZcX64s65DcCG4PGXZlYCtK7La7Vv357i4uJYQxIRaVTM7N9V7Ytrn7yZtQc6A4uDTZea2Qozm2Rmzat4zjAzKzaz4nA4HM9wREQavbgleTPbF3geuMI5tx14CDgU6ITf0h8b7XnOuQnOuTznXF52dtT/NkREpI7ikuTNbC/8BD/ZOfcCgHOuzDm3yznnAROB7vE4l4iI1Fw8RtcY8AhQ4py7O2J7q4jDzgBWxXouERGpnZhvvAI9gd8DK81sWbDtOuAcM+sEOOBTYHgcziUiIrUQj9E18wGLsmtGrK8tIiKx0YxXEZE0piQvIpJEnudRVlZGohZwUpIXEUkSz/Po06cPbdq0oaCgAM/z4n4OJXkRkSQJh8MUFRVRXl5OUVERiZgQqiQvIpIkoVCIHj16kpl5CPn5+YRCobifIx5DKEVEpA4++MAwKyQnx+OllzLwpx3Fl1ryIiL1bOdOuOUWOOYYWLHCuPnmTJo1i3+CByV5EZG4qOkomcWLIS8PRo+GAQOgpAQGD4YENOIBJXkRkZjVZJTMV1/BFVdAjx6wZQtMnQpTpkDLlomNTUleRCRGexolM3MmHHkk3HcfjBgBa9bAwIH1E5uSvIhIjEKhEPn5+WRlZe02SiYchvPPh/79oWlTmD8fHnwQfvaz+otNo2tERGJkZhQWFhIuKyNkBg6efAquvBK2b4cbb4RRo6BJk/qPTS15EZEYVNxwNefIGTSIT9v0pt8BxVxwARx+OLz7LowZk5wED0ryIiJ1FnnD9YReBYx9qxtH7VrGwi+O4P5btjN/vt8Xn0zqrhERqaP/3nDN5a1F9/AWeZxmLzOu22O0HfVs9CLs9UwteRGROmrWLESrVo8CS9hrr0N4+p8e0z/rRttFzyZu4HstxWP5v7ZmVmhmJWa22swuD7a3MLPZZvZh8Ll57OGKiKSGwkI45hhj3brzGTToezZsaM6gczKwljkpk+AhPi35cuBq51wH4DjgEjPLBUYCc51zhwNzg69FRBq0rVth6FA48UTwPJg9G55++icccEDqJPZIMSd559wG59zS4PGXQAnQGhgIPB4c9jhweqznEhFJFufgueegQwd49FG45hpYuRL69k12ZNWL641XM2sPdAYWAznOuQ3g/yEws6g1NM1sGDAMoF27dvEMR0QkLtavh0sugWnToHNnmDEDunRJdlQ1E7cbr2a2L/A8cIVzbntNn+ecm+Ccy3PO5WVnZ8crHBGRmHkePPQQ5ObCrFlw553w9tsNJ8FDnFryZrYXfoKf7Jx7IdhcZmatglZ8K2BjPM4lIlIfSkr8vvcFC+Ckk+Af/4BDD012VLUXj9E1BjwClDjn7o7YNR24MHh8ITAt1nOJiCTazp1w883QqZNfSOzRR/2bqw0xwUN8WvI9gd8DK81sWbDtOuB2YIqZDQH+A5wVh3OJiCTMwoV+6331ahg0CO69F3Jykh1VbGJO8s65+VQ9r+ukWF9fRCSRPM/jk082ce+92Tz4oNG6Nbz0Epx2WrIjiw/NeBWRtOWVl1O2ahUuyiIe4Cf4jh1Hcdhh3/LAA46LL3asWZM+CR6U5EUkTXnl5fQ54ADaHH00Bc2b45WX77Z/40Y488xvWb36DuBLMunFX6//nGbNkhNvoijJi0haCq9ZQ9H27ZQDRdu3E16zBvAnNT3+OHTo4Jg+PQu4AehCPgv9WvBpRkleRNJSKDubfPwbj/nB1x9/DKecAn/4Axx22PdkZHQFbiaLnUw59lisod9ljUJJXkTSkrVsSWHv3pRmZjKnVwFjn2rJUUfB4sUwbhwUFe1Fz54t/CX7evQgp6gopQqLxYvqyYtIejIjY948Pnt9K/2vbcHSvxi//rWf4Nu0AQiW7AuHCYVCWBomeFBLXkTS1I4dcO2oDLqdegDr1xtTpvi1Z/wE78vIyCAnJydtEzyoJS8iaej112HYMPjoIxgyBP52h0fz8jAQIiWWa6pHasmLSNrYsgUGD/ZrzZj5yf7hCR7Nz+zjN+ELCvyqY42IkryINHjOwZQpfq33J56AkSNhxQro0wcIh6GoCMrL/c/hcLLDrVdK8iLSoK1bBwMHwtlnQ9u2UFwMt90GP/lJcEAoBPn5kJXlfw5FXdoibalPXkQaHM/zKCsL8/xz2YwaBbs8Y+xY47LL/Fy+GzN/QdZw2E/waXyTNRq15EWk4fA8vA0b6N79D/z85x/x58syOO7ruaw66hyuusL7cYKvkJHhl5NsZAke1JIXkVTgeXtuaXse351wCtcvOJ4l7mHgSzK4gCd5kpbvZvnPT8MZq7FSS15Eksvz/Dukexj9UjTjCzrP/ztj3Q1k8yyZmUfTa7/p5GRmNsq+9pqKS5I3s0lmttHMVkVsG2Nm681sWfDRPx7nEpE0s4fRL9u3w6WXOHoNaM7XTVrwSsav+bz3P1hfupR5mzdj69fDvHmNsiumJuLVkn8MODXK9nucc52CjxlxOpeIpJNqRr+89BLk5jrGjXNcxv2szvsD/UsnkPHGG+S0bIllZjbavvaaikufvHPuTTNrH4/XEpFGJsrol88/h8sv98e+H3VEOc9nFnDsriJYnOXfRFVSr7FE98lfamYrgu6c5tEOMLNhZlZsZsXhRjZJQUQCwegXhzFpkj+paepUf0HtJcuyOLZnVqMd5x6rRCb5h4BDgU7ABmBstIOccxOcc3nOubzs7OwEhiMiqWztWujb1681c/TRsHw5jB4NezcJWvqlpep7r4OEJXnnXJlzbpdzzgMmAt0TdS4RabjKy+HOO/3EXlwM48f7ufyIIyIOasTj3GOVsHHyZtbKObch+PIMYFV1x4tI47NkCVx0ESxbBqefDg88AK1bJzuq9BKXJG9mTwMFwIFmVgrcCBSYWSfAAZ8Cw+NxLhFp+HbsgBtvhLvv9rvYn38ezjwz2VGlp3iNrjknyuZH4vHaIpJeZs+G4cPhk09g6FC/q2b//ZMdVfrSjFcRqRebN/sLaJ9yij9QZt48mDBBCT7RlORFJKGcg2ee8YdFTp4M113
n13o/4YRkR9Y4qECZiCTMf/4DI0bAjBnQrRvMmQMdOyY7qsZFLXkRibtdu+D++yE31++WueceWLhQCT4Z1JIXkbhatcofFrl4MfTrB+PHebRvGoaMxreIdipQS15EfuCvuFSGc67Wz/3uO7jhBujSxZ+9+tRTMPMVj/Z/bLyLaKcCJXkRAfwE36dPH9q0aUNBQQFeLRLy/PnQqZNfa+bss6GkBM47D2xT415EOxUoyYsIAOFwmKKiIsrLyykqKqImBQO3bfNvrPbuDd98A6++Ck8+CT+UoWrki2inAiV5EQEgFAqRn59PVlYW+fn5hPaQkKdN82+sTpgAV17p98X361fpIFNxsWTTjVcRAcDMKCwsJBwOEwqFsCoS8oYNcNll8Nxz/miZqVP94ZFVqiguJkmhlryI/CAjI4OcnJyoCd45ePhhf1LTSy/Brbf6VSOrTfCSdGrJi8geffABDBsGb7zhz1SdMAF+8YtkRyU1oZa8iFTp++/httv8bplly/zk/vrrSvANiVryIo2V5+22rmpl77zjT2pasQJ+8xt/BmurVkmIU2KilrxIY+R50Cf6JKWvv4arr4bjjoNNm+DFF/2brErwDVNcknywUPdGM1sVsa2Fmc02sw+Dz1EX8haRJAhHn6Q0axYcdZS/mMewYbBmjb9ikzRc8WrJPwacWmnbSGCuc+5wYG7wtYikgkqTlDZlhLjgAjj1VGjSBN58Ex56CPbbL9mBSqzikuSdc28CWyptHgg8Hjx+HFB7QCRZPA/KyvzykGVl/rbCQty6UiYPnUeHXOPpp2H0aP8Ga+/eyQ1X4ieRffI5FQt5B581n1kkGSr631u3hgMO+KEf/t//hv5/zOH83xuHHgpLl/q1Z/bZJ9kBSzwlfXSNmQ0DhgG0a9cuydGIpKGK/vddu2DbNnaRwf3z8xh9FGBw331wySWQmZnsQCUREtmSLzOzVgDB543RDnLOTXDO5Tnn8rJ/qGokInFT0f+emcmKffPpwSKu9MZy/AnG6tV+iYKoCb6ii6cOZYcldSQyyU8HLgweXwhMS+C5RKQqZnw7s5DRf/6Crt/O59MDuvLPyY5XXjEOOqiK51QzxFIalngNoXwaWAj80sxKzWwIcDtwspl9CJwcfC0i9SVoib8xz3FM5wxuuXdfzjvPKHk/g3POteoLQlYxxFIanrj0yTvnzqli10nxeH0RqSXP44vev+bahaczwQ3l4IMdr71mnHxyDZ9f0cVTVKQ68A1c0m+8ikgdVVOW4IXHtnNp0UTKyOFqu5ub5pxH00NqUe63og58NWUPpGFQWQORhqiKPvPPSj3O/H/f8psh+xNq+jWLM3tyV+9pND24Di3xijrwSvANmpK8SENUqc/cKwszYbxHh/Y7mDnDcfvB43knfDB566dpRaZGTklepCGKKEvwfqez6TMoxPARGXT13mElR3Ptuj+z1/bNaomLkrxIg2TGzlmF3HLVJo5Z8QQrVsAjDzvm9hrDYVn/rv3NUo2JT1tK8iIN0OLF0DXPGH3nfgzY+Rwlu37J4At3YfPqsGi2xsSnNSV5kQbkq6/giiugRw/YGi5nGgOYwtm0/PJDeO+9ut0s1Zj4tKYkL9JAzJwJRx7puO8+GPEnx5r3Mxmw35v+zv32g9zcur1wpbLDGhOfXpTkRVJcOAznnQf9+0PTTf9mfuYJPLi6gJ/9DH/pppUrYcsWvxVfFxVj4mvbzSMNgpK8SIpyDp58Ejp0gGefhRuv/op3vzuSnrve/G+3SlaWv5RTXRN8BY2JT1tK8iIp6JNPoF8/uOACOPxwePddGPO3pjTpmaduFakVlTUQSSHl5fD3v8Nf/+o3ru+/H0aMqCgFrFIDUntK8iKpwPNYPm8rF13bguJi47TTYNw4aNu20nEV3SoiNaTuGpEk++Zrj1Ht/0nXk/bjP8u38sw/PaZPj5LgRepASV4kiQoLoePRHrevO58LeIIS7wjOPjGsnhiJGyV5kSTYuhWGDoUTTwTPMpl99FVMyhpOi54ddENV4irhffJm9inwJbALKHfO5SX6nCKpyu3yeP7R7Vw6ej82bTL+8he48Ubjp/vcBeFrdUNV4q6+brz2cc5tqqdziSSM53mEw2FCoRBmVu3CHZWtX+dxSecFTNvcm877fsCMRYfRJa/in2ndUJXEUHeNSA15nkefPn1o06YNBQUFeOXlNSrs5Xnw0EPQIRdmbc7jTq7h7W860qWtasRI4tVHknfAa2a2xMyGVd5pZsPMrNjMisMqjCQpLBwOU1RURHl5OUVFRYTfe2+Phb1KSuD44+Hii6H7scaqboO5Jutesnoeq753qRf1keR7Oue6AL8CLjGz4yN3OucmOOfynHN52dnZ9RCOSN2EQiHy8/PJysoiPz+fUG5ulYW9du6Em2+GTp1gzRp49FGYPds4dNFk1YiRepXwPnnn3GfB541m9iLQHXgz0ecViTczo7CwcPc++SgzUBcu9EfOrF4NgwbBvfdGdLeb+t6lfiW0JW9mTc2sWcVj4BRgVSLPKZJIGRkZ5OTk+Ane3/BDYa8vv4TLLoOePWHbNnjpJXj6aeV0Sa5Et+RzgBeDX4gs4J/OuVcTfE6RevfKK36NmdJSuOQSuPVWaNYs2VGJJDjJO+c+Bo5J5DlEkmnjRrj8cnjmGX/NjgUL/FWbRFKFhlCK1IFz8Pjjfq33F16Am27yywErwUuqURVKkVr66CP4059gzhy//33iRD/Zi6QiteRFaqi8HO66C44+GhYv9ksBv/mmErykNrXkRWrg3Xfhootg6VIYMAAefNCf6CqS6tSSF6nGjh1w7bXQrRusX++vtTp1qhK8NBxqyYtUYe5cGD7c74MfMgT+9jdo3jzZUYnUjlryIpVs2QKDB0Pfvv4k1tdfh4cfVoKXhklJXiTgHEyZ4t9IfeIJGDkSVqzwC02KNFTqrhEB1q3zZ6q+9BJ07QqzZvnFxUQaOrXkpXHwPCgr85vrlTY/+KA/W3XOHBg7FhYtUoKX9KEkL+nN82DDBn9Rj0qLe6xeDb16waWX+jNVV62Cq67yKweLpAu9nSV9eZ7foR4s7OEB4QUL2G/dRm5/tOUPRcQefxx+/3uVd5f0pCQv6Ssc3i3B9wEW7DqWJrnfsmMHnHsu3HOPFmiS9KbuGklfodAPKzd9nHcib9mD7OItduwwJk/eyuTJSvCS/tSSl/QVrNw0/antXHzdfjjnMHuA/PwZnHPOzGRHJ1IvEt6SN7NTzex9M1trZiMTfT6RCp9/Dr8blMHAC/eneXOjqAg2bDibt96a+d+VnUTSXKKX/8sEHsRfxDsXOMfMchN5ThHnYNIkf1LTtGn+gtpLlkCPHpWW7hNpBBLdku8OrHXOfeyc2wk8AwxM8DmlEVu71i9HMGSIXxJ4+XIYPRr23jvZkYkkR6KTfGtgXcTXpcE2kbj6/nu44w4/sRcXw/jxMG8eHHFEsiMTSa5E33iN9n/xblMOzWwYMAygXbt2CQ5HUp7n+UMfQ6EaD1xfssSv9b5sGZx+OjzwALRWU0IESHxLvhRoG/F1G+CzyA
OccxOcc3nOubzs7OwEhyMprWLyUqWZqVXZsQOuuQa6d/dvsj7/PLz4ohK8SKREJ/l3gMPN7GAz2xsYBExP8DmloYqYvERRkf91FWbPhqOO8pfjGzIESkrgzDPrMVaRBiKhSd45Vw5cCswCSoApzrnViTynNGARk5fIz4cDD/xRUbHNm+EPf4BTTvEPmzcPJkyA/fdPWtQiKS3hk6GcczOAGYk+j6SBYPIS4bCf4E880W/R5+fjXi/kmSkZXH45bN0K110Hf/0r7LNPsoMWSW2a8SqpJSMDcnL8FnzQdfOfBesY0e97ZsxtQrdufkngjh2THahIw6DaNZKaQiF29ejF/RmXk+tWMW/h3txzDyxcqAQvUhtqyUtKWrXauGjn6yz2jH79HOPHG+3bJzsqkYZHLXlJKd9+CzfcAF26wNq1xlNPwcyZSvAidaWWvKSMt96CoUPh/ffh/PPh7rtBUydEYqOWvCTdtm0wYgQcf7zfkn/1VXjySSV4kXhQkpf4qWKx7OpMm+Yvoj1hAlx5pb/Oar9+CYxRpJFRkpf4qChJ0Lq1P5Fp165qD9+wAX77W7/WzIEHwqJFfvfMvvvWU7wijYSSvMRHOAwLFvjJfdEi6N07au0Z5+Dhh/1a7y+/DLfe6leN7NYtCTGLNAJK8hIfodDumfqdd35Ue+aDD/zG/tCh0KkTrFgBo0bBXnvVc6wijYiSvMSHGcyfDz16/Lf2TLBK9vffw223+ZOYli3z+99ffx1+8YskxyzSCGgIpcRPZqaf6CPqwb/zjl/rfcUK+M1v4P77oVWrKM+tQx15EdkzteQlvoLaM1/vMK66Co47DjZt8uu8P/dcNQm+FnXkRaTmlOSlalGGRHqeR1lZGa7SMMnI7bNm+bXe77kHhg11rJm3kdMHVjOsshZ15EWkdpTk5b8ik3qU1rXnefTp04c2bdpQUFCAF7S4K7a3bn0MLVu+xqmnQpMm8OY8j4dKCtgvt3X1LfTKdeSDvnwRiZ1VbpElU15enisuLk52GI1TRVIP6rfzzDPQrp3fus7KgtJSyoA2bdpQXl5OVlYWpaWl5OTk8PnnZbRufQ2eNxbYnyuv/I5bb92XfbaV+X8kIl6DnJyqz68+eZE6MbMlzrm8aPsS1pI3szFmtt7MlgUf/RN1LomDoMvEKy+nbMECf7X1Sq3rUChEfn4+WVlZ5OfnEwqF+PRT+OMfQ3jeE8DHdO06jLFjm/qLedSmhV5RR14JXiSuEtaSN7MxwFfOubtq+hy15JPIObwTTqDPW29RZEZ+794Uzp1LxubNu7WuPc8jHA5zwAEhHnjAuP56f9ctt3icdVaYVq1CWGSiVgtdJOGqa8lrCKX4zAj/618UtWtHeXk5RUVFhDdvJqdS90pGRgZlZTn8+tf+fKf+/WHcODjooAwgSldMRQtdRJIi0TdeLzWzFWY2ycyaRzvAzIaZWbGZFYc1qiKpQi1b/qg7JtK338Lo0dC1K3z6KfzzKY+XHynjoHapc19HRHYXU3eNmc0BWkbZdT2wCNgEOOBmoJVzbnB1r6fumuSr6I4JhXbvdnnjDRg2zC9NcOGFMPZvHgf8NuJGbWGh32oXkXqXsO4a51zfGgYwEXg5lnNJ/cjIyNiti+aLL+Avf4GJE+Hgg+G11+Dkk4GyKGPb1S0jknISObomcm7jGcCqRJ1LYhStDrzn8cKkL8jNdTzyCFx9NaxcGSR40Nh2kQYikTde7zSzTvjdNZ8CwxN4LqmryuPjCwv57DO4tNN8Xtx8PJ2afsj0hYeS171Se8DM76LRyBmRlJawlrxz7vfOuaOdcx2dcwOccxsSdS6JQURJAW/BQibc/RUdcmHm5m7czrW8/W1H8g6q4oa4xraLpDzdKWvsgm6X9zNzKWj6DsOv+Rld84yV3YZwbdbd7NWzu7piRBowJfl0VoM1V3d+b9xyciEdM1exMqMjjzwCc+cahy16yi9DMG+eWuoiDZiSfLqqosBYZAXJxYv9Me+j/5rBwIFGSYkxeHCQ09UVI5IWlOTTVaXyvV5Z2Q8VJHv3/hWXD/2aHj0cW7fCtGkwZQq0jDbjQUQaNCX5dFVpiGPYjKKiIsrL+7JgwT/4+8NNGdFyKmtWeQwYkOxgRSRRVLsmXVUa4mhhaN58BuHwyfyUNcyiJ73Cb8N3pUStOSMiaUEt+XSWkYEL5fDkU0ZurvHFF325+uqv2Nzzz/TKeluTmEQaAbXk09gnn8Dw4TB7NvToARMnGkceuS94szWJSaSRUEs+DZWXw913++usLlwIDzwA8+fDkUcGB2jkjEijoZZ8mlm2DC66CJYsgdNO82u9t22b7KhEJFnUkk9Blcez18Q338CoUZCXB+vW+Uu0Tp+uBC/S2CnJpxjP834Yz15QUIDneXt8TmEhdOwIt98OF1wAJSVw9tnqjRERJfmUEw6Hg/HswRJ81ayWtXUrDB0KJ57oT3CdMwcmTYIWLeoxYBFJaUryqSKoMxPKzq52CT7wS9E89xx06ACPPuov6rFyJZx0UhLiFpGUphuvqSCiprvl51M4dy7hzZt/tAQf+DXDLrnE72/v3BlmzIAuXZIUt4ikvJha8mZ2lpmtNjPPzPIq7RtlZmvN7H0z6xdbmGmuUp2ZjM2bycnJ2S3Bex489BDk5vrj3u+8E95+WwleRKoXa3fNKuBM4M3IjWaWCwwCjgROBcaZWWaM50pfe1hKr6QEjj8eLr4Yunf3u2auucY/XESkOrEu5F0C/KhLARgIPOOc+w74xMzWAt2BhbGcL21VsZTezp3+iJlbboGmTf3+9wsv1KgZEam5RLUFWwOLIr4uDbb9iJkNA4YBtGvXLkHhNAAVs1ADCxf6I2dWr4ZBg+Dee3fbLSJSI3vsrjGzOWa2KsrHwOqeFmVb1Jk9zrkJzrk851xednZ2TeNOW19+CZddBj17wrZt8NJL8PTTSvAiUjd7bMk75/rW4XVLgci5lm2Az+rwOo3KK6/AiBH/HUFz663QrFmyoxKRhixR4+SnA4PMrImZHQwcDrydoHM1eBs3wjnn+LVmmjWDBQvg/vs8mu2ofn1WEZE9iXUI5RlmVgr0AF4xs1kAzrnVwBRgDfAqcIlzbleswaYb5+Cxx/xJTS+8ADfdBO++Cz2O/fH6rCIidWG1KYKVaHl5ea64uDjZYdSLjz7ya73Pnev3v0+c6Cd7AMrK/ARfXu6PkywtVae8iFTJzJY45/Ki7VNZg3pWXg533QVHH+1PZho3Dt58MyLBwx7HzYuI1JSm0ySY53mEw2FCoRDLlhkXXQRLl8KAAfDgg36D/UeqGDcvIlJbasknUEXZ4NatD6ddu6fp1s2xfj08+yxMnVpFgq+g1ZtEJA7Ukk+gcFkZ8+c3wfPepbT0UM499xse+HsTmpeHgRDRpxOIiMSPWvIJsmWTx8jOS/G81wCPjh2v4Kkn9qb5mRo1IyL1R0k+zpyDKVOgQwfHk2X9uJbb+CSjM8tmjcQ2bdqt2iTVLAgiIhIPSvJxtG6df0P17LOh7UEZFHcZzu1ZN9C+V1csJ0ejZkSk3qlPPg4qa
r2PHAm7dsHYsXDZZUZWxkQI37r7CBmNmhGReqQkH6PVq/1qkQsXwsknw/jxcMghFXszfjyJKSPKNhGRBFF3TR199x2MGQOdOzvef8/j8fu+YNarLiLBi4gkn5J8HRQV+eur3nQTnLX/HEq2tuSCK1pgfQo0YkZEUoqSfC1s3+6XAO7VC77+GmZM3srkLb8iRNgfVlNUBGvWqHKkiKQMJfkamj7dX0T7oYf8RT1Wr4ZfnbO/P0oG/JuoP/0pdOqkMfAikjKU5Pfg88/hd7+DgQOheXP/Buu998K+++In9nnz4LPPYPlyv3m/a5fGwItIylCSr4JzMGmSXx1y2jT43/+FJUvg2GMrHZiRAa1awVFH+TWDNQZeRFJIrIuGnGVmq83MM7O8iO3tzewbM1sWfIyPPdT6s3Yt9O0LQ4b4JYGXL4frr4e9967mSRWVI0tL/da9xsCLSAqIdZz8KuBM4JXqPaIAAAjPSURBVB9R9n3knOsU4+vXD8+DcJjvm4e4+x5jzBg/oY8f74+Bz6jpn0KNgReRFBNTknfOlQBYQ2u1BkmdUMjvl+nThyULvuWifZ5i2deHc8YZcP/90Lp1sgMVEYlNIvvkDzazd83sDTPrncDz1I63+/qpOz4p45q3BtB9VxFlXzfl+Ue+4IUXlOBFJD3ssSVvZnOAllF2Xe+cm1bF0zYA7Zxzm82sKzDVzI50zm2P8vrDgGEA7dq1q3nkNRXZajfzHweVIGfP/wnDT8zmE3c1w2wid/SYyv5/fDn+MYiIJMkeW/LOub7OuaOifFSV4HHOfeec2xw8XgJ8BPyiimMnOOfynHN52dnZdf0+oqvUasfzIBRic7dTudCe4BTvVbKaZDLvdY9/bBjA/vNf1g1TEUkrCSlQZmbZwBbn3C4zOwQ4HPg4EeeqVkSrnaIi3MYwzxTmcPna6WzNhOuucfz1BmOffQzQDVMRST+xDqE8w8xKgR7AK2Y2K9h1PLDCzJYDzwF/cs5tiS3UOoio3/6fLqdz2pAQ554L7dsbS5YYt9xq7LNPvUclIlJvzKVQnZW8vDxXXFwc19fc9b3HuDu/YtRtzXDOuOUW+POfITMzrqcREUkaM1vinMuLti+t68mvWgUXXZTB4sU/o98pjvH/G6Z93oHqdxeRRiMtyxp8+y3ccINfDnjtWnjqCY+Z3xTQPv/nKh4mIo1K2iX5t97yC0HefDMMGgQlJXDeKWFsoRbQFpHGJ22S/LZtMGIEHH+835J/9VV48knIzkYLaItIo5UWffLFxX4p4M8/hyuvhP/5n6AUcIWK4mFaQFtEGpm0SPKHHAJHHglTp0K3blUcpOJhItIIpUWSb9ECXnst2VGIiKSetOmTFxGRH1OSFxFJY0ryIiJpTEleRCSNKcmLiKQxJXkRkTSmJC8iksaU5EVE0lhK1ZM3szDw7xhe4kBgU5zCiSfFVTuKq3YUV+2lamx1jesg51zU9VNTKsnHysyKqyqcn0yKq3YUV+0ortpL1dgSEZe6a0RE0piSvIhIGku3JD8h2QFUQXHVjuKqHcVVe6kaW9zjSqs+eRER2V26teRFRCSCkryISBprUEnezM4ys9Vm5plZXqV9o8xsrZm9b2b9qnj+wWa22Mw+NLN/mdneCYrzX2a2LPj41MyWVXHcp2a2MjiuOBGxVDrfGDNbHxFb/yqOOzW4jmvNbGQ9xPU3M3vPzFaY2Ytmtn8Vx9XL9drT929mTYKf8drg/dQ+UbFEnLOtmRWaWUnwO3B5lGMKzGxbxM/3hkTHFZy32p+L+f4eXK8VZtalHmL6ZcR1WGZm283sikrH1Nv1MrNJZrbRzFZFbGthZrODfDTbzJpX8dwLg2M+NLMLa31y51yD+QA6AL8E5gF5EdtzgeVAE+Bg4CMgM8rzpwCDgsfjgRH1EPNY4IYq9n0KHFiP128M8P/3cExmcP0OAfYOrmtuguM6BcgKHt8B3JGs61WT7x+4GBgfPB4E/KsefnatgC7B42bAB1HiKgBerq/3U01/LkB/YCZgwHHA4nqOLxP4HH/CUFKuF3A80AVYFbHtTmBk8HhktPc90AL4OPjcPHjcvDbnblAteedciXPu/Si7BgLPOOe+c859AqwFukceYGYGnAg8F2x6HDg9kfEG5/wd8HQizxNn3YG1zrmPnXM7gWfwr2/COOdec86VB18uAtok8nx7UJPvfyD++wf899NJwc86YZxzG5xzS4PHXwIlQOtEnjOOBgJPON8iYH8za1WP5z8J+Mg5F8ts+pg4594EtlTaHPk+qiof9QNmO+e2OOe2ArOBU2tz7gaV5KvRGlgX8XUpP/4FOAD4IiKZRDsm3noDZc65D6vY74DXzGyJmQ1LcCwVLg3+ZZ5Uxb+HNbmWiTQYv9UXTX1cr5p8/z8cE7yftuG/v+pF0D3UGVgcZXcPM1tuZjPN7Mh6CmlPP5dkv6cGUXVDKxnXq0KOc24D+H/EgVCUY2K+dim3kLeZzQFaRtl1vXNuWlVPi7Kt8tjQmhxTYzWM8xyqb8X3dM59ZmYhYLaZvRf8xa+z6uICHgJuxv++b8bvShpc+SWiPDfmcbY1uV5mdj1QDkyu4mXifr2ihRplW0LfS7VhZvsCzwNXOOe2V9q9FL9L4qvgfstU4PB6CGtPP5dkXq+9gQHAqCi7k3W9aiPma5dySd4517cOTysF2kZ83Qb4rNIxm/D/TcwKWl/RjqmxPcVpZlnAmUDXal7js+DzRjN7Eb+rIKakVdPrZ2YTgZej7KrJtYx7XMENpdOAk1zQGRnlNeJ+vaKoyfdfcUxp8HPejx//Kx53ZrYXfoKf7Jx7ofL+yKTvnJthZuPM7EDnXEILcdXg55KQ91QN/QpY6pwrq7wjWdcrQpmZtXLObQi6rzZGOaYU/95BhTb49yRrLF26a6YDg4JRDwfj/zV+O/KAIHEUAr8NNl0IVPWfQTz0Bd5zzpVG22lmTc2sWcVj/JuPq6IdGy+V+kHPqOJ87wCHmz8SaW/8f3WnJziuU4FrgQHOuR1VHFNf16sm3/90/PcP+O+n16v6wxQvQZ//I0CJc+7uKo5pWXFvwMy64/9+b05wXDX5uUwHLghG2RwHbKvopqgHVf43nYzrVUnk+6iqfDQLOMXMmgfdq6cE22quPu4sx+sDPzGVAt8BZcCsiH3X44+KeB/4VcT2GcDPg8eH4Cf/tcCzQJMExvoY8KdK234OzIiIZXnwsRq/2yLR1+9JYCWwIniDtaocV/B1f/zRGx/VU1xr8fsdlwUf4yvHVZ/XK9r3D/wP/h8hgH2C98/a4P10SD1co174/6aviLhO/YE/VbzPgEuDa7Mc/wZ2fj3EFfXnUikuAx4MrudKIkbGJTi2n+In7f0itiXleuH/odkAfB/ksCH493HmAh8Gn1sEx+YBD0c8d3DwXlsL/LG251ZZAxGRNJYu3TUiIhKFkryISBpTkhcRSWNK8iIiaUxJXkQkjSnJi4ikMSV5EZE09n8dJUn9W3RDugAAAABJRU5ErkJggg==\n",
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXkAAAD8CAYAAACSCdTiAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nO3deZzN5fvH8dc1Y18jOwmpDEoyWVqlkvaviqRSX2UIiVZL31LaqFQkW0obSYSQsiZEluxbFBrE2JdsM+f+/XGOn2mcMTPOOXNmzryfj8c85pzPdl/zmeNyz/25F3POISIikSkq3AGIiEjoKMmLiEQwJXkRkQimJC8iEsGU5EVEIpiSvIhIBAs4yZvZeWY208zWmNkqM3vSt72nmW01s6W+r1sDD1dERDLCAu0nb2ZlgbLOuSVmVhhYDPwHaA4ccs69HXiYIiJyNnIFegHn3HZgu+/1QTNbA5QP9LoiIhK4gGvy/7qYWSVgNlATeAp4BDgALAKeds7t9XNOHBAHULBgwTrVqlULWjwiIlnWunVw6BAUKgQXXxzQpRYvXrzLOVfS376gJXkzKwT8BLzmnBtrZqWBXYADeuFt0ml9pmvExsa6RYsWBSUeEZEsa8cOqFABEhMhVy6Ij4fSpc/6cma22DkX629fUHrXmFluYAzwpXNuLIBzbodzLsk55wGGAnWDUZaISLZXqhRceaU3wV95pfd9iATcJm9mBgwD1jjn+ibbXtbXXg/QFFgZaFkiIhHBDGbOhIQEb4I3C1lRASd54CrgIWCFmS31besO3G9ml+FtrtkEtA1CWSIikSEqKqAmmvQKRu+aOYC//4YmB3ptEREJjEa8iohEMCV5EZEIpiQvIhImmzfDs89CUlLoylCSFxHJZB4PDBgANWvChwMdK1aEriwleRGRTLR2LVx7LXTs6KhyxygqvXE5lartC1l5SvIiIpngxAl4/XWoVcuxcssmavW+g+UXtyB/3mh2/7M7ZOUGo5+8iIicweLF8OijsGx5ErWuf5Lf63/I74eMvk3e4Yn6ncgVFbpUrJq8iEiIHDkCzz8P9epB/InlXPhaPZZdO4BrNztWfWh0ueCBkCZ4UJIXEQmJn36CWrWgT98j1HiyG/tb1GFfri2MWFWNyaOiqVTjqlNz1ng83knLgjgr8ElK8iIiQXTgADz+ODRsCAdLTKfcq5ewvMibPHTpQ6ztuJb7v1qFxW+FWbO8c9Z4PHD99d5ZKRs29L4PIrXJi4gEyaRJ0K4dbN27m+rdn2Z1nk+pWqAq05tPp1HlRqcOTD5nTUICzJvnnXZ43jzv+yDOaaOavIhIgBIS4IEH4PbbHa7mCM7pEcP6fF/S/eruLG+3/N8JPqUQTzusmryIyFlyDr76Cjp1cuxjE1Vfas8Gm0LdknUZesc0Li19adoXCfG0w0ryIiJnIT7e2/Y+cXIi513bncNXvsPfSUa/296nfd0OREdFp/9iIZx2WM01IiIZ4PHAoEFQvTpMXfEbFV6sy18N3+KGPz2sHuB4olLzjCX4EAs4yZvZeWY208zWmNkqM3vSt724mU01s99934sFHq6ISPisX+/tCPN4p38o1vxZEltfwYmC2/h6RTUmjITz9jm4776g95AJRDBq8onA0865GKA+0MHMqgNdgenOuQuB6b73IiLZTmIi9Onj7fe+aO+PlHixJlvOe5vWtVuzpsMamvWfgeXK5W2kP9lDJosIxspQ24HtvtcHzWwNUB64C2joO+xTYBbwfKDliYgEhceTroedS5d6pyRYsjaBCnFPEV/8CyqccxHf3D6L6ypd5z0on/P2jJk3L+QLc2dUUNvkzawSUBtYAJQ+uZC373vW+alFJGdLxwCko0ehRw+oE+v4veBnFO4Ww44So/jftf9jWbtlpxI8nOohEx9/apBTFhG03jVmVggYA3R2zh2wdP6QZhYHxAFUrFgxWOGIiKQujQFIc+d6a+/rdm6kbJc4theawZXlrmTI7UOoUaqG/2tm0sLcGRWUmryZ5cab4L90zo31bd5hZmV9+8sCO/2d65wb4pyLdc7FlixZMhjhiIicWSoDkA4ehCc6Oq6+7jg7qvYmb4dqHMo9gw/XVeXnh39KPcFnYQHX5M1bZR8GrHHO9U22awLwMPCm7/v4QMsSEQkKPwOQpkyBtm0dWxIXUqLN3ewqs5X/rDU+mATlj2yCd3dnyZp6WoJRk78KeAhoZGZLfV+34k3uN5nZ78BNvvciIlmDr3ll9x6jVSu45a5DHGzQhajH6pG74FbGfh3Ft5vrU/5IaKYbyCzB6F0zB0itAf6GQK8vIhIKzsHo0dCxI+wu/j1Fuz3OXtvM4/FleWPkTorWuQpmzIBdu0Iy3UBm0bQGIhK5UukmuW0btG8P46ftoFjLznjKf0W5EjFMumMOV1VoAK8mOycbNtEkp2kNRCQy+ekm6Rx89BHEVHdM2v4x+Z+N4XDFsbzc8GV+a/sbV1W86lQvmWxac09JNXkRiUwpukluXLiHNt1KMHPZ7xR9JI7EYrNoUPEahtwxhGolqnnPSecAqexENXkRiUy+bpJJ0Xl4p+J71LyhMPOiXydXp0ugzG8MuX0Isx6Z9e8EH8IVmsJFNXkRiUxmrOg3k0cfSWJhwhIKd4jlYIGV3Fv9Xvo16UfZwmX/fXyIV2gKF9XkRSTiHDsGL70Eta88zIrKT2NtGlC0zD7GtxjP6GajT0/wEPIVmsJFNXkRydoy2E4+f753SoLVid9RoEt7juTeSocrOvDaDa9RJG+R1E8M8QpN4aKavIhkXRloJz90CDp3hgaNt/NnbHNoeSdVyp3DvKYT6X9LvzMn+JMirGcNKMmLSFbmr53cj6lToeYlHt6fM4Q8XWLwXDiB165/lcVfn0P9OndF1IPUjFKSF5GsK4128r17oXVraNxyLTtvbQh3tOXKKrVZ/vhyul/8GHnmzk/zP4hIpzZ5Ecm6ztBOPnYstH/iODsvepPoDq+RL39BPmg8jP9e9l/MzDtvQRZdyCMzKcmLSNaWYp72v//2zjcz5te55GsRhyuymmY1W/Deze9RulCyLo8R+iA1o9RcIyLZgnMwfDhUq7Wfb0+0h0evplSFQ0xqOYmR94z8d4I/KQIfpGaUavIikuX9+Se0bQtT//qWPK07Qv6/6Vy3M70a9aJQnkLhDi9LU5IXkSwrKQk++AC6vr6VEzc+AVd9S0zpWgy9YxxXlL8i3OFlC0ryIpIlrV4NrR/1sCBxMLniupIr73Fea/gmTzV4itzRucMdXrYRrDVePzaznWa2Mtm2nma2NcVqUSIiZ3T8OLzyCtS6cRWLL70Gbm9Pwwvrsqr9Sp6/+nkl+AwKVk1+OPAB8FmK7e86594OUhkiEuEWLoRHHjvK6nNfJyruTc7JX4R3G3/CQ2WbYMWy/2Rh4RCUmrxzbjawJxjXEpGc559/4JlnoF7z2axvdBlc14uWte5jbftVtOr8CXbeeTl61GogQt2FsqOZLfc15xTzd4CZxZnZIjNblJBDR6S
J5GQzZ0L1y/fxzvo43CPXUa7iMaY8MIXPm35OyX9I17QGkrpQJvmBwAXAZcB24B1/BznnhjjnYp1zsSVLlgxhOCKSlezbB23iHI06jib+rhii6gzjmQbPsLrDSm6uerP3oAid/jczhax3jXNux8nXZjYUmBiqskQke5kwAeKe+YsdsR2g+XdcUvpyht01icvLXv7vAzVqNWAhS/JmVtY5t933timw8kzHi0jk27kTOnZKYvSfHxLVvDv58nl4tdHbPFn/SXJFpZKOUkxrIBkTlCRvZiOBhkAJM4sHXgIamtllgAM2AW2DUZaIZD/OwRdfQMdeyznYMA5uXcANlW9m8B0DqVyscrjDi2hBSfLOufv9bB4WjGuLSPa2eTO0efwIU4/3wlq8RbGjjv7LqnJ/j4lYLo3HDDVNUCYip/N4YMcObxX8LM/xeGDAAKh2y0ymXXQpXPMGrVYksf6DJFp+uwG79lp1icwESvIi8m8ZWHIvtXPWrvbQoNEeOk5tzdH7GlGxomPag1MZ/nc9zj3iO2fhQnWJzARK8iLyb+lccs/fOScS4bU5V1PzgVEsrBdDVO3PeP6qrqx5YgU3XHAjzJkDDRqoS2QmUoOYiPzbyb7pGVlRqVQpllzyMA/+cQ9rbu8PF77OZSWvYPjdP1KrTK1Tx0VHexO9ukRmGiV5Efm3DPZNP3IEXnrZw9v5q0PHe8mX13jzpvfoWLcj0VHRp5+gLpGZSkleRE6XzkQ8ezY89OxSttRqA40XcVOlWxl614ecf875mRCkpIeSvIhk2IED8HTXf/jo95ehyTsUy1uCQXeNoln1Zt5FtCXL0INXEcmQSZPggsZT+Sj3JXB1Hx6p9V82dllD8xrNleCzINXkRSRdEhKg3VO7GHv4KbjlcyoWuIhPm82kYaWG4Q5NzkA1eZGcJoMDnZyDESMcVf7zBWPLxhBVayTdrnyBdV2WKcFnA0ryIjlJBgc6xcfDDff+wQPfN+FQ44e49LyqLHv8N16/qRf5cuXLnJglIEryIjlJOgc6eTwwYGAiVVu9zcyYmuSt+gv9mnzAkg5zqFmqZiYHLYFQm7xITpKOgU6//w73dVnMbxXawHW/cWOFO/mk2QAqFKkQhoAlUEryIjnJGQY6JSbCm+8cpufsF0mKfY+iuUrx0T3fcE/M3eo1k40pyYvkNH4GOi1bBvc8P4WNMe2g7mYerNaW/ne9yTn5zglTkBIswVo05GPgdmCnc66mb1txYBRQCe+iIc2dc3uDUZ6IBMfRo9Dt1Z28v64LrsEIyuepxsiWP3PN+VeHOzQJkmA9eB0ONEmxrSsw3Tl3ITDd915Esog5cxyV7x7OeydisBqjeb5uTzY+u1QJPsIEa2Wo2WZWKcXmu/AuCQjwKTALeD4Y5YnI2Tt4ENq/sIEv9rWFejOoXvgqvnloKDElY8IdmoRAKLtQlj65kLfvu9/5Ss0szswWmdmiBC0gIBJSEyef4LyWb/JF4UvIU2kR7984iBVdZvtP8P4GTZ3NilESVmHvJ++cG+Kci3XOxZYsWTLc4YhkP+lIvLt3w21xv3LHd7Hsj+3GdeVv5c9n1tDpqrZEmZ804G/Q1NmsGCVhF8okv8PMygL4vu8MYVkiOVMaidc5+Oyrg5zXpjOTy9WncOndjGr6LbMeH0O5wuVSv66/QVNns2KUhF0ok/wE4GHf64eB8SEsSyRn8pd4fTX7bVsdDR6eyMO/1uDIpf24r0p74rutpvml/0n7uicHTSVfps/fNsnygtWFciTeh6wlzCweeAl4E/jazB4FtgDNglGWiCSTcgRriRK4htfTd2lFujX5hxM1xlLaajC61VyuqdQg/ddNbdBUBlaMkqwhWL1r7k9l1w3BuL6IpCJFMt6wYBd3HI5lbfuPsdyH6Vy1K71bvEye6DwZv7a/1aG0dF+2oxGvItldVBRJJUrT/e11vL22LZ47f+KiTWUZt/1CYl55XTXuHC7svWtEJDBLlh3n/Idepc+BWkSXW0afqwaz5o3FxHy/QAleVJMXya6OHfHw+Esz+eTwk3DxKuoXbs7YNu9TtnCZcIcmWYiSvEg2NH32Ppq904m9tb+ggKcEQ28ZT8u6d4Y7LMmClORFspHDh6HFy+OYmNQRam/jzgUV+WJ2PIWfrRfu0CSLUpu8SDbx1aRtlOp4DxMLNuXc/OcyY0E9xk/bSuHYq9RnXVKlmrxIFrd7j4c7Xx7CvALPYxWOE1flDT5o+TS5LVp91iVNSvIiWVi/EWt4ZnYcJ8rO4XxPIya1G0yNslVPHaA+65IGJXmRLGjz1mPc8tobrCnxOtHnFqbnZZ/w4p0Paxk+yTAleZEsxDnoMXgOvde0wVN6LbWi7mfSQ90pX7WGmmTkrOjBq0gWsWzdPs5r3443dlxD7vxHGHLNRJZO20r56rU1ta+cNdXkRcIsMdER9+43DN/xJK7UDhrlf4pxz79C4f2HTp9hUm3wkkGqyYuE0YxF8ZTq9B8++ac5hQ8VYOK865n+zFsUzldQU/tKUKgmLxIGR44m0azPQCYd7Q7FE2nx4+V8Nn8JuaM2n6qxpzbdr0gGKMmLZLKvZqyk9bg2HDl3PqV31GHC+C3UTdwIFn16jV1T+0qAQp7kzWwTcBBIAhKdc7GhLlMkK9pz4Ci39X6V+dG9iSpwDl1KDuKdVztgiUneJpmlS6GGetFIcGVWTf5659yuTCpLJMt579ufeO7nOE4UXc9F/7Ti+y7vUKX0uTBqxKlVnZTgJQTUXCMSQpv+3kuTvs+yruAwckVV4a1LfuSZu286dYDa3CXEMiPJO+BHM3PAYOfckOQ7zSwOiAOoWLFiJoQjEnrOOZ777Gv6rn4ST/5d1D3+HJP+9xIlihb494Fqc5cQy4wkf5VzbpuZlQKmmtla59zskzt9SX8IQGxsrMuEeERCasnGLdw+sD3bC08i/7E6DG38PQ/cUDvcYUkOFfIk75zb5vu+08y+BeoCs898lkj2k5iUxCMDP+DL7T0gL9xi7/LNmx0pkE+tohI+IR0MZWYFzazwyddAY2BlKMsUOSseD+zY4Z085kzbUjFl6TJKdG3Al7s7c87+a5nadBWTX+ysBC9hF+oRr6WBOWa2DPgVmOScmxLiMkUyxuOB66+HChVOzRHjb5sfh48doXGfbtzybR3222YeLjiShPcmcWPs+Zn6I4ikJqTVDOfcH0CtUJYhErCEhNPniAGYOxeSkrzf/cwbM/yn6Tw+uS1HC2ykfEJrJj75FpddXDwMP4BI6jR3jYi/OWJKlIBChbz7CxXyvvf5e/9u6vR6hP/OupHjx43nS8/grwHDlOAlS1KDoYi/OWJ27fKumg3e7wkJuNKlee27EfSc35mkXPuotrs7k7u9QOUK+cMbv8gZKMmLwOn91U/W7n/+GRITWfPgPdx2ZWH+jP6B3Pvq8tbVQ+nS8tLwxSuSTkryIv6YwVdfkVixAk9eUYmB9X7DJUZT/0A/vuvVnhLnRoc7QpF0UZIXScnjgYQEZh/cyj1tK7Gr5B8U+P0mhrb4iJa3a1S2ZC9K8iLJeTwcvu
FaWuX5m7H1N0G+kty+93NGDmxJocLqpyDZj5K8SDJfz/yG1rXWc7hYAsUW38OoNr256Y4Lwh2WyFlTkhcBdh5K4J4hTzHn4BeQdBGPDL+NwRW2kOf2KuEOTSQgSvKSoznneH/W5zw3/SlORB2g3B8vMuHp56nzwkFN/ysRQUlecqzfd23kzqHtWHt8GlE7ruS5i4fw+qc1iI4GKJDW6SLZgpK85Dgnkk7QfWJf+i7uiScxNxdt+ZBJL7el6gV6sCqRR0lecpRftizknuFt2O6WkeuPpvS+tj9Pv1ZerTISsZTkJUc4dPwQ7b7+H19u6AcHy1B391i+7dOUcuXCHZlIaCnJS8Qbt2oyD49+nAO2hfyrHmfgvW/QqnlRzHlgh9ZXlcimRkiJWDsO7eDGQffT9JvbOJBQiFu2zSF+8Ic8fJ8vwadjvniR7C7kSd7MmpjZOjPbYGZdQ12eiHOOD3/5mPPfimH61rEUXdKTCbcvYfLgqyh+cjbg1OaQF4kwIW2uMbNoYABwExAPLDSzCc651aEsV3Ku9bvX0+yztiw/MAv+uoYHv7uWgRf/QqGbc//7wJOzTM6bd2oOeZEIFOo2+brABt8KUZjZV8BdgJK8BI/Hw/G/t/Ly8k9585dX8RzPR5mlAxk99UuuTnoN9uU6fWUnf3PIi0SgUCf58sBfyd7HA/WSH2BmcUAcQMWKmuFPMsjj4Zc763BfxT/5q/R+bG0znrzofd4cW4Z8TUbCvFyp19RTziEvEoFCneT9VY/cv944NwQYAhAbG+v8HC/i18FjB3lyVBc+iV0GB8pzwYiXGDOwFbUanes9QDV1kZAn+XjgvGTvKwDbQlym5ADj107gv990YG/iVqJ/fZyeM3PT9YrvyHV951MHqaYuEvIkvxC40MwqA1uBFkDLEJcpEWz7we20/qYTU7Z8AztqUjt+NF+/V5eqRVVjF/EnpF0onXOJQEfgB2AN8LVzblUoy5RsyuOBHTvA+W+x8zgPgxYOoUrfGKb88R1557zGh7WWsGhcfape5KuxK8GLnCbkI16dc5OByaEuR7Ixj29g0snujDNneptafNbuWssDo+JYsutn+PN6Gh4azOefXEiFCmGMWSSb0IhXCb9UBiYdSzzGi9NfoeYHtVjy10oKTR/GiJunM2O0ErxIemnuGgk/PwOT5m6Zy4Nft2HT4TWwsgV3F3yPwWNLU6LEGa7jW4BbbfMipyjJS/glG5i0v0henh7fnmHLBsG+ipw7fxKf/e9Wbr01jWuk0eQjklMpyUvWEBXFt3vm0ebjjuw++jcs6EKbqq/w9g+FKFIkHef7a/JR90kRJXkJv60HttJ2fEcm/TEO/q5FxaXj+KLPFVxzTQYuorloRPxSkpew8TgPgxYN4pkpXTly/AQ2qzfPXtOFl2fmJl++DF5Mc9GI+KUkL2Gxaucq/js2joU75sHGG4nZOIgvP7iA2rUDuKhGuIqcRk+mJFMdTTzK/2a8SK2BtVm0aR25vvuU12N+ZNmsABO8iPilmrxkmtmbZ/PfsXH8cWAdLHuQevv68umXJbn44nBHJhK5VJOXwKQxHQHA3iN7eWx8G64bfh1/bjlOvtFTGFB3APOmllCCFwkxJXlJH3/J3HPmdVKdc4xeNZoL349h2JJPYO6z3Lh+OesKfkT7/51LVKPTzxGR4FKSl7SllszPsE7qX/v/4vYRd9L8m+bs2VSeoqMW8sXDffhh2GEqLhmntVVFMomSvKQttWR+sm96rlOrLyV5kui/oD/V+lfn+7Uz4Id3aLZ/Aetn1+aBB8BKn36OiISOHrxK2lIbaJSib/qKnSt5dHwbFm5fABtuptTCgQztU5k770x2LfVnF8lUSvKStjMl5qgojhQvQq8ZPegz9y3ckWIw6UvirryfPguMokX9XE/92UUyTciSvJn1BNoAJxtdu/vmlpfsKJXEPOPPGbSZ0JY/9m2A3x6h0vq3+eTDc2nYMPNDFJHThbom/65z7u0QlyFhsPuf3Tw79Vk+WfoJ0fsvwCZM45m7b6DnCChQINzRichJaq6RDHHO8dXKr3hi8pPsObIH5nSl2q4XGf5VfmJjwx2diKQU6t41Hc1suZl9bGbF/B1gZnFmtsjMFiWoO12WtmnfJm4bcRstx7Zk36ZKRH+0hF4N32DJr0rwIlmVuTOMVEzzZLNpQBk/u3oA84FdgAN6AWWdc63PdL3Y2Fi3aNGis45HQiPRk0j/Bf3pMeMFjh8zkn58nfpRHfh4WDQxMeGOTkTMbLFzzm9VK6DmGufcjekMYCgwMZCyJJOkWEJv6d9LeWzCYyzevpjojbeRZ+qH9O5ekfbtITo63MGKSFpC2bumrHNuu+9tU2BlqMqSIEm2hN4/V9fj5W5X8s78vkQdLQETRtGofDOG/GJUqhTuQEUkvUL54LWPmV2Gt7lmE9A2hGVJMPhGtk6tmEi7S+fxxy9ziVr6GAXm9+H9N4vRqpXGLolkNyFL8s65h0J1bQmNXYWieOrR4nxedid591SC4R9zd2xD+v8GZfw9eRGRLE9z1wjOOT5f9jnVBsTwZZk92M89OGfMKsb2bcjo0b4En44phUUk61GSz+H+2PsHN39xM63GteLwlovwDPyN1pVeZc2K/DRt6jsojSmFRSTr0mCoHCrRk8i7v7zLS7NeIvF4Lvj+A8rsepyhX0ZxY8o+U/5modTcMyLZgmryOdDibYu5YugVPDftOdyGxiS+v5qnrunAyhV+Ejz4nVJYRLIH1eRzkMPHD/PizBd5b8F75D1RGsaMoUpUUz7+wahX7wwnanpgkWxLST6HmLJhCu0mtmPz/s3kW9mWE1PepOcz59CtG+TJk44LaHpgkWxJST7C7Ty8ky4/dGHEihEUOlINRv7MpWWuZtgvULNmuKMTkVBTko9QzjmGLx3O0z8+zYGjh8gzryeJ87rS95W8dOqkKQlEcgol+Qi0Yc8G2k5sy4w/Z1Bk39UkfTGE6y6NYegyqFIl3NGJSGZSko8gJ5JO8Pa8t3ll9iuQmIdcUwbBmjZ89E4UrVvrealITqQkHyEWxC+gzXdtWLFzBedsu4d9I/vxnxvKMWAMlCsX7uhEJFyU5LO5g8cO8sKMF+j/a38KuXJEjR5Hnl13MfpjuOce1d5Fcjol+Wxs4vqJtJ/UnvgD8RRd3559Y17n4RZF6NsXihfHO/3ATvVtF8nJNOI1G/r70N/c98193DHyDg7vKYIbNpeicz9gyvgiDB+eLMFrvhmRHE9JPhvxOA9DFw8lZkAM364eR9HFvdjzxhI6NW3AypVw883JDvY334yI5DgBJXkza2Zmq8zMY2axKfZ1M7MNZrbOzG5O7RqSPut2reP6T68nbmIceffW4kS/5ZT9/QXmzs7D++9DoUIpTtB8MyJC4G3yK4G7gcHJN5pZdaAFUAMoB0wzs4ucc0kBlpczJFtn9bjnBL3n9ObVn18ltytA4ZkfsWtua17oavToAfnypXINzTcjIgS+kPcaADs9gdwFfOWcOwb8aWYbgLrAL4GUlyMkW2f1lyY1aXPLCVYlrKL83vvYO
uw96lxchmGLoFatdFxL882I5HihapMvD/yV7H28b9tpzCzOzBaZ2aKESGk3DmQVpYQEDiyaS4fGiVxVZynbd+2nwLiJ7B78FW+9VIb589OZ4EVESEeSN7NpZrbSz9ddZzrNzza/Gc85N8Q5F+uciy1ZsmR64866AuzVMm7PPKp3imbgFVB+6f3seW0NVxS9jRUr4JlnvE3sIiLplWbKcM75W0YiLfHAecneVwC2ncV1sp+zXEVp28FtPPH9E4xdM5ayBS4lz/AhHNhVl8H9jcce87a8iIhkVKhSxwSghZnlNbPKwIXAryEqK2vJYK8Wj/MwaNEgYgbEMGndZCqsfYPtPRdxU/V6rFplxMUpwYvI2Qvoj38zawr0B0oCk8xsqXPuZufcKjP7GlgNJAIdckzPmgz0almdsJq47+KY+9dcKrtG/DVwMEc9VRn5Jdx3nzrEiEjgzJ3Nw8EQiY2NdSRXKCYAAA0ZSURBVIsWLQp3GCF3LPEYb8x5g9d/fp380YUpPLcvWye34oEHjPfegxIlwh2hiGQnZrbYORfrb58e42Wynzf/TNzEONbuWku14w+w9v2+FClWikmT4NZbwx2diEQatfYG0xm6Tu47uo+237Xl2uHXsu/gUUpP/Z61r39B+4dLsWqVEryIhIaSfLCk0nXSOceY1WOoPqA6H/32ETUPPM3fL66kaEITZs+GAQOgSJHwhi4ikUvNNcHip+tkfP4TdJjcgQnrJlA5X22Kjf6ONWvq0O05ePHFM0xJICISJErywXKy6+S8eSRd2YCBm76m+4wenPAkcsn2t1gxtDOXXZqLqQuhdu1wBysiOYWaa4LB44GdO2HGDFYsm8rVD53giSmdqBhVnzxDV7J++DO88Voufv1VCV5EMpdq8oHytcUfXTCXV+8vR+8q2ymS+xwuWf8FK0a05OqrjY9+gIsvDnegIpITKckHKiGBWVvnEBfn4fdz/+IKdz+r3unHn0dLMGAAtGunEasiEj45N/0EMlOkz54je3hsQXeuf8jDUctDzDdDWPjSCK67ogSrVkH79krwIhJeOTMFBThTpHOOUStHETMghuFLP+UanuPvj/awc/tjfPEFTJoEFSuGJnQRkYzImc01ZzlTJMCW/VtoP6k9k36fREzRWIpO+4Gff76MFi3g/fe1yp6IZC05syZ/FuufJnmSeH/++1QfUJ1Zm2Zx/bF3WfvMfA5vvIzx42HkSCV4Ecl6cmZNPoPrny77exltvmvDwm0LqVf8FrZ/NJCZy8+nbVvo3RuKFs2kuEVEMihn1uTh1PqnZ0jwR04coeu0rtQZUodN+zbTaM9IFnSaRO7D5zNzJgwapAQvIllbzqzJp8O0P6bRbmI7Nu7dyI3ntmZl37eYtbk4zz4LPXtCgQLhjlBEJG0B1eTNrJmZrTIzj5nFJtteycyOmNlS39egwEPNHLv/2c0j4x7hps9vwnmiaLh5BtOeGEbJQsVZsAD69FGCF5HsI9Ca/ErgbmCwn30bnXOXBXj90PN4ICEBV7IkI1aOpPMPndl3dB93ntOdn197gfj9+enVC557DvLk8sCO9LXji4hkBQEleefcGgDLrgnP11/+z9Vzefz+Ivxw7l5ql6xHtflDmTD2Eho0gGHDICbm1LHMm+ftkTNzpkY6iUiWF8osVdnMfjOzn8zsmhCWc9YSd2znnaQ51IxLYm6hvTTL9Rbru87ltx8uoV8/+PlnX4IH/33rRUSyuDRr8mY2DSjjZ1cP59z4VE7bDlR0zu02szrAODOr4Zw74Of6cUAcQMVMHCa6ZPsS2nzXhiU3ebh+fSEOzRrD6G2NadwYBg+GSpVSnJBsKuH09q0XEQm3NJO8c+7GjF7UOXcMOOZ7vdjMNgIXAaet0u2cGwIMAe9C3hktK6MOHz/MS7Ne4t3571KqQCla5hrFN2PupWABY/hwaNUqleb2DPatFxHJCkLShdLMSgJ7nHNJZlYFuBD4IxRlZcQPG36g3aR2bNq3ibvPj2P9h28yYmEx7r0X+veHMv7+XknuZN96EZFsItAulE3NLB5oAEwysx98u64FlpvZMuAboJ1zbk9goZ69hMMJPDj2QZp82YQ8UXlpeewnxj82mF1/FWPsWBg9Oh0JXkQkGwq0d823wLd+to8BxgRy7WBwzvHZss946senOHjsII9UepE5b3ZnxNq8PPoovPUWFCsW7ihFREInYke8btyzkbYT2zL9z+nUK3sllVYMZfiL1alcGaZOhRsz/KRBRCT7ibgkfyLpBH1/6UvPn3qSJzoP7c//kPH/a8vCbVE89RS88goULBjuKEVEMkdEJfmFWxfS5rs2LNuxjNuqNCXPtP582K08NWo4xgzZTb0mxdUrRkRylIgYsnno+CG6TOlC/WH1SfgngafKjWXBU2OZOLI8PV/ysKTYjdS7s8xZrQIlIpKdRUSSX75jOf1+7ccDF7fl0tmr6RvXlCpVYMkSeOnxBPLMn62RqiKSI0VEc0398lfSq8QGesdV5sQJ6NsXOnWC6GjAaaSqiORcEZHkZ8yAHh0q06gRDB0KVaok26mRqiKSg0VEkr/hBpgyBRo3TiWHa6SqiORQEZHkzeDmm8MdhYhI1hMRD15FRMQ/JXkRkQimJC8iEsGU5EVEIpiSvIhIBFOSFxGJYEryIiIRLNCVod4ys7VmttzMvjWzc5Lt62ZmG8xsnZmpF7uISBgEWpOfCtR0zl0KrAe6AZhZdaAFUANoAnxoZtEBliUiIhkUUJJ3zv3onEv0vZ0PVPC9vgv4yjl3zDn3J7ABqBtIWSIiknHBnNagNTDK97o83qR/Urxv22nMLA6I8709ZGbrAoihBLArgPNDRXFljOLKGMWVMZEY1/mp7UgzyZvZNKCMn109nHPjfcf0ABKBL0+e5ud45+/6zrkhwJC04kgPM1vknIsNxrWCSXFljOLKGMWVMTktrjSTvHPujEtem9nDwO3ADc65k4k8Hjgv2WEVgG1nG6SIiJydQHvXNAGeB+50zv2TbNcEoIWZ5TWzysCFwK+BlCUiIhkXaJv8B0BeYKp5J3Kf75xr55xbZWZfA6vxNuN0cM4lBVhWegSl2ScEFFfGKK6MUVwZk6PislMtLCIiEmk04lVEJIIpyYuIRLBsleTNrJmZrTIzj5nFptiX5jQKZlbZzBaY2e9mNsrM8oQozlFmttT3tcnMlqZy3CYzW+E7blEoYklRXk8z25ostltTOa6J7z5uMLOumRBXqtNjpDgu5PcrrZ/d15lglG//AjOrFIo4/JR7npnNNLM1vn8DT/o5pqGZ7U/2+30xk2I74+/FvPr57tlyM7s8E2K6ONl9WGpmB8ysc4pjMuV+mdnHZrbTzFYm21bczKb6ctFUMyuWyrkP+4753deTMeOcc9nmC4gBLgZmAbHJtlcHluF9CFwZ2AhE+zn/a6CF7/Ug4PFMiPkd4MVU9m0CSmTi/esJPJPGMdG++1cFyOO7r9VDHFdjIJfvdW+gdzjuV3p+dqA9MMj3ugUwKpN+d2WBy32vC+OdRiRlbA2BiZn1eUrv7wW4Ffge7/iZ
+sCCTI4vGvgbOD8c9wu4FrgcWJlsWx+gq+91V3+feaA48IfvezHf62IZLT9b1eSdc2ucc/5GxKY5jYJ5u/80Ar7xbfoU+E8o4/WV2RwYGcpygqwusME594dz7jjwFd77GzIu9ekxMlt6fva78H52wPtZusH3ew4p59x259wS3+uDwBpSGUWeBd0FfOa85gPnmFnZTCz/BmCjc25zJpb5/5xzs4E9KTYn/xyllotuBqY65/Y45/binSusSUbLz1ZJ/gzKA38le+9vGoVzgX3JkkmqUy0E0TXADufc76nsd8CPZrbYN71DZujo+5P541T+REzPvQyl1nhrff6E+n6l52f//2N8n6X9eD9bmcbXRFQbWOBndwMzW2Zm35tZjUwKKa3fS7g/Uy1IvaIVjvsFUNo5tx28/4EDpfwcE5T7Fsy5a4LC0jGNgr/T/GxL2Tc03VMtpEc647yfM9fir3LObTOzUnjHGqz1/a9/1s4UFzAQ6IX35+6FtympdcpL+Dk34H626blfdvr0GCkF/X6lDNPPtpB+jjLKzAoBY4DOzrkDKXYvwdskccj3vGUc3oGIoZbW7yVs98z33O1OfDPkphCu+5VeQblvWS7JuzSmUUhFeqZR2IX3z8RcvhpYQFMtpBWnmeUC7gbqnOEa23zfd5rZt3ibCwJKWum9f2Y2FJjoZ1dIpqRIx/3yNz1GymsE/X6lkJ6f/eQx8b7fcVFO/1M8JMwsN94E/6VzbmzK/cmTvnNuspl9aGYlnHMhnYwrHb+XcE5zcguwxDm3I+WOcN0vnx1mVtY5t93XdLXTzzHxeJ8bnFQB7/PIDImU5po0p1HwJY6ZwL2+TQ8Dqf1lEAw3Amudc/H+dppZQTMrfPI13oePK/0dGywp2kGbplLeQuBC8/ZEyoP3T90JIY4rtekxkh+TGfcrPT/7BLyfHfB+lmak9p9SMPna/YcBa5xzfVM5pszJ5wNmVhfvv+/dIY4rPb+XCUArXy+b+sD+k00VmSDVv6bDcb+SSf45Si0X/QA0NrNivqbVxr5tGRPqJ8vB/MKbmOKBY8AO4Idk+3rg7RmxDrgl2fbJQDnf6yp4k/8GYDSQN4SxDgfapdhWDpicLJZlvq9VeJstQn3/PgdWAMt9H7KyKePyvb8Vb++NjZkU1wa8bY9LfV+DUsaVWffL388OvIL3PyCAfL7PzgbfZ6lKqO+Pr9yr8f6pvjzZfboVaHfycwZ09N2bZXgfYF+ZCXH5/b2kiMuAAb57uoJkPeNCHFsBvEm7aLJtmX6/8P4nsx044ctfj+J9jjMd+N33vbjv2Fjgo2TntvZ91jYA/z2b8jWtgYhIBIuU5hoREfFDSV5EJIIpyYuIRDAleRGRCKYkLyISwZTkRUQimJK8iEgE+z9Ig4J+qLqGCAAAAABJRU5ErkJggg==\n",
"text/plain": [
""
]
@@ -677,49 +573,41 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "weight: 1.9990227 bias: 2.9115517\n"
+ "Parameter (name=fc.weight, value=[[2.0405223]]) \n",
+ "Parameter (name=fc.bias, value=[2.9574146])\n"
]
}
],
"source": [
- "from IPython import display\n",
"\n",
- "step_size = 200\n",
- "batch_size = 16\n",
+ "from mindspore.train.callback import LossMonitor\n",
"\n",
- "for i in range(step_size):\n",
- " data_x,data_y = get_data(batch_size)\n",
- " grads = train_network(data_x,data_y) \n",
- " optim(grads)\n",
- " plot_model_and_datasets(net.weight.data, \n",
- " net.bias.data, data_x, data_y)\n",
- " display.clear_output(wait=True)\n",
+ "epoch = 1\n",
+ "imageshow_cb = ImageShowCallback(net, eval_data)\n",
+ "model.train(epoch, ds_train, callbacks=[imageshow_cb], dataset_sink_mode=False)\n",
"\n",
- "output = net(eval_x)\n",
- "loss_output = criterion(output, eval_label)\n",
- "print(\"loss_value:\", loss_output.asnumpy())\n",
- "plot_model_and_datasets(net.weight.data, net.bias.data, data_x,data_y)\n",
- "print(\"weight:\", net.weight.set_data([0][0]), \"bias:\", net.bias.set_data([0]))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "可以看到最终得到的线性拟合的权重值非常接近目标函数权重weight=2、bias=3。"
+ "plot_model_and_datasets(net,eval_data)\n",
+ "print(net.trainable_params()[0], \"\\n%s\" % net.trainable_params()[1])"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "ExecuteTime": {
+ "end_time": "2020-09-14T04:00:18.787349Z",
+ "start_time": "2020-09-14T04:00:18.784236Z"
+ }
+ },
"source": [
- "## 总结"
+ "训练完成后打印出最终模型的权重参数,其中weight接近于2.0,bias接近于3.0,模型训练完成,符合预期。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
+ "## 总结\n",
+ "\n",
"本次体验我们了解了线性拟合的算法原理,并在MindSpore框架下实现了相应的算法定义,了解了线性拟合这类的线性回归模型在MindSpore中的训练过程,并最终拟合出了一条接近目标函数的模型函数。另外有兴趣的可以调整数据集的生成区间从(-10,10)扩展到(-100,100),看看权重值是否更接近目标函数;调整学习率大小,看看拟合的效率是否有变化;当然也可以探索如何使用MindSpore拟合$f(x)=ax^2+bx+c$这类的二次函数或者更高次的函数。"
]
}
@@ -745,4 +633,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
\ No newline at end of file
+}
diff --git a/tutorials/notebook/loading_dataset.ipynb b/tutorials/notebook/loading_dataset.ipynb
index d1865e5fd7e5ef2a2db65bae5fc165cfc6feec09..61f899959097a3471b7734ac711f103657c3bf9d 100644
--- a/tutorials/notebook/loading_dataset.ipynb
+++ b/tutorials/notebook/loading_dataset.ipynb
@@ -8,7 +8,7 @@
"\n",
"## 概述\n",
"\n",
- "MindSpore可以帮助你加载常见的数据集、特定数据格式的数据集或自定义的数据集。加载数据集时,需先导入所需要依赖的库`mindspore.dataset`。\n",
+ "MindSpore可以帮助你加载常用的数据集、特定数据格式的数据集或自定义的数据集。加载数据集时,需先导入所需要依赖的库`mindspore.dataset`。\n",
"\n",
"接下来,以加载数常用数据集(CIFAR-10数据集)、特定格式数据集以及自定义数据集为例来体验MindSpore加载数据集操作。"
]
@@ -90,9 +90,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 加载常见的数据集\n",
+ "## 加载常用的数据集\n",
"\n",
- "MindSpore可以加载常见的标准数据集。支持的数据集如下表:\n",
+ "MindSpore可以加载常用的标准数据集。支持的数据集如下表:\n",
"\n",
"| 数据集: | 简要说明 |\n",
"| :---------: | :-------------:|\n",
@@ -103,7 +103,7 @@
"| PASCAL-VOC | 数据内容多样,可用于训练计算机视觉模型(分类、定位、检测、分割、动作识别等)。|\n",
"| CelebA | CelebA人脸数据集包含上万个名人身份的人脸图片,每张图片有40个特征标记,常用于人脸相关的训练任务。 |\n",
"\n",
- "加载常见数据集的详细步骤如下,以创建`CIFAR-10`对象为例,用于加载支持的数据集。\n",
+ "加载常用数据集的详细步骤如下,以创建`CIFAR-10`对象为例,用于加载支持的数据集。\n",
"\n",
"1. 使用二进制格式的数据集(CIFAR-10 binary version),配置数据集目录,定义需要加载的数据集实例。"
]
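A minimal sketch of step 1, assuming the CIFAR-10 binary files have already been downloaded to a local directory (the path below is hypothetical):

```python
import mindspore.dataset as ds

DATA_DIR = "./datasets/cifar-10-batches-bin"  # hypothetical local path
cifar10_dataset = ds.Cifar10Dataset(DATA_DIR, num_samples=5, shuffle=True)
```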
@@ -283,7 +283,7 @@
"count = 0\n",
"for data in cifar10_dataset.create_dict_iterator():\n",
"# In CIFAR-10 dataset, each dictionary of data has keys \"image\" and \"label\".\n",
- " image = data[\"image\"]\n",
+ " image = data[\"image\"].asnumpy()\n",
" print(f\"The data of image {count+1} is below:\")\n",
" print(image)\n",
" plt.figure(count)\n",
@@ -308,7 +308,7 @@
"\n",
"MindSpore天然支持读取MindSpore数据格式——`MindRecord`存储的数据集,在性能和特性上有更好的支持。 \n",
"\n",
- "> 阅读[将数据集转换为MindSpore数据格式](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/converting_datasets.html),了解如何将数据集转换为MindSpore数据格式。\n",
+ "> 阅读[将数据集转换为MindSpore数据格式](https://www.mindspore.cn/api/zh-CN/master/programming_guide/dataset_conversion.html),了解如何将数据集转换为MindSpore数据格式。\n",
"\n",
"可以通过`MindDataset`对象对数据集进行读取。详细方法如下所示:"
]
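A minimal sketch of reading a MindRecord file, assuming `test.mindrecord` was produced beforehand by the conversion tools (the file name is hypothetical, and in some MindSpore versions the keyword argument is `dataset_files` rather than `dataset_file`):

```python
import mindspore.dataset as ds

data_set = ds.MindDataset(dataset_file="test.mindrecord")  # hypothetical file
for item in data_set.create_dict_iterator():
    print(item)
```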
@@ -407,7 +407,7 @@
"## 加载自定义数据集\n",
"\n",
"现实场景中,数据集的种类多种多样,对于自定义数据集或者目前不支持直接加载的数据集,有两种方法可以处理。\n",
- "一种方法是将数据集转成MindRecord格式(请参考[将数据集转换为MindSpore数据格式](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/converting_datasets.html)章节),另一种方法是通过`GeneratorDataset`对象加载,以下将展示如何使用`GeneratorDataset`。\n",
+ "一种方法是将数据集转成MindRecord格式(请参考[将数据集转换为MindSpore数据格式](https://www.mindspore.cn/api/zh-CN/master/programming_guide/dataset_conversion.html)章节),另一种方法是通过`GeneratorDataset`对象加载,以下将展示如何使用`GeneratorDataset`。\n",
"\n",
"1. 定义一个可迭代的对象,用于生成数据集。以下展示了两种示例,一种是含有`yield`返回值的自定义函数,另一种是含有`__getitem__`的自定义类。两种示例都将产生一个含有从0到9数字的数据集。\n",
" \n",
@@ -491,27 +491,27 @@
"output_type": "stream",
"text": [
"dataset1:\n",
- "[array([0], dtype=int32)]\n",
- "[array([1], dtype=int32)]\n",
- "[array([2], dtype=int32)]\n",
- "[array([3], dtype=int32)]\n",
- "[array([4], dtype=int32)]\n",
- "[array([5], dtype=int32)]\n",
- "[array([6], dtype=int32)]\n",
- "[array([7], dtype=int32)]\n",
- "[array([8], dtype=int32)]\n",
- "[array([9], dtype=int32)]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [0])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [1])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [2])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [3])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [4])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [5])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [6])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [7])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [8])]\n",
+ "[Tensor(shape=[1], dtype=Int32, value= [9])]\n",
"dataset2:\n",
- "[array([0], dtype=int64)]\n",
- "[array([1], dtype=int64)]\n",
- "[array([2], dtype=int64)]\n",
- "[array([3], dtype=int64)]\n",
- "[array([4], dtype=int64)]\n",
- "[array([5], dtype=int64)]\n",
- "[array([6], dtype=int64)]\n",
- "[array([7], dtype=int64)]\n",
- "[array([8], dtype=int64)]\n",
- "[array([9], dtype=int64)]\n"
+ "[Tensor(shape=[1], dtype=Int64, value= [0])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [1])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [2])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [3])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [4])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [5])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [6])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [7])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [8])]\n",
+ "[Tensor(shape=[1], dtype=Int64, value= [9])]\n"
]
}
],
@@ -617,4 +617,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
+}
\ No newline at end of file
diff --git a/tutorials/notebook/mindinsight/calculate_and_datagraphic.ipynb b/tutorials/notebook/mindinsight/calculate_and_datagraphic.ipynb
index 39bffb88e5b016f15bd05e5057c4450eaf9d103f..2e9bf71b08b1113b9a8fcf4cbcf59fafeff7f08d 100644
--- a/tutorials/notebook/mindinsight/calculate_and_datagraphic.ipynb
+++ b/tutorials/notebook/mindinsight/calculate_and_datagraphic.ipynb
@@ -142,9 +142,9 @@
"outputs": [],
"source": [
"import mindspore.dataset as ds\n",
- "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.vision.c_transforms as CV\n",
"import mindspore.dataset.transforms.c_transforms as C\n",
- "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.dataset.vision import Inter\n",
"from mindspore.common import dtype as mstype\n",
"\n",
"\n",
@@ -177,11 +177,11 @@
" type_cast_op = C.TypeCast(mstype.int32)\n",
"\n",
" # using map method to apply operations to a dataset\n",
- " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=resize_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
" \n",
" # process the generated dataset\n",
" buffer_size = 10000\n",
@@ -368,11 +368,11 @@
"2. 下面代码为上面的 `create_dataset` 函数中作数据预处理与数据增强的相关操作。可以从数据图中清晰地看到数据处理的流程。通过查看数据图,可以帮助分析是否存在不恰当的数据处理流程。\n",
"\n",
"```\n",
- "mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
- "mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
- "mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
- "mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)\n",
- "mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ "mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
+ "mnist_ds = mnist_ds.map(operations=resize_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ "mnist_ds = mnist_ds.map(operations=rescale_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ "mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ "mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
"\n",
"mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script\n",
"mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)\n",
@@ -418,4 +418,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
\ No newline at end of file
+}
diff --git a/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar_tensor.ipynb b/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar_tensor.ipynb
index 082a64b00e3f4f48d27703742e137aa6f11e6d64..689d0a1bcd80ab0a4fcf3c14e1dbc310b35aa840 100644
--- a/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar_tensor.ipynb
+++ b/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar_tensor.ipynb
@@ -157,7 +157,7 @@
"source": [
"import mindspore.dataset as ds\n",
"import mindspore.dataset.transforms.c_transforms as C\n",
- "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.vision.c_transforms as CV\n",
"from mindspore.common import dtype as mstype\n",
"\n",
"\n",
@@ -177,14 +177,14 @@
" random_horizontal_op = CV.RandomHorizontalFlip()\n",
" channel_swap_op = CV.HWC2CHW()\n",
" typecast_op = C.TypeCast(mstype.int32)\n",
- " cifar_ds = cifar_ds.map(input_columns=\"label\", operations=typecast_op)\n",
+ " cifar_ds = cifar_ds.map(operations=typecast_op, input_columns=\"label\")\n",
" if status == \"train\":\n",
- " cifar_ds = cifar_ds.map(input_columns=\"image\", operations=random_crop_op)\n",
- " cifar_ds = cifar_ds.map(input_columns=\"image\", operations=random_horizontal_op)\n",
- " cifar_ds = cifar_ds.map(input_columns=\"image\", operations=resize_op)\n",
- " cifar_ds = cifar_ds.map(input_columns=\"image\", operations=rescale_op)\n",
- " cifar_ds = cifar_ds.map(input_columns=\"image\", operations=normalize_op)\n",
- " cifar_ds = cifar_ds.map(input_columns=\"image\", operations=channel_swap_op)\n",
+ " cifar_ds = cifar_ds.map(operations=random_crop_op, input_columns=\"image\")\n",
+ " cifar_ds = cifar_ds.map(operations=random_horizontal_op, input_columns=\"image\")\n",
+ " cifar_ds = cifar_ds.map(operations=resize_op, input_columns=\"image\")\n",
+ " cifar_ds = cifar_ds.map(operations=rescale_op, input_columns=\"image\")\n",
+ " cifar_ds = cifar_ds.map(operations=normalize_op, input_columns=\"image\")\n",
+ " cifar_ds = cifar_ds.map(operations=channel_swap_op, input_columns=\"image\")\n",
"\n",
" cifar_ds = cifar_ds.shuffle(buffer_size=1000)\n",
" cifar_ds = cifar_ds.batch(batch_size, drop_remainder=True)\n",
@@ -240,8 +240,8 @@
"ds_iterator = ds_train.create_dict_iterator()\n",
"ds_iterator.get_next()\n",
"batch_1 = ds_iterator.get_next()\n",
- "batch_image = batch_1[\"image\"]\n",
- "batch_label = batch_1[\"label\"]\n",
+ "batch_image = batch_1[\"image\"].asnumpy()\n",
+ "batch_label = batch_1[\"label\"].asnumpy()\n",
"%matplotlib inline\n",
"plt.figure(dpi=144)\n",
"for i,image in enumerate(batch_image):\n",
@@ -305,10 +305,10 @@
"\n",
"当前支持的Summary算子:\n",
"\n",
- "- [ScalarSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html?highlight=scalarsummary#mindspore.ops.operations.ScalarSummary): 记录标量数据\n",
- "- [TensorSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html?highlight=tensorsummary#mindspore.ops.operations.TensorSummary): 记录张量数据\n",
- "- [ImageSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html?highlight=imagesummary#mindspore.ops.operations.ImageSummary): 记录图片数据\n",
- "- [HistogramSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html?highlight=histogramsummar#mindspore.ops.operations.HistogramSummary): 将张量数据转为直方图数据记录"
+ "- [ScalarSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html?highlight=scalarsummary#mindspore.ops.ScalarSummary): 记录标量数据\n",
+ "- [TensorSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html?highlight=tensorsummary#mindspore.ops.TensorSummary): 记录张量数据\n",
+ "- [ImageSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html?highlight=imagesummary#mindspore.ops.ImageSummary): 记录图片数据\n",
+ "- [HistogramSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.html?highlight=histogramsummar#mindspore.ops.HistogramSummary): 将张量数据转为直方图数据记录"
]
},
{
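
The hunks above apply the two migrations that recur throughout this patch: vision transforms moved from `mindspore.dataset.transforms.vision` to `mindspore.dataset.vision`, and `Dataset.map` now takes `operations` before `input_columns`. A minimal sketch of the updated call pattern, assuming a hypothetical CIFAR-10 binary directory at `./datasets/cifar-10-batches-bin`:

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as CV  # previously mindspore.dataset.transforms.vision.c_transforms

# Hypothetical dataset location; any CIFAR-10 binary directory works.
cifar_ds = ds.Cifar10Dataset("./datasets/cifar-10-batches-bin")

# Old style (deprecated): cifar_ds.map(input_columns="image", operations=CV.Resize((227, 227)))
# New style: operations is the leading keyword, input_columns follows.
cifar_ds = cifar_ds.map(operations=CV.Resize((227, 227)), input_columns="image")
```
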
diff --git a/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb b/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb
index b76c1500a9b8676a02f7fda84fdac21a8fae9fda..d8a27986e8305b21408dc974927cadc8a791be15 100644
--- a/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb
+++ b/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb
@@ -165,9 +165,9 @@
"metadata": {},
"outputs": [],
"source": [
- "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.vision.c_transforms as CV\n",
"import mindspore.dataset.transforms.c_transforms as C\n",
- "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.dataset.vision import Inter\n",
"from mindspore.common import dtype as mstype\n",
"import mindspore.dataset as ds\n",
"\n",
@@ -200,11 +200,11 @@
" type_cast_op = C.TypeCast(mstype.int32)\n",
"\n",
" # using map method to apply operations to a dataset\n",
- " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=resize_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
" \n",
" # process the generated dataset\n",
" buffer_size = 10000\n",
@@ -493,4 +493,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
\ No newline at end of file
+}
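
This file repeats the same module move; for reference, a short sketch of the relocated imports, with `Inter` (the interpolation enum used by resize-type operators) now coming from the new `mindspore.dataset.vision` package:

```python
# Old imports removed by this patch:
#   import mindspore.dataset.transforms.vision.c_transforms as CV
#   from mindspore.dataset.transforms.vision import Inter
import mindspore.dataset.vision.c_transforms as CV
from mindspore.dataset.vision import Inter

# Inter selects the interpolation mode for resize-type operators.
resize_op = CV.Resize((32, 32), interpolation=Inter.LINEAR)
```
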
diff --git a/tutorials/notebook/mixed_precision.ipynb b/tutorials/notebook/mixed_precision.ipynb
index 149d1ebf8a7d53e9d159029703272b483c8a05b4..69aa651439c59bdde6ce5b0babaf7ccfe6ef23c3 100644
--- a/tutorials/notebook/mixed_precision.ipynb
+++ b/tutorials/notebook/mixed_precision.ipynb
@@ -169,7 +169,7 @@
"datas = dict1.get_next()\n",
"image = datas[\"image\"]\n",
"print(\"the tensor of image is:\", image.shape)\n",
- "plt.imshow(np.array(image))\n",
+ "plt.imshow(np.array(image.asnumpy()))\n",
"plt.show()"
]
},
@@ -196,7 +196,7 @@
"import os\n",
"import mindspore.common.dtype as mstype\n",
"import mindspore.dataset.engine as de\n",
- "import mindspore.dataset.transforms.vision.c_transforms as C\n",
+ "import mindspore.dataset.vision.c_transforms as C\n",
"import mindspore.dataset.transforms.c_transforms as C2\n",
"\n",
"def create_dataset(dataset_path, do_train, repeat_num=1, batch_size=32, target=\"GPU\"):\n",
@@ -220,8 +220,8 @@
"\n",
" type_cast_op = C2.TypeCast(mstype.int32)\n",
"\n",
- " ds = ds.map(input_columns=\"label\", num_parallel_workers=8, operations=type_cast_op)\n",
- " ds = ds.map(input_columns=\"image\", num_parallel_workers=8, operations=trans)\n",
+ " ds = ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=8)\n",
+ " ds = ds.map(operations=trans, input_columns=\"image\", num_parallel_workers=8)\n",
"\n",
" # apply batch operations\n",
" ds = ds.batch(batch_size, drop_remainder=True)\n",
@@ -282,7 +282,7 @@
"print(\"the cifar dataset size is:\", ds.get_dataset_size())\n",
"dict1 = ds.create_dict_iterator()\n",
"datas = dict1.get_next()\n",
- "image = datas[\"image\"]\n",
+ "image = datas[\"image\"].asnumpy()\n",
"single_pic = np.transpose(image[0], (1,2,0))\n",
"print(\"the tensor of image is:\", image.shape)\n",
"plt.imshow(np.array(single_pic))\n",
@@ -991,4 +991,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
\ No newline at end of file
+}
diff --git a/tutorials/notebook/model_security.ipynb b/tutorials/notebook/model_security.ipynb
index d958155df781b04021d7fa920f1c03b65b69f6a9..46965199e2f60df5452003230e85c9fe2dcae96f 100644
--- a/tutorials/notebook/model_security.ipynb
+++ b/tutorials/notebook/model_security.ipynb
@@ -184,9 +184,9 @@
"outputs": [],
"source": [
"import mindspore.dataset as ds\n",
- "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.vision.c_transforms as CV\n",
"import mindspore.dataset.transforms.c_transforms as C\n",
- "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.dataset.vision import Inter\n",
"from mindspore.common import dtype as mstype\n",
"\n",
"\n",
@@ -214,16 +214,16 @@
" # apply map operations on images\n",
" if not sparse:\n",
" one_hot_enco = C.OneHot(10)\n",
- " ds1 = ds1.map(input_columns=\"label\", operations=one_hot_enco,\n",
+ " ds1 = ds1.map(operations=one_hot_enco, input_columns=\"label\",\n",
" num_parallel_workers=num_parallel_workers)\n",
" type_cast_op = C.TypeCast(mstype.float32)\n",
- " ds1 = ds1.map(input_columns=\"label\", operations=type_cast_op,\n",
+ " ds1 = ds1.map(operations=type_cast_op, input_columns=\"label\",\n",
" num_parallel_workers=num_parallel_workers)\n",
- " ds1 = ds1.map(input_columns=\"image\", operations=resize_op,\n",
+ " ds1 = ds1.map(operations=resize_op, input_columns=\"image\",\n",
" num_parallel_workers=num_parallel_workers)\n",
- " ds1 = ds1.map(input_columns=\"image\", operations=rescale_op,\n",
+ "    ds1 = ds1.map(operations=rescale_op, input_columns=\"image\",\n",
" num_parallel_workers=num_parallel_workers)\n",
- " ds1 = ds1.map(input_columns=\"image\", operations=hwc2chw_op,\n",
+ " ds1 = ds1.map(operations=hwc2chw_op, input_columns=\"image\",\n",
" num_parallel_workers=num_parallel_workers)\n",
"\n",
" # apply DatasetOps\n",
@@ -281,8 +281,8 @@
"ds_iterator = ds_train.create_dict_iterator()\n",
"ds_iterator.get_next()\n",
"batch_1 = ds_iterator.get_next()\n",
- "batch_image = batch_1[\"image\"]\n",
- "batch_label = batch_1[\"label\"]\n",
+ "batch_image = batch_1[\"image\"].asnumpy()\n",
+ "batch_label = batch_1[\"label\"].asnumpy()\n",
"%matplotlib inline\n",
"plt.figure(dpi=144)\n",
"for i,image in enumerate(batch_image):\n",
@@ -506,8 +506,8 @@
"i = 0\n",
"for data in ds_test.create_tuple_iterator():\n",
" i += 1\n",
- " images = data[0].astype(np.float32)\n",
- " labels = data[1]\n",
+ " images = data[0].asnumpy().astype(np.float32)\n",
+ " labels = data[1].asnumpy()\n",
" test_images.append(images)\n",
" test_labels.append(labels)\n",
" pred_labels = np.argmax(model.predict(Tensor(images)).asnumpy(),\n",
@@ -579,7 +579,7 @@
"source": [
"### 攻击模型\n",
"\n",
- "调用MindArmour提供的FGSM接口(`FastGradientSignMethod`),使用被攻击前抽取的96张数据图像`test_images`作为被攻击数据集,保存被攻击后数据集图像到当前notebook目录下的`ada_data`文件中。其中,参数`eps`为攻击对数据范围产生的单步对抗性摄动的比例,该值越大,则攻击程度越大。关于`FastGradientSignMethod`的详细使用说明,可参考[官方API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindarmour/mindarmour.attacks.html?highlight=fastgradientsignmethod#mindarmour.attacks.FastGradientSignMethod)。"
+ "调用MindArmour提供的FGSM接口(`FastGradientSignMethod`),使用被攻击前抽取的96张数据图像`test_images`作为被攻击数据集,保存被攻击后数据集图像到当前notebook目录下的`ada_data`文件中。其中,参数`eps`为攻击对数据范围产生的单步对抗性摄动的比例,该值越大,则攻击程度越大。关于`FastGradientSignMethod`的详细使用说明,可参考[官方API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindarmour/mindarmour.adv_robustness.attacks.html?#mindarmour.adv_robustness.attacks.FastGradientSignMethod)。"
]
},
{
@@ -589,7 +589,7 @@
"outputs": [],
"source": [
"import time\n",
- "from mindarmour.attacks.gradient_method import FastGradientSignMethod\n",
+ "from mindarmour.adv_robustness.attacks import FastGradientSignMethod\n",
"\n",
"\n",
"# attacking\n",
@@ -635,7 +635,7 @@
],
"source": [
"from scipy.special import softmax\n",
- "from mindarmour.evaluations.attack_evaluation import AttackEvaluate\n",
+ "from mindarmour.adv_robustness.evaluations import AttackEvaluate\n",
"\n",
"\n",
"pred_logits_adv = model.predict(Tensor(adv_data)).asnumpy()\n",
@@ -749,7 +749,7 @@
],
"source": [
"from mindspore.nn import SoftmaxCrossEntropyWithLogits\n",
- "from mindarmour.defenses import NaturalAdversarialDefense\n",
+ "from mindarmour.adv_robustness.defenses import NaturalAdversarialDefense\n",
"\n",
"\n",
"loss = SoftmaxCrossEntropyWithLogits(sparse=False, reduction='mean')\n",
@@ -843,4 +843,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
+}
\ No newline at end of file
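
Besides the iterator changes, this file tracks MindArmour's package reorganization: attack, evaluation, and defense classes now live under `mindarmour.adv_robustness`. The import mapping applied by these hunks, collected in one place:

```python
# Old layout (pre-reorganization):
#   from mindarmour.attacks.gradient_method import FastGradientSignMethod
#   from mindarmour.evaluations.attack_evaluation import AttackEvaluate
#   from mindarmour.defenses import NaturalAdversarialDefense
# New layout, grouped under adv_robustness:
from mindarmour.adv_robustness.attacks import FastGradientSignMethod
from mindarmour.adv_robustness.evaluations import AttackEvaluate
from mindarmour.adv_robustness.defenses import NaturalAdversarialDefense
```
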
diff --git a/tutorials/notebook/nlp_application.ipynb b/tutorials/notebook/nlp_application.ipynb
index 02cf130217ea5634fb6d87cc6059dc4c71809427..d2df6a718d19955f19505f1d7c82a8a91ce638fd 100644
--- a/tutorials/notebook/nlp_application.ipynb
+++ b/tutorials/notebook/nlp_application.ipynb
@@ -652,8 +652,8 @@
],
"source": [
"iterator = ds_train.create_dict_iterator().get_next()\n",
- "first_batch_label = iterator[\"label\"]\n",
- "first_batch_first_feature = iterator[\"feature\"][0]\n",
+ "first_batch_label = iterator[\"label\"].asnumpy()\n",
+ "first_batch_first_feature = iterator[\"feature\"].asnumpy()[0]\n",
"print(f\"The first batch contains label below:\\n{first_batch_label}\\n\")\n",
"print(f\"The feature of the first item in the first batch is below vector:\\n{first_batch_first_feature}\")"
]
@@ -673,17 +673,43 @@
"metadata": {},
"outputs": [],
"source": [
+ "import math\n",
+ "\n",
"import numpy as np\n",
- "from mindspore import Tensor, nn, context\n",
- "from mindspore.ops import operations as P\n",
- "from mindspore.train.serialization import load_param_into_net, load_checkpoint"
+ "\n",
+ "from mindspore import Tensor, nn, context, Parameter, ParameterTuple\n",
+ "from mindspore.common.initializer import initializer\n",
+ "from mindspore.ops import operations as P"
]
},
+ {
+ "cell_type": "markdown",
+ "source": [
+ "2. Define the device targets that require stacking single-layer LSTM operators."
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "outputs": [],
+ "source": [
+ "STACK_LSTM_DEVICE = [\"CPU\"]"
+ ],
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ }
+ },
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "2. Define the `lstm_default_state` function to initialize network parameters and states."
+ "3. Define the `lstm_default_state` function to initialize network parameters and states."
]
},
{
@@ -695,38 +721,144 @@
"# Initialize short-term memory (h) and long-term memory (c) to 0\n",
"def lstm_default_state(batch_size, hidden_size, num_layers, bidirectional):\n",
" \"\"\"init default input.\"\"\"\n",
- " num_directions = 1\n",
- " if bidirectional:\n",
- " num_directions = 2\n",
- "\n",
- " if context.get_context(\"device_target\") == \"CPU\":\n",
- " h_list = []\n",
- " c_list = []\n",
- " i = 0\n",
- " while i < num_layers:\n",
- " hi = Tensor(np.zeros((num_directions, batch_size, hidden_size)).astype(np.float32))\n",
- " h_list.append(hi)\n",
- " ci = Tensor(np.zeros((num_directions, batch_size, hidden_size)).astype(np.float32))\n",
- " c_list.append(ci)\n",
- " i = i + 1\n",
- " h = tuple(h_list)\n",
- " c = tuple(c_list)\n",
- " return h, c\n",
- "\n",
- " h = Tensor(\n",
- " np.zeros((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32))\n",
- " c = Tensor(\n",
- " np.zeros((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32))\n",
+ " num_directions = 2 if bidirectional else 1\n",
+ " h = Tensor(np.zeros((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32))\n",
+ " c = Tensor(np.zeros((num_layers * num_directions, batch_size, hidden_size)).astype(np.float32))\n",
" return h, c"
]
},
+ {
+ "cell_type": "markdown",
+ "source": [
+ "4. Define the `stack_lstm_default_state` function to initialize the network parameters and states required by the stacked operators."
+ ],
+ "metadata": {
+ "collapsed": false
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "outputs": [],
+ "source": [
+ "def stack_lstm_default_state(batch_size, hidden_size, num_layers, bidirectional):\n",
+ " \"\"\"init default input.\"\"\"\n",
+ " num_directions = 2 if bidirectional else 1\n",
+ "\n",
+ "    h_list, c_list = [], []\n",
+ " for _ in range(num_layers):\n",
+ " h_list.append(Tensor(np.zeros((num_directions, batch_size, hidden_size)).astype(np.float32)))\n",
+ " c_list.append(Tensor(np.zeros((num_directions, batch_size, hidden_size)).astype(np.float32)))\n",
+ " h, c = tuple(h_list), tuple(c_list)\n",
+ " return h, c\n"
+ ],
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ }
+ },
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "3. Use the `Cell` method to define the network structure (the `SentimentNet` network)."
+ "5. For the CPU scenario, stack single-layer LSTM operators by hand to implement the functionality of the multi-layer LSTM operator."
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "outputs": [],
+ "source": [
+ "class StackLSTM(nn.Cell):\n",
+ " \"\"\"\n",
+ " Stack multi-layers LSTM together.\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self,\n",
+ " input_size,\n",
+ " hidden_size,\n",
+ " num_layers=1,\n",
+ " has_bias=True,\n",
+ " batch_first=False,\n",
+ " dropout=0.0,\n",
+ " bidirectional=False):\n",
+ " super(StackLSTM, self).__init__()\n",
+ " self.num_layers = num_layers\n",
+ " self.batch_first = batch_first\n",
+ " self.transpose = P.Transpose()\n",
+ "\n",
+ " # direction number\n",
+ " num_directions = 2 if bidirectional else 1\n",
+ "\n",
+ " # input_size list\n",
+ " input_size_list = [input_size]\n",
+ " for i in range(num_layers - 1):\n",
+ " input_size_list.append(hidden_size * num_directions)\n",
+ "\n",
+ " # layers\n",
+ " layers = []\n",
+ " for i in range(num_layers):\n",
+ " layers.append(nn.LSTMCell(input_size=input_size_list[i],\n",
+ " hidden_size=hidden_size,\n",
+ " has_bias=has_bias,\n",
+ " batch_first=batch_first,\n",
+ " bidirectional=bidirectional,\n",
+ " dropout=dropout))\n",
+ "\n",
+ " # weights\n",
+ " weights = []\n",
+ " for i in range(num_layers):\n",
+ " # weight size\n",
+ " weight_size = (input_size_list[i] + hidden_size) * num_directions * hidden_size * 4\n",
+ " if has_bias:\n",
+ " bias_size = num_directions * hidden_size * 4\n",
+ " weight_size = weight_size + bias_size\n",
+ "\n",
+ " # numpy weight\n",
+ " stdv = 1 / math.sqrt(hidden_size)\n",
+ " w_np = np.random.uniform(-stdv, stdv, (weight_size, 1, 1)).astype(np.float32)\n",
+ "\n",
+ " # lstm weight\n",
+ " weights.append(Parameter(initializer(Tensor(w_np), w_np.shape), name=\"weight\" + str(i)))\n",
+ "\n",
+ "        # register the stacked cells and their weights\n",
+ " self.lstms = layers\n",
+ " self.weight = ParameterTuple(tuple(weights))\n",
+ "\n",
+ " def construct(self, x, hx):\n",
+ " \"\"\"construct\"\"\"\n",
+ " if self.batch_first:\n",
+ " x = self.transpose(x, (1, 0, 2))\n",
+ " # stack lstm\n",
+ " h, c = hx\n",
+ " hn = cn = None\n",
+ " for i in range(self.num_layers):\n",
+ " x, hn, cn, _, _ = self.lstms[i](x, h[i], c[i], self.weight[i])\n",
+ " if self.batch_first:\n",
+ " x = self.transpose(x, (1, 0, 2))\n",
+ " return x, (hn, cn)"
+ ],
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "6. Use the `Cell` method to define the network structure (the `SentimentNet` network)."
+ ],
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ }
+ },
{
"cell_type": "code",
"execution_count": 11,
@@ -753,14 +885,25 @@
" self.embedding.embedding_table.requires_grad = False\n",
" self.trans = P.Transpose()\n",
" self.perm = (1, 0, 2)\n",
- " self.encoder = nn.LSTM(input_size=embed_size,\n",
- " hidden_size=num_hiddens,\n",
- " num_layers=num_layers,\n",
- " has_bias=True,\n",
- " bidirectional=bidirectional,\n",
- " dropout=0.0)\n",
"\n",
- " self.h, self.c = lstm_default_state(batch_size, num_hiddens, num_layers, bidirectional)\n",
+ " if context.get_context(\"device_target\") in STACK_LSTM_DEVICE:\n",
+ " # stack lstm by user\n",
+ " self.encoder = StackLSTM(input_size=embed_size,\n",
+ " hidden_size=num_hiddens,\n",
+ " num_layers=num_layers,\n",
+ " has_bias=True,\n",
+ " bidirectional=bidirectional,\n",
+ " dropout=0.0)\n",
+ " self.h, self.c = stack_lstm_default_state(batch_size, num_hiddens, num_layers, bidirectional)\n",
+ " else:\n",
+ " # standard lstm\n",
+ " self.encoder = nn.LSTM(input_size=embed_size,\n",
+ " hidden_size=num_hiddens,\n",
+ " num_layers=num_layers,\n",
+ " has_bias=True,\n",
+ " bidirectional=bidirectional,\n",
+ " dropout=0.0)\n",
+ " self.h, self.c = lstm_default_state(batch_size, num_hiddens, num_layers, bidirectional)\n",
"\n",
" self.concat = P.Concat(1)\n",
" if bidirectional:\n",
@@ -783,7 +926,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "4. Instantiate `SentimentNet` to create the network. This step takes about one minute."
+ "7. Instantiate `SentimentNet` to create the network. This step takes about one minute."
]
},
{
@@ -976,4 +1119,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
+}
\ No newline at end of file
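
A minimal usage sketch of the `StackLSTM` cell and `stack_lstm_default_state` helper defined in the cells above, assuming a CPU target (the device class listed in `STACK_LSTM_DEVICE`); the shape values are illustrative:

```python
import numpy as np
from mindspore import Tensor, context

context.set_context(device_target="CPU")

batch_size, seq_len, embed_size, num_hiddens, num_layers = 4, 16, 100, 64, 2

# StackLSTM and stack_lstm_default_state are the definitions from the cells above.
net = StackLSTM(input_size=embed_size, hidden_size=num_hiddens, num_layers=num_layers,
                has_bias=True, batch_first=False, bidirectional=True, dropout=0.0)
h, c = stack_lstm_default_state(batch_size, num_hiddens, num_layers, bidirectional=True)

x = Tensor(np.zeros((seq_len, batch_size, embed_size), np.float32))
output, (hn, cn) = net(x, (h, c))
print(output.shape)  # (seq_len, batch_size, 2 * num_hiddens) for a bidirectional stack
```
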
diff --git a/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb b/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb
index dd9aa3ad20d655cf532ca9f3742c4434112d5e04..659c0fc204e3dc09c3cf5302e254ca8ed5c12965 100644
--- a/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb
+++ b/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb
@@ -148,7 +148,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "MindSpore provides multiple data loading methods, including loading common datasets, user-defined datasets, and the MindSpore data format; for details, see [Loading Datasets](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html). Differences in the underlying implementation lead to differences in dataset loading performance, as shown below:"
+ "MindSpore provides multiple data loading methods, including loading common datasets, user-defined datasets, and the MindSpore data format; for details, see [Loading Datasets](https://www.mindspore.cn/api/zh-CN/master/programming_guide/dataset_loading.html). Differences in the underlying implementation lead to differences in dataset loading performance, as shown below:"
]
},
{
@@ -181,7 +181,7 @@
"source": [
"数据加载性能优化建议如下:\n",
"- 已经支持的数据集格式优选内置加载算子,具体内容请参考[内置加载算子](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html),如果性能仍无法满足需求,则可采取多线程并发方案,请参考本文[多线程优化方案](#多线程优化方案)。\n",
- "- 不支持的数据集格式,优选转换为MindSpore数据格式后再使用`MindDataset`类进行加载,具体内容请参考[将数据集转换为MindSpore数据格式](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/converting_datasets.html),如果性能仍无法满足需求,则可采取多线程并发方案,请参考本文[多线程优化方案](#多线程优化方案)。\n",
+ "- 不支持的数据集格式,优选转换为MindSpore数据格式后再使用`MindDataset`类进行加载,具体内容请参考[将数据集转换为MindSpore数据格式](https://www.mindspore.cn/api/zh-CN/master/programming_guide/dataset_conversion.html),如果性能仍无法满足需求,则可采取多线程并发方案,请参考本文[多线程优化方案](#多线程优化方案)。\n",
"- 不支持的数据集格式,算法快速验证场景,优选用户自定义`GeneratorDataset`类实现,如果性能仍无法满足需求,则可采取多进程并发方案,请参考本文[多进程优化方案](#多进程优化方案)。"
]
},
@@ -210,7 +210,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'image': array([[[235, 235, 235],\n",
+ "{'image': Tensor(shape=[32, 32, 3], dtype=UInt8, value=\n",
+ " [[[235, 235, 235],\n",
" [230, 230, 230],\n",
" [234, 234, 234],\n",
" ...,\n",
@@ -258,7 +259,7 @@
" ...,\n",
" [120, 120, 119],\n",
" [146, 146, 146],\n",
- " [177, 174, 190]]], dtype=uint8), 'label': array(9, dtype=uint32)}\n"
+ " [177, 174, 190]]]), 'label': Tensor(shape=[], dtype=UInt32, value= 9)}\n"
]
}
],
@@ -287,7 +288,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'data': array([255, 216, 255, ..., 63, 255, 217], dtype=uint8), 'id': array(30474, dtype=int64), 'label': array(2, dtype=int64)}\n"
+ "{'data': Tensor(shape=[1431], dtype=UInt8, value= [255, 216, 255, ..., 63, 255, 217]), 'id': Tensor(shape=[], dtype=Int64, value= 30474), 'label': Tensor(shape=[], dtype=Int64, value= 2)}\n"
]
}
],
@@ -323,7 +324,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'data': array([0], dtype=int64)}\n"
+ "{'data': Tensor(shape=[1], dtype=Int64, value= [0])}\n"
]
}
],
@@ -349,7 +350,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The shuffle operation is mainly used to shuffle ordered datasets or datasets that have been repeated. MindSpore provides a dedicated `shuffle` function: the larger the configured `buffer_size`, the more thorough the shuffling, but the more time and compute it consumes. This interface allows data to be shuffled at any point in the pipeline; for details, see [shuffle processing](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/data_processing_and_augmentation.html#shuffle). However, because the underlying implementation differs, the performance of this method is lower than shuffling the data directly via the `shuffle` parameter of the [built-in loading operators](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)."
+ "The shuffle operation is mainly used to shuffle ordered datasets or datasets that have been repeated. MindSpore provides a dedicated `shuffle` function: the larger the configured `buffer_size`, the more thorough the shuffling, but the more time and compute it consumes. This interface allows data to be shuffled at any point in the pipeline; for details, see [shuffle processing](https://www.mindspore.cn/api/zh-CN/master/programming_guide/augmentation.html). However, because the underlying implementation differs, the performance of this method is lower than shuffling the data directly via the `shuffle` parameter of the [built-in loading operators](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)."
]
},
{
@@ -400,7 +401,8 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "{'image': array([[[254, 254, 254],\n",
+ "{'image': Tensor(shape=[32, 32, 3], dtype=UInt8, value=\n",
+ " [[[254, 254, 254],\n",
" [255, 255, 254],\n",
" [255, 255, 254],\n",
" ...,\n",
@@ -448,7 +450,7 @@
" ...,\n",
" [ 64, 61, 63],\n",
" [ 63, 58, 60],\n",
- " [ 61, 56, 58]]], dtype=uint8), 'label': array(9, dtype=uint32)}\n"
+ " [ 61, 56, 58]]]), 'label': Tensor(shape=[], dtype=UInt32, value= 9)}\n"
]
}
],
@@ -526,7 +528,7 @@
"- 使用内置Python算子(`py_transforms`模块)进行数据增强。\n",
"- 用户可根据自己的需求,自定义Python函数进行数据增强。\n",
"\n",
- "具体的内容请参考[数据增强](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/data_processing_and_augmentation.html#id3)。因为底层的实现方式不同,所以性能还是有一定的差异,如下所示:"
+ "具体的内容请参考[数据增强](https://www.mindspore.cn/api/zh-CN/master/programming_guide/augmentation.html#id3)。因为底层的实现方式不同,所以性能还是有一定的差异,如下所示:"
]
},
{
@@ -600,7 +602,7 @@
],
"source": [
"import mindspore.dataset.transforms.c_transforms as c_transforms\n",
- "import mindspore.dataset.transforms.vision.c_transforms as C\n",
+ "import mindspore.dataset.vision.c_transforms as C\n",
"import matplotlib.pyplot as plt\n",
"cifar10_path = \"./dataset/Cifar10Data/cifar-10-batches-bin/\"\n",
"\n",
@@ -608,10 +610,10 @@
"cifar10_dataset = ds.Cifar10Dataset(cifar10_path,num_parallel_workers=4)\n",
"transforms = C.RandomResizedCrop((800,800))\n",
"# apply the transform to the dataset through dataset.map()\n",
- "cifar10_dataset = cifar10_dataset.map(input_columns=\"image\",operations=transforms,num_parallel_workers=4)\n",
+ "cifar10_dataset = cifar10_dataset.map(operations=transforms,input_columns=\"image\",num_parallel_workers=4)\n",
"\n",
"data = next(cifar10_dataset.create_dict_iterator())\n",
- "plt.imshow(data[\"image\"])\n",
+ "plt.imshow(data[\"image\"].asnumpy())\n",
"plt.show()"
]
},
@@ -657,7 +659,7 @@
" print(data[\"data\"])\n",
"\n",
"func = lambda x:x**2\n",
- "ds4 = ds3.map(input_columns=\"data\",operations=func,python_multiprocessing=True,num_parallel_workers=4)\n",
+ "ds4 = ds3.map(operations=func,input_columns=\"data\",python_multiprocessing=True,num_parallel_workers=4)\n",
"print(\"after map:\")\n",
"for data in ds4.create_dict_iterator():\n",
" print(data[\"data\"])"
@@ -737,7 +739,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Certain fused operators are provided, which aggregate the functionality of two or more operators into a single operator; for details, see [Data Augmentation Operators](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.transforms.vision.html). Compared with a pipeline of their component operators, these fused operators deliver better performance, as shown in the figure:"
+ "Certain fused operators are provided, which aggregate the functionality of two or more operators into a single operator; for details, see [Data Augmentation Operators](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.vision.html). Compared with a pipeline of their component operators, these fused operators deliver better performance, as shown in the figure:"
]
},
{
@@ -769,4 +771,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
-}
+}
\ No newline at end of file
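
The multiprocessing hunk above keeps the tutorial's pattern of offloading a Python-defined transform to worker processes. A self-contained sketch of that pattern, assuming a small generator in place of the notebook's `ds3`:

```python
import numpy as np
import mindspore.dataset as ds

def gen():
    for i in range(5):
        yield (np.array([i], dtype=np.int64),)

ds3 = ds.GeneratorDataset(gen, column_names=["data"])

func = lambda x: x ** 2
# python_multiprocessing runs the Python transform in separate worker processes,
# sidestepping the GIL for CPU-bound functions.
ds4 = ds3.map(operations=func, input_columns="data",
              python_multiprocessing=True, num_parallel_workers=4)

for item in ds4.create_dict_iterator():
    print(item["data"])
```
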
diff --git a/tutorials/notebook/quick_start.ipynb b/tutorials/notebook/quick_start.ipynb
index 80dec971e2e6a692f62f0a1c18a580772c2c3a9f..99757efd668765bbf059c0cb8ed1a1b24023d4de 100644
--- a/tutorials/notebook/quick_start.ipynb
+++ b/tutorials/notebook/quick_start.ipynb
@@ -250,15 +250,15 @@
"\n",
"dic_ds = mnist_ds.create_dict_iterator()\n",
"item = dic_ds.get_next()\n",
- "img = item[\"image\"]\n",
- "label = item[\"label\"]\n",
+ "img = item[\"image\"].asnumpy()\n",
+ "label = item[\"label\"].asnumpy()\n",
"\n",
"print(\"The item of mnist_ds:\", item.keys())\n",
"print(\"Tensor of image in item:\", img.shape) \n",
"print(\"The label of item:\", label)\n",
"\n",
"plt.imshow(np.squeeze(img))\n",
- "plt.title(\"number:%s\"% item[\"label\"])\n",
+ "plt.title(\"number:%s\"% item[\"label\"].asnumpy())\n",
"plt.show()"
]
},
@@ -336,11 +336,11 @@
" type_cast_op = C.TypeCast(mstype.int32)\n",
"\n",
" # using map to apply operations to a dataset\n",
- " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=resize_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
" \n",
" # process the generated dataset\n",
" buffer_size = 10000\n",
@@ -1062,8 +1062,8 @@
"source": [
"ds_test = create_dataset(test_data_path).create_dict_iterator()\n",
"data = ds_test.get_next()\n",
- "images = data[\"image\"]\n",
- "labels = data[\"label\"]\n",
+ "images = data[\"image\"].asnumpy()\n",
+ "labels = data[\"label\"].asnumpy()\n",
"\n",
"output = model.predict(Tensor(data['image']))\n",
"prb = output.asnumpy()\n",
diff --git a/tutorials/notebook/synchronization_training_and_evaluation.ipynb b/tutorials/notebook/synchronization_training_and_evaluation.ipynb
index 80f857391986c557ac75db948419f81a400a3473..8c22d397b8a86c04a7d178190ce9d0f91748261b 100644
--- a/tutorials/notebook/synchronization_training_and_evaluation.ipynb
+++ b/tutorials/notebook/synchronization_training_and_evaluation.ipynb
@@ -94,9 +94,9 @@
"source": [
"import os\n",
"import mindspore.dataset as ds\n",
- "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.vision.c_transforms as CV\n",
"import mindspore.dataset.transforms.c_transforms as C\n",
- "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.dataset.vision import Inter\n",
"from mindspore.common import dtype as mstype\n",
"\n",
"def create_dataset(data_path, batch_size=32, repeat_size=1,\n",
@@ -112,9 +112,9 @@
" type_cast_op = C.TypeCast(mstype.int32) \n",
"\n",
" # apply map operations on images\n",
- " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
- " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=[resize_op,rescale_op,rescale_nml_op,hwc2chw_op],\n",
- " num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(operations=[resize_op,rescale_op,rescale_nml_op,hwc2chw_op],\n",
+ " input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n",
"\n",
" # apply DatasetOps\n",
" buffer_size = 10000\n",
diff --git a/tutorials/source_en/_static/logo_source.png b/tutorials/source_en/_static/logo_source.png
index fc347d271abe082ae8d16242328551648766b6fb..880f2bc87172daf487654c0ba4f1657c672bd2b8 100644
Binary files a/tutorials/source_en/_static/logo_source.png and b/tutorials/source_en/_static/logo_source.png differ
diff --git a/tutorials/source_en/advanced_use/auto_augmentation.md b/tutorials/source_en/advanced_use/auto_augmentation.md
new file mode 100644
index 0000000000000000000000000000000000000000..f16244960aa5571755a27c480671ea7e2447f1b6
--- /dev/null
+++ b/tutorials/source_en/advanced_use/auto_augmentation.md
@@ -0,0 +1 @@
+# Auto Augmentation
diff --git a/tutorials/source_en/advanced_use/computer_vision_application.md b/tutorials/source_en/advanced_use/computer_vision_application.md
index 8ead2a76fd2e0f4abca0363f9cd3947462b07221..950540f3ae8168cbdc67cc32499b9bf92ab397cd 100644
--- a/tutorials/source_en/advanced_use/computer_vision_application.md
+++ b/tutorials/source_en/advanced_use/computer_vision_application.md
@@ -22,13 +22,13 @@
## Overview
-Computer vision is the most widely researched and mature technology field of deep learning, and is widely used in scenarios such as mobile phone photographing, intelligent security protection, and automated driving. Since AlexNet won the ImageNet competition in 2012, deep learning has greatly promoted the development of the computer vision field. Almost all the most advanced computer vision algorithms are related to deep learning. Deep neural network can extract image features layer by layer and retain local invariance. It is widely used in visual tasks such as classification, detection, segmentation, tracking, retrieval, recognition, promotion, and reconstruction.
+Computer vision is one of the most widely researched and mature technology fields of deep learning, and is widely applied to scenarios such as mobile phone photographing, intelligent security protection, and automated driving. Since AlexNet won the ImageNet competition in 2012, deep learning has greatly promoted the development of the computer vision field. Almost all the most advanced computer vision algorithms are related to deep learning. Deep neural network can extract image features layer by layer and retain local invariance. It is widely used in visual tasks such as classification, detection, segmentation, tracking, retrieval, recognition, promotion, and reconstruction.
This chapter describes how to apply MindSpore to computer vision scenarios based on image classification tasks.
## Image Classification
-Image classification is the most basic computer vision application and belongs to the supervised learning category. For example, determine the class of a digital image, such as cat, dog, airplane, or car. The function is as follows:
+Image classification is one of the most basic computer vision applications and belongs to the supervised learning category. For example, determine the class of a digital image, such as cat, dog, airplane, or car. The function is as follows:
```python
def classify(image):
@@ -49,9 +49,9 @@ MindSpore supports the following image classification networks: LeNet, AlexNet,
Figure 1: CIFAR-10 dataset [1]
-Figure 1 shows that the CIFAR-10 dataset contains 10 classes of 60,000 images. Each class contains 6000 images. 50,000 images are for training and 10,000 images are for testing. The size of each image is 32 x 32 pixels.
+The CIFAR-10 dataset contains 10 classes of 60,000 images. Each class contains 6000 images. 50,000 images are for training and 10,000 images are for testing. The size of each image is 32 x 32 pixels.
-Generally, a training indicator of image classification is accuracy, that is, a ratio of a quantity of accurately predicted examples to a total quantity of predicted examples.
+Generally, a training indicator of image classification is accuracy, that is, a ratio of the quantity of accurately predicted examples to the total quantity of predicted examples.
Next, let's use MindSpore to solve the image classification task. The overall process is as follows:
1. Download the CIFAR-10 dataset.
@@ -61,12 +61,12 @@ Next, let's use MindSpore to solve the image classification task. The overall pr
5. Call the high-level `Model` API to train and save the model file.
6. Load the saved model for inference.
-> This example is for the hardware platform of the Ascend 910 AI processor. You can find the complete executable sample code at: .
+> This example uses the hardware platform of the Ascend 910 AI processor. You can find the complete executable sample code at: .
The key parts of the task process code are explained below.
### Downloading the CIFAR-10 Dataset
-CIFAR-10 dataset download address: [the website of Cifar-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html) In this example, the data is in binary format. In the Linux environment, run the following command to download the dataset:
+CIFAR-10 dataset download address: [the website of Cifar-10 Dataset](https://www.cs.toronto.edu/~kriz/cifar.html). In this example, the data is in binary format. In the Linux environment, run the following command to download the dataset:
```shell
wget https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
@@ -119,8 +119,8 @@ tar -zvxf cifar-10-binary.tar.gz
c_trans += [resize_op, rescale_op, normalize_op, changeswap_op]
# apply map operations on images
- cifar_ds = cifar_ds.map(input_columns="label", operations=type_cast_op)
- cifar_ds = cifar_ds.map(input_columns="image", operations=c_trans)
+ cifar_ds = cifar_ds.map(operations=type_cast_op, input_columns="label")
+ cifar_ds = cifar_ds.map(operations=c_trans, input_columns="image")
```
3. Shuffle and batch process the data.
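
For reference, the shuffle-and-batch step named above typically mirrors the notebook pipelines elsewhere in this patch; the `buffer_size` and `batch_size` values here are illustrative:

```python
cifar_ds = cifar_ds.shuffle(buffer_size=1000)                  # randomize sample order
cifar_ds = cifar_ds.batch(batch_size=32, drop_remainder=True)  # fixed-size training batches
```
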
diff --git a/tutorials/source_en/advanced_use/customized_debugging_information.md b/tutorials/source_en/advanced_use/customized_debugging_information.md
index 4d2add7d7ada5eb6138d28fa3be072cd894f3e79..c5acc66fd282f17ed80d414904d9d5228843a072 100644
--- a/tutorials/source_en/advanced_use/customized_debugging_information.md
+++ b/tutorials/source_en/advanced_use/customized_debugging_information.md
@@ -11,7 +11,9 @@
- [Custom Callback](#custom-callback)
- [MindSpore Metrics](#mindspore-metrics)
- [MindSpore Print Operator](#mindspore-print-operator)
- - [Asynchronous Data Dump](#asynchronous-data-dump)
+ - [Data Dump Introduction](#data-dump-introduction)
+ - [Synchronous Dump](#synchronous-dump)
+ - [Asynchronous Dump](#asynchronous-dump)
- [Log-related Environment Variables and Configurations](#log-related-environment-variables-and-configurations)
@@ -118,8 +120,8 @@ Here are two examples to further understand the usage of custom Callback.
loss = cb_params.net_outputs
cur_time = time.time()
if (cur_time - cb_params.init_time) > self.run_time:
- print("epoch: ", epoch_num, " step: ", step_num, " loss: ", loss)
- run_context.request_stop()
+ print("epoch: ", epoch_num, " step: ", step_num, " loss: ", loss)
+ run_context.request_stop()
stop_cb = StopAtTime(run_time=10)
model.train(100, dataset, callbacks=stop_cb)
@@ -259,50 +261,108 @@ val:[[1 1]
[1 1]]
```
-## Asynchronous Data Dump
+## Data Dump Introduction
-When the training result deviates from the expectation on Ascend, the input and output of the operator can be dumped for debugging through Asynchronous Data Dump.
+When the training result deviates from the expectation, the input and output of an operator can be saved for debugging through a data dump. Data dump includes Synchronous Dump and Asynchronous Dump.
-> `comm_ops` operators are not supported by Asynchronous Data Dump. `comm_ops` can be found in [Operator List](https://www.mindspore.cn/docs/en/master/operator_list.html).
+### Synchronous Dump
-1. Turn on the switch to save graph IR: `context.set_context(save_graphs=True)`.
-2. Execute training script.
-3. Open `hwopt_d_end_graph_{graph id}.ir` in the directory you execute the script and find the name of the operators you want to Dump.
-4. Configure json file: `data_dump.json`.
+1. Create a dump JSON file: `data_dump.json`.
+
+ The name and location of the JSON file can be customized.
```json
{
- "DumpSettings": {
+ "common_dump_settings": {
+ "dump_mode": 0,
+ "path": "/tmp/net/",
"net_name": "ResNet50",
+ "iteration": 0,
+ "input_output": 0,
+ "kernels": ["Default/Conv-op12"],
+ "support_device": [0,1,2,3,4,5,6,7]
+ },
+ "e2e_dump_settings": {
+ "enable": false,
+ "trans_flag": false
+ }
+ }
+ ```
+
+   - `dump_mode`: 0: dump all kernels in the graph; 1: dump only the kernels in the kernels list.
+   - `path`: the absolute path where dump data is saved.
+   - `net_name`: the net name, e.g. ResNet50.
+   - `iteration`: specifies the iterations to dump. All kernels in the graph will be dumped.
+   - `input_output`: 0: dump both the input and output of the kernel; 1: dump only the input; 2: dump only the output. This parameter does not take effect on GPU, where only the output of the operator is dumped.
+   - `kernels`: the full name of the kernel. Enable `context.set_context(save_graphs=True)` and get the full name of the kernel from the `ir` file: from `hwopt_d_end_graph_{graph_id}.ir` when `device_target` is `Ascend`, and from `hwopt_pm_7_getitem_tuple.ir` when `device_target` is `GPU`.
+   - `support_device`: supported devices; the default is `[0,1,2,3,4,5,6,7]`. You can specify particular device IDs to dump data for those devices only.
+   - `enable`: enables synchronous dump.
+   - `trans_flag`: enables the trans flag, which transforms the device data format into NCHW.
+
+2. Specify the location of the JSON file.
+
+ ```bash
+ export MINDSPORE_DUMP_CONFIG={Absolute path of data_dump.json}
+ ```
+
+   - Set the environment variables before executing the training script; setting them during training will not take effect.
+ - Dump environment variables need to be configured before calling `mindspore.communication.management.init`.
+
+3. Execute the training script to dump data.
+
+   You can set `context.set_context(reserve_class_name_in_scope=False)` in your training script to avoid dump failures caused by overly long file names.
+
+4. Parse the dump file.
+
+   Call `numpy.fromfile` to parse the dump data file.
+
+### Asynchronous Dump
+
+1. Create a dump JSON file: `data_dump.json`.
+
+ The name and location of the JSON file can be customized.
+ ```json
+ {
+ "common_dump_settings": {
"dump_mode": 0,
- "op_debug_mode": 0,
+ "path": "/relative_path",
+ "net_name": "ResNet50",
"iteration": 0,
- "kernels": ["Default/Conv2D-op2", "Default/TensorAdd-op10"]
+ "input_output": 0,
+ "kernels": ["Default/Conv-op12"],
+ "support_device": [0,1,2,3,4,5,6,7]
+ },
+ "async_dump_settings": {
+ "enable": false,
+ "op_debug_mode": 0
}
}
```
- > - `net_name`: net name eg:ResNet50.
- > - `dump_mode`: 0: dump all kernels, 1: dump kernels in kernels list.
- > - `op_debug_mode`: please set to 0.
- > - `iteration`: specified iteration to dump. `iteration` should be set to 0 when `dataset_sink_mode` is False and data of every iteration will be dumped.
- > - `kernels`: `fullname_with_scope` of kernel which need to dump.
+   - `dump_mode`: 0: dump all kernels in the graph; 1: dump only the kernels in the kernels list.
+   - `path`: the relative path where dump data is saved; e.g. data will be saved in `/var/log/npu/ide_daemon/dump/relative_path`.
+   - `net_name`: the net name, e.g. ResNet50.
+   - `iteration`: specifies the iterations to dump. `iteration` should be set to 0 when `dataset_sink_mode` is False, in which case data of every iteration will be dumped.
+   - `input_output`: 0: dump both the input and output of the kernel; 1: dump only the input; 2: dump only the output.
+   - `kernels`: the full name of the kernel. Enable `context.set_context(save_graphs=True)` and get the full name of the kernel from `hwopt_d_end_graph_{graph_id}.ir`. `kernels` only supports TBE operators, AiCPU operators, and communication operators. The data of a communication operator's input operators will be dumped if `kernels` is set to the name of the communication operator.
+   - `support_device`: supported devices; the default is `[0,1,2,3,4,5,6,7]`. You can specify particular device IDs to dump data for those devices only.
+   - `enable`: enables Asynchronous Dump.
+   - `op_debug_mode`: please set this to 0.
-5. Set environment variables.
+2. Specify the JSON configuration file of Dump.
```bash
- export ENABLE_DATA_DUMP=1
- export DATA_DUMP_PATH=/test
- export DATA_DUMP_CONFIG_PATH=data_dump.json
+ export MINDSPORE_DUMP_CONFIG={Absolute path of data_dump.json}
```
- > - Set the environment variables before executing the training script. Setting environment variables during training will not take effect.
- > - Dump environment variables need to be configured before calling `mindspore.communication.management.init`.
+   - Set the environment variables before executing the training script; setting them during training will not take effect.
+ - Dump environment variables need to be configured before calling `mindspore.communication.management.init`.
+
+3. Execute the training script to dump data.
-6. Execute the training script again.
-7. Parse the Dump file.
+4. Parse the dump file.
-   Change directory to `/var/log/npu/ide_daemon/dump/` after training and execute the following commands to parse Dump data file:
+   Change directory to `/var/log/npu/ide_daemon/dump/` after training and execute the following command to parse the dump data file:
```bash
python /usr/local/Ascend/toolkit/tools/operator_cmp/compare/dump_data_conversion.pyc -type offline -target numpy -i ./{Dump file path} -o ./{output file path}
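
An end-to-end sketch of the synchronous-dump setup described above, writing the configuration from Python and pointing `MINDSPORE_DUMP_CONFIG` at it before training starts. The dump path, kernel name, and the `float32` dtype used when reading back are placeholders; the real dtype depends on the dumped operator:

```python
import json
import os

import numpy as np

config = {
    "common_dump_settings": {
        "dump_mode": 0,                       # 0: dump every kernel in the graph
        "path": "/tmp/net/",                  # absolute path (synchronous dump)
        "net_name": "ResNet50",
        "iteration": 0,
        "input_output": 0,                    # dump both inputs and outputs
        "kernels": ["Default/Conv-op12"],     # placeholder full kernel name
        "support_device": [0, 1, 2, 3, 4, 5, 6, 7],
    },
    "e2e_dump_settings": {"enable": True, "trans_flag": True},
}
with open("data_dump.json", "w") as f:
    json.dump(config, f, indent=4)

# Must be set before the training script starts executing
# (and before mindspore.communication.management.init is called).
os.environ["MINDSPORE_DUMP_CONFIG"] = os.path.abspath("data_dump.json")

# ... run training here ...

# A dumped file can then be read back as raw values:
# values = np.fromfile("/tmp/net/<dump file>", dtype=np.float32)
```
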
diff --git a/tutorials/source_en/advanced_use/dashboard.md b/tutorials/source_en/advanced_use/dashboard.md
index 7c875e1c8151661194b42f4f8c26825cb76b6d2d..b3e3660fd1ab0eee35ddac21f30f75373c1aef5f 100644
--- a/tutorials/source_en/advanced_use/dashboard.md
+++ b/tutorials/source_en/advanced_use/dashboard.md
@@ -20,7 +20,7 @@
## Overview
-Training dashboard is an important part of mindinsight's visualization component, and its tags include scalar visualization, parameter distribution visualization, computational visualization, data visualization, image visualization and tensor visualization.
+Training dashboard is an important part of MindInsight's visualization component, and its tags include scalar visualization, parameter distribution visualization, computational graph visualization, data graph visualization, image visualization and tensor visualization.
Access the Training Dashboard by selecting a specific training from the training list.
@@ -159,8 +159,7 @@ Figure 12: Table display
Figure 12 shows tensors recorded by a user in the form of a table, which includes the following functions:
- Click the small square button on the right side of the table to zoom in the table.
-- The white box in the table shows the tensor data under which dimension is currently displayed, where the colon `:` represents all values of the current dimension, you can enter the corresponding index or `:` in the box and press `Enter` or click the button of tick on the back to query tensor data for specific dimensions.
- Assuming a certain dimension is 32, the index range is -32 to 31. Note: tensor data from 0 to 2 dimensions can be queried. Tensor data of more than two dimensions is not supported, in other word, the query conditions of more than two colons `:` cannot be set.
+- The white box in the table shows which dimension of the tensor data is currently displayed. The colon `:` indicates the index range of the current dimension, which is basically the same as the meaning of a Python index: if no specific index is specified, it indicates all the values of the current dimension, and `2:5` indicates the values from index 2 to 5 (not including 5). You can enter the corresponding index, or an index range containing `:`, in the box and press `Enter` or click the tick button behind it to query tensor data for specific dimensions. Assuming a certain dimension is 32, the index range is -32 to 31. Note: tensor data of 0 to 2 dimensions can be queried; tensor data of more than two dimensions is not supported, in other words, query conditions with more than two colons `:` cannot be set.
- Query the tensor data of a specific step by dragging the hollow circle below the table.

diff --git a/tutorials/source_en/advanced_use/dataset_conversion.md b/tutorials/source_en/advanced_use/dataset_conversion.md
new file mode 100644
index 0000000000000000000000000000000000000000..3bfbb3df49a953184848cf7250ea68f7fb64e818
--- /dev/null
+++ b/tutorials/source_en/advanced_use/dataset_conversion.md
@@ -0,0 +1 @@
+# Convert Dataset to MindRecord
diff --git a/tutorials/source_en/advanced_use/debugger.md b/tutorials/source_en/advanced_use/debugger.md
new file mode 100644
index 0000000000000000000000000000000000000000..94121a490e8a8f7956ad1067a17d70bc01079eff
--- /dev/null
+++ b/tutorials/source_en/advanced_use/debugger.md
@@ -0,0 +1,187 @@
+# Using Debugger
+
+`Linux` `Ascend` `GPU` `Graph Mode` `Debug Training` `Intermediate` `Expert`
+
+
+
+- [Using Debugger](#using-debugger)
+ - [Overview](#overview)
+ - [Operation Process](#operation-process)
+ - [Debugger Environment Preparation](#debugger-environment-preparation)
+  - [Debugger UI Introduction](#debugger-ui-introduction)
+ - [Computational Graph](#computational-graph)
+ - [Node List](#node-list)
+ - [Graph Node Details](#graph-node-details)
+ - [Conditional Breakpoint](#conditional-breakpoint)
+ - [Training Control](#training-control)
+ - [Debugger Usage Example](#debugger-usage-example)
+ - [Notices](#notices)
+
+
+
+
+
+## Overview
+
+MindSpore Debugger is a debugging tool for training in `Graph Mode`. It can be applied to visualize and analyze the intermediate computation results of the computational graph.
+
+In `Graph Mode` training, the computation results of intermediate nodes in the computational graph cannot be acquired from the Python layer, which makes debugging difficult. By applying MindSpore Debugger, users can:
+
+- Visualize the computational graph on the UI and analyze the output of the graph node;
+- Set conditional breakpoints to monitor training exceptions (such as INF); when a condition is met, users can track the cause of the bug at the point where the exception occurs;
+- Visualize and analyze the change of parameters, such as weights.
+
+## Operation Process
+
+- Launch MindInsight in debugger mode, and set Debugger environment variables for the training;
+- At the beginning of the training, set conditional breakpoints;
+- Analyze the training progress on MindInsight Debugger UI.
+
+## Debugger Environment Preparation
+
+First, install MindInsight and launch it in debugger mode. In debugger mode, MindSpore sends training information to the MindInsight Debugger Server, and users can analyze the information on the MindInsight UI.
+
+The command to launch MindInsight in debugger mode is as follows:
+
+```shell
+mindinsight start --port {PORT} --enable-debugger True --debugger-port {DEBUGGER_PORT}
+```
+
+The Debugger related parameters:
+
+|Name|Argument|Description|Type|Default|Scope|
+|---|---|---|---|---|---|
+|`--port {PORT}`|Optional|Specifies the port number of the web visualization service.|Integer|8080|1~65535|
+|`--enable-debugger {ENABLE_DEBUGGER}`|Required|Should be set to `True`, this will launch the MindInsight debugger server.|Boolean|False|True/False|
+|`--debugger-port {DEBUGGER_PORT}`|Optional|Specifies the port number of the debugger server.|Integer|50051|1~65535|
+
+For more launch parameters, please refer to [MindInsight Commands](https://www.mindspore.cn/tutorial/en/master/advanced_use/mindinsight_commands.html).
+
+Then, set `export ENABLE_MS_DEBUGGER=1` to specify that the training runs in debugger mode, and set the debugger host and port to which the training connects:
+`export MS_DEBUGGER_HOST=127.0.0.1` (the service address must be consistent with MindInsight host address);
+`export MS_DEBUGGER_PORT=50051` (the port must be consistent with MindInsight debugger-port).
+
+If the memory space of your equipment is limited, you can use the memory reuse mode before starting the training to reduce the running space: `export MS_DEBUGGER_PARTIAL_MEM=1`.
+
+Besides, do not use dataset sink mode (set the parameter `dataset_sink_mode` in `model.train` to `False`) to ensure the Debugger can acquire information for all steps.
+
+## Debugger UI Introduction
+
+After the Debugger environment preparation, users can run the training script.
+Before the execution of the computational graph, the MindInsight Debugger UI will show the information of the optimized computational graph.
+The following are the Debugger UI components.
+
+
+
+Figure 1: The initial UI of debugger
+
+### Computational Graph
+
+Debugger will display the optimized computational graph in the upper middle area of the page.
+Users can click a box (standing for one `scope`) to expand the graph and analyze the nodes contained in that `scope`.
+
+In the GPU environment, there are `Current Node` and `Next Node` buttons in the upper right corner of the computational graph panel,
+which are used to return to the current execution node and execute the next node respectively. Users can easily execute one node at a time.
+
+The area on the top shows the training metadata, such as the `Client IP` (address and port of the training script process),
+`Device ID` being used and the current training `Step`.
+
+### Node List
+
+As shown in Figure 1, the computational graph `Node List` is displayed on the left of the page.
+The `Node List` can be expanded according to the `scope` of the nodes.
+When you click a node in the list, the computational graph on the right is also expanded and the corresponding node is selected automatically.
+
+The search bar on the top can be used to search for nodes in the graph by node name.
+
+### Graph Node Details
+
+
+
+Figure 2: The Graph Node Details
+
+When choosing one node on the graph, the details of this node will be displayed at the bottom.
+The `Tensor Value Overview` area will show the input nodes and the outputs of this node. The `Type`, `Shape` and `Value` of the `Tensor` can also be viewed.
+
+In the GPU environment, after selecting an executable node on the graph, you can right-click and select `Continue to` on this node,
+which means running the training script to the selected node within one step.
+After clicking `Continue to`, the training script is executed and pauses after running to this node.
+
+
+
+Figure 3: `Tensor` Value Visualization
+
+Some node outputs have too many dimensions to display inline.
+For these `Tensors`, users can click the `View` link and visualize the `Tensor` in a new panel, as shown in Figure 3.
+
+
+
+Figure 4: Comparing Parameter Nodes with Their Previous-Step Values
+
+In addition, the output of the parameter nodes can be compared with their output in the previous step.
+Click the `Compare with Previous Step` button to enter the comparison interface, as shown in Figure 4.
+
+### Conditional Breakpoint
+
+
+
+Figure 5: Set Conditional Breakpoint (Watch Point)
+
+In order to monitor the training and find out the bugs, users can set conditional breakpoints (called `Watch Point List` on UI) to analyze the outputs of the
+specified nodes automatically. Figure 5 displays how to set a `Watch Point`:
+- At first, click the `+` button on the upper right corner, and then choose a watch condition;
+- Select the nodes to be watched in the `Node List` and tick the boxes in front of the chosen nodes;
+- Click the `OK` button to add this `Watch Point`.
+
+The outputs of the watched nodes will be checked by the corresponding conditions. Once the condition is satisfied, the training will pause, and users can analyze
+the triggered `Watch Point List` on the Debugger UI.
+
+
+
+Figure 6: The Triggered `Watch Point List`
+
+Figure 6 displays the triggered `Watch Point List`, which occupies the same area as the `Node List`.
+The triggered nodes and corresponding conditions are displayed in execution order. Click a line in the list, and the node is shown in the computational graph automatically.
+Users can further trace the cause of the bug by analyzing the node details.
+
+### Training Control
+
+At the bottom of the watchpoint setting panel is the training control panel, which shows the training control functions of the debugger,
+with four buttons: `CONTINUE`, `PAUSE`, `TERMINATE` and `OK`:
+
+- `OK` stands for executing the training for several steps; the number of steps can be specified in the bar above.
+The training will pause when the `Watch Point List` is triggered or the specified number of steps has been executed.
+- `CONTINUE` stands for executing the training until the `Watch Point List` is triggered, or the training is finished.
+- `PAUSE` stands for pausing the training.
+- `TERMINATE` stands for terminating the training.
+
+## Debugger Usage Example
+
+1. Prepare the debugger environment, and open the MindInsight Debugger UI.
+
+ 
+
+ Figure 7: Debugger Start and Waiting for the Training
+
+ The Debugger server is launched and waiting for the training to connect.
+
+2. Run the training script, after a while, the computational graph will be displayed on Debugger UI, as shown in Figure 1.
+
+3. Set conditional breakpoints for the training, as shown in Figure 5.
+
+   In Figure 5, the conditions are selected and some nodes are watched, which means the outputs of these nodes will be checked against the conditions during training.
+ After setting the conditional breakpoint, users can set steps in the control panel and click `OK` or `CONTINUE` to continue training.
+
+4. The conditional breakpoints are triggered, as shown in Figure 6.
+
+   When the conditional breakpoints are triggered, users can analyze the corresponding node details to find out the cause of the bug.
+
+## Notices
+
+- Debugger will slow down the training performance.
+- A single Debugger Server can only be connected to one training process.
+- The debugger does not support distributed training scenarios.
+- The debugger does not support multi-graph training scenarios.
+- When too many `Watch Points` are set, the system may run out of memory.
+- On Davinci devices, the debugger cannot get the initialization parameters of the neural network.
+- For the GPU environment, only parameter nodes that meet the conditions can be compared with their own results in the previous step: nodes executed with `Next Node` or `Continue to`, and parameter nodes used as inputs of the `Watch Points`. Otherwise, `Compare with Previous Step` cannot be used.
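
The exports in this document are shell commands; when it is more convenient to keep everything in the launch script, the same variables can be set from Python before any MindSpore code runs. A hedged sketch, since the documented route is exporting them in the shell:

```python
import os

# Values mirror the defaults documented in this section.
os.environ["ENABLE_MS_DEBUGGER"] = "1"         # run the training in debugger mode
os.environ["MS_DEBUGGER_HOST"] = "127.0.0.1"   # must match the MindInsight host
os.environ["MS_DEBUGGER_PORT"] = "50051"       # must match --debugger-port
os.environ["MS_DEBUGGER_PARTIAL_MEM"] = "1"    # optional memory reuse mode

# ... import mindspore and call model.train(..., dataset_sink_mode=False) afterwards ...
```
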
diff --git a/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md b/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
index 98c55656a9f80709e92f5e33176c7ac958c0a274..bbb9ad368809a108989deda5640cc9aea751642d 100644
--- a/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
+++ b/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
@@ -17,7 +17,7 @@
## Overview
-MindSpore supports the following running modes which are optimized in terms of debugging or running:
+MindSpore supports the following running modes which are optimized for debugging or running:
- PyNative mode: dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model.
- Graph mode: static graph mode. In this mode, the neural network model is compiled into an entire graph and then delivered for execution. This mode uses technologies such as graph optimization to improve the running performance and facilitates large-scale deployment and cross-platform running.
@@ -105,12 +105,12 @@ print(output.asnumpy())
[3. 3. 3.]]
```
-> Parallel execution and summary is not supported in PyNative mode, so parallel and summary related operators can not be used.
+> Parallel execution and summary are not supported in PyNative mode, so parallel and summary related operators cannot be used.
### Improving PyNative Performance
-MindSpore provides the staging function to improve the execution speed of inference tasks in PyNative mode. This function compiles Python functions or Python class methods into computational graphs in PyNative mode and improves the execution speed by using graph optimization technologies, as shown in the following example:
+MindSpore provides the Staging function to improve the execution speed of inference tasks in PyNative mode. This function compiles Python functions or Python class methods into computational graphs in PyNative mode and improves the execution speed by using graph optimization technologies, as shown in the following example:
```python
import numpy as np
diff --git a/tutorials/source_en/advanced_use/differential_privacy.md b/tutorials/source_en/advanced_use/differential_privacy.md
index 57bd79f4adb8ef69eb377f3b42265c972142be9a..00e7743f9978913614d1f9bef7c18351d2414b01 100644
--- a/tutorials/source_en/advanced_use/differential_privacy.md
+++ b/tutorials/source_en/advanced_use/differential_privacy.md
@@ -45,7 +45,7 @@ MindArmour differential privacy module Differential-Privacy implements the diffe
The LeNet model and MNIST dataset are used as an example to describe how to use the differential privacy optimizer to train a neural network model on MindSpore.
-> This example is for the Ascend 910 AI processor. You can download the complete sample code from .
+> This example is for the Ascend 910 AI processor. You can download the complete sample code from .
## Implementation
@@ -70,13 +70,11 @@ import mindspore.dataset.transforms.c_transforms as C
from mindspore.dataset.vision import Inter
import mindspore.common.dtype as mstype
-from mindarmour.diff_privacy import DPModel
-from mindarmour.diff_privacy import PrivacyMonitorFactory
-from mindarmour.diff_privacy import NoiseMechanismsFactory
-from mindarmour.diff_privacy import ClipMechanismsFactory
+from mindarmour.privacy.diff_privacy import DPModel
+from mindarmour.privacy.diff_privacy import PrivacyMonitorFactory
+from mindarmour.privacy.diff_privacy import NoiseMechanismsFactory
+from mindarmour.privacy.diff_privacy import ClipMechanismsFactory
from mindarmour.utils.logger import LogUtil
-from lenet5_net import LeNet5
-from lenet5_config import mnist_cfg as cfg
LOGGER = LogUtil.get_instance()
LOGGER.set_level('INFO')
@@ -85,7 +83,7 @@ TAG = 'Lenet5_train'
### Configuring Parameters
-1. Set the running environment, dataset path, model training parameters, checkpoint storage parameters, and differential privacy parameters. Replace 'data_path' with you data path. For more configurations, see .
+1. Set the running environment, dataset path, model training parameters, checkpoint storage parameters, and differential privacy parameters. Replace 'data_path' with your data path. For more configurations, see .
```python
cfg = edict({
@@ -99,7 +97,7 @@ TAG = 'Lenet5_train'
'save_checkpoint_steps': 234, # the interval steps for saving checkpoint file of the model
'keep_checkpoint_max': 10, # the maximum number of checkpoint files would be saved
'device_target': 'Ascend', # device used
- 'data_path': './MNIST_unzip', # the path of training and testing data set
+ 'data_path': '../../common/dataset/MNIST', # the path of training and testing data set
'dataset_sink_mode': False, # whether deliver all training data to device one time
'micro_batches': 32, # the number of small batches split from an original batch
'norm_bound': 1.0, # the clip bound of the gradients of model's training parameters
diff --git a/tutorials/source_en/advanced_use/distributed_training_ascend.md b/tutorials/source_en/advanced_use/distributed_training_ascend.md
index 81415d95d3f5639ca735a47a0a568165458fdac9..1fe5d9380d06b54fbdca971dc97ca84fb186d7fa 100644
--- a/tutorials/source_en/advanced_use/distributed_training_ascend.md
+++ b/tutorials/source_en/advanced_use/distributed_training_ascend.md
@@ -17,6 +17,11 @@
- [Defining the Optimizer](#defining-the-optimizer)
- [Training the Network](#training-the-network)
- [Running the Script](#running-the-script)
+ - [Distributed Training Model Parameters Saving and Loading](#distributed-training-model-parameters-saving-and-loading)
+ - [Auto Parallel Mode](#auto-parallel-mode)
+ - [Data Parallel Mode](#data-parallel-mode)
+ - [Semi Auto Parallel Mode](#semi-auto-parallel-mode)
+ - [Hybrid Parallel Mode](#hybrid-parallel-mode)
@@ -219,7 +224,7 @@ The `Momentum` optimizer is used as the parameter update tool. The definition is
> You are advised to set `device_num` and `global_rank` to their default values. The framework calls the HCCL API to obtain the values.
-If multiple network cases exist in the script, call `context.reset_auto_parallel_context()` to restore all parameters to default values before executing the next case.
+If multiple network cases exist in the script, call `context.reset_auto_parallel_context` to restore all parameters to default values before executing the next case.
In the following sample code, the automatic parallel mode is specified. To switch to the data parallel mode, you only need to change `parallel_mode` to `DATA_PARALLEL`.
@@ -334,3 +339,190 @@ epoch: 8 step: 156, loss is 1.2943741
epoch: 9 step: 156, loss is 1.2316195
epoch: 10 step: 156, loss is 1.1533381
```
+
+## Distributed Training Model Parameters Saving and Loading
+
+The following content introduces how to save and load model parameters under the four distributed parallel training modes. Before saving model parameters for distributed training, configure the distributed environment variables and the collective communication library in accordance with this tutorial.
+
+### Auto Parallel Mode
+
+It is convenient to save and load model parameters in auto parallel mode. Simply add the `CheckpointConfig` and `ModelCheckpoint` configuration to the `test_train_cifar` method in the training network section of this tutorial, and the model parameters can be saved. The code is as follows:
+
+```python
+def test_train_cifar(epoch_size=10):
+ context.set_auto_parallel_context(parallel_mode=ParallelMode.AUTO_PARALLEL, gradients_mean=True)
+ loss_cb = LossMonitor()
+ dataset = create_dataset(data_path)
+ batch_size = 32
+ num_classes = 10
+ net = resnet50(batch_size, num_classes)
+ loss = SoftmaxCrossEntropyExpand(sparse=True)
+ opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01, 0.9)
+ save_path = '...'
+ ckpt_config = CheckpointConfig()
+ ckpt_callback = ModelCheckpoint(prefix='auto_parallel', directory=save_path, config=ckpt_config)
+ model = Model(net, loss_fn=loss, optimizer=opt)
+ model.train(epoch_size, dataset, callbacks=[loss_cb, ckpt_callback], dataset_sink_mode=True)
+```
+
+After saving the checkpoint file, users can easily load the model parameters for inference or retraining. For example, the following code can be used for retraining:
+
+```python
+net = Net()
+param_dict = load_checkpoint(save_path)
+load_param_into_net(net, param_dict)
+```
+
+For checkpoint configuration policy and saving method, please refer to [Saving and Loading Model Parameters](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#checkpoint-configuration-policies).
+
+### Data Parallel Mode
+
+In data parallel mode, the checkpoint can be used as shown in the following example:
+
+```python
+from mindspore.train import Model
+from mindspore import context
+from mindspore.context import ParallelMode
+from mindspore.nn import Momentum, Cell, Flatten, ReLU
+from mindspore.train.callback import CheckpointConfig, ModelCheckpoint, LossMonitor
+from mindspore.communication.management import get_rank
+from mindspore.common.parameter import Parameter
+from mindspore import Tensor
+import mindspore.ops.operations as P
+import numpy as np
+# define network
+class DataParallelNet(Cell):
+ def __init__(self, test_size, transpose_a=False, transpose_b=False, strategy=None, layerwise_parallel=True):
+ super().__init__()
+ weight_np = np.full(test_size, 0.1, dtype=np.float32)
+ self.weight = Parameter(Tensor(weight_np), name="fc_weight", layerwise_parallel=layerwise_parallel)
+ self.relu = ReLU()
+ self.fc = P.MatMul(transpose_a=transpose_a, transpose_b=transpose_b)
+ if strategy is not None:
+ self.fc.shard(strategy)
+
+ def construct(self, inputs, label):
+ x = self.relu(inputs)
+ x = self.fc(x, self.weight)
+ return x
+```
+
+Assuming that the Data Parallel mode is used to train and save the model on an 8P machine, the data needs to be obtained first, and the parallel strategy and parallel mode need to be set. The code is as follows:
+
+```python
+# create data sets
+parallel_dataset = CreateData()
+# set parallel strategy
+strategy = ((1, 1), (1, 8))
+# create network model
+net = DataParallelNet((96, 16), strategy=strategy)  # the weight shape (96, 16) is an illustrative value
+# reset parallel mode
+context.reset_auto_parallel_context()
+# set parallel mode, data parallel mode is selected for training and model saving. If you want to choose auto parallel
+# mode, you can simply change the value of parallel_mode parameter to ParallelMode.AUTO_PARALLEL.
+context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL, device_num=8)
+```
+
+Then set the checkpoint saving policy, optimizer and loss function as required. The code is as follows:
+
+```python
+# config checkpoint
+ckpt_config = CheckpointConfig(keep_checkpoint_max=1)
+# define checkpoint save path
+ckpt_path = './rank_{}_ckpt'.format(get_rank())
+# create a ModelCheckpoint object
+ckpt_callback = ModelCheckpoint(prefix='data_parallel', directory=ckpt_path, config=ckpt_config)
+# set optimizer and loss function
+opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01, 0.9)
+loss = SoftmaxCrossEntropyExpand(sparse=True)
+loss_cb = LossMonitor()
+model = Model(net, loss_fn=loss, optimizer=opt)
+# After training, the system will automatically save the checkpoint file.
+model.train(1, train_dataset=parallel_dataset, callbacks=[ckpt_callback, loss_cb])
+# After training, reset the parallel mode to avoid unnecessary trouble when retraining.
+context.reset_auto_parallel_context()
+```
+
+After saving the checkpoint file, users can also use `load_checkpoint` and `load_param_into_net` to load the model parameters.
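+
+A minimal loading sketch follows. The checkpoint file name is hypothetical (`ModelCheckpoint` names files as `{prefix}-{epoch}_{step}.ckpt`), and the constructor arguments should match those used for training:
+
+```python
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+# Rebuild the network with the same arguments used for training (shapes are illustrative).
+net = DataParallelNet((96, 16), strategy=((1, 1), (1, 8)))
+# Load the checkpoint file saved by this rank (hypothetical file name).
+param_dict = load_checkpoint('./rank_0_ckpt/data_parallel-1_234.ckpt')
+load_param_into_net(net, param_dict)
+```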
+
+### Semi Auto Parallel Mode
+
+The whole process of using a checkpoint in semi auto parallel mode also starts from defining a network model.
+
+```python
+class SemiAutoParallelNet(Cell):
+ def __init__(self, mul_size, test_size, strategy=None, strategy2=None):
+ super().__init__()
+ mul_np = np.full(mul_size, 0.5, dtype=np.float32)
+ equal_np = np.full(test_size, 0.1, dtype=np.float32)
+ self.mul_weight = Parameter(Tensor(mul_np), name="mul_weight")
+ self.equal_weight = Parameter(Tensor(equal_np), name="equal_weight")
+ self.mul = P.Mul()
+ self.equal = P.Equal()
+ if strategy is not None:
+ self.mul.shard(strategy)
+ self.equal.shard(strategy2)
+
+ def construct(self, inputs, label):
+ x = self.mul(inputs, self.mul_weight)
+ x = self.equal(x, self.equal_weight)
+ return x
+```
+
+It is assumed that semi auto parallel mode is also trained and saved on an 8P machine. The code for getting data and setting the parallel strategy and parallel mode is as follows:
+
+```python
+# create data sets
+parallel_dataset = CreateData()
+# set parallel strategy
+strategy = ((1, 1), (1, 8))
+# create network model
+net = SemiAutoParallelNet((8, 8), (8, 8), strategy=strategy, strategy2=strategy)  # the weight shapes are illustrative values
+# reset parallel mode
+context.reset_auto_parallel_context()
+# set parallel mode to semi auto parallel for training and model saving. If you want to choose auto parallel
+# mode, you can simply change the value of the parallel_mode parameter to ParallelMode.AUTO_PARALLEL.
+context.set_auto_parallel_context(parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL,
+                                  strategy_ckpt_save_file='./rank_{}_ckpt/strategy.txt'.format(get_rank()))
+```
+
+Then set the checkpoint saving policy, optimizer and loss function as required. The code is as follows:
+
+```python
+# config checkpoint
+ckpt_config = CheckpointConfig(keep_checkpoint_max=1)
+# define checkpoint save path
+ckpt_path = './rank_{}_ckpt'.format(get_rank())
+# create a ModelCheckpoint object
+ckpt_callback = ModelCheckpoint(prefix='semi_auto_parallel', directory=ckpt_path, config=ckpt_config)
+# set optimizer and loss function
+opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01, 0.9)
+loss = SoftmaxCrossEntropyExpand(sparse=True)
+loss_cb = LossMonitor()
+model = Model(net, loss_fn=loss, optimizer=opt)
+# After training, the system will automatically save the checkpoint file.
+model.train(1, train_dataset=parallel_dataset, callbacks=[ckpt_callback, loss_cb])
+# After training, reset the parallel mode to avoid unnecessary trouble when retraining.
+context.reset_auto_parallel_context()
+```
+
+After saving the checkpoint file, users can also use `load_checkpoint` and `load_param_into_net` to load the model parameters.
+
+In the three parallel training modes described above, a complete checkpoint file is saved on each card. Users can instead save only each card's own slice of the checkpoint file; the following takes semi auto parallel mode as an example.
+
+Only the code that sets the checkpoint saving policy needs to change; each card then saves only its own slice of the model. The specific change is as follows:
+
+Change the checkpoint configuration policy from:
+
+```python
+# config checkpoint
+ckpt_config = CheckpointConfig(keep_checkpoint_max=1)
+```
+
+to:
+
+```python
+# config checkpoint
+ckpt_config = CheckpointConfig(keep_checkpoint_max=1, integrated_save=False)
+```
+
+Note that if users choose this checkpoint saving policy, they need to save and load the segmented checkpoints for subsequent inference or retraining. For specific usage, refer to [Integrating the Saved Checkpoint Files](https://www.mindspore.cn/tutorial/en/master/advanced_use/checkpoint_for_hybrid_parallel.html#integrating-the-saved-checkpoint-files).
+
+### Hybrid Parallel Mode
+
+For model parameter saving and loading in Hybrid Parallel Mode, please refer to [Saving and Loading Model Parameters in the Hybrid Parallel Scenario](https://www.mindspore.cn/tutorial/en/master/advanced_use/checkpoint_for_hybrid_parallel.html).
\ No newline at end of file
diff --git a/tutorials/source_en/advanced_use/distributed_training_gpu.md b/tutorials/source_en/advanced_use/distributed_training_gpu.md
new file mode 100644
index 0000000000000000000000000000000000000000..f7fd304a3354b7c9644d6c8f8b104f375f6a333a
--- /dev/null
+++ b/tutorials/source_en/advanced_use/distributed_training_gpu.md
@@ -0,0 +1,147 @@
+# Distributed Parallel Training (GPU)
+
+`Linux` `GPU` `Model` `Training` `Intermediate` `Expert`
+
+
+
+- [Distributed Parallel Training (GPU)](#distributed-parallel-training-gpu)
+ - [Overview](#overview)
+ - [Preparation](#preparation)
+ - [Downloading the Dataset](#downloading-the-dataset)
+ - [Configuring Distributed Environment](#configuring-distributed-environment)
+ - [Calling the Collective Communication Library](#calling-the-collective-communication-library)
+ - [Defining the Network](#defining-the-network)
+ - [Running the Script](#running-the-script)
+ - [Running the Multi-Host Script](#running-the-multi-host-script)
+
+
+
+
+
+## Overview
+
+This tutorial describes how to train the ResNet-50 network using MindSpore data parallelism and automatic parallelism on the GPU hardware platform.
+
+## Preparation
+
+### Downloading the Dataset
+
+The `CIFAR-10` dataset is used as an example. The method of downloading and loading the dataset is the same as that for the Ascend 910 AI processor.
+
+> The method of downloading and loading the dataset:
+>
+>
+
+### Configuring Distributed Environment
+
+- `OpenMPI-3.1.5`: multi-process communication library used by MindSpore.
+
+ > Download the OpenMPI-3.1.5 source code package `openmpi-3.1.5.tar.gz` from .
+ >
+ > For details about how to install OpenMPI, see the official tutorial: .
+
+- `NCCL-2.4.8`: Nvidia collective communication library.
+
+ > Download NCCL-2.4.8 from .
+ >
+ > For details about how to install NCCL, see the official tutorial: .
+
+- Password-free login between hosts (required for multi-host training). If multiple hosts are involved in the training, you need to configure password-free login between them; a consolidated command sketch follows this list. The procedure is as follows:
+ 1. Ensure that the same user is used to log in to each host. (The root user is not recommended.)
+ 2. Run the `ssh-keygen -t rsa -P ""` command to generate a key.
+ 3. Run the `ssh-copy-id DEVICE-IP` command to set the IP address of the host that requires password-free login.
+ 4. Run the `ssh DEVICE-IP` command. If you can log in without entering a password, the configuration is successful.
+ 5. Run the preceding command on all hosts to ensure that every two hosts can communicate with each other.
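+
+The steps above condense to a short command sequence. A minimal sketch, assuming a peer host at the hypothetical address `192.168.1.2`:
+
+```bash
+# Generate an RSA key pair without a passphrase (run once on each host).
+ssh-keygen -t rsa -P ""
+# Copy the public key to the peer host that should accept password-free login.
+ssh-copy-id 192.168.1.2
+# Verify: this should open a shell without prompting for a password.
+ssh 192.168.1.2
+```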
+
+### Calling the Collective Communication Library
+
+On the GPU hardware platform, MindSpore parallel distributed training uses NCCL for communication.
+
+> On the GPU platform, MindSpore does not support the following operations:
+>
+> `get_local_rank`, `get_local_size`, `get_world_rank_from_group_rank`, `get_group_rank_from_world_rank` and `create_group`
+
+The sample code for calling NCCL is as follows:
+
+```python
+from mindspore import context
+from mindspore.communication.management import init
+
+if __name__ == "__main__":
+ context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
+ init("nccl")
+ ...
+```
+
+In the preceding information,
+
+- `mode=context.GRAPH_MODE`: sets the running mode to graph mode for distributed training. (The PyNative mode does not support parallel running.)
+- `init("nccl")`: enables NCCL communication and completes the distributed training initialization.
+
+## Defining the Network
+
+On the GPU hardware platform, the network definition is the same as that for the Ascend 910 AI processor.
+
+> For details about the definitions of the network, optimizer, and loss function, see .
+
+## Running the Script
+
+On the GPU hardware platform, MindSpore uses OpenMPI `mpirun` for distributed training. The following takes the distributed training script for eight devices as an example to describe how to run the script:
+
+> Obtain the running script of the example from:
+>
+>
+>
+> If the script is executed by the root user, the `--allow-run-as-root` parameter must be added to `mpirun`.
+
+```bash
+#!/bin/bash
+
+DATA_PATH=$1
+export DATA_PATH=${DATA_PATH}
+
+rm -rf device
+mkdir device
+cp ./resnet50_distributed_training.py ./resnet.py ./device
+cd ./device
+echo "start training"
+mpirun -n 8 pytest -s -v ./resnet50_distributed_training.py > train.log 2>&1 &
+```
+
+The script requires the variable `DATA_PATH`, which indicates the path of the dataset. In addition, you need to modify the `resnet50_distributed_training.py` file: since the `DEVICE_ID` environment variable does not need to be set on the GPU, you do not need to call `int(os.getenv('DEVICE_ID'))` in the script to obtain the physical sequence number of the device, and `context` does not require `device_id`. Instead, set `device_target` to `GPU` and call `init("nccl")` to enable NCCL. The log files are saved in the `device` directory, and the loss results are saved in `train.log`.
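+
+A `grep` invocation like the following can be used to extract the loss values (a sketch; adjust the log path to your run):
+
+```bash
+grep "loss is" ./device/train.log
+```
+
+The extracted loss values are as follows: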
+
+```
+epoch: 1 step: 1, loss is 2.3025854
+epoch: 1 step: 1, loss is 2.3025854
+epoch: 1 step: 1, loss is 2.3025854
+epoch: 1 step: 1, loss is 2.3025854
+epoch: 1 step: 1, loss is 2.3025854
+epoch: 1 step: 1, loss is 2.3025854
+epoch: 1 step: 1, loss is 2.3025854
+epoch: 1 step: 1, loss is 2.3025854
+```
+
+## Running the Multi-Host Script
+
+If multiple hosts are involved in the training, you need to set the multi-host configuration in the `mpirun` command. You can use the `-H` option of `mpirun`. For example, `mpirun -n 16 -H DEVICE1_IP:8,DEVICE2_IP:8 python hello.py` starts eight processes on each of the hosts whose IP addresses are DEVICE1_IP and DEVICE2_IP. Alternatively, you can create a hostfile similar to the following and pass its path to the `--hostfile` option of `mpirun`. Each line in the hostfile is in the format of `[hostname] slots=[slotnum]`, where hostname can be an IP address or a host name.
+
+```bash
+DEVICE1 slots=8
+DEVICE2 slots=8
+```
+
+The following is the execution script of the 16-device two-host cluster. The variables `DATA_PATH` and `HOSTFILE` need to be passed in, indicating the dataset path and the hostfile path. For details about more `mpirun` options, see the OpenMPI official website.
+
+```bash
+#!/bin/bash
+
+DATA_PATH=$1
+HOSTFILE=$2
+
+rm -rf device
+mkdir device
+cp ./resnet50_distributed_training.py ./resnet.py ./device
+cd ./device
+echo "start training"
+mpirun -n 16 --hostfile $HOSTFILE -x DATA_PATH=$DATA_PATH -x PATH -mca pml ob1 pytest -s -v ./resnet50_distributed_training.py > train.log 2>&1 &
+```
+
+When running on GPU, the model parameters can also be saved and loaded. For details, see [Distributed Training Model Parameters Saving and Loading](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_tutorials.html).
diff --git a/tutorials/source_en/advanced_use/distributed_training_tutorials.rst b/tutorials/source_en/advanced_use/distributed_training_tutorials.rst
index 4807338b07f88a8ef709abf861f1af9334deb256..3fd6919a96bce0b89fdcedf52f8e5668f9373348 100644
--- a/tutorials/source_en/advanced_use/distributed_training_tutorials.rst
+++ b/tutorials/source_en/advanced_use/distributed_training_tutorials.rst
@@ -17,6 +17,7 @@ MindSpore also provides the parallel distributed training function. It supports
:maxdepth: 1
distributed_training_ascend
+ distributed_training_gpu
host_device_training
checkpoint_for_hybrid_parallel
parameter_server_training
diff --git a/tutorials/source_en/advanced_use/fuzzer.md b/tutorials/source_en/advanced_use/fuzzer.md
new file mode 100644
index 0000000000000000000000000000000000000000..ab03dd2f72c6040ec1ac4996c20924d0413042ce
--- /dev/null
+++ b/tutorials/source_en/advanced_use/fuzzer.md
@@ -0,0 +1,211 @@
+# AI Model Security Test
+
+`Ascend` `GPU` `CPU` `Data Preparation` `Model Development` `Model Training` `Model Optimization` `Enterprise` `Expert`
+
+
+
+- [AI Model Security Test](#ai-model-security-test)
+ - [Overview](#overview)
+ - [Implementation](#implementation)
+ - [Importing Library Files](#importing-library-files)
+ - [Parameter Configuration](#parameter-configuration)
+ - [Fuzz Testing Application](#fuzz-testing-application)
+
+
+
+
+## Overview
+
+The decision logic of traditional software is determined by the code logic, and traditional software determines whether the test is adequate based on the code line coverage rate. Ideally, the higher the coverage rate is, the more adequate the code test is. However, for deep neural networks, the decision logic of the program is determined by the training data, network structure, and parameters through a black box mechanism, so code line coverage fails to evaluate test adequacy. A more suitable test evaluation criterion needs to be selected according to the features of deep networks to guide the neural network to perform a more adequate test and find more corner error cases, thereby ensuring the universality and robustness of the model.
+
+The fuzz testing module of MindArmour uses the neuron coverage rate as the test evaluation criterion. Neuron coverage measures which neurons are activated by a set of inputs and the range of neuron output values observed. It is used to guide input mutation so that the inputs activate more neurons and the neuron values are distributed in a wider range. In this way, we can explore different types of model output results and incorrect behaviors.
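+
+To make the idea concrete, the following is a minimal sketch of a naive neuron-coverage metric in NumPy. It is an illustration only, not MindArmour's implementation (which provides KMNC, NBC, and SNAC variants); the activation threshold and array shapes are assumptions:
+
+```python
+import numpy as np
+
+def naive_neuron_coverage(activations, threshold=0.0):
+    # activations: (num_samples, num_neurons) array of neuron outputs.
+    # A neuron counts as covered if its activation exceeds the threshold
+    # for at least one input sample.
+    covered = (activations > threshold).any(axis=0)
+    return covered.sum() / covered.size
+
+# Stand-in for the activations of 10 neurons over a batch of 32 inputs.
+batch_activations = np.random.randn(32, 10)
+print(naive_neuron_coverage(batch_activations))
+```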
+
+The LeNet model and MNIST dataset are used as an example to describe how to use fuzz testing.
+
+> This example is for CPUs, GPUs, and Ascend 910 AI processors. You can download the complete sample code at .
+
+## Implementation
+
+### Importing Library Files
+
+The following imports the required public modules, MindSpore-related modules, and fuzz testing feature modules, and configures the log label and log level.
+
+```python
+import numpy as np
+from mindspore import Model
+from mindspore import context
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+from mindarmour.fuzz_testing import Fuzzer
+from mindarmour.fuzz_testing import ModelCoverageMetrics
+from mindarmour.utils.logger import LogUtil
+
+from examples.common.dataset.data_processing import generate_mnist_dataset
+from examples.common.networks.lenet5.lenet5_net import LeNet5
+
+LOGGER = LogUtil.get_instance()
+TAG = 'Fuzz_testing'
+LOGGER.set_level('INFO')
+```
+
+### Parameter Configuration
+
+Configure necessary information, including the environment information and execution mode.
+
+```python
+context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
+```
+
+For details about the API configuration, see `context.set_context`.
+
+### Fuzz Testing Application
+
+1. Create a LeNet model and load the MNIST dataset. The operation is the same as that for [Model Security]().
+
+ ```python
+ ...
+ # Lenet model
+ model = Model(net)
+ # get training data
+ data_list = "../common/dataset/MNIST/train"
+ batch_size = 32
+ ds = generate_mnist_dataset(data_list, batch_size, sparse=False)
+ train_images = []
+ for data in ds.create_tuple_iterator():
+ images = data[0].asnumpy().astype(np.float32)
+ train_images.append(images)
+ train_images = np.concatenate(train_images, axis=0)
+
+ # get test data
+ data_list = "../common/dataset/MNIST/test"
+ batch_size = 32
+ ds = generate_mnist_dataset(data_list, batch_size, sparse=False)
+ test_images = []
+ test_labels = []
+ for data in ds.create_tuple_iterator():
+ images = data[0].asnumpy().astype(np.float32)
+ labels = data[1].asnumpy()
+ test_images.append(images)
+ test_labels.append(labels)
+ test_images = np.concatenate(test_images, axis=0)
+ test_labels = np.concatenate(test_labels, axis=0)
+ ```
+
+2. Configure Fuzzer parameters.
+
+ Set the data mutation method and parameters. Multiple methods can be configured at the same time. Currently, the following data mutation methods are supported:
+
+ - Image affine transformation methods: Translate, Scale, Shear, and Rotate.
+ - Methods based on image pixel value changes: Contrast, Brightness, Blur, and Noise.
+ - Methods for generating adversarial examples based on white-box and black-box attacks: FGSM, PGD, and MDIIM.
+
+ The configured data mutation methods must include at least one method based on image pixel value changes.
+
+ The first two types of methods support both user-defined configuration parameters and algorithm-randomized parameters. For user-defined configuration parameters, see the corresponding class methods in [image_transform.py](https://gitee.com/mindspore/mindarmour/blob/master/mindarmour/fuzz_testing/image_transform.py). To let the algorithm randomize the parameters, set the method's params to `'auto_param': [True]`; the mutation parameters are then randomly generated within the recommended ranges.
+
+ For details about how to set parameters based on the attack defense method, see the corresponding attack method class.
+
+ The following is an example of configuring Fuzzer parameters.
+
+ ```python
+ mutate_config = [{'method': 'Blur',
+ 'params': {'radius': [0.1, 0.2, 0.3],
+ 'auto_param': [True, False]}},
+ {'method': 'Contrast',
+ 'params': {'auto_param': [True]}},
+ {'method': 'Translate',
+ 'params': {'auto_param': [True]}},
+ {'method': 'Brightness',
+ 'params': {'auto_param': [True]}},
+ {'method': 'Noise',
+ 'params': {'auto_param': [True]}},
+ {'method': 'Scale',
+ 'params': {'auto_param': [True]}},
+ {'method': 'Shear',
+ 'params': {'auto_param': [True]}},
+ {'method': 'FGSM',
+ 'params': {'eps': [0.3, 0.2, 0.4], 'alpha': [0.1]}}
+ ]
+ ```
+
+ Set evaluation metrics. Currently, the following evaluation metrics are supported:
+
+ - General evaluation metric: accuracy.
+ - Neuron coverage rate metrics: kmnc, nbc, and snac.
+ - Adversarial attack evaluation metric: attack_success_rate.
+
+ You can also set this parameter to `'auto'`, in which case all evaluation metrics are used.
+
+ ```python
+ eval_metrics = ['accuracy', 'kmnc', 'attack_success_rate']
+ ```
+
+3. Initialize the seed queue. Each seed in the seed queue has two values: the original image and the image label. Here we select 100 samples as the initial seed queue.
+
+ ```python
+ # make initial seeds
+ initial_seeds = []
+ for img, label in zip(test_images, test_labels):
+ initial_seeds.append([img, label])
+ initial_seeds = initial_seeds[:100]
+ ```
+
+4. Test the neuron coverage rate before the fuzz testing.
+
+ ```python
+ segmented_num = 1000
+ neuron_num = 10
+ model_coverage_test = ModelCoverageMetrics(model, segmented_num, neuron_num, train_images)
+ model_coverage_test.calculate_coverage(np.array(test_images[:100]).astype(np.float32))
+ LOGGER.info(TAG, 'KMNC of this test is : %s', model_coverage_test.get_kmnc())
+ ```
+
+ Result:
+
+ ```python
+ KMNC of this test is : 0.0851
+ ```
+
+5. Perform the fuzz testing.
+
+ ```python
+ eval_metrics = 'auto'
+ model_fuzz_test = Fuzzer(model, train_images, neuron_num, segmented_num)
+ _, _, _, _, metrics = model_fuzz_test.fuzzing(mutate_config, initial_seeds, eval_metrics=eval_metrics)
+ ```
+
+6. View the experiment results.
+
+ The fuzz testing results contain five parts:
+
+ - fuzz_samples: mutated samples in fuzz testing.
+ - true_labels: the ground truth labels of fuzz_samples.
+ - fuzz_pred: predictions of tested model about fuzz_samples.
+ - fuzz_strategies: the methods used to mutate fuzz_samples.
+ - metrics_report: metrics report of fuzz testing.
+
+ The first four return values can be used to further calculate more complex metrics and analyze the robustness of the model.
+
+ Run the following command to view the result:
+
+ ```python
+ if metrics:
+ for key in metrics:
+ LOGGER.info(TAG, key + ': %s', metrics[key])
+ ```
+
+ The fuzz testing result is as follows:
+
+ ```python
+ Accuracy: 0.7929
+ Attack_success_rate: 0.3939
+ Neural_coverage_KMNC: 0.4797
+ ```
+
+ Before the fuzz testing, the KMNC neuron coverage rate of the seeds is 8.5%. After the fuzz testing, the KMNC neuron coverage rate rises to 47.97%, so both the neuron coverage rate and the sample diversity increase. The accuracy of the model on the generated samples is 79.29%, and the attack success rate of the samples generated by adversarial attack methods is 39.39%. Since the initial seeds, the mutation methods, and the corresponding parameters are all randomly selected, it is normal for the results to fluctuate to some extent.
+
+ Original image:
+
+ ![](images/fuzz_seed.png)
+
+ Mutation images generated by fuzzing:
+
+ ![](images/fuzz_res.png)
\ No newline at end of file
diff --git a/tutorials/source_en/advanced_use/gradient_accumulation.md b/tutorials/source_en/advanced_use/gradient_accumulation.md
index acb3af83cf4afb4606793916738daf181310ba2a..f1d150dc290405bebd01b14118246fa624afce9b 100644
--- a/tutorials/source_en/advanced_use/gradient_accumulation.md
+++ b/tutorials/source_en/advanced_use/gradient_accumulation.md
@@ -1,6 +1,6 @@
# Gradient Accumulation
-`Linux` `Ascend` `GPU` `Model Optimization` `Intermediate` `Expert`
+`Linux` `GPU` `Model Optimization` `Intermediate` `Expert`
@@ -29,7 +29,7 @@ Different from the traditional training method, the concept of mini-batch is int
The ultimate objective is to achieve the same effect as training with N x mini-batch data.
-> This tutorial is applicable to GPUs and Ascend 910 AI Processors. You can download the main training sample code from .
+> This tutorial is applicable to GPUs. You can download the main training sample code from .
## Creating a Gradient Accumulation Model
@@ -129,8 +129,8 @@ class TrainClear(Cell):
self.hyper_map = C.HyperMap()
def construct(self):
- seccess = self.hyper_map(F.partial(_clear_op), self.grad_sum, self.zeros)
- return seccess
+ success = self.hyper_map(F.partial(_clear_op), self.grad_sum, self.zeros)
+ return success
```
### Defining the Training Process
@@ -207,8 +207,8 @@ Call the network, optimizer, and loss function, and then customize the `train_pr
```python
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='MindSpore Gard Cumulative Example')
- parser.add_argument('--device_target', type=str, default="Ascend", choices=['Ascend', 'GPU'],
- help='device where the code will be implemented (default: Ascend)')
+ parser.add_argument('--device_target', type=str, default="GPU", choices=['GPU'],
+ help='device where the code will be implemented (default: GPU)')
parser.add_argument('--data_path', type=str, default="./Data",
help='path where the dataset is saved')
args = parser.parse_args()
@@ -230,9 +230,11 @@ After 10 epochs, the accuracy on the test set is about 96.31%.
**Training Execution**
1. Run the training code and view the running result.
+
```shell
$ python train.py --data_path=./MNIST_Data
```
+
The output is as follows. The loss value decreases during training.
```shell
@@ -245,7 +247,7 @@ After 10 epochs, the accuracy on the test set is about 96.31%.
epoch: 10 step: 448 loss is 0.06443884
epoch: 10 step: 449 loss is 0.0067842817
```
-
+
2. Check the saved checkpoint files.
The model file `gradient_accumulation.ckpt` is saved during training.
@@ -255,7 +257,7 @@ After 10 epochs, the accuracy on the test set is about 96.31%.
Use the saved checkpoint file to load the validation dataset through [eval.py]() in the lenet directory of model_zoo.
```shell
-$ python eval.py --data_path=./MNIST_Data --ckpt_path=./gradient_accumulation.ckpt
+$ python eval.py --data_path=./MNIST_Data --ckpt_path=./gradient_accumulation.ckpt --device_target=GPU
```
The output is as follows. The accuracy of the validation dataset is about 96.31%, which is the same as the result when the value of batch_size is 32.
diff --git a/tutorials/source_en/advanced_use/hub_tutorial.md b/tutorials/source_en/advanced_use/hub_tutorial.md
index 13e98abd3fa8aa362ba1b7613cd5bb613a82a43a..1b5c5d35ae7c2ad3b1f76c82ece4e6de57a268c8 100644
--- a/tutorials/source_en/advanced_use/hub_tutorial.md
+++ b/tutorials/source_en/advanced_use/hub_tutorial.md
@@ -1,52 +1,87 @@
-## Submitting, Loading and Fine-tuning Models using MindSpore Hub
+# Submitting, Loading and Fine-tuning Models using MindSpore Hub
-`Ascend` `GPU` `MindSpore Hub` `Model Submission` `Model Loading` `Model Fine-tuning` `Beginner` `Intermediate` `Expert`
+`Linux` `Ascend` `GPU` `MindSpore Hub` `Model Submission` `Model Loading` `Model Fine-tuning` `Beginner` `Intermediate` `Expert`
- [Submitting, Loading and Fine-tuning Models using MindSpore Hub](#submitting-loading-and-fine-tuning-models-using-mindspore-hub)
- - [Overview](#overview)
- - [How to submit models](#how-to-submit-models)
- - [Steps](#steps)
- - [How to load models](#how-to-load-models)
- - [Model Fine-tuning](#model-fine-tuning)
+ - [Overview](#overview)
+ - [How to submit models](#how-to-submit-models)
+ - [Steps](#steps)
+ - [How to load models](#how-to-load-models)
+ - [Model Fine-tuning](#model-fine-tuning)
-### Overview
+## Overview
-For algorithm developers who are interested in publishing models into MindSpore Hub, this tutorial introduces the specific steps to submit models using GoogleNet as an example. It also describes how to load/fine-tune MindSpore Hub models for application developers who aim to do inference/transfer learning on new dataset. In summary, this tutorial helps the algorithm developers submit models efficiently and enables the application developers to perform inference or fine-tuning using MindSpore Hub APIs quickly.
+MindSpore Hub is a pre-trained model application tool of the MindSpore ecosystem, which serves as a channel for model developers and application developers. It not only provides model developers with a convenient and fast channel for model submission, but also provides application developers with simple model loading and fine-tuning APIs. For model developers who are interested in publishing models into MindSpore Hub, this tutorial introduces the specific steps to submit models using GoogleNet as an example. It also describes how to load and fine-tune MindSpore Hub models for application developers who aim to do inference or transfer learning on a new dataset. In summary, this tutorial helps model developers submit models efficiently and enables application developers to perform inference or fine-tuning using MindSpore Hub APIs quickly.
-### How to submit models
+## How to submit models
-We accept publishing models to MindSpore Hub via PR in `hub` repo. Here we use GoogleNet as an example to list the steps of model submission to MindSpore Hub.
+We accept publishing models to MindSpore Hub via PR in [hub](https://gitee.com/mindspore/hub) repo. Here we use GoogleNet as an example to list the steps of model submission to MindSpore Hub.
-#### Steps
+### Steps
1. Host your pre-trained model in a storage location where we are able to access.
-2. Add a model generation python file called `mindspore_hub_conf.py` in your own repo using this [template](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/mindspore_hub_conf.py).
+2. Add a model generation Python file called `mindspore_hub_conf.py` in your own repo using this [template](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/mindspore_hub_conf.py). The location of the `mindspore_hub_conf.py` file is shown below:
+
+ ```shell script
+ googlenet
+ ├── src
+ │ ├── googlenet.py
+ ├── script
+ │ ├── run_train.sh
+ ├── train.py
+ ├── test.py
+ ├── mindspore_hub_conf.py
+ ```
-3. Create a `{model_name}_{model_version}_{dataset}.md` file in `hub/mshub_res/assets` using this [template](https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/mindspore/gpu/0.6/alexnet_v1_cifar10.md). For each pre-trained model, please run the following command to obtain a hash value required at `asset-sha256` of this `.md` file:
+3. Create a `{model_name}_{model_version}_{dataset}.md` file in `hub/mshub_res/assets/mindspore/ascend/0.7` using this [template](https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/mindspore/ascend/0.7/googlenet_v1_cifar10.md). Here `ascend` refers to the hardware platform for the pre-trained model, and `0.7` indicates the MindSpore version. The structure of the `hub/mshub_res` folder is as follows:
+
+ ```shell script
+ hub
+ ├── mshub_res
+ │ ├── assets
+ │ ├── mindspore
+ | ├── gpu
+ | ├── 0.7
+ | ├── ascend
+ | ├── 0.7
+ | ├── googlenet_v1_cifar10.md
+ │ ├── tools
+ | ├── get_sha256.py
+ | └── md_validator.py
+ ```
+
+ Note that it is required to fill in the `{model_name}_{model_version}_{dataset}.md` template by providing `file-format`, `asset-link` and `asset-sha256` below, which refer to the model file format, the model storage location from step 1, and the model hash value, respectively. The MindSpore Hub supports multiple model file formats including [MindSpore CKPT](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#checkpoint-configuration-policies), [AIR](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html), [MindIR](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-mindir-model), [ONNX](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html) and [MSLite](https://www.mindspore.cn/lite/tutorial/en/master/use/converter_tool.html).
+
+ ```shell script
+ file-format: ckpt
+ asset-link: https://download.mindspore.cn/model_zoo/official/cv/googlenet/goolenet_ascend_0.2.0_cifar10_official_classification_20200713/googlenet.ckpt
+ asset-sha256: 114e5acc31dad444fa8ed2aafa02ca34734419f602b9299f3b53013dfc71b0f7
+ ```
+ For each pre-trained model, please run the following command to obtain the hash value required by the `asset-sha256` field of this `.md` file. Here the pre-trained model `googlenet.ckpt` is accessed from the storage location in step 1 and then saved in the `tools` folder. The output hash value is: `114e5acc31dad444fa8ed2aafa02ca34734419f602b9299f3b53013dfc71b0f7`.
```python
cd ../tools
python get_sha256.py ../googlenet.ckpt
```
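+
+ The script computes the hash of the checkpoint file. A minimal equivalent sketch in Python, assuming the asset hash is the plain SHA-256 digest of the file:
+
+ ```python
+ import hashlib
+
+ def sha256_of(path, chunk_size=1 << 20):
+     # Stream the file in chunks and return its SHA-256 hex digest.
+     digest = hashlib.sha256()
+     with open(path, 'rb') as f:
+         for block in iter(lambda: f.read(chunk_size), b''):
+             digest.update(block)
+     return digest.hexdigest()
+
+ print(sha256_of('../googlenet.ckpt'))
+ ```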
-4. Check the format of the markdown file locally using `hub/mshub_res/tools/md_validator.py` by running the following command:
+4. Check the format of the markdown file locally using `hub/mshub_res/tools/md_validator.py` by running the following command. The output is `All Passed`, which indicates that the format and content of the `.md` file meet the requirements.
```python
python md_validator.py ../assets/mindspore/ascend/0.7/googlenet_v1_cifar10.md
```
-5. Create a PR in `mindspore/hub` repo.
+5. Create a PR in `mindspore/hub` repo. See our [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md) for more information about creating a PR.
-Once your PR is merged into master branch here, your model will show up in [MindSpore Hub Website](https://hub.mindspore.com/mindspore) within 24 hours. For more information, please refer to the [README](https://gitee.com/mindspore/hub/blob/master/mshub_res/README.md).
+Once your PR is merged into master branch here, your model will show up in [MindSpore Hub Website](https://hub.mindspore.com/mindspore) within 24 hours. Please refer to [README](https://gitee.com/mindspore/hub/blob/master/mshub_res/README.md) for more information about model submission.
-### How to load models
+## How to load models
`mindspore_hub.load` API is used to load the pre-trained model in a single line of code. The main process of model loading is as follows:
@@ -56,92 +91,119 @@ Once your PR is merged into master branch here, your model will show up in [Mind
- Complete the task of loading model using `url` , as shown in the example below:
-```python
-import mindspore_hub as mshub
-import mindspore
-from mindspore import context, Tensor, nn
-from mindspore.train.model import Model
-from mindspore.common import dtype as mstype
-from mindspore.dataset.transforms import py_transforms
-from PIL import Image
-import cv2
-
-context.set_context(mode=context.GRAPH_MODE,
- device_target="Ascend",
- device_id=0)
-
-model = "mindspore/ascend/0.7/googlenet_v1_cifar10"
+ ```python
-image = Image.open('cifar10/a.jpg')
-transforms = py_transforms.ComposeOp([py_transforms.ToTensor()])
+ import mindspore_hub as mshub
+ import mindspore
+ from mindspore import context, Tensor, nn
+ from mindspore.train.model import Model
+ from mindspore.common import dtype as mstype
+ import mindspore.dataset.vision.py_transforms as py_transforms
+
+ context.set_context(mode=context.GRAPH_MODE,
+ device_target="Ascend",
+ device_id=0)
+
+ model = "mindspore/ascend/0.7/googlenet_v1_cifar10"
+
+ # Initialize the number of classes based on the pre-trained model.
+ network = mshub.load(model, num_classes=10)
+ network.set_train(False)
+
+ # ...
-# Initialize the number of classes based on the pre-trained model.
-network = mshub.load(model, num_classes=10)
-network.set_train(False)
-out = network(transforms(image))
-```
+ ```
+- After loading the model, you can use MindSpore to do inference. You can refer to [here](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html).
-### Model Fine-tuning
+## Model Fine-tuning
-When loading a model with `mindspore_hub.load` API, we can add an extra argument to load the feature extraction part of the model only. So we can easily add new layers to perform transfer learning. *This feature can be found in the related model page when an extra argument (e.g., include_top) has been integrated into the model construction by the algorithm engineer.*
+When loading a model with `mindspore_hub.load` API, we can add an extra argument to load the feature extraction part of the model only. So we can easily add new layers to perform transfer learning. This feature can be found in the related model page when an extra argument (e.g., include_top) has been integrated into the model construction by the model developer. The value of `include_top` is True or False, indicating whether to keep the top layer in the fully-connected network.
-We use Googlenet as example to illustrate how to load a model trained on ImageNet dataset and then perform transfer learning (re-training) on specific sub-task dataset. The main steps are listed below:
+We use GoogleNet as an example to illustrate how to load a model trained on the ImageNet dataset and then perform transfer learning (re-training) on a specific sub-task dataset. The main steps are listed below:
1. Search the model of interest on [MindSpore Hub Website](https://hub.mindspore.com/mindspore) and get the related `url`.
-2. Load the model from MindSpore Hub using the `url`. *Note that the parameter `include_top` is provided by the model developer*.
+2. Load the model from MindSpore Hub using the `url`. Note that the parameter `include_top` is provided by the model developer.
```python
import mindspore
- from mindspore import nn
- from mindspore import context
+ from mindspore import nn, context, Tensor
+ from mindspore.train.serialization import save_checkpoint
+ from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits
+ from mindspore.ops import operations as P
+ from mindspore.nn import Momentum
+
+ import math
+ import numpy as np
+
import mindspore_hub as mshub
+ from src.dataset import create_dataset
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend",
save_graphs=False)
-
- network = mshub.load('mindspore/ascend/0.7/googlenet_v1_cifar10', include_top=False)
+ model_url = "mindspore/ascend/0.7/googlenet_v1_cifar10"
+ network = mshub.load(model_url, include_top=False, num_classes=1000)
network.set_train(False)
```
3. Add a new classification layer into current model architecture.
```python
+ class ReduceMeanFlatten(nn.Cell):
+ def __init__(self):
+ super(ReduceMeanFlatten, self).__init__()
+ self.mean = P.ReduceMean(keep_dims=True)
+ self.flatten = nn.Flatten()
+
+ def construct(self, x):
+ x = self.mean(x, (2, 3))
+ x = self.flatten(x)
+ return x
+
# Check MindSpore Hub website to conclude that the last output shape is 1024.
last_channel = 1024
# The number of classes in target task is 26.
num_classes = 26
+
+ reducemean_flatten = ReduceMeanFlatten()
+
classification_layer = nn.Dense(last_channel, num_classes)
classification_layer.set_train(True)
- train_network = nn.SequentialCell([network, classification_layer])
+ train_network = nn.SequentialCell([network, reducemean_flatten, classification_layer])
```
4. Define `loss` and `optimizer` for training.
```python
- from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits
+ epoch_size = 60
# Wrap the backbone network with loss.
- loss_fn = SoftmaxCrossEntropyWithLogits()
+ loss_fn = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
loss_net = nn.WithLossCell(train_network, loss_fn)
- # Create an optimizer.
- optim = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), Tensor(lr), config.momentum, config.weight_decay)
+ lr = get_lr(global_step=0,
+ lr_init=0,
+ lr_max=0.05,
+ lr_end=0.001,
+ warmup_epochs=5,
+ total_epochs=epoch_size)
+ # Create an optimizer.
+ optim = Momentum(filter(lambda x: x.requires_grad, loss_net.get_parameters()), Tensor(lr), 0.9, 4e-5)
train_net = nn.TrainOneStepCell(loss_net, optim)
```
-5. Create dataset and start fine-tuning.
+5. Create dataset and start fine-tuning. As shown below, the new dataset used for fine-tuning is the garbage classification data located in the `/ssd/data/garbage/train` folder.
```python
- from src.dataset import create_dataset
- from mindspore.train.serialization import _exec_save_checkpoint
-
- dataset = create_dataset("/ssd/data/garbage/train", do_train=True, batch_size=32)
-
- epoch_size = 15
+ dataset = create_dataset("/ssd/data/garbage/train",
+ do_train=True,
+ batch_size=32,
+ platform="Ascend",
+ repeat_num=1)
+
for epoch in range(epoch_size):
for i, items in enumerate(dataset):
data, label = items
@@ -149,10 +211,10 @@ We use Googlenet as example to illustrate how to load a model trained on ImageNe
label = mindspore.Tensor(label)
loss = train_net(data, label)
- print(f"epoch: {epoch}, loss: {loss}")
+ print(f"epoch: {epoch}/{epoch_size}, loss: {loss}")
# Save the ckpt file for each epoch.
ckpt_path = f"./ckpt/garbage_finetune_epoch{epoch}.ckpt"
- _exec_save_checkpoint(train_network, ckpt_path)
+ save_checkpoint(train_network, ckpt_path)
```
6. Eval on test set.
@@ -160,22 +222,31 @@ We use Googlenet as example to illustrate how to load a model trained on ImageNe
```python
from mindspore.train.serialization import load_checkpoint, load_param_into_net
- network = mshub.load('mindspore/ascend/0.7/googlenet_v1_cifar10', include_top=False)
- train_network = nn.SequentialCell([network, nn.Dense(last_channel, num_classes)])
+ network = mshub.load('mindspore/ascend/0.7/googlenet_v1_cifar10', pretrained=False,
+ include_top=False, num_classes=1000)
+
+ reducemean_flatten = ReduceMeanFlatten()
+
+ classification_layer = nn.Dense(last_channel, num_classes)
+ classification_layer.set_train(False)
+ softmax = nn.Softmax()
+ network = nn.SequentialCell([network, reducemean_flatten,
+ classification_layer, softmax])
# Load a pre-trained ckpt file.
- ckpt_path = "./ckpt/garbage_finetune_epoch15.ckpt"
+ ckpt_path = "./ckpt/garbage_finetune_epoch59.ckpt"
trained_ckpt = load_checkpoint(ckpt_path)
- load_param_into_net(train_network, trained_ckpt)
+ load_param_into_net(network, trained_ckpt)
# Define loss and create model.
- loss = SoftmaxCrossEntropyWithLogits()
- model = Model(network, loss_fn=loss, metrics={'acc'})
+ model = Model(network, metrics={'acc'}, eval_network=network)
- eval_dataset = create_dataset("/ssd/data/garbage/train", do_train=False,
- batch_size=32)
+ eval_dataset = create_dataset("/ssd/data/garbage/test",
+ do_train=True,
+ batch_size=32,
+ platform="Ascend",
+ repeat_num=1)
res = model.eval(eval_dataset)
print("result:", res, "ckpt=", ckpt_path)
- ```
-
+ ```
\ No newline at end of file
diff --git a/tutorials/source_en/advanced_use/images/cifar10_c_transforms.png b/tutorials/source_en/advanced_use/images/cifar10_c_transforms.png
new file mode 100644
index 0000000000000000000000000000000000000000..10dc267dc650764566f6d20b7f090e20c12f8e11
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/cifar10_c_transforms.png differ
diff --git a/tutorials/source_en/advanced_use/images/compose.png b/tutorials/source_en/advanced_use/images/compose.png
new file mode 100644
index 0000000000000000000000000000000000000000..97b8ca59f4438852526b56a8a7ce00ff63771b40
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/compose.png differ
diff --git a/tutorials/source_en/advanced_use/images/data_chart.png b/tutorials/source_en/advanced_use/images/data_chart.png
index f698c682119efc886b46a911d3c61f50ab017879..9f1d5f4247472602823649909d934ad6f7160005 100644
Binary files a/tutorials/source_en/advanced_use/images/data_chart.png and b/tutorials/source_en/advanced_use/images/data_chart.png differ
diff --git a/tutorials/source_en/advanced_use/images/data_enhancement_performance_scheme.png b/tutorials/source_en/advanced_use/images/data_enhancement_performance_scheme.png
new file mode 100644
index 0000000000000000000000000000000000000000..6417031a63dd2bade4902a83934c05aeee6be195
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/data_enhancement_performance_scheme.png differ
diff --git a/tutorials/source_en/advanced_use/images/data_label.png b/tutorials/source_en/advanced_use/images/data_label.png
index f76c645e26b28401285f00dd0613d27e3506982c..ac79c2d53fe416e96b9ac841692b26f3eaf6ddd2 100644
Binary files a/tutorials/source_en/advanced_use/images/data_label.png and b/tutorials/source_en/advanced_use/images/data_label.png differ
diff --git a/tutorials/source_en/advanced_use/images/data_loading_performance_scheme.png b/tutorials/source_en/advanced_use/images/data_loading_performance_scheme.png
new file mode 100644
index 0000000000000000000000000000000000000000..44c84c1f14dee40cdd76926994ab670494abc006
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/data_loading_performance_scheme.png differ
diff --git a/tutorials/source_en/advanced_use/images/data_table.png b/tutorials/source_en/advanced_use/images/data_table.png
index 65dcd39049b2754ef9ed22641981743f985e2b85..c9f73cd59b8202eff0121b4c57466f9b39d1d0b9 100644
Binary files a/tutorials/source_en/advanced_use/images/data_table.png and b/tutorials/source_en/advanced_use/images/data_table.png differ
diff --git a/tutorials/source_en/advanced_use/images/debugger_init_page.png b/tutorials/source_en/advanced_use/images/debugger_init_page.png
new file mode 100644
index 0000000000000000000000000000000000000000..e0fedfd5e48d8679ea601c390411a47bdb564881
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/debugger_init_page.png differ
diff --git a/tutorials/source_en/advanced_use/images/debugger_set_watch_point.png b/tutorials/source_en/advanced_use/images/debugger_set_watch_point.png
new file mode 100644
index 0000000000000000000000000000000000000000..5b984c5ce447b2dd68e3e5295d67d79bb2920985
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/debugger_set_watch_point.png differ
diff --git a/tutorials/source_en/advanced_use/images/debugger_tensor_compare.png b/tutorials/source_en/advanced_use/images/debugger_tensor_compare.png
new file mode 100644
index 0000000000000000000000000000000000000000..8e82a42a8b9addb6ea87f7841f09e1b2596902b2
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/debugger_tensor_compare.png differ
diff --git a/tutorials/source_en/advanced_use/images/debugger_tensor_info.png b/tutorials/source_en/advanced_use/images/debugger_tensor_info.png
new file mode 100644
index 0000000000000000000000000000000000000000..ac4246a4865f4bbc7e724d057582f5c1e5f8bc5e
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/debugger_tensor_info.png differ
diff --git a/tutorials/source_en/advanced_use/images/debugger_tensor_value.png b/tutorials/source_en/advanced_use/images/debugger_tensor_value.png
new file mode 100644
index 0000000000000000000000000000000000000000..faa2dc2992e528bdc46804be558497580c4e904b
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/debugger_tensor_value.png differ
diff --git a/tutorials/source_en/advanced_use/images/debugger_waiting.png b/tutorials/source_en/advanced_use/images/debugger_waiting.png
new file mode 100644
index 0000000000000000000000000000000000000000..8171fce24fdef74135dfe0f0368bdfebadca1c4b
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/debugger_waiting.png differ
diff --git a/tutorials/source_en/advanced_use/images/debugger_watch_point_hit.png b/tutorials/source_en/advanced_use/images/debugger_watch_point_hit.png
new file mode 100644
index 0000000000000000000000000000000000000000..c8920281f0200a2fe37bd28fe5f299205599d8bb
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/debugger_watch_point_hit.png differ
diff --git a/tutorials/source_en/advanced_use/images/fuzz_res.png b/tutorials/source_en/advanced_use/images/fuzz_res.png
new file mode 100644
index 0000000000000000000000000000000000000000..be6d022850438ff4b9c070f7225cbd950e1e3686
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/fuzz_res.png differ
diff --git a/tutorials/source_en/advanced_use/images/fuzz_seed.png b/tutorials/source_en/advanced_use/images/fuzz_seed.png
new file mode 100644
index 0000000000000000000000000000000000000000..cb138aebfabea1a1f778fbb65b6a0ee4533974e2
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/fuzz_seed.png differ
diff --git a/tutorials/source_en/advanced_use/images/lineage_label.png b/tutorials/source_en/advanced_use/images/lineage_label.png
index 56f6eb7dfd4cd39ce7c8ebf6fa5e2b0d61ea5871..15c88f91edb7e870246b85f9f4d96f00145d9199 100644
Binary files a/tutorials/source_en/advanced_use/images/lineage_label.png and b/tutorials/source_en/advanced_use/images/lineage_label.png differ
diff --git a/tutorials/source_en/advanced_use/images/lineage_model_chart.png b/tutorials/source_en/advanced_use/images/lineage_model_chart.png
index 32e307551e210a48cfbd5022fc2901e841dd9b8a..56d08cc34e51293a82aa63dd50fc1fa1c90e7ab3 100644
Binary files a/tutorials/source_en/advanced_use/images/lineage_model_chart.png and b/tutorials/source_en/advanced_use/images/lineage_model_chart.png differ
diff --git a/tutorials/source_en/advanced_use/images/lineage_model_table.png b/tutorials/source_en/advanced_use/images/lineage_model_table.png
index 923b3ee95c08f2a32437988aae99c1aba6d191ef..a288ac6aa099c69a8b7f5cf97183992adb94b71a 100644
Binary files a/tutorials/source_en/advanced_use/images/lineage_model_table.png and b/tutorials/source_en/advanced_use/images/lineage_model_table.png differ
diff --git a/tutorials/source_en/advanced_use/images/operator_fusion.png b/tutorials/source_en/advanced_use/images/operator_fusion.png
new file mode 100644
index 0000000000000000000000000000000000000000..4aa6ee89a0970889abc84f1b74b95297f2ae2db4
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/operator_fusion.png differ
diff --git a/tutorials/source_en/advanced_use/images/pipeline.png b/tutorials/source_en/advanced_use/images/pipeline.png
new file mode 100644
index 0000000000000000000000000000000000000000..bbb1a391f8378bc02f4d821d657f2c74c21ff24e
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/pipeline.png differ
diff --git a/tutorials/source_en/advanced_use/images/shuffle_performance_scheme.png b/tutorials/source_en/advanced_use/images/shuffle_performance_scheme.png
new file mode 100644
index 0000000000000000000000000000000000000000..f4c72a99fbade41067f9e6dfe6383634d06433a8
Binary files /dev/null and b/tutorials/source_en/advanced_use/images/shuffle_performance_scheme.png differ
diff --git a/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md b/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md
index b120a5f0fffc6e70905efce9869d2922333af9ec..aaebbfcd1d649672ad4dbd85e5d823926a29da01 100644
--- a/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md
+++ b/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md
@@ -47,7 +47,7 @@ The overview page on the left shows information about optimization objective and
Figure 4: Overview page
-Figure 4 shows the optimization objective distribution, parameter importance, and scatter plots.
+Figure 4 shows the optimization objective distribution, parameter importance, and scatter plots. You can select an optimization objective to view the parameter importance, and then click a histogram to view the scatter plot of the corresponding parameter and the optimization objective.
## Dataset Lineage
diff --git a/tutorials/source_en/advanced_use/mindinsight_commands.md b/tutorials/source_en/advanced_use/mindinsight_commands.md
index 8ed9fcbed9126ed1ea78140626d7b5bc2411b317..73ffc4d884c85048eaaf25f309bc52b11790f162 100644
--- a/tutorials/source_en/advanced_use/mindinsight_commands.md
+++ b/tutorials/source_en/advanced_use/mindinsight_commands.md
@@ -30,10 +30,12 @@ mindinsight --version
## Start the Service
```shell
-mindinsight start [-h] [--config ] [--workspace ]
- [--port ] [--url-path-prefix ]
- [--reload-interval ]
- [--summary-base-dir ]
+mindinsight start [-h] [--config {CONFIG}] [--workspace {WORKSPACE}]
+ [--port {PORT}] [--url-path-prefix {URL_PATH_PREFIX}]
+ [--reload-interval {RELOAD_INTERVAL}]
+ [--summary-base-dir {SUMMARY_BASE_DIR}]
+ [--enable-debugger {ENABLE_DEBUGGER}]
+ [--debugger-port {DEBUGGER_PORT}]
```
Optional parameters as follows:
@@ -41,12 +43,14 @@ Optional parameters as follows:
|Name|Argument|Description|Type|Default|Scope|Specifications|
|---|---|---|---|---|---|---|
|`-h, --help`|Optional|Displays the help information about the start command.|-|-|-|-|
-|`--config `|Optional|Specifies the configuration file or module.|String|Empty string|-|Physical file path (file:/path/to/config.py) or a module path (python:path.to.config.module) that can be identified by Python.|
-|`--workspace `|Optional|Specifies the working directory.|String|$HOME/mindinsight|-|-|
-|`--port `|Optional|Specifies the port number of the web visualization service.|Integer|8080|1~65535|-|
-|`--url-path-prefix `|Optional|Specifies the URL path prefix of the web visualization service.|String|Empty string|-|URL path prefix consists of segments separated by slashes. Each segment supports alphabets / digits / underscores / dashes / dots, but not single dot or double dots.|
-|`--reload-interval `|Optional|Specifies the interval (unit: second) for loading data.|Integer|3|-|The value 0 indicates that data is loaded only once.|
-|`--summary-base-dir `|Optional|Specifies the root directory for loading training log data.|String|./|-|MindInsight traverses the direct subdirectories in this directory and searches for log files. If a direct subdirectory contains log files, it is identified as the log file directory. If a root directory contains log files, it is identified as the log file directory.|
+|`--config {CONFIG}`|Optional|Specifies the configuration file or module.|String|Empty string|-|Physical file path (file:/path/to/config.py) or a module path (python:path.to.config.module) that can be identified by Python.|
+|`--workspace {WORKSPACE}`|Optional|Specifies the working directory.|String|$HOME/mindinsight|-|-|
+|`--port {PORT}`|Optional|Specifies the port number of the web visualization service.|Integer|8080|1~65535|-|
+|`--url-path-prefix {URL_PATH_PREFIX}`|Optional|Specifies the URL path prefix of the web visualization service.|String|Empty string|-|URL path prefix consists of segments separated by slashes. Each segment supports alphabets / digits / underscores / dashes / dots, but not single dot or double dots.|
+|`--reload-interval {RELOAD_INTERVAL}`|Optional|Specifies the interval (unit: second) for loading data.|Integer|3|-|The value 0 indicates that data is loaded only once.|
+|`--summary-base-dir {SUMMARY_BASE_DIR}`|Optional|Specifies the root directory for loading training log data.|String|./|-|MindInsight traverses the direct subdirectories in this directory and searches for log files. If a direct subdirectory contains log files, it is identified as the log file directory. If a root directory contains log files, it is identified as the log file directory.|
+|`--enable-debugger {ENABLE_DEBUGGER}`|Optional|Whether to launch the MindInsight Debugger.|Boolean|False|True/False|-|
+|`--debugger-port {DEBUGGER_PORT}`|Optional|Specifies the port number of the debugger server.|Integer|50051|1~65535|-|
> When the service is started, the parameter values of the command line are saved as the environment variables of the process and start with `MINDINSIGHT_`, for example, `MINDINSIGHT_CONFIG`, `MINDINSIGHT_WORKSPACE`, and `MINDINSIGHT_PORT`.
diff --git a/tutorials/source_en/advanced_use/mixed_precision.md b/tutorials/source_en/advanced_use/mixed_precision.md
index b211b1737c85729308459f6cb6e71bd15ad1b977..a49dbaf22adeb18c727998bddb7dda36b7eac78a 100644
--- a/tutorials/source_en/advanced_use/mixed_precision.md
+++ b/tutorials/source_en/advanced_use/mixed_precision.md
@@ -39,14 +39,14 @@ This document describes the computation process by using examples of automatic a
## Automatic Mixed Precision
-To use the automatic mixed precision, you need to invoke the corresponding API, which takes the network to be trained and the optimizer as the input. This API converts the operators of the entire network into FP16 operators (except the `BatchNorm` and Loss operators).
+To use the automatic mixed precision, you need to invoke the corresponding API, which takes the network to be trained and the optimizer as the input. This API converts the operators of the entire network into FP16 operators (except the `BatchNorm` and Loss operators). You can use automatic mixed precision through the `amp` API or the `Model` API.
-The procedure is as follows:
-1. Introduce the MindSpore mixed precision API.
+The procedure for using automatic mixed precision through the `amp` API is as follows:
+1. Introduce the MindSpore mixed precision API `amp`.
2. Define the network. This step is the same as the common network definition. (You do not need to manually configure the precision of any specific operator.)
-3. Use the `amp.build_train_network` API to encapsulate the network model and optimizer. In this step, MindSpore automatically converts the operators to the required format.
+3. Use the `amp.build_train_network` API to encapsulate the network model and optimizer. For details about how to set the `level` parameter, see . In this step, MindSpore automatically converts the operators to the required format.
A code example is as follows:
@@ -92,6 +92,77 @@ train_network = amp.build_train_network(net, optimizer, loss, level="O3", loss_s
output = train_network(predict, label)
```
+The procedure for using automatic mixed precision through the `Model` API is as follows:
+1. Introduce the MindSpore model API `Model`.
+
+2. Define the network. This step is the same as the common network definition. (You do not need to manually configure the precision of any specific operator.)
+
+3. Create the dataset. For detailed steps, see .
+
+4. Use the `Model` API to encapsulate the network model and optimizer. For details about how to set the `amp_level` parameter, see . In this step, MindSpore automatically converts the operators to the required format.
+
+A code example is as follows:
+
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import context
+from mindspore.common.initializer import Normal
+from mindspore.train import Model
+from mindspore.nn.metrics import Accuracy
+from src.dataset import create_dataset
+
+context.set_context(mode=context.GRAPH_MODE)
+context.set_context(device_target="Ascend")
+
+# Define network
+class LeNet5(nn.Cell):
+ """
+ Lenet network
+
+ Args:
+ num_class (int): Number of classes. Default: 10.
+ num_channel (int): Number of channels. Default: 1.
+
+ Returns:
+ Tensor, output tensor
+ Examples:
+ >>> LeNet(num_class=10)
+
+ """
+ def __init__(self, num_class=10, num_channel=1):
+ super(LeNet5, self).__init__()
+ self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
+ self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
+ self.fc1 = nn.Dense(16 * 5 * 5, 120, weight_init=Normal(0.02))
+ self.fc2 = nn.Dense(120, 84, weight_init=Normal(0.02))
+ self.fc3 = nn.Dense(84, num_class, weight_init=Normal(0.02))
+ self.relu = nn.ReLU()
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.flatten = nn.Flatten()
+
+ def construct(self, x):
+ x = self.max_pool2d(self.relu(self.conv1(x)))
+ x = self.max_pool2d(self.relu(self.conv2(x)))
+ x = self.flatten(x)
+ x = self.relu(self.fc1(x))
+ x = self.relu(self.fc2(x))
+ x = self.fc3(x)
+ return x
+
+# create dataset
+ds_train = create_dataset("/dataset/train", 32)
+
+# Initialize network
+network = LeNet5(10)
+
+# Define Loss and Optimizer
+net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
+net_opt = nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9)
+model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()}, amp_level="O3")
+
+# Run training
+model.train(epoch=10, train_dataset=ds_train)
+```
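+
+The `create_dataset` helper imported above comes from the sample project's `src/dataset.py` and is not shown in this tutorial. A minimal sketch of such a helper (the MNIST layout, resize target, and transform choices below are illustrative assumptions, not the sample's exact code) is:
+
+```python
+import mindspore.dataset as ds
+import mindspore.dataset.vision.c_transforms as CV
+import mindspore.dataset.transforms.c_transforms as C
+import mindspore.common.dtype as mstype
+
+def create_dataset(data_path, batch_size=32):
+    """Create a MNIST pipeline: cast labels, resize, rescale, transpose, batch."""
+    mnist_ds = ds.MnistDataset(data_path)
+    # cast labels to int32, as expected by SoftmaxCrossEntropyWithLogits
+    mnist_ds = mnist_ds.map(operations=C.TypeCast(mstype.int32), input_columns="label")
+    # resize to the 32x32 input LeNet5 expects, rescale pixels to [0, 1], HWC -> CHW
+    image_ops = [CV.Resize((32, 32)), CV.Rescale(1.0 / 255.0, 0.0), CV.HWC2CHW()]
+    mnist_ds = mnist_ds.map(operations=image_ops, input_columns="image")
+    return mnist_ds.batch(batch_size, drop_remainder=True)
+```
+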
## Manual Mixed Precision
diff --git a/tutorials/source_en/advanced_use/model_scripts_transformation.md b/tutorials/source_en/advanced_use/model_scripts_transformation.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e4ba4dc0d26c89abba7ac34209d49aef13af325
--- /dev/null
+++ b/tutorials/source_en/advanced_use/model_scripts_transformation.md
@@ -0,0 +1,205 @@
+# Model Scripts Transformation
+
+`Linux` `Ascend` `Model Development` `Beginner`
+
+
+
+- [Model Scripts Transformation](#Model-Scripts-Transformation)
+ - [Overview](#Overview)
+ - [Installation](#Installation)
+ - [Usage](#Usage)
+ - [Scenario](#Scenario)
+ - [Example](#Example)
+ - [AST-Based Conversion](#AST-Based-Conversion)
+ - [Graph-Based Conversion](#Graph-Based-Conversion)
+ - [Caution](#Caution)
+
+
+
+
+
+## Overview
+
+MindConverter is a migration tool to transform model scripts from PyTorch to MindSpore. Users can migrate their PyTorch models to MindSpore rapidly with minor changes according to the conversion report.
+
+
+## Installation
+
+MindConverter is a submodule of MindInsight. Please follow the [Guide](https://www.mindspore.cn/install/en) to install MindInsight.
+
+
+## Usage
+
+MindConverter currently provides a command-line interface only. Here is its manual page.
+
+```bash
+usage: mindconverter [-h] [--version] [--in_file IN_FILE]
+ [--model_file MODEL_FILE] [--shape SHAPE]
+ [--output OUTPUT] [--report REPORT]
+ [--project_path PROJECT_PATH]
+
+optional arguments:
+ -h, --help show this help message and exit
+ --version show program version number and exit
+ --in_file IN_FILE Specify path for script file to use AST schema to do
+                       script conversion.
+ --model_file MODEL_FILE
+ PyTorch .pth model file path to use graph based schema
+ to do script generation. When `--in_file` and
+ `--model_file` are both provided, use AST schema as
+ default.
+  --shape SHAPE         Optional, expected input tensor shape of
+ `--model_file`. It is required when use graph based
+ schema. Usage: --shape 3,244,244
+ --output OUTPUT Optional, specify path for converted script file
+ directory. Default output directory is `output` folder
+ in the current working directory.
+ --report REPORT Optional, specify report directory. Default is
+ converted script directory.
+ --project_path PROJECT_PATH
+ Optional, PyTorch scripts project path. If PyTorch
+ project is not in PYTHONPATH, please assign
+ `--project_path` when use graph based schema. Usage:
+ --project_path ~/script_file/
+
+```
+
+**MindConverter provides two modes:**
+
+1. **Abstract Syntax Tree (AST) based conversion**: Using the `--in_file` argument enables the AST mode.
+2. **Computational Graph based conversion**: Using the `--model_file` and `--shape` arguments enables the Graph mode.
+
+> The AST mode will be enabled if both `--in_file` and `--model_file` are specified.
+
+For the Graph mode, `--shape` is mandatory.
+
+For the AST mode, `--shape` is ignored.
+
+`--output` and `--report` are optional. MindConverter creates an `output` folder under the current working directory, and outputs generated scripts and conversion reports to it.
+
+Please make sure that your original PyTorch project is included in the module search path (PYTHONPATH). Use the Python interpreter to check that your module can be successfully loaded with the `import` command. If your project is not in PYTHONPATH, set `--project_path` to ensure MindConverter can load it.
+
+> Assume the project is located at `/home/user/project/model_training`, users can use this command to add the project to `PYTHONPATH` : `export PYTHONPATH=/home/user/project/model_training:$PYTHONPATH`
+
+> MindConverter needs the original PyTorch scripts because of the reverse serialization.
+
+
+
+## Scenario
+
+MindConverter provides two modes for different migration demands.
+
+1. Keep the structures of the original scripts, including variables, functions, and libraries.
+2. Require as few extra modifications as possible after conversion, or none at all.
+
+The AST mode is recommended for the first demand. It parses and analyzes the PyTorch scripts, then replaces them with the MindSpore AST to generate code. Theoretically, the AST mode supports any model script. However, the conversion result may differ depending on the coding style of the original scripts.
+
+For the second demand, the Graph mode is recommended. As the computational graph is a standard descriptive language, it is not affected by the user's coding style. This mode may convert more operators, as long as these operators are supported by MindConverter.
+
+Some typical image classification networks such as ResNet and VGG have been tested for the Graph mode. Note that:
+
+> 1. Currently, the Graph mode does not support models with multiple inputs. Only models with a single input and single output are supported.
+> 2. The Dropout operator will be lost after conversion because the inference mode is used to load the PyTorch model. Manual re-implementation is necessary.
+> 3. The Graph-based mode will be continuously developed and optimized with further updates.
+
+
+## Example
+
+### AST-Based Conversion
+
+Assume the PyTorch script is located at `/home/user/model.py`, the transformed MindSpore script is to be output to `/home/user/output`, and the conversion report to `/home/user/output/report`. Use the following command:
+
+```bash
+mindconverter --in_file /home/user/model.py \
+ --output /home/user/output \
+ --report /home/user/output/report
+```
+
+In the conversion report, non-transformed code is listed as follows:
+
+```text
+line : [UnConvert] 'operator' didn't convert. ...
+```
+
+For non-transformed operators, the original code is kept. Please migrate them manually. [Click here](https://www.mindspore.cn/docs/en/master/index.html#operator_api) for more information about operator mapping.
+
+
+Here is an example of the conversion report:
+```text
+ [Start Convert]
+ [Insert] 'import mindspore.ops.operations as P' is inserted to the converted file.
+ line 1:0: [Convert] 'import torch' is converted to 'import mindspore'.
+ ...
+ line 157:23: [UnConvert] 'nn.AdaptiveAvgPool2d' didn't convert. Maybe could convert to mindspore.ops.operations.ReduceMean.
+ ...
+ [Convert Over]
+```
+
+For non-transformed operators, suggestions are provided in the report. For instance, MindConverter suggests replacing `torch.nn.AdaptiveAvgPool2d` with `mindspore.ops.operations.ReduceMean`.
+
+
+### Graph-Based Conversion
+
+Assume the PyTorch model (.pth file) is located at `/home/user/model.pth`, with input shape (3, 224, 224) and the original PyTorch script is at `/home/user/project/model_training`. Output the transformed MindSpore script to `/home/user/output`, with the conversion report to `/home/user/output/report`. Use the following command:
+
+```bash
+mindconverter --model_file /home/user/model.pth --shape 3,224,224 \
+ --output /home/user/output \
+ --report /home/user/output/report \
+ --project_path /home/user/project/model_training
+```
+
+The Graph mode produces the same conversion report as the AST mode. However, the line and column numbers refer to the transformed scripts, since no original scripts are used in the process.
+
+In addition, the input and output tensor shapes of unconverted operators are shown explicitly (`input_shape` and `output_shape`) as comments in the converted scripts to help further manual modification. Here is an example of the `Reshape` operator (not supported in the current version):
+
+```python
+class Classifier(nn.Cell):
+
+ def __init__(self):
+ super(Classifier, self).__init__()
+ ...
+ self.reshape = onnx.Reshape(input_shape=(1, 1280, 1, 1),
+ output_shape=(1, 1280))
+ ...
+
+ def construct(self, x):
+ ...
+ # Suppose input of `reshape` is x.
+ reshape_output = self.reshape(x)
+ ...
+
+```
+
+It is convenient to replace the operators according to the `input_shape` and `output_shape` parameters. The replacement looks like this:
+
+```python
+from mindspore.ops import operations as P
+...
+
+class Classifier(nn.Cell):
+
+ def __init__(self):
+ super(Classifier, self).__init__()
+ ...
+        # P.Reshape takes no constructor arguments; the target shape from
+        # `output_shape=(1, 1280)` is passed at call time in construct below
+        self.reshape = P.Reshape()
+ ...
+
+ def construct(self, x):
+ ...
+ # Suppose input of `reshape` is x.
+ reshape_output = self.reshape(x, (1, 1280))
+ ...
+
+```
+
+> Note: `--output` and `--report` are optional. MindConverter creates an `output` folder under the current working directory, and outputs generated scripts and conversion reports to it.
+
+
+## Caution
+
+1. PyTorch is not an explicitly stated dependency of MindInsight. The Graph mode conversion requires the same PyTorch version as the one used to train the model. (MindConverter recommends PyTorch 1.4.0 or 1.6.0.)
+2. This script conversion tool relies on operators supported by MindConverter and MindSpore. Unsupported operators may not be successfully mapped to MindSpore operators. You can edit the scripts manually, or implement the mapping based on MindConverter and contribute it to our MindInsight repository. We appreciate your support for the MindSpore community.
+
+
diff --git a/tutorials/source_en/advanced_use/model_security.md b/tutorials/source_en/advanced_use/model_security.md
index a13cf2a91dc17a44cb47c36ae7bd11a60784eb89..972d42574ada423ffdbe7c62df1f611d34fa30ed 100644
--- a/tutorials/source_en/advanced_use/model_security.md
+++ b/tutorials/source_en/advanced_use/model_security.md
@@ -59,9 +59,9 @@ from mindspore import Tensor
from mindspore import context
from mindspore.train.callback import LossMonitor
-from mindarmour.attacks.gradient_method import FastGradientSignMethod
+from mindarmour.adv_robustness.attacks import FastGradientSignMethod
from mindarmour.utils.logger import LogUtil
-from mindarmour.evaluations.attack_evaluation import AttackEvaluate
+from mindarmour.adv_robustness.evaluations import AttackEvaluate
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
@@ -99,7 +99,7 @@ def generate_mnist_dataset(data_path, batch_size=32, repeat_size=1,
# apply map operations on images
if not sparse:
one_hot_enco = C.OneHot(10)
- ds1 = ds1.map(input_columns="label", operations=one_hot_enco,
+ ds1 = ds1.map(operations=one_hot_enco, input_columns="label",
num_parallel_workers=num_parallel_workers)
type_cast_op = C.TypeCast(mstype.float32)
ds1 = ds1.map(operations=type_cast_op, input_columns="label",
@@ -178,7 +178,7 @@ The LeNet model is used as an example. You can also create and train your own mo
2. Train LeNet model. Use the defined data loading function `generate_mnist_dataset` to load data.
```python
- mnist_path = "./MNIST_unzip/"
+ mnist_path = "./MNIST/"
batch_size = 32
# train original model
ds_train = generate_mnist_dataset(os.path.join(mnist_path, "train"),
@@ -198,8 +198,8 @@ The LeNet model is used as an example. You can also create and train your own mo
inputs = []
labels = []
for data in ds_test.create_tuple_iterator():
- inputs.append(data[0].astype(np.float32))
- labels.append(data[1])
+ inputs.append(data[0].asnumpy().astype(np.float32))
+ labels.append(data[1].asnumpy())
test_inputs = np.concatenate(inputs)
test_labels = np.concatenate(labels)
```
@@ -297,7 +297,7 @@ Natural Adversarial Defense (NAD) is a simple and effective adversarial example
Call the NAD API provided by MindArmour.
```python
-from mindarmour.defenses import NaturalAdversarialDefense
+from mindarmour.adv_robustness.defenses import NaturalAdversarialDefense
# defense
diff --git a/tutorials/source_en/advanced_use/optimize_the_performance_of_data_preparation.md b/tutorials/source_en/advanced_use/optimize_the_performance_of_data_preparation.md
new file mode 100644
index 0000000000000000000000000000000000000000..017e6022b2377d186a6ea1f01e94a529dfaf901c
--- /dev/null
+++ b/tutorials/source_en/advanced_use/optimize_the_performance_of_data_preparation.md
@@ -0,0 +1,389 @@
+# Optimizing the Data Preparation Performance
+
+`Linux` `Ascend` `GPU` `CPU` `Data Preparation` `Beginner` `Intermediate` `Expert`
+
+
+
+- [Optimizing the Data Preparation Performance](#optimizing-the-data-preparation-performance)
+ - [Overview](#overview)
+ - [Overall Process](#overall-process)
+ - [Preparations](#preparations)
+ - [Importing Modules](#importing-modules)
+ - [Downloading the Required Dataset](#downloading-the-required-dataset)
+ - [Optimizing the Data Loading Performance](#optimizing-the-data-loading-performance)
+ - [Performance Optimization Solution](#performance-optimization-solution)
+ - [Code Example](#code-example)
+ - [Optimizing the Shuffle Performance](#optimizing-the-shuffle-performance)
+ - [Performance Optimization Solution](#performance-optimization-solution-1)
+ - [Code Example](#code-example-1)
+ - [Optimizing the Data Augmentation Performance](#optimizing-the-data-augmentation-performance)
+ - [Performance Optimization Solution](#performance-optimization-solution-2)
+ - [Code Example](#code-example-2)
+ - [Performance Optimization Solution Summary](#performance-optimization-solution-summary)
+ - [Multi-thread Optimization Solution](#multi-thread-optimization-solution)
+ - [Multi-process Optimization Solution](#multi-process-optimization-solution)
+ - [Compose Optimization Solution](#compose-optimization-solution)
+ - [Operator Fusion Optimization Solution](#operator-fusion-optimization-solution)
+
+
+
+
+
+## Overview
+
+Data is the most important factor of deep learning. Data quality determines the upper limit of the deep learning result, whereas model quality enables the result to approach that upper limit. Therefore, high-quality data input is beneficial to the entire deep neural network. During the entire data processing and data augmentation process, data continuously flows through a "pipeline" to the training system, as shown in the following figure:
+
+
+
+MindSpore provides data processing and data augmentation functions for users. In the pipeline process, if each step can be properly used, the data performance will be greatly improved. This section describes how to optimize performance during data loading, data processing, and data augmentation based on the CIFAR-10 dataset.
+
+## Overall Process
+- Prepare data.
+- Optimize the data loading performance.
+- Optimize the shuffle performance.
+- Optimize the data augmentation performance.
+- Summarize the performance optimization solution.
+
+## Preparations
+
+### Importing Modules
+
+The `dataset` module provides APIs for loading and processing datasets.
+
+
+```python
+import mindspore.dataset as ds
+```
+
+The `numpy` module is used to generate ndarrays.
+
+
+```python
+import numpy as np
+```
+
+### Downloading the Required Dataset
+
+1. Create the `./dataset/Cifar10Data` directory in the current working directory. The dataset used for this practice is stored in this directory.
+2. Create the `./transform` directory in the current working directory. The dataset generated during the practice is stored in this directory.
+3. Download [the CIFAR-10 dataset in binary format](https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz) and decompress the dataset file to the `./dataset/Cifar10Data/cifar-10-batches-bin` directory. The dataset will be used during data loading.
+4. Download [the CIFAR-10 dataset in Python format](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz) and decompress the dataset file to the `./dataset/Cifar10Data/cifar-10-batches-py` directory. The dataset will be used for data conversion.
+
+The directory structure is as follows:
+
+
+ dataset/Cifar10Data
+ ├── cifar-10-batches-bin
+ │ ├── batches.meta.txt
+ │ ├── data_batch_1.bin
+ │ ├── data_batch_2.bin
+ │ ├── data_batch_3.bin
+ │ ├── data_batch_4.bin
+ │ ├── data_batch_5.bin
+ │ ├── readme.html
+ │ └── test_batch.bin
+ └── cifar-10-batches-py
+ ├── batches.meta
+ ├── data_batch_1
+ ├── data_batch_2
+ ├── data_batch_3
+ ├── data_batch_4
+ ├── data_batch_5
+ ├── readme.html
+ └── test_batch
+
+In the preceding information:
+- The `cifar-10-batches-bin` directory is the directory for storing the CIFAR-10 dataset in binary format.
+- The `cifar-10-batches-py` directory is the directory for storing the CIFAR-10 dataset in Python file format.
+
+## Optimizing the Data Loading Performance
+
+MindSpore provides multiple data loading methods, including common dataset loading, user-defined dataset loading, and MindSpore data format loading. For details, see [Loading Datasets](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/loading_the_datasets.html). The dataset loading performance varies depending on the underlying implementation method.
+
+| | Common Dataset | User-defined Dataset | MindRecord Dataset |
+| :----: | :----: | :----: | :----: |
+| Underlying implementation | C++ | Python | C++ |
+| Performance | High | Medium | High |
+
+### Performance Optimization Solution
+
+
+
+Suggestions on data loading performance optimization are as follows:
+- Built-in loading operators are preferred for supported dataset formats. For details, see [Built-in Loading Operators](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.dataset.html). If the performance cannot meet the requirements, use the multi-thread concurrency solution. For details, see [Multi-thread Optimization Solution](#multi-thread-optimization-solution).
+- For a dataset format that is not supported, convert the format to MindSpore data format and then use the `MindDataset` class to load the dataset. For details, see [Converting Datasets into MindSpore Data Format](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html). If the performance cannot meet the requirements, use the multi-thread concurrency solution. For details, see [Multi-thread Optimization Solution](#multi-thread-optimization-solution).
+- For dataset formats that are not supported, the user-defined `GeneratorDataset` class is preferred for implementing fast algorithm verification. If the performance cannot meet the requirements, the multi-process concurrency solution can be used. For details, see [Multi-process Optimization Solution](#multi-process-optimization-solution).
+
+### Code Example
+
+Based on the preceding suggestions of data loading performance optimization, the `Cifar10Dataset` class of built-in loading operators, the `MindDataset` class after data conversion, and the `GeneratorDataset` class are used to load data. The sample code is displayed as follows:
+
+1. Use the `Cifar10Dataset` class of built-in operators to load the CIFAR-10 dataset in binary format. The multi-thread optimization solution is used for data loading. Four threads are enabled to concurrently complete the task. Finally, a dictionary iterator is created for the data and a data record is read through the iterator.
+
+
+ ```python
+ cifar10_path = "./dataset/Cifar10Data/cifar-10-batches-bin/"
+
+ # create Cifar10Dataset for reading data
+ cifar10_dataset = ds.Cifar10Dataset(cifar10_path, num_parallel_workers=4)
+ # create a dictionary iterator and read a data record through the iterator
+ print(next(cifar10_dataset.create_dict_iterator()))
+ ```
+
+ The output is as follows:
+ ```
+ {'image': Tensor(shape=[32, 32, 3], dtype=UInt8, value=
+ [[[235, 235, 235],
+ [230, 230, 230],
+ [234, 234, 234],
+ ...,
+ [248, 248, 248],
+ [248, 248, 248],
+ [249, 249, 249]],
+ ...,
+ [120, 120, 119],
+ [146, 146, 146],
+ [177, 174, 190]]]), 'label': Tensor(shape=[], dtype=UInt32, value= 9)}
+ ```
+
+2. Use the `Cifar10ToMR` class to convert the CIFAR-10 dataset into MindSpore data format. In this example, the CIFAR-10 dataset in Python file format is used. Then use the `MindDataset` class to load the dataset in MindSpore data format. The multi-thread optimization solution is used for data loading. Four threads are enabled to concurrently complete the task. Finally, a dictionary iterator is created for data and a data record is read through the iterator.
+
+
+ ```python
+ from mindspore.mindrecord import Cifar10ToMR
+
+ cifar10_path = './dataset/Cifar10Data/cifar-10-batches-py/'
+ cifar10_mindrecord_path = './transform/cifar10.record'
+
+ cifar10_transformer = Cifar10ToMR(cifar10_path, cifar10_mindrecord_path)
+ # executes transformation from Cifar10 to MindRecord
+ cifar10_transformer.transform(['label'])
+
+ # create MindDataset for reading data
+ cifar10_mind_dataset = ds.MindDataset(dataset_file=cifar10_mindrecord_path, num_parallel_workers=4)
+ # create a dictionary iterator and read a data record through the iterator
+ print(next(cifar10_mind_dataset.create_dict_iterator()))
+ ```
+
+ The output is as follows:
+ ```
+ {'data': Tensor(shape=[1431], dtype=UInt8, value= [255, 216, 255, ..., 63, 255, 217]), 'id': Tensor(shape=[], dtype=Int64, value= 30474), 'label': Tensor(shape=[], dtype=Int64, value= 2)}
+ ```
+
+3. The `GeneratorDataset` class is used to load the user-defined dataset, and the multi-process optimization solution is used. Four processes are enabled to concurrently complete the task. Finally, a dictionary iterator is created for the data, and a data record is read through the iterator.
+
+
+ ```python
+ def generator_func(num):
+ for i in range(num):
+ yield (np.array([i]),)
+
+ # create GeneratorDataset for reading data
+ dataset = ds.GeneratorDataset(source=generator_func(5), column_names=["data"], num_parallel_workers=4)
+ # create a dictionary iterator and read a data record through the iterator
+ print(next(dataset.create_dict_iterator()))
+ ```
+
+ The output is as follows:
+ ```
+ {'data': Tensor(shape=[1], dtype=Int64, value= [0])}
+ ```
+
+## Optimizing the Shuffle Performance
+
+The shuffle operation is used to shuffle ordered or repeated datasets. MindSpore provides the `shuffle` function for users. A larger value of `buffer_size` indicates a higher shuffling degree, but also more time and computing resources consumed. This API allows users to shuffle the data at any point in the pipeline. For details, see [Shuffle Processing](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/data_processing_and_augmentation.html#shuffle). However, because the underlying implementations differ, the performance of this method is not as good as that of setting the `shuffle` parameter of the [Built-in Loading Operators](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.dataset.html) to shuffle data directly.
+
+### Performance Optimization Solution
+
+
+
+Suggestions on shuffle performance optimization are as follows:
+- Use the `shuffle` parameter of built-in loading operators to shuffle data.
+- If the `shuffle` function is used and the performance still cannot meet the requirements, increase the value of the `buffer_size` parameter to improve the performance.
+
+### Code Example
+
+Based on the preceding shuffle performance optimization suggestions, the `shuffle` parameter of the `Cifar10Dataset` class of built-in loading operators and the `Shuffle` function are used to shuffle data. The sample code is displayed as follows:
+
+1. Use the built-in loading operator `Cifar10Dataset` to load the CIFAR-10 dataset. In this example, the CIFAR-10 dataset in binary format is used, and the `shuffle` parameter is set to True to shuffle the data. Finally, a dictionary iterator is created for the data and a data record is read through the iterator.
+
+
+ ```python
+ cifar10_path = "./dataset/Cifar10Data/cifar-10-batches-bin/"
+
+ # create Cifar10Dataset for reading data
+ cifar10_dataset = ds.Cifar10Dataset(cifar10_path, shuffle=True)
+ # create a dictionary iterator and read a data record through the iterator
+ print(next(cifar10_dataset.create_dict_iterator()))
+ ```
+
+ The output is as follows:
+ ```
+ {'image': Tensor(shape=[32, 32, 3], dtype=UInt8, value=
+ [[[235, 235, 235],
+ [230, 230, 230],
+ [234, 234, 234],
+ ...,
+ [248, 248, 248],
+ [248, 248, 248],
+ [249, 249, 249]],
+ ...,
+ [120, 120, 119],
+ [146, 146, 146],
+ [177, 174, 190]]]), 'label': Tensor(shape=[], dtype=UInt32, value= 9)}
+ ```
+
+2. Use the `shuffle` function to shuffle data. Set `buffer_size` to 3 and use the `GeneratorDataset` class to generate data.
+
+
+ ```python
+ def generator_func():
+ for i in range(5):
+ yield (np.array([i, i+1, i+2, i+3, i+4]),)
+
+ ds1 = ds.GeneratorDataset(source=generator_func, column_names=["data"])
+ print("before shuffle:")
+ for data in ds1.create_dict_iterator():
+ print(data["data"])
+
+ ds2 = ds1.shuffle(buffer_size=3)
+ print("after shuffle:")
+ for data in ds2.create_dict_iterator():
+ print(data["data"])
+ ```
+
+    The output is as follows:
+
+    ```
+ before shuffle:
+ [0 1 2 3 4]
+ [1 2 3 4 5]
+ [2 3 4 5 6]
+ [3 4 5 6 7]
+ [4 5 6 7 8]
+ after shuffle:
+ [2 3 4 5 6]
+ [0 1 2 3 4]
+ [4 5 6 7 8]
+ [1 2 3 4 5]
+ [3 4 5 6 7]
+ ```
+
+## Optimizing the Data Augmentation Performance
+
+During image classification training, especially when the dataset is small, users can use data augmentation to preprocess images to enrich the dataset. MindSpore provides multiple data augmentation methods, including:
+- Use the built-in C operator (`c_transforms` module) to perform data augmentation.
+- Use the built-in Python operator (`py_transforms` module) to perform data augmentation.
+- Users can define Python functions as needed to perform data augmentation.
+
+For details, see [Data Augmentation](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/data_processing_and_augmentation.html#id3). The performance varies according to the underlying implementation methods.
+
+| Module | Underlying API | Description |
+| :----: | :----: | :----: |
+| c_transforms | C++ (based on OpenCV) | High performance |
+| py_transforms | Python (based on PIL) | This module provides multiple image augmentation functions and the method for converting PIL images into NumPy arrays. |
+
+
+### Performance Optimization Solution
+
+
+
+
+Suggestions on data augmentation performance optimization are as follows:
+- The `c_transforms` module is preferentially used to perform data augmentation for its highest performance. If the performance cannot meet the requirements, refer to [Multi-thread Optimization Solution](#multi-thread-optimization-solution), [Compose Optimization Solution](#compose-optimization-solution), or [Operator Fusion Optimization Solution](#operator-fusion-optimization-solution).
+- If the `py_transforms` module is used to perform data augmentation and the performance still cannot meet the requirements, refer to [Multi-thread Optimization Solution](#multi-thread-optimization-solution), [Multi-process Optimization Solution](#multi-process-optimization-solution), [Compose Optimization Solution](#compose-optimization-solution), or [Operator Fusion Optimization Solution](#operator-fusion-optimization-solution).
+- The `c_transforms` module maintains buffer management in C++, and the `py_transforms` module maintains buffer management in Python. Because of the performance cost of switching between Python and C++, it is advised not to use different operator types together.
+- If the user-defined Python functions are used to perform data augmentation and the performance still cannot meet the requirements, use the [Multi-thread Optimization Solution](#multi-thread-optimization-solution) or [Multi-process Optimization Solution](#multi-process-optimization-solution). If the performance still cannot be improved, optimize the user-defined Python code.
+
+### Code Example
+
+Based on the preceding suggestions of data augmentation performance optimization, the `c_transforms` module and user-defined Python function are used to perform data augmentation. The code is displayed as follows:
+
+1. The `c_transforms` module is used to perform data augmentation. During data augmentation, the multi-thread optimization solution is used. Four threads are enabled to concurrently complete the task. The operator fusion optimization solution is used and the `RandomResizedCrop` fusion class is used to replace the `RandomResize` and `RandomCrop` classes.
+
+
+ ```python
+ import mindspore.dataset.transforms.c_transforms as c_transforms
+ import mindspore.dataset.vision.c_transforms as C
+ import matplotlib.pyplot as plt
+ cifar10_path = "./dataset/Cifar10Data/cifar-10-batches-bin/"
+
+ # create Cifar10Dataset for reading data
+ cifar10_dataset = ds.Cifar10Dataset(cifar10_path, num_parallel_workers=4)
+ transforms = C.RandomResizedCrop((800, 800))
+ # apply the transform to the dataset through dataset.map()
+ cifar10_dataset = cifar10_dataset.map(operations=transforms, input_columns="image", num_parallel_workers=4)
+
+ data = next(cifar10_dataset.create_dict_iterator())
+ plt.imshow(data["image"].asnumpy())
+ plt.show()
+ ```
+
+ The output is as follows:
+
+ 
+
+
+2. A user-defined Python function is used to perform data augmentation. During data augmentation, the multi-process optimization solution is used, and four processes are enabled to concurrently complete the task.
+
+
+ ```python
+ def generator_func():
+ for i in range(5):
+ yield (np.array([i, i+1, i+2, i+3, i+4]),)
+
+ ds3 = ds.GeneratorDataset(source=generator_func, column_names=["data"])
+ print("before map:")
+ for data in ds3.create_dict_iterator():
+ print(data["data"])
+
+ func = lambda x:x**2
+ ds4 = ds3.map(operations=func, input_columns="data", python_multiprocessing=True, num_parallel_workers=4)
+ print("after map:")
+ for data in ds4.create_dict_iterator():
+ print(data["data"])
+ ```
+
+ The output is as follows:
+ ```
+ before map:
+ [0 1 2 3 4]
+ [1 2 3 4 5]
+ [2 3 4 5 6]
+ [3 4 5 6 7]
+ [4 5 6 7 8]
+ after map:
+ [ 0 1 4 9 16]
+ [ 1 4 9 16 25]
+ [ 4 9 16 25 36]
+ [ 9 16 25 36 49]
+ [16 25 36 49 64]
+ ```
+
+## Performance Optimization Solution Summary
+
+### Multi-thread Optimization Solution
+
+During the data pipeline process, the number of threads for related operators can be set to improve the concurrency and performance. For example:
+- During data loading, the `num_parallel_workers` parameter in the built-in data loading class is used to set the number of threads.
+- During data augmentation, the `num_parallel_workers` parameter in the `map` function is used to set the number of threads.
+- During batch processing, the `num_parallel_workers` parameter in the `batch` function is used to set the number of threads.
+
+For details, see [Built-in Loading Operators](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.dataset.html).
+
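+A minimal sketch combining all three settings on the CIFAR-10 dataset used in this section (the thread counts and the flip operation are illustrative) is:
+
+```python
+import mindspore.dataset as ds
+import mindspore.dataset.vision.c_transforms as C
+
+cifar10_path = "./dataset/Cifar10Data/cifar-10-batches-bin/"
+
+# data loading: 4 threads read samples concurrently
+data = ds.Cifar10Dataset(cifar10_path, num_parallel_workers=4)
+# data augmentation: 4 threads run the map operation concurrently
+data = data.map(operations=C.RandomHorizontalFlip(), input_columns="image",
+                num_parallel_workers=4)
+# batch processing: 4 threads assemble batches concurrently
+data = data.batch(32, drop_remainder=True, num_parallel_workers=4)
+```
+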
+### Multi-process Optimization Solution
+
+During data processing, operators implemented by Python support the multi-process mode. For example:
+- By default, the `GeneratorDataset` class is in multi-process mode. The `num_parallel_workers` parameter indicates the number of enabled processes. The default value is 1. For details, see [Generator Dataset](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.dataset.html#mindspore.dataset.GeneratorDataset)
+- If the user-defined Python function or the `py_transforms` module is used to perform data augmentation and the `python_multiprocessing` parameter of the `map` function is set to True, the `num_parallel_workers` parameter indicates the number of processes and the default value of the `python_multiprocessing` parameter is False. In this case, the `num_parallel_workers` parameter indicates the number of threads. For details, see [Built-in Loading Operators](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.dataset.html).
+
+### Compose Optimization Solution
+
+Map operators can receive a list of Tensor operators and apply all of them in sequence. Compared with using one map operation per Tensor operator, such a "fat" map operator achieves better performance, as shown in the following figure:
+
+
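+For example, a single map call can carry the whole operator list (a minimal sketch; the concrete operators are illustrative):
+
+```python
+import mindspore.dataset as ds
+import mindspore.dataset.vision.c_transforms as C
+
+cifar10_path = "./dataset/Cifar10Data/cifar-10-batches-bin/"
+data = ds.Cifar10Dataset(cifar10_path)
+
+# one "fat" map: the listed operators run in order inside a single map operation,
+# instead of one dataset.map() call per operator
+compose_ops = [C.RandomHorizontalFlip(), C.Resize((36, 36)), C.RandomCrop((32, 32))]
+data = data.map(operations=compose_ops, input_columns="image")
+```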
+
+### Operator Fusion Optimization Solution
+
+Some fusion operators are provided to aggregate the functions of two or more operators into one operator. For details, see [Data Augmentation Operators](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.dataset.vision.html). Compared with the pipelines of their components, such fusion operators provide better performance. As shown in the figure:
+
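+For example, `RandomCropDecodeResize` fuses decoding, random cropping, and resizing of raw image data into one operator (a minimal sketch; the image directory is hypothetical):
+
+```python
+import mindspore.dataset as ds
+import mindspore.dataset.vision.c_transforms as C
+
+# a hypothetical folder of raw (undecoded) JPEG images
+data = ds.ImageFolderDatasetV2("./dataset/images")
+
+# unfused pipeline: decode first, then randomly crop and resize
+# data = data.map(operations=[C.Decode(), C.RandomResizedCrop(224)], input_columns="image")
+
+# fused operator: decoding, cropping, and resizing happen in a single step
+data = data.map(operations=C.RandomCropDecodeResize(224), input_columns="image")
+```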
+
diff --git a/tutorials/source_en/advanced_use/parameter_server_training.md b/tutorials/source_en/advanced_use/parameter_server_training.md
index 570bb28d7f6c21d533c17dfa56ab71bb0bba744c..27200d45e2bc288c08da8c57fc691acf08e89053 100644
--- a/tutorials/source_en/advanced_use/parameter_server_training.md
+++ b/tutorials/source_en/advanced_use/parameter_server_training.md
@@ -25,7 +25,7 @@ The ps-lite architecture consists of three independent components: server, worke
- Server: saves model weights and backward computation gradients, and updates the model using gradients pushed by workers.
-- Worker: performs forward and backward computation on the network. The gradient value for forward computation is uploaded to a server through the `Push` API, and the model updated by the server is downloaded to the worker through the `Pull` API.
+- Worker: performs forward and backward computation on the network. The gradient value for backward computation is uploaded to a server through the `Push` API, and the model updated by the server is downloaded to the worker through the `Pull` API.
- Scheduler: establishes the communication relationship between the server and worker.
diff --git a/tutorials/source_en/advanced_use/performance_profiling.md b/tutorials/source_en/advanced_use/performance_profiling.md
index 53551b6126a1bce1e33e56ec29c2da06c1edf26f..98d79ae100acf9f9c9e6a1a130472d869afb49f9 100644
--- a/tutorials/source_en/advanced_use/performance_profiling.md
+++ b/tutorials/source_en/advanced_use/performance_profiling.md
@@ -31,7 +31,7 @@ Performance data like operators' execution time is recorded in files and can be
## Preparing the Environment
-Before using Profiler, make sure the process of ada in background running right. The ada process must using the root user to run. The start command is `/usr/local/Ascend/driver/tools/ada`.
+Before using the Profiler, make sure the ada process is running correctly in the background. The ada process must be run by a user in the HwHiAiUser user group or by the root user, and the training scripts must be run by the same user. The start command is `/usr/local/Ascend/driver/tools/ada`.
## Preparing the Training Script
diff --git a/tutorials/source_en/advanced_use/performance_profiling_gpu.md b/tutorials/source_en/advanced_use/performance_profiling_gpu.md
index d3327f4f1a557f7da90d3dfaff92893f65fef5d8..c84d19b695b13bab6f037cfc199b32f0e903f05b 100644
--- a/tutorials/source_en/advanced_use/performance_profiling_gpu.md
+++ b/tutorials/source_en/advanced_use/performance_profiling_gpu.md
@@ -26,6 +26,11 @@ Performance data like operators' execution time is recorded in files and can be
>
>
+> By default, common users do not have the permission to access the NVIDIA GPU performance counters on the target device.
+> If common users need to use the profiler performance statistics capability in the training script, configure the permission by referring to the following description:
+>
+>
+
## Preparing the Training Script
To enable the performance profiling of neural networks, MindSpore Profiler APIs should be added into the script. Only the `output_path` parameter takes effect on GPU now. Then, at the end of the training, `Profiler.analyse()` should be called to finish profiling and generate the performance analysis results.
@@ -77,8 +82,9 @@ Users can access the Performance Profiler by selecting a specific training from
Figure 1:Overall Performance
-Figure 1 displays the overall performance of the training, including the overall data of Step Trace, Operator Performance, MindData Performance and Timeline. Operator Performance Analysis is supportted only:
+Figure 1 displays the overall performance of the training, including the overall data of Step Trace, Operator Performance, MindData Performance and Timeline:
- Operator Performance: It will collect the average execution time of operators and operator types. The overall performance page will show the pie graph for different operator types.
+- Timeline: It will collect the execution time of operators and CUDA activities. The tasks will be shown on the time axis. The overall performance page will show the statistics for tasks.
Users can click the detail link to see the details of each components.
diff --git a/tutorials/source_en/advanced_use/quantization_aware.md b/tutorials/source_en/advanced_use/quantization_aware.md
index 490857301540a76e6f765506417b2ab02ef4cfe5..760e6d7831d4a6a6aec9acdccb61e33d86002426 100644
--- a/tutorials/source_en/advanced_use/quantization_aware.md
+++ b/tutorials/source_en/advanced_use/quantization_aware.md
@@ -60,21 +60,19 @@ Aware quantization training specifications
The procedure for the quantization aware training model is the same as that for the common training. After the network is defined and the model is generated, additional operations need to be performed. The complete process is as follows:
1. Process data and load datasets.
-2. Define a network.
-3. Define a fusion network. After a network is defined, replace the specified operators to define a fusion network.
+2. Define an original unquantized network.
+3. Define a fusion network. After defining the original unquantized network, replace the specified operators to define a fusion network.
4. Define an optimizer and loss function.
-5. Perform model training. Generate a fusion model based on the fusion network training.
-6. Generate a quantization network. After the fusion model is obtained based on the fusion network training, insert a fake quantization node into the fusion model by using a conversion API to generate a quantization network.
-7. Perform quantization training. Generate a quantization model based on the quantization network training.
+5. Generate a quantization network. Insert fake quantization nodes into the fusion network by using a conversion API; a quantization network is then generated based on the fusion network.
+6. Perform quantization training. Generate a quantization model based on the quantization network training.
-Compared with common training, the quantization aware training requires additional steps which are steps 3, 6, and 7 in the preceding process.
+Compared with common training, the quantization aware training requires additional steps which are steps 3, 5, and 6 in the preceding process.
> - Fusion network: network after the specified operators (`nn.Conv2dBnAct` and `nn.DenseBnAct`) are used for replacement.
-> - Fusion model: model in the checkpoint format generated by the fusion network training.
> - Quantization network: network obtained after the fusion model uses a conversion API (`convert_quant_network`) to insert a fake quantization node.
> - Quantization model: model in the checkpoint format obtained after the quantization network training.
-Next, the LeNet network is used as an example to describe steps 3 and 6.
+Next, the LeNet network is used as an example to describe steps 2 and 3.
> You can obtain the complete executable sample code at .
@@ -132,8 +130,8 @@ class LeNet5(nn.Cell):
super(LeNet5, self).__init__()
self.num_class = num_class
- self.conv1 = nn.Conv2dBnAct(1, 6, kernel_size=5, batchnorm=True, activation='relu')
- self.conv2 = nn.Conv2dBnAct(6, 16, kernel_size=5, batchnorm=True, activation='relu')
+ self.conv1 = nn.Conv2dBnAct(1, 6, kernel_size=5, activation='relu')
+ self.conv2 = nn.Conv2dBnAct(6, 16, kernel_size=5, activation='relu')
self.fc1 = nn.DenseBnAct(16 * 5 * 5, 120, activation='relu')
self.fc2 = nn.DenseBnAct(120, 84, activation='relu')
@@ -155,9 +153,9 @@ class LeNet5(nn.Cell):
Use the `convert_quant_network` API to automatically insert a fake quantization node into the fusion model to convert the fusion model into a quantization network.
```python
-from mindspore.train.quant import quant as qat
+from mindspore.train.quant import quant
-net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=10000, weight_bits=8, act_bits=8)
+net = quant.convert_quant_network(network, quant_delay=900, bn_fold=False, per_channel=[True, False], symmetric=[False, False])
```
## Retraining and Inference
@@ -167,16 +165,16 @@ net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=100
The preceding describes the quantization aware training from scratch. A more common case is that an existing model file needs to be converted to a quantization model. The model file and training script obtained through common network model training are available for quantization aware training. To use a checkpoint file for retraining, perform the following steps:
1. Process data and load datasets.
- 2. Define a network.
- 3. Define a fusion network.
- 4. Define an optimizer and loss function.
- 5. Load a model file and retrain the model. Load an existing model file and retrain the model based on the fusion network to generate a fusion model. For details, see .
- 6. Generate a quantization network.
- 7. Perform quantization training.
+ 2. Define an original unquantized network.
+ 3. Train the original network to generate an unquantized model.
+ 4. Define a fusion network.
+ 5. Define an optimizer and loss function.
+ 6. Generate a quantization network based on the fusion network.
+ 7. Load a model file and retrain the model. Load the unquantized model file generated in step 3 and retrain it based on the quantization network to generate a quantization model (see the sketch after this list). For details, see .
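+
+A minimal sketch of steps 6 and 7 is shown below. Here the checkpoint is loaded into the fusion network before the conversion, which is one workable ordering; `lenet_noquant.ckpt` is a hypothetical name for the unquantized checkpoint from step 3, and `network` is the fusion network from step 4:
+
+```python
+from mindspore.train.quant import quant
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+
+# load the unquantized checkpoint into the fusion network
+param_dict = load_checkpoint("lenet_noquant.ckpt")
+load_param_into_net(network, param_dict)
+# convert the fusion network into a quantization network, then retrain it
+# with the usual Model(...).train(...) flow to obtain the quantization model
+network = quant.convert_quant_network(network, quant_delay=900, bn_fold=False,
+                                      per_channel=[True, False], symmetric=[False, False])
+```
+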
### Inference
-The inference using a quantization model is the same as common model inference. The inference can be performed by directly using the checkpoint file or converting the checkpoint file into a common model format (such as ONNX or AIR).
+The inference using a quantization model is the same as common model inference. The inference can be performed by directly using the checkpoint file or by converting the checkpoint file into a common model format (such as AIR or MindIR).
For details, see .
diff --git a/tutorials/source_en/advanced_use/second_order_optimizer_for_resnet50_application.md b/tutorials/source_en/advanced_use/second_order_optimizer_for_resnet50_application.md
new file mode 100644
index 0000000000000000000000000000000000000000..5aa73da80ac66c478049924da4cc689cdee74b2f
--- /dev/null
+++ b/tutorials/source_en/advanced_use/second_order_optimizer_for_resnet50_application.md
@@ -0,0 +1,474 @@
+# ResNet-50 Second-Order Optimization Practice
+
+`Ascend` `GPU` `Model Development` `Model Optimization` `Expert`
+
+
+
+- [ResNet-50 Second-Order Optimization Practice](#resnet-50-second-order-optimization-practice)
+ - [Overview](#overview)
+ - [Preparation](#preparation)
+ - [Preparing the Dataset](#preparing-the-dataset)
+ - [Configuring Distributed Environment Variables](#configuring-distributed-environment-variables)
+ - [Ascend 910](#ascend-910)
+ - [GPU](#gpu)
+ - [Loading the Dataset](#loading-the-dataset)
+ - [Defining the Network](#defining-the-network)
+ - [Defining the Loss Function and Optimizer THOR](#defining-the-loss-function-and-optimizer-thor)
+ - [Defining the Loss Function](#defining-the-loss-function)
+ - [Defining the Optimizer](#defining-the-optimizer)
+ - [Training the Network](#training-the-network)
+ - [Saving the Configured Model](#saving-the-configured-model)
+ - [Configuring the Network Training](#configuring-the-network-training)
+ - [Running the Script](#running-the-script)
+ - [Ascend 910](#ascend-910-1)
+ - [GPU](#gpu-1)
+ - [Model Inference](#model-inference)
+ - [Defining the Inference Network](#defining-the-inference-network)
+ - [Inference](#inference)
+ - [Ascend 910](#ascend-910-2)
+ - [GPU](#gpu-2)
+
+
+
+
+## Overview
+
+Common optimization algorithms are classified into first-order and second-order optimization algorithms. Typical first-order optimization algorithms, such as stochastic gradient descent (SGD), require little computation per step and run fast, but converge slowly and need a large number of training steps. Second-order optimization algorithms use the second-order derivative of the objective function to accelerate convergence to the optimal value of a model and need far fewer training steps. However, second-order optimization algorithms have excessively high computation costs, so their overall execution time is still longer than that of first-order optimization algorithms. As a result, second-order optimization algorithms are not widely used in deep neural network training. The main computation cost of second-order optimization algorithms lies in inverting second-order information matrices such as the Hessian matrix and the [Fisher information matrix (FIM)](https://arxiv.org/pdf/1808.07172.pdf), whose time complexity is about $O(n^3)$.
+
+Based on the existing natural gradient algorithm, the MindSpore development team uses optimized acceleration methods such as approximating and sharding the FIM, greatly reducing the computation complexity of the inverse operation, to develop the usable second-order optimizer THOR. With eight Ascend 910 AI processors, THOR can complete the training of the ResNet-50 v1.5 network on the ImageNet dataset within 72 minutes, which is nearly twice as fast as SGD with momentum.
+
+
+This tutorial describes how to use the second-order optimizer THOR provided by MindSpore to train the ResNet-50 v1.5 network and ImageNet dataset on Ascend 910 and GPU.
+> Download address of the complete code example:
+
+
+Directory Structure of Code Examples
+
+```shell
+├── resnet_thor
+ ├── README.md
+ ├── scripts
+ ├── run_distribute_train.sh # launch distributed training for Ascend 910
+        ├── run_eval.sh                     # launch inference for Ascend 910
+        ├── run_distribute_train_gpu.sh     # launch distributed training for GPU
+ └── run_eval_gpu.sh # launch inference for GPU
+ ├── src
+ ├── crossentropy.py # CrossEntropy loss function
+ ├── config.py # parameter configuration
+ ├── dataset_helper.py # dataset helper for minddata dataset
+ ├── grad_reducer_thor.py # grad reduce for thor
+ ├── model_thor.py # model for train
+        ├── resnet_thor.py              # resnet50_thor backbone
+ ├── thor.py # thor optimizer
+ ├── thor_layer.py # thor layer
+ └── dataset.py # data preprocessing
+ ├── eval.py # infer script
+ └── train.py # train script
+
+```
+
+The overall execution process is as follows:
+1. Prepare the ImageNet dataset and process the required dataset.
+2. Define the ResNet-50 network.
+3. Define the loss function and the optimizer THOR.
+4. Load the dataset and perform training. After the training is complete, check the result and save the model file.
+5. Load the saved model for inference.
+
+
+## Preparation
+
+Ensure that MindSpore has been correctly installed. If not, install it by referring to [Install](https://www.mindspore.cn/install/en).
+
+### Preparing the Dataset
+
+Download the complete ImageNet2012 dataset, decompress the dataset, and save it to the `ImageNet2012/ilsvrc` and `ImageNet2012/ilsvrc_eval` directories in the local workspace.
+
+The directory structure is as follows:
+
+```
+└─ImageNet2012
+ ├─ilsvrc
+ │ n03676483
+ │ n04067472
+ │ n01622779
+ │ ......
+ └─ilsvrc_eval
+ │ n03018349
+ │ n02504013
+ │ n07871810
+ │ ......
+
+```
+### Configuring Distributed Environment Variables
+#### Ascend 910
+For details about how to configure the distributed environment variables of Ascend 910 AI processors, see [Parallel Distributed Training (Ascend)](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_ascend.html#id4).
+
+#### GPU
+For details about how to configure the distributed environment of GPUs, see [Parallel Distributed Training (GPU)](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_gpu.html#id4).
+
+
+## Loading the Dataset
+
+During distributed training, load the dataset in parallel mode and process it through the data augmentation APIs provided by MindSpore. The `src/dataset.py` script in the source code loads and processes the dataset.
+```python
+import os
+import mindspore.common.dtype as mstype
+import mindspore.dataset.engine as de
+import mindspore.dataset.transforms.vision.c_transforms as C
+import mindspore.dataset.transforms.c_transforms as C2
+from mindspore.communication.management import init, get_rank, get_group_size
+
+def create_dataset(dataset_path, do_train, repeat_num=1, batch_size=32, target="Ascend"):
+ if target == "Ascend":
+ device_num, rank_id = _get_rank_info()
+ else:
+ init()
+ rank_id = get_rank()
+ device_num = get_group_size()
+ if device_num == 1:
+ ds = de.ImageFolderDatasetV2(dataset_path, num_parallel_workers=8, shuffle=True)
+ else:
+ ds = de.ImageFolderDatasetV2(dataset_path, num_parallel_workers=8, shuffle=True,
+ num_shards=device_num, shard_id=rank_id)
+
+ image_size = 224
+ mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
+ std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
+ # define map operations
+ if do_train:
+ trans = [
+ C.RandomCropDecodeResize(image_size, scale=(0.08, 1.0), ratio=(0.75, 1.333)),
+ C.RandomHorizontalFlip(prob=0.5),
+ C.Normalize(mean=mean, std=std),
+ C.HWC2CHW()
+ ]
+ else:
+ trans = [
+ C.Decode(),
+ C.Resize(256),
+ C.CenterCrop(image_size),
+ C.Normalize(mean=mean, std=std),
+ C.HWC2CHW()
+ ]
+ type_cast_op = C2.TypeCast(mstype.int32)
+ ds = ds.map(input_columns="image", num_parallel_workers=8, operations=trans)
+ ds = ds.map(input_columns="label", num_parallel_workers=8, operations=type_cast_op)
+
+ # apply batch operations
+ ds = ds.batch(batch_size, drop_remainder=True)
+
+ # apply dataset repeat operation
+ ds = ds.repeat(repeat_num)
+
+ return ds
+```
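+
+For reference, a hypothetical call to `create_dataset` from the training script might look as follows; the dataset path and argument values are illustrative, not taken from the tutorial configuration:
+
+```python
+# Hypothetical usage of create_dataset; the path below is illustrative.
+ds_train = create_dataset(dataset_path="/data/ImageNet2012/ilsvrc", do_train=True,
+                          repeat_num=1, batch_size=32, target="Ascend")
+print("batches per epoch on this shard:", ds_train.get_dataset_size())
+```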
+
+> MindSpore supports multiple data processing and augmentation operations, which are usually combined. For details, see [Data Processing and Augmentation](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/data_processing_and_augmentation.html).
+
+
+## Defining the Network
+Use the ResNet-50 v1.5 network model as an example. Define the [ResNet-50 network](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py), and replace the `Conv2d` and `Dense` operators with the operators customized for the second-order optimizer.
+The defined network model is stored in the `src/resnet_thor.py` script in the source code, and the customized operators `Conv2d_thor` and `Dense_thor` are stored in the `src/thor_layer.py` script.
+
+- Use `Conv2d_thor` to replace `Conv2d` in the original network model.
+- Use `Dense_thor` to replace `Dense` in the original network model.
+
+> The `Conv2d_thor` and `Dense_thor` operators customized for THOR save the second-order matrix information during model training. The backbone of the newly defined network is the same as that of the original network model.
+
+After the network is built, call the defined ResNet-50 in the `__main__` function.
+```python
+...
+from src.resnet_thor import resnet50
+...
+if __name__ == "__main__":
+ ...
+ # define the net
+ net = resnet50(class_num=config.class_num, damping=damping, loss_scale=config.loss_scale,
+ frequency=config.frequency, batch_size=config.batch_size)
+ ...
+```
+
+
+## Defining the Loss Function and Optimizer THOR
+
+
+### Defining the Loss Function
+
+Loss functions supported by MindSpore include `SoftmaxCrossEntropyWithLogits`, `L1Loss`, and `MSELoss`. The `SoftmaxCrossEntropyWithLogits` loss function is required by THOR.
+
+The loss function is implemented in the `src/crossentropy.py` script. It uses label smoothing, a common trick in deep network training that improves the model's tolerance to mislabeled samples by smoothing the ground-truth labels, thereby improving the model's generalization capability.
+```python
+import mindspore.nn as nn
+import mindspore.common.dtype as mstype
+from mindspore import Tensor
+from mindspore.nn.loss.loss import _Loss
+from mindspore.ops import functional as F
+from mindspore.ops import operations as P
+
+
+class CrossEntropy(_Loss):
+ """CrossEntropy"""
+ def __init__(self, smooth_factor=0., num_classes=1000):
+ super(CrossEntropy, self).__init__()
+ self.onehot = P.OneHot()
+ self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
+ self.off_value = Tensor(1.0 * smooth_factor / (num_classes - 1), mstype.float32)
+ self.ce = nn.SoftmaxCrossEntropyWithLogits()
+ self.mean = P.ReduceMean(False)
+
+ def construct(self, logit, label):
+ one_hot_label = self.onehot(label, F.shape(logit)[1], self.on_value, self.off_value)
+ loss = self.ce(logit, one_hot_label)
+ loss = self.mean(loss, 0)
+ return loss
+```
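+
+As a quick check of the smoothing arithmetic above (the values are illustrative, not the tutorial's configuration): with `smooth_factor=0.1` and `num_classes=1000`, the true class receives 0.9 and each other class receives roughly 0.0001, so the smoothed distribution still sums to 1.
+
+```python
+# Illustrative check of the on/off values computed in CrossEntropy.__init__.
+smooth_factor, num_classes = 0.1, 1000           # example values only
+on_value = 1.0 - smooth_factor                   # 0.9 for the true class
+off_value = smooth_factor / (num_classes - 1)    # ~0.0001 for each other class
+assert abs(on_value + (num_classes - 1) * off_value - 1.0) < 1e-9
+```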
+Call the defined loss function in the `__main__` function.
+
+```python
+...
+from src.crossentropy import CrossEntropy
+...
+if __name__ == "__main__":
+ ...
+ # define the loss function
+ if not config.use_label_smooth:
+ config.label_smooth_factor = 0.0
+ loss = CrossEntropy(smooth_factor=config.label_smooth_factor, num_classes=config.class_num)
+ ...
+```
+
+### Defining the Optimizer
+
+The parameter update formula of THOR is as follows:
+
+$$\theta^{t+1} = \theta^{t} - \alpha F^{-1}\nabla E$$
+
+The meanings of the parameters in the formula are as follows:
+- $\theta$: trainable parameters of the network
+- $t$: training step index
+- $\alpha$: learning rate, which scales the parameter update at each step
+- $F^{-1}$: inverse of the FIM, obtained from the network computation
+- $\nabla E$: first-order gradient value
+
+As the update formula shows, THOR additionally computes the FIM of each layer, which is produced inside the customized network model. The per-layer FIM adaptively adjusts the update step and direction for each layer, accelerating convergence and reducing the complexity of parameter optimization.
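+
+The update rule can be sketched in a few lines of NumPy. This is only an illustration of the formula on a single toy layer with a fabricated FIM, not the THOR implementation, which approximates and shards the FIM as described earlier:
+
+```python
+import numpy as np
+
+n = 4                                     # toy parameter dimension
+theta = np.random.randn(n)                # trainable parameters (theta)
+grad = np.random.randn(n)                 # first-order gradient (nabla E)
+a = np.random.randn(n, n)
+fim = a @ a.T + np.eye(n)                 # fabricated symmetric positive-definite FIM
+lr = 0.1                                  # learning rate (alpha)
+
+# theta <- theta - lr * F^{-1} grad; the linear solve is the O(n^3) step
+theta = theta - lr * np.linalg.solve(fim, grad)
+```
+
+In the training script, the THOR optimizer itself is imported and constructed as follows: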
+
+```python
+...
+if args_opt.device_target == "Ascend":
+ from src.thor import THOR
+else:
+ from src.thor import THOR_GPU as THOR
+...
+
+if __name__ == "__main__":
+ ...
+ # learning rate setting
+ lr = get_model_lr(0, config.lr_init, config.lr_decay, config.lr_end_epoch, step_size, decay_epochs=39)
+ # define the optimizer
+ opt = THOR(filter(lambda x: x.requires_grad, net.get_parameters()), Tensor(lr), config.momentum,
+ filter(lambda x: 'matrix_A' in x.name, net.get_parameters()),
+ filter(lambda x: 'matrix_G' in x.name, net.get_parameters()),
+ filter(lambda x: 'A_inv_max' in x.name, net.get_parameters()),
+ filter(lambda x: 'G_inv_max' in x.name, net.get_parameters()),
+ config.weight_decay, config.loss_scale)
+ ...
+```
+
+## Training the Network
+
+### Saving the Configured Model
+
+MindSpore provides the callback mechanism to execute customized logic during training. The `ModelCheckpoint` function provided by the framework is used in this example.
+`ModelCheckpoint` can save the network model and parameters for subsequent fine-tuning.
+`TimeMonitor` and `LossMonitor` are callback functions provided by MindSpore. They can be used to monitor the single training step time and `loss` value changes during training, respectively.
+
+```python
+...
+from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, TimeMonitor, LossMonitor
+...
+if __name__ == "__main__":
+ ...
+ # define callbacks
+ time_cb = TimeMonitor(data_size=step_size)
+ loss_cb = LossMonitor()
+ cb = [time_cb, loss_cb]
+ if config.save_checkpoint:
+ config_ck = CheckpointConfig(save_checkpoint_steps=config.save_checkpoint_epochs * step_size,
+ keep_checkpoint_max=config.keep_checkpoint_max)
+ ckpt_cb = ModelCheckpoint(prefix="resnet", directory=ckpt_save_dir, config=config_ck)
+ cb += [ckpt_cb]
+ ...
+```
+
+### Configuring the Network Training
+
+Use the `model.train` API provided by MindSpore to easily train the network. THOR reduces the computation workload and improves the computation speed by lowering the frequency at which the second-order matrices are updated. Therefore, the `Model_Thor` class is redefined to inherit from the `Model` class provided by MindSpore, and a parameter that controls the update frequency of the second-order matrices is added. You can tune this parameter to optimize overall performance.
+
+
+```python
+...
+from mindspore.train.loss_scale_manager import FixedLossScaleManager
+from src.model_thor import Model_Thor as Model
+...
+
+if __name__ == "__main__":
+ ...
+ loss_scale = FixedLossScaleManager(config.loss_scale, drop_overflow_update=False)
+ if target == "Ascend":
+ model = Model(net, loss_fn=loss, optimizer=opt, amp_level='O2', loss_scale_manager=loss_scale,
+ keep_batchnorm_fp32=False, metrics={'acc'}, frequency=config.frequency)
+ else:
+ model = Model(net, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale, metrics={'acc'},
+ amp_level="O2", keep_batchnorm_fp32=True, frequency=config.frequency)
+ ...
+```
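+
+The effect of the `frequency` parameter can be sketched as follows. This is a hypothetical illustration of the idea, not MindSpore code: the expensive second-order information is refreshed only every `frequency` steps, and the cached result is reused in between.
+
+```python
+import numpy as np
+
+def train_with_frequency(theta, grads, frequency, lr=0.1):
+    """Toy loop: refresh the FIM inverse every `frequency` steps, reuse it otherwise."""
+    fim_inv = None
+    for step, grad in enumerate(grads):
+        if step % frequency == 0:
+            a = np.random.randn(theta.size, theta.size)
+            fim = a @ a.T + np.eye(theta.size)    # placeholder second-order estimate
+            fim_inv = np.linalg.inv(fim)          # expensive O(n^3) refresh
+        theta = theta - lr * fim_inv @ grad       # cheap reuse of the cached inverse
+    return theta
+```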
+
+### Running the Script
+After the training script is defined, call the shell script in the `scripts` directory to start the distributed training process.
+#### Ascend 910
+Currently, MindSpore distributed execution on Ascend uses the single-device single-process mode: one process runs on each device, and the total number of processes equals the number of devices in use. The process for device 0 runs in the foreground, and the processes for the other devices run in the background. Each process creates a directory named `train_parallel` + `device_id` to store its log information, operator compilation information, and training checkpoint files. The following uses the distributed training script for eight devices as an example to describe how to run the script:
+
+Run the script.
+```
+sh run_distribute_train.sh [RANK_TABLE_FILE] [DATASET_PATH] [DEVICE_NUM]
+```
+Variables `RANK_TABLE_FILE`, `DATASET_PATH`, and `DEVICE_NUM` need to be passed to the script. Their meanings are as follows:
+- `RANK_TABLE_FILE`: path of the networking information file
+- `DATASET_PATH`: path of the training dataset
+- `DEVICE_NUM`: actual number of running devices
+
+For details about other environment variables, see the configuration items in the installation guide.
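+
+For example, assuming the rank table file and dataset are stored at the illustrative paths below, an eight-device launch would be `sh run_distribute_train.sh ./rank_table_8p.json /data/ImageNet2012/ilsvrc 8`.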
+
+The following is an example of loss values output during training:
+
+```bash
+...
+epoch: 1 step: 5004, loss is 4.4182425
+epoch: 2 step: 5004, loss is 3.740064
+epoch: 3 step: 5004, loss is 4.0546017
+epoch: 4 step: 5004, loss is 3.7598825
+epoch: 5 step: 5004, loss is 3.3744206
+...
+epoch: 40 step: 5004, loss is 1.6907625
+epoch: 41 step: 5004, loss is 1.8217756
+epoch: 42 step: 5004, loss is 1.6453942
+...
+```
+
+After the training is complete, the checkpoint file generated by each device is stored in the training directory. The following is an example of the checkpoint file generated by `device_0`:
+
+```bash
+└─train_parallel0
+ ├─resnet-1_5004.ckpt
+ ├─resnet-2_5004.ckpt
+ │ ......
+ ├─resnet-42_5004.ckpt
+ │ ......
+```
+
+In the preceding information, `*.ckpt` indicates a saved model parameter file. The name of a checkpoint file follows the format *network name*-*epoch number*_*step number*.ckpt; for example, `resnet-42_5004.ckpt` was saved at step 5004 of epoch 42.
+
+#### GPU
+On the GPU hardware platform, MindSpore uses `mpirun` of OpenMPI to perform distributed training. The script creates a directory named `train_parallel` to store log information and training checkpoint files. The following uses the distributed training script for eight devices as an example to describe how to run the script:
+```
+sh run_distribute_train_gpu.sh [DATASET_PATH] [DEVICE_NUM]
+```
+Variables `DATASET_PATH` and `DEVICE_NUM` need to be passed to the script. Their meanings are as follows:
+- `DATASET_PATH`: path of the training dataset
+- `DEVICE_NUM`: actual number of running devices
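+
+For example, `sh run_distribute_train_gpu.sh /data/ImageNet2012/ilsvrc 8` launches training on eight GPUs; the dataset path here is illustrative.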
+
+During GPU-based training, the `DEVICE_ID` environment variable is not required. Therefore, the main training script does not need to call `int(os.getenv('DEVICE_ID'))` to obtain the device ID or pass `device_id` to `context`. Instead, set `device_target` to `GPU` and call `init()` to enable NCCL.
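+
+A minimal sketch of this GPU setup, assuming the standard `context` and communication management APIs, is:
+
+```python
+from mindspore import context
+from mindspore.communication.management import init
+
+# select the GPU backend; no device_id is needed here
+context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
+init()  # initializes NCCL for distributed training
+```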
+
+The following is an example of loss values output during training:
+```bash
+...
+epoch: 1 step: 5004, loss is 4.2546034
+epoch: 2 step: 5004, loss is 4.0819564
+epoch: 3 step: 5004, loss is 3.7005644
+epoch: 4 step: 5004, loss is 3.2668946
+epoch: 5 step: 5004, loss is 3.023509
+...
+epoch: 36 step: 5004, loss is 1.645802
+...
+```
+
+The following is an example of model files saved after training:
+
+```bash
+└─train_parallel
+ ├─ckpt_0
+ ├─resnet-1_5004.ckpt
+ ├─resnet-2_5004.ckpt
+ │ ......
+ ├─resnet-36_5004.ckpt
+ │ ......
+ ......
+ ├─ckpt_7
+ ├─resnet-1_5004.ckpt
+ ├─resnet-2_5004.ckpt
+ │ ......
+ ├─resnet-36_5004.ckpt
+ │ ......
+```
+
+## Model Inference
+
+Use the checkpoint files saved during training to perform inference and validate the model's generalization capability. Load the model file using the `load_checkpoint` API, call the `eval` API of `Model` to predict the classes of the input images, and compare the predicted classes with the actual classes to obtain the final prediction accuracy.
+
+### Defining the Inference Network
+
+1. Use the `load_checkpoint` API to load the model file.
+2. Use the `model.eval` API to read the test dataset for inference.
+3. Compute the prediction accuracy.
+
+```python
+...
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+...
+
+if __name__ == "__main__":
+ ...
+ # define net
+ net = resnet(class_num=config.class_num)
+ net.add_flags_recursive(thor=False)
+
+ # load checkpoint
+ param_dict = load_checkpoint(args_opt.checkpoint_path)
+ keys = list(param_dict.keys())
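+    # drop THOR-specific 'damping' parameters, which are not needed for inference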
+ for key in keys:
+ if "damping" in key:
+ param_dict.pop(key)
+ load_param_into_net(net, param_dict)
+ net.set_train(False)
+
+ # define model
+ model = Model(net, loss_fn=loss, metrics={'top_1_accuracy', 'top_5_accuracy'})
+
+ # eval model
+ res = model.eval(dataset)
+ print("result:", res, "ckpt=", args_opt.checkpoint_path)
+```
+
+### Inference
+After the inference network is defined, the shell script in the `scripts` directory is called for inference.
+#### Ascend 910
+On the Ascend 910 hardware platform, run the following inference command:
+```
+sh run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH]
+```
+Variables `DATASET_PATH` and `CHECKPOINT_PATH` need to be passed to the script. Their meanings are as follows:
+- `DATASET_PATH`: path of the inference dataset
+- `CHECKPOINT_PATH`: path of the saved checkpoint file
+
+Currently, a single device (device 0 by default) is used for inference. The inference result is as follows:
+```
+result: {'top_5_accuracy': 0.9295574583866837, 'top_1_accuracy': 0.761443661971831} ckpt=train_parallel0/resnet-42_5004.ckpt
+```
+- `top_5_accuracy`: for an input image, the classification is considered correct if the five labels with the highest prediction probabilities contain the actual label.
+- `top_1_accuracy`: for an input image, the classification is considered correct if the label with the highest prediction probability matches the actual label.
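+
+A minimal sketch of how these two metrics can be computed (a NumPy illustration, not MindSpore's implementation of the metrics):
+
+```python
+import numpy as np
+
+def topk_accuracy(logits, labels, k):
+    """Fraction of samples whose true label is among the k largest logits."""
+    topk = np.argsort(logits, axis=1)[:, -k:]   # indices of the k largest logits
+    hits = [label in row for row, label in zip(topk, labels)]
+    return float(np.mean(hits))
+
+logits = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])   # toy predictions
+labels = np.array([1, 2])                               # ground-truth classes
+print(topk_accuracy(logits, labels, 1))                 # 0.5
+print(topk_accuracy(logits, labels, 2))                 # 0.5
+```
+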
+#### GPU
+
+On the GPU hardware platform, run the following inference command:
+```
+sh run_eval_gpu.sh [DATASET_PATH] [CHECKPOINT_PATH]
+```
+Variables `DATASET_PATH` and `CHECKPOINT_PATH` need to be passed to the script. Their meanings are as follows:
+- `DATASET_PATH`: path of the inference dataset
+- `CHECKPOINT_PATH`: path of the saved checkpoint file
+
+The inference result is as follows:
+```
+result: {'top_5_accuracy': 0.9287972151088348, 'top_1_accuracy': 0.7597031049935979} ckpt=train_parallel/resnet-36_5004.ckpt
+```
\ No newline at end of file
diff --git a/tutorials/source_en/advanced_use/summary_record.md b/tutorials/source_en/advanced_use/summary_record.md
index 23ec33e637f881deebd02a9953dfad3d466d4c7a..e6b8d4c1ea9e515f12ff3ff019968a9ed78b3a0f 100644
--- a/tutorials/source_en/advanced_use/summary_record.md
+++ b/tutorials/source_en/advanced_use/summary_record.md
@@ -127,10 +127,10 @@ model.eval(ds_eval, callbacks=[summary_collector])
In addition to providing the `SummaryCollector` that automatically collects some summary data, MindSpore provides summary operators that enable custom collection other data on the network, such as the input of each convolutional layer, or the loss value in the loss function, etc.
Summary operators currently supported:
-- [ScalarSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html?highlight=scalarsummary#mindspore.ops.operations.ScalarSummary): Record a scalar data.
-- [TensorSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html?highlight=tensorsummary#mindspore.ops.operations.TensorSummary): Record a tensor data.
-- [ImageSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html?highlight=imagesummary#mindspore.ops.operations.ImageSummary): Record a image data.
-- [HistogramSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html?highlight=histogramsummar#mindspore.ops.operations.HistogramSummary): Convert tensor data into histogram data records.
+- [ScalarSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html?highlight=scalarsummary#mindspore.ops.ScalarSummary): Records scalar data.
+- [TensorSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html?highlight=tensorsummary#mindspore.ops.TensorSummary): Records tensor data.
+- [ImageSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html?highlight=imagesummary#mindspore.ops.ImageSummary): Records image data.
+- [HistogramSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.html?highlight=histogramsummar#mindspore.ops.HistogramSummary): Converts tensor data into histogram data for recording.
The recording method is shown in the following steps.
@@ -366,4 +366,6 @@ For more parameter Settings, see the [MindInsight related commands](https://www.
model.train(epoch=2, train_dataset, callbacks=[confusion_callback, summary_collector])
```
-3. In each Summary log file directory, only one training data should be placed. If a summary log directory contains summary data from multiple training, MindInsight will overlay the summary data from these training when visualizing the data, which may not be consistent with the expected visualizations.
\ No newline at end of file
+3. Each summary log directory should contain data from only one training run. If a summary log directory contains summary data from multiple training runs, MindInsight overlays the summary data from these runs when visualizing it, which may not match the expected visualization.
+
+4. Currently, `SummaryCollector` and `SummaryRecord` do not support multi-device GPU scenarios.
\ No newline at end of file
diff --git a/tutorials/source_en/advanced_use/synchronization_training_and_evaluation.md b/tutorials/source_en/advanced_use/synchronization_training_and_evaluation.md
index d42c89c84014a15d0646b82ed3019ab837eb04ad..193c6f9f116c87186ff404c0a9f448cdc9fa7ae0 100644
--- a/tutorials/source_en/advanced_use/synchronization_training_and_evaluation.md
+++ b/tutorials/source_en/advanced_use/synchronization_training_and_evaluation.md
@@ -32,7 +32,7 @@ Implementation idea: The model accuracy is validated every n epochs. The model a
Core implementation: Validation points are set in `epoch_end` of the callback function as follows:
-`cur_epoch % eval_per_epoch == 0`: indicates that the model accuracy is validated every `eval_per_epoch` epochs.
+`cur_epoch % eval_per_epoch == 0`: indicates that the model accuracy is validated every `eval_per_epoch` epochs.
- `cur_epoch`: indicates epoch value in the current training process.
- `eval_per_epoch`: indicates user-defined value, that is, the validation frequency.
@@ -40,7 +40,7 @@ Core implementation: Validation points are set in `epoch_end` of the callback fu
Other parameters are described as follows:
- `model`: indicates `Model` function in MindSpore.
-- `eval_dataset`: indicates validation dataset.
+- `eval_dataset`: indicates the validation dataset.
- `epoch_per_eval`: records the accuracy of the validation model and the corresponding number of epochs. The data format is `{"epoch": [], "acc": []}`.
```python
@@ -75,7 +75,7 @@ The parameters are described as follows:
- `keep_checkpoint_max`: indicates the maximum number of models that can be saved.
- `ckpoint_cb`: defines the name and path for saving the model.
- `model`: defines a model.
-- `model.train`: indicates model training function.
+- `model.train`: indicates the model training function.
- `epoch_per_eval`: defines the number for collecting `epoch` and the dictionary of corresponding model accuracy information.
```python
diff --git a/tutorials/source_en/advanced_use/visualization_tutorials.rst b/tutorials/source_en/advanced_use/visualization_tutorials.rst
index 17b1532bba19766e25351c89b259100f1b96d47d..e2912987938bf4e475c3fa9292f1413bdae57b3a 100644
--- a/tutorials/source_en/advanced_use/visualization_tutorials.rst
+++ b/tutorials/source_en/advanced_use/visualization_tutorials.rst
@@ -9,4 +9,5 @@ Training Process Visualization
lineage_and_scalars_comparision
performance_profiling
performance_profiling_gpu
+ debugger
mindinsight_commands
diff --git a/tutorials/source_en/index.rst b/tutorials/source_en/index.rst
index 0c0e217dec12430a11983202537e4c2176226531..fc81a983602b23e8ffa49b3549a00773ed91cdbf 100644
--- a/tutorials/source_en/index.rst
+++ b/tutorials/source_en/index.rst
@@ -19,7 +19,7 @@ MindSpore Tutorials
:maxdepth: 1
:caption: Use
- use/data_preparation/data_preparation
+ use/data_preparation
use/defining_the_network
use/saving_and_loading_model_parameters
use/multi_platform_inference
@@ -31,7 +31,9 @@ MindSpore Tutorials
advanced_use/computer_vision_application
advanced_use/nlp_application
- advanced_use/synchronization_training_and_evaluation.md
+ advanced_use/synchronization_training_and_evaluation
+ advanced_use/optimize_the_performance_of_data_preparation
+ advanced_use/second_order_optimizer_for_resnet50_application
.. toctree::
:glob:
@@ -52,6 +54,8 @@ MindSpore Tutorials
advanced_use/graph_kernel_fusion
advanced_use/quantization_aware
advanced_use/gradient_accumulation
+ advanced_use/dataset_conversion
+ advanced_use/auto_augmentation
.. toctree::
:glob:
@@ -66,6 +70,7 @@ MindSpore Tutorials
:caption: Network Migration
advanced_use/network_migration
+ advanced_use/model_scripts_transformation
.. toctree::
:glob:
@@ -74,4 +79,4 @@ MindSpore Tutorials
advanced_use/model_security
advanced_use/differential_privacy
-
+ advanced_use/fuzzer
diff --git a/tutorials/source_en/quick_start/quick_video.md b/tutorials/source_en/quick_start/quick_video.md
index a46f6ae41dde7d4826433e6a24140039427d6ad2..b11de9ebd0d6a767ad183dc4e19fc019ee7bcc26 100644
--- a/tutorials/source_en/quick_start/quick_video.md
+++ b/tutorials/source_en/quick_start/quick_video.md
@@ -269,7 +269,7 @@ Provides video tutorials from installation to try-on, helping you quickly use Mi
class="video-item-wraper" style="width: 33.3%;display: flex;justify-content: center;align-items: center;padding: 10px;box-sizing: border-box;">
-