diff --git a/docs/source_en/benchmark.md b/docs/source_en/benchmark.md
index 51fd7faaef4c4e91382a85d00ca102fcc764e04d..13a2238d6772fad14aa598ba6ffa75330a816575 100644
--- a/docs/source_en/benchmark.md
+++ b/docs/source_en/benchmark.md
@@ -3,17 +3,17 @@
This document describes the MindSpore benchmarks.
-For details about the MindSpore pre-trained model, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo).
+For details about MindSpore pre-trained models, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
## Training Performance
### ResNet
-| Network | Network Type | Dataset | MindSpore Version | Resource | Precision | Batch Size | Throughput | Speedup |
+| Network | Network Type | Dataset | MindSpore Version | Resource | Precision | Batch Size | Throughput | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| ResNet-50 v1.5 | CNN | ImageNet2012 | 0.2.0-alpha | Ascend: 1 * Ascend 910 CPU:24 Cores | Mixed | 32 | 1787 images/sec | - |
-| | | | | Ascend: 8 * Ascend 910 CPU:192 Cores | Mixed | 32 | 13689 images/sec | 0.95 |
-| | | | | Ascend: 16 * Ascend 910 CPU:384 Cores | Mixed | 32 | 27090 images/sec | 0.94 |
+| ResNet-50 v1.5 | CNN | ImageNet2012 | 0.5.0-beta | Ascend: 1 * Ascend 910 CPU:24 Cores | Mixed | 256 | 2115 images/sec | - |
+| | | | | Ascend: 8 * Ascend 910 CPU:192 Cores | Mixed | 256 | 16600 images/sec | 0.98 |
+| | | | | Ascend: 16 * Ascend 910 CPU:384 Cores | Mixed | 256 | 32768 images/sec | 0.96 |
1. The preceding performance figures were obtained on ModelArts, the HUAWEI CLOUD AI development platform, and represent the average performance of the Ascend 910 AI processor over the whole training process.
2. For details about other open source frameworks, see [ResNet-50 v1.5 for TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Classification/RN50v1.5#nvidia-dgx-2-16x-v100-32g).
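The Speedup column appears to be per-device scaling efficiency, i.e. multi-device throughput divided by (number of devices × single-device throughput). A quick check of that reading against the tables in this document (the helper function below is illustrative, not part of MindSpore or the docs):

```python
# Hedged check: Speedup read as scaling efficiency relative to linear
# scaling of the single-device throughput. Figures are from the tables
# in this document (ResNet-50 and BERT-Large, 8-device rows).
def scaling_efficiency(multi_throughput, n_devices, single_throughput):
    return multi_throughput / (n_devices * single_throughput)

print(f"{scaling_efficiency(16600, 8, 2115):.2f}")  # 0.98 -> matches ResNet-50 row
print(f"{scaling_efficiency(2069, 8, 269):.2f}")    # 0.96 -> matches BERT-Large row
```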
@@ -22,8 +22,8 @@ For details about the MindSpore pre-trained model, see [Model Zoo](https://gitee
| Network | Network Type | Dataset | MindSpore Version | Resource | Precision | Batch Size | Throughput | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| BERT-Large | Attention | zhwiki | 0.2.0-alpha | Ascend: 1 * Ascend 910 CPU:24 Cores | Mixed | 96 | 210 sentences/sec | - |
-| | | | | Ascend: 8 * Ascend 910 CPU:192 Cores | Mixed | 96 | 1613 sentences/sec | 0.96 |
+| BERT-Large | Attention | zhwiki | 0.5.0-beta | Ascend: 1 * Ascend 910 CPU:24 Cores | Mixed | 96 | 269 sentences/sec | - |
+| | | | | Ascend: 8 * Ascend 910 CPU:192 Cores | Mixed | 96 | 2069 sentences/sec | 0.96 |
1. The preceding performance figures were obtained on ModelArts, the HUAWEI CLOUD AI development platform. The network contains 24 hidden layers, the sequence length is 128 tokens, and the vocabulary contains 21128 tokens.
2. For details about other open source frameworks, see [BERT For TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT).
\ No newline at end of file
diff --git a/docs/source_en/network_list.md b/docs/source_en/network_list.md
index e8aef86babfa22792e62f8a1dbe9f99672676f31..0e15071754070f8065bc8551f355822a6f701c11 100644
--- a/docs/source_en/network_list.md
+++ b/docs/source_en/network_list.md
@@ -2,17 +2,28 @@
+## Model Zoo
| Domain | Sub Domain | Network | Ascend | GPU | CPU
|:------ |:------| :----------- |:------ |:------ |:-----
|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/alexnet/src/alexnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/googlenet.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/googlenet/src/googlenet.py) | Supported | Doing | Doing
| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/lenet/src/lenet.py) | Supported | Supported | Supported
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/resnet.py) | Supported | Doing | Doing
-|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/resnet.py) | Supported |Doing | Doing
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/vgg.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Mobile Image Classification<br>Object Detection<br>Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/mobilenetv2/src/mobilenetV2.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/resnet/src/resnet.py) | Supported | Doing | Doing
+|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/resnet/src/resnet.py) | Supported |Doing | Doing
+| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/vgg16/src/vgg.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Mobile Image Classification<br>Object Detection<br>Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
+| Computer Vision (CV) | Mobile Image Classification<br>Object Detection<br>Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
|Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/ssd/src/ssd.py) | Supported |Doing | Doing
| Computer Vision (CV) | Object Detection | [YoloV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/yolov3/src/yolov3.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/faster_rcnn/src/FasterRcnn) | Supported | Doing | Doing
| Computer Vision (CV) | Semantic Segmentation | [Deeplabv3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/bert/src/bert_model.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/Transformer/src/transformer_model.py) | Supported | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/lstm/src/lstm.py) | Doing | Supported | Supported
+| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/deepfm/src/deepfm.py) | Supported | Doing | Doing
+| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/wide_and_deep/src/wide_and_deep.py) | Supported | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/gcn/src/gcn.py) | Supported | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/gat/src/gat.py) | Supported | Doing | Doing
+
+## Pre-trained Models
+Coming soon.
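
The linked scripts share a common layout: `model_zoo/<network>/src/<network>.py` defines the network class. A minimal usage sketch for one of them, assuming the `LeNet5` class name, its `num_class` parameter, and 32x32 single-channel inputs (all assumptions based on the repository layout, not verified here):

```python
# Hedged sketch: instantiating a Model Zoo network for one forward pass.
# Assumes model_zoo/lenet is the working directory so that src.lenet resolves.
import numpy as np
from mindspore import Tensor, context
from src.lenet import LeNet5  # hypothetical import path, see assumptions above

context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
net = LeNet5(num_class=10)                        # parameter name assumed
x = Tensor(np.ones((1, 1, 32, 32), np.float32))   # one 32x32 grayscale image
print(net(x).shape)                               # expected: (1, 10) class logits
```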
diff --git a/docs/source_en/operator_list.md b/docs/source_en/operator_list.md
index 528884e8945edc8ae7dfc6963642e06860eb5c82..d3fce738ce2e944ddad994a3ff63beb533d3a734 100644
--- a/docs/source_en/operator_list.md
+++ b/docs/source_en/operator_list.md
@@ -23,7 +23,7 @@
| [mindspore.nn.LeakyReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLU) | Supported |Doing | Doing |layer/activation
| [mindspore.nn.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Tanh) | Supported | Supported | Doing |layer/activation
| [mindspore.nn.GELU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GELU) | Supported | Supported | Doing |layer/activation
-| [mindspore.nn.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Sigmoid) | Supported |Doing | Doing |layer/activation
+| [mindspore.nn.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Sigmoid) | Supported |Supported | Doing |layer/activation
| [mindspore.nn.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.PReLU) | Supported |Doing | Doing |layer/activation
| [mindspore.nn.Dropout](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dropout) |Supported | Supported | Doing |layer/basic
| [mindspore.nn.Flatten](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Doing |layer/basic
@@ -49,6 +49,10 @@
| [mindspore.nn.BatchNorm2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) | Supported | Supported | Doing |layer/normalization
| [mindspore.nn.GroupNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GroupNorm) | Supported | Doing | Doing |layer/normalization
| [mindspore.nn.LayerNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LayerNorm) | Supported | Supported | Doing |layer/normalization
+| [mindspore.nn.MatrixDiag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiag) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MatrixDiagPart](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiagPart) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MatrixSetDiag](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixSetDiag) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.LinSpace](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LinSpace) | Supported | Doing | Doing | layer/normalization
| [mindspore.nn.MaxPool2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MaxPool2d) | Supported | Supported | Supported |layer/pooling
| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) |Doing | Supported | Doing |layer/pooling
| [mindspore.nn.L1Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Doing |Doing | Doing |loss/loss
@@ -100,9 +104,10 @@
| [mindspore.ops.operations.ReLU6](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU6) | Supported | Supported |Doing | nn_ops
| [mindspore.ops.operations.HSwish](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSwish) | Doing | Supported |Doing | nn_ops
| [mindspore.ops.operations.HSigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSigmoid) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | Supported | Supported |Doing | nn_ops
| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | Supported | Supported |Doing | nn_ops
| [mindspore.ops.operations.BatchNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchNorm) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.operations.LRN](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LRN) | Supported | Doing |Doing | nn_ops
| [mindspore.ops.operations.Conv2D](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2D) | Supported | Supported | Supported | nn_ops
| [mindspore.ops.operations.DepthwiseConv2dNative](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNative) | Supported | Doing |Doing | nn_ops
| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput) | Supported | Doing |Doing | nn_ops
@@ -136,10 +141,10 @@
| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.LSTM](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LSTM) | Doing | Supported | Supported | nn_ops
| [mindspore.ops.operations.BasicLSTMCell](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BasicLSTMCell) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
| [mindspore.ops.operations.Pad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pad) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.ROIAlign](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ROIAlign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Adam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Adam) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.operations.Adam](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Adam) | Supported | Supported | Doing | nn_ops
| [mindspore.ops.operations.BinaryCrossEntropy](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BinaryCrossEntropy) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.LARSUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LARSUpdate) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | Supported | Supported | Doing | math_ops
@@ -254,7 +259,7 @@
| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | Supported | Doing | Doing | array_ops
| [mindspore.ops.operations.ResizeNearestNeighbor](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeNearestNeighbor) | Supported | Doing | Doing | array_ops
| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyFtrl) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.operations.ApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyFtrl) | Supported | Supported | Doing | array_ops
| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate) | Supported | Doing | Doing | array_ops
| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate) | Supported | Doing | Doing | array_ops
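
Several rows above move from Doing to Supported on GPU (for example Sigmoid, Adam, ApplyFtrl, and SigmoidCrossEntropyWithLogits). A minimal sketch exercising one of them, assuming a GPU build of MindSpore and an available CUDA device:

```python
# Hedged sketch: running the newly GPU-enabled Sigmoid primitive eagerly.
import numpy as np
import mindspore.ops.operations as P
from mindspore import Tensor, context

context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU")
sigmoid = P.Sigmoid()
x = Tensor(np.array([-1.0, 0.0, 1.0], np.float32))
print(sigmoid(x))  # approximately [0.269, 0.5, 0.731]
```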
diff --git a/docs/source_zh_cn/benchmark.md b/docs/source_zh_cn/benchmark.md
index 264a5f6d69fa784a8c41f9105eb6035fbf835b2a..e401cfde8770e8bf7c54880051fee77c8d1f70f8 100644
--- a/docs/source_zh_cn/benchmark.md
+++ b/docs/source_zh_cn/benchmark.md
@@ -2,17 +2,17 @@
-This document introduces MindSpore's benchmark performance. For MindSpore pre-trained models, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo).
+This document introduces MindSpore's benchmark performance. For MindSpore pre-trained models, see [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
## Training Performance
### ResNet
-| Network | Network Type | Dataset | MindSpore Version | Resource | Precision | Batch Size | Throughput | Speedup |
+| Network | Network Type | Dataset | MindSpore Version | Resource | Precision | Batch Size | Throughput | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| ResNet-50 v1.5 | CNN | ImageNet2012 | 0.2.0-alpha | Ascend: 1 * Ascend 910 CPU:24 Cores | Mixed | 32 | 1787 images/sec | - |
-| | | | | Ascend: 8 * Ascend 910 CPU:192 Cores | Mixed | 32 | 13689 images/sec | 0.95 |
-| | | | | Ascend: 16 * Ascend 910 CPU:384 Cores | Mixed | 32 | 27090 images/sec | 0.94 |
+| ResNet-50 v1.5 | CNN | ImageNet2012 | 0.5.0-beta | Ascend: 1 * Ascend 910 CPU:24 Cores | Mixed | 256 | 2115 images/sec | - |
+| | | | | Ascend: 8 * Ascend 910 CPU:192 Cores | Mixed | 256 | 16600 images/sec | 0.98 |
+| | | | | Ascend: 16 * Ascend 910 CPU:384 Cores | Mixed | 256 | 32768 images/sec | 0.96 |
1. The preceding figures were measured on ModelArts, the HUAWEI CLOUD AI development platform; they are the average performance obtained when the whole training process is offloaded to the Ascend 910 AI processor.
2. For figures from other open-source frameworks, see [ResNet-50 v1.5 for TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/Classification/RN50v1.5#nvidia-dgx-2-16x-v100-32g).
@@ -21,8 +21,8 @@
| Network | Network Type | Dataset | MindSpore Version | Resource | Precision | Batch Size | Throughput | Speedup |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| BERT-Large | Attention | zhwiki | 0.2.0-alpha | Ascend: 1 * Ascend 910 CPU:24 Cores | Mixed | 96 | 210 sentences/sec | - |
-| | | | | Ascend: 8 * Ascend 910 CPU:192 Cores | Mixed | 96 | 1613 sentences/sec | 0.96 |
+| BERT-Large | Attention | zhwiki | 0.5.0-beta | Ascend: 1 * Ascend 910 CPU:24 Cores | Mixed | 96 | 269 sentences/sec | - |
+| | | | | Ascend: 8 * Ascend 910 CPU:192 Cores | Mixed | 96 | 2069 sentences/sec | 0.96 |
1. The preceding figures were measured on ModelArts, the HUAWEI CLOUD AI development platform; the network contains 24 hidden layers, the sequence length is 128 tokens, and the vocabulary contains 21128 tokens.
2. For figures from other open-source frameworks, see [BERT For TensorFlow](https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT).
\ No newline at end of file
diff --git a/docs/source_zh_cn/index.rst b/docs/source_zh_cn/index.rst
index 3ce1555906e8fd0b38c6661ec21d748b6d8c4089..8e68b121f8549e85df45b59096585031b0784450 100644
--- a/docs/source_zh_cn/index.rst
+++ b/docs/source_zh_cn/index.rst
@@ -13,8 +13,9 @@ MindSpore文档
architecture
roadmap
benchmark
+ technical_white_paper
network_list
operator_list
constraints_on_network_construction
glossary
- community
+ community
\ No newline at end of file
diff --git a/docs/source_zh_cn/network_list.md b/docs/source_zh_cn/network_list.md
index 09e12befc6b672f1fb689e3b7cbcbf4e608fdff6..25d7640f15fcb0da99978da3213234df6f83eeab 100644
--- a/docs/source_zh_cn/network_list.md
+++ b/docs/source_zh_cn/network_list.md
@@ -2,17 +2,29 @@
+## Model Zoo
| Domain | Sub Domain | Network | Ascend | GPU | CPU
|:------ |:------| :----------- |:------ |:------ |:-----
|Computer Vision (CV) | Image Classification | [AlexNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/alexnet/src/alexnet.py) | Supported | Supported | Doing
-| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/googlenet.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [GoogleNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/googlenet/src/googlenet.py) | Supported | Doing | Doing
| Computer Vision (CV) | Image Classification | [LeNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/lenet/src/lenet.py) | Supported | Supported | Supported
-| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/resnet.py) | Supported | Doing | Doing
-|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/resnet.py) | Supported |Doing | Doing
-| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/vgg.py) | Supported | Doing | Doing
-| Computer Vision (CV) | Mobile Image Classification<br>Object Detection<br>Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/mobilenetv2/src/mobilenetV2.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Image Classification | [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/resnet/src/resnet.py) | Supported | Doing | Doing
+|Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/resnet/src/resnet.py) | Supported |Doing | Doing
+| Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/vgg16/src/vgg.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Mobile Image Classification<br>Object Detection<br>Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing
+| Computer Vision (CV) | Mobile Image Classification<br>Object Detection<br>Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
|Computer Vision (CV) | Object Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/ssd/src/ssd.py) | Supported |Doing | Doing
| Computer Vision (CV) | Object Detection | [YoloV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/yolov3/src/yolov3.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Object Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/faster_rcnn/src/FasterRcnn) | Supported | Doing | Doing
| Computer Vision (CV) | Semantic Segmentation | [Deeplabv3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [BERT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/bert/src/bert_model.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/Transformer/src/transformer_model.py) | Supported | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/lstm/src/lstm.py) | Doing | Supported | Supported
+| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/deepfm/src/deepfm.py) | Supported | Doing | Doing
+| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/wide_and_deep/src/wide_and_deep.py) | Supported | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GCN](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/gcn/src/gcn.py) | Supported | Doing | Doing
+| Graph Neural Networks (GNN) | Text Classification | [GAT](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/gat/src/gat.py) | Supported | Doing | Doing
+
+
+## Pre-trained Models
+Coming soon.
diff --git a/docs/source_zh_cn/operator_list.md b/docs/source_zh_cn/operator_list.md
index 7c1c98cba1aae4dbf1431d636a795874db83fbaa..4f6703e0fcd3a85dd0b431d87c880aefe1d855b1 100644
--- a/docs/source_zh_cn/operator_list.md
+++ b/docs/source_zh_cn/operator_list.md
@@ -23,7 +23,7 @@
| [mindspore.nn.LeakyReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LeakyReLU) | Supported |Doing | Doing |layer/activation
| [mindspore.nn.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Tanh) | Supported | Supported | Doing |layer/activation
| [mindspore.nn.GELU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GELU) | Supported | Supported | Doing |layer/activation
-| [mindspore.nn.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Sigmoid) | Supported |Doing | Doing |layer/activation
+| [mindspore.nn.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Sigmoid) | Supported |Supported | Doing |layer/activation
| [mindspore.nn.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.PReLU) | Supported |Doing | Doing |layer/activation
| [mindspore.nn.Dropout](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Dropout) |Supported | Supported | Doing |layer/basic
| [mindspore.nn.Flatten](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Flatten) |Supported | Supported | Doing |layer/basic
@@ -49,6 +49,10 @@
| [mindspore.nn.BatchNorm2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) | Supported | Supported | Doing |layer/normalization
| [mindspore.nn.GroupNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.GroupNorm) | Supported | Doing | Doing |layer/normalization
| [mindspore.nn.LayerNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LayerNorm) | Supported | Supported | Doing |layer/normalization
+| [mindspore.nn.MatrixDiag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiag) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MatrixDiagPart](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixDiagPart) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.MatrixSetDiag](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MatrixSetDiag) | Supported | Doing | Doing | layer/normalization
+| [mindspore.nn.LinSpace](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.LinSpace) | Supported | Doing | Doing | layer/normalization
| [mindspore.nn.MaxPool2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.MaxPool2d) | Supported | Supported | Supported |layer/pooling
| [mindspore.nn.AvgPool2d](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.AvgPool2d) |Doing | Supported | Doing |layer/pooling
| [mindspore.nn.L1Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.L1Loss) |Doing |Doing | Doing |loss/loss
@@ -100,9 +104,10 @@
| [mindspore.ops.operations.ReLU6](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ReLU6) | Supported | Supported |Doing | nn_ops
| [mindspore.ops.operations.HSwish](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSwish) | Doing | Supported |Doing | nn_ops
| [mindspore.ops.operations.HSigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.HSigmoid) | Doing | Supported |Doing | nn_ops
-| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.operations.Sigmoid](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sigmoid) | Supported | Supported |Doing | nn_ops
| [mindspore.ops.operations.Tanh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Tanh) | Supported | Supported |Doing | nn_ops
| [mindspore.ops.operations.BatchNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BatchNorm) | Supported | Doing |Doing | nn_ops
+| [mindspore.ops.operations.LRN](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LRN) | Supported | Doing |Doing | nn_ops
| [mindspore.ops.operations.Conv2D](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Conv2D) | Supported | Supported | Supported | nn_ops
| [mindspore.ops.operations.DepthwiseConv2dNative](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNative) | Supported | Doing |Doing | nn_ops
| [mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.DepthwiseConv2dNativeBackpropInput) | Supported | Doing |Doing | nn_ops
@@ -136,10 +141,10 @@
| [mindspore.ops.operations.PReLU](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.PReLU) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.LSTM](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LSTM) | Doing | Supported | Supported | nn_ops
| [mindspore.ops.operations.BasicLSTMCell](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BasicLSTMCell) | Doing | Doing | Doing | nn_ops
-| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.operations.SigmoidCrossEntropyWithLogits](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SigmoidCrossEntropyWithLogits) | Supported | Supported | Doing | nn_ops
| [mindspore.ops.operations.Pad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pad) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.ROIAlign](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ROIAlign) | Supported | Doing | Doing | nn_ops
-| [mindspore.ops.operations.Adam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Adam) | Supported | Doing | Doing | nn_ops
+| [mindspore.ops.operations.Adam](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Adam) | Supported | Supported | Doing | nn_ops
| [mindspore.ops.operations.BinaryCrossEntropy](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.BinaryCrossEntropy) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.LARSUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.LARSUpdate) | Supported | Doing | Doing | nn_ops
| [mindspore.ops.operations.TensorAdd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.TensorAdd) | Supported | Supported | Doing | math_ops
@@ -254,7 +259,7 @@
| [mindspore.ops.operations.ScatterNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNd) | Supported | Doing | Doing | array_ops
| [mindspore.ops.operations.ResizeNearestNeighbor](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ResizeNearestNeighbor) | Supported | Doing | Doing | array_ops
| [mindspore.ops.operations.GatherNd](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.GatherNd) | Supported | Doing | Doing | array_ops
-| [mindspore.ops.operations.ApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyFtrl) | Supported | Doing | Doing | array_ops
+| [mindspore.ops.operations.ApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ApplyFtrl) | Supported | Supported | Doing | array_ops
| [mindspore.ops.operations.SparseApplyFtrl](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SparseApplyFtrl) | Supported | Doing | Doing | array_ops
| [mindspore.ops.operations.ScatterNdUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterNdUpdate) | Supported | Doing | Doing | array_ops
| [mindspore.ops.operations.ScatterUpdate](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.ScatterUpdate) | Supported | Doing | Doing | array_ops
diff --git a/docs/source_zh_cn/technical_white_paper.md b/docs/source_zh_cn/technical_white_paper.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e34a3725679ae2105bea39aa768378483720ce1
--- /dev/null
+++ b/docs/source_zh_cn/technical_white_paper.md
@@ -0,0 +1,11 @@
+# Technical White Paper
+
+## Introduction
+Deep learning research and applications have grown explosively in recent decades, setting off the third wave of artificial intelligence and achieving great success in image recognition, speech recognition and synthesis, autonomous driving, machine vision, and other fields. This places more advanced demands on the application of algorithms and on the frameworks they depend on. The continuous evolution of deep learning frameworks makes it convenient to use large amounts of computing resources when training neural network models on large datasets.
+
+Deep learning is a class of machine learning algorithms that use multi-layer structures to automatically learn and extract high-level features from raw data; extracting high-level, abstract features from raw data is usually very difficult. There are currently two mainstream kinds of deep learning frameworks: one constructs a static graph before execution, defining all operations and the network structure in advance, with TensorFlow as the typical representative; this approach trades ease of use for better performance during training. The other performs dynamic graph computation with immediate execution, with PyTorch as the typical representative. Comparing the two, dynamic graphs are more flexible and easier to debug, but sacrifice performance. Existing deep learning frameworks therefore struggle to satisfy easy development and efficient execution at the same time.
+
+## Overview
+As a new-generation deep learning framework, MindSpore distills best practices from across the industry, is optimally matched to the compute power of Ascend processors, supports flexible deployment across device, edge, and cloud scenarios, pioneers a new AI programming paradigm, and lowers the barrier to AI development. MindSpore is a brand-new deep learning computing framework designed around three goals: easy development, efficient execution, and full-scenario coverage. To achieve easy development, MindSpore adopts an automatic differentiation (AD) mechanism based on source code transformation (SCT), which can represent complex compositions using control flow. A function is converted into an intermediate representation (IR), from which a computational graph is constructed that can be parsed and executed on different devices. Before execution, multiple software-hardware co-optimization techniques are applied to the graph to improve performance and efficiency in device, edge, cloud, and other scenarios. MindSpore supports dynamic graphs, which make it easier to inspect the running mode; because the automatic differentiation mechanism is based on source code transformation, switching between dynamic and static graph modes is very simple. To train large models on large datasets effectively, MindSpore supports data-parallel, model-parallel, and hybrid-parallel training through advanced manual configuration strategies, offering strong flexibility. In addition, MindSpore provides an "auto parallel" capability that efficiently searches a vast strategy space to find a fast parallel strategy. For the specific advantages of the MindSpore framework, see the detailed introduction.
+
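+A minimal sketch of the mode switching described above (`context.set_context` is the standard MindSpore API; the tiny network is illustrative only):
+
+```python
+# Hedged sketch: the same cell executed as a static graph, then eagerly.
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor, context
+
+net = nn.Dense(3, 2)                             # minimal illustrative network
+x = Tensor(np.ones((1, 3), np.float32))
+
+context.set_context(mode=context.GRAPH_MODE)     # static graph execution
+print(net(x))
+
+context.set_context(mode=context.PYNATIVE_MODE)  # dynamic (eager) execution
+print(net(x))
+```
+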
+[View the technical white paper](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com:443/%E7%99%BD%E7%9A%AE%E4%B9%A6/MindSpore%EF%BC%9A%E4%B8%80%E7%A7%8D%E5%85%A8%E5%9C%BA%E6%99%AF%E8%A6%86%E7%9B%96%E7%9A%84%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0%E8%AE%A1%E7%AE%97%E6%A1%86%E6%9E%B6.pdf)
\ No newline at end of file
diff --git a/install/mindspore_cpu_install.md b/install/mindspore_cpu_install.md
index ba4c8e689aac880bca8386d7e95ebcaa659d078a..49b23fbde9e75ddd6d51850d9b3d2a6563325380 100644
--- a/install/mindspore_cpu_install.md
+++ b/install/mindspore_cpu_install.md
@@ -21,9 +21,9 @@
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5<br>**Installation dependencies:**<br> same as the executable file installation dependencies |
+| MindSpore master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5<br>**Installation dependencies:**<br> same as the executable file installation dependencies |
-- When the Ubuntu version is 18.04, GCC 7.3.0 can be installed directly using the apt command.
+- GCC 7.3.0 can be installed directly using the apt command.
- When the network is connected, the dependency items in `requirements.txt` are downloaded automatically during .whl package installation; in other cases, install them manually.
### (Optional) Installing Conda
@@ -68,11 +68,11 @@
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
- bash build.sh -e cpu -z -j4
+ bash build.sh -e cpu -j4
```
> - Before running the preceding command, ensure that the paths of the executable files cmake and patch have been added to the PATH environment variable.
> - The `build.sh` script runs `git clone` to fetch the code of third-party dependencies, so ensure in advance that the network settings of Git are correct and available.
- > - If the build machine performs well, you can add -j{number of threads} to increase the thread count, for example, `bash build.sh -e cpu -z -j12`.
+ > - If the build machine performs well, you can add -j{number of threads} to increase the thread count, for example, `bash build.sh -e cpu -j12`.
3. Run the following command to install MindSpore:
@@ -97,7 +97,7 @@
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---------------------- | :------------------ | :----------------------------------------------------------- | :----------------------- |
-| MindArmour master | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - MindSpore master<br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | Same as the executable file installation dependencies |
+| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - MindSpore master<br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | Same as the executable file installation dependencies |
- When the network is connected, the dependency items in `setup.py` are downloaded automatically during .whl package installation; in other cases, install them manually.
diff --git a/install/mindspore_cpu_install_en.md b/install/mindspore_cpu_install_en.md
index 6170254ae2690bce51e2fe5db63bbe4497088440..7e6df0bc512e5f4d4192005f3bcd6c2060ba2084 100644
--- a/install/mindspore_cpu_install_en.md
+++ b/install/mindspore_cpu_install_en.md
@@ -21,9 +21,9 @@ This document describes how to quickly install MindSpore on a Ubuntu system with
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5<br> same as the executable file installation dependencies. |
+| MindSpore master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5<br> same as the executable file installation dependencies. |
-- When the Ubuntu version is 18.04, GCC 7.3.0 can be installed using the apt command.
+- GCC 7.3.0 can be installed using the apt command.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
### (Optional) Installing Conda
@@ -68,11 +68,11 @@ This document describes how to quickly install MindSpore on a Ubuntu system with
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
- bash build.sh -e cpu -z -j4
+ bash build.sh -e cpu -j4
```
> - Before running the preceding command, ensure that the paths where the executable files cmake and patch are stored have been added to the environment variable PATH.
> - In the `build.sh` script, the `git clone` command will be executed to obtain the code in the third-party dependency database. Ensure that the network settings of Git are correct.
- > - If the build machine performs well, you can add -j{Number of threads} to the script to increase the number of threads. For example, `bash build.sh -e cpu -z -j12`.
+ > - If the build machine performs well, you can add -j{Number of threads} to the script to increase the number of threads. For example, `bash build.sh -e cpu -j12`.
3. Run the following command to install MindSpore:
@@ -97,7 +97,7 @@ If you need to conduct AI model security research or enhance the security of the
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindArmour master | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - MindSpore master<br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - MindSpore master<br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
diff --git a/install/mindspore_cpu_win_install.md b/install/mindspore_cpu_win_install.md
index 5b33ddb3c8567ac9182297fb91ba4b93f3173497..90e6af3b20a0589acf93f0db436652d97ade6413 100644
--- a/install/mindspore_cpu_win_install.md
+++ b/install/mindspore_cpu_win_install.md
@@ -4,7 +4,7 @@
-- [Installing MindSpore on Windows](#windows系统安装mindspore)
+- [Installing MindSpore](#安装mindspore)
- [Environment Requirements](#环境要求)
- [System Requirements and Software Dependencies](#系统要求和软件依赖)
- [(Optional) Installing Conda](#conda安装可选)
diff --git a/install/mindspore_cpu_win_install_en.md b/install/mindspore_cpu_win_install_en.md
index 2f1fe17739a318d9ebbc63c5d79ea46409ec662d..5a8a48949d997c0d23aea9990016abf0dcb2f358 100644
--- a/install/mindspore_cpu_win_install_en.md
+++ b/install/mindspore_cpu_win_install_en.md
@@ -4,7 +4,7 @@ This document describes how to quickly install MindSpore on a Windows system wit
-- [MindSpore Installation Guide on Windows](#mindspore-installation-guide-on-windows)
+- [MindSpore Installation Guide](#mindspore-installation-guide)
- [Environment Requirements](#environment-requirements)
- [System Requirements and Software Dependencies](#system-requirements-and-software-dependencies)
- [(Optional) Installing Conda](#optional-installing-conda)
diff --git a/install/mindspore_d_install.md b/install/mindspore_d_install.md
index ddd4c1c9b88ba603323c552997a4bceb586a33d4..76bf4fd4516497c2d2a4382cb40ffebd2bc3a36f 100644
--- a/install/mindspore_d_install.md
+++ b/install/mindspore_d_install.md
@@ -33,10 +33,10 @@
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 aarch64<br> - Ubuntu 18.04 x86_64<br> - EulerOS 2.8 aarch64<br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00T100)<br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00T100)<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5<br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2<br>**Installation dependencies:**<br> same as the executable file installation dependencies |
+| MindSpore master | - Ubuntu 18.04 aarch64<br> - Ubuntu 18.04 x86_64<br> - EulerOS 2.8 aarch64<br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00)<br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00)<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5<br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2<br>**Installation dependencies:**<br> same as the executable file installation dependencies |
-- Confirm that the current user has permission to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00T100); if not, the root user needs to add the current user to the user group of `/usr/local/Ascend`. For the specific configuration, see the documentation of the software package.
-- When the Ubuntu version is 18.04, GCC 7.3.0 can be installed directly using the apt command.
+- Confirm that the current user has permission to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00); if not, the root user needs to add the current user to the user group of `/usr/local/Ascend`. For the specific configuration, see the documentation of the software package.
+- GCC 7.3.0 can be installed directly using the apt command.
- When the network is connected, the dependency items in `requirements.txt` are downloaded automatically during .whl package installation; in other cases, install them manually.
### (Optional) Installing Conda
@@ -57,7 +57,7 @@
### Configuring Software Package Dependencies
- - Install the .whl package provided in the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00T100). The .whl package is released with the software package; reinstall it after the software package is upgraded.
+ - Install the .whl package provided in the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00). The .whl package is released with the software package; reinstall it after the software package is upgraded.
```bash
pip install /usr/local/Ascend/fwkacllib/lib64/topi-{version}-py3-none-any.whl
@@ -88,11 +88,11 @@
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
- bash build.sh -e d -z
+ bash build.sh -e ascend
```
> - Before running the preceding command, ensure that the paths of the executable files `cmake` and `patch` have been added to the PATH environment variable.
> - The `build.sh` script runs `git clone` to fetch the code of third-party dependencies, so ensure in advance that the network settings of Git are correct and available.
- > - In `build.sh`, the default number of compilation threads is 8. If the build machine performs poorly, compilation errors may occur; you can add -j{number of threads} to reduce the thread count, for example, `bash build.sh -e d -z -j4`.
+ > - In `build.sh`, the default number of compilation threads is 8. If the build machine performs poorly, compilation errors may occur; you can add -j{number of threads} to reduce the thread count, for example, `bash build.sh -e ascend -j4`.
3. Run the following command to install MindSpore:
@@ -103,19 +103,40 @@
## Configuring Environment Variables
-- After MindSpore is installed, export the runtime-related environment variables.
+- On EulerOS, after MindSpore is installed, export the runtime-related environment variables.
```bash
# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
export GLOG_v=2
+
# Conda environmental options
LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
# lib libraries that the run package depends on
export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/fwkacllib/lib64:${LD_LIBRARY_PATH}
+
+ # Environment variables that must be configured
+ export TBE_IMPL_PATH=${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+ export PATH=${LOCAL_ASCEND}/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+ export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+ ```
+
+- On Ubuntu, after MindSpore is installed, export the runtime-related environment variables. Note: replace {version} in the following configuration with the actual version number in your environment.
+
+ ```bash
+ # control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
+ export GLOG_v=2
+
+ # Conda environmental options
+ LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
+ # lib libraries that the run package depends on
+ export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/ascend-toolkit/{version}/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LD_LIBRARY_PATH}
+
# Environment variables that must be configured
- export TBE_IMPL_PATH=${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
- export PATH=${LOCAL_ASCEND}/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
- export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+ export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/{version}/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+ export PATH=${LOCAL_ASCEND}/ascend-toolkit/{version}/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+ export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
```
## Installation Verification
@@ -160,7 +181,7 @@
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 aarch64<br> - 18.04 x86_64<br> - EulerOS 2.8 aarch64<br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - MindSpore master<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt) | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3<br>**Installation dependencies:**<br> same as the executable file installation dependencies |
+| MindInsight master | - Ubuntu 18.04 aarch64<br> - Ubuntu 18.04 x86_64<br> - EulerOS 2.8 aarch64<br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - MindSpore master<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt) | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3<br>**Installation dependencies:**<br> same as the executable file installation dependencies |
- When the network is connected, the dependency items in `requirements.txt` are downloaded automatically during .whl package installation; in other cases, install them manually.
diff --git a/install/mindspore_d_install_en.md b/install/mindspore_d_install_en.md
index bc6292f557f8502fb7f001543f70e6e8acb2ec2a..57996d7c608ff94625108189c20d2210645396b3 100644
--- a/install/mindspore_d_install_en.md
+++ b/install/mindspore_d_install_en.md
@@ -32,10 +32,10 @@ This document describes how to quickly install MindSpore on an Ascend AI process
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 aarch64<br> - Ubuntu 18.04 x86_64<br> - EulerOS 2.8 aarch64<br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - Ascend 910 AI processor software package(Version:Atlas Data Center Solution V100R020C00T100)<br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - Ascend 910 AI processor software package(Version:Atlas Data Center Solution V100R020C00T100)<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5<br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2<br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
+| MindSpore master | - Ubuntu 18.04 aarch64<br> - Ubuntu 18.04 x86_64<br> - EulerOS 2.8 aarch64<br> - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5<br> - Ascend 910 AI processor software package(Version:Atlas Data Center Solution V100R020C00)<br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2<br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:**<br> - [Python](https://www.python.org/downloads/) 3.7.5<br> - Ascend 910 AI processor software package(Version:Atlas Data Center Solution V100R020C00)<br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0<br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0<br> - [CMake](https://cmake.org/download/) >= 3.14.1<br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5<br> - [gmp](https://gmplib.org/download/gmp/) 6.1.2<br>**Installation dependencies:**<br> same as the executable file installation dependencies. |
-- Confirm that the current user has the right to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00T100). If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, please refer to the software package instruction document.
-- When the Ubuntu version is 18.04, GCC 7.3.0 can be installed using the apt command.
+- Confirm that the current user has the right to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00). If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, please refer to the software package instruction document.
+- GCC 7.3.0 can be installed using the apt command.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
### (Optional) Installing Conda
@@ -56,7 +56,7 @@ This document describes how to quickly install MindSpore on an Ascend AI process
### Configuring Software Package Dependencies
- - Install the .whl package provided in the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00T100). The .whl package is released with the software package. After the software package is upgraded, reinstall the .whl package.
+ - Install the .whl package provided in the Ascend 910 AI processor software package (Version: Atlas Data Center Solution V100R020C00). The .whl package is released with the software package. After the software package is upgraded, reinstall the .whl package.
```bash
pip install /usr/local/Ascend/fwkacllib/lib64/topi-{version}-py3-none-any.whl
@@ -87,11 +87,11 @@ The compilation and installation must be performed on the Ascend 910 AI processo
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
- bash build.sh -e d -z
+ bash build.sh -e ascend
```
> - Before running the preceding command, ensure that the paths where the executable files `cmake` and `patch` are stored have been added to the environment variable PATH.
> - In the `build.sh` script, the `git clone` command will be executed to obtain the code in the third-party dependency database. Ensure that the network settings of Git are correct.
- > - In the `build.sh` script, the default number of compilation threads is 8. If the compiler performance is poor, compilation errors may occur. You can add -j{Number of threads} in to script to reduce the number of threads. For example, `bash build.sh -e d -z -j4`.
+ > - In the `build.sh` script, the default number of compilation threads is 8. If the compiler performance is poor, compilation errors may occur. You can add `-j{number of threads}` to the command to reduce the number of threads. For example, `bash build.sh -e ascend -j4`.
3. Run the following command to install MindSpore:
@@ -102,19 +102,40 @@ The compilation and installation must be performed on the Ascend 910 AI processo
## Configuring Environment Variables
-- After MindSpore is installed, export runtime-related environment variables.
+- In EulerOS, after MindSpore is installed, export runtime-related environment variables.
```bash
# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
export GLOG_v=2
+
# Conda environmental options
LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
# lib libraries that the run package depends on
export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/fwkacllib/lib64:${LD_LIBRARY_PATH}
+
+ # Environment variables that must be configured
+ export TBE_IMPL_PATH=${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+ export PATH=${LOCAL_ASCEND}/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+ export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+ ```
+
+- In Ubuntu, after MindSpore is installed, export runtime-related environment variables. Note: replace {version} in the following configuration with the actual version number installed in the environment.
+
+ ```bash
+ # control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, default level is WARNING.
+ export GLOG_v=2
+
+ # Conda environmental options
+ LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
+
+ # lib libraries that the run package depends on
+ export LD_LIBRARY_PATH=${LOCAL_ASCEND}/add-ons/:${LOCAL_ASCEND}/ascend-toolkit/{version}/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LD_LIBRARY_PATH}
+
# Environment variables that must be configured
- export TBE_IMPL_PATH=${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
- export PATH=${LOCAL_ASCEND}/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
- export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
+ export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/{version}/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
+ export PATH=${LOCAL_ASCEND}/ascend-toolkit/{version}/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
+ export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
```
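- To double-check the configuration, you can print the variables just exported from Python. This is an optional sketch, not part of the official procedure:

    ```python
    import os

    # An empty value usually means the corresponding `export` line above was skipped.
    for name in ("GLOG_v", "LD_LIBRARY_PATH", "TBE_IMPL_PATH", "PATH", "PYTHONPATH"):
        print(name, "=", os.environ.get(name, "<not set>"))
    ```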
## Installation Verification
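- A minimal check (a sketch along the lines of the verification sample used in MindSpore docs of this era; adapt as needed) is to run a small tensor addition on the Ascend device:

    ```python
    import numpy as np
    from mindspore import Tensor
    from mindspore.ops import functional as F
    import mindspore.context as context

    # If this prints a [1, 3, 3, 4] tensor filled with 2.0, the installation
    # and the runtime configuration above are working.
    context.set_context(device_target="Ascend")
    x = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
    y = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
    print(F.tensor_add(x, y))
    ```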
diff --git a/install/mindspore_gpu_install.md b/install/mindspore_gpu_install.md
index 6fec7a71c51708a188ec475332e4169fa737f47d..95f6c23866ba3d78c15ce1d333a880aeca9bedf6 100644
--- a/install/mindspore_gpu_install.md
+++ b/install/mindspore_gpu_install.md
@@ -28,9 +28,8 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Ubuntu 16.04(及以上) x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (可选,单机多卡/多机多卡训练需要) <br> - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.4.8-1 (可选,单机多卡/多机多卡训练需要) <br> - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **编译依赖:** <br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> **安装依赖:** <br> 与可执行文件安装依赖相同 |
+| MindSpore master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (可选,单机多卡/多机多卡训练需要) <br> - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.4.8-1 (可选,单机多卡/多机多卡训练需要) <br> - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **编译依赖:** <br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> **安装依赖:** <br> 与可执行文件安装依赖相同 |
-- Ubuntu版本为18.04时,GCC 7.3.0可以直接通过apt命令安装。
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
- 为了方便用户使用,MindSpore降低了对Autoconf、Libtool、Automake版本的依赖,可以使用系统自带版本。
@@ -69,11 +68,11 @@
2. 在源码根目录下执行如下命令编译MindSpore。
```bash
- bash build.sh -e gpu -M on -z
+ bash build.sh -e gpu
```
> - 在执行上述命令前,需保证可执行文件`cmake`和`patch`所在路径已加入环境变量PATH中。
> - `build.sh`中会执行`git clone`获取第三方依赖库的代码,请提前确保git的网络设置正确可用。
- > - `build.sh`中默认的编译线程数为8,如果编译机性能较差可能会出现编译错误,可在执行中增加-j{线程数}来减少线程数量。如`bash build.sh -e gpu -M on -z -j4`。
+ > - `build.sh`中默认的编译线程数为8,如果编译机性能较差可能会出现编译错误,可在执行中增加-j{线程数}来减少线程数量。如`bash build.sh -e gpu -j4`。
3. 执行如下命令安装MindSpore。
@@ -124,7 +123,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 16.04(及以上) x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore master <br> - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt) | **编译依赖:** <br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 <br> **安装依赖:** <br> 与可执行文件安装依赖相同 |
+| MindInsight master | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore master <br> - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt) | **编译依赖:** <br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 <br> **安装依赖:** <br> 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
@@ -189,7 +188,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---------------------- | :------------------ | :----------------------------------------------------------- | :----------------------- |
-| MindArmour master | Ubuntu 16.04(及以上) x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore master <br> - 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 |
+| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore master <br> - 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`setup.py`中的依赖项,其余情况需自行安装。
diff --git a/install/mindspore_gpu_install_en.md b/install/mindspore_gpu_install_en.md
index d8ed2663716a0aa00562a282559c4a898050f171..a137ca4e4877d7b11546f8c6a3523bf66dd578b8 100644
--- a/install/mindspore_gpu_install_en.md
+++ b/install/mindspore_gpu_install_en.md
@@ -28,9 +28,8 @@ This document describes how to quickly install MindSpore on a NVIDIA GPU environ
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.4.8-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:** <br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> **Installation dependencies:** <br> same as the executable file installation dependencies. |
+| MindSpore master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.4.8-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:** <br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 <br> - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 <br> - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 <br> - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 <br> - [CUDA 9.2](https://developer.nvidia.com/cuda-92-download-archive) / [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) <br> - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 <br> **Installation dependencies:** <br> same as the executable file installation dependencies. |
-- When Ubuntu version is 18.04, GCC 7.3.0 can be installed by using apt command.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during `.whl` package installation. In other cases, you need to manually install dependency items.
- For the convenience of users, MindSpore reduces its dependency on specific Autoconf, Libtool, and Automake versions; the default versions built into the system are now supported.
@@ -69,11 +68,11 @@ This document describes how to quickly install MindSpore on a NVIDIA GPU environ
2. Run the following command in the root directory of the source code to compile MindSpore:
```bash
- bash build.sh -e gpu -M on -z
+ bash build.sh -e gpu
```
> - Before running the preceding command, ensure that the paths where the executable files `cmake` and `patch` store have been added to the environment variable PATH.
> - In the `build.sh` script, the `git clone` command will be executed to obtain the code in the third-party dependency database. Ensure that the network settings of Git are correct.
- > - In the `build.sh` script, the default number of compilation threads is 8. If the compiler performance is poor, compilation errors may occur. You can add -j{Number of threads} in to script to reduce the number of threads. For example, `bash build.sh -e gpu -M on -z -j4`.
+ > - In the `build.sh` script, the default number of compilation threads is 8. If the compiler performance is poor, compilation errors may occur. You can add `-j{number of threads}` to the command to reduce the number of threads. For example, `bash build.sh -e gpu -j4`.
3. Run the following command to install MindSpore:
@@ -124,7 +123,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore master <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt). | **Compilation dependencies:** <br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 <br> **Installation dependencies:** <br> same as the executable file installation dependencies. |
+| MindInsight master | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore master <br> - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt). | **Compilation dependencies:** <br> - [Python](https://www.python.org/downloads/) 3.7.5 <br> - [CMake](https://cmake.org/download/) >= 3.14.1 <br> - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 <br> - [node.js](https://nodejs.org/en/download/) >= 10.19.0 <br> - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 <br> - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 <br> **Installation dependencies:** <br> same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -191,7 +190,7 @@ If you need to conduct AI model security research or enhance the security of the
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindArmour master | Ubuntu 16.04 or later x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore master <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 <br> - MindSpore master <br> - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
diff --git a/resource/api_mapping.md b/resource/api_mapping.md
index bb3471b378ec24b365eb784563d60598c72dc3d0..4743f88c02b7b56530ad9ba7ecfc48eb0f5f5db3 100644
--- a/resource/api_mapping.md
+++ b/resource/api_mapping.md
@@ -9,12 +9,18 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.add | mindspore.ops.operations.TensorAdd |
| torch.argmax | mindspore.ops.operations.Argmax |
| torch.argmin | mindspore.ops.operations.Argmin |
+| torch.asin | mindspore.ops.operations.Asin |
+| torch.atan | mindspore.ops.operations.Atan |
| torch.atan2 | mindspore.ops.operations.Atan2 |
+| torch.bitwise_and | mindspore.ops.operations.BitwiseAnd |
+| torch.bitwise_or | mindspore.ops.operations.BitwiseOr |
| torch.bmm | mindspore.ops.operations.BatchMatMul |
| torch.cat | mindspore.ops.operations.Concat |
+| torch.ceil | mindspore.ops.operations.Ceil |
| torch.chunk | mindspore.ops.operations.Split |
| torch.clamp | mindspore.ops.composite.clip_by_value |
| torch.cos | mindspore.ops.operations.Cos |
+| torch.cosh | mindspore.ops.operations.Cosh |
| torch.cuda.device_count | mindspore.communication.get_group_size |
| torch.cuda.set_device | mindspore.context.set_context |
| torch.cumprod | mindspore.ops.operations.CumProd |
@@ -27,9 +33,11 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.eq | mindspore.ops.operations.Equal |
| torch.erfc | mindspore.ops.operations.Erfc |
| torch.exp | mindspore.ops.operations.Exp |
+| torch.expm1 | mindspore.ops.operations.Expm1 |
| torch.eye | mindspore.ops.operations.Eye |
| torch.flatten | mindspore.ops.operations.Flatten |
| torch.floor | mindspore.ops.operations.Floor |
+| torch.linspace | mindspore.nn.LinSpace |
| torch.load | mindspore.train.serialization.load_checkpoint |
| torch.log | mindspore.ops.operations.Log |
| torch.log1p | mindspore.ops.operations.Log1p |
@@ -96,7 +104,10 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.numel | mindspore.ops.operations.Size |
| torch.ones | mindspore.ops.operations.OnesLike |
| torch.ones_like | mindspore.ops.operations.OnesLike |
+| torch.optim.Adadelta | mindspore.ops.operations.ApplyAdadelta |
+| torch.optim.Adagrad | mindspore.ops.operations.ApplyAdagrad |
| torch.optim.Adam | mindspore.nn.Adam |
+| torch.optim.Adamax | mindspore.ops.operations.ApplyAdaMax |
| torch.optim.AdamW | mindspore.nn.AdamWeightDecay |
| torch.optim.lr_scheduler.CosineAnnealingWarmRestarts | mindspore.nn.dynamic_lr.cosine_decay_lr |
| torch.optim.lr_scheduler.StepLR | mindspore.nn.dynamic_lr.piecewise_constant_lr |
@@ -106,10 +117,12 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.pow | mindspore.ops.operations.Pow |
| torch.prod | mindspore.ops.operations.ReduceProd |
| torch.randn | mindspore.ops.operations.TruncatedNormal |
+| torch.range | mindspore.nn.Range |
| torch.round | mindspore.ops.operations.Round |
| torch.save | mindspore.train.serialization.save_checkpoint |
| torch.sigmoid | mindspore.ops.operations.Sigmoid |
| torch.sin | mindspore.ops.operations.Sin |
+| torch.sinh | mindspore.ops.operations.Sinh |
| torch.sparse.FloatTensor | mindspore.Tensor |
| torch.split | mindspore.ops.operations.Split |
| torch.sqrt | mindspore.ops.operations.Sqrt |
@@ -121,8 +134,10 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torch.tensor | mindspore.Tensor |
| torch.Tensor | mindspore.Tensor |
| torch.Tensor.chunk | mindspore.ops.operations.Split |
+| torch.Tensor.expand | mindspore.ops.operations.BroadcastTo |
| torch.Tensor.fill_ | mindspore.ops.operations.Fill |
| torch.Tensor.float | mindspore.ops.operations.Cast |
+| torch.Tensor.index_add | mindspore.ops.operations.InplaceAdd |
| torch.Tensor.mm | mindspore.ops.operations.MatMul |
| torch.Tensor.mul | mindspore.ops.operations.Mul |
| torch.Tensor.pow | mindspore.ops.operations.Pow |
@@ -157,4 +172,4 @@ Mapping between PyTorch APIs and MindSpore APIs, which is provided by the commun
| torchvision.transforms.Normalize | mindspore.dataset.transforms.vision.py_transforms.Normalize |
| torchvision.transforms.RandomHorizontalFlip | mindspore.dataset.transforms.vision.py_transforms.RandomHorizontalFlip |
| torchvision.transforms.Resize | mindspore.dataset.transforms.vision.py_transforms.Resize |
-| torchvision.transforms.ToTensor | mindspore.dataset.transforms.vision.py_transforms.ToTensor |
\ No newline at end of file
+| torchvision.transforms.ToTensor | mindspore.dataset.transforms.vision.py_transforms.ToTensor |
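+
+As an illustration of how to read the table, the `torch.cat` row maps to the `Concat` primitive. A minimal sketch (tensor shapes chosen arbitrarily):
+
+```python
+import numpy as np
+import mindspore.ops.operations as P
+from mindspore import Tensor
+
+# torch.cat((a, b), dim=0) corresponds to Concat(axis=0) applied to a tuple of tensors.
+concat = P.Concat(axis=0)
+a = Tensor(np.ones((2, 3), dtype=np.float32))
+b = Tensor(np.zeros((2, 3), dtype=np.float32))
+print(concat((a, b)))  # a (4, 3) tensor: two rows of ones, then two rows of zeros
+```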
diff --git a/resource/faq/FAQ_en.md b/resource/faq/FAQ_en.md
index 7fa3b3034787ff8ea08b355864c47e199bc1cfb6..5815ab26b8cfaecc15635dd13bec6e6b1fa6c772 100644
--- a/resource/faq/FAQ_en.md
+++ b/resource/faq/FAQ_en.md
@@ -74,7 +74,7 @@ A: MindSpore has basic support for common training scenarios, please refer to [R
Q: What are the available recommendation or text generation networks or models provided by MindSpore?
-A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo).
+A: Currently, recommendation models such as Wide & Deep, DeepFM, and NCF are under development. In the natural language processing (NLP) field, Bert\_NEZHA is available and models such as MASS are under development. You can rebuild the network into a text generation network based on the scenario requirements. Please stay tuned for updates on the [MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).
### Backend Support
diff --git a/resource/faq/FAQ_zh_cn.md b/resource/faq/FAQ_zh_cn.md
index db448da184e2b5370da5245b4992fa995be8aa38..3ee10f6b70d94b00af22bc113e9b98e5e40f8248 100644
--- a/resource/faq/FAQ_zh_cn.md
+++ b/resource/faq/FAQ_zh_cn.md
@@ -73,7 +73,7 @@ A:MindSpore针对典型场景均有模型训练支持,支持情况详见[Rel
Q:MindSpore有哪些现成的推荐类或生成类网络或模型可用?
-A:目前正在开发Wide & Deep、DeepFM、NCF等推荐类模型,NLP领域已经支持Bert_NEZHA,正在开发MASS等模型,用户可根据场景需要改造为生成类网络,可以关注[MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)。
+A:目前正在开发Wide & Deep、DeepFM、NCF等推荐类模型,NLP领域已经支持Bert_NEZHA,正在开发MASS等模型,用户可根据场景需要改造为生成类网络,可以关注[MindSpore Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)。
### 后端支持
diff --git a/resource/release/release_list_en.md b/resource/release/release_list_en.md
index f4634a873d5b6bee40823a3b30a8b391ca42845a..5c43336038ddc2fac9f5d84005a924ecfaeb6c51 100644
--- a/resource/release/release_list_en.md
+++ b/resource/release/release_list_en.md
@@ -3,6 +3,12 @@
- [Release List](#release-list)
+ - [0.5.0-beta](#050-beta)
+ - [Releasenotes](#releasenotes)
+ - [Downloads](#downloads)
+ - [Tutorials](#tutorials)
+ - [API](#api)
+ - [Docs](#docs)
- [0.3.0-alpha](#030-alpha)
- [Releasenotes](#releasenotes)
- [Downloads](#downloads)
@@ -25,6 +31,44 @@
+## 0.5.0-beta
+
+### Releasenotes
+
+
+
+### Downloads
+
+| Module Name | Hardware Platform | Operating System | Download Links | SHA-256 |
+| --- | --- | --- | --- | --- |
+| MindSpore | Ascend910 | Ubuntu-x86 | | 0c5afb5cef15065424cfa60beb6bb3a6073c977e815fae1004299f8de4bd0fac |
+| | | Ubuntu-aarch64 | | eda47fc6e4646f0b3bcee3e37af5eb8426208f162fcee2d53b2c8310f13509c3 |
+| | | EulerOS-x86 | | a108b9f238a91dee75c3005f81454a4a4e82972c54d062ebd8d62951704c0a56 |
+| | | EulerOS-aarch64 | | c7aba79315c6fabdc8587e8f62f26b0069e0057d308eb4d81257f05f27b4c154 |
+| | GPU CUDA 10.1 | Ubuntu-x86 | | 8532e060f31e96fc0bef6c196959ede665a9d049d60ce9e2e533ddc1d6b6222d |
+| | CPU | Ubuntu-x86 | | 72e0755120060ee450e74a8ef953133b6c22a203e19de25dcba8b861fae08d52 |
+| | | Windows-x64 | | ecd9144406ec7415cdfce8b55a9fd1616b528c84d6fde5c53cf329420dfb6409 |
+| MindInsight | Ascend910 | Ubuntu-x86 | | 34b3c1a5ffbf9fa5e46dc6f295abde0308b65d76fd18d4551103ca0e222e3651 |
+| | | Ubuntu-aarch64 | | 97f92b556f8e97e250f311f5d11caace4ac5686015b099b98462d9603e2c5724 |
+| | | EulerOS-x86 | | 5fab87c3dfda57851a9981c7567200f0f0d856462b8dd521402b085830e6554f |
+| | | EulerOS-aarch64 | | 7a157fb849f078fef6792353414737a8eccd98ba7a6fdd3c4ba3b497bc3f019f |
+| | GPU CUDA 10.1 | Ubuntu-x86 | | 34b3c1a5ffbf9fa5e46dc6f295abde0308b65d76fd18d4551103ca0e222e3651 |
+| MindArmour | Ascend910 | Ubuntu-x86/EulerOS-x86 | | 1c80113575e27d8330f6f951fd3a68b7a01b2b642a3d2b2d8c070325d71161e5 |
+| | | Ubuntu-aarch64/EulerOS-aarch64 | | f5d9bf5941d5f3273deb72cf77dc63767ee5ab09f9e329b4020899195f67d951 |
+| | GPU CUDA 10.1/CPU | Ubuntu-x86 | | 1c80113575e27d8330f6f951fd3a68b7a01b2b642a3d2b2d8c070325d71161e5 |
+
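+To verify a downloaded package against the table, compare its SHA-256 with the listed value. A minimal sketch (the file name is hypothetical; substitute the package you actually downloaded):
+
+```python
+import hashlib
+
+def sha256_of(path, chunk_size=1 << 20):
+    """Compute the SHA-256 digest of a file without reading it into memory at once."""
+    digest = hashlib.sha256()
+    with open(path, "rb") as f:
+        for chunk in iter(lambda: f.read(chunk_size), b""):
+            digest.update(chunk)
+    return digest.hexdigest()
+
+# Compare against the value in the table row for your platform.
+print(sha256_of("mindspore-0.5.0-cp37-cp37m-linux_x86_64.whl"))
+```
+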
+### Tutorials
+
+
+
+### API
+
+
+
+### Docs
+
+
+
## 0.3.0-alpha
### Releasenotes
diff --git a/resource/release/release_list_zh_cn.md b/resource/release/release_list_zh_cn.md
index e1d1672fc3e89d922af2350455f2742492d3c0d1..20cd0b4fa496923fd992734740a749514cc02475 100644
--- a/resource/release/release_list_zh_cn.md
+++ b/resource/release/release_list_zh_cn.md
@@ -3,6 +3,12 @@
- [发布版本列表](#发布版本列表)
+ - [0.5.0-beta](#050-beta)
+ - [版本说明](#版本说明)
+ - [下载地址](#下载地址)
+ - [教程](#教程)
+ - [API](#api)
+ - [文档](#文档)
- [0.3.0-alpha](#030-alpha)
- [版本说明](#版本说明)
- [下载地址](#下载地址)
@@ -25,6 +31,39 @@
+## 0.5.0-beta
+### 版本说明
+
+
+
+### 下载地址
+| 组件 | 硬件平台 | 操作系统 | 链接 | SHA-256 |
+| --- | --- | --- | --- | --- |
+| MindSpore | Ascend910 | Ubuntu-x86 | | 0c5afb5cef15065424cfa60beb6bb3a6073c977e815fae1004299f8de4bd0fac |
+| | | Ubuntu-aarch64 | | eda47fc6e4646f0b3bcee3e37af5eb8426208f162fcee2d53b2c8310f13509c3 |
+| | | EulerOS-x86 | | a108b9f238a91dee75c3005f81454a4a4e82972c54d062ebd8d62951704c0a56 |
+| | | EulerOS-aarch64 | | c7aba79315c6fabdc8587e8f62f26b0069e0057d308eb4d81257f05f27b4c154 |
+| | GPU CUDA 10.1 | Ubuntu-x86 | | 8532e060f31e96fc0bef6c196959ede665a9d049d60ce9e2e533ddc1d6b6222d |
+| | CPU | Ubuntu-x86 | | 72e0755120060ee450e74a8ef953133b6c22a203e19de25dcba8b861fae08d52 |
+| | | Windows-x64 | | ecd9144406ec7415cdfce8b55a9fd1616b528c84d6fde5c53cf329420dfb6409 |
+| MindInsight | Ascend910 | Ubuntu-x86 | | 34b3c1a5ffbf9fa5e46dc6f295abde0308b65d76fd18d4551103ca0e222e3651 |
+| | | Ubuntu-aarch64 | | 97f92b556f8e97e250f311f5d11caace4ac5686015b099b98462d9603e2c5724 |
+| | | EulerOS-x86 | | 5fab87c3dfda57851a9981c7567200f0f0d856462b8dd521402b085830e6554f |
+| | | EulerOS-aarch64 | | 7a157fb849f078fef6792353414737a8eccd98ba7a6fdd3c4ba3b497bc3f019f |
+| | GPU CUDA 10.1 | Ubuntu-x86 | | 34b3c1a5ffbf9fa5e46dc6f295abde0308b65d76fd18d4551103ca0e222e3651 |
+| MindArmour | Ascend910 | Ubuntu-x86/EulerOS-x86 | | 1c80113575e27d8330f6f951fd3a68b7a01b2b642a3d2b2d8c070325d71161e5 |
+| | | Ubuntu-aarch64/EulerOS-aarch64 | | f5d9bf5941d5f3273deb72cf77dc63767ee5ab09f9e329b4020899195f67d951 |
+| | GPU CUDA 10.1/CPU | Ubuntu-x86 | | 1c80113575e27d8330f6f951fd3a68b7a01b2b642a3d2b2d8c070325d71161e5 |
+
+### 教程
+
+
+### API
+
+
+### 文档
+
+
## 0.3.0-alpha
### 版本说明
diff --git a/tutorials/notebook/mindinsight/images/data_lineage.png b/tutorials/notebook/mindinsight/images/data_lineage.png
new file mode 100644
index 0000000000000000000000000000000000000000..583707d9d796d63b04a66ce48dced48b162aad8f
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/data_lineage.png differ
diff --git a/tutorials/notebook/mindinsight/images/histogram.png b/tutorials/notebook/mindinsight/images/histogram.png
new file mode 100644
index 0000000000000000000000000000000000000000..a0bea73d058cbd971e56801a0c92d035b20b35c3
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/histogram.png differ
diff --git a/tutorials/notebook/mindinsight/images/histogram_func.png b/tutorials/notebook/mindinsight/images/histogram_func.png
new file mode 100644
index 0000000000000000000000000000000000000000..15437442b436b8d1783ad36a816b05df4f156c06
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/histogram_func.png differ
diff --git a/tutorials/notebook/mindinsight/images/histogram_only.png b/tutorials/notebook/mindinsight/images/histogram_only.png
new file mode 100644
index 0000000000000000000000000000000000000000..27febd13cefef260a93842f2c5012898dc2bf6ef
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/histogram_only.png differ
diff --git a/tutorials/notebook/mindinsight/images/histogram_only_all.png b/tutorials/notebook/mindinsight/images/histogram_only_all.png
new file mode 100644
index 0000000000000000000000000000000000000000..c3f9bdadeb51f1e3f14d8c8135b73bb03d9b6df2
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/histogram_only_all.png differ
diff --git a/tutorials/notebook/mindinsight/images/histogram_panel.png b/tutorials/notebook/mindinsight/images/histogram_panel.png
new file mode 100644
index 0000000000000000000000000000000000000000..fbf9de6e1fcdd025c47ac97812242c13d292cffc
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/histogram_panel.png differ
diff --git a/tutorials/notebook/mindinsight/images/image_function.png b/tutorials/notebook/mindinsight/images/image_function.png
new file mode 100644
index 0000000000000000000000000000000000000000..8e9e0bfe1deec2fca17e0c5e653ed4634b33c0c5
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/image_function.png differ
diff --git a/tutorials/notebook/mindinsight/images/image_only.png b/tutorials/notebook/mindinsight/images/image_only.png
new file mode 100644
index 0000000000000000000000000000000000000000..e08bd9f0c7ef2d26b4303a314f1a46d513cad65c
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/image_only.png differ
diff --git a/tutorials/notebook/mindinsight/images/image_panel.png b/tutorials/notebook/mindinsight/images/image_panel.png
new file mode 100644
index 0000000000000000000000000000000000000000..19221e047e5f36b63193aa54f6911f4c9b98f0ec
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/image_panel.png differ
diff --git a/tutorials/notebook/mindinsight/images/image_vi.png b/tutorials/notebook/mindinsight/images/image_vi.png
new file mode 100644
index 0000000000000000000000000000000000000000..a941238ee2fd945bdef5619410ba32a9157e2ff7
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/image_vi.png differ
diff --git a/tutorials/notebook/mindinsight/images/loss_scalar_only.png b/tutorials/notebook/mindinsight/images/loss_scalar_only.png
new file mode 100644
index 0000000000000000000000000000000000000000..71ef414dd9dd7f1e0c25017e4c14624ad8f644e9
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/loss_scalar_only.png differ
diff --git a/tutorials/notebook/mindinsight/images/mindinsight_panel.png b/tutorials/notebook/mindinsight/images/mindinsight_panel.png
new file mode 100644
index 0000000000000000000000000000000000000000..8eb80073b47556ea1759bc44b3b02b0e8f5ed022
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/mindinsight_panel.png differ
diff --git a/tutorials/notebook/mindinsight/images/mindinsight_panel2.png b/tutorials/notebook/mindinsight/images/mindinsight_panel2.png
new file mode 100644
index 0000000000000000000000000000000000000000..1122a225c18174c4c968232a9a888f834e0bef58
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/mindinsight_panel2.png differ
diff --git a/tutorials/notebook/mindinsight/images/mnist_dataset.png b/tutorials/notebook/mindinsight/images/mnist_dataset.png
new file mode 100644
index 0000000000000000000000000000000000000000..9cd3787d4ddb932f1a79177955066602d439a317
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/mnist_dataset.png differ
diff --git a/tutorials/notebook/mindinsight/images/model_lineage_all.png b/tutorials/notebook/mindinsight/images/model_lineage_all.png
new file mode 100644
index 0000000000000000000000000000000000000000..a8761f3b33fa8fbc90113a2c8202f504ccf049b3
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/model_lineage_all.png differ
diff --git a/tutorials/notebook/mindinsight/images/model_lineage_cp.png b/tutorials/notebook/mindinsight/images/model_lineage_cp.png
new file mode 100644
index 0000000000000000000000000000000000000000..b8a3226990209c2289a7d56ffb49d2e8937d2961
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/model_lineage_cp.png differ
diff --git a/tutorials/notebook/mindinsight/images/multi_scalars.png b/tutorials/notebook/mindinsight/images/multi_scalars.png
new file mode 100644
index 0000000000000000000000000000000000000000..4e43be097b6fbf7108a8cfffc6beaebf0a0e6d73
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/multi_scalars.png differ
diff --git a/tutorials/notebook/mindinsight/images/multi_scalars_select.png b/tutorials/notebook/mindinsight/images/multi_scalars_select.png
new file mode 100644
index 0000000000000000000000000000000000000000..182beb2c5ae782294fa7f5619eb3542308861989
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/multi_scalars_select.png differ
diff --git a/tutorials/notebook/mindinsight/images/scalar.png b/tutorials/notebook/mindinsight/images/scalar.png
new file mode 100644
index 0000000000000000000000000000000000000000..91d687c3d3c2cbd0f61c9486ec57c61b39f7f3b6
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/scalar.png differ
diff --git a/tutorials/notebook/mindinsight/images/scalar_panel.png b/tutorials/notebook/mindinsight/images/scalar_panel.png
new file mode 100644
index 0000000000000000000000000000000000000000..7e0794a2f4270a1dff868f29f41554c353ce2dc9
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/scalar_panel.png differ
diff --git a/tutorials/notebook/mindinsight/images/scalar_select.png b/tutorials/notebook/mindinsight/images/scalar_select.png
new file mode 100644
index 0000000000000000000000000000000000000000..a5a75e646fd236b4350b6f5dc57f4fa08b2096c6
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/scalar_select.png differ
diff --git a/tutorials/notebook/mindinsight/images/summary_list.png b/tutorials/notebook/mindinsight/images/summary_list.png
new file mode 100644
index 0000000000000000000000000000000000000000..5b3f170433d0fee73d4d462efe6cfd6dfeb5a166
Binary files /dev/null and b/tutorials/notebook/mindinsight/images/summary_list.png differ
diff --git a/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar.ipynb b/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..c4a69bcf7e47a34c8df8ae305fc255d117c5a0bc
--- /dev/null
+++ b/tutorials/notebook/mindinsight/mindinsight_image_histogram_scalar.ipynb
@@ -0,0 +1,600 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# MindInsight之标量、直方图和图像\n",
+ "\n",
+ "MindInsight可以将神经网络训练过程中的损失值标量、直方图、图像信息记录到日志文件中,通过可视化界面解析以供用户查看。\n",
+ "\n",
+ "整体流程:\n",
+ "\n",
+ "1. 下载MNIST数据集。\n",
+ "\n",
+ "2. 原始数据预处理。\n",
+ "\n",
+ "3. 初始化`lenet`网络。\n",
+ "\n",
+ "4. 执行主程序,使用`SummaryCollector`记录图像信息、损失值标量、权重梯度等参数,启动MindInsight服务。\n",
+ "\n",
+ "5. 在MindInsight可视化面板中查看结果。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 数据集操作\n",
+ "\n",
+ "本次流程用到MNIST数据集,MNIST数据集是一个手写的数据文件,数据库里的图像都是28x28的灰度图像,每个像素都是一个八位字节,包含了60000张训练图像和10000张测试图像,常被用作神经网络训练和测试任务的数据集。\n",
+ "\n",
+ "\n",
+ "\n",
+ "## 下载MNIST数据集\n",
+ "\n",
+ "下面一段代码分为两部分:\n",
+ "\n",
+ "1. 判断是否存在MNIST数据集目录,不存在则创建目录,存在则跳至[**数据预处理**](#数据预处理)。\n",
+ "\n",
+ "2. 判断是否存在MNIST数据集,不存在则下载MNIST数据集,存在则跳至[**数据预处理**](#数据预处理)。\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import urllib.request\n",
+ "from urllib.parse import urlparse\n",
+ "import gzip\n",
+ "\n",
+ "def unzipfile(gzip_path):\n",
+ " \"\"\"unzip dataset file\n",
+ " Args:\n",
+ " gzip_path: dataset file path\n",
+ " \"\"\"\n",
+ " open_file = open(gzip_path.replace('.gz',''), 'wb')\n",
+ " gz_file = gzip.GzipFile(gzip_path)\n",
+ " open_file.write(gz_file.read())\n",
+ " gz_file.close()\n",
+ "\n",
+ "\n",
+ "def download_dataset():\n",
+ " \"\"\"Download the dataset from http://yann.lecun.com/exdb/mnist/.\"\"\"\n",
+ " print(\"******Downloading the MNIST dataset******\")\n",
+ " train_path = \"./MNIST_Data/train/\"\n",
+ " test_path = \"./MNIST_Data/test/\"\n",
+ " train_path_check = os.path.exists(train_path)\n",
+ " test_path_check = os.path.exists(test_path)\n",
+ " if train_path_check == False and test_path_check ==False:\n",
+ " os.makedirs(train_path)\n",
+ " os.makedirs(test_path)\n",
+ " train_url = {\"http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\", \"http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\"}\n",
+ " test_url = {\"http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\", \"http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\"}\n",
+ " for url in train_url:\n",
+ " url_parse = urlparse(url)\n",
+ " # split the file name from url\n",
+ " file_name = os.path.join(train_path,url_parse.path.split('/')[-1])\n",
+ " if not os.path.exists(file_name.replace('.gz','')):\n",
+ " file = urllib.request.urlretrieve(url, file_name)\n",
+ " unzipfile(file_name)\n",
+ " os.remove(file_name)\n",
+ " for url in test_url:\n",
+ " url_parse = urlparse(url)\n",
+ " # split the file name from url\n",
+ " file_name = os.path.join(test_path,url_parse.path.split('/')[-1])\n",
+ " if not os.path.exists(file_name.replace('.gz','')):\n",
+ " file = urllib.request.urlretrieve(url, file_name)\n",
+ " unzipfile(file_name)\n",
+ " os.remove(file_name)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 数据预处理\n",
+ "\n",
+ "好的数据集可以有效提高训练精度和效率,在加载数据集前,会进行一些处理,增加数据的可用性和随机性。下面一段代码定义`create_dataset`函数进行数据处理操作。\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import mindspore.dataset as ds\n",
+ "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.transforms.c_transforms as C\n",
+ "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.common import dtype as mstype\n",
+ "\n",
+ "\n",
+ "def create_dataset(data_path, batch_size=32, repeat_size=1,\n",
+ " num_parallel_workers=1):\n",
+ " \"\"\"create dataset for train or test.\"\"\"\n",
+ " # define dataset\n",
+ " mnist_ds = ds.MnistDataset(data_path)\n",
+ "\n",
+ " resize_height, resize_width = 32, 32\n",
+ " rescale = 1.0 / 255.0\n",
+ " shift = 0.0\n",
+ " rescale_nml = 1 / 0.3081\n",
+ " shift_nml = -1 * 0.1307 / 0.3081\n",
+ "\n",
+ " # define map operations\n",
+ " resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) # Bilinear mode\n",
+ " rescale_op = CV.Rescale(rescale, shift)\n",
+ " hwc2chw_op = CV.HWC2CHW()\n",
+ " type_cast_op = C.TypeCast(mstype.int32)\n",
+ "\n",
+ " # apply map operations on images\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ "\n",
+ " # apply DatasetOps\n",
+ " buffer_size = 10000\n",
+ " mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script\n",
+ " mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)\n",
+ " mnist_ds = mnist_ds.repeat(repeat_size)\n",
+ "\n",
+ " return mnist_ds"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 网络初始化\n",
+ "\n",
+ "在进行训练之前,需定义神经网络模型,本流程采用最简单的[LeNet卷积神经网络](http://yann.lecun.com/exdb/lenet/)。\n",
+ "\n",
+ "LeNet网络不包括输入层的情况下,共有7层:2个卷积层、2个下采样层(池化层)、3个全连接层。每层都包含不同数量的训练参数。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import mindspore.nn as nn\n",
+ "from mindspore.common.initializer import TruncatedNormal\n",
+ "\n",
+ "\n",
+ "def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):\n",
+ " \"\"\"weight initial for conv layer\"\"\"\n",
+ " weight = weight_variable()\n",
+ " return nn.Conv2d(in_channels, out_channels,\n",
+ " kernel_size=kernel_size, stride=stride, padding=padding,\n",
+ " weight_init=weight, has_bias=False, pad_mode=\"valid\")\n",
+ "\n",
+ "\n",
+ "def fc_with_initialize(input_channels, out_channels):\n",
+ " \"\"\"weight initial for fc layer\"\"\"\n",
+ " weight = weight_variable()\n",
+ " bias = weight_variable()\n",
+ " return nn.Dense(input_channels, out_channels, weight, bias)\n",
+ "\n",
+ "\n",
+ "def weight_variable():\n",
+ " \"\"\"weight initial\"\"\"\n",
+ " return TruncatedNormal(0.02)\n",
+ "\n",
+ "\n",
+ "class LeNet5(nn.Cell):\n",
+ " \"\"\"\n",
+ " Lenet network\n",
+ "\n",
+ " Args:\n",
+ " num_class (int): Num classes. Default: 10.\n",
+ "\n",
+ " Returns:\n",
+ " Tensor, output tensor\n",
+ "\n",
+ " \"\"\"\n",
+ " def __init__(self, num_class=10, channel=1):\n",
+ " super(LeNet5, self).__init__()\n",
+ " self.num_class = num_class\n",
+ " self.conv1 = conv(channel, 6, 5)\n",
+ " self.conv2 = conv(6, 16, 5)\n",
+ " self.fc1 = fc_with_initialize(16 * 5 * 5, 120)\n",
+ " self.fc2 = fc_with_initialize(120, 84)\n",
+ " self.fc3 = fc_with_initialize(84, self.num_class)\n",
+ " self.relu = nn.ReLU()\n",
+ " self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n",
+ " self.flatten = nn.Flatten()\n",
+ "\n",
+ " def construct(self, x):\n",
+ " x = self.conv1(x)\n",
+ " x = self.relu(x)\n",
+ " x = self.max_pool2d(x)\n",
+ " x = self.conv2(x)\n",
+ " x = self.relu(x)\n",
+ " x = self.max_pool2d(x)\n",
+ " x = self.flatten(x)\n",
+ " x = self.fc1(x)\n",
+ " x = self.relu(x)\n",
+ " x = self.fc2(x)\n",
+ " x = self.relu(x)\n",
+ " x = self.fc3(x)\n",
+ " return x"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 记录标量、直方图、图像\n",
+ "\n",
+ "在主程序中应用`SummaryCollector`来记录标量、直方图、图像信息。\n",
+ "\n",
+ "## 运行主程序\n",
+ "\n",
+ "在MindSpore中通过`Callback`机制提供支持快速简易地收集损失值、参数权重、梯度等信息的`Callback`, 叫做`SummaryCollector`。详细的用法可以参考API文档中`mindspore.train.callback.SummaryCollector`。 \n",
+ "\n",
+ "1. 为了记录损失值标量、直方图、图像信息,在主程序代码中需要在`specified`参数中指定需要记录的信息。\n",
+ "\n",
+ " ```python\n",
+ " specified={\"collect_metric\": True, \"histogram_regular\": \"^conv1.*|^conv2.*\", \"collect_input_data\": True}\n",
+ " ```\n",
+ "\n",
+ " 其中:\n",
+ " - `\"collect_metric\"`为记录损失值标量信息。\n",
+ " - `\"histogram_regular\"`为记录`conv1`层和`conv2`层直方图信息。\n",
+ " - `\"collect_input_data\"`为记录图像信息。\n",
+ "\n",
+ "2. 实例化`SummaryCollector`,并将其应用到`model.train`或者`model.eval`中。\n",
+ "\n",
+ "程序运行过程中将启动MindInsight服务并自动遍历读取当前notebook目录下`summary_dir`子目录下所有日志文件、解析进行可视化展示。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import mindspore.nn as nn\n",
+ "from mindspore import context\n",
+ "from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor, TimeMonitor\n",
+ "from mindspore.train import Model\n",
+ "from mindspore.nn.metrics import Accuracy\n",
+ "from mindspore.train.callback import SummaryCollector\n",
+ "from mindspore.train.serialization import load_checkpoint, load_param_into_net\n",
+ "\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ " device_target = \"GPU\"\n",
+ " summary_base_dir = \"./summary_dir\"\n",
+ " context.set_context(mode=context.GRAPH_MODE, device_target=device_target)\n",
+ " download_dataset()\n",
+ " ds_train = create_dataset(data_path=\"./MNIST_Data/train/\")\n",
+ " network = LeNet5()\n",
+ " net_loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True, reduction=\"mean\")\n",
+ " net_opt = nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9)\n",
+ " time_cb = TimeMonitor(data_size=ds_train.get_dataset_size())\n",
+ " config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)\n",
+ " ckpoint_cb = ModelCheckpoint(prefix=\"checkpoint_lenet\", config=config_ck)\n",
+ " model = Model(network, net_loss, net_opt, metrics={\"Accuracy\": Accuracy()})\n",
+ " os.system(f\"mindinsight start --summary-base-dir {summary_base_dir} --port=8080\")\n",
+ " # Init a SummaryCollector callback instance, and use it in model.train or model.eval\n",
+ " specified = {\"collect_metric\": True, \"histogram_regular\": \"^conv1.*|^conv2.*\", \"collect_input_data\": True}\n",
+ " summary_collector = SummaryCollector(summary_dir=\"./summary_dir/summary_01\", collect_specified_data=specified, collect_freq=1, keep_default_action=False)\n",
+ " print(\"============== Starting Training ==============\")\n",
+ " # Note: dataset_sink_mode should be set to False, else you should modify collect freq in SummaryCollector\n",
+ " model.train(epoch=3, train_dataset=ds_train, callbacks=[time_cb, ckpoint_cb, LossMonitor(), summary_collector], dataset_sink_mode=False)\n",
+ " print(\"============== Starting Testing ==============\")\n",
+ " param_dict = load_checkpoint(\"checkpoint_lenet-3_1875.ckpt\")\n",
+ " load_param_into_net(network, param_dict)\n",
+ " ds_eval = create_dataset(\"./MNIST_Data/test/\")\n",
+ " acc = model.eval(ds_eval, callbacks=summary_collector, dataset_sink_mode=False)\n",
+ " print(\"============== {} ==============\".format(acc))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# MindInsight看板\n",
+ "\n",
+ "在本地浏览器中打开地址:`127.0.0.1:8080`,进入到可视化面板。\n",
+ "\n",
+ "\n",
+ "\n",
+ "在上图所示面板中可以看到`summary_01`日志文件目录,点击训练看板进入到下图所示的训练数据展示面板,该面板展示了标量数据、直方图和图像信息,并随着训练、测试的进行实时刷新数据,实时显示训练过程参数的变化情况。\n",
+ "\n",
+ "\n",
+ "\n",
+ "## 标量可视化\n",
+ "\n",
+ "标量可视化用于展示训练过程中标量的变化趋势情况,点击打开标量信息展示面板,该面板记录了迭代计算过程中的学习率(下图左侧所示)和损失值(下图右侧所示)标量信息。\n",
+ "\n",
+ "\n",
+ "\n",
+ "如下图的loss值标量可视化信息——标量趋势图。\n",
+ "\n",
+ "\n",
+ "\n",
+ "上图展示了神经网络在训练过程中loss值的变化过程。横坐标是训练步骤,纵坐标是loss值。\n",
+ "\n",
+ "图中右上角有几个按钮功能,从左到右功能分别是全屏展示,切换Y轴比例,开启/关闭框选,分步回退和还原图形。\n",
+ "\n",
+ "- 全屏展示即全屏展示该标量曲线,再点击一次即可恢复。\n",
+ "\n",
+ "- 切换Y轴比例是指可以将Y轴坐标进行对数转换。\n",
+ "\n",
+ "- 开启/关闭框选是指可以框选图中部分区域,并放大查看该区域, 可以在已放大的图形上叠加框选。\n",
+ "\n",
+ "- 分步回退是指对同一个区域连续框选并放大查看时,可以逐步撤销操作。\n",
+ "\n",
+ "- 还原图形是指进行了多次框选后,点击此按钮可以将图还原回原始状态。\n",
+ "\n",
+ "\n",
+ "\n",
+ "上图展示的标量可视化的功能区,提供了根据选择不同标签,水平轴的不同维度和平滑度来查看标量信息的功能。\n",
+ "\n",
+ "- 标签:提供了对所有标签进行多项选择的功能,用户可以通过勾选所需的标签,查看对应的标量信息。\n",
+ "\n",
+ "- 水平轴:可以选择“步骤”、“相对时间”、“绝对时间”中的任意一项,来作为标量曲线的水平轴。\n",
+ "\n",
+ "- 平滑度:可以通过调整平滑度,对标量曲线进行平滑处理。\n",
+ "\n",
+ "- 标量合成:可以选中两条标量曲线进行合成并展示在一个图中,以方便对两条曲线进行对比或者查看合成后的图。\n",
+ " 标量合成的功能区与标量可视化的功能区相似。其中与标量可视化功能区不一样的地方,在于标签选择时,标量合成功能最多只能同时选择两个标签,将其曲线合成并展示。\n",
+ "\n",
+ "## 直方图可视化\n",
+ "\n",
+ "\n",
+ "直方图用于将用户所指定的张量以直方图的形式展示。点击打开直方图展示面板,以直方图的形式记录了在迭代过程中所有层参数分布信息。\n",
+ "\n",
+ "\n",
+ "\n",
+ "如下图为`conv1`层参数分布信息,点击图中右上角,可以将图放大。\n",
+ "\n",
+ "\n",
+ "\n",
+ "上图展示直方图的功能区,包含以下内容:\n",
+ "\n",
+ "- 标签选择:提供了对所有标签进行多项选择的功能,用户可以通过勾选所需的标签,查看对应的直方图。\n",
+ "\n",
+ "- 纵轴:可以选择步骤、相对时间、绝对时间中的任意一项,来作为直方图纵轴显示的数据。\n",
+ "\n",
+ "- 视角:可以选择正视和俯视中的一种。正视是指从正面的角度查看直方图,此时不同步骤之间的数据会覆盖在一起。俯视是指偏移以45度角俯视直方图区域,这时可以呈现不同步骤之间数据的差异。\n",
+ "\n",
+ "## 图像可视化\n",
+ "\n",
+ "图像可视化用于展示用户所指定的图片。点击图像展示面板,展示了每个step进行处理的图像信息。\n",
+ "\n",
+ "\n",
+ "\n",
+ "下图为展示`summary_01`记录的图像信息。\n",
+ "\n",
+ "\n",
+ "\n",
+ "通过滑动上图中的\"步骤\"滑条,查看不同步骤的图片。\n",
+ "\n",
+ "\n",
+ "\n",
+ "上图展示图像可视化的功能区,提供了选择查看不同标签,不同亮度和不同对比度来查看图片信息。\n",
+ "\n",
+ "- 标签:提供了对所有标签进行多项选择的功能,用户可以通过勾选所需的标签,查看对应的图片信息。\n",
+ "\n",
+ "- 亮度调整:可以调整所展示的所有图片亮度。\n",
+ "\n",
+ "- 对比度调整:可以调整所展示的所有图片对比度。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 对比看板\n",
+ "\n",
+ "对比看板可视用于多个训练之间的标量数据对比,为了展示对比看板,执行以下代码,在可视化面板中可以得到`summary_02`日志记录信息。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import mindspore.nn as nn\n",
+ "from mindspore import context\n",
+ "from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor, TimeMonitor\n",
+ "from mindspore.train import Model\n",
+ "from mindspore.nn.metrics import Accuracy\n",
+ "from mindspore.train.callback import SummaryCollector\n",
+ "from mindspore.train.serialization import load_checkpoint, load_param_into_net\n",
+ "\n",
+ "\n",
+ "if __name__ == \"__main__\":\n",
+ " device_target = \"GPU\"\n",
+ " context.set_context(mode=context.GRAPH_MODE, device_target=device_target)\n",
+ " download_dataset()\n",
+ " ds_train = create_dataset(data_path=\"./MNIST_Data/train/\")\n",
+ " network = LeNet5()\n",
+ " net_loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True, reduction=\"mean\")\n",
+ " net_opt = nn.Momentum(network.trainable_params(), learning_rate=0.01, momentum=0.9)\n",
+ " time_cb = TimeMonitor(data_size=ds_train.get_dataset_size())\n",
+ " config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)\n",
+ " ckpoint_cb = ModelCheckpoint(prefix=\"checkpoint_lenet\", config=config_ck)\n",
+ " model = Model(network, net_loss, net_opt, metrics={\"Accuracy\": Accuracy()})\n",
+ " # Init a SummaryCollector callback instance, and use it in model.train or model.eval\n",
+ " specified = {\"collect_metric\": True, \"histogram_regular\": \"^conv1.*|^conv2.*\", \"collect_input_data\": True}\n",
+ " summary_collector = SummaryCollector(summary_dir=\"./summary_dir/summary_02\", collect_specified_data=specified, collect_freq=1, keep_default_action=False)\n",
+ " print(\"============== Starting Training ==============\")\n",
+ " # Note: dataset_sink_mode should be set to False, else you should modify collect freq in SummaryCollector\n",
+ " model.train(epoch=3, train_dataset=ds_train, callbacks=[time_cb, ckpoint_cb, LossMonitor(), summary_collector], dataset_sink_mode=False)\n",
+ " print(\"============== Starting Testing ==============\")\n",
+ " param_dict = load_checkpoint(\"checkpoint_lenet_1-3_1875.ckpt\")\n",
+ " load_param_into_net(network, param_dict)\n",
+ " ds_eval = create_dataset(\"./MNIST_Data/test/\")\n",
+ " acc = model.eval(ds_eval, callbacks=summary_collector, dataset_sink_mode=False)\n",
+ " print(\"============== {} ==============\".format(acc))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "打开对比看板,可以得到`summary_01`和`summary_02`标量对比信息。\n",
+ "\n",
+ "\n",
+ "\n",
+ "上图展示了多个训练之间的标量曲线对比效果,横坐标是训练步骤,纵坐标是标量值。\n",
+ "\n",
+ "图中右上角有几个按钮功能,从左到右功能分别是全屏展示,切换Y轴比例,开启/关闭框选,分步回退和还原图形。\n",
+ "\n",
+ "- 全屏展示即全屏展示该标量曲线,再点击一次即可恢复。\n",
+ "\n",
+ "- 切换Y轴比例是指可以将Y轴坐标进行对数转换。\n",
+ "\n",
+ "- 开启/关闭框选是指可以框选图中部分区域,并放大查看该区域, 可以在已放大的图形上叠加框选。\n",
+ "\n",
+ "- 分步回退是指对同一个区域连续框选并放大查看时,可以逐步撤销操作。\n",
+ "\n",
+ "- 还原图形是指进行了多次框选后,点击此按钮可以将图还原回原始状态。\n",
+ "\n",
+ "\n",
+ "\n",
+ "上图展示的对比看板可视的功能区,提供了根据选择不同训练或标签,水平轴的不同维度和平滑度来进行标量对比的功能。\n",
+ "\n",
+ "- 训练: 提供了对所有训练进行多项选择的功能,用户可以通过勾选或关键字筛选所需的训练。\n",
+ "\n",
+ "- 标签:提供了对所有标签进行多项选择的功能,用户可以通过勾选所需的标签,查看对应的标量信息。\n",
+ "\n",
+ "- 水平轴:可以选择“步骤”、“相对时间”、“绝对时间”中的任意一项,来作为标量曲线的水平轴。\n",
+ "\n",
+ "- 平滑度:可以通过调整平滑度,对标量曲线进行平滑处理。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 单独记录数据\n",
+ "\n",
+ "以上流程为整体展示Summary算子能记录到的所有数据,也可以单独记录关心的数据,以降低性能开销和日志文件大小。\n",
+ "\n",
+ "> 为了展示运行的效果,进行以下每个步骤之前先删除当前notebook根目录下的`summary_dir/summary_02`目录,配置完`specified`参数后执行[**对比看板**](#对比看板)中的代码。\n",
+ "\n",
+ "## 单独记录损失值标量\n",
+ "\n",
+ "在主程序中配置`specified`参数为:\n",
+ "\n",
+ "```python\n",
+ "specified={\"collect_metric\": True}\n",
+ "```\n",
+ "\n",
+ "\n",
+ "\n",
+ "在MindInsight面板中,如上图所示,只展示了损失值标量信息。\n",
+ "\n",
+ "## 单独记录参数分布直方图\n",
+ "\n",
+ "在主程序中配置`specified`参数为:\n",
+ "\n",
+ "```python\n",
+ "specified = {\"histogram_regular\": \"^conv1.*|^conv2.*|fc1.*|fc2.*|fc3.*\"}\n",
+ "```\n",
+ "\n",
+ "\n",
+ "\n",
+ "在MindInsight面板中,如上图所示,只展示了参数直方图信息。\n",
+ "\n",
+ "\n",
+ "\n",
+ "点击进入直方图面板,如上图所示,展示了`conv1`、`conv2`、`fc1`、`fc2`、`fc3`等各层的权重、参数信息。\n",
+ "\n",
+ "## 单独记录图像\n",
+ "\n",
+ "在主程序中配置`specified`参数为:\n",
+ "\n",
+ "```python\n",
+ "specified = {\"collect_input_data\": True}\n",
+ "```\n",
+ "\n",
+ "\n",
+ "\n",
+ "在MindInsight面板中,如上图所示,只展示了输入图像信息。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 关闭MindInsight服务\n",
+ "\n",
+ "执行以下代码关闭MindInsight服务。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "\n",
+ "os.system(\"mindinsight stop --port 8080\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 注意事项和规格\n",
+ "\n",
+ "- 为了控制列出summary列表的用时,MindInsight最多支持发现999个summary列表条目。\n",
+ "\n",
+ "- 为了控制内存占用,MindInsight对标签(tag)数目和步骤(step)数目进行了限制:\n",
+ "\n",
+ " - 每个训练看板的最大标签数量为300个标签。标量标签、图片标签、计算图标签、参数分布图(直方图)标签的数量总和不得超过300个。特别地,每个训练看板最多有10个计算图标签。当实际标签数量超过这一限制时,将依照MindInsight的处理顺序,保留最近处理的300个标签。\n",
+ "\n",
+ " - 每个训练看板的每个标量标签最多有1000个步骤的数据。当实际步骤的数目超过这一限制时,将对数据进行随机采样,以满足这一限制。\n",
+ "\n",
+ " - 每个训练看板的每个图片标签最多有10个步骤的数据。当实际步骤的数目超过这一限制时,将对数据进行随机采样,以满足这一限制。\n",
+ " \n",
+ " - 每个训练看板的每个参数分布图(直方图)标签最多有50个步骤的数据。当实际步骤的数目超过这一限制时,将对数据进行随机采样,以满足这一限制。\n",
+ " \n",
+ "- 出于性能上的考虑,MindInsight对比看板使用缓存机制加载训练的标量曲线数据,并进行以下限制:\n",
+ " \n",
+ " - 对比看板只支持在缓存中的训练进行比较标量曲线对比。\n",
+ " \n",
+ " - 缓存最多保留最新(按修改时间排列)的15个训练。\n",
+ " \n",
+ " - 用户最多同时对比5个训练的标量曲线。"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
\ No newline at end of file
diff --git a/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb b/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..2d98c64623b94002380bfa9bce94aa646d43704f
--- /dev/null
+++ b/tutorials/notebook/mindinsight/mindinsight_model_lineage_and_data_lineage.ipynb
@@ -0,0 +1,713 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# MindInsight的模型溯源和数据溯源体验"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 概述\n",
+ "在AI训练的过程中,面对陌生的神经网络训练,经常需要事先优化神经网络训练中的参数,毕竟在训练一个十分复杂的神经网络时,有时候需要花费少则几天多则几周甚至更多的时间,为了更好的管理、调试和优化神经网络的训练过程,我们需要一个工具来对训练过程中的计算图、各种指标随着时间的变化趋势以及训练中使用到的图像信息进行分析和记录工作,而MindSpore就提供了一个对用户十分易用友好的可视化工具MindInsight,赋能给用户进行数据溯源和模型溯源的可视化分析,能明显提升用户对网络搭建过程和数据增强过程的纠错调优能力。而本次体验会从MindInsight的数据记录,可视化效果,如何方便用户在模型调优,数据调优上做一次整体流程的体验。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "下面按照MindSpore的训练数据模型的正常步骤进行,当使用到MindInsight或者`SummaryCollector`算子进行数据保存操作时,会增加相应的说明,本次体验的整体流程如下:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "1、数据集的准备,这里使用的是MNIST数据集。\n",
+ "\n",
+ "2、构建一个网络,这里使用LeNet网络。(此处将使用第二种记录方式`ImageSummary`)。\n",
+ "\n",
+ "3、训练网络和测试网络的搭建及运行。(此处将操作`SummaryCollector`初始化,并记录模型训练和模型测试相关信息)。\n",
+ "\n",
+ "4、启动MindInsight服务。\n",
+ "\n",
+ "5、模型溯源的使用。调整模型参数多次存储数据,并使用MindInsight的模型溯源功能对不同优化参数下训练产生的模型作对比,了解MindSpore中的各类优化对训练过程的影响及如何调优训练过程。\n",
+ "\n",
+ "6、数据溯源的使用。调整数据参数多次存储数据,并使用MindInsight的数据溯源功能对不同数据集下训练产生的模型进行对比分析,了解如何调优。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "本次体验将使用快速入门案例作为基础用例,将MindInsight的模型溯源和数据溯源的数据记录功能加入到案例中,快速入门案例的源码请参考:https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/lenet.py 。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 一、训练的数据集下载"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1、数据集准备"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 方法一:\n",
+ "从以下网址下载,并将数据包解压缩后放至Jupyter的工作目录下:
训练数据集:{\"http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\", \"http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\"}\n",
+ "
测试数据集:{\"http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\", \"http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\"}
我们用下面代码查询jupyter的工作目录。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "os.getcwd()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "训练数据集放在----`Jupyter工作目录+\\MNIST_Data\\train\\`,此时`train`文件夹内应该包含两个文件,`train-images-idx3-ubyte`和`train-labels-idx1-ubyte`
测试数据集放在----`Jupyter工作目录+\\MNIST_Data\\test\\`,此时`test`文件夹内应该包含两个文件,`t10k-images-idx3-ubyte`和`t10k-labels-idx1-ubyte`"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 方法二:\n",
+ "直接执行下面代码,会自动进行训练集的下载与解压,但是整个过程根据网络好坏情况会需要花费几分钟时间。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Network request module, data download module, decompression module\n",
+ "import urllib.request \n",
+ "from urllib.parse import urlparse\n",
+ "import gzip \n",
+ "\n",
+ "def unzipfile(gzip_path):\n",
+ " \"\"\"unzip dataset file\n",
+ " Args:\n",
+ " gzip_path: dataset file path\n",
+ " \"\"\"\n",
+ " open_file = open(gzip_path.replace('.gz',''), 'wb')\n",
+ " gz_file = gzip.GzipFile(gzip_path)\n",
+ " open_file.write(gz_file.read())\n",
+ " gz_file.close()\n",
+ " \n",
+ "def download_dataset():\n",
+ " \"\"\"Download the dataset from http://yann.lecun.com/exdb/mnist/.\"\"\"\n",
+ " print(\"******Downloading the MNIST dataset******\")\n",
+ " train_path = \"./MNIST_Data/train/\" \n",
+ " test_path = \"./MNIST_Data/test/\"\n",
+ " train_path_check = os.path.exists(train_path)\n",
+ " test_path_check = os.path.exists(test_path)\n",
+ " if train_path_check == False and test_path_check == False:\n",
+ " os.makedirs(train_path)\n",
+ " os.makedirs(test_path)\n",
+ " train_url = {\"http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\", \"http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\"}\n",
+ " test_url = {\"http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\", \"http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\"}\n",
+ " \n",
+ " for url in train_url:\n",
+ " url_parse = urlparse(url)\n",
+ " # split the file name from url\n",
+ " file_name = os.path.join(train_path,url_parse.path.split('/')[-1])\n",
+ " if not os.path.exists(file_name.replace('.gz', '')):\n",
+ " file = urllib.request.urlretrieve(url, file_name)\n",
+ " unzipfile(file_name)\n",
+ " os.remove(file_name)\n",
+ " \n",
+ " for url in test_url:\n",
+ " url_parse = urlparse(url)\n",
+ " # split the file name from url\n",
+ " file_name = os.path.join(test_path,url_parse.path.split('/')[-1])\n",
+ " if not os.path.exists(file_name.replace('.gz', '')):\n",
+ " file = urllib.request.urlretrieve(url, file_name)\n",
+ " unzipfile(file_name)\n",
+ " os.remove(file_name)\n",
+ "\n",
+ "download_dataset()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "这样就完成了数据集的下载解压缩工作。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2、数据集处理"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "数据集处理对于训练非常重要,好的数据集可以有效提高训练精度和效率。在加载数据集前,我们通常会对数据集进行一些处理。\n",
+ "
我们定义一个函数`create_dataset`来创建数据集。在这个函数中,我们定义好需要进行的数据增强和处理操作:\n",
+ "
1、定义数据集。\n",
+ "
2、定义进行数据增强和处理所需要的一些参数。\n",
+ "
3、根据参数,生成对应的数据增强操作。\n",
+ "
4、使用`map`映射函数,将数据操作应用到数据集。\n",
+ "
5、对生成的数据集进行处理。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "> 具体的数据集操作可以在MindInsight的数据溯源中进行可视化分析。另外提取图像需要将`normalize`算子的数据处理(`CV.Rescale`)操作取消,否则取出来的图像为全黑图像。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import mindspore.dataset.transforms.vision.c_transforms as CV\n",
+ "import mindspore.dataset.transforms.c_transforms as C\n",
+ "from mindspore.dataset.transforms.vision import Inter\n",
+ "from mindspore.common import dtype as mstype\n",
+ "import mindspore.dataset as ds\n",
+ "\n",
+ "def create_dataset(data_path, batch_size=32, repeat_size=1,\n",
+ " num_parallel_workers=1):\n",
+ " \"\"\" create dataset for train or test\n",
+ " Args:\n",
+ " data_path: Data path\n",
+ " batch_size: The number of data records in each group\n",
+ " repeat_size: The number of replicated data records\n",
+ " num_parallel_workers: The number of parallel workers\n",
+ " \"\"\"\n",
+ " # define dataset\n",
+ " mnist_ds = ds.MnistDataset(data_path)\n",
+ "\n",
+ " # Define some parameters needed for data enhancement and rough justification\n",
+ " resize_height, resize_width = 32, 32\n",
+ " rescale = 1.0 / 255.0\n",
+ " shift = 0.0\n",
+ " rescale_nml = 1 / 0.3081\n",
+ " shift_nml = -1 * 0.1307 / 0.3081\n",
+ "\n",
+ " # According to the parameters, generate the corresponding data enhancement method\n",
+ " resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) # Resize images to (32, 32) by bilinear interpolation\n",
+ " rescale_nml_op = CV.Rescale(rescale_nml, shift_nml) # normalize images\n",
+ " rescale_op = CV.Rescale(rescale, shift) # rescale images\n",
+ " hwc2chw_op = CV.HWC2CHW() # change shape from (height, width, channel) to (channel, height, width) to fit network.\n",
+ " type_cast_op = C.TypeCast(mstype.int32) # change data type of label to int32 to fit network\n",
+ "\n",
+ " # Using map() to apply operations to a dataset\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=rescale_nml_op, num_parallel_workers=num_parallel_workers)\n",
+ " mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op, num_parallel_workers=num_parallel_workers)\n",
+ " \n",
+ " # Process the generated dataset\n",
+ " buffer_size = 10000\n",
+ " mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script\n",
+ " mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)\n",
+ " mnist_ds = mnist_ds.repeat(repeat_size)\n",
+ "\n",
+ " return mnist_ds"
+ ]
+ },
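+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The following is a minimal sketch added for illustration (`create_viewable_dataset` is a hypothetical helper, not part of the quick start code) of the pipeline variant described in the note above: both `CV.Rescale` operations are left out so that the recorded images stay viewable."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A sketch for image extraction only. Assumption: omitting both CV.Rescale ops\n",
+ "# keeps pixel values in [0, 255], so images recorded by ImageSummary are visible.\n",
+ "def create_viewable_dataset(data_path, batch_size=32):\n",
+ "    mnist_ds = ds.MnistDataset(data_path)\n",
+ "    resize_op = CV.Resize((32, 32), interpolation=Inter.LINEAR)\n",
+ "    hwc2chw_op = CV.HWC2CHW()\n",
+ "    type_cast_op = C.TypeCast(mstype.int32)\n",
+ "    mnist_ds = mnist_ds.map(input_columns=\"label\", operations=type_cast_op)\n",
+ "    mnist_ds = mnist_ds.map(input_columns=\"image\", operations=resize_op)\n",
+ "    # rescale_op and rescale_nml_op are intentionally not applied here\n",
+ "    mnist_ds = mnist_ds.map(input_columns=\"image\", operations=hwc2chw_op)\n",
+ "    return mnist_ds.batch(batch_size, drop_remainder=True)"
+ ]
+ },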
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 二、构建LeNet5网络"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 使用ImageSummary记录图像数据"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "在构建LeNet5网络的`__init__`中,初始化`ImageSummary`算子,同时在`construct`中将`ImageSummary`放在第一步,其第一个参数`image`为抽取出来的图片的自定义命名,第二个参数`x`是图像数据。此方法与`SummaryCollector`抽取图像的方法不冲突,可以同时使用。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from mindspore.ops import operations as P\n",
+ "import mindspore.nn as nn\n",
+ "from mindspore.common.initializer import TruncatedNormal\n",
+ "\n",
+ "# Initialize 2D convolution function\n",
+ "def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):\n",
+ " \"\"\"Conv layer weight initial.\"\"\"\n",
+ " weight = weight_variable()\n",
+ " return nn.Conv2d(in_channels, out_channels,\n",
+ " kernel_size=kernel_size, stride=stride, padding=padding,\n",
+ " weight_init=weight, has_bias=False, pad_mode=\"valid\")\n",
+ "\n",
+ "# Initialize full connection layer\n",
+ "def fc_with_initialize(input_channels, out_channels):\n",
+ " \"\"\"Fc layer weight initial.\"\"\"\n",
+ " weight = weight_variable()\n",
+ " bias = weight_variable()\n",
+ " return nn.Dense(input_channels, out_channels, weight, bias)\n",
+ "\n",
+ "# Set truncated normal distribution\n",
+ "def weight_variable():\n",
+ " \"\"\"Weight initial.\"\"\"\n",
+ " return TruncatedNormal(0.02)\n",
+ "\n",
+ "class LeNet5(nn.Cell):\n",
+ " \"\"\"Lenet network structure.\"\"\"\n",
+ " # define the operator required\n",
+ " def __init__(self):\n",
+ " super(LeNet5, self).__init__()\n",
+ " self.batch_size = 32 # 32 pictures in each group\n",
+ " self.conv1 = conv(1, 6, 5) # Convolution layer 1, 1 channel input (1 Figure), 6 channel output (6 figures), convolution core 5 * 5\n",
+ " self.conv2 = conv(6, 16, 5) # Convolution layer 2,6-channel input, 16 channel output, convolution kernel 5 * 5\n",
+ " self.fc1 = fc_with_initialize(16 * 5 * 5, 120)\n",
+ " self.fc2 = fc_with_initialize(120, 84)\n",
+ " self.fc3 = fc_with_initialize(84, 10)\n",
+ " self.relu = nn.ReLU()\n",
+ " self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n",
+ " self.flatten = nn.Flatten()\n",
+ " #Init ImageSummary\n",
+ " self.sm_image = P.ImageSummary()\n",
+ "\n",
+ " # use the preceding operators to construct networks\n",
+ " def construct(self, x):\n",
+ " self.sm_image(\"image\",x)\n",
+ " x = self.conv1(x) # 1*32*32-->6*28*28\n",
+ " x = self.relu(x) # 6*28*28-->6*14*14\n",
+ " x = self.max_pool2d(x) # Pool layer\n",
+ " x = self.conv2(x) # Convolution layer\n",
+ " x = self.relu(x) # Function excitation layer\n",
+ " x = self.max_pool2d(x) # Pool layer\n",
+ " x = self.flatten(x) # Dimensionality reduction\n",
+ " x = self.fc1(x) # Full connection\n",
+ " x = self.relu(x) # Function excitation layer\n",
+ " x = self.fc2(x) # Full connection\n",
+ " x = self.relu(x) # Function excitation layer\n",
+ " x = self.fc3(x) # Full connection\n",
+ " return x"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 三、训练网络和测试网络构建"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1、使用SummaryCollector放入到训练网络中记录训练数据"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`summary_callback`,即是`SummaryCollector`,在`model.train`的回调函数中使用,可以记录训练数据溯源和模型溯源信息。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Training and testing related modules\n",
+ "from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor, SummaryCollector, Callback\n",
+ "from mindspore.train import Model\n",
+ "import os\n",
+ "\n",
+ "\n",
+ "def train_net(model, epoch_size, mnist_path, repeat_size, ckpoint_cb, summary_collector):\n",
+ " \"\"\"Define the training method.\"\"\"\n",
+ " print(\"============== Starting Training ==============\")\n",
+ " # load training dataset\n",
+ " ds_train = create_dataset(os.path.join(mnist_path, \"train\"), 32, repeat_size)\n",
+ " model.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(), summary_collector], dataset_sink_mode=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2、使用SummaryCollector放入到测试网络中记录测试数据"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`summary_callback`,即是`SummaryCollector`,在`model.eval`的回调函数中使用,可以记录训练精度信息和测试样本数量信息。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from mindspore.train.serialization import load_checkpoint, load_param_into_net\n",
+ "\n",
+ "def test_net(network, model, mnist_path, summary_collector):\n",
+ " \"\"\"Define the evaluation method.\"\"\"\n",
+ " print(\"============== Starting Testing ==============\")\n",
+ " # load the saved model for evaluation\n",
+ " param_dict = load_checkpoint(\"checkpoint_lenet-3_1875.ckpt\")\n",
+ " # load parameter to the network\n",
+ " load_param_into_net(network, param_dict)\n",
+ " # load testing dataset\n",
+ " ds_eval = create_dataset(os.path.join(mnist_path, \"test\"))\n",
+ " acc = model.eval(ds_eval, callbacks=[summary_collector], dataset_sink_mode=True)\n",
+ " print(\"============== Accuracy:{} ==============\".format(acc))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 3、主程序运行入口"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "初始化`SummaryCollector`,使用`collect_specified_data`控制需要记录的数据,我们这里只需要记录模型溯源和数据溯源,所以将`collect_train_lineage`和`collect_eval_lineage`参数设置成`True`,其他的参数使用`keep_default_action`设置成`False`,SummaryCollector能够记录哪些数据,请参考官网:https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.train.html?highlight=collector#mindspore.train.callback.SummaryCollector 。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from mindspore.train.callback import SummaryCollector\n",
+ "from mindspore.train.summary.summary_record import SummaryRecord\n",
+ "from mindspore.nn.metrics import Accuracy\n",
+ "from mindspore import context\n",
+ "from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits\n",
+ "\n",
+ "if __name__==\"__main__\":\n",
+ " context.set_context(mode=context.GRAPH_MODE, device_target = \"GPU\")\n",
+ " lr = 0.01 # learning rate\n",
+ " momentum = 0.9 \n",
+ " epoch_size = 3\n",
+ " mnist_path = \"./MNIST_Data\"\n",
+ " \n",
+ " net_loss = SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True, reduction='mean')\n",
+ " repeat_size = epoch_size\n",
+ " # create the network\n",
+ " network = LeNet5()\n",
+ "\n",
+ " # define the optimizer\n",
+ " net_opt = nn.Momentum(network.trainable_params(), lr, momentum)\n",
+ " config_ck = CheckpointConfig(save_checkpoint_steps=1875, keep_checkpoint_max=10)\n",
+ " ckpoint_cb = ModelCheckpoint(prefix=\"checkpoint_lenet\", config=config_ck)\n",
+ " model = Model(network, net_loss, net_opt, metrics={\"Accuracy\": Accuracy()})\n",
+ " \n",
+ " collect_specified_data = {\"collect_eval_lineage\":True,\"collect_train_lineage\":True}\n",
+ " summary_collector = SummaryCollector(summary_dir=\"./summary_base/quick_start_summary01\", collect_specified_data=collect_specified_data, keep_default_action=False) \n",
+ " train_net(model, epoch_size, mnist_path, repeat_size, ckpoint_cb, summary_collector)\n",
+ " test_net(network, model, mnist_path, summary_collector)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 四、启动及关闭MindInsight服务"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "这里主要展示如何启用及关闭MindInsight,更多的命令集信息,请参考MindSpore官方网站:https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/visualization_tutorials.html 。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- 启动MindInsight服务\n",
+ "\n",
+ " 在安装过MindInsight的环境中启动MindInsight服务:\n",
+ " - `--summary-base-dir`:MindInsight指定启动工作路径的命令。\n",
+ " - `./summary_base`:SummaryRecord保存文件夹的目录。\n",
+ " - `--port`:MindInsight指定启动的端口,数值可以任意为1~65535的范围内。"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "os.system(\"mindinsight start --summary-base-dir=./summary_base --port=8090\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "查询是否启动成功,在网址输入:`127.0.0.1:8090`,如果看到如下界面说明启动成功。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- 关闭MindInsight服务\n",
+ "\n",
+ " 在安装过MindInsight的环境中输入命令:`mindinsight stop --port=8090`\n",
+ " - `mindinsight stop`:MindInsight关闭服务命令。\n",
+ " - `--port=8090`:即MindInsight服务开启在`8090`端口,所以这里写成`--port=8090`。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 五、模型溯源"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1、连接到模型溯源地址"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "浏览器中输入:`127.0.0.1:8090`,点击模型溯源,如下模型溯源界面:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "我们可以勾选展示列,由于训练过程涉及的参数很多,在调整训练参数时,一般只会调整少量参数,所以对大部分相同参数可以去掉勾选,不显示出来,使得用户更方便的观察不同参数对模型训练的影响,上图中的不同参数的竖直线段代表的各个参数,数根连接各个参数的折线图代表不同的模型训练过程,其中各参数从左到右如下:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- `summary路径`:表示存储记录数据的文件夹路径,即`summary_dir`。\n",
+ "- `Accuracy`:模型的精度值。\n",
+ "- `loss`:模型的loss值。\n",
+ "- 网络:表示神经网络名称(用户可自行命名)。\n",
+ "- 优化器:表示训练过程中采用的优化器。\n",
+ "- 训练样本数量:训练样本数量。\n",
+ "- 测试样本数量:测试样本数量。\n",
+ "- 学习率:learning_rate的值。\n",
+ "- `epoch`:训练圈数。\n",
+ "- `steps`:训练步数。\n",
+ "- device数目:启用的训练卡数目。\n",
+ "- 模型大小:生成的模型文件`.ckpt`的大小。\n",
+ "- 损失函数:表示训练过程中采用的损失函数(用户可自行命名)。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "根据上述记录的信息,我们可以调整模型训练过程中的参数,训练生成模型,然后选择要对比的训练,进行比对观察分析。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2、观察分析记录下来的溯源参数"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "下图选择了数条不同参数下训练生成的模型进行对比:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "在这几次训练的参数中,优化器,epoch和学习率都不一致,可以看到不同的训练生成的模型精度`Accuracy`和loss值是不一致的,当然最好是调整单个参数来观察对模型生成的影响,避免多重因素干扰,难以分辨哪个参数是正影响,哪个参数是负影响。这需要我们调整不同的参数,多训练几遍生成模型,分析各参数对训练产生的影响,这对前期学习AI训练时很有帮助。在以后应对复杂训练时,可以节省不少时间。\n",
+ "> 在多次训练时,需要将`summary_dir`的保存路径的文件夹进行重命名操作,否则训练记录的数据会生成在同一个文件夹下,而在同一文件夹下MindInsight只会读取最后一位数字比较大的文件即最后生成的文件。"
+ ]
+ },
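+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a minimal sketch added here for illustration (not part of the original quick start), the loop below shows one way to give each run its own `summary_dir`, so that MindInsight treats every run as a separate training record. The learning rates are arbitrary example values."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: one SummaryCollector per run, each with a distinct summary_dir.\n",
+ "for run_id, lr in enumerate([0.01, 0.05, 0.1]):  # example learning rates\n",
+ "    network = LeNet5()\n",
+ "    net_loss = SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True, reduction='mean')\n",
+ "    net_opt = nn.Momentum(network.trainable_params(), lr, 0.9)\n",
+ "    model = Model(network, net_loss, net_opt, metrics={\"Accuracy\": Accuracy()})\n",
+ "    collector = SummaryCollector(summary_dir=\"./summary_base/run_{}\".format(run_id),\n",
+ "                                 collect_specified_data={\"collect_train_lineage\": True, \"collect_eval_lineage\": True},\n",
+ "                                 keep_default_action=False)\n",
+ "    ds_train = create_dataset(os.path.join(mnist_path, \"train\"), 32, repeat_size)\n",
+ "    model.train(epoch_size, ds_train, callbacks=[LossMonitor(), collector], dataset_sink_mode=True)"
+ ]
+ },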
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 六、数据溯源"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 1、连接到数据溯源地址"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "浏览器中输入:127.0.0.1:8090连接上MindInsight的服务,点击模型溯源,如下图数据溯源界面:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "数据溯源的根本是重现数据集从左到右进行数据增强的整个过程,方便自己发现增强过程中是否有遗漏的步骤或者不合理的操作,方便自己查找错误,也方便自己找到最优的数据增强方式,毕竟一个好的数据集对模型训练是有事半功倍的效果的。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "- `summary路径`:表示存储记录数据的文件夹名称,即为`SummaryRecord`的路径下的文件夹名称。\n",
+ "- `MnistDataset`:表示数据集信息,包含数据集路径。\n",
+ "- `Map_TypeCast`:定义数据集的类型。\n",
+ "- `Map_Resize`:图像缩放后的尺寸。\n",
+ "- `Map_Rescale`:图像的缩放比例。\n",
+ "- `Map_HWC2CHW`:数据集的张量由:高×宽×通道-->通道×高×宽。\n",
+ "- `Shuffle`:数据集混洗的缓存空间。\n",
+ "- `Batch`:每组训练样本数量。\n",
+ "- `Repeat`:数据图片复制次数,用于增强数据的数量。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### 2、观察分析数据溯源参数"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "可以从上图看到数据增强过程由原数据集MnistDataset开始,按照先后顺序经过了下面的操作:label的数据类型转换(`Map_Typecast`),图像的高宽缩放(`Map_Resize`),图像的比例缩放(`Map_Rescale`),图像数据的张量变换(`Map_HWC2CHW`),图像混洗(`Shuffle`),图像成组(`Batch`),图像数量增强(`Repeat`)然后输出训练需要的数据。显然这样的可视化的数据溯源功能,在你检查数据增强操作是否有误的时候,比起一行行的去检查代码效率多了。"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 最后关闭MindInsight服务"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "os.system(\"mindinsight stop --port=8090\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "以上就是这次对MindInsight的使用方法,模型溯源和数据溯源的全部过程。"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/tutorials/source_en/advanced_use/dashboard_and_lineage.md b/tutorials/source_en/advanced_use/dashboard_and_lineage.md
index 4ef01904e3a473e9a41bed4149ebb5b1ee7ac78b..551be944c7d5a6c88dfc5c2036e22b0db043784f 100644
--- a/tutorials/source_en/advanced_use/dashboard_and_lineage.md
+++ b/tutorials/source_en/advanced_use/dashboard_and_lineage.md
@@ -21,7 +21,7 @@
-
+
## Overview
Scalars, images, computational graphs, and model hyperparameters during training are recorded in files and can be viewed on the web page.
diff --git a/tutorials/source_en/advanced_use/differential_privacy.md b/tutorials/source_en/advanced_use/differential_privacy.md
index 50717af4aa584d0f93e7b5a8d2f0dd41ccbaf8de..c85f4c6b3c30d32c9d9c43f5c0abc783601cc369 100644
--- a/tutorials/source_en/advanced_use/differential_privacy.md
+++ b/tutorials/source_en/advanced_use/differential_privacy.md
@@ -1,4 +1,4 @@
-# Differential Privacy in Machine Learning
+# Differential Privacy in Machine Learning
@@ -86,25 +86,25 @@ TAG = 'Lenet5_train'
```python
cfg = edict({
- 'device_target': 'Ascend', # device used
- 'data_path': './MNIST_unzip', # the path of training and testing data set
- 'dataset_sink_mode': False, # whether deliver all training data to device one time
- 'num_classes': 10, # the number of classes of model's output
- 'lr': 0.01, # the learning rate of model's optimizer
- 'momentum': 0.9, # the momentum value of model's optimizer
- 'epoch_size': 10, # training epochs
- 'batch_size': 256, # batch size for training
- 'image_height': 32, # the height of training samples
- 'image_width': 32, # the width of training samples
- 'save_checkpoint_steps': 234, # the interval steps for saving checkpoint file of the model
- 'keep_checkpoint_max': 10, # the maximum number of checkpoint files would be saved
- 'micro_batches': 32, # the number of small batches split from an original batch
- 'l2_norm_bound': 1.0, # the clip bound of the gradients of model's training parameters
- 'initial_noise_multiplier': 1.5, # the initial multiplication coefficient of the noise added to training
- # parameters' gradients
- 'mechanisms': 'AdaGaussian', # the method of adding noise in gradients while training
- 'optimizer': 'Momentum' # the base optimizer used for Differential privacy training
- })
+ 'num_classes': 10, # the number of classes of model's output
+ 'lr': 0.1, # the learning rate of model's optimizer
+ 'momentum': 0.9, # the momentum value of model's optimizer
+ 'epoch_size': 10, # training epochs
+ 'batch_size': 256, # batch size for training
+ 'image_height': 32, # the height of training samples
+ 'image_width': 32, # the width of training samples
+ 'save_checkpoint_steps': 234, # the step interval for saving model checkpoint files
+ 'keep_checkpoint_max': 10, # the maximum number of checkpoint files that will be kept
+ 'device_target': 'Ascend', # device used
+ 'data_path': './MNIST_unzip', # the path of training and testing data set
+ 'dataset_sink_mode': False, # whether to deliver all training data to the device at once
+ 'micro_batches': 16, # the number of small batches split from an original batch
+ 'norm_clip': 1.0, # the clip bound of the gradients of model's training parameters
+ 'initial_noise_multiplier': 1.5, # the initial multiplication coefficient of the noise added to training
+ # parameters' gradients
+ 'mechanisms': 'AdaGaussian', # the method of adding noise to gradients during training
+ 'optimizer': 'Momentum' # the base optimizer used for differential privacy training
+ })
```
2. Configure necessary information, including the environment information and execution mode.
@@ -320,13 +320,13 @@ ds_train = generate_mnist_dataset(os.path.join(cfg.data_path, "train"),
5. Display the result.
- The accuracy of the LeNet model without differential privacy is 99%, and the accuracy of the LeNet model with adaptive differential privacy AdaDP is 91%.
+ The accuracy of the LeNet model without differential privacy is 99%, and the accuracy of the LeNet model with adaptive differential privacy AdaDP is 98%.
```
============== Starting Training ==============
...
============== Starting Testing ==============
...
- ============== Accuracy: 0.9115 ==============
+ ============== Accuracy: 0.9879 ==============
```
### References
diff --git a/tutorials/source_en/advanced_use/images/data_op_profile.png b/tutorials/source_en/advanced_use/images/data_op_profile.png
index 6a03815bac3797b1333050e6eae1c89950e01a1c..b83408e92777181f6447ec20239fc92e28084a6a 100644
Binary files a/tutorials/source_en/advanced_use/images/data_op_profile.png and b/tutorials/source_en/advanced_use/images/data_op_profile.png differ
diff --git a/tutorials/source_en/advanced_use/images/minddata_profile.png b/tutorials/source_en/advanced_use/images/minddata_profile.png
index 035939f5e3d548f39e2f5c6c16b2bc7d0c7469ce..79dfad25e6828769a2efc697bb7b02a171dbbdd0 100644
Binary files a/tutorials/source_en/advanced_use/images/minddata_profile.png and b/tutorials/source_en/advanced_use/images/minddata_profile.png differ
diff --git a/tutorials/source_en/advanced_use/images/performance_overall.png b/tutorials/source_en/advanced_use/images/performance_overall.png
index 2d627f972cac0b7848eff1114b0fd2fa4f030e74..3aa536d5f24fc348ad013fa07084fddb1b4f01af 100644
Binary files a/tutorials/source_en/advanced_use/images/performance_overall.png and b/tutorials/source_en/advanced_use/images/performance_overall.png differ
diff --git a/tutorials/source_en/advanced_use/images/step_trace.png b/tutorials/source_en/advanced_use/images/step_trace.png
index 6c54e790e34f52780e4c16f487f81a39906512bf..49c8bb72741173cd3285bfdbacfb206dbc33e3a9 100644
Binary files a/tutorials/source_en/advanced_use/images/step_trace.png and b/tutorials/source_en/advanced_use/images/step_trace.png differ
diff --git a/tutorials/source_en/advanced_use/images/timeline.png b/tutorials/source_en/advanced_use/images/timeline.png
index 21453967d9799b73795fae05529cda6fcb82f6ee..19c60e104169d86f1022758eda15bbc9c8a0dcf6 100644
Binary files a/tutorials/source_en/advanced_use/images/timeline.png and b/tutorials/source_en/advanced_use/images/timeline.png differ
diff --git a/tutorials/source_en/advanced_use/mindinsight_commands.md b/tutorials/source_en/advanced_use/mindinsight_commands.md
new file mode 100644
index 0000000000000000000000000000000000000000..cc3ab201ac4a70a09d083b4aa8c559d0fc31ad61
--- /dev/null
+++ b/tutorials/source_en/advanced_use/mindinsight_commands.md
@@ -0,0 +1,64 @@
+# MindInsight Commands
+
+1. View the command help information.
+
+ ```shell
+ mindinsight --help
+ ```
+
+2. View the version information.
+
+ ```shell
+ mindinsight --version
+ ```
+
+3. Start the service.
+
+ ```shell
+    mindinsight start [-h] [--config <CONFIG>] [--workspace <WORKSPACE>]
+                      [--port <PORT>] [--reload-interval <RELOAD_INTERVAL>]
+                      [--summary-base-dir <SUMMARY_BASE_DIR>]
+ ```
+
+ Optional parameters as follows:
+
+ - `-h, --help` : Displays the help information about the startup command.
+    - `--config <CONFIG>` : Specifies the configuration file or module. CONFIG indicates the physical file path (file:/path/to/config.py), or a module path (python:path.to.config.module) that can be identified by Python.
+    - `--workspace <WORKSPACE>` : Specifies the working directory. The default value of WORKSPACE is $HOME/mindinsight.
+    - `--port <PORT>` : Specifies the port number of the web visualization service. The value ranges from 1 to 65535. The default value of PORT is 8080.
+    - `--url-path-prefix <URL_PATH_PREFIX>` : Specifies the path prefix of the web visualization service. The default value of URL_PATH_PREFIX is an empty string.
+    - `--reload-interval <RELOAD_INTERVAL>` : Specifies the interval (unit: second) for loading data. The value 0 indicates that data is loaded only once. The default value of RELOAD_INTERVAL is 3 seconds.
+    - `--summary-base-dir <SUMMARY_BASE_DIR>` : Specifies the root directory for loading training log data. MindInsight traverses the direct subdirectories of this directory and searches for log files. If a direct subdirectory contains log files, it is identified as a log file directory. If the root directory itself contains log files, it is also identified as a log file directory. SUMMARY_BASE_DIR is the current directory path by default.
+
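+    For example, the following command starts the service on port 8090 and reads summary data from `./summary_base` (a usage sketch; adjust the directory and port to your environment):
+
+    ```shell
+    mindinsight start --summary-base-dir ./summary_base --port 8090
+    ```
+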
+ > When the service is started, the parameter values of the command line are saved as the environment variables of the process and start with `MINDINSIGHT_`, for example, `MINDINSIGHT_CONFIG`, `MINDINSIGHT_WORKSPACE`, and `MINDINSIGHT_PORT`.
+
+4. View the service process information.
+
+    MindInsight provides users with web services. Run the following command to view the running web service processes:
+
+ ```shell
+ ps -ef | grep mindinsight
+ ```
+
+    Run the following command, using the service process ID, to find the working directory `WORKSPACE` of the service process:
+
+ ```shell
+    lsof -p <PID> | grep access
+ ```
+
+    The output, which contains the working directory `WORKSPACE`, is as follows:
+
+ ```shell
+    gunicorn <WORKSPACE>/log/gunicorn/access.log
+ ```
+
+5. Stop the service.
+
+ ```shell
+ mindinsight stop [-h] [--port PORT]
+ ```
+
+ Optional parameters as follows:
+
+ - `-h, --help` : Displays the help information about the stop command.
+    - `--port <PORT>` : Specifies the port number of the web visualization service. The value ranges from 1 to 65535. The default value of PORT is 8080.
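+
+    For example, to stop a service that was started on port 8090 (a usage sketch mirroring the start example above):
+
+    ```shell
+    mindinsight stop --port 8090
+    ```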
diff --git a/tutorials/source_en/advanced_use/network_migration.md b/tutorials/source_en/advanced_use/network_migration.md
index 70fab25c04fa994d30aa02c7ccf45a1a72669f21..9f396fe5480fbc544b0baf578da8103cb0385450 100644
--- a/tutorials/source_en/advanced_use/network_migration.md
+++ b/tutorials/source_en/advanced_use/network_migration.md
@@ -79,7 +79,7 @@ The ResNet-50 network migration and training on the Ascend 910 is used as an exa
num_shards=device_num, shard_id=rank_id)
```
- Then, perform data augmentation, data cleaning, and batch processing. For details about the code, see .
+ Then, perform data augmentation, data cleaning, and batch processing. For details about the code, see .
3. Build a network.
@@ -214,7 +214,7 @@ The ResNet-50 network migration and training on the Ascend 910 is used as an exa
6. Build the entire network.
- The [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/resnet.py) network structure is formed by connecting multiple defined subnets. Follow the rule of defining subnets before using them and define all the subnets used in the `__init__` and connect subnets in the `construct`.
+ The [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/resnet/src/resnet.py) network structure is formed by connecting multiple defined subnets. Follow the rule of defining subnets before using them and define all the subnets used in the `__init__` and connect subnets in the `construct`.
7. Define a loss function and an optimizer.
@@ -272,9 +272,7 @@ Models trained on the Ascend 910 AI processor can be used for inference on diffe
## Examples
-1. [Common network script examples](https://gitee.com/mindspore/mindspore/tree/master/example)
+1. [Common dataset examples](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/loading_the_datasets.html)
-2. [Common dataset examples](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/loading_the_datasets.html)
-
-3. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
+2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)
diff --git a/tutorials/source_en/advanced_use/on_device_inference.md b/tutorials/source_en/advanced_use/on_device_inference.md
index 41d0999bff841ecd2befcb6bb3bbc8d112437d8f..f3ddc120b3c3bcea04bc95c931aeb638aa8b2611 100644
--- a/tutorials/source_en/advanced_use/on_device_inference.md
+++ b/tutorials/source_en/advanced_use/on_device_inference.md
@@ -28,8 +28,8 @@ The environment requirements are as follows:
- Hard disk space: 10 GB or above
- System requirements
- - System: Ubuntu = 16.04.02LTS (availability is checked)
- - Kernel: 4.4.0-62-generic (availability is checked)
+ - System: Ubuntu = 18.04.02LTS (availability is checked)
+ - Kernel: 4.15.0-45-generic (availability is checked)
- Software dependencies
- [cmake](https://cmake.org/download/) >= 3.14.1
@@ -87,7 +87,7 @@ To perform on-device model inference using MindSpore, perform the following step
### Generating an On-Device Model File
1. After training is complete, load the generated checkpoint file to the defined network.
```python
- param_dict = load_checkpoint(ckpoint_file_name=ckpt_file_path)
+ param_dict = load_checkpoint(ckpt_file_name=ckpt_file_path)
load_param_into_net(net, param_dict)
```
2. Call the `export` API to export the `.ms` model file on the device.
@@ -145,7 +145,7 @@ if __name__ == '__main__':
is_ckpt_exist = os.path.exists(ckpt_file_path)
if is_ckpt_exist:
- param_dict = load_checkpoint(ckpoint_file_name=ckpt_file_path)
+ param_dict = load_checkpoint(ckpt_file_name=ckpt_file_path)
load_param_into_net(net, param_dict)
export(net, input_data, file_name="./lenet.ms", file_format='LITE')
print("export model success.")
diff --git a/tutorials/source_en/advanced_use/performance_profiling.md b/tutorials/source_en/advanced_use/performance_profiling.md
index 3152b8150655b5256cae13a0366e06caa3ccc408..c517dbd767d3872b99a0463a1649e59fd837d6f9 100644
--- a/tutorials/source_en/advanced_use/performance_profiling.md
+++ b/tutorials/source_en/advanced_use/performance_profiling.md
@@ -19,11 +19,11 @@
## Overview
-Performance data like operators' execution time are recorded in files and can be viewed on the web page, this can help the user optimize the performance of neural networks. MindInsight Profiler can only support the Ascend chip now.
+Performance data, such as operator execution time, is recorded in files and can be viewed on the web page; this helps users optimize the performance of neural networks. MindInsight Profiler currently supports only the Ascend chip.
## Operation Process
-- Prepare a training script, add profiler apis in the training script, and run the training script.
+- Prepare a training script, add profiler APIs in the training script, and run the training script.
- Start MindInsight and specify the profiler data directory using startup parameters. After MindInsight is started, access the visualization page based on the IP address and port number. The default access IP address is `http://127.0.0.1:8080`.
- Find the training in the list, click the performance profiling link, and view the data on the web page.
@@ -64,7 +64,7 @@ def test_profiler():
## Launch MindInsight
-The MindInsight launch command can refer to the **MindInsight Command** part in [Training Process Visualization](https://www.mindspore.cn/tutorial/en/master/advanced_use/visualization_tutorials.html).
+The MindInsight launch command can refer to [MindInsight Commands](https://www.mindspore.cn/tutorial/en/master/advanced_use/mindinsight_commands.html).
### Performance Analysis
@@ -164,6 +164,10 @@ Users can get the most detailed information from the Timeline:
- From a high level, users can analyse whether the stream split strategy can be optimized and whether the step tail is too long.
- From a low level, users can analyse the execution time of each operator.
+Users can click the download button on the overall performance page to download the Timeline details. The Timeline data file (JSON format) is saved to the local machine and can be displayed with visualization tools. We suggest using `chrome://tracing` or [Perfetto](https://ui.perfetto.dev/#!viewer) to visualize the Timeline.
+- Chrome tracing: Click "load" on the upper left to load the file.
+- Perfetto: Click "Open trace file" on the left to load the file.
+

Figure 7: Timeline Analysis
diff --git a/tutorials/source_en/advanced_use/visualization_tutorials.rst b/tutorials/source_en/advanced_use/visualization_tutorials.rst
index 9c6c8a82031dc7cbc511f20e0517bd66426e75c1..5092b38d50b911d991e2ac351eecd3eba9938207 100644
--- a/tutorials/source_en/advanced_use/visualization_tutorials.rst
+++ b/tutorials/source_en/advanced_use/visualization_tutorials.rst
@@ -6,71 +6,4 @@ Training Process Visualization
dashboard_and_lineage
performance_profiling
-
-MindInsight Commands
---------------------
-
-1. View the command help information.
-
- .. code-block::
-
- mindinsight --help
-
-2. View the version information.
-
- .. code-block::
-
- mindinsight --version
-
-3. Start the service.
-
- .. code-block::
-
- mindinsight start [-h] [--config ] [--workspace ]
- [--port ] [--reload-interval ]
- [--summary-base-dir ]
-
- Optional parameters as follows:
-
- - `-h, --help` : Displays the help information about the startup command.
- - `--config ` : Specifies the configuration file or module. CONFIG indicates the physical file path (file:/path/to/config.py), or a module path (python:path.to.config.module) that can be identified by Python.
- - `--workspace ` : Specifies the working directory. The default value of WORKSPACE is $HOME/mindinsight.
- - `--port ` : Specifies the port number of the web visualization service. The value ranges from 1 to 65535. The default value of PORT is 8080.
- - `--url-path-prefix ` : Specifies the path prefix of the web visualization service. The default value of URL_PATH_PREFIX is empty string.
- - `--reload-interval ` : Specifies the interval (unit: second) for loading data. The value 0 indicates that data is loaded only once. The default value of RELOAD_INTERVAL is 3 seconds.
- - `--summary-base-dir ` : Specifies the root directory for loading training log data. MindInsight traverses the direct subdirectories in this directory and searches for log files. If a direct subdirectory contains log files, it is identified as the log file directory. If a root directory contains log files, it is identified as the log file directory. SUMMARY_BASE_DIR is the current directory path by default.
-
- .. note::
-
- When the service is started, the parameter values of the command line are saved as the environment variables of the process and start with `MINDINSIGHT_`, for example, `MINDINSIGHT_CONFIG`, `MINDINSIGHT_WORKSPACE`, and `MINDINSIGHT_PORT`.
-
-4. View the service process information.
-
- MindInsight provides user with web services. Run the following command to view the running web service process:
-
- .. code-block::
-
- ps -ef | grep mindinsight
-
- Run the following command to access the working directory `WORKSPACE` corresponding to the service process based on the service process ID:
-
- .. code-block::
-
- lsof -p | grep access
-
- Output with the working directory `WORKSPACE` as follows:
-
- .. code-block::
-
- gunicorn /log/gunicorn/access.log
-
-5. Stop the service.
-
- .. code-block::
-
- mindinsight stop [-h] [--port PORT]
-
- Optional parameters as follows:
-
- - `-h, --help` : Displays the help information about the stop command.
- - `--port ` : Specifies the port number of the web visualization service. The value ranges from 1 to 65535. The default value of PORT is 8080.
+ mindinsight_commands
diff --git a/tutorials/source_en/use/data_preparation/converting_datasets.md b/tutorials/source_en/use/data_preparation/converting_datasets.md
index 50fbc8687ef6a22486af9847c3a66f1505be3afa..11ea33c866010d2275ede0a1831a1da03e1b1320 100644
--- a/tutorials/source_en/use/data_preparation/converting_datasets.md
+++ b/tutorials/source_en/use/data_preparation/converting_datasets.md
@@ -178,12 +178,14 @@ You can use the `ImageNetToMR` class to convert the raw ImageNet data (images an
Store the downloaded ImageNet dataset in a folder. The folder contains all images and a mapping file that records labels of the images.
- In the mapping file, there are three columns, which are separated by spaces. They indicate image classes, label IDs, and label names. The following is an example of the mapping file:
- ```
- n02119789 1 pen
- n02100735 2 notbook
- n02110185 3 mouse
- n02096294 4 orange
+ In the mapping file, there are two columns, which are separated by spaces. They indicate image classes and label IDs. The following is an example of the mapping file:
+ ```
+ n01440760 0
+ n01443537 1
+ n01484850 2
+ n01491361 3
+ n01494475 4
+ n01496331 5
```
2. Import the `ImageNetToMR` class for dataset converting.
diff --git a/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md b/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
index 94973c079f3c50f3409349aae8b7775afff89826..ee669c4d3452f00e8ba08b36ed877348a3b6ac94 100644
--- a/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
+++ b/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
@@ -279,7 +279,7 @@ Data augmentation requires the `map` function. For details about how to use the
```
2. Define data augmentation operators. The following uses `Resize` as an example:
```python
- dataset = ds.ImageFolderDatasetV2(DATA_DIR, decode=True) # Deocde images.
+ dataset = ds.ImageFolderDatasetV2(DATA_DIR, decode=True) # Decode images.
resize_op = transforms.Resize(size=(500,500), interpolation=Inter.LINEAR)
dataset.map(input_columns="image", operations=resize_op)
diff --git a/tutorials/source_en/use/data_preparation/loading_the_datasets.md b/tutorials/source_en/use/data_preparation/loading_the_datasets.md
index 6a47a1a22b3b896b632aaf386421184f93306909..4ffaf9de19e9d997bfb30ac93723f44a4026af01 100644
--- a/tutorials/source_en/use/data_preparation/loading_the_datasets.md
+++ b/tutorials/source_en/use/data_preparation/loading_the_datasets.md
@@ -65,7 +65,7 @@ To read a dataset using the `MindDataset` object, perform the following steps:
data_set = ds.MindDataset(dataset_file=CV_FILE_NAME)
```
In the preceding information:
- `dataset_file`: specifies the MindRecord file, including the path and file name.
+   `dataset_file`: specifies a MindRecord file or a list of MindRecord files (an example with a list follows below).
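+
+   For example (a sketch; the file names below are hypothetical placeholders), several MindRecord files can be loaded together by passing a list:
+   ```python
+   data_set = ds.MindDataset(dataset_file=["./data/imagenet_part0.mindrecord", "./data/imagenet_part1.mindrecord"])
+   ```
+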
2. Create a dictionary iterator and read data records through the iterator.
```python
@@ -148,32 +148,112 @@ MindSpore can also read datasets in the `TFRecord` data format through the `TFRe
```
## Loading a Custom Dataset
-You can load a custom dataset using the `GeneratorDataset` object.
+In real scenarios, datasets come in many forms. For a custom dataset, or a dataset that cannot be loaded directly by the APIs above, there are two options.
+One is converting the dataset to the MindSpore data format (for details, see [Converting Datasets to the Mindspore Data Format](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html)). The other is using the `GeneratorDataset` object.
+The following shows how to use `GeneratorDataset`.
-1. Define a function (for example, `Generator1D`) to generate a dataset.
- > The custom generation function returns the objects that can be called. Each time, tuples of `numpy array` are returned as a row of data.
+1. Define an iterable object to generate a dataset. Two examples follow: a custom function that contains `yield`, and a custom class that implements `__getitem__`.
+   Both of them generate a dataset with the numbers from 0 to 9.
+   > The custom iterable object returns a tuple of `numpy` arrays as a row of data each time.
An example of a custom function is as follows:
```python
import numpy as np # Import numpy lib.
- def Generator1D():
- for i in range(64):
+ def generator_func(num):
+ for i in range(num):
yield (np.array([i]),) # Notice, tuple of only one element needs following a comma at the end.
```
-2. Transfer `Generator1D` to `GeneratorDataset` to create a dataset and set `column` to data.
+ An example of a custom class is as follows:
```python
- dataset = ds.GeneratorDataset(Generator1D, ["data"])
+ import numpy as np # Import numpy lib.
+ class Generator():
+
+ def __init__(self, num):
+ self.num = num
+
+ def __getitem__(self, item):
+ return (np.array([item]),) # Note: a one-element tuple needs a trailing comma.
+
+ def __len__(self):
+ return self.num
+ ```
+
+2. Create datasets with `GeneratorDataset`. Pass `generator_func(10)` to `GeneratorDataset` to create `dataset1`, and pass a `Generator(10)` instance to create `dataset2`; in both cases set `column_names` to `["data"]`.
+ ```python
+ dataset1 = ds.GeneratorDataset(source=generator_func(10), column_names=["data"], shuffle=False)
+ dataset2 = ds.GeneratorDataset(source=Generator(10), column_names=["data"], shuffle=False)
```
3. After creating a dataset, create an iterator for the dataset to obtain the corresponding data. Iterator creation methods are as follows:
- - Create an iterator whose return value is of the sequence type.
+ - Create an iterator whose return value is of the sequence type. As shown below, create iterators for `dataset1` and `dataset2` and print the output.
```python
- for data in dataset.create_tuple_iterator(): # each data is a sequence
+ print("dataset1:")
+ for data in dataset1.create_tuple_iterator(): # each data is a sequence
+ print(data[0])
+
+ print("dataset2:")
+ for data in dataset2.create_tuple_iterator(): # each data is a sequence
print(data[0])
```
+ The output is as follows:
+ ```
+ dataset1:
+ [array([0], dtype=int64)]
+ [array([1], dtype=int64)]
+ [array([2], dtype=int64)]
+ [array([3], dtype=int64)]
+ [array([4], dtype=int64)]
+ [array([5], dtype=int64)]
+ [array([6], dtype=int64)]
+ [array([7], dtype=int64)]
+ [array([8], dtype=int64)]
+ [array([9], dtype=int64)]
+ dataset2:
+ [array([0], dtype=int64)]
+ [array([1], dtype=int64)]
+ [array([2], dtype=int64)]
+ [array([3], dtype=int64)]
+ [array([4], dtype=int64)]
+ [array([5], dtype=int64)]
+ [array([6], dtype=int64)]
+ [array([7], dtype=int64)]
+ [array([8], dtype=int64)]
+ [array([9], dtype=int64)]
+ ```
- - Create an iterator whose return value is of the dictionary type.
- ```python
- for data in dataset.create_dict_iterator(): # each data is a dictionary
+ - Create an iterator whose return value is of the dictionary type. As shown below, create iterators for `dataset1` and `dataset2` and print the output.
+ ```python
+ print("dataset1:")
+ for data in dataset1.create_dict_iterator(): # each data is a dictionary
+ print(data["data"])
+
+ print("dataset2:")
+ for data in dataset2.create_dict_iterator(): # each data is a dictionary
print(data["data"])
```
+ The output is as follows:
+ ```
+ dataset1:
+ {'data': array([0], dtype=int64)}
+ {'data': array([1], dtype=int64)}
+ {'data': array([2], dtype=int64)}
+ {'data': array([3], dtype=int64)}
+ {'data': array([4], dtype=int64)}
+ {'data': array([5], dtype=int64)}
+ {'data': array([6], dtype=int64)}
+ {'data': array([7], dtype=int64)}
+ {'data': array([8], dtype=int64)}
+ {'data': array([9], dtype=int64)}
+ dataset2:
+ {'data': array([0], dtype=int64)}
+ {'data': array([1], dtype=int64)}
+ {'data': array([2], dtype=int64)}
+ {'data': array([3], dtype=int64)}
+ {'data': array([4], dtype=int64)}
+ {'data': array([5], dtype=int64)}
+ {'data': array([6], dtype=int64)}
+ {'data': array([7], dtype=int64)}
+ {'data': array([8], dtype=int64)}
+ {'data': array([9], dtype=int64)}
+ ```
\ No newline at end of file
diff --git a/tutorials/source_en/use/multi_platform_inference.md b/tutorials/source_en/use/multi_platform_inference.md
index 230af0e3e0cdd929127e1f1bab2b8f7b9b080701..373dc9b634b7ce3a531d58d77582c12f88a10326 100644
--- a/tutorials/source_en/use/multi_platform_inference.md
+++ b/tutorials/source_en/use/multi_platform_inference.md
@@ -16,7 +16,7 @@ Models based on MindSpore training can be used for inference on different hardwa
1. Inference on the Ascend 910 AI processor
- MindSpore provides the `model.eval` API for model validation. You only need to import the validation dataset. The processing method of the validation dataset is the same as that of the training dataset. For details about the complete code, see .
+ MindSpore provides the `model.eval` API for model validation. You only need to import the validation dataset. The processing method of the validation dataset is the same as that of the training dataset. For details about the complete code, see .
```python
res = model.eval(dataset)
diff --git a/tutorials/source_zh_cn/advanced_use/aware_quantization.md b/tutorials/source_zh_cn/advanced_use/aware_quantization.md
index fcb5a520ae3ec77bbeb5c01c00a873643cc89805..87890a300f1e2618015e2aab0f68db5b60addd69 100644
--- a/tutorials/source_zh_cn/advanced_use/aware_quantization.md
+++ b/tutorials/source_zh_cn/advanced_use/aware_quantization.md
@@ -3,147 +3,192 @@
- [Quantization](#quantization)
- - [概述](#概述)
- - [感知量化训练](#感知量化训练)
+ - [Background](#background)
+ - [Concepts](#concepts)
+ - [Quantization](#quantization-1)
- [Fake Quantization Nodes](#fake-quantization-nodes)
- - [感知量化示例](#感知量化示例)
- - [导入模型重训与推理](#导入模型重训与推理)
+ - [Aware Quantization Training](#aware-quantization-training)
+ - [Aware Quantization Training Example](#aware-quantization-training-example)
+ - [Defining the Fusion Network](#defining-the-fusion-network)
+ - [Converting to a Quantization Network](#converting-to-a-quantization-network)
+ - [Retraining and Inference](#retraining-and-inference)
+ - [Importing a Model for Retraining](#importing-a-model-for-retraining)
+ - [Performing Inference](#performing-inference)
- [References](#references)
-## 概述
-
-与FP32类型相比,FP16、INT8、INT4的低精度数据表达类型所占用空间更小,因此对应的存储空间和传输时间都可以大幅下降。以手机为例,为了提供更人性化和智能的服务,现在越来越多的OS和APP集成了深度学习的功能,自然需要包含大量的模型及权重文件。经典的AlexNet,原始权重文件的大小已经超过了200MB,而最近出现的新模型正在往结构更复杂、参数更多的方向发展。显然,低精度类型的空间受益还是很明显的。低比特的计算性能也更高,INT8相对比FP32的加速比可达到3倍甚至更高,功耗上也对应有所减少。
-
-量化即以较低的推理精度损失将连续取值(或者大量可能的离散取值)的浮点型模型权重或流经模型的张量数据定点近似(通常为int8)为有限多个(或较少的)离散值的过程,它是以更少位数的数据类型用于近似表示32位有限范围浮点型数据的过程,而模型的输入输出依然是浮点型,从而达到减少模型尺寸大小、减少模型内存消耗及加快模型推理速度等目标。
+## Background
-量化方案主要分为两种:感知量化训练(aware quantization training)和训练后量化(post-training quantization)。
+More and more applications choose to use deep learning technology on mobile or edge devices. Taking mobile phones as an example, operating systems and applications are beginning to integrate deep learning functions in order to provide user-friendly and intelligent services. Using those functions involves training or inference, and therefore large numbers of models and weight files. The classic AlexNet already exceeds 200 MB of raw weights, and newer models are moving toward more complex structures with more parameters. Because the hardware resources of mobile and edge devices are limited, models need to be slimmed down, and quantization is one of the techniques developed for this purpose.
-## 感知量化训练
+## Concepts
-感知量化训练为在网络模型训练的过程中,插入伪量化节点进行伪量化训练的过程。
+### Quantization
-MindSpore的感知量化训练是一种伪量化的过程,它是在可识别的某些操作内嵌入伪量化节点,用以统计训练时流经该节点数据的最大最小值。其目的是减少精度损失,其参与模型训练的前向推理过程令模型获得量化损失,但梯度更新需要在浮点下进行,因而其并不参与反向传播过程。
+Quantization is the process of approximating floating-point model weights, or tensor data flowing through the model, which take continuous values (or a large number of possible discrete values), by a finite set of fewer discrete values, at a low loss of inference accuracy. In other words, it approximates 32-bit, limited-range floating-point data with a data type that uses fewer bits (usually INT8), while the model inputs and outputs remain floating point. The benefits are a smaller model size, a lower memory footprint, faster inference, and lower power consumption.
-在MindSpore的伪量化训练中,支持非对称和对称的量化算法,支持4、7和8bit的量化方案。
+As noted above, low-precision data types such as FP16, INT8, and INT4 occupy less space than FP32. Replacing a high-precision data type with a low-precision one can substantially reduce storage space and transmission time. Low-bit computation also performs better: INT8 can reach a speedup of 3x or more over FP32, with a clear advantage in power consumption for the same computation.
-目前MindSpore感知量化训练支持的后端有GPU和Ascend。
+Current industry quantization schemes fall into two categories: aware quantization training and post-training quantization.
### Fake Quantization Nodes
-伪量化节点的作用:(1)找到网络数据的分布,即找到待量化参数的最大值和最小值;(2)模拟量化到低比特操作的时候的精度损失,把该损失作用到网络模型中,传递给损失函数,让优化器去在训练过程中对该损失值进行优化。
-
-伪量化节点的意义在于统计流经数据的min和max值,并参与前向传播,让损失函数的值增大,优化器感知到损失值的增加,并进行持续性地反向传播学习,进一步减少因为伪量化操作而引起的精度下降,从而提升精确度。
-
-对于权值和数据的量化,MindSpore都采用参考文献[1]中的方案进行量化。
-
-### 感知量化示例
-
-使用感知量化训练特性,主要的步骤为:
-
-1. 定义网络模型
-2. 量化自动构图
-
-代码样例如下:
-
-
-1. 定义网络模型
-
- 以LeNet5网络模型为例子,原网络模型的定义如下所示。
-
- ```python
- class LeNet5(nn.Cell):
- def __init__(self, num_class=10):
- super(LeNet5, self).__init__()
- self.num_class = num_class
-
- self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
- self.bn1 = nn.batchnorm(6)
- self.act1 = nn.relu()
-
- self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
- self.bn2 = nn.batchnorm(16)
- self.act2 = nn.relu()
-
- self.fc1 = nn.Dense(16 * 5 * 5, 120)
- self.fc2 = nn.Dense(120, 84)
- self.act3 = nn.relu()
- self.fc3 = nn.Dense(84, self.num_class)
- self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
-
- def construct(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.act1(x)
- x = self.max_pool2d(x)
- x = self.conv2(x)
- x = self.bn2(x)
- x = self.act2(x)
- x = self.max_pool2d(x)
- x = self.flattern(x)
- x = self.fc1(x)
- x = self.act3(x)
- x = self.fc2(x)
- x = self.act3(x)
- x = self.fc3(x)
- return x
- ```
-
- 融合网络模型定义:
-
- 使用`nn.Conv2dBnAct`算子替换原网络模型中的三个算子`nn.Conv2d`、`nn.batchnorm`和`nn.relu`;
-
- 同理,使用`nn.DenseBnAct`算子替换原网络模型中的对应的算子`nn.Dense`、`nn.batchnorm`和`nn.relu`。
-
- 即使`nn.Dense`和`nn.Conv2d`算子后面没有`nn.batchnorm`和`nn.relu`,都要按规定使用上述两个算子进行融合替换。
-
- ```python
- class LeNet5(nn.Cell):
- def __init__(self, num_class=10):
- super(LeNet5, self).__init__()
- self.num_class = num_class
-
- self.conv1 = nn.Conv2dBnAct(1, 6, kernel_size=5, batchnorm=True, activation='relu')
- self.conv2 = nn.Conv2dBnAct(6, 16, kernel_size=5, batchnorm=True, activation='relu')
-
- self.fc1 = nn.DenseBnAct(16 * 5 * 5, 120, activation='relu')
- self.fc2 = nn.DenseBnAct(120, 84, activation='relu')
- self.fc3 = nn.DenseBnAct(84, self.num_class)
- self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
-
- def construct(self, x):
- x = self.conv1(x)
- x = self.max_pool2d(x)
- x = self.conv2(x)
- x = self.max_pool2d(x)
- x = self.flattern(x)
- x = self.fc1(x)
- x = self.fc2(x)
- x = self.fc3(x)
- return x
- ```
-2. 量化自动构图
-
- 使用`create_training_network`接口封装网络模型,该步骤将会自动在融合网络模型中插入伪量化算子。
-
- ```python
- from mindspore.train.quant import quant as qat
- net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=10000, weight_bits=8, act_bits=8)
- ```
-
- 其余步骤(如定义损失函数、优化器、超参数和训练网络等)与普通网络训练相同。
-
-### 导入模型重训与推理
-
-经过`create_training_network`函数之后,融合网络模型的图自动转换为感知量化的图。在训练和推理的时候分为下面三种情况:
-
-- 使用融合网络模型训练得到的checkpoint文件导入,进行感知量化训练,步骤为:a)定义融合网络模型,b)加载checkpoint文件,c)转换量化自动构图,d)训练。
-- 使用感知量化训练得到的checkpoint文件导入,进行感知量化推理,步骤为:a)定义融合网络模型,b)转换量化自动构图,c)加载checkpoint文件,d)推理。
-- 使用正常网络模型训练得到的checkpoint文件导入,进行感知量化训练,步骤为:a)定义融合网络模型,b)加载checkpoint文件,c)训练并保存为融合网络模型对应的checkpoint文件,d)使用融合网络模型训练得到的checkpoint文件导入,进行感知量化训练。
+Fake quantization nodes are nodes inserted during aware quantization training to find the data distribution of the network and to feed back the accuracy loss. Specifically, they:
+- find the distribution of the network data, i.e. the maximum and minimum values of the parameters to be quantized;
+- simulate the accuracy loss of quantizing to low bit widths, apply that loss to the network model, and pass it to the loss function, so that the optimizer can optimize it during training.
+
+## Aware Quantization Training
+MindSpore's aware quantization training simplifies the training model by replacing high-precision data with low-precision data on top of ordinary training. This inevitably introduces some loss of accuracy, so fake quantization nodes are used to simulate that loss and to reduce it through backpropagation learning. For the quantization of weights and data, MindSpore adopts the scheme in reference [1].
+
+Aware quantization training specifications:
+
+| Specification | Description |
+| --- | --- |
+| Hardware support | Hardware platforms with GPUs or Ascend AI 910 processors |
+| Network support | Implemented networks include LeNet, ResNet50, and others; see the corresponding network list for details. |
+| Algorithm support | MindSpore's fake quantization training supports both asymmetric and symmetric quantization algorithms. |
+| Scheme support | 4-, 7-, and 8-bit quantization schemes are supported. |
+
+## Aware Quantization Training Example
+
+The aware quantization training procedure follows the usual training steps, with extra operations after the network is defined and at the final model-generation stage. The complete flow is:
+
+1. Load the dataset and process the data.
+2. Define the network.
+3. Define the fusion network. After defining the network, replace the specified operators to complete the fusion network definition.
+4. Define the optimizer and the loss function.
+5. Train the model. Train the fusion network to generate a fusion model.
+6. Convert to a quantization network. Starting from the fusion model obtained in the previous step, use the conversion API to insert fake quantization nodes into the fusion model, producing the quantization network.
+7. Perform quantization training. Train the quantization network to generate a quantization model.
+
+In the flow above, steps 3, 6, and 7 are the extra steps that distinguish aware quantization training from ordinary training.
+
+> - Fusion network: the network after the specified operators (`nn.Conv2dBnAct`, `nn.DenseBnAct`) are substituted in.
+> - Fusion model: the model in checkpoint format generated by training the fusion network.
+> - Quantization network: the network obtained after the conversion API (`convert_quant_network`) inserts fake quantization nodes into the fusion model.
+> - Quantization model: the model in checkpoint format obtained by training the quantization network.
+
+Next, taking the LeNet network as an example, steps 3 and 6 are described in detail.
+
+> You can find the complete runnable sample code here: .
+
+### Defining the Fusion Network
+
+Define the fusion network: after defining the network, replace the specified operators.
+
+1. Use the `nn.Conv2dBnAct` operator to replace the three operators `nn.Conv2d`, `nn.BatchNorm2d`, and `nn.ReLU` in the original network model.
+2. Use the `nn.DenseBnAct` operator to replace the three operators `nn.Dense`, `nn.BatchNorm2d`, and `nn.ReLU` in the original network model.
+
+> Even if `nn.Dense` and `nn.Conv2d` are not followed by `nn.BatchNorm2d` and `nn.ReLU`, they must still be replaced with the two fusion operators above as required.
+
+The original network model is defined as follows:
+
+```python
+class LeNet5(nn.Cell):
+ def __init__(self, num_class=10):
+ super(LeNet5, self).__init__()
+ self.num_class = num_class
+
+ self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
+ self.bn1 = nn.BatchNorm2d(6)
+ self.act1 = nn.ReLU()
+
+ self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
+ self.bn2 = nn.BatchNorm2d(16)
+ self.act2 = nn.ReLU()
+
+ self.fc1 = nn.Dense(16 * 5 * 5, 120)
+ self.fc2 = nn.Dense(120, 84)
+ self.act3 = nn.ReLU()
+ self.fc3 = nn.Dense(84, self.num_class)
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.flatten = nn.Flatten()
+
+ def construct(self, x):
+ x = self.conv1(x)
+ x = self.bn1(x)
+ x = self.act1(x)
+ x = self.max_pool2d(x)
+ x = self.conv2(x)
+ x = self.bn2(x)
+ x = self.act2(x)
+ x = self.max_pool2d(x)
+ x = self.flatten(x)
+ x = self.fc1(x)
+ x = self.act3(x)
+ x = self.fc2(x)
+ x = self.act3(x)
+ x = self.fc3(x)
+ return x
+```
+
+The fusion network after the operators are replaced is as follows:
+
+```python
+class LeNet5(nn.Cell):
+ def __init__(self, num_class=10):
+ super(LeNet5, self).__init__()
+ self.num_class = num_class
+
+ self.conv1 = nn.Conv2dBnAct(1, 6, kernel_size=5, batchnorm=True, activation='relu')
+ self.conv2 = nn.Conv2dBnAct(6, 16, kernel_size=5, batchnorm=True, activation='relu')
+
+ self.fc1 = nn.DenseBnAct(16 * 5 * 5, 120, activation='relu')
+ self.fc2 = nn.DenseBnAct(120, 84, activation='relu')
+ self.fc3 = nn.DenseBnAct(84, self.num_class)
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.flatten = nn.Flatten()
+
+ def construct(self, x):
+ x = self.conv1(x)
+ x = self.max_pool2d(x)
+ x = self.conv2(x)
+ x = self.max_pool2d(x)
+ x = self.flatten(x)
+ x = self.fc1(x)
+ x = self.fc2(x)
+ x = self.fc3(x)
+ return x
+```
+
+### Converting to a Quantization Network
+
+Use the `convert_quant_network` API to automatically insert fake quantization nodes into the fusion model, converting it into a quantization network.
+
+```python
+from mindspore.train.quant import quant as qat
+
+net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=10000, weight_bits=8, act_bits=8)
+```
+
+## Retraining and Inference
+
+### Importing a Model for Retraining
+
+The above describes aware quantization training from scratch. More commonly, a model file already exists and a quantization model is wanted: the model file and training script produced by ordinary training are available, and aware quantization training starts from them. Here the checkpoint file is reused for retraining; the detailed steps are:
+
+ 1. Load the dataset and process the data.
+ 2. Define the network.
+ 3. Define the fusion network.
+ 4. Define the optimizer and the loss function.
+ 5. Load the model file and retrain. Load the existing model file and retrain it on the fusion network to generate a fusion model; for details on retraining from an existing checkpoint, see the corresponding tutorial. A sketch of this step and the next follows this list.
+ 6. Convert to a quantization network.
+ 7. Perform quantization training.
+
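+A minimal sketch of steps 5 and 6 under stated assumptions (the checkpoint file name is a placeholder, and `LeNet5` is the fusion network defined above):
+
+```python
+from mindspore.train.serialization import load_checkpoint, load_param_into_net
+from mindspore.train.quant import quant as qat
+
+# Define the fusion network (the LeNet5 with Conv2dBnAct/DenseBnAct shown earlier).
+net = LeNet5()
+# Load the existing full-precision checkpoint into the fusion network.
+param_dict = load_checkpoint("lenet_fp32.ckpt")  # hypothetical file name
+load_param_into_net(net, param_dict)
+# ... retrain the fusion network here to obtain the fusion model ...
+# Then insert fake quantization nodes to obtain the quantization network.
+net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=10000,
+                                weight_bits=8, act_bits=8)
+```
+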
+### Performing Inference
+
+Inference with a quantization model works the same way as with an ordinary model: either infer directly from the checkpoint file, or convert the model to a general format (ONNX, GEIR, and so on) for inference.
+
+> For a detailed description of inference, see: .
+
+- Inference using the checkpoint file obtained from aware quantization training (a sketch follows below):
+
+ 1. Load the quantization model.
+ 2. Run inference.
+
+- Conversion to a general format such as ONNX for inference (not yet supported; to be added once development is complete).
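+
+A minimal sketch of checkpoint-based inference under stated assumptions (`lenet_quant.ckpt` and `ds_eval` are placeholders; the conversion arguments mirror the example above):
+
+```python
+# Define the fusion network and convert it, then load the quantized checkpoint.
+net = LeNet5()
+net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=10000,
+                                weight_bits=8, act_bits=8)
+param_dict = load_checkpoint("lenet_quant.ckpt")  # hypothetical file name
+load_param_into_net(net, param_dict)
+# Run inference / evaluation as with an ordinary model, e.g. model.eval(ds_eval).
+```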
+
## References
[1] Jacob B, Kligys S, Chen B, et al. Quantization and training of neural networks for efficient integer-arithmetic-only inference[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 2704-2713.
@@ -151,4 +196,3 @@ MindSpore的感知量化训练是一种伪量化的过程,它是在可识别
[2] Krishnamoorthi R. Quantizing deep convolutional networks for efficient inference: A whitepaper[J]. arXiv preprint arXiv:1806.08342, 2018.
[3] Jacob B, Kligys S, Chen B, et al. Quantization and training of neural networks for efficient integer-arithmetic-only inference[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 2704-2713.
-
diff --git a/tutorials/source_zh_cn/advanced_use/dashboard_and_lineage.md b/tutorials/source_zh_cn/advanced_use/dashboard_and_lineage.md
index d7ac86f0c089880abb8d737c185cb5f05bf1ba64..3c5fa33c47175c3a2930e46d473fb68c90659e4e 100644
--- a/tutorials/source_zh_cn/advanced_use/dashboard_and_lineage.md
+++ b/tutorials/source_zh_cn/advanced_use/dashboard_and_lineage.md
@@ -21,7 +21,8 @@
-
+
+
## Overview
Scalars, images, computational graphs, and model hyperparameters during training are recorded in files and can be viewed on the web page.
diff --git a/tutorials/source_zh_cn/advanced_use/differential_privacy.md b/tutorials/source_zh_cn/advanced_use/differential_privacy.md
index 84c381f7cfc2ade4fe58fe837697b83fb2f81b64..74998d99282254d93306c4782602a8e9faa5b2f5 100644
--- a/tutorials/source_zh_cn/advanced_use/differential_privacy.md
+++ b/tutorials/source_zh_cn/advanced_use/differential_privacy.md
@@ -72,25 +72,25 @@ TAG = 'Lenet5_train'
```python
cfg = edict({
- 'device_target': 'Ascend', # device used
- 'data_path': './MNIST_unzip', # the path of training and testing data set
- 'dataset_sink_mode': False, # whether to deliver all the training data to the device at once
- 'num_classes': 10, # the number of classes of model's output
- 'lr': 0.01, # the learning rate of model's optimizer
- 'momentum': 0.9, # the momentum value of model's optimizer
- 'epoch_size': 10, # training epochs
- 'batch_size': 256, # batch size for training
- 'image_height': 32, # the height of training samples
- 'image_width': 32, # the width of training samples
- 'save_checkpoint_steps': 234, # the step interval for saving the model's checkpoint file
- 'keep_checkpoint_max': 10, # the maximum number of checkpoint files to keep
- 'micro_batches': 32, # the number of small batches split from an original batch
- 'l2_norm_bound': 1.0, # the clip bound of the gradients of model's training parameters
- 'initial_noise_multiplier': 1.5, # the initial multiplication coefficient of the noise added to training
- # parameters' gradients
- 'mechanisms': 'AdaGaussian', # the method of adding noise in gradients while training
- 'optimizer': 'Momentum' # the base optimizer used for Differential privacy training
- })
+ 'num_classes': 10, # the number of classes of model's output
+ 'lr': 0.1, # the learning rate of model's optimizer
+ 'momentum': 0.9, # the momentum value of model's optimizer
+ 'epoch_size': 10, # training epochs
+ 'batch_size': 256, # batch size for training
+ 'image_height': 32, # the height of training samples
+ 'image_width': 32, # the width of training samples
+    'save_checkpoint_steps': 234, # the step interval for saving the model's checkpoint file
+    'keep_checkpoint_max': 10, # the maximum number of checkpoint files to keep
+ 'device_target': 'Ascend', # device used
+ 'data_path': './MNIST_unzip', # the path of training and testing data set
+    'dataset_sink_mode': False, # whether to deliver all the training data to the device at once
+ 'micro_batches': 16, # the number of small batches split from an original batch
+ 'norm_clip': 1.0, # the clip bound of the gradients of model's training parameters
+ 'initial_noise_multiplier': 1.5, # the initial multiplication coefficient of the noise added to training
+ # parameters' gradients
+ 'mechanisms': 'AdaGaussian', # the method of adding noise in gradients while training
+ 'optimizer': 'Momentum' # the base optimizer used for Differential privacy training
+ })
```
2. Configure the necessary information, including the environment information and the execution mode (a one-line sketch follows).
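
   A minimal sketch of this step, assuming the `cfg` defined above:

   ```python
   from mindspore import context
   # Run in graph mode on the device specified in the configuration above.
   context.set_context(mode=context.GRAPH_MODE, device_target=cfg.device_target)
   ```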
@@ -306,13 +306,13 @@ ds_train = generate_mnist_dataset(os.path.join(cfg.data_path, "train"),
5. Display the results.
-   The accuracy of the LeNet model without differential privacy is stable at 99%, while the LeNet model with the adaptive differential privacy mechanism AdaDP converges, with accuracy stable at 91%.
+   The accuracy of the LeNet model without differential privacy is stable at 99%, while the LeNet model with the adaptive differential privacy mechanism AdaDP converges, with accuracy stable at 98%.
```
============== Starting Training ==============
...
============== Starting Testing ==============
...
- ============== Accuracy: 0.9115 ==============
+ ============== Accuracy: 0.9879 ==============
```
### References
diff --git a/tutorials/source_zh_cn/advanced_use/graph_kernel_fusion.md b/tutorials/source_zh_cn/advanced_use/graph_kernel_fusion.md
index 961085f13e9ffc8fab4be64cab69796a6fc22108..3f53b6c3fc50167bace2807e8678b14eb9ed3848 100644
--- a/tutorials/source_zh_cn/advanced_use/graph_kernel_fusion.md
+++ b/tutorials/source_zh_cn/advanced_use/graph_kernel_fusion.md
@@ -111,29 +111,42 @@ context.set_context(enable_graph_kernel=True)
```bash
pytest -s test_graph_kernel_fusion::test_basic_fuse
- ```
-    After the script is executed, some `.dot` files are generated in the directory where the script runs. The `dot` tool can convert the `.dot` files into `.png` files for viewing. We use `6_validate.dot` and `hwopt_d_fuse_basic_opt_end_graph_0.dot` to generate the initial computational graph and the computational graph after basic operator fusion.
-    As shown in the figures below, the initial computation of the constructed network contains two basic operator computations. With the graph kernel fusion switch turned on, the two basic operators (`Mul` and `TensorAdd`) are automatically fused into one operator (a composite operator). In the second figure, the upper right part is the fused composite operator; the network now only needs to execute one composite operator to complete the original `Mul` and `TensorAdd` computations.
+    ```
+
+    After the script is executed, some `.dot` files are generated in the directory where the script runs. The `dot` tool can convert the `.dot` files into `.png` files for viewing. We use `6_validate.dot` and `hwopt_d_fuse_basic_opt_end_graph_0.dot` to generate the initial computational graph and the computational graph after basic operator fusion.
+
+    As shown in Figure 1, the initial computation of the constructed network contains two basic operator computations. With the graph kernel fusion switch turned on, the two basic operators (`Mul` and `TensorAdd`) are automatically fused into one operator (a composite operator). In Figure 2, the upper right part is the fused composite operator; the network now only needs to execute one composite operator to complete the original `Mul` and `TensorAdd` computations. (A minimal sketch of a network of this shape appears after this list.)
+
+ 
-    | Category | Computational graph |
-    | ------ | ------ |
-    | Initial computational graph |  |
-    | Computational graph after basic operator fusion |  |
+    Figure 1: Initial computational graph
+
+    
+
+    Figure 2: Computational graph after basic operator fusion
2. Composite operator fusion scenario: composite operator fusion analyzes the existing composite operators together with their related basic operators and, where a performance gain can be obtained, fuses them into a larger composite operator. This is illustrated with the simple example `NetCompositeFuse`.
```bash
pytest -s test_graph_kernel_fusion::test_composite_fuse
- ```
-    Similarly, we use `6_validate.dot`, `hwopt_d_fuse_basic_opt_end_graph_0.dot`, and `hwopt_d_composite_opt_end_graph_0.dot` to generate the initial computational graph, the computational graph after basic operator fusion, and the computational graph after composite operator fusion.
-    As shown in the figures below, the initial computation of the constructed network contains three basic operator computations. With the graph kernel fusion switch turned on, the first two basic operators (`Mul` and `TensorAdd`) are automatically fused into one operator (a composite operator) during the basic operator fusion phase. The second figure shows the fused composite operator in the upper right part, while one basic operator, `Pow`, remains in the main graph at the lower left. After analysis in the subsequent composite operator fusion phase, the remaining basic operator (`Pow`) is further fused with the existing composite operator to form a new composite operator. In the third figure, the upper right part is the composite operator fused from the three basic operators; the network now only needs to execute one composite operator to complete the original `Mul`, `TensorAdd`, and `Pow` computations.
+    ```
+
+    Similarly, we use `6_validate.dot`, `hwopt_d_fuse_basic_opt_end_graph_0.dot`, and `hwopt_d_composite_opt_end_graph_0.dot` to generate the initial computational graph, the computational graph after basic operator fusion, and the computational graph after composite operator fusion.
+
+    As shown in Figure 3, the initial computation of the constructed network contains three basic operator computations. With the graph kernel fusion switch turned on, the first two basic operators (`Mul` and `TensorAdd`) are automatically fused into one operator (a composite operator) during the basic operator fusion phase. Figure 4 shows the fused composite operator in the upper right part, while one basic operator, `Pow`, remains in the main graph at the lower left. After analysis in the subsequent composite operator fusion phase, the remaining basic operator (`Pow`) is further fused with the existing composite operator to form a new composite operator. In Figure 5, the upper right part is the composite operator fused from the three basic operators; the network now only needs to execute one composite operator to complete the original `Mul`, `TensorAdd`, and `Pow` computations.
+
+ 
+
+    Figure 3: Initial computational graph
+
+ 
+
+    Figure 4: Computational graph after basic operator fusion
-    | Category | Computational graph |
-    | ------ | ------ |
-    | Initial computational graph |  |
-    | Computational graph after basic operator fusion |  |
-    | Computational graph after composite operator fusion |  |
+    
+
+    Figure 5: Computational graph after composite operator fusion
+
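+For reference, a minimal sketch of the kind of network these test cases construct (the names, shapes, and device here are illustrative, not the actual test code):
+
+```python
+import numpy as np
+import mindspore.context as context
+from mindspore import Tensor, nn
+from mindspore.ops import operations as P
+
+# Enable graph kernel fusion before constructing the network.
+context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", enable_graph_kernel=True)
+
+class NetBasicFuse(nn.Cell):
+    """Mul followed by TensorAdd: candidates for basic operator fusion."""
+    def __init__(self):
+        super(NetBasicFuse, self).__init__()
+        self.mul = P.Mul()
+        self.add = P.TensorAdd()
+
+    def construct(self, x):
+        return self.add(self.mul(x, 2.0), 1.0)
+
+output = NetBasicFuse()(Tensor(np.ones((4, 4)).astype(np.float32)))
+```
+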
### Single-Step Training Time
BERT-large scenario: after graph kernel fusion is enabled for the BERT-large network, the single-step training time improves by about 5% while accuracy stays consistent with that before enabling it.
diff --git a/tutorials/source_zh_cn/advanced_use/images/data_op_profile.png b/tutorials/source_zh_cn/advanced_use/images/data_op_profile.png
index 6a03815bac3797b1333050e6eae1c89950e01a1c..b83408e92777181f6447ec20239fc92e28084a6a 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/data_op_profile.png and b/tutorials/source_zh_cn/advanced_use/images/data_op_profile.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_basic_after.png b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_basic_after.png
index fe12a44349f8edad6945312acb21403f5115695f..bcc85cbde11d73e8dd24e1744269cbf1543c87a0 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_basic_after.png and b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_basic_after.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_basic_before.png b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_basic_before.png
index a31ec3cff7e49704151518a718e2157dce2c1d23..7e400d641f7fd1eaf0654c0802fd2ba9b77b78fb 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_basic_before.png and b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_basic_before.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_after.png b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_after.png
index 1ee038af8f83b3877b605c8fa5399b45c9115e64..74b761f33fb710de15acc0bcdcbbe8a6af0d05d4 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_after.png and b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_after.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_before.png b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_before.png
index 5a18b93dc616936bef6723c47772a405af02130c..85faa754d9c18738260e1dfddf791bbd714f0d78 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_before.png and b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_before.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_middle.png b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_middle.png
index 36081dd9b1bf4e6199912af63b5cc86abb1cf748..638f7b73bdfc5be6248073e14c7a26588792233c 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_middle.png and b/tutorials/source_zh_cn/advanced_use/images/graph_kernel_fusion_example_fuse_composite_middle.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/minddata_profile.png b/tutorials/source_zh_cn/advanced_use/images/minddata_profile.png
index 035939f5e3d548f39e2f5c6c16b2bc7d0c7469ce..79dfad25e6828769a2efc697bb7b02a171dbbdd0 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/minddata_profile.png and b/tutorials/source_zh_cn/advanced_use/images/minddata_profile.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/performance_overall.png b/tutorials/source_zh_cn/advanced_use/images/performance_overall.png
index 2d627f972cac0b7848eff1114b0fd2fa4f030e74..3aa536d5f24fc348ad013fa07084fddb1b4f01af 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/performance_overall.png and b/tutorials/source_zh_cn/advanced_use/images/performance_overall.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/step_trace.png b/tutorials/source_zh_cn/advanced_use/images/step_trace.png
index 6c54e790e34f52780e4c16f487f81a39906512bf..49c8bb72741173cd3285bfdbacfb206dbc33e3a9 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/step_trace.png and b/tutorials/source_zh_cn/advanced_use/images/step_trace.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/images/timeline.png b/tutorials/source_zh_cn/advanced_use/images/timeline.png
index 21453967d9799b73795fae05529cda6fcb82f6ee..19c60e104169d86f1022758eda15bbc9c8a0dcf6 100644
Binary files a/tutorials/source_zh_cn/advanced_use/images/timeline.png and b/tutorials/source_zh_cn/advanced_use/images/timeline.png differ
diff --git a/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md b/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md
new file mode 100644
index 0000000000000000000000000000000000000000..b611bab4f433fe71ce4e80efba03c126a04f7d37
--- /dev/null
+++ b/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md
@@ -0,0 +1,64 @@
+# MindInsight Commands
+
+1. View the command help information.
+
+ ```shell
+ mindinsight --help
+ ```
+
+2. View the version information.
+
+ ```shell
+ mindinsight --version
+ ```
+
+3. Start the service.
+
+ ```shell
+    mindinsight start [-h] [--config <CONFIG>] [--workspace <WORKSPACE>]
+                      [--port <PORT>] [--reload-interval <RELOAD_INTERVAL>]
+                      [--summary-base-dir <SUMMARY_BASE_DIR>]
+ ```
+
+    The parameters are described as follows:
+
+    - `-h, --help` : Displays the help information for the start command.
+    - `--config <CONFIG>` : Specifies the configuration file or module. CONFIG is either a physical file path (file:/path/to/config.py) or a module path recognizable by Python (python:path.to.config.module).
+    - `--workspace <WORKSPACE>` : Specifies the working directory. The default WORKSPACE is $HOME/mindinsight.
+    - `--port <PORT>` : Specifies the port of the web visualization service. The value ranges from 1 to 65535; the default PORT is 8080.
+    - `--url-path-prefix <URL_PATH_PREFIX>` : Specifies the URL prefix of the web service. The default URL_PATH_PREFIX is empty.
+    - `--reload-interval <RELOAD_INTERVAL>` : Specifies the interval (in seconds) for loading data; 0 means data is loaded only once. The default RELOAD_INTERVAL is 3 seconds.
+    - `--summary-base-dir <SUMMARY_BASE_DIR>` : Specifies the root directory for loading training log data. MindInsight traverses the direct subdirectories of this path. If a direct subdirectory contains log files, it is identified as a log file directory; if the root directory itself contains log files, it is identified as the log file directory. The default SUMMARY_BASE_DIR is the current directory.
+
+    > When the service starts, the values of the command-line parameters are saved as environment variables of the process, prefixed with `MINDINSIGHT_`, for example, `MINDINSIGHT_CONFIG`, `MINDINSIGHT_WORKSPACE`, and `MINDINSIGHT_PORT`.
+
+4. View the service process information.
+
+    MindInsight provides a web service. Run the following command to view the running web service processes:
+
+ ```shell
+ ps -ef | grep mindinsight
+ ```
+
+    Based on the service process PID, run the following command to view the working directory `WORKSPACE` of the process:
+
+ ```shell
+    lsof -p <PID> | grep access
+ ```
+
+    The output is as follows, where `WORKSPACE` can be found:
+
+ ```shell
+    gunicorn <WORKSPACE>/log/gunicorn/access.log
+ ```
+
+5. Stop the service.
+
+ ```shell
+ mindinsight stop [-h] [--port PORT]
+ ```
+
+    The parameters are described as follows:
+
+    - `-h, --help` : Displays the help information for the stop command.
+    - `--port <PORT>` : Specifies the port of the web visualization service. The value ranges from 1 to 65535; the default PORT is 8080.
diff --git a/tutorials/source_zh_cn/advanced_use/network_migration.md b/tutorials/source_zh_cn/advanced_use/network_migration.md
index 3e9cf2e776d0ffc163e2cb8dd83da375bfd4d752..8d3f574dbe4a90fec2c34772db6d2cdd12426fba 100644
--- a/tutorials/source_zh_cn/advanced_use/network_migration.md
+++ b/tutorials/source_zh_cn/advanced_use/network_migration.md
@@ -77,7 +77,7 @@ MindSpore, TensorFlow, and PyTorch differ to some extent in how network structures are organized
num_shards=device_num, shard_id=rank_id)
```
-   Then, data augmentation, data cleaning, batch processing, and other operations are performed on the data. For details, see the code.
+   Then, data augmentation, data cleaning, batch processing, and other operations are performed on the data. For details, see the code.
3. Build the network.
@@ -210,7 +210,7 @@ MindSpore, TensorFlow, and PyTorch differ to some extent in how network structures are organized
6. Construct the whole network.
-   Connecting the defined subnets gives the structure of the whole [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/mindspore/model_zoo/resnet.py) network. Following the define-first-then-use principle, define all the subnets to be used in `__init__` and connect them in `construct`.
+   Connecting the defined subnets gives the structure of the whole [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/resnet/src/resnet.py) network. Following the define-first-then-use principle, define all the subnets to be used in `__init__` and connect them in `construct`.
7. Define the loss function and optimizer.
@@ -267,8 +267,6 @@ MindSpore, TensorFlow, and PyTorch differ to some extent in how network structures are organized
## Examples
-1. [Common network script examples](https://gitee.com/mindspore/mindspore/tree/master/example)
+1. [Common dataset loading examples](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html)
-2. [Common dataset loading examples](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html)
-
-3. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)
\ No newline at end of file
+2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)
\ No newline at end of file
diff --git a/tutorials/source_zh_cn/advanced_use/on_device_inference.md b/tutorials/source_zh_cn/advanced_use/on_device_inference.md
index 446a65e5de3a143a08864b228f3b2bd79d42a256..eb224de00025b55d2015db23265a7012b98c07c7 100644
--- a/tutorials/source_zh_cn/advanced_use/on_device_inference.md
+++ b/tutorials/source_zh_cn/advanced_use/on_device_inference.md
@@ -28,8 +28,8 @@ MindSpore Predict is a lightweight deep neural network inference engine that provides
- More than 10 GB of hard disk space
- System requirements
-  - System: Ubuntu 16.04.02 LTS (verified)
-  - Kernel: 4.4.0-62-generic (verified)
+  - System: Ubuntu 18.04.02 LTS (verified)
+  - Kernel: 4.15.0-45-generic (verified)
- Software dependencies
- [cmake](https://cmake.org/download/) >= 3.14.1
@@ -86,7 +86,7 @@ The steps for on-device model inference with MindSpore are as follows.
### Generating an On-Device Model File
1. Load the checkpoint file generated during training into the defined network.
```python
- param_dict = load_checkpoint(ckpoint_file_name=ckpt_file_path)
+ param_dict = load_checkpoint(ckpt_file_name=ckpt_file_path)
load_param_into_net(net, param_dict)
```
2. Call the `export` API to export the on-device model file (`.ms`).
@@ -144,7 +144,7 @@ if __name__ == '__main__':
is_ckpt_exist = os.path.exists(ckpt_file_path)
if is_ckpt_exist:
- param_dict = load_checkpoint(ckpoint_file_name=ckpt_file_path)
+ param_dict = load_checkpoint(ckpt_file_name=ckpt_file_path)
load_param_into_net(net, param_dict)
export(net, input_data, file_name="./lenet.ms", file_format='LITE')
print("export model success.")
diff --git a/tutorials/source_zh_cn/advanced_use/performance_profiling.md b/tutorials/source_zh_cn/advanced_use/performance_profiling.md
index 94c9fc8ae7838ce3261dda151609b976529f3b94..07d6c24219c94813544932f09e898fbffa94caa0 100644
--- a/tutorials/source_zh_cn/advanced_use/performance_profiling.md
+++ b/tutorials/source_zh_cn/advanced_use/performance_profiling.md
@@ -66,7 +66,7 @@ def test_profiler():
## Launching MindInsight
-For the startup command, see the **MindInsight Commands** section in [Training Process Visualization](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/visualization_tutorials.html).
+For the startup command, see [MindInsight Commands](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mindinsight_commands.html).
### Performance Analysis
@@ -164,6 +164,10 @@ The Timeline component can display:
By analyzing the timeline, users can perform fine-grained analysis of the training process: at the high level, they can check whether the stream partitioning strategy is reasonable and whether the iteration gap and tail time are too long; at the low level, they can analyze the execution time of each operator.
+Users can click the download button in the Timeline section of the overview page to save the timeline data file (in JSON format) locally and then view its details with other tools. `chrome://tracing` or [Perfetto](https://ui.perfetto.dev/#!viewer) is recommended for displaying the timeline.
+- Chrome tracing: click "load" in the upper left corner to load the file.
+- Perfetto: click "Open trace file" on the left to load the file.
+

Figure 7: Timeline analysis
diff --git a/tutorials/source_zh_cn/advanced_use/visualization_tutorials.rst b/tutorials/source_zh_cn/advanced_use/visualization_tutorials.rst
index 2b6b4aa018d3d849bc0963a2a134ac16a0383d8f..814a815ae3f5ee65736374f2f474cad13fec9887 100644
--- a/tutorials/source_zh_cn/advanced_use/visualization_tutorials.rst
+++ b/tutorials/source_zh_cn/advanced_use/visualization_tutorials.rst
@@ -6,71 +6,4 @@
dashboard_and_lineage
performance_profiling
-
-MindInsight Commands
---------------------
-
-1. View the command help information.
-
-   .. code-block::
-
-       mindinsight --help
-
-2. View the version information.
-
-   .. code-block::
-
-       mindinsight --version
-
-3. Start the service.
-
-   .. code-block::
-
-       mindinsight start [-h] [--config <CONFIG>] [--workspace <WORKSPACE>]
-                         [--port <PORT>] [--reload-interval <RELOAD_INTERVAL>]
-                         [--summary-base-dir <SUMMARY_BASE_DIR>]
-
-   The parameters are described as follows:
-
-   - `-h, --help` : Displays the help information for the start command.
-   - `--config <CONFIG>` : Specifies the configuration file or module. CONFIG is either a physical file path (file:/path/to/config.py) or a module path recognizable by Python (python:path.to.config.module).
-   - `--workspace <WORKSPACE>` : Specifies the working directory. The default WORKSPACE is $HOME/mindinsight.
-   - `--port <PORT>` : Specifies the port of the web visualization service. The value ranges from 1 to 65535; the default PORT is 8080.
-   - `--url-path-prefix <URL_PATH_PREFIX>` : Specifies the URL prefix of the web service. The default URL_PATH_PREFIX is empty.
-   - `--reload-interval <RELOAD_INTERVAL>` : Specifies the interval (in seconds) for loading data; 0 means data is loaded only once. The default RELOAD_INTERVAL is 3 seconds.
-   - `--summary-base-dir <SUMMARY_BASE_DIR>` : Specifies the root directory for loading training log data. MindInsight traverses the direct subdirectories of this path. If a direct subdirectory contains log files, it is identified as a log file directory; if the root directory itself contains log files, it is identified as the log file directory. The default SUMMARY_BASE_DIR is the current directory.
-
-   .. note::
-
-      When the service starts, the values of the command-line parameters are saved as environment variables of the process, prefixed with `MINDINSIGHT_`, for example, `MINDINSIGHT_CONFIG`, `MINDINSIGHT_WORKSPACE`, and `MINDINSIGHT_PORT`.
-
-4. View the service process information.
-
-   MindInsight provides a web service. Run the following command to view the running web service processes:
-
-   .. code-block::
-
-       ps -ef | grep mindinsight
-
-   Based on the service process PID, run the following command to view the working directory `WORKSPACE` of the process:
-
-   .. code-block::
-
-       lsof -p <PID> | grep access
-
-   The output is as follows, where `WORKSPACE` can be found:
-
-   .. code-block::
-
-       gunicorn <WORKSPACE>/log/gunicorn/access.log
-
-5. Stop the service.
-
-   .. code-block::
-
-       mindinsight stop [-h] [--port PORT]
-
-   The parameters are described as follows:
-
-   - `-h, --help` : Displays the help information for the stop command.
-   - `--port <PORT>` : Specifies the port of the web visualization service. The value ranges from 1 to 65535; the default PORT is 8080.
+ mindinsight_commands
diff --git a/tutorials/source_zh_cn/index.rst b/tutorials/source_zh_cn/index.rst
index 8a241a818c611eabfeecac74d6987e2368cb0eb7..7a0252dd78f775ec5212ae1dd479e027454ed3cc 100644
--- a/tutorials/source_zh_cn/index.rst
+++ b/tutorials/source_zh_cn/index.rst
@@ -48,6 +48,7 @@ MindSpore Tutorials
advanced_use/distributed_training_tutorials
advanced_use/mixed_precision
+ advanced_use/graph_kernel_fusion
advanced_use/aware_quantization
.. toctree::
diff --git a/tutorials/source_zh_cn/use/data_preparation/converting_datasets.md b/tutorials/source_zh_cn/use/data_preparation/converting_datasets.md
index 7bd585166ad30dc26656b09a6686fd7cacc8b398..ba744652da3054a25d64a8d55e95dfbf84da3071 100644
--- a/tutorials/source_zh_cn/use/data_preparation/converting_datasets.md
+++ b/tutorials/source_zh_cn/use/data_preparation/converting_datasets.md
@@ -178,12 +178,14 @@ MindSpore provides utility classes to convert common datasets into the MindSpore data format
Organize the downloaded ImageNet dataset into a folder that contains all the images and a mapping file that records the label of each image.
-   The label mapping file contains three columns: the directory of each class of images, the label ID, and the label name, separated by spaces. An example mapping file is as follows:
- ```
- n02119789 1 pen
- n02100735 2 notbook
- n02110185 3 mouse
- n02096294 4 orange
+   The label mapping file contains two columns: the directory of each class of images and the label ID, separated by spaces. An example mapping file is as follows:
+ ```
+ n01440760 0
+ n01443537 1
+ n01484850 2
+ n01491361 3
+ n01494475 4
+ n01496331 5
```
2. Import the dataset conversion utility class `ImageNetToMR` (a one-line import sketch follows).
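
   A one-line sketch, assuming the utility class lives in `mindspore.mindrecord`:

   ```python
   from mindspore.mindrecord import ImageNetToMR
   ```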
diff --git a/tutorials/source_zh_cn/use/data_preparation/data_processing_and_augmentation.md b/tutorials/source_zh_cn/use/data_preparation/data_processing_and_augmentation.md
index 13fc97b52e907834b2d88751ae82adc472a1803e..190440fb1e9addb7a4a13d6b2498da3fd143ac71 100644
--- a/tutorials/source_zh_cn/use/data_preparation/data_processing_and_augmentation.md
+++ b/tutorials/source_zh_cn/use/data_preparation/data_processing_and_augmentation.md
@@ -278,7 +278,7 @@ MindSpore provides the `c_transforms` and `py_transforms` modules for users to perform
```
2. Define the data augmentation operators, using `Resize` as an example:
```python
- dataset = ds.ImageFolderDatasetV2(DATA_DIR, decode=True) # Deocde images.
+ dataset = ds.ImageFolderDatasetV2(DATA_DIR, decode=True) # Decode images.
resize_op = transforms.Resize(size=(500,500), interpolation=Inter.LINEAR)
dataset.map(input_columns="image", operations=resize_op)
diff --git a/tutorials/source_zh_cn/use/data_preparation/loading_the_datasets.md b/tutorials/source_zh_cn/use/data_preparation/loading_the_datasets.md
index 3186acaa2bd0c2779064fc2ae411bd324d8898f2..26ddc142db4beb0018256f866b200ed3c5c4493f 100644
--- a/tutorials/source_zh_cn/use/data_preparation/loading_the_datasets.md
+++ b/tutorials/source_zh_cn/use/data_preparation/loading_the_datasets.md
@@ -65,7 +65,7 @@ MindSpore natively supports reading datasets stored in the MindSpore data format, `MindRecord`
data_set = ds.MindDataset(dataset_file=CV_FILE_NAME)
```
Where:
- `dataset_file`: specifies the MindRecord file, including the path and file name.
+ `dataset_file`: specifies a MindRecord file or a list of MindRecord files.
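+
+ For example, a list of MindRecord files can also be passed (a sketch; the second file name is illustrative):
+
+ ```python
+ data_set = ds.MindDataset(dataset_file=[CV_FILE_NAME, "./imagenet_1.mindrecord"])
+ ```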
2. Create a dictionary iterator and read data records through the iterator.
```python
@@ -148,32 +148,110 @@ MindSpore也支持读取`TFRecord`数据格式的数据集,可以通过`TFReco
```
## Loading a Custom Dataset
-For a custom dataset, you can load it through a `GeneratorDataset` object.
+In real-world scenarios, there are all kinds of datasets. For a custom dataset, or a dataset that is not yet supported for direct loading, there are two approaches.
+One is to convert the dataset into the MindRecord format (see the [Converting Datasets to the MindSpore Data Format](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/converting_datasets.html) section); the other is to load it through a `GeneratorDataset` object. The following shows how to use `GeneratorDataset`.
-1. Define a function (named `Generator1D` in this example) for generating the dataset.
-   > The custom generation function is a callable object that returns a tuple of `numpy array`s each time, as one row of data.
+1. Define an iterable object for generating the dataset. Two examples are shown below: a custom function with a `yield` return value, and a custom class with `__getitem__`. Both produce a dataset containing the numbers 0 to 9.
+   > The custom iterable object returns a tuple of `numpy array`s each time, as one row of data.
   An example custom function is as follows:
```python
import numpy as np # Import numpy lib.
-    def Generator1D():
-        for i in range(64):
+    def generator_func(num):
+        for i in range(num):
            yield (np.array([i]),)  # Note: a tuple with only one element requires a trailing comma.
```
-2. Pass `Generator1D` into `GeneratorDataset` to create a dataset, and set the `column` name to "data".
+
+   An example custom class is as follows:
```python
- dataset = ds.GeneratorDataset(Generator1D, ["data"])
+    import numpy as np  # Import the numpy lib.
+    class Generator():
+
+        def __init__(self, num):
+            self.num = num
+
+        def __getitem__(self, item):
+            return (np.array([item]),)  # Note: a tuple with only one element requires a trailing comma.
+
+        def __len__(self):
+            return self.num
+ ```
+2. Use `GeneratorDataset` to create datasets. Pass `generator_func` into `GeneratorDataset` to create the dataset `dataset1`, setting the `column` name to "data".
+   Pass the defined `Generator` object into `GeneratorDataset` to create the dataset `dataset2`, setting the `column` name to "data".
+ ```python
+ dataset1 = ds.GeneratorDataset(source=generator_func(10), column_names=["data"], shuffle=False)
+ dataset2 = ds.GeneratorDataset(source=Generator(10), column_names=["data"], shuffle=False)
```
3. After the dataset is created, you can obtain the data by creating an iterator over it. There are two ways to create an iterator.
-   - Create an iterator whose return values are sequences.
+   - Create an iterator whose return values are sequences. The following creates an iterator over each of `dataset1` and `dataset2` and prints the output data.
```python
- for data in dataset.create_tuple_iterator(): # each data is a sequence
- print(data[0])
+ print("dataset1:")
+ for data in dataset1.create_tuple_iterator(): # each data is a sequence
+ print(data)
+
+ print("dataset2:")
+ for data in dataset2.create_tuple_iterator(): # each data is a sequence
+ print(data)
```
-
-   - Create an iterator whose return values are dictionaries.
- ```python
- for data in dataset.create_dict_iterator(): # each data is a dictionary
+   The output is as follows:
+ ```
+ dataset1:
+ [array([0], dtype=int64)]
+ [array([1], dtype=int64)]
+ [array([2], dtype=int64)]
+ [array([3], dtype=int64)]
+ [array([4], dtype=int64)]
+ [array([5], dtype=int64)]
+ [array([6], dtype=int64)]
+ [array([7], dtype=int64)]
+ [array([8], dtype=int64)]
+ [array([9], dtype=int64)]
+ dataset2:
+ [array([0], dtype=int64)]
+ [array([1], dtype=int64)]
+ [array([2], dtype=int64)]
+ [array([3], dtype=int64)]
+ [array([4], dtype=int64)]
+ [array([5], dtype=int64)]
+ [array([6], dtype=int64)]
+ [array([7], dtype=int64)]
+ [array([8], dtype=int64)]
+ [array([9], dtype=int64)]
+ ```
+
+   - Create an iterator whose return values are dictionaries. The following creates an iterator over each of `dataset1` and `dataset2` and prints the output data.
+ ```python
+ print("dataset1:")
+ for data in dataset1.create_dict_iterator(): # each data is a dictionary
+ print(data["data"])
+
+ print("dataset2:")
+ for data in dataset2.create_dict_iterator(): # each data is a dictionary
print(data["data"])
```
+   The output is as follows:
+ ```
+ dataset1:
+ {'data': array([0], dtype=int64)}
+ {'data': array([1], dtype=int64)}
+ {'data': array([2], dtype=int64)}
+ {'data': array([3], dtype=int64)}
+ {'data': array([4], dtype=int64)}
+ {'data': array([5], dtype=int64)}
+ {'data': array([6], dtype=int64)}
+ {'data': array([7], dtype=int64)}
+ {'data': array([8], dtype=int64)}
+ {'data': array([9], dtype=int64)}
+ dataset2:
+ {'data': array([0], dtype=int64)}
+ {'data': array([1], dtype=int64)}
+ {'data': array([2], dtype=int64)}
+ {'data': array([3], dtype=int64)}
+ {'data': array([4], dtype=int64)}
+ {'data': array([5], dtype=int64)}
+ {'data': array([6], dtype=int64)}
+ {'data': array([7], dtype=int64)}
+ {'data': array([8], dtype=int64)}
+ {'data': array([9], dtype=int64)}
+ ```