diff --git a/api/source_en/api/python/mindspore/mindspore.nn.probability.rst b/api/source_en/api/python/mindspore/mindspore.nn.probability.rst
index 9ed8e3699313bcc607274c7616faa39cd49d6f23..2235f574850eceaf0f26a4d6a1b2c4927a2a247e 100644
--- a/api/source_en/api/python/mindspore/mindspore.nn.probability.rst
+++ b/api/source_en/api/python/mindspore/mindspore.nn.probability.rst
@@ -12,7 +12,14 @@ mindspore.nn.probability.bnn_layers
 .. automodule:: mindspore.nn.probability.bnn_layers
     :members:
+    :exclude-members: ConvReparam, DenseReparam
+
+.. autoclass:: ConvReparam(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_prior_fn=NormalPrior, weight_posterior_fn=, bias_prior_fn=NormalPrior, bias_posterior_fn=)
+    :members:
+
+.. autoclass:: DenseReparam(in_channels, out_channels, activation=None, has_bias=True, weight_prior_fn=NormalPrior, weight_posterior_fn=, bias_prior_fn=NormalPrior, bias_posterior_fn=)
+    :members:
+
 .. autoclass:: WithBNNLossCell
     :members:

diff --git a/api/source_zh_cn/api/python/mindspore/mindspore.nn.probability.rst b/api/source_zh_cn/api/python/mindspore/mindspore.nn.probability.rst
index 729903dfca591c264e4d1b5b093bf3356203c67a..70e8f69bf9f2a3e6d8d6f1be8636cad79b61fb69 100644
--- a/api/source_zh_cn/api/python/mindspore/mindspore.nn.probability.rst
+++ b/api/source_zh_cn/api/python/mindspore/mindspore.nn.probability.rst
@@ -12,7 +12,14 @@ mindspore.nn.probability.bnn_layers
 .. automodule:: mindspore.nn.probability.bnn_layers
     :members:
+    :exclude-members: ConvReparam, DenseReparam
+
+.. autoclass:: ConvReparam(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_prior_fn=NormalPrior, weight_posterior_fn=, bias_prior_fn=NormalPrior, bias_posterior_fn=)
+    :members:
+
+.. autoclass:: DenseReparam(in_channels, out_channels, activation=None, has_bias=True, weight_prior_fn=NormalPrior, weight_posterior_fn=, bias_prior_fn=NormalPrior, bias_posterior_fn=)
+    :members:
+
 .. autoclass:: WithBNNLossCell
     :members:

diff --git a/api/source_zh_cn/programming_guide/ops.md b/api/source_zh_cn/programming_guide/ops.md
index eb7d4a3352ff0f63bc3b813b30c1dfb1ecd957a7..53bb69f5dfd011be5b4ed47d1f440da45df482a6 100644
--- a/api/source_zh_cn/programming_guide/ops.md
+++ b/api/source_zh_cn/programming_guide/ops.md
@@ -1,4 +1,4 @@
-# ops模块
+# ops模块
@@ -20,7 +20,7 @@ ops主要包括operations、functional和composite,可通过ops直接获取到

## mindspore.ops.operations

-operations提供了所有的Primitive算子接口,是开放给用户的最低阶算子接口。
+operations提供了所有的Primitive算子接口,是开放给用户的最低阶算子接口。算子支持情况可查询[算子支持列表](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html#mindspore-ops-operations)。

Primitive算子也称为算子原语,它直接封装了底层的Ascend、GPU、AICPU、CPU等多种算子的具体实现,为用户提供基础算子能力。

@@ -47,25 +47,10 @@ output = [ 1. 8. 64.]
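> For context while reviewing this hunk: `F.tensor_pow` is the functional counterpart of the `P.Pow` primitive discussed here. Below is a minimal runnable sketch of the two call styles, assembled from the examples already on this page (standard MindSpore imports assumed; this is illustrative only, not part of the patch):

```python
import numpy as np
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P
from mindspore.ops import functional as F

input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
input_y = 3.0

# Primitive style: instantiate the operator first, then call it.
pow_op = P.Pow()
output_ops = pow_op(input_x, input_y)

# Functional style: call the pre-instantiated functional operator directly,
# skipping the explicit instantiation step for attribute-free operators.
output_func = F.tensor_pow(input_x, input_y)

print("output =", output_ops)   # output = [ 1.  8. 64.]
print("output =", output_func)  # output = [ 1.  8. 64.]
```

Both styles compute the element-wise power; the functional form is exactly the simplification the rewritten section points readers to.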
## mindspore.ops.functional -为了简化没有属性的算子的调用流程,MindSpore提供了一些算子的functional版本。入参要求参考原算子的输入输出要求。算子支持情况可以查询[算子支持列表](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html#mindspore-ops-operations)。 +为了简化没有属性的算子的调用流程,MindSpore提供了一些算子的functional版本。入参要求参考原算子的输入输出要求。算子支持情况可以查询[算子支持列表](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html#mindspore-ops-functional)。 例如`P.Pow`算子,我们提供了functional版本的`F.tensor_pow`算子。 -使用operations的代码样例如下: - -```python -import numpy as np -import mindspore -from mindspore import Tensor -from mindspore.ops import operations as P - -input_x = mindspore.Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32) -input_y = 3.0 -pow = P.Pow() -output = pow(input_x, input_y) -print("output =", output) -``` - 使用functional的代码样例如下: ```python @@ -138,4 +123,4 @@ tensor [[2.4, 4.2] scalar 3 ``` -此外,高阶函数`GradOperation`提供了根据输入的函数,求这个函数对应的求梯度的函数的方式,详细可以参阅[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.composite.html#mindspore.ops.composite.GradOperation)。 \ No newline at end of file +此外,高阶函数`GradOperation`提供了根据输入的函数,求这个函数对应的梯度函数的方式,详细可以参阅[API文档](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.composite.html#mindspore.ops.composite.GradOperation)。 \ No newline at end of file diff --git a/api/source_zh_cn/programming_guide/tensor.md b/api/source_zh_cn/programming_guide/tensor.md index 08f9ec6d7f5ed1c450964eb15e5f613831607152..c935dc71bba6a42e0174e03048f7e4d704db9bb4 100644 --- a/api/source_zh_cn/programming_guide/tensor.md +++ b/api/source_zh_cn/programming_guide/tensor.md @@ -55,22 +55,22 @@ print(x, "\n\n", y, "\n\n", z, "\n\n", m, "\n\n", n, "\n\n", p) ``` [[1 2] - [3 4]] + [3 4]] -1 +1 -2 +2 -True +True -[1 2 3] +[1 2 3] [4. 5. 6.] ``` ## 变量张量 -变量张量的值在网络中可以被更新,用来表示需要被更新的参数,MindSpore使用Tensor的子类Parameter构造变量张量,构造时支持传入Tensor或者Initializer。 +变量张量的值在网络中可以被更新,用来表示需要被更新的参数,MindSpore使用Tensor的子类Parameter构造变量张量,构造时支持传入Tensor、Initializer或者Number。 代码样例如下: @@ -83,21 +83,24 @@ from mindspore.common.initializer import initializer x = Tensor(np.arange(2*3).reshape((2, 3))) y = Parameter(x, name="x") z = Parameter(initializer('ones', [1, 2, 3], mstype.float32), name='y') +m = Parameter(2.0, name='m') -print(x, "\n\n", y, "\n\n", z) +print(x, "\n\n", y, "\n\n", z, "\n\n", m) ``` 输出如下: ``` [[0 1 2] - [3 4 5]] + [3 4 5]] -Parameter (name=x, value=[[0 1 2] - [3 4 5]]) +Parameter (name=x, value=[[0 1 2] + [3 4 5]]) Parameter (name=y, value=[[[1. 1. 1.] [1. 1. 1.]]]) + +Parameter (name=m, value=2.0) ``` ## 张量的属性和方法 @@ -152,9 +155,9 @@ print(x_all, "\n\n", x_any, "\n\n", x_array) 输出如下: ``` -False +False -True +True [[ True True] [False False]] @@ -190,9 +193,9 @@ True ``` [[1. 1.] - [1. 1.]] + [1. 
1.]] - 1.0 + 1.0 [1 2 3] ``` @@ -270,17 +273,17 @@ True ``` [[[0 1 2] - [3 4 5]]] + [3 4 5]]] [[[0 1] [2 3] - [4 5]]] + [4 5]]] [[[[0 1 2] - [3 4 5]]]] + [3 4 5]]]] [[0 1 2] - [3 4 5]] + [3 4 5]] [[[0 3]] [[1 4]] @@ -315,17 +318,17 @@ True ``` [[0 1 2] - [3 4 5]] + [3 4 5]] [[[0 1 2] [3 4 5]] [[0 1 2] - [3 4 5]]] + [3 4 5]]] [[0 1 2] [3 4 5] [0 1 2] - [3 4 5]] + [3 4 5]] (Tensor(shape=[1, 3], dtype=Int64, [[0 1 2]]), Tensor(shape=[1, 3], dtype=Int64, [[3 4 5]])) ``` @@ -357,7 +360,7 @@ print(x, "\n\n", y) ``` [[0 1 2] - [3 4 5]] + [3 4 5]] [[0 1 2 0 1 2 0 1 2] [3 4 5 3 4 5 3 4 5] diff --git a/docs/source_en/FAQ.md b/docs/source_en/FAQ.md index 681d68614b8840f572c53221c748d1fe4bb01157..6478958c1882e17cafa51e5d0586c4132527e972 100644 --- a/docs/source_en/FAQ.md +++ b/docs/source_en/FAQ.md @@ -1,5 +1,7 @@ # FAQ +`Ascend` `GPU` `CPU` `Environmental Setup` `Model Export` `Model Training` `Beginner` `Intermediate` `Expert` + - [FAQ](#faq) @@ -16,6 +18,7 @@ - [Supported Features](#supported-features) + ## Installation diff --git a/docs/source_en/design/mindinsight/graph_visual_design.md b/docs/source_en/design/mindinsight/graph_visual_design.md index d4d4efaf3a47cf51c0b77522bde76dc94ae5f8e7..1aa1ab694d04bbb8d7d4026780b254cb6add957e 100644 --- a/docs/source_en/design/mindinsight/graph_visual_design.md +++ b/docs/source_en/design/mindinsight/graph_visual_design.md @@ -1,6 +1,6 @@ # Computational Graph Visualization Design -`Ascend` `GPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor` +`Ascend` `GPU` `CPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor` diff --git a/docs/source_en/design/mindinsight/tensor_visual_design.md b/docs/source_en/design/mindinsight/tensor_visual_design.md index 3f3a21e5eeccf1bdc1b926b28cde8ba048acb67f..8117ef8e0e0e19e2353c933b4c0d8998e9c83e4d 100644 --- a/docs/source_en/design/mindinsight/tensor_visual_design.md +++ b/docs/source_en/design/mindinsight/tensor_visual_design.md @@ -1,6 +1,6 @@ # Tensor Visualization Design -`Ascend` `GPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor` +`Ascend` `GPU` `CPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor` diff --git a/docs/source_en/design/mindinsight/training_visual_design.md b/docs/source_en/design/mindinsight/training_visual_design.md index fdc47ea1e516d92e509fa6d49a50b5f83ac7ad38..0b19c78cff668b7ad76c49b1c3980aaebd82a3de 100644 --- a/docs/source_en/design/mindinsight/training_visual_design.md +++ b/docs/source_en/design/mindinsight/training_visual_design.md @@ -1,5 +1,7 @@ # Overall Design of Training Visualization +`Ascend` `GPU` `CPU` `Model Development` `Model Optimization` `Framework Development` `Intermediate` `Expert` `Contributor` + - [Overall Design of Training Visualization](#overall-design-of-training-visualization) diff --git a/docs/source_en/network_list.md b/docs/source_en/network_list.md index 916efdde0c895d7038a1c16cc6e821e6784ad9cd..1f5a802e5d42d39d5f4706defe431d687145e9dd 100644 --- a/docs/source_en/network_list.md +++ b/docs/source_en/network_list.md @@ -23,10 +23,12 @@ |Computer Vision (CV) | Image Classification | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing |Computer Vision (CV) | Image Classification | 
[ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing | Computer Vision (CV) | Image Classification | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing +| Computer Vision (CV) | Image Classification | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | Computer Vision (CV) | Mobile Image Classification
Image Classification
Semantic Segmentation | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing | Computer Vision (CV) | Mobile Image Classification
Image Classification
Semantic Segmentation | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
|Computer Vision (CV) | Targets Detection | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported |Doing | Doing
| Computer Vision (CV) | Targets Detection | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
+| Computer Vision (CV) | Targets Detection | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
| Computer Vision (CV) | Targets Detection | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
| Computer Vision (CV) | Semantic Segmentation | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
| Computer Vision(CV) | Targets Detection | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
@@ -34,8 +36,8 @@
| Natural Language Processing (NLP) | Natural Language Understanding | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
| Natural Language Processing (NLP) | Natural Language Understanding | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
| Natural Language Processing (NLP) | Natural Language Understanding | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
-| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Doing | Doing
-| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Doing | Doing
+| Natural Language Processing (NLP) | Natural Language Understanding | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
+| Recommender | Recommender System, CTR prediction | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
| Recommender | Recommender System, Search ranking | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
| Graph Neural Networks(GNN)| Text Classification | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
| Graph Neural Networks(GNN)| Text Classification | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
diff --git a/docs/source_en/operator_list.md b/docs/source_en/operator_list.md
index b4c760c3175107added8c5376538b050a492c89f..3a79b0a2b9e79d7375608e4ed9a421cf1360a1f4 100644
---
a/docs/source_en/operator_list.md +++ b/docs/source_en/operator_list.md @@ -102,7 +102,7 @@ | [mindspore.ops.operations.Acosh](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Acosh) | Doing | Doing | Doing | nn_ops | [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod) | Supported | Doing | Doing | nn_ops | [mindspore.ops.operations.Elu](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Elu) | Supported | Doing | Doing | nn_ops -| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Doing | Doing | Doing | nn_ops +| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Supported | Supported | Doing | nn_ops | [mindspore.ops.operations.Unpack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Unpack) | Supported | Doing | Doing | nn_ops | [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | Supported | Doing | Doing | nn_ops | [mindspore.ops.operations.L2Loss](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Loss) | Supported | Doing | Doing | nn_ops @@ -184,8 +184,8 @@ | [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | Supported | Supported | Supported | math_ops | [mindspore.ops.operations.Square](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | Supported | Supported | Doing | math_ops | [mindspore.ops.operations.SquareSumAll](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquareSumAll) | Supported | Doing | Doing | math_ops -| [mindspore.ops.operations.Rsqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rsqrt) | Supported | Supported | Doing | math_ops -| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | Supported | Supported | Doing | math_ops +| [mindspore.ops.operations.Rsqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rsqrt) | Supported | Doing | Doing | math_ops +| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | Supported | Doing | Doing | math_ops | [mindspore.ops.operations.Reciprocal](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reciprocal) | Supported | Supported | Doing | math_ops | [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | Supported | Supported | Doing | math_ops | 
[mindspore.ops.operations.Exp](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | Supported | Supported | Doing | math_ops

diff --git a/docs/source_en/roadmap.md b/docs/source_en/roadmap.md
index a9525b2e967e301fdc33201d7263ac9a1c896a9e..befe13ad85fa0bcbd51743290f7a8b7af42498dc 100644
--- a/docs/source_en/roadmap.md
+++ b/docs/source_en/roadmap.md
@@ -69,11 +69,14 @@ We sincerely hope that you can join the discussion in the user community and con
* Protect data privacy during training and inference.

## Inference Framework
-* Support TensorFlow, Caffe, and ONNX model formats.
-* Support iOS.
-* Improve more CPU operators.
-* Support more CV/NLP models.
-* Online learning.
-* Support deployment on IoT devices.
-* Low-bit quantization.
-* CPU and NPU heterogeneous scheduling.
+* Continuously optimize operator performance and completeness, and add more operators.
+* Support inference for NLP models.
+* Visualization for MindSpore Lite models.
+* MindSpore Micro, an ultra-lightweight inference solution for embedded systems, supporting ARM Cortex-A and Cortex-M hardware.
+* Support re-training and federated learning on mobile devices.
+* Support on-device auto-parallel.
+* MindData on mobile devices, supporting image resizing and pixel data transformation.
+* Support post-training quantization and mixed-precision inference to improve performance.
+* Support AI acceleration hardware such as Kirin NPU and MTK APU.
+* Support pipelined inference for multiple models.
+* C++ API for model construction.

diff --git a/docs/source_zh_cn/FAQ.md b/docs/source_zh_cn/FAQ.md
index 9873b933d666d18b00026515f9cfd14a228f23f5..1698974891db256511ea819d05b944e46ef69ae7 100644
--- a/docs/source_zh_cn/FAQ.md
+++ b/docs/source_zh_cn/FAQ.md
@@ -1,5 +1,7 @@
# FAQ
+`Ascend` `GPU` `CPU` `环境准备` `模型导出` `模型训练` `初级` `中级` `高级`
+
- [FAQ](#faq)
@@ -16,6 +18,7 @@
- [特性支持](#特性支持)
+
## 安装类

diff --git a/docs/source_zh_cn/design/mindinsight/graph_visual_design.md b/docs/source_zh_cn/design/mindinsight/graph_visual_design.md
index b90f8d882ceccfc679c2f91d77fc8d2351cc9a0e..fe3e6d334cb57caef2d76f013dd150f9c1f39414 100644
--- a/docs/source_zh_cn/design/mindinsight/graph_visual_design.md
+++ b/docs/source_zh_cn/design/mindinsight/graph_visual_design.md
@@ -1,6 +1,6 @@
# 计算图可视设计
-`Ascend` `GPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者`
+`Ascend` `GPU` `CPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者`

diff --git a/docs/source_zh_cn/design/mindinsight/images/time_order_profiler.png b/docs/source_zh_cn/design/mindinsight/images/time_order_profiler.png
index 251daa59ae9bb785990bdd8680840896e87c1900..35eef99934ce9d743ebe0294e18ff0b5ea40abab 100644
Binary files a/docs/source_zh_cn/design/mindinsight/images/time_order_profiler.png and b/docs/source_zh_cn/design/mindinsight/images/time_order_profiler.png differ

diff --git a/docs/source_zh_cn/design/mindinsight/profiler_design.md b/docs/source_zh_cn/design/mindinsight/profiler_design.md
index cc13bf9eea14a67d40f2672f517e078d6764e526..8bfd00397831e4fc25bab87fd25af3b27acc28fe 100644
--- a/docs/source_zh_cn/design/mindinsight/profiler_design.md
+++ b/docs/source_zh_cn/design/mindinsight/profiler_design.md
@@ -1,5 +1,7 @@
# Profiler设计文档
+`Ascend` `GPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者`
+
- [Profiler设计文档](#profiler设计文档)

diff --git a/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md b/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md
index d84cb8ba7cd23c97dd2a5ca4398128f36b3105a5..eca40e518ca471120ad52ed0b78abb40ab4c00a6 100644
--- a/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md
+++
b/docs/source_zh_cn/design/mindinsight/tensor_visual_design.md @@ -1,6 +1,6 @@ # 张量可视设计 -`Ascend` `GPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者` +`Ascend` `GPU` `CPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者` diff --git a/docs/source_zh_cn/design/mindinsight/training_visual_design.md b/docs/source_zh_cn/design/mindinsight/training_visual_design.md index 1c86233723b7bd456efd5b7790279f828351d841..8dae35eef0244c8f66322912bf1464e53ade5965 100644 --- a/docs/source_zh_cn/design/mindinsight/training_visual_design.md +++ b/docs/source_zh_cn/design/mindinsight/training_visual_design.md @@ -1,6 +1,6 @@ # 训练可视总体设计 -`Ascend` `GPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者` +`Ascend` `GPU` `CPU` `模型开发` `模型调优` `框架开发` `中级` `高级` `贡献者` diff --git a/docs/source_zh_cn/design/mindspore/distributed_training_design.md b/docs/source_zh_cn/design/mindspore/distributed_training_design.md index ae38fdd6bc47fb2215bcdc931fa0d46c953f9af0..ab026a6526ac6fc4d4d113e102e24e0fa945eb68 100644 --- a/docs/source_zh_cn/design/mindspore/distributed_training_design.md +++ b/docs/source_zh_cn/design/mindspore/distributed_training_design.md @@ -29,7 +29,7 @@ ### 集合通信 -集合通信指在一组进程间通信,组内所有进程满足一定规则的发送和接收数据。MindSpore通过集合通信的方式进行并行训练过程中的数据传输工作,在Ascend芯片上它依赖于华为集合通信库HCCL完成。 +集合通信指在一组进程间通信,组内所有进程满足一定规则的发送和接收数据。MindSpore通过集合通信的方式进行并行训练过程中的数据传输工作,在Ascend芯片上它依赖于华为集合通信库`HCCL`完成,在GPU上它依赖于英伟达集合通信库`NCCL`完成。 ### 同步模式 @@ -41,11 +41,11 @@ ### 数据并行原理 -
数据并行图解
+![数据并行图解](./images/data_parallel.png) 1. 环境依赖 - 每次开始进行并行训练前,通过调用`mindspore.communication.init`接口初始化通信资源,并自动创建全局通信组`HCCL_WORLD_GROUP`。 + 每次开始进行并行训练前,通过调用`mindspore.communication.init`接口初始化通信资源,并自动创建全局通信组`WORLD_COMM_GROUP`。 2. 数据分发 @@ -77,28 +77,28 @@ ## 自动并行 -自动并行作为MindSpore的关键特性,用于实现自动的数据并行加模型并行的混合并行训练方式,旨在帮助用户以单机的脚本表达并行算法逻辑,降低分布式训练难度,提高算法研发效率,同时又能保持训练的高性能。 +自动并行作为MindSpore的关键特性,用于实现自动的数据并行加模型并行的混合并行训练方式,旨在帮助用户以单机的脚本表达并行算法逻辑,降低分布式训练难度,提高算法研发效率,同时又能保持训练的高性能。这个小节介绍了在MindSpore中`ParallelMode.AUTO_PARALLEL`自动并行模式及`ParallelMode.SEMI_AUTO_PARALLEL`半自动并行模式是如何工作的。 ### 自动并行原理 -自动并行架构图 +![自动并行图解](./images/auto_parallel_design.png) 1. 通用的张量排布模型 在上面的架构图中,自动并行流程会对单机的正向计算图(ANF Graph)进行遍历,以算子(Distributed Operator)为单位对张量进行切分建模,表示一个算子的输入输出张量如何分布到集群各个卡上(Tensor Layout)。这种模型充分地表达了张量和设备间的映射关系,并且可以通过算法推导得到任意排布的张量间通信转换方式(Tensor Redistribution)。 为了得到张量的排布模型,每个算子都具有切分策略(Parallel Strategy),它表示算子的各个输入在相应维度的切分情况。通常情况下只要满足以2为基、均匀分配的原则,张量的任意维度均可切分。以下图为例,这是一个三维矩阵乘操作,它的切分策略由两个元组构成,分别表示`input`和`weight`的切分形式。其中元组中的元素与张量维度一一对应,`2^N`为切分份数,`1`表示不切。当我们想表示一个数据并行切分策略时,即`input`的`batch`维度切分,其他维度不切,可以表达为`strategy=((2^N, 1, 1),(1, 1, 1))`;当表示一个模型并行切分策略时,即`weight`的`channel`维度切分,其他维度不切,可以表达为`strategy=((1, 1, 1),(1, 1, 2^N))`;当表示一个混合并行切分策略时,可以表达为`strategy=((2^N, 1, 1),(1, 1, 2^N))`。 - 算子切分定义 - + ![算子切分定义](./images/operator_split.png) + 依据算子的切分策略,框架将自动推导得到算子输入张量和输出张量的排布模型。这个排布模型由`device_matrix`,`tensor_shape`和`tensor map`组成,分别表示设备矩阵形状、张量形状、设备和张量维度间的映射关系。根据排布模型框架可以自动实现对整图的切分,并推导插入算子内张量重复计算及算子间不同排布的张量变换所需要的通信操作。以数据并行转模型并行为例,第一个数据并行矩阵乘的输出在`batch`维度存在切分,而第二个模型并行矩阵乘的输入需要全量张量,框架将会自动插入`AllGather`算子实现排布变换。 - 张量排布变换 + ![张量排布变换](./images/tensor_redistribution.png) 总体来说这种分布式表达打破了数据并行和模型并行的边界,轻松实现混合并行。并且用户无需感知模型各切片放到哪个设备上运行,框架会自动调度分配。从脚本层面上,用户仅需构造单机网络,即可表达并行算法逻辑。 2. 高效的并行策略搜索算法 - 当用户熟悉了算子的切分表达,并手动对算子配置切分策略,这就是`SEMI_AUTO_PARALLEL`半自动并行模式。这种方式对手动调优有帮助,但还是具有一定的调试难度,用户需要掌握并行原理,并根据网络结构、集群拓扑等计算分析得到高性能的并行方案。为了进一步帮助用户加速并行网络训练过程,在半自动并行模式的基础上,`AUTO_PARALLEL`自动并行模式引入了并行切分策略自动搜索的特性。自动并行围绕昇腾AI处理器构建代价函数模型(Cost Model),计算出一定数据量、一定算子在不同切分策略下的计算开销(Computation Cost),内存开销(Memory Cost)及通信开销(Communication Cost)。然后通过动态规划算法(Dynamic Programming),以单卡的内存上限为约束条件,高效地搜索出性能较优的切分策略。 + 当用户熟悉了算子的切分表达,并手动对算子配置切分策略,这就是`SEMI_AUTO_PARALLEL`半自动并行模式。这种方式对手动调优有帮助,但还是具有一定的调试难度,用户需要掌握并行原理,并根据网络结构、集群拓扑等计算分析得到高性能的并行方案。为了进一步帮助用户加速并行网络训练过程,在半自动并行模式的基础上,`AUTO_PARALLEL`自动并行模式引入了并行切分策略自动搜索的特性。自动并行围绕硬件平台构建相应的代价函数模型(Cost Model),计算出一定数据量、一定算子在不同切分策略下的计算开销(Computation Cost),内存开销(Memory Cost)及通信开销(Communication Cost)。然后通过动态规划算法(Dynamic Programming)或者递归规划算法(Recursive Programming),以单卡的内存上限为约束条件,高效地搜索出性能较优的切分策略。 策略搜索这一步骤代替了用户手动指定模型切分,在短时间内可以得到较高性能的切分方案,极大降低了并行训练的使用门槛。 diff --git a/docs/source_zh_cn/design/mindspore/images/auto_parallel.png b/docs/source_zh_cn/design/mindspore/images/auto_parallel.png deleted file mode 100644 index 544e65eee9b5a6ac984ff2315022135ce7cd4456..0000000000000000000000000000000000000000 Binary files a/docs/source_zh_cn/design/mindspore/images/auto_parallel.png and /dev/null differ diff --git a/docs/source_zh_cn/design/mindspore/images/auto_parallel_design.png b/docs/source_zh_cn/design/mindspore/images/auto_parallel_design.png new file mode 100644 index 0000000000000000000000000000000000000000..7f1006775ed0006208ae1086b8cd062b463237d6 Binary files /dev/null and b/docs/source_zh_cn/design/mindspore/images/auto_parallel_design.png differ diff --git a/docs/source_zh_cn/design/mindspore/images/operator_split.png b/docs/source_zh_cn/design/mindspore/images/operator_split.png index 1fe2fda44fc148c7443b5c6dd6f95a3a0a2a1e99..51dd93cd853cda7b2b284e4955bbc941f3c1b1dd 100644 Binary files 
a/docs/source_zh_cn/design/mindspore/images/operator_split.png and b/docs/source_zh_cn/design/mindspore/images/operator_split.png differ diff --git a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png b/docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png index 86b4630bb52146479ec4c0f766059d22db12bf10..ce4485c8cbf91c721c9dd19ab105a58cd3d18d22 100644 Binary files a/docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png and b/docs/source_zh_cn/design/mindspore/images/tensor_redistribution.png differ diff --git a/docs/source_zh_cn/network_list.md b/docs/source_zh_cn/network_list.md index 6e5954b63a4f07f43f47273607a92e8ed130fea1..3e220484c721e7c0cfbf5a2e59b9f59652e8334e 100644 --- a/docs/source_zh_cn/network_list.md +++ b/docs/source_zh_cn/network_list.md @@ -23,10 +23,12 @@ |计算机视觉(CV) | 图像分类(Image Classification) | [ResNet-101](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) | Supported |Doing | Doing |计算机视觉(CV) | 图像分类(Image Classification) | [ResNext50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnext50/src/image_classification.py) | Supported | Supported | Doing | 计算机视觉(CV) | 图像分类(Image Classification) | [VGG16](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/vgg16/src/vgg.py) | Supported | Doing | Doing +| 计算机视觉(CV) | 图像分类(Image Classification) | [InceptionV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/inceptionv3/src/inception_v3.py) | Supported | Supported | Doing | 计算机视觉(CV) | 移动端图像分类(Mobile Image Classification)
目标检测(Targets Detection)
语义分割(Semantic Segmentation) | [MobileNetV2](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv2/src/mobilenetV2.py) | Supported | Supported | Doing | 计算机视觉(CV) | 移动端图像分类(Mobile Image Classification)
目标检测(Targets Detection)
语义分割(Semantic Segmentation) | [MobileNetV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/mobilenetv3/src/mobilenetV3.py) | Doing | Supported | Doing
|计算机视觉(CV) | 目标检测(Targets Detection) | [SSD](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/ssd/src/ssd.py) | Supported |Doing | Doing
| 计算机视觉(CV) | 目标检测(Targets Detection) | [YoloV3-ResNet18](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_resnet18/src/yolov3.py) | Supported | Doing | Doing
+| 计算机视觉(CV) | 目标检测(Targets Detection) | [YoloV3-DarkNet53](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/yolov3_darknet53/src/yolo.py) | Supported | Doing | Doing
| 计算机视觉(CV) | 目标检测(Targets Detection) | [FasterRCNN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/faster_rcnn/src/FasterRcnn/faster_rcnn_r50.py) | Supported | Doing | Doing
| 计算机视觉(CV) | 语义分割(Semantic Segmentation) | [DeeplabV3](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/deeplabv3/src/deeplabv3.py) | Supported | Doing | Doing
| 计算机视觉(CV) | 目标检测(Targets Detection) | [WarpCTC](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/warpctc/src/warpctc.py) | Doing | Supported | Doing
@@ -34,8 +36,8 @@
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [Transformer](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/transformer/src/transformer_model.py) | Supported | Doing | Doing
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [SentimentNet](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/lstm/src/lstm.py) | Doing | Supported | Supported
| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [MASS](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/mass/src/transformer/transformer_for_train.py) | Supported | Doing | Doing
-| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Doing | Doing
-| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Doing | Doing
+| 自然语言处理(NLP) | 自然语言理解(Natural Language Understanding) | [TinyBert](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/nlp/tinybert/src/tinybert_model.py) | Supported | Supported | Doing
+| 推荐(Recommender) | 推荐系统、点击率预估(Recommender System, CTR prediction) | [DeepFM](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/deepfm/src/deepfm.py) | Supported | Supported | Doing
| 推荐(Recommender) | 推荐系统、搜索、排序(Recommender System, Search ranking) | [Wide&Deep](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/recommend/wide_and_deep/src/wide_and_deep.py) | Supported | Supported | Doing
| 图神经网络(GNN) | 文本分类(Text Classification) | [GCN](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gcn/src/gcn.py) | Supported | Doing | Doing
| 图神经网络(GNN) | 文本分类(Text Classification) | [GAT](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/gnn/gat/src/gat.py) | Supported | Doing | Doing
diff --git a/docs/source_zh_cn/operator_list.md b/docs/source_zh_cn/operator_list.md
index 6244a7101f59c9705ec876ff3271bc7774026c14..db8e29e3935556c95e4aabc477cc297d2561c8d0 100644
--- a/docs/source_zh_cn/operator_list.md
+++
b/docs/source_zh_cn/operator_list.md @@ -102,7 +102,7 @@ | [mindspore.ops.operations.Acosh](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Acosh) | Doing | Doing | Doing | nn_ops | [mindspore.ops.operations.FloorMod](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.FloorMod) | Supported | Doing | Doing | nn_ops | [mindspore.ops.operations.Elu](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Elu) | Supported | Doing | Doing | nn_ops -| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Doing | Doing | Doing | nn_ops +| [mindspore.ops.operations.MirrorPad](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.MirrorPad) | Supported | Supported | Doing | nn_ops | [mindspore.ops.operations.Unpack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Unpack) | Supported | Doing | Doing | nn_ops | [mindspore.ops.operations.Pack](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pack) | Supported| Doing | Doing | nn_ops | [mindspore.ops.operations.L2Loss](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.L2Loss) | Supported | Doing | Doing | nn_ops @@ -184,8 +184,8 @@ | [mindspore.ops.operations.Mul](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Mul) | Supported | Supported | Supported | math_ops | [mindspore.ops.operations.Square](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Square) | Supported | Supported | Doing | math_ops | [mindspore.ops.operations.SquareSumAll](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.SquareSumAll) | Supported | Doing | Doing | math_ops -| [mindspore.ops.operations.Rsqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rsqrt) | Supported | Supported | Doing | math_ops -| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | Supported | Supported | Doing | math_ops +| [mindspore.ops.operations.Rsqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Rsqrt) | Supported | Doing | Doing | math_ops +| [mindspore.ops.operations.Sqrt](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Sqrt) | Supported | Doing | Doing | math_ops | [mindspore.ops.operations.Reciprocal](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Reciprocal) | Supported | Supported | Doing | math_ops | [mindspore.ops.operations.Pow](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Pow) | Supported | Supported | Doing | math_ops | 
[mindspore.ops.operations.Exp](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html#mindspore.ops.operations.Exp) | Supported | Supported | Doing | math_ops diff --git a/docs/source_zh_cn/roadmap.md b/docs/source_zh_cn/roadmap.md index a985100c4d335268130a07263065889bf6cdf7bb..1dfd7945b1079c44d1d312414d8a0f2368fd7e87 100644 --- a/docs/source_zh_cn/roadmap.md +++ b/docs/source_zh_cn/roadmap.md @@ -70,11 +70,14 @@ * 保护训练和推理过程中的数据隐私 ## 推理框架 -* 提供Tensorflow/Caffe/ONNX模型格式支持 -* IOS系统支持 -* 完善更多的CPU算子 -* 更多CV/NLP模型支持 -* 在线学习 -* 支持部署在IOT设备 -* 低比特量化 -* CPU和NPU异构调度 +* 算子性能与完备度的持续优化 +* 支持语音模型推理 +* 端侧模型的可视化 +* Micro方案,适用于嵌入式系统的超轻量化推理, 支持ARM Cortex-A、Cortex-M硬件 +* 支持端侧重训及联邦学习 +* 端侧自动并行特性 +* 端侧MindData,包含图片Resize、像素数据转换等功能 +* 配套MindSpore混合精度量化训练(或训练后量化),实现混合精度推理,提升推理性能 +* 支持Kirin NPU、MTK APU等AI加速硬件 +* 支持多模型推理pipeline +* C++构图接口 diff --git a/install/mindspore_cpu_install.md b/install/mindspore_cpu_install.md index e31b2a72edd0edfc3a4879fc958d6267adc19651..721cad66b18b2db7b7fdc312765fc60c4a5db594 100644 --- a/install/mindspore_cpu_install.md +++ b/install/mindspore_cpu_install.md @@ -97,7 +97,7 @@ | 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 | | ---------------------- | :------------------ | :----------------------------------------------------------- | :----------------------- | -| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 | +| MindArmour master | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 | - 在联网状态下,安装whl包时会自动下载`setup.py`中的依赖项,其余情况需自行安装。 diff --git a/install/mindspore_cpu_install_en.md b/install/mindspore_cpu_install_en.md index 3da8d9eff691e60b198680f084259888b8dffd20..d21f05bdeb3d451ff36588a8346a9753bdf831d6 100644 --- a/install/mindspore_cpu_install_en.md +++ b/install/mindspore_cpu_install_en.md @@ -97,7 +97,7 @@ If you need to conduct AI model security research or enhance the security of the | Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies | | ---- | :--- | :--- | :--- | -| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. | +| MindArmour master | - Ubuntu 18.04 x86_64
- Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5
- MindSpore master
- For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. | - When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items. diff --git a/install/mindspore_d_install.md b/install/mindspore_d_install.md index f355690cddd86d88ba89a4b8b417f14917acdd05..a0d6eb70a9f6d7f7c4452a593695e95c024c135f 100644 --- a/install/mindspore_d_install.md +++ b/install/mindspore_d_install.md @@ -32,7 +32,7 @@ | 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 | | ---- | :--- | :--- | :--- | -| MindSpore master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- CentOS 7.6 aarch64
- CentOS 7.6 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**安装依赖:**
与可执行文件安装依赖相同 | +| MindSpore master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt) | **编译依赖:**
- [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**安装依赖:**
与可执行文件安装依赖相同 | - 确认当前用户有权限访问Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。 - GCC 7.3.0可以直接通过apt命令安装。 diff --git a/install/mindspore_d_install_en.md b/install/mindspore_d_install_en.md index 4eb1e3ae067791ac98787314d78af52cfc1999f6..827f23bb76e748d6304eabf3444d7974956bc0a1 100644 --- a/install/mindspore_d_install_en.md +++ b/install/mindspore_d_install_en.md @@ -32,7 +32,7 @@ This document describes how to quickly install MindSpore in an Ascend AI process | Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies | | ---- | :--- | :--- | :--- | -| MindSpore master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- CentOS 7.6 aarch64
- CentOS 7.6 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**Installation dependencies:**
same as the executable file installation dependencies. | +| MindSpore master | - Ubuntu 18.04 aarch64
- Ubuntu 18.04 x86_64
- EulerOS 2.8 aarch64
- EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI processor software package (Version: [Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
- For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt). | **Compilation dependencies:**
- [Python](https://www.python.org/downloads/) 3.7.5
- Ascend 910 AI processor software package (Version: [Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))
- [wheel](https://pypi.org/project/wheel/) >= 0.32.0
- [GCC](https://gcc.gnu.org/releases.html) 7.3.0
- [CMake](https://cmake.org/download/) >= 3.14.1
- [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5
- [gmp](https://gmplib.org/download/gmp/) 6.1.2
**Installation dependencies:**
same as the executable file installation dependencies. |

- Confirm that the current user has the right to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package (Version: [Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)). If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, please refer to the software package instruction document.
- GCC 7.3.0 can be installed by using the apt command.

diff --git a/lite/docs/source_en/architecture.md b/lite/docs/source_en/architecture.md
index 5878f5b22076b4d53e68c63697f9cfe4c56a7ff6..64585775720d39c365190d9f8f24c82931cf24e3 100644
--- a/lite/docs/source_en/architecture.md
+++ b/lite/docs/source_en/architecture.md
@@ -1,3 +1,19 @@
-# Overall Architecture
+# Overall Architecture

- 
+ 
+
+The overall architecture of MindSpore Lite is as follows:
+
+![architecture](./images/MindSpore-Lite-architecture.png)
+
+- **Frontend:** generates models. You can use the model building API to build models and convert third-party models and models trained by MindSpore to MindSpore Lite models. Third-party models include TensorFlow Lite, Caffe 1.0, and ONNX models.
+
+- **IR:** defines the tensors, operators, and graphs of MindSpore.
+
+- **Backend:** optimizes graphs based on IR, including graph high level optimization (GHLO), graph low level optimization (GLLO), and quantization. GHLO is responsible for hardware-independent optimization, such as operator fusion and constant folding. GLLO is responsible for hardware-related optimization. Quantizer supports quantization methods after training, such as weight quantization and activation value quantization.
+
+- **Runtime:** inference runtime of intelligent devices. Sessions are responsible for session management and provide external APIs. The thread pool and parallel primitives are responsible for managing the thread pool used for graph execution. Memory allocation is responsible for memory overcommitment of each operator during graph execution. The operator library provides the CPU and GPU operators.
+
+- **Micro:** runtime of IoT devices, including the model generation .c file, thread pool, memory overcommitment, and operator library.
+
+Runtime and Micro share the underlying infrastructure layers, such as the operator library, memory allocation, thread pool, and parallel primitives.

diff --git a/lite/docs/source_en/images/MindSpore-Lite-architecture.png b/lite/docs/source_en/images/MindSpore-Lite-architecture.png
new file mode 100644
index 0000000000000000000000000000000000000000..abf28796690f5649f8bc92382dfd4c2c83187620
Binary files /dev/null and b/lite/docs/source_en/images/MindSpore-Lite-architecture.png differ

diff --git a/lite/docs/source_en/operator_list.md b/lite/docs/source_en/operator_list.md
index 1c7383aad2f3457a31b7c89de4efbfbfce1d2d73..6038b5c95690dd4b30378a7101d828ef9d0cda90 100644
--- a/lite/docs/source_en/operator_list.md
+++ b/lite/docs/source_en/operator_list.md
@@ -4,121 +4,108 @@

> √ The checked items are the operators supported by MindSpore Lite.

-| Operation | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | Operator category | Tensorflow
Lite op supported | Caffe
Lite op supported | Onnx
Lite op supported | -|-----------------------|----------|----------|----------|-----------|----------|----------|------------------|----------|----------|----------| -| Abs | | √ | √ | √ | | | math_ops | Abs | | Abs | -| Add | | | | | | √ | | Add | | Add | -| AddN | | √ | | | | | math_ops | AddN | | | -| Argmax | | √ | √ | √ | | | array_ops | Argmax | ArgMax | ArgMax | -| Argmin | | √ | | | | | array_ops | Argmin | | | -| Asin | | | | | | | | | | Asin | -| Atan | | | | | | | | | | Atan | -| AvgPool | | √ | √ | √ | | √ | nn_ops | MeanPooling | Pooling | AveragePool | -| BatchMatMul | √ | √ | √ | √ | | | math_ops | | | | -| BatchNorm | | √ | | | | √ | nn_ops | | BatchNorm | BatchNormalization | -| BatchToSpace | | | | | | | array_ops | BatchToSpace, BatchToSpaceND | | | -| BatchToSpaceND | | | | | | | | | | | -| BiasAdd | | √ | | √ | | √ | nn_ops | | | BiasAdd | -| Broadcast | | √ | | | | | comm_ops | BroadcastTo | | Expand | -| Cast | | √ | | | | | array_ops | Cast, DEQUANTIZE* | | Cast | -| Ceil | | √ | | √ | | | math_ops | Ceil | | Ceil | -| Concat | | √ | √ | √ | | √ | array_ops | Concat | Concat | Concat | -| Constant | | | | | | | | | | Constant | -| Conv1dTranspose | | | | √ | | | layer/conv | | | | -| Conv2d | √ | √ | √ | √ | | √ | layer/conv | Conv2D | Convolution | Conv | -| Conv2dTranspose | | √ | √ | √ | | √ | layer/conv | DeConv2D | Deconvolution | ConvTranspose | -| Cos | | √ | √ | √ | | | math_ops | Cos | | Cos | -| Crop | | | | | | | | | Crop | | -| DeDepthwiseConv2D | | | | | | | | | Deconvolution| ConvTranspose | -| DepthToSpace | | | | | | | | DepthToSpace | | DepthToSpace | -| DepthwiseConv2dNative | √ | √ | √ | √ | | √ | nn_ops | DepthwiseConv2D | Convolution | Convolution | -| Div | | √ | √ | √ | | √ | math_ops | Div | | Div | -| Dropout | | | | | | | | | | Dropout | -| Eltwise | | | | | | | | | Eltwise | | -| Elu | | | | | | | | Elu | | Elu | -| Equal | | √ | √ | √ | | | math_ops | Equal | | Equal | -| Exp | | √ | | | | | math_ops | Exp | | Exp | -| ExpandDims | | √ | | | | | array_ops | | | | -| Fill | | √ | | | | | array_ops | Fill | | | -| Flatten | | | | | | | | | Flatten | | -| Floor | | √ | √ | √ | | | math_ops | flOOR | | Floor | -| FloorDiv | | √ | | | | | math_ops | FloorDiv | | | -| FloorMod | | √ | | | | | nn_ops | FloorMod | | | -| FullConnection | | √ | | | | | layer/basic | FullyConnected | InnerProduct | | -| GatherNd | | √ | | | | | array_ops | GatherND | | | -| GatherV2 | | √ | | | | | array_ops | Gather | | Gather | -| Greater | | √ | √ | √ | | | math_ops | Greater | | Greater | -| GreaterEqual | | √ | √ | √ | | | math_ops | GreaterEqual | | | -| Hswish | | | | | | | | HardSwish | | | -| L2norm | | | | | | | | L2_NORMALIZATION | | | -| LeakyReLU | | √ | | | | √ | layer/activation | LeakyRelu | | LeakyRelu | -| Less | | √ | √ | √ | | | math_ops | Less | | Less | -| LessEqual | | √ | √ | √ | | | math_ops | LessEqual | | | -| LocalResponseNorm | | | | | | | | LocalResponseNorm | | Lrn | -| Log | | √ | √ | √ | | | math_ops | Log | | Log | -| LogicalAnd | | √ | | | | | math_ops | LogicalAnd | | | -| LogicalNot | | √ | √ | √ | | | math_ops | LogicalNot | | | -| LogicalOr | | √ | | | | | math_ops | LogicalOr | | | -| LSTM | | √ | | | | | layer/lstm | | | | -| MatMul | √ | √ | √ | √ | | √ | math_ops | | | MatMul | -| Maximum | | | | | | | math_ops | Maximum | | Max | -| MaxPool | | √ | √ | √ | | √ | nn_ops | MaxPooling | Pooling | MaxPool | -| Minimum | | | | | | | math_ops | Minimum | | Min | -| Mul | | √ | √ | √ | | √ | math_ops | Mul | | Mul | -| Neg | | | 
| | | | math_ops | | | Neg | -| NotEqual | | √ | √ | √ | | | math_ops | NotEqual | | | -| OneHot | | √ | | | | | layer/basic | OneHot | | | -| Pack | | √ | | | | | nn_ops | | | | -| Pad | | √ | √ | √ | | | nn_ops | Pad | | Pad | -| Pow | | √ | √ | √ | | | math_ops | Pow | Power | Power | -| PReLU | | √ | √ | √ | | √ | layer/activation | Prelu | PReLU | PRelu | -| Range | | √ | | | | | layer/basic | Range | | | -| Rank | | √ | | | | | array_ops | Rank | | | -| RealDiv | | √ | √ | √ | | √ | math_ops | RealDiv | | | -| ReduceMax | | √ | √ | √ | | | math_ops | ReduceMax | | ReduceMax | -| ReduceMean | | √ | √ | √ | | | math_ops | Mean | | ReduceMean | -| ReduceMin | | √ | √ | √ | | | math_ops | ReduceMin | | ReduceMin | -| ReduceProd | | √ | √ | √ | | | math_ops | ReduceProd | | | -| ReduceSum | | √ | √ | √ | | | math_ops | Sum | | ReduceSum | -| ReLU | | √ | √ | √ | | √ | layer/activation | Relu | ReLU | Relu | -| ReLU6 | | √ | | | | √ | layer/activation | Relu6 | ReLU6 | Clip* | -| Reshape | | √ | √ | √ | | √ | array_ops | Reshape | Reshape | Reshape,Flatten | -| Resize | | | | | | | | ResizeBilinear, NearestNeighbor | Interp | | -| Reverse | | | | | | | | reverse | | | -| ReverseSequence | | √ | | | | | array_ops | ReverseSequence | | | -| Round | | √ | | √ | | | math_ops | Round | | | -| Rsqrt | | √ | √ | √ | | | math_ops | Rsqrt | | | -| Scale | | | | | | | | | Scale | | -| ScatterNd | | √ | | | | | array_ops | ScatterNd | | | -| Shape | | √ | | √ | | | array_ops | Shape | | Shape | -| Sigmoid | | √ | √ | √ | | √ | nn_ops | Logistic | Sigmoid | Sigmoid | -| Sin | | | | | | | | Sin | | Sin | -| Slice | | √ | √ | √ | | √ | array_ops | Slice | | Slice | -| Softmax | | √ | √ | √ | | √ | layer/activation | Softmax | Softmax | Softmax | -| SpaceToBatchND | | √ | | | | | array_ops | SpaceToBatchND | | | -| SpareToDense | | | | | | | | SpareToDense | | | -| SpaceToDepth | | √ | | | | | array_ops | SpaceToDepth | | SpaceToDepth | -| Split | | √ | √ | √ | | | array_ops | Split, SplitV | | | -| Sqrt | | √ | √ | √ | | | math_ops | Sqrt | | Sqrt | -| Square | | √ | √ | √ | | | math_ops | Square | | | -| SquaredDifference | | | | | | | | SquaredDifference | | | -| Squeeze | | √ | √ | √ | | | array_ops | Squeeze | | Squeeze | -| StridedSlice | | √ | √ | √ | | | array_ops | StridedSlice | | | -| Stack | | | | | | | | Stack | | | -| Sub | | √ | √ | √ | | √ | math_ops | Sub | | Sub | -| Tan | | | | | | | | | | Tan | -| Tanh | | √ | | | | | layer/activation | Tanh | TanH | | -| TensorAdd | | √ | √ | √ | | √ | math_ops | | | | -| Tile | | √ | | | | | array_ops | Tile | | Tile | -| TopK | | √ | √ | √ | | | nn_ops | TopKV2 | | | -| Transpose | | √ | √ | √ | | √ | array_ops | Transpose | Permute | Transpose | -| Unique | | | | | | | | Unique | | | -| Unpack | | √ | | | | | nn_ops | | | | -| Unsample | | | | | | | | | | Unsample | -| Unsqueeze | | | | | | | | | | Unsqueeze | -| Unstack | | | | | | | | Unstack | | | -| Where | | | | | | | | Where | | | -| ZerosLike | | √ | | | | | array_ops | ZerosLike | | | +| Operation | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | Tensorflow
Lite op supported | Caffe
Lite op supported | Onnx
Lite op supported | +|-----------------------|----------|----------|-----------|----------|----------|------------------|----------|----------|----------| +| Abs | | √ | √ | √ | | | Abs | | Abs | +| Add | √ | √ | √ | √ | | √ | Add | | Add | +| AddN | | √ | | | | | AddN | | | +| Argmax | | √ | √ | √ | | | Argmax | ArgMax | ArgMax | +| Argmin | | √ | √ | √ | | | Argmin | | | +| AvgPool | √ | √ | √ | √ | | √ | MeanPooling| Pooling | AveragePool | +| BatchNorm | √ | √ | √ | √ | | √ | | BatchNorm | BatchNormalization | +| BatchToSpace | | √ | √ | √ | | | BatchToSpace, BatchToSpaceND | | | +| BiasAdd | | √ | √ | √ | | √ | | | BiasAdd | +| Broadcast | | √ | | | | | BroadcastTo | | Expand | +| Cast | √ | √ | | √ | | | Cast, DEQUANTIZE* | | Cast | +| Ceil | | √ | √ | √ | | | Ceil | | Ceil | +| Concat | √ | √ | √ | √ | √ | √ | Concat | Concat | Concat | +| Conv2d | √ | √ | √ | √ | √ | √ | Conv2D | Convolution | Conv | +| Conv2dTranspose | √ | √ | √ | √ | √ | √ | DeConv2D | Deconvolution | ConvTranspose | +| Cos | | √ | √ | √ | | | Cos | | Cos | +| Crop | | √ | √ | √ | | | | Crop | | +| DeDepthwiseConv2D | | √ | √ | √ | | | | Deconvolution| ConvTranspose | +| DepthToSpace | | √ | √ | √ | | | DepthToSpace| | DepthToSpace | +| DepthwiseConv2dNative | √ | √ | √ | √ | √ | √ | DepthwiseConv2D | Convolution | Convolution | +| Div | √ | √ | √ | √ | | √ | Div, RealDiv | | Div | +| Eltwise | √ | √ | | | | | | Eltwise | | +| Elu | | √ | | | | | Elu | | Elu | +| Equal | √ | √ | √ | √ | | | Equal | | Equal | +| Exp | | √ | | | | | Exp | | Exp | +| ExpandDims | | √ | | | | | | | | +| Fill | | √ | | | | | Fill | | | +| Flatten | | √ | | | | | | Flatten | | +| Floor | | √ | √ | √ | | | flOOR | | Floor | +| FloorDiv | √ | √ | | | | | FloorDiv | | | +| FloorMod | √ | √ | | | | | FloorMod | | | +| FullConnection | | √ | √ | √ | | | FullyConnected | InnerProduct | | +| GatherNd | | √ | √ | √ | | | GatherND | | | +| GatherV2 | | √ | √ | √ | | | Gather | | Gather | +| Greater | √ | √ | √ | √ | | | Greater | | Greater | +| GreaterEqual | √ | √ | √ | √ | | | GreaterEqual| | | +| Hswish | √ | √ | √ | √ | | | HardSwish | | | +| LeakyReLU | √ | √ | | | | √ | LeakyRelu | | LeakyRelu | +| Less | √ | √ | √ | √ | | | Less | | Less | +| LessEqual | √ | √ | √ | √ | | | LessEqual | | | +| LRN | | √ | | | | | LocalResponseNorm | | Lrn | +| Log | | √ | √ | √ | | | Log | | Log | +| LogicalAnd | √ | √ | | | | | LogicalAnd | | | +| LogicalNot | | √ | √ | √ | | | LogicalNot | | | +| LogicalOr | √ | √ | | | | | LogicalOr | | | +| LSTM | | √ | | | | | | | | +| MatMul | | √ | √ | √ | √ | √ | | | MatMul | +| Maximum | √ | √ | | | | | Maximum | | Max | +| MaxPool | √ | √ | √ | √ | | √ | MaxPooling | Pooling | MaxPool | +| Minimum | √ | √ | | | | | Minimum | | Min | +| Mul | √ | √ | √ | √ | | √ | Mul | | Mul | +| NotEqual | √ | √ | √ | √ | | | NotEqual | | | +| OneHot | | √ | | | | | OneHot | | | +| Pad | | √ | √ | √ | | | Pad | | Pad | +| Pow | | √ | √ | √ | | | Pow | Power | Power | +| PReLU | | √ | | | | √ | | PReLU | | +| Range | | √ | | | | | Range | | | +| Rank | | √ | | | | | Rank | | | +| ReduceMax | √ | √ | √ | √ | | | ReduceMax | | ReduceMax | +| ReduceMean | √ | √ | √ | √ | | | Mean | | ReduceMean | +| ReduceMin | √ | √ | √ | √ | | | ReduceMin | | ReduceMin | +| ReduceProd | √ | √ | √ | √ | | | ReduceProd | | | +| ReduceSum | √ | √ | √ | √ | | | Sum | | ReduceSum | +| ReduceSumSquare | √ | √ | √ | √ | | | | | | +| ReLU | √ | √ | √ | √ | | √ | Relu | ReLU | Relu | +| ReLU6 | √ | √ | √ | √ | | √ | Relu6 | ReLU6 | Clip* | +| Reshape | √ 
| √ | √ | √ | | √ | Reshape | Reshape | Reshape,Flatten |
| Resize | | √ | √ | √ | | | ResizeBilinear, NearestNeighbor | Interp | |
| Reverse | | √ | | | | | reverse | | |
| ReverseSequence | | √ | | | | | ReverseSequence | | |
| Round | | √ | √ | √ | | | Round | | |
| Rsqrt | | √ | √ | √ | | | Rsqrt | | |
| Scale | | √ | | | | | | Scale | |
| ScatterNd | | √ | | | | | ScatterNd | | |
| Shape | | √ | | | | | Shape | | Shape |
| Sigmoid | √ | √ | √ | √ | | √ | Logistic | Sigmoid | Sigmoid |
| Sin | | √ | √ | √ | | | Sin | | Sin |
| Slice | | √ | √ | √ | √ | √ | Slice | | Slice |
| Softmax | √ | √ | √ | √ | | √ | Softmax | Softmax | Softmax |
| SpaceToBatch | | √ | | | | | | | |
| SpaceToBatchND | | √ | | | | | SpaceToBatchND | | |
| SpaceToDepth | | √ | | | | | SpaceToDepth | | SpaceToDepth |
| SparseToDense | | √ | | | | | SparseToDense | | |
| Split | √ | √ | √ | √ | | | Split, SplitV | | |
| Sqrt | | √ | √ | √ | | | Sqrt | | Sqrt |
| Square | | √ | √ | √ | | | Square | | |
| SquaredDifference | | √ | | | | | SquaredDifference | | |
| Squeeze | | √ | √ | √ | | | Squeeze | | Squeeze |
| StridedSlice | | √ | √ | √ | | | StridedSlice | | |
| Stack | | √ | | | | | Stack | | |
| Sub | √ | √ | √ | √ | | √ | Sub | | Sub |
| Tanh | √ | √ | | | | | Tanh | TanH | |
| Tile | | √ | | | | | Tile | | Tile |
| TopK | | √ | √ | √ | | | TopKV2 | | |
| Transpose | √ | √ | | | | √ | Transpose | Permute | Transpose |
| Unique | | √ | | | | | Unique | | |
| Unsqueeze | | √ | √ | √ | | | | | Unsqueeze |
| Unstack | | √ | | | | | Unstack | | |
| Where | | √ | | | | | Where | | |
| ZerosLike | | √ | | | | | ZerosLike | | |

* Clip: only supports converting clip(0, 6) to Relu6.
* DEQUANTIZE: only supports converting fp16 to fp32.

diff --git a/lite/docs/source_zh_cn/index.rst b/lite/docs/source_zh_cn/index.rst
index 08270a72e46b955616944d50149024f3765bf318..28634fb696c434f747949830a37d3efb2b0436e4 100644
--- a/lite/docs/source_zh_cn/index.rst
+++ b/lite/docs/source_zh_cn/index.rst
@@ -11,6 +11,5 @@ MindSpore端侧文档
   :maxdepth: 1

   architecture
-   roadmap
   operator_list
   glossary
\ No newline at end of file

diff --git a/lite/docs/source_zh_cn/operator_list.md b/lite/docs/source_zh_cn/operator_list.md
index 0864989bc46b9182b5b90bb624021a1e756f0647..3384d8baf91b1af92ff4758816790af7b6e241bc 100644
--- a/lite/docs/source_zh_cn/operator_list.md
+++ b/lite/docs/source_zh_cn/operator_list.md
@@ -4,121 +4,108 @@

> √勾选的项为MindSpore Lite所支持的算子。

-| 操作名 | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | 算子类别 | 支持的Tensorflow
Lite op | 支持的Caffe
Lite op | 支持的Onnx
Lite op | -|-----------------------|----------|----------|----------|-----------|----------|----------|------------------|----------|----------|----------| -| Abs | | √ | √ | √ | | | math_ops | Abs | | Abs | -| Add | | | | | | √ | | Add | | Add | -| AddN | | √ | | | | | math_ops | AddN | | | -| Argmax | | √ | √ | √ | | | array_ops | Argmax | ArgMax | ArgMax | -| Argmin | | √ | | | | | array_ops | Argmin | | | -| Asin | | | | | | | | | | Asin | -| Atan | | | | | | | | | | Atan | -| AvgPool | | √ | √ | √ | | √ | nn_ops | MeanPooling | Pooling | AveragePool | -| BatchMatMul | √ | √ | √ | √ | | | math_ops | | | | -| BatchNorm | | √ | | | | √ | nn_ops | | BatchNorm | BatchNormalization | -| BatchToSpace | | | | | | | array_ops | BatchToSpace, BatchToSpaceND | | | -| BatchToSpaceND | | | | | | | | | | | -| BiasAdd | | √ | | √ | | √ | nn_ops | | | BiasAdd | -| Broadcast | | √ | | | | | comm_ops | BroadcastTo | | Expand | -| Cast | | √ | | | | | array_ops | Cast, DEQUANTIZE* | | Cast | -| Ceil | | √ | | √ | | | math_ops | Ceil | | Ceil | -| Concat | | √ | √ | √ | | √ | array_ops | Concat | Concat | Concat | -| Constant | | | | | | | | | | Constant | -| Conv1dTranspose | | | | √ | | | layer/conv | | | | -| Conv2d | √ | √ | √ | √ | | √ | layer/conv | Conv2D | Convolution | Conv | -| Conv2dTranspose | | √ | √ | √ | | √ | layer/conv | DeConv2D | Deconvolution | ConvTranspose | -| Cos | | √ | √ | √ | | | math_ops | Cos | | Cos | -| Crop | | | | | | | | | Crop | | -| DeDepthwiseConv2D | | | | | | | | | Deconvolution| ConvTranspose | -| DepthToSpace | | | | | | | | DepthToSpace | | DepthToSpace | -| DepthwiseConv2dNative | √ | √ | √ | √ | | √ | nn_ops | DepthwiseConv2D | Convolution | Convolution | -| Div | | √ | √ | √ | | √ | math_ops | Div | | Div | -| Dropout | | | | | | | | | | Dropout | -| Eltwise | | | | | | | | | Eltwise | | -| Elu | | | | | | | | Elu | | Elu | -| Equal | | √ | √ | √ | | | math_ops | Equal | | Equal | -| Exp | | √ | | | | | math_ops | Exp | | Exp | -| ExpandDims | | √ | | | | | array_ops | | | | -| Fill | | √ | | | | | array_ops | Fill | | | -| Flatten | | | | | | | | | Flatten | | -| Floor | | √ | √ | √ | | | math_ops | flOOR | | Floor | -| FloorDiv | | √ | | | | | math_ops | FloorDiv | | | -| FloorMod | | √ | | | | | nn_ops | FloorMod | | | -| FullConnection | | √ | | | | | layer/basic | FullyConnected | InnerProduct | | -| GatherNd | | √ | | | | | array_ops | GatherND | | | -| GatherV2 | | √ | | | | | array_ops | Gather | | Gather | -| Greater | | √ | √ | √ | | | math_ops | Greater | | Greater | -| GreaterEqual | | √ | √ | √ | | | math_ops | GreaterEqual | | | -| Hswish | | | | | | | | HardSwish | | | -| L2norm | | | | | | | | L2_NORMALIZATION | | | -| LeakyReLU | | √ | | | | √ | layer/activation | LeakyRelu | | LeakyRelu | -| Less | | √ | √ | √ | | | math_ops | Less | | Less | -| LessEqual | | √ | √ | √ | | | math_ops | LessEqual | | | -| LocalResponseNorm | | | | | | | | LocalResponseNorm | | Lrn | -| Log | | √ | √ | √ | | | math_ops | Log | | Log | -| LogicalAnd | | √ | | | | | math_ops | LogicalAnd | | | -| LogicalNot | | √ | √ | √ | | | math_ops | LogicalNot | | | -| LogicalOr | | √ | | | | | math_ops | LogicalOr | | | -| LSTM | | √ | | | | | layer/lstm | | | | -| MatMul | √ | √ | √ | √ | | √ | math_ops | | | MatMul | -| Maximum | | | | | | | math_ops | Maximum | | Max | -| MaxPool | | √ | √ | √ | | √ | nn_ops | MaxPooling | Pooling | MaxPool | -| Minimum | | | | | | | math_ops | Minimum | | Min | -| Mul | | √ | √ | √ | | √ | math_ops | Mul | | Mul | -| Neg | | | | | | | 
math_ops | | | Neg | -| NotEqual | | √ | √ | √ | | | math_ops | NotEqual | | | -| OneHot | | √ | | | | | layer/basic | OneHot | | | -| Pack | | √ | | | | | nn_ops | | | | -| Pad | | √ | √ | √ | | | nn_ops | Pad | | Pad | -| Pow | | √ | √ | √ | | | math_ops | Pow | Power | Power | -| PReLU | | √ | √ | √ | | √ | layer/activation | Prelu | PReLU | PRelu | -| Range | | √ | | | | | layer/basic | Range | | | -| Rank | | √ | | | | | array_ops | Rank | | | -| RealDiv | | √ | √ | √ | | √ | math_ops | RealDiv | | | -| ReduceMax | | √ | √ | √ | | | math_ops | ReduceMax | | ReduceMax | -| ReduceMean | | √ | √ | √ | | | math_ops | Mean | | ReduceMean | -| ReduceMin | | √ | √ | √ | | | math_ops | ReduceMin | | ReduceMin | -| ReduceProd | | √ | √ | √ | | | math_ops | ReduceProd | | | -| ReduceSum | | √ | √ | √ | | | math_ops | Sum | | ReduceSum | -| ReLU | | √ | √ | √ | | √ | layer/activation | Relu | ReLU | Relu | -| ReLU6 | | √ | | | | √ | layer/activation | Relu6 | ReLU6 | Clip* | -| Reshape | | √ | √ | √ | | √ | array_ops | Reshape | Reshape | Reshape,Flatten | -| Resize | | | | | | | | ResizeBilinear, NearestNeighbor | Interp | | -| Reverse | | | | | | | | reverse | | | -| ReverseSequence | | √ | | | | | array_ops | ReverseSequence | | | -| Round | | √ | | √ | | | math_ops | Round | | | -| Rsqrt | | √ | √ | √ | | | math_ops | Rsqrt | | | -| Scale | | | | | | | | | Scale | | -| ScatterNd | | √ | | | | | array_ops | ScatterNd | | | -| Shape | | √ | | √ | | | array_ops | Shape | | Shape | -| Sigmoid | | √ | √ | √ | | √ | nn_ops | Logistic | Sigmoid | Sigmoid | -| Sin | | | | | | | | Sin | | Sin | -| Slice | | √ | √ | √ | | √ | array_ops | Slice | | Slice | -| Softmax | | √ | √ | √ | | √ | layer/activation | Softmax | Softmax | Softmax | -| SpaceToBatchND | | √ | | | | | array_ops | SpaceToBatchND | | | -| SpareToDense | | | | | | | | SpareToDense | | | -| SpaceToDepth | | √ | | | | | array_ops | SpaceToDepth | | SpaceToDepth | -| Split | | √ | √ | √ | | | array_ops | Split, SplitV | | | -| Sqrt | | √ | √ | √ | | | math_ops | Sqrt | | Sqrt | -| Square | | √ | √ | √ | | | math_ops | Square | | | -| SquaredDifference | | | | | | | | SquaredDifference | | | -| Squeeze | | √ | √ | √ | | | array_ops | Squeeze | | Squeeze | -| StridedSlice | | √ | √ | √ | | | array_ops | StridedSlice | | | -| Stack | | | | | | | | Stack | | | -| Sub | | √ | √ | √ | | √ | math_ops | Sub | | Sub | -| Tan | | | | | | | | | | Tan | -| Tanh | | √ | | | | | layer/activation | Tanh | TanH | | -| TensorAdd | | √ | √ | √ | | √ | math_ops | | | | -| Tile | | √ | | | | | array_ops | Tile | | Tile | -| TopK | | √ | √ | √ | | | nn_ops | TopKV2 | | | -| Transpose | | √ | √ | √ | | √ | array_ops | Transpose | Permute | Transpose | -| Unique | | | | | | | | Unique | | | -| Unpack | | √ | | | | | nn_ops | | | | -| Unsample | | | | | | | | | | Unsample | -| Unsqueeze | | | | | | | | | | Unsqueeze | -| Unstack | | | | | | | | Unstack | | | -| Where | | | | | | | | Where | | | -| ZerosLike | | √ | | | | | array_ops | ZerosLike | | | +| 操作名 | CPU
FP16 | CPU
FP32 | CPU
Int8 | CPU
UInt8 | GPU
FP16 | GPU
FP32 | 支持的Tensorflow
Lite op | 支持的Caffe
Lite op | 支持的Onnx
Lite op | +|-----------------------|----------|----------|----------|-----------|----------|-------------------|----------|----------|---------| +| Abs | | √ | √ | √ | | | Abs | | Abs | +| Add | √ | √ | √ | √ | | √ | Add | | Add | +| AddN | | √ | | | | | AddN | | | +| Argmax | | √ | √ | √ | | | Argmax | ArgMax | ArgMax | +| Argmin | | √ | √ | √ | | | Argmin | | | +| AvgPool | √ | √ | √ | √ | | √ | MeanPooling| Pooling | AveragePool | +| BatchNorm | √ | √ | √ | √ | | √ | | BatchNorm | BatchNormalization | +| BatchToSpace | | √ | √ | √ | | | BatchToSpace, BatchToSpaceND | | | +| BiasAdd | | √ | √ | √ | | √ | | | BiasAdd | +| Broadcast | | √ | | | | | BroadcastTo | | Expand | +| Cast | √ | √ | | √ | | | Cast, DEQUANTIZE* | | Cast | +| Ceil | | √ | √ | √ | | | Ceil | | Ceil | +| Concat | √ | √ | √ | √ | √ | √ | Concat | Concat | Concat | +| Conv2d | √ | √ | √ | √ | √ | √ | Conv2D | Convolution | Conv | +| Conv2dTranspose | √ | √ | √ | √ | √ | √ | DeConv2D | Deconvolution | ConvTranspose | +| Cos | | √ | √ | √ | | | Cos | | Cos | +| Crop | | √ | √ | √ | | | | Crop | | +| DeDepthwiseConv2D | | √ | √ | √ | | | | Deconvolution| ConvTranspose | +| DepthToSpace | | √ | √ | √ | | | DepthToSpace| | DepthToSpace | +| DepthwiseConv2dNative | √ | √ | √ | √ | √ | √ | DepthwiseConv2D | Convolution | Convolution | +| Div | √ | √ | √ | √ | | √ | Div, RealDiv | | Div | +| Eltwise | √ | √ | | | | | | Eltwise | | +| Elu | | √ | | | | | Elu | | Elu | +| Equal | √ | √ | √ | √ | | | Equal | | Equal | +| Exp | | √ | | | | | Exp | | Exp | +| ExpandDims | | √ | | | | | | | | +| Fill | | √ | | | | | Fill | | | +| Flatten | | √ | | | | | | Flatten | | +| Floor | | √ | √ | √ | | | flOOR | | Floor | +| FloorDiv | √ | √ | | | | | FloorDiv | | | +| FloorMod | √ | √ | | | | | FloorMod | | | +| FullConnection | | √ | √ | √ | | | FullyConnected | InnerProduct | | +| GatherNd | | √ | √ | √ | | | GatherND | | | +| GatherV2 | | √ | √ | √ | | | Gather | | Gather | +| Greater | √ | √ | √ | √ | | | Greater | | Greater | +| GreaterEqual | √ | √ | √ | √ | | | GreaterEqual| | | +| Hswish | √ | √ | √ | √ | | | HardSwish | | | +| LeakyReLU | √ | √ | | | | √ | LeakyRelu | | LeakyRelu | +| Less | √ | √ | √ | √ | | | Less | | Less | +| LessEqual | √ | √ | √ | √ | | | LessEqual | | | +| LRN | | √ | | | | | LocalResponseNorm | | Lrn | +| Log | | √ | √ | √ | | | Log | | Log | +| LogicalAnd | √ | √ | | | | | LogicalAnd | | | +| LogicalNot | | √ | √ | √ | | | LogicalNot | | | +| LogicalOr | √ | √ | | | | | LogicalOr | | | +| LSTM | | √ | | | | | | | | +| MatMul | | √ | √ | √ | √ | √ | | | MatMul | +| Maximum | √ | √ | | | | | Maximum | | Max | +| MaxPool | √ | √ | √ | √ | | √ | MaxPooling | Pooling | MaxPool | +| Minimum | √ | √ | | | | | Minimum | | Min | +| Mul | √ | √ | √ | √ | | √ | Mul | | Mul | +| NotEqual | √ | √ | √ | √ | | | NotEqual | | | +| OneHot | | √ | | | | | OneHot | | | +| Pad | | √ | √ | √ | | | Pad | | Pad | +| Pow | | √ | √ | √ | | | Pow | Power | Power | +| PReLU | | √ | | | | √ | | PReLU | | +| Range | | √ | | | | | Range | | | +| Rank | | √ | | | | | Rank | | | +| ReduceMax | √ | √ | √ | √ | | | ReduceMax | | ReduceMax | +| ReduceMean | √ | √ | √ | √ | | | Mean | | ReduceMean | +| ReduceMin | √ | √ | √ | √ | | | ReduceMin | | ReduceMin | +| ReduceProd | √ | √ | √ | √ | | | ReduceProd | | | +| ReduceSum | √ | √ | √ | √ | | | Sum | | ReduceSum | +| ReduceSumSquare | √ | √ | √ | √ | | | | | | +| ReLU | √ | √ | √ | √ | | √ | Relu | ReLU | Relu | +| ReLU6 | √ | √ | √ | √ | | √ | Relu6 | ReLU6 | Clip* | +| Reshape | √ | √ | √ | 
√ | | √ | Reshape | Reshape | Reshape,Flatten | +| Resize | | √ | √ | √ | | | ResizeBilinear, NearestNeighbor | Interp | | +| Reverse | | √ | | | | | reverse | | | +| ReverseSequence | | √ | | | | | ReverseSequence | | | +| Round | | √ | √ | √ | | | Round | | | +| Rsqrt | | √ | √ | √ | | | Rsqrt | | | +| Scale | | √ | | | | | | Scale | | +| ScatterNd | | √ | | | | | ScatterNd | | | +| Shape | | √ | | | | | Shape | | Shape | +| Sigmoid | √ | √ | √ | √ | | √ | Logistic | Sigmoid | Sigmoid | +| Sin | | √ | √ | √ | | | Sin | | Sin | +| Slice | | √ | √ | √ | √ | √ | Slice | | Slice | +| Softmax | √ | √ | √ | √ | | √ | Softmax | Softmax | Softmax | +| SpaceToBatch | | √ | | | | | | | | +| SpaceToBatchND | | √ | | | | | SpaceToBatchND | | | +| SpaceToDepth | | √ | | | | | SpaceToDepth | | SpaceToDepth | +| SparseToDense | | √ | | | | | SpareToDense | | | +| Split | √ | √ | √ | √ | | | Split, SplitV | | | +| Sqrt | | √ | √ | √ | | | Sqrt | | Sqrt | +| Square | | √ | √ | √ | | | Square | | | +| SquaredDifference | | √ | | | | | SquaredDifference | | | +| Squeeze | | √ | √ | √ | | | Squeeze | | Squeeze | +| StridedSlice | | √ | √ | √ | | | StridedSlice| | | +| Stack | | √ | | | | | Stack | | | +| Sub | √ | √ | √ | √ | | √ | Sub | | Sub | +| Tanh | √ | √ | | | | | Tanh | TanH | | +| Tile | | √ | | | | | Tile | | Tile | +| TopK | | √ | √ | √ | | | TopKV2 | | | +| Transpose | √ | √ | | | | √ | Transpose | Permute | Transpose | +| Unique | | √ | | | | | Unique | | | +| Unsqueeze | | √ | √ | √ | | | | | Unsqueeze | +| Unstack | | √ | | | | | Unstack | | | +| Where | | √ | | | | | Where | | | +| ZerosLike | | √ | | | | | ZerosLike | | | -* Clip: only support convert clip(0, 6) to Relu6. -* DEQUANTIZE: only support to convert fp16 to fp32. +* Clip: 仅支持将clip(0, 6)转换为Relu6. +* DEQUANTIZE: 仅支持将fp16转换为fp32. diff --git a/lite/docs/source_zh_cn/roadmap.md b/lite/docs/source_zh_cn/roadmap.md deleted file mode 100644 index 6bafce4c91194936f9e2715a5896819b72ee99a8..0000000000000000000000000000000000000000 --- a/lite/docs/source_zh_cn/roadmap.md +++ /dev/null @@ -1,15 +0,0 @@ -# RoadMap - - - -1. 增加更多的FP16、INT8和UINT8 CPU算子; -2. 增加更多的openCL、openGL、vulkan和metal GPU算子; -3. 增加控制流算子支持; -4. 增加NPU支持; -5. 增加部署在IoT设备的推理框架; -6. 增加图像分割、文字识别、人脸检测等预制模型; -7. 增加Lite的图像分割、文字识别、人脸检测等预置样例; -8. 增加Micro的样例; -9. 端侧训练支持; -10. pipeline数据处理丰富; -11. 模型转换工具支持windows和MAC。 \ No newline at end of file diff --git a/lite/tutorials/source_en/deploy.md b/lite/tutorials/source_en/compile.md similarity index 86% rename from lite/tutorials/source_en/deploy.md rename to lite/tutorials/source_en/compile.md index 350654ea725fc9a286be6f113d007e4b5ce62ff6..9f7ae53c96ac66900cc3a590cd2cfb224f2ad3f7 100644 --- a/lite/tutorials/source_en/deploy.md +++ b/lite/tutorials/source_en/compile.md @@ -1,8 +1,8 @@ -# Deploy +# Compile -- [Deployment](#deployment) +- [compilation](#compilation) - [Environment Requirements](#environment-requirements) - [Compilation Options](#compilation-options) - [Output Description](#output-description) @@ -10,7 +10,7 @@ - + This document describes how to quickly install MindSpore Lite on the Ubuntu system. @@ -57,7 +57,7 @@ After the compilation is complete, go to the `mindspore/output` directory of the > version: version of the output, consistent with that of the MindSpore. > -> function: function of the output. `convert` indicates the output of the conversion tool and `runtime` indicates the output of the inference framework. +> function: function of the output. 
`converter` indicates the output of the conversion tool and `runtime` indicates the output of the inference framework.
>
> OS: OS on which the output will be deployed.
@@ -81,9 +81,11 @@ Generally, the compiled output files include the following types. The architectu
| third_party | Header file and library of the third-party library | Yes | Yes |

Take the 0.7.0-beta version and CPU as an example. The contents of `third party` and `lib` vary depending on the architecture as follows:
-- `mindspore-lite-0.7.0-converter-ubuntu`: include `protobuf` (Protobuf dynamic library).
-- `mindspore-lite-0.7.0-runtime-x86-cpu`: include `flatbuffers` (FlatBuffers header file).
-TODO: Add document content.
+- `mindspore-lite-0.7.0-converter-ubuntu`: `third_party` includes `protobuf` (the Protobuf dynamic library).
+- `mindspore-lite-0.7.0-runtime-x86-cpu`: `third_party` includes `flatbuffers` (the FlatBuffers header files), and `lib` includes `libmindspore-lite.so` (the dynamic library of the MindSpore Lite inference framework).
+- `mindspore-lite-0.7.0-runtime-arm64-cpu`: `third_party` includes `flatbuffers` (the FlatBuffers header files), and `lib` includes `libmindspore-lite.so` (the dynamic library of the MindSpore Lite inference framework) and `liboptimize.so` (the dynamic library of MindSpore Lite advanced operators).
+
+> `liboptimize.so` exists only in the runtime-arm64 output, and can be used only on CPUs that support ARMv8.2 and fp16.

> Before running the tools in the `converter`, `benchmark`, or `time_profiler` directory, you need to configure environment variables and set the paths of the dynamic libraries of MindSpore Lite and Protobuf to the paths of the system dynamic libraries. The following uses the 0.7.0-beta version as an example: `export LD_LIBRARY_PATH=./mindspore-lite-0.7.0/lib:./mindspore-lite-0.7.0/third_party/protobuf/lib:${LD_LIBRARY_PATH}`.
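+
+A quick sanity check of the steps above might look like this (a minimal sketch; the archive name is an assumption based on the 0.7.0-beta CPU example):
+
+```bash
+# Unpack the runtime package produced by the build (name assumed from the 0.7.0-beta example).
+tar -xvf mindspore-lite-0.7.0-runtime-x86-cpu.tar.gz
+# Expose the MindSpore Lite dynamic library before running the bundled tools.
+export LD_LIBRARY_PATH=./mindspore-lite-0.7.0-runtime-x86-cpu/lib:${LD_LIBRARY_PATH}
+```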
diff --git a/lite/tutorials/source_en/images/lite_quick_start_app_result.jpg b/lite/tutorials/source_en/images/lite_quick_start_app_result.jpg new file mode 100644 index 0000000000000000000000000000000000000000..9287aad111992c39145c70f6a473818e31402bc7 Binary files /dev/null and b/lite/tutorials/source_en/images/lite_quick_start_app_result.jpg differ diff --git a/lite/tutorials/source_en/images/lite_quick_start_home.png b/lite/tutorials/source_en/images/lite_quick_start_home.png new file mode 100644 index 0000000000000000000000000000000000000000..c48cf581b33afbc15dbf27be495215b999e1be60 Binary files /dev/null and b/lite/tutorials/source_en/images/lite_quick_start_home.png differ diff --git a/lite/tutorials/source_en/images/lite_quick_start_project_structure.png b/lite/tutorials/source_en/images/lite_quick_start_project_structure.png new file mode 100644 index 0000000000000000000000000000000000000000..ade37a61ef97a479401240215e302011c014824c Binary files /dev/null and b/lite/tutorials/source_en/images/lite_quick_start_project_structure.png differ diff --git a/lite/tutorials/source_en/images/lite_quick_start_run_app.PNG b/lite/tutorials/source_en/images/lite_quick_start_run_app.PNG new file mode 100644 index 0000000000000000000000000000000000000000..2557b6293de5b3d7fefe7f6e58b57c03deabb55d Binary files /dev/null and b/lite/tutorials/source_en/images/lite_quick_start_run_app.PNG differ diff --git a/lite/tutorials/source_en/images/lite_quick_start_sdk.png b/lite/tutorials/source_en/images/lite_quick_start_sdk.png new file mode 100644 index 0000000000000000000000000000000000000000..1fcb8acabc9ba9d289efbe7e82ee5e2da8bfe073 Binary files /dev/null and b/lite/tutorials/source_en/images/lite_quick_start_sdk.png differ diff --git a/lite/tutorials/source_en/index.rst b/lite/tutorials/source_en/index.rst index ac48c19eeb0dc9ee7e406a857c3f3bd7d89a31f1..569f33def4647002337f602cf29a5341f136a05f 100644 --- a/lite/tutorials/source_en/index.rst +++ b/lite/tutorials/source_en/index.rst @@ -11,8 +11,8 @@ MindSpore Lite Tutorials :maxdepth: 1 :caption: Quick Start - deploy - quick_start/quick_start_lite + compile + quick_start/quick_start .. 
toctree:: :glob: @@ -20,4 +20,6 @@ MindSpore Lite Tutorials :caption: Use use/converter_tool - use/tools + use/runtime + use/benchmark_tool + use/timeprofiler_tool diff --git a/lite/tutorials/source_en/quick_start/quick_start.md b/lite/tutorials/source_en/quick_start/quick_start.md new file mode 100644 index 0000000000000000000000000000000000000000..970385a74446206461d1bf793b7c6c9111965e1f --- /dev/null +++ b/lite/tutorials/source_en/quick_start/quick_start.md @@ -0,0 +1,336 @@ +# Quick Start (Lite) + + + +- [Quick Start (Lite)](#quick-start-lite) + - [Overview](#overview) + - [Selecting a Model](#selecting-a-model) + - [Converting a Model](#converting-a-model) + - [Deploying an Application](#deploying-an-application) + - [Running Dependencies](#running-dependencies) + - [Building and Running](#building-and-running) + - [Detailed Description of the Sample Program](#detailed-description-of-the-sample-program) + - [Sample Program Structure](#sample-program-structure) + - [Configuring MindSpore Lite Dependencies](#configuring-mindspore-lite-dependencies) + - [Downloading and Deploying a Model File](#downloading-and-deploying-a-model-file) + - [Compiling On-Device Inference Code](#compiling-on-device-inference-code) + + + +## Overview + +It is recommended that you start from the image classification demo on the Android device to understand how to build the MindSpore Lite application project, configure dependencies, and use related APIs. + +This tutorial demonstrates the on-device deployment process based on the image classification sample program on the Android device provided by the MindSpore team. +1. Select an image classification model. +2. Convert the model into a MindSpore Lite model. +3. Use the MindSpore Lite inference model on the device. The following describes how to use the MindSpore Lite C++ APIs (Android JNIs) and MindSpore Lite image classification models to perform on-device inference, classify the content captured by a device camera, and display the most possible classification result on the application's image preview screen. + +> Click to find [Android image classification models](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite) and [sample code](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/lite/image_classification). + +## Selecting a Model + +The MindSpore team provides a series of preset device models that you can use in your application. +Click [here](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms) to download image classification models in MindSpore ModelZoo. +In addition, you can use the preset model to perform migration learning to implement your image classification tasks. For details, see [Saving and Loading Model Parameters](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#id6). + +## Converting a Model + +After you retrain a model provided by MindSpore, export the model in the [.mindir format](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#mindir). Use the MindSpore Lite [model conversion tool](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html) to convert the .mindir model to a .ms model. + +Take the MindSpore MobileNetV2 model as an example. Execute the following script to convert a model into a MindSpore Lite model for on-device inference. 
+```bash
+./converter_lite --fmk=MS --modelFile=mobilenet_v2.mindir --outputFile=mobilenet_v2.ms
+```
+
+## Deploying an Application
+
+The following section describes how to build and execute an on-device image classification task on MindSpore Lite.
+
+### Running Dependencies
+
+- Android Studio 3.2 or later (Android Studio 4.0 or later is recommended)
+- Native development kit (NDK) 21.3
+- CMake
+- Android software development kit (SDK) 26 or later
+- OpenCV 4.0.0 or later (included in the sample code)
+
+### Building and Running
+
+1. Load the sample source code into Android Studio and install the corresponding SDK. (After the SDK version is specified, Android Studio installs it automatically.)
+
+    ![start_home](../images/lite_quick_start_home.png)
+
+    Start Android Studio, click `File > Settings > System Settings > Android SDK`, and select the corresponding SDK. As shown in the following figure, select an SDK and click `OK`. Android Studio automatically installs the SDK.
+
+    ![start_sdk](../images/lite_quick_start_sdk.png)
+
+    (Optional) If an NDK version issue occurs during the installation, manually download the corresponding [NDK version](https://developer.android.com/ndk/downloads) (the version used in the sample code is 21.3). Specify the SDK location in `Android NDK location` of `Project Structure`.
+
+    ![project_structure](../images/lite_quick_start_project_structure.png)
+
+2. Connect to an Android device and run the image classification application.
+
+    Connect to the Android device through a USB cable for debugging. Click `Run 'app'` to run the sample project on your device.
+
+    ![run_app](../images/lite_quick_start_run_app.PNG)
+
+    For details about how to connect Android Studio to a device for debugging, see .
+
+3. Continue the installation on the Android device. After the installation is complete, you can view the content captured by the camera and the inference result.
+
+    As shown in the following figure, the keyboard and mouse are successfully identified.
+
+    ![result](../images/lite_quick_start_app_result.jpg)
+
+
+## Detailed Description of the Sample Program
+
+This image classification sample program on the Android device includes a Java layer and a JNI layer. At the Java layer, the Android Camera 2 API is used to enable a camera to obtain image frames and process images. At the JNI layer, the model inference process is completed in [Runtime](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html).
+
+> The following describes the JNI layer implementation of the sample program. At the Java layer, the Android Camera 2 API is used to enable a device camera and process image frames. Readers are expected to have basic Android development knowledge.
+
+### Sample Program Structure
+
+```
+app
+|
+├── libs # library files that store MindSpore Lite dependencies
+│   └── arm64-v8a
+│       ├── libopencv_java4.so
+│       └── libmindspore-lite.so
+│
+├── opencv # dependency files related to OpenCV
+│   └── ...
+|
+├── src/main
+│   ├── assets # resource files
+|   |   └── model.ms # model file
+│   |
+│   ├── cpp # main logic encapsulation classes for model loading and prediction
+|   |   ├── include # header files related to MindSpore calling
+|   |   |   └── ...
+│   |   |
+|   |   ├── MindSporeNetnative.cpp # JNI methods related to MindSpore calling
+│   |   └── MindSporeNetnative.h # header file
+│   |
+│   ├── java # application code at the Java layer
+│   │   └── com.huawei.himindsporedemo
+│   │       ├── gallery.classify # implementation related to image processing and MindSpore JNI calling
+│   │       │   └── ...
+│   │       └── obejctdetect # implementation related to camera enabling and drawing
+│   │           └── ...
+│   │
+│   ├── res # resource files related to Android
+│   └── AndroidManifest.xml # Android configuration file
+│
+├── CMakeList.txt # CMake compilation entry file
+│
+├── build.gradle # Other Android configuration file
+└── ...
+```
+
+### Configuring MindSpore Lite Dependencies
+
+When MindSpore C++ APIs are called at the Android JNI layer, related library files are required. You can use MindSpore Lite [source code compilation](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html) to generate the `libmindspore-lite.so` library file.
+
+In Android Studio, place the compiled `libmindspore-lite.so` library file (which can contain multiple compatible architectures) in the `app/libs/arm64-v8a` (Arm64) or `app/libs/armeabi-v7a` (Arm32) directory of the application project. In the `build.gradle` file of the application, configure the compilation support of CMake, `arm64-v8a`, and `armeabi-v7a`.
+
+```
+android{
+    defaultConfig{
+        externalNativeBuild{
+            cmake{
+                arguments "-DANDROID_STL=c++_shared"
+            }
+        }
+
+        ndk{
+            abiFilters 'armeabi-v7a', 'arm64-v8a'
+        }
+    }
+}
+```
+
+Create a link to the `.so` or `.a` library file in the `app/CMakeLists.txt` file:
+
+```
+# Set MindSpore Lite Dependencies.
+include_directories(${CMAKE_SOURCE_DIR}/src/main/cpp/include/MindSpore)
+add_library(mindspore-lite SHARED IMPORTED )
+set_target_properties(mindspore-lite PROPERTIES
+    IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libmindspore-lite.so")
+
+# Set OpenCV Dependencies.
+include_directories(${CMAKE_SOURCE_DIR}/opencv/sdk/native/jni/include)
+add_library(lib-opencv SHARED IMPORTED )
+set_target_properties(lib-opencv PROPERTIES
+    IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/libs/libopencv_java4.so")
+
+# Link target library.
+target_link_libraries(
+    ...
+    mindspore-lite
+    lib-opencv
+    ...
+)
+```
+
+In this example, the `download.gradle` file is configured to automatically download the `libmindspore-lite.so` and `libopencv_java4.so` library files and place them in the `app/libs/arm64-v8a` directory.
+
+Note: if the automatic download fails, manually download the relevant library files and put them in the corresponding locations.
+
+libmindspore-lite.so: [libmindspore-lite.so](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
+
+libmindspore-lite include files: [libmindspore-lite include](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/include.zip)
+
+libopencv_java4.so: [libopencv_java4.so](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so)
+
+libopencv include files: [libopencv include](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/include.zip)
+
+### Downloading and Deploying a Model File
+
+In this example, the `download.gradle` file is configured to automatically download the `mobilenetv2.ms` model file and place it in the `app/libs/arm64-v8a` directory.
+
+Note: if the automatic download fails, manually download the model file and put it in the corresponding location.
+
+mobilenetv2.ms: [mobilenetv2.ms](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)
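+
+If the automatic download fails repeatedly, the files can also be fetched by hand; a hypothetical fallback is sketched below (the destination directories are assumptions taken from the layout above):
+
+```bash
+# Manual fallback for the gradle download step (destination paths assumed).
+curl -L -o app/libs/arm64-v8a/libmindspore-lite.so \
+  "https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so"
+curl -L -o app/libs/arm64-v8a/libopencv_java4.so \
+  "https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so"
+curl -L -o app/libs/arm64-v8a/mobilenetv2.ms \
+  "https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms"
+```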
+
+### Compiling On-Device Inference Code
+
+Call MindSpore Lite C++ APIs at the JNI layer to implement on-device inference.
+
+The inference code process is as follows. For details about the complete code, see `src/cpp/MindSporeNetnative.cpp`.
+
+1. Load the MindSpore Lite model file and build the context, session, and computational graph for inference.
+
+    - Load a model file. Create and configure the context for model inference.
+    ```cpp
+    // Buffer is the model data passed in by the Java layer.
+    jlong bufferLen = env->GetDirectBufferCapacity(buffer);
+    char *modelBuffer = CreateLocalModelBuffer(env, buffer);
+    ```
+
+    - Create a session.
+    ```cpp
+    void **labelEnv = new void *;
+    MSNetWork *labelNet = new MSNetWork;
+    *labelEnv = labelNet;
+
+    // Create context.
+    lite::Context *context = new lite::Context;
+    context->device_ctx_.type = lite::DT_CPU;
+    context->thread_num_ = numThread;  // Specify the number of threads to run inference.
+
+    // Create the mindspore session.
+    labelNet->CreateSessionMS(modelBuffer, bufferLen, "device label", context);
+    delete context;
+    ```
+
+    - Load the model file and build a computational graph for inference.
+    ```cpp
+    void MSNetWork::CreateSessionMS(char *modelBuffer, size_t bufferLen, std::string name, mindspore::lite::Context *ctx)
+    {
+        CreateSession(modelBuffer, bufferLen, ctx);
+        // Create the session first, then import the model and compile it into a runnable graph.
+        session = mindspore::session::LiteSession::CreateSession(ctx);
+        auto model = mindspore::lite::Model::Import(modelBuffer, bufferLen);
+        int ret = session->CompileGraph(model);
+    }
+    ```
+
+2. Convert the input image into the Tensor format of the MindSpore model.
+
+   Convert the image data to be detected into the Tensor format of the MindSpore model.
+
+   ```cpp
+   // Convert the Bitmap image passed in from the Java layer to a Mat for OpenCV processing.
+   BitmapToMat(env, srcBitmap, matImageSrc);
+   // Preprocessing such as resizing the image.
+   matImgPreprocessed = PreProcessImageData(matImageSrc);
+
+   ImgDims inputDims;
+   inputDims.channel = matImgPreprocessed.channels();
+   inputDims.width = matImgPreprocessed.cols;
+   inputDims.height = matImgPreprocessed.rows;
+   float *dataHWC = new float[inputDims.channel * inputDims.width * inputDims.height];
+
+   // Copy the image data to be detected into the dataHWC array.
+   // The dataHWC[image_size] array here is the intermediate variable for the input MindSpore model tensor.
+   float *ptrTmp = reinterpret_cast<float *>(matImgPreprocessed.data);
+   for (int i = 0; i < inputDims.channel * inputDims.width * inputDims.height; i++) {
+       dataHWC[i] = ptrTmp[i];
+   }
+
+   // Assign dataHWC[image_size] to the input tensor variable.
+   auto msInputs = mSession->GetInputs();
+   auto inTensor = msInputs.front();
+   memcpy(inTensor->MutableData(), dataHWC,
+          inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
+   delete[] dataHWC;
+   ```
+
+3. Perform inference on the input tensor based on the model, obtain the output tensor, and perform post-processing.
+
+   - Perform graph execution and on-device inference.
+
+     ```cpp
+     // After the model and image tensor data are loaded, run inference.
+     auto status = mSession->RunGraph();
+     ```
+
+   - Obtain the output data.
+     ```cpp
+     auto msOutputs = mSession->GetOutputMapByNode();
+     std::string retStr = ProcessRunnetResult(msOutputs, ret);
+     ```
+
+   - Perform post-processing of the output data.
+     ```cpp
+     std::string ProcessRunnetResult(std::unordered_map<std::string, std::vector<mindspore::tensor::MSTensor *>> msOutputs,
+                                     int runnetRet) {
+         // Get the model output results.
+         std::unordered_map<std::string, std::vector<mindspore::tensor::MSTensor *>>::iterator iter;
+         iter = msOutputs.begin();
+         auto branch1_string = iter->first;
+         auto branch1_tensor = iter->second;
+
+         int OUTPUTS_LEN = branch1_tensor[0]->ElementsNum();
+         float *temp_scores = static_cast<float *>(branch1_tensor[0]->MutableData());
+
+         float scores[RET_CATEGORY_SUM];
+         for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+             scores[i] = temp_scores[i];
+         }
+
+         // Convert to the text information that needs to be displayed in the APP.
+         std::string retStr = "";
+         if (runnetRet == 0) {
+             for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+                 // Keep only the categories whose score exceeds the display threshold.
+                 if (scores[i] > 0.3) {
+                     retStr += g_labels_name_map[i];
+                     retStr += ":";
+                     std::string score_str = std::to_string(scores[i]);
+                     retStr += score_str;
+                     retStr += ";";
+                 }
+             }
+         } else {
+             MS_PRINT("MindSpore run net failed!");
+             for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+                 retStr += " :0.0;";
+             }
+         }
+         return retStr;
+     }
+     ```
\ No newline at end of file
diff --git a/lite/tutorials/source_en/quick_start/quick_start_lite.md deleted file mode 100644
index 295731f750d31e5d8dda258cd9a4631e64d103ba..0000000000000000000000000000000000000000
--- a/lite/tutorials/source_en/quick_start/quick_start_lite.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Quick Start (Lite)
-
-
diff --git a/lite/tutorials/source_en/use/benchmark_tool.md b/lite/tutorials/source_en/use/benchmark_tool.md
index e96f6d25948fe288b48379577abd27beb6916e62..78cd34120988addfd60b85883f7f400681160319 100644
--- a/lite/tutorials/source_en/use/benchmark_tool.md
+++ b/lite/tutorials/source_en/use/benchmark_tool.md
@@ -1,3 +1,97 @@
# Benchmark Tool
+
+
+- [Benchmark Tool](#benchmark-tool)
+  - [Overview](#overview)
+  - [Environment Preparation](#environment-preparation)
+  - [Parameter Description](#parameter-description)
+  - [Example](#example)
+    - [Performance Test](#performance-test)
+    - [Accuracy Test](#accuracy-test)
+
+
+
+## Overview
+
+The Benchmark tool is used to perform benchmark testing on a MindSpore Lite model and is implemented using the C++ language. It can not only perform quantitative analysis (performance) on the forward inference execution duration of a MindSpore Lite model, but also perform comparative error analysis (accuracy) based on the output of the specified model.
+
+## Environment Preparation
+
+To use the Benchmark tool, you need to prepare the environment as follows:
+
+- Compilation: Install compilation dependencies and perform compilation. The code of the Benchmark tool is stored in the `mindspore/lite/tools/benchmark` directory of the MindSpore source code. For details about the compilation operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/master/compile.html#id2) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/master/compile.html#id5) in the compilation document.
+
+- Run: Obtain the `Benchmark` tool and configure environment variables. For details, see [Output Description](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html#id4) in the compilation document.
+
+## Parameter Description
+
+The command used for benchmark testing based on the compiled Benchmark tool is as follows:
+
+```bash
+./benchmark --modelPath=<MODELPATH> [--accuracyThreshold=<ACCURACYTHRESHOLD>]
+   [--calibDataPath=<CALIBDATAPATH>] [--cpuBindMode=<CPUBINDMODE>]
+   [--device=<DEVICE>] [--help] [--inDataPath=<INDATAPATH>]
+   [--inDataType=<INDATATYPE>] [--loopCount=<LOOPCOUNT>]
+   [--numThreads=<NUMTHREADS>] [--omModelPath=<OMMODELPATH>]
+   [--resizeDims=<RESIZEDIMS>] [--warmUpLoopCount=<WARMUPLOOPCOUNT>]
+   [--fp16Priority=<FP16PRIORITY>]
+```
+
+The following describes the parameters in detail.
+
+| Parameter | Attribute | Function | Parameter Type | Default Value | Value Range |
+| ----------------- | ---- | ------------------------------------------------------------ | ------ | -------- | ---------------------------------- |
+| `--modelPath=<MODELPATH>` | Mandatory | Specifies the file path of the MindSpore Lite model for benchmark testing. | String | Null | - |
+| `--accuracyThreshold=<ACCURACYTHRESHOLD>` | Optional | Specifies the accuracy threshold. | Float | 0.5 | - |
+| `--calibDataPath=<CALIBDATAPATH>` | Optional | Specifies the file path of the benchmark data. The benchmark data, as the comparison output of the tested model, is output from the forward inference of the tested model under other deep learning frameworks using the same input. | String | Null | - |
+| `--cpuBindMode=<CPUBINDMODE>` | Optional | Specifies the type of the CPU core bound to the model inference program. | Integer | 1 | −1: medium core<br>1: large core<br>0: not bound |
+| `--device=<DEVICE>` | Optional | Specifies the type of the device on which the model inference program runs. | String | CPU | CPU or GPU |
+| `--help` | Optional | Displays the help information about the `benchmark` command. | - | - | - |
+| `--inDataPath=<INDATAPATH>` | Optional | Specifies the file path of the input data of the tested model. If this parameter is not set, a random value will be used. | String | Null | - |
+| `--inDataType=<INDATATYPE>` | Optional | Specifies the file type of the input data of the tested model. | String | Bin | Img: The input data is an image. Bin: The input data is a binary file. |
+| `--loopCount=<LOOPCOUNT>` | Optional | Specifies the number of forward inference times of the tested model when the Benchmark tool is used for the benchmark testing. The value is a positive integer. | Integer | 10 | - |
+| `--numThreads=<NUMTHREADS>` | Optional | Specifies the number of threads for running the model inference program. | Integer | 2 | - |
+| `--omModelPath=<OMMODELPATH>` | Optional | Specifies the file path of the OM model. This parameter is optional only when the `device` type is NPU. | String | Null | - |
+| `--resizeDims=<RESIZEDIMS>` | Optional | Specifies the size to be adjusted for the input data of the tested model. | String | Null | - |
+| `--warmUpLoopCount=<WARMUPLOOPCOUNT>` | Optional | Specifies the number of preheating inference times of the tested model before multiple rounds of the benchmark test are executed. | Integer | 3 | - |
+| `--fp16Priority=<FP16PRIORITY>` | Optional | Specifies whether the float16 operator is preferred. | Bool | false | true, false |
+
+## Example
+
+When using the Benchmark tool to perform benchmark testing on different MindSpore Lite models, you can set different parameters to implement different test functions. The testing is classified into performance test and accuracy test.
+
+### Performance Test
+
+The main test indicator of the performance test performed by the Benchmark tool is the duration of a single forward inference. In a performance test, you do not need to set benchmark data parameters such as `calibDataPath`. For example:
+
+```bash
+./benchmark --modelPath=./models/test_benchmark.ms
+```
+
+This command uses a random input, and other parameters use default values. After this command is executed, the following statistics are displayed. The statistics include the minimum duration, maximum duration, and average duration of a single inference after the tested model runs for the specified number of inference rounds.
+
+```
+Model = test_benchmark.ms, numThreads = 2, MinRunTime = 72.228996 ms, MaxRuntime = 73.094002 ms, AvgRunTime = 72.556000 ms
+```
+
+### Accuracy Test
+
+The accuracy test performed by the Benchmark tool verifies the accuracy of the MindSpore model output by setting benchmark data. In an accuracy test, in addition to the `modelPath` parameter, the `calibDataPath` parameter must be set. For example:
+
+```bash
+./benchmark --modelPath=./models/test_benchmark.ms --inDataPath=./input/test_benchmark.bin --device=CPU --accuracyThreshold=3 --calibDataPath=./output/test_benchmark.out
+```
+
+This command specifies the input data and benchmark data of the tested model, specifies that the model inference program runs on the CPU, and sets the accuracy threshold to 3%. After this command is executed, the following statistics are displayed, including the single input data of the tested model, the output result and average deviation rate of the output node, and the average deviation rate of all nodes.
+
+```
+InData0: 139.947 182.373 153.705 138.945 108.032 164.703 111.585 227.402 245.734 97.7776 201.89 134.868 144.851 236.027 18.1142 22.218 5.15569 212.318 198.43 221.853
+================ Comparing Output data ================
+Data of node age_out : 5.94584e-08 6.3317e-08 1.94726e-07 1.91809e-07 8.39805e-08 7.66035e-08 1.69285e-07 1.46246e-07 6.03796e-07 1.77631e-07 1.54343e-07 2.04623e-07 8.89609e-07 3.63487e-06 4.86876e-06 1.23939e-05 3.09981e-05 3.37098e-05 0.000107102 0.000213932 0.000533579 0.00062465 0.00296401 0.00993984 0.038227 0.0695085 0.162854 0.123199 0.24272 0.135048 0.169159 0.0221256 0.013892 0.00502971 0.00134921 0.00135701 0.000383242 0.000163475 0.000136294 9.77864e-05 8.00793e-05 5.73874e-05 3.53858e-05 2.18535e-05 2.04467e-05 1.85286e-05 1.05075e-05 9.34751e-06 6.12732e-06 4.55476e-06
+Mean bias of node age_out : 0%
+Mean bias of all nodes: 0%
+=======================================================
+```
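+
+Where more control is needed, the documented options can be combined in a single run; for example (a sketch reusing the parameters described above; the model path is a placeholder):
+
+```bash
+# Performance run with explicit threading, core binding, and warm-up settings.
+./benchmark --modelPath=./models/test_benchmark.ms --numThreads=4 --cpuBindMode=1 \
+  --loopCount=100 --warmUpLoopCount=10
+```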
diff --git a/lite/tutorials/source_en/use/converter_tool.md b/lite/tutorials/source_en/use/converter_tool.md
index 54d11d2ca99cf9d2a8596aca1a951c31d6e3bb21..7afc0f0a76e7319183b3500c4656583b03ec1ff4 100644
--- a/lite/tutorials/source_en/use/converter_tool.md
+++ b/lite/tutorials/source_en/use/converter_tool.md
@@ -6,7 +6,6 @@
- [Overview](#overview)
- [Environment Preparation](#environment-preparation)
- [Parameter Description](#parameter-description)
-  - [Model Visualization](#model-visualization)
- [Example](#example)
@@ -15,7 +14,7 @@
## Overview
-MindSpore Lite provides a tool for offline model conversion. It supports conversion of multiple types of models and visualization of converted models. The converted models can be used for inference. The command line parameters contain multiple personalized options, providing a convenient conversion method for users.
+MindSpore Lite provides a tool for offline model conversion. It supports conversion of multiple types of models. The converted models can be used for inference. The command line parameters contain multiple personalized options, providing a convenient conversion method for users.
Currently, the following input formats are supported: MindSpore, TensorFlow Lite, Caffe, and ONNX.
@@ -23,9 +22,9 @@
To use the MindSpore Lite model conversion tool, you need to prepare the environment as follows:
-- Compilation: Install basic and additional compilation dependencies and perform compilation. The compilation version is x86_64. The code of the model conversion tool is stored in the `mindspore/lite/tools/converter` directory of the MindSpore source code. For details about the compilation operations, see the [Environment Requirements] (https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id2) and [Compilation Example] (https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id5) in the deployment document.
+- Compilation: Install basic and additional compilation dependencies and perform compilation. The compilation version is x86_64. The code of the model conversion tool is stored in the `mindspore/lite/tools/converter` directory of the MindSpore source code. For details about the compilation operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html#id2) and [Compilation Example](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html#id5) in the compilation document.
-- Run: Obtain the `converter` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id4) in the deployment document. +- Run: Obtain the `converter` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html#id4) in the compilation document. ## Parameter Description @@ -34,7 +33,7 @@ You can enter `./converter_lite --help` to obtain help information in real time. The following describes the parameters in detail. - + | Parameter | Mandatory or Not | Parameter Description | Value Range | Default Value | | -------- | ------- | ----- | --- | ---- | | `--help` | No | Prints all help information. | - | - | @@ -42,20 +41,19 @@ The following describes the parameters in detail. | `--modelFile=` | Yes | Path of the input model. | - | - | | `--outputFile=` | Yes | Path of the output model. (If the path does not exist, a directory will be automatically created.) The suffix `.ms` can be automatically generated. | - | - | | `--weightFile=` | Yes (for Caffe models only) | Path of the weight file of the input model. | - | - | -| `--quantType=` | No | Sets the training type of the model. | PostTraining: quantization after training
AwareTraining: perceptual quantization | - | +| `--quantType=` | No | Sets the quant type of the model. | PostTraining: quantization after training
AwareTraining: perceptual quantization | - |
+| `--inputInferenceType=<INPUTINFERENCETYPE>` | No (supported by aware quant models only) | Sets the input data type of the converted model. If the type is different from that of the original model, the convert tool will insert a data type convert op before the model to make sure the input data type is the same as that of the original model. | FLOAT or INT8 | FLOAT |
+| `--inferenceType=<INFERENCETYPE>` | No (supported by aware quant models only) | Sets the output data type of the converted model. If the type is different from that of the original model, the convert tool will insert a data type convert op after the model to make sure the output data type is the same as that of the original model. | FLOAT or INT8 | FLOAT |
+| `--stdDev=<STDDEV>` | No (supported by aware quant models only) | Sets the standard deviation of the input data. | (0,+∞) | 128 |
+| `--mean=<MEAN>` | No (supported by aware quant models only) | Sets the mean value of the input data. | [-128, 127] | -0.5 |

> - The parameter name and parameter value are separated by an equal sign (=) and no space is allowed between them.
> - The Caffe model is divided into two files: model structure `*.prototxt`, corresponding to the `--modelFile` parameter; model weight `*.caffemodel`, corresponding to the `--weightFile` parameter

-## Model Visualization
-
-The model visualization tool provides a method for checking the model conversion result. You can run the JSON command to generate a `*.json` file and compare it with the original model to determine the conversion effect.
-
-TODO: This function is under development now.

## Example

-First, in the root directory of the source code, run the following command to perform compilation. For details, see `deploy.md`.
+First, in the root directory of the source code, run the following command to perform compilation. For details, see `compile.md`.
```bash
bash build.sh -I x86_64
```
@@ -94,15 +92,17 @@ The following describes how to use the conversion command by using several commo
   ./converter_lite --fmk=ONNX --modelFile=model.onnx --outputFile=model
   ```

-  - TensorFlow Lite perceptual quantization model `model_quant.tflite`
+  - TensorFlow Lite aware quantization model `model_quant.tflite`
   ```bash
   ./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model --quantType=AwareTraining
   ```
+  - TensorFlow Lite aware quantization model `model_quant.tflite`, with the input and output data types set to int8
+  ```bash
+  ./converter_lite --fmk=TFLITE --modelFile=model.tflite --outputFile=model --quantType=AwareTraining --inputInferenceType=INT8 --inferenceType=INT8
+  ```

In the preceding scenarios, the following information is displayed, indicating that the conversion is successful. In addition, the target file `model.ms` is obtained.
```
INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
```
-
-
-You can use the model visualization tool to visually check the converted MindSpore Lite model. This function is under development.
\ No newline at end of file + \ No newline at end of file diff --git a/lite/tutorials/source_en/use/timeprofiler_tool.md b/lite/tutorials/source_en/use/timeprofiler_tool.md index f779cfbda5a3f4f3c25735e5cc68e48a089c92d8..506da6162e506367cd09935a050dc254f32de773 100644 --- a/lite/tutorials/source_en/use/timeprofiler_tool.md +++ b/lite/tutorials/source_en/use/timeprofiler_tool.md @@ -1,3 +1,93 @@ # TimeProfiler Tool + + +- [TimeProfiler Tool](#timeprofiler-tool) + - [Overview](#overview) + - [Environment Preparation](#environment-preparation) + - [Parameter Description](#parameter-description) + - [Example](#example) + + + + +## Overview + +The TimeProfiler tool can be used to analyze the time consumption of forward inference at the network layer of a MindSpore Lite model. The analysis is implemented using the C++ language. + +## Environment Preparation + +To use the TimeProfiler tool, you need to prepare the environment as follows: + +- Compilation: Install compilation dependencies and perform compilation. The code of the TimeProfiler tool is stored in the `mindspore/lite/tools/time_profiler` directory of the MindSpore source code. For details about the compilation operations, see the [Environment Requirements](https://www.mindspore.cn/lite/docs/en/master/compile.html#id2) and [Compilation Example](https://www.mindspore.cn/lite/docs/en/master/compile.html#id5) in the compilation document. + +- Run: Obtain the `time_profiler` tool and configure environment variables by referring to [Output Description](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html#id4) in the compilation document. + +## Parameter Description + +The command used for analyzing the time consumption of forward inference at the network layer based on the compiled TimeProfiler tool is as follows: + +```bash +./timeprofiler --modelPath= [--help] [--loopCount=] [--numThreads=] [--cpuBindMode=] [--inDataPath=] [--fp16Priority=] +``` + +The following describes the parameters in detail. + +| Parameter | Attribute | Function | Parameter Type | Default Value | Value Range | +| ----------------- | ---- | ------------------------------------------------------------ | ------ | -------- | ---------------------------------- | +| `--help` | Optional | Displays the help information about the `timeprofiler` command. | - | - | - | +| `--modelPath= ` | Mandatory | Specifies the file path of the MindSpore Lite model for time consumption analysis. | String | Null | - | +| `--loopCount=` | Optional | Specifies the number of times that model inference is executed when the TimeProfiler tool is used for time consumption analysis. The value is a positive integer. | Integer | 100 | - | +| `--numThreads=` | Optional | Specifies the number of threads for running the model inference program. | Integer | 4 | - | +| `--cpuBindMode=` | Optional | Specifies the type of the CPU core bound to the model inference program. | Integer | 1 | −1: medium core
1: large core
0: not bound | +| `--inDataPath=` | Optional | Specifies the file path of the input data of the specified model. If this parameter is not set, a random value will be used. | String | Null | - | +| `--fp16Priority=` | Optional | Specifies whether the float16 operator is preferred. | Bool | false | true, false | + +## Example + +Take the `test_timeprofiler.ms` model as an example and set the number of model inference cycles to 10. The command for using TimeProfiler to analyze the time consumption at the network layer is as follows: + +```bash +./timeprofiler --modelPath=./models/test_timeprofiler.ms --loopCount=10 +``` + +After this command is executed, the TimeProfiler tool outputs the statistics on the running time of the model at the network layer. In this example, the command output is as follows: The statistics are displayed by`opName` and `optype`. `opName` indicates the operator name, `optype` indicates the operator type, and `avg` indicates the average running time of the operator per single run, `percent` indicates the ratio of the operator running time to the total operator running time, `calledTimess` indicates the number of times that the operator is run, and `opTotalTime` indicates the total time that the operator is run for a specified number of times. Finally, `total time` and `kernel cost` show the average time consumed by a single inference operation of the model and the sum of the average time consumed by all operators in the model inference, respectively. + +``` +----------------------------------------------------------------------------------------- +opName avg(ms) percent calledTimess opTotalTime +conv2d_1/convolution 2.264800 0.824012 10 22.648003 +conv2d_2/convolution 0.223700 0.081390 10 2.237000 +dense_1/BiasAdd 0.007500 0.002729 10 0.075000 +dense_1/MatMul 0.126000 0.045843 10 1.260000 +dense_1/Relu 0.006900 0.002510 10 0.069000 +max_pooling2d_1/MaxPool 0.035100 0.012771 10 0.351000 +max_pooling2d_2/MaxPool 0.014300 0.005203 10 0.143000 +max_pooling2d_2/MaxPool_nchw2nhwc_reshape_1/Reshape_0 0.006500 0.002365 10 0.065000 +max_pooling2d_2/MaxPool_nchw2nhwc_reshape_1/Shape_0 0.010900 0.003966 10 0.109000 +output/BiasAdd 0.005300 0.001928 10 0.053000 +output/MatMul 0.011400 0.004148 10 0.114000 +output/Softmax 0.013300 0.004839 10 0.133000 +reshape_1/Reshape 0.000900 0.000327 10 0.009000 +reshape_1/Reshape/shape 0.009900 0.003602 10 0.099000 +reshape_1/Shape 0.002300 0.000837 10 0.023000 +reshape_1/strided_slice 0.009700 0.003529 10 0.097000 +----------------------------------------------------------------------------------------- +opType avg(ms) percent calledTimess opTotalTime +Activation 0.006900 0.002510 10 0.069000 +BiasAdd 0.012800 0.004657 20 0.128000 +Conv2D 2.488500 0.905401 20 24.885004 +MatMul 0.137400 0.049991 20 1.374000 +Nchw2Nhwc 0.017400 0.006331 20 0.174000 +Pooling 0.049400 0.017973 20 0.494000 +Reshape 0.000900 0.000327 10 0.009000 +Shape 0.002300 0.000837 10 0.023000 +SoftMax 0.013300 0.004839 10 0.133000 +Stack 0.009900 0.003602 10 0.099000 +StridedSlice 0.009700 0.003529 10 0.097000 + +total time : 2.90800 ms, kernel cost : 2.74851 ms + +----------------------------------------------------------------------------------------- +``` \ No newline at end of file diff --git a/lite/tutorials/source_en/use/tools.rst b/lite/tutorials/source_en/use/tools.rst deleted file mode 100644 index 722a938d6916890fc2e03545f926c6092aff85cd..0000000000000000000000000000000000000000 --- a/lite/tutorials/source_en/use/tools.rst +++ /dev/null @@ -1,8 +0,0 @@ -Other Tools 
-=========== - -.. toctree:: - :maxdepth: 1 - - benchmark_tool - timeprofiler_tool \ No newline at end of file diff --git a/lite/tutorials/source_zh_cn/compile.md b/lite/tutorials/source_zh_cn/compile.md new file mode 100644 index 0000000000000000000000000000000000000000..c558bb6e089f8caecc7e4f5e056058bfafe15dd1 --- /dev/null +++ b/lite/tutorials/source_zh_cn/compile.md @@ -0,0 +1,228 @@ +# 编译 + + + +- [编译](#编译) + - [Linux环境编译](#linux环境编译) + - [环境要求](#环境要求) + - [编译选项](#编译选项) + - [编译示例](#编译示例) + - [编译输出](#编译输出) + - [模型转换工具converter目录结构说明](#模型转换工具converter目录结构说明) + - [模型推理框架runtime及其他工具目录结构说明](#模型推理框架runtime及其他工具目录结构说明) + - [Windows环境编译](#windows环境编译) + - [环境要求](#环境要求-1) + - [编译选项](#编译选项-1) + - [编译示例](#编译示例-1) + + + + + + +本章节介绍如何在Ubuntu系统上快速编译出MindSpore Lite,其包含的模块如下: + +| 模块 | 支持平台 | 说明 | +| --- | ---- | ---- | +| converter | Linux、Windows | 模型转换工具 | +| runtime | Linux、Android | 模型推理框架 | +| benchmark | Linux、Android | 基准测试工具 | +| time_profiler | Linux、Android | 性能分析工具 | + +## Linux环境编译 + +### 环境要求 + +- 系统环境:Linux x86_64,推荐使用Ubuntu 18.04.02LTS + +- runtime、benchmark、time_profiler编译依赖 + - [CMake](https://cmake.org/download/) >= 3.14.1 + - [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0 + - [Android_NDK](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip) >= r20 + - [Git](https://git-scm.com/downloads) >= 2.28.0 + +- converter编译依赖 + - [CMake](https://cmake.org/download/) >= 3.14.1 + - [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0 + - [Android_NDK](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip) >= r20 + - [Git](https://git-scm.com/downloads) >= 2.28.0 + - [Autoconf](http://ftp.gnu.org/gnu/autoconf/) >= 2.69 + - [Libtool](https://www.gnu.org/software/libtool/) >= 2.4.6 + - [LibreSSL](http://www.libressl.org/) >= 3.1.3 + - [Automake](https://www.gnu.org/software/automake/) >= 1.11.6 + - [Libevent](https://libevent.org) >= 2.0 + - [M4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18 + - [OpenSSL](https://www.openssl.org/) >= 1.1.1 + +> 编译脚本中会执行`git clone`获取第三方依赖库的代码,请提前确保git的网络设置正确可用。 + +### 编译选项 + +MindSpore Lite提供编译脚本`build.sh`用于一键式编译,位于MindSpore根目录下,该脚本可用于MindSpore训练及推理的编译。下面对MindSpore Lite的编译选项进行说明。 + +| 选项 | 参数说明 | 取值范围 | 是否必选 | +| -------- | ----- | ---- | ---- | +| **-I** | **选择适用架构,编译MindSpore Lite此选项必选** | **arm64、arm32、x86_64** | **是** | +| -d | 设置该参数,则编译Debug版本,否则编译Release版本 | 无 | 否 | +| -i | 设置该参数,则进行增量编译,否则进行全量编译 | 无 | 否 | +| -j[n] | 设定编译时所用的线程数,否则默认设定为8线程 | Integer | 否 | +| -e | 选择除CPU之外的其他内置算子类型,仅在ARM架构下适用,当前仅支持GPU | gpu | 否 | +| -h | 显示编译帮助信息 | 无 | 否 | + +> 在`-I`参数变动时,如`-I x86_64`变为`-I arm64`,添加`-i`参数进行增量编译不生效。 + +### 编译示例 + +首先,在进行编译之前,需从MindSpore代码仓下载源码。 + +```bash +git clone https://gitee.com/mindspore/mindspore.git +``` + +然后,在源码根目录下执行如下命令,可编译不同版本的MindSpore Lite。 + +- 编译x86_64架构Debug版本。 + ```bash + bash build.sh -I x86_64 -d + ``` + +- 编译x86_64架构Release版本,同时设定线程数。 + ```bash + bash build.sh -I x86_64 -j32 + ``` + +- 增量编译ARM64架构Release版本,同时设定线程数。 + ```bash + bash build.sh -I arm64 -i -j32 + ``` + +- 编译ARM64架构Release版本,同时编译内置的GPU算子。 + ```bash + bash build.sh -I arm64 -e gpu + ``` + +### 编译输出 + +编译完成后,进入`mindspore/output/`目录,可查看编译后生成的文件。文件分为两部分: +- `mindspore-lite-{version}-converter-{os}.tar.gz`:包含模型转换工具converter。 +- `mindspore-lite-{version}-runtime-{os}-{device}.tar.gz`:包含模型推理框架runtime、基准测试工具benchmark和性能分析工具time_profiler。 + +> version:输出件版本号,与所编译的分支代码对应的版本一致。 +> +> device:当前分为cpu(内置CPU算子)和gpu(内置CPU和GPU算子)。 +> +> os:输出件应部署的操作系统。 + +执行解压缩命令,获取编译后的输出件: + +```bash +tar -xvf mindspore-lite-{version}-converter-{os}.tar.gz 
+tar -xvf mindspore-lite-{version}-runtime-{os}-{device}.tar.gz
+```
+
+#### 模型转换工具converter目录结构说明
+
+转换工具仅在`-I x86_64`编译选项下获得,内容包括以下几部分:
+
+```
+|
+├── mindspore-lite-{version}-converter-{os}
+│   └── converter # 模型转换工具
+│   └── third_party # 第三方库头文件和库
+│       ├── protobuf # Protobuf的动态库
+
+```
+
+#### 模型推理框架runtime及其他工具目录结构说明
+
+推理框架可在`-I x86_64`、`-I arm64`和`-I arm32`编译选项下获得,内容包括以下几部分:
+
+- 当编译选项为`-I x86_64`时:
+    ```
+    |
+    ├── mindspore-lite-{version}-runtime-x86-cpu
+    │   └── benchmark # 基准测试工具
+    │   └── lib # 推理框架动态库
+    │       ├── libmindspore-lite.so # MindSpore Lite推理框架的动态库
+    │   └── third_party # 第三方库头文件和库
+    │       ├── flatbuffers # FlatBuffers头文件
+
+    ```
+
+- 当编译选项为`-I arm64`时:
+    ```
+    |
+    ├── mindspore-lite-{version}-runtime-arm64-cpu
+    │   └── benchmark # 基准测试工具
+    │   └── lib # 推理框架动态库
+    │       ├── libmindspore-lite.so # MindSpore Lite推理框架的动态库
+    │       ├── liboptimize.so # MindSpore Lite算子性能优化库
+    │   └── third_party # 第三方库头文件和库
+    │       ├── flatbuffers # FlatBuffers头文件
+    │   └── include # 推理框架头文件
+    │   └── time_profiler # 模型网络层耗时分析工具
+
+    ```
+
+- 当编译选项为`-I arm32`时:
+    ```
+    |
+    ├── mindspore-lite-{version}-runtime-arm32-cpu
+    │   └── benchmark # 基准测试工具
+    │   └── lib # 推理框架动态库
+    │       ├── libmindspore-lite.so # MindSpore Lite推理框架的动态库
+    │   └── third_party # 第三方库头文件和库
+    │       ├── flatbuffers # FlatBuffers头文件
+    │   └── include # 推理框架头文件
+    │   └── time_profiler # 模型网络层耗时分析工具
+
+    ```
+
+> 1. `liboptimize.so`仅在runtime-arm64的输出包中存在,仅在ARMv8.2和支持fp16特性的CPU上使用。
+> 2. 编译ARM64默认可获得arm64-cpu的推理框架输出件,若添加`-e gpu`则获得arm64-gpu的推理框架输出件,此时包名为`mindspore-lite-{version}-runtime-arm64-gpu.tar.gz`,编译ARM32同理。
+> 3. 运行converter、benchmark或time_profiler目录下的工具前,都需配置环境变量,将MindSpore Lite和Protobuf的动态库所在的路径配置到系统搜索动态库的路径中。以0.7.0-beta版本下编译CPU为例:配置converter:`export LD_LIBRARY_PATH=./mindspore-lite-0.7.0-converter-ubuntu/third_party/protobuf/lib`;配置benchmark和time_profiler:`export LD_LIBRARY_PATH=./mindspore-lite-0.7.0-runtime-x86-cpu/lib`
+
+
+## Windows环境编译
+
+### 环境要求
+
+- 支持的编译环境为:Windows 10,64位。
+
+- 编译依赖
+    - [CMake](https://cmake.org/download/) >= 3.14.1
+    - [MinGW GCC](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z/download) >= 7.3.0
+    - [Python](https://www.python.org/) >= 3.7.5
+    - [Git](https://git-scm.com/downloads) >= 2.28.0
+
+> 编译脚本中会执行`git clone`获取第三方依赖库的代码,请提前确保git的网络设置正确可用。
+
+### 编译选项
+
+MindSpore Lite的编译选项如下。
+
+| 参数 | 参数说明 | 是否必选 |
+| -------- | ----- | ---- |
+| **lite** | **设置该参数,则对MindSpore Lite工程进行编译** | **是** |
+| [n] | 设定编译时所用的线程数,否则默认设定为6线程 | 否 |
+
+### 编译示例
+
+首先,使用git工具从MindSpore代码仓下载源码。
+
+```bash
+git clone https://gitee.com/mindspore/mindspore.git
+```
+
+然后,使用cmd工具在源码根目录下,执行如下命令即可编译MindSpore Lite。
+
+- 以默认线程数(6线程)编译Windows版本。
+    ```bash
+    call build.bat lite
+    ```
+- 以指定线程数8编译Windows版本。
+    ```bash
+    call build.bat lite 8
+    ```
+
+编译完成之后,进入`mindspore/output/`目录,解压后即可获取输出件`mindspore-lite-0.7.0-converter-win-cpu.zip`,其中含有转换工具可执行文件。
diff --git a/lite/tutorials/source_zh_cn/deploy.md b/lite/tutorials/source_zh_cn/deploy.md
deleted file mode 100644
index 2b6177026eba1ddf58b772aec5e4771b1e38dac5..0000000000000000000000000000000000000000
--- a/lite/tutorials/source_zh_cn/deploy.md
+++ /dev/null
@@ -1,187 +0,0 @@
-# 部署
-
-
-
-- [部署](#部署)
-    - [Linux环境部署](#linux环境部署)
-        - [环境要求](#环境要求)
-        - [编译选项](#编译选项)
-        - [输出件说明](#输出件说明)
-        - [编译示例](#编译示例)
-    - [Windows环境部署](#windows环境部署)
-        - [环境要求](#环境要求-1)
-        - [编译选项](#编译选项-1)
-        - [输出件说明](#输出件说明-1)
-        - [编译示例](#编译示例-1)
-
-
-
-
-本文档介绍如何在Ubuntu和Windows系统上快速安装MindSpore Lite。
-
-## Linux环境部署
-
-### 环境要求
-
-- 编译环境仅支持x86_64版本的Linux:推荐使用Ubuntu 18.04.02LTS - -- 编译依赖(基本项) - - [CMake](https://cmake.org/download/) >= 3.14.1 - - [GCC](https://gcc.gnu.org/releases.html) >= 7.3.0 - - [Android_NDK r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip) - - > - 仅在编译ARM版本时需要安装`Android_NDK`,编译x86_64版本可跳过此项。 - > - 如果安装并使用`Android_NDK`,需配置环境变量,命令参考:`export ANDROID_NDK={$NDK_PATH}/android-ndk-r20b`。 - -- 编译依赖(MindSpore Lite模型转换工具所需附加项,仅编译x86_64版本时需要) - - [Autoconf](http://ftp.gnu.org/gnu/autoconf/) >= 2.69 - - [Libtool](https://www.gnu.org/software/libtool/) >= 2.4.6 - - [LibreSSL](http://www.libressl.org/) >= 3.1.3 - - [Automake](https://www.gnu.org/software/automake/) >= 1.11.6 - - [Libevent](https://libevent.org) >= 2.0 - - [M4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18 - - [OpenSSL](https://www.openssl.org/) >= 1.1.1 - - -### 编译选项 - -MindSpore Lite提供多种编译方式,用户可根据需要选择不同的编译选项。 - -| 参数 | 参数说明 | 取值范围 | 是否必选 | -| -------- | ----- | ---- | ---- | -| -d | 设置该参数,则编译Debug版本,否则编译Release版本 | - | 否 | -| -i | 设置该参数,则进行增量编译,否则进行全量编译 | - | 否 | -| -j[n] | 设定编译时所用的线程数,否则默认设定为8线程 | - | 否 | -| -I | 选择适用架构 | arm64、arm32、x86_64 | 是 | -| -e | 在ARM架构下,选择后端算子,设置`gpu`参数,会同时编译框架内置的GPU算子 | gpu | 否 | -| -h | 设置该参数,显示编译帮助信息 | - | 否 | - -> 在`-I`参数变动时,即切换适用架构时,无法使用`-i`参数进行增量编译。 - -### 输出件说明 - -编译完成后,进入源码的`mindspore/output`目录,可查看编译后生成的文件,命名为`mindspore-lite-{version}-{function}-{OS}.tar.gz`。解压后,即可获得编译后的工具包,名称为`mindspore-lite-{version}-{function}-{OS}`。 - -> version:输出件版本,与所编译的MindSpore版本一致。 -> -> function:输出件功能,`convert`表示为转换工具的输出件,`runtime`表示为推理框架的输出件。 -> -> OS:输出件应部署的操作系统。 - -```bash -tar -xvf mindspore-lite-{version}-{function}-{OS}.tar.gz -``` -编译x86可获得转换工具`converter`与推理框架`runtime`功能的输出件,编译ARM仅能获得推理框架`runtime`。 - -输出件中包含以下几类子项,功能不同所含内容也会有所区别。 - -> 编译ARM64默认可获得`arm64-cpu`的推理框架输出件,若添加`-e gpu`则获得`arm64-gpu`的推理框架输出件,编译ARM32同理。 - -| 目录 | 说明 | converter | runtime | -| --- | --- | --- | --- | -| include | 推理框架头文件 | 无 | 有 | -| lib | 推理框架动态库 | 无 | 有 | -| benchmark | 基准测试工具 | 无 | 有 | -| time_profiler | 模型网络层耗时分析工具 | 无 | 有 | -| converter | 模型转换工具 | 有 | 无 | -| third_party | 第三方库头文件和库 | 有 | 有 | - -以0.7.0-beta版本,CPU编译为例,不同包名下,`third party`与`lib`的内容不同: - -- `mindspore-lite-0.7.0-converter-ubuntu`:包含`protobuf`(Protobuf的动态库)。 -- `mindspore-lite-0.7.0-runtime-x86-cpu`:`third party`包含`flatbuffers`(FlatBuffers头文件),`lib`包含`libmindspore-lite.so`(MindSpore Lite的动态库)。 -- `mindspore-lite-0.7.0-runtime-arm64-cpu`:`third party`包含`flatbuffers`(FlatBuffers头文件),`lib`包含`libmindspore-lite.so`(MindSpore Lite的动态库)和`liboptimize.so`。 -TODO:补全文件内容 - -> 运行converter、benchmark或time_profiler目录下的工具前,都需配置环境变量,将MindSpore Lite和Protobuf的动态库所在的路径配置到系统搜索动态库的路径中。以0.7.0-beta版本为例:`export LD_LIBRARY_PATH=./mindspore-lite-0.7.0/lib:./mindspore-lite-0.7.0/third_party/protobuf/lib:${LD_LIBRARY_PATH}`。 - -### 编译示例 - -首先,从MindSpore代码仓下载源码。 - -```bash -git clone https://gitee.com/mindspore/mindspore.git -``` - -然后,在源码根目录下,执行如下命令,可编译不同版本的MindSpore Lite。 - -- 编译x86_64架构Debug版本。 - ```bash - bash build.sh -I x86_64 -d - ``` - -- 编译x86_64架构Release版本,同时设定线程数。 - ```bash - bash build.sh -I x86_64 -j32 - ``` - -- 增量编译ARM64架构Release版本,同时设定线程数。 - ```bash - bash build.sh -I arm64 -i -j32 - ``` - -- 编译ARM64架构Release版本,同时编译内置的GPU算子。 - ```bash - bash build.sh -I arm64 -e gpu - ``` - -> `build.sh`中会执行`git clone`获取第三方依赖库的代码,请提前确保git的网络设置正确可用。 - -以0.7.0-beta版本为例,x86_64架构Release版本编译完成之后,进入`mindspore/output`目录,执行如下解压缩命令,即可获取输出件`include`、`lib`、`benchmark`、`time_profiler`、`converter`和`third_party`。 - -```bash -tar -xvf mindspore-lite-0.7.0-converter-ubuntu.tar.gz -tar -xvf 
mindspore-lite-0.7.0-runtime-x86-cpu.tar.gz -``` - -## Windows环境部署 - -### 环境要求 - -- 编译环境仅支持32位或64位Windows系统 - -- 编译依赖(基本项) - - [CMake](https://cmake.org/download/) >= 3.14.1 - - [MinGW GCC](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z/download) >= 7.3.0 - - [Python](https://www.python.org/) >= 3.7.5 - - [Git](https://git-scm.com/downloads) >= 2.28.0 - -### 编译选项 - -MindSpore Lite的编译选项如下。 - -| 参数 | 参数说明 | 取值范围 | 是否必选 | -| -------- | ----- | ---- | ---- | -| lite | 设置该参数,则对Mindspore Lite工程进行编译,否则对Mindspore工程进行编译 | - | 是 | -| [n] | 设定编译时所用的线程数,否则默认设定为6线程 | - | 否 | - -### 输出件说明 - -编译完成后,进入源码的`mindspore/output/`目录,可查看编译后生成的文件,命名为`mindspore-lite-{version}-converter-win-{process_unit}.zip`。解压后,即可获得编译后的工具包,名称为`mindspore-lite-{version}`。 - -> version:输出件版本,与所编译的MindSpore版本一致。 -> process_unit:输出件应部署的处理器类型。 - -### 编译示例 - -首先,使用git工具从MindSpore代码仓下载源码。 - -```bash -git clone https://gitee.com/mindspore/mindspore.git -``` - -然后,使用cmd工具在源码根目录下,执行如下命令即可编译MindSpore Lite。 - -- 以默认线程数(6线程)编译Windows版本。 - ```bash - call build.bat lite - ``` -- 以指定线程数8编译Windows版本。 - ```bash - call build.bat lite 8 - ``` - -> `build.bat`中会执行`git clone`获取第三方依赖库的代码,请提前确保git的网络设置正确可用。 - -编译完成之后,进入`mindspore/output/`目录,解压后即可获取输出件`converter`。 diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_app_result.jpg b/lite/tutorials/source_zh_cn/images/lite_quick_start_app_result.jpg index ca1f0ff80d553333f78a89bf132bbeab7666043d..9287aad111992c39145c70f6a473818e31402bc7 100644 Binary files a/lite/tutorials/source_zh_cn/images/lite_quick_start_app_result.jpg and b/lite/tutorials/source_zh_cn/images/lite_quick_start_app_result.jpg differ diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_home.png b/lite/tutorials/source_zh_cn/images/lite_quick_start_home.png index 29e954a425c3b42e61353b97394d774f646cada7..c48cf581b33afbc15dbf27be495215b999e1be60 100644 Binary files a/lite/tutorials/source_zh_cn/images/lite_quick_start_home.png and b/lite/tutorials/source_zh_cn/images/lite_quick_start_home.png differ diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_install.jpg b/lite/tutorials/source_zh_cn/images/lite_quick_start_install.jpg deleted file mode 100644 index c98ee71dae722be180a8b88c1661eabf85c97dce..0000000000000000000000000000000000000000 Binary files a/lite/tutorials/source_zh_cn/images/lite_quick_start_install.jpg and /dev/null differ diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_install.png b/lite/tutorials/source_zh_cn/images/lite_quick_start_install.png new file mode 100644 index 0000000000000000000000000000000000000000..cc66708f0633c537e111d65a4b4e8a411a9322af Binary files /dev/null and b/lite/tutorials/source_zh_cn/images/lite_quick_start_install.png differ diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_project_structure.png b/lite/tutorials/source_zh_cn/images/lite_quick_start_project_structure.png index 6f71294479c4cd91dd983136d7f13960227c3c57..ade37a61ef97a479401240215e302011c014824c 100644 Binary files a/lite/tutorials/source_zh_cn/images/lite_quick_start_project_structure.png and b/lite/tutorials/source_zh_cn/images/lite_quick_start_project_structure.png differ diff --git a/lite/tutorials/source_zh_cn/images/lite_quick_start_sdk.png b/lite/tutorials/source_zh_cn/images/lite_quick_start_sdk.png index faf694bd2e69ec1e4b33ddfe944612e8472b7600..1fcb8acabc9ba9d289efbe7e82ee5e2da8bfe073 100644 Binary files 
a/lite/tutorials/source_zh_cn/images/lite_quick_start_sdk.png and b/lite/tutorials/source_zh_cn/images/lite_quick_start_sdk.png differ
diff --git a/lite/tutorials/source_zh_cn/quick_start/quick_start.md b/lite/tutorials/source_zh_cn/quick_start/quick_start.md
index b3730d2ea44d6cecbce11b98df19aeba20a4d9ac..ae3a881b243158477d7b7c8fa6b226ce1c3dfa7e 100644
--- a/lite/tutorials/source_zh_cn/quick_start/quick_start.md
+++ b/lite/tutorials/source_zh_cn/quick_start/quick_start.md
@@ -1,8 +1,8 @@
-# 快速入门(Lite)
+# 快速入门
 
 
 
-- [快速入门(Lite)](#快速入门lite)
+- [快速入门](#快速入门)
     - [概述](#概述)
     - [选择模型](#选择模型)
     - [转换模型](#转换模型)
@@ -17,6 +17,8 @@
 
 
 
+
+
 ## 概述
 
 我们推荐你从端侧Android图像分类demo入手,了解MindSpore Lite应用工程的构建、依赖项配置以及相关API的使用。
@@ -26,17 +28,17 @@
 2. 将模型转换成MindSpore Lite模型格式。
 3. 在端侧使用MindSpore Lite推理模型。详细说明如何在端侧利用MindSpore Lite C++ API(Android JNI)和MindSpore Lite图像分类模型完成端侧推理,实现对设备摄像头捕获的内容进行分类,并在APP图像预览界面中,显示出最可能的分类结果。
 
-> 你可以在这里找到[Android图像分类模型](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite)和[示例代码](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/lite/image_classif)。
+> 你可以在这里找到[Android图像分类模型](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite)和[示例代码](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/lite/image_classification)。
 
 ## 选择模型
 
 MindSpore团队提供了一系列预置终端模型,你可以在应用程序中使用这些预置的终端模型。
-MindSpore Model Zoo中图像分类模型可[在此下载](#TODO)。
-同时,你也可以使用预置模型做迁移学习,以实现自己的图像分类任务,操作流程参见[重训练章节](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#id6)。
+MindSpore Model Zoo中图像分类模型可[在此下载](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)。
+同时,你也可以使用预置模型做迁移学习,以实现自己的图像分类任务。
 
 ## 转换模型
 
-如果你需要对MindSpore提供的模型进行重训,重训完成后,需要将模型导出为[.mindir格式](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#mindir)。然后使用MindSpore Lite[模型转换工具](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html)将.mindir模型转换成.ms格式。
+如果预置模型已经满足你要求,请跳过本章节。如果你需要对MindSpore提供的模型进行重训,重训完成后,需要将模型导出为[.mindir格式](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#mindir)。然后使用MindSpore Lite[模型转换工具](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html)将.mindir模型转换成.ms格式。
 以MindSpore MobilenetV2模型为例,如下脚本将其转换为MindSpore Lite模型用于端侧推理。
 ```bash
@@ -51,7 +53,7 @@
 - Android Studio >= 3.2 (推荐4.0以上版本)
 - NDK 21.3
-- CMake
+- CMake 10.1
 - Android SDK >= 26
 - OpenCV >= 4.0.0 (本示例代码已包含)
@@ -79,7 +81,7 @@
 3. 在Android设备上,点击“继续安装”,安装完即可查看到设备摄像头捕获的内容和推理结果。
 
-   ![install](../images/lite_quick_start_install.jpg)
+   ![install](../images/lite_quick_start_install.png)
 
    如下图所示,成功识别出图中内容是键盘和鼠标。
@@ -88,7 +90,7 @@
 
 ## 示例程序详细说明
 
-本端侧图像分类Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能;JNI层在[Runtime](https://www.mindspore.cn/tutorial/zh-CN/master/use/lite_runtime.html)中完成模型推理的过程。
+本端侧图像分类Android示例程序分为JAVA层和JNI层,其中,JAVA层主要通过Android Camera 2 API实现摄像头获取图像帧,以及相应的图像处理等功能;JNI层在[Runtime](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/runtime.html)中完成模型推理的过程。
 
 > 此处详细说明示例程序的JNI层实现,JAVA层运用Android Camera 2 API实现开启设备摄像头以及图像帧处理等功能,需读者具备一定的Android开发基础知识。
 
@@ -110,9 +112,7 @@
 app
 | | └── model.ms # 存放模型文件
 │ |
 │ ├── cpp # 模型加载和预测主要逻辑封装类
-| | ├── include # 存放MindSpore调用相关的头文件
-| | | └── ...
-│ | |
+| | ├── ..
| | ├── MindSporeNetnative.cpp # MindSpore调用相关的JNI方法
│ | └── MindSporeNetnative.h # 头文件
│ |
@@ -134,9 +134,21 @@ app
 
 ### 配置MindSpore Lite依赖项
 
-Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html)生成`libmindspore-lite.so`库文件,或直接下载MindSpore Lite提供的已编译完成的AMR64、ARM32、x86等[软件包](#TODO)。
+Android JNI层调用MindSpore C++ API时,需要相关库文件支持。可通过MindSpore Lite[源码编译](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html)生成`libmindspore-lite.so`库文件。
+
+本示例中,build过程由download.gradle文件配置自动下载`libmindspore-lite.so`以及OpenCV的`libopencv_java4.so`库文件,并放置在`app/libs/arm64-v8a`目录下。
+
+注:若自动下载失败,请手动下载相关库文件并将其放在对应位置:
+
+libmindspore-lite.so [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/libmindspore-lite.so)
+
+libmindspore-lite include文件 [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/mindspore%20version%200.7/include.zip)
+
+libopencv_java4.so [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/libopencv_java4.so)
+
+libopencv include文件 [下载链接](https://download.mindspore.cn/model_zoo/official/lite/lib/opencv%204.4.0/include.zip)
+
 
-在Android Studio中将编译完成的`libmindspore-lite.so`库文件(可包含多个兼容架构),分别放置在APP工程的`app/libs/ARM64-V8a`(ARM64)或`app/libs/armeabi-v7a`(ARM32)目录下,并在应用的`build.gradle`文件中配置CMake编译支持,以及`arm64-v8a`和`armeabi-v7a`的编译支持。  
 
 ```
 android{
@@ -154,7 +166,7 @@ }
 ```
 
-在`app/CMakeLists.txt`文件中建立`.so`或`.a`库文件链接,如下所示。
+在`app/CMakeLists.txt`文件中建立`.so`库文件链接,如下所示。
 
 ```
 # Set MindSpore Lite Dependencies.
@@ -180,7 +192,9 @@ target_link_libraries(
 
 ### 下载及部署模型文件
 
-从MindSpore Model Hub中下载模型文件,本示例程序中使用的终端图像分类模型文件为`mobilenet_v2.ms`,放置在`app/src/main/assets`工程目录下。
+从MindSpore Model Hub中下载模型文件,本示例程序中使用的终端图像分类模型文件为`mobilenet_v2.ms`,同样通过`download.gradle`脚本在APP构建时自动下载,并放置在`app/src/main/assets`工程目录下。
+
+注:若下载失败请手动下载模型文件,mobilenetv2.ms [下载链接](https://download.mindspore.cn/model_zoo/official/lite/mobilenetv2_openimage_lite/mobilenetv2.ms)
 
 ### 编写端侧推理代码
 
@@ -205,7 +219,6 @@ target_link_libraries(
     // Create context.
     lite::Context *context = new lite::Context;
-    context->cpu_bind_mode_ = lite::NO_BIND;
     context->device_ctx_.type = lite::DT_CPU;
     context->thread_num_ = numThread;  //Specify the number of threads to run inference
 
@@ -222,7 +235,7 @@
     CreateSession(modelBuffer, bufferLen, ctx);
     session = mindspore::session::LiteSession::CreateSession(ctx);
     auto model = mindspore::lite::Model::Import(modelBuffer, bufferLen);
-    int ret = session->CompileGraph(model); // Compile Graph
+    int ret = session->CompileGraph(model);
 }
 
@@ -255,8 +268,8 @@
     memcpy(inTensor->MutableData(), dataHWC,
            inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
     delete[] (dataHWC);
-   ```
-
+   ```
+   
3. 对输入Tensor按照模型进行推理,获取输出Tensor,并进行后处理。
 
    - 图执行,端测推理。
 
@@ -268,7 +281,7 @@ target_link_libraries(
 
    - 获取输出数据。
     ```cpp
-    auto msOutputs = mSession->GetOutputs();
+    auto msOutputs = mSession->GetOutputMapByNode();
     std::string retStr = ProcessRunnetResult(msOutputs, ret);
     ```
 
@@ -286,19 +299,12 @@ target_link_libraries(
 
     int OUTPUTS_LEN = branch1_tensor[0]->ElementsNum();
-
-    MS_PRINT("OUTPUTS_LEN:%d", OUTPUTS_LEN);
-
     float *temp_scores = static_cast(branch1_tensor[0]->MutableData());
-
     float scores[RET_CATEGORY_SUM];
     for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
-        if (temp_scores[i] > 0.5){
-            MS_PRINT("MindSpore scores[%d] : [%f]", i, temp_scores[i]);
-        }
-       scores[i] = temp_scores[i];
-    }
-
+        scores[i] = temp_scores[i];
+    }
+
     // Converted to text information that needs to be displayed in the APP.
     std::string retStr = "";
     if (runnetRet == 0) {
@@ -306,12 +312,12 @@ target_link_libraries(
         if (scores[i] > 0.3){
             retStr += g_labels_name_map[i];
             retStr += ":";
-            std::string score_str = std::to_string(scores[i]);
+            std::string score_str = std::to_string(scores[i]);
             retStr += score_str;
             retStr += ";";
         }
-      }
-    } else {
+      }
+    else {
       MS_PRINT("MindSpore run net failed!");
       for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
        retStr += " :0.0;";
diff --git a/lite/tutorials/source_zh_cn/use/benchmark_tool.md b/lite/tutorials/source_zh_cn/use/benchmark_tool.md
index 7c3df6300ab35b77ab3e0354590e2fc8ef25b3df..d7c86f7a03425fbb0ed1babbd2c48de111c00e2b 100644
--- a/lite/tutorials/source_zh_cn/use/benchmark_tool.md
+++ b/lite/tutorials/source_zh_cn/use/benchmark_tool.md
@@ -22,9 +22,9 @@ Benchmark工具是一款可以对MindSpore Lite模型进行基准测试的工具
 
 使用Benchmark工具,需要进行如下环境准备工作。
 
-- 编译:Benchmark工具代码在MindSpore源码的`mindspore/lite/tools/benchmark`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id2)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id5),安装编译依赖基本项,并执行编译。
+- 编译:Benchmark工具代码在MindSpore源码的`mindspore/lite/tools/benchmark`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id2)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id5),安装编译依赖基本项,并执行编译。
 
-- 运行:参考部署文档中的[输出件说明](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id4),获得`benchmark`工具,并配置环境变量。
+- 运行:参考部署文档中的[输出件说明](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id4),获得`benchmark`工具,并配置环境变量。
 
 ## 参数说明
 
@@ -48,7 +48,7 @@
 | `--accuracyThreshold=` | 可选 | 指定准确度阈值。 | Float | 0.5 | - |
 | `--calibDataPath=` | 可选 | 指定标杆数据的文件路径。标杆数据作为该测试模型的对比输出,是该测试模型使用相同输入并由其它深度学习框架前向推理而来。 | String | null | - |
 | `--cpuBindMode=` | 可选 | 指定模型推理程序运行时绑定的CPU核类型。 | Integer | 1 | -1:表示中核
1:表示大核
0:表示不绑定 | -| `--device=` | 可选 | 指定模型推理程序运行的设备类型。 | String | CPU | CPU、NPU、GPU | +| `--device=` | 可选 | 指定模型推理程序运行的设备类型。 | String | CPU | CPU、GPU | | `--help` | 可选 | 显示`benchmark`命令的帮助信息。 | - | - | - | | `--inDataPath=` | 可选 | 指定测试模型输入数据的文件路径。如果未设置,则使用随机输入。 | String | null | - | | `--inDataType=` | 可选 | 指定测试模型输入数据的文件类型。 | String | bin | img:表示输入数据的文件类型为图片
bin:表示输入数据的类型为二进制文件 |
@@ -68,13 +68,13 @@
 
 Benchmark工具进行的性能测试主要的测试指标为模型单次前向推理的耗时。在性能测试任务中,不需要设置`calibDataPath`等标杆数据参数。例如:
 
 ```bash
-./benchmark --modelPath=./models/face/age/ml_face_age.ms
+./benchmark --modelPath=./models/test_benchmark.ms
 ```
 
 这条命令使用随机输入,其他参数使用默认值。该命令执行后会输出如下统计信息,该信息显示了测试模型在运行指定推理轮数后所统计出的单次推理最短耗时、单次推理最长耗时和平均推理耗时。
 
 ```
-Model = ml_face_age.ms, numThreads = 2, MinRunTime = 72.228996 ms, MaxRuntime = 73.094002 ms, AvgRunTime = 72.556000 ms
+Model = test_benchmark.ms, numThreads = 2, MinRunTime = 72.228996 ms, MaxRuntime = 73.094002 ms, AvgRunTime = 72.556000 ms
 ```
 
 ### 精度测试
 
 Benchmark工具进行的精度测试主要是通过设置标杆数据来对比验证MindSpore Lite模型输出的精确性。在精确度测试任务中,除了需要设置`modelPath`参数以外,还必须设置`calibDataPath`参数。例如:
 
 ```bash
-./benchmark --modelPath=./models/face/age/ml_face_age.ms --inDataPath=./data/input/ml_face_age.ms.bin --device=NPU --accuracyThreshold=3 --calibDataPath=./data/output/face/ml_face_age.ms.out
+./benchmark --modelPath=./models/test_benchmark.ms --inDataPath=./input/test_benchmark.bin --device=CPU --accuracyThreshold=3 --calibDataPath=./output/test_benchmark.out
 ```
 
-这条命令指定了测试模型的输入数据、标杆数据,同时指定了模型推理程序在NPU上运行,并指定了准确度阈值为3%。该命令执行后会输出如下统计信息,该信息显示了测试模型的单条输入数据、输出节点的输出结果和平均偏差率以及所有节点的平均偏差率。
+这条命令指定了测试模型的输入数据、标杆数据,同时指定了模型推理程序在CPU上运行,并指定了准确度阈值为3%。该命令执行后会输出如下统计信息,该信息显示了测试模型的单条输入数据、输出节点的输出结果和平均偏差率以及所有节点的平均偏差率。
 
 ```
 InData0: 139.947 182.373 153.705 138.945 108.032 164.703 111.585 227.402 245.734 97.7776 201.89 134.868 144.851 236.027 18.1142 22.218 5.15569 212.318 198.43 221.853
diff --git a/lite/tutorials/source_zh_cn/use/converter_tool.md b/lite/tutorials/source_zh_cn/use/converter_tool.md
index d5402a61523a45e9fdfce61be2ddd73b4f1d157c..42ea5d2bb4b58d223be0783f9c2b45473b63cea8 100644
--- a/lite/tutorials/source_zh_cn/use/converter_tool.md
+++ b/lite/tutorials/source_zh_cn/use/converter_tool.md
@@ -7,12 +7,10 @@
 
 - [Linux环境使用说明](#linux环境使用说明)
     - [环境准备](#环境准备)
     - [参数说明](#参数说明)
-    - [模型可视化](#模型可视化)
     - [使用示例](#使用示例)
 - [Windows环境使用说明](#windows环境使用说明)
     - [环境准备](#环境准备-1)
     - [参数说明](#参数说明-1)
-    - [模型可视化](#模型可视化-1)
     - [使用示例](#使用示例-1)
 
 
 
@@ -21,7 +19,7 @@
 
 ## 概述
 
-MindSpore Lite提供离线转换模型功能的工具,支持多种类型的模型转换,同时提供转化后模型可视化的功能,转换后的模型可用于推理。命令行参数包含多种个性化选项,为用户提供方便的转换途径。
+MindSpore Lite提供离线转换模型功能的工具,支持多种类型的模型转换,转换后的模型可用于推理。命令行参数包含多种个性化选项,为用户提供方便的转换途径。
 
 目前支持的输入格式有:MindSpore、TensorFlow Lite、Caffe和ONNX。
 
@@ -31,9 +29,9 @@
 
 使用MindSpore Lite模型转换工具,需要进行如下环境准备工作。
 
-- 编译:模型转换工具代码在MindSpore源码的`mindspore/lite/tools/converter`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id2)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id5),安装编译依赖基本项与模型转换工具所需附加项,并编译x86_64版本。
+- 编译:模型转换工具代码在MindSpore源码的`mindspore/lite/tools/converter`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id2)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id5),安装编译依赖基本项与模型转换工具所需附加项,并编译x86_64版本。
 
-- 运行:参考部署文档中的[输出件说明](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id4),获得`converter`工具,并配置环境变量。
+- 运行:参考部署文档中的[输出件说明](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id4),获得`converter`工具,并配置环境变量。
 
 ### 参数说明
 
@@ -49,20 +47,19 @@
 | `--modelFile=` | 是 | 输入模型的路径。 | - | - |
 | `--outputFile=` | 是 | 输出模型的路径(不存在时将自动创建目录),不需加后缀,可自动生成`.ms`后缀。 | - | - |
 | `--weightFile=` | 转换Caffe模型时必选 | 输入模型weight文件的路径。 | - | - |
-| `--quantType=` | 否 | 
设置模型的训练类型 | PostTraining:训练后量化
AwareTraining:感知量化。 | - | +| `--quantType=` | 否 | 设置模型的量化类型。 | PostTraining:训练后量化
AwareTraining:感知量化。 | - |
+| `--inputInferenceType=` | 否 | 设置感知量化模型输入数据类型,如果和原模型不一致,则转换工具会在模型输入前插入转换算子,使得转换后的模型输入类型和inputInferenceType保持一致。 | FLOAT、INT8 | FLOAT |
+| `--inferenceType=` | 否 | 设置感知量化模型输出数据类型,如果和原模型不一致,则转换工具会在模型输出后插入转换算子,使得转换后的模型输出类型和inferenceType保持一致。 | FLOAT、INT8 | FLOAT |
+| `--stdDev=` | 否 | 感知量化模型转换时用于设置输入数据的标准差。 | (0,+∞) | 128 |
+| `--mean=` | 否 | 感知量化模型转换时用于设置输入数据的均值。 | [-128, 127] | -0.5 |
 
 > - 参数名和参数值之间用等号连接,中间不能有空格。
 > - Caffe模型一般分为两个文件:`*.prototxt`模型结构,对应`--modelFile`参数;`*.caffemodel`模型权值,对应`--weightFile`参数。
 
-### 模型可视化
-
-模型可视化工具提供了一种查验模型转换结果的方法。用户可使用Json命令生成`*.json`文件,与原模型相对比,确定转化效果。
-
-TODO: 此功能还在开发中。
 
 ### 使用示例
 
-首先,在源码根目录下,输入命令进行编译,可参考`deploy.md`。
+首先,在源码根目录下,输入命令进行编译,可参考`compile.md`。
 ```bash
 bash build.sh -I x86_64
 ```
@@ -106,12 +103,17 @@ bash build.sh -I x86_64
    ./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining
    ```
 
+   - 感知量化模型输入设置为int8,输出设置为int8
+
+   ```bash
+   ./converter_lite --fmk=TFLITE --modelFile=model_quant.tflite --outputFile=model --quantType=AwareTraining --inputInferenceType=INT8 --inferenceType=INT8
+   ```
 以上几种情况下,均显示如下转换成功提示,且同时获得`model.ms`目标文件。
+
 ```
 INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
 ```
-
-你可以选择使用模型打印工具,可视化查验上述转化后生成的MindSpore Lite模型。本部分功能开发中。
+
 
 ## Windows环境使用说明
 
 ### 环境准备
 
 使用MindSpore Lite模型转换工具,需要进行如下环境准备工作。
 
-- 编译:模型转换工具代码在MindSpore源码的`mindspore/lite/tools/converter`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id7)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id10),安装编译依赖基本项与模型转换工具所需附加项,并编译Windows版本。
+- 编译:模型转换工具代码在MindSpore源码的`mindspore/lite/tools/converter`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html#id7)和[编译示例](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html#id10),安装编译依赖基本项与模型转换工具所需附加项,并编译Windows版本。
 
-- 运行:参考部署文档中的[输出件说明](https://www.mindspore.cn/lite/docs/zh-CN/master/deploy.html#id9),获得`converter`工具,并将MinGW/bin目录下的几个依赖文件(libgcc_s_seh-1.dll、libwinpthread-1.dll、libssp-0.dll、libstdc++-6.dll)拷贝至`converter`工具的主目录。
+- 运行:参考部署文档中的[输出件说明](https://www.mindspore.cn/lite/docs/zh-CN/master/compile.html#id9),获得`converter`工具,并将MinGW/bin目录下的几个依赖文件(libgcc_s_seh-1.dll、libwinpthread-1.dll、libssp-0.dll、libstdc++-6.dll)拷贝至`converter`工具的主目录。
 
 ### 参数说明
 
 参考Linux环境模型转换工具的[参数说明](https://www.mindspore.cn/lite/docs/zh-CN/master/converter_tool.html#id4)
 
-### 模型可视化
-
-参考Linux环境模型转换工具的[模型可视化](https://www.mindspore.cn/lite/docs/zh-CN/master/converter_tool.html#id5)
 
 ### 使用示例
 
-首先,使用cmd工具在源码根目录下,输入命令进行编译,可参考`deploy.md`。
+首先,使用cmd工具在源码根目录下,输入命令进行编译,可参考`compile.md`。
 ```bash
 call build.bat lite
 ```
@@ -186,4 +185,3 @@ set MSLOG=INFO
 INFO [converter/converter.cc:190] Runconverter] CONVERTER RESULT: SUCCESS!
 ```
-
-你可以选择使用模型打印工具,可视化查验上述转化后生成的MindSpore Lite模型。本部分功能开发中。
diff --git a/lite/tutorials/source_zh_cn/use/post_training_quantization.md b/lite/tutorials/source_zh_cn/use/post_training_quantization.md
index c09d9b933b64fea4b51663d1a141ab3ba93aa2f2..93ba0889434710d8f6143a0469b970b8e1feb2d7 100644
--- a/lite/tutorials/source_zh_cn/use/post_training_quantization.md
+++ b/lite/tutorials/source_zh_cn/use/post_training_quantization.md
@@ -32,9 +32,9 @@
 | 参数名 | 属性 | 功能描述 | 参数类型 | 默认值 | 取值范围 |
 | -------- | ------- | ----- | ----- | ----- | ----- |
 | image_path | 必选 | 存放校准数据集的目录 | String | - | 该目录存放可直接用于执行推理的输入数据。由于目前框架还不支持数据预处理,所有数据必须事先完成所需的转换,使得它们满足推理的输入要求。 |
-| batch_count | 可选 | 使用的输入数目 | Integer | 1000 | 大于0 |
+| batch_count | 可选 | 使用的输入数目 | Integer | 100 | 大于0 |
 | method_x | 可选 | 网络层输入输出数据量化算法 | String | KL | KL,MAX_MIN。 KL: 基于[KL散度](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf)对数据范围作量化校准; MAX_MIN:基于最大值、最小值计算数据的量化参数。 在模型以及数据集比较较简单的情况下,推荐使用MAX_MIN |
-
+| thread_num | 可选 | 使用校准数据集执行推理流程时的线程数 | Integer | 1 | 大于0 |
 
 ## 使用示例
 
@@ -44,20 +44,20 @@
    image_path=/dir/images
    batch_count=100
    method_x=MAX_MIN
+   thread_num=1
    ```
 
    校准数据集可以选择测试数据集的子集,要求`/dir/images`目录下存放的每个文件均是预处理好的输入数据,每个文件都可以直接用于推理的输入。
-3. 以TensorFlow Lite模型mnist.tflite为例,执行带训练后量化的模型转换命令:
+3. 以MindSpore模型为例,执行带训练后量化的模型转换命令:
    ```
-   ./converter_lite --fmk=TFLITE --modelFile=mnist.tflite --outputFile=mnist_quant --quantType=PostTraining --config_file=config.cfg
+   ./converter_lite --fmk=MS --modelFile=lenet.ms --outputFile=lenet_quant --quantType=PostTraining --config_file=config.cfg
    ```
-4. 上述命令执行成功后,便可得到量化后的模型mnist_quant.ms,通常量化后的模型大小会下降到FP32模型的1/4。
+4. 上述命令执行成功后,便可得到量化后的模型lenet_quant.ms,通常量化后的模型大小会下降到FP32模型的1/4。
 
 ## 部分模型精度结果
 
 | 模型 | 测试数据集 | method_x | FP32模型精度 | 训练后量化精度 | 说明 |
 | -------- | ------- | ----- | ----- | ----- | ----- |
-| mnist.tflite | [MNIST](http://yann.lecun.com/exdb/mnist/) | MAX_MIN | 97.61% | 97.83% | 校准数据集选择MNIST Test数据集的前100张 |
-| mobilenet_v1.tflite | [MNIST](http://yann.lecun.com/exdb/mnist/) | MAX_MIN | 98.36% | 98.40% | 校准数据集选择MNIST Test数据集的前100张 |
-
-
+| [Inception_V3](https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz) | [ImageNet](http://image-net.org/) | KL | 77.92% | 77.95% | 校准数据集随机选择ImageNet Validation数据集中的100张 |
+| [Mobilenet_V1_1.0_224](https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz) | [ImageNet](http://image-net.org/) | KL | 70.96% | 70.69% | 校准数据集随机选择ImageNet Validation数据集中的100张 |
+> 以上所有结果均在x86环境上测得。
diff --git a/lite/tutorials/source_zh_cn/use/runtime.md b/lite/tutorials/source_zh_cn/use/runtime.md
index 36a5af6d4395251fb8ded2ad7bc032fb261baf90..7444b59305fdbb3900255962a1ed6af82729b8d0 100644
--- a/lite/tutorials/source_zh_cn/use/runtime.md
+++ b/lite/tutorials/source_zh_cn/use/runtime.md
@@ -354,6 +354,22 @@
 if (out_tensor == nullptr) {
 }
 ```
 
+下面示例代码演示了使用`GetOutputByTensorName`接口获取输出`MSTensor`的方法:
+
+```cpp
+// We can use GetOutputTensorNames method to get all name of output tensor of model which is in order.
+auto tensor_names = this->GetOutputTensorNames();
+// Assume we have created a LiteSession instance named session before.
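+// Note: `this` here is assumed to be the LiteSession instance created above,
+// so GetOutputTensorNames and GetOutputByTensorName are the session's own methods.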
+// Use output tensor name returned by GetOutputTensorNames as key
+for (auto tensor_name : tensor_names) {
+  auto out_tensor = this->GetOutputByTensorName(tensor_name);
+  if (out_tensor == nullptr) {
+    std::cerr << "Output tensor is nullptr" << std::endl;
+    return -1;
+  }
+}
+```
+
 ## 获取版本号
 
 MindSpore Lite提供了`Version`方法可以获取版本号,包含在`include/version.h`头文件中,调用该方法可以得到版本号字符串。
 
diff --git a/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md b/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md
index 840d20f707e444135ce2b4e402baf4a4fe828b27..14bcfb006eda63a579220490ae1fa4bf10c34739 100644
--- a/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md
+++ b/lite/tutorials/source_zh_cn/use/timeprofiler_tool.md
@@ -20,9 +20,9 @@ TimeProfiler工具可以对MindSpore Lite模型网络层的前向推理进行耗时分析
 
 使用TimeProfiler工具,需要进行如下环境准备工作。
 
-- 编译:TimeProfiler工具代码在MindSpore源码的`mindspore/lite/tools/time_profiler`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id2)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id5),安装编译依赖基本项,并执行编译。
+- 编译:TimeProfiler工具代码在MindSpore源码的`mindspore/lite/tools/time_profiler`目录中,参考部署文档中的[环境要求](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id2)和[编译示例](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id5),安装编译依赖基本项,并执行编译。
 
-- 运行:参考部署文档中的[输出件说明](https://www.mindspore.cn/lite/tutorial/zh-CN/master/deploy.html#id4),获得`time_profiler`工具,并配置环境变量。
+- 运行:参考部署文档中的[输出件说明](https://www.mindspore.cn/lite/tutorial/zh-CN/master/compile.html#id4),获得`time_profiler`工具,并配置环境变量。
 
 ## 参数说明
 
@@ -46,10 +46,10 @@
 
 ## 使用示例
 
-使用TimeProfiler对`tcpclassify.ms`模型的网络层进行耗时分析,并且设置模型推理循环运行次数为10,则其命令代码如下:
+使用TimeProfiler对`test_timeprofiler.ms`模型的网络层进行耗时分析,并且设置模型推理循环运行次数为10,则其命令代码如下:
 
 ```bash
-./timeprofiler --modelPath=./models/siteAI/tcpclassify.ms --loopCount=10
+./timeprofiler --modelPath=./models/test_timeprofiler.ms --loopCount=10
 ```
 
 该条命令执行后,TimeProfiler工具会输出模型网络层运行耗时的相关统计信息。对于本例命令,输出的统计信息如下。其中统计信息按照`opName`和`optype`两种划分方式分别显示,`opName`表示算子名,`optype`表示算子类别,`avg`表示该算子的平均单次运行时间,`percent`表示该算子运行耗时占所有算子运行总耗时的比例,`calledTimess`表示该算子的运行次数,`opTotalTime`表示该算子运行指定次数的总耗时。最后,`total time`和`kernel cost`分别显示了该模型单次推理的平均耗时和模型推理中所有算子的平均耗时之和。
 
diff --git a/resource/faq/FAQ_en.md b/resource/faq/FAQ_en.md
index 681d68614b8840f572c53221c748d1fe4bb01157..30335b436d65f797ba361674801d8a513125e8bc 100644
--- a/resource/faq/FAQ_en.md
+++ b/resource/faq/FAQ_en.md
@@ -16,6 +16,8 @@
 
 - [Supported Features](#supported-features)
 
+This document has been transferred to a [new location](https://www.mindspore.cn/docs/en/master/FAQ.html). This page will be taken offline later.
+
 
 ## Installation
 
diff --git a/resource/faq/FAQ_zh_cn.md b/resource/faq/FAQ_zh_cn.md
index 9873b933d666d18b00026515f9cfd14a228f23f5..1e06e6c30f88b04ec5a368a69fa5f30c3e748e1f 100644
--- a/resource/faq/FAQ_zh_cn.md
+++ b/resource/faq/FAQ_zh_cn.md
@@ -16,6 +16,7 @@
 
 - [特性支持](#特性支持)
 
+此文档已经转移到[新的位置](https://www.mindspore.cn/docs/zh-CN/master/FAQ.html),此页面后续会下线。
 
 ## 安装类
 
diff --git a/tutorials/notebook/linear_regression.ipynb b/tutorials/notebook/linear_regression.ipynb
index b67175937c7ec66376c1431e764e81fbfb150e7e..291d0a4c5b4e26470649618d4d2aa767f806acdb 100644
--- a/tutorials/notebook/linear_regression.ipynb
+++ b/tutorials/notebook/linear_regression.ipynb
@@ -49,7 +49,9 @@
    "\n",
    "MindSpore版本:GPU\n",
    "\n",
-    "设置MindSpore运行配置"
+    "设置MindSpore运行配置\n",
+    "\n",
+    "第三方支持包:`matplotlib`,未安装此包的,可使用命令`pip install matplotlib`预先安装。"
   ]
  },
 {
@@ -466,7 +468,7 @@
    "\n",
    "$$w_{ud}=w_{s}-\\alpha\\frac{\\partial{J(w_{s})}}{\\partial{w}}\\tag{11}$$\n",
    "\n",
-    "当权重$w$在更新的过程中假如临近$w_{min}$在增加或者减少一个$\\Delta{w}$,从左边或者右边越过了$w_{min}$,公式10都会使权重往反的方向移动,那么最终$w_{s}$的值会在$w_{min}$附近来回迭代,在实际训练中我们也是这样采用迭代的方式取得最优权重$w$,使得损失函数无限逼近局部最小值。"
+    "当权重$w$在更新的过程中假如临近$w_{min}$在增加或者减少一个$\\Delta{w}$,从左边或者右边越过了$w_{min}$,公式11都会使权重往反的方向移动,那么最终$w_{s}$的值会在$w_{min}$附近来回迭代,在实际训练中我们也是这样采用迭代的方式取得最优权重$w$,使得损失函数无限逼近局部最小值。"
   ]
  },
 {
@@ -546,7 +548,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "`nn.RMSProp`为完成权重更新的函数,更新方式大致为公式10,但是考虑的因素更多,具体信息请参考[官网说明](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html?highlight=rmsprop#mindspore.nn.RMSProp)。"
+    "`nn.RMSProp`为完成权重更新的函数,更新方式大致为公式11,但是考虑的因素更多,具体信息请参考[官网说明](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html?highlight=rmsprop#mindspore.nn.RMSProp)。"
   ]
  },
 {
@@ -718,7 +720,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "本次体验我们了解了线性拟合的算法原理,并在MindSpore框架下实现了相应的算法定义,了解了线性拟合这类的线性回归模型在MindSpore中的训练过程,并最终拟合出了一条接近目标函数的模型函数。另外有兴趣的可以调整数据集的生成区间从(-10,10)扩展到(-100,100),看看权重值是否更接近目标函数;调整学习率大小,看看拟合的效率是否有变化;当然也可以探索如何使用MindSpore拟合$f(x)=ax^2+bx+c$这类的二次函数或者更高阶的函数。"
+    "本次体验我们了解了线性拟合的算法原理,并在MindSpore框架下实现了相应的算法定义,了解了线性拟合这类的线性回归模型在MindSpore中的训练过程,并最终拟合出了一条接近目标函数的模型函数。另外有兴趣的可以调整数据集的生成区间从(-10,10)扩展到(-100,100),看看权重值是否更接近目标函数;调整学习率大小,看看拟合的效率是否有变化;当然也可以探索如何使用MindSpore拟合$f(x)=ax^2+bx+c$这类的二次函数或者更高次的函数。"
   ]
  }
 ],
diff --git a/tutorials/notebook/optimize_the_performance_of_data_preparation/images/compose.png b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/compose.png
new file mode 100644
index 0000000000000000000000000000000000000000..a1dcbf92d4ce37bd9b794b6c04bc38b131cc3a40
Binary files /dev/null and b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/compose.png differ
diff --git a/tutorials/notebook/optimize_the_performance_of_data_preparation/images/data_enhancement_performance_scheme.png b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/data_enhancement_performance_scheme.png
new file mode 100644
index 0000000000000000000000000000000000000000..a21caea16f4ee0852be47c3e56b32d184f06a7de
Binary files /dev/null and b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/data_enhancement_performance_scheme.png differ
diff --git a/tutorials/notebook/optimize_the_performance_of_data_preparation/images/data_loading_performance_scheme.png b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/data_loading_performance_scheme.png
new file mode 100644
index 
0000000000000000000000000000000000000000..fd32feee9d720141fc1bfcf3bb03cd40363316e6 Binary files /dev/null and b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/data_loading_performance_scheme.png differ diff --git a/tutorials/notebook/optimize_the_performance_of_data_preparation/images/operator_fusion.png b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/operator_fusion.png new file mode 100644 index 0000000000000000000000000000000000000000..bd3a88cfb04825f7469e76bcd48988a596ce222d Binary files /dev/null and b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/operator_fusion.png differ diff --git a/tutorials/notebook/optimize_the_performance_of_data_preparation/images/pipeline.png b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/pipeline.png new file mode 100644 index 0000000000000000000000000000000000000000..5fb3f3defd20eb700c0e16d6dff5d57a1d2007c9 Binary files /dev/null and b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/pipeline.png differ diff --git a/tutorials/notebook/optimize_the_performance_of_data_preparation/images/shuffle_performance_scheme.png b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/shuffle_performance_scheme.png new file mode 100644 index 0000000000000000000000000000000000000000..d09ca3dda379502827d58c1269599fa4381cbf76 Binary files /dev/null and b/tutorials/notebook/optimize_the_performance_of_data_preparation/images/shuffle_performance_scheme.png differ diff --git a/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb b/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb new file mode 100644 index 0000000000000000000000000000000000000000..dc00235838f2747b04bb2043a3a33d7c44053a26 --- /dev/null +++ b/tutorials/notebook/optimize_the_performance_of_data_preparation/optimize_the_performance_of_data_preparation.ipynb @@ -0,0 +1,773 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#
优化数据准备的性能" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 概述\n", + "\n", + "数据是整个深度学习中最重要的一环,因为数据的好坏决定了最终结果的上限,模型的好坏只是去无限逼近这个上限,所以高质量的数据输入,会在整个深度神经网络中起到积极作用,数据在整个数据处理和数据增强的过程像经过pipeline管道的水一样,源源不断地流向训练系统,如图所示:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![title](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/optimize_the_performance_of_data_preparation/images/pipeline.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "MindSpore为用户提供了数据处理以及数据增强的功能,在数据的整个pipeline过程中,其中的每一步骤,如果都能够进行合理的运用,那么数据的性能会得到很大的优化和提升。本次体验将基于CIFAR-10数据集来为大家展示如何在数据加载、数据处理和数据增强的过程中进行性能的优化。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 整体流程\n", + "- 准备环节。\n", + "- 数据加载性能优化。\n", + "- shuffle性能优化。\n", + "- 数据增强性能优化。\n", + "- 性能优化方案总结。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 准备环节" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 导入模块" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`dataset`模块提供API用来加载和处理数据集。" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import mindspore.dataset as ds" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "`numpy`模块用于生成ndarray数组。" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 下载所需数据集" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. 在jupyter工作目录下创建`./dataset/Cifar10Data`目录,本次体验所用的数据集存放在该目录下。\n", + "2. 在jupyter工作目录下创建`./transform`目录,本次体验转换生成的数据集存放在该目录下。\n", + "3. 下载[CIFAR-10二进制格式数据集](https://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz),并将数据集文件解压到`./dataset/Cifar10Data/cifar-10-batches-bin`目录下,数据加载的时候使用该数据集。\n", + "4. 
下载[CIFAR-10 Python文件格式数据集](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz),并将数据集文件解压到`./dataset/Cifar10Data/cifar-10-batches-py`目录下,数据转换的时候使用该数据集。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "目录结构如下所示:\n", + "\n", + "\n", + " dataset/Cifar10Data\n", + " ├── cifar-10-batches-bin\n", + " │   ├── batches.meta.txt\n", + " │   ├── data_batch_1.bin\n", + " │   ├── data_batch_2.bin\n", + " │   ├── data_batch_3.bin\n", + " │   ├── data_batch_4.bin\n", + " │   ├── data_batch_5.bin\n", + " │   ├── readme.html\n", + " │   └── test_batch.bin\n", + " └── cifar-10-batches-py\n", + " ├── batches.meta\n", + " ├── data_batch_1\n", + " ├── data_batch_2\n", + " ├── data_batch_3\n", + " ├── data_batch_4\n", + " ├── data_batch_5\n", + " ├── readme.html\n", + " └── test_batch\n", + "\n", + "其中:\n", + "- `cifar-10-batches-bin`目录为CIFAR-10二进制格式数据集目录。\n", + "- `cifar-10-batches-py`目录为CIFAR-10 Python文件格式数据集目录。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 数据加载性能优化" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "MindSpore为用户提供了多种数据加载方式,其中包括常用数据集加载、用户自定义数据集加载、MindSpore数据格式加载,详情内容请参考[加载数据集](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html)。对于数据集加载,底层实现方式的不同,会导致数据集加载的性能存在差异,如下所示:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "| | 常用数据集 | 用户自定义 | MindRecord |\n", + "| :----: | :----: | :----: | :----: |\n", + "| 底层实现 | C++ | Python | C++ |\n", + "| 性能 | 高 | 中 | 高|" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 性能优化方案" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![title](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/optimize_the_performance_of_data_preparation/images/data_loading_performance_scheme.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "数据加载性能优化建议如下:\n", + "- 已经支持的数据集格式优选内置加载算子,具体内容请参考[内置加载算子](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html),如果性能仍无法满足需求,则可采取多线程并发方案,请参考本文[多线程优化方案](#thread)。\n", + "- 不支持的数据集格式,优选转换为MindSpore数据格式后再使用`MindDataset`类进行加载,具体内容请参考[将数据集转换为MindSpore数据格式](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/converting_datasets.html),如果性能仍无法满足需求,则可采取多线程并发方案,请参考本文[多线程优化方案](#thread)。\n", + "- 不支持的数据集格式,算法快速验证场景,优选用户自定义`GeneratorDataset`类实现,如果性能仍无法满足需求,则可采取多进程并发方案,请参考本文[多进程优化方案](#process)。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 代码示例\n", + "\n", + "基于以上的数据加载性能优化建议,本次体验分别使用内置加载算子`Cifar10Dataset`类、数据转换后使用`MindDataset`类、使用`GeneratorDataset`类进行数据加载,代码演示如下:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. 
使用内置算子`Cifar10Dataset`类加载CIFAR-10数据集,这里使用的是CIFAR-10二进制格式的数据集,加载数据时采取多线程优化方案,开启了4个线程并发完成任务,最后对数据创建了字典迭代器,并通过迭代器读取了一条数据记录。" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "0.0005214214324951172\n", + "{'image': array([[[235, 235, 235],\n", + " [230, 230, 230],\n", + " [234, 234, 234],\n", + " ...,\n", + " [248, 248, 248],\n", + " [248, 248, 248],\n", + " [249, 249, 249]],\n", + "\n", + " [[216, 216, 216],\n", + " [213, 213, 213],\n", + " [215, 215, 215],\n", + " ...,\n", + " [254, 254, 254],\n", + " [253, 253, 253],\n", + " [253, 253, 253]],\n", + "\n", + " [[213, 213, 213],\n", + " [217, 217, 217],\n", + " [215, 215, 215],\n", + " ...,\n", + " [255, 255, 255],\n", + " [254, 254, 254],\n", + " [254, 254, 254]],\n", + "\n", + " ...,\n", + "\n", + " [[195, 195, 195],\n", + " [200, 200, 200],\n", + " [202, 202, 202],\n", + " ...,\n", + " [138, 138, 138],\n", + " [143, 143, 143],\n", + " [172, 171, 176]],\n", + "\n", + " [[205, 205, 205],\n", + " [205, 205, 205],\n", + " [211, 211, 211],\n", + " ...,\n", + " [112, 112, 112],\n", + " [130, 130, 132],\n", + " [167, 163, 184]],\n", + "\n", + " [[210, 210, 210],\n", + " [209, 209, 209],\n", + " [213, 213, 213],\n", + " ...,\n", + " [120, 120, 119],\n", + " [146, 146, 146],\n", + " [177, 174, 190]]], dtype=uint8), 'label': array(9, dtype=uint32)}\n" + ] + } + ], + "source": [ + "cifar10_path = \"./dataset/Cifar10Data/cifar-10-batches-bin/\"\n", + "\n", + "# create Cifar10Dataset for reading data\n", + "cifar10_dataset = ds.Cifar10Dataset(cifar10_path,num_parallel_workers=4)\n", + "# create a dictionary iterator and read a data record through the iterator\n", + "print(next(cifar10_dataset.create_dict_iterator()))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "2. 使用`Cifar10ToMR`这个类将CIFAR-10数据集转换为MindSpore数据格式,这里使用的是CIFAR-10 python文件格式的数据集,然后使用`MindDataset`类加载MindSpore数据格式数据集,加载数据采取多线程优化方案,开启了4个线程并发完成任务,最后对数据创建了字典迭代器,并通过迭代器读取了一条数据记录。" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'data': array([255, 216, 255, ..., 63, 255, 217], dtype=uint8), 'id': array(30474, dtype=int64), 'label': array(2, dtype=int64)}\n" + ] + } + ], + "source": [ + "from mindspore.mindrecord import Cifar10ToMR\n", + "\n", + "cifar10_path = './dataset/Cifar10Data/cifar-10-batches-py/'\n", + "cifar10_mindrecord_path = './transform/cifar10.record'\n", + "\n", + "cifar10_transformer = Cifar10ToMR(cifar10_path,cifar10_mindrecord_path)\n", + "# executes transformation from Cifar10 to MindRecord\n", + "cifar10_transformer.transform(['label'])\n", + "\n", + "# create MindDataset for reading data\n", + "cifar10_mind_dataset = ds.MindDataset(dataset_file=cifar10_mindrecord_path,num_parallel_workers=4)\n", + "# create a dictionary iterator and read a data record through the iterator\n", + "print(next(cifar10_mind_dataset.create_dict_iterator()))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "3. 
使用`GeneratorDataset`类加载自定义数据集,并且采取多进程优化方案,开启了4个进程并发完成任务,最后对数据创建了字典迭代器,并通过迭代器读取了一条数据记录。" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'data': array([0], dtype=int64)}\n" + ] + } + ], + "source": [ + "def generator_func(num):\n", + " for i in range(num):\n", + " yield (np.array([i]),)\n", + "\n", + "# create GeneratorDataset for reading data\n", + "dataset = ds.GeneratorDataset(source=generator_func(5),column_names=[\"data\"],num_parallel_workers=4)\n", + "# create a dictionary iterator and read a data record through the iterator\n", + "print(next(dataset.create_dict_iterator()))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## shuffle性能优化" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "shuffle操作主要是对有序的数据集或者进行过repeat的数据集进行混洗,MindSpore专门为用户提供了`shuffle`函数,其中设定的`buffer_size`参数越大,混洗程度越大,但时间、计算资源消耗也会大。该接口支持用户在整个pipeline的任何时候都可以对数据进行混洗,具体内容请参考[shuffle处理](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/data_processing_and_augmentation.html#shuffle)。但是因为底层的实现方式不同,该方式的性能不如直接在[内置加载算子](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)中设置`shuffle`参数直接对数据进行混洗。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 性能优化方案" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![title](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/optimize_the_performance_of_data_preparation/images/shuffle_performance_scheme.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "shuffle性能优化建议如下:\n", + "- 直接使用内置加载算子的`shuffle`参数进行数据的混洗。\n", + "- 如果使用的是`shuffle`函数,当性能仍无法满足需求,可通过调大`buffer_size`参数的值来优化提升性能。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 代码示例\n", + "\n", + "基于以上的shuffle性能优化建议,本次体验分别使用内置加载算子`Cifar10Dataset`类的`shuffle`参数和`Shuffle`函数进行数据的混洗,代码演示如下:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. 
使用内置算子`Cifar10Dataset`类加载CIFAR-10数据集,这里使用的是CIFAR-10二进制格式的数据集,并且设置`shuffle`参数为True来进行数据混洗,最后对数据创建了字典迭代器,并通过迭代器读取了一条数据记录。" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'image': array([[[254, 254, 254],\n", + " [255, 255, 254],\n", + " [255, 255, 254],\n", + " ...,\n", + " [232, 234, 244],\n", + " [226, 230, 242],\n", + " [228, 232, 243]],\n", + "\n", + " [[251, 251, 251],\n", + " [253, 253, 254],\n", + " [255, 255, 255],\n", + " ...,\n", + " [225, 227, 235],\n", + " [227, 231, 241],\n", + " [229, 233, 243]],\n", + "\n", + " [[250, 250, 250],\n", + " [251, 251, 251],\n", + " [253, 253, 253],\n", + " ...,\n", + " [233, 235, 241],\n", + " [233, 236, 245],\n", + " [238, 242, 250]],\n", + "\n", + " ...,\n", + "\n", + " [[ 67, 64, 71],\n", + " [ 65, 62, 69],\n", + " [ 64, 61, 68],\n", + " ...,\n", + " [ 71, 67, 70],\n", + " [ 71, 68, 70],\n", + " [ 69, 65, 68]],\n", + "\n", + " [[ 62, 58, 64],\n", + " [ 59, 55, 61],\n", + " [ 61, 58, 64],\n", + " ...,\n", + " [ 64, 62, 64],\n", + " [ 61, 58, 59],\n", + " [ 62, 60, 61]],\n", + "\n", + " [[ 66, 60, 65],\n", + " [ 64, 59, 64],\n", + " [ 66, 60, 65],\n", + " ...,\n", + " [ 64, 61, 63],\n", + " [ 63, 58, 60],\n", + " [ 61, 56, 58]]], dtype=uint8), 'label': array(9, dtype=uint32)}\n" + ] + } + ], + "source": [ + "cifar10_path = \"./dataset/Cifar10Data/cifar-10-batches-bin/\"\n", + "\n", + "# create Cifar10Dataset for reading data\n", + "cifar10_dataset = ds.Cifar10Dataset(cifar10_path,shuffle=True)\n", + "# create a dictionary iterator and read a data record through the iterator\n", + "print(next(cifar10_dataset.create_dict_iterator()))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "2. 
使用`shuffle`函数进行数据混洗,参数`buffer_size`设置为3,数据采用`GeneratorDataset`类自定义生成。" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": { + "scrolled": true + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "before shuffle:\n", + "[0 1 2 3 4]\n", + "[1 2 3 4 5]\n", + "[2 3 4 5 6]\n", + "[3 4 5 6 7]\n", + "[4 5 6 7 8]\n", + "after shuffle:\n", + "[2 3 4 5 6]\n", + "[0 1 2 3 4]\n", + "[4 5 6 7 8]\n", + "[1 2 3 4 5]\n", + "[3 4 5 6 7]\n" + ] + } + ], + "source": [ + "def generator_func():\n", + " for i in range(5):\n", + " yield (np.array([i,i+1,i+2,i+3,i+4]),)\n", + "\n", + "ds1 = ds.GeneratorDataset(source=generator_func,column_names=[\"data\"])\n", + "print(\"before shuffle:\")\n", + "for data in ds1.create_dict_iterator():\n", + " print(data[\"data\"])\n", + "\n", + "ds2 = ds1.shuffle(buffer_size=3)\n", + "print(\"after shuffle:\")\n", + "for data in ds2.create_dict_iterator():\n", + " print(data[\"data\"])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 数据增强性能优化" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "在图片分类的训练中,尤其是当数据集比较小的时候,用户可以使用数据增强的方式来预处理图片,从而丰富数据集。MindSpore为用户提供了多种数据增强的方式,其中包括:\n", + "- 使用内置C算子(`c_transforms`模块)进行数据增强。\n", + "- 使用内置Python算子(`py_transforms`模块)进行数据增强。\n", + "- 用户可根据自己的需求,自定义Python函数进行数据增强。\n", + "\n", + "具体的内容请参考[数据增强](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/data_processing_and_augmentation.html#id3)。因为底层的实现方式不同,所以性能还是有一定的差异,如下所示:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "| 模块 | 底层接口 | 说明 |\n", + "| :----: | :----: | :----: |\n", + "| c_transforms | C++(基于OpenCV)| 性能高 |\n", + "| py_transforms | Python(基于PIL) | 该模块提供了多种图像增强功能,并提供了PIL Image和Numpy数组之间的传输方法 |\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 性能优化方案" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![title](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/optimize_the_performance_of_data_preparation/images/data_enhancement_performance_scheme.png)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "数据增强性能优化建议如下:\n", + "- 优先使用`c_transforms`模块进行数据增强,因为性能最高,如果性能仍无法满足需求,可采取[多线程优化方案](#thread)、[Compose优化方案](#compose)或者[算子融合优化方案](#fusion)。\n", + "- 如果使用了`py_transforms`模块进行数据增强,当性能仍无法满足需求,可采取[多线程优化方案](#thread)、[多进程优化方案](#process)、[Compose优化方案](#compose)或者[算子融合优化方案](#fusion)。\n", + "- `c_transforms`模块是在C++内维护buffer管理,`py_transforms`模块是在Python内维护buffer管理。因为Python和C++切换的性能成本,建议不要混用算子。\n", + "- 如果用户使用了自定义Python函数进行数据增强,当性能仍无法满足需求,可采取[多线程优化方案](#thread)或者[多进程优化方案](#process),如果还是无法提升性能,就需要对自定义的Python代码进行优化。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 代码示例\n", + "\n", + "基于以上的数据增强性能优化建议,本次体验分别使用`c_transforms`模块和自定义Python函数进行了数据增强,演示代码如下所示:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. 
使用`c_transforms`模块进行数据增强,数据增强时采用多线程优化方案,开启了4个线程并发完成任务,并且采用了算子融合优化方案,使用`RandomResizedCrop`融合类替代`RandomResize`类和`RandomCrop`类。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "<matplotlib图像输出,base64数据从略;原文档自此处截断,该代码单元的其余内容(含source)缺失>"
     }
    }
   ]
  },
MEBD6V0i9bTutGuw9qJFKPRZ9prLYa5/aYtJYlLaIQdjlFLOEEo/NuVpzDJrFsNi2OITEiL5x70v5yB6GYRRddf6m1TdDHfsOWfzu22w377clMgf0J++0pvfv77QnbKBPgNK9+mbrLC3rE6EEenRh7LqjT03BY5Vh98TKMBph+nKVBStXnWq9r3V6FRz3jl/clBIilyE/mgMzQ2vYX4//HdmDM6f6Qrc4P0/i6KVS3zF1kebqPYaj6YLADQwembljHXSDaafqQlrCAaDm9JcswRqkyGzZG8rh9cngTICAieLrt602QqdUazSMKGZNNRjZc3/E2TIFgC/UG0e6z8PORh/LyuP4rDWSAVACwhYCJLdXFM+P61OGy2XMcgHv7IaO67bbd+vNvBgS32xP23UBg22/e5bf79dJbIF3w2wq9Iwq5FzbNffSyApoHWnsxvETPjTE4w+mmbiu/tVg/PBwmQJxvCf0S3LAzgfWdoPgzqfi1r8FiskFbY9iAKwzbL3LOiW3u/u7E1D2/uUcXrDtjLxc8ydmTnEJp/64mLc+XEfhmuR8sMwtvAgSGCD57eqobq/YnRtCytWgSe0SIEUhdxxPUAKrhnSPS9gxp7ASOlwt4+Dx9+xt1HNZ4LCnLBKAAgXTmuQYaO2QM0/C3J+z7DfvtHW5PTw4C7wocHAjCHNj33eJhFhJg0JiJ529QXgjzEuoAACAASURBVMWXtVqyykNQlDSVEiqw8K+2aTg7GQAUyN60snFf04jXZ6Sq2a95u/YUFgL/NN0UHQDyWH+ZMilTSQFgq/Y3VXHTMBst9ybc9xyLEb0v2QvDwJws1oHNC6eVx1pWmSl7ZryijUZ4EyAgIni3gACPhS5BRGcBfCJVCTwLraai1gul5BqNoPiW80tW0FLRgkIxUBtANECImXwIrdRQLgV/c+E1z72BwnZ7wu3pHXYX/AQBv5faf2MmcGsAAImdfzoV7ZOS0rythtgzmPmM60ZaPwAGIgsQgHSbLvXSGnsv79PVqiE4DwQANfCsIlEgB/UkAICO7YtWjjKGz44s5WBDxf0Nykf0rNwcmLdbAUGaG8EGGtuMAhbwmAPIOpalmxUJ6q8IbwIEhgiebre6wekn26fxHSx5lBIq8XNu1LK8JBzBC6BQu/4uCYvrbOWcmJhEE+aJg0GuEiP5VEM2CGQjB50DwLbvGCM0u89Dd3t/d3PAtP+Wwj+o/3/V9k3gUWVESJrnZxs1ygSLgPuxOabotSWiNiiyVP9SjhzOvOAaGKre4jrrEB0IDLyKBXiqzXnL2t//ViAQiHXV+qQre0ywDcXcFBvFBxHIUeZAF/4yCXifiGsZJlRsiq18SqlcvkogIGNcmANojECycdIzdF4y6g09CrExgSsWIFyv/TdulCwk/M5q//qJiG0GGmAUvolBQh+LYPAxhf+2Yxvm9Nu2G8a+43Z7wu3JBf/pHZ6ezAy4Pb3zrsBoXHtO+d02H1rs+ZLWwKSVa/7rv4VSZo0d2icAgJ1+0UtQ5679Nc+qetXurCxtNQtY9BtWXICFIBZGiatF+5/qsGLMgVzau+8IAygNAomF/mDrJRjSCDSmYEZb9O9FT0wsJLLvNxqQtS3+gGJtWZY5nTXyV+ZBlMSIunSW8pUCgbGaA7GYSlScLOcRTsIbWpu03gkAQtvROdb7ON/39LRvL1qQr91thEDpEZojIlNAtfchi/quO3un87sDwe3JzYHbDU/t/DNnCzVSMI/bVpoBXA5LfpZQxFPOiqflWx+AAQNBPJYiFAQv4xVIX+9jNQuy0K/CuVFcAYCGxkSBWx/HUQDQ/9DYgkU6gKG18AcA0Vq4NEAgHK5jHNjmUfM09jDdiBWkc3BlAWemaZXiQMTKj3Zhal25L4Q3AQKn3oEHws/K+xoMSujy3XhR2CF0odGvrq+AgL7Hwt/YbraD/l/cAVHN6Y0/NFD23btGDyCw7r9uCtyebth3O27bzd/daGBQ9CxcNQbOU2WA89E1b5gv/Mzy3gOhb8N7GTy7Mj6nbzlnfLj6rc0tOdn/5H8QOeWTwbjYQN3rH3UmELpYCN69b3/z+za2w8YVHFOcnfnozBHDsnkp9KW9trCCwVJibOoNZwOvCG8HBJ7YJ8CCH0iHaswXDCCO3JV3AoUrrf6aeOl5Xe/7TcX5Hgt+LdwprbENn2c81fZCNa++0f5tC2F/h23fcbu9w9O7Jxd87ymIsQDbnt19PGnoMQhkMi8ycJWhLoj2lEuxluYFNJlrLPohsiyEQeAuyzHqqapLqNrO+WBgl+VYWC+XR4WBrqgCc7Y6ZodgOQkdyFRJHAsA4rO21oJmckUEx5gYM+ZkDOomHJ0NxI7MabpRRUk/N3DSVlY5qGuMrxoIADdiAk2TEwA0c4BLHXy/hL+f1/NBBZkQANWI0Y4UN+K5l/MT1L40PzEBDfvTn83dlOze2Pbu+b89Yc8egRJ8u2+Owm132n8xffhSq1CmBegN1nOwOvKu9JKVUqn/fBcgmx+o1XGcNqfGAjDCcepfyeqSOgIl2JyFNS8tDoKNzP9iCqnmSkDlkKPA/YLuIexdhRqJrbxpfMeMwZh9PMbA1IHdx20YCGw+WctAYI+uwtiDcZAjOdoomwKcDM8sL1s+aADeS+GNgIB5TesaJ21RbADoQs3HVfjXe/Ru/nYVz4XGz0oAisOuD/VIAgBC+EHXgAs+fUxVaCBQafkbnZtj6VaTg8ie5FGGvctvSeMJFCJzwQtWm/Q6p1YU7Iwrimy4EoAQ8SrSYcvCL+J7HyC1Xx29LOnjmXqhq+W5sy/onHdVtfKa6s2D62YxExoL4EHLnTWFwk5nsDtlVa3bOGZmlvbnORvcRRheJOkF7whbQCuZnvS1SI3+fE14MyDQmQASACDwFXJQjo4FDOJUWeA9oloBR07vZWQpt+XUOYOAVW/MaaAfWlxdVw1jASsA+LF3Edr9sRcTiF6A2+1d+gNyqPB+a2MDZNgONn04aq4h1gSlkhz/lvCzRgc981Ko7mtd3lKSkZKUtFnZFIhRcvG7nVyAAOel/9YEhoR9FSQGgVg/oFhTPOVp1zAHurOwYjp7TiJNwQY0112Aj+AcbaPZAITNp3bnvI3BeasQxM1KewK6FSkhANi+akxgNQdyAd3c6abTPAJtqjxr8DUEmEDA3832+ogFAIm0/R4AaDcFuPFFOjz97hFCAEEBQE0qSaCg6y1B4F2BwNOTzwnw7sKcJnzL3oRBAs/H1k11qR2r8XoWF6H9MAhwaACi0TYXVx4JvU2TCIcY1XMI0lXaueyL33egOOXvnBNV9eXDYsJPgHMwgVT/JyfhCmxpDvg/EVfunub5y6na20inII8YPA8dXnwD/i12snKvS4xKjQ1kXxPeDAjs+9aurZvQwWAxC/yh8/kDANCqGfoo6FmkkNc5utaXvFPPkq3ZACA/EAAwSvvrQNJOifMAgmEg8MTjAZ4aCPAYgOgJ2MYo47OjY4EBqhxLSCj/PgGo1mx4hfA/dFZnAYLWfLazFHQGAmICKfBeSu0eg8FyTnV7eubs5bT0TE0ve
nUp0+PNHRBgELP6lrz6aRhEnE+MAj62+Y0JSK7bMFKLSw3rvgCA2AHLCkTb94tZ2XdeE94MCNxoxKAMqvwUfmICKXMh/Egh1AQCPtazXfBXexanQtXlvrZnSZiwNGIXeAYB9fNcCNT3AJAEBAOBp6d3OQbg9vSZDQpyMyDmEdRIs1j2yyo8HY2o4xUAMEMoOe0LozysL7wg/w+FrpdZ7DlYYLDMWMT5vMq21/9DYIgHyZvf/Gm++YqM2YAgUmuy382CJAdrvij3YZKJ2NbuOTOzAUCNCYiFW0TEJ50RODLLza85v8r0FBOIYerGBL5S5gDw9MTjBKIggaBDIrD16vx31v4NBAgAEhCw0n8hWl9o2u39En52BnYQKGFqqG2JJeHvR8T+fohhonW977cGAk9P73B79y5nCYrEPH9ebXhDNpKg4Fo2viWJQCrKNNMatvulfruus6tr4V8ewIh/v2YxxrTlKsMCqwcgkHJBv0X0suSBQGDh8phiAjiODgBRkOUcZJPgap4+gSkq7cMp+SA/QGr8BQRil+eBcphKqhOh9Efpng0SOLgG4HylfAIAaVx4pkNRCWyBDbGi0BT6ejMoH49uU1qPMLv+IFR0cvoup8bee5DYiE8okQREZVu6wIfGD1YQ3TjC88tj3flw/O2N8tfSYMYakI22Eqn0B4SG4N+9rILmLqrRGtpLHICfpMIJ4V9bKi4e81djsJuxW6XCJpp7Skp5wtsakvRgZDnvtmGLaAJsjsGLWYP5mlKBlkAGgK5/mcu13XiE6VgU6d8LYFHJdnvqFYgC1B7t+vM3El6z0OivgW1B9h2wiVY/oqp/RD7ifoQK+Iq78c1e0ZKVX6XAmocFWekJ0u+n72VjpPNerOR8uSzdYhbSrhO5kH3Panv/CQZszz9fJCSmCzsIhK2/x4Sg3RcRWbsBF9pqbZU0g/I4BKWkxgo5kU8vUeV3NY8vBy3NHN8PdkFFVNYVrfLkaagNQ4iNRXJVsh1k6Tpy0JaSsHUZqfwbm6tbeULZ6kuK0YaiM3oDKCNA1q14GlZz/ereSdh9LwJTVANz+ouYGMxgNZzQI1Og7eScv9fA91V4DRO4A/hnVPWnROTrAH5SRP4zAL8LH3M/Qs6C+PqAgI/71pCrpIplq5cQatJ69CMrrgulU0BAjSRAga/XSo+bSbVJL0iQufABbAA2CIbZ9WJH3op6jBhXHvMF+kKhue49f8fLLoTf8kssKBIdmZR6Lie9kuB/ESCw4bcEBswQuNVKpYTlse3xdvqUw34TeEuTCaELia/H2JjkvBKI8zeUlxjPFX9LaPsrUiwgdlcihURE0K60RoYaC5DU+DOoxlQjdVNTEcqAAcWQbKuPnJvXd7XS/UpUeM1qwz8P30ZMVf9fEflp2NZiH3U/QqbeofVV2FFVfMAgtwqnv1sn0StAbqH+TFyEnLCWZODw0Ca4LKDgw2BKW6TwDwh8vX5sEGwp/CHgsd33iKW/NxsDENOBBwEBqPmdFVxor7IhFWoDemKsu6IWA6VDLm+9AsG5ZKk8xAU06ooZG6oOI02yNNwAGcIB/r0ovbHDFPjGCHTZk7Anlwf2ECkC1LaK4z0HYsvx0tiFAtaUwhQIm51+uziPj5pDuIR/TEvXxMSA+CItw6YjT0A3F2Q5xyzR3hmgrhQWcC6TB+EL+QRE5LsA/HoA/w1+ifsRCm1D9iu+/i1Lggn6aeHQxzv+lBZKMZfebDsbKNZgD1br6Ga0Zuz+GlVyaXymwdJ+LjCoY5/ltw0X9lE+gG3b3ZlUZkA4AVP3NDn2vFPDt/+LKcjleeSZAECLHbzIBE6t3eqgBJ8FU0hYvZYSoDP5SEeBxlO9t2LhGnSPwH1BxkUuuk8gdiImDc2r/XYZkvpvre9sPuvXKBr1/LfyDUKggE7IHNBNHIB7S6skSdbLiQlwwbyWBuALgICIfAuAfx/AP62q/8+5//KUFA6nFCltQ/a3fse36+oTsKGryQlKi3jj6Q1JT19dP3hOgB+uNL+fq0ZaHAoU5tfDldBL+yuHXzkBgQ1D9tTu27Zjj8E/mw0Htm6kMANo4w/3BzSNGUn3Si/tXwK/rgASk3ki7SsL4FVvXyg9187SWEAJbpkE9jsLsDdwFlYyHQrYPS+0uEDzx2rkZUll+DdWALsANG1bj7k/oJVBvCbJ9VvPijcS8cTK6kmO9x3c5rSxg3MgGYFv4woFMMeETOsd0NU0JUaiCZRL9bxe7lt4FQiIyA0GAP+2qv4HfvuXvB8hh9LMIABwxZAV4OIo1FSEMRN1v3G/BRz5Ps9nvQIC5hmyvBxsoEECdewwGIwNIjuG+HoBw6cG7zYAaHcTIBYWzdlk2+imgGsQ28snvNlOOTPpJThmDhR4hvAEPiC13/XfRU15TiWnyGaPw1oUfk+8nMknSF2toflKyLmqehyawBIomMroC/o1xDXwkf6A8g08ynvVp69CQGnj86aQ4zmTdkvOhO1VPQZkIn80Aaeegvoyl8hFZuj8GwCEDw4pcm//vwHgp1X1X6afYj9C4Lwf4e8UC78Zr9yPUPnPhdxm2fp/cQ++pxv9NlG/0Rv0X33BBIZ/KcQN0G5pYSdbAxGhczsRFgABwpNfU3xpr8BYASjXA7z5fIBabILXBmirzSxe+GACzAagPefxIJcQqHQQeU1GwFtsT/q7+p3KegGQ0qaahasI5oE8PgQi2PfXdfozHe7Zz70DLo8Xf6o+g3D6OaWb0lQauMS7jQ2J+61OuvbJkl7S3vwPrTyJQbwgzHo6qXb7RcJrmMBvAfA7APwPIvLf+b1/Hh9zP0KvqAyCXLIpxt2kQ0ukaYXVWRSQbO1uVU31gTZYCMtjAQo+aEMPH14KmhcvNv1UKI22n5gA0wTffA/UUJJWjvxgrm7jFW/rCoQQWaMZKdlijiOUcKVtWYVpz4VmjnJRtbRolA9pYWr89YeHWrTUPl8HWyqqmp9eX3EmEBvFdEcvxyjtM4a90s4rSgKVOPejUIxNYKbifj9wHHccx4F5mGlwHLUJyYkVBdJHPr3NtS3ls+gcYqdmeoe3iaFiKxRP8XUKrTzG8s0OhpUF7tbtadQX6u06vKZ34M+g1ziHj7IfocI8tRXEaFLYmpPKHk59U961Vqitlo/qkFpAIFsN3W+v0nMhOALogeyZy7QM9e5LMUoVIxrT5hXUsGG36XO14QHuvsuKJTnM36b6DMpVa9a7V0AAioPvZ+k0DR0acAGEqyAtUoQp0p22DWoaGwnB0ai3jHNlV5cfbmZZWQNkBiznLVBZqNrGo8f9wHFMO+ediFJD06ter+aTsoZjvUIBBPXteEmRSAT1BWitHUzECEnV4V2DV8JPecIFCBDnq89e5P1BeDMjBicluNOqIDilOePfCD2r3OzWI7IS27pWCsTqs8sql6HeENtttS76qW5QeeG73Qcp7S7tBaEI4ts8aw0IKmpRa2rNlfs10M9Gv5YK6UnvUtMsOy+DeD/jAN3rsvmgwKsseRJSaHs9P96St9AFFdTGBicwYBZg/1TREIVPwfEPtQTUtapingR/
Nr9AgS6lOZVF8xgtx6hTzbQhwMD7F2UO8xP4DsZnBkCAz3lEFtfCBCKHjwr9OrwJEFCgmQMyqnBJtzZap1ctMwQ2AYWfWTROPhIQXTQv9VfeUqeeLkwehQw1QRWxHYaHxSNt6vAq/KMdrRKFBB+piSekmQYAWqPsHuwoG6EzrRJkOhtgoKQxIgFsDoSGUZxKu6t3yYdUeRC3ZIPsRM3zkk4UfTxE+5SzXm+ZjCgLR8YGAvlJVgT2zP0eTMDAIEyBAATM2hOwciuNiUKZGUTcVbaKiMNDTI+HrUg9RW3w08kEmMhBRpFHMAhYuWX9ZV2+XJZreBMgsNI2nSDqrU6zadx7ahDQdfJCipiEnSuOACFXxsnf48PqDTrs1oWxCty+ti6iEa/OUBIOAOFE8iq3s9jk0hlDFYMBosL9ACWTKdarhiMN4G9kIlN7gD3zDyh6xql0/EAoCayCoRao62N0M+Yt6OmtuH7p+1R/4KQXYDJQ5hcoEfaYmwD+N4/DNih1NhDlkkWy5Ea8nrPrMoEwqqYcqHG0b8dQYGswIlJmAJsEfB4pb+esFKpdxH+vBYO3AQI4mwPVrEqzn7RBC8uvjPwZT5yW9tdA8LUhq5zYtfZXc4QXRmlkSYpPLAACW6JaKgKWGVbAQaFJW7E8NgbAotKyupZUNQgGgBgz0OOksiMp7jGWxitmpS1OpBPyui0WVa6xAa8DggYpl0ygefnjYxEf5S18AiX4PF5gopQTIXJ+lco2E8Ltz9Mzkb0XqtOdytMcyjoBtYVOrx2CDGyJ+Zfl5fBSSdRH5XcObwIEFFc+gRiIq5DpA07GheLh61V7tUshwXN4WUyB01JkWhZ0IwoZvxV9LB5l24bnBUxz+/RhKTaQDkIUE8gKn4oZ69mjZpvVJwtwSmmzhPtw2qWQrgC0eZPpr1PpAs+KQ7MwQoybGRa9EWtd8BJtoS1Rgsq+j7UB65KDkxg2jRjaMz5CYhPCoYA6E5j3YAQOBIczgVldMcJtixhlYjq3Qa160xx/YF2ZlnBaAxIKHRNzCsbqF0DPR8IRmQVRj90kuAbeR+FNgAAA84DHebFoH6HnZTdRtc9toiuH+r21XsmKi0ZcA2yEfo802AeEXmkLcEabV18sBDGo1CYLMf3PJcZChFjmZswpmwYI0R2qRhySYWSBVCVzb0KENjAINHqPxKjpkUsNxI0qysu/nQUQpEZwmuvfBkwAvSuPG2+BANX+CcsrtecfXmICKxuIY8Y2pzsGq3uQ/yQLun/vusFxyWrWT4BBjquYYcdHWrwMOa8MAAFiASoRPwk6OzTnVJ8DEebHh8PbAAHtIGCL5HihTjEDWc1uiva4vn/F0LomsquV+Odzy80ECJq4wi+LhsAP32Nwy78hNV04Vo01rV4aYtocUqi6196YobEAmZhjw/Btr6fvItzSqnVeSZeeRqDmO/jDLGx5z73hbQDQnCSM7a0Wf1uWS6QDQ9zzI/GITEv+cWW1+l1rC+3BBigkPMVq/FvJlkizprZeNTDq3WABrE84dRpFrtlJwmXGYq7Q7BK0OQi++lAMKONVohm8pzmeO0MoUDiOA2NsZNpsCWSvCW8DBKDmDYsrRTrRw/miw8wCgBr2onCqrT62htIOpdZc1TWc/UZVS05lLrQOSm/H4RODbBdimhYsO4ZPHbZIxcHOdxqSsBEBzGlCLtMXihyQMWvNuVhKnBp9sZoIQg0ILpDdHAAJQpZGCrzWAJlJjizXrgkYBCbnpcAqHSH8de6VGaxKuvAHGEirWG8DJEoXh8pbo8QlKCz4DAg5a5C7BGeVBxTpSEUI+cIMshVRORHZuIAva1djxJ8vOR4rDDMAhMKQ6XUSYM2ADdtzYgxsx4b7fYeMA5s7O18T3gYIrExA6ETUKHKYAtSISivHS4umi7gZv+k9jdrLiGdS+4gh13gLmjuK9gcAiE8MGuIThHyqMIINZG8AqCECmDMB5pAJ5BbmtflEnqfABRSxKROaRfwdB5wRGgjVMrOxliBMAoEZQ2ndQRYNsTQsSquSVovyqfPrnZCjclmW6nwxK/wbBQ7+WQdAhWa1nh1qkU4S+gUQLM9aQLBMIgqhFyq3KxYQpoZw2UTbo8uoo1pElIeT05qLCQRq7DHWOvC6OWYB9lTNFYzv+47tODDcxPlqgQBw4oOaA3EEYqoT6YG74vQNnlnTLfdKIfTrR7GI2hDm+GS2SPvLKcKxOlAuG7YBBAAWgbTGWeBj+YRMW/fuAgQGzx1ANah415gEAEyoL9rZPW2Lo4w1Zq6o47al8oi5B/6CLAYHxXYukBgNRwDWQaB6XxgEkgl4fPCqV63bWUcte1oRsTlAAl8MIQBQsyxWEAmju9rKSvcBbl+i1NZY+D2NAlofIwDTFxUNRiChaLJZlMaL+sk5EbQi0jF3jNbFeXwVzYHOBFLrquYIPN4YpIBgsdWbbum8oP9aNcQYcQkGMZsxplr5YKBgA4KRgr9JLBiyOwhsyJ6A8KRPcoSF1qLk5xZSLjy1K00XpqDXIXQ61NIRwjVGLqltn2KhKK1ujeWAzpkj5+JcSdvotGlaoZVMjFdzoM5rIdHaTmvk2ogOAlWkSIazmBIiYZJJUfMgdQHIpOkf2/cLCDwAt/DopzmwNAhuL8lptFpcMU9+z9qvRC+Rs4AE+FxDsquFAJmZ5T5pQFONdNzud8jYsB1HMQEf/PSa8DZAQAEcVXAaS/ORksSQ7B3INrMO5c3IuCI0z5nMlW273utgECBgQBCJEsSagWEOhFNwkx2b7MDYEPMDNHsHfAowaZx1kcsQllym2lcXjoVGBzWiEJIhI1fZATbbMpsGmohnam38COGe3DXmWsQbEXdz5XFOp+jd/8DnmV4CNN1GVpb6s1VLqxnhgJikKyoaDv4h4FF35TQDlS8u7zsgTDYHaGUh/6vGV01rNQfab5Ggq3YUsWWdeT0TEAjUgUAXgFGqH6ubu9fTcRzY7gfGuLeBTzEX4jXhTYBA2psexD3lAhf8oTUu39iR/RaM4CJGivl0P9hePAEd/S1+RZBrHajmjdTGsjgGa+2ADTxgSAPjFeSN52mtbocSCIwxTJDnwBzmNNSk11omwvCiGcOGukKgI4YuUqaSJVujRwp4MADvKtPOCspXQH4DFAj0qbUGljo2iEzoNqC6IdYjzP0RyCwIZgARDPHhouLm0Qj7uOpboi6EXKVhYk0ChxXwQOcnRrAyid5mqiSXBkfPMQa0NhRvpqYP/031CowRvyx8lRhMMwVymPOR3ZwxE/KYBeavCW8CBMLNltcpYNydUk4nejFIpDcQpRhDyztEd/XejcugEr5FmKS2Cs/tbo4/XgZsbDb/f7v5MuG1N+C+3SCyuda3ap0xRFgBDBf4abMQB8wDXKvmUqMPoPOjaXbKFxCSTS2vjw9oHJaeC3qPRTASIJbptG0tvoiRfAJ57uAzZORcoCGjplN4mUf98G7QKuK7KXnVTAniZTWqZFtTFVYfOk5CzubAmk9+pmvw+jfnWcFXAQopdxB1SkegMqlLz9dhWHgBs5hsk8FYop5QQ41
Tw88D98P+pt+7H3fIfWC/33G/P2NsA/f7gW274zXhbYCACG4b7UU4yHbi7al4Q1I+tnOi/7n6sFbjC22cpyR0sZ8fTfKJBUG3/VbntBDIvj9h92XCY6/A/XYDZEBVOhCgJgrVSDLq+nEKWjvTxsYc1J9MpgBf1/ZV5Ddg00mjrNEaX/O6hwJOOm4Px3loqwmhkclLDwHHGfFV1aBmaVKCtCidPUIUj2d1el2Gvox/q0JJ4FVb915o0xMwJCiW8AmCNdg3ZuLtYmLwoJw2QIcYqNK5LMucM9s6Dv++A4ZO/7P47scd9/vdTIB7p/7b/W69A/d7nm++aO1rwpsAAQDYBoMAykHUNmbESfCVzoFoJCH0So0lBD7eo0aXgxLqmAOBfOFPXgtw8+XBUvv7kZkAXAva+CbxGYLBCoLi13JWYRqkwLkTbd2SSrA4CLNspAkvQZvnWRv4MV8N4U+BpvOk+TloKpiSprQXAFT5LqQZJYgCSPli1AWeVyVOgZYazyGeUIGnXVb2VwLdFlMODZ9/i1+A/AXcjrI1kdY/sYi2SvGEHj7ngOMhIBbUCsexqlH2xmgsNRxxaJ3TRKfwAzAzqOt7O7/fv2pMYOdtyEj7OxCAGh8LPjOqSwbQjtw6eQ236uZLAIgFQsfmwl3bgO/7rYPAtmO78VbhN2/AwQBQrABwwYcLfw0EmVoed7a30zwKgY/7qHLpYEHvsVXAwr8wAi920voAPM0R7/QBW7YQpkUi9HIjaFnUIX2nhNDTpNd99iawMIKsSc+Taou1bAEH/1XTM+UPgeap05m24ItFzTMu3pxkTszj3px187BeFmavvJ/g8B6b8PR3RnCkKaHzaACgOnE/DmMAczorCACYpf3vd2z3Z9uV+G7L070mvA0QgGAfZ3NgBYPglinUjPiN8mf1eSuJN6JZMk9GCX/7i12Cduy3sP333BhkxHluFW7rA8aiobYPogu98jkIDO5ZcwAAIABJREFUAMjeThDwEiHN2gbh0L0oOyufcQEEXdaDIbH8xcYujUXEAqJDIBMYQzDVOkQhiild+Kse65axti78pf/jaQLn0P5TcqEWdmyywZDnxBTYHDj5SAgQeC2/NAPYhJAAgHqvxlAQAwjNvFB0nZM2IJXWOwJUlywP7bU/cV+MgYCl80jGcD/utu4BgcB0EDjud9wDBJ7vsO3s7hjjGa8Jr9mG7DMAfxrAO3/+T6rqHxCR7wbwYwC+DcBPAfgdqvpeRN7Bti37DQD+LwDfr6p/8QPfwMY+AcFjEACagBuwBwOoCsyKlNQfHjmZBSFQY/PGv+wN6LZVrAS8bTtut6em8c002Orc/QW6MgFYm6suQpxAIAfhpKDz1XKeIOD3r5iAP5CTilL4SWgu66NKpzP/8NNcvJqgGsDLcWn9Rgk5u/d8tmiYCErABUQFU3SdB64OPrtdWl4DLAgAqsy1Py/1Po8fiF4U08L3ouB309RzTtpxeGDb7N0YB2DCPWrCTwLMSDNjOhhYj4yP4Thmav6YI1BdgXeMY2R6xjFwP+7Yjo/HBD4H8FtV9a+JLT3+Z0TkPwHwewH8K6r6YyLyrwP4IdiWYz8E4C+r6q8VkR8A8IcAfP9LHxAIbsQEkPQfxFGtYUcdwu01heY4dL/rR6D6+LVa9sICAO6m2VL4h9g+gSb4t9wV+LY/mRPQTYPmMNyj12AHxPwAZQ7weZkDCQJAHzCFJksVruW2TIYcelpZjTUKmAXEuvZlAhQjCGHHEFsbP0yyaSxtTF1Mq7U2iQ1wwhd/QGlwejBMgBByocqlzGt+ya4S01ZGEILOZsCFOUB9C5Z+Bw0zLWpAldnk9xT6+/Ozeejvz7g/3zHngT13jrLu0W3fAKgxqmW0n87Ru2jnYUc92vXhPQLHsZzPie2+Q4axgG27m/J6fsZGvSwvhdcsNKoA/ppf3vxPAfxWAP+Y3/9RAP8CDAS+z88B4E8C+KMiIsoDAZYgAuxXPoFoDMEKRglTChKDQaSZTIACAxZ+Ohe4wNdy4EITgULwbzcHgdsTbrd35hvYategdcMQCPsD+j4B64AhoDOB1uhTsZGGS0W+gMaFwxAiplml5IvNgTiv98wmHwOYPoNTIL4qLjCmGCgIE/OHVQsm8oVAJW4ps3EnugB5TDHVKw/dVWgBDTn7qpHUb03YF39BgkTkpjkbyYE3J455T8fb8/3ZhN/BIADh2Hfc9oltbg5kFm/te0hDtMknUGM1HASOA3PeU+vfwxF4py7COXFs5gc4NusiNGbtG9a8Irx285ENwE8C+LUA/lUAfwHAX1HVcD/GVmMAbUOmqncR+asAfiWAv7TEmduQ/Y1f/zq2QdRlYQCgRh12s1FoL1wtrZbNUmq94RD4JhgJMj66jTcGlbre9926Ad0vsMfeANRtaOMHtjYEFJAc3zRFq5dAfElx44YJaEKC0J0dpOX8t64YlZQnOwPZk2/dk8MHE+kYqRlnRqtRLxCxiVR2HBhjYhJdVQYsSkoydTLfsksTVOaowVPhBIxrniw1fCt2zkeROO1MRENrB3AWyxKJsfcC0YnY9XdKthCrLRXMIRhT2roJUHLkHeSpj7UIwjygMfuHOwiD7k8RcgYKjjkxjsNynUO7lx6HXJKMcHsZehzrX8YzyTA/9twBVT0A/J0i8q0A/kMAf/vVY5HOF37jOHMbsr/tO75Dt41QawEABgE5CQ4czelDpOXjOoej0ph2eEGOJsC+XfgYfZfgvRyDW+4YXJuD1Lu1e3A091iqjBVT0NS6R4YMl1Y7ZwCgPGvtxFNHy7hhjRgIOfioTl8M1UcojgHuqVjPz38X2/4utd5GEbKJIYJcfi3AgJZhS4ekmzajgfZSII0WRHkWCMS9WNorhSPPBYco5hTMw6iD+DahqhOxNV4Aigm72d33497oOXfhjSkeJzDHMEAQsZ6EceA4AmyRadZt8zQfqKXNag4DokVJ9M6MHM4SQ7SFyiEA6zXhC/UOqOpfEZGfAPCbAXyriOzOBnirsdiG7OdEZAfwKwD83y9GLOJ2U10nCIBBALRKtqannQUi3l3BIAQ1B9Zs5QcY29bOB59nF6HvGEznY9vJn1Cbhg6xOQPhPxuitSiSAud+cjQwyKS3wu9mQYJfvmy6mBVYCk4y39CWkoIsQ1K4h45LoY/ybVS6knWJ+sy0Ws/GAgIrEJzGO8SMyJPZUZQ94TN9AH4/8stThEnDzjkxxBblONJ4s1rTGX6VAoCi5e4QnOUYDDpvC3wIxhw4RDDmgTlHMoEYzx8O05q6bQajMZKwZTi/nQGMmFg3a2pyMgLP20ebOyAi3w7g2QHgawD+Hpiz778E8A/Degh+EH0bsh8E8F/57//FS/4Az163XxL96zz/ojGz8GusUVjvxVj0KBgT6i7cqf1z37/txApisFANFa7zAJKYFy5JX1dbTHLTyeGa2Zp0afAzVUK2+2r+CwCwOQCmykvpisI2QPHRcGpAAFXaKfdK+EPw1+OL1ZllnuMY/J6kGeDp0tiAJf5c8FuDr/eb4CudLyDA5gCgmBLAZ845nWUOHN
F34xvE6BzAmG4qxCeKHeXgnFmOuTADjnkU5Xe2lUKukmaBCIwNZEUbSGXJ+F6HARSVEjIDBjB8un2w22wiqojNVl8TXsMEvhPAj7pfYAD4E6r6H4vInwfwYyLyLwL4c7D9CuHHf0tEfgbGAH7gg18Qwbaz4HTBX51cqsi93jUAPChxCn6cW7yhubexYexbOvGii48pfQOHsSXtZ1CIxSBAKCwLIkezVWgOtVXgtMb+ozX3r9nAKpRIoXSSUVsoqOby5zxCLo9NWFDCH+eraeLfWvTToxOEScSg0IR+nVwFGh8h0csQmZHUkMlIMteU7uW3BLWYAJWj9Uy727oHANT3B7Z52LTKT5TRTO2avQPcRdhWLR44jsMUtXv/RQwQkg4G+5reY3CMbLq23Fh0khUjtqLwVad0uAM1ppnHylNlDsj8SExAVf97AL/+4v7PAviNF/d/EbUv4auCCPo45xCqBQwCBKYDvkw/F7iWc6EnoQxACG++DfKpTT8LBDbv3+1AcAKHjbsQCWwyrdHYWX401kfp91cpJ/Bot/NO1/4ssFT+9Gb8Hu8sGhPnOFbG8QCf1iRfZKjpLyoLOf+5eVDjI4otpG8DLuBE87Ee45kFEHjzUdVB5gBxEx+dN8fhx/i2A4B3E4Y/4DR0NxyDhw0WGnOYX8A9/w0E0vxSG4l5TBxbLTCyDdvjEsN6abg8ILaonSkTawex3kRW4LQJaTg+VHsW3syIweYYvBD+/KNO99SC2aZL8BMAXDvb7L+bD/t1B5+fmzlA5oIL+zZianAAgmR3YjgWM73A+dobZoGC9scgYLkRkqiuQSk2/QAQNC1oohNKNIZd534L7XnkV06afk0XpVvQGxqn+Gw0BGUNBhATtZwFXLKI+KeEndnMCgIMlPFM9MYohtNzd44KbIguBuYcGJvZ8jprTYOKk7rzjuoRmJPPZzMH5pScKzD8aJ0Q4YfxlaTmwJjig4ssrxsGZqwTkf4Sp3kiiEVKbCn+8pskV3Lge014EyCAB0zgCgws/9YzML1t5KicpOQ1DDhAwIb7PiUA3G428ed2u2GLhRrTBIjZgzycmHoWiG0AqXioGca/ocHsumnEpL15kSBwBQDiDacofAeAVcufWQI5Ty/Ec62Pqgo5HU9gVYiQgsyE5OxCEG/YHQSueEfEz07JmJpb+S4wyLxmPmOSVvT3S7KBGJkHqA3Vnbays26CETa7lECdmcC9TenN3oExsLljcHMv/ZRak0HEgMCYiLXpMQa2MWC7E9X4giFbY1HhJ2twmf4Ty7JhzVdsyXFVxf14T3dY8IvSY/goPOUpudQgm7DW4h4yJOm/2fQu7BtT/pqyG9+mmfuRUMQEFvF0s9msIMHMv2a55vmqYVPoS57qX3qsOUXj2tNWABFA4F/LNF41ivO9uNMEv6WxpY6IwQX91PULq8AHA+ggkGAiod8ir5FfTq2PCUnnofgAKR96LASAVKHZlTZtgs79fsfz83s835/x/vP3eP/e/p6fn+3vbjPzYnhwlHMIMfYNYw7s2c7ct0CjONPnQcBZlV5KTHwFYnNKj/ydyz+UooEDdaMHUMhFfVyENwECgOJ+0GQHEn7LSK03Zgtz8NEEMxyAiYok2OzUa0cHiGAMjYFYbGcgABqFZhaQND2vOwB0q9tj7ZBep+120GgScmIEaEdiAMrknlXz6c75o5HKaGSevjNLYRCoZy91UMwboDIuq9wHvjTjp+qha/sozRVeqIxLTkKpJng0D0ICwZFAcE+h99GA9xgTcM8eAR7IA7flBRtU1E3JcC6GYqECzgPBQYJEdAG6EvP2m7KQDJJZQWeSdf4VAoETEyDBXwW07MhB50A0WBliGl4GJDz5Ul172x4swLv4YoAPmQ/1raY4vDGRzayl8btQEmVvDY6FEg/qiCpSl2fi/SsA0P6dayYArILzwZCCj2xsBQp0no8LTgXXn/DIeteh1Z9kfLZKUY+o2//X+QhoWT0dUZ45fymZlDOBu43/f//8jOf37/H5558TEzA2YGBgPQKeoMyz+YgsvfvG/qXYO4KEPGl9lYfJM6/LGD1TO8YeE9z4b5SwrxpfTicvhrcDAvcFBJpA1hz//IMtzWXnG7UtR99tYNuGd+/FZJ8yCcrj78N8yebPGpIgokJp9SRqNcXmqENRdO62audXZXDSalfUuqgwg8H5yEC1xPz4Ij6cv2QaFsHntHX/gFy0O1muAgBMaEwg+rJuxuq9sbtTL7hAhzNFmxEp/B36olS9taDmL4gZgc/PZg68f/8en78vEHj//Iy7A0FMGZbMewj3SLNpc5Nzo2nENeahgwFShi2uwQyW56SIJGvlodQyeH3Ms4J6TXgbIADFfZY5kBoZAlu22zbmsPsbYsZbuNrEh48GdvDOLt0E6N19MqpAWfBFeOx4kwo/d2/zhWZO4U8WoGC7ncFhLYNXFFTxiuaPeKT9H7eE1zUQRcm19HOwAnrMDE6o0ATBurqsr36k/c7sjpnRygiSJLxO4TV2UgBOk4N8NZ7n+3P6AcIHcG/Ov8M0NrxPH5LUf4h0AFjmPqykKAyCaoIRj09FzvbqJm4oKzpfFcw6CvVD4W2AwIkJMDU/Uvsb2k6Ys4/Wo5SB4RMpYux5DA3OQUI5ToAdgzFrkPwPzS9gNcboakXteolBQIn+q14ARABCxMHx1Ve0fXBh1ZcafjUH6PzU/cf5uAjSH2Ft2pV8v2bGwCBxBRwdBGJvvvAH1AYc4j6gEn4uiZBkpd+XrEg9f3aQGQTEvIJjzrT9gwm8d3Pg+fkZ79McsL/jfmAMhWzuAxAhgaUdhUTIN1BNS1CsoJpZCX/5sby97nsxBBluaoxs5zaBTpfZqY845zm8CRCAzsUnEEwgJuNseW8btiPQ8EYywnaMAl4AoHwBNlAoWUFO/tmaXcVj3nlTjN78XMi1a/2YzdiH4ZI5cAkEqy3P5cJfxglAGiughxrAfLAdkKnz6IkmQ49ZwCPGkBQfUb4m9DHbkgfEMAgkfW7MAOCJQ7Kk+pzWYonaKxFALfx5HAee73e8f/+c5sDze+speH5vwm9A4It7wjZ22aQ2idl2H1sSOwvRak88kapjZIBioUT2WC1D1gctVmLt156NCVG8V6ENcPoKgYACOCYtihi0HxPQsPeNAUAGhgpkThsxlTHAZbfbSwEIjNC1etBK1aySrjX/orOJ+qewk5AzGq9j8U9Dc4F271Q4cUq0L347AUEeHpsCWVg9tlcGaYImTejX3+O8g0HQf95rPljeI25ftRAxGfCTg/6FQA7bVicxecepfrKBbg7ESr/WM3D3iUdSgOvtriamSZoJaxtz8n9pHpg14O+NYAWLObst41m2YbKg6qNpbUv1OX3h0leENwECCC3qIURRo9uP7qYjiWbtSVCnQVt/kUCrAnMCkAmfmgHFYcM2zSitLwuqufl9JXlyvXICgMzDCQDqOigosAh0Mx0eiQHqeU/I4/N2soRFq15erc/yHW1zHR6ZAl34mZKHUNgQXkAwp2KMSb+tf3AzosCeNXx9g36m+zmunzfpmBPPz
9YL8Pnnn+Pz9378/Bfz7/58d4eh+wZ4YdFhswNjfYYClZH8w/xWVkYGYgJIge7qPYm2GiAV605O359CR2+Ljhqo9d4mYqqajAmZH3FRkS87hEC1O17R4U2uP/YP9Ik9vEYgYny1xqIjsajE4feAMSdk6NJqFiBAVY6LQAFBCjvAbADtnr/R/AQXABAfoqSsXvh0+qG/cwaAeKzxJEg2Tw7d3DnXzPr0orpaSVn8zAQq/XFdTkFr3AYIcy7+ABQAsKnQ0nUBBudncVqKO0Dh+f1zgUACQF1HT4A5B2uegM4+JJinJ9toxtEAoKD9GuK1lXQ3NXNnKv/LHEcZsakVAABPw/gKmQOhPTNkt090FyEzXY4lnsZbY/x50pAJKprQqwwops0aj7HZwLk3IO5pr6RuCkS6Y20Dovmp/VcmYBHYgQwMDcob9Doacx2VEtDfyx/WQ5Pj1zWJB0EiDoKU9VuxmpPGCyWgqiUEhems7fvCKJfDlBvorEBAHvbFZAlBnrEWwP2OYx7Z/29/7+no3YGu+QM8amlw5IjB9S9aS/osU6FRIWZQggVNBdP8SU3RsDLy+CijknENCF5nCgBvBQSQytEv4IIQNld4Ur1BpBkQg33KHMiJPcLLfCvGnDjGgPhiGnMqkLYpo2sk4YoJLOdJ3UAaH2B2wKwAKN+AxUURgUEgnGKV9xCEzhquwGAtWL6Qi/PXQMPZfu0l9Pid1VFXQqo42cunc34HpEwbJOd3VudihFgH0BYEib7+u48D+BzvFwbwi84KkvrTHo22Wk8tGmrmQKxW5LMUUV3YWrWKLvSnAs3GVYqrTIIR7YjfdWZsQ6TjS25WLeXzUngzINB8Ap5ppjxWYN5jAAaAcTYHaGJKgPOEAkdsdpko4/YaF5mbA84+UnOTkCUqA1VpoeUXUwDoYNC0v1/X1bL5SHiLkx4zGFVqVl/iKSyauZ/H9UVY6fcJUGodx3MM13GuAl+Cvgr9BQi0RAQTKEZQXY/skNRaCPS4Zzff/X7g/fvFH/De/QG/aKBQ6w7E5COj5xAkAJTwEyOI3bKDBaj68nLSRoGey43aDsgvMAMIqN2FaRXtNXPrA+kwwL0oL4U3AQKlKS1cDgILdCffQPUx1z4BPMMvkVNLZFy6W/df0StuYgQMCoorCZ9r/c4GgAsAOB2vtLmljYeFxrmSwOjVe0v5XQug5Vvo/IPhxTbUoOzi0auXjQEAH9b6KxD0OBcgBZrwc7yA7+Pnw33N+18Ov+fn93h+X8f3ZB4keDPdVwPq3E6Mfwc9JzSvJZJKRV78wNsjql11IED/A1Ce2TIHcpZx3telTTwObwIEkiZ7iD38JMuv2MBpl6DmGORhxQKoUMGl6J4FO48LGFwKP5175FlZAQz526L9da0YXU5r8MwYk84lzRZkLur1DgcRViEv7d8dhCsrePT+SyEA7uWn2GF4dvitANDv1ZeWWnPhak5FPlfF/bk8/GHz34/7mQlQz8Dnn39OWeN6svENxQ4WxyAKOCRZAKmU1NoMAJotr/wAIJOA4yYm4KZibQnHUEK9KR8IbwQE0NwYIyk2YOvxRbde/dU+1zyj0Jv0jHkFvt03dcuk5gaoQFcB7xaVohp4DgHOuJBCzkN5zyAQzGHRndq/lDsO5wKmamUgeiGv1+J/Jdjl1a89mlo68j1c3P84oUBCL0Cg2/F8f00T+1IUQAw1ZiZgffX2RjCA+53NgXtzDN5j1uBzPQsQV4y0gup2/UsfgbeNOaCDHXtacSz5PDsZJ2KzWvXdq3UqxUftaw0iudfEa8KrQUBsjcH/FsD/rqrfKyLfjY+0DZlp7EpxTRUmAeR7at7+w/eMlmNCJOz9aBQzWYGhpZTWju/4P7kOHxm4quQbwBUQ2MMrEDAzCOHn8/AhYGnIEbmILUphmiQahIHBhVRgaU4X9+z+Sj/reBXX49aT7OyXgBGh+UNbOnvOONdv9OskzK2s7ZkObtPzWZo6FgQNj39s+nGcqX1ocmYoINMlWWrUca0anKXn3fdm3tpCJtBBayT43An1BVBP/gW/5v8Ws6OZIqjyyHb3ivBFmMA/BeCnAfxyv/5D+EjbkLGAAQ4CaptgSgAE35sOAAfTHx9tJt5PSj0KwGgDXBTrOWn+9MIzKJGuPoGCnn8L+39WhbHmuGQIDhAxldRWOxIM3bCprThzHgOPRV7Pvwv9qwijURZA4LLvJcTUnPN4+s4XsB6qu3BNqTYwOL8XdY0quxNwiTOn8glMEvq+T8A9Bw7NRutjBWJrT8mjXPjDNGXqH2aByNFKVOAsN+dGWLw5D0ZhCmpK+37tjaBpduhUzEG7F41aqpwBwEvr1fXx2h2IfjWAfwDAvwTg94qV7kfbhgywnXszeN0OF3rAdsQNJhBAEN0iignFgWyoDNVVcxl11ehyHlaULNdYhH+5l/E2sAgkR6I71GhdMogEhAIHEZuFNtWXqRoK+J5261BToXzlD/0RAjkXstzxhxnBoxAr82B5rj56ZgYfRoNY6WdlAVcAEPdWFrA6WptTUKwPKcCmVgE+byQ6Z80M1KXvP7r5oolIO4/ydequmqsKV6DMsBPUlVUoKhGBDmDqyG3KBjOB3ASm1kecOm3cizOF+lqHxteE1zKBPwzgnwPwdb/+lfiI25B9/Vt+2SJQpfmZBSRDcOZvNTFBO/1RHAALsX+UDixAxhhWAAh3fBN8/+fMJvr9biueAeHR3xhGETcFdAs+bPHGbjzxEfFz7g6L37jxlsaX5fxDwZ4tIKDvgIWWQeHDza8EXggAzvR1BYViAlrnyjUTZWDbvVlaNAV0/bMVguOamNojEZIwB6Sajz8egim+buCUCczwZyGHFifBAFp34ZxqG73S3odrr8TaVnJnJSqBLyb+Fl6z+cj3AvgFVf1JEfmeKo5TWHnZ1W91g7Yh+45v/zZtPoEJ2xMOSBNgJACIOf+JapsH9SAZIDQM6i1M6ap/dRX6QH8EQnMG1H5bM3O6Vvsntx7PNe/1VLlKtG8qeZ63iaFbNYBtM4aT+Tj3p2c2Ml0BGNE1F2MiAiIeA0EIaJZou17fo01UFlC4CqvgX/kDKq76Rml/135KRFhLQDOvXmm8e1AsChpdhrFycPkFqH6S+hdsBg4LHOgRdej7C+QbaoPasjQCON3EjbQqEDsKzSb8s/1V2pZBSrGiMDNdv34tHLyGCfwWAP+giPw2AJ/BfAJ/GB9xG7Kie5GRcABerCmoyMlRIVjD7aNA8ZmNpYCi0NsFyTcCiPt1ntWdLbIj7Wny6mVhq4ZDKgR8tmOtfR82n6YJoLpDVbGFbQpgg0LHSPBa+9mzubbKL4AI4Sx7NnL0SGBLM4dtHYL2Mt7XWIBH4UPU/3F6Fh8LMQIDbDN3sofA3kpnYM4WvNg3oLYLJzbAe7oTCmSbMSTIuo5vcqlUzURdDcSgJojtwCRim4QMWrKclUcDhUmTlWb4L+prX5wHvG7zkd8P4PdbBuR7APyzqvqPi8i/h4+0DZl9h5hAAED28xcQTN/Cy0wC9eWnpu02ywUWjUbLTIiNLjHMp8CT
VQLimSGIe2+BMwDENR+4+hkEanRZOZAaACRdVesFyMBry/kX0gQo0AqxjjRRKlD9xQVoeRaC8o20nGYWiAuOXNL605tSQAA8BoUeWDgZEGaPpCaD5OrDfRtw8g/QrkFN87ZaLhOKBTxIUXil0jnoT4o4oxX45q8+vBcTMe5FbF8qGEjzWIPzQCQsYGAjCNdlxblQwc3xxfBLGSfw+/CxtiFDpy65LZUIcsYgbSgKni3oMwaZBDsbd6ZQBWWg7kIENGodDCBMgAQHoG1TrU2YikVgic/6dQnZZ+1Nz2yAgWJOxbaZBzmF3rfKbqPrPE08zbamP3PdxzvwMgJizAUznSv1UW2I412/wemKa3rnIlxR/8cA0G+G/Zu79jIwQFp8ca2qtSaA7xNg4wCod4B6CJqpJtHfHpwioLazku608zn9vrepTIE5rn31hABv30Qke7EmcpcjPYLqH5jzjjkFxyG+45X6kmVw4IuCpfKPOvgyQEBVfwLAT/j5R9uGDFir2zIsI1YH2nL7MLMwfLSgbA4IGwQbYuNHmeFBhe34EhtMKIBw0KWtS+iZwhHnDAAh/PQ7MQdmFSJiforjcAG37qk4X5kAHy1vPlhoHE7/ZgpAepW00hMe/1Bave7jyrVOmg4rcfU6eEkbR0PjW2RmscN1BaIWjWABDxSARs0sidFgAKs54L4Bi+M8qclAwAYD3Z9roFDMH+AtxXKwT2OUw3cOIh8MQAJun69dhVhra2uHUV75F/WgAh2S3z/mATl8kZL7ILUzoXMzUFEHCN0bey1FVGX8ofBmRgy2tnfBAGIF11hdKAQ/QUA2zCmpwjS2K4PkLrTpLtRYACQQvb7L9nSn/AQGpHLWFYrE057OnDlpUMpcNE55pYMdQGyL8Km2NRVTxNo+zGkvj26iQjzLcYETaATmFRNoQvVCXa1MikGgt70H19LjMIIm9P3yWjjvblqYtXEHjM4pp2oK+72NEeDpwc7YmH5D3QT1AT4ATAhjkq5g5CI12toXmwixFoAJbtVFMLNMaXMAHvYdZwKH935bGQXKl8+gfBRLHXzVQKBpjBAoWk9t2zbs+w0YGySZwG6A4GAwfLGHQw5TmNM0w0RtPhmUvFF1/6ySEDEYnAEgtH8AleQCkabF1aeduuAftH31caR2WHsNpio22Eq1c8Se9jZGAL51VnJndZMgsABnTs1XMbOyTIPmx6aXCBSzPBpKcDWdgIA1UP5L5oM9ujZW5hedCShVzAnBGeOlAAAgAElEQVQEMr3qzxQoJHNQpeHAz23YcBsvENOGGxBMW6tn+galvklojEszv9RAjexc/lzw50S+Jx7PEL8XZoFG78CBedgWZnLc7blskRNjbs4CNhzbhn3eiwGQeVhO7g+HNwECXegsBCUOANj2HfttN82P8A3siIlEwIapB+QAcKhPLSh7UcNRgNDAsYus7VAfjd/ly9udO9bIZ8AsIJaFljHMc6+2+COwoTaunLSkVQxVLcpZ/oHo1VAcm+1lN3X06arA/9/e24Zc123nQdeYa9/vSZN+pElNOJhiDIZoEdKGQBMqUg0VE6T+KdIgNNWA/qglgmATBX/4K/6xjSClUikWqq1GoyVIaon929gPS8WmxyY1mmPSpK0xBnve5957zeGP8XWNuda+7/t9z3vy3M85z3y4n7X22muvNb/GNa4x5pxjphlQdSeL8iMNmv8JIOWEinj/B3OAQSQFCSls9cgy6oVBIISZKSkxKQGbAWVGlAaj92abdBbQznE0ExpLcCZ2pXUB1+stASAjBs3VQaiUB3+TT06bU7IbTBm20egUjAACMAjA5/vbPRjAsAkMtkhuRn9yj386LgdkB6YIdtf+li9bKKe6Yc4NYx+YW+1O1OYwjHcMBA4pzQFnAxk63LR/OgfBjsLNnDEwu3qKIuZrC8xPAMC1Q9Hz3e12RW15HufhL0hbms8h0JjJ540+AECN5hnj2AsMlvh2Zo0wGFi+ROBj1pttkqkXIMfFvYOn5o/yVlIlwa0TxBoK075WN/YUafcezhMctT0/mukIBMwM/A1yPEqrzwKHpsXz/QwKUeYzEHBABQ/N1gzBG2n+nEZ8AgDWR8Jc8WFoN6FE1fwBKu6dt0hVsY7fLFACgYylaO09MdwYsHPvLd4fJPumCLBPyZmxUeYwG0YOJ+8nICC1n8YL0isBAfJu+2c2Byzs8obLw4P7AkrwwxwIn4BAsaOWccaSrik7wo2TiHvbsd+uS6hm6+gzaWdUZmz7NLLzjn1k+HIdGwC1zRBkSxCw+eq3HI8uENBGPUMTAMDcB+bcbBqpOiDo7Fo/eqkr+lVIWYCtRucBCOKbFQDSLKJn1h+SCWSHE1QH9Osp3E342RwoECizouzryEcbFvT8IY4SZfT2Y01O5l5FDr62lYQMCN03EI5kZ4Lwgb1YZDaDCQjmsFmCOkLYTYEENGAOYEx33roZIGJbko8IDRoT4Wr9AQSQHdgBUwLDfAUiA2Nu2HZjoPu2RM0eEatx+GjC8+mVgMCSAsnYKXiJ0QHS/ignIWQzUyC0hKihtM83yPoIB0xsM327knNujd0eKm9Qp63zsW3YHJW3zTuub7FePoE+OeW2306YgGsJF+x9bkkL+8wxL8Qsqo4oMtCEVFHCWxSdmQ2PURebaGCCFQBYMNGEm4GAtXsHAwaC2qjTro/2vqL0PT8p/FT4ZFOk0csJu5Pw35ZzYgXxW170RZVrS9rVhV/NnI/FPCKIsGJW30AsaFN3JkIkTQIJwbcxQ59SHP6D3evVlJmVb2CMHWO6o3wO7GOQOVraf8iweTBU18+lVwICRPnysx3ruxBupE0WBY3tmHYM1/w+f0Bjv3fF3AeGAHtoj8lgUAtAwjm0zxiy4w4r7XyM2NxkYG475nbB2AbmdsHUmTvY8m62oXUALDZkCEB13n0f/tvNl5pq1Y1SXS3naUfnjcVmcmgqAq9g7SgMCKD8sVAUcwjGVUuryzeguZ8g3IlJNivcH6DRjvFOpvcdEHr+KJ84iwVYgr0ygeO2YssKwshD5B1I5sQLsnu6L3D1SxRISC3vVnB5DTQUPtFIol3dXNRhPogxMGNfwtjvcNmj8B0DAYB6sjdEOMP8zz9LECgBxhbOudhOLEKOTMiuHlR0wxTNyRZWLS5sseGEd4TbXo67fd9xmxwL/wgEvKVZmCxxTaEOLrVYJTpqmdUkwK69p0qyBxHB7RYCTEFRSSBWAKhP/F/luZhMAcJ9+x3OHiKLCxj74zVNL7hp8pRpEHUY6w2ifu8xj+mAs/gBGARyQRAtDSYTLMKF7bHPILGBdcJQ+BKMlCkBnfgsYgVW4ZI0shLkqFU8qzEJzZ/pzzHvjFSwUPV1MOmgmpg56rQK+3Fz0nfYJxD0Kc67JmAwCKo/hmCLzUdz92FF7lUoEXlVMXfFPgauqXHKL5Be4n337amt49j5LYUewhVvtnUHgdozbtsuUNB2UL4uIEcCsgP3DgMgx4f33Rr0egvKXsFGQsMfqfJZKvo9shy8A9MyzwEFDNEacewz0zsQualcproLRsxsTI2
vUnWJOg9BymWz0UaNCSgBQuRBc2JWhBM3QK/Q4hE1aN9vdL7nvoLxm7Z+wN8VoJhol+AY1esAylW2IqRUvm1zEr/WeEUwKZ+LIOrKT2xl4nBzAovG5/p715lAH54qwc9lwkQN4dNqjQoJtk1sui1iTH2mWWB2tG9FFg2VtGtPNhCx6a/Xmlp6vd668GflmjDFBCYzCzaMfXdQ2LNMBgZ9FACgztU0b60VNyAIOw8AapEMwnzIcxbO6JTVASL/8xCcdWDIzB1uEc4kGam9vEFSK1o+CIiW9gsdnSvl0kSISELSVtEBZSZo1Nk6bJrDdvQGYgcGArfcRmyftaVYhRa7ZhvvPmWYTQaexBXmABy8gCb/La3Xggi06wpoTOjwc6Xz6AdmKiiCz45pzkd4fSZo+roXSPhixhEEpKHSk+mVgMBC7zCpg5NZ4F5bgZVvJABsuDwY7S9/a+xlaOGebsO3j/Y3rM7B2223SLOPV1z3W0aeLeStiUEpQJtp/WQC7hPYfOy2kVetT80hRtrQphvDOqcQ7dVapVZxCQoUy3cBlP0POobAV/6Hr7sYY1jkouE7O2P4BqHDlRO1jZtbTJXje/ZFAOrh3KTnKf0C8O/YHLC88qKqHlCDfENsHihsGPC2O803YQ8fwE4MoZ1PP96OIwTNB2KZdQYTGvuYRPK/AwIYKNpXCqW4lzQm5nQq6lHURgZyFGd2cC8lEv0zgGHdyen59CpAwLsyfXb0R6wGZP+AM4ERIDBwuQw8XDbsCQIR581BQDQ3ikxzgEBgj4kkj4948/iI6/XmO9NeASC1f+4NH7MEBwt+9wnkcEQZ13lcKVvYeQPDV57t2HeUk0wn9n0zxXSwmd1uRhe4VcB4I1Y7D2fjBuhm+zxs1vnUWYBEhwQfHQASIDoolfAYJwmNr25clw9iBSuH53TaxkSq2UGgmUDuR5l77i1gkYT3totQLh9OJ23sLEQAsNPoQDgHo8BJ350ZBJuLJo16p+bmvp1VFYxWDt8666gyJutJ9ktv5Hek8C+mwbsGAsCJOYDq4HwekGFMIHwCgu3i+w/OAejmwjNsnFaH7xMfjaS1vDSG7mL4yNnA4+Ojh50mAQotGo6abcO27b4t9Z6mwbjEtmiAMQmklhDXsib0EVNQzHSBTSeWCJ9GXGJMA7ZJgr8KXqOCEJ+9WHmOckTgUhtCJZtf4Nu0UcdT+i7oqsDnw5NtrosGBXm/nUunGZBHOEAsTCBGbggQciJP9pV6V8y/uO019BcjAn0uAM0SnXu73gEni41YmiwBAHfkajXBel+uarKH19Lrtc8XsHIZqZ4PZ/D5AOQMDGX3boHAiTmQJgA5BtV9BG7uSADANnC5bOa40jABfKhwDgC+l3s4cda5AtcrbrdHYwJv3uDx8REfvnmDDz+02PO1TTRtf5Y+AWMCY2ECMX6bGn+QWbH54iDfMl2NAwBjYCgwd6QJMMa08WEJTcmhsGI14mwAsDr7CgA2Op8Z0NTSliaWafYYPVC3Z3EAAEFpzWJs3nFd26t3xhB8uOCDbdxgMYocqo0pv908KMHgCUU5FyA2Drldc4ux2632EDRGxecUVWjvw4QxlJcMBUTNSXiTlvv/4hrdG4u6eEArmTWrpmfzl/4SIDwjxRPKZBESfGYDL0mvBAS6K8UaVrHfJq7jBpFHzAlbHHRTXB5uuD7ueHhzw8PDAx4eHvHwwWM6h3bffHLf6/xzH34On3vzId48vjHb/3bDNWbxqYehptVk8QfAg0EoZE7T5NOAwVaQ7GDrLjR1zHa0BSOKkV5etb1j3VGmQyFDbX45mx3OJAYJtNUNAUBMbPGtsWrexHEoaTUH4vO2bXi4PNgkp4cHbwnNJdQiFttwxLPAeXHvvaIHwYg2jU5J5ykwfJ7tr40JcCCWU/aTppL7M+aOuduq0ZuDlCmFWCYcOwrXcO30BV01PyByJJmvI83u29yFA3PfKcAJypwLwTU5N4ViU85dCcTRggqksqsh0lrkVqkYRsh6xpVs0vR8eiUgANPckdSCcuz7xLjtuMoNU21W1tyBy23idlPcbhPXxxsuDzseHm9tfFjT1rthzh1v3jziMbaZ4skiGeijzxhkjZaaDgYIAwr4TDFbhSjwCZ65fXSsJxCPKKMaJoD1s9K4qO+nDXHO1Oago2Rn6mPopHkTBIgRMBDwegw/n5eLC8vFRlrEgnRimF9DMLAFmxDBNpzluHmQi6+i7qprOl07OffvWdPeBQEKv3ZwGMKON7G1+Pt2xW0M7GIuGdFihtAQ9r3AIAAgZwoCwUjLeRviLssRrsgNhCvmX9VHmgLuzDXmsnn0KKurTescTejp8wkEHKT8CVPlufRKQEAA3mZLgbl7A8Iiq4zbxO063fa+4nK5Yrs8YtsuuFw+wOXygFwiHMt1ZwWK+NyHn8OHbz7Eh073r75BZUScDRYQAT95rBiIoFAA5sTuUz91Fxdi6wwDwMiAoT5vYUyMbSQgqNoWa+qsYM5JS5GZIlNHRDjXFjpMvpN13PjgeJR+HpGLL7svULrY4qVNbEhybgOX6eaWCC4+6nHZLnhwn0eO5yNsaDo/AwBhge9sGRA3gUygoBGRSUtQo33oXHXiOgSiE/O2GQgMGw42L+vMPqDTlIPuNHFrssZ1s0dDpuLfGjOCIlkFE5iKubuwL4ylWItaANlhQ9kxKmOrA6dbCDEkHnsfEAgEO6P/028jWAYv5MWY8CpAIBA2kiqw7+qLZwS3m9qeg7JjbDe3vy/YxiXPx3Ypm6o1rJ2/efMhPvfhh+bwc7vxGlN5c1af2nnY2qHVYRTZR28hc5oDTabXf2hC2B6Cc8PYbERiuPNNt2IHKbQ+CUR2X4jiw3Ky1E1eTM1CmqPINxAsYJwJ/kJpHVjmfjFmo4pNBPsY2ATAtFEDwcA2gIdtw+VywQcPD/jg4cEorb84dBlwBABlABCjrGnekr8ry9UEf2UF/RjnmwigO27XCx62gZtExAQXoknaf3Y/QJuRSKgk9C/9FqC6SyDzNp0Tu4jvF9BNmDbXQS1ehAGomQLGCIIj2m+iT8WkYvOZmN2fcSPduZoOldZzTtjDnfTSzUd+BsCvwDjvTVW/VUS+CsCfAfD1AH4GwL+kqr8kZuz9EIDvAvAPAPx+Vf2rz76jmQOoCtx32OSgCWDHGDtkGBCMsWHIluehJdnhEoj6+PiIx6uZA7fblcyBYzCJYgJoPTUq36jgRLoFVKDDd51Sp8rxTzUfIcrRZCQDTIjAjxw5tuqCPwStLAppn3OeQbKAc8EHAQCc0m9O9fdtw9wvmNvwFY1W1k0E2xA8bAMfXDZ88PCAbYwS6lCKqeilljtLOQcZBFLoGRCYWp+BwD7BNn2cCxT7bgCwuUkjKHNA40i/SWZAgq/pE2BtS2YA1WHl2ZmrakQZobkG5NcIpsF9qsZjW5+NvwAA+460g9etBdl1QMAZELwsfRQm8M+oKm8g8v0AflxVf1BEvt8//yEA3wngG/3vt8N2Jfrtzz+emYBt/QwVWt7r67ljkosMyNj8PFYWOvonU6rz6/URj2
/e4Hq9OhNwv8Asp+CuEzuZAjUkBUjEkdXIg0Jj+0OxBpkzpi0bAxhu/22bNU8EEjZHnjOBaNCk/khBTxDS/NQ7TR67T6BPHy3HXFrh4ZQTm5i0uea8jIF926CbQOcOmRMD6gCw4YPLBZ96eMCv+dQHOSGqUf0TwT8cwcITbav1OWbtzQUEOESbj+uH6bcBmPsVj5fNzJZhQT7CHGAmoH4+PZrQUWzKVyEEnnUcNtefTRr3J0w3GmP0oWJHFjudOjHH5uLt5gCMIUjYIgQCAcTFRkJhapqNqpFf7heyFuxu+nzMgX8RwO/08/8MFoD0D/n1P6kmQX9RRL5SRD6tqj9//1GLT2DafP851Sj6HlNDFbYWU+g3ktdE4HTYBTP66JCcSHK92aSgxxv5BLRMgqBvOQwHwtgAZJd+09yhhQ0YTOAmNp+As40NCmBTzWXGLPBp53l+NYXDtX2sNQitGUBnNVU1GGC0AkAwgbW+4wkPOy4+NfnhYkxgvw3ovhvbgZkJwQK+7IMH/JpPfQqXywVe4fY0ZwY5N9sFX9fzRfi5vOxkC1s+vOW619Jsm+pdC4UEiv12wYeXDZcxrDwgcyBZAAGAzxkIbpcTl7wy2z8G2Lw3GI21wZyaAV9rSJICywSYYbMlyJhQ3WxTGQC6ubnpgpwRLmONAflxMGZrQ3FzwPpoCf9LOcFLQUAB/A9iOfpjarsHfW0Itqr+vIh8jd+b25B5ii3KGggIbUP2a7/iy9G6qSI9w3Of2G9mr99usUbbGiEQMCimxNRg7/xjWOAGGdLWkvfRAUJspdGBpPNIFKC1IInAIbiQmZ3jMD67hzau8XyqBz+h4jebkpxLXjn98ZoAYK+IEQE40xhYs9PbAbjdHswU8OHUEJZwSgnURwYGLtuGh8uGh8slBT7mP2AIgYAJvS5sIJhACrxT6ZgenMKvCt0dDFShY2IfAp0Wf28TyenV83bDZTMA2Hw+yHF0gMCATAMLtOJtGkCgic2rQVAKKGu/TIhoo+OWZ85adFZb7AQ8aWF4WwYIeBnsO/NDhXwISXoYqj293DR4KQj8DlX9ORf0Py8if/OJe8/efPBSKG1D9jW/6au1CwZsGEsVU2I4beawmv0+n+N/Jo1TURuLOACIwBeUXJe48xVias+VfuVsI3XdG19wOGbuV2F2bcCf74NAoE10rqNfIj7a7fUZ2RHi95Fm/fwkVeyCWSspt5iGe8Ntu+H24BOqtg23xytul6uDbk2Cwhg+HFpAECDAgBD1wIJvQVcXEAhTIM/3Tqvj/uwIx7IlO4KNvsyhvlZiyzpKAQ8ApzYPFmUa1tvCh4aR4F8CGYJ8tu9hRTxSTJkYYvMZbJg5zEgqSGobdeZfwV9EajlXmiWoyEdPLHE4TS8CAVX9OT/+ooj8CGy/gV8Imi8inwbwi357bEMWibcou5sYyYxlDmhEcBGFDsGmNq12zhIOjc86U4ZSrgQRFaxWk0UMgVxQEkAQjCDNcKLRnSoGIed6XleZhWWXuxCjRhzCD5CPixo4NBwbI3EXAInyIzWGMSMSiPjpWB5HJwo4zQ4giAlWW0XfuXqk3gefinsxcACQ8R8NDBSy2TAob6yBQaZAzI1ADAWWOZDm15zAXtoaO5kDNCJQDj+gHGqhuavFjIQ4MxSf70D3Z60KDr9c2zMAh2Z048jMUI5LVwDNPzDMFzRHhCOfvs+mPXJotW91DQe8aGfS/I0FSFwBwl34kvSSDUm/AsBQ1V/x838OwL+P2m7sB3HchuzfEJE/DXMI/vLT/gDPPWvHfDeMzo4J24M0JtQYMreJKh6zPx8QG1H4Crt93nKjiVtq/76UtNPupWXXll7YgJzcw/TQRFmc3nXAewoQDkSB1XwGqPDOEBoiNAc0w2PDf2oH4gm6aC5nBO0vVtttlwRTEYFsPnylVtdDij7r8MCb04Uf8LBo3QcQjMC86+UIRIBEo+9af47WB+CrGjwAgPrS6Vg2XYM1aYH3BmZgd9uPRxOYmTXuRYBWAMfm3URus+eAMMIJzTkgpqfCTE/oM+Vm6VtPUkBKL2ECXwvgR/zhFwD/uar+mIj8JQD/pYh8L4D/E7Xr0H8PGx78KdgQ4b/y3AtC2OuCWghnrxx1jrPZLBtfgx2z5VATSLInqN9T1N6Wjrrg33YKLzVpmHCZ+iol6ckFGu93p9Kqe8IBlp1EfSahVNTj9igGAbdnw6EXPgRBBbr0N6dQi1NBVLgq+wJEJEj06SSEP8yB/XbDPjbs1xtulytuY3Nz4Ibb8F18Llcr8dx8BGRaO8GcoeIhtI0FeN2I2C47KOo/6dhAIJhA+ATINEhfCZsD2uukKREYAExiAluwsaD0oVWzXhamp3FvtedSkdXvFMgRAaXRAfevzGn7FAyt0aQxxfc18JBiMFD1ncy8O9WiowCAau/qkUuWX5ResiHp3wbwzSfX/z6A7zi5rgD+wAvfn4lRzRfV2SIb0TbNck5gyMyFxznssrvHNf/Fwgu7tk8S+ra0lGYMzlysbHmS8kEwIHT1b8djpSsihPRK6eN5DU+8DkSAOUbO1R+0GjHtROgBDDJoxQoAVE+Us7rNtX943PfLTqaA+wRc8G9jayAwYpGLe7iHC4pkI5rge4xuWKBtpJZsIBBDg/si9Hs5CZkVgMCafQO9lYoNDBFgDNvdWeF1RUOWfjwq+hLunuKaK54E/VgCTdo/AAGK4VOLLU4gMYE5T6U2FnDznojVnxaKtwDCJ2YO/GqlRmMkaFx4vqMhFZJeD9e2SSujol2jLysQOchnmySU5gCPDHC+PEPgI9/A9yE1fNiQYUWuDs0DqUCAgDg19IUmQ52JnminACzHB3jEH3FtwQVJLUaZULg5oLMxgrmtpgD7C8x5KDLgQdYzDxCf8hV+k0BzQ24TDMEBANhpVkITIwMFCvDzBACW4Cxcr1+bw+FHN5m2IZBps7uCasdjJDR+Vrac1h1cqKuNwzQJ7e/KqJmZ4Qfo5tAIXxHchSNAzQ6sdzP/KCiIgCf42OmVgABpV/QGq6267ThGRdJJBM5527EKcPd9A/aM7RfCb1GAS/j30IQ5swvlhGk2/goEgjXfjSG4ZjCWG5qBNPLabs4MZIh5r4HcbFl86jFT/VrWCuuICQTSOjHbq4fZcYCzgIl9lKDfbgO3ccPtcnNzwM2AseF6KSZgBkDVVwwHjuTazqR8vXsIHW/A2jZl9VGAsv/DJNCaO2D2XwbiXE2CahmpOg0mAPIDiIZFmWxiZvtIPXuprzaHg47redn+5L+AOruVnIsiUv4BwE2A1nZo760RhO4g5Kv546On+TS9DhBQo4ORrHOgRgIqCnSjfo2O5ZmWFl7/AOos3eaOyD6sAZJtxP2g34I7Gj2r3evj7LmDcOqMtOHrWZU/vdO588n5X9eCEe0nWZWzA6uO+x06nG02J2PHLruBwPWG69hwfbzhcbtiyMBlbLgM22Zt23dsD7utkYjzmwVVMX8Azx0YOVJQLKAcZ3MVfDIBEgSqE1g5XRMns4sRA8SwoPkAYrRCVCFTsItvHzZG5UUtqtKc5
k+aCgcJbTXNAn0U/ggl37U/9UpqrLrOysGYCMG9sKkcJqZmOcP4NeagDibqE7OWznMnvQoQUFVcr9f8bA0TaOnaIxSAa3NeYinwKDpq9NNBPmQwabaI1BDNtM0+55yYI+iwjeFO9WMAk7dGr1ICAvDED6fCqpi+GzJEoTJzajGVHGzfmfkzqvOOkwVA629zGuNzjJC7stuZCHBCCkKMFuy7hWDffHLV4/VqJsDYsI0Nu4PA2G+2C9Plhu1mAVYkwquNkUODAQRmDsSoQMQLqFGCmCdQ53YsQKzRiOjk4eSdewgUTOiHBXmxziO+uYu4ghgQtdBzUxUSzkkQRfe1HClubBZUzSF8AdGXs8bdpxTEKK6tMxCRJu+xZ0QxTfCtvWM+oQQoIPJtOx9Bej6eS68CBKaqh/KyVGAvTtEKCOauhfpeYBmCoTbzS/y+AIYZ+7uNkZSfRwJqeMw75ebXtmInWZ+smBOmGQz8zOnemNXRpy+9rUVC540kUvHlK5jHOAGBeEbYATiiAH0ODWPn0aHspqzbaTEcbrJjGzuu1xuGDLwZj+afCN+VApfrBeOy5TZscS6bbc2Wwu+jAnkEUe/wAfBfUsA6r9GVMsDSVwqbCBZRomMOidn+FvlJRozHB9DVnI19Toyp2GXGajBgqu0UqILUA+x0Tc1P/SLZZm+L6h0OvKMDupyCNxF7/y+WE4kGiHk51b7jsOaYwBx68tzz9CpAQHXi8ZFBICZPSIKB+vUagy1KJWIbkWgCgdMktbjDU53+uW3NqwYTANJGJUDIYSimgziYCAwAcRw6LXb8sKXHsX11NH52lm6juGnSo/+sHQdkDR6hJFTPU5qAfyvZiXOegBgQXMcNMdtOYNQZLrwRZXlcfGusrUDAdml2wU8wgC+WQXViFp6YADaL9mlSQHvvcGo83O8gAgyIbTByu7lyCBotuQeEAcAgJ5wpkLDJ55iQXbCLGBB4zcwJ1PqQAAJe6cmUH+2YAkgTPcJEsVmWyO3DnpLWxt0U5sR0mz98C4M66PR3D32q/Xt6JSDQzYFYMdjOEwjoL7qxe6XVPehps4HsK5WK9jOGaelmFxYIKIOBVh6zrgN/Mh39BTqN3s9Zi42s00l2nJL/oJnq896PlDEWGIXmFjcDJH8bbw+QWXwN6GyAc5126bQFW8OBYL/tuGHgOowRhGkFwAKsbgNy23LmYGzTjm00AMhptkOqA4cmJTCweivvOdNAAbJetqwjWICOWTM/o53EV5mOzTYDDa+8HW3lnoi3jW/qEf0LMjNKsrVX1VT2BTYKtIwCa5xewYkDAQDRpiAMIJOB35X9SaPvIE0M9RPNjlnQ/u6ZA1Px5s1ju6Y53zXWg1Ut8XmtyPMlmRgZE7BWZJsNWN7/GE4kDUjaP+3VyZ0VeX9nA1iUrjfUUFuKG2aIiMUSjGcSohRwhSlN8QlpQVQ0fK03UBL2npHqtl3NZDex8afKr5Y/wPT/zXfL9eco2hZt2yitn/vhbcVcigmAVhJKTmXooMrUWvzrStgAACAASURBVHOWpxTaQ4BaHDQierRvwKnTmYCtAUkmsG22elNjMlNfIDanQnzdBMDbw1Vbt9Woi81OqiZ+QBzN6jZ/i8UfEIuvuLHifHlPtZs7KZf6iklF3P5TPdjuC9KrAAFdfAIiYfVFzdO5+Gru3FILuZEGN8vpOQl+0lGce6njHKhKL/oK6riBxN5MtJxUhgXniKXG4WwqEGAAcI0XPgEP9NGZAdVZ1JWpLi8paaxmkxJzEUkXQmwVBmPftmejz+21vigFeBO+ZdeO29VZQPgrHARAqwlr5WCcE1OLd1JBUoFqfSFaNw5YVOkxBJex5fk2bB2+rQuZ2T4RUNX2hRzZ5oP6wdQJ3AQie6tTqPmphuc/lw5z7Uc+k3USs/NliAUe1X4Z9Trak+5rj8/2877noG1drvpZNKxNSZs2wpUXToLUnKRXAwK325WuCCJWgKSAh4PM6TwAkQ0BDhFjX6FegQwAfmQQsBdbZ+AhKwYCBgvXMErsYO3EQCG0iECcARjizwxXtoKADUOVz0BGNwmwOJI0BUSyrN0L0OlrXElW4B2qDIZgRDZjErBty0X3Vj6eUFQh1Wsr7DwSAygmsIBAz15TwZLHYARWnxYwxAKfXDYL0zU3A9ncQUgDDsWjTVUE6GifKKVoMZ2xxVReA2/xCU6iLJ29lo/nKxPoAFBb2TE4cHstzRePJQYYGKOJ5JwvKZ/Akrun0qsBgSuBgFm6sXFjZwJjuA06NltY5Ig/M36+3aqkDQGGgnxpq6Sga+p0fMK9+zGbLNjzWASeKSQJjIEI+RlGjUyEKRBAEPkLW280IKgpxFkOrQ6XLIJKV+WiEiqBQMxbUGNRW7IO0u5e367s0pMu7t8wOi+ATLOpmTO7674BAQoMDr2T/RTUbMLXBAA2AlkvlNcXDycj3xXM0WaaxnAtsOVog23CohjTNmSJTV/t2H0DTVtHBrV/aPa/E4hiAcFynWedPW+tGrqc/UaizqJi4q4Chnn+uNP0KkAAsIkq7YrO0AOI0QFAMlz2GBtk2zLM2IjINtlzpH9ub+vnRe99kkjajfEoe1YgfMwNKLvsCAQcL5+nxvJsRwakTIKjYzBChLm2KVbzFAhoK6gq3dFAwLd3F8HmYdvifBs2LyA3VIl6H7VLMxm8dZ4M5SQdFOkdLRjCDgF8G/MYUyfpqr/4Dgx29Rw9vDefjnK8Rn3baJJOm7Y91zBwkYWcptkzX5q/HJiH+QGMdEsVsEgzzvBOaEQR1hp+ufR7ehUgEFQzP/u19W8C1gllYIwdYw9WYJ2zOiOQSJudBnk9U3TYcFIp0EwFredUJxRECOrIO5cjjkNhYaTUAY1mprXUsqMpXNmR0DtNE3wf6lJGo8zHwge0jglgWh11oMAgj4ehSt5ViQQvQFKAXjoSRu7VZ4U/Vka/3oR9Oef7Sb0SBizvUPpJAbwVpVYdngruoZySY/n5mU25FWCW7B/TiVCn8HveFc6WeXao9R2984in0qsAAaCi7wA+bdgF0qaMa3r2c3PQ2ARjMBMogU1NFTHwVnDgDhS56LKU9DwckUXPfXtyb8W14hMMYhEJ+xb8y8pDYZNwh8yOUp0T9KxiAzM/1/tXAAhgkwQD/kkKvXdYAwQpU4HBwJmASMt4PweA8x6+0GdO0q6Ehq76oTYl8MnvvD2XJmznMTy5pgR3GWUejGHzB9SZ2JRSBC6CvRhSbyFBDxPgHLe4zE2/I3ClHIT2bc0P8bUkZB/FFOd3kglAbTpwpN2987E70E7e+tBIQUvjfBBFbZNrltDbjMzVqSLJoZPGfbEjse3A4wAEdnSdMQNNrcwAEM8tIWfwiv5drZ8aC2ZWFBiEfyGYwcEgQFwMNhV5bCCA6uDugnWNGLsOOQgEAI7R8lesftXK3MhP9cylDU5+F3k6A4IAoRUAsojdMjo8d9XW4Q+I0YFgBkxC2P23EpwW7DUA4CQEfOYfWiM1lPi5KevgsvgduXr046VXAQIKc6JFyiAXPo3VxqcVt/RKD8jYiabyBqDL
RqAECp1qj9ahWKNkH5YSvwACG3ZyECJO1xWMpPDXdzxygNZJEmQOgr+SFWYVBABazsaoz/yFVh5Y+AuUYjTZyj1I19WeiNJNAs8ra6kCAsptajKS0Ds99RQ3zlhbCmPUPf9yUbVAl/475yyP3D+E2yRriArsR5FijaHf8zngI+U4H1uifrCaZP32zNyySk6roDrBi9OrAAEoajNHoMX+u91mxQG87an9kw0kMyibtU23jXn4QeEdFAaBQO73F7QQDhJwoVkAYPMdiFs0JABrVz7z2QAdAHKjTx8F6Pf1B5wK/xMg0AFAFwAg7UJaRiD5OYBpZP0dhUMPva3T5M6W0DtnCk2vvxK2YmMJ4nGd/mIeAgQ1GamJDtBkg6iCiPheEmHyuW0dk7SU26jn6ejjsIcWmICGAxfgp9KeAWMOAyd+1m+qTNquKAzE15Gx59KrAAFdfAK7RwTe94nr9YarB7W4Xm/ND9C2DCcg4ONg0KAhsOzUYyA2DOmAEBMvvAkcUMbYcLn4luQjF6cvx/vn1hkKmHKxkIMX10o/ogv9yTHqMn7GYNCcrLM+87CB0OtiqK6EowChOrJ1wrBF9UQqnlRIqT2pjljwUWysCT5rfGHxcMEPrbi8vJtu2tRulE1DScAjFMu67XsAgB6Cy8az+b4j2BOTYSJDeeSp4LJ8B6U5IdQG7fpHTC/dhuwrAfxxAP+kv+dfBfAZfELbkAm6ANj4v3tofWOMyRUHZI+emNZhZzyr1F1QZ1tHbusEIhZ/+BBKwwUAVIefrunH2DCnxziMDkMCHA3fjihtdFKfuTowdvsVAoEelDI0Og81LmGsCQS8cjrzvcsEgq/HdFQ02aDKXiDJRV6Rk1b6QOVTXVFOTznvASwImg2fBtvC/PqioGmLxjIgyvWKmUrkZkPP1G2CZifriWNqbsn9I2UIxjwT6shlp/FZLAaN7BMFePdUhd2zsD80rDp9XzPFzonFk+mlTOCHAPyYqv4eEfkAwJcD+HfwSW1DJmggsPnkn6Hqm4kMP59Uk+YZj/BVtrpKa7GITMx5FHaEsLclurV0l6/ZPgdqXvJtw/R49QEC2xbbdA80dsG7KRH/LJ+AoICk/BdDIg4B7bfnQs7x6+247Gxz4hkK6tyGQD07BgILAPhv4pytkw4C5d8oBhLfH23Xs25dYeIov6HBqb5yQrQFJ7TRojFxm7XUGqoZQHb3DVQsFJptV8/sZcScCwGZf74aZQyMCSgmdHgMwDQxbVl6CXflU1BrIgB084FmB1q54X2wlFonQ/SgrI/kOOgOydVQ+HjpJSHHfz2AfxrA7wcAVX0E8Cgin9g2ZAJg20pwpm4YvvjG/qZN2Dj0SoXtAGfQLI7e0Fj4YUAA17Tiu81AVu3fBRHis+fEthLftgsus7RtB4Et5ykEKMR01bLJj8ODZRIUIIhILtAxQavfFzCU8O++BXvcL6TmAgC4lnkItM7LZi+NY/VZC1NY/8Ls5uX6yhaepqUdAGTpv32IU/P2Cdugo/ZwDGoeodJvHlwkgowYCPDoBpx1CiyAawhQsAIDfluMFsOE0Rdig5BkpFRI/tiYAwg4GptlRiRUB+c1V5OhO/hUnSHbPZnBC3HhJUzgGwD8XQB/QkS+GcBfAfB9+Dy3IWtJbMVXpDGtAcYW0zklF15UB4t54K7lIICH3o4loobyfXZbor/UsbOBDhCA4OFyQ2x8YdktEAgn4XoelDb/fM5ALCdmk4DBoBYtGeiYoFe0n5wj79GTAxh4xVtqGi+rVzK82qyusq8dHXOsb6YzoQYIxDqOGv+o95fGXj6fklusIGhmXeTueJ86E5h77VE4o+4226hUx4CIj+pICFWxggHYUKACAyOZxvD1/2P6FGW2KRgIpI7NCejMIW7Id56YB2uKsgYw834D4SPIZ4nf2anFs+klIHAB8C0A/qCq/oSI/BCM+t9LZ28+9AWhvQg/9fDQPOPhJBtM5dxbG5y22aDqWkKl5oyHLSazzpP+9Xs6E+ggsG1bblDCTKADwYbL5YLL5SHPAdqEQrWd907SQSAEe98HbCd40DM6AOyu6fb9BrdTiK6Ke6ZZJYTwx8wyo+UcoCTuLowg+xzx26cFXU+us26X6qVNcMpyqmnROokNtZGRmjY9c4eivW1aGsFnTTR8MZGU7yXfCRLMDNUmbZJQzRWIvhQMhWuuyte0v79MwEygmuRUYuSkXhfckfZermmh8+fTS0DgswA+q6o/4Z9/GAYCn9c2ZEp7Ef66r/hy3UYxgbl52CcdGFMw9gCEDBvYOkssHkmaxR1fyIMfiNxg2O3/g1DW0TRMrAKs5b4m8BsulwdcLg94eHjA5XLBw8ODleNOxCIWVvYaB9jNudu0aM9jrGysPRPc5nX6yyBQIyDu1Brcyxz46HMFtSw7vfdthXoA1jAFJpCrGEG/ae27NvhJWs0AQbUvz7Sc0NzQU2d3jFrd7rk9me6xZ+FeE9DIvAg/T9JritEIGOtRVWBUXIfwS8WirmACjAUsfCv9b6ZAK7ucyn8BQDe7Otwo9IgKC6v7hJiAqv4dEflZEfkmVf0MbMORv+F/34NPZBsyYJBPYMw+VbUomdhmo0EVU0POWpgjghY/DztVBoNDXWOt3MftTdBv+w2xp5z9lEHABP+DDx7wwQcf4HKxI4CMdBPBONTDnfcOMg70cb/t2MYNu2usqZM2T7nlEOrtdiUQQIJIG3bUnAOI6iX0GWU6RFBzkXCICqC2PDWIWpkCJQHPjUt3QFg4dKOv8YPyoXAsyH3uZAY5I3JhV1UCgPiLpdCxAMl8T+EoTc0/4rMFpIlVetnvuL0GahcpAG1ub1w6gACSnS3NQL9hr8qx7sTrPljIOn04nsX3vxADXjw68AcB/CkfGfjbsK3FBj6xbcj6EGH6AGixSk271JzMoihKmKsOw9E1eR45gwB/tmtM72du522OyRuxgIgpkAKWk4fKHHh4eHAmINg2C3xpjs06jzIftYUJ35YBOzz/urKBvl/g7VYgMMaADg9o4uGp0zPNZlACgAl62ZpVN9W3aTSA92bntPS++0zANNhB8PPr0NIx6uFLsLUE3iaTlSkUDlI0ACgfzrYPzG2HRX+++PN9VETKFOA9yYVHAk7+ohK0fpLXjppf6rBcOmrv8wHWleR3AGiVfzh9Lr10V+K/BuBbT776BLcho3Ontaltp0W7mZfQCLaG3UKHqwWI3F3ANegkEAFL+S1E2ux+e2FjBlWYOHQHX22aUc6rDgyb2/eCMQxM9n0kjU3tmx2qOo6qWtDOHHrMem2jBDmr0iPtAposYJ2AtIJAAQDy3a3zxn0Bjuv5GL2PLWrnOVOA38dHALlRTLCAMKlqB6k9WRGzgtqdaEaLdY0r8JBe1dxZt5T7PtlqpeXli4iVg/Z8dTAw0eT+tQICV1XrcgkG59JbjuwCrXo2j3DVQjA2P55Kr2LGIICWYaayYxvYpu11d3HnmEyL2yd7RYo1LYpcfWggkBvVAEiczvcopGbGneSphL+EkMftmVZavkcDAZGBqdNHOyh6cfzEx6wZEFQ
Vlxxq7CsVcyelWZuH3m43C8iiDAJl2sQzGh1FvJfAQPhaZycrELykg90HggCf0d5VINBNgAIDK7fmbtKm6ff9ZutOvMEzLmG9rgS+nRMb0SW/7vsBP4ufGf4ohCCHPyVe2N8JLueTAEC+Ri4AORtL+P0ckvEmYjp8VzDPp1cDAt0+qo6XjEAVug3cBHDHQKP9IfRmGojHqqwZ5C3ok6I+NwS4U2nkf6i95jXBBkp55rULHnVHY1ORESsh7V2lIep8IwBoDRnmSLIBMgluNyjgu9v6IiuZlRdmAKzpT4S+gUJqHjdPgGwXvLCDnTf08i7U81oAFgYEmjCV8yWcIeQIQHYC53yL1meZTAFdeTboel52PqDZ004tovht9TQysCS/Bn19uJZZkfp9gjizAQKCkdf7DNZ3DgRYQ8eCFbOPFYqNhH4mCxAxO1t2q5Q51feO8PDSU4AYOfC3AGxTZRU3lF5T9K3omDWDrzRFAUBNGpqz1qfzUKEJIGuRYkJz7jUJKRuSHGWxN4CbA1ffISgceQfHYA6HdXOgaWEJQOydp2v+qqfzzrXU35M2QXXoYCUhCbw9fNR1jQDt6R/osycr/jQLXQRpicouQFht9vB5FAMIgc/JSnecdk3cxa8yyND37VoVuSuhU7AgwTepT0B+7u8l6VWAQBQoPw/BUJu5tXlDZAP7FmIi4htG+LY4u0CGQmL3mrDdJiIQMQCkUyobUBetRqdHM4B9ArOckSEgo8wB0+ZsStSzSuDqpdHIc3cQiHDeeQ+DD+8YbPPlc7LQi0CgM5AU/mAKrHEWrd3Nihcmlp8AHJIUQbVBzQcg0KWyJyugNRTB6waQw6pHILN3YzAYVP5svgROZL00f/gEFvsyOgC9D72eGzrF7fcFvuqqnt3A+IwVEEC/kyDQ0bA63hgWcHTTYgq6Fs5PFTAAgEJhe8APQVvpFZ5c8+nE8w7tWM9rn6MzlnaKDhHT4GuGWUwd7iBgeQjm4CVdgGAcWID9MZOwgCu1X+DNveMigjFjqqvtuzjCibcIeRDV1IjceROgAhzq93JWWXeaNCvy5JssWwgr2Kyrup7tvC+Yqg1IYwKWTfNmgcy3huBH9qWUQeRTWnaLBRg48DfcZYn8c1vmS1F1mPcwaHDV0O+iqk7ArAn58vk5VnuWXgcIgKvSmcCynULOGNz3ZAKQvSLd7hY3P2YI2oZMCgGvPus14w5eQmY53ANmAxQnsBb3gBqimMC2bQfBXzVNChrqtZfL5eAXiB9Nj2C8zxomu91uuF6vDgI1oSWZQAr4uTA3zUWd6vibyDOps7tteZJauameCejstvK1tHO1FaNIYKj9GwTqDjEFxvA19VE+RMT6PC8mYNCT5qEumWUTIRyFccxG0ybJWT/BBKhS6ruj9hc6jy+Tvfhz2QHYwGBlCwEEL0yvBwRYuWs1Us4f8BbanH+rAjHHcEuk8Kj5arEArKNIOlkY6ssvgKxEv6vdkFoox+l9tZoLoP3VtXDWcbK+1BtFBE1rwd3CB/Ojecpr/Hue/ImomVHBotRWwdn7ukB3n0B1oNA++X10sNTgrZaeTO2uUxBYgSD1r9c7MzDUN6r9XADBtK3Q3TRIoVnX8cebJDJFTt8wO7i+yQGsXAh/Bsn8wqCo7lKDx4+o2PkcflA9I27KsPOs9bnd6J1V1pe106sBAS0YPla2J2aQ0amzsWlyhwkAWmX1l4ERAGfVlb9RuO1ZM/Zutyuu1w3Xx0c8PjxiGwMfXi64XGxoMMJ0swZtIpHXK29Rnuv1EbfrNQFm332SjNYmqjFbslGLpUN6d8hnH+obMT9wCViZNxgSH+7rJXlZOnk02uSkeiq5447CR8LPD01w4791whkBQb2DHLY06jK1lnKzo7KAiNMijCz0UfcNIFB9ruFgB4FUTB1lkP9SyDsAJHPgDvFMejUgsPaUItBaSlypwgcs9BNsnXf4D9T8gumUmwO5D50e3kIXRGjiVVVeOgP3iG50xePjowXg9J15Ac0wZjZkBV9JWFqPtcCK1ux9//DNG7x5fIPr9RFXB4M9FsNwpzwIQrwnugmqM4EFNynOoUOHoEPVt71SZyz2lNgKi5nTWQolexQXOvNZioD4oh1qcWVRJ6ccnWVZQjF4s+W6iZxyvtGWaZQL9rHkPISYHeoh7WjZdgvhJuvfqpnXa6g+INU2DQDOwCCOBDJZTkh7R2iBaptSAs+lVwMCrUOGpvOUuiIro6/UMgAQDIU5wzS2a/IoMWMaHdcg/3cAgTtJ5ANlBtxuN9yuN1y3R3uv2+7m8bff7fvuW3dvB+2UQrpSOTEwwxA8vnmDN2/e4PGxg0CuZJzECCLbYjVUmqZrE2nlq7o+E9ZcG+DCnuPV6zVdwYXbcqlSNrEReXWVDElwQbbMYha0XMeDe//IviDFAGKn5L6lWz2LJ35NB4Co55yD4SyMHcH5XqG/gdbWLPgdGCrTZS7QA9uR6r6BwGjAgCh/AkEHlufS6wCBRSLzlAqRQAAchCsaXwcw1JZ8Wkgyq8AhkrvKB6VtvPbs3clAtWxwcsbJEFy2DZftAhF4KPKRQ3URiNQ6wTjk9/RvCB4fH3G7uTmwV4gsFv4CyKPZVP2HAOBOZ6iyrnw9fqOIXXYOQ6vE1eiNh+ckUC0PT9bRXtjpfz/X03JEf2jaUgQZuo38AgeGl3MRamISL1QqFhCbxujSN58GeADlxEPlMX7fGQLVEoPAQchBz6nv0No7APodYgJHp8vR7gMIAEBr8n2utO1dOCE6fFGYWqioMaE6fOPy8hqvPTSA4bD0xTvHvu+4jRu26xWPEsIe0YB8W2uduHl4q227VFBTEQp/3jVTW7UogsfrYzKB2/UxY+XtKwtgpkQdMpteuL5O6veErlfdU/W45IuUn4UxgwX5wADWp2rkVxxfPL9B87AKfj/v5a0XlUklNXV6iyNt8y4FZNa04QisWASh/U/NAR6OJIGEoJ5PQt80+AoA0TbUWEWIGl3IOmtAwhof/Byh97xDIHAU+PIIV8+x/0TY+6sY0zzgAybsKjOZgA7B1ApPdr/b11h/rkKUEpRgAbsIrj7GFPskCOCLWIwxXK833GLCzygAiKnELa5gLl2uo/kc3uCRfAI3D5mVE2TIUbWmBEf/UN1JyqYFCVcCoSLoUomdsQCrFu+ALcRu8qp6P7Voazqub5+glduqe73HjWUCVF6YBaqSEPoLotwRTyF8AsHQRgpcvWHSSEAwgFqyXcuVuzlApikYgNCA5ui9r2ud5nNZOI+SyqkAo2v+Dv7Hdx1r/jy9EhDAMjrgKeTW+mKhLqjA9HkMmJNQeXSAKv/svahnhz0sSnkIx6DbiWPcIFcBVCsqMY31RQwBHuvn0Od9lV8/Fxk+8nDF7VqjA9ERa9FShd6Iioo4AFQhLKvNzmFN2OpfD3f4/y7sseSYgnBUyUuTLwFzT0yCvJOOJ/lb6HeyNenmRJMbiU1WpRgAmwPREdz5mcOANBO0YjnutbHsgZcUEzgK/VnfW9nAOQi0qqBOexR8Pnahl4Cnd4sJLGlRcdyHD+cEAm
gOQzjFtvHjNZRzpDATYkgMMe6s5KTSGIcXnwPAQ2rWkeAaJeh77EsQ6wmiQ+b5CQiMIbjdbnjz5g3ePH6Ix8crHt0kuN1utISWVyT6ttssUKvUsVr2CzWRyT/HjboQ+yarrrmhOeOybtHDT9YUphgDSrEJIROkA0HkMxYxcZw9hqB2FgoizQwXYp0WNVgVEk7ACE6isVS54j/0rd5wEOiDbypGIYiunzkGz0cEqjytCZnBhG9GO4RWD/DnntHEO+l1gkDTXKtuqKurhltBNP6GN0Db9ziGpXJ4qgNB5oOAQKfZ/0a3FfsQ3G4BAsilrTuZA32N/8oEzAQIP8EYA7f9hsfHRzy+eYPr7WamQTgLr1caKdhpMkvksypEgcNkJO7MNeYO+n2xg9Dx4h0qPfju+GwjDPnrEsP7XXD9Jjp25Tvyym2tgth8wmeJetMN0Di+C63vG9DWe8Bmj+YTncnt66QrNrk4F1KlS+p/+KuhyJWidxYQwi3tucfaUejxImL9SQ7ZilUG72rtOb3bCpxeJwhk6trl0LGK1+al0ghI9DWNsITTZAAgIFDq5KJSTaMKhQOBeIjvfeSClav/5uJOJN41uajpIMcgxa4j+rrvew4PXm9XXB+vOVrAIcd1sVOzPmSBAwLJ1LQMq6cAEM8KgCHbX4oFCWo/QvvMoNITd0eNC4S1vX3PYb8VJ5os0K49wNsrPP9QiHr4+QCDYT/bSfgnjRasw4FIoY6P4YRkp2Q3A4Lqr74BZv9M6atNFiDvJae29T4rzs9EvDvroW2eSq8UBM50iQLZMN1/fLidUDcWEUGQm4rN0PgsLmeMIB4aWmH6/HXXjh4LmMaZL7htGy63Swm9sGOQnYDSRwpo34Hb9YqrrxC8eizBG01VnjRngDtrOP7ONXEX9GQE0KeBAAagmj2qFmCzozHfKXVfHJ5mBmeC3hmKxmO9/iw2OL3UpSXXdyzLtxUTU/1H0/Pu2jTMqzIBigUkW/L3hCmAAG0pf0Os02jzBUDC74DQAQB53bqfcrXlfx1UiYPFTeH/ERpBCJPoBeklm498E2y7sUjfAODfA/An8QltQ+YvqvNWaj30oG7HgmpMC30RQzkcK54obtOXd4BgeamtTnQQErMVVRX72LHNDfuwvRKv21Z7JJKAr+cZ3py0iU71+QE8TblCaR291pMqQehM29UzQWcAiK+583nD0JOiprpYNwA4mXOwmg7UaHU1O/+RD2i0zQBMkAXINSVIAID/KQV9mVMdNCzKszqgqUdh5mHAWKLc69Vekqwy+hCzuDvmQAMBMDvoRc86bIon2kcP4tDqyRGygwD5WV6QXhJt+DMAfqsXYgPwfwH4EVjY8U9mGzIq3FNfNu3Pidz5ibJkd4nyDDWOe++dK6gUAw7RamuMUD0jnYTGDiySj87d9jqU0VmAhN0fGoFBoAOAyDC/gg9JptNq37vdSpNYmBmxVRma+zA7MjT9AQiiPug2B8sML760RGidrLJnnFEs3sVV58k9/X/LiwLqgu9tal8HgJPmRoACHCDsNKPSA9BpUNaEnyNGhecN1KcQgkx2/sEceIoJoMwKemYVfoXAszbCAtZVnRn4WHsE4ufSRzUHvgPAT6vq/yGf4DZkwMJcjsof2dn1eG2tEccAFzS1mYPF53x9YUg5MwLUeXR8r1R7hXqoMPtOpm+aSn/GNEbrHEljl3vP/kqD0erBnCQ008mlzS8QWY+RjigXSNjYyXTsXGfsIOpAwc/zsscTm8I59OpDKx7aDsRW6DfFXBbnmAuXRpwEVVssIkThMve9aAAAFYdJREFUQ/Ddxh8zfIq+FgKlnWuOwMxh2IpTUGbIat9zENcahhz3Rwf8PMvOWn+tLer/vHipvtOl20uWC1427Q3zZPqoIPB7AfwXfv7JbUN2lhZmdKwuutIG9v2SCGqNgTNC2gRT0u6354kW7c0w1IzLqUG75srnrdDeQKG0RwMCiO+NwiDC7zoeQ/tHJ+91Ia4Vojza+0Gz98/Kdri1np0783gdndTBfaE/u67tm3OtR+UWwQivuFr4OMM6co4GIVg0+kTNbRASZgA0PXhPIIhZhNl+IABYwuDXCBCDQbGkbGcJEGDhLKoRxCPKleVuIKDpFuPP5QSk/tba5en0YhAQ23PgdwP4geduPbl26AVC25B92aceDk8ICqoR1jl20A3EC22Xmom396rKQaJ/bwRbeq7pKEIIjJaHlTe6r2p9Oqb+vatOKA5XlTbXXLVg9e2V7Zy8KzPIseg7kuq938ZPRZc81rOf6k5P259PsYEz+LD2A4DcNbnR6ejqIWCkYcMv4OeH0ZMlVwEoTLwzF/HsyGQwAgL1uJ5VvzAiafbEc6krNqG/+/0nzqyvJkC/nAQA+GhM4DsB/FVV/QX//IltQ/Ybft1XaKGXf+//99lpUblw7W4OP2soyetueiNX94VgDDVeCEUMFungflPDX5qrDu2NXW+dyOSaJHpIeWwj83I4J03O7/GWb5yE/R8Bbk+0OJm2/pw100edrqA8tnwtxfu8UiK9f7r3jhKDwY7WFl2n+gD87jClRKwtc4hYk9TYveSRt0f41mzRj1KT17WqkXDInVSIRI+t68e6z0ysF7ACAss1g0JV4fJ0PWu18zSevyXTd6NMAcC2G/seP/8e9G3Ifp9Y+ja8cBuyZhtD+uf1L+hc0DIRDyC5dAzqKMNuwRj2V3TOphubnVeboW6bL0XdBGMTbBTTfwz7vPl3vGMS76A8hr2zLzNFy6OV/U6dtPqJa7J8ppsPGrtri9ZN6Ebu3KvWS3t4+bub4Y/yB29LrM9f25yHVe/sgcB5YlOAQpgffCs602m4AsF5gasuuA3upwXUW1pMOV1vk2N10bV6+sIOeZREZ/u7l17EBETkywH8LgD/Ol3+QXxC25D5O7goIPM9gVIEbst793HeIzmfPXwAMdEnqGQNUgV1HWCveThWAlkVtVWWpKe4tbkSOku7vHyxSM2TPecpLR2MIMyWYgLxpijLmtGnCTklU5v1rBVojj94ubrpv7p3ktk4Zk2WkZRFKUCqfujvnjkQKYZ5i2nZUyTeScAU/e6IBNy+999zytiyr5QZU1+AGxZCbUmX2+3xjBe3OV6+Ddk/APDVy7W/j09yG7JWj50AdY1Y1yQbjLUqAcEQyCxHXyw7IUIF5G/dRs/ez/bx+jkKe0dss6XYf3DsKLKeH5/kcskdwsvhAlvAFs97uUHInaouep3e60UpCOfpuc63gma+ku44YiYzPEEOry7sKh6dIwO592PAQymBzC8LnUT1iYMsCToBQWEf9cXIO9sap22hx1PNgU166lHJpGmnpBi5DAuOPIN/mV7RjMGqrNgi24bkqFK8YaJyOiCUQCcQwJyCQ2I02ltZvZG1V1QH5dIr+aU/o6++W3+vDg5F4HrkHOpc/NbWYXrrpbwDCOlcZz33etTl2Auxavr4qt4ueM5ev5c+CjGQ/h8BOX8uRnbYxr2ZA9ZRog0yVsAc4FGiKi75CBYzqUYSqg2boFuHK/BhYHhC2bcJPitcsjng56Sms
gMcWrT9hiZ9reV6Ir0aEJDeQtlJcxNcWbq3agFCniuhtuY+gzkRCEjJCQ7BhDpzQCfccIscokYZaVRWJYErxxSeUs7CX7H2X+unNEGjhK6Z+7Aa33XsCJydAyAkGDwjzh9F2u8+owu/nctB+Ot49mcPyEfQ0K/1kTsjHpk6VysAjLzVOxBCf1p4eXmdcPMsebh7qzd2Y2+NAVSnzEGSF1KBVwECPEae15bz1P5Ao2rWJo7UGizArw3/zSRtq8g5AX1O/J2Xp+YnNGbtnxq+g4WBU2n+aqeX9ZRmBrRrJfyA1Hg5A9czjb/moPGPJ7J376sXss56zpPavwt/zbXACQCET2ApBw0NxuzOyqkwTlTu452oL0vLSyqhNd/UK6mA/CH5offdlQFov3dt8353+5qVRwp9Hp8Dv0qvAgQA1P4CiM5fHl7xCR957lFw6rMAPhwkc0JgE4NkqoPCyWQLGI+mUcCnUzpbihKsi21S92o1fEQq4vOXvPDIGInqNcA5AtOpWN7tEOc08zw9bRg8zTnoKU3DPy/4dc9Tf8D9WI7nwBMfaoVk/EeK5F5ZtQO1qi1PGIOoa0n+oR7OuYQAw3WVD1GHcKvC1z34tdlnRUJ9M1eUa/T+eEBPrwYEGhMQIOpyuvobqOm0wyurpoVOwIN8Ri3alF6tKaVEj6wihQTXGqmBO6VkAqBn+HkJPpbrktdjchNfe4nIxbMiU/VZDu85zWs9Jct6YKLxdG139+/y5Ggo6Nl9hyv9ro+u/Zktngm+/XaM9Xsh4T9zOOK0zaMnrAyjyhLtWBGeo5lsnVLVcvUsPdSInOQrHt8uiVCfVeiEr5T0PPhU8jHs/fGcp6a1cXoVIGBj9MQEUIgn6PvRiVg4L5m0+7AzAev/XikOAOYbIES1FzQE73k5z2PN2qMrB8FHAYsLfTtfhL93i3Yx3tje2eaML+dPMYFT8yABjN7fQDGSlILMw3klPd3l/G0v1vhYhP788/EZx/OnWEeVSk/KebeoCGnV1kgB0JqjDGkC8MPz3dqBNfKdaC352BB4BCOYrvEjYIorSAxAY6HluwQCAE5AoMAABAL5OQXfPoujZQCAuFMogKIzgVWglraW9QoDADOB5TnxGUbnzgCg5PGoX5qeuCPsp4JPjISfFmXFen/rwFSm6Ktn1HXxwbwsnfOEe3Z+fHcU8vNz/sxHFv7Ibh9xWJI41CVQaJwYjB9+E74kbdesTmnZsdsMNOmVtLzkudA54jPnC6idth0MjBT46JnMXBUpc9qkWPFFci9IrwIEBB0ELOiHegBcWwloBXb6rzb7K02AsIumuk0WjiFHyEBQ8JFRfIV/6uwLve7DPN4Q8Z0/L/wz6Q9I0+M+CPA1Zh3Pa/8zgab8sjmxnBeVdU24UNuqBxaSj+IXON553873Z5+AwPp9/bZfi3d2QWcWcHKPcBlxp4xnYJDq+sQB1wFC4HUcow6xHmYRdmYDzCRsFbV0EHClplMBGR5a3+rCAunIs0u7I70KEMBiDoQjsNnwIMcgMwGY8AuBhLiDhJnAUWtrHhmGpf9HSQ8f2SmXABCPZCBY2UAvfH/smXY/PSdzpDX2HRawlDkXYeV3BUA8UYgFLD3xL2YC5+notAtmcG8UoLOHfuT84UQg1zLg/Dz/W5pfVkBIBX8ws2ytigtgglSBRXGAGDXqeZFgR1EX9FnDFwA3BQIEBAAmPNYNoLCt6T2s+0vS6wABAEPIHJAmRQUCXgEBBOkfkAgIAYslF6sIZZYpgBo2Yo3Nae3g51Wo7eyo/b2haOShzl8KBD2fPc/3v+M8nlH/zoRK+DvzWCWpOuLKCD5OOpoAT2n/50CgC3jVRa+DJ2VBeknyuWsRpWv3FQjyc/4s+iCPTvA9vZ8FwIrY1urxechwJgEfHUD6BADYUQHxiZHqi+RUpL3jqfQqQEAAyOgZHmrKKY4aaDjKJJg67XvbddRpki0LXM8bHW5aXJ2hrWzgXgV2EIBSR0CwAhI8DcFiMMi30BPpMwlhjymwzlbs39XT5HA9zRaNfEp+kAj/ra6zqL+LlPAfbe+Pns5AILfd9newl3+MeufKSvgYQFhsoAtt/e4O/JLG78XT5e5j/ZcZtTzbgYPzE5u7niFPAYEBxyBAMCbg4dGmmLCrpplgfUMQG7uoJLl7Nr0KEDhPYYcdS1JxJj10qJALJIBDpUBCqwEslZDSzxblJonaxUrWm6qB4+vS/sj3aZoEQJsrkBScn1j/sYmBJvArm1G0/1tf7CMizAbS20y/q7xp1gGImrI2fjodBdDS4vQT8YjNz/gDQrOdsJLQlNGm4gvG5OC8i+e07KTgMcnpRVzLokTyY1TBtZXb/SOeF+WB2Oj1WRnj/XTs7+F+60BGbcXtwvU2FsV6L71SEBDUhhrVFiIWjTeuBRhkw6YasL8hFX+eSNrTENnoYVV+F476vstbF2olf0AyAb+hCb4ez4uiswbXJ489rYwlzCoHEj7Pdyv/ICsjWcDCCFoVnaXTPPXn3PP6N6CgDh73tTYHavUnUEDbGFLP76r1s3wN6I7Cn2RAuH2iXAWc7D/h8hyWQS91G5mL8+pRmp8176lCHXfqHqf+kbP0SkGgkhVanKZWkNBsC6A6hFP7+iWS3iZqaqu94/uEnuspV5+p30Ca92h3kfb3m+K8T1BigUdez3IgbPVlYhCZBTUJqTrkmo94QwRM0WAP7gtgs0CL19bvV21FGvm0/pZccE+sxbTsOLsHBPS+pi1PAMFfrLnCshRzNfyxVg4sIAW3wIDrkB+i0EAehOMuH5vaP+qPAI2OmaGoU8oTv5Uw4Dzx4+hd7zgTAAqN6Yr4LEAfAU0mMATiM7ViKDGHUxxVV2L1gtczN05WUgLq7ddaTdqRhZ/pamMDi62eQn3v3I/NHCCfxNkssdD4NR2bP1Me8gdUWykcUh36DggcOm+e9Oef0foVBECC34X05Bw1QUchrpBpBKTlleZYNqFfgelQkuVzMABnVakcdClLB6yzgCiFN9Q/11Wlggaoa2wBBtPOop5PrxIEGIRVuGLgsQEHxpyYCCAYEJn1Y0VaBiEjo1WItHfcS020uFdrwIJQ4xUcGxgBqxnQ2QELNdI+t6/9zQ0dzoU/f/tECYAu9HH+5EiD90Gm7aWVXwoCytmn/BBVjs+NFdR5E9D4zcoiQHWd5WD2dswlC789dhGgtR6FP7ngMwgQmyrWAsqrnw9pdcrsKnO2gBYYzLIyje44kfBJQ8EAooQvCxz2KkEg04KSXD8MDhlB6NjisI6i8bj6PjTRIRUVj0Y5zBGn3zJFzY7rHCGAIEYJmAkchHil/XFPIoDmswwENDt6d3guZccZExCszKDYS69Dtl3BHTffsJ5EVjp70sNNKwDUeRMM7+WlUUlYCSgStELuBagJOvfy2rVmf0crTDsXf1E6VyMwhcyaDxQARs9iH0AqjdAdzyrtQlMr7rIQXuov3/UyIvBKQYCENRvZESBMgCG1vaQkE3Cd4B1AIha938UVxILLOh95xpqlzy7kmXf3hCQFloGAAUGRnajoOb1fe67u
MoA8b9W2pNL88a7QXscAKUk7CuwkvN13QODknVwWZhchoA1oFtBhU4Dz0AW2A0LVTzytg9u9/J5R6HMB6qzKWw8Ru8+q1BevEQsIxXXX+ZmgwLB4591CFevdtoALEHr3eDkReKUgADStnwIrrpVRTECBFlQkQpWXBniig+WRf4vCWG+kYAIr6xaI+SMYDEQgMor+a7Qd+wesg4rTx9DqjZ5T+VWR78985LldOHRa/j0ATAfFBIJ4Rr13ZTtRP0cbllQOv5IurZNo8vxYgydtFIIPet8xL/HO+yzwmNZsn5avPbPyGp+tzqbX37Bz6bcWA1iUTyi1hSXESRGXe2Xxvr2SPvR3PeP/bumjRBv+VU1r56i6Oe+AefezJT+tvZN3vzCdPq5Tt7tPfMGLnhzmeb4odJ8+87C49SOV/iOkE5TCkYOdp5cW9Pn0fBWsgv/EPXJ+3+dVg/njexml6wdA+3ivfLUgYOnEnkuD+l76JDqxnpzdedUTgP3sW17Qt7vfoj/0pbUQGufu9Ai+fvehkt+fBmN6rryfhxy/7KdfCPB6CWi+NL0wfx+lnpZ7X9KfztIrB4GPKWGfd3/4CA+4Q8uevSkA5JmG0zbNdTFxnsgmd4iYb3DuCO156MJ8gg6yAtPxGadPeJIMffIC/PHkYf3VC/O1hma+xw7Pf/zEx/U3z5TqY1ajPBeP7lcjicivAPjM287HFyj9JgB/721n4guQ3pfr3Uv/iKr+Q+vF1+IY/IyqfuvbzsQXIonIX/5iLNv7cn3xpFduDrxP79P79IVO70HgfXqfvsTTawGB/+RtZ+ALmL5Yy/a+XF8k6VU4Bt+n9+l9envptTCB9+l9ep/eUnoPAu/T+/Qlnt46CIjIPy8inxGRnxKR73/b+fkoSUR+s4j8BRH5SRH5X0Xk+/z6V4nInxeRv+XH3+jXRUT+Iy/rXxeRb3m7JXg6icgmIv+ziPyof/5HReQnvFx/RkQ+8Ouf8s8/5d9//dvM93NJRL5SRH5YRP6mt923f7G02cdJbxUERGQD8B8D+E4AvwXAd4vIb3mbefqI6Qbg31LVfwLAtwH4A57/7wfw46r6jQB+3D8DVs5v9L9/DcAf/dXP8kdK3wfgJ+nzfwDgD3u5fgnA9/r17wXwS6r6jwH4w37fa04/BODHVPUfB/DNsDJ+sbTZR08cYOJX+w/AtwP4c/T5BwD8wNvM0+dZnv8OwO+CzX78tF/7NGwyFAD8MQDfTffnfa/tD8DXwYThnwXwo7BJqX8PwGVtOwB/DsC3+/nF75O3XYY75fr1AP73NX9fDG32cf/etjnwDwP4Wfr8Wb/2ziWnwL8NwE8A+FpV/XkA8OPX+G3vUnn/CIB/G7W57VcD+H9U9eafOe9ZLv/+l/3+15i+AcDfBfAn3NT54yLyFfjiaLOPld42CJwteXjnxixF5NcC+K8B/Juq+v8+devJtVdXXhH5FwD8oqr+Fb58cusxaMPxu9eWLgC+BcAfVdXfBuD/Q1H/s/Qule1jpbcNAp8F8Jvp89cB+Lm3lJePlUTkAQYAf0pV/xu//Asi8mn//tMAftGvvyvl/R0AfreI/AyAPw0zCf4IgK8UkVhvwnnPcvn3vwHA//2rmeGPkD4L4LOq+hP++YdhoPCut9nHTm8bBP4SgG90r/MHAH4vgD/7lvP04iQWfuY/BfCTqvof0ld/FsD3+Pn3wHwFcf33ucf52wD8clDQ15RU9QdU9etU9ethbfI/quq/DOAvAPg9fttarijv7/H7X6W2VNW/A+BnReSb/NJ3APgbeMfb7PNKb9spAeC7APxvAH4awL/7tvPzEfP+T8Go4V8H8Nf877tg9vCPA/hbfvwqv19goyE/DeB/AfCtb7sMLyjj7wTwo37+DQD+JwA/BeC/AvApv/5l/vmn/PtveNv5fqZMvxXAX/Z2+28B/MYvpjb7qH/vpw2/T+/Tl3h62+bA+/Q+vU9vOb0HgffpffoST+9B4H16n77E03sQeJ/epy/x9B4E3qf36Us8vQeB9+l9+hJP70HgfXqfvsTT/w+tCba+RPSREwAAAABJRU5ErkJggg==\n", + "text/plain": [ + "
" + ] + }, + "metadata": { + "needs_background": "light" + }, + "output_type": "display_data" + } + ], + "source": [ + "import mindspore.dataset.transforms.c_transforms as c_transforms\n", + "import mindspore.dataset.transforms.vision.c_transforms as C\n", + "import matplotlib.pyplot as plt\n", + "cifar10_path = \"./dataset/Cifar10Data/cifar-10-batches-bin/\"\n", + "\n", + "# create Cifar10Dataset for reading data\n", + "cifar10_dataset = ds.Cifar10Dataset(cifar10_path,num_parallel_workers=4)\n", + "transforms = C.RandomResizedCrop((800,800))\n", + "# apply the transform to the dataset through dataset.map()\n", + "cifar10_dataset = cifar10_dataset.map(input_columns=\"image\",operations=transforms,num_parallel_workers=4)\n", + "\n", + "data = next(cifar10_dataset.create_dict_iterator())\n", + "plt.imshow(data[\"image\"])\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "2. 使用自定义Python函数进行数据增强,数据增强时采用多进程优化方案,开启了4个进程并发完成任务。" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "before map:\n", + "[0 1 2 3 4]\n", + "[1 2 3 4 5]\n", + "[2 3 4 5 6]\n", + "[3 4 5 6 7]\n", + "[4 5 6 7 8]\n", + "after map:\n", + "[ 0 1 4 9 16]\n", + "[ 1 4 9 16 25]\n", + "[ 4 9 16 25 36]\n", + "[ 9 16 25 36 49]\n", + "[16 25 36 49 64]\n" + ] + } + ], + "source": [ + "def generator_func():\n", + " for i in range(5):\n", + " yield (np.array([i,i+1,i+2,i+3,i+4]),)\n", + "\n", + "ds3 = ds.GeneratorDataset(source=generator_func,column_names=[\"data\"])\n", + "print(\"before map:\")\n", + "for data in ds3.create_dict_iterator():\n", + " print(data[\"data\"])\n", + "\n", + "func = lambda x:x**2\n", + "ds4 = ds3.map(input_columns=\"data\",operations=func,python_multiprocessing=True,num_parallel_workers=4)\n", + "print(\"after map:\")\n", + "for data in ds4.create_dict_iterator():\n", + " print(data[\"data\"])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 性能优化方案总结" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 多线程优化方案" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "在数据pipeline过程中,相关算子一般都有线程数设置参数,来提升处理并发度,提升性能,例如:\n", + "- 在数据加载的过程中,内置数据加载类有`num_parallel_workers`参数用来设置线程数。\n", + "- 在数据增强的过程中,`map`函数有`num_parallel_workers`参数用来设置线程数。\n", + "- 在Batch的过程中,`batch`函数有`num_parallel_workers`参数用来设置线程数。\n", + "\n", + "具体内容请参考[内置加载算子](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 多进程优化方案" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "数据处理中Python实现的算子均支持多进程的模式,例如:\n", + "- `GeneratorDataset`这个类默认是多进程模式,它的`num_parallel_workers`参数表示的是开启的进程数,默认为1,具体内容请参考[GeneratorDataset](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html#mindspore.dataset.GeneratorDataset)。\n", + "- 如果使用Python自定义函数或者`py_transforms`模块进行数据增强的时候,当`map`函数的参数`python_multiprocessing`设置为True时,此时参数`num_parallel_workers`表示的是进程数,参数`python_multiprocessing`默认为False,此时参数`num_parallel_workers`表示的是线程数,具体的内容请参考[内置加载算子](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.html)。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Compose优化方案" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Map算子可以接收Tensor算子列表,并将按照顺序应用所有的这些算子,与为每个Tensor算子使用的Map算子相比,此类“胖Map算子”可以获得更好的性能,如图所示:" + ] + }, + { 
+ "cell_type": "markdown", + "metadata": {}, + "source": [ + "![title](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/optimize_the_performance_of_data_preparation/images/compose.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 算子融合优化方案" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "提供某些融合算子,这些算子将两个或多个算子的功能聚合到一个算子中。具体内容请参考[数据增强算子](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.dataset.transforms.vision.html),与它们各自组件的流水线相比,这种融合算子提供了更好的性能。如图所示:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![title](https://gitee.com/mindspore/docs/raw/master/tutorials/notebook/optimize_the_performance_of_data_preparation/images/operator_fusion.png)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.5" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/tutorials/notebook/synchronization_training_and_evaluation.ipynb b/tutorials/notebook/synchronization_training_and_evaluation.ipynb index 486fd4cf2f762b193054db536c9576b5bdc5512f..236ae433c882d620ead10a0247dc321bab8122d3 100644 --- a/tutorials/notebook/synchronization_training_and_evaluation.ipynb +++ b/tutorials/notebook/synchronization_training_and_evaluation.ipynb @@ -230,22 +230,22 @@ "metadata": {}, "outputs": [], "source": [ - "import matplotlib.pyplot as plt\n", "from mindspore.train.callback import Callback\n", "\n", "class EvalCallBack(Callback):\n", - " def __init__(self, model, eval_dataset, eval_per_epoch):\n", + " def __init__(self, model, eval_dataset, eval_per_epoch, epoch_per_eval):\n", " self.model = model\n", " self.eval_dataset = eval_dataset\n", " self.eval_per_epoch = eval_per_epoch\n", + " self.epoch_per_eval = epoch_per_eval\n", " \n", " def epoch_end(self, run_context):\n", " cb_param = run_context.original_args()\n", " cur_epoch = cb_param.cur_epoch_num\n", " if cur_epoch % self.eval_per_epoch == 0:\n", - " acc = self.model.eval(self.eval_dataset,dataset_sink_mode = True)\n", - " epoch_per_eval[\"epoch\"].append(cur_epoch)\n", - " epoch_per_eval[\"acc\"].append(acc[\"Accuracy\"])\n", + " acc = self.model.eval(self.eval_dataset, dataset_sink_mode=True)\n", + " self.epoch_per_eval[\"epoch\"].append(cur_epoch)\n", + " self.epoch_per_eval[\"acc\"].append(acc[\"Accuracy\"])\n", " print(acc)\n" ] }, @@ -351,7 +351,6 @@ } ], "source": [ - "from mindspore.train.serialization import load_checkpoint, load_param_into_net\n", "from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor\n", "from mindspore.train import Model\n", "from mindspore import context\n", @@ -368,21 +367,21 @@ " repeat_size = 1\n", " network = LeNet5()\n", " \n", - " train_data = create_dataset(train_data_path,repeat_size = repeat_size)\n", - " eval_data = create_dataset(eval_data_path,repeat_size = repeat_size)\n", + " train_data = create_dataset(train_data_path, repeat_size=repeat_size)\n", + " eval_data = create_dataset(eval_data_path, repeat_size=repeat_size)\n", " \n", " # define the loss function\n", " net_loss = SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True, reduction='mean')\n", " # define the optimizer\n", " net_opt = nn.Momentum(network.trainable_params(), 
learning_rate=0.01, momentum=0.9)\n", " config_ck = CheckpointConfig(save_checkpoint_steps=eval_per_epoch*1875, keep_checkpoint_max=15)\n", - " ckpoint_cb = ModelCheckpoint(prefix=\"checkpoint_lenet\",directory=ckpt_save_dir, config=config_ck)\n", + " ckpoint_cb = ModelCheckpoint(prefix=\"checkpoint_lenet\", directory=ckpt_save_dir, config=config_ck)\n", " model = Model(network, net_loss, net_opt, metrics={\"Accuracy\": Accuracy()})\n", " \n", - " epoch_per_eval = {\"epoch\":[],\"acc\":[]}\n", - " eval_cb = EvalCallBack(model,eval_data,eval_per_epoch)\n", + " epoch_per_eval = {\"epoch\": [], \"acc\": []}\n", + " eval_cb = EvalCallBack(model, eval_data, eval_per_epoch, epoch_per_eval)\n", " \n", - " model.train(epoch_size, train_data, callbacks=[ckpoint_cb, LossMonitor(375),eval_cb],\n", + " model.train(epoch_size, train_data, callbacks=[ckpoint_cb, LossMonitor(375), eval_cb],\n", " dataset_sink_mode=True)" ] }, @@ -441,11 +440,13 @@ } ], "source": [ + "import matplotlib.pyplot as plt\n", + "\n", "def eval_show(epoch_per_eval):\n", " plt.xlabel(\"epoch number\")\n", " plt.ylabel(\"Model accuracy\")\n", " plt.title(\"Model accuracy variation chart\")\n", - " plt.plot(epoch_per_eval[\"epoch\"],epoch_per_eval[\"acc\"],\"red\")\n", + " plt.plot(epoch_per_eval[\"epoch\"], epoch_per_eval[\"acc\"], \"red\")\n", " plt.show()\n", " \n", "eval_show(epoch_per_eval)" diff --git a/tutorials/source_en/advanced_use/customized_debugging_information.md b/tutorials/source_en/advanced_use/customized_debugging_information.md index 208991c20c35fb5c169882c72e99a59d1a5ae3c8..0e2b71e874e68fd5c967ba61584668a0af117958 100644 --- a/tutorials/source_en/advanced_use/customized_debugging_information.md +++ b/tutorials/source_en/advanced_use/customized_debugging_information.md @@ -31,7 +31,7 @@ For example, you can monitor the loss, save model parameters, dynamically adjust MindSpore provides the callback capabilities to allow users to insert customized operations in a specific phase of training or inference, including: -- Callback functions such as `ModelCheckpoint`, `LossMonitor`, and `SummaryStep` provided by the MindSpore framework +- Callback functions such as `ModelCheckpoint`, `LossMonitor`, and `SummaryCollector` provided by the MindSpore framework - Custom callback functions Usage: Transfer the callback object in the `model.train` method. The callback object can be a list, for example: @@ -39,13 +39,13 @@ Usage: Transfer the callback object in the `model.train` method. The callback ob ```python ckpt_cb = ModelCheckpoint() loss_cb = LossMonitor() -summary_cb = SummaryStep() +summary_cb = SummaryCollector(summary_dir='./summary_dir') model.train(epoch, dataset, callbacks=[ckpt_cb, loss_cb, summary_cb]) ``` `ModelCheckpoint` can save model parameters for retraining or inference. `LossMonitor` can output loss information in logs for users to view. In addition, `LossMonitor` monitors the loss value change during training. When the loss value is `Nan` or `Inf`, the training terminates. -SummaryStep can save the training information to a file for later use. +`SummaryCollector` can save the training information to files for later use. During the training process, the callback list will execute the callback function in the defined order. Therefore, in the definition process, the dependency between callbacks needs to be considered. 
### Custom Callback diff --git a/tutorials/source_en/advanced_use/dashboard.md b/tutorials/source_en/advanced_use/dashboard.md index b41b8c70c90dc07af5b2d7e3b036b6b10fc1c5bb..6287ce87cd8322b7dddc18868112d280eb658662 100644 --- a/tutorials/source_en/advanced_use/dashboard.md +++ b/tutorials/source_en/advanced_use/dashboard.md @@ -1,6 +1,6 @@ # Dashboard -`Ascend` `GPU` `Model Optimization` `Intermediate` `Expert` +`Ascend` `GPU` `CPU` `Model Optimization` `Intermediate` `Expert` diff --git a/tutorials/source_en/advanced_use/images/data_chart.png b/tutorials/source_en/advanced_use/images/data_chart.png index 1c8d6995bc15ec28ecfca059b237a0d123dbde4f..f698c682119efc886b46a911d3c61f50ab017879 100644 Binary files a/tutorials/source_en/advanced_use/images/data_chart.png and b/tutorials/source_en/advanced_use/images/data_chart.png differ diff --git a/tutorials/source_en/advanced_use/images/data_function.png b/tutorials/source_en/advanced_use/images/data_function.png index 5af8030e1ad01f10e0c8b5636ceb5985fb5d8153..14dd75ba77452e938e75f46b65d49b6f593c543f 100644 Binary files a/tutorials/source_en/advanced_use/images/data_function.png and b/tutorials/source_en/advanced_use/images/data_function.png differ diff --git a/tutorials/source_en/advanced_use/images/data_label.png b/tutorials/source_en/advanced_use/images/data_label.png index 8d20a3d46e9f802fee99b5663b38d489fffd6d60..f76c645e26b28401285f00dd0613d27e3506982c 100644 Binary files a/tutorials/source_en/advanced_use/images/data_label.png and b/tutorials/source_en/advanced_use/images/data_label.png differ diff --git a/tutorials/source_en/advanced_use/images/data_op_profile.png b/tutorials/source_en/advanced_use/images/data_op_profile.png index b83408e92777181f6447ec20239fc92e28084a6a..6fc146a688c4670594cfc40e20e9180cfa4eacfc 100644 Binary files a/tutorials/source_en/advanced_use/images/data_op_profile.png and b/tutorials/source_en/advanced_use/images/data_op_profile.png differ diff --git a/tutorials/source_en/advanced_use/images/data_table.png b/tutorials/source_en/advanced_use/images/data_table.png index c3e44c634f72f8e4a89282bcb482f83d3b04da1c..65dcd39049b2754ef9ed22641981743f985e2b85 100644 Binary files a/tutorials/source_en/advanced_use/images/data_table.png and b/tutorials/source_en/advanced_use/images/data_table.png differ diff --git a/tutorials/source_en/advanced_use/images/gpu_activity_profiler.png b/tutorials/source_en/advanced_use/images/gpu_activity_profiler.png index 0269025e33b8e7365024b423bfc9d91e895de0ea..633599d845ffc1f308d704540dc501b5288038f4 100644 Binary files a/tutorials/source_en/advanced_use/images/gpu_activity_profiler.png and b/tutorials/source_en/advanced_use/images/gpu_activity_profiler.png differ diff --git a/tutorials/source_en/advanced_use/images/gpu_op_ui_profiler.png b/tutorials/source_en/advanced_use/images/gpu_op_ui_profiler.png index 8567c50f8b29c1dcae5219ff085459e260242b36..e8e1dcaacf5c1dbd80dafe9e634f60db1048efb9 100644 Binary files a/tutorials/source_en/advanced_use/images/gpu_op_ui_profiler.png and b/tutorials/source_en/advanced_use/images/gpu_op_ui_profiler.png differ diff --git a/tutorials/source_en/advanced_use/images/graph.png b/tutorials/source_en/advanced_use/images/graph.png index 55ca7d7183c818a15b69a3a6ee2c4ef29655460c..1660bc677ad8870b0bdcdb3d64e0a569477d8209 100644 Binary files a/tutorials/source_en/advanced_use/images/graph.png and b/tutorials/source_en/advanced_use/images/graph.png differ diff --git a/tutorials/source_en/advanced_use/images/graph_sidebar.png 
b/tutorials/source_en/advanced_use/images/graph_sidebar.png index 90e8d868b5ff9d68ae14d55d8f3ff188db412556..72c98d6008931a50a8376d9d6b03e48e8f57ba5f 100644 Binary files a/tutorials/source_en/advanced_use/images/graph_sidebar.png and b/tutorials/source_en/advanced_use/images/graph_sidebar.png differ diff --git a/tutorials/source_en/advanced_use/images/histogram_func.png b/tutorials/source_en/advanced_use/images/histogram_func.png index c4e2c3c9dce7cde09f12141cf9cc19b1f59cebaf..84dfd7f82e667b45d80fc7cf28761b4177d5df80 100644 Binary files a/tutorials/source_en/advanced_use/images/histogram_func.png and b/tutorials/source_en/advanced_use/images/histogram_func.png differ diff --git a/tutorials/source_en/advanced_use/images/image_function.png b/tutorials/source_en/advanced_use/images/image_function.png index d51b8e226f3a13b9707e6bba9abfa4edef6eaaea..4a43b649c106e81b70a0a5bb824bc6563cd2a66b 100644 Binary files a/tutorials/source_en/advanced_use/images/image_function.png and b/tutorials/source_en/advanced_use/images/image_function.png differ diff --git a/tutorials/source_en/advanced_use/images/image_vi.png b/tutorials/source_en/advanced_use/images/image_vi.png index d15ece27f4566f7afbe02ee16b2e5f330b9f402f..1fe3ee2c28367d5fc5d7b322e49b3a731c91f620 100644 Binary files a/tutorials/source_en/advanced_use/images/image_vi.png and b/tutorials/source_en/advanced_use/images/image_vi.png differ diff --git a/tutorials/source_en/advanced_use/images/lineage_label.png b/tutorials/source_en/advanced_use/images/lineage_label.png index 93834d8cedcf41aa1da496598f5eff802274b980..56f6eb7dfd4cd39ce7c8ebf6fa5e2b0d61ea5871 100644 Binary files a/tutorials/source_en/advanced_use/images/lineage_label.png and b/tutorials/source_en/advanced_use/images/lineage_label.png differ diff --git a/tutorials/source_en/advanced_use/images/lineage_model_chart.png b/tutorials/source_en/advanced_use/images/lineage_model_chart.png index d0d0a9a30d0ff0f653b92886b9cafb5b3a12a1b2..32e307551e210a48cfbd5022fc2901e841dd9b8a 100644 Binary files a/tutorials/source_en/advanced_use/images/lineage_model_chart.png and b/tutorials/source_en/advanced_use/images/lineage_model_chart.png differ diff --git a/tutorials/source_en/advanced_use/images/lineage_model_table.png b/tutorials/source_en/advanced_use/images/lineage_model_table.png index 7fa384e1c6f6637a3530b3354a6d3b266ff5d319..923b3ee95c08f2a32437988aae99c1aba6d191ef 100644 Binary files a/tutorials/source_en/advanced_use/images/lineage_model_table.png and b/tutorials/source_en/advanced_use/images/lineage_model_table.png differ diff --git a/tutorials/source_en/advanced_use/images/minddata_profile.png b/tutorials/source_en/advanced_use/images/minddata_profile.png index 79dfad25e6828769a2efc697bb7b02a171dbbdd0..a5698394aa7e68fbe1592f1ab19555e7820589fa 100644 Binary files a/tutorials/source_en/advanced_use/images/minddata_profile.png and b/tutorials/source_en/advanced_use/images/minddata_profile.png differ diff --git a/tutorials/source_en/advanced_use/images/multi_scalars_select.png b/tutorials/source_en/advanced_use/images/multi_scalars_select.png index 80bbc7822bf6f86ed5f1ad4f24ffc8039b655c56..7153bd3002aad05fc68e4a879aa07f021af70e0a 100644 Binary files a/tutorials/source_en/advanced_use/images/multi_scalars_select.png and b/tutorials/source_en/advanced_use/images/multi_scalars_select.png differ diff --git a/tutorials/source_en/advanced_use/images/op_statistics.PNG b/tutorials/source_en/advanced_use/images/op_statistics.PNG index 
05a146e1ffd5f732ad0fb8c80bd9abe81fb65ab4..ac22f98dac493a5221481b9029e7539a95b29d85 100644 Binary files a/tutorials/source_en/advanced_use/images/op_statistics.PNG and b/tutorials/source_en/advanced_use/images/op_statistics.PNG differ diff --git a/tutorials/source_en/advanced_use/images/op_type_statistics.PNG b/tutorials/source_en/advanced_use/images/op_type_statistics.PNG index 6d18ccaa0f393938c8f89ca7c20e21e5ff496b4a..92cf3c96eca35ddf7ddc76430c24884526dbaafa 100644 Binary files a/tutorials/source_en/advanced_use/images/op_type_statistics.PNG and b/tutorials/source_en/advanced_use/images/op_type_statistics.PNG differ diff --git a/tutorials/source_en/advanced_use/images/performance_overall.png b/tutorials/source_en/advanced_use/images/performance_overall.png index 923c87df07361e35ae7429b0da2736edd6a2880c..67d1dc36e9c2867071825663a295eab842ce8294 100644 Binary files a/tutorials/source_en/advanced_use/images/performance_overall.png and b/tutorials/source_en/advanced_use/images/performance_overall.png differ diff --git a/tutorials/source_en/advanced_use/images/resources_cpu.png b/tutorials/source_en/advanced_use/images/resources_cpu.png index cd62ad294855f6ee11d1503aaa12c565dbc1c312..4bc7a3b935924f70f411084de787d34d7561b52d 100644 Binary files a/tutorials/source_en/advanced_use/images/resources_cpu.png and b/tutorials/source_en/advanced_use/images/resources_cpu.png differ diff --git a/tutorials/source_en/advanced_use/images/resources_mem.png b/tutorials/source_en/advanced_use/images/resources_mem.png index a222700035f14c08f1979cec7914a976a3633070..8da0662d114089dd3955b044d73377ea43a0c826 100644 Binary files a/tutorials/source_en/advanced_use/images/resources_mem.png and b/tutorials/source_en/advanced_use/images/resources_mem.png differ diff --git a/tutorials/source_en/advanced_use/images/resources_npu.png b/tutorials/source_en/advanced_use/images/resources_npu.png index 51cc63a7c5d1272d226e19c014b0974302f06d12..69440764b00af75f5cc0da8c61fd1a17c5bbcbdd 100644 Binary files a/tutorials/source_en/advanced_use/images/resources_npu.png and b/tutorials/source_en/advanced_use/images/resources_npu.png differ diff --git a/tutorials/source_en/advanced_use/images/scalar.png b/tutorials/source_en/advanced_use/images/scalar.png index f783fec6ccdf67a53a58b4cd1355d75d0cb03879..15a2a4889288b5153fd26c28f6d7a12b5eef4f98 100644 Binary files a/tutorials/source_en/advanced_use/images/scalar.png and b/tutorials/source_en/advanced_use/images/scalar.png differ diff --git a/tutorials/source_en/advanced_use/images/scalar_compound.png b/tutorials/source_en/advanced_use/images/scalar_compound.png index 8813a59f7551f7ab239e6103e1e9ef14ec4e2add..c248af843f3e850eda275d33eaadcfaba4304840 100644 Binary files a/tutorials/source_en/advanced_use/images/scalar_compound.png and b/tutorials/source_en/advanced_use/images/scalar_compound.png differ diff --git a/tutorials/source_en/advanced_use/images/scalar_select.png b/tutorials/source_en/advanced_use/images/scalar_select.png index 11c06aecf6b012f6033414ce5beb2d4600bc3a91..056797d9da760ad9878c86e09732eac6c6bac303 100644 Binary files a/tutorials/source_en/advanced_use/images/scalar_select.png and b/tutorials/source_en/advanced_use/images/scalar_select.png differ diff --git a/tutorials/source_en/advanced_use/images/step_trace.png b/tutorials/source_en/advanced_use/images/step_trace.png index cd82cd8ade3a577c7578cb804fe9967c9c27e541..6eac06f3d1ffc34c176c6a52d979e5e135571507 100644 Binary files a/tutorials/source_en/advanced_use/images/step_trace.png and 
b/tutorials/source_en/advanced_use/images/step_trace.png differ diff --git a/tutorials/source_en/advanced_use/images/tensor_function.png b/tutorials/source_en/advanced_use/images/tensor_function.png index b6c5e8aba5b098590c168b6b5acb4c698a0f6922..43dbda65cbe55a6e7e3388808087469f11186dde 100644 Binary files a/tutorials/source_en/advanced_use/images/tensor_function.png and b/tutorials/source_en/advanced_use/images/tensor_function.png differ diff --git a/tutorials/source_en/advanced_use/images/tensor_histogram.png b/tutorials/source_en/advanced_use/images/tensor_histogram.png index 4d3ca16b63261eca5e8318cb47ec4050539eca51..967a452efde4efc9f464782244f4e790417b7122 100644 Binary files a/tutorials/source_en/advanced_use/images/tensor_histogram.png and b/tutorials/source_en/advanced_use/images/tensor_histogram.png differ diff --git a/tutorials/source_en/advanced_use/images/tensor_table.png b/tutorials/source_en/advanced_use/images/tensor_table.png index 725bd9f8481826d682b593c2224a766854e9b4f8..70d1949ebf3fd4d2614ed8dd87346cfb454a7123 100644 Binary files a/tutorials/source_en/advanced_use/images/tensor_table.png and b/tutorials/source_en/advanced_use/images/tensor_table.png differ diff --git a/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md b/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md index 1e79792b6cf3659c3c04e34f6e497cb6f3a9a175..c6a1274665f22082e9cf3ea220caf4389bc18bc9 100644 --- a/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md +++ b/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md @@ -1,6 +1,6 @@ # Lineage and Scalars Comparision -`Ascend` `GPU` `Model Optimization` `Intermediate` `Expert` +`Ascend` `GPU` `CPU` `Model Optimization` `Intermediate` `Expert` diff --git a/tutorials/source_en/advanced_use/mindinsight_commands.md b/tutorials/source_en/advanced_use/mindinsight_commands.md index 42b9af599466ce92bd3515c711f7eba24d8b92c0..c65d3d3aea20eb8ea5def821f1d4c8319126c0e3 100644 --- a/tutorials/source_en/advanced_use/mindinsight_commands.md +++ b/tutorials/source_en/advanced_use/mindinsight_commands.md @@ -1,6 +1,6 @@ # MindInsight Commands -`Ascend` `GPU` `Model Optimization` `Intermediate` `Expert` +`Ascend` `GPU` `CPU` `Model Optimization` `Intermediate` `Expert` diff --git a/tutorials/source_en/advanced_use/performance_profiling_gpu.md b/tutorials/source_en/advanced_use/performance_profiling_gpu.md index 0bbe119e404d30ca4f605240089b3f2ebce72237..574809ed2a0da40db6ee38dd48f506a16c55109e 100644 --- a/tutorials/source_en/advanced_use/performance_profiling_gpu.md +++ b/tutorials/source_en/advanced_use/performance_profiling_gpu.md @@ -77,7 +77,7 @@ Users can access the Performance Profiler by selecting a specific training from Figure 1:Overall Performance Figure 1 displays the overall performance of the training, including the overall data of Step Trace, Operator Performance, MindData Performance and Timeline. Operator Performance Analysis is supportted only: -- Operator Performance: It will collect the execution time of operators and operator types. The overall performance page will show the pie graph for different operator types. +- Operator Performance: It will collect the average execution time of operators and operator types. The overall performance page will show the pie graph for different operator types. Users can click the detail link to see the details of each components. 
diff --git a/tutorials/source_en/advanced_use/quantization_aware.md b/tutorials/source_en/advanced_use/quantization_aware.md
index 4ce552935947180891e8c1d9fadd4beb20eaf033..d554ee267ce34dab4dbf7cf1ffcbd54c2a49cd91 100644
--- a/tutorials/source_en/advanced_use/quantization_aware.md
+++ b/tutorials/source_en/advanced_use/quantization_aware.md
@@ -82,47 +82,71 @@ Next, the LeNet network is used as an example to describe steps 3 and 6.
 
 Define a fusion network and replace the specified operators.
 
-1. Use the `nn.Conv2dBnAct` operator to replace the three operators `nn.Conv2d`, `nn.batchnorm`, and `nn.relu` in the original network model.
-2. Use the `nn.DenseBnAct` operator to replace the three operators `nn.Dense`, `nn.batchnorm`, and `nn.relu` in the original network model.
+1. Use the `nn.Conv2dBnAct` operator to replace the two operators `nn.Conv2d` and `nn.ReLU` in the original network model.
+2. Use the `nn.DenseBnAct` operator to replace the two operators `nn.Dense` and `nn.ReLU` in the original network model.
 
-> Even if the `nn.Dense` and `nn.Conv2d` operators are not followed by `nn.batchnorm` and `nn.relu`, the preceding two replacement operations must be performed as required.
+> Even if the `nn.Dense` and `nn.Conv2d` operators are not followed by `nn.BatchNorm2d` and `nn.ReLU`, the preceding two replacement operations must be performed as required.
 
-The definition of the original network model is as follows:
+The definition of the original network model LeNet5 is as follows:
 
 ```python
+def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):
+    """weight initial for conv layer"""
+    weight = weight_variable()
+    return nn.Conv2d(in_channels, out_channels,
+                     kernel_size=kernel_size, stride=stride, padding=padding,
+                     weight_init=weight, has_bias=False, pad_mode="valid")
+
+
+def fc_with_initialize(input_channels, out_channels):
+    """weight initial for fc layer"""
+    weight = weight_variable()
+    bias = weight_variable()
+    return nn.Dense(input_channels, out_channels, weight, bias)
+
+
+def weight_variable():
+    """weight initial"""
+    return TruncatedNormal(0.02)
+
+
 class LeNet5(nn.Cell):
-    def __init__(self, num_class=10):
+    """
+    Lenet network
+
+    Args:
+        num_class (int): Num classes. Default: 10.
+
+    Returns:
+        Tensor, output tensor
+    Examples:
+        >>> LeNet(num_class=10)
+
+    """
+    def __init__(self, num_class=10, channel=1):
         super(LeNet5, self).__init__()
         self.num_class = num_class
-
-        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
-        self.bn1 = nn.batchnorm(6)
-        self.act1 = nn.relu()
-
-        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
-        self.bn2 = nn.batchnorm(16)
-        self.act2 = nn.relu()
-
-        self.fc1 = nn.Dense(16 * 5 * 5, 120)
-        self.fc2 = nn.Dense(120, 84)
-        self.act3 = nn.relu()
-        self.fc3 = nn.Dense(84, self.num_class)
+        self.conv1 = conv(channel, 6, 5)
+        self.conv2 = conv(6, 16, 5)
+        self.fc1 = fc_with_initialize(16 * 5 * 5, 120)
+        self.fc2 = fc_with_initialize(120, 84)
+        self.fc3 = fc_with_initialize(84, self.num_class)
+        self.relu = nn.ReLU()
         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+        self.flatten = nn.Flatten()
 
     def construct(self, x):
         x = self.conv1(x)
-        x = self.bn1(x)
-        x = self.act1(x)
+        x = self.relu(x)
         x = self.max_pool2d(x)
         x = self.conv2(x)
-        x = self.bn2(x)
-        x = self.act2(x)
+        x = self.relu(x)
         x = self.max_pool2d(x)
-        x = self.flattern(x)
+        x = self.flatten(x)
         x = self.fc1(x)
-        x = self.act3(x)
+        x = self.relu(x)
         x = self.fc2(x)
-        x = self.act3(x)
+        x = self.relu(x)
         x = self.fc3(x)
         return x
 ```
diff --git a/tutorials/source_en/advanced_use/serving.md b/tutorials/source_en/advanced_use/serving.md
index c3fa19ab5384b6856c4909723c6d2cfec59aead8..a726e427cbdc9febb4b52c3892aa7770e87666de 100644
--- a/tutorials/source_en/advanced_use/serving.md
+++ b/tutorials/source_en/advanced_use/serving.md
@@ -13,22 +13,21 @@
 - [Client Samples](#client-samples)
     - [Python Client Sample](#python-client-sample)
     - [C++ Client Sample](#cpp-client-sample)
+    - [REST API Client Sample](#rest-api-client-sample)
 
-
 ## Overview
 
 MindSpore Serving is a lightweight and high-performance service module that helps MindSpore developers efficiently deploy online inference services in the production environment. After completing model training using MindSpore, you can export the MindSpore model and use MindSpore Serving to create an inference service for the model. Currently, only Ascend 910 is supported.
 
-
 ## Starting Serving
 
 After MindSpore is installed using `pip`, the Serving executable program is stored in `/{your python path}/lib/python3.7/site-packages/mindspore/ms_serving`. Run the following command to start Serving:
 
-```bash
-ms_serving [--help] [--model_path <model_path>] [--model_name <model_name>]
-           [--port <port>] [--device_id <device_id>]
+```bash
+ms_serving [--help] [--model_path=<model_path>] [--model_name=<model_name>] [--port=<port>]
+           [--rest_api_port=<rest_api_port>] [--device_id=<device_id>]
 ```
 
 Parameters are described as follows:
@@ -37,15 +36,19 @@ Parameters are described as follows:
 |`--help`|Optional|Displays the help information about the startup command. |-|-|-|
 |`--model_path=<model_path>`|Mandatory|Path for storing the model to be loaded. |String|Null|-|
 |`--model_name=<model_name>`|Mandatory|Name of the model file to be loaded. |String|Null|-|
-|`--=port `|Optional|Specifies the external Serving port number. |Integer|5500|1–65535|
+|`--port=<port>`|Optional|Specifies the external Serving gRPC port number. |Integer|5500|1–65535|
+|`--rest_api_port=<rest_api_port>`|Optional|Specifies the external Serving REST API port number. |Integer|5501|1–65535|
+|`--device_id=<device_id>`|Optional|Specifies device ID to be used.|Integer|0|0 to 7|
 
 > Before running the startup command, add the path `/{your python path}/lib:/{your python path}/lib/python3.7/site-packages/mindspore/lib` to the environment variable `LD_LIBRARY_PATH`.
+
+> `port` and `rest_api_port` cannot be the same.
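+
+For example, the following command starts Serving with distinct gRPC and REST API ports; the model directory and model file name here are illustrative:
+
+```bash
+ms_serving --model_path=/path/to/model_dir --model_name=add.mindir --port=5500 --rest_api_port=5501
+```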
 ## Application Example
 The following uses a simple network as an example to describe how to use MindSpore Serving.
 
 ### Exporting Model
+> Before exporting the model, you need to configure MindSpore [base environment](https://www.mindspore.cn/install/en).
+
 Use [add_model.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/export_model/add_model.py) to build a network with only the Add operator and export the MindSpore inference deployment model.
 
 ```python
@@ -115,7 +118,7 @@ The client code consists of the following parts:
     explicit MSClient(std::shared_ptr<Channel> channel) : stub_(MSService::NewStub(channel)) {}
    private:
     std::unique_ptr<MSService::Stub> stub_;
-    };MSClient client(grpc::CreateChannel(target_str, grpc::InsecureChannelCredentials()));
+    }; MSClient client(grpc::CreateChannel(target_str, grpc::InsecureChannelCredentials()));
@@ -151,3 +154,27 @@ The client code consists of the following parts:
     ```
     For details about the complete code, see [ms_client](https://gitee.com/mindspore/mindspore/blob/master/serving/example/cpp_client/ms_client.cc).
+
+### REST API Client Sample
+1. Send data in the form of `data`:
+   `data` field: flatten each input of the network model into one-dimensional data. If the network model has n inputs, the final data structure is a two-dimensional list of shape 1 * n.
+   As in this example, flatten the model input data `[[1.0, 2.0], [3.0, 4.0]]` and `[[1.0, 2.0], [3.0, 4.0]]` to form `[[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]]`.
+   ```
+   curl -X POST -d '{"data": [[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]]}' http://127.0.0.1:5501
+   ```
+   The following return values are displayed, indicating that the Serving service has correctly executed the inference of the add network, and the output data structure is similar to that of the input:
+   ```
+   {"data":[[2.0,4.0,6.0,8.0]]}
+   ```
+
+2. Send data in the form of `tensor`:
+   `tensor` field: composed of all inputs of the network model, each keeping its original shape.
+   As in this example, the model input data `[[1.0, 2.0], [3.0, 4.0]]` and `[[1.0, 2.0], [3.0, 4.0]]` are combined into `[[[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]]]`.
+   ```
+   curl -X POST -d '{"tensor": [[[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]]]}' http://127.0.0.1:5501
+   ```
+   The following return values are displayed, indicating that the Serving service has correctly executed the inference of the add network, and the output data structure is similar to that of the input:
+   ```
+   {"tensor":[[2.0,4.0], [6.0,8.0]]}
+   ```
+> The REST API currently supports only int32 and fp32 as inputs.
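+
+The same requests can also be issued from Python. The following is a minimal sketch that assumes the third-party `requests` package is installed; the address and payload mirror the curl examples above:
+
+```python
+import requests
+
+# Flattened inputs in the `data` form, matching the first curl example.
+payload = {"data": [[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]]}
+response = requests.post("http://127.0.0.1:5501", json=payload)
+print(response.json())  # expected: {"data": [[2.0, 4.0, 6.0, 8.0]]}
+```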
\ No newline at end of file diff --git a/tutorials/source_en/advanced_use/summary_record.md b/tutorials/source_en/advanced_use/summary_record.md index cebe6c393def9b71faaa6e6966a9f70ee1b202f0..6ce5581ee586f4fc37a60efc981ec5b2f202d2d6 100644 --- a/tutorials/source_en/advanced_use/summary_record.md +++ b/tutorials/source_en/advanced_use/summary_record.md @@ -1,6 +1,6 @@ # Summary Record -`Ascend` `GPU` `Model Optimization` `Intermediate` `Expert` +`Ascend` `GPU` `CPU` `Model Optimization` `Intermediate` `Expert` diff --git a/tutorials/source_en/quick_start/quick_video.md b/tutorials/source_en/quick_start/quick_video.md index 240a4e3aa65c75100f47bf5442ce5f1ae389ba67..d31b4343a77aa73dc1dff04577c1d327f6c45997 100644 --- a/tutorials/source_en/quick_start/quick_video.md +++ b/tutorials/source_en/quick_start/quick_video.md @@ -235,4 +235,36 @@ Provides video tutorials from installation to try-on, helping you quickly use Mi + + +## Join the MindSpore Community + + + + \ No newline at end of file diff --git a/tutorials/source_en/quick_start/quick_video/community.md b/tutorials/source_en/quick_start/quick_video/community.md new file mode 100644 index 0000000000000000000000000000000000000000..a2131fb205945dfcc14432bab4cded63a67bdc6f --- /dev/null +++ b/tutorials/source_en/quick_start/quick_video/community.md @@ -0,0 +1,9 @@ +# Participate in community building + +[comment]: <> (This document contains Hands-on Tutorial Series. Gitee does not support display. Please check tutorials on the official website) + + + +**See more**: diff --git a/tutorials/source_en/use/custom_operator.md b/tutorials/source_en/use/custom_operator.md index cff9317dd13be9efd5d52200234cc4256eecf862..7e58182b495e22cdc8cb8ae825ffb6ec0f5bf92f 100644 --- a/tutorials/source_en/use/custom_operator.md +++ b/tutorials/source_en/use/custom_operator.md @@ -29,7 +29,9 @@ The related concepts are as follows: - Operator implementation: describes the implementation of the internal computation logic for an operator through the DSL API provided by the Tensor Boost Engine (TBE). The TBE supports the development of custom operators based on the Ascend AI chip. You can apply for Open Beta Tests (OBTs) by visiting . - Operator information: describes basic information about a TBE operator, such as the operator name and supported input and output types. It is the basis for the backend to select and map operators. -This section takes a Square operator as an example to describe how to customize an operator. For details, see cases in [tests/st/ops/custom_ops_tbe](https://gitee.com/mindspore/mindspore/tree/master/tests/st/ops/custom_ops_tbe) in the MindSpore source code. +This section takes a Square operator as an example to describe how to customize an operator. + +> For details, see cases in [tests/st/ops/custom_ops_tbe](https://gitee.com/mindspore/mindspore/tree/master/tests/st/ops/custom_ops_tbe) in the MindSpore source code. ## Registering the Operator Primitive @@ -79,7 +81,7 @@ The entry function of an operator describes the internal process of compiling th 4. Call `cce_build_code` to compile and generate an operator binary file. > The input parameters of the entry function require the input information of each operator, output information of each operator, operator attributes (optional), and `kernel_name` (name of the generated operator binary file). The input and output information is encapsulated in dictionaries, including the input and output shape and dtype when the operator is called on the network. 
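+As an illustration, a condensed sketch of such an entry function is shown below. It follows the four steps above; the function name is illustrative, and the exact DSL calls in the shipped `square_impl.py` may differ:
+
+```python
+from te import tvm
+import te.lang.cce
+
+def cus_square_impl(input_x, output_y, kernel_name="CusSquareImpl"):
+    """Entry function: input_x and output_y are dicts carrying shape and dtype."""
+    shape = input_x.get("shape")
+    dtype = input_x.get("dtype").lower()
+    # 1. Prepare a placeholder for the input tensor.
+    data = tvm.placeholder(shape, name="data", dtype=dtype)
+    # 2. Describe the x * x computation with the te.lang.cce DSL.
+    res = te.lang.cce.vmul(data, data)
+    # 3. Generate the schedule automatically.
+    with tvm.target.cce():
+        sch = te.lang.cce.auto_schedule(res)
+    # 4. Compile and generate the operator binary named by kernel_name.
+    config = {"name": kernel_name, "tensor_list": [data, res]}
+    te.lang.cce.cce_build_code(sch, config)
+```
+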
-For details about TBE operator development, visit the [TBE website](https://www.huaweicloud.com/ascend/dev/operator). For details about how to debug and optimize the TBE operator, visit the [Mind Studio website](https://www.huaweicloud.com/intl/en-us/ascend/mindstudio).
+For details about TBE operator development, visit the [TBE website](https://support.huaweicloud.com/odevg-A800_3000_3010/atlaste_10_0063.html). For details about how to debug and optimize the TBE operator, visit the [Mind Studio website](https://support.huaweicloud.com/usermanual-mindstudioc73/atlasmindstudio_02_0043.html).
 
 ### Registering the Operator Information
 
@@ -91,7 +93,7 @@ The operator information is key for the backend to select the operator implement
 
 ### Example
 
-The following takes the TBE implementation `square_impl.py` of the `Square` operator as an example. `square_compute` is a computable function of the operator implementation. It describes the computation logic of `x * x` by calling the API provided by `te.lang.cce`. `cus_square_op_info ` is the operator information, which is defined by `TBERegOp`.
+The following takes the TBE implementation `square_impl.py` of the `Square` operator as an example. `square_compute` is a computable function of the operator implementation. It describes the computation logic of `x * x` by calling the API provided by `te.lang.cce`. `cus_square_op_info` is the operator information, which is defined by `TBERegOp`. For the meaning of each field in the operator information, see the [TBE website](https://support.huaweicloud.com/odevg-A800_3000_3010/atlaste_10_0096.html).
 
 Note the following parameters when setting `TBERegOp`:
 
@@ -247,4 +249,4 @@ The execution result is as follows:
 ```
 x: [1. 4. 9.]
 dx: [2. 8. 18.]
-```
\ No newline at end of file
+```
diff --git a/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md b/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
index 47b041273854d09d0b39a10e1609907f1d88076e..67852d647cac90ce71e992e2422d868821906bd9 100644
--- a/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
+++ b/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
@@ -248,7 +248,7 @@ def zip(self, datasets):
         ds2 = ds.GeneratorDataset(generator_func2, ["data2"])
     ```
-2. Use `zip()` to combine the `data1` column of the dataset `ds1`and the `data2` column of the dataset `ds2` into the dataset `ds3`.
+2. Use `zip()` to combine the `data` column of the dataset `ds1` and the `data2` column of the dataset `ds2` into the dataset `ds3`.
```python ds3 = ds.zip((ds1, ds2)) for data in ds3.create_dict_iterator(): @@ -256,11 +256,11 @@ def zip(self, datasets): ``` The output is as follows: ``` - {'data1': array([0, 1, 2], dtype=int64), 'data2': array([-3, -2, -1], dtype=int64)} - {'data1': array([1, 2, 3], dtype=int64), 'data2': array([-2, -1, 0], dtype=int64)} - {'data1': array([2, 3, 4], dtype=int64), 'data2': array([-1, 0, 1], dtype=int64)} - {'data1': array([3, 4, 5], dtype=int64), 'data2': array([0, 1, 2], dtype=int64)} - {'data1': array([4, 5, 6], dtype=int64), 'data2': array([1, 2, 3], dtype=int64)} + {'data': array([0, 1, 2], dtype=int64), 'data2': array([-3, -2, -1], dtype=int64)} + {'data': array([1, 2, 3], dtype=int64), 'data2': array([-2, -1, 0], dtype=int64)} + {'data': array([2, 3, 4], dtype=int64), 'data2': array([-1, 0, 1], dtype=int64)} + {'data': array([3, 4, 5], dtype=int64), 'data2': array([0, 1, 2], dtype=int64)} + {'data': array([4, 5, 6], dtype=int64), 'data2': array([1, 2, 3], dtype=int64)} ``` ## Data Augmentation During image training, especially when the dataset size is relatively small, you can preprocess images by using a series of data augmentation operations, thereby enriching the datasets. diff --git a/tutorials/source_zh_cn/advanced_use/customized_debugging_information.md b/tutorials/source_zh_cn/advanced_use/customized_debugging_information.md index d9708232b94726eb3f2af64d2e7abcecc63fac2f..b4d6da9a82d15848681dd01e67121b43be37afe6 100644 --- a/tutorials/source_zh_cn/advanced_use/customized_debugging_information.md +++ b/tutorials/source_zh_cn/advanced_use/customized_debugging_information.md @@ -33,7 +33,7 @@ Callback是回调函数的意思,但它其实不是一个函数而是一个类 MindSpore提供Callback能力,支持用户在训练/推理的特定阶段,插入自定义的操作。包括: -- MindSpore框架提供的`ModelCheckpoint`、`LossMonitor`、`SummaryStep`等Callback函数。 +- MindSpore框架提供的`ModelCheckpoint`、`LossMonitor`、`SummaryCollector`等Callback函数。 - MindSpore支持用户自定义Callback。 使用方法:在`model.train`方法中传入Callback对象,它可以是一个Callback列表,例: @@ -41,13 +41,13 @@ MindSpore提供Callback能力,支持用户在训练/推理的特定阶段, ```python ckpt_cb = ModelCheckpoint() loss_cb = LossMonitor() -summary_cb = SummaryStep() +summary_cb = SummaryCollector(summary_dir='./summary_dir') model.train(epoch, dataset, callbacks=[ckpt_cb, loss_cb, summary_cb]) ``` `ModelCheckpoint`可以保存模型参数,以便进行再训练或推理。 `LossMonitor`可以在日志中输出loss,方便用户查看,同时它还会监控训练过程中的loss值变化情况,当loss值为`Nan`或`Inf`时终止训练。 -SummaryStep可以把训练过程中的信息存储到文件中,以便后续进行查看或可视化展示。 +`SummaryCollector` 可以把训练过程中的信息存储到文件中,以便后续进行查看或可视化展示。 在训练过程中,Callback列表会按照定义的顺序执行Callback函数。因此在定义过程中,需考虑Callback之间的依赖关系。 ### 自定义Callback diff --git a/tutorials/source_zh_cn/advanced_use/dashboard.md b/tutorials/source_zh_cn/advanced_use/dashboard.md index 9f79c106488bff9e67850f62307803e3d84f09b0..817e994f35e0c3afe1e4f05bd34a0eee9254b8f2 100644 --- a/tutorials/source_zh_cn/advanced_use/dashboard.md +++ b/tutorials/source_zh_cn/advanced_use/dashboard.md @@ -1,6 +1,6 @@ # 训练看板 -`Ascend` `GPU` `模型调优` `中级` `高级` +`Ascend` `GPU` `CPU` `模型调优` `中级` `高级` diff --git a/tutorials/source_zh_cn/advanced_use/images/data_chart.png b/tutorials/source_zh_cn/advanced_use/images/data_chart.png index 1c8d6995bc15ec28ecfca059b237a0d123dbde4f..f698c682119efc886b46a911d3c61f50ab017879 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/data_chart.png and b/tutorials/source_zh_cn/advanced_use/images/data_chart.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/data_function.png b/tutorials/source_zh_cn/advanced_use/images/data_function.png index 5af8030e1ad01f10e0c8b5636ceb5985fb5d8153..ac5978dee9272bc492bf04f9362c3bed20baf96f 
100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/data_function.png and b/tutorials/source_zh_cn/advanced_use/images/data_function.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/data_label.png b/tutorials/source_zh_cn/advanced_use/images/data_label.png index 8d20a3d46e9f802fee99b5663b38d489fffd6d60..c761a9008c5b814da1913c84d2b113174d3f1947 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/data_label.png and b/tutorials/source_zh_cn/advanced_use/images/data_label.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/data_op_profile.png b/tutorials/source_zh_cn/advanced_use/images/data_op_profile.png index b83408e92777181f6447ec20239fc92e28084a6a..6c657998a745deecb229298fce02108d831e1aa9 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/data_op_profile.png and b/tutorials/source_zh_cn/advanced_use/images/data_op_profile.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/data_table.png b/tutorials/source_zh_cn/advanced_use/images/data_table.png index c3e44c634f72f8e4a89282bcb482f83d3b04da1c..e368080648f1da89696efdd3fe280a371d5909c4 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/data_table.png and b/tutorials/source_zh_cn/advanced_use/images/data_table.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/gpu_activity_profiler.png b/tutorials/source_zh_cn/advanced_use/images/gpu_activity_profiler.png index 0269025e33b8e7365024b423bfc9d91e895de0ea..633599d845ffc1f308d704540dc501b5288038f4 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/gpu_activity_profiler.png and b/tutorials/source_zh_cn/advanced_use/images/gpu_activity_profiler.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/gpu_op_ui_profiler.png b/tutorials/source_zh_cn/advanced_use/images/gpu_op_ui_profiler.png index d1ee2c6b0f6d1d59d33550496083b27bc58aacde..e8e1dcaacf5c1dbd80dafe9e634f60db1048efb9 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/gpu_op_ui_profiler.png and b/tutorials/source_zh_cn/advanced_use/images/gpu_op_ui_profiler.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/graph.png b/tutorials/source_zh_cn/advanced_use/images/graph.png index 55ca7d7183c818a15b69a3a6ee2c4ef29655460c..1660bc677ad8870b0bdcdb3d64e0a569477d8209 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/graph.png and b/tutorials/source_zh_cn/advanced_use/images/graph.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/graph_sidebar.png b/tutorials/source_zh_cn/advanced_use/images/graph_sidebar.png index ea9515857e23d9a55ad56a88a4a21d232734ffb5..b0c3b177dac43d3f105a36dd85245ad4a873569d 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/graph_sidebar.png and b/tutorials/source_zh_cn/advanced_use/images/graph_sidebar.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/histogram_func.png b/tutorials/source_zh_cn/advanced_use/images/histogram_func.png index c4e2c3c9dce7cde09f12141cf9cc19b1f59cebaf..5e30875d0efdab22a326207b4d4c65c8867fefeb 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/histogram_func.png and b/tutorials/source_zh_cn/advanced_use/images/histogram_func.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/image_function.png b/tutorials/source_zh_cn/advanced_use/images/image_function.png index d51b8e226f3a13b9707e6bba9abfa4edef6eaaea..214e6b3927a1098456cabc6b70083b6365c85298 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/image_function.png and 
b/tutorials/source_zh_cn/advanced_use/images/image_function.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/image_vi.png b/tutorials/source_zh_cn/advanced_use/images/image_vi.png index d15ece27f4566f7afbe02ee16b2e5f330b9f402f..d1924d71c670e02f22eb878a8c3794bde630f178 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/image_vi.png and b/tutorials/source_zh_cn/advanced_use/images/image_vi.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/lineage_label.png b/tutorials/source_zh_cn/advanced_use/images/lineage_label.png index 93834d8cedcf41aa1da496598f5eff802274b980..eabd2dae20664cda83cc46d3d958a07e941a03f6 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/lineage_label.png and b/tutorials/source_zh_cn/advanced_use/images/lineage_label.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/lineage_model_chart.png b/tutorials/source_zh_cn/advanced_use/images/lineage_model_chart.png index d0d0a9a30d0ff0f653b92886b9cafb5b3a12a1b2..3c31840c8c2c89e849e71314b87ada0ba019eb44 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/lineage_model_chart.png and b/tutorials/source_zh_cn/advanced_use/images/lineage_model_chart.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/lineage_model_table.png b/tutorials/source_zh_cn/advanced_use/images/lineage_model_table.png index 7fa384e1c6f6637a3530b3354a6d3b266ff5d319..4103eee6ee25a9aa602addc616b7d200f082bbca 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/lineage_model_table.png and b/tutorials/source_zh_cn/advanced_use/images/lineage_model_table.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/minddata_profile.png b/tutorials/source_zh_cn/advanced_use/images/minddata_profile.png index 79dfad25e6828769a2efc697bb7b02a171dbbdd0..9cf3a923f33c0bedc188f425d72b845a4c730dbf 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/minddata_profile.png and b/tutorials/source_zh_cn/advanced_use/images/minddata_profile.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/multi_scalars_select.png b/tutorials/source_zh_cn/advanced_use/images/multi_scalars_select.png index 80bbc7822bf6f86ed5f1ad4f24ffc8039b655c56..104eae240dbd7518adc85ff2cb265c22dd6cb39c 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/multi_scalars_select.png and b/tutorials/source_zh_cn/advanced_use/images/multi_scalars_select.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/on_device_inference_frame.jpg b/tutorials/source_zh_cn/advanced_use/images/on_device_inference_frame.jpg deleted file mode 100644 index 6006c845e8002831ef79b39ce9d68b8afd85e0f2..0000000000000000000000000000000000000000 Binary files a/tutorials/source_zh_cn/advanced_use/images/on_device_inference_frame.jpg and /dev/null differ diff --git a/tutorials/source_zh_cn/advanced_use/images/op_statistics.PNG b/tutorials/source_zh_cn/advanced_use/images/op_statistics.PNG index 05a146e1ffd5f732ad0fb8c80bd9abe81fb65ab4..fb9c9da03ed16976877539b9a75f0591463a1dc3 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/op_statistics.PNG and b/tutorials/source_zh_cn/advanced_use/images/op_statistics.PNG differ diff --git a/tutorials/source_zh_cn/advanced_use/images/op_type_statistics.PNG b/tutorials/source_zh_cn/advanced_use/images/op_type_statistics.PNG index 6d18ccaa0f393938c8f89ca7c20e21e5ff496b4a..c4aea613f27f0bcda34e0b1ae1cf19a3c7b71f75 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/op_type_statistics.PNG and 
b/tutorials/source_zh_cn/advanced_use/images/op_type_statistics.PNG differ diff --git a/tutorials/source_zh_cn/advanced_use/images/performance_overall.png b/tutorials/source_zh_cn/advanced_use/images/performance_overall.png index 923c87df07361e35ae7429b0da2736edd6a2880c..e6846a725cff0e61a0beb92e93502312eee8483c 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/performance_overall.png and b/tutorials/source_zh_cn/advanced_use/images/performance_overall.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/resources_cpu.png b/tutorials/source_zh_cn/advanced_use/images/resources_cpu.png index cd62ad294855f6ee11d1503aaa12c565dbc1c312..a679d20b7199b40ab4dd57e7099d79a652e6344f 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/resources_cpu.png and b/tutorials/source_zh_cn/advanced_use/images/resources_cpu.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/resources_mem.png b/tutorials/source_zh_cn/advanced_use/images/resources_mem.png index eb4af42b58d9f2331b2a517a03af73165a3172ed..b0cc79ef731dcf867d2c31b15208df69a96b5253 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/resources_mem.png and b/tutorials/source_zh_cn/advanced_use/images/resources_mem.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/resources_npu.png b/tutorials/source_zh_cn/advanced_use/images/resources_npu.png index 6fc8635e0c6940f1d2660332031d704cc205b7c3..9ad6cc876b0fde63458c364d409bcd43567fbefc 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/resources_npu.png and b/tutorials/source_zh_cn/advanced_use/images/resources_npu.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/scalar.png b/tutorials/source_zh_cn/advanced_use/images/scalar.png index f783fec6ccdf67a53a58b4cd1355d75d0cb03879..2e3482e03523cc21c7a9873feaf207d333397c95 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/scalar.png and b/tutorials/source_zh_cn/advanced_use/images/scalar.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/scalar_select.png b/tutorials/source_zh_cn/advanced_use/images/scalar_select.png index 11c06aecf6b012f6033414ce5beb2d4600bc3a91..a74f1f651338718a4e8f5ba171c47069e569ee2f 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/scalar_select.png and b/tutorials/source_zh_cn/advanced_use/images/scalar_select.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/side_infer_process.eddx b/tutorials/source_zh_cn/advanced_use/images/side_infer_process.eddx deleted file mode 100644 index a1f3e1ad5aa3041bbea7c72ffed3d2ad94bcac40..0000000000000000000000000000000000000000 Binary files a/tutorials/source_zh_cn/advanced_use/images/side_infer_process.eddx and /dev/null differ diff --git a/tutorials/source_zh_cn/advanced_use/images/side_infer_process.jpg b/tutorials/source_zh_cn/advanced_use/images/side_infer_process.jpg deleted file mode 100644 index c860810b3efd4d34fb570c38ed2470cc8670ee32..0000000000000000000000000000000000000000 Binary files a/tutorials/source_zh_cn/advanced_use/images/side_infer_process.jpg and /dev/null differ diff --git a/tutorials/source_zh_cn/advanced_use/images/step_trace.png b/tutorials/source_zh_cn/advanced_use/images/step_trace.png index cd82cd8ade3a577c7578cb804fe9967c9c27e541..1feace7a12db61c4da2b04b149715239dbe8db60 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/step_trace.png and b/tutorials/source_zh_cn/advanced_use/images/step_trace.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/tensor_function.png 
b/tutorials/source_zh_cn/advanced_use/images/tensor_function.png index b6c5e8aba5b098590c168b6b5acb4c698a0f6922..ab0ad58219181c782c65c396577d2b030b6a8d19 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/tensor_function.png and b/tutorials/source_zh_cn/advanced_use/images/tensor_function.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/tensor_histogram.png b/tutorials/source_zh_cn/advanced_use/images/tensor_histogram.png index 4d3ca16b63261eca5e8318cb47ec4050539eca51..967a452efde4efc9f464782244f4e790417b7122 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/tensor_histogram.png and b/tutorials/source_zh_cn/advanced_use/images/tensor_histogram.png differ diff --git a/tutorials/source_zh_cn/advanced_use/images/tensor_table.png b/tutorials/source_zh_cn/advanced_use/images/tensor_table.png index 725bd9f8481826d682b593c2224a766854e9b4f8..d4978b786c691140b3cad2d0d2257a7c0e448162 100644 Binary files a/tutorials/source_zh_cn/advanced_use/images/tensor_table.png and b/tutorials/source_zh_cn/advanced_use/images/tensor_table.png differ diff --git a/tutorials/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md b/tutorials/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md index d768f30ee4134dc0d73b03304b2bf855965020a7..fae22dbbcf27c6e5ce5b12b62eb4c74aaeee0cba 100644 --- a/tutorials/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md +++ b/tutorials/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md @@ -1,6 +1,6 @@ # 溯源和对比看板 -`Ascend` `GPU` `模型调优` `中级` `高级` +`Ascend` `GPU` `CPU` `模型调优` `中级` `高级` diff --git a/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md b/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md index 12905b823e717b992144dff97a4c1e9b0b15cb0b..4f685599e0c553b7fe47e3359cac8333d8da0e7b 100644 --- a/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md +++ b/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md @@ -1,6 +1,6 @@ # MindInsight相关命令 -`Ascend` `GPU` `模型调优` `中级` `高级` +`Ascend` `GPU` `CPU` `模型调优` `中级` `高级` diff --git a/tutorials/source_zh_cn/advanced_use/on_device_inference.md b/tutorials/source_zh_cn/advanced_use/on_device_inference.md deleted file mode 100644 index 5676dc2a41297277cfd163f36dfc4294c22f6ce7..0000000000000000000000000000000000000000 --- a/tutorials/source_zh_cn/advanced_use/on_device_inference.md +++ /dev/null @@ -1,359 +0,0 @@ -# 端侧推理 - - - -- [端侧推理](#端侧推理) - - [概述](#概述) - - [编译方法](#编译方法) - - [端侧推理使用](#端侧推理使用) - - [生成端侧模型文件](#生成端侧模型文件) - - [在端侧实现推理](#在端侧实现推理) - - - - - -## 概述 - -MindSpore Lite是一个轻量级的深度神经网络推理引擎,提供了将MindSpore训练出的模型或第三方模型TensorFlow Lite、ONNX、Caffe在端侧进行推理的功能。本教程介绍MindSpore Lite的编译方法和对MindSpore训练出的模型进行推理的使用指南。 - -![](./images/on_device_inference_frame.jpg) - -图1:端侧推理架构图 - -MindSpore Lite的框架主要由Frontend、IR、Backend、Lite RT、Micro构成。 - -- Frontend:用于模型的生成,用户可以使用模型构建接口构建模型,或者将第三方模型转化为MindSpore模型。 -- IR:包含MindSpore的Tensor定义、算子原型定义、图定义,后端优化基于IR进行。 -- Backend:包含图优化,量化等功能。图优化分为两部分:high-level优化与硬件无关,如算子融合、常量折叠等,low-level优化与硬件相关;量化,包括权重量化、激活值量化等多种训练后量化手段。 -- Lite RT:推理运行时,由session提供对外接口,kernel registry为算子注册器,scheduler为算子异构调度器,executor为算子执行器。Lite RT与Micro共享底层的算子库、内存分配、运行时线程池、并行原语等基础设施层。 -- Micro:Code-Gen根据模型生成.c文件,底层算子库等基础设施与Lite RT共用。 - - -## 编译方法 - -用户需要自行编译,这里介绍在Ubuntu环境下进行交叉编译的具体步骤。 - -环境要求如下: - -- 硬件要求 - - 内存1GB以上 - - 硬盘空间10GB以上 - -- 系统要求 - - 系统环境支持Linux: Ubuntu = 18.04.02LTS - -- 软件依赖 - - [cmake](https://cmake.org/download/) >= 3.14.1 - - [GCC](https://gcc.gnu.org/releases.html) >= 5.4 - - [Android_NDK 
r20b](https://dl.google.com/android/repository/android-ndk-r20b-linux-x86_64.zip) - - 使用MindSpore Lite转换工具,需要添加更多的依赖项: - - [autoconf](http://ftp.gnu.org/gnu/autoconf/) >= 2.69 - - [libtool](https://www.gnu.org/software/libtool/) >= 2.4.6 - - [libressl](http://www.libressl.org/) >= 3.1.3 - - [automake](https://www.gnu.org/software/automake/) >= 1.11.6 - - [libevent](https://libevent.org) >= 2.0 - - [m4](https://www.gnu.org/software/m4/m4.html) >= 1.4.18 - - [openssl](https://www.openssl.org/) >= 1.1.1 - -编译步骤如下: -1. 从代码仓下载源码。 - - ```bash - git clone https://gitee.com/mindspore/mindspore.git - ``` - -2. 在源码根目录下,执行如下命令编译MindSpore Lite。 - - - 编译转换工具: - - ```bash - bash build.sh -I x86_64 - ``` - - - 编译推理框架: - - 设定ANDROID_NDK路径: - ```bash - export ANDROID_NDK={$NDK_PATH}/android-ndk-r20b - ``` - - 用户需根据设备情况,可选择`arm64`: - ```bash - bash build.sh -I arm64 - ``` - - 或`arm32`: - ```bash - bash build.sh -I arm32 - ``` - -3. 进入源码的`mindspore/output`目录,获取编译结果`mindspore-lite-0.7.0-converter-ubuntu.tar.gz`。执行解压缩命令,获得编译后的工具包`mindspore-lite-0.7.0`: - - ```bash - tar -xvf mindspore-lite-0.7.0-converter-ubuntu.tar.gz - ``` - - -## 端侧推理使用 - -在APP的APK工程中使用MindSpore进行模型推理前,需要对输入进行必要的前处理,比如将图片转换成MindSpore推理要求的`tensor`格式、对图片进行`resize`等处理。在MindSpore完成模型推理后,对模型推理的结果进行后处理,并将处理的输出发送给APP应用。 - -本章主要描述用户如何使用MindSpore进行模型推理,APK工程的搭建和模型推理的前后处理,不在此列举。 - -MindSpore进行端侧模型推理的步骤如下。 - -### 生成端侧模型文件 -1. 加载训练完毕所生成的CheckPoint文件至定义好的网络中。 - ```python - param_dict = load_checkpoint(ckpt_file_name=ckpt_file_path) - load_param_into_net(net, param_dict) - ``` -2. 调用`export`接口,导出模型文件(`.mindir`)。 - ```python - export(net, input_data, file_name="./lenet.mindir", file_format='MINDIR') - ``` - - 以LeNet网络为例,生成的端侧模型文件为`lenet.mindir`,完整示例代码`lenet.py`如下。 - ```python - import os - import numpy as np - import mindspore.nn as nn - import mindspore.ops.operations as P - import mindspore.context as context - from mindspore.common.tensor import Tensor - from mindspore.train.serialization import export, load_checkpoint, load_param_into_net - - class LeNet(nn.Cell): - def __init__(self): - super(LeNet, self).__init__() - self.relu = P.ReLU() - self.batch_size = 32 - self.conv1 = nn.Conv2d(1, 6, kernel_size=5, stride=1, padding=0, has_bias=False, pad_mode='valid') - self.conv2 = nn.Conv2d(6, 16, kernel_size=5, stride=1, padding=0, has_bias=False, pad_mode='valid') - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - self.reshape = P.Reshape() - self.fc1 = nn.Dense(400, 120) - self.fc2 = nn.Dense(120, 84) - self.fc3 = nn.Dense(84, 10) - - def construct(self, input_x): - output = self.conv1(input_x) - output = self.relu(output) - output = self.pool(output) - output = self.conv2(output) - output = self.relu(output) - output = self.pool(output) - output = self.reshape(output, (self.batch_size, -1)) - output = self.fc1(output) - output = self.relu(output) - output = self.fc2(output) - output = self.relu(output) - output = self.fc3(output) - return output - - if __name__ == '__main__': - context.set_context(mode=context.GRAPH_MODE, device_target="Ascend") - seed = 0 - np.random.seed(seed) - origin_data = np.random.uniform(low=0, high=255, size=(32, 1, 32, 32)).astype(np.float32) - origin_data.tofile("lenet.bin") - input_data = Tensor(origin_data) - net = LeNet() - ckpt_file_path = "path_to/lenet.ckpt" - - is_ckpt_exist = os.path.exists(ckpt_file_path) - if is_ckpt_exist: - param_dict = load_checkpoint(ckpt_file_name=ckpt_file_path) - load_param_into_net(net, param_dict) - export(net, input_data, file_name="./lenet.mindir", file_format='MINDIR') - 
print("export model success.") - else: - print("checkpoint file does not exist.") - ``` -3. 在`mindspore/output/mindspore-lite-0.7.0/converter`路径下,调用MindSpore端侧转换工具`converter_lite`,将模型文件(`.mindir`)转换为端侧模型文件(`.ms`)。 - ``` - ./converter_lite --fmk=MS --modelFile=./lenet.mindir --outputFile=lenet - ``` - 结果显示为: - ``` - INFO [converter/converter.cc:146] Runconverter] CONVERTER RESULT: SUCCESS! - ``` - 这表示已经成功将模型转化为MindSpore端侧模型。 - -### 在端侧实现推理 - -将`.ms`模型文件和图片数据作为输入,创建`session`在端侧实现推理。 - -![](./images/side_infer_process.jpg) - -图2:端侧推理时序图 - -对上一步生成的端侧模型文件`lenet.ms`执行推理,步骤如下: -1. 读取MindSpore端侧模型文件信息。 -2. 调用`CreateSession`接口创建`Session`。 -3. 调用`Session`中的`CompileGraph`方法,传入模型。 -4. 调用`Session`中的`GetInputs`方法,获取输入`Tensor`,获取图片信息设置为`data`,`data`即为用于推理的输入数据。 -5. 调用`Session`中的`RunGraph`接口执行推理。 -6. 调用`GetOutputs`接口获取输出。 - -推理环节的完整示例代码如下: - - ```CPP - #include - #include - #include "schema/model_generated.h" - #include "include/model.h" - #include "include/lite_session.h" - #include "include/errorcode.h" - #include "ir/dtype/type_id.h" - - - char *ReadFile(const char *file, size_t *size) { - if (file == nullptr) { - std::cerr << "file is nullptr" << std::endl; - return nullptr; - } - if (size == nullptr) { - std::cerr << "size is nullptr" << std::endl; - return nullptr; - } - std::ifstream ifs(file); - if (!ifs.good()) { - std::cerr << "file: " << file << " is not exist" << std::endl; - return nullptr; - } - - if (!ifs.is_open()) { - std::cerr << "file: " << file << " open failed" << std::endl; - return nullptr; - } - - ifs.seekg(0, std::ios::end); - *size = ifs.tellg(); - std::unique_ptr buf(new char[*size]); - - ifs.seekg(0, std::ios::beg); - ifs.read(buf.get(), *size); - ifs.close(); - - return buf.release(); - } - - int main(int argc, const char **argv) { - size_t model_size; - std::string model_path = "./lenet.ms"; - - // 1. Read File - auto model_buf = ReadFile(model_path.c_str(), &model_size); - if (model_buf == nullptr) { - std::cerr << "ReadFile return nullptr" << std::endl; - return -1; - } - - // 2. Import Model - auto model = mindspore::lite::Model::Import(model_buf, model_size); - if (model == nullptr) { - std::cerr << "Import model failed" << std::endl; - delete[](model_buf); - return -1; - } - delete[](model_buf); - auto context = new mindspore::lite::Context; - context->cpuBindMode = mindspore::lite::NO_BIND; - context->deviceCtx.type = mindspore::lite::DT_CPU; - context->threadNum = 4; - - // 3. Create Session - auto session = mindspore::session::LiteSession::CreateSession(context); - if (session == nullptr) { - std::cerr << "CreateSession failed" << std::endl; - return -1; - } - delete context; - auto ret = session->CompileGraph(model.get()); - if (ret != mindspore::lite::RET_OK) { - std::cerr << "CompileGraph failed" << std::endl; - delete session; - return -1; - } - - // 4. 
-     // 4. Get Inputs
-     auto inputs = session->GetInputs();
-     if (inputs.size() != 1) {
-       std::cerr << "LeNet should have only one input" << std::endl;
-       delete session;
-       return -1;
-     }
-     auto in_tensor = inputs.front();
-     if (in_tensor == nullptr) {
-       std::cerr << "in_tensor is nullptr" << std::endl;
-       delete session;
-       return -1;
-     }
-     size_t data_size;
-     std::string data_path = "./data.bin";
-     auto input_buf = ReadFile(data_path.c_str(), &data_size);
-     if (input_buf == nullptr) {
-       std::cerr << "ReadFile return nullptr" << std::endl;
-       delete session;
-       return -1;
-     }
-     if (in_tensor->Size() != data_size) {
-       std::cerr << "Input data size does not suit the model input" << std::endl;
-       delete[](input_buf);
-       delete session;
-       return -1;
-     }
-     auto *in_data = in_tensor->MutableData();
-     if (in_data == nullptr) {
-       std::cerr << "Data of in_tensor is nullptr" << std::endl;
-       delete[](input_buf);
-       delete session;
-       return -1;
-     }
-     memcpy(in_data, input_buf, data_size);
-     delete[](input_buf);
-
-     // 5. Run Graph
-     ret = session->RunGraph();
-     if (ret != mindspore::lite::RET_OK) {
-       std::cerr << "RunGraph failed" << std::endl;
-       delete session;
-       return -1;
-     }
-
-     // 6. Get Outputs
-     auto outputs = session->GetOutputs();
-     if (outputs.size() != 1) {
-       std::cerr << "LeNet should have only one output" << std::endl;
-       delete session;
-       return -1;
-     }
-     auto out_tensor = outputs.front();
-     if (out_tensor == nullptr) {
-       std::cerr << "out_tensor is nullptr" << std::endl;
-       delete session;
-       return -1;
-     }
-     if (out_tensor->data_type() != mindspore::TypeId::kNumberTypeFloat32) {
-       std::cerr << "Output of LeNet should be float32" << std::endl;
-       delete session;
-       return -1;
-     }
-     auto *out_data = reinterpret_cast<float *>(out_tensor->MutableData());
-     if (out_data == nullptr) {
-       std::cerr << "Data of out_tensor is nullptr" << std::endl;
-       delete session;
-       return -1;
-     }
-     std::cout << "Output data: ";
-     for (size_t i = 0; i < 10 && i < out_tensor->ElementsNum(); i++) {
-       std::cout << " " << out_data[i];
-     }
-     std::cout << std::endl;
-     delete session;
-     return 0;
-   }
-   ```
diff --git a/tutorials/source_zh_cn/advanced_use/performance_profiling_gpu.md b/tutorials/source_zh_cn/advanced_use/performance_profiling_gpu.md
index da2ee6df36fb65f3e275b5235f0ed3c48f21401f..0cd8a7fb09618802acfce8e04304133eca6cae89 100644
--- a/tutorials/source_zh_cn/advanced_use/performance_profiling_gpu.md
+++ b/tutorials/source_zh_cn/advanced_use/performance_profiling_gpu.md
@@ -72,14 +72,14 @@ class StopAtStep(Callback):
 
 ### Performance Analysis
 
-Select a training job from the training list and click Performance Profiling to view its performance data (currently the GPU scenario supports only the operator time-consumption ranking statistics; please stay tuned for the other functions for the time being).
+Select a training job from the training list and click Performance Profiling to view its performance data (currently the GPU scenario supports only the operator time-consumption ranking statistics; please stay tuned for the other functions).
 
 ![performance_overall.png](./images/performance_overall.png)
 
 Figure 1: Performance data overview
 
 Figure 1 shows the performance data overview page, an overall presentation of components such as Step Trace, operator performance, MindData performance, and Timeline. Currently only the operator performance statistics function is supported in the GPU scenario:
-- Operator performance: collects and ranks the execution time of individual operators and of each operator type; the overview page shows a pie chart of each operator type's share of time.
+- Operator performance: collects and ranks the execution time of individual operators and of each operator type; the overview page shows a pie chart of each operator type's share of average execution time.
 
 Users can click the View Details link to enter the component page for detailed analysis.
 
@@ -93,7 +93,7 @@ class StopAtStep(Callback):
 
 Figure 2 shows the results of statistical analysis by operator type, including the following:
 - A pie chart or bar chart can be selected to show each operator type's share of time; for each operator type, the total and the average execution time of the operators of that type are computed.
-- The top 20 operator types with the longest average execution time are listed, together with the share of their total execution time.
+- The top 20 operator types with the longest average execution time are listed.
 
 The lower half of Figure 2 shows the operator performance statistics table, including the following:
 - Select All: sorts and shows the statistics of individual operators, with dimensions including operator position (Device/Host), operator type, operator execution time, and full operator name; sorted by average operator execution time by default.
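The `StopAtStep` callback that appears in the hunk context above is what turns profiling on for a chosen step range. A hedged sketch of the pattern follows (assumptions: the `mindspore.profiler.Profiler` import path of this MindSpore version and the standard `Callback` step hooks; the tutorial's own definition may differ in detail):

```python
from mindspore.profiler import Profiler
from mindspore.train.callback import Callback

class StopAtStep(Callback):
    """Profile only the steps in [start_step, stop_step] (editor's sketch)."""
    def __init__(self, start_step, stop_step):
        super(StopAtStep, self).__init__()
        self.start_step = start_step
        self.stop_step = stop_step
        self.profiler = None

    def step_begin(self, run_context):
        step_num = run_context.original_args().cur_step_num
        if step_num == self.start_step:
            self.profiler = Profiler()  # start collecting performance data

    def step_end(self, run_context):
        step_num = run_context.original_args().cur_step_num
        if step_num == self.stop_step and self.profiler is not None:
            self.profiler.analyse()     # write out data for MindInsight
```

Passed to `model.train(..., callbacks=[StopAtStep(10, 20)])`, this confines the collection overhead to ten steps.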
diff --git a/tutorials/source_zh_cn/advanced_use/quantization_aware.md b/tutorials/source_zh_cn/advanced_use/quantization_aware.md
index d5cf8e129c840710ac43542d5669c13678832fe8..19f4d6a4c762b6decf8ab96d26525e3647680900 100644
--- a/tutorials/source_zh_cn/advanced_use/quantization_aware.md
+++ b/tutorials/source_zh_cn/advanced_use/quantization_aware.md
@@ -82,47 +82,71 @@ MindSpore quantization-aware training builds on ordinary training and replaces high-precision data
 
 Define the fusion network; after defining the network, replace the specified operators.
 
-1. Use the `nn.Conv2dBnAct` operator to replace the three operators `nn.Conv2d`, `nn.batchnorm`, and `nn.relu` in the original network model.
-2. Use the `nn.DenseBnAct` operator to replace the three operators `nn.Dense`, `nn.batchnorm`, and `nn.relu` in the original network model.
+1. Use the `nn.Conv2dBnAct` operator to replace the two operators `nn.Conv2d` and `nn.Relu` in the original network model.
+2. Use the `nn.DenseBnAct` operator to replace the two operators `nn.Dense` and `nn.Relu` in the original network model.
 
-> Even if `nn.Dense` and `nn.Conv2d` are not followed by `nn.batchnorm` and `nn.relu`, the two operators above must be used for the fusion replacement as required.
+> Whether or not `nn.Dense` and `nn.Conv2d` are followed by `nn.BatchNorm` and `nn.Relu`, the two operators above must be used for the fusion replacement as required.
 
-The original network model is defined as follows:
+The original network model LeNet5 is defined as follows:
 
 ```python
+def conv(in_channels, out_channels, kernel_size, stride=1, padding=0):
+    """weight initial for conv layer"""
+    weight = weight_variable()
+    return nn.Conv2d(in_channels, out_channels,
+                     kernel_size=kernel_size, stride=stride, padding=padding,
+                     weight_init=weight, has_bias=False, pad_mode="valid")
+
+
+def fc_with_initialize(input_channels, out_channels):
+    """weight initial for fc layer"""
+    weight = weight_variable()
+    bias = weight_variable()
+    return nn.Dense(input_channels, out_channels, weight, bias)
+
+
+def weight_variable():
+    """weight initial"""
+    return TruncatedNormal(0.02)
+
+
 class LeNet5(nn.Cell):
-    def __init__(self, num_class=10):
+    """
+    Lenet network
+
+    Args:
+        num_class (int): Num classes. Default: 10.
+
+    Returns:
+        Tensor, output tensor
+    Examples:
+        >>> LeNet(num_class=10)
+
+    """
+    def __init__(self, num_class=10, channel=1):
         super(LeNet5, self).__init__()
         self.num_class = num_class
-
-        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
-        self.bn1 = nn.batchnorm(6)
-        self.act1 = nn.relu()
-
-        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
-        self.bn2 = nn.batchnorm(16)
-        self.act2 = nn.relu()
-
-        self.fc1 = nn.Dense(16 * 5 * 5, 120)
-        self.fc2 = nn.Dense(120, 84)
-        self.act3 = nn.relu()
-        self.fc3 = nn.Dense(84, self.num_class)
+        self.conv1 = conv(channel, 6, 5)
+        self.conv2 = conv(6, 16, 5)
+        self.fc1 = fc_with_initialize(16 * 5 * 5, 120)
+        self.fc2 = fc_with_initialize(120, 84)
+        self.fc3 = fc_with_initialize(84, self.num_class)
+        self.relu = nn.ReLU()
         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+        self.flatten = nn.Flatten()
 
     def construct(self, x):
         x = self.conv1(x)
-        x = self.bn1(x)
-        x = self.act1(x)
+        x = self.relu(x)
         x = self.max_pool2d(x)
         x = self.conv2(x)
-        x = self.bn2(x)
-        x = self.act2(x)
+        x = self.relu(x)
         x = self.max_pool2d(x)
-        x = self.flattern(x)
+        x = self.flatten(x)
         x = self.fc1(x)
-        x = self.act3(x)
+        x = self.relu(x)
         x = self.fc2(x)
-        x = self.act3(x)
+        x = self.relu(x)
         x = self.fc3(x)
         return x
 ```
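For orientation, after the replacement that the new text above prescribes, the fused LeNet5 would look roughly like the sketch below (an editor's assumption based on the `nn.Conv2dBnAct`/`nn.DenseBnAct` interfaces; the argument lists in the tutorial's full source may differ slightly):

```python
import mindspore.nn as nn

class LeNet5Fusion(nn.Cell):
    """LeNet5 with Conv2d+ReLU and Dense+ReLU fused (editor's sketch)."""
    def __init__(self, num_class=10, channel=1):
        super(LeNet5Fusion, self).__init__()
        self.conv1 = nn.Conv2dBnAct(channel, 6, 5, pad_mode='valid', activation='relu')
        self.conv2 = nn.Conv2dBnAct(6, 16, 5, pad_mode='valid', activation='relu')
        self.fc1 = nn.DenseBnAct(16 * 5 * 5, 120, activation='relu')
        self.fc2 = nn.DenseBnAct(120, 84, activation='relu')
        self.fc3 = nn.DenseBnAct(84, num_class)
        self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
        self.flatten = nn.Flatten()

    def construct(self, x):
        x = self.max_pool2d(self.conv1(x))
        x = self.max_pool2d(self.conv2(x))
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.fc2(x)
        return self.fc3(x)
```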
diff --git a/tutorials/source_zh_cn/advanced_use/serving.md b/tutorials/source_zh_cn/advanced_use/serving.md
index 1c069b8debaa74205cbf33b143e5fc851b7254f1..37f24306f0976a5447337c2279e7ff456acb819d 100644
--- a/tutorials/source_zh_cn/advanced_use/serving.md
+++ b/tutorials/source_zh_cn/advanced_use/serving.md
@@ -23,13 +23,12 @@
 
 MindSpore Serving is a lightweight, high-performance service module that aims to help MindSpore developers efficiently deploy online inference services in production environments. After a user finishes training a model with MindSpore and exports it, MindSpore Serving can be used to create an inference service for that model. Currently Serving supports Ascend 910 only.
 
-
 ## Starting the Serving Service
-After MindSpore is installed with pip, the Serving executable is located at `/{your python path}/lib/python3.7/site-packages/mindspore/ms_serving` .
+After MindSpore is installed with pip, the Serving executable is located at `/{your python path}/lib/python3.7/site-packages/mindspore/ms_serving`.
 The command for starting the Serving service is as follows
-```bash
-ms_serving [--help] [--model_path <model_path>] [--model_name <model_name>]
-           [--port <port>] [--device_id <device_id>]
+```bash
+ms_serving [--help] [--model_path=<model_path>] [--model_name=<model_name>] [--port=<port>]
+           [--rest_api_port=<rest_api_port>] [--device_id=<device_id>]
 ```
 The parameters are as follows
@@ -38,8 +37,8 @@ ms_serving [--help] [--model_path=<model_path>] [--model_name=<model_name>]
 |`--help`|Optional|Displays help information for the startup command.|-|-|-|
 |`--model_path=<model_path>`|Mandatory|Path where the model to be loaded is stored.|String|Empty|-|
 |`--model_name=<model_name>`|Mandatory|File name of the model to be loaded.|String|Empty|-|
-|`--port=<port>`|Optional|gRPC port that Serving exposes.|Integer|5500|1~65535|
-|`--rest_api_port=<rest_api_port>`|Optional|REST API port that Serving exposes.|Integer|5501|1~65535|
+|`--port=<port>`|Optional|gRPC port that Serving exposes.|Integer|5500|1~65535|
+|`--rest_api_port=<rest_api_port>`|Optional|REST API port that Serving exposes.|Integer|5501|1~65535|
 |`--device_id=<device_id>`|Optional|Device ID to use|Integer|0|0~7|
 
 > Before running the startup command, add the paths `/{your python path}/lib:/{your python path}/lib/python3.7/site-packages/mindspore/lib` to the environment variable LD_LIBRARY_PATH .
 
@@ -49,23 +48,27 @@ ms_serving [--help] [--model_path=<model_path>] [--model_name=<model_name>]
 
 The following uses a simple network as an example to demonstrate how to use MindSpore Serving.
 
 ### Exporting the Model
+
 > Before exporting the model, the MindSpore [base environment](https://www.mindspore.cn/install) needs to be configured.
+
 Use [add_model.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/export_model/add_model.py) to construct a network containing only the Add operator and export the MindSpore inference deployment model.
 
-```python
+```python
 python add_model.py
 ```
+
 Run the script to generate the `tensor_add.mindir` file. The model takes two 2-D tensors of shape [2,2] as input, and outputs the sum of the two input tensors.
 
 ### Starting the Serving Inference Service
 
-```bash
+```bash
 ms_serving --model_path={model directory} --model_name=tensor_add.mindir
 ```
-When the server prints the log `MS Serving grpc Listening on 0.0.0.0:5500`, the Serving gRPC service has finished loading the inference model.
-When the server prints the log `MS Serving restful Listening on 0.0.0.0:5501`, the Serving REST service has finished loading the inference model.
+
+When the server prints the log `MS Serving gRPC start success, listening on 0.0.0.0:5500`, the Serving gRPC service has finished loading the inference model.
+When the server prints the log `MS Serving RESTful start, listening on 0.0.0.0:5501`, the Serving REST service has finished loading the inference model.
 
 ### gRPC Client Example
 #### Python Client Example
-   > Before running the client, the path `/{your python path}/lib/python3.7/site-packages/mindspore` needs to be put into the environment variable PYTHONPATH.
+   > Before running the client, the path `/{your python path}/lib/python3.7/site-packages/mindspore` needs to be added to the environment variable PYTHONPATH.
 
 Obtain [ms_client.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/python_client/ms_client.py) and start the Python client.
 ```bash
@@ -73,7 +76,7 @@ python ms_client.py
 ```
 
 The following return value indicates that the Serving service has correctly run inference on the Add network.
-```
+```bash
 ms client received:
 [[2. 2.]
  [2. 2.]]
@@ -104,8 +107,7 @@ ms client received:
     ./ms_client --target=localhost:5500
     ```
    The following return value indicates that the Serving service has correctly run inference on the Add network.
-    ```
-    Compute [[1, 2], [3, 4]] + [[1, 2], [3, 4]]
+    ```
+    Compute [[1, 2], [3, 4]] + [[1, 2], [3, 4]]
     Add result is 2 4 6 8
     client received: RPC OK
     ```
@@ -119,7 +121,7 @@ ms client received:
     explicit MSClient(std::shared_ptr<Channel> channel) : stub_(MSService::NewStub(channel)) {}
 
     private:
     std::unique_ptr<MSService::Stub> stub_;
-    };MSClient client(grpc::CreateChannel(target_str, grpc::InsecureChannelCredentials()));
+    }; MSClient client(grpc::CreateChannel(target_str, grpc::InsecureChannelCredentials()));
 
@@ -151,36 +153,37 @@ ms client received:
     *request.add_data() = data;
     ```
 3. Call the gRPC API to communicate with the Serving service that has been started, and retrieve the return value.
-    ```
-    Status status = stub_->Predict(&context, request, &reply);
-    ```
+    ```
+    Status status = stub_->Predict(&context, request, &reply);
+    ```
 
 The complete code is available at [ms_client](https://gitee.com/mindspore/mindspore/blob/master/serving/example/cpp_client/ms_client.cc).
 
 ### REST API Client Example
 1. Sending data in `data` form:
 
-    data field: formed by flattening and combining each input/output datum of the network model.
+    data field: flatten each input of the network model into one-dimensional data. Assuming the network model has n inputs, the final data structure is a 1*n two-dimensional list.
+
+    As in this example, the model inputs `[[1.0, 2.0], [3.0, 4.0]]` and `[[1.0, 2.0], [3.0, 4.0]]` are flattened and combined into the data-form payload `[[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]]`
 
-    As in this example, the model inputs `[[1, 2], [3, 4]]` and `[[1, 2], [3, 4]]` are flattened and combined into the data-form payload `[[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]]`
     ```
     curl -X POST -d '{"data": [[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]]}' http://127.0.0.1:5501
     ```
-    The following return value indicates that the Serving service has correctly run inference on the Add network:
+
+    The following return value indicates that the Serving service has correctly run inference on the Add network; the structure of the output is similar to that of the input:
     ```
     {"data":[[2.0,4.0,6.0,8.0]]}
     ```
 2. Sending data in `tensor` form:
 
-    tensor field: formed by combining each input/output of the network model.
+    tensor field: formed by combining each input of the network model, keeping the original shape of the inputs.
 
-    As in this example, the model inputs `[[1, 2], [3, 4]]` and `[[1, 2], [3, 4]]` are combined into the tensor-form data `[[[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]]]`
+    As in this example, the model inputs `[[1.0, 2.0], [3.0, 4.0]]` and `[[1.0, 2.0], [3.0, 4.0]]` are combined into the tensor-form data `[[[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]]]`
     ```
     curl -X POST -d '{"tensor": [[[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 4.0]]]}' http://127.0.0.1:5501
     ```
-    The following return value indicates that the Serving service has correctly run inference on the Add network:
+    The following return value indicates that the Serving service has correctly run inference on the Add network; the structure of the output is similar to that of the input:
    ```
     {"tensor":[[2.0,4.0], [6.0,8.0]]}
     ```
+
+> The REST API currently supports only int32 and fp32 data input.
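The flattening rule for the `data` field can be reproduced in a few lines of plain Python; this editor's sketch uses only numpy and the payload shown above (no Serving-specific API is involved):

```python
import json
import numpy as np

# The two model inputs from the example above, each of shape [2, 2].
inputs = [np.array([[1.0, 2.0], [3.0, 4.0]]),
          np.array([[1.0, 2.0], [3.0, 4.0]])]

# Flatten each of the n inputs into one row: a 1*n two-dimensional list.
payload = {"data": [x.flatten().tolist() for x in inputs]}
print(json.dumps(payload))
# {"data": [[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]]}
```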
diff --git a/tutorials/source_zh_cn/advanced_use/summary_record.md b/tutorials/source_zh_cn/advanced_use/summary_record.md
index a1fc1bbcae338e7353b5ca6b43455992feeeffae..e76ff8e730246d65547f08a1bec986b74c4371fe 100644
--- a/tutorials/source_zh_cn/advanced_use/summary_record.md
+++ b/tutorials/source_zh_cn/advanced_use/summary_record.md
@@ -1,6 +1,6 @@
 # Collecting Summary Data
 
-`Ascend` `GPU` `Model Tuning` `Intermediate` `Advanced`
+`Ascend` `GPU` `CPU` `Model Tuning` `Intermediate` `Advanced`
 
diff --git a/tutorials/source_zh_cn/advanced_use/synchronization_training_and_evaluation.md b/tutorials/source_zh_cn/advanced_use/synchronization_training_and_evaluation.md
index 6e6932f1894f5a6caa018e5a7684e738a323f294..773b4c4535380e34e324b64ef2abf4b429ed0b2d 100644
--- a/tutorials/source_zh_cn/advanced_use/synchronization_training_and_evaluation.md
+++ b/tutorials/source_zh_cn/advanced_use/synchronization_training_and_evaluation.md
@@ -1,4 +1,6 @@
-# Synchronized Training and Validation of Models
+# Synchronized Training and Validation of Models
+
+`Ascend` `GPU` `CPU` `Beginner` `Intermediate` `Advanced` `Model Export` `Model Training`
 
@@ -41,25 +43,25 @@
 
 - `model`: the `Model` function in MindSpore.
 - `eval_dataset`: the validation dataset.
-- `epoch_per_eval`: records the accuracy of the validated model and the corresponding epoch numbers, in the form `{"epoch":[],"acc":[]}`.
+- `epoch_per_eval`: records the accuracy of the validated model and the corresponding epoch numbers, in the form `{"epoch": [], "acc": []}`.
 
 ```python
-import matplotlib.pyplot as plt
 from mindspore.train.callback import Callback
 
 class EvalCallBack(Callback):
-    def __init__(self, model, eval_dataset, eval_per_epoch):
+    def __init__(self, model, eval_dataset, eval_per_epoch, epoch_per_eval):
         self.model = model
         self.eval_dataset = eval_dataset
         self.eval_per_epoch = eval_per_epoch
+        self.epoch_per_eval = epoch_per_eval
 
     def epoch_end(self, run_context):
         cb_param = run_context.original_args()
         cur_epoch = cb_param.cur_epoch_num
         if cur_epoch % self.eval_per_epoch == 0:
-            acc = self.model.eval(self.eval_dataset,dataset_sink_mode = True)
-            epoch_per_eval["epoch"].append(cur_epoch)
-            epoch_per_eval["acc"].append(acc["Accuracy"])
+            acc = self.model.eval(self.eval_dataset, dataset_sink_mode=True)
+            self.epoch_per_eval["epoch"].append(cur_epoch)
+            self.epoch_per_eval["acc"].append(acc["Accuracy"])
             print(acc)
 ```
@@ -79,12 +81,10 @@ class EvalCallBack(Callback):
 
 - `epoch_per_eval`: defines the dictionary that collects the `epoch` numbers and the corresponding model accuracy information.
 
 ```python
-from mindspore.train.serialization import load_checkpoint, load_param_into_net
 from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor
 from mindspore.train import Model
 from mindspore import context
 from mindspore.nn.metrics import Accuracy
-from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits
 
 if __name__ == "__main__":
     context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
@@ -98,10 +98,10 @@ if __name__ == "__main__":
     ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet",directory=ckpt_save_dir, config=config_ck)
     model = Model(network, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
 
-    epoch_per_eval = {"epoch":[],"acc":[]}
-    eval_cb = EvalCallBack(model,eval_data,eval_per_epoch)
+    epoch_per_eval = {"epoch": [], "acc": []}
+    eval_cb = EvalCallBack(model, eval_data, eval_per_epoch, epoch_per_eval)
 
-    model.train(epoch_size, train_data, callbacks=[ckpoint_cb, LossMonitor(375),eval_cb],
+    model.train(epoch_size, train_data, callbacks=[ckpoint_cb, LossMonitor(375), eval_cb],
                 dataset_sink_mode=True)
 ```
 
@@ -152,11 +152,13 @@ lenet_ckpt
 
 ```python
+import matplotlib.pyplot as plt
+
 def eval_show(epoch_per_eval):
     plt.xlabel("epoch number")
     plt.ylabel("Model accuracy")
     plt.title("Model accuracy variation chart")
-    plt.plot(epoch_per_eval["epoch"],epoch_per_eval["acc"],"red")
+    plt.plot(epoch_per_eval["epoch"], epoch_per_eval["acc"], "red")
     plt.show()
 
 eval_show(epoch_per_eval)
diff --git a/tutorials/source_zh_cn/quick_start/linear_regression.md b/tutorials/source_zh_cn/quick_start/linear_regression.md
index 9f7ab617c692f9e04a200e87f2a5203a9d1bbae8..a89ad01f0ce7bc90bf42d591c8576bfa6c95f0c9 100644
--- a/tutorials/source_zh_cn/quick_start/linear_regression.md
+++ b/tutorials/source_zh_cn/quick_start/linear_regression.md
@@ -52,6 +52,7 @@ MindSpore edition: GPU
 
 Configure the MindSpore runtime settings
 
+Third-party support package: `matplotlib`. If it is not installed, it can be installed in advance with the command `pip install matplotlib`.
 
 ```python
 from mindspore import context
@@ -268,7 +269,7 @@ $$w_{ud}=w_{s}-\alpha\frac{\partial{J(w_{s})}}{\partial{w}}\tag{10}$$
 
 $$w_{ud}=w_{s}-\alpha\frac{\partial{J(w_{s})}}{\partial{w}}\tag{11}$$
 
-When the weight $w$, while being updated, comes near $w_{min}$ and then increases or decreases by some $\Delta{w}$, crossing $w_{min}$ from the left or from the right, formula 10 always moves the weight in the opposite direction, so the final value of $w_{s}$ iterates back and forth around $w_{min}$. In actual training we obtain the optimal weight $w$ in just this iterative way, making the loss function approach the local minimum ever more closely.
+When the weight $w$, while being updated, comes near $w_{min}$ and then increases or decreases by some $\Delta{w}$, crossing $w_{min}$ from the left or from the right, formula 11 always moves the weight in the opposite direction, so the final value of $w_{s}$ iterates back and forth around $w_{min}$. In actual training we obtain the optimal weight $w$ in just this iterative way, making the loss function approach the local minimum ever more closely.
 
 Similarly, for the other weight $b$ in formula 5, the update formula is easily derived:
 
@@ -306,7 +307,7 @@ class GradWrap(nn.Cell):
 
 ### Updating Weights by Backpropagation
 
-`nn.RMSProp` is the function that performs the weight update; the update rule is roughly formula 10, but it takes more factors into account. For details, see the [official description](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html?highlight=rmsprop#mindspore.nn.RMSProp).
+`nn.RMSProp` is the function that performs the weight update; the update rule is roughly formula 11, but it takes more factors into account. For details, see the [official description](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html?highlight=rmsprop#mindspore.nn.RMSProp).
 
 ```python
@@ -400,4 +401,4 @@ print("weight:", net.weight.default_input[0][0], "bias:", net.bias.default_input
 
 ## Summary
 
-In this walkthrough we learned the algorithmic principle of linear fitting, implemented the corresponding algorithm definition under the MindSpore framework, learned the training process of linear-regression models of this kind in MindSpore, and finally fitted a model function close to the target function. Readers who are interested can also extend the data-generation interval from (-10,10) to (-100,100) to see whether the weights come closer to the target function, adjust the learning rate to see whether the fitting efficiency changes, and of course explore how to use MindSpore to fit quadratic functions such as $f(x)=ax^2+bx+c$ or functions of higher order.
+In this walkthrough we learned the algorithmic principle of linear fitting, implemented the corresponding algorithm definition under the MindSpore framework, learned the training process of linear-regression models of this kind in MindSpore, and finally fitted a model function close to the target function. Readers who are interested can also extend the data-generation interval from (-10,10) to (-100,100) to see whether the weights come closer to the target function, adjust the learning rate to see whether the fitting efficiency changes, and of course explore how to use MindSpore to fit quadratic functions such as $f(x)=ax^2+bx+c$ or functions of higher degree.
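As a quick numeric illustration of the update rule that the hunks above renumber to formula 11 (editor's sketch; $J(w)=w^2$ is an arbitrary stand-in loss whose minimum $w_{min}$ is 0):

```python
# w_ud = w_s - alpha * dJ/dw, iterated a few times.
# For J(w) = w**2 the gradient is dJ/dw = 2*w and the minimum w_min is 0.
w, alpha = 5.0, 0.1
for step in range(3):
    grad = 2 * w
    w = w - alpha * grad
    print(step, w)  # w moves toward w_min: 4.0, 3.2, 2.56
```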
diff --git a/tutorials/source_zh_cn/quick_start/quick_video.md b/tutorials/source_zh_cn/quick_start/quick_video.md
index d04f894bd84ee0a970f3120a60901a0632c98249..8c80d69900ac81b649b73b561bb415255fccfb3d 100644
--- a/tutorials/source_zh_cn/quick_start/quick_video.md
+++ b/tutorials/source_zh_cn/quick_start/quick_video.md
@@ -289,6 +289,30 @@
 
+
diff --git a/tutorials/source_zh_cn/quick_start/quick_video/cpu_operator_development.md b/tutorials/source_zh_cn/quick_start/quick_video/cpu_operator_development.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b6851e0f6b42ac3d1cc5dcdb73d73a68cb1767c
--- /dev/null
+++ b/tutorials/source_zh_cn/quick_start/quick_video/cpu_operator_development.md
@@ -0,0 +1,7 @@
+# CPU Operator Development
+
+[comment]: <> (This document contains the hands-on video series; Gitee cannot display them, so please watch them in the corresponding tutorial on the official website)
+
+
diff --git a/tutorials/source_zh_cn/use/custom_operator.md b/tutorials/source_zh_cn/use/custom_operator.md
index 5ca1b6d2103d261a5c0c5fd643583d12c13925ce..febbca921a2c20e784b0f3481933ddf4c7b62b06 100644
--- a/tutorials/source_zh_cn/use/custom_operator.md
+++ b/tutorials/source_zh_cn/use/custom_operator.md
@@ -29,7 +29,9 @@
 
 - Operator implementation: describes, through the feature-language interface provided by TBE (Tensor Boost Engine), the implementation of the operator's internal computation logic. TBE provides the capability of developing custom operators for Ascend AI chips, and you can apply for the public beta on its page.
 - Operator information: describes basic information about the TBE operator, such as the operator name and the supported input and output types. It is the basis on which the backend selects and maps operators.
 
-This document uses a custom Square operator as an example to describe the steps for customizing an operator. For more details, see the use cases under [tests/st/ops/custom_ops_tbe](https://gitee.com/mindspore/mindspore/tree/master/tests/st/ops/custom_ops_tbe) in the MindSpore source code.
+This document uses a custom Square operator as an example to describe the steps for customizing an operator.
+
+> For more details, see the use cases under [tests/st/ops/custom_ops_tbe](https://gitee.com/mindspore/mindspore/tree/master/tests/st/ops/custom_ops_tbe) in the MindSpore source code.
 
 ## Registering the Operator Primitive
 
@@ -79,11 +81,11 @@ class CusSquare(PrimitiveWithInfer):
 4. Call `cce_build_code` to compile and generate the operator binary.
 
 > The input parameters of the entry function have special requirements. In order, they must be: the information of each input of the operator, the information of each output, the operator attributes (optional), and `kernel_name` (the name of the generated operator binary). The input and output information is passed in wrapped in dictionaries, including the actual input and output shape and dtype passed when the operator is called in the network.
 
-For more about developing operators with TBE, see the [TBE documentation](https://www.huaweicloud.com/ascend/dev/operator); for debugging and performance optimization of TBE operators, see the [MindStudio documentation](https://www.huaweicloud.com/ascend/mindstudio).
+For more about developing operators with TBE, see the [TBE documentation](https://support.huaweicloud.com/odevg-A800_3000_3010/atlaste_10_0063.html); for debugging and performance optimization of TBE operators, see the [MindStudio documentation](https://support.huaweicloud.com/usermanual-mindstudioc73/atlasmindstudio_02_0043.html).
 
 ### Registering the Operator Information
 
-The operator information is the key information guiding the backend to select the operator implementation, and it also guides the backend to insert appropriate type and format conversions for the operator. It is defined through the `TBERegOp` interface and bound to the operator-implementation entry function by the `op_info_register` decorator. When the py file of the operator implementation is imported, the `op_info_register` decorator registers the operator information into the backend's operator information repository. For more on how to use the operator information, see the comments on the member methods of `TBERegOp`.
+The operator information is the key information guiding the backend to select the operator implementation, and it also guides the backend to insert appropriate type and format conversions for the operator. It is defined through the `TBERegOp` interface and bound to the operator-implementation entry function by the `op_info_register` decorator. When the py file of the operator implementation is imported, the `op_info_register` decorator registers the operator information into the backend's operator information repository. For more on how to use the operator information, see the comments on the member methods of `TBERegOp`; the meaning of the operator-information fields is described in the [TBE documentation](https://support.huaweicloud.com/odevg-A800_3000_3010/atlaste_10_0096.html).
 
 > - The number and order of the input and output information defined in the operator information, of the input and output information in the parameters of the operator-implementation entry function, and of the input and output name lists in the operator primitive must be completely consistent.
 > - If the operator has attributes, they must be described with `attr` in the operator information, and the attribute names must match those in the operator primitive definition.
 
@@ -247,4 +249,4 @@ pytest -s tests/st/ops/custom_ops_tbe/test_square.py::test_grad_net
 ```
 x: [1. 4. 9.]
 dx: [2. 8. 18.]
-```
\ No newline at end of file
+```
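To see the custom primitive in context, calling it follows the usual cell pattern; in this editor's sketch, `CusSquare` is the primitive registered in the tutorial above, assumed importable from the module where it was defined (the module name is hypothetical):

```python
import numpy as np
import mindspore.nn as nn
import mindspore.context as context
from mindspore import Tensor
from cus_square import CusSquare  # hypothetical module name; CusSquare is defined in the tutorial

class Net(nn.Cell):
    """Wrap the custom operator so it can be called like any built-in op."""
    def __init__(self):
        super(Net, self).__init__()
        self.square = CusSquare()

    def construct(self, data):
        return self.square(data)

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
x = np.array([1.0, 4.0, 9.0]).astype(np.float32)
output = Net()(Tensor(x))
print(output)  # expected: [ 1. 16. 81.]
```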
diff --git a/tutorials/source_zh_cn/use/data_preparation/data_processing_and_augmentation.md b/tutorials/source_zh_cn/use/data_preparation/data_processing_and_augmentation.md
index 60ef10d81f9217de46544f22bca14f170a7a8e88..88cf85f0a885a3a3444e6c5ad34c1271ea91b61e 100644
--- a/tutorials/source_zh_cn/use/data_preparation/data_processing_and_augmentation.md
+++ b/tutorials/source_zh_cn/use/data_preparation/data_processing_and_augmentation.md
@@ -54,7 +54,7 @@ ds1 = ds.MnistDataset(MNIST_DATASET_PATH, MNIST_SCHEMA) # Create MNIST dataset.
 ds1 = ds1.shuffle(buffer_size=10000)
 ds1 = ds1.batch(32, drop_remainder=True)
 ds1 = ds1.repeat(10)
-```
+```
 
 The operations above first shuffle the data, then group every 32 samples into a batch, and finally repeat the dataset 10 times.
 
 The following constructs a simple dataset `ds1` and applies data-processing operations to it, to introduce the detailed usage of each category of data-processing operation.
 
@@ -239,7 +239,7 @@ MindSpore provides the `zip` function to merge multiple datasets into one.
 ```python
 def zip(self, datasets):
 ```
-1. Construct a dataset `ds2` using the method used earlier to construct the dataset `ds1`.
+1. Referring to the way the dataset `ds1` was constructed earlier with the `generator_func` function, define a function `generator_func2` to construct a dataset `ds2`.
     ```python
     def generator_func2():
         for i in range(5):
             yield (np.array([i - 3, i - 2, i - 1]),)
 
     ds2 = ds.GeneratorDataset(generator_func2, ["data2"])
     ```
 
-2. Merge the `data1` column of dataset `ds1` and the `data2` column of dataset `ds2` into dataset `ds3` with `zip`.
+2. Merge the `data` column of dataset `ds1` and the `data2` column of dataset `ds2` into dataset `ds3` with `zip`.
     ```python
     ds3 = ds.zip((ds1, ds2))
     for data in ds3.create_dict_iterator():
         print(data)
     ```
     The output is as follows:
     ```
-    {'data1': array([0, 1, 2], dtype=int64), 'data2': array([-3, -2, -1], dtype=int64)}
-    {'data1': array([1, 2, 3], dtype=int64), 'data2': array([-2, -1, 0], dtype=int64)}
-    {'data1': array([2, 3, 4], dtype=int64), 'data2': array([-1, 0, 1], dtype=int64)}
-    {'data1': array([3, 4, 5], dtype=int64), 'data2': array([0, 1, 2], dtype=int64)}
-    {'data1': array([4, 5, 6], dtype=int64), 'data2': array([1, 2, 3], dtype=int64)}
+    {'data': array([0, 1, 2], dtype=int64), 'data2': array([-3, -2, -1], dtype=int64)}
+    {'data': array([1, 2, 3], dtype=int64), 'data2': array([-2, -1, 0], dtype=int64)}
+    {'data': array([2, 3, 4], dtype=int64), 'data2': array([-1, 0, 1], dtype=int64)}
+    {'data': array([3, 4, 5], dtype=int64), 'data2': array([0, 1, 2], dtype=int64)}
+    {'data': array([4, 5, 6], dtype=int64), 'data2': array([1, 2, 3], dtype=int64)}
     ```
 ## Data Augmentation
 In image training, especially when the dataset is small, users can preprocess images with a series of data-augmentation operations and thereby enrich the dataset.
 
@@ -336,4 +336,4 @@ MindSpore provides the `c_transforms` module and the `py_transforms` module for users to perform
 
 ![avatar](../images/image_random_crop.png)
 
-Figure 2: Image after random cropping to 500*500
+Figure 2: Image after random cropping to 500*500
\ No newline at end of file
diff --git a/tutorials/source_zh_cn/use/multi_platform_inference.md b/tutorials/source_zh_cn/use/multi_platform_inference.md
index 44444b2cda77e5263dc9d4c202d242a5897f0e6e..77698182e542f89bc8270c7b3d5ba298c31453a1 100644
--- a/tutorials/source_zh_cn/use/multi_platform_inference.md
+++ b/tutorials/source_zh_cn/use/multi_platform_inference.md
@@ -145,4 +145,4 @@ The Ascend 310 AI processor ships with the ACL framework, which supports the OM format, and the OM format requires
 
 ## On-Device Inference
 
-On-device inference requires the MindSpore Lite inference engine. For details, see [Exporting a MINDIR file](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#mindir) and the [on-device inference tutorial](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/on_device_inference.html).
+On-device inference requires the MindSpore Lite inference engine. For details, see [Exporting a MINDIR file](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#mindir) and the [on-device inference tutorial](https://www.mindspore.cn/lite).
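Closing the loop with the on-device inference walkthrough earlier in this diff: the MINDIR export that feeds the MindSpore Lite converter is short. A self-contained editor's sketch (`TinyNet` is an illustrative stand-in for a trained network):

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.train.serialization import export

class TinyNet(nn.Cell):
    """Illustrative stand-in for a trained network."""
    def __init__(self):
        super(TinyNet, self).__init__()
        self.dense = nn.Dense(4, 2)

    def construct(self, x):
        return self.dense(x)

net = TinyNet()
input_data = Tensor(np.ones((1, 4)).astype(np.float32))
# Export to MINDIR, the format consumed by the MindSpore Lite converter.
export(net, input_data, file_name="./tiny_net.mindir", file_format='MINDIR')
```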