diff --git a/docs/lite/docs/source_zh_cn/operator_list_lite.md b/docs/lite/docs/source_zh_cn/operator_list_lite.md
index 00143e871b5dfad1f6f30a37cedf4e38cdd13745..35879b6f4a1206c1e75e8e9225922cb9202961ff 100644
--- a/docs/lite/docs/source_zh_cn/operator_list_lite.md
+++ b/docs/lite/docs/source_zh_cn/operator_list_lite.md
@@ -6,187 +6,190 @@
This document lists the operators supported by MindSpore Lite.
-| Operation<br> | CPU<br>FP16 | CPU<br>FP32 | CPU<br>Int8 | CPU<br>UInt8 | GPU<br>FP16 | GPU<br>FP32 | NPU<br> | TensorRT<br> | Supported TensorFlow Lite operators | Supported Caffe operators | Supported Onnx operators | Supported TensorFlow operators | Supported Ascend310 operators |
-| --------------------- | :------------: | :------------: | :------------: | :-------------: | :------------: | :------------: | :---------: | :---------: | ------------------------------- | ------------------------ | ----------------------------------------------- | ----------------------------------------------- | ----------------------------------------------- |
-| Abs | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Abs | | Abs | Abs | Abs |
-| Add | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Add | | Add, Int8Add | Add, AddV2 | Add |
-| Adder | | ✅ | | | | | | | | | adder_f | | |
-| AddGrad | | ✅ | | | | | | | | | | |
-| AddN | ✅ | ✅ | | | | | | | AddN | | | | |
-| Assert | ✅ | ✅ | | | | | | | | | | Assert | |
-| Argmax | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Argmax | ArgMax | ArgMax | ArgMax | ArgMax |
-| Argmin | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | Argmin | | | ArgMin | ArgMin |
-| AvgPool | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | MeanPooling | Pooling | AveragePool,<br>GlobalAveragePool,<br>Int8AveragePool | AvgPool | AvgPool |
-| AvgPoolGrad | ✅ | ✅ | | | | | | | | | | |
-| BatchNorm | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | | BatchNorm | BatchNormalization | | BatchNorm |
-| BatchNormGrad | ✅ | ✅ | | | | | | | | | | | |
-| BatchToSpace | | ✅ | ✅ | ✅ | ✅ | ✅ | | | BatchToSpace,<br>BatchToSpaceND | | | BatchToSpace,<br>BatchToSpaceND | |
-| BiasAdd | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | | | BiasAdd | BiasAdd | BiasAdd |
-| BiasAddGrad | ✅ | ✅ | | | | | | | | | | | |
-| BroadcastTo | ✅ | ✅ | | | | | | | BroadcastTo | | Expand | BroadcastTo | |
-| Cast | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Cast,<br>QUANTIZE,<br>DEQUANTIZE | | Cast | Cast | Cast |
-| Ceil | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Ceil | | Ceil | Ceil | Ceil |
-| Concat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Concat | Concat | Concat | ConcatV2 | Concat |
-| ConstantOfShape | ✅ | ✅ | | | | | | | | | ConstantOfShape | | |
-| Conv2d | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Conv2D | Convolution | Conv, Int8Conv,<br>ConvRelu,<br>Int8ConvRelu | Conv2D | Conv2D |
+| Operation<br> | CPU<br>FP16 | CPU<br>FP32 | CPU<br>Int8 | CPU<br>UInt8 | GPU<br>FP16 | GPU<br>FP32 | NPU<br> | TensorRT<br> | Ascend<br>(Ascend310) | Supported TensorFlow Lite operators | Supported Caffe operators | Supported Onnx operators | Supported TensorFlow operators |
+| --------------------- | :------------: | :------------: | :------------: | :-------------: | :------------: | :------------: | :---------: | :---------: | :-----------------------------: | ------------------------ | ----------------------------------------------- | ----------------------------------------------- | --------------------- |
+| Abs | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Abs | | Abs | Abs |
+| Add | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Add | | Add, Int8Add | Add, AddV2 |
+| Adder | | ✅ | | | | | | | | | | adder_f | |
+| AddGrad | | ✅ | | | | | | | | | | | |
+| AddN | ✅ | ✅ | | | | | | | | AddN | | | |
+| Assert | ✅ | ✅ | | | | | | | | | | | Assert |
+| Argmax | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Argmax | ArgMax | ArgMax | ArgMax |
+| Argmin | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | ✅ | Argmin | | | ArgMin |
+| AvgPool | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | MeanPooling | Pooling | AveragePool,<br>GlobalAveragePool,<br>Int8AveragePool | AvgPool |
+| AvgPoolGrad | ✅ | ✅ | | | | | | | | | | | |
+| BatchNorm | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | ✅ | | BatchNorm | BatchNormalization | |
+| BatchNormGrad | ✅ | ✅ | | | | | | | | | | | |
+| BatchToSpace | | ✅ | ✅ | ✅ | ✅ | ✅ | | | | BatchToSpace,<br>BatchToSpaceND | | | BatchToSpace,<br>BatchToSpaceND |
+| BiasAdd | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | ✅ | | | BiasAdd | BiasAdd |
+| BiasAddGrad | ✅ | ✅ | | | | | | | | | | | |
+| BroadcastTo | ✅ | ✅ | | | | | | | | BroadcastTo | | Expand | BroadcastTo |
+| Cast | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Cast,<br>QUANTIZE,<br>DEQUANTIZE | | Cast | Cast |
+| Ceil | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Ceil | | Ceil | Ceil |
+| Concat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Concat | Concat | Concat | ConcatV2 |
+| ConstantOfShape | ✅ | ✅ | | | | | | | | | | ConstantOfShape | |
+| Conv2d | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Conv2D | Convolution | Conv, Int8Conv,<br>ConvRelu,<br>Int8ConvRelu | Conv2D |
| Conv2DBackpropFilterFusion | ✅ | ✅ | | | | | | | | | | | |
| Conv2DBackpropInputFusion | ✅ | ✅ | | | | | | | | | | | |
-| Conv2dGrad | | ✅ | | | | | | | | | | | |
-| Conv2dTranspose | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | DeConv2D | Deconvolution | ConvTranspose | Conv2DBackpropInput | Conv2dTranspose |
-| Conv2dTransposeGrad | | ✅ | | | | | | | | | | | | |
-| Cos | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | |Cos | | Cos | Cos | Cos |
-| Crop | ✅ | ✅ | ✅ | ✅ | | | | | | Crop | | | |
-| CropAndResize | | ✅ | | | | | ✅ | | | | | CropAndResize | |
-| CumSum | | ✅ | | | | | | | | | | Cumsum | |
-| CustomExtractFeatures | | ✅ | | | | | | | ExtractFeatures | | | | |
-| CustomNormalize | | ✅ | | | | | | | Normalize | | | | |
-| CustomPredict | | ✅ | | | | | | | Predict | | | | |
-| DeDepthwiseConv2D | | ✅ | ✅ | ✅ | ✅ | ✅ | | | | Deconvolution | | | Deconvolution |
-| DepthToSpace | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | DepthToSpace | | DepthToSpace | DepthToSpace | |
-| DepthwiseConv2dNative | ✅ | ✅ | ✅ | ✅ | | | ✅ | | DepthwiseConv2D | Convolution | | DepthwiseConv2dNative | DepthwiseConv2dNative |
-| DetectionPostProcess | | ✅ | ✅ | ✅ | | | | | Custom | | | | |
-| Div | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Div, RealDiv | | Div | Div, RealDiv | Div |
-| DivGrad | | ✅ | | | | | | | | | | | |
-| DropoutGrad | ✅ | ✅ | | | | | | | | | | | |
-| Eltwise | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Eltwise | Sum, Max<br>[3] | | Eltwise |
-| Elu | ✅ | ✅ | | | | | | | | ELU | Elu,<br>NonMaxSuppression | NonMaxSuppressionV3 | Elu |
-| EluGrad | | ✅ | | | | | | | | | | | |
-| Equal | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Equal | | Equal | Equal | Equal |
-| ExpFusion | ✅ | ✅ | | | ✅ | ✅ | | | Exp | Exp | Exp | Exp | |
-| ExpandDims | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ExpandDims | | | ExpandDims | ExpandDims |
-| Fill | ✅ | ✅ | | | ✅ | ✅ | | | Fill | | | Fill | Fill |
-| Flatten | ✅ | ✅ | | | | | | ✅ | | Flatten | | | Flatten |
-| FlattenGrad | ✅ | ✅ | | | | | | | | | | | |
-| Floor | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | flOOR | | Floor | Floor | Floor |
-| FloorDiv | ✅ | ✅ | | | ✅ | ✅ | ✅ | | FloorDiv | | | FloorDiv | |
-| FloorMod | ✅ | ✅ | | | ✅ | ✅ | ✅ | | FloorMod | | | FloorMod | |
-| FullConnection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | FullyConnected | InnerProduct | | | FullConnection |
-| FusedBatchNorm | ✅ | ✅ | ✅ | ✅ | | | ✅ | | FusedBatchNorm | | | FusedBatchNorm,<br>FusedBatchNormV3 | FusedBatchNorm |
-| GatherNd | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | GatherND | | | GatherNd | GatherNd |
-| Gather | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Gather | | Gather | GatherV2 | Gather |
-| Greater | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Greater | | Greater | Greater | Greater |
-| GreaterEqual | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | GreaterEqual | | | GreaterEqual | GreaterEqual |
-| GRU | ✅ | ✅ | | | | | | | | | | | |
-| HardTanh | ✅ | ✅ | | | | | | | | | | | |
-| HashtableLookup | | ✅ | | | | | | | HashtableLookup | | | | |
-| HSigmoid | ✅ | ✅ | | ✅ | | | | | | | | | |
-| Hswish | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | HardSwish | | | | |
-| HswishGrad | | ✅ | | | | | | | | | | | |
-| InstanceNorm | ✅ | ✅ | | | | | ✅ | | InstanceNorm | | InstanceNormalization | | |
-| InvertPermutation | ✅ | ✅ | | | | | | | | | | InvertPermutation | |
-| L2Norm | | ✅ | ✅ | | | | | | L2_NORMALIZATION | | | | |
-| LayerNorm | ✅ | ✅ | ✅ | | ✅ | ✅ | | | | | | | |
-| LayerNormGrad | ✅ | ✅ | | | | | | | | | | | |
-| LeakyReLU | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | LeakyRelu | | LeakyRelu | LeakyRelu | LeakyRelu |
-| LeakyReLUGrad | | ✅ | | | | | | | | | | | |
-| Less | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Less | | Less | Less | |
-| LessEqual | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | LessEqual | | | LessEqual | |
-| LRN | | ✅ | | | | | | | LocalResponseNorm | | Lrn, LRN | | |
-| Log | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Log | | Log | Log | |
-| LogGrad | ✅ | ✅ | | | | | | | | | | | |
-| LogicalAnd | ✅ | ✅ | | | ✅ | ✅ | ✅ | | LogicalAnd | | And | LogicalAnd | |
-| LogicalNot | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | LogicalNot | | Not | LogicalNot | |
-| LogicalOr | ✅ | ✅ | | | ✅ | ✅ | ✅ | | LogicalOr | | Or | LogicalOr | |
-| LogSoftmax | ✅ | ✅ | | | | | | | LogSoftmax | | LogSoftmax | | |
-| LshProjection | | ✅ | | | | | | | LshProjection | | | | |
-| LSTM | ✅ | ✅ | | | | | | | | | LSTM | | |
-| MatMul | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | BatchMatMul | | MatMul,<br>Gemm | MatMul,<br>BatchMatMul,<br>BatchMatMulV2 | MatMul |
-| MatMulGrad | | ✅ | | | | | | | | | | | |
-| Maximum | ✅ | ✅ | | | ✅ | ✅ | ✅ | | Maximum | | Max | Maximum | Maximum |
-| MaximumGrad | ✅ | ✅ | | | | | | | | | | | |
-| MaxPool | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | MaxPooling | Pooling | MaxPool,<br>GlobalMaxPool | MaxPool | MaxPool |
-| MaxPoolGrad | ✅ | ✅ | | | | | | | | | | | |
-| Merge | ✅ | ✅ | | | | | | | | | | Merge | |
-| Minimum | ✅ | ✅ | | | ✅ | ✅ | ✅ | | Minimum | | Min | Minimum | Minimum |
-| MinimumGrad | ✅ | ✅ | | | | | | | | | | | |
-| Mul | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Mul | | Mul | Mul | Mul |
-| MulGrad | | ✅ | | | | | | | | | | | |
-| Neg | ✅ | ✅ | | | ✅ | ✅ | ✅ | | Neg | | Neg |Neg | |
-| NegGrad | ✅ | ✅ | | | | | | | | | | | |
-| NotEqual | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | NotEqual | | | NotEqual | |
-| OneHot | ✅ | ✅ | | | ✅ | ✅ | | | OneHot | | OneHot | OneHot | |
-| Pad | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Pad, MirrorPad, PadV2 | | Pad | MirrorPad, Pad, PadV2 | Pad |
-| Pow | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Pow | Power | Pow<br>[2] | Pow | Pow |
-| PowGrad | | ✅ | | | | | | | | | | | |
-| PReLU | ✅ | ✅ | | | ✅ | ✅ | | | PRELU | PReLU | PRelu | | PReLU |
-| QuantDTypeCast | ✅ | ✅ | ✅ | ✅ | | | | | | | | | |
-| RaggedRange | ✅ | ✅ | | | | | | | | | | RaggedRange | |
-| RandomStandardNormal | ✅ | ✅ | | | | | | | | | | RandomStandardNormal | |
-| RandomUniform | | ✅ | | | | | | | | | | RandomUniform | |
-| Range | ✅ | ✅ | | | | | | | Range | | Range | Range | |
-| Rank | ✅ | ✅ | | | | | | | Rank | | | Rank | |
-| RealDiv | ✅ | ✅ | | | | | | | | | | | RealDiv |
-| Reciprocal | ✅ | ✅ | ✅ | | | | ✅ | | | | Reciprocal | | |
-| ReduceAll | | ✅ | | | | | | | | | | All | |
-| ReduceASum | ✅ | ✅ | | | ✅ | ✅ | | | | Reduction | | | |
-| ReduceMax | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | ReduceMax | | ReduceMax | Max | |
-| ReduceMean | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Mean | Reduction | ReduceMean | Mean | ReduceMean |
-| ReduceMin | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | ReduceMin | | ReduceMin | Min | |
-| ReduceProd | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | ReduceProd | | ReduceProd | Prod | |
-| ReduceSum | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Sum | Reduction | ReduceSum | Sum | ReduceSum |
-| ReduceSumSquare | ✅ | ✅ | ✅ | ✅ | | | | | | Reduction | ReduceSumSquare | | |
-| ReLU | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Relu | ReLU | Relu | Relu | ReLU |
-| ReLUGrad | ✅ | ✅ | | | | | | | | | | | |
-| ReLU6 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Relu6 | ReLU6 | Clip<br>[1] | Relu6 | ReLU6 |
-| ReLU6Grad | ✅ | ✅ | | | | | | | | | | | |
-| Reshape | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Reshape | Reshape | Reshape,<br>Flatten | Reshape | Reshape |
-| Resize | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ResizeBilinear,<br>NearestNeighbor | Interp | Resize, Upsample | ResizeBilinear,<br>ResizeBicubic,<br>ResizeNearestNeighbor | Upsample,<br>ResizeNearestNeighbor |
-| ResizeGrad | ✅ | ✅ | | | | | | | | | | | |
-| Reverse | | ✅ | | | | | | | reverse | | | ReverseV2 | |
-| ReverseSequence | | ✅ | | | | | | | ReverseSequence | | | ReverseSequence | |
-| Round | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Round | | Round | Round | Round |
-| Rsqrt | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Rsqrt | | | Rsqrt | |
-| Select | | ✅ | | | | | | | | | | Select | |
-| Selu | | | | | | | | | | | | Selu | |
-| Scale | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Scale | | | Scale |
-| ScatterNd | ✅ | ✅ | | | | | | | ScatterNd | | ScatterND | | |
-| ScatterNdUpdate | ✅ | ✅ | | | | | | | ScatterNdUpdate | | ScatterNdUpdate | | |
-| Shape | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | Shape | | Shape | Shape | Shape |
-| Sigmoid | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Logistic | Sigmoid | Sigmoid | Sigmoid | Sigmoid |
+| Conv2dGrad | | ✅ | | | | | | | | | | | |
+| Conv2dTranspose | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | DeConv2D | Deconvolution | ConvTranspose | Conv2DBackpropInput |
+| Conv2dTransposeGrad | | ✅ | | | | | | | | | | | |
+| Cos | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Cos | | Cos | Cos |
+| Crop | ✅ | ✅ | ✅ | ✅ | | | | | | | Crop | | |
+| CropAndResize | | ✅ | | | | | ✅ | | | | | | CropAndResize |
+| CumSum | | ✅ | | | | | | | | | | | Cumsum |
+| CustomExtractFeatures | | ✅ | | | | | | | | ExtractFeatures | | | |
+| CustomNormalize | | ✅ | | | | | | | | Normalize | | | |
+| CustomPredict | | ✅ | | | | | | | | Predict | | | |
+| Deconvolution | | | | | | | | | ✅ | | | | |
+| DeDepthwiseConv2D | | ✅ | ✅ | ✅ | ✅ | ✅ | | | | | Deconvolution | | |
+| DepthToSpace | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | | DepthToSpace | | DepthToSpace | DepthToSpace |
+| DepthwiseConv2dNative | ✅ | ✅ | ✅ | ✅ | | | ✅ | | ✅ | DepthwiseConv2D | Convolution | | DepthwiseConv2dNative |
+| DetectionPostProcess | | ✅ | ✅ | ✅ | | | | | | Custom | | | |
+| Div | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Div, RealDiv | | Div | Div, RealDiv |
+| DivGrad | | ✅ | | | | | | | | | | | |
+| DropoutGrad | ✅ | ✅ | | | | | | | | | | | |
+| Eltwise | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Eltwise | Sum, Max<br>[3] | |
+| Elu | ✅ | ✅ | | | | | | | ✅ | | ELU | Elu,<br>NonMaxSuppression | NonMaxSuppressionV3 |
+| EluGrad | | ✅ | | | | | | | | | | | |
+| Equal | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Equal | | Equal | Equal |
+| ExpFusion | ✅ | ✅ | | | ✅ | ✅ | | | | Exp | Exp | Exp | Exp |
+| ExpandDims | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ExpandDims | | | ExpandDims |
+| Fill | ✅ | ✅ | | | ✅ | ✅ | | | ✅ | Fill | | | Fill |
+| Flatten | ✅ | ✅ | | | | | | ✅ | ✅ | | Flatten | | |
+| FlattenGrad | ✅ | ✅ | | | | | | | | | | | |
+| Floor | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Floor | | Floor | Floor |
+| FloorDiv | ✅ | ✅ | | | ✅ | ✅ | ✅ | | | FloorDiv | | | FloorDiv |
+| FloorMod | ✅ | ✅ | | | ✅ | ✅ | ✅ | | | FloorMod | | | FloorMod |
+| FullConnection | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | FullyConnected | InnerProduct | | |
+| FusedBatchNorm | ✅ | ✅ | ✅ | ✅ | | | ✅ | | ✅ | FusedBatchNorm | | | FusedBatchNorm,<br>FusedBatchNormV3 |
+| GatherNd | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | ✅ | GatherND | | | GatherNd |
+| Gather | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Gather | | Gather | GatherV2 |
+| Greater | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Greater | | Greater | Greater |
+| GreaterEqual | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | GreaterEqual | | | GreaterEqual |
+| GRU | ✅ | ✅ | | | | | | | | | | | |
+| HardTanh | ✅ | ✅ | | | | | | | | | | | |
+| HashtableLookup | | ✅ | | | | | | | | HashtableLookup | | | |
+| HSigmoid | ✅ | ✅ | | ✅ | | | | | | | | | |
+| Hswish | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | HardSwish | | | |
+| HswishGrad | | ✅ | | | | | | | | | | | |
+| InstanceNorm | ✅ | ✅ | | | | | ✅ | | | InstanceNorm | | InstanceNormalization | |
+| InvertPermutation | ✅ | ✅ | | | | | | | | | | | InvertPermutation |
+| L2Norm | | ✅ | ✅ | | | | | | | L2_NORMALIZATION | | | |
+| LayerNorm | ✅ | ✅ | ✅ | | ✅ | ✅ | | | | | | | |
+| LayerNormGrad | ✅ | ✅ | | | | | | | | | | | |
+| LeakyReLU | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | LeakyRelu | | LeakyRelu | LeakyRelu |
+| LeakyReLUGrad | | ✅ | | | | | | | | | | | |
+| Less | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | Less | | Less | Less |
+| LessEqual | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | LessEqual | | | LessEqual |
+| LRN | | ✅ | | | | | | | | LocalResponseNorm | | Lrn, LRN | |
+| Log | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | Log | | Log | Log |
+| LogGrad | ✅ | ✅ | | | | | | | | | | | |
+| LogicalAnd | ✅ | ✅ | | | ✅ | ✅ | ✅ | | | LogicalAnd | | And | LogicalAnd |
+| LogicalNot | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | LogicalNot | | Not | LogicalNot |
+| LogicalOr | ✅ | ✅ | | | ✅ | ✅ | ✅ | | | LogicalOr | | Or | LogicalOr |
+| LogSoftmax | ✅ | ✅ | | | | | | | | LogSoftmax | | LogSoftmax | |
+| LshProjection | | ✅ | | | | | | | | LshProjection | | | |
+| LSTM | ✅ | ✅ | | | | | | | | | | LSTM | |
+| MatMul | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | BatchMatMul | | MatMul,<br>Gemm | MatMul,<br>BatchMatMul,<br>BatchMatMulV2 |
+| MatMulGrad | | ✅ | | | | | | | | | | | |
+| Maximum | ✅ | ✅ | | | ✅ | ✅ | ✅ | | ✅ | Maximum | | Max | Maximum |
+| MaximumGrad | ✅ | ✅ | | | | | | | | | | | |
+| MaxPool | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | MaxPooling | Pooling | MaxPool,<br>GlobalMaxPool | MaxPool |
+| MaxPoolGrad | ✅ | ✅ | | | | | | | | | | | |
+| Merge | ✅ | ✅ | | | | | | | | | | | Merge |
+| Minimum | ✅ | ✅ | | | ✅ | ✅ | ✅ | | ✅ | Minimum | | Min | Minimum |
+| MinimumGrad | ✅ | ✅ | | | | | | | | | | | |
+| Mul | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Mul | | Mul | Mul |
+| MulGrad | | ✅ | | | | | | | | | | | |
+| Neg | ✅ | ✅ | | | ✅ | ✅ | ✅ | | | Neg | | Neg | Neg |
+| NegGrad | ✅ | ✅ | | | | | | | | | | | |
+| NotEqual | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | NotEqual | | | NotEqual |
+| OneHot | ✅ | ✅ | | | ✅ | ✅ | | | | OneHot | | OneHot | OneHot |
+| Pad | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Pad, MirrorPad, PadV2 | | Pad | MirrorPad, Pad, PadV2 |
+| Pow | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | ✅ | Pow | Power | Pow<br>[2] | Pow |
+| PowGrad | | ✅ | | | | | | | | | | | |
+| PReLU | ✅ | ✅ | | | ✅ | ✅ | | | ✅ | PRELU | PReLU | PRelu | |
+| QuantDTypeCast | ✅ | ✅ | ✅ | ✅ | | | | | | | | | |
+| RaggedRange | ✅ | ✅ | | | | | | | | | | | RaggedRange |
+| RandomStandardNormal | ✅ | ✅ | | | | | | | | | | | RandomStandardNormal |
+| RandomUniform | | ✅ | | | | | | | | | | | RandomUniform |
+| Range | ✅ | ✅ | | | | | | | | Range | | Range | Range |
+| Rank | ✅ | ✅ | | | | | | | | Rank | | | Rank |
+| RealDiv | ✅ | ✅ | | | | | | | ✅ | | | | |
+| Reciprocal | ✅ | ✅ | ✅ | | | | ✅ | | | | | Reciprocal | |
+| ReduceAll | | ✅ | | | | | | | | | | | All |
+| ReduceASum | ✅ | ✅ | | | ✅ | ✅ | | | | | Reduction | | |
+| ReduceMax | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | | ReduceMax | | ReduceMax | Max |
+| ReduceMean | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | ✅ | Mean | Reduction | ReduceMean | Mean |
+| ReduceMin | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | | ReduceMin | | ReduceMin | Min |
+| ReduceProd | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | | ReduceProd | | ReduceProd | Prod |
+| ReduceSum | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | ✅ | Sum | Reduction | ReduceSum | Sum |
+| ReduceSumSquare | ✅ | ✅ | ✅ | ✅ | | | | | | | Reduction | ReduceSumSquare | |
+| ReLU | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Relu | ReLU | Relu | Relu |
+| ReLUGrad | ✅ | ✅ | | | | | | | | | | | |
+| ReLU6 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Relu6 | ReLU6 | Clip<br>[1] | Relu6 |
+| ReLU6Grad | ✅ | ✅ | | | | | | | | | | | |
+| Reshape | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Reshape | Reshape | Reshape,<br>Flatten | Reshape |
+| Resize | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ResizeBilinear,<br>NearestNeighbor | Interp | Resize, Upsample | ResizeBilinear,<br>ResizeBicubic,<br>ResizeNearestNeighbor |
+| ResizeGrad | ✅ | ✅ | | | | | | | | | | | |
+| ResizeNearestNeighbor | | | | | | | | | ✅ | | | | |
+| Reverse | | ✅ | | | | | | | | reverse | | | ReverseV2 |
+| ReverseSequence | | ✅ | | | | | | | | ReverseSequence | | | ReverseSequence |
+| Round | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Round | | Round | Round |
+| Rsqrt | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | Rsqrt | | | Rsqrt |
+| Select | | ✅ | | | | | | | | | | | Select |
+| Selu | | | | | | | | | | | | | Selu |
+| Scale | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Scale | | |
+| ScatterNd | ✅ | ✅ | | | | | | | | ScatterNd | | ScatterND | |
+| ScatterNdUpdate | ✅ | ✅ | | | | | | | | ScatterNdUpdate | | ScatterNdUpdate | |
+| Shape | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | ✅ | Shape | | Shape | Shape |
+| Sigmoid | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Logistic | Sigmoid | Sigmoid | Sigmoid |
| SigmoidGrad | ✅ | ✅ | | | | | | | | | | | |
-| Sin | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Sin | | Sin | Sin | Sin |
-| Size | ✅ | ✅ | | | | | | | | | | Size | |
-| Slice | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Slice | Slice | Slice | Slice | |
-| SkipGram | | ✅ | | | | | | | SKipGram | | | | |
-| Softmax | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Softmax | Softmax | Softmax | Softmax | Softmax |
+| Sin | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Sin | | Sin | Sin |
+| Size | ✅ | ✅ | | | | | | | | | | | Size |
+| Slice | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | Slice | Slice | Slice | Slice |
+| SkipGram | | ✅ | | | | | | | | SkipGram | | | |
+| Softmax | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Softmax | Softmax | Softmax | Softmax |
| SoftmaxGrad | | ✅ | | | | | | | | | | | |
-| Softplus | ✅ | ✅ | | | | | | | | | | Softplus | |
-| SpaceToBatch | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | SpaceToBatch | | | | |
-| SpaceToBatchND | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | SpaceToBatchND | | | SpaceToBatchND | |
-| SpaceToDepth | ✅ | ✅ | | | ✅ | ✅ | | | SpaceToDepth | | SpaceToDepth | | |
-| SparseToDense | ✅ | ✅ | | | ✅ | ✅ | | | SpareToDense | | | | |
-| Splice | ✅ | ✅ | | | | | | | | | Splice | | |
-| Split | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Split, SplitV | | Split | Split, SplitV | |
-| SplitWithOverlap | ✅ | ✅ | | | | | | | | | | | |
-| Sqrt | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Sqrt | | Sqrt | Sqrt | Sqrt |
-| Square | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | Square | | | Square | Square |
-| SquaredDifference | ✅ | ✅ | | | ✅ | ✅ | | | SquaredDifference | | | SquaredDifference | |
-| Squeeze | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Squeeze | | Squeeze | Squeeze | |
-| StridedSlice | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | StridedSlice | | Slice,<br>DynamicSlice | StridedSlice | StridedSlice |
-| StridedSliceGrad | ✅ | ✅ | | | | | | | | | | |
-| Stack | ✅ | ✅ | | | ✅ | ✅ | | | Stack | | | Pack | Stack |
-| Sub | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Sub | | Sub | Sub | Sub |
-| SubGrad | | ✅ | | | | | | | | | | | |
+| Softplus | ✅ | ✅ | | | | | | | | | | | Softplus |
+| SpaceToBatch | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | | SpaceToBatch | | | |
+| SpaceToBatchND | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | | SpaceToBatchND | | | SpaceToBatchND |
+| SpaceToDepth | ✅ | ✅ | | | ✅ | ✅ | | | | SpaceToDepth | | SpaceToDepth | |
+| SparseToDense | ✅ | ✅ | | | ✅ | ✅ | | | | SparseToDense | | | |
+| Splice | ✅ | ✅ | | | | | | | | | | Splice | |
+| Split | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | Split, SplitV | | Split | Split, SplitV |
+| SplitWithOverlap | ✅ | ✅ | | | | | | | | | | | |
+| Sqrt | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Sqrt | | Sqrt | Sqrt |
+| Square | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | Square | | | Square |
+| SquaredDifference | ✅ | ✅ | | | ✅ | ✅ | | | | SquaredDifference | | | SquaredDifference |
+| Squeeze | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | ✅ | | Squeeze | | Squeeze | Squeeze |
+| StridedSlice | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | StridedSlice | | Slice,<br>DynamicSlice | StridedSlice |
+| StridedSliceGrad | ✅ | ✅ | | | | | | | | | | | |
+| Stack | ✅ | ✅ | | | ✅ | ✅ | | | ✅ | Stack | | | Pack |
+| Sub | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Sub | | Sub | Sub |
+| SubGrad | | ✅ | | | | | | | | | | | |
| Swish | ✅ | ✅ | | | | | | | | | | | |
-| Switch | ✅ | ✅ | | | | | | | | | | Switch | |
-| Tanh | ✅ | ✅ | | | ✅ | ✅ | ✅ | ✅ | Tanh | TanH | Tanh, Sign | Tanh | Tanh |
-| TanhGrad | | ✅ | | | | | | | | | | | |
-| TensorListFromTensor | ✅ | ✅ | | | | | | | | | | TensorListFromTensor | |
-| TensorListGetItem | ✅ | ✅ | | | | | | | | | | TensorListGetItem | |
-| TensorListReserve | ✅ | ✅ | | | | | | | | | | TensorListReserve | |
-| TensorListSetItem | ✅ | ✅ | | | | | | | | | | TensorListSetItem | |
-| TensorListStack | ✅ | ✅ | | | | | | | | | | TensorListStack | |
-| Tile | ✅ | ✅ | | | | | ✅ | | Tile | Tile | Tile | Tile | |
-| TopK | ✅ | ✅ | ✅ | ✅ | | | | | TopKV2 | | TopK | TopKV2 | |
-| Transpose | ✅ | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | Transpose | Permute | Transpose, Int8Transpose | Transpose | Transpose |
-| UniformReal | | ✅ | | | | | | | | | | | |
-| Unique | ✅ | ✅ | | | | | | | Unique | | | | |
-| UnsortedSegmentSum | ✅ | ✅ | | | | | | | | | | UnsortedSegmentSum | |
-| Unsqueeze | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | Unsqueeze | | |
-| Unstack | ✅ | ✅ | | | | | | | Unstack | | | | |
-| Where | ✅ | ✅ | | | | | | | Where | | NonZero, Where | Where | |
-| ZerosLike | ✅ | ✅ | | | | | | | ZerosLike | | | ZerosLike | |
-| Other operators supported by the conversion tool<br>[4] | | | | | | | | | | | Constant,<br>Atan, Asin, Tan, Erf,<br>Loop, Dropout, If, Identity,<br>Int8GivenIntTensorFill,<br>Int8GivenTensorFill,<br>Int8Quantize,<br>Int8Dequantize,<br>LpNormalization | Dropout, Enter,<br>Exit, If,<br>IsFinite,<br>LinSpace,<br>LoopCond,<br>NextIteration,<br>StatelessIf,<br>StatelessWhile,<br>TensorArrayGatherV3,<br>TensorArrayReadV3,<br>TensorArrayScatterV3,<br>TensorArraySizeV3,<br>TensorArrayV3,<br>TensorArrayWriteV3,<br>While | |
+| Switch | ✅ | ✅ | | | | | | | | | | | Switch |
+| Tanh | ✅ | ✅ | | | ✅ | ✅ | ✅ | ✅ | ✅ | Tanh | TanH | Tanh, Sign | Tanh |
+| TanhGrad | | ✅ | | | | | | | | | | | |
+| TensorListFromTensor | ✅ | ✅ | | | | | | | | | | | TensorListFromTensor |
+| TensorListGetItem | ✅ | ✅ | | | | | | | | | | | TensorListGetItem |
+| TensorListReserve | ✅ | ✅ | | | | | | | | | | | TensorListReserve |
+| TensorListSetItem | ✅ | ✅ | | | | | | | | | | | TensorListSetItem |
+| TensorListStack | ✅ | ✅ | | | | | | | | | | | TensorListStack |
+| Tile | ✅ | ✅ | | | | | ✅ | | | Tile | Tile | Tile | Tile |
+| TopK | ✅ | ✅ | ✅ | ✅ | | | | | | TopKV2 | | TopK | TopKV2 |
+| Transpose | ✅ | ✅ | ✅ | | ✅ | ✅ | ✅ | ✅ | ✅ | Transpose | Permute | Transpose, Int8Transpose | Transpose |
+| UniformReal | | ✅ | | | | | | | | | | | |
+| Unique | ✅ | ✅ | | | | | | | | Unique | | | |
+| UnsortedSegmentSum | ✅ | ✅ | | | | | | | | | | | UnsortedSegmentSum |
+| Unsqueeze | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | | | | Unsqueeze | |
+| Unstack | ✅ | ✅ | | | | | | | | Unstack | | | |
+| Upsample | | | | | | | | | ✅ | | | | |
+| Where | ✅ | ✅ | | | | | | | | Where | | NonZero, Where | Where |
+| ZerosLike | ✅ | ✅ | | | | | | | | ZerosLike | | | ZerosLike |
+| Other operators supported by the conversion tool<br>[4] | | | | | | | | | | | | Constant,<br>Atan, Asin, Tan, Erf,<br>Loop, Dropout, If, Identity,<br>Int8GivenIntTensorFill,<br>Int8GivenTensorFill,<br>Int8Quantize,<br>Int8Dequantize,<br>LpNormalization | Dropout, Enter,<br>Exit, If,<br>IsFinite,<br>LinSpace,<br>LoopCond,<br>NextIteration,<br>StatelessIf,<br>StatelessWhile,<br>TensorArrayGatherV3,<br>TensorArrayReadV3,<br>TensorArrayScatterV3,<br>TensorArraySizeV3,<br>TensorArrayV3,<br>TensorArrayWriteV3,<br>While |
[1] Clip: only clip(0, 6) can be converted to Relu6.
diff --git a/docs/lite/docs/source_zh_cn/quick_start/image_segmentation.md b/docs/lite/docs/source_zh_cn/quick_start/image_segmentation.md
index 23463e1e3ae48a2219c47be95f1fd7eb8892e84a..85833a3d5a5e9c6ba0773f15ea6bf814a47ec139 100644
--- a/docs/lite/docs/source_zh_cn/quick_start/image_segmentation.md
+++ b/docs/lite/docs/source_zh_cn/quick_start/image_segmentation.md
@@ -80,11 +80,7 @@
app
├── src/main
│ ├── assets # Resource files
-| | └── model # Model files
-| | └── segment_model.ms # Model file
-│ |
-│ ├── libs # Binary archives of the Android library project
-| | └── mindspore-lite-version.aar # MindSpore Lite archive for Android
+| | └── segment_model.ms # Model file
│ |
│ ├── java # Java application code
│ │ └── com.mindspore.imagesegmentation
@@ -95,7 +91,9 @@ app
│ ├── res # Android resource files
│ └── AndroidManifest.xml # Android configuration file
│
-│
+├── libs # Binary archives of the Android library project
+| └── mindspore-lite-version.aar # MindSpore Lite archive for Android
+|
├── build.gradle # Other Android configuration file
├── download.gradle # Downloads project dependency files
└── ...
@@ -121,30 +119,35 @@ Android调用MindSpore Android AAR时,需要相关库文件支持。可通过M
The inference code flow is as follows; for the complete code, see [src/java/com/mindspore/imagesegmentation/TrackingMobile](https://gitee.com/mindspore/models/blob/master/official/lite/image_segmentation/app/src/main/java/com/mindspore/imagesegmentation/help/TrackingMobile.java).
-1. Load the MindSpore Lite model file, and build the context, session, and computational graph for inference.
+1. Load the MindSpore Lite model, and build the context, session, and computational graph for inference.
    - Create a session.
```java
// Create and init config.
MSContext context = new MSContext();
- context.init(2, CpuBindMode.HIGHER_CPU, false);
- boolean ret = context.addDeviceInfo(com.mindspore.config.DeviceType.DT_CPU, false, 0);
- if (!ret) {
- Log.e(TAG, "Create CPU Config failed.");
- return null;
+ if (!context.init(2, CpuBindMode.MID_CPU, false)) {
+ Log.e(TAG, "Init context failed");
+ return;
+ }
+ if (!context.addDeviceInfo(DeviceType.DT_CPU, false, 0)) {
+ Log.e(TAG, "Add device info failed");
+ return;
}
```
- - Load the model file and build the computational graph for inference.
+ - Load the model and build the computational graph for inference.
```java
+ MappedByteBuffer modelBuffer = loadModel(mContext, IMAGESEGMENTATIONMODEL);
+ if (modelBuffer == null) {
+ Log.e(TAG, "Load model failed");
+ return;
+ }
// build model.
- boolean ret = model.build(filePath, ModelType.MT_MINDIR, msContext);
- if (!ret) {
- model.free();
- Log.e(TAG, "Compile graph failed");
- return null;
+ boolean ret = model.build(modelBuffer, ModelType.MT_MINDIR, context);
+ if (!ret) {
+ Log.e(TAG, "Build model failed");
}
```
@@ -172,12 +175,11 @@ Android调用MindSpore Android AAR时,需要相关库文件支持。可通过M
```java
// Run graph to infer results.
- boolean ret = model.predict();
- if (!ret) {
- Log.e(TAG, "MindSpore Lite run failed.");
- return false;
+ if (!model.predict()) {
+ Log.e(TAG, "Run graph failed");
+ return null;
}
- ```
+ ```
4. Process the output data.
diff --git a/docs/lite/docs/source_zh_cn/quick_start/one_hour_introduction.md b/docs/lite/docs/source_zh_cn/quick_start/one_hour_introduction.md
index bd4f8aa57a48d520be5cf8c64d80bdc3f2ee9df3..23b35770ca5b52c0b6f1d10a12149bdd17313f5e 100644
--- a/docs/lite/docs/source_zh_cn/quick_start/one_hour_introduction.md
+++ b/docs/lite/docs/source_zh_cn/quick_start/one_hour_introduction.md
@@ -1,6 +1,6 @@
# One-Hour Introduction
-`Android` `C++` `Whole Process` `Model Conversion` `Model Loading` `Inference Application` `Data Preparation` `Beginner` `Intermediate` `Advanced`
+`Windows` `Linux` `Android` `C++` `Whole Process` `Model Conversion` `Model Loading` `Inference Application` `Data Preparation` `Beginner` `Intermediate` `Advanced`

@@ -513,7 +513,7 @@ mindspore-lite-{version}-linux-x64
5. Write the code

- Open the `mian.cc` file you just created and paste the following content:
+ Open the `main.cc` file you just created and paste the following content:
```cpp
#include
@@ -1221,7 +1221,7 @@ mindspore-lite-{version}-win-x64
5. Write the code

- Open the `mian.cc` file you just created and paste the following content:
+ Open the `main.cc` file you just created and paste the following content:
```cpp
#include
diff --git a/docs/lite/docs/source_zh_cn/quick_start/quick_start.md b/docs/lite/docs/source_zh_cn/quick_start/quick_start.md
index 01e8379cbe960b738b7cd33b8af3efae16c835ec..7c96cff5257eed0b07815dee2ed07d258c0ffedc 100644
--- a/docs/lite/docs/source_zh_cn/quick_start/quick_start.md
+++ b/docs/lite/docs/source_zh_cn/quick_start/quick_start.md
@@ -243,139 +243,119 @@ target_link_libraries( # Specifies the target library.
MSNetWork *labelNet = new MSNetWork;
*labelEnv = labelNet;
- mindspore::lite::Context *context = new mindspore::lite::Context;
- context->thread_num_ = num_thread;
- context->device_list_[0].device_info_.cpu_device_info_.cpu_bind_mode_ = mindspore::lite::NO_BIND;
- context->device_list_[0].device_info_.cpu_device_info_.enable_float16_ = false;
- context->device_list_[0].device_type_ = mindspore::lite::DT_CPU;
-
- labelNet->CreateSessionMS(modelBuffer, bufferLen, context);
- delete context;
+ auto context = std::make_shared<mindspore::Context>();
+ if (context == nullptr) {
+ MS_PRINT("context create failed!");
+ delete labelNet;
+ delete labelEnv;
+ return (jlong) nullptr;
+ }
+
+ context->SetThreadNum(num_thread);
+ context->SetThreadAffinity(0);
+ auto &device_list = context->MutableDeviceInfo();
+ auto cpuDeviceInfo = std::make_shared<mindspore::CPUDeviceInfo>();
+ cpuDeviceInfo->SetEnableFP16(false);
+ device_list.push_back(cpuDeviceInfo);
```
Build the computational graph for inference from the model file `modelBuffer`.
- ```cpp
- void MSNetWork::CreateSessionMS(char *modelBuffer, size_t bufferLen, mindspore::lite::Context *ctx) {
- session_ = mindspore::session::LiteSession::CreateSession(ctx);
- if (session_ == nullptr) {
- MS_PRINT("Create Session failed.");
- return;
- }
-
- // Compile model.
- model_ = mindspore::lite::Model::Import(modelBuffer, bufferLen);
- if (model_ == nullptr) {
- ReleaseNets();
- MS_PRINT("Import model failed.");
- return;
- }
-
- int ret = session_->CompileGraph(model_);
- if (ret != mindspore::lite::RET_OK) {
- ReleaseNets();
- MS_PRINT("CompileGraph failed.");
- return;
- }
- }
+ ```cpp
+ bool MSNetWork::BuildModel(char *modelBuffer, size_t bufferLen,
+ std::shared_ptr ctx) {
+ model_ = std::make_shared<mindspore::Model>();
+ if (model_ == nullptr) {
+ MS_PRINT("MindSpore build model failed!");
+ return false;
+ }
+ auto ret = model_->Build(modelBuffer, bufferLen, mindspore::ModelType::kMindIR, ctx);
+ return ret.IsOk();
+ }
```
2. Convert the input image into the tensor format passed to the MindSpore model.
- Crop the image to be detected, `srcBitmap`, and convert it into the LiteMat format `lite_norm_mat_cut`. Convert its width, height, and channel information into the float-format data `dataHWC`. Finally, copy `dataHWC` into the MindSpore model's tensor input `inTensor`.
- ```cpp
- if (!BitmapToLiteMat(env, srcBitmap, &lite_mat_bgr)) {
- MS_PRINT("BitmapToLiteMat error");
- return NULL;
- }
- if (!PreProcessImageData(lite_mat_bgr, &lite_norm_mat_cut)) {
- MS_PRINT("PreProcessImageData error");
- return NULL;
- }
-
- ImgDims inputDims;
- inputDims.channel = lite_norm_mat_cut.channel_;
- inputDims.width = lite_norm_mat_cut.width_;
- inputDims.height = lite_norm_mat_cut.height_;
-
- // Get the MindSpore inference environment which created in loadModel().
- void **labelEnv = reinterpret_cast<void **>(netEnv);
- if (labelEnv == nullptr) {
- MS_PRINT("MindSpore error, labelEnv is a nullptr.");
- return NULL;
- }
- MSNetWork *labelNet = static_cast<MSNetWork *>(*labelEnv);
-
- auto mSession = labelNet->session();
- if (mSession == nullptr) {
- MS_PRINT("MindSpore error, Session is a nullptr.");
- return NULL;
+ ```cpp
+ void **labelEnv = reinterpret_cast<void **>(netEnv);
+ if (labelEnv == nullptr) {
+ MS_PRINT("MindSpore error, labelEnv is a nullptr.");
+ return NULL;
+ }
+ MSNetWork *labelNet = static_cast<MSNetWork *>(*labelEnv);
+
+ auto mModel = labelNet->model();
+ if (mModel == nullptr) {
+ MS_PRINT("MindSpore error, Model is a nullptr.");
+ return NULL;
}
- MS_PRINT("MindSpore get session.");
+ MS_PRINT("MindSpore get Model.");
- auto msInputs = mSession->GetInputs();
- if (msInputs.size() == 0) {
- MS_PRINT("MindSpore error, msInputs.size() equals 0.");
- return NULL;
+ auto msInputs = mModel->GetInputs();
+ if (msInputs.empty()) {
+ MS_PRINT("MindSpore error, msInputs.size() equals 0.");
+ return NULL;
}
auto inTensor = msInputs.front();
   float *dataHWC = reinterpret_cast<float *>(lite_norm_mat_cut.data_ptr_);
// Copy dataHWC to the model input tensor.
- memcpy(inTensor->MutableData(), dataHWC,
- inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
- ```
+ memcpy(inTensor.MutableData(), dataHWC,
+ inputDims.channel * inputDims.width * inputDims.height * sizeof(float));
+ ```
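The `memcpy` above transfers `channel * width * height` floats into the `NHWC` input tensor. As a minimal standalone sketch of that arithmetic (the `Dims` struct below is a hypothetical stand-in for the tutorial's `ImgDims`, not the real type):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-in for the tutorial's ImgDims struct.
struct Dims { int channel; int width; int height; };

// Number of floats copied into an NHWC float tensor of these dimensions.
inline size_t TensorFloatCount(const Dims &d) {
  return static_cast<size_t>(d.channel) * d.width * d.height;
}

// Flat offset of element (h, w, c) in an NHWC buffer.
inline size_t NhwcOffset(const Dims &d, int h, int w, int c) {
  return (static_cast<size_t>(h) * d.width + w) * d.channel + c;
}
```

For the 224x224x3 input used in this tutorial, `TensorFloatCount` gives 150528 elements, matching the `channel * width * height * sizeof(float)` byte count passed to `memcpy`.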
The image resizing and the detailed data-processing algorithm are as follows.
- ```cpp
- bool PreProcessImageData(const LiteMat &lite_mat_bgr, LiteMat *lite_norm_mat_ptr) {
- bool ret = false;
- LiteMat lite_mat_resize;
- LiteMat &lite_norm_mat_cut = *lite_norm_mat_ptr;
- ret = ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
- if (!ret) {
+ ```cpp
+ bool PreProcessImageData(const LiteMat &lite_mat_bgr, LiteMat *lite_norm_mat_ptr) {
+ bool ret = false;
+ LiteMat lite_mat_resize;
+ LiteMat &lite_norm_mat_cut = *lite_norm_mat_ptr;
+ ret = ResizeBilinear(lite_mat_bgr, lite_mat_resize, 256, 256);
+ if (!ret) {
MS_PRINT("ResizeBilinear error");
return false;
- }
- LiteMat lite_mat_convert_float;
- ret = ConvertTo(lite_mat_resize, lite_mat_convert_float, 1.0 / 255.0);
- if (!ret) {
- MS_PRINT("ConvertTo error");
- return false;
- }
- LiteMat lite_mat_cut;
- ret = Crop(lite_mat_convert_float, lite_mat_cut, 16, 16, 224, 224);
- if (!ret) {
- MS_PRINT("Crop error");
- return false;
- }
- std::vector<float> means = {0.485, 0.456, 0.406};
- std::vector<float> stds = {0.229, 0.224, 0.225};
- SubStractMeanNormalize(lite_mat_cut, lite_norm_mat_cut, means, stds);
- return true;
+ }
+ LiteMat lite_mat_convert_float;
+ ret = ConvertTo(lite_mat_resize, lite_mat_convert_float, 1.0 / 255.0);
+ if (!ret) {
+ MS_PRINT("ConvertTo error");
+ return false;
+ }
+ LiteMat lite_mat_cut;
+ ret = Crop(lite_mat_convert_float, lite_mat_cut, 16, 16, 224, 224);
+ if (!ret) {
+ MS_PRINT("Crop error");
+ return false;
+ }
+   std::vector<float> means = {0.485, 0.456, 0.406};
+   std::vector<float> stds = {0.229, 0.224, 0.225};
+ SubStractMeanNormalize(lite_mat_cut, lite_norm_mat_cut, means, stds);
+ return true;
}
- ```
+ ```
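The normalization at the end of `PreProcessImageData` first scales each byte to [0, 1] (`ConvertTo` with factor 1/255) and then applies per-channel mean/std normalization (`SubStractMeanNormalize`). A standalone sketch of the per-pixel arithmetic, not the LiteMat API itself:

```cpp
#include <cassert>
#include <cmath>

// Scale a byte value to [0, 1], then normalize with a per-channel
// mean and standard deviation, as the two LiteMat calls above do.
inline float NormalizePixel(unsigned char v, float mean, float stddev) {
  return (v / 255.0f - mean) / stddev;
}
```

For example, a fully saturated first-channel byte (255) with mean 0.485 and std 0.229 maps to (1.0 - 0.485) / 0.229 ≈ 2.249.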
3. Run model inference on the input tensor, obtain the output tensor, and perform post-processing.
- After the graph and model are loaded, run on-device inference.
```cpp
+ std::vector<mindspore::MSTensor> outputs;
// After the model and image tensor data is loaded, run inference.
- auto status = mSession->RunGraph();
+ auto status = mModel->Predict(msInputs, &outputs);
```
- Obtain the MindSpore model's tensor output `msOutputs`. From `msOutputs` and the category array information, compute the text `resultCharData` displayed in the APP.
```cpp
- auto names = mSession->GetOutputTensorNames();
- std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs;
+ auto names = mModel->GetOutputTensorNames();
+ std::unordered_map<std::string, mindspore::MSTensor> msOutputs;
for (const auto &name : names) {
- auto temp_dat =mSession->GetOutputByTensorName(name);
- msOutputs.insert(std::pair<std::string, mindspore::tensor::MSTensor *> {name, temp_dat});
- }
+ auto temp_dat = mModel->GetOutputByTensorName(name);
+ msOutputs.insert(std::pair{name, temp_dat});
+ }
std::string resultStr = ProcessRunnetResult(::RET_CATEGORY_SUM, ::labels_name_map, msOutputs);
const char *resultCharData = resultStr.c_str();
@@ -385,48 +365,55 @@ target_link_libraries( # Specifies the target library.
Post-processing of the output data. Obtain the output object `outputTensor` from `msOutputs`, and parse it together with the category array `labels_name_map` to get the trained score array `scores[]` for each element. Set the unified confidence threshold to `unifiedThre`; the per-class confidence threshold is determined statistically from the training data. A score above its threshold belongs to that category; otherwise it does not. Finally, return the data `categoryScore`, pairing each category name with its score.
```cpp
- std::string ProcessRunnetResult(const int RET_CATEGORY_SUM, const char *const labels_name_map[], std::unordered_map<std::string, mindspore::tensor::MSTensor *> msOutputs) {
+ std::string ProcessRunnetResult(const int RET_CATEGORY_SUM, const char *const labels_name_map[],
+ std::unordered_map<std::string, mindspore::MSTensor> msOutputs) {
// Get the branch of the model output.
// Use iterators to get map elements.
- std::unordered_map<std::string, mindspore::tensor::MSTensor *>::iterator iter;
+ std::unordered_map<std::string, mindspore::MSTensor>::iterator iter;
iter = msOutputs.begin();
// The mobilenetv2.ms model output just one branch.
auto outputTensor = iter->second;
- int tensorNum = outputTensor->ElementsNum();
+ int tensorNum = outputTensor.ElementNum();
MS_PRINT("Number of tensor elements:%d", tensorNum);
// Get a pointer to the first score.
- float *temp_scores = static_cast<float *>(outputTensor->MutableData());
+ float *temp_scores = static_cast<float *>(outputTensor.MutableData());
float scores[RET_CATEGORY_SUM];
for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- scores[i] = temp_scores[i];
+ scores[i] = temp_scores[i];
}
const float unifiedThre = 0.5;
const float probMax = 1.0;
for (size_t i = 0; i < RET_CATEGORY_SUM; ++i) {
- float threshold = g_thres_map[i];
- float tmpProb = scores[i];
- if (tmpProb < threshold) {
+ float threshold = g_thres_map[i];
+ float tmpProb = scores[i];
+ if (tmpProb < threshold) {
tmpProb = tmpProb / threshold * unifiedThre;
- } else {
+ } else {
tmpProb = (tmpProb - threshold) / (probMax - threshold) * unifiedThre + unifiedThre;
+ }
+ scores[i] = tmpProb;
}
- scores[i] = tmpProb;
+
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ if (scores[i] > 0.5) {
+ MS_PRINT("MindSpore scores[%d] : [%f]", i, scores[i]);
+ }
}
- // Score for each category.
- // Converted to text information that needs to be displayed in the APP.
- std::string categoryScore = "";
- for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
- categoryScore += labels_name_map[i];
- categoryScore += ":";
- std::string score_str = std::to_string(scores[i]);
- categoryScore += score_str;
- categoryScore += ";";
- }
- return categoryScore;
- }
- ```
+ // Score for each category.
+ // Converted to text information that needs to be displayed in the APP.
+ std::string categoryScore = "";
+ for (int i = 0; i < RET_CATEGORY_SUM; ++i) {
+ categoryScore += labels_name_map[i];
+ categoryScore += ":";
+ std::string score_str = std::to_string(scores[i]);
+ categoryScore += score_str;
+ categoryScore += ";";
+ }
+ return categoryScore;
+ }
+ ```
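The remapping loop in `ProcessRunnetResult` is a piecewise-linear function that maps each per-class threshold `g_thres_map[i]` onto the unified value 0.5: scores below the threshold are compressed into [0, 0.5) and scores above it are stretched into [0.5, 1.0]. A minimal standalone sketch of that mapping:

```cpp
#include <cassert>
#include <cmath>

// Map a raw score onto [0, 1] so that the per-class threshold lands at
// unifiedThre (0.5), mirroring the rescaling loop in ProcessRunnetResult.
inline float UnifyScore(float score, float threshold,
                        float unifiedThre = 0.5f, float probMax = 1.0f) {
  if (score < threshold) {
    return score / threshold * unifiedThre;
  }
  return (score - threshold) / (probMax - threshold) * unifiedThre + unifiedThre;
}
```

A score exactly at its class threshold maps to 0.5, so the later `scores[i] > 0.5` check applies a uniform cut across classes whose raw thresholds differ.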
diff --git a/docs/lite/docs/source_zh_cn/troubleshooting_guide.md b/docs/lite/docs/source_zh_cn/troubleshooting_guide.md
index 2ff6b0f6bca12a009e4829e93bf1276de0e9c8de..715ff061010373e509cd039e7d4314ab22a949b1 100644
--- a/docs/lite/docs/source_zh_cn/troubleshooting_guide.md
+++ b/docs/lite/docs/source_zh_cn/troubleshooting_guide.md
@@ -4,7 +4,7 @@
## Overview
-When you encounter problems using MindSpore Lite, check the logs first. In most scenarios, the problem can be located directly from the error messages in the log (setting the environment variable [GLOG_v](https://mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id11) to adjust the log level prints more debug logs). This section briefly introduces how to locate and solve several common error scenarios.
+When you encounter problems using MindSpore Lite, check the logs first. In most scenarios, the problem can be located directly from the error messages in the log (setting the environment variable [GLOG_v](https://mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置) to adjust the log level prints more debug logs). This section briefly introduces how to locate and solve several common error scenarios.
> 1. Because log line numbers may differ between versions, the line numbers in the sample error messages below are all represented by "**";
> 2. The sample logs only list generic information; other scenario-specific information is represented by "****".
@@ -56,7 +56,7 @@
### Full Quantization Conversion Failure
-1. For models with dynamic shapes, you need to set `--inputShape=` in the [conversion command](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#id5), for example:
+1. For models with dynamic shapes, you need to set `--inputShape=` in the [conversion command](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#参数说明), for example:
```
./converter_lite --fmk=ModelType --modelFile=ModelFilePath --outputFile=ConvertedModelPath --configFile=/mindspore/lite/tools/converter/quantizer/config/full_quant.cfg --inputShape=intput_1:1,224,224,3;intput_2:1,48;
@@ -116,7 +116,7 @@
```
- Problem analysis: the input shape of the ms model contains -1, i.e. the model input has a dynamic shape, so direct inference fails because the shape is invalid.
- - Solution: when running inference on a model with dynamic-shape inputs, MindSpore Lite requires a valid shape to be specified. When using the benchmark tool, specify it via the [inputShapes](https://mindspore.cn/lite/docs/zh-CN/master/use/benchmark_tool.html#id3) parameter; when developing with MindSpore Lite integration, set it by calling the [Resize](https://mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#resize) method.
+ - Solution: when running inference on a model with dynamic-shape inputs, MindSpore Lite requires a valid shape to be specified. When using the benchmark tool, specify it via the [inputShapes](https://mindspore.cn/lite/docs/zh-CN/master/use/benchmark_tool.html#参数说明) parameter; when developing with MindSpore Lite integration, set it by calling the [Resize](https://mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#resize) method.
### OpenCL GPU Inference Issues
@@ -185,7 +185,7 @@
```
- Problem analysis: TensorRT GPU graph building does not yet support models with dynamic shapes, specifically models whose input shape contains -1 or that contain a Shape operator.
- - Solution: when using the converter to convert the model to ms, set `--inputShape=` in the [conversion command](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#id5) to specify the shape information of the input tensor. To change the input shape at inference time, specify it via the [inputShapes](https://mindspore.cn/lite/docs/zh-CN/master/use/benchmark_tool.html#id3) parameter when using the benchmark tool, or set it by calling the [Resize](https://mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#resize) method when developing with MindSpore Lite integration. Note: the shape dimensions passed to [Resize](https://mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#resize) must be less than or equal to the dimensions used to [Build](https://mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#build) the model.
+ - Solution: when using the converter to convert the model to ms, set `--inputShape=` in the [conversion command](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#参数说明) to specify the shape information of the input tensor. To change the input shape at inference time, specify it via the [inputShapes](https://mindspore.cn/lite/docs/zh-CN/master/use/benchmark_tool.html#参数说明) parameter when using the benchmark tool, or set it by calling the [Resize](https://mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#resize) method when developing with MindSpore Lite integration. Note: the shape dimensions passed to [Resize](https://mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#resize) must be less than or equal to the dimensions used to [Build](https://mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#build) the model.
#### Graph Execution Failure
@@ -280,7 +280,7 @@
2. What should I do if MindSpore Lite inference results are correct with fp32, but fp16 inference produces NaN or Inf values?
- NaN or Inf values generally indicate numerical overflow during inference. Inspect the model structure, pick out the operator layers likely to overflow, then use the benchmark tool's [Dump feature](https://mindspore.cn/lite/docs/zh-CN/master/use/benchmark_tool.html#dump) to save layer outputs and confirm which operator overflows.
- - Versions after MindSpore Lite 1.5.0 provide mixed-precision inference: when fp16 is preferred for the whole network, a specific operator layer can be set to run in fp32. For usage details, see the documentation on [mixed-precision execution](https://mindspore.cn/lite/docs/zh-CN/master/use/runtime_cpp.html#id13); setting the overflowing layer to fp32 avoids the whole-network accuracy problems seen with fp16 inference.
+ - Versions after MindSpore Lite 1.5.0 provide mixed-precision inference: when fp16 is preferred for the whole network, a specific operator layer can be set to run in fp32. For usage details, see the documentation on [mixed-precision execution](https://mindspore.cn/lite/docs/zh-CN/master/use/runtime_cpp.html#混合精度运行); setting the overflowing layer to fp32 avoids the whole-network accuracy problems seen with fp16 inference.
3. What should I do if MindSpore Lite inference produces NaN or Inf values with both fp32 and fp16?
- Problem analysis: check whether the network contains operators that perform division. NaN values easily appear when a division is executed during inference and the divisor is 0. For example, for the network structure below, if the network is used in a scenario where the input data is not normalized and lies in the 0-255 range, NaN values appear: without normalization the inputs are large, so the matmul output becomes very large, the Tanh activation output equals 1, and the Div operator finally divides by 0. If the network input data is normalized, the Tanh activation does not equal 1 and no NaN values appear in the inference results.
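A hypothetical sketch of the failure mode described above (the real network is only shown in the figure; `DivByOneMinusTanh` is an illustrative stand-in, not an actual operator): with large un-normalized inputs, `tanh` saturates to exactly 1 in float arithmetic, so a following division by `1 - tanh(x)` divides by zero and yields Inf/NaN.

```cpp
#include <cassert>
#include <cmath>

// Illustrative stand-in for a Tanh output feeding a Div operator.
// For large x, std::tanh(x) rounds to exactly 1.0f, so the divisor is 0.
inline float DivByOneMinusTanh(float x) {
  return 1.0f / (1.0f - std::tanh(x));
}
```

With a normalized input such as 0.5, the divisor stays well away from zero and the result remains finite.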
diff --git a/docs/lite/docs/source_zh_cn/use/ascend_info.md b/docs/lite/docs/source_zh_cn/use/ascend_info.md
index 2f189dc872a13d5e6e390da27ce578f35aee5348..3f102dc6dc7dd2ea16f6f6de40bf7c3f1b6e18a7 100644
--- a/docs/lite/docs/source_zh_cn/use/ascend_info.md
+++ b/docs/lite/docs/source_zh_cn/use/ascend_info.md
@@ -100,7 +100,7 @@ MindSpore Lite提供离线转换模型功能的工具,将多种类型的模型
CONVERTER RESULT SUCCESS:0
```
- If you want to learn about the parameters of the converter_lite conversion tool, see the [parameter description](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#id4).
+ If you want to learn about the parameters of the converter_lite conversion tool, see the [parameter description](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#参数说明).
Note: when the original model's input shape is not fixed, specify inputShape when the converter tool converts the model, and set the input_shape_vector parameter in the acl_option_cfg_param section of configFile to the same value. The command is as follows:
@@ -112,7 +112,7 @@ MindSpore Lite提供离线转换模型功能的工具,将多种类型的模型
```cpp
[acl_option_cfg_param]
- input_shape_vector="1,64,64,1"
+ input_shape_vector="[1,64,64,1]"
```
Table 1: [acl_option_cfg_param] configuration parameters
@@ -126,7 +126,6 @@ MindSpore Lite提供离线转换模型功能的工具,将多种类型的模型
| `dynamic_batch_size` | Optional | Specifies the [dynamic batch size](#动态Batch size) parameter. | String | `"2,4"`|
| `dynamic_image_size` | Optional | Specifies the [dynamic image size](#动态分辨率) parameter. | String | `"96,96;32,32"` |
| `fusion_switch_config_file_path` | Optional | Path and file name of the [fusion rule switch configuration](https://support.huaweicloud.com/atctool-cann504alpha2infer/atlasatc_16_0077.html) file. | String | - |
-| `op_select_implmode` | Optional | Configures the operator selection mode. | String | Options are `"high_performance"` and `"high_precision"`; the default is `"high_performance"` |
| `insert_op_config_file_path` | Optional | Inserts the [AIPP](https://support.huaweicloud.com/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html) operator into the model | String | [AIPP](https://support.huaweicloud.com/adevg-ms-atlas200dkappc32/atlasadm_01_0023.html) configuration file path |
## Inference Tool runtime
diff --git a/docs/lite/docs/source_zh_cn/use/benchmark_tool.md b/docs/lite/docs/source_zh_cn/use/benchmark_tool.md
index 1fa3771ebe2b56689f8f25a648ee91bb943afefa..c21a8982aaab039a4db104c6bde5f3a9b6e995b1 100644
--- a/docs/lite/docs/source_zh_cn/use/benchmark_tool.md
+++ b/docs/lite/docs/source_zh_cn/use/benchmark_tool.md
@@ -14,9 +14,9 @@
To use the Benchmark tool, perform the following environment preparation.
-- Compilation: the Benchmark tool code is in the `mindspore/lite/tools/benchmark` directory of the MindSpore source code. Compile it by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id1) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id3) in the build documentation.
+- Compilation: the Benchmark tool code is in the `mindspore/lite/tools/benchmark` directory of the MindSpore source code. Compile it by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#环境要求) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#编译示例) in the build documentation.
-- Running: obtain the `benchmark` tool by referring to the [build output](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id4) in the build documentation.
+- Running: obtain the `benchmark` tool by referring to the [build output](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#编译结构) in the build documentation.
- Add the dynamic link libraries required for inference to the environment variable LD_LIBRARY_PATH.
@@ -268,7 +268,7 @@ np.fromfile("/path/to/dump.bin", np.float32)
To use the Benchmark tool, perform the following environment preparation.
-- Compilation: the Benchmark tool code is in the `mindspore/lite/tools/benchmark` directory of the MindSpore source code. Compile it by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id9) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id11) in the build documentation.
+- Compilation: the Benchmark tool code is in the `mindspore/lite/tools/benchmark` directory of the MindSpore source code. Compile it by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#环境要求-1) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#编译示例-1) in the build documentation.
- Add the dynamic link libraries required for inference to the environment variable PATH.
```bash
diff --git a/docs/lite/docs/source_zh_cn/use/benchmark_train_tool.md b/docs/lite/docs/source_zh_cn/use/benchmark_train_tool.md
index 4736d74f0d5df24145b8ab26b47f3662e9ca75bd..bba1a9f8a3d493da576c7e779b4bf6603ffd5470 100644
--- a/docs/lite/docs/source_zh_cn/use/benchmark_train_tool.md
+++ b/docs/lite/docs/source_zh_cn/use/benchmark_train_tool.md
@@ -14,9 +14,9 @@
To use the `benchmark_train` tool, perform the following environment preparation.
-- Compilation: the `benchmark_train` tool code is in the `mindspore/lite/tools/benchmark_train` directory of the MindSpore source code. Compile the on-device training framework by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id1) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id3) in the build documentation.
+- Compilation: the `benchmark_train` tool code is in the `mindspore/lite/tools/benchmark_train` directory of the MindSpore source code. Compile the on-device training framework by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#环境要求) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#模块构建编译选项) in the build documentation.
-- Configuring environment variables: obtain the `benchmark_train` tool by referring to the [build output](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id5) in the build documentation, and configure the environment variables. Assuming the full path of your compiled on-device training framework package is `/path/mindspore-lite-{version}-{os}-{arch}.tar.gz`, the commands to decompress it and configure the environment variables are as follows:
+- Configuring environment variables: obtain the `benchmark_train` tool by referring to the [build output](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#目录结构) in the build documentation, and configure the environment variables. Assuming the full path of your compiled on-device training framework package is `/path/mindspore-lite-{version}-{os}-{arch}.tar.gz`, the commands to decompress it and configure the environment variables are as follows:
```bash
cd /path
diff --git a/docs/lite/docs/source_zh_cn/use/build.md b/docs/lite/docs/source_zh_cn/use/build.md
index 5ab7916ac3e4ed95b7214a1787f580cee9632fa1..2094da66e4f678e78951f1f4463612f0f2a35d64 100644
--- a/docs/lite/docs/source_zh_cn/use/build.md
+++ b/docs/lite/docs/source_zh_cn/use/build.md
@@ -82,12 +82,15 @@ MindSpore根目录下的`build.sh`脚本可用于MindSpore Lite的编译。
| MSLITE_ENABLE_TRAIN | Whether to compile the training version | on, off | on |
| MSLITE_ENABLE_SSE | Whether to enable the SSE instruction set; valid only with `-I x86_64` | on, off | off |
| MSLITE_ENABLE_AVX | Whether to enable the AVX instruction set; valid only with `-I x86_64` | on, off | off |
+| MSLITE_ENABLE_AVX512 | Whether to enable the AVX512 instruction set; valid only with `-I x86_64` | on, off | off |
| MSLITE_ENABLE_CONVERTER | Whether to compile the model conversion tool; valid only with `-I x86_64` | on, off | on |
| MSLITE_ENABLE_TOOLS | Whether to compile the companion tools | on, off | on |
| MSLITE_ENABLE_TESTCASES | Whether to compile the test cases | on, off | off |
+| MSLITE_ENABLE_SERVER_INFERENCE | Whether to enable the server-side inference interface | on, off | off |
> - For the build environment configuration of TensorRT and NPU, see the [dedicated chip integration instructions](https://www.mindspore.cn/lite/docs/zh-CN/master/use/asic.html).
> - When the AVX instruction set is enabled, the CPU of the runtime environment must support both the avx and fma features.
+> - Compiling the model conversion tool takes a long time; unless necessary, it is recommended to disable it via `MSLITE_ENABLE_CONVERTER` to speed up compilation.
- Compilation options for trimming runtime features
@@ -246,11 +249,13 @@ MindSpore根目录下的`build.bat`脚本可用于MindSpore Lite的编译。
| -------- | ----- | ---- | ---- |
| MSLITE_ENABLE_SSE | Whether to enable the SSE instruction set | on, off | off |
| MSLITE_ENABLE_AVX | Whether to enable the AVX instruction set (this option does not yet support the Visual Studio compiler) | on, off | off |
+| MSLITE_ENABLE_AVX512 | Whether to enable the AVX512 instruction set (this option does not yet support the Visual Studio compiler) | on, off | off |
| MSLITE_ENABLE_CONVERTER | Whether to compile the model conversion tool (this option does not yet support the Visual Studio compiler) | on, off | on |
| MSLITE_ENABLE_TOOLS | Whether to compile the companion tools | on, off | on |
| MSLITE_ENABLE_TESTCASES | Whether to compile the test cases | on, off | off |
> - The options above can be modified by setting environment variables of the same name or editing the `mindspore/lite/CMakeLists.txt` file.
+> - Compiling the model conversion tool takes a long time; unless necessary, it is recommended to disable it via `MSLITE_ENABLE_CONVERTER` to speed up compilation.
### Compilation Examples
diff --git a/docs/lite/docs/source_zh_cn/use/converter_register.md b/docs/lite/docs/source_zh_cn/use/converter_register.md
index 50062a073ee10c1b304895bd1aaa14247723f44e..12092fcac3b501abe5a743e3ac861dd989c8889f 100644
--- a/docs/lite/docs/source_zh_cn/use/converter_register.md
+++ b/docs/lite/docs/source_zh_cn/use/converter_register.md
@@ -79,7 +79,7 @@ REG_SCHEDULED_PASS(POSITION_BEGIN, {"PassTutorial"}) // 注册调度逻辑
For sample code, see [pass](https://gitee.com/mindspore/mindspore/tree/master/mindspore/lite/examples/converter_extend/pass).
-> During the offline conversion stage, the output tensor of every node in the model is inferred, including its Format, DataType, and Shape. Users therefore need to provide the shape-inference process for their self-implemented operators at this stage. For details, see the [operator Infershape extension](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime_cpp.html#id19) description; for sample code, see [infer](https://gitee.com/mindspore/mindspore/tree/master/mindspore/lite/examples/converter_extend/infer).
+> During the offline conversion stage, the output tensor of every node in the model is inferred, including its Format, DataType, and Shape. Users therefore need to provide the shape-inference process for their self-implemented operators at this stage. For details, see the [operator Infershape extension](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime_cpp.html#扩展使用) description; for sample code, see [infer](https://gitee.com/mindspore/mindspore/tree/master/mindspore/lite/examples/converter_extend/infer).
## Example Demonstration
@@ -94,7 +94,7 @@ REG_SCHEDULED_PASS(POSITION_BEGIN, {"PassTutorial"}) // 注册调度逻辑
- Compilation preparation
- MindSpore Lite release packages do not provide serialization files for other frameworks, so users need to compile and obtain them themselves; see the [overview](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_register.html#id1).
+ MindSpore Lite release packages do not provide serialization files for other frameworks, so users need to compile and obtain them themselves; see the [overview](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_register.html#概述).
  This example uses a tflite model. Users need to compile [flatbuffers](https://gitee.com/mindspore/mindspore/blob/master/cmake/external_libs/flatbuffers.cmake), obtain the [TFLITE schema file](https://gitee.com/mindspore/mindspore/blob/master/mindspore/lite/tools/converter/parser/tflite/schema.fbs) from the [MindSpore repository](https://gitee.com/mindspore/mindspore/tree/master), and finally generate the tflite serialization file.
diff --git a/docs/lite/docs/source_zh_cn/use/converter_tool.md b/docs/lite/docs/source_zh_cn/use/converter_tool.md
index c1cdcda3167def6d123c8778b11a0c3b36b32bbf..4911b10a0b21c10ce711eb41d650c338d4c7ff2a 100644
--- a/docs/lite/docs/source_zh_cn/use/converter_tool.md
+++ b/docs/lite/docs/source_zh_cn/use/converter_tool.md
@@ -71,7 +71,7 @@ MindSpore Lite模型转换工具提供了多种参数设置,用户可根据需
> - A Caffe model generally consists of two files: `*.prototxt` for the model structure, corresponding to the `--modelFile` parameter, and `*.caffemodel` for the model weights, corresponding to the `--weightFile` parameter.
> - `--fp16` has a very low priority. For example, if quantization is enabled, `--fp16` no longer takes effect on weights that are already quantized. In short, this option only takes effect on the Float32 weights in the model during serialization.
> - `inputDataFormat`: when integrating third-party hardware with the NCHW layout (for example, [NNIE integration instructions](https://www.mindspore.cn/lite/docs/zh-CN/master/use/nnie.html#nnie)), setting it to NCHW brings a noticeably better performance than NHWC. In other scenarios, users may also set it as needed.
-> - The `configFile` configuration file defines parameters as `key=value`. For quantization-related parameters, see [post-training quantization](https://www.mindspore.cn/lite/docs/zh-CN/master/use/post_training_quantization.html); for extension-related parameters, see [extension configuration](https://www.mindspore.cn/lite/docs/zh-CN/master/use/nnie.html#id6).
+> - The `configFile` configuration file defines parameters as `key=value`. For quantization-related parameters, see [post-training quantization](https://www.mindspore.cn/lite/docs/zh-CN/master/use/post_training_quantization.html); for extension-related parameters, see [extension configuration](https://www.mindspore.cn/lite/docs/zh-CN/master/use/nnie.html#扩展配置).
### Usage Examples
@@ -164,7 +164,7 @@ mindspore-lite-{version}-win-x64
### Parameter Description
-See the [parameter description](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#id3) of the model conversion tool in the Linux environment.
+See the [parameter description](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#参数说明) of the model conversion tool in the Linux environment.
### Usage Examples
diff --git a/docs/lite/docs/source_zh_cn/use/cropper_tool.md b/docs/lite/docs/source_zh_cn/use/cropper_tool.md
index e8c439c07a3618ac964ca611c96731fb196d583b..f9ba2e8da2720a3801681aa782f37f2ccf578c7e 100644
--- a/docs/lite/docs/source_zh_cn/use/cropper_tool.md
+++ b/docs/lite/docs/source_zh_cn/use/cropper_tool.md
@@ -14,9 +14,9 @@ MindSpore Lite提供对Runtime的`libmindspore-lite.a`静态库裁剪工具,
To use the MindSpore Lite cropper tool, perform the following environment preparation.
-- Compilation: the cropper tool code is in the `mindspore/lite/tools/cropper` directory of the MindSpore source code. Build the x86_64 version by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id1) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id3) in the build documentation.
+- Compilation: the cropper tool code is in the `mindspore/lite/tools/cropper` directory of the MindSpore source code. Build the x86_64 version by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#环境要求) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#编译示例) in the build documentation.
-- Running: obtain the `cropper` tool by referring to the [build output](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id4) in the build documentation.
+- Running: obtain the `cropper` tool by referring to the [build output](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#目录结构) in the build documentation.
## Parameter Description
diff --git a/docs/lite/docs/source_zh_cn/use/delegate.md b/docs/lite/docs/source_zh_cn/use/delegate.md
index 263ddcf89d67284822298bcff585a9fe14a46af0..4ec5e64d7b6936897fd1378e5d3acd038f537646 100644
--- a/docs/lite/docs/source_zh_cn/use/delegate.md
+++ b/docs/lite/docs/source_zh_cn/use/delegate.md
@@ -14,7 +14,7 @@ MindSpore Lite的Delegate接口用于支持第三方AI框架(例如:NPU、Te
1. Add a custom Delegate class: inherit from the [Delegate](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#delegate) class to implement a custom Delegate.
2. Implement the initialization interface: the [Init](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#init) interface determines whether the running device supports the Delegate framework and initializes Delegate resources.
-3. Implement the graph building interface: the [Build](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#id24) interface implements operator support checking, subgraph construction, and online graph building.
+3. Implement the graph building interface: the [Build](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#build) interface implements operator support checking, subgraph construction, and online graph building.
4. Implement the subgraph Kernel: inherit from [Kernel](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore_kernel.html#kernel) to implement the Delegate's subgraph Kernel.
### Adding a Custom Delegate Class
@@ -47,7 +47,7 @@ Status XXXDelegate::Init() {
### Implementing the Graph Building Interface
-The graph building interface [Build(DelegateModel<schema::Primitive> *model)](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#id24) takes an instance of [DelegateModel](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#delegatemodel) as its input parameter.
+The graph building interface [Build(DelegateModel<schema::Primitive> *model)](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#build) takes an instance of [DelegateModel](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#delegatemodel) as its input parameter.
> In `DelegateModel`, [std::vector<kernel::Kernel *> *kernels_](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#kernel) is the list of operators that have completed MindSpore Lite built-in operator registration and have been topologically sorted.
>
diff --git a/docs/lite/docs/source_zh_cn/use/downloads.md b/docs/lite/docs/source_zh_cn/use/downloads.md
index d11ceca5b3bb880365611bce086a281932a6ee16..38ed8cb9397adbd08644e2bdc1836d170db22d68 100644
--- a/docs/lite/docs/source_zh_cn/use/downloads.md
+++ b/docs/lite/docs/source_zh_cn/use/downloads.md
@@ -6,6 +6,22 @@
Welcome to MindSpore Lite. We provide model conversion, model inference, image processing, and other features supporting multiple operating systems and hardware platforms. You can download the release package suited to your local environment and use it directly.
+The Linux-x86_64 packages target Linux on the x86-64 architecture and have been tested and verified on the Linux distributions EulerOS 2.0, CentOS 7.8, and Ubuntu 18.04.
+
+## 1.6.0
+
+| Component | Hardware Platform | Operating System | Link | SHA-256 |
+| --- | --- | --- | --- | --- |
+| Inference/training runtime, inference/training AAR package, and benchmark tool | CPU | Android-aarch32 | | d043803cffc8a0b75409aab3e4039f1e86756cf618af1538a76865e9fa4fd481 |
+| Inference/training runtime, inference/training AAR package, and benchmark tool | CPU/GPU | Android-aarch64 | | 25188266621f4cfedb24970a9a98ef6190fe02c9d034b7285f360da425ffe9d6 |
+| Inference/training runtime, inference/training JAR package, and benchmark/codegen/converter/cropper tools | CPU | Linux-x86_64 | | 90472996359f64509f38036ed8100605c76dcdc42453c2fc7156048eb981708c |
+| Inference runtime and benchmark/codegen/converter tools | CPU | Windows-x86_64 | | 4460b8f1bf321eca005074dccffb54d6d3164ba3f78ce34530ec20db4dbc9980 |
+| iOS inference runtime | CPU | iOS-aarch32 | | 72fe007660abe9c51d0a1852b094fb52d8bbd1610c989e79c9858937102aa59f |
+| iOS inference runtime | CPU | iOS-aarch64 | | 51bd5f7c21477d7856bea33d31e059f578b6b964a7c43e440e97c44b186db4a4 |
+| NNIE converter tool | CPU | Linux-x86_64 | | 81c2a5dadf51978d1c80f75c63fde4edefa2897792ac571fd33ffd35e338736b |
+| NNIE inference runtime and benchmark tool | Hi3516D | Linux-aarch32 | | 8133c2326e2defa3614f86592d5691fdb410a4296898e254a33cd33a7e519b16 |
+| OpenHarmony lite inference runtime | Hi3516D | OpenHarmony-aarch32 | | d5daafac4bdcd0d03158e2a7cd3f881869b49cfb77d9654a24ddd967edbe5e91 |
+
## 1.5.0
| Component | Hardware Platform | Operating System | Link | SHA-256 |
diff --git a/docs/lite/docs/source_zh_cn/use/nnie.md b/docs/lite/docs/source_zh_cn/use/nnie.md
index ed3698ac24105e352e3da6ff49f21eb0c4b29900..93764a8761fee35471dd02b051a510a22b271ed1 100644
--- a/docs/lite/docs/source_zh_cn/use/nnie.md
+++ b/docs/lite/docs/source_zh_cn/use/nnie.md
@@ -37,7 +37,7 @@ mindspore-lite-{version}-linux-aarch32
└── libmslite_proposal.so # sample dynamic library integrating proposal
```
-The above is the NNIE integration directory structure. For details on the rest of the directory structure of the inference tool runtime, see [directory structure](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id4).
+The above is the NNIE integration directory structure. For details on the rest of the directory structure of the inference tool runtime, see [directory structure](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#目录结构).
## Tool Usage
@@ -142,7 +142,7 @@ nnie.cfg文件的示例参考如下:
CONVERTER RESULT SUCCESS:0
```
- If you want to learn about the parameters of the converter_lite conversion tool, see the [parameter description](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#id4).
+ If you want to learn about the parameters of the converter_lite conversion tool, see the [parameter description](https://www.mindspore.cn/lite/docs/zh-CN/master/use/converter_tool.html#参数说明).
### Inference Tool runtime
@@ -337,7 +337,7 @@ ${model_path}为转换后ms模型文件路径
When converting an NNIE model, MindSpore Lite fuses most operators into the binary file that NNIE runs, so users cannot observe the outputs of intermediate operators. By adding the "_report" suffix to the top field, graph conversion adds the intermediate operator outputs to the fused layer outputs; if the operator already had an output (i.e. was not fused), it remains unchanged.
- At inference time, users can obtain intermediate operator outputs through [callback execution](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime_cpp.html#id15).
+ At inference time, users can obtain intermediate operator outputs through [callback execution](https://www.mindspore.cn/lite/docs/zh-CN/master/use/runtime_cpp.html#回调运行).
For the rules MindSpore Lite uses to parse _report, and the resolution of conflicts with the [inplace mechanism](#inplace机制), refer to the definitions in the "HiSVP Development Guide".
diff --git a/docs/lite/docs/source_zh_cn/use/obfuscator_tool.md b/docs/lite/docs/source_zh_cn/use/obfuscator_tool.md
index 0457c23b49fbc6580eabbce24f073a76695accf3..49e12f80403a1333c1143691156ce33d4d650e25 100644
--- a/docs/lite/docs/source_zh_cn/use/obfuscator_tool.md
+++ b/docs/lite/docs/source_zh_cn/use/obfuscator_tool.md
@@ -14,7 +14,7 @@ MindSpore Lite提供一个轻量级的离线模型混淆工具,可用于保护
To use the MindSpore Lite model obfuscation tool, perform the following environment preparation.
-- Build the x86_64 version by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id1) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#id3) in the build documentation.
+- Build the x86_64 version by referring to the [environment requirements](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#环境要求) and [compilation examples](https://www.mindspore.cn/lite/docs/zh-CN/master/use/build.html#编译示例) in the build documentation.
### Directory Structure
diff --git a/docs/lite/docs/source_zh_cn/use/post_training_quantization.md b/docs/lite/docs/source_zh_cn/use/post_training_quantization.md
index 56d1ca1712c43216cf72300551df2d0c822335f7..d127ebf126f7ea159b42ff495da0606ec5ebf705 100644
--- a/docs/lite/docs/source_zh_cn/use/post_training_quantization.md
+++ b/docs/lite/docs/source_zh_cn/use/post_training_quantization.md
@@ -217,7 +217,7 @@ min_quant_weight_channel=16
Full quantization computes quantization parameters for activation values, so users need to provide a calibration dataset. The calibration dataset should ideally come from real inference scenarios so that it represents the model's actual inputs; around 100 samples are needed.
-For image data, preprocessing such as channel adjustment, normalization, resizing, and cropping is currently supported. Users can set the corresponding [parameters](https://www.mindspore.cn/lite/docs/zh-CN/master/use/post_training_quantization.html#id7) according to the preprocessing required at inference time.
+For image data, preprocessing such as channel adjustment, normalization, resizing, and cropping is currently supported. Users can set the corresponding [parameters](https://www.mindspore.cn/lite/docs/zh-CN/master/use/post_training_quantization.html#数据预处理) according to the preprocessing required at inference time.
The general form of the full quantization conversion command is:
diff --git a/docs/lite/docs/source_zh_cn/use/runtime_cpp.md b/docs/lite/docs/source_zh_cn/use/runtime_cpp.md
index 86da93f360301e3ee02b1fa1573c87837949b469..f9ba15275a4c52ecd3471eaf2065fbccb41350ca 100644
--- a/docs/lite/docs/source_zh_cn/use/runtime_cpp.md
+++ b/docs/lite/docs/source_zh_cn/use/runtime_cpp.md
@@ -288,7 +288,7 @@ MindSpore Lite提供两种方法来获取模型的输入Tensor。
// Users need to free input_buf.
```
-> The data layout in a MindSpore Lite model input tensor must be `NHWC`. To learn more about data preprocessing, see step 2, converting the input image into the tensor format passed to the MindSpore model, in [writing on-device inference code](https://www.mindspore.cn/lite/docs/zh-CN/master/quick_start/quick_start.html#id10) in JNI-based Android application development.
+> The data layout in a MindSpore Lite model input tensor must be `NHWC`. To learn more about data preprocessing, see step 2, converting the input image into the tensor format passed to the MindSpore model, in [writing on-device inference code](https://www.mindspore.cn/lite/docs/zh-CN/master/quick_start/quick_start.html#编写端侧推理代码) in JNI-based Android application development.
>
> The data returned by the [GetInputs](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#getinputs) and [GetInputByTensorName](https://www.mindspore.cn/lite/api/zh-CN/master/api_cpp/mindspore.html#getinputbytensorname) methods does not need to be released by the user.
diff --git a/docs/lite/faq/requirements.txt b/docs/lite/faq/requirements.txt
index 96cdfc3e0c7ee0ae6a01e59c1081111fdc792bb6..9e0a48e898dd0ef56f2f0503a66694e919562f73 100644
--- a/docs/lite/faq/requirements.txt
+++ b/docs/lite/faq/requirements.txt
@@ -1,5 +1,6 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
jieba
\ No newline at end of file
diff --git a/docs/mindarmour/api/requirements.txt b/docs/mindarmour/api/requirements.txt
index f2424ce6f08f8ce69b2f1bb181bc09da2641d35d..1e7189447f37b0e2e1c98a2ac5e82da36756a493 100644
--- a/docs/mindarmour/api/requirements.txt
+++ b/docs/mindarmour/api/requirements.txt
@@ -1,4 +1,5 @@
sphinx >= 2.2.1, <= 2.4.4
-sphinx_rtd_theme
+docutils == 0.16
+sphinx_rtd_theme == 0.5.2
numpy
opencv-python
\ No newline at end of file
diff --git a/docs/mindarmour/docs/requirements.txt b/docs/mindarmour/docs/requirements.txt
index 6d8cd70439820e16bc32c4abc93e948ba81dc01b..49a77fdec3a5c745edd40eaa223883c31500e975 100644
--- a/docs/mindarmour/docs/requirements.txt
+++ b/docs/mindarmour/docs/requirements.txt
@@ -1,7 +1,8 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
numpy
nbsphinx
IPython
diff --git a/docs/mindarmour/docs/source_en/conf.py b/docs/mindarmour/docs/source_en/conf.py
index b9f143de66b8503ab664e5c9b2098701a52c4ed4..e8091add2480a75778c807c34eb5188aa66eb74e 100644
--- a/docs/mindarmour/docs/source_en/conf.py
+++ b/docs/mindarmour/docs/source_en/conf.py
@@ -14,7 +14,6 @@ import os
import sys
import IPython
import re
-import nbsphinx as nbs
from sphinx.ext import autodoc as sphinx_autodoc
import mindarmour
@@ -114,20 +113,9 @@ with open(autodoc_source_path, "r+", encoding="utf8") as f:
exec(get_param_func_str, sphinx_autodoc.__dict__)
exec(code_str, sphinx_autodoc.__dict__)
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
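The block deleted from `conf.py` above followed a common monkey-patching pattern: read a module's source, comment out unwanted lines with a regex, then `exec` the result back into the module's namespace. This patch centralizes that logic in shared `anchor_mod`/`nbsphinx_mod` extensions instead. A minimal, self-contained sketch of the deleted pattern, with a toy module standing in for nbsphinx:

```python
import re
import types

# Toy module source standing in for nbsphinx; the deleted conf.py code
# rewrote nbsphinx's real source file in exactly this way.
SOURCE = """GREETING = "hello"
def setup(app):
    app.connect('html-collect-pages', html_collect_pages)
    return {"version": "0.1"}
"""

connect_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")

def comment_out(source):
    """Comment out the matched call on lines that are not already commented."""
    lines = source.splitlines(keepends=True)
    for num, line in enumerate(lines):
        if connect_re.search(line) and "#" not in line:
            lines[num] = connect_re.sub(r"# \g<1>", line)
    return "".join(lines)

# exec the patched source into a fresh module namespace
toy = types.ModuleType("toy")
exec(comment_out(SOURCE), toy.__dict__)
```

After patching, the unwanted `app.connect(...)` call is a comment, so `toy.setup(...)` no longer registers the handler, while the rest of the module is untouched.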
diff --git a/docs/mindarmour/docs/source_en/faq.md b/docs/mindarmour/docs/source_en/faq.md
index de406f38321ef6be077ca854323488e5fdebc0ce..4c064845bcafd0c4645e5ef0277997c99f11d6ff 100644
--- a/docs/mindarmour/docs/source_en/faq.md
+++ b/docs/mindarmour/docs/source_en/faq.md
@@ -1,6 +1,6 @@
# FAQ
-
+
**Q: What should I do when FastGradientSignMethod does not specify loss_fn, it reports an error: `Function construct_wrapper, the number of parameters of this function is 9, but the number of provided arguments is 10.`**
diff --git a/docs/mindarmour/docs/source_en/index.rst b/docs/mindarmour/docs/source_en/index.rst
index 4f7fe8c35905b3d8c804c48adfa2e17e588fafb1..f2567cec82407b1bfac6508e1e77cd243052a3d2 100644
--- a/docs/mindarmour/docs/source_en/index.rst
+++ b/docs/mindarmour/docs/source_en/index.rst
@@ -3,8 +3,9 @@ MindArmour Documents
As a general technology, AI brings great opportunities and benefits, but also faces new security and privacy protection challenges. MindArmour is a subsystem of MindSpore. It provides security and privacy protection for MindSpore, including adversarial robustness, model security test, differential privacy training, privacy risk assessment, and data drift detection.
-.. image:: ./images/mindarmour.png
- :width: 700px
+.. raw:: html
+
+
Typical MindArmour Application Scenarios
-----------------------------------------
diff --git a/docs/mindarmour/docs/source_zh_cn/conf.py b/docs/mindarmour/docs/source_zh_cn/conf.py
index ef02fc30cacbc16899c6e612edbb8eb7376590d9..065330c5731faa17f52b967ab9c5120e82c24c92 100644
--- a/docs/mindarmour/docs/source_zh_cn/conf.py
+++ b/docs/mindarmour/docs/source_zh_cn/conf.py
@@ -14,7 +14,6 @@ import os
import sys
import IPython
import re
-import nbsphinx as nbs
from sphinx.ext import autodoc as sphinx_autodoc
import mindarmour
@@ -114,20 +113,9 @@ with open(autodoc_source_path, "r+", encoding="utf8") as f:
exec(get_param_func_str, sphinx_autodoc.__dict__)
exec(code_str, sphinx_autodoc.__dict__)
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
diff --git a/docs/mindarmour/docs/source_zh_cn/faq.md b/docs/mindarmour/docs/source_zh_cn/faq.md
index ae562faab408b8a0fba1b95290e9bea4a3074cae..a6952d7ad19d6f583f95e221d3536660b2980da5 100644
--- a/docs/mindarmour/docs/source_zh_cn/faq.md
+++ b/docs/mindarmour/docs/source_zh_cn/faq.md
@@ -1,6 +1,6 @@
# FAQ
-
+
**Q: FastGradientSignMethod未指定loss_fn时报错`Function construct_wrapper, The number of parameters of this function is 9, but the number of provided arguments is 10.`怎么办?**
diff --git a/docs/mindarmour/docs/source_zh_cn/index.rst b/docs/mindarmour/docs/source_zh_cn/index.rst
index 8335f92f9bd2ad350b21df36f00c700984912427..33a25ed1627cca6d6bedfd8f9ed2b895032f6094 100644
--- a/docs/mindarmour/docs/source_zh_cn/index.rst
+++ b/docs/mindarmour/docs/source_zh_cn/index.rst
@@ -3,8 +3,9 @@ MindArmour 文档
AI作为一种通用技术,在带来巨大机遇和效益的同时也面临着新的安全与隐私保护的挑战。MindArmour是昇思MindSpore的一个子项目,为昇思MindSpore提供安全与隐私保护能力,主要包括对抗鲁棒性、模型安全测试、差分隐私训练、隐私泄露风险评估、数据漂移检测等技术。
-.. image:: ./images/mindarmour_cn.png
- :width: 700px
+.. raw:: html
+
+
使用MindArmour的典型场景
------------------------------
diff --git a/docs/mindarmour/faq/requirements.txt b/docs/mindarmour/faq/requirements.txt
index 96cdfc3e0c7ee0ae6a01e59c1081111fdc792bb6..9e0a48e898dd0ef56f2f0503a66694e919562f73 100644
--- a/docs/mindarmour/faq/requirements.txt
+++ b/docs/mindarmour/faq/requirements.txt
@@ -1,5 +1,6 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
jieba
\ No newline at end of file
diff --git a/docs/mindinsight/api/requirements.txt b/docs/mindinsight/api/requirements.txt
index c47c7c8ff90e45f1e82e4c77c21c0e404f8b5b71..82711b2877bab025b77468be8612e79fed32fd30 100644
--- a/docs/mindinsight/api/requirements.txt
+++ b/docs/mindinsight/api/requirements.txt
@@ -1,4 +1,5 @@
sphinx >= 2.2.1, <= 2.4.4
-sphinx_rtd_theme
+docutils == 0.16
+sphinx_rtd_theme == 0.5.2
numpy
torch>=1.8.2
diff --git a/docs/mindinsight/docs/requirements.txt b/docs/mindinsight/docs/requirements.txt
index 412ae8c2254c231eaa625b4e3f09c8ff3c88203d..91b4fcd07762187b4ec0653a3d0fe7e4c68c6e34 100644
--- a/docs/mindinsight/docs/requirements.txt
+++ b/docs/mindinsight/docs/requirements.txt
@@ -1,7 +1,8 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
numpy
nbsphinx
IPython
diff --git a/docs/mindinsight/docs/source_en/conf.py b/docs/mindinsight/docs/source_en/conf.py
index 1fd161bd11c76aee01e4d50cf0af3165ec40f204..16c3884daddcfc3ad5bda13919adad2c352c6ac2 100644
--- a/docs/mindinsight/docs/source_en/conf.py
+++ b/docs/mindinsight/docs/source_en/conf.py
@@ -14,7 +14,6 @@ import os
import IPython
import re
import sys
-import nbsphinx as nbs
from sphinx.ext import autodoc as sphinx_autodoc
import mindinsight
@@ -111,20 +110,9 @@ with open(autodoc_source_path, "r+", encoding="utf8") as f:
exec(get_param_func_str, sphinx_autodoc.__dict__)
exec(code_str, sphinx_autodoc.__dict__)
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
diff --git a/docs/mindinsight/docs/source_en/dashboard.md b/docs/mindinsight/docs/source_en/dashboard.md
index ad7ffbb5e336130deaad8646971ea35176f175dd..2fd8705f9809affe3ce19412fcb4be9e7b1b9c77 100644
--- a/docs/mindinsight/docs/source_en/dashboard.md
+++ b/docs/mindinsight/docs/source_en/dashboard.md
@@ -4,7 +4,7 @@
## Overview
-Training dashboard is an important part of mindinsight's visualization component, and its tags include scalar visualization, parameter distribution visualization, computational graph visualization, data graph visualization, image visualization and tensor visualization.
+Training dashboard is an important part of MindInsight's visualization component, and its tags include scalar visualization, parameter distribution visualization, computational graph visualization, data graph visualization, image visualization, tensor visualization, and training optimization process visualization.
Access the Training Dashboard by selecting a specific training from the training list.
@@ -97,7 +97,12 @@ Figure 7 shows the function area of the computational graph, including:
Figure 8 shows the readability optimization feature, which optimizes the readability of the graph and reduces the complexity of the graph, and removes most of the gradient and optimizer operators.
-Note: to get the clearest visualization of computational graph, please avoid to use public methods which cross cell and set `jit_level` to `o0` when collecting the computational graph, please refer to the API [mindspore.Model.build](https://www.mindspore.cn/docs/api/en/master/api_python/mindspore/mindspore.Model.html#mindspore.Model.build).
+Note:
+
+- To get the clearest visualization of the computational graph, avoid using public methods that cross Cells.
+- Set `jit_level` to `o0` when collecting the computational graph; for details, refer to the API [mindspore.Model.build](https://www.mindspore.cn/docs/api/en/master/api_python/mindspore/mindspore.Model.html#mindspore.Model.build).
+- During computational graph optimization, operators in different namespaces may be merged because they serve the same function, which can create cycles between namespaces and hurt readability.
+- Displaying the complete control flow is not supported at present. If you need it, specify the control branch in the script.
## Dataset Graph Visualization
@@ -164,6 +169,10 @@ Figure 13 shows tensors recorded by a user in a form of a table which includes t
Figure 14 shows tensors recorded by a user in a form of a histogram. Click the upper right corner to zoom in the histogram.
+## Training Optimization Process Visualization
+
+The training optimization process visualization shows the optimization space around the neural network's training path. For more information, refer to [Training Optimization Process Visualization](https://www.mindspore.cn/mindinsight/docs/en/master/landscape.html).
+
## Notices
1. Currently MindSpore supports recording computational graph after operator fusion for Ascend 910 AI processor only.
diff --git a/docs/mindinsight/docs/source_en/debugger_offline.md b/docs/mindinsight/docs/source_en/debugger_offline.md
index 799443feffbc7ac246acf53ba17f1616d4f2d101..40282e695157eb5b00c1a63e7d0f85c6c6bf0dad 100644
--- a/docs/mindinsight/docs/source_en/debugger_offline.md
+++ b/docs/mindinsight/docs/source_en/debugger_offline.md
@@ -101,22 +101,26 @@ import mindinsight.debugger as debugger
from mindinsight.debugger import DumpAnalyzer as DumpAnalyzer
from mindinsight.debugger import Watchpoint as Watchpoint
-# Init DumpAnalyzer with the dump_dir
-analyzer = DumpAnalyzer("/path/to/dump_dir")
-# Select the tensors generated by the code in 'lenet.py', line 49
-tensors = analyzer.select_tensors(query_string="/path/to/src/of/lenet.py:49", select_by="code_stack")
-# Create a watchpoint for tensors with condition TensorTooLarge, set the parameter abs_mean_gt=0.001
-watchpoint1 = Watchpoint(tensors, debugger.TensorTooLargeCondition(abs_mean_gt=0.001))
-# Create another watchpoint for tensors with condition TensorAllZero, set the parameter zero_percentage_ge=99.9
-watchpoint2 = Watchpoint(tensors, debugger.TensorAllZeroCondition(zero_percentage_ge=99.9))
-# Check the given watchpoints
-hits = analyzer.check_watchpoints([watchpoint1, watchpoint2])
-# Show the result
-for hit in hits:
- print("The hit detail is: {}".format(hit.get_hit_detail()))
- tensor = hit.tensor
- print("The hit tensor info is: iteration: {}, graph_name: {}, node_name: {}, rank: {}, slot: {}"
- .format(tensor.iteration, tensor.node.graph_name, tensor.node.name, tensor.node.name, tensor.rank, tensor.slot))
+def test_debugger_offline():
+ # Init DumpAnalyzer with the dump_dir
+ analyzer = DumpAnalyzer("/path/to/dump_dir")
+ # Select the tensors generated by the code in 'lenet.py', line 49
+ tensors = analyzer.select_tensors(query_string="/path/to/src/of/lenet.py:49", select_by="code_stack")
+ # Create a watchpoint for tensors with condition TensorTooLarge, set the parameter abs_mean_gt=0.001
+ watchpoint1 = Watchpoint(tensors, debugger.TensorTooLargeCondition(abs_mean_gt=0.001))
+ # Create another watchpoint for tensors with condition TensorAllZero, set the parameter zero_percentage_ge=99.9
+ watchpoint2 = Watchpoint(tensors, debugger.TensorAllZeroCondition(zero_percentage_ge=99.9))
+    # Check the given watchpoints. check_watchpoints starts new processes, so it must be called from the main entry point
+ hits = analyzer.check_watchpoints([watchpoint1, watchpoint2])
+ # Show the result
+ for hit in hits:
+ print("The hit detail is: {}".format(hit.get_hit_detail()))
+ tensor = hit.tensor
+ print("The hit tensor info is: iteration: {}, graph_name: {}, node_name: {}, rank: {}, slot: {}"
+                  .format(tensor.iteration, tensor.node.graph_name, tensor.node.name, tensor.rank, tensor.slot))
+
+if __name__ == "__main__":
+ test_debugger_offline()
```
## Precautions
@@ -124,7 +128,7 @@ for hit in hits:
- Scenarios:
- The offline debugger does not support the CPU scenario currently.
- The offline debugger supports the single-node multi-device scenario. To analyze the multi-node multi-device scenario, you need to summarize the data of multiple nodes.
- - The offline debugger does not support checking the initial weight and operator overflow currently.
+ - The offline debugger does not support checking the initial weight.
- The offline debugger does not support PyNative mode.
- GPU scenario:
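The `if __name__ == "__main__"` wrapper added to the debugger example above exists because `check_watchpoints` spawns worker processes; on spawn-based platforms every child re-imports the script, so unguarded module-level work would run again in each child. A minimal stdlib sketch of the same pattern, nothing MindSpore-specific:

```python
import multiprocessing as mp

def square(x):
    return x * x

def main():
    # Work that must run exactly once lives behind the __main__ guard;
    # spawned child processes re-import this module but skip this branch.
    with mp.Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    print(main())  # [1, 4, 9]
```

Without the guard, each spawned worker would try to create its own pool on import, failing with a `RuntimeError`.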
diff --git a/docs/mindinsight/docs/source_en/debugger_online.md b/docs/mindinsight/docs/source_en/debugger_online.md
index 8310d15cda85820f01c8ab556d608da1696c8c7b..214798a9513a886e19cbd2f4f1d46dc2e0978f6b 100644
--- a/docs/mindinsight/docs/source_en/debugger_online.md
+++ b/docs/mindinsight/docs/source_en/debugger_online.md
@@ -106,7 +106,6 @@ After a watchpoint is created, manually select the node to be checked and click
The following conditions are supported (abbreviations in parentheses):
- Tensor check
- - Operator overflow (OO): Check whether overflow occurs during operator computation. Only the Ascend AI Processor is supported.
- Whether tensor values are all 0 (TZ): Set the threshold to `Percentage of 0 values ≥` to check the percentage of 0 tensor values.
- Tensor overflow (TO): Check whether a tensor value overflow occurs.
- Tensor value range (TR): Set a threshold to check the tensor value range. The options are `Percentage of the value in the range >`, `Percentage of the value in the range <`, `MAX-MIN>` and `MAX-MIN<`. If setting the threshold to `Percentage of the value in the range >` or `Percentage of the value in the range <`, you need to set the `Upper limit of the range (inclusive)` or `Lower limit of the range (inclusive)` at the same time.
@@ -256,7 +255,6 @@ Tensors can be downloaded in tensor check view. Users can download the desired t
- When using the debugger, make sure that the version numbers of MindInsight and MindSpore are the same.
- Recheck only watchpoints that have tensor values.
-- To check overflow during computation, you need to enable the overflow detection function of the asynchronous dump. For details about how to enable the function, see [Asynchronous Dump](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#asynchronous-dump).
- The graph displayed by the debugger is the finally optimized execution graph. The called operator may have been integrated with other operators, or the name of the called operator is changed after optimization.
- Enabling the debugger will turn off memory reuse mode, which may lead to an 'out of memory' error when the training network is too large.
diff --git a/docs/mindinsight/docs/source_en/index.rst b/docs/mindinsight/docs/source_en/index.rst
index c747cf5f8d1f46b8bf6e5b3b12db625f640447be..333e2e3ef7dd4aaad5200a0ddee032a1eb042489 100644
--- a/docs/mindinsight/docs/source_en/index.rst
+++ b/docs/mindinsight/docs/source_en/index.rst
@@ -12,8 +12,9 @@ MindInsight provides the following functions:
- `Hyperparameter optimization `_
- `Model migration `_
-.. image:: ./images/mindinsight_en.png
- :width: 700px
+.. raw:: html
+
+
Using MindInsight to Visualize the Training Process
----------------------------------------------------
diff --git a/docs/mindinsight/docs/source_en/landscape.md b/docs/mindinsight/docs/source_en/landscape.md
index 4e2dd9e12b41c3c8a289831dd20762429e67180b..a8bfbc61852cd21fde629ac5ceb8bd8b4dc47688 100644
--- a/docs/mindinsight/docs/source_en/landscape.md
+++ b/docs/mindinsight/docs/source_en/landscape.md
@@ -1,4 +1,4 @@
-# Visualization of Training Optimization Process
+# Training Optimization Process Visualization
@@ -158,9 +158,96 @@ The specific use steps are divided into two steps. Taking the classification tas
2. Landscape drawing: using the model parameters saved in the training process, the model and dataset are consistent with the training, start a new script, and generate landscape information through forward calculation without re-training. (applicable to drawing landscape by single device or multi devices Parallel Computing)
```python
+ import mindspore.dataset as ds
+ import mindspore.dataset.vision.c_transforms as CV
+ import mindspore.dataset.transforms.c_transforms as C
+ from mindspore.dataset.vision import Inter
+ from mindspore import dtype as mstype
+ import mindspore.nn as nn
+
+ from mindspore import Model
+ from mindspore.common.initializer import Normal
from mindspore.nn import Loss
from mindspore.train.callback import SummaryLandscape
+ def create_dataset(data_path, batch_size=32, repeat_size=1,
+ num_parallel_workers=1):
+ """
+ create dataset for train or test
+ """
+ # define dataset
+ mnist_ds = ds.MnistDataset(data_path, shuffle=False)
+
+ resize_height, resize_width = 32, 32
+ rescale = 1.0 / 255.0
+ shift = 0.0
+ rescale_nml = 1 / 0.3081
+ shift_nml = -1 * 0.1307 / 0.3081
+
+ # define map operations
+ resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) # Bilinear mode
+ rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
+ rescale_op = CV.Rescale(rescale, shift)
+ hwc2chw_op = CV.HWC2CHW()
+ type_cast_op = C.TypeCast(mstype.int32)
+
+ # apply map operations on images
+ mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(operations=resize_op, input_columns="image", num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(operations=rescale_op, input_columns="image", num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns="image", num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns="image", num_parallel_workers=num_parallel_workers)
+
+ # apply DatasetOps
+ buffer_size = 10000
+ mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script
+ mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
+ mnist_ds = mnist_ds.repeat(repeat_size)
+
+ return mnist_ds
+
+ class LeNet5(nn.Cell):
+ """
+ Lenet network
+
+ Args:
+ num_class (int): Number of classes. Default: 10.
+ num_channel (int): Number of channels. Default: 1.
+
+ Returns:
+ Tensor, output tensor
+ Examples:
+    >>> LeNet5(num_class=10)
+
+ """
+ def __init__(self, num_class=10, num_channel=1, include_top=True):
+ super(LeNet5, self).__init__()
+ self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid', weight_init=Normal(0.02))
+ self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid', weight_init=Normal(0.02))
+ self.relu = nn.ReLU()
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.include_top = include_top
+ if self.include_top:
+ self.flatten = nn.Flatten()
+ self.fc1 = nn.Dense(16 * 5 * 5, 120)
+ self.fc2 = nn.Dense(120, 84)
+ self.fc3 = nn.Dense(84, num_class)
+
+ def construct(self, x):
+ x = self.conv1(x)
+ x = self.relu(x)
+ x = self.max_pool2d(x)
+ x = self.conv2(x)
+ x = self.relu(x)
+ x = self.max_pool2d(x)
+ if not self.include_top:
+ return x
+ x = self.flatten(x)
+ x = self.relu(self.fc1(x))
+ x = self.relu(self.fc2(x))
+ x = self.fc3(x)
+ return x
+
def callback_fn():
network = LeNet5(10)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
diff --git a/docs/mindinsight/docs/source_en/lineage_and_scalars_comparison.md b/docs/mindinsight/docs/source_en/lineage_and_scalars_comparison.md
index 5fe54d5358582029ad0f1da80e2118e99e154136..81bfe4fa3b92b7db4bf98eb1a2eaea5fbe0b1ecc 100644
--- a/docs/mindinsight/docs/source_en/lineage_and_scalars_comparison.md
+++ b/docs/mindinsight/docs/source_en/lineage_and_scalars_comparison.md
@@ -64,7 +64,7 @@ Figure 7 shows the data processing and augmentation information of all model tra
## Scalars Comparison
-Scalars Comparison can be used to compare scalar curves between multiple trainings
+Scalars Comparison can be used to compare scalar curves and loss graphs across multiple trainings. For detailed information on loss graph comparison, refer to [Training Optimization Process Visualization](https://www.mindspore.cn/mindinsight/docs/en/master/landscape.html).

diff --git a/docs/mindinsight/docs/source_en/performance_profiling_ascend.md b/docs/mindinsight/docs/source_en/performance_profiling_ascend.md
index 2e78d0ba334181c79f1e9e551abb1e16fb773f78..0733d4c6dd1f44a11d8eeb3abd2d4e3b1d0a0d55 100644
--- a/docs/mindinsight/docs/source_en/performance_profiling_ascend.md
+++ b/docs/mindinsight/docs/source_en/performance_profiling_ascend.md
@@ -232,6 +232,8 @@ The red box in Figure 5 includes calculation quantity data on operator granulari
Figure 6 is a sankey diagram that presents data in the structure of a tree where the cursor selects a scope to see the specific FLOPs value.
+> This figure only draws the scope hierarchy of the operators (the specific operator names at the last layer are not shown). Since the hierarchy depth varies from operator to operator during training, the totals of adjacent levels may not be equal.
+
### Data Preparation Performance Analysis
The Data preparation performance analysis component is used to analyse the execution of data input pipeline for the training. The data input pipeline can be divided into three stages:
diff --git a/docs/mindinsight/docs/source_en/performance_profiling_ascend_of_cluster.md b/docs/mindinsight/docs/source_en/performance_profiling_ascend_of_cluster.md
index ff04a7b9a5ef6489df75ddbd93f53bf79f9c0f5e..102f2d942ebdb0a4cc79da742afc32f0150f29ab 100644
--- a/docs/mindinsight/docs/source_en/performance_profiling_ascend_of_cluster.md
+++ b/docs/mindinsight/docs/source_en/performance_profiling_ascend_of_cluster.md
@@ -284,6 +284,10 @@ To use MindInsight to visualize communication performance data, you need to inst
pip install /usr/local/Ascend/tools/hccl_parser-{version}-py3-none-any.whl
```
+### Specifications
+
+To keep data parsing fast, the number of files generated when cluster communication is enabled is currently limited: MindSpore generates at most 500 original communication performance files (named with the suffix `.trace`). When the original communication data exceeds this upper limit, the step number shown in Cluster Communication may be inconsistent with the step number in cluster Step Trace.
+
## Resource Utilization
### Cluster Memory Analysis
diff --git a/docs/mindinsight/docs/source_en/summary_record.md b/docs/mindinsight/docs/source_en/summary_record.md
index 73a08c5cc157a87b14c7387e6e6608d655f1c29a..af451e1ea6fc58daaebb530a6ec386ac4d473135 100644
--- a/docs/mindinsight/docs/source_en/summary_record.md
+++ b/docs/mindinsight/docs/source_en/summary_record.md
@@ -4,17 +4,17 @@
## Overview
-Scalars, images, computational graphs, and model hyperparameters during training are recorded in files and can be viewed on the web page.
+Scalars, images, computational graphs, training optimization process, and model hyperparameters during training are recorded in files and can be viewed on the web page.
## Operation Process
-- Prepare a training script, specify scalars, images, computational graphs, and model hyperparameters in the training script, record them in the summary log file, and run the training script.
+- Prepare a training script, specify scalars, images, computational graphs, training optimization process, and model hyperparameters in the training script, record them in the summary log file, and run the training script.
- Start MindInsight and specify the summary log file directory using startup parameters. After MindInsight is started, access the visualization page based on the IP address and port number. The default access IP address is `http://127.0.0.1:8080`.
- During the training, when data is written into the summary log file, you can view the data on the web page.
## Preparing The Training Script
-Currently, MindSpore supports to save scalars, images, computational graph, and model hyperparameters to summary log file and display them on the web page. The computational graph can only be recorded in the graph mode.
+Currently, MindSpore supports saving scalars, images, the computational graph, the training optimization process, and model hyperparameters to the summary log file and displaying them on the web page. The computational graph can only be recorded in graph mode. For the detailed process of data collection and landscape drawing in the training optimization process, refer to [Training Optimization Process Visualization](https://www.mindspore.cn/mindinsight/docs/en/master/landscape.html).
MindSpore currently supports multiple ways to record data into summary log files.
@@ -120,7 +120,8 @@ if __name__ == '__main__':
```
> 1. When using summary, it is recommended that you set `dataset_sink_mode` argument of `model.train` to `False`. Please see notices for more information.
-> 2. When using summary, you need to run the code in `if __name__ == "__main__"`. For more detail, refer to [Python tutorial](https://docs.python.org/3.7/library/multiprocessing.html#multiprocessing-programming)
+> 2. When using summary, you need to run the code in `if __name__ == "__main__"`. For more detail, refer to [Python tutorial](https://docs.python.org/3.7/library/multiprocessing.html#multiprocessing-programming).
+> 3. `dataset_path` is the path to the user's local training dataset.
### Method two: Custom collection of network data with summary operators and SummaryCollector
diff --git a/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md b/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
index 1f4e1f50ef679abd4ec4882683dd0aada0c09a91..e7f7f216da8b099af33d437ac104fd9ba930c6d8 100644
--- a/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
+++ b/docs/mindinsight/docs/source_zh_cn/accuracy_optimization.md
@@ -212,6 +212,8 @@
model.eval(ds_eval, callbacks=[summary_collector])
```
+ > dataset_path为用户本地的训练数据集路径。
+
使用训练看板[可视化功能](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/dashboard.html)查看训练过程数据:

@@ -305,14 +307,13 @@ MindInsight可以辅助用户对输入数据、数据处理流水线进行检查
6. 激活值饱和或过弱(例如Sigmoid的输出接近1,Relu的输出全为0);
7. 梯度爆炸、消失;
8. 训练epoch不足;
-9. 算子计算结果存在NAN、INF;
-10. 算子计算过程溢出(计算过程中的溢出不一定都是有害的)等。
+9. 算子计算结果存在NAN、INF等。
上述这些问题或现象,有的可以通过loss表现出来,有的则难以观察。MindInsight提供了针对性的功能,可以观察上述现象、自动检查问题,帮助您更快定位问题根因。例如:
- MindInsight的参数分布图模块可以展示模型权重随训练过程的变化趋势;
- MindInsight的张量可视模块可以展示张量的具体取值,对不同张量进行对比;
-- [MindInsight调试器](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger.html)内置了种类丰富,功能强大的检查能力,可以检查权重问题(例如权重不更新、权重更新过大、权重值过大/过小)、梯度问题(例如梯度消失、梯度爆炸)、激活值问题(例如激活值饱和或过弱)、张量全为0、NAN/INF、算子计算过程溢出等问题。
+- [MindInsight调试器](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger.html)内置了种类丰富,功能强大的检查能力,可以检查权重问题(例如权重不更新、权重更新过大、权重值过大/过小)、梯度问题(例如梯度消失、梯度爆炸)、激活值问题(例如激活值饱和或过弱)、张量全为0、NAN/INF等问题。

diff --git a/docs/mindinsight/docs/source_zh_cn/accuracy_problem_preliminary_location.md b/docs/mindinsight/docs/source_zh_cn/accuracy_problem_preliminary_location.md
index 93aa18eb61d0ff18a2e9ca47740a29830bb37be1..89e3055b0fc9224944b910819d689a15d6f6a9ea 100644
--- a/docs/mindinsight/docs/source_zh_cn/accuracy_problem_preliminary_location.md
+++ b/docs/mindinsight/docs/source_zh_cn/accuracy_problem_preliminary_location.md
@@ -303,7 +303,7 @@ MindSpore API同其它框架的API存在一定差异。有标杆脚本的情况
### 常见计算图结构问题
-为了检查计算图结构问题,请读者首先参考[收集Summary数据](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/summary_record.html),将计算图保存到summary文件中,然后使用MindInsight[可视化查看计算图](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/dashboard.html#id5)。
+为了检查计算图结构问题,请读者首先参考[收集Summary数据](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/summary_record.html),将计算图保存到summary文件中,然后使用MindInsight[可视化查看计算图](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/dashboard.html#计算图可视化)。
检查结论:
@@ -384,9 +384,7 @@ MindSpore API同其它框架的API存在一定差异。有标杆脚本的情况
检查方法:
当使用[混合精度](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/enable_mixed_precision.html)训练,或者是使用Ascend AI处理器训练时,建议检查是否存在溢出问题。
-使用GPU时,通过[调试器](https://mindspore.cn/mindinsight/docs/zh-CN/master/debugger_online.html#id10)中的“检查张量溢出”监测点可以进行溢出检查。
-
-使用Ascend AI处理器时,使能溢出检查的详细方法请见[异步Dump文档](https://mindspore.cn/docs/programming_guide/zh-CN/master/dump_in_graph_mode.html#id11)。使能溢出检查时,注意设置op_debug_mode为3,开启全部溢出检测功能。若在指定的目录存在算子溢出信息文件,则说明存在溢出问题,反之,则说明不存在溢出问题。
+使用GPU时,通过[调试器](https://mindspore.cn/mindinsight/docs/zh-CN/master/debugger_online.html#异常现象检查列表)中的“检查张量溢出”监测点可以进行溢出检查。
发现溢出问题后,应首先找到并分析第一个出现溢出的节点(对于Ascend的溢出数据,可以按文件名中的时间戳,找时间戳最小的一个;对于GPU上的溢出,只要找执行序中最靠前的一个),结合算子的输入输出数据确定溢出原因。
@@ -477,7 +475,7 @@ MindSpore API同其它框架的API存在一定差异。有标杆脚本的情况
## 求助方式
-参考上面两种初步定位方法的任意一种进行操作。若未发现可疑点,一般说明脚本不存在明显的问题,此时请参考[精度调优建议](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/accuracy_optimization.html#id12)进行调优。若使用基于现象对比的定位方法发现了疑点,请依据定位方法中的提示判断是需要自行定位的问题还是向MindSpore求助。若使用checklist发现了疑点或问题,请参考[精度问题详细定位和调优指南](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/accuracy_optimization.html)进行详细定位。
+参考上面两种初步定位方法的任意一种进行操作。若未发现可疑点,一般说明脚本不存在明显的问题,此时请参考[精度调优建议](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/accuracy_optimization.html#常用调优建议)进行调优。若使用基于现象对比的定位方法发现了疑点,请依据定位方法中的提示判断是需要自行定位的问题还是向MindSpore求助。若使用checklist发现了疑点或问题,请参考[精度问题详细定位和调优指南](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/accuracy_optimization.html)进行详细定位。
当您遇到精度问题,要向MindSpore求助时,提供相关材料将有助于我们更好地判断和解决您的问题。建议您提供的材料包括但不限于:
diff --git a/docs/mindinsight/docs/source_zh_cn/conf.py b/docs/mindinsight/docs/source_zh_cn/conf.py
index 44cfeddef5e70f03a601123b7bb560a80a97ac00..3803ec2c356d017a2481ef45cf9201954f9f0381 100644
--- a/docs/mindinsight/docs/source_zh_cn/conf.py
+++ b/docs/mindinsight/docs/source_zh_cn/conf.py
@@ -14,7 +14,6 @@ import os
import IPython
import re
import sys
-import nbsphinx as nbs
from sphinx.ext import autodoc as sphinx_autodoc
@@ -85,20 +84,9 @@ intersphinx_mapping = {
'numpy': ('https://docs.scipy.org/doc/numpy/', '../../../../resource/numpy_objects.inv'),
}
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
# Modify default signatures for autodoc.
autodoc_source_path = os.path.abspath(sphinx_autodoc.__file__)
diff --git a/docs/mindinsight/docs/source_zh_cn/dashboard.md b/docs/mindinsight/docs/source_zh_cn/dashboard.md
index d59aac09e15db6d0f0e5fe357e584006162fd00b..c87de61821df950a7aa75b7b5ac67c204e212978 100644
--- a/docs/mindinsight/docs/source_zh_cn/dashboard.md
+++ b/docs/mindinsight/docs/source_zh_cn/dashboard.md
@@ -4,7 +4,7 @@
## 概述
-训练看板是MindInsight的可视化组件的重要组成部分,而训练看板的标签包含:标量可视化、参数分布图可视化、计算图可视化、数据图可视化、图像可视化和张量可视化等。
+训练看板是MindInsight的可视化组件的重要组成部分,而训练看板的标签包含:标量可视化、参数分布图可视化、计算图可视化、数据图可视化、图像可视化、张量可视化和优化过程可视化等。
用户从训练列表中选择指定的训练,进入训练看板。
@@ -96,7 +96,12 @@
图8展示了优化可读性功能,该功能优化了计算图的可读性,降低计算图的复杂度,图中大部分的梯度计算逻辑和优化器计算逻辑将会被移除。
-注意:为达到最清晰的计算图可视化效果,请勿使用跨cell的公共函数,并在收集计算图时设置`jit_level`为`o0`,请参考[mindspore.Model.build接口定义](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore/mindspore.Model.html?highlight=jit_config#mindspore.Model.build) 。
+注意:
+
+- 为达到最清晰的计算图可视化效果,请勿使用跨Cell的公共函数。
+- 收集计算图时设置`jit_level`为`O0`,详细请参考[mindspore.Model.build接口定义](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore/mindspore.Model.html#mindspore.Model.build) 。
+- 在计算图优化时,不同命名空间中的多个算子可能会因为功能一致而融合,这种情况会导致命名空间之间的连线成环,影响可读性。
+- 暂不支持展示完整的控制流,如需展示请在脚本中指定控制分支。
## 数据图可视化
@@ -163,6 +168,10 @@
图14将用户所记录的张量以直方图的形式进行展示。点击图中右上角,可以将图放大。
+## 优化过程可视化
+
+优化过程可视化可以将神经网络训练路径周围的优化空间展示出来,更多信息请查阅[训练优化过程可视化](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/landscape.html)。
+
## 注意事项
1. 目前MindSpore仅支持在Ascend 910 AI处理器上导出算子融合后的计算图。
diff --git a/docs/mindinsight/docs/source_zh_cn/debugger_offline.md b/docs/mindinsight/docs/source_zh_cn/debugger_offline.md
index d19c8da9be1759cb8b460889433d844a533b729b..b7a2317fe302cbbf9f96bf0f6808cbab34b0ec39 100644
--- a/docs/mindinsight/docs/source_zh_cn/debugger_offline.md
+++ b/docs/mindinsight/docs/source_zh_cn/debugger_offline.md
@@ -58,7 +58,7 @@ mindinsight start --port {PORT} --summary-base-dir {SUMMARY_BASE_DIR} --offline-
## 离线调试器页面介绍
-离线调试器界面与在线调试器大致相同。在线调试器的页面介绍详见[在线调试器页面介绍](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger_online.html#id6) 。不同的是,离线调试器会在计算图的上方显示图执行历史,并且可以重置训练轮次。
+离线调试器界面与在线调试器大致相同。在线调试器的页面介绍详见[在线调试器页面介绍](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger_online.html#调试器页面介绍) 。不同的是,离线调试器会在计算图的上方显示图执行历史,并且可以重置训练轮次。
### 图执行历史
@@ -90,9 +90,9 @@ mindinsight start --port {PORT} --summary-base-dir {SUMMARY_BASE_DIR} --offline-
此时,调试器处于加载离线数据的状态。
-2. 稍等片刻,在MindInsight UI上可以看到弹窗,提示选择是否使用推荐监测点,接下来的使用步骤与在线调试相同。[使用调试器进行调试](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger_online.html#id17) 。
+2. 稍等片刻,在MindInsight UI上可以看到弹窗,提示选择是否使用推荐监测点,接下来的使用步骤与在线调试相同,详见[使用调试器进行调试](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger_online.html#使用调试器进行调试) 。
-3. 如果需要重置训练轮次,可以参考[训练轮次重置](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger_offline.html#id7) 来重置训练轮次。每个轮次的数据保存情况可以参考[图执行历史](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger_offline.html#id6) 来查看。
+3. 如果需要重置训练轮次,可以参考[训练轮次重置](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger_offline.html#训练轮次重置) 来重置训练轮次。每个轮次的数据保存情况可以参考[图执行历史](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/debugger_offline.html#图执行历史) 来查看。
## 离线调试器API使用样例
@@ -101,22 +101,26 @@ import mindinsight.debugger as debugger
from mindinsight.debugger import DumpAnalyzer as DumpAnalyzer
from mindinsight.debugger import Watchpoint as Watchpoint
-# Init DumpAnalyzer with the dump_dir
-analyzer = DumpAnalyzer("/path/to/dump_dir")
-# Select the tensors generated by the code in 'lenet.py', line 49
-tensors = analyzer.select_tensors(query_string="/path/to/src/of/lenet.py:49", select_by="code_stack")
-# Create a watchpoint for tensors with condition TensorTooLarge, set the parameter abs_mean_gt=0.001
-watchpoint1 = Watchpoint(tensors, debugger.TensorTooLargeCondition(abs_mean_gt=0.001))
-# Create another watchpoint for tensors with condition TensorAllZero, set the parameter zero_percentage_ge=99.9
-watchpoint2 = Watchpoint(tensors, debugger.TensorAllZeroCondition(zero_percentage_ge=99.9))
-# Check the given watchpoints
-hits = analyzer.check_watchpoints([watchpoint1, watchpoint2])
-# Show the result
-for hit in hits:
- print("The hit detail is: {}".format(hit.get_hit_detail()))
- tensor = hit.tensor
- print("The hit tensor info is: iteration: {}, graph_name: {}, node_name: {}, rank: {}, slot: {}"
- .format(tensor.iteration, tensor.node.graph_name, tensor.node.name, tensor.node.name, tensor.rank, tensor.slot))
+def test_debugger_offline():
+ # Init DumpAnalyzer with the dump_dir
+ analyzer = DumpAnalyzer("/path/to/dump_dir")
+ # Select the tensors generated by the code in 'lenet.py', line 49
+ tensors = analyzer.select_tensors(query_string="/path/to/src/of/lenet.py:49", select_by="code_stack")
+ # Create a watchpoint for tensors with condition TensorTooLarge, set the parameter abs_mean_gt=0.001
+ watchpoint1 = Watchpoint(tensors, debugger.TensorTooLargeCondition(abs_mean_gt=0.001))
+ # Create another watchpoint for tensors with condition TensorAllZero, set the parameter zero_percentage_ge=99.9
+ watchpoint2 = Watchpoint(tensors, debugger.TensorAllZeroCondition(zero_percentage_ge=99.9))
+    # Check the given watchpoints. check_watchpoints starts a new process, so it must be called from the main entry point
+ hits = analyzer.check_watchpoints([watchpoint1, watchpoint2])
+ # Show the result
+ for hit in hits:
+ print("The hit detail is: {}".format(hit.get_hit_detail()))
+ tensor = hit.tensor
+        print("The hit tensor info is: iteration: {}, graph_name: {}, node_name: {}, rank: {}, slot: {}"
+              .format(tensor.iteration, tensor.node.graph_name, tensor.node.name, tensor.rank, tensor.slot))
+
+if __name__ == "__main__":
+ test_debugger_offline()
```
## 注意事项
@@ -124,7 +128,7 @@ for hit in hits:
- 场景支持:
- 离线调试器暂不支持CPU场景。
- 离线调试器支持单机多卡场景。若要分析多机多卡的场景。需要自行把多机数据汇总到一起。
- - 离线调试器暂不支持初始权重和计算过程溢出的检查。
+ - 离线调试器暂不支持初始权重的检查。
- 离线调试器暂不支持PyNative模式。
- GPU场景:
diff --git a/docs/mindinsight/docs/source_zh_cn/debugger_online.md b/docs/mindinsight/docs/source_zh_cn/debugger_online.md
index 6e4a714f4e8c1d341c174691498faf1e69bb3185..58d2623180ddd8e5027553522ec6a233b0944128 100644
--- a/docs/mindinsight/docs/source_zh_cn/debugger_online.md
+++ b/docs/mindinsight/docs/source_zh_cn/debugger_online.md
@@ -101,7 +101,6 @@ mindinsight start --port {PORT} --enable-debugger True --debugger-port {DEBUGGER
支持的条件包括(括号中为缩写):
- 检查张量
- - 检查计算过程溢出(OO):检查算子计算过程中是否存在溢出现象,仅支持昇腾AI处理器。
- 检查张量是否全为0(TZ):通过对条件参数设置阈值来检查张量的0值比例,可选参数为`0值比例>=`。
- 检查张量溢出(TO):检查张量值是否存在溢出现象。
- 检查张量值范围(TR):通过对条件参数设置阈值来检查张量值的范围,可选参数为`在范围中的值所占百分比>`、`在范围中的值所占百分比<`、`MAX-MIN>`和`MAX-MIN<`。其中在设置`在范围中的值所占百分比>`和`在范围中的值所占百分比<`时需要同时设置支持参数`范围上界(含)`和`范围下界(含)`。
@@ -249,6 +248,5 @@ mindinsight start --port {PORT} --enable-debugger True --debugger-port {DEBUGGER
- 使用调试器时要保证MindInsight和MindSpore的版本号相同。
- 重新检查只检查当前有张量值的监测点。
-- 检查计算过程溢出需要用户开启异步Dump的全部溢出检测功能,开启方式请参照[异步Dump功能介绍](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id5)
- 调试器展示的图是优化后的最终执行图。调用的算子可能已经与其它算子融合,或者在优化后改变了名称。
- 开启调试器会关闭内存复用,在训练网络过大时有可能导致'out of memory'错误。
diff --git a/docs/mindinsight/docs/source_zh_cn/hyper_parameters_auto_tuning.md b/docs/mindinsight/docs/source_zh_cn/hyper_parameters_auto_tuning.md
index 4c1c2e6e05ee0bbdc08f9846357e91e89936ab52..aa15cffb895ae2b11046cc4fdc283eab445205c2 100644
--- a/docs/mindinsight/docs/source_zh_cn/hyper_parameters_auto_tuning.md
+++ b/docs/mindinsight/docs/source_zh_cn/hyper_parameters_auto_tuning.md
@@ -189,7 +189,7 @@ optional arguments:
4. 可视化
- 基于config.yaml里面配置的summary_base_dir来启动MindInsight,启动方法可以查看[MindInsight启动命令](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/mindinsight_commands.html#id3)。
+ 基于config.yaml里面配置的summary_base_dir来启动MindInsight,启动方法可以查看[MindInsight启动命令](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/mindinsight_commands.html#启动服务)。
## 注意事项
diff --git a/docs/mindinsight/docs/source_zh_cn/index.rst b/docs/mindinsight/docs/source_zh_cn/index.rst
index f9e77aae375653514de5883f2144bd9ef4f09c63..ca7338c71f27928fe806fa1e3c1d883078c35300 100644
--- a/docs/mindinsight/docs/source_zh_cn/index.rst
+++ b/docs/mindinsight/docs/source_zh_cn/index.rst
@@ -12,8 +12,9 @@ MindInsight包括以下内容:
- `超参调优 `_
- `模型迁移 `_
-.. image:: ./images/mindinsight.png
- :width: 700px
+.. raw:: html
+
+
使用MindInsight可视化训练过程
------------------------------
@@ -22,18 +23,18 @@ MindInsight包括以下内容:
在训练脚本中使用SummaryCollector记录训练信息,再执行训练。
-2. `启动MindInsight可视化训练 `_
+2. `启动MindInsight可视化训练 `_
启动MindInsight,并通过 ``--summary-base-dir`` 参数指定summary日志文件目录。
-3. `查看训练看板 `_
+3. `查看训练看板 `_
在浏览器中打开MindInsight访问地址,点击“训练看板”按钮查看详细信息。
使用MindInsight分析模型性能
---------------------------
-1. `收集模型分析数据 `_
+1. `收集模型分析数据 `_
在训练脚本中调用MindSpore Profiler相关接口,再执行训练。
diff --git a/docs/mindinsight/docs/source_zh_cn/landscape.md b/docs/mindinsight/docs/source_zh_cn/landscape.md
index 05636f5dc168872ec3c11897b2623c7cae8d5dcc..9b6f216b55e54114a3aecc47c57f622adb64487c 100644
--- a/docs/mindinsight/docs/source_zh_cn/landscape.md
+++ b/docs/mindinsight/docs/source_zh_cn/landscape.md
@@ -162,9 +162,96 @@
2. 地形图绘制:利用训练过程中保存的模型参数,模型与数据集与训练一致,启动新的脚本,正向计算生成地形图信息,不用再次进行训练。(适用于单卡或多卡并行计算绘制地形图)
```python
+ import mindspore.dataset as ds
+ import mindspore.dataset.vision.c_transforms as CV
+ import mindspore.dataset.transforms.c_transforms as C
+ from mindspore.dataset.vision import Inter
+ from mindspore import dtype as mstype
+ import mindspore.nn as nn
+
+ from mindspore.common.initializer import Normal
+ from mindspore import Model
from mindspore.nn import Loss
from mindspore.train.callback import SummaryLandscape
+ def create_dataset(data_path, batch_size=32, repeat_size=1,
+ num_parallel_workers=1):
+ """
+ create dataset for train or test
+ """
+ # define dataset
+ mnist_ds = ds.MnistDataset(data_path, shuffle=False)
+
+ resize_height, resize_width = 32, 32
+ rescale = 1.0 / 255.0
+ shift = 0.0
+ rescale_nml = 1 / 0.3081
+ shift_nml = -1 * 0.1307 / 0.3081
+
+ # define map operations
+ resize_op = CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR) # Bilinear mode
+ rescale_nml_op = CV.Rescale(rescale_nml, shift_nml)
+ rescale_op = CV.Rescale(rescale, shift)
+ hwc2chw_op = CV.HWC2CHW()
+ type_cast_op = C.TypeCast(mstype.int32)
+
+ # apply map operations on images
+ mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns="label", num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(operations=resize_op, input_columns="image", num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(operations=rescale_op, input_columns="image", num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(operations=rescale_nml_op, input_columns="image", num_parallel_workers=num_parallel_workers)
+ mnist_ds = mnist_ds.map(operations=hwc2chw_op, input_columns="image", num_parallel_workers=num_parallel_workers)
+
+ # apply DatasetOps
+ buffer_size = 10000
+ mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size) # 10000 as in LeNet train script
+ mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
+ mnist_ds = mnist_ds.repeat(repeat_size)
+
+ return mnist_ds
+
+ class LeNet5(nn.Cell):
+ """
+ Lenet network
+
+ Args:
+ num_class (int): Number of classes. Default: 10.
+ num_channel (int): Number of channels. Default: 1.
+
+ Returns:
+ Tensor, output tensor
+ Examples:
+ >>> LeNet(num_class=10)
+
+ """
+ def __init__(self, num_class=10, num_channel=1, include_top=True):
+ super(LeNet5, self).__init__()
+ self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid', weight_init=Normal(0.02))
+ self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid', weight_init=Normal(0.02))
+ self.relu = nn.ReLU()
+ self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
+ self.include_top = include_top
+ if self.include_top:
+ self.flatten = nn.Flatten()
+ self.fc1 = nn.Dense(16 * 5 * 5, 120)
+ self.fc2 = nn.Dense(120, 84)
+ self.fc3 = nn.Dense(84, num_class)
+
+ def construct(self, x):
+ x = self.conv1(x)
+ x = self.relu(x)
+ x = self.max_pool2d(x)
+ x = self.conv2(x)
+ x = self.relu(x)
+ x = self.max_pool2d(x)
+ if not self.include_top:
+ return x
+ x = self.flatten(x)
+ x = self.relu(self.fc1(x))
+ x = self.relu(self.fc2(x))
+ x = self.fc3(x)
+ return x
+
def callback_fn():
network = LeNet5(10)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
diff --git a/docs/mindinsight/docs/source_zh_cn/lineage_and_scalars_comparison.md b/docs/mindinsight/docs/source_zh_cn/lineage_and_scalars_comparison.md
index 85fd3d0ff2ab44da2d3902c7dcd6ad97ababafa3..4604fb118547ed9a0d8f0ed5170a3010a047adf4 100644
--- a/docs/mindinsight/docs/source_zh_cn/lineage_and_scalars_comparison.md
+++ b/docs/mindinsight/docs/source_zh_cn/lineage_and_scalars_comparison.md
@@ -64,7 +64,7 @@ MindInsight中的模型溯源、数据溯源和对比看板同训练看板一样
## 对比看板
-对比看板可视用于多个训练之间的标量曲线对比。
+对比看板可用于多个训练之间的标量曲线对比以及损失函数图形对比,其中损失函数图形对比的详细信息可查阅[训练优化过程可视化](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/landscape.html)。

diff --git a/docs/mindinsight/docs/source_zh_cn/performance_profiling_ascend.md b/docs/mindinsight/docs/source_zh_cn/performance_profiling_ascend.md
index 2af5b5e4f41976ce0bc1332f6c33bc786dea4b17..529788d2c8e93a28504d996d6f1499d1c5bb57cf 100644
--- a/docs/mindinsight/docs/source_zh_cn/performance_profiling_ascend.md
+++ b/docs/mindinsight/docs/source_zh_cn/performance_profiling_ascend.md
@@ -233,6 +233,8 @@ profiler.analyse()
图6是一个桑基图,以一种树的结构展示数据,其中光标选中某个scope能看到具体的FLOPs值。
+> 该图仅绘制算子的Scope层级结构(不展示最后一层算子的具体名字),由于训练过程中各算子的层级深度不相等,可能出现相邻层级时间总和不相等的情况。
+
### 数据准备性能分析
使用数据准备性能分析组件可以对训练数据准备过程进行性能分析。数据准备过程可以分为三个阶段:数据处理pipeline、数据发送至Device以及Device侧读取训练数据。数据准备性能分析组件会对每个阶段的处理性能进行详细分析,并将分析结果进行展示。
diff --git a/docs/mindinsight/docs/source_zh_cn/performance_profiling_ascend_of_cluster.md b/docs/mindinsight/docs/source_zh_cn/performance_profiling_ascend_of_cluster.md
index 741d5bb8f1c7a45774bc8b1ec36f89e7d8c384de..e16716075fcfd34d0d5ea0e7f8965a03595791b0 100644
--- a/docs/mindinsight/docs/source_zh_cn/performance_profiling_ascend_of_cluster.md
+++ b/docs/mindinsight/docs/source_zh_cn/performance_profiling_ascend_of_cluster.md
@@ -284,6 +284,10 @@ done
pip install /usr/local/Ascend/tools/hccl_parser-{version}-py3-none-any.whl
```
+### 规格
+
+出于对数据解析性能的考虑,当前对开启集群通信性能采集后生成的文件数量进行了限制:MindSpore侧生成的原始通信性能文件(以.trace为后缀命名)数量上限为500。当通信原始数据超出该上限时,可能出现集群通信step数与集群迭代轨迹step数不一致的情况。
+
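The 500-file cap above is easy to check before analysis; a stdlib-only sketch (where the `.trace` files land is an assumption — point it at your profiler output directory):

```python
import glob
import os

TRACE_FILE_LIMIT = 500  # documented upper bound on raw communication trace files

def check_trace_count(profiler_dir):
    """Count .trace files and report whether the documented cap is reached."""
    count = len(glob.glob(os.path.join(profiler_dir, "*.trace")))
    return count, count >= TRACE_FILE_LIMIT
```

When the second element is `True`, the cluster-communication step count may no longer line up with the step-trace step count.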
## 资源利用
### 集群内存使用情况分析
diff --git a/docs/mindinsight/docs/source_zh_cn/performance_profiling_gpu.md b/docs/mindinsight/docs/source_zh_cn/performance_profiling_gpu.md
index 5e79e7b0e2c27095687ccf05029c5add84bf6e16..9e8f749abc4a72f0047fba27671b4466e81b19a9 100644
--- a/docs/mindinsight/docs/source_zh_cn/performance_profiling_gpu.md
+++ b/docs/mindinsight/docs/source_zh_cn/performance_profiling_gpu.md
@@ -30,7 +30,7 @@
- 在训练结束后,调用`Profiler.analyse`停止性能数据收集并生成性能分析结果。
-样例代码与Ascend使用方式一致,可以参考:
+样例代码与Ascend使用方式一致,可以参考:
GPU场景可自定义callback方式收集性能,但数据准备阶段、数据下沉模式不支持该方式收集性能数据。
@@ -124,13 +124,13 @@ GPU场景下,Timeline分析的使用方法和Ascend场景相同,不同之处
GPU场景下,迭代轨迹分析的使用方法和Ascend场景相同。(注意:**迭代轨迹暂不支持异构训练场景**)
-使用方法可参考:
+使用方法可参考:
### 数据准备性能分析
GPU场景下,数据准备性能分析的使用方法和Ascend场景相同。
-使用方法可参考:
+使用方法可参考:
## 资源利用
diff --git a/docs/mindinsight/docs/source_zh_cn/performance_tuning_guide.md b/docs/mindinsight/docs/source_zh_cn/performance_tuning_guide.md
index 92acdcb5f88383ea319ac5728aae02f73a047075..b5cef24f3a666e34251d774a68b63b8558282809 100644
--- a/docs/mindinsight/docs/source_zh_cn/performance_tuning_guide.md
+++ b/docs/mindinsight/docs/source_zh_cn/performance_tuning_guide.md
@@ -60,7 +60,7 @@ MindInsight在性能调优的单卡页面为用户提供了`迭代轨迹`标签
- 若用户脚本中不存在耗时的自定义逻辑,说明框架将数据从Host侧发送到Device侧耗时较长,请到[MindSpore社区](https://gitee.com/mindspore/mindspore/issues) 进行反馈。
-步骤2:跳转到`数据准备详情`页的`数据处理`标签页,观察算子间队列,确定数据处理具体哪个算子存在性能瓶颈。判断原则请见[性能调试](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/performance_profiling_ascend.html#id8) 页面的`数据处理pipeline分析`部分。找到存在性能问题的算子后,可参考[优化数据处理](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/optimize_data_processing.html) 页面尝试提高数据处理算子的性能。
+步骤2:跳转到`数据准备详情`页的`数据处理`标签页,观察算子间队列,确定数据处理具体哪个算子存在性能瓶颈。判断原则请见[性能调试](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/performance_profiling_ascend.html#数据准备性能分析) 页面的`数据处理pipeline分析`部分。找到存在性能问题的算子后,可参考[优化数据处理](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/optimize_data_processing.html) 页面尝试提高数据处理算子的性能。
#### 数据下沉模式
@@ -71,7 +71,7 @@ MindInsight在性能调优的单卡页面为用户提供了`迭代轨迹`标签
步骤2:查看主机队列Size曲线的变化情况。若该队列Size都不是0,说明训练数据从Host发往Device的流程为性能瓶颈点,请到[MindSpore社区](https://gitee.com/mindspore/mindspore/issues) 反馈;否则说明数据处理流程是性能瓶颈点,请参照步骤3继续定位数据处理哪个算子存在性能问题。
-步骤3:跳转到`数据准备详情页的数据处理标签页`观察算子间队列,确定数据处理具体哪个算子存在性能瓶颈。判断原则请见[性能调试](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/performance_profiling_ascend.html#id8) 页面的`数据处理pipeline分析`部分。找到存在性能问题的算子后,可参考[优化数据处理](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/optimize_data_processing.html) 页面尝试提高数据处理算子的性能。
+步骤3:跳转到`数据准备详情页的数据处理标签页`观察算子间队列,确定数据处理具体哪个算子存在性能瓶颈。判断原则请见[性能调试](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/performance_profiling_ascend.html#数据准备性能分析) 页面的`数据处理pipeline分析`部分。找到存在性能问题的算子后,可参考[优化数据处理](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/optimize_data_processing.html) 页面尝试提高数据处理算子的性能。
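The decision rule in steps 2 and 3 — a host queue that never drains points at host-to-device sending, one that is often empty points at the data pipeline — can be sketched as a plain helper (the function name and the 30% threshold are illustrative, not a MindInsight API):

```python
def diagnose_host_queue(sizes, empty_ratio_threshold=0.3):
    """Classify the likely bottleneck from sampled host-queue sizes."""
    if not sizes:
        raise ValueError("no queue samples")
    empty_ratio = sum(1 for s in sizes if s == 0) / len(sizes)
    if empty_ratio == 0.0:
        # Queue never drains: data is produced fast enough, sending is slow.
        return "host-to-device transfer"
    if empty_ratio >= empty_ratio_threshold:
        # Queue frequently empty: the data-processing pipeline starves the device.
        return "data pipeline"
    return "inconclusive"
```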
### 前反向耗时长
diff --git a/docs/mindinsight/docs/source_zh_cn/summary_record.md b/docs/mindinsight/docs/source_zh_cn/summary_record.md
index 1b5651a80d9dce0ee2ae02ebd0e605fa5381c41d..1a2be4143cade79a8b42698c7e720eef6070baa7 100644
--- a/docs/mindinsight/docs/source_zh_cn/summary_record.md
+++ b/docs/mindinsight/docs/source_zh_cn/summary_record.md
@@ -4,11 +4,11 @@
## 概述
-训练过程中的标量、图像、计算图以及模型超参等信息记录到文件中,通过可视化界面供用户查看。
+训练过程中的标量、图像、计算图、训练优化过程以及模型超参等信息可记录到文件中,并通过可视化界面供用户查看。
## 操作流程
-- 准备训练脚本,并在训练脚本中指定标量、图像、计算图、模型超参等信息记录到summary日志文件,接着运行训练脚本。
+- 准备训练脚本,并在训练脚本中将标量、图像、计算图、训练优化过程、模型超参等信息记录到summary日志文件,接着运行训练脚本。
- 启动MindInsight,并通过启动参数指定summary日志文件目录,启动成功后,根据IP和端口访问可视化界面,默认访问地址为 `http://127.0.0.1:8080`。
- 在训练过程中,有数据写入summary日志文件时,即可在页面中[查看训练看板中可视的数据](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/dashboard.html)。
@@ -16,7 +16,7 @@
## 准备训练脚本
-当前MindSpore支持将标量、图像、计算图、模型超参等信息保存到summary日志文件中,并通过可视化界面进行展示。计算图数据仅能在图模式下记录。
+当前MindSpore支持将标量、图像、计算图、训练优化过程、模型超参等信息保存到summary日志文件中,并通过可视化界面进行展示。计算图数据仅能在图模式下记录,训练优化过程数据收集及地形图绘制的详细流程可参考[训练优化过程可视化](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/landscape.html)。
MindSpore目前支持多种方式将数据记录到summary日志文件中。
@@ -125,6 +125,7 @@ if __name__ == '__main__':
> 1. 使用summary功能时,建议将`model.train`的`dataset_sink_mode`参数设置为`False`。请参考文末的注意事项。
> 2. 使用summary功能时,需要将代码放置到`if __name__ == "__main__"`中运行。详情请[参考Python官网介绍](https://docs.python.org/zh-cn/3.7/library/multiprocessing.html#multiprocessing-programming)。
+> 3. `dataset_path`为用户本地的训练数据集路径。
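Note 2 exists because summary recording uses subprocesses: under the `spawn` start method, the module is re-imported in each child, so process-launching code must sit behind the main guard. A MindSpore-free sketch of the pattern (the function names here are illustrative):

```python
import multiprocessing as mp

def record_summary(step, queue):
    # Stand-in for work a summary collector might offload to a subprocess.
    queue.put("step {} recorded".format(step))

def run_training(num_steps=3):
    queue = mp.Queue()
    workers = []
    for step in range(num_steps):
        worker = mp.Process(target=record_summary, args=(step, queue))
        worker.start()
        workers.append(worker)
    results = [queue.get() for _ in workers]
    for worker in workers:
        worker.join()
    return sorted(results)

if __name__ == "__main__":
    # Without this guard, 'spawn' would re-execute module-level code in children.
    run_training()
```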
### 方式二:结合Summary算子和SummaryCollector,自定义收集网络中的数据
diff --git a/docs/mindinsight/faq/requirements.txt b/docs/mindinsight/faq/requirements.txt
index 96cdfc3e0c7ee0ae6a01e59c1081111fdc792bb6..9e0a48e898dd0ef56f2f0503a66694e919562f73 100644
--- a/docs/mindinsight/faq/requirements.txt
+++ b/docs/mindinsight/faq/requirements.txt
@@ -1,5 +1,6 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
jieba
\ No newline at end of file
diff --git a/docs/mindquantum/api/requirements.txt b/docs/mindquantum/api/requirements.txt
index 2a02803179fc4f0092b3793f092fe9ffd05f6e4c..a7d57f0645d5b74878d294b407346a52e5ba3e19 100644
--- a/docs/mindquantum/api/requirements.txt
+++ b/docs/mindquantum/api/requirements.txt
@@ -1,3 +1,4 @@
sphinx >= 2.2.1, <= 2.4.4
-sphinx_rtd_theme
+docutils == 0.16
+sphinx_rtd_theme == 0.5.2
numpy
diff --git a/docs/mindquantum/docs/requirements.txt b/docs/mindquantum/docs/requirements.txt
index e96c888345b8dd6ef8151075764a7847c5982b0c..5e132d85aecbc652067257757273ac8bc8cca5ee 100644
--- a/docs/mindquantum/docs/requirements.txt
+++ b/docs/mindquantum/docs/requirements.txt
@@ -1,6 +1,7 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
jieba
numpy
diff --git a/docs/mindquantum/docs/source_en/conf.py b/docs/mindquantum/docs/source_en/conf.py
index a1cf5fa242c184ef65e79fcf34f86445de3e766d..7da1fabc96e847e57ecd3636882de42b8b91683f 100644
--- a/docs/mindquantum/docs/source_en/conf.py
+++ b/docs/mindquantum/docs/source_en/conf.py
@@ -14,7 +14,6 @@ import os
import sys
import IPython
import re
-import nbsphinx as nbs
sys.path.append(os.path.abspath('../_ext'))
import sphinx.ext.autosummary.generate as g
from sphinx.ext import autodoc as sphinx_autodoc
@@ -128,21 +127,9 @@ with open(autodoc_source_path, "r+", encoding="utf8") as f:
exec(get_param_func_str, sphinx_autodoc.__dict__)
exec(code_str, sphinx_autodoc.__dict__)
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- f.seek(0)
- f.writelines(contents)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
diff --git a/docs/mindquantum/docs/source_en/index.rst b/docs/mindquantum/docs/source_en/index.rst
index 3ba3bb48fd784ad30885c92ec3d7a0d61b3610af..e7abba12e293c223df2782c3a7fcfc299c8eb336 100644
--- a/docs/mindquantum/docs/source_en/index.rst
+++ b/docs/mindquantum/docs/source_en/index.rst
@@ -1,10 +1,11 @@
MindQuantum Documents
======================
-MindQuantum is a general quantum computing library developed by MindSpore and HiQ, that can be used to build and train different quantum neural networks. Thanks to the powerful algorithm of quantum software group of Huawei and High-performance automatic differentiation ability of MindSpore, MindQuantum can efficiently handle problems such as quantum machine learning, quantum chemistry simulation, and quantum optimization, which provides an efficient platform for researchers, teachers and students to quickly design and verify quantum machine learning algorithms.
+MindQuantum is a general quantum computing framework developed by MindSpore and HiQ that can be used to build and train different quantum neural networks. Thanks to the powerful algorithms of Huawei's quantum software group and the high-performance automatic differentiation ability of MindSpore, MindQuantum can efficiently handle problems such as quantum machine learning, quantum chemistry simulation, and quantum optimization, providing an efficient platform for researchers, teachers and students to quickly design and verify quantum machine learning algorithms.
-.. image:: ./images/mindquantum_en.png
- :width: 700px
+.. raw:: html
+
+
Typical MindQuantum Application Scenarios
------------------------------------------
diff --git a/docs/mindquantum/docs/source_zh_cn/conf.py b/docs/mindquantum/docs/source_zh_cn/conf.py
index 42234ddac9a4816d4182c0e16aac945e261fa0d2..ce058b41157b74b6c7a3dd2cc12278c99d696ecb 100644
--- a/docs/mindquantum/docs/source_zh_cn/conf.py
+++ b/docs/mindquantum/docs/source_zh_cn/conf.py
@@ -14,7 +14,6 @@ import os
import sys
import IPython
import re
-import nbsphinx as nbs
sys.path.append(os.path.abspath('../_ext'))
import sphinx.ext.autosummary.generate as g
from sphinx.ext import autodoc as sphinx_autodoc
@@ -92,21 +91,9 @@ intersphinx_mapping = {
'numpy': ('https://docs.scipy.org/doc/numpy/', '../../../../resource/numpy_objects.inv'),
}
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- f.seek(0)
- f.writelines(contents)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
# Modify regex for sphinx.ext.autosummary.generate.find_autosummary_in_lines.
gfile_abs_path = os.path.abspath(g.__file__)
diff --git a/docs/mindquantum/docs/source_zh_cn/images/quantum_phase_estimation.png b/docs/mindquantum/docs/source_zh_cn/images/quantum_phase_estimation.png
index 44d90c301ae3005c7930b0e47053be0dad3f6cd6..778c4d03379dd6b70182e7a84f6e948d60ea25ad 100644
Binary files a/docs/mindquantum/docs/source_zh_cn/images/quantum_phase_estimation.png and b/docs/mindquantum/docs/source_zh_cn/images/quantum_phase_estimation.png differ
diff --git a/docs/mindquantum/docs/source_zh_cn/index.rst b/docs/mindquantum/docs/source_zh_cn/index.rst
index 6c2717e28512813ddf0bcf683f920734304f1f6c..c58757c927a60bbc11d5862d04c03bade2813d0b 100644
--- a/docs/mindquantum/docs/source_zh_cn/index.rst
+++ b/docs/mindquantum/docs/source_zh_cn/index.rst
@@ -1,10 +1,11 @@
MindQuantum文档
=========================
-MindQuantum是基于昇思MindSpore开源深度学习框架和HiQ量子计算云平台开发的通用量子计算库,支持多种量子神经网络的训练和推理。得益于华为HiQ团队的量子计算模拟器和昇思MindSpore高性能自动微分能力,MindQuantum能够高效处理量子机器学习、量子化学模拟和量子优化等问题,为广大的科研人员、老师和学生提供快速设计和验证量子机器学习算法的高效平台。
+MindQuantum是基于昇思MindSpore开源深度学习框架和HiQ量子计算云平台开发的通用量子计算框架,支持多种量子神经网络的训练和推理。得益于华为HiQ团队的量子计算模拟器和昇思MindSpore高性能自动微分能力,MindQuantum能够高效处理量子机器学习、量子化学模拟和量子优化等问题,为广大的科研人员、老师和学生提供快速设计和验证量子机器学习算法的高效平台。
-.. image:: ./images/mindquantum_cn.png
- :width: 700px
+.. raw:: html
+
+
使用MindQuantum的典型场景
------------------------------
diff --git a/docs/mindquantum/docs/source_zh_cn/quantum_phase_estimation.ipynb b/docs/mindquantum/docs/source_zh_cn/quantum_phase_estimation.ipynb
index 7d606bdf7879d154c54dfc8a637ebd5e07708237..09a074ebec2a4b75e263dcccaf8e22e77e50af39 100644
--- a/docs/mindquantum/docs/source_zh_cn/quantum_phase_estimation.ipynb
+++ b/docs/mindquantum/docs/source_zh_cn/quantum_phase_estimation.ipynb
@@ -26,15 +26,15 @@
"source": [
"## 算法解析\n",
"\n",
- "相位估计算法的实现需要两个寄存器(register),第一个寄存器包含$t$个初始在 $|0\\rangle$ 的量子比特,比特数和最后相位估计的结果的精度和算法的成功概率相关;第二个寄存器初始化在幺正算符 $U$ 的本征态 $|u\\rangle$ 上。相位估计算法主要分为三步:\n",
+    "量子相位估计算法的实现需要两个寄存器(register),第一寄存器包含$t$个初始在 $|0\\rangle$ 的量子比特,比特数与相位估计结果的精度和算法的成功概率相关;第二个寄存器初始化在幺正算符 $U$ 的本征态 $|u\\rangle$ 上。相位估计算法主要分为三步:\n",
"\n",
- "1. 对第一个寄存器的所有量子比特进行``Hadamard``门操作,对第二寄存器连续进行``控制U``门操作,其中U门的幂次依次为 $2^0, 2^1,...,2^{t-1}$,控制比特依次为 $q_{t-1}, q_{t-2},..., q_{1}, q_{0}$。这时第一寄存器中的态就会变为\n",
+ "1. 对第一寄存器的所有量子比特进行 `Hadamard` 门操作,对第二寄存器连续进行 `控制U` 门操作,其中 $U$ 门的幂次依次为 $2^0, 2^1,...,2^{t-1}$,控制比特依次为 $q_{t-1}, q_{t-2},..., q_{1}, q_{0}$。这时第一寄存器中的态就会变为\n",
"\n",
"$$\n",
"|\\psi_1\\rangle=\\frac{1}{2^{t/2}}\\left(|0\\rangle+e^{i2\\pi 2^{t-1}\\varphi}|1\\rangle\\right)\\left(|0\\rangle+e^{i2\\pi2^{t-2}\\varphi}|1\\rangle\\right)...\\left(|0\\rangle+e^{i2\\pi 2^{0}\\varphi}|1\\rangle\\right) = \\frac{1}{2^{t/2}}\\sum_{k=0}^{2^t-1}e^{i2\\pi\\varphi k}|k\\rangle\n",
"$$\n",
"\n",
- "其中k为直积态的十进制表示,比如 $k=0$ 表示第一寄存器中t个比特全部在基态 $|00...00\\rangle$, $k=2$ 表示 $|00...10\\rangle$,以此类推。\n",
+    "其中 $k$ 为直积态的十进制表示,比如 $k=0$ 表示第一寄存器中 $t$ 个比特全部在基态 $|00...00\\rangle$, $k=2$ 表示 $|00...10\\rangle$,以此类推。\n",
"\n",
"2. 对第一寄存器的进行量子傅里叶变换的逆变换(Inverse Quantum Fourier Transform),在线路中表示成 $QFT^\\dagger$, 对 $|\\psi_1\\rangle$ 进行逆量子傅里叶变换可得 $|\\psi_2\\rangle$\n",
"\n",
@@ -50,11 +50,11 @@
"\n",
"为本征基矢 $|x\\rangle$ ($x=0.1,...,2^t$) 对应的概率幅 。由上式可得,当 $2^t\\varphi$ 为整数,且满足 $x=2^t\\varphi$ 时,概率幅取最大值1,此时第一寄存器的末态可以精确反映 $\\varphi$;当 $2^t\\varphi$ 不是整数时,$x$ 为 $\\varphi$ 的估计,且$t$越大,估计精度越高。\n",
"\n",
- "3. 对第一寄存器的量子比特进行测量,得到第一寄存器的末态 $f=\\sum_{x}^{2^t-1}a_x|x\\rangle$, $x=0,1,...,2^t$;从中找到最大的概率幅 $a_{max}$,其对应的本征基矢 $|x\\rangle$ 中的 $x$ 在除以 $2^t$ 即为相位的估计值。\n",
+    "3. 对第一寄存器的量子比特进行测量,得到第一寄存器的末态 $f=\\sum_{x=0}^{2^t-1}a_x|x\\rangle$, $x=0,1,...,2^t-1$,从中找到最大的振幅 $a_{max}$,其对应的本征基矢 $|x\\rangle$ 中的 $x$ 再除以 $2^t$ 即为相位的估计值。\n",
"\n",
"## QPE代码实现\n",
"\n",
- "下面用一个实例来演示如何在MindQuantum实现相位估计算法,选择 ``T``门作为进行估计的幺正算符,由定义\n",
+ "下面用一个实例来演示如何在MindQuantum实现量子相位估计算法,选择 `T` 门作为进行估计的幺正算符,由定义\n",
"\n",
"$$\n",
"T|1\\rangle=e^{i\\pi/4}|1\\rangle\n",
@@ -62,6 +62,8 @@
"\n",
"可知需要估计的相位角为 $\\varphi=\\frac{1}{8}$。\n",
"\n",
+ "现在假设我们不知道 `T` 门的相位信息,只知道幺正算符 $U$ 是 `T` 门且本征态为 $|1\\rangle$ ,接下来我们需要用量子相位估计算法求出其对应的本征值,即需要估计本征值指数上的相位角。\n",
+ "\n",
"首先导入相关依赖。"
]
},
@@ -71,9 +73,9 @@
"metadata": {},
"outputs": [],
"source": [
- "from mindquantum import Circuit\n",
- "from mindquantum import Simulator\n",
- "from mindquantum import UN, PhaseShift, qft, H, X, BARRIER\n",
+ "from mindquantum.core import Circuit, UN, T, H, X, Power, BARRIER\n",
+ "from mindquantum.simulator import Simulator\n",
+ "from mindquantum.algorithm import qft\n",
"import numpy as np"
]
},
@@ -81,9 +83,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "``Mindquantum.UN`` 可以指定量子门,目标比特和控制比特,从而在线路中搭建门操作。因为我们已知 $\\varphi=1/8$,当 $t=3$ 时可令 $2^t\\varphi$ 为整数,所以第一寄存器只需要3个比特即可准确估计;又已知 ``T`` 门的本征态为 $|1\\rangle$,所以第二寄存器选择一个比特,即:我们需要搭建4比特线路,前 $q_0, q_1, q_2$ 比特用于估计,属于第一寄存器;$q_3$ 属于第二寄存器用于传入 $T$ 算符的本征态。\n",
+    "`UN` 可以指定量子门、目标比特和控制比特,从而在线路中搭建门操作;`Power` 可以得到指定量子门的指数形式。因为我们已知 `T` 门的本征态为 $|1\\rangle$,所以第二寄存器只需1个比特,而第一寄存器中的比特数越多,得到的结果就越准确,在这里我们使用4个比特。\n",
+ "\n",
+ "因此我们需要搭建5比特线路, $q_0, q_1, q_2, q_3$ 比特用于估计,属于第一寄存器, $q_4$ 属于第二寄存器用于传入 $T$ 算符的本征态。\n",
"\n",
- "利用 ``UN`` 对 $q_0, q_1, q_2$ 进行 ``Hadamard`` 门操作, 用 ``X`` 门对 $q_3$ 进行翻转,得到 ``T`` 门的本征态 $|1\\rangle$。"
+ "利用 `UN` 对 $q_0, q_1, q_2, q_3$ 进行 `Hadamard` 门操作, 用 `X` 门对 $q_4$ 进行翻转,得到 `T` 门的本征态 $|1\\rangle$。"
]
},
{
@@ -92,56 +96,37 @@
"metadata": {},
"outputs": [
{
+ "output_type": "display_data",
"data": {
- "text/html": [
- "\n"
- ],
- "text/plain": []
+ "text/plain": "",
+ "text/html": "\n"
},
- "metadata": {},
- "output_type": "display_data"
+ "metadata": {}
},
{
+ "output_type": "execute_result",
"data": {
- "text/html": [
- "q0: ──H──\n",
- " \n",
- "q1: ──H──\n",
- " \n",
- "q2: ──H──\n",
- " \n",
- "q3: ──X──\n",
- "
\n"
- ],
- "text/plain": [
- "q0: ──H──\n",
- " \n",
- "q1: ──H──\n",
- " \n",
- "q2: ──H──\n",
- " \n",
- "q3: ──X──"
- ]
+ "text/plain": "q0: ──H──\n\nq1: ──H──\n\nq2: ──H──\n\nq3: ──H──\n\nq4: ──X──",
+ "text/html": "q0: ──H──\n\nq1: ──H──\n\nq2: ──H──\n\nq3: ──H──\n\nq4: ──X──\n
\n"
},
- "execution_count": 2,
"metadata": {},
- "output_type": "execute_result"
+ "execution_count": 2
}
],
"source": [
"# pylint: disable=W0104\n",
- "n = 3\n",
- "c = Circuit()\n",
- "c += UN(H, n)\n",
- "c += X.on(n)\n",
- "c"
+ "n = 4\n",
+ "circ = Circuit()\n",
+    "circ += UN(H, n) # 对前4个比特作用H门\n",
+ "circ += X.on(n) # 对q4作用X门\n",
+ "circ"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "以 $q_3$ 为目标比特,添加 ``控制PhaseShift``门。"
+ "以 $q_4$ 为目标比特,添加控制$T^{2^i}$门。"
]
},
{
@@ -150,54 +135,35 @@
"metadata": {},
"outputs": [
{
+ "output_type": "display_data",
"data": {
- "text/html": [
- "\n"
- ],
- "text/plain": []
+ "text/plain": "",
+ "text/html": "\n"
},
- "metadata": {},
- "output_type": "display_data"
+ "metadata": {}
},
{
+ "output_type": "execute_result",
"data": {
- "text/html": [
- "q0: ──H────────────────────────────────●──────\n",
- " │ \n",
- "q1: ──H───────────────────●────────────┼──────\n",
- " │ │ \n",
- "q2: ──H───────●───────────┼────────────┼──────\n",
- " │ │ │ \n",
- "q3: ──X────PS(phi)────PS(2*phi)────PS(4*phi)──\n",
- "
\n"
- ],
- "text/plain": [
- "q0: ──H────────────────────────────────●──────\n",
- " │ \n",
- "q1: ──H───────────────────●────────────┼──────\n",
- " │ │ \n",
- "q2: ──H───────●───────────┼────────────┼──────\n",
- " │ │ │ \n",
- "q3: ──X────PS(phi)────PS(2*phi)────PS(4*phi)──"
- ]
+ "text/plain": "q0: ──H──────────────────────────●───\n │\nq1: ──H───────────────────●──────┼───\n │ │\nq2: ──H────────────●──────┼──────┼───\n │ │ │\nq3: ──H─────●──────┼──────┼──────┼───\n │ │ │ │\nq4: ──X────T^1────T^2────T^4────T^8──",
+ "text/html": "q0: ──H──────────────────────────●───\n │\nq1: ──H───────────────────●──────┼───\n │ │\nq2: ──H────────────●──────┼──────┼───\n │ │ │\nq3: ──H─────●──────┼──────┼──────┼───\n │ │ │ │\nq4: ──X────T^1────T^2────T^4────T^8──\n
\n"
},
- "execution_count": 3,
"metadata": {},
- "output_type": "execute_result"
+ "execution_count": 3
}
],
"source": [
"# pylint: disable=W0104\n",
"for i in range(n):\n",
- " c += PhaseShift({'phi': 2**i}).on(n, n-i-1)\n",
- "c"
+ " circ += Power(T, 2**i).on(n, n - i - 1) # 添加T^2^i门,其中q4为目标比特,n-i-1为控制比特\n",
+ "circ"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "对第一寄存器比特进行逆傅里叶变换。"
+ "对第一寄存器中的比特进行逆量子傅里叶变换。"
]
},
{
@@ -206,112 +172,69 @@
"metadata": {},
"outputs": [
{
+ "output_type": "display_data",
"data": {
- "text/html": [
- "\n"
- ],
- "text/plain": []
+ "text/plain": "",
+ "text/html": "\n"
},
- "metadata": {},
- "output_type": "display_data"
+ "metadata": {}
},
{
+ "output_type": "execute_result",
"data": {
- "text/html": [
- "q0: ──H────────────────────────────────●────────@──────────────────────────PS(-π/4)────PS(-π/2)────H──\n",
- " │ │ │ │ \n",
- "q1: ──H───────────────────●────────────┼────────┼─────────PS(-π/2)────H───────┼───────────●───────────\n",
- " │ │ │ │ │ \n",
- "q2: ──H───────●───────────┼────────────┼────────@────H───────●────────────────●───────────────────────\n",
- " │ │ │ \n",
- "q3: ──X────PS(phi)────PS(2*phi)────PS(4*phi)──────────────────────────────────────────────────────────\n",
- "
\n"
- ],
- "text/plain": [
- "q0: ──H────────────────────────────────●────────@──────────────────────────PS(-π/4)────PS(-π/2)────H──\n",
- " │ │ │ │ \n",
- "q1: ──H───────────────────●────────────┼────────┼─────────PS(-π/2)────H───────┼───────────●───────────\n",
- " │ │ │ │ │ \n",
- "q2: ──H───────●───────────┼────────────┼────────@────H───────●────────────────●───────────────────────\n",
- " │ │ │ \n",
- "q3: ──X────PS(phi)────PS(2*phi)────PS(4*phi)──────────────────────────────────────────────────────────"
- ]
+ "text/plain": "q0: ──H──────────────────────────●──────────@───────────────────────────────────────────────────────PS(-π/8)────PS(-π/4)────PS(-π/2)────H──\n │ │ │ │ │\nq1: ──H───────────────────●──────┼─────@────┼──────────────────────────PS(-π/4)────PS(-π/2)────H───────┼───────────┼───────────●───────────\n │ │ │ │ │ │ │ │\nq2: ──H────────────●──────┼──────┼─────@────┼─────────PS(-π/2)────H───────┼───────────●────────────────┼───────────●───────────────────────\n │ │ │ │ │ │ │\nq3: ──H─────●──────┼──────┼──────┼──────────@────H───────●────────────────●────────────────────────────●───────────────────────────────────\n │ │ │ │\nq4: ──X────T^1────T^2────T^4────T^8────────────────────────────────────────────────────────────────────────────────────────────────────────",
+ "text/html": "q0: ──H──────────────────────────●──────────@───────────────────────────────────────────────────────PS(-π/8)────PS(-π/4)────PS(-π/2)────H──\n │ │ │ │ │\nq1: ──H───────────────────●──────┼─────@────┼──────────────────────────PS(-π/4)────PS(-π/2)────H───────┼───────────┼───────────●───────────\n │ │ │ │ │ │ │ │\nq2: ──H────────────●──────┼──────┼─────@────┼─────────PS(-π/2)────H───────┼───────────●────────────────┼───────────●───────────────────────\n │ │ │ │ │ │ │\nq3: ──H─────●──────┼──────┼──────┼──────────@────H───────●────────────────●────────────────────────────●───────────────────────────────────\n │ │ │ │\nq4: ──X────T^1────T^2────T^4────T^8────────────────────────────────────────────────────────────────────────────────────────────────────────\n
\n"
},
- "execution_count": 4,
"metadata": {},
- "output_type": "execute_result"
+ "execution_count": 4
}
],
"source": [
"# pylint: disable=W0104\n",
- "c += BARRIER\n",
- "c += qft(range(n)).hermitian()\n",
- "c"
+ "circ += BARRIER\n",
+ "circ += qft(range(n)).hermitian() # 对前4个比特作用量子傅立叶变换的逆变换\n",
+ "circ"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "选择后端、传入总比特数创建模拟器,将 $\\varphi$ 值传入并进行演化,得到末态。"
+ "选择后端、传入总比特数创建模拟器,对量子线路进行演化,得到末态。"
]
},
{
"cell_type": "code",
"execution_count": 5,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "1¦1100⟩\n"
- ]
- },
- {
+ "output_type": "display_data",
"data": {
- "text/html": [
- "\n"
- ],
- "text/plain": []
+ "text/plain": "",
+ "text/html": "\n"
},
- "metadata": {},
- "output_type": "display_data"
+ "metadata": {}
},
{
+ "output_type": "execute_result",
"data": {
- "text/html": [
- "shots: 100\n",
- "Keys: q3 q2 q1 q0│0.00 0.2 0.4 0.6 0.8 1.0\n",
- "─────────────────┼───────────┴───────────┴───────────┴───────────┴───────────┴\n",
- " 1100│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓\n",
- " │ \n",
- "{'1100': 100}\n",
- "
\n"
- ],
- "text/plain": [
- "shots: 100\n",
- "Keys: q3 q2 q1 q0│0.00 0.2 0.4 0.6 0.8 1.0\n",
- "─────────────────┼───────────┴───────────┴───────────┴───────────┴───────────┴\n",
- " 1100│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓\n",
- " │ \n",
- "{'1100': 100}"
- ]
+ "text/plain": "shots: 100\nKeys: q3 q2 q1 q0│0.00 0.2 0.4 0.6 0.8 1.0\n─────────────────┼───────────┴───────────┴───────────┴───────────┴───────────┴\n 0100│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓\n │\n{'0100': 100}",
+ "text/html": "shots: 100\nKeys: q3 q2 q1 q0│0.00 0.2 0.4 0.6 0.8 1.0\n─────────────────┼───────────┴───────────┴───────────┴───────────┴───────────┴\n 0100│▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓\n │\n{'0100': 100}\n
\n"
},
- "execution_count": 5,
"metadata": {},
- "output_type": "execute_result"
+ "execution_count": 5
}
],
"source": [
"# pylint: disable=W0104\n",
"from mindquantum import Measure\n",
- "sim = Simulator('projectq', c.n_qubits)\n",
- "phi = 0.125\n",
- "sim.apply_circuit(c, {'phi': 2*np.pi*phi})\n",
- "qs = sim.get_qs()\n",
- "print(sim.get_qs(ket=True))\n",
- "res = sim.sampling(UN(Measure(), c.n_qubits), shots=100)\n",
+ "sim = Simulator('projectq', circ.n_qubits) # 创建模拟器\n",
+ "sim.apply_circuit(circ) # 用模拟器演化线路\n",
+ "qs = sim.get_qs() # 获得演化得到的量子态\n",
+ "res = sim.sampling(UN(Measure(), circ.n_qubits - 1), shots=100) # 在寄存器1中加入测量门并对线路进行100次采样,获得统计结果\n",
"res"
]
},
@@ -319,49 +242,61 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "找出概率幅最大值的位置。"
+ "需要注意的是,测量结果作为二进制串的读取顺序应为$|q_0q_1q_2q_3\\rangle$,因此我们得到寄存器1的测量结果为`0010`,概率幅为1,该末态可以精准地反映相位$\\varphi$。但`0010`是二进制结果,因此我们将它转回十进制后再除以$2^n$,就得到了我们最终的估计值:$\\varphi=\\frac{2}{2^4}=\\frac{1}{8}$。\n",
+ "\n",
+ "我们也可以通过线路演化得到的量子态 `qs` 找出第一寄存器中振幅最大值 $a_{max}$ 的位置,进而得到其对应的本征基矢 $|x\\rangle$ ,其中的 $x$ 再除以 $2^t$ 即为相位的估计值。"
]
},
{
"cell_type": "code",
"execution_count": 6,
- "metadata": {},
"outputs": [
{
- "name": "stdout",
"output_type": "stream",
- "text": [
- "12\n"
- ]
+ "name": "stdout",
+ "text": "10100\n"
}
],
"source": [
"index = np.argmax(np.abs(qs))\n",
- "print(index)"
- ]
+ "print(bin(index)[2:])"
+ ],
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%%\n"
+ },
+ "tags": []
+ }
},
{
"cell_type": "markdown",
- "metadata": {},
"source": [
- "注意此时的 ``index`` 对应的 $x$ 并不是真正的估计值,被 $2^t$ 除之后也不是,因为测量结果中包括第二寄存器中的辅助比特,需要将``index``转成二进制后将辅助位剔除。"
- ]
+ "需要注意的是,`qs` 对应的是整个量子线路的末态,因此得到的 ``index`` 也包含第二寄存器中的比特,不能直接得到第一寄存器末态中 $a_{max}$ 对应的 $|x\\rangle$ ,需要将 ``index`` 转成二进制后将 $q4$ 对应的比特位剔除,然后得到的才是第一寄存器的 $|x\\rangle$ 。"
+ ],
+ "metadata": {
+ "collapsed": false,
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ }
},
{
"cell_type": "code",
"execution_count": 7,
- "metadata": {},
+ "metadata": {
+ "tags": []
+ },
"outputs": [
{
- "name": "stdout",
"output_type": "stream",
- "text": [
- "100\n"
- ]
+ "name": "stdout",
+ "text": "0010\n"
}
],
"source": [
- "bit_string = bin(index)[2:].zfill(c.n_qubits)[1:]\n",
+ "bit_string = bin(index)[2:].zfill(circ.n_qubits)[1:] # 将index转换成01串并剔除q4\n",
+ "bit_string = bit_string[::-1] # 将比特串顺序调整为q0q1q2q3\n",
"print(bit_string)"
]
},
@@ -369,7 +304,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "在将二进制转回十进制,得到我们最终的估计值。"
+ "再将二进制转回十进制,得到我们最终的估计值。"
]
},
{
@@ -378,19 +313,17 @@
"metadata": {},
"outputs": [
{
+ "output_type": "execute_result",
"data": {
- "text/plain": [
- "0.125"
- ]
+ "text/plain": "0.125"
},
- "execution_count": 8,
"metadata": {},
- "output_type": "execute_result"
+ "execution_count": 8
}
],
"source": [
"# pylint: disable=W0104\n",
- "theta_exp = int(bit_string[::-1], 2) / 2**n\n",
+ "theta_exp = int(bit_string, 2) / 2**n\n",
"theta_exp"
]
},
@@ -430,9 +363,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.8.8"
+ "version": "3.7.5-final"
}
},
"nbformat": 4,
"nbformat_minor": 4
-}
+}
\ No newline at end of file
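The notebook hunks above rewire the tutorial from a 3+1 to a 4+1 qubit phase-estimation circuit: Hadamards on the counting register, controlled $T^{2^i}$ gates, an inverse QFT, then reading off $x/2^t$. The numerical core of that flow can be sanity-checked without MindQuantum using plain numpy — a minimal statevector sketch of the counting register (not part of this patch, and ignoring the eigenstate qubit, which factors out):

```python
import numpy as np

def qpe_estimate(phi, t=4):
    """Phase-estimation sketch for U|1> = exp(2*pi*i*phi)|1> with t counting qubits."""
    n = 2 ** t
    # After the Hadamards and the controlled-U^(2^k) layer, the counting
    # register holds (1/sqrt(n)) * sum_x exp(2*pi*i*phi*x) |x>.
    state = np.exp(2j * np.pi * phi * np.arange(n)) / np.sqrt(n)
    # The inverse QFT is, up to qubit reordering, an inverse DFT; np.fft.fft
    # applies the matching exp(-2*pi*i*x*y/n) kernel.
    amps = np.fft.fft(state) / np.sqrt(n)
    # The most probable measurement outcome, divided by 2^t, estimates phi.
    return int(np.argmax(np.abs(amps))) / n

print(qpe_estimate(1 / 8))  # T gate: phi = 1/8 is exactly representable -> 0.125
```

Because $\varphi=1/8$ fits exactly in 4 bits, the estimate is exact, matching the `0.125` result in the notebook; for a phase like $1/3$ the sketch returns the nearest 4-bit fraction instead.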
diff --git a/docs/mindscience/api/requirements.txt b/docs/mindscience/api/requirements.txt
index 2a02803179fc4f0092b3793f092fe9ffd05f6e4c..a7d57f0645d5b74878d294b407346a52e5ba3e19 100644
--- a/docs/mindscience/api/requirements.txt
+++ b/docs/mindscience/api/requirements.txt
@@ -1,3 +1,4 @@
sphinx >= 2.2.1, <= 2.4.4
-sphinx_rtd_theme
+docutils == 0.16
+sphinx_rtd_theme == 0.5.2
numpy
diff --git a/docs/mindscience/docs/requirements.txt b/docs/mindscience/docs/requirements.txt
index 6d8cd70439820e16bc32c4abc93e948ba81dc01b..49a77fdec3a5c745edd40eaa223883c31500e975 100644
--- a/docs/mindscience/docs/requirements.txt
+++ b/docs/mindscience/docs/requirements.txt
@@ -1,7 +1,8 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
numpy
nbsphinx
IPython
diff --git a/docs/mindscience/docs/source_en/conf.py b/docs/mindscience/docs/source_en/conf.py
index 55fccca51583a10a8efe916257ae9f28889a7b81..fcd8da2832e6a64de60e299c5c68c94fa8b88c9c 100644
--- a/docs/mindscience/docs/source_en/conf.py
+++ b/docs/mindscience/docs/source_en/conf.py
@@ -14,7 +14,6 @@ import os
import sys
import IPython
import re
-import nbsphinx as nbs
import sphinx
sys.path.append(os.path.abspath('../_ext'))
import sphinx.ext.autosummary.generate as g
@@ -147,20 +146,9 @@ import mindspore
import mindelec
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
diff --git a/docs/mindscience/docs/source_en/index.rst b/docs/mindscience/docs/source_en/index.rst
index ff9d95803f07a0c80eab44b34026ba0cde5d2069..d84d76980025d2f207a01986353d178b9ef6367c 100644
--- a/docs/mindscience/docs/source_en/index.rst
+++ b/docs/mindscience/docs/source_en/index.rst
@@ -3,8 +3,9 @@ MindScience Documents
MindScience is scientific computing kits for various industries based on the converged MindSpore framework. It contains the industry-leading datasets, basic network structures, high-precision pre-trained models, and pre- and post-processing tools, accelerating the development of scientific computing applications. Currently, the MindElec kit for the electronic information industry and the MindSPONGE kit for the life science industry have been launched, improving the electromagnetic simulation performance by 10 times and the simulation efficiency of biopharmaceutical compounds by 50%.
-.. image:: ./mindelec/images/mindscience_en.png
- :width: 700px
+.. raw:: html
+
+
Typical MindScience Application Scenarios
------------------------------------------
diff --git a/docs/mindscience/docs/source_zh_cn/conf.py b/docs/mindscience/docs/source_zh_cn/conf.py
index 34a1649d1988c5fb973e784125f1047e1ada5a47..c45cf08602ea8f2c4b2c89f52286c54a4b166516 100644
--- a/docs/mindscience/docs/source_zh_cn/conf.py
+++ b/docs/mindscience/docs/source_zh_cn/conf.py
@@ -14,7 +14,6 @@ import os
import sys
import IPython
import re
-import nbsphinx as nbs
import sphinx
sys.path.append(os.path.abspath('../_ext'))
import sphinx.ext.autosummary.generate as g
@@ -81,20 +80,9 @@ html_search_language = 'zh'
html_search_options = {'dict': '../../../resource/jieba.txt'}
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
diff --git a/docs/mindscience/docs/source_zh_cn/index.rst b/docs/mindscience/docs/source_zh_cn/index.rst
index 2a9fe6f600cdd8de5ca4b354f9f240e6e396477f..1bcc3cfc059f892154ff2ab665ef9bca2cd4daf9 100644
--- a/docs/mindscience/docs/source_zh_cn/index.rst
+++ b/docs/mindscience/docs/source_zh_cn/index.rst
@@ -3,8 +3,9 @@ MindScience 文档
MindScience是基于昇思MindSpore融合架构打造的科学计算行业套件,包含了业界领先的数据集、基础模型、预置高精度模型和前后处理工具,加速了科学行业应用开发。目前已推出面向电子信息行业的MindElec套件和面向生命科学行业的MindSPONGE套件,分别实现了电磁仿真性能提升10倍和生物制药化合物模拟效率提升50%。
-.. image:: ./mindelec/images/mindscience_cn.png
- :width: 700px
+.. raw:: html
+
+
使用MindScience的典型场景
------------------------------
diff --git a/docs/mindspore/api/Makefile b/docs/mindspore/api/Makefile
index 954652e2b70c041797a5444f6b9e5f07aeccda64..cd8d6bfa74c2ebc5218220cc76910c55aa3abeec 100644
--- a/docs/mindspore/api/Makefile
+++ b/docs/mindspore/api/Makefile
@@ -24,4 +24,4 @@ EXTRADIR = $(SOURCEDIR)/api_python
.PHONY: clean
clean:
- -rm -rf $(BUILDDIR)/* $(EXTRADIR)/ops $(EXTRADIR)/nn $(EXTRADIR)/dataset $(EXTRADIR)/dataset_vision $(EXTRADIR)/dataset_transforms $(EXTRADIR)/dataset_text $(EXTRADIR)/text $(EXTRADIR)/numpy $(EXTRADIR)/nn_probability $(EXTRADIR)/mindspore $(EXTRADIR)/scipy
+ -rm -rf $(BUILDDIR)/* $(EXTRADIR)/ops $(EXTRADIR)/nn $(EXTRADIR)/dataset $(EXTRADIR)/dataset_vision $(EXTRADIR)/dataset_audio $(EXTRADIR)/dataset_transforms $(EXTRADIR)/dataset_text $(EXTRADIR)/text $(EXTRADIR)/numpy $(EXTRADIR)/nn_probability $(EXTRADIR)/mindspore $(EXTRADIR)/scipy
diff --git a/docs/mindspore/api/requirements.txt b/docs/mindspore/api/requirements.txt
index f2424ce6f08f8ce69b2f1bb181bc09da2641d35d..1e7189447f37b0e2e1c98a2ac5e82da36756a493 100644
--- a/docs/mindspore/api/requirements.txt
+++ b/docs/mindspore/api/requirements.txt
@@ -1,4 +1,5 @@
sphinx >= 2.2.1, <= 2.4.4
-sphinx_rtd_theme
+docutils == 0.16
+sphinx_rtd_theme == 0.5.2
numpy
opencv-python
\ No newline at end of file
diff --git a/docs/mindspore/api/source_en/api_python/mindspore.scipy.rst b/docs/mindspore/api/source_en/api_python/mindspore.scipy.rst
index 20ccb2229974d71bd5f532094de4d42c89ceb5d9..c48b343c4c2efdc04a9dab53b289cbbebe729f43 100644
--- a/docs/mindspore/api/source_en/api_python/mindspore.scipy.rst
+++ b/docs/mindspore/api/source_en/api_python/mindspore.scipy.rst
@@ -14,10 +14,14 @@ mindspore.scipy.linalg
:template: classtemplate_inherited.rst
mindspore.scipy.linalg.block_diag
+ mindspore.scipy.linalg.cho_factor
+ mindspore.scipy.linalg.cholesky
+ mindspore.scipy.linalg.cho_solve
mindspore.scipy.linalg.eigh
mindspore.scipy.linalg.inv
mindspore.scipy.linalg.lu
mindspore.scipy.linalg.lu_factor
+ mindspore.scipy.linalg.solve_triangular
mindspore.scipy.optimize
------------------------
diff --git a/docs/mindspore/api/source_en/conf.py b/docs/mindspore/api/source_en/conf.py
index 7d274628392abe4406838b903ef309a3c0de1f61..4b1778e12f97bf6d871fc067ff90e61bd466e512 100644
--- a/docs/mindspore/api/source_en/conf.py
+++ b/docs/mindspore/api/source_en/conf.py
@@ -150,29 +150,38 @@ import mindspore
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
-# Copy images from mindspore repository to sphinx workdir before running.
-import glob
+# Copy images from mindspore repo.
+import imghdr
import shutil
from sphinx.util import logging
-logger = logging.getLogger(__name__)
-
-image_specified = {"docs/api_img/*.png": "./api_python/ops/api_img",
- "docs/api_img/dataset/*.png": "./api_python/dataset/api_img"}
-for img in image_specified.keys():
- des_dir = os.path.normpath(image_specified[img])
- try:
- if "*" in img:
- imgs = glob.glob(os.path.join(os.getenv("MS_PATH"), os.path.normpath(img)))
- if not imgs:
- continue
- if not os.path.exists(des_dir):
- os.makedirs(des_dir)
- for i in imgs:
- shutil.copy(i, des_dir)
- else:
- img_fullpath = os.path.join(os.getenv("MS_PATH"), des_dir)
- if os.path.exists(img_fullpath):
- shutil.copy(img_fullpath, des_dir)
- except:
- logger.warning(f"{img} deal failed!")
+logger = logging.getLogger(__name__)
+src_dir = os.path.join(os.getenv("MS_PATH"), 'docs/api/api_python')
+des_dir = "./api_python"
+image_specified = {"train/": ""}
+
+if not os.path.exists(src_dir):
+ logger.warning(f"不存在目录:{src_dir}!")
+
+def copy_image(sourcedir, des_dir):
+ """
+ Copy all images from sourcedir to workdir.
+ """
+ for cur, _, files in os.walk(sourcedir, topdown=True):
+ for i in files:
+ if imghdr.what(os.path.join(cur, i)):
+ try:
+ rel_path = os.path.relpath(cur, sourcedir)
+ targetdir = os.path.join(des_dir, rel_path)
+ for j in image_specified.keys():
+ if rel_path.startswith(j):
+ value = image_specified[j]
+ targetdir = os.path.join(des_dir, re.sub(rf'^{j}', rf'{value}', rel_path))
+ break
+ if not os.path.exists(targetdir):
+ os.makedirs(targetdir, exist_ok=True)
+ shutil.copy(os.path.join(cur, i), targetdir)
+ except:
+ logger.warning(f'picture {os.path.join(os.path.relpath(cur, sourcedir), i)} copy failed.')
+
+copy_image(src_dir, des_dir)
diff --git a/docs/mindspore/api/source_zh_cn/api_python/mindspore.scipy.rst b/docs/mindspore/api/source_zh_cn/api_python/mindspore.scipy.rst
index 20ccb2229974d71bd5f532094de4d42c89ceb5d9..c48b343c4c2efdc04a9dab53b289cbbebe729f43 100644
--- a/docs/mindspore/api/source_zh_cn/api_python/mindspore.scipy.rst
+++ b/docs/mindspore/api/source_zh_cn/api_python/mindspore.scipy.rst
@@ -14,10 +14,14 @@ mindspore.scipy.linalg
:template: classtemplate_inherited.rst
mindspore.scipy.linalg.block_diag
+ mindspore.scipy.linalg.cho_factor
+ mindspore.scipy.linalg.cholesky
+ mindspore.scipy.linalg.cho_solve
mindspore.scipy.linalg.eigh
mindspore.scipy.linalg.inv
mindspore.scipy.linalg.lu
mindspore.scipy.linalg.lu_factor
+ mindspore.scipy.linalg.solve_triangular
mindspore.scipy.optimize
------------------------
diff --git a/docs/mindspore/api/source_zh_cn/conf.py b/docs/mindspore/api/source_zh_cn/conf.py
index 300c377cda8c46cfbd2d4095f179f687c03b89fc..b4221fa9e64a2a4e3cbac312df0e7186686527eb 100644
--- a/docs/mindspore/api/source_zh_cn/conf.py
+++ b/docs/mindspore/api/source_zh_cn/conf.py
@@ -150,29 +150,38 @@ import mindspore
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
-# Copy images from mindspore repository to sphinx workdir before running.
-import glob
+# Copy images from mindspore repo.
+import imghdr
import shutil
from sphinx.util import logging
-logger = logging.getLogger(__name__)
-
-image_specified = {"docs/api_img/*.png": "./api_python/ops/api_img",
- "docs/api_img/dataset/*.png": "./api_python/dataset/api_img"}
-for img in image_specified.keys():
- des_dir = os.path.normpath(image_specified[img])
- try:
- if "*" in img:
- imgs = glob.glob(os.path.join(os.getenv("MS_PATH"), os.path.normpath(img)))
- if not imgs:
- continue
- if not os.path.exists(des_dir):
- os.makedirs(des_dir)
- for i in imgs:
- shutil.copy(i, des_dir)
- else:
- img_fullpath = os.path.join(os.getenv("MS_PATH"), des_dir)
- if os.path.exists(img_fullpath):
- shutil.copy(img_fullpath, des_dir)
- except:
- logger.warning(f"{img} deal failed!")
+logger = logging.getLogger(__name__)
+src_dir = os.path.join(os.getenv("MS_PATH"), 'docs/api/api_python')
+des_dir = "./api_python"
+image_specified = {"train/": ""}
+
+if not os.path.exists(src_dir):
+ logger.warning(f"不存在目录:{src_dir}!")
+
+def copy_image(sourcedir, des_dir):
+ """
+ Copy all images from sourcedir to workdir.
+ """
+ for cur, _, files in os.walk(sourcedir, topdown=True):
+ for i in files:
+ if imghdr.what(os.path.join(cur, i)):
+ try:
+ rel_path = os.path.relpath(cur, sourcedir)
+ targetdir = os.path.join(des_dir, rel_path)
+ for j in image_specified.keys():
+ if rel_path.startswith(j):
+ value = image_specified[j]
+ targetdir = os.path.join(des_dir, re.sub(rf'^{j}', rf'{value}', rel_path))
+ break
+ if not os.path.exists(targetdir):
+ os.makedirs(targetdir, exist_ok=True)
+ shutil.copy(os.path.join(cur, i), targetdir)
+ except:
+ logger.warning(f'picture {os.path.join(os.path.relpath(cur, sourcedir), i)} copy failed.')
+
+copy_image(src_dir, des_dir)
diff --git a/docs/mindspore/faq/requirements.txt b/docs/mindspore/faq/requirements.txt
index 3fcb7644a26ae0615bc4e71a7d333f6200e1a5bf..878affd19d9928e8955e4d5b44b45b823da7fb0c 100644
--- a/docs/mindspore/faq/requirements.txt
+++ b/docs/mindspore/faq/requirements.txt
@@ -1,5 +1,6 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
jieba
diff --git a/docs/mindspore/faq/source_en/conf.py b/docs/mindspore/faq/source_en/conf.py
index 5be540341a1805fe8241134120683dc3bae5c6f7..4669f40aa0c1071982318c307891cbed76cfb52b 100644
--- a/docs/mindspore/faq/source_en/conf.py
+++ b/docs/mindspore/faq/source_en/conf.py
@@ -52,6 +52,8 @@ pygments_style = 'sphinx'
#
html_theme = 'sphinx_rtd_theme'
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
diff --git a/docs/mindspore/faq/source_en/installation.md b/docs/mindspore/faq/source_en/installation.md
index 43f58139d1b31040b8b5d3d8756565801adac1ff..a99c9942e9de0a3e983f358abd21a9fd6e95dedf 100644
--- a/docs/mindspore/faq/source_en/installation.md
+++ b/docs/mindspore/faq/source_en/installation.md
@@ -286,3 +286,9 @@ $make
$make check
$sudo make install
```
+
+
+
+**Q: What should I do if a warning message `UserWarning: The value of the smallest subnormal for type is zero.` is displayed when running MindSpore?**
+
+A: Such warnings are observed on ARM environments with Python 3.9 and numpy >= 1.22.0 installed. These warnings come from numpy rather than MindSpore; if you wish to suppress them, consider manually downgrading numpy to a lower version (<= 1.21.2).
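If downgrading numpy is not an option, the same warning can usually be silenced with Python's standard `warnings` filter, installed before numpy is imported — a generic sketch, not MindSpore-specific:

```python
import warnings

# Install the filter *before* importing numpy, since the warning is emitted
# at numpy import time. The message argument is a regex matched against the
# start of the warning text.
warnings.filterwarnings(
    "ignore",
    message="The value of the smallest subnormal",
    category=UserWarning,
)

import numpy as np  # the subnormal UserWarning is now suppressed
```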
diff --git a/docs/mindspore/faq/source_zh_cn/conf.py b/docs/mindspore/faq/source_zh_cn/conf.py
index 34872471f7f6867fdb6e3e35ba8765345a1b4546..face6046bf5697ccade00ccff3fc1eb770f20698 100644
--- a/docs/mindspore/faq/source_zh_cn/conf.py
+++ b/docs/mindspore/faq/source_zh_cn/conf.py
@@ -56,6 +56,8 @@ html_search_language = 'zh'
html_search_options = {'dict': '../../../resource/jieba.txt'}
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
diff --git a/docs/mindspore/faq/source_zh_cn/feature_advice.md b/docs/mindspore/faq/source_zh_cn/feature_advice.md
index e4061f58b882c7c18034cbcbde0ea53dd475da49..93df8eb21a61ea2c571a2caeecb0a33a69b93405 100644
--- a/docs/mindspore/faq/source_zh_cn/feature_advice.md
+++ b/docs/mindspore/faq/source_zh_cn/feature_advice.md
@@ -144,7 +144,7 @@ A: TensorFlow的对象检测Pipeline接口属于TensorFlow Model模块。待Mind
**Q: 使用PyNative模式能够进行迁移学习?**
-A: PyNative模式是兼容迁移学习的,更多的教程信息,可以参考[预训练模型加载代码详解](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cv_mobilenetv2_fine_tune.html#id7)。
+A: PyNative模式是兼容迁移学习的,更多的教程信息,可以参考[预训练模型加载代码详解](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/cv_mobilenetv2_fine_tune.html#预训练模型加载代码详解)。
diff --git a/docs/mindspore/faq/source_zh_cn/implement_problem.md b/docs/mindspore/faq/source_zh_cn/implement_problem.md
index 3f168f5fdf553728f7dcc9bb5cce3098013cf69f..758b2bbec9fe2c27751393b41a36b69a516c8fcf 100644
--- a/docs/mindspore/faq/source_zh_cn/implement_problem.md
+++ b/docs/mindspore/faq/source_zh_cn/implement_problem.md
@@ -519,7 +519,7 @@ ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Exp
-**运行文档示例代码的过程中,遇到`matplotlib.pyplot.show()`或`plt.show()`无法执行怎么处理?**
+**Q:运行文档示例代码的过程中,遇到`matplotlib.pyplot.show()`或`plt.show()`无法执行怎么处理?**
A: 首先确认是否安装`matplotlib`,如果没有安装,可以在命令行中执行`pip install matplotlib`进行安装。
diff --git a/docs/mindspore/faq/source_zh_cn/inference.md b/docs/mindspore/faq/source_zh_cn/inference.md
index 0eba8eefdbe0dfbfcd1b2fefa50a652355daebce..33fe0734ca3624d089055d823744017b4b0b96e3 100644
--- a/docs/mindspore/faq/source_zh_cn/inference.md
+++ b/docs/mindspore/faq/source_zh_cn/inference.md
@@ -34,7 +34,7 @@ def export_net():
**Q: 编译应用时报错`/usr/bin/ld: warning: libxxx.so, needed by libmindspore.so, not found`怎么办?**
-A: 寻找缺少的动态库文件所在目录,添加该路径到环境变量`LD_LIBRARY_PATH`中,环境变量设置参考[Ascend 310 AI处理器上使用MindIR模型进行推理#编译推理代码](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference_ascend_310_mindir.html#id6)。
+A: 寻找缺少的动态库文件所在目录,添加该路径到环境变量`LD_LIBRARY_PATH`中,环境变量设置参考[Ascend 310 AI处理器上使用MindIR模型进行推理#编译推理代码](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference_ascend_310_mindir.html#编译推理代码)。
diff --git a/docs/mindspore/faq/source_zh_cn/installation.md b/docs/mindspore/faq/source_zh_cn/installation.md
index fc445b3d4e4a9d7a1290ef03edc3f049c04e8ca8..ca67551f542ef8e56e8f3242db9af4eb49a0abca 100644
--- a/docs/mindspore/faq/source_zh_cn/installation.md
+++ b/docs/mindspore/faq/source_zh_cn/installation.md
@@ -222,7 +222,7 @@ A: MindSpore GPU模式一般无需设置`DEVICE_ID`环境变量,MindSpore会
**Q: 编译应用时报错`/usr/bin/ld: warning: libxxx.so, needed by libmindspore.so, not found`怎么办?**
-A: 寻找缺少的动态库文件所在目录,添加该路径到环境变量`LD_LIBRARY_PATH`中,环境变量设置参考[Ascend 310 AI处理器上使用MindIR模型进行推理#编译推理代码](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference_ascend_310_mindir.html#id6)。
+A: 寻找缺少的动态库文件所在目录,添加该路径到环境变量`LD_LIBRARY_PATH`中,环境变量设置参考[Ascend 310 AI处理器上使用MindIR模型进行推理#编译推理代码](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference_ascend_310_mindir.html#编译推理代码)。
@@ -304,3 +304,9 @@ $make
$make check
$sudo make install
```
+
+
+
+**Q: 运行MindSpore时出现告警 `UserWarning: The value of the smallest subnormal for type is zero.` 应该怎么解决?**
+
+A: 上述告警出现在安装了较新版本numpy(>=1.22.0)的ARM Python 3.9环境上。告警来自numpy而非MindSpore。如果告警影响到了代码的正常调测,可以考虑手动安装较低版本的numpy(<=1.21.2)来规避。
diff --git a/docs/mindspore/migration_guide/requirements.txt b/docs/mindspore/migration_guide/requirements.txt
index 6d8cd70439820e16bc32c4abc93e948ba81dc01b..49a77fdec3a5c745edd40eaa223883c31500e975 100644
--- a/docs/mindspore/migration_guide/requirements.txt
+++ b/docs/mindspore/migration_guide/requirements.txt
@@ -1,7 +1,8 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
numpy
nbsphinx
IPython
diff --git a/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_api_mapping.md b/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_api_mapping.md
index 4b223b2a0934ef87e5b3fcc618850803cd32f07b..eba739c4f88e473d1d6831da5ec650ea378c65be 100644
--- a/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_api_mapping.md
@@ -138,7 +138,7 @@ More MindSpore developers are also welcome to participate in improving the mappi
| [torch.tanh](https://pytorch.org/docs/1.5.0/torch.html#torch.tanh) | [mindspore.ops.Tanh](https://mindspore.cn/docs/api/en/master/api_python/ops/mindspore.ops.Tanh.html#mindspore.ops.Tanh) | |
| [torch.tensor](https://pytorch.org/docs/1.5.0/torch.html#torch.tensor) | [mindspore.Tensor](https://mindspore.cn/docs/api/en/master/api_python/mindspore/mindspore.Tensor.html#mindspore.Tensor) | |
| [torch.Tensor](https://pytorch.org/docs/1.5.0/torch.html#torch.Tensor) | [mindspore.Tensor](https://mindspore.cn/docs/api/en/master/api_python/mindspore/mindspore.Tensor.html#mindspore.Tensor) | |
-| [torch.tensordot](https://pytorch.org/docs/1.5.0/torch.html#torch.tensordot) | [mindspore.numpy.tensordot](https://mindspore.cn/docs/api/en/master/api_python/ops/mindspore.ops.Reshape.html#mindspore.numpy.tensordot) | |
+| [torch.tensordot](https://pytorch.org/docs/1.5.0/torch.html#torch.tensordot) | [mindspore.numpy.tensordot](https://www.mindspore.cn/docs/api/en/master/api_python/numpy/mindspore.numpy.tensordot.html) | |
| [torch.topk](https://pytorch.org/docs/1.5.0/torch.html#torch.topk) | [mindspore.ops.TopK](https://mindspore.cn/docs/api/en/master/api_python/ops/mindspore.ops.TopK.html#mindspore.ops.TopK) | [diff](https://www.mindspore.cn/docs/migration_guide/en/master/api_mapping/pytorch_diff/TopK.html) |
| [torch.trace](https://pytorch.org/docs/1.5.0/torch.html#torch.trace) | [mindspore.Tensor.trace](https://mindspore.cn/docs/api/en/master/api_python/mindspore/mindspore.Tensor.html#mindspore.Tensor.trace) | |
| [torch.transpose](https://pytorch.org/docs/1.5.0/torch.html#torch.transpose) | [mindspore.ops.Transpose](https://mindspore.cn/docs/api/en/master/api_python/ops/mindspore.ops.Transpose.html#mindspore.ops.Transpose) | |
diff --git a/docs/mindspore/migration_guide/source_en/conf.py b/docs/mindspore/migration_guide/source_en/conf.py
index d0451dfbce424c51499552fe13a6591fd9333621..f7122e1445044c4b270fc99bc08ce2c89e49f9d7 100644
--- a/docs/mindspore/migration_guide/source_en/conf.py
+++ b/docs/mindspore/migration_guide/source_en/conf.py
@@ -14,7 +14,6 @@ import os
import IPython
import re
import sys
-import nbsphinx as nbs
# -- Project information -----------------------------------------------------
@@ -61,20 +60,9 @@ pygments_style = 'sphinx'
#
html_theme = 'sphinx_rtd_theme'
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
diff --git a/docs/mindspore/migration_guide/source_zh_cn/api_mapping/pytorch_api_mapping.md b/docs/mindspore/migration_guide/source_zh_cn/api_mapping/pytorch_api_mapping.md
index 8880b199d3f9d868b9592a6bcbaa2443615788c0..f6f2c512e0430690a58c70afdf1ebcbafd603f52 100644
--- a/docs/mindspore/migration_guide/source_zh_cn/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/migration_guide/source_zh_cn/api_mapping/pytorch_api_mapping.md
@@ -138,7 +138,7 @@
| [torch.tanh](https://pytorch.org/docs/1.5.0/torch.html#torch.tanh) | [mindspore.ops.Tanh](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.Tanh.html#mindspore.ops.Tanh) | |
| [torch.tensor](https://pytorch.org/docs/1.5.0/torch.html#torch.tensor) | [mindspore.Tensor](https://mindspore.cn/docs/api/zh-CN/master/api_python/mindspore/mindspore.Tensor.html#mindspore.Tensor) | |
| [torch.Tensor](https://pytorch.org/docs/1.5.0/torch.html#torch.Tensor) | [mindspore.Tensor](https://mindspore.cn/docs/api/zh-CN/master/api_python/mindspore/mindspore.Tensor.html#mindspore.Tensor) | |
-| [torch.tensordot](https://pytorch.org/docs/1.5.0/torch.html#torch.tensordot) | [mindspore.numpy.tensordot](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.Reshape.html#mindspore.numpy.tensordot) | |
+| [torch.tensordot](https://pytorch.org/docs/1.5.0/torch.html#torch.tensordot) | [mindspore.numpy.tensordot](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/numpy/mindspore.numpy.tensordot.html) | |
| [torch.topk](https://pytorch.org/docs/1.5.0/torch.html#torch.topk) | [mindspore.ops.TopK](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.TopK.html#mindspore.ops.TopK) | [差异对比](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/api_mapping/pytorch_diff/TopK.html) |
| [torch.trace](https://pytorch.org/docs/1.5.0/torch.html#torch.trace) | [mindspore.Tensor.trace](https://mindspore.cn/docs/api/zh-CN/master/api_python/mindspore/mindspore.Tensor.html#mindspore.Tensor.trace) | |
| [torch.transpose](https://pytorch.org/docs/1.5.0/torch.html#torch.transpose) | [mindspore.ops.Transpose](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.Transpose.html#mindspore.ops.Transpose) | |
diff --git a/docs/mindspore/migration_guide/source_zh_cn/api_mapping/tensorflow_api_mapping.md b/docs/mindspore/migration_guide/source_zh_cn/api_mapping/tensorflow_api_mapping.md
index 14091d1cd2ba1feba9590ab9f563cf9ab7e7a773..398b588fc7ce8e3c7598674a677d003a98b3ce08 100644
--- a/docs/mindspore/migration_guide/source_zh_cn/api_mapping/tensorflow_api_mapping.md
+++ b/docs/mindspore/migration_guide/source_zh_cn/api_mapping/tensorflow_api_mapping.md
@@ -17,7 +17,7 @@
| [tf.eye](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/eye) | [mindspore.ops.Eye](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.Eye.html) | |
| [tf.fill](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/fill) | [mindspore.ops.Fill](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.Fill.html) | |
| [tf.gather](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/gather) | [mindspore.ops.Gather](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.Gather.html) | |
-| [tf.gradients](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/gradients) | [mindspore.ops.GradOperation](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.GradOperation.html) | [差异对比](http://www.mindspore.cn/docs/migration_guide/zh-CN/master/api_mapping/tensorflow_diff/GradOperation.html) |
+| [tf.gradients](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/gradients) | [mindspore.ops.GradOperation](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.GradOperation.html) | [差异对比](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/api_mapping/tensorflow_diff/GradOperation.html) |
| [tf.norm](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/norm) | [mindspore.nn.Norm](https://mindspore.cn/docs/api/zh-CN/master/api_python/nn/mindspore.nn.Norm.html) | |
| [tf.one_hot](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/one_hot) | [mindspore.nn.OneHot](https://mindspore.cn/docs/api/zh-CN/master/api_python/nn/mindspore.nn.OneHot.html) | |
| [tf.ones_like](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/ones_like) | [mindspore.ops.OnesLike](https://mindspore.cn/docs/api/zh-CN/master/api_python/ops/mindspore.ops.OnesLike.html) | |
diff --git a/docs/mindspore/migration_guide/source_zh_cn/conf.py b/docs/mindspore/migration_guide/source_zh_cn/conf.py
index e937416a34f01c0c3fcdf46cb6139ef1a7f7bc8d..4645ca4e62c51da6e52e2a480187c0b3754666ef 100644
--- a/docs/mindspore/migration_guide/source_zh_cn/conf.py
+++ b/docs/mindspore/migration_guide/source_zh_cn/conf.py
@@ -14,7 +14,6 @@ import os
import IPython
import re
import sys
-import nbsphinx as nbs
# -- Project information -----------------------------------------------------
@@ -65,20 +64,9 @@ html_search_language = 'zh'
html_search_options = {'dict': '../../../resource/jieba.txt'}
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
diff --git a/docs/mindspore/migration_guide/source_zh_cn/faq.md b/docs/mindspore/migration_guide/source_zh_cn/faq.md
index d2df2bb5a6023db12ef28d475577eac8a7b475f7..d7f85fd323d576c9ec20a07ab362e496f1e4bb82 100644
--- a/docs/mindspore/migration_guide/source_zh_cn/faq.md
+++ b/docs/mindspore/migration_guide/source_zh_cn/faq.md
@@ -10,9 +10,9 @@
- 网络脚本分析
- [算子映射及缺失算子处理策略](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/script_analysis.html#id3)
+ [算子映射及缺失算子处理策略](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/script_analysis.html#查询算子映射表)
- [常见语法限制及处理策略](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/script_analysis.html#id6)
+ [常见语法限制及处理策略](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/script_analysis.html#常见限制原则)
- 网络脚本开发
@@ -20,17 +20,17 @@
- 网络调试
- [流程调试常见问题处理](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/neural_network_debug.html#id6)
+ [流程调试常见问题处理](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/neural_network_debug.html#常见错误)
- [loss值对比检查常见问题处理](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/neural_network_debug.html#id8)
+ [loss值对比检查常见问题处理](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/neural_network_debug.html#相关问题定位)
- [loss值异常常见问题处理](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/neural_network_debug.html#id11)
+ [loss值异常常见问题处理](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/neural_network_debug.html#loss值异常定位)
- 性能调试
- [性能调试常见问题及优化方法](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/sample_code.html#id26)
+ [性能调试常见问题及优化方法](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/sample_code.html#性能调优)
- [Profiler工具常见问题处理](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/performance_optimization.html#id6)
+ [Profiler工具常见问题处理](https://www.mindspore.cn/docs/migration_guide/zh-CN/master/performance_optimization.html#常见问题)
- 执行推理
diff --git a/docs/mindspore/migration_guide/source_zh_cn/inference.md b/docs/mindspore/migration_guide/source_zh_cn/inference.md
index bde5c9645747479dc8e3ae220857eebee89e77a3..f574013bb5e272884793a421c5ccbe36e428f5ba 100644
--- a/docs/mindspore/migration_guide/source_zh_cn/inference.md
+++ b/docs/mindspore/migration_guide/source_zh_cn/inference.md
@@ -8,11 +8,11 @@ MindSpore可以基于训练好的模型,在不同的硬件平台上执行推
### 总览
-MindSpore支持保存为CheckPoint格式的[训练参数文件](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html#id2)和MindIR、AIR、ONNX格式的[网络模型文件](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html#id2)。
+MindSpore支持保存为CheckPoint格式的[训练参数文件](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html#模型文件)和MindIR、AIR、ONNX格式的[网络模型文件](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html#模型文件)。
-参考[执行推理](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html#id3),不仅可以直接通过`mindspore.model.predict`接口执行本机推理,还可以通过`mindspore.export`导出MindIR、AIR、ONNX格式的网络模型文件,以便于跨平台执行推理。
+参考[执行推理](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html#执行推理),不仅可以直接通过`mindspore.model.predict`接口执行本机推理,还可以通过`mindspore.export`导出MindIR、AIR、ONNX格式的网络模型文件,以便于跨平台执行推理。
-使用[MindIR格式](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html#id3)的模型文件消除了不同后端模型的差异,可以用于执行跨硬件平台推理,支持部署到云端Serving和端侧Lite平台。
+使用[MindIR格式](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/multi_platform_inference.html#mindir介绍)的模型文件消除了不同后端模型的差异,可以用于执行跨硬件平台推理,支持部署到云端Serving和端侧Lite平台。
### 不同硬件平台执行推理
diff --git a/docs/mindspore/migration_guide/source_zh_cn/neural_network_debug.md b/docs/mindspore/migration_guide/source_zh_cn/neural_network_debug.md
index 855e1cfe9b9f8de125847e4bc5df8ac603636bc6..9bfbbefb85867897b50f2cec8110430c8da4375c 100644
--- a/docs/mindspore/migration_guide/source_zh_cn/neural_network_debug.md
+++ b/docs/mindspore/migration_guide/source_zh_cn/neural_network_debug.md
@@ -43,7 +43,7 @@
- 在PyNative模式下可使用pdb进行调试,利用pdb打印相关堆栈和上下文信息帮助问题定位。
- 使用Print算子打印更多上下文信息,具体示例可参考[Print算子功能介绍](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#print)。
-- 调整日志级别获取更多报错信息,MindSpore可通过环境变量方便地调整日志级别,具体可参考[日志相关的环境变量和配置](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id6)。
+- 调整日志级别获取更多报错信息,MindSpore可通过环境变量方便地调整日志级别,具体可参考[日志相关的环境变量和配置](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)。
#### 常见错误
@@ -125,7 +125,7 @@
- [Callback功能](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#callback)
- MindSpore已提供ModelCheckpoint、LossMonitor、SummaryCollector等Callback类用于保存模型参数、监控loss值、保存训练过程信息等功能,用户也可自定义Callback函数用于实现在每个epoch和step的开始和结束运行相关功能,具体示例可参考[自定义Callback](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id3)。
+ MindSpore已提供ModelCheckpoint、LossMonitor、SummaryCollector等Callback类用于保存模型参数、监控loss值、保存训练过程信息等功能,用户也可自定义Callback函数用于实现在每个epoch和step的开始和结束运行相关功能,具体示例可参考[自定义Callback](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#自定义callback)。
- [MindSpore metrics功能](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#mindspore-metrics)
diff --git a/docs/mindspore/migration_guide/source_zh_cn/sample_code.md b/docs/mindspore/migration_guide/source_zh_cn/sample_code.md
index 058d94804009897e0eab89460e150cebcdbf9db3..7052278e59876d4c9f9177ef09499e140d2f4cce 100644
--- a/docs/mindspore/migration_guide/source_zh_cn/sample_code.md
+++ b/docs/mindspore/migration_guide/source_zh_cn/sample_code.md
@@ -823,7 +823,7 @@ profiler.analyse()
当数据处理速度较慢时,队列从最开始的满队列情况逐渐消耗为空队列,训练进程会开始等待空队列填入数据,一旦有新的数据填入,网络才会继续进行单Step训练。由于数据处理没有队列作为缓冲,数据处理的性能抖动直接体现在单Step的性能上,因此还会造成单Step性能抖动。
-关于MindData的性能问题,可以参考 MindInsight 组件的 [数据准备性能分析](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/performance_profiling_ascend.html#id8),其给出了MindData 性能的常见问题及解决方法。
+关于MindData的性能问题,可以参考 MindInsight 组件的 [数据准备性能分析](https://www.mindspore.cn/mindinsight/docs/zh-CN/master/performance_profiling_ascend.html#数据准备性能分析),其给出了MindData 性能的常见问题及解决方法。
#### 多机同步性能问题
diff --git a/docs/mindspore/note/requirements.txt b/docs/mindspore/note/requirements.txt
index 3fcb7644a26ae0615bc4e71a7d333f6200e1a5bf..878affd19d9928e8955e4d5b44b45b823da7fb0c 100644
--- a/docs/mindspore/note/requirements.txt
+++ b/docs/mindspore/note/requirements.txt
@@ -1,5 +1,6 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
jieba
diff --git a/docs/mindspore/note/source_en/conf.py b/docs/mindspore/note/source_en/conf.py
index c4c6d846fa95a2a37a3f00665150b9bb3fe9ef57..37e45152eea2b3cac89b34d82c6928860f7d7109 100644
--- a/docs/mindspore/note/source_en/conf.py
+++ b/docs/mindspore/note/source_en/conf.py
@@ -55,6 +55,8 @@ pygments_style = 'sphinx'
#
html_theme = 'sphinx_rtd_theme'
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
diff --git a/docs/mindspore/note/source_en/env_var_list.md b/docs/mindspore/note/source_en/env_var_list.md
index 9016f580db391e7befae19fc524aa0b8f282b5aa..b2c96ebc75088fef18b12e034d435bd6b6e8ca3a 100644
--- a/docs/mindspore/note/source_en/env_var_list.md
+++ b/docs/mindspore/note/source_en/env_var_list.md
@@ -18,9 +18,11 @@ MindSpore environment variables are as follows:
|GLOG_v|MindSpore|For details about the function and usage, see [GLOG_v](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Integer|0-DEBUG
1-INFO
2-WARNING
3-ERROR|None|Optional|2|
|GLOG_logtostderr|MindSpore|For details about the function and usage, see [GLOG_logtostderr](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Integer|1:logs are output to the screen
0:logs are output to a file|This variable is used together with GLOG_log_dir|Optional|1|
|GLOG_log_dir|MindSpore|For details about the function and usage, see [GLOG_log_dir](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|String|File path, which can be a relative path or an absolute path.|This variable is used together with GLOG_logtostderr|Optional|None|
-|GLOG_log_max|MindSpore|For details about the function and usage, see [GLOG_log_max](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Integer|>0|None |Optional|50|
-|MS_SUBMODULE_LOG_v|MindSpore|For details about the function and usage, see [MS_SUBMODULE_LOG_v](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModual: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|None | Optional|None|
|GLOG_stderrthreshold|For details about the function and usage, see [GLOG_stderrthreshold](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Integer|0-DEBUG
1-INFO
2-WARNING
3-ERROR|None|Optional|2
+|MS_SUBMODULE_LOG_v|MindSpore|For details about the function and usage, see [MS_SUBMODULE_LOG_v](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModule: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|None | Optional|None|
+|GLOG_log_max|MindSpore|For details about the function and usage, see [GLOG_log_max](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Integer|>0|None |Optional|50|
+|logger_maxBytes|MindSpore|For details about the function and usage, see [logger_maxBytes](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Integer|None|None | Optional|52428800|
|logger_backupCount|MindSpore|For details about the function and usage, see [logger_backupCount](https://www.mindspore.cn/docs/programming_guide/en/master/custom_debugging_info.html#log-related-environment-variables-and-configurations)|Integer|None|None|Optional|30|
|OPTION_PROTO_LIB_PATH|MindSpore|Specifies the RPOTO dependent library path. |String|File path, which can be a relative path or an absolute path.|None|Optional|None|
|MS_RDR_ENABLE|MindSpore|Determines whether to enable running data recorder (RDR). If a running exception occurs in MindSpore, the pre-recorded data in MindSpore is automatically exported to assist in locating the cause of the running exception.|Integer|1:enables RDR
0:disables RDR|This variable is used together with `MS_RDR_MODE` and `MS_RDR_PATH`.|Optional|None|
|MS_RDR_MODE|MindSpore|Determines the exporting mode of running data recorder(RDR).|Integer|1:export data when training process terminates in exceptional scenario
2:export data when training process terminates in both exceptional scenario and normal scenario.|This variable is used together with `MS_RDR_ENABLE=1`.|Optional|1|
diff --git a/docs/mindspore/note/source_en/static_graph_syntax_support.md b/docs/mindspore/note/source_en/static_graph_syntax_support.md
index 22e8c603e60841dc68aeaea605ffd9280c6596a5..e928f74dbeeeeb3ead884c7df1c63609975baa6b 100644
--- a/docs/mindspore/note/source_en/static_graph_syntax_support.md
+++ b/docs/mindspore/note/source_en/static_graph_syntax_support.md
@@ -380,7 +380,7 @@ Currently, `Cell` and its subclass instances can be constructed on the network.
However, during construction, the parameter can be specified only in position parameter mode, and cannot be specified in the key-value pair mode. That is, the syntax `cell = Cell(arg_name=value)` is not supported.
-Currently, the attributes and APIs related to `Cell` and its subclasses cannot be called on the network unless they are called through `self` in `contrcut` of `Cell`.
+Currently, the attributes and APIs related to `Cell` and its subclasses cannot be called on the network unless they are called through `self` in `construct` of `Cell`.
For details about the definition of `Cell`, click .
diff --git a/docs/mindspore/note/source_zh_cn/conf.py b/docs/mindspore/note/source_zh_cn/conf.py
index 6a5ffa7f0544879eb64dbf581b1e89369730cc5f..8875a22e5490a152db3a521acabe37f404c5859e 100644
--- a/docs/mindspore/note/source_zh_cn/conf.py
+++ b/docs/mindspore/note/source_zh_cn/conf.py
@@ -59,8 +59,8 @@ html_search_language = 'zh'
html_search_options = {'dict': '../../../resource/jieba.txt'}
-
-
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
import search_code
diff --git a/docs/mindspore/note/source_zh_cn/env_var_list.md b/docs/mindspore/note/source_zh_cn/env_var_list.md
index e3e6edc5a71160a6772d061e230648fe64504776..958ade0b518c0ea8347dc970f9d846fda967df34 100644
--- a/docs/mindspore/note/source_zh_cn/env_var_list.md
+++ b/docs/mindspore/note/source_zh_cn/env_var_list.md
@@ -18,16 +18,18 @@
|MS_RDR_ENABLE|MindSpore|是否开启程序运行数据记录器(RDR),如果MindSpore出现了运行异常,会自动导出MindSpore中预先记录的数据以辅助定位运行异常的原因|Integer|1:开启RDR功能
0:关闭RDR功能|配合`MS_RDR_MODE`与`MS_RDR_PATH`使用|可选|无|
|MS_RDR_MODE|MindSpore|指定运行数据记录器(RDR)导出数据的模式|Integer|1:仅在训练进程异常终止时导出数据
2:训练进程异常终止或正常结束时导出数据|配合`MS_RDR_ENABLE=1`使用|可选|1|
|MS_RDR_PATH|MindSpore|配置程序运行数据记录器(RDR)的文件导出的根目录路径|String|目录路径,仅支持绝对路径|配合`MS_RDR_ENABLE=1`使用,最终RDR文件将保存在`${MS_RDR_PATH}/rank_${RANK_ID}/rdr/`目录下。其中`RANK_ID`为多卡训练场景中的卡号,单卡场景默认`RANK_ID=0`。|可选|无|
-|GLOG_v|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id11)|Integer|0-DEBUG
1-INFO
2-WARNING
3-ERROR|无|可选|2|
-|GLOG_logtostderr|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id11)|Integer|1:日志输出到屏幕
0:日志输出到文件|与GLOG_log_dir一起使用|可选|1|
-|GLOG_log_dir|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id11)|String|文件路径,支持相对路径与绝对路径|与GLOG_logtostderr一起使用|可选|无|
-|GLOG_log_max|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id11)|Integer|正整数|无|可选|50|
-|MS_SUBMODULE_LOG_v|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id11)|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModual: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|无|可选|无|
-|GLOG_stderrthreshold|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#id11)|Integer|0-DEBUG
1-INFO
2-WARNING
3-ERROR|无|可选|2
+|GLOG_v|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)|Integer|0-DEBUG
1-INFO
2-WARNING
3-ERROR|无|可选|2|
+|GLOG_logtostderr|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)|Integer|1:日志输出到屏幕
0:日志输出到文件|与GLOG_log_dir一起使用|可选|1|
+|GLOG_log_dir|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)|String|文件路径,支持相对路径与绝对路径|与GLOG_logtostderr一起使用|可选|无|
+|GLOG_stderrthreshold|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)|Integer|0-DEBUG
1-INFO
2-WARNING
3-ERROR|无|可选|2
+|MS_SUBMODULE_LOG_v|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)|Dict{String:Integer...}|LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR
SubModule: COMMON, MD, DEBUG, DEVICE, COMMON, IR...|无|可选|无|
+|GLOG_log_max|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)|Integer|正整数|无|可选|50|
+|logger_maxBytes|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)|Integer|无|无|可选|52428800|
+|logger_backupCount|MindSpore|[日志功能与用法](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/custom_debugging_info.html#日志相关的环境变量和配置)|Integer|无|无|可选|30|
|OPTION_PROTO_LIB_PATH|MindSpore|RPOTO依赖库库路径|String|目录路径,支持相对路径与绝对路径|无|可选|无|
|MS_OM_PATH|MindSpore|配置task异常时dump数据路径以及图编译出错时dump的analyze_fail.dat文件的保存目录,保存路径为:指定的路径/rank_${rand_id}/om|String|文件路径,支持相对路径与绝对路径|无|可选|无|
-|MINDSPORE_DUMP_CONFIG|MindSpore|指定[云侧Dump功能](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dump_in_graph_mode.html#id6)或[端侧Dump功能](https://www.mindspore.cn/lite/docs/zh-CN/master/use/benchmark_tool.html#dump)所依赖的配置文件的路径|String|文件路径,支持相对路径与绝对路径|无|可选|无|
-|MS_DIAGNOSTIC_DATA_PATH|MindSpore|使用[云侧Dump功能](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dump_in_graph_mode.html#id6)时,如果Dump配置文件没有设置`path`字段或者设置为空字符串,则“$MS_DIAGNOSTIC_DATA_PATH/debug_dump”就会被当做path的值。若Dump配置文件中设置了`path`字段,则仍以该字段的实际取值为准。|String|文件路径,只支持绝对路径|与MINDSPORE_DUMP_CONFIG配合使用|可选|无|
+|MINDSPORE_DUMP_CONFIG|MindSpore|指定[云侧Dump功能](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dump_in_graph_mode.html#同步dump)或[端侧Dump功能](https://www.mindspore.cn/lite/docs/zh-CN/master/use/benchmark_tool.html#dump)所依赖的配置文件的路径|String|文件路径,支持相对路径与绝对路径|无|可选|无|
+|MS_DIAGNOSTIC_DATA_PATH|MindSpore|使用[云侧Dump功能](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dump_in_graph_mode.html#同步dump)时,如果Dump配置文件没有设置`path`字段或者设置为空字符串,则“$MS_DIAGNOSTIC_DATA_PATH/debug_dump”就会被当做path的值。若Dump配置文件中设置了`path`字段,则仍以该字段的实际取值为准。|String|文件路径,只支持绝对路径|与MINDSPORE_DUMP_CONFIG配合使用|可选|无|
|MS_ENABLE_CACHE|MindData|是否开启dataset数据处理cache功能,可以实现数据处理过程中数据的cache能力,加速数据集读取及增强处理|String|TRUE:开启数据处理cache功能
FALSE:关闭数据处理cache功能|与MS_CACHE_HOST、MS_CACHE_PORT一起使用|可选|无|
|MS_CACHE_HOST|MindData|开启cache时,cache服务所在的IP|String|Cache Server所在机器的IP|与MS_ENABLE_CACHE=TRUE、MS_CACHE_PORT一起使用|可选|无|
|MS_CACHE_PORT|MindData|开启cache时,cache服务所在的端口|String|Cache Server所在机器的端口|与MS_ENABLE_CACHE=TRUE、MS_CACHE_HOST一起使用|可选|无|
diff --git a/docs/mindspore/note/source_zh_cn/static_graph_syntax_support.md b/docs/mindspore/note/source_zh_cn/static_graph_syntax_support.md
index 2463107a23c18f969984a7c703515611b075aa67..7b38eea48bbde38e405ffadcebfad5c38839b9eb 100644
--- a/docs/mindspore/note/source_zh_cn/static_graph_syntax_support.md
+++ b/docs/mindspore/note/source_zh_cn/static_graph_syntax_support.md
@@ -381,7 +381,7 @@ x:[[1. 1. 1. 1.]
但在构造时,参数只能通过位置参数方式传入,不支持通过键值对方式传入,即不支持在语法`cell = Cell(arg_name=value)`。
-当前不支持在网络调用`Cell`及其子类相关属性和接口,除非是在`Cell`自己的`contrcut`中通过`self`调用。
+当前不支持在网络调用`Cell`及其子类相关属性和接口,除非是在`Cell`自己的`construct`中通过`self`调用。
`Cell`定义可参考文档:
diff --git a/docs/mindspore/programming_guide/requirements.txt b/docs/mindspore/programming_guide/requirements.txt
index 6d8cd70439820e16bc32c4abc93e948ba81dc01b..49a77fdec3a5c745edd40eaa223883c31500e975 100644
--- a/docs/mindspore/programming_guide/requirements.txt
+++ b/docs/mindspore/programming_guide/requirements.txt
@@ -1,7 +1,8 @@
sphinx >= 2.2.1, <= 2.4.4
+docutils == 0.16
myst_parser == 0.14.0
sphinx-markdown-tables
-sphinx_rtd_theme
+sphinx_rtd_theme == 0.5.2
numpy
nbsphinx
IPython
diff --git a/docs/mindspore/programming_guide/source_en/conf.py b/docs/mindspore/programming_guide/source_en/conf.py
index fd02f9f76234625b3cef358215c56c832dd01f7f..86b4e280cfc83128d9764621de8d0b84939e7fba 100644
--- a/docs/mindspore/programming_guide/source_en/conf.py
+++ b/docs/mindspore/programming_guide/source_en/conf.py
@@ -11,10 +11,10 @@
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
+import shutil
import IPython
import re
import sys
-import nbsphinx as nbs
# -- Project information -----------------------------------------------------
@@ -60,23 +60,19 @@ pygments_style = 'sphinx'
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
+# Modify layout.html for sphinx_rtd_theme.
+import sphinx_rtd_theme
+layout_target = os.path.join(os.path.dirname(sphinx_rtd_theme.__file__), 'layout.html')
+layout_src = '../../../../resource/_static/layout.html'
+if os.path.exists(layout_target):
+ os.remove(layout_target)
+shutil.copy(layout_src, layout_target)
html_static_path = ['_static']
-# Remove extra outputs for nbsphinx extension.
-nbsphinx_source_re = re.compile(r"(app\.connect\('html-collect-pages', html_collect_pages\))")
-nbsphinx_math_re = re.compile(r"(\S.*$)")
-mod_path = os.path.abspath(nbs.__file__)
-with open(mod_path, "r+", encoding="utf8") as f:
- contents = f.readlines()
- for num, line in enumerate(contents):
- _content_re = nbsphinx_source_re.search(line)
- if _content_re and "#" not in line:
- contents[num] = nbsphinx_source_re.sub(r"# \g<1>", line)
- if "mathjax_config = app.config" in line and "#" not in line:
- contents[num:num+10] = [nbsphinx_math_re.sub(r"# \g<1>", i) for i in contents[num:num+10]]
- break
- exec("".join(contents), nbs.__dict__)
+sys.path.append(os.path.abspath('../../../../resource/sphinx_ext'))
+import anchor_mod
+import nbsphinx_mod
sys.path.append(os.path.abspath('../../../../resource/search'))
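The conf.py change above overrides the installed sphinx_rtd_theme `layout.html` by deleting the packaged copy and copying a project-local template over it. A self-contained sketch of that replace-a-packaged-file pattern, using temporary directories to stand in for the installed theme package (so it does not require sphinx_rtd_theme):

```python
import os
import shutil
import tempfile

def override_theme_file(theme_dir: str, local_src: str, name: str = "layout.html") -> str:
    """Replace `name` inside an installed theme directory with a project-local copy."""
    target = os.path.join(theme_dir, name)
    if os.path.exists(target):
        os.remove(target)           # drop the packaged template first
    shutil.copy(local_src, target)  # then install the customized one
    return target

# Throwaway directories standing in for the theme package and the docs repo.
theme_dir = tempfile.mkdtemp()
src_dir = tempfile.mkdtemp()
local_src = os.path.join(src_dir, "layout.html")
with open(local_src, "w", encoding="utf8") as f:
    f.write("<!-- customized layout -->")

target = override_theme_file(theme_dir, local_src)
with open(target, encoding="utf8") as f:
    print(f.read())  # -> <!-- customized layout -->
```

In the real conf.py the theme directory comes from `os.path.dirname(sphinx_rtd_theme.__file__)`, so the override is applied to whatever version of the theme is installed.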
diff --git a/docs/mindspore/programming_guide/source_en/control_flow.md b/docs/mindspore/programming_guide/source_en/control_flow.md
index 834d3324854133e72260e81196187e5712b73940..f8ec74204b81fe51912a5fac045f7293d63b3687 100644
--- a/docs/mindspore/programming_guide/source_en/control_flow.md
+++ b/docs/mindspore/programming_guide/source_en/control_flow.md
@@ -518,7 +518,7 @@ The following table lists the side effect operators that are not supported in th
| ScatterMul |
| ScatterNdAdd |
| ScatterNdSub |
-| ScatterNdUpadte |
+| ScatterNdUpdate |
| ScatterNonAliasingAdd |
| ScatterSub |
| ScatterUpdate |
diff --git a/docs/mindspore/programming_guide/source_en/convert_dataset.ipynb b/docs/mindspore/programming_guide/source_en/convert_dataset.ipynb
index b5aaab73df3b24459bcd45e0117d853d007c41ac..4fa8a78e2af016c1c5fdca981a7ac2a86cec646b 100644
--- a/docs/mindspore/programming_guide/source_en/convert_dataset.ipynb
+++ b/docs/mindspore/programming_guide/source_en/convert_dataset.ipynb
@@ -189,6 +189,7 @@
" This example will generate `test.mindrecord0`, `test.mindrecord0.db`, `test.mindrecord1`, `test.mindrecord1.db`, `test.mindrecord2`, `test.mindrecord2.db`, `test.mindrecord3`, `test.mindrecord3.db`, totally eight files, called MindRecord datasets. `test.mindrecord0` and `test.mindrecord0.db` are collectively referred to as a MindRecord file, where `test.mindrecord0` is the data file and `test.mindrecord0.db` is the index file.\n",
"\n",
" **Interface Description:**\n",
+ " - `FileWriter`: If the parameter shard_num > 1, the original dataset will be saved to shard_num of mindrecord files and each mindrecord file will save the metadata information of adjacent mindrecord files. Then, when using the `MindDataset` interface to read the mindrecord dataset, you can read all shard_num of mindrecord files through `MindDataset(dataset_files=\"./test.mindrecord0\")` and you can read only `test.mindrecord0` mindrecord file through `MindDataset(dataset_files=[\"./test.mindrecord0\"])`.\n",
" - `write_raw_data`: write data to memory.\n",
" - `commit`: write data in memory to disk.\n",
"\n",
@@ -439,4 +440,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
-}
\ No newline at end of file
+}
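To illustrate the `FileWriter`/`MindDataset` sharding behavior described in the notebook change above — shard_num data files, sibling index files that know about their neighbors, and string-vs-list read semantics — here is a toy pure-Python sketch. It mimics only the described behavior, not MindRecord's actual on-disk format:

```python
import json
import os
import tempfile

def write_shards(records, shard_num, prefix):
    """Split `records` round-robin into shard_num data files plus per-shard index files."""
    for i in range(shard_num):
        data_path = f"{prefix}{i}"
        with open(data_path, "w") as f:
            json.dump(records[i::shard_num], f)
        # Each index file lists all sibling shards, mimicking "adjacent file metadata".
        with open(data_path + ".db", "w") as f:
            json.dump({"shards": [f"{prefix}{j}" for j in range(shard_num)]}, f)

def read_dataset(dataset_files):
    """A string follows the index and reads every shard; a list reads only those files."""
    if isinstance(dataset_files, str):
        with open(dataset_files + ".db") as f:
            files = json.load(f)["shards"]
    else:
        files = dataset_files
    out = []
    for path in files:
        with open(path) as f:
            out.extend(json.load(f))
    return out

prefix = os.path.join(tempfile.mkdtemp(), "test.mindrecord")
write_shards(list(range(8)), 4, prefix)
print(len(read_dataset(prefix + "0")))    # string form: all shards -> 8 records
print(len(read_dataset([prefix + "0"])))  # list form: one shard -> 2 records
```

The function names and the round-robin split are illustrative assumptions; only the string-vs-list reading convention is taken from the documented interface.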
diff --git a/docs/mindspore/programming_guide/source_en/custom_debugging_info.md b/docs/mindspore/programming_guide/source_en/custom_debugging_info.md
index de3c59a617173ef509d310de0552d42796d6083a..9b390ce86af2bba69338273f3b3515832d5f0aab 100644
--- a/docs/mindspore/programming_guide/source_en/custom_debugging_info.md
+++ b/docs/mindspore/programming_guide/source_en/custom_debugging_info.md
@@ -344,8 +344,10 @@ MindSpore uses glog to output logs. The following environment variables are comm
Logs of C++ and Python will be output to different files. The file name of C++ log complies with the naming rule of `GLOG` log file. Here, the name is `mindspore.MachineName.UserName.log.LogLevel.Timestamp`. The file name of Python log is `mindspore.log`.
`GLOG_log_dir` can only contains characters such as uppercase letters, lowercase letters, digits, "-", "_" and "/".
-- `GLOG_log_max`
- Each log file's max size is 50 MB by default. But we can change it by set this environment variable. When the log file reaches the max size, the next logs will be written to the new log file.
+- `GLOG_stderrthreshold`
+
+ When logs are output to a file, the log module also prints them to the screen. This environment variable is used to control the log level that is printed to the screen in this scenario.
+ The default value is 2, indicating the WARNING level. The values are as follows: 0: DEBUG; 1: INFO; 2: WARNING; 3: ERROR; 4: CRITICAL.
- `MS_SUBMODULE_LOG_v`
@@ -354,11 +356,6 @@ MindSpore uses glog to output logs. The following environment variables are comm
The specified sub module log level will overwrite the global log level. The meaning of sub module log level is the same as `GLOG_v`, the sub modules of MindSpore are categorized by source directory is shown in the below table.
E.g. when set `GLOG_v=1 MS_SUBMODULE_LOG_v="{PARSER:2,ANALYZER:2}"` then log levels of `PARSER` and `ANALYZER` are WARNING, other modules' log levels are INFO.
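The `MS_SUBMODULE_LOG_v` value shown above is a brace-wrapped mapping of submodule names to log levels. A small sketch of how such a value could be parsed (an illustration of the documented format, not MindSpore's actual parser):

```python
def parse_submodule_log_v(spec: str) -> dict:
    """Parse an MS_SUBMODULE_LOG_v-style value such as "{PARSER:2,ANALYZER:2}"."""
    body = spec.strip().strip("{}")
    levels = {}
    for item in filter(None, (part.strip() for part in body.split(","))):
        name, _, level = item.partition(":")
        levels[name.strip()] = int(level)  # 0-DEBUG 1-INFO 2-WARNING 3-ERROR
    return levels

print(parse_submodule_log_v("{PARSER:2,ANALYZER:2}"))
# -> {'PARSER': 2, 'ANALYZER': 2}
```

Submodules not listed in the mapping keep the global `GLOG_v` level, as the example in the text describes.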
-- `GLOG_stderrthreshold`
-
- The log module will print logs to the screen when these logs are output to a file. This environment variable is used to control the log level printed to the screen in this scenario.
- The default value is 2, indicating the WARNING level. The values are as follows: 0: DEBUG; 1: INFO; 2: WARNING; 3: ERROR; 4: CRITICAL.
-
Sub modules of MindSpore grouped by source directory:
| Source Files | Sub Module Name |
@@ -388,4 +385,16 @@ Sub modules of MindSpore grouped by source directory:
| mindspore/core/gvar | COMMON |
| mindspore/core/ | CORE |
+- `GLOG_log_max`
+
+ It is used to control the size of the MindSpore C++ module log files. The default maximum size is 50 MB, and it can be changed through this environment variable. When the log file currently being written exceeds the maximum size, newly output log content is written to a new log file.
+
+- `logger_maxBytes`
+
+ It is used to control the size of the MindSpore Python module log files. The default is 52428800 bytes (50 MB).
+
+- `logger_backupCount`
+
+ It is used to control the number of MindSpore Python module log files. The default is 30.
+
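As a minimal sketch of using the variables above (the values and the log directory are illustrative assumptions, not recommendations), they can be set from Python before `mindspore` is imported, since glog reads the environment once at initialization:

```python
import os

# Illustrative values only. Set these before `mindspore` is imported,
# since glog reads the environment once at initialization.
os.environ['GLOG_v'] = '1'                  # global log level: INFO
os.environ['GLOG_stderrthreshold'] = '3'    # only ERROR and above reach the screen
os.environ['GLOG_log_dir'] = '/tmp/ms_log'  # hypothetical log directory
os.environ['logger_maxBytes'] = '52428800'  # Python log files roll over at 50 MB
os.environ['logger_backupCount'] = '30'     # keep at most 30 rotated log files
os.environ['MS_SUBMODULE_LOG_v'] = '{PARSER:2,ANALYZER:2}'  # per-module overrides

# import mindspore  # imported only after the environment is configured
```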
> glog does not support log rotation. To control the disk space occupied by log files, use the log file management tool provided by the operating system, for example, logrotate on Linux.
diff --git a/docs/mindspore/programming_guide/source_en/custom_operator_cpu.md b/docs/mindspore/programming_guide/source_en/custom_operator_cpu.md
index 62b7cf66812569752981cab82fd2a8563929d427..26e719e17b308ccbd96d3f7faa4d846d4d57a5ac 100644
--- a/docs/mindspore/programming_guide/source_en/custom_operator_cpu.md
+++ b/docs/mindspore/programming_guide/source_en/custom_operator_cpu.md
@@ -105,7 +105,7 @@ void TransposeCPUFwdKernel::InitKernel(const CNodePtr &kernel_node) {
- The functions in the class `AnfRuntimeAlgorithm` implement various operations on operator nodes. `shape_` represents the shape of the first input of the operator. `axis_` represents the attribute "perm" of the operator.
- The parameter "perm" of the`Transpose` operator's primitive is as an input, but "perm" is actually considered as the attribute of the operation when parsing.
-> For details of the class `AnfRuntimeAlgorithm`, please refer to the declaration in MindSpore source codes under [mindspore/ccsrc/backend/session/anf_runtime_algorithm.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/backend/session/anf_runtime_algorithm.h).
+> For details of the class `AnfRuntimeAlgorithm`, please refer to the declaration in MindSpore source codes under [mindspore/ccsrc/backend/common/session/anf_runtime_algorithm.h](https://gitee.com/mindspore/mindspore/blob/master/mindspore/ccsrc/backend/common/session/anf_runtime_algorithm.h).
The definition of the function `Launch` in the source file is as follows: First, get the address of each input and output in turn, and then transform the dimension according to `axis_`, and assign the value to the space pointed to by the output address.
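The index transformation described above can be sketched in Python. This is a simplified, NumPy-based stand-in for the C++ `Launch` body, not the kernel's actual code; the function name is illustrative:

```python
import numpy as np

def transpose_launch(x, axis):
    """Simplified sketch of the Transpose kernel's Launch step: for every
    output element, compute the source index by permuting the output
    coordinates with `axis`, then copy the value to the output buffer."""
    out_shape = tuple(x.shape[a] for a in axis)
    out = np.empty(out_shape, dtype=x.dtype)
    for out_idx in np.ndindex(out_shape):
        # Source dimension d receives the output coordinate at the
        # position where d appears in `axis`.
        src_idx = tuple(out_idx[axis.index(d)] for d in range(x.ndim))
        out[out_idx] = x[src_idx]
    return out

x = np.arange(6).reshape(2, 3)
print(transpose_launch(x, (1, 0)))  # same result as x.T
```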
diff --git a/docs/mindspore/programming_guide/source_en/cv_mobilenetv2_fine_tune.md b/docs/mindspore/programming_guide/source_en/cv_mobilenetv2_fine_tune.md
index afe5f0da3891f9e7d16f0ab17a95828e88181541..e3bde58dbb567612911e33aeb48a6f4792ee32dd 100644
--- a/docs/mindspore/programming_guide/source_en/cv_mobilenetv2_fine_tune.md
+++ b/docs/mindspore/programming_guide/source_en/cv_mobilenetv2_fine_tune.md
@@ -253,7 +253,7 @@ sh run_eval.sh [PLATFORM] [DATASET_PATH] [PRETRAIN_CKPT_PATH]
## Loading Fine-Tuning Training
-Only `train.py` can be run on Windows when MobileNetV2 is used for fine-tuning training. You can run the shell script `run_train.sh` and input [parameters](https://www.mindspore.cn/docs/programming_guide/en/master/cv_mobilenetv2_fine_tune.html#id8) on Linux when MobileNetV2 is used for fine-tuning training.
+Only `train.py` can be run on Windows when MobileNetV2 is used for fine-tuning training. You can run the shell script `run_train.sh` and input [parameters](https://www.mindspore.cn/docs/programming_guide/en/master/cv_mobilenetv2_fine_tune.html#parameter-description) on Linux when MobileNetV2 is used for fine-tuning training.
The Windows system outputs information to an interactive command line. When running `run_train.sh` on the Linux system, use `&> ` at the end of the command line to write the standard output and error output to the log file. After the fine-tuning is successful, training starts. The training time and loss of each epoch are continuously written into the `./train/rank*/log*.log` file. If the fine-tuning fails, an error message is recorded in the preceding log file.
@@ -383,7 +383,7 @@ The Windows system outputs information to an interactive command line. When runn
### Validating the Model
-Set mandatory [parameters](https://www.mindspore.cn/docs/programming_guide/en/master/cv_mobilenetv2_fine_tune.html#id8) when using the validation set to test model performance. The default value of `--platform` is `Ascend`. You can set it to `CPU` or `GPU`. Finally, the standard output and error output are displayed in the interactive command line or written to the `eval.log` file.
+Set mandatory [parameters](https://www.mindspore.cn/docs/programming_guide/en/master/cv_mobilenetv2_fine_tune.html#parameter-description) when using the validation set to test model performance. The default value of `--platform` is `Ascend`. You can set it to `CPU` or `GPU`. Finally, the standard output and error output are displayed in the interactive command line or written to the `eval.log` file.
```bash
# Windows/Linux with Python
diff --git a/docs/mindspore/programming_guide/source_en/dataset_conversion.md b/docs/mindspore/programming_guide/source_en/dataset_conversion.md
index 62f8d59ab58a280e22be5cd60a68ea3ac4695e98..a1557e30344cbc179802ebaca8b741665f04b31b 100644
--- a/docs/mindspore/programming_guide/source_en/dataset_conversion.md
+++ b/docs/mindspore/programming_guide/source_en/dataset_conversion.md
@@ -78,7 +78,7 @@ Create a MindRecord file containing 100 records, whose samples include the `file
3. Read MindRecord using `MindDataset`.
```python
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE)
+ data_set = ds.MindDataset(dataset_files=MINDRECORD_FILE)
decode_op = vision.Decode()
data_set = data_set.map(operations=decode_op, input_columns=["data"], num_parallel_workers=2)
count = 0
@@ -161,7 +161,7 @@ Create a MindRecord file containing 100 records, whose samples include eight fie
3. Read MindRecord using `MindDataset`.
```python
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE)
+ data_set = ds.MindDataset(dataset_files=MINDRECORD_FILE)
count = 0
for item in data_set.create_dict_iterator():
count += 1
@@ -273,7 +273,7 @@ You can use the `Cifar10ToMR` class to convert the original CIFAR-10 data to Min
4. Read MindRecord using `MindDataset`.
```python
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE)
+ data_set = ds.MindDataset(dataset_files=MINDRECORD_FILE)
decode_op = vision.Decode()
data_set = data_set.map(operations=decode_op, input_columns=["data"], num_parallel_workers=2)
count = 0
@@ -335,7 +335,7 @@ You can use the `ImageNetToMR` class to convert the original ImageNet data (imag
4. Read MindRecord using `MindDataset`.
```python
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE)
+ data_set = ds.MindDataset(dataset_files=MINDRECORD_FILE)
decode_op = vision.Decode()
data_set = data_set.map(operations=decode_op, input_columns=["image"], num_parallel_workers=2)
count = 0
@@ -396,7 +396,7 @@ Create a CSV file containing 5 records, convert the CSV file to MindRecord using
3. Read MindRecord using `MindDataset`.
```python
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE)
+ data_set = ds.MindDataset(dataset_files=MINDRECORD_FILE)
count = 0
for item in data_set.create_dict_iterator(output_numpy=True):
count += 1
@@ -529,7 +529,7 @@ Use TensorFlow to create a TFRecord file and convert the file to MindRecord usin
4. Read MindRecord using `MindDataset`.
```python
- data_set = ds.MindDataset(dataset_file=MINDRECORD_FILE)
+ data_set = ds.MindDataset(dataset_files=MINDRECORD_FILE)
decode_op = vision.Decode()
data_set = data_set.map(operations=decode_op, input_columns=["image_bytes"], num_parallel_workers=2)
count = 0
diff --git a/docs/mindspore/programming_guide/source_en/distributed_training_gpu.md b/docs/mindspore/programming_guide/source_en/distributed_training_gpu.md
index 91990b4cc0cdd65fcbd0848612d805045709fbe3..bde4c011b5935c2a29f1fccb1b8eac4dc543a109 100644
--- a/docs/mindspore/programming_guide/source_en/distributed_training_gpu.md
+++ b/docs/mindspore/programming_guide/source_en/distributed_training_gpu.md
@@ -24,12 +24,6 @@ The method of downloading and loading the dataset: .
-- `NCCL-2.7.6`: Nvidia collective communication library.
-
- Download NCCL-2.7.6 from .
-
- For details about how to install NCCL, see the official tutorial: .
-
- Password-free login between hosts (required for multi-host training). If multiple hosts are involved in the training, you need to configure password-free login between them. The procedure is as follows:
1. Ensure that the same user is used to log in to each host. (The root user is not recommended.)
2. Run the `ssh-keygen -t rsa -P ""` command to generate a key.
diff --git a/docs/mindspore/programming_guide/source_en/dump_in_graph_mode.md b/docs/mindspore/programming_guide/source_en/dump_in_graph_mode.md
index 7ebfc9ca16a9a9b25cf4362dfaf9a52b64cbbd3b..187a24fd36d6e5003a38ae661677fafac7f37f20 100644
--- a/docs/mindspore/programming_guide/source_en/dump_in_graph_mode.md
+++ b/docs/mindspore/programming_guide/source_en/dump_in_graph_mode.md
@@ -409,7 +409,7 @@ Large networks (such as Bert Large) will cause memory overflow when using synchr
- `kernels`: List of operator names. Turn on the IR save switch `context.set_context(save_graphs=True)` and execute the network to obtain the operator names from the generated `trace_code_graph_{graph_id}` IR file. `kernels` only supports TBE operators, AiCPU operators and communication operators. If `kernels` is set to the name of a communication operator, the data of that operator's input operators will be dumped. For details, please refer to [Saving IR](https://www.mindspore.cn/docs/programming_guide/en/master/design/mindir.html#saving-ir).
- `support_device`: Supported devices, default setting is `[0,1,2,3,4,5,6,7]`. You can specify specific device ids to dump specific device data.
- `enable`: Enable Asynchronous Dump. If synchronous dump and asynchronous dump are enabled at the same time, only synchronous dump will take effect.
- - `op_debug_mode`: 0: disable overflow check function; 1: enable AiCore overflow check; 2: enable Atomic overflow check; 3: enable all overflow check function. If it is not set to 0, only the data of the overflow operator will be dumped.
+ - `op_debug_mode`: Reserved field, set to 0.
- `file_format`: Dump file type. It can be either `npy` or `bin`. `npy`: data will be dumped in npy files in host format. `bin`: data will be dumped in protobuf files in device format and needs to be transformed using the provided data analysis tool before it can be parsed. Please refer to [Asynchronous Dump Data Analysis Sample](#asynchronous-dump-data-analysis-sample) for details. The default value is `bin`.
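Assembled from the options above, an asynchronous dump configuration file might look like the sketch below. The field names come from this list, but the grouping and the example values (path, net name, operator name) are assumptions; check the full Dump configuration reference for the authoritative layout:

```json
{
    "common_dump_settings": {
        "dump_mode": 1,
        "path": "/tmp/async_dump",
        "net_name": "ResNet50",
        "iteration": "0",
        "kernels": ["Default/Conv2D-op12"],
        "support_device": [0, 1, 2, 3, 4, 5, 6, 7],
        "op_debug_mode": 0,
        "file_format": "npy"
    },
    "async_dump_settings": {
        "enable": true
    }
}
```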
2. Set Dump environment.
diff --git a/docs/mindspore/programming_guide/source_en/forward_value_and_grad.md b/docs/mindspore/programming_guide/source_en/forward_value_and_grad.md
deleted file mode 100644
index 4b878ed88bda265229a8e32b79867eda8ce87a07..0000000000000000000000000000000000000000
--- a/docs/mindspore/programming_guide/source_en/forward_value_and_grad.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# Forward Value And Grad
-
-`Ascend` `GPU` `CPU` `Model Running`
-
-
-
-## Overview
-
-ForwardValueAndGrad is used to generate the forward value and backend gradient of the input network. The `get_all`, `get_by_list`, and `sens_param` parameters are used to control the gradient calculation method. For details, see [mindspore API](https://www.mindspore.cn/docs/api/en/master/api_python/nn/mindspore.nn.ForwardValueAndGrad.html).
-
-The following is an example of using ForwardValueAndGrad.
-
-## First-order Derivation
-
-The first-order derivative method of MindSpore is `mindspore.nn.ForwardValueAndGrad (network, weights=None, get_all=False, get_by_list=False, sens_param=False)`. When `get_all` is set to `False`, the first input derivative is computed. When `get_all` is set to `True`, all input derivatives are computed. When `get_by_list` is set to `False`, weight derivation is not performed. When `get_by_list` is set to `True`, weight derivation is performed. `sens_param` scales the output value of the network to change the final gradient. Therefore, its dimension is consistent with the output dimension. The following uses the first-order derivation of the MatMul operator for in-depth analysis.
-
-For details about the complete sample code, see [First-order Derivation Sample Code](https://gitee.com/mindspore/docs/tree/master/docs/sample_code/high_order_differentiation/first_order).
-
-### Input Derivation
-
-The input derivation code is as follows:
-
-```python
-import numpy as np
-import mindspore.context as context
-import mindspore.nn as nn
-import mindspore.ops as ops
-from mindspore import Tensor
-from mindspore import ParameterTuple, Parameter
-from mindspore import dtype as mstype
-context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
-class Net(nn.Cell):
- def __init__(self):
- super(Net, self).__init__()
- self.matmul = ops.MatMul()
- self.z = Parameter(Tensor(np.array([1.0], np.float32)), name='z')
- def construct(self, x, y):
- x = x * self.z
- out = self.matmul(x, y)
- return out
-
-class ForwardValueAndGradWrtX(nn.Cell):
- def __init__(self, net):
- super(ForwardValueAndGradWrtX, self).__init__()
- self.net = net
- self.grad = nn.ForwardValueAndGrad(self.net)
- def construct(self, x, y):
- ret = self.grad(x, y)
- return ret
-
-x = Tensor([[0.8, 0.6, 0.2], [1.8, 1.3, 1.1]], dtype=mstype.float32)
-y = Tensor([[0.11, 3.3, 1.1], [1.1, 0.2, 1.4], [1.1, 2.2, 0.3]], dtype=mstype.float32)
-output = ForwardValueAndGradWrtX(Net())(x, y)
-print(output)
-```
-
-The output is as follows:
-
-```text
-(Tensor(shape=[2, 3], dtype=Float32, value=
-[[9.68000054e-01, 3.20000029e+00, 1.78000009e+00],
- [2.83800006e+00, 8.61999989e+00, 4.13000011e+00]]), Tensor(shape=[2, 3], dtype=Float32, value=
-[[4.50999975e+00, 2.70000005e+00, 3.60000014e+00],
- [4.50999975e+00, 2.70000005e+00, 3.60000014e+00]]))
-```
-
-If the derivatives of the `x` and `y` inputs are considered, you only need to set `self.grad = nn.ForwardValueAndGrad(self.net, get_all=True)` in `ForwardValueAndGradWrtX`.
-
-### Weight Derivation
-
-If the derivation of weights is considered, change `ForwardValueAndGradWrtX` to the following:
-
-```python
-class ForwardValueAndGradWrtX(nn.Cell):
- def __init__(self, net):
- super(ForwardValueAndGradWrtX, self).__init__()
- self.net = net
- self.params = ParameterTuple(net.trainable_params())
- self.grad = nn.ForwardValueAndGrad(self.net, weights=self.params, get_by_list=True)
- def construct(self, x, y):
- ret = self.grad(x, y)
- return ret
-```
-
-```python
-output = ForwardValueAndGradWrtX(Net())(x, y)
-print(output)
-```
-
-The output is as follows:
-
-```text
-(Tensor(shape=[2, 3], dtype=Float32, value=
-[[9.68000054e-01, 3.20000029e+00, 1.78000009e+00]
- [2.83800006e+00, 8.61999989e+00, 4.13000011e+00]]), (Tensor(shape=[1], dtype=Float32, value= [2.15359993e+01]),))
-```
-
-### Gradient Value Scaling
-
-You can use the `sens_param` parameter to control the scaling of the gradient value.
-
-```python
-class ForwardValueAndGradWrtX(nn.Cell):
- def __init__(self, net):
- super(ForwardValueAndGradWrtX, self).__init__()
- self.net = net
- self.grad = nn.ForwardValueAndGrad(self.net, sens_param=True)
- self.grad_wrt_output = Tensor([[0.1, 0.6, 0.2], [0.8, 1.3, 1.1]], dtype=mstype.float32)
- def construct(self, x, y):
- ret = self.grad(x, y, self.grad_wrt_output)
- return ret
-```
-
-```python
-output = ForwardValueAndGradWrtX(Net())(x, y)
-print(output)
-```
-
-The output is as follows:
-
-```text
-(Tensor(shape=[2, 3], dtype=Float32, value=
-[[9.68000054e-01, 3.20000029e+00, 1.78000009e+00],
- [2.83800006e+00, 8.61999989e+00, 4.13000011e+00]]), Tensor(shape=[2, 3], dtype=Float32, value=
-[[2.21099997e+00, 5.09999990e-01, 1.49000001e+00],
- [5.58799982e+00, 2.68000007e+00, 4.07000017e+00]]))
-```
-
-`self.grad_wrt_output` may be denoted as the following form:
-
-```python
-self.grad_wrt_output = Tensor([[s1, s2, s3], [s4, s5, s6]])
-```
-
-The output value after scaling is the product of the original output value and the element corresponding to `self.grad_wrt_output`.
diff --git a/docs/mindspore/programming_guide/source_en/images/dot_to_png.png b/docs/mindspore/programming_guide/source_en/images/dot_to_png.png
new file mode 100644
index 0000000000000000000000000000000000000000..9689a1e575c9900fbde64b4af206cc48ebad6dbb
Binary files /dev/null and b/docs/mindspore/programming_guide/source_en/images/dot_to_png.png differ
diff --git a/docs/mindspore/programming_guide/source_en/index.rst b/docs/mindspore/programming_guide/source_en/index.rst
index 082bb0c2bad15df4e7d3a7249ad7d85a1df92098..92198ea4a849f3682c8d95fc5dea66f45f94dd39 100644
--- a/docs/mindspore/programming_guide/source_en/index.rst
+++ b/docs/mindspore/programming_guide/source_en/index.rst
@@ -78,6 +78,7 @@ MindSpore Programming Guide
constexpr
hypermap
optim
+ train_and_eval
.. toctree::
:glob:
@@ -816,18 +817,6 @@ MindSpore Programming Guide