diff --git a/tutorials/training/source_en/advanced_use/hub_tutorial.md b/tutorials/training/source_en/advanced_use/hub_tutorial.md
index 963e0505d1140de1fefb38f5d4ea1271a32e78b6..83c4892957960216b807e14351b93f2d73304f41 100644
--- a/tutorials/training/source_en/advanced_use/hub_tutorial.md
+++ b/tutorials/training/source_en/advanced_use/hub_tutorial.md
@@ -5,11 +5,11 @@

 - [Submitting, Loading and Fine-tuning Models using MindSpore Hub](#submitting-loading-and-fine-tuning-models-using-mindspore-hub)
-    - [Overview](#overview)
-    - [How to submit models](#how-to-submit-models)
-    - [Steps](#steps)
-    - [How to load models](#how-to-load-models)
-    - [Model Fine-tuning](#model-fine-tuning)
+  - [Overview](#overview)
+  - [How to submit models](#how-to-submit-models)
+  - [Steps](#steps)
+  - [How to load models](#how-to-load-models)
+  - [Model Fine-tuning](#model-fine-tuning)

@@ -17,34 +17,69 @@

 ### Overview

-For algorithm developers who are interested in publishing models into MindSpore Hub, this tutorial introduces the specific steps to submit models using GoogleNet as an example. It also describes how to load/fine-tune MindSpore Hub models for application developers who aim to do inference/transfer learning on new dataset. In summary, this tutorial helps the algorithm developers submit models efficiently and enables the application developers to perform inference or fine-tuning using MindSpore Hub APIs quickly.
+MindSpore Hub is a pre-trained model application tool of the MindSpore ecosystem, which serves as a channel between model developers and application developers. It not only provides model developers with a convenient and fast channel for model submission, but also provides application developers with simple model loading and fine-tuning APIs. For model developers who are interested in publishing models into MindSpore Hub, this tutorial introduces the specific steps to submit models using GoogleNet as an example. It also describes how to load/fine-tune MindSpore Hub models for application developers who aim to do inference/transfer learning on a new dataset. In summary, this tutorial helps model developers submit models efficiently and enables application developers to perform inference or fine-tuning using MindSpore Hub APIs quickly.

 ### How to submit models

-We accept publishing models to MindSpore Hub via PR in `hub` repo. Here we use GoogleNet as an example to list the steps of model submission to MindSpore Hub.
+We accept publishing models to MindSpore Hub via PR in the [hub](https://gitee.com/mindspore/hub) repo. Here we use GoogleNet as an example to list the steps of model submission to MindSpore Hub.

 #### Steps

-1. Host your pre-trained model in a storage location where we are able to access.
+1. Host your pre-trained model in a storage location that we are able to access.

-2. Add a model generation python file called `mindspore_hub_conf.py` in your own repo using this [template](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/mindspore_hub_conf.py).
+2. Add a model generation python file called `mindspore_hub_conf.py` in your own repo using this [template](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/mindspore_hub_conf.py). The location of the `mindspore_hub_conf.py` file is shown below:

-3. Create a `{model_name}_{model_version}_{dataset}.md` file in `hub/mshub_res/assets` using this [template](https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/mindspore/gpu/0.6/alexnet_v1_cifar10.md). For each pre-trained model, please run the following command to obtain a hash value required at `asset-sha256` of this `.md` file:
+
+   ```shell script
+   googlenet
+   ├── src
+   │   ├── googlenet.py
+   ├── script
+   │   ├── run_train.sh
+   ├── train.py
+   ├── test.py
+   ├── mindspore_hub_conf.py
+   ```
+
+3. Create a `{model_name}_{model_version}_{dataset}.md` file in `hub/mshub_res/assets/mindspore/ascend/0.7` using this [template](https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/mindspore/ascend/0.7/googlenet_v1_cifar10.md). Here `ascend` refers to the hardware platform for the pre-trained model, and `0.7` indicates the MindSpore version. The structure of the `hub/mshub_res` folder is as follows:
+
+   ```shell script
+   hub
+   ├── mshub_res
+   │   ├── assets
+   │   │   ├── mindspore
+   │   │       ├── gpu
+   │   │       │   ├── 0.7
+   │   │       ├── ascend
+   │   │           ├── 0.7
+   │   │               ├── googlenet_v1_cifar10.md
+   │   ├── tools
+   │       ├── get_sha256.py
+   │       └── md_validator.py
+   ```
+
+   Note that it is required to fill in the `{model_name}_{model_version}_{dataset}.md` template by providing the `file-format`, `asset-link` and `asset-sha256` fields below, which refer to the model file format, the model storage location from step 1 and the model hash value, respectively. MindSpore Hub supports multiple model file formats including [MindSpore CKPT](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#checkpoint-configuration-policies), [AIR](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html), [MindIR](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-mindir-model), [ONNX](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html) and [MSLite](https://www.mindspore.cn/lite/tutorial/en/master/use/converter_tool.html).
+
+   ```shell script
+   file-format: ckpt
+   asset-link: https://download.mindspore.cn/model_zoo/official/cv/googlenet/goolenet_ascend_0.2.0_cifar10_official_classification_20200713/googlenet.ckpt
+   asset-sha256: 114e5acc31dad444fa8ed2aafa02ca34734419f602b9299f3b53013dfc71b0f7
+   ```
+
+   For each pre-trained model, please run the following command to obtain the hash value required at `asset-sha256` of this `.md` file. Here the pre-trained model `googlenet.ckpt` is downloaded from the storage location in step 1 and saved in the `mshub_res` folder. The output hash value is `114e5acc31dad444fa8ed2aafa02ca34734419f602b9299f3b53013dfc71b0f7`. A minimal sketch of this computation is shown after this list.

   ```python
   cd ../tools
   python get_sha256.py ../googlenet.ckpt
   ```

-4. Check the format of the markdown file locally using `hub/mshub_res/tools/md_validator.py` by running the following command:
+4. Check the format of the markdown file locally using `hub/mshub_res/tools/md_validator.py` by running the following command. The output is `All Passed`, which indicates that the format and content of the `.md` file meet the requirements.

   ```python
   python md_validator.py ../assets/mindspore/ascend/0.7/googlenet_v1_cifar10.md
   ```

-5. Create a PR in `mindspore/hub` repo.
+5. Create a PR in the `mindspore/hub` repo. See our [Contributor Wiki](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md) for more information about creating a PR.

-Once your PR is merged into master branch here, your model will show up in [MindSpore Hub Website](https://hub.mindspore.com/mindspore) within 24 hours. For more information, please refer to the [README](https://gitee.com/mindspore/hub/blob/master/mshub_res/README.md).
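+As mentioned in step 3, the hash is simply the SHA-256 digest of the model file. The snippet below is a minimal sketch of what `get_sha256.py` computes; the helper name `sha256_of` is illustrative only and is not an API of the `hub` repo:
+
+```python
+import hashlib
+
+def sha256_of(path, chunk_size=64 * 1024):
+    """Return the SHA-256 hex digest of a file, read in chunks."""
+    digest = hashlib.sha256()
+    with open(path, "rb") as f:
+        for chunk in iter(lambda: f.read(chunk_size), b""):
+            digest.update(chunk)
+    return digest.hexdigest()
+
+# Prints the value to place at `asset-sha256` in the `.md` file.
+print(sha256_of("../googlenet.ckpt"))
+```
+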
+Once your PR is merged into the master branch, your model will show up on the [MindSpore Hub Website](https://hub.mindspore.com/mindspore) within 24 hours. Please refer to the [README](https://gitee.com/mindspore/hub/blob/master/mshub_res/README.md) for more information about model submission.

 ### How to load models

@@ -62,9 +97,8 @@

 import mindspore
 from mindspore import context, Tensor, nn
 from mindspore.train.model import Model
 from mindspore.common import dtype as mstype
-from mindspore.dataset.transforms import Compose
+from mindspore.dataset.transforms import py_transforms
 from PIL import Image
-import mindspore.dataset.vision.py_transforms as py_transforms
 import cv2

 context.set_context(mode=context.GRAPH_MODE,
@@ -74,7 +108,7 @@ context.set_context(mode=context.GRAPH_MODE,

 model = "mindspore/ascend/0.7/googlenet_v1_cifar10"

 image = Image.open('cifar10/a.jpg')
-transforms = Compose([py_transforms.ToTensor()])
+transforms = py_transforms.ComposeOp([py_transforms.ToTensor()])

 # Initialize the number of classes based on the pre-trained model.
 network = mshub.load(model, num_classes=10)
@@ -84,23 +118,22 @@ out = network(transforms(image))

 ### Model Fine-tuning

-When loading a model with `mindspore_hub.load` API, we can add an extra argument to load the feature extraction part of the model only. So we can easily add new layers to perform transfer learning. *This feature can be found in the related model page when an extra argument (e.g., include_top) has been integrated into the model construction by the algorithm engineer.*
+When loading a model with the `mindspore_hub.load` API, we can add an extra argument to load only the feature extraction part of the model, which makes it easy to add new layers for transfer learning. This feature can be found in the related model page when an extra argument (e.g., include_top) has been integrated into the model construction by the model developer. The value of `include_top` is True or False, indicating whether to keep the top fully-connected layer of the network.

-We use Googlenet as example to illustrate how to load a model trained on ImageNet dataset and then perform transfer learning (re-training) on specific sub-task dataset. The main steps are listed below:
+We use GoogleNet as an example to illustrate how to load a model trained on the ImageNet dataset and then perform transfer learning (re-training) on a specific sub-task dataset. The main steps are listed below:

-1. Search the model of interest on [MindSpore Hub Website](https://hub.mindspore.com/mindspore) and get the related `url`.
+1. Search the model of interest on the [MindSpore Hub Website](https://hub.mindspore.com/mindspore) and get the related `url`.

-2. Load the model from MindSpore Hub using the `url`. *Note that the parameter `include_top` is provided by the model developer*.
+2. Load the model from MindSpore Hub using the `url`. Note that the parameter `include_top` is provided by the model developer.

   ```python
   import mindspore
-   from mindspore import nn
   from mindspore import context
   import mindspore_hub as mshub
-
+
   context.set_context(mode=context.GRAPH_MODE,
                       device_target="Ascend",
                       save_graphs=False)
-
+
   network = mshub.load('mindspore/ascend/0.7/googlenet_v1_cifar10', include_top=False)
   network.set_train(False)
   ```

@@ -108,14 +141,16 @@ We use Googlenet as example to illustrate how to load a model trained on ImageNe

 3. Add a new classification layer into current model architecture.
   ```python
+   from mindspore import nn
+
   # Check MindSpore Hub website to conclude that the last output shape is 1024.
   last_channel = 1024
-
+
   # The number of classes in target task is 26.
   num_classes = 26

   classification_layer = nn.Dense(last_channel, num_classes)
   classification_layer.set_train(True)
-
+
   train_network = nn.SequentialCell([network, classification_layer])
   ```

@@ -123,59 +158,59 @@ We use Googlenet as example to illustrate how to load a model trained on ImageNe

   ```python
   from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits
-
+
   # Wrap the backbone network with loss.
   loss_fn = SoftmaxCrossEntropyWithLogits()
   loss_net = nn.WithLossCell(train_network, loss_fn)
-
+
   # Create an optimizer.
-   optim = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), Tensor(lr), config.momentum, config.weight_decay)
-
+   # Momentum and Tensor come from mindspore.nn and mindspore respectively;
+   # lr and config are assumed to be defined as in the training script.
+   optim = Momentum(filter(lambda x: x.requires_grad, train_network.get_parameters()), Tensor(lr), config.momentum, config.weight_decay)
+
   train_net = nn.TrainOneStepCell(loss_net, optim)
   ```

-5. Create dataset and start fine-tuning.
+5. Create the dataset and start fine-tuning. As shown below, the new dataset used for fine-tuning is the garbage classification data located in the `/ssd/data/garbage/train` folder.

   ```python
   from src.dataset import create_dataset
-   from mindspore.train.serialization import _exec_save_checkpoint
-
+   from mindspore.train.serialization import save_checkpoint
+
   dataset = create_dataset("/ssd/data/garbage/train",
                            do_train=True,
                            batch_size=32)
-
+
   epoch_size = 15
   for epoch in range(epoch_size):
       for i, items in enumerate(dataset):
           data, label = items
           data = mindspore.Tensor(data)
           label = mindspore.Tensor(label)
-
+
           loss = train_net(data, label)
           print(f"epoch: {epoch}, loss: {loss}")
       # Save the ckpt file for each epoch.
       ckpt_path = f"./ckpt/garbage_finetune_epoch{epoch}.ckpt"
-       _exec_save_checkpoint(train_network, ckpt_path)
+       save_checkpoint(train_network, ckpt_path)
   ```

 6. Eval on test set.

   ```python
   from mindspore.train.serialization import load_checkpoint, load_param_into_net
-
+
   network = mshub.load('mindspore/ascend/0.7/googlenet_v1_cifar10', include_top=False)
   train_network = nn.SequentialCell([network, nn.Dense(last_channel, num_classes)])
-
+
   # Load a pre-trained ckpt file.
   ckpt_path = "./ckpt/garbage_finetune_epoch14.ckpt"
   trained_ckpt = load_checkpoint(ckpt_path)
   load_param_into_net(train_network, trained_ckpt)
-
+
   # Define loss and create model.
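+   # The loss function below is only needed to construct the Model wrapper;
+   # model.eval runs forward inference and does not update any weights.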
   loss = SoftmaxCrossEntropyWithLogits()
   model = Model(train_network, loss_fn=loss, metrics={'acc'})
-
+
-   eval_dataset = create_dataset("/ssd/data/garbage/train", do_train=False,
+   eval_dataset = create_dataset("/ssd/data/garbage/train", do_train=False,
                                  batch_size=32)
-
+
   res = model.eval(eval_dataset)
   print("result:", res, "ckpt=", ckpt_path)
-   ```
+   ```
\ No newline at end of file
diff --git a/tutorials/training/source_en/advanced_use/improve_model_security_nad.md b/tutorials/training/source_en/advanced_use/improve_model_security_nad.md
index d6e6415b312d110ca1f5b4d1fe6a3825c141c0bb..da32a4ffb2ba378a6e15078bbc687937471f8ca0 100644
--- a/tutorials/training/source_en/advanced_use/improve_model_security_nad.md
+++ b/tutorials/training/source_en/advanced_use/improve_model_security_nad.md
@@ -17,7 +17,7 @@

-
+

 ## Overview

@@ -31,7 +31,7 @@ At the beginning of AI algorithm design, related security threats are sometimes

 This section describes how to use MindArmour in adversarial attack and defense by taking the Fast Gradient Sign Method (FGSM) attack algorithm and Natural Adversarial Defense (NAD) algorithm as examples.

-> The current sample is for CPU, GPU and Ascend 910 AI processor. You can find the complete executable sample code at:
+> The current sample is for CPU, GPU and Ascend 910 AI processor. You can find the complete executable sample code at:
 > - `mnist_attack_fgsm.py`: contains attack code.
 > - `mnist_defense_nad.py`: contains defense code.

@@ -198,8 +198,8 @@ The LeNet model is used as an example. You can also create and train your own mo
     inputs = []
     labels = []
     for data in ds_test.create_tuple_iterator():
-        inputs.append(data[0].asnumpy().astype(np.float32))
-        labels.append(data[1].asnumpy())
+        inputs.append(data[0].astype(np.float32))
+        labels.append(data[1])
     test_inputs = np.concatenate(inputs)
     test_labels = np.concatenate(labels)
 ```
diff --git a/tutorials/training/source_en/advanced_use/lineage_and_scalars_comparision.md b/tutorials/training/source_en/advanced_use/lineage_and_scalars_comparision.md
index 919701dc4ae8c14397810a9109792613dffafd2a..a43087de2729c6c9407bcf6353ccc2f3d7c7cedb 100644
--- a/tutorials/training/source_en/advanced_use/lineage_and_scalars_comparision.md
+++ b/tutorials/training/source_en/advanced_use/lineage_and_scalars_comparision.md
@@ -13,7 +13,7 @@

-
+

 ## Overview

diff --git a/tutorials/training/source_en/advanced_use/migrate_3rd_scripts.md b/tutorials/training/source_en/advanced_use/migrate_3rd_scripts.md
index fb0ef76b027825e0cdb12626fb9569f1f2bc61e3..ef2b198f9f4ce6440a201aa93b8fe25f46edd844 100644
--- a/tutorials/training/source_en/advanced_use/migrate_3rd_scripts.md
+++ b/tutorials/training/source_en/advanced_use/migrate_3rd_scripts.md
@@ -19,7 +19,7 @@

-
+

 ## Overview

@@ -31,9 +31,9 @@ Before you start working on your scripts, prepare your operator assessment and h

 ### Operator Assessment

-Analyze the operators contained in the network to be migrated and figure out how does MindSpore support these operators based on the [Operator List](https://www.mindspore.cn/docs/en/master/operator_list.html).
+Analyze the operators contained in the network to be migrated and figure out how MindSpore supports these operators based on the [Operator List](https://www.mindspore.cn/doc/api_python/en/r1.0/operator_list.html).

-Take ResNet-50 as an example. 
The two major operators [Conv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) and [BatchNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) exist in the MindSpore Operator List. +Take ResNet-50 as an example. The two major operators [Conv](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore.nn.html#mindspore.nn.Conv2d) and [BatchNorm](https://www.mindspore.cn/doc/api_python/en/r1.0/mindspore.nn.html#mindspore.nn.BatchNorm2d) exist in the MindSpore Operator List. If any operator does not exist, you are advised to perform the following operations: @@ -59,17 +59,17 @@ Prepare the hardware environment, find a platform corresponding to your environm MindSpore differs from TensorFlow and PyTorch in the network structure. Before migration, you need to clearly understand the original script and information of each layer, such as shape. -> You can also use [MindConverter Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/mindconverter) to automatically convert the PyTorch network definition script to MindSpore network definition script. +> You can also use [MindConverter Tool](https://gitee.com/mindspore/mindinsight/tree/r1.0/mindinsight/mindconverter) to automatically convert the PyTorch network definition script to MindSpore network definition script. The ResNet-50 network migration and training on the Ascend 910 is used as an example. 1. Import MindSpore modules. - Import the corresponding MindSpore modules based on the required APIs. For details about the module list, see . + Import the corresponding MindSpore modules based on the required APIs. For details about the module list, see . 2. Load and preprocess a dataset. - Use MindSpore to build the required dataset. Currently, MindSpore supports common datasets. You can call APIs in the original format, `MindRecord`, and `TFRecord`. In addition, MindSpore supports data processing and data augmentation. For details, see the [Data Preparation](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/data_preparation.html). + Use MindSpore to build the required dataset. Currently, MindSpore supports common datasets. You can call APIs in the original format, `MindRecord`, and `TFRecord`. In addition, MindSpore supports data processing and data augmentation. For details, see the [Data Preparation](https://www.mindspore.cn/tutorial/training/en/r1.0/use/data_preparation/data_preparation.html). In this example, the CIFAR-10 dataset is loaded, which supports both single-GPU and multi-GPU scenarios. @@ -81,7 +81,7 @@ The ResNet-50 network migration and training on the Ascend 910 is used as an exa num_shards=device_num, shard_id=rank_id) ``` - Then, perform data augmentation, data cleaning, and batch processing. For details about the code, see . + Then, perform data augmentation, data cleaning, and batch processing. For details about the code, see . 3. Build a network. @@ -216,7 +216,7 @@ The ResNet-50 network migration and training on the Ascend 910 is used as an exa 6. Build the entire network. - The [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) network structure is formed by connecting multiple defined subnets. Follow the rule of defining subnets before using them and define all the subnets used in the `__init__` and connect subnets in the `construct`. 
+   The [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnet/src/resnet.py) network structure is formed by connecting multiple defined subnets. Follow the rule of defining subnets before using them and define all the subnets used in the `__init__` and connect subnets in the `construct`.

 7. Define a loss function and an optimizer.

@@ -237,7 +237,7 @@ The ResNet-50 network migration and training on the Ascend 910 is used as an exa
        loss_scale = FixedLossScaleManager(config.loss_scale, drop_overflow_update=False)
    ```

-   You can use a built-in assessment method of `Model` by setting the [metrics](https://www.mindspore.cn/tutorial/en/master/advanced_use/customized_debugging_information.html#mindspore-metrics) attribute.
+   You can use a built-in assessment method of `Model` by setting the [metrics](https://www.mindspore.cn/tutorial/training/en/r1.0/advanced_use/customized_debugging_information.html#mindspore-metrics) attribute.

    ```python
    model = Model(net, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale, metrics={'acc'})
    ```

@@ -266,15 +266,15 @@ The accuracy optimization process is as follows:

 #### On-Cloud Integration

-Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/use_on_the_cloud.html).
+Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/use_on_the_cloud.html).

 ### Inference Phase

-Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. Refer to the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html) for detailed steps.
+Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. Refer to the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/training/en/r1.0/use/multi_platform_inference.html) for detailed steps.

 ## Examples

-1. [Common dataset examples](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/loading_the_datasets.html)
+1. [Common dataset examples](https://www.mindspore.cn/tutorial/training/en/r1.0/use/data_preparation/loading_the_datasets.html)

-2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)
+2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo)
diff --git a/tutorials/training/source_en/advanced_use/mindinsight_commands.md b/tutorials/training/source_en/advanced_use/mindinsight_commands.md
index 8ed9fcbed9126ed1ea78140626d7b5bc2411b317..4eb356d366bd834ba9076e6d8fe6b485e3445077 100644
--- a/tutorials/training/source_en/advanced_use/mindinsight_commands.md
+++ b/tutorials/training/source_en/advanced_use/mindinsight_commands.md
@@ -13,7 +13,7 @@

-
+

 ## View the Command Help Information

diff --git a/tutorials/training/source_zh_cn/advanced_use/hub_tutorial.md b/tutorials/training/source_zh_cn/advanced_use/hub_tutorial.md
index 0be1d8efee78fa4f4457b5a3f1c872cc65d61744..5a880555d55e5e534ec6fb3967b1eca9fb2612f5 100644
--- a/tutorials/training/source_zh_cn/advanced_use/hub_tutorial.md
+++ b/tutorials/training/source_zh_cn/advanced_use/hub_tutorial.md
@@ -17,42 +17,77 @@

 ### 概述

-本教程以Googlenet为例,对想要将模型发布到MindSpore Hub的算法开发者介绍了模型上传步骤,也对想要使用MindSpore Hub模型进行推理或者微调的开发应用者描述了具体操作流程。总之,本教程可以帮助算法开发者有效地提交模型,并使得应用开发者利用MindSpore Hub的接口快速实现模型推理或微调。
+MindSpore Hub是MindSpore生态的预训练模型应用工具,作为连接模型开发者和应用开发者的渠道,它不仅向模型开发者提供了方便快捷的模型发布通道,而且向应用开发者提供了简单易用的模型加载和微调API。本教程以GoogleNet为例,对想要将模型发布到MindSpore Hub的模型开发者介绍了模型上传步骤,也对想要使用MindSpore Hub模型进行推理或者微调的应用开发者描述了具体操作流程。总之,本教程可以帮助模型开发者有效地提交模型,并使得应用开发者利用MindSpore Hub的接口快速实现模型推理或微调。

 ### 模型上传

-我们接收用户通过向`hub`仓提交PR的方式向MindSpore Hub发布模型。这里我们用Googlenet为例,列出将模型提交到MindSpore Hub的步骤。
+我们接收用户通过向 [hub](https://gitee.com/mindspore/hub) 仓提交PR的方式向MindSpore Hub发布模型。这里我们以GoogleNet为例,列出模型提交到MindSpore Hub的步骤。

 #### 步骤

-1. 将你的预训练模型托管在我们可以访问的存储位置。
+1. 将你的预训练模型托管在可以访问的存储位置。

-2. 按照 [模板](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/mindspore_hub_conf.py) 在你自己的代码仓中添加模型生成文件 `mindspore_hub_conf.py`。
+2. 按照 [模板](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/googlenet/mindspore_hub_conf.py) 在你自己的代码仓中添加模型生成文件 `mindspore_hub_conf.py`,文件放置的位置如下:

-3. 按照 [模板](https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/mindspore/gpu/0.6/alexnet_v1_cifar10.md) 在 `hub/mshub_res/assets` 中创建`{model_name}_{model_version}_{dataset}.md` 文件。对于每个预训练模型,执行以下命令,用来获得`.md`文件`asset-sha256` 处所需的哈希值:
+
+   ```shell script
+   googlenet
+   ├── src
+   │   ├── googlenet.py
+   ├── script
+   │   ├── run_train.sh
+   ├── train.py
+   ├── test.py
+   ├── mindspore_hub_conf.py
+   ```
+
+3. 按照 [模板](https://gitee.com/mindspore/hub/blob/master/mshub_res/assets/mindspore/ascend/0.7/googlenet_v1_cifar10.md) 在 `hub/mshub_res/assets/mindspore/ascend/0.7` 文件夹下创建`{model_name}_{model_version}_{dataset}.md` 文件,其中 `ascend` 为模型运行的硬件平台,`0.7` 为MindSpore的版本号,`hub/mshub_res`的目录结构为:
+
+   ```shell script
+   hub
+   ├── mshub_res
+   │   ├── assets
+   │   │   ├── mindspore
+   │   │       ├── gpu
+   │   │       │   ├── 0.7
+   │   │       ├── ascend
+   │   │           ├── 0.7
+   │   │               ├── googlenet_v1_cifar10.md
+   │   ├── tools
+   │       ├── get_sha256.py
+   │       └── md_validator.py
+   ```
+
+   注意,`{model_name}_{model_version}_{dataset}.md` 文件中需要补充如下所示的 `file-format`、`asset-link` 和 `asset-sha256` 信息,它们分别表示模型文件格式、模型存储位置(步骤1所得)和模型哈希值,其中MindSpore Hub支持的模型文件格式有 [MindSpore CKPT](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#checkpoint-configuration-policies),[AIR](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html),[MindIR](https://www.mindspore.cn/tutorial/zh-CN/master/use/saving_and_loading_model_parameters.html#export-mindir-model),[ONNX](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html) 和 [MSLite](https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html)。
+
+   ```shell script
+   file-format: ckpt
+   asset-link: https://download.mindspore.cn/model_zoo/official/cv/googlenet/goolenet_ascend_0.2.0_cifar10_official_classification_20200713/googlenet.ckpt
+   asset-sha256: 114e5acc31dad444fa8ed2aafa02ca34734419f602b9299f3b53013dfc71b0f7
+   ```
+
+   对于每个预训练模型,执行以下命令,用来获得`.md` 文件 `asset-sha256` 处所需的哈希值,其中 `googlenet.ckpt` 是从步骤1的存储位置处下载并保存到 `mshub_res` 文件夹的预训练模型,运行后输出的哈希值为 `114e5acc31dad444fa8ed2aafa02ca34734419f602b9299f3b53013dfc71b0f7`。

   ```python
-   cd ../tools
+   cd /hub/mshub_res/tools
   python get_sha256.py ../googlenet.ckpt
   ```

-4. 使用 `hub/mshub_res/tools/md_validator.py` 在本地核对`.md`文件的格式,执行的命令如下:
+4. 使用 `hub/mshub_res/tools/md_validator.py` 在本地核对`.md`文件的格式,执行以下命令,输出结果为 `All Passed`,表示 `.md` 文件的格式和内容均符合要求。

   ```python
   python md_validator.py ../assets/mindspore/ascend/0.7/googlenet_v1_cifar10.md
   ```

-5. 在 `mindspore/hub` 仓创建PR。
+5. 在 `mindspore/hub` 仓创建PR,详细创建方式可以参考[贡献者Wiki](https://gitee.com/mindspore/mindspore/blob/master/CONTRIBUTING.md)。

-一旦你的PR合并到 `mindspore/hub` 的master分支,你的模型将于24小时内在 [MindSpore Hub 网站](https://hub.mindspore.com/mindspore) 上显示。更多详细信息,请参考 [README](https://gitee.com/mindspore/hub/blob/master/mshub_res/README.md) 。
+一旦你的PR合并到 `mindspore/hub` 的master分支,你的模型将于24小时内在 [MindSpore Hub 网站](https://hub.mindspore.com/mindspore) 上显示。有关模型上传的更多详细信息,请参考 [README](https://gitee.com/mindspore/hub/blob/master/mshub_res/README.md) 。

-### 模型加载
+### 模型加载

 `mindspore_hub.load` API用于加载预训练模型,可以实现一行代码加载模型。主要的模型加载流程如下:

 - 在MindSpore Hub官网上搜索感兴趣的模型。

-  例如,想使用Googlenet对CIFAR-10数据集进行分类,可以在MindSpore Hub官网上使用关键词`GoogleNet`进行搜索。页面将会返回与Googlenet相关的所有模型。进入相关模型页面之后,获得详情页 `url`。
+  例如,想使用GoogleNet对CIFAR-10数据集进行分类,可以在MindSpore Hub官网上使用关键词`GoogleNet`进行搜索。页面将会返回与GoogleNet相关的所有模型。进入相关模型页面之后,获得详情页 `url`。

 - 使用`url`完成模型的加载,示例代码如下:

@@ -66,25 +101,26 @@

 from PIL import Image
 import cv2
 import mindspore.dataset.vision.py_transforms as py_transforms
-
+
 context.set_context(mode=context.GRAPH_MODE,
                     device_target="Ascend",
                     device_id=0)
-
+
 model = "mindspore/ascend/0.7/googlenet_v1_cifar10"
-
+
+# Test an image from CIFAR-10 dataset
 image = Image.open('cifar10/a.jpg')
 transforms = Compose([py_transforms.ToTensor()])
-
+
 # Initialize the number of classes based on the pre-trained model.
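+# num_classes is forwarded to the model-building function declared in the
+# model's mindspore_hub_conf.py, so it must match the dataset the checkpoint
+# was trained on (CIFAR-10 has 10 classes).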
 network = mshub.load(model, num_classes=10)
 network.set_train(False)
 out = network(transforms(image))
 ```

-### 模型微调
+### 模型微调

-在使用 `mindspore_hub.load` 进行模型加载时,可以增加一个额外的参数项只加载神经网络的特征提取部分。这样我们就能很容易地在之后增加一些新的层进行迁移学习。*当算法工程师将额外的参数(例如 include_top)添加到模型构造中时,可以在模型的详情页中找到这个功能。*
+在使用 `mindspore_hub.load` 进行模型加载时,可以增加一个额外的参数项只加载神经网络的特征提取部分。这样我们就能很容易地在之后增加一些新的层进行迁移学习。*当模型开发者将额外的参数(例如 include_top)添加到模型构造中时,可以在模型的详情页中找到这个功能。`include_top` 取值为True或者False,表示是否保留顶层的全连接层。*

 下面我们以GoogleNet为例,说明如何加载一个基于ImageNet的预训练模型,并在特定的子任务数据集上进行迁移学习(重训练)。主要的步骤如下:

@@ -94,13 +130,12 @@

   ```python
   import mindspore
-   from mindspore import nn
   from mindspore import context
   import mindspore_hub as mshub
-
+
   context.set_context(mode=context.GRAPH_MODE,
                       device_target="Ascend",
                       save_graphs=False)
-
+
   network = mshub.load('mindspore/ascend/0.7/googlenet_v1_cifar10', include_top=False)
   network.set_train(False)
   ```

@@ -108,14 +143,16 @@

 3. 在现有模型结构基础上增加一个与新任务相关的分类层。

   ```python
+   from mindspore import nn
+
   # Check MindSpore Hub website to conclude that the last output shape is 1024.
   last_channel = 1024
-
+
   # The number of classes in target task is 26.
   num_classes = 26

   classification_layer = nn.Dense(last_channel, num_classes)
   classification_layer.set_train(True)
-
+
   train_network = nn.SequentialCell([network, classification_layer])
   ```

@@ -123,58 +160,58 @@

   ```python
   from mindspore.nn.loss import SoftmaxCrossEntropyWithLogits
-
+
   # Wrap the backbone network with loss.
   loss_fn = SoftmaxCrossEntropyWithLogits()
   loss_net = nn.WithLossCell(train_network, loss_fn)
-
+
   # Create an optimizer.
-   optim = opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), Tensor(lr), config.momentum, config.weight_decay)
+   # Momentum and Tensor come from mindspore.nn and mindspore respectively;
+   # lr and config are assumed to be defined as in the training script.
+   optim = Momentum(filter(lambda x: x.requires_grad, train_network.get_parameters()), Tensor(lr), config.momentum, config.weight_decay)

   train_net = nn.TrainOneStepCell(loss_net, optim)
   ```
-
-5. 构建数据集,开始重训练。
+
+5. 构建数据集,开始重训练。如下所示,进行微调任务的数据集为垃圾分类数据集,存储位置为 `/ssd/data/garbage/train`。

   ```python
   from src.dataset import create_dataset
-   from mindspore.train.serialization import _exec_save_checkpoint
-
+   from mindspore.train.serialization import save_checkpoint
+
   dataset = create_dataset("/ssd/data/garbage/train",
                            do_train=True,
                            batch_size=32)
-
+
   epoch_size = 15
   for epoch in range(epoch_size):
       for i, items in enumerate(dataset):
           data, label = items
           data = mindspore.Tensor(data)
           label = mindspore.Tensor(label)
-
+
           loss = train_net(data, label)
           print(f"epoch: {epoch}, loss: {loss}")
       # Save the ckpt file for each epoch.
       ckpt_path = f"./ckpt/garbage_finetune_epoch{epoch}.ckpt"
-       _exec_save_checkpoint(train_network, ckpt_path)
+       save_checkpoint(train_network, ckpt_path)
   ```

 6. 在测试集上测试模型精度。

   ```python
   from mindspore.train.serialization import load_checkpoint, load_param_into_net
-
+
   network = mshub.load('mindspore/ascend/0.7/googlenet_v1_cifar10', include_top=False)
   train_network = nn.SequentialCell([network, nn.Dense(last_channel, num_classes)])
-
+
   # Load a pre-trained ckpt file.
   ckpt_path = "./ckpt/garbage_finetune_epoch14.ckpt"
   trained_ckpt = load_checkpoint(ckpt_path)
   load_param_into_net(train_network, trained_ckpt)
-
+
   # Define loss and create model.
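+   # The loss function below is only needed to construct the Model wrapper;
+   # model.eval runs forward inference and does not update any weights.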
   loss = SoftmaxCrossEntropyWithLogits()
   model = Model(train_network, loss_fn=loss, metrics={'acc'})
-
+
-   eval_dataset = create_dataset("/ssd/data/garbage/train", do_train=False,
+   eval_dataset = create_dataset("/ssd/data/garbage/train", do_train=False,
                                  batch_size=32)
-
+
   res = model.eval(eval_dataset)
   print("result:", res, "ckpt=", ckpt_path)
-   ```
+   ```
\ No newline at end of file
diff --git a/tutorials/training/source_zh_cn/advanced_use/improve_model_security_nad.md b/tutorials/training/source_zh_cn/advanced_use/improve_model_security_nad.md
index 9d101729a3f72cdbdde488528a76faec65b10213..bbbbe17dd98e6d65627b85256d04c861f7e3d48d 100644
--- a/tutorials/training/source_zh_cn/advanced_use/improve_model_security_nad.md
+++ b/tutorials/training/source_zh_cn/advanced_use/improve_model_security_nad.md
@@ -198,8 +198,8 @@ def generate_mnist_dataset(data_path, batch_size=32, repeat_size=1,
     inputs = []
     labels = []
     for data in ds_test.create_tuple_iterator():
-        inputs.append(data[0].asnumpy().astype(np.float32))
-        labels.append(data[1].asnumpy())
+        inputs.append(data[0].astype(np.float32))
+        labels.append(data[1])
     test_inputs = np.concatenate(inputs)
     test_labels = np.concatenate(labels)
 ```
diff --git a/tutorials/training/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md b/tutorials/training/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md
index 140ad4748852781749d4741361ef7f345a5b3713..5afb8c00931a1a14077ae457b2927ed9c0b5b723 100644
--- a/tutorials/training/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md
+++ b/tutorials/training/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md
@@ -13,8 +13,8 @@

-  
-
+  
+

 ## 概述

diff --git a/tutorials/training/source_zh_cn/advanced_use/migrate_3rd_scripts.md b/tutorials/training/source_zh_cn/advanced_use/migrate_3rd_scripts.md
index c6ccc309fbe6907bed114ca039f93bd0dcb88c37..cd036c948ade6026744156346f5d59fe604acad4 100644
--- a/tutorials/training/source_zh_cn/advanced_use/migrate_3rd_scripts.md
+++ b/tutorials/training/source_zh_cn/advanced_use/migrate_3rd_scripts.md
@@ -19,7 +19,7 @@

-
+

 ## 概述

@@ -31,9 +31,9 @@

 ### 算子评估

-分析待迁移的网络中所包含的算子,结合[MindSpore算子支持列表](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html),梳理出MindSpore对这些算子的支持程度。
+分析待迁移的网络中所包含的算子,结合[MindSpore算子支持列表](https://www.mindspore.cn/doc/programming_guide/zh-CN/r1.0/operator_list.html),梳理出MindSpore对这些算子的支持程度。

-以ResNet-50为例,[Conv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d)和[BatchNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d)是其中最主要的两个算子,它们已在MindSpore支持的算子列表中。
+以ResNet-50为例,[Conv](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore.nn.html#mindspore.nn.Conv2d)和[BatchNorm](https://www.mindspore.cn/doc/api_python/zh-CN/r1.0/mindspore.nn.html#mindspore.nn.BatchNorm2d)是其中最主要的两个算子,它们已在MindSpore支持的算子列表中。

 如果发现没有对应算子,建议:
 - 使用其他算子替换:分析算子实现公式,审视是否可以采用MindSpore现有算子叠加达到预期目标。

@@ -57,17 +57,17 @@

 MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差别,迁移前需要对原脚本有较为清晰的了解,明确地知道每一层的shape等信息。

-> 你也可以使用[MindConverter工具](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/mindconverter)实现PyTorch网络定义脚本到MindSpore网络定义脚本的自动转换。
+> 你也可以使用[MindConverter工具](https://gitee.com/mindspore/mindinsight/tree/r1.0/mindinsight/mindconverter)实现PyTorch网络定义脚本到MindSpore网络定义脚本的自动转换。

 下面,我们以ResNet-50的迁移,并在Ascend 910上训练为例:

 1. 导入MindSpore模块。

-   根据所需使用的接口,导入相应的MindSpore模块,模块列表详见。
+   根据所需使用的接口,导入相应的MindSpore模块,模块列表详见。

 2. 加载数据集和预处理。
-   使用MindSpore构造你需要使用的数据集。目前MindSpore已支持常见数据集,你可以通过原始格式、`MindRecord`、`TFRecord`等多种接口调用,同时还支持数据处理以及数据增强等相关功能,具体用法可参考[准备数据教程](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/data_preparation.html)。
+   使用MindSpore构造你需要使用的数据集。目前MindSpore已支持常见数据集,你可以通过原始格式、`MindRecord`、`TFRecord`等多种接口调用,同时还支持数据处理以及数据增强等相关功能,具体用法可参考[准备数据教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/data_preparation/data_preparation.html)。

   本例中加载了Cifar-10数据集,可同时支持单卡和多卡的场景。

   ```python
   dataset = create_dataset(dataset_path, do_train, repeat_num=1,
                            num_shards=device_num, shard_id=rank_id)
   ```

-   然后对数据进行了数据增强、数据清洗和批处理等操作。代码详见。
+   然后对数据进行了数据增强、数据清洗和批处理等操作。代码详见。

 3. 构建网络。

@@ -212,7 +212,7 @@ MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差

 6. 构造整网。

-   将定义好的多个子网连接起来就是整个[ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py)网络的结构了。同样遵循先定义后使用的原则,在`__init__`中定义所有用到的子网,在`construct`中连接子网。
+   将定义好的多个子网连接起来就是整个[ResNet-50](https://gitee.com/mindspore/mindspore/blob/r1.0/model_zoo/official/cv/resnet/src/resnet.py)网络的结构了。同样遵循先定义后使用的原则,在`__init__`中定义所有用到的子网,在`construct`中连接子网。

 7. 定义损失函数和优化器。

@@ -233,7 +233,7 @@ MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差
        loss_scale = FixedLossScaleManager(config.loss_scale, drop_overflow_update=False)
    ```

-   如果希望使用`Model`内置的评估方法,则可以使用[metrics](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/customized_debugging_information.html#mindspore-metrics)属性设置希望使用的评估方法。
+   如果希望使用`Model`内置的评估方法,则可以使用[metrics](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/customized_debugging_information.html#mindspore-metrics)属性设置希望使用的评估方法。

    ```python
    model = Model(net, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale, metrics={'acc'})
    ```

@@ -261,14 +261,14 @@ MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差

 #### 云上集成

-请参考[在云上使用MindSpore](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/use_on_the_cloud.html),将你的脚本运行在ModelArts。
+请参考[在云上使用MindSpore](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/use_on_the_cloud.html),将你的脚本运行在ModelArts。

 ### 推理阶段

-在Ascend 910 AI处理器上训练后的模型,支持在不同的硬件平台上执行推理。详细步骤可参考[多平台推理教程](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html)。
+在Ascend 910 AI处理器上训练后的模型,支持在不同的硬件平台上执行推理。详细步骤可参考[多平台推理教程](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/multi_platform_inference.html)。

 ## 样例参考

-1. [常用数据集读取样例](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html)
+1. [常用数据集读取样例](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/use/data_preparation/loading_the_datasets.html)

-2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)
+2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/r1.0/model_zoo)
diff --git a/tutorials/training/source_zh_cn/advanced_use/migrate_3rd_scripts_mindconverter.md b/tutorials/training/source_zh_cn/advanced_use/migrate_3rd_scripts_mindconverter.md
index 7b86cbbddec8f55a53ea39ad5189f69d3a6c7610..dda45822f80611ba8f8d90d91fac0fb7c94b5ae0 100644
--- a/tutorials/training/source_zh_cn/advanced_use/migrate_3rd_scripts_mindconverter.md
+++ b/tutorials/training/source_zh_cn/advanced_use/migrate_3rd_scripts_mindconverter.md
@@ -1,6 +1,6 @@
-# 迁移第三方框架脚本
+# 使用工具迁移第三方框架脚本

-`Linux` `Ascend` `模型开发` `初级`
+`Linux` `Ascend` `模型开发` `初级`

@@ -8,7 +8,6 @@
 - [概述](#概述)
 - [安装](#安装)
 - [用法](#用法)
-  - [使用场景](#使用场景)
 - [使用示例](#使用示例)
   - [基于AST的脚本转换示例](#基于AST的脚本转换示例)
   - [基于图结构的脚本生成示例](#基于图结构的脚本生成示例)
@@ -16,7 +15,7 @@

-
+

 ## 概述

@@ -79,7 +78,6 @@ optional arguments:

 另外,当使用基于图结构的脚本生成方案时,请确保原PyTorch项目已在Python包搜索路径中,可通过CLI进入Python交互式命令行,通过import的方式判断是否已满足;若未加入,可通过`--project_path`命令手动将项目路径传入,以确保MindConverter可引用到原PyTorch脚本。
-

 > 假设用户项目目录为`/home/user/project/model_training`,用户可通过如下命令手动项目添加至包搜索路径中:`export PYTHONPATH=/home/user/project/model_training:$PYTHONPATH`
 > 此处MindConverter需要引用原PyTorch脚本,是因为PyTorch模型反向序列化过程中会引用原脚本。

@@ -101,7 +99,6 @@ MindConverter提供两种技术方案,以应对不同脚本迁移场景:
 > 2. 基于图结构的脚本生成方案,由于要基于推理模式加载PyTorch模型,会导致转换后网络中Dropout算子丢失,需要用户手动补齐;
 > 3. 基于图结构的脚本生成方案持续优化中。
-

 ## 使用示例

 ### 基于AST的脚本转换示例

 ```bash
 mindconverter --in_file /home/user/model.py \
               --output /home/user/output \
               --report /home/user/output/report
 ```

-转换报告中,对于未转换的代码行形式为如下,其中x, y指明的是原PyTorch脚本中代码的行、列号。对于未成功转换的算子,可参考[MindSporeAPI映射查询功能](https://www.mindspore.cn/docs/zh-CN/master/index.html#operator_api) 手动对代码进行迁移。对于工具无法迁移的算子,会保留原脚本中的代码。
+转换报告中,未转换的代码行形式如下,其中x, y指明的是原PyTorch脚本中代码的行、列号。对于未成功转换的算子,可参考[MindSporeAPI映射查询功能](https://www.mindspore.cn/docs/zh-CN/r1.0/index.html#operator_api) 手动对代码进行迁移。对于工具无法迁移的算子,会保留原脚本中的代码。

 ```text
 line x:y: [UnConvert] 'operator' didn't convert. ...
 ```

@@ -151,48 +148,7 @@ mindconverter --model_file /home/user/model.pth --shape 3,224,224 \

 基于图结构的脚本生成方案产生的转换报告格式与AST方案相同。然而,由于基于图结构方案属于生成式方法,转换过程中未参考原PyTorch脚本,因此生成的转换报告中涉及的代码行、列号均指生成后脚本。

-另外对于未成功转换的算子,在代码中会相应的标识该节点输入、输出Tensor的shape(以`input_shape`, `output_shape`标识),便于用户手动修改。以Reshape算子为例(暂不支持Reshape),将生成如下代码:
-
-```python
-class Classifier(nn.Cell):
-
-    def __init__(self):
-        super(Classifier, self).__init__()
-        ...
-        self.reshape = onnx.Reshape(input_shape=(1, 1280, 1, 1),
-                                    output_shape=(1, 1280))
-        ...
-
-    def construct(self, x):
-        ...
-        # Suppose input of `reshape` is x.
-        reshape_output = self.reshape(x)
-        ...
-
-```
-
-通过`input_shape`、`output_shape`参数,用户可以十分便捷地完成算子替换,替换结果如下:
-
-```python
-from mindspore.ops import operations as P
-...
-
-class Classifier(nn.Cell):
-
-    def __init__(self):
-        super(Classifier, self).__init__()
-        ...
-        self.reshape = P.Reshape(input_shape=(1, 1280, 1, 1),
-                                output_shape=(1, 1280))
-        ...
-
-    def construct(self, x):
-        ...
-        # Suppose input of `reshape` is x.
-        reshape_output = self.reshape(x, (1, 1280))
-        ...
-
-```
+另外对于未成功转换的算子,在代码中会相应地标识该节点输入、输出Tensor的shape(以 `input_shape`、`output_shape` 标识),便于用户手动修改。

 > 注意:其中`--output`与`--report`参数可省略,若省略,该命令将在当前工作目录(Working directory)下自动创建`output`目录,将生成的脚本、转换报告输出至该目录。

diff --git a/tutorials/training/source_zh_cn/advanced_use/migrate_script.rst b/tutorials/training/source_zh_cn/advanced_use/migrate_script.rst
index e23cb61a2dd70876721f72640b50bfab5d55a61b..b4aeabcb925a19ef1c1de90bf696d77cf5795293 100644
--- a/tutorials/training/source_zh_cn/advanced_use/migrate_script.rst
+++ b/tutorials/training/source_zh_cn/advanced_use/migrate_script.rst
@@ -1,9 +1,9 @@
 迁移第三方框架训练脚本
-====================
+===========

 .. toctree::
   :maxdepth: 1

-  migrate_3rd_scripts_mindconverter
-  migrate_3rd_scripts
+  advanced_use/migrate_3rd_scripts_mindconverter
+  advanced_use/migrate_3rd_scripts
\ No newline at end of file
diff --git a/tutorials/training/source_zh_cn/advanced_use/mindinsight_commands.md b/tutorials/training/source_zh_cn/advanced_use/mindinsight_commands.md
index 0383711471a70801b5d341f5bd3556ee1b5557cb..5fcdcbd5cd96cdc4eb39bc1ef72cf2226dcbeb99 100644
--- a/tutorials/training/source_zh_cn/advanced_use/mindinsight_commands.md
+++ b/tutorials/training/source_zh_cn/advanced_use/mindinsight_commands.md
@@ -13,7 +13,7 @@

-
+

 ## 查看命令帮助信息