From 897ae0313ec1cb54d765b915a486d54991cca7b4 Mon Sep 17 00:00:00 2001
From: zhangyi
Date: Sat, 2 Apr 2022 16:58:33 +0800
Subject: [PATCH] modify the files

---
 .../migration_guide/source_en/inference.md    |  6 +-
 .../source_en/migration_script.md             | 20 ++---
 .../migration_guide/source_en/preparation.md  | 90 +------------------
 3 files changed, 16 insertions(+), 100 deletions(-)

diff --git a/docs/mindspore/migration_guide/source_en/inference.md b/docs/mindspore/migration_guide/source_en/inference.md
index b94d1bbdf6..2876da929f 100644
--- a/docs/mindspore/migration_guide/source_en/inference.md
+++ b/docs/mindspore/migration_guide/source_en/inference.md
@@ -10,7 +10,7 @@ For trained models, MindSpore can execute inference tasks on different hardware
 
 ### Overview
 
-MindSpore supports to save files of [training parameters](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html#model-files) as CheckPoint format. MindSpore also supports to save [network model](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html#model-files) files as MindIR, AIR, and ONNX.
+MindSpore supports saving [training parameter files](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html#model-files) in the CheckPoint format. MindSpore also supports saving [network model files](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html#model-files) as MindIR, AIR, and ONNX.
 
 Referring to the [executing inference](https://www.mindspore.cn/docs/programming_guide/en/master/multi_platform_inference.html#inference-execution) section, users not only can execute local inference through `mindspore.model.predict` interface, but also can export MindIR, AIR, and ONNX model files through `mindspore.export` for inference on different hardware platforms.
@@ -27,7 +27,7 @@ For dominating the difference between backend models, model files in the [MindIR
 
 ## On-line Inference Service Deployment Based on MindSpore Serving
 
-MindSpore Serving is a lite and high-performance service module, aiming at assisting MindSpore developers in efficiently deploying on-line inference services. When a user completes the training task by using MindSpore, the trained model can be exported for inference service deployment through MindSpore Serving. Please refer to the following examples for deployment:
+MindSpore Serving is a lightweight, high-performance service module that aims to help MindSpore developers efficiently deploy on-line inference services in the production environment. After a user completes a training task with MindSpore, the trained model can be exported for inference service deployment via MindSpore Serving. Please refer to the following examples for deployment:
 
 - [MindSpore Serving-based Inference Service Deployment](https://www.mindspore.cn/serving/docs/en/master/serving_example.html).
 - [gRPC-based MindSpore Serving Access](https://www.mindspore.cn/serving/docs/en/master/serving_grpc.html).
@@ -35,4 +35,4 @@ MindSpore Serving is a lite and high-performance service module, aiming at assis
 - [Servable Provided Through Model Configuration](https://www.mindspore.cn/serving/docs/en/master/serving_model.html).
 - [MindSpore Serving-based Distributed Inference Service Deployment](https://www.mindspore.cn/serving/docs/en/master/serving_distributed_example.html).
 
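+Before any of the deployments above, the trained network must first be exported as one of the model files described in the overview. The following is a minimal sketch of that export step, assuming a MindIR target; the `nn.Dense` network and its input shape are hypothetical stand-ins for a real trained model:
+
+```python
+import numpy as np
+import mindspore.nn as nn
+from mindspore import Tensor, export
+
+# A trivial network stands in for the real trained model (hypothetical).
+net = nn.Dense(3, 2)
+# A dummy input with the expected shape; export traces the network with it.
+inputs = Tensor(np.ones([1, 3]).astype(np.float32))
+# Produces "model.mindir", which MindSpore Serving can then load as a servable.
+export(net, inputs, file_name="model", file_format="MINDIR")
+```
+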
-> For deployment issues regarding the on-line inference service, please refer to [MindSpore Serving](https://www.mindspore.cn/docs/faq/en/master/inference.html#mindspore-serving).
+> For deployment issues regarding the on-line inference service, please refer to the [MindSpore Serving FAQ](https://www.mindspore.cn/docs/faq/en/master/inference.html#mindspore-serving).
diff --git a/docs/mindspore/migration_guide/source_en/migration_script.md b/docs/mindspore/migration_guide/source_en/migration_script.md
index df0f54ecab..bf8879dfd9 100644
--- a/docs/mindspore/migration_guide/source_en/migration_script.md
+++ b/docs/mindspore/migration_guide/source_en/migration_script.md
@@ -16,13 +16,13 @@ Migrate scripts by reading the TensorBoard graphs。
 
 > The PoseNet code mentioned here is based on Python2. You need to make some syntax changes to run on Python3. Details are not described here.
 
-2. Rewrite the code, use `tf.summary` interface to save the log required by TensorBoard, start TensorBoard.
+2. Rewrite the code to use the `tf.summary` interface, save the logs required by TensorBoard, and start TensorBoard.
 
-3. The following figure shows the opened TensorBoard, it's for reference only. The figure displayed on TensorBoard may vary in the log generation mode.
+3. The following figure shows the opened TensorBoard; it is for reference only. The graph displayed on TensorBoard may vary depending on how the logs are generated.
 
 ![PoseNet TensorBoard](images/pic1.png)
 
-4. Find the placeholder of three inputs, view the figure and read the code, the second and third inputs are used only for loss calculation.
+4. Find the Placeholder nodes of the three inputs. By viewing the figure and reading the code, we can see that the second and third inputs are used only for loss calculation.
 
 ![PoseNet Placeholder](images/pic3.png)
 
@@ -32,7 +32,7 @@ Migrate scripts by reading the TensorBoard graphs。
 
 So far, we can preliminarily follow three steps to construct a network model:
 
-    Step 1, the first three inputs of the network will compute six outputs in the backbone.
+    Step 1, among the three inputs of the network, the first is fed into the backbone to compute six outputs.
 
     Step 2, the result of step 1, the second and third inputs are used to calculate the loss in the loss subnet.
 
@@ -78,13 +78,11 @@ Migrate scripts by reading the TensorBoard graphs。
     net_with_loss = PoseNetLossCell(backbone, loss)
     opt = Adam(net_with_loss.trainable_params(), learning_rate=0.001, beta1=0.9, beta2=0.999, eps=1e-08, use_locking=False)
     net_with_grad = TrainOneStepCell(net_with_loss, opt)
-
-
     ```
 
 5. Next, let's implement the computing logic in the backbone.
 
-    The first input passes through a subgraph named conv1, the computing logic can be obtained by looking at the following figure:
+    The first input passes through a subgraph named conv1, whose computing logic can be obtained by looking at the following figure:
 
     ![PoseNet conv1](images/pic5.png)
 
@@ -289,7 +287,7 @@ Migrate scripts by reading the TensorBoard graphs。
     model.train(epoch_size, dataset)
     ```
 
-    In this way, the model script is basically migrated from TensorFlow to Mindspore. Then, various Mindspore tools and computing policies are used to optimize the precision.
+    In this way, the model script is basically migrated from TensorFlow to MindSpore. Then, various MindSpore tools and computing policies are used to optimize the precision.
 
 ## Migrating the PyTorch Script to MindSpore
 
@@ -431,7 +429,7 @@ Read the PyTorch script to migrate directly.
         return out
     ```
 
-3. PyTorch backpropagation is usually implemented by `loss.backward()`, and parameter update is implemented by `optimizer.step()`, In MindSpore, these parameters do not need to be explicitly invoked by the user and can be transferred to the `TrainOneStepCell` class for backpropagation and gradient update. Finally, the training script should look like this:
+3. PyTorch backpropagation is usually implemented by `loss.backward()`, and parameter update is implemented by `optimizer.step()`. In MindSpore, these operations do not need to be explicitly invoked by the user; the network and optimizer can be passed to the `TrainOneStepCell` class, which performs backpropagation and gradient update. Finally, the training script should look like this:
 
     ```python
     # define dataset
@@ -455,7 +453,7 @@ Read the PyTorch script to migrate directly.
     model.train(epoch_size, dataset)
    ```
 
-PyTorch and mindspore have similar definitions of some basic APIs, such as [mindspore.nn.SequentialCell](https://www.mindspore.cn/docs/api/en/master/api_python/nn/mindspore.nn.SequentialCell.html#mindspore.nn.SequentialCell) and [torch.nn.Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html#torch.nn.Sequential), In addition, some operator APIs may be not the same. This section lists some common API comparisons. For more information, see the [MindSpore and PyTorch API mapping](https://www.mindspore.cn/docs/note/en/master/index.html#operator_api) on Mindspore's official website.
+PyTorch and MindSpore have similar definitions of some basic APIs, such as [mindspore.nn.SequentialCell](https://www.mindspore.cn/docs/api/en/master/api_python/nn/mindspore.nn.SequentialCell.html#mindspore.nn.SequentialCell) and [torch.nn.Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html#torch.nn.Sequential). In addition, some operator APIs may not be the same. This section lists some common API comparisons. For more information, see the [MindSpore and PyTorch API mapping](https://www.mindspore.cn/docs/note/en/master/index.html#operator_api) on MindSpore's official website.
 
 | PyTorch | MindSpore |
 | :-------------------------------: | :------------------------------------------------: |
@@ -467,4 +465,4 @@ PyTorch and mindspore have similar definitions of some basic APIs, such as [mind
 | torch.nn.Linear | mindspore.nn.Dense |
 | torch.nn.PixelShuffle | mindspore.ops.operations.DepthToSpace |
 
-It should be noticed that although `torch.nn.MaxPool2d` and `mindspore.nn.MaxPool2d` are similar in interface definition, Mindspore actually invokes the `MaxPoolWithArgMax` operator during training on Ascend. The function of this operator is the same as that of TensorFlow, during the migration, the MindSpore output after the MaxPool layer is inconsistent with that of PyTorch, theoretically, it's not affect the final training result.
+It should be noted that although `torch.nn.MaxPool2d` and `mindspore.nn.MaxPool2d` are similar in interface definition, MindSpore actually invokes the `MaxPoolWithArgMax` operator during training on Ascend. The function of this operator is the same as that of its TensorFlow counterpart, so during the migration the MindSpore output after the MaxPool layer is inconsistent with that of PyTorch; theoretically, this does not affect the final training result.
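+
+To make the MaxPool note above concrete, the following minimal sketch (assuming identical kernel and stride settings on both sides) runs the same input through both pooling layers so the outputs can be compared element-wise:
+
+```python
+import numpy as np
+import torch
+import mindspore.nn as nn
+from mindspore import Tensor
+
+x = np.random.randn(1, 3, 8, 8).astype(np.float32)
+
+# Identical pooling configurations on both sides.
+pt_pool = torch.nn.MaxPool2d(kernel_size=2, stride=2)
+ms_pool = nn.MaxPool2d(kernel_size=2, stride=2)
+
+pt_out = pt_pool(torch.from_numpy(x)).numpy()
+ms_out = ms_pool(Tensor(x)).asnumpy()
+
+# Shapes always match; values may diverge slightly on Ascend, where
+# MindSpore dispatches to MaxPoolWithArgMax during training.
+print(pt_out.shape, ms_out.shape)
+print(np.max(np.abs(pt_out - ms_out)))
+```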
diff --git a/docs/mindspore/migration_guide/source_en/preparation.md b/docs/mindspore/migration_guide/source_en/preparation.md
index dbb5f4f081..9a1a56cd80 100644
--- a/docs/mindspore/migration_guide/source_en/preparation.md
+++ b/docs/mindspore/migration_guide/source_en/preparation.md
@@ -10,7 +10,7 @@ Before developing or migrating networks, you need to install MindSpore and learn
 
 ## Installing MindSpore
 
-Refer to the following figure, to determine the release version and the structure of the system, and the Python version.
+Refer to the following table to determine the release version, the architecture (x86 or Arm) of the system, and the Python version.
 
 | System | Query Content | Query Command |
 | ------ | ---------------------- | ------------------- |
@@ -18,13 +18,13 @@ Refer to the following figure, to determine the release version and the structur
 | Linux | System Architecture | `uname -m` |
 | Linux | Python Version | `python3` |
 
-Choose a corresponding MindSpore version based on users own operating system. MindSpore is installed in the manner of Pip, Conda, Docker or source code compilation. It is recommended to visit the MindSpore installation page, and complete the installation by referring to this website for instructions.
+Choose the corresponding MindSpore version based on your own operating system. MindSpore can be installed via Pip, Conda, Docker, or source code compilation. It is recommended to visit the [MindSpore installation page](https://www.mindspore.cn) and complete the installation by following the instructions there.
 
 ### Verifying MindSpore
 
 After the MindSpore is installed, the following commands can be run (taking the MindSpore r1.6 as an example), to test whether the installation of the MindSpore has been completed.
 
-```bash
+```python
 import mindspore
 mindspore.run_check()
 ```
 
@@ -36,88 +36,6 @@
 MindSpore version: 1.6.0
 The result of multiplication calculation is correct, MindSpore has been installed successfully!
 ```
 
-### Installing by Source Code
-
-You can visit [Repository of Mindspore](https://gitee.com/mindspore/mindspore) and download the source code by `git clone https://gitee.com/mindspore/mindspore.git`. A file `build.sh` in root directory provides several optional parameters, to choose and customize the MindSpore service. The following code is for compiling MindSpore.
-
-```bash
-cd mindspore
-bash build.sh -e cpu -j{thread_num} # cpu
-bash build.sh -e ascend -j{thread_num} # ascend
-bash build.sh -e gpu -j{thread_num} # gpu
-```
-
-After successfully compilation, MindSpore install package will be created in `output` directory. Then you can **install it by pip** or **add current directory to PYTHONPATH** to use this package.
-
-> Installing by pip is fast and convenient to start.
->
-> Installing by source code can customize MindSpore service and change to any commit_id to compile and run MindSpore.
-
-### Configuring Environment Variables (only for Ascend)
-
-```bash
-# control log level. 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR, 4-CRITICAL, default level is WARNING.
-export GLOG_v=2
-
-# Conda environmental options
-LOCAL_ASCEND=/usr/local/Ascend # the root directory of run package
-
-# lib libraries that the run package depends on
-export LD_LIBRARY_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/lib64:${LOCAL_ASCEND}/driver/lib64:${LOCAL_ASCEND}/opp/op_impl/built-in/ai_core/tbe/op_tiling:${LD_LIBRARY_PATH}
-
-# Environment variables that must be configured
-export TBE_IMPL_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp/op_impl/built-in/ai_core/tbe # TBE operator implementation tool path
-export ASCEND_OPP_PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/opp # OPP path
-export PATH=${LOCAL_ASCEND}/ascend-toolkit/latest/fwkacllib/ccec_compiler/bin/:${PATH} # TBE operator compilation tool path
-export PYTHONPATH=${TBE_IMPL_PATH}:${PYTHONPATH} # Python library that TBE implementation depends on
-```
-
-### Mindspore Verification
-
-MindSpore is installed successfully if you can run the following code and exit properly.
-
-For CPU:
-
-```python
-import numpy as np
-from mindspore import Tensor
-import mindspore.ops as ops
-import mindspore.context as context
-
-context.set_context(device_target="CPU")
-x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
-y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
-print(ops.add(x, y))
-```
-
-For Ascend:
-
-```python
-import numpy as np
-from mindspore import Tensor
-import mindspore.ops as ops
-import mindspore.context as context
-
-context.set_context(device_target="Ascend")
-x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
-y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
-print(ops.add(x, y))
-```
-
-For GPU:
-
-```python
-import numpy as np
-from mindspore import Tensor
-import mindspore.ops as ops
-import mindspore.context as context
-
-context.set_context(device_target="GPU")
-x = Tensor(np.ones([1,3,3,4]).astype(np.float32))
-y = Tensor(np.ones([1,3,3,4]).astype(np.float32))
-print(ops.add(x, y))
-```
-
 ## Knowledge Preparation
 
 ### MindSpore Programming Guide
@@ -132,4 +50,4 @@ Users can read [MindSpore Tutorial](https://www.mindspore.cn/docs/programming_gu
 
 ### Training on the Cloud
 
-ModelArts is a one-stop development platform for AI developers, which contains Ascend resource pool. Users can experience MindSpore in this platform and read related document [MindSpore use_on_the_cloud](https://www.mindspore.cn/docs/programming_guide/en/master/use_on_the_cloud.html) and [AI Platform ModelArts](https://support.huaweicloud.com/intl/en-us/wtsnew-modelarts/index.html).
+ModelArts is a one-stop development platform for AI developers provided by HUAWEI Cloud, which contains an Ascend resource pool. Users can experience MindSpore on this platform and read the related documents [MindSpore use_on_the_cloud](https://www.mindspore.cn/docs/programming_guide/en/master/use_on_the_cloud.html) and [AI Platform ModelArts](https://support.huaweicloud.com/intl/en-us/wtsnew-modelarts/index.html).
--
Gitee