From 1cbd00432a741bd01b1e5da837befcfb57a15e01 Mon Sep 17 00:00:00 2001
From: zhangyi
Date: Fri, 25 Mar 2022 10:49:16 +0800
Subject: [PATCH] Update the English links

---
 CONTRIBUTING_DOC.md                                           | 2 +-
 .../docs/source_en/image_classification_application.md        | 2 +-
 docs/mindspore/api/source_en/api_python/mindspore.ops.rst     | 2 +-
 .../source_en/api_mapping/pytorch_api_mapping.md              | 4 ++--
 .../source_en/api_mapping/pytorch_diff/stop_gradient.md       | 4 ++--
 .../mindspore/programming_guide/source_en/api_structure.ipynb | 4 ++--
 docs/mindspore/programming_guide/source_en/index.rst          | 4 ++--
 docs/mindspore/programming_guide/source_en/loss.md            | 2 +-
 .../source_en/quick_start/quick_video/quick_start_video.md    | 2 +-
 docs/mindspore/programming_guide/source_en/run.md             | 4 ++--
 docs/probability/docs/source_en/probability.md                | 2 +-
 docs/probability/docs/source_en/using_bnn.md                  | 2 +-
 .../docs/source_en/using_the_uncertainty_toolbox.md           | 2 +-
 docs/serving/docs/source_en/serving_example.md                | 2 +-
 docs/serving/docs/source_en/serving_multi_subgraphs.md        | 2 +-
 tutorials/source_en/beginner/quick_start.md                   | 4 ++--
 16 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/CONTRIBUTING_DOC.md b/CONTRIBUTING_DOC.md
index 6c283862ae..3820974024 100644
--- a/CONTRIBUTING_DOC.md
+++ b/CONTRIBUTING_DOC.md
@@ -77,7 +77,7 @@ By default, tutorials and documents of the latest version are displayed on the o
 
 ![master_doc_en](./resource/_static/master_doc_en.png)
 
-Take **Quick Start for Beginners** as an example. The document link is <https://www.mindspore.cn/tutorials/en/master/quick_start.html>.
+Take **Quick Start for Beginners** as an example. The document link is <https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html>.
 
 ## API
diff --git a/docs/federated/docs/source_en/image_classification_application.md b/docs/federated/docs/source_en/image_classification_application.md
index f0cf6d3532..867a4d1ac3 100644
--- a/docs/federated/docs/source_en/image_classification_application.md
+++ b/docs/federated/docs/source_en/image_classification_application.md
@@ -20,7 +20,7 @@ Users can also define the dataset by themselves. Note that the dataset must be a
 
 1. **Define the network and training process**
 
-    For the definition of the specific network and training process, please refer to [Beginners Getting Started](https://www.mindspore.cn/tutorials/en/master/quick_start.html#%E5%88%9B%E5%BB%BA%E6%A8%A1%E5%9E%8B).
+    For the definition of the specific network and training process, please refer to [Beginners Getting Started](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html).
 
     We provide the network definition file [model.py](https://gitee.com/mindspore/mindspore/blob/master/tests/st/fl/mobile/src/model.py) and the training process definition file [run_export_lenet](https://gitee.com/mindspore/mindspore/blob/master/tests/st/fl/cross_device_lenet/cloud/run_export_lenet.py) for your reference.
diff --git a/docs/mindspore/api/source_en/api_python/mindspore.ops.rst b/docs/mindspore/api/source_en/api_python/mindspore.ops.rst
index ba5f688e5a..bbbd05d082 100644
--- a/docs/mindspore/api/source_en/api_python/mindspore.ops.rst
+++ b/docs/mindspore/api/source_en/api_python/mindspore.ops.rst
@@ -269,7 +269,7 @@ The functional operators are the pre-instantiated Primitive operators, which can
     * - mindspore.ops.stack
       - Refer to :class:`mindspore.ops.Stack`.
     * - mindspore.ops.stop_gradient
-      - Disable update during back propagation. (`stop_gradient <https://www.mindspore.cn/tutorials/en/master/autograd.html#stop-gradient>`_)
+      - Disable update during back propagation. (`stop_gradient <https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-gradient>`_)
     * - mindspore.ops.strided_slice
      - Refer to :class:`mindspore.ops.StridedSlice`.
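The `stop_gradient` entry above is terse, so a usage sketch may help. The following is illustrative only and is not part of the patch; it assumes the MindSpore 1.x functional API that the table documents, with arbitrary input values:

```python
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor

class Net(nn.Cell):
    def construct(self, x, y):
        out = x * y
        # Everything that produced `out` is excluded from backpropagation.
        return ops.stop_gradient(out)

grad_all = ops.GradOperation(get_all=True)
x = Tensor(np.array([1.0, 2.0], np.float32))
y = Tensor(np.array([3.0, 4.0], np.float32))
print(grad_all(Net())(x, y))  # both gradients are zero due to stop_gradient
```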
diff --git a/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_api_mapping.md b/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_api_mapping.md
index 2cadb03295..2c997dace7 100644
--- a/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_api_mapping.md
@@ -162,9 +162,9 @@ More MindSpore developers are also welcome to participate in improving the mappi
 | PyTorch 1.5.0 APIs | MindSpore APIs | Description |
 | ------------------ | -------------- | ----------- |
 | [torch.autograd.backward](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.backward) | [mindspore.ops.GradOperation](https://mindspore.cn/docs/api/en/master/api_python/ops/mindspore.ops.GradOperation.html#mindspore.ops.GradOperation) | [diff](https://www.mindspore.cn/docs/migration_guide/en/master/api_mapping/pytorch_diff/GradOperation.html) |
-| [torch.autograd.enable_grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.enable_grad) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/autograd.html#stop-gradient) | [diff](https://www.mindspore.cn/docs/migration_guide/en/master/api_mapping/pytorch_diff/stop_gradient.html) |
+| [torch.autograd.enable_grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.enable_grad) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-gradient) | [diff](https://www.mindspore.cn/docs/migration_guide/en/master/api_mapping/pytorch_diff/stop_gradient.html) |
 | [torch.autograd.grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.grad) | [mindspore.ops.GradOperation](https://mindspore.cn/docs/api/en/master/api_python/ops/mindspore.ops.GradOperation.html#mindspore.ops.GradOperation) | [diff](https://www.mindspore.cn/docs/migration_guide/en/master/api_mapping/pytorch_diff/GradOperation.html) |
-| [torch.autograd.no_grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.no_grad) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/autograd.html#stop-gradient) | [diff](https://www.mindspore.cn/docs/migration_guide/en/master/api_mapping/pytorch_diff/stop_gradient.html) |
+| [torch.autograd.no_grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.no_grad) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-gradient) | [diff](https://www.mindspore.cn/docs/migration_guide/en/master/api_mapping/pytorch_diff/stop_gradient.html) |
 | [torch.autograd.variable](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.variable-deprecated)| [mindspore.Parameter](https://mindspore.cn/docs/api/en/master/api_python/mindspore/mindspore.Parameter.html#mindspore.Parameter) | |
 
 ## troch.cuda
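For readers using the mapping table above, a minimal sketch of the `torch.autograd.grad`-to-`mindspore.ops.GradOperation` correspondence may be useful. It is illustrative only; the `nn.Dense` network and input are placeholders, not part of the patch:

```python
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor

# PyTorch:   grads = torch.autograd.grad(net(x).sum(), x)
# MindSpore: wrap the cell with GradOperation and call the wrapper.
net = nn.Dense(2, 1)
grad_net = ops.GradOperation()(net)  # d(output)/d(first input) by default

x = Tensor(np.ones((1, 2), np.float32))
print(grad_net(x))  # gradient of the Dense output w.r.t. x
```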
diff --git a/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_diff/stop_gradient.md b/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_diff/stop_gradient.md
index b991c13538..521ab2d00f 100644
--- a/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_diff/stop_gradient.md
+++ b/docs/mindspore/migration_guide/source_en/api_mapping/pytorch_diff/stop_gradient.md
@@ -24,10 +24,10 @@ For more information, see [torch.autograd.no_grad](https://pytorch.org/docs/1.5.
 mindspore.ops.stop_gradient(input)
 ```
 
-For more information, see [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/autograd.html#stop-gradient).
+For more information, see [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-gradient).
 
 ## Differences
 
 PyTorch: Use `torch.autograd.enable_grad` to enable gradient calculation, and `torch.autograd.no_grad` to disable gradient calculation.
 
-MindSpore: Use [stop_gradient](https://www.mindspore.cn/tutorials/en/master/autograd.html#stop-gradient) to disable calculation of gradient for certain operators.
+MindSpore: Use [stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-gradient) to disable calculation of gradient for certain operators.
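The difference summarized above is easiest to see side by side: PyTorch scopes gradient suppression with a context manager at the call site, whereas MindSpore marks a value inside the computation graph. A sketch under the same MindSpore 1.x assumptions (the wrapped `nn.Dense` cell is a placeholder):

```python
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor

# PyTorch scopes the suppression at the call site:
#     with torch.no_grad():
#         out = net(x)
# MindSpore marks the value inside the graph instead:
class InferenceOnly(nn.Cell):
    def __init__(self, net):
        super().__init__()
        self.net = net

    def construct(self, x):
        # No gradient flows back through the wrapped network.
        return ops.stop_gradient(self.net(x))

wrapped = InferenceOnly(nn.Dense(4, 2))
x = Tensor(np.ones((1, 4), np.float32))
print(wrapped(x))
```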
diff --git a/docs/mindspore/programming_guide/source_en/api_structure.ipynb b/docs/mindspore/programming_guide/source_en/api_structure.ipynb
index e5272f967d..9d2bb23660 100644
--- a/docs/mindspore/programming_guide/source_en/api_structure.ipynb
+++ b/docs/mindspore/programming_guide/source_en/api_structure.ipynb
@@ -25,7 +25,7 @@
     "\n",
     "MindSpore originates from the best practices of the entire industry and provides unified model training, inference, and export APIs for data scientists and algorithm engineers. It supports flexible deployment in different scenarios such as the device, edge, and cloud, and promotes the prosperity of domains such as deep learning and scientific computing.\n",
     "\n",
-    "MindSpore provides the Python programming paradigm. Users can use the native control logic of Python to build complex neural network models, simplifying AI programming. For details, see [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/quick_start.html).\n",
+    "MindSpore provides the Python programming paradigm. Users can use the native control logic of Python to build complex neural network models, simplifying AI programming. For details, see [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html).\n",
     "\n",
     "Currently, there are two execution modes of a mainstream deep learning framework: a static graph mode and a dynamic graph mode. The static graph mode has a relatively high training performance, but is difficult to debug. On the contrary, the dynamic graph mode is easy to debug, but is difficult to execute efficiently. MindSpore provides an encoding mode that unifies dynamic and static graphs, which greatly improves the compatibility between static and dynamic graphs. Instead of developing multiple sets of code, users can switch between the dynamic and static graph modes by changing only one line of code. For example, set `context.set_context(mode=context.PYNATIVE_MODE)` to switch to the dynamic graph mode, or set `context.set_context(mode=context.GRAPH_MODE)` to switch to the static graph mode, which facilitates development and debugging, and improves performance experience.\n",
     "\n",
@@ -127,4 +127,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 5
-}
+}
\ No newline at end of file
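The mode switch described in the notebook text above amounts to a single `context.set_context` call; the model code itself does not change. A minimal illustrative sketch (the `nn.Dense` network and input are placeholders):

```python
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor, context

# One line selects the execution mode; the model code is unchanged.
context.set_context(mode=context.PYNATIVE_MODE)   # dynamic graph
# context.set_context(mode=context.GRAPH_MODE)    # static graph

net = nn.Dense(3, 2)
x = Tensor(np.ones((1, 3), np.float32))
print(net(x))
```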
diff --git a/docs/mindspore/programming_guide/source_en/index.rst b/docs/mindspore/programming_guide/source_en/index.rst
index 9bd2367b42..70671ea96a 100644
--- a/docs/mindspore/programming_guide/source_en/index.rst
+++ b/docs/mindspore/programming_guide/source_en/index.rst
@@ -34,8 +34,8 @@ MindSpore Programming Guide
    :caption: Quickstart
    :hidden:
 
-   Implementing Simple Linear Function Fitting↗
-   Implementing an Image Classification Application↗
+   Implementing Simple Linear Function Fitting↗
+   Implementing an Image Classification Application↗
 
 .. toctree::
    :glob:
diff --git a/docs/mindspore/programming_guide/source_en/loss.md b/docs/mindspore/programming_guide/source_en/loss.md
index 4977c8abdc..94bad2f59c 100644
--- a/docs/mindspore/programming_guide/source_en/loss.md
+++ b/docs/mindspore/programming_guide/source_en/loss.md
@@ -136,7 +136,7 @@ Now we train model by the defined L1Loss.
 
 Taking the simple linear function fitting as an example. The dataset and network structure is defined as follows:
 
-> For a detailed introduction of linear fitting, please refer to the tutorial [Implementing Simple Linear Function Fitting](https://www.mindspore.cn/tutorials/en/master/linear_regression.html).
+> For a detailed introduction of linear fitting, please refer to the tutorial [Implementing Simple Linear Function Fitting](https://www.mindspore.cn/tutorials/en/r1.6/linear_regression.html).
 
 1. Defining the Dataset
diff --git a/docs/mindspore/programming_guide/source_en/quick_start/quick_video/quick_start_video.md b/docs/mindspore/programming_guide/source_en/quick_start/quick_video/quick_start_video.md
index 6634557329..ce303257c2 100644
--- a/docs/mindspore/programming_guide/source_en/quick_start/quick_video/quick_start_video.md
+++ b/docs/mindspore/programming_guide/source_en/quick_start/quick_video/quick_start_video.md
@@ -8,4 +8,4 @@
 
 **View code**: 
 
-**View the full tutorial**: <https://www.mindspore.cn/tutorials/en/master/quick_start.html>
+**View the full tutorial**: <https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html>
diff --git a/docs/mindspore/programming_guide/source_en/run.md b/docs/mindspore/programming_guide/source_en/run.md
index f6b5500bb4..bd2ed551ed 100644
--- a/docs/mindspore/programming_guide/source_en/run.md
+++ b/docs/mindspore/programming_guide/source_en/run.md
@@ -264,7 +264,7 @@ epoch: 1 step: 1500, loss is 0.032973606
 epoch: 1 step: 1875, loss is 0.06105463
 ```
 
-> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorials/en/master/quick_start.html#downloading-the-dataset).
+> For details about how to obtain the MNIST dataset used in the example, see [Downloading the Dataset](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html#downloading-the-dataset).
 > Use the PyNative mode for debugging, including the execution of single operator, common function, and network training model. For details, see [Debugging in PyNative Mode](https://www.mindspore.cn/docs/programming_guide/en/master/debug_in_pynative_mode.html).
 
 ### Executing an Inference Model
@@ -406,4 +406,4 @@ In the preceding information:
 - `checkpoint_lenet-1_1875.ckpt`: name of the saved checkpoint model file.
 - `load_param_into_net`: loads parameters to the network.
 
-> For details about how to save the `checkpoint_lenet-1_1875.ckpt` file, see [Training the Network](https://www.mindspore.cn/tutorials/en/master/quick_start.html#training-and-saving-the-model).
+> For details about how to save the `checkpoint_lenet-1_1875.ckpt` file, see [Training the Network](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html#training-and-saving-the-model).
diff --git a/docs/probability/docs/source_en/probability.md b/docs/probability/docs/source_en/probability.md
index e2b3e5f67b..0e267d0bd5 100644
--- a/docs/probability/docs/source_en/probability.md
+++ b/docs/probability/docs/source_en/probability.md
@@ -804,7 +804,7 @@ decoder = Decoder()
 cvae = ConditionalVAE(encoder, decoder, hidden_size=400, latent_size=20, num_classes=10)
 ```
 
-Load a dataset, for example, Mnist. For details about the data loading and preprocessing process, see [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/quick_start.html). The create_dataset function is used to create a data iterator.
+Load a dataset, for example, MNIST. For details about the data loading and preprocessing process, see [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html). The create_dataset function is used to create a data iterator.
 
 ```python
 ds_train = create_dataset(image_path, 128, 1)
diff --git a/docs/probability/docs/source_en/using_bnn.md b/docs/probability/docs/source_en/using_bnn.md
index dee1e5b5ce..9612ebff9a 100644
--- a/docs/probability/docs/source_en/using_bnn.md
+++ b/docs/probability/docs/source_en/using_bnn.md
@@ -77,7 +77,7 @@ download_dataset("https://mindspore-website.obs.myhuaweicloud.com/notebook/datas
 
 ### Defining the Dataset Enhancement Method
 
-The original training dataset of the MNIST dataset is 60,000 single-channel digital images with $28\times28$ pixels. The LeNet5 network containing the Bayesian layer used in this training received the training data tensor as `(32,1 ,32,32)`, through the custom create_dataset function to enhance the original dataset to meet the training requirements of the data, the specific enhancement operation explanation can refer to [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/quick_start.html).
+The original MNIST training set contains 60,000 single-channel digital images of $28\times28$ pixels. The LeNet5 network with a Bayesian layer used in this training expects input tensors of shape `(32,1,32,32)`, so the custom create_dataset function augments the original dataset to meet the training requirements. For an explanation of the specific augmentation operations, refer to [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html).
 
 ```python
 import mindspore.dataset.vision.c_transforms as CV
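The `(32,1,32,32)` input shape mentioned in the hunk above comes from the tutorial's custom `create_dataset` function. The following sketch shows what such a pipeline typically looks like with the `c_transforms` API imported in the hunk; the exact operations in the tutorial may differ (for example, it may also normalize the images):

```python
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.vision.c_transforms as CV
from mindspore import dtype as mstype
from mindspore.dataset.vision import Inter

def create_dataset(data_path, batch_size=32, repeat_size=1):
    """Resize MNIST 28x28 images to 32x32, scale to [0, 1], and batch to (32, 1, 32, 32)."""
    mnist_ds = ds.MnistDataset(data_path)
    image_ops = [CV.Resize((32, 32), interpolation=Inter.LINEAR),
                 CV.Rescale(1.0 / 255.0, 0.0),
                 CV.HWC2CHW()]
    mnist_ds = mnist_ds.map(operations=image_ops, input_columns="image")
    mnist_ds = mnist_ds.map(operations=C.TypeCast(mstype.int32), input_columns="label")
    mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)
    return mnist_ds.repeat(repeat_size)
```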
diff --git a/docs/probability/docs/source_en/using_the_uncertainty_toolbox.md b/docs/probability/docs/source_en/using_the_uncertainty_toolbox.md
index 751c7a77f8..29c5510b1e 100644
--- a/docs/probability/docs/source_en/using_the_uncertainty_toolbox.md
+++ b/docs/probability/docs/source_en/using_the_uncertainty_toolbox.md
@@ -156,7 +156,7 @@ MindSpore uses the Uncertainty Toolbox `UncertaintyEvaluation` interface to meas
 
 ### Preparing the Model Weight Parameter File
 
-In this example, the corresponding model weight parameter file `checkpoint_lenet.ckpt` has been prepared. This parameter file is the weight parameter file saved after training for 5 epochs in [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/quick_start.html), execute the following command to download:
+In this example, the corresponding model weight parameter file `checkpoint_lenet.ckpt` has been prepared. It is the weight file saved after training for 5 epochs in [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html). Execute the following command to download it:
 
 ```bash
 download_dataset("https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/models/checkpoint_lenet.ckpt", ".")
diff --git a/docs/serving/docs/source_en/serving_example.md b/docs/serving/docs/source_en/serving_example.md
index e1d8ec03eb..a0c559bd2b 100644
--- a/docs/serving/docs/source_en/serving_example.md
+++ b/docs/serving/docs/source_en/serving_example.md
@@ -67,7 +67,7 @@ if __name__ == "__main__":
 ```
 
 To use MindSpore for neural network definition, inherit `mindspore.nn.Cell`. (A `Cell` is a base class of all neural networks.) Define each layer of a neural network in the `__init__` method in advance, and then define the `construct` method to complete the forward construction of the neural network. Use `export` of the `mindspore` module to export the model file.
-For more detailed examples, see [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/quick_start.html).
+For more detailed examples, see [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html).
 
 Execute the `add_model.py` script to generate the `tensor_add.mindir` file. The input of the model is two 2D tensors with shape [2,2], and the output is the sum of the two input tensors.
diff --git a/docs/serving/docs/source_en/serving_multi_subgraphs.md b/docs/serving/docs/source_en/serving_multi_subgraphs.md
index 701895669d..4848861fbb 100644
--- a/docs/serving/docs/source_en/serving_multi_subgraphs.md
+++ b/docs/serving/docs/source_en/serving_multi_subgraphs.md
@@ -83,7 +83,7 @@ if __name__ == "__main__":
 ```
 
 To use MindSpore for neural network definition, inherit `mindspore.nn.Cell`. (A `Cell` is a base class of all neural networks.) Define each layer of a neural network in the `__init__` method in advance, and then define the `construct` method to complete the forward construction of the neural network. Use `export` of the `mindspore` module to export the model file.
-For more detailed examples, see [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/quick_start.html).
+For more detailed examples, see [Quick Start for Beginners](https://www.mindspore.cn/tutorials/en/master/beginner/quick_start.html).
 
 Execute the `export_matmul.py` script to generate the `matmul_0.mindir` and `matmul_1.mindir` files. The inputs shapes of these subgraphs are [128,96] and [8,96].
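Both serving hunks above describe the same `nn.Cell`-plus-`export` pattern. A self-contained sketch consistent with that description (the `AddNet` name is illustrative; the repository ships its own `add_model.py` and `export_matmul.py`):

```python
import numpy as np
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor, export

class AddNet(nn.Cell):
    """Layers are created in __init__; construct defines the forward pass."""
    def __init__(self):
        super().__init__()
        self.add = ops.Add()

    def construct(self, x, y):
        return self.add(x, y)

if __name__ == "__main__":
    x = Tensor(np.ones([2, 2], np.float32))
    y = Tensor(np.ones([2, 2], np.float32))
    export(AddNet(), x, y, file_name="tensor_add", file_format="MINDIR")
```

Running such a script produces the `.mindir` file that the Serving examples then load.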
diff --git a/tutorials/source_en/beginner/quick_start.md b/tutorials/source_en/beginner/quick_start.md
index 19a5312c14..c52a6d2312 100644
--- a/tutorials/source_en/beginner/quick_start.md
+++ b/tutorials/source_en/beginner/quick_start.md
@@ -187,7 +187,7 @@ Loss functions supported by MindSpore include `SoftmaxCrossEntropyWithLogits`, `
 net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
 ```
 
-> For more information about using loss functions in mindspore, see [Loss Functions](https://www.mindspore.cn/tutorials/en/master/optimization.html#loss-functions).
+> For more information about using loss functions in MindSpore, see [Loss Functions](https://www.mindspore.cn/tutorials/en/master/beginner/train.html#loss-functions).
 
 MindSpore supports the `Adam`, `AdamWeightDecay`, and `Momentum` optimizers. The following uses the `Momentum` optimizer as an example.
 
@@ -196,7 +196,7 @@ MindSpore supports the `Adam`, `AdamWeightDecay`, and `Momentum` optimizers. The
 net_opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
 ```
 
-> For more information about using an optimizer in mindspore, see [Optimizer](https://www.mindspore.cn/tutorials/en/master/optimization.html#optimizer).
+> For more information about using an optimizer in MindSpore, see [Optimizer](https://www.mindspore.cn/tutorials/en/master/beginner/train.html#optimizer-functions).
 
 ## Training and Saving the Model
-- 
Gitee
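For context, the loss function and optimizer configured in the final hunk are typically handed to `Model` together with the network and dataset from the tutorial. A minimal sketch, assuming the tutorial's `net` (LeNet5) and `ds_train` (MNIST dataset) objects, which are not defined here:

```python
from mindspore import nn
from mindspore.nn.metrics import Accuracy
from mindspore.train import Model
from mindspore.train.callback import LossMonitor

# `net` and `ds_train` are the LeNet5 network and MNIST dataset built
# earlier in the tutorial; they are assumed, not defined here.
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

model = Model(net, net_loss, net_opt, metrics={"Accuracy": Accuracy()})
model.train(1, ds_train, callbacks=[LossMonitor(125)], dataset_sink_mode=False)
```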