From a917b8a221a0e664f9ff3290b8db69477037bd26 Mon Sep 17 00:00:00 2001
From: huanxiaoling <3174348550@qq.com>
Date: Fri, 23 Sep 2022 15:17:25 +0800
Subject: [PATCH] modify the wrong links in files

---
 docs/mindspore/source_en/design/glossary.md              | 6 +++---
 .../source_en/note/api_mapping/pytorch_api_mapping.md    | 4 ++--
 .../note/api_mapping/pytorch_diff/stop_gradient.md       | 4 ++--
 .../source_en/note/api_mapping/tensorflow_api_mapping.md | 2 +-
 tutorials/source_en/advanced/lenet_mnist.md              | 2 +-
 5 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/mindspore/source_en/design/glossary.md b/docs/mindspore/source_en/design/glossary.md
index c89650e91a..90995b33de 100644
--- a/docs/mindspore/source_en/design/glossary.md
+++ b/docs/mindspore/source_en/design/glossary.md
@@ -65,8 +65,8 @@
 | TFRecord | Data format defined by TensorFlow. |
 | Tensor | A tensor is a generalization of vectors and matrices and is easily understood as a multidimensional array, scalar, and matrix. |
 | Broadcast |  In matrix mathematical operations, the shape of the operands is extended to a dimension compatible with the operation. In distributed parallelism, the parameters on one card are synchronized to other cards. |
-| Computational Graphs on Devices | The entire graph is executed on the device to reduce the interaction overheads between the host and device. For details see [On-Device Execution](https://www.mindspore.cn/docs/en/master/design/on_device.html). |
-| Cyclic Sinking | Cyclic sinking is optimized based on on-device execution to further reduce the number of interactions between the host and device. For details see [On-Device Execution](https://www.mindspore.cn/docs/en/master/design/on_device.html). |
-| Data Sinking | Sinking means that data is directly transmitted to the device through a channel. For details see [On-Device Execution](https://www.mindspore.cn/docs/en/master/design/on_device.html). |
+| Computational Graphs on Devices | The entire graph is executed on the device to reduce the interaction overheads between the host and device. |
+| Cyclic Sinking | Cyclic sinking is optimized based on on-device execution to further reduce the number of interactions between the host and device. |
+| Data Sinking | Sinking means that data is directly transmitted to the device through a channel. |
 | Graph Mode |  Static graph mode or graph mode. In this mode, the neural network model is compiled into an entire graph, and then the graph is delivered for execution. This mode uses graph optimization to improve the running performance and facilitates large-scale deployment and cross-platform running. |
 | PyNative Mode | Dynamic graph mode. In this mode, operators in the neural network are delivered and executed one by one, facilitating the compilation and debugging of the neural network model. |
diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
index fba7567ecb..d412b0d2fa 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
@@ -162,9 +162,9 @@ More MindSpore developers are also welcome to participate in improving the mappi
 | PyTorch 1.5.0 APIs | MindSpore APIs | Description |
 | ------------------------------------------------------- | ------------------------------------------------------------------------ | ------------------------------------------------------- |
 | [torch.autograd.backward](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.backward) | [mindspore.ops.GradOperation](https://mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.GradOperation.html#mindspore.ops.GradOperation) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/GradOperation.html) |
-| [torch.autograd.enable_grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.enable_grad) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-calculating-gradients) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/stop_gradient.html) |
+| [torch.autograd.enable_grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.enable_grad) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stop-gradient) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/stop_gradient.html) |
 | [torch.autograd.grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.grad) | [mindspore.ops.GradOperation](https://mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.GradOperation.html#mindspore.ops.GradOperation) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/GradOperation.html) |
-| [torch.autograd.no_grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.no_grad) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-calculating-gradients) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/stop_gradient.html) |
+| [torch.autograd.no_grad](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.no_grad) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stop-gradient) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/stop_gradient.html) |
 | [torch.autograd.variable](https://pytorch.org/docs/1.5.0/autograd.html#torch.autograd.variable-deprecated)| [mindspore.Parameter](https://mindspore.cn/docs/en/master/api_python/mindspore/mindspore.Parameter.html#mindspore.Parameter) | |

 ## torch.cuda
diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_diff/stop_gradient.md b/docs/mindspore/source_en/note/api_mapping/pytorch_diff/stop_gradient.md
index 69dbc9126c..edabb8a28f 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_diff/stop_gradient.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_diff/stop_gradient.md
@@ -24,10 +24,10 @@ For more information, see [torch.autograd.no_grad](https://pytorch.org/docs/1.5.
 mindspore.ops.stop_gradient(input)
 ```

-For more information, see [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-calculating-gradients).
+For more information, see [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stop-gradient).

 ## Differences

 PyTorch: Use `torch.autograd.enable_grad` to enable gradient calculation, and `torch.autograd.no_grad` to disable gradient calculation.

-MindSpore: Use [stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-calculating-gradients) to disable calculation of gradient for certain operators.
+MindSpore: Use [stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stop-gradient) to disable calculation of gradient for certain operators.
diff --git a/docs/mindspore/source_en/note/api_mapping/tensorflow_api_mapping.md b/docs/mindspore/source_en/note/api_mapping/tensorflow_api_mapping.md
index 01c5da4f9b..f22b94b9e4 100644
--- a/docs/mindspore/source_en/note/api_mapping/tensorflow_api_mapping.md
+++ b/docs/mindspore/source_en/note/api_mapping/tensorflow_api_mapping.md
@@ -24,7 +24,7 @@ More MindSpore developers are also welcome to participate in improving the mappi
 | [tf.shape](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/shape) | [mindspore.ops.Shape](https://mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.Shape.html) | |
 | [tf.size](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/size) | [mindspore.ops.Size](https://mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.Size.html) | |
 | [tf.slice](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/slice) | [mindspore.ops.Slice](https://mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.Slice.html) | |
-| [tf.stop_gradient](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/stop_gradient) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stopping-calculating-gradients) | |
+| [tf.stop_gradient](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/stop_gradient) | [mindspore.ops.stop_gradient](https://www.mindspore.cn/tutorials/en/master/beginner/autograd.html#stop-gradient) | |
 | [tf.Tensor](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Tensor) | [mindspore.Tensor](https://mindspore.cn/docs/en/master/api_python/mindspore/mindspore.Tensor.html) | |
 | [tf.tile](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/tile) | [mindspore.ops.Tile](https://mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.Tile.html) | |
 | [tf.transpose](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/transpose) | [mindspore.ops.Transpose](https://mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.Transpose.html) | |
diff --git a/tutorials/source_en/advanced/lenet_mnist.md b/tutorials/source_en/advanced/lenet_mnist.md
index 09ae192413..2106c9a66a 100644
--- a/tutorials/source_en/advanced/lenet_mnist.md
+++ b/tutorials/source_en/advanced/lenet_mnist.md
@@ -165,7 +165,7 @@ ms.load_param_into_net(network, param_dict)
 []
 ```

-> For more information about loading a model in mindspore, see [Loading the Model](https://www.mindspore.cn/tutorials/en/master/beginner/save_load.html#loading-the-model).
+> For more information about loading a model in mindspore, see [Saving and Loading the Model Weight](https://www.mindspore.cn/tutorials/en/master/beginner/save_load.html#saving-and-loading-the-model-weight).

 ## Validating the Model
--
Gitee
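
The stop_gradient.md page retargeted by this patch contrasts `torch.autograd.no_grad` with `mindspore.ops.stop_gradient`. As a minimal sketch of that difference (not part of the patch itself, and assuming a recent MindSpore release that provides the functional `mindspore.grad` and `mindspore.ops.stop_gradient` interfaces):

```python
# Sketch only: illustrates the stop_gradient behaviour referenced in stop_gradient.md.
# Assumes a recent MindSpore version exposing mindspore.grad and mindspore.ops.stop_gradient.
import mindspore as ms
from mindspore import ops

def net(x, y):
    out1 = x * y                      # gradients flow through this branch
    out2 = ops.stop_gradient(x * y)   # gradients are blocked on this branch
    return out1 + out2

grad_fn = ms.grad(net, grad_position=(0, 1))
x = ms.Tensor(2.0, ms.float32)
y = ms.Tensor(3.0, ms.float32)
# Only the non-stopped branch contributes, so the gradients are (y, x) = (3.0, 2.0).
print(grad_fn(x, y))
```

In PyTorch, by contrast, `torch.autograd.no_grad` disables gradient tracking for a whole scope rather than for a single value, which is the distinction the updated page documents.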