From cf6b9a14b321663b91844fbd5615828e38d9eef1 Mon Sep 17 00:00:00 2001
From: lvmingfu
Date: Mon, 18 Apr 2022 17:38:56 +0800
Subject: [PATCH] modify code formats

---
 docs/mindspore/source_en/migration_guide/sample_code.md | 2 +-
 docs/mindspore/source_zh_cn/design/comm_fusion.md       | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/mindspore/source_en/migration_guide/sample_code.md b/docs/mindspore/source_en/migration_guide/sample_code.md
index 9e87cd6170..16cd23f9d2 100644
--- a/docs/mindspore/source_en/migration_guide/sample_code.md
+++ b/docs/mindspore/source_en/migration_guide/sample_code.md
@@ -25,7 +25,7 @@ ResNet50 is a classic deep neural network in CV, which attracts more developers'
 
 The official PyTorch implementation script can be found at [torchvision model](https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py) or [Nvidia PyTorch implementation script](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5), which includes implementations of the mainstream ResNet family of networks (ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152). The dataset used for ResNet50 is ImageNet2012, and the convergence accuracy can be found in [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/#model-description).
 
-Developers can run PyTorch-based ResNet50 scripts directly on the benchmark hardware environment and then compute the performance data, or they can refer to the official data on the same hardware environment. For example, when we benchmark the Nvidia DGX-1 32GB (8x V100 32GB) hardware, we can refer to [Nvidia's official ResNet50 performance data](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v15#training-performance-nvidia-dgx-1-32gb-8x-v100-32gb).
+Developers can run PyTorch-based ResNet50 scripts directly on the benchmark hardware environment and then compute the performance data, or they can refer to the official data on the same hardware environment. For example, when we benchmark the Nvidia DGX-1 32GB (8x V100 32GB) hardware, we can refer to [Nvidia's official ResNet50 performance data](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5#training-performance-nvidia-dgx-1-32gb-8x-v100-32gb).
 
 ### Reproduce the Migration Target
 
diff --git a/docs/mindspore/source_zh_cn/design/comm_fusion.md b/docs/mindspore/source_zh_cn/design/comm_fusion.md
index 20e00fea84..1e47797dd6 100644
--- a/docs/mindspore/source_zh_cn/design/comm_fusion.md
+++ b/docs/mindspore/source_zh_cn/design/comm_fusion.md
@@ -46,7 +46,7 @@ MindSpore提供两种接口来使能通信融合,下面分别进行介绍。
 
 #### 自动并行场景下的配置
 
-在自动并行或半自动并行场景下,用户在通过`context.set_auto_parallel_context`来配置并行策略时,可以利用该接口提供的`comm_fusion`参数来设置并行策略,用户可以指定用index方法还是fusion buffer的方法。具体参数说明请参照 [分布式并行接口说明](auto_parallel.md)。
+在自动并行或半自动并行场景下,用户在通过`context.set_auto_parallel_context`来配置并行策略时,可以利用该接口提供的`comm_fusion`参数来设置并行策略,用户可以指定用index方法还是fusion buffer的方法。
 
 #### 利用`Cell`提供的接口
 
-- 
Gitee
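
The paragraph touched in `comm_fusion.md` describes enabling communication fusion through the `comm_fusion` parameter of `context.set_auto_parallel_context`, choosing either the index method or the fusion-buffer method. As an illustration that is not part of the patch itself, the minimal Python sketch below shows one way such a configuration might look; the dict layout and the `"size"` (fusion-buffer) mode value reflect my reading of the MindSpore 1.7-era API and should be verified against the official distributed parallel interface documentation.

```python
# Illustrative sketch only (not part of the patch above): configuring gradient
# AllReduce fusion via the `comm_fusion` argument of `context.set_auto_parallel_context`.
# The dict layout and the "size" fusion-buffer mode follow my understanding of the
# MindSpore 1.7-era API; check the official docs before relying on it.
from mindspore import context
from mindspore.communication import init

init()  # requires a configured distributed environment (e.g. rank table or OpenMPI)

context.set_auto_parallel_context(
    parallel_mode=context.ParallelMode.SEMI_AUTO_PARALLEL,
    # Fuse AllReduce operators with the fusion-buffer ("size") strategy, using
    # 32 MB buffers; the patched paragraph also mentions an index-based method.
    comm_fusion={"allreduce": {"mode": "size", "config": 32}},
)
```

Fusing many small AllReduce operators into larger buffers reduces per-operator launch and synchronization overhead, which is the motivation the surrounding comm_fusion document gives for communication fusion.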