diff --git a/install/mindspore_cpu_install.md b/install/mindspore_cpu_install.md
index 721cad66b18b2db7b7fdc312765fc60c4a5db594..da0a14f902c3df02fb3ea8e9a9b8019ffabe8615 100644
--- a/install/mindspore_cpu_install.md
+++ b/install/mindspore_cpu_install.md
@@ -21,7 +21,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 x86_64 - Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5 - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 **安装依赖:** 与可执行文件安装依赖相同 |
+| MindSpore 0.7.0-beta | - Ubuntu 18.04 x86_64 - Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5 - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.7/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 **安装依赖:** 与可执行文件安装依赖相同 |
- GCC 7.3.0可以直接通过apt命令安装。
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
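For the offline case mentioned in the note above, a minimal sketch of installing the compiler and the Python dependencies manually on Ubuntu 18.04 (the apt package selection is an assumption; the authoritative list is the r0.7 `requirements.txt`):

```bash
# Install the toolchain pieces apt can provide (GCC 7.x is the Ubuntu 18.04 default).
sudo apt-get update
sudo apt-get install -y gcc g++ patch

# Pre-install the Python dependencies while the network is available,
# so the .whl package can later be installed without network access.
pip install -r requirements.txt
```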
@@ -62,7 +62,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. 在源码根目录下执行如下命令编译MindSpore。
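The build command itself falls outside this hunk; as a rough sketch of what a CPU-backend build and install typically looks like (the `build.sh` flags and the wheel location are assumptions, not taken from this patch):

```bash
# Build the CPU backend; -e selects the backend and -j the number of parallel jobs (assumed flags).
bash build.sh -e cpu -j4

# Install whichever wheel the build produced, without hard-coding its output directory.
pip install "$(find . -name 'mindspore*.whl' | head -n 1)"
```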
@@ -97,7 +97,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---------------------- | :------------------ | :----------------------------------------------------------- | :----------------------- |
-| MindArmour master | - Ubuntu 18.04 x86_64 - Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 |
+| MindArmour 0.7.0-beta | - Ubuntu 18.04 x86_64 - Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.7/setup.py) | 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`setup.py`中的依赖项,其余情况需自行安装。
@@ -122,7 +122,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r0.7
```
2. 在源码根目录下,执行如下命令编译并安装MindArmour。
diff --git a/install/mindspore_cpu_install_en.md b/install/mindspore_cpu_install_en.md
index d21f05bdeb3d451ff36588a8346a9753bdf831d6..4e47fb44d0df0addd7a9784d09c256baa9de543d 100644
--- a/install/mindspore_cpu_install_en.md
+++ b/install/mindspore_cpu_install_en.md
@@ -21,7 +21,7 @@ This document describes how to quickly install MindSpore in a Ubuntu system with
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 x86_64 - Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5 - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 same as the executable file installation dependencies. |
+| MindSpore 0.7.0-beta | - Ubuntu 18.04 x86_64 - Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5 - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.7/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 same as the executable file installation dependencies. |
- GCC 7.3.0 can be installed by using the apt command.
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -62,7 +62,7 @@ This document describes how to quickly install MindSpore in a Ubuntu system with
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. Run the following command in the root directory of the source code to compile MindSpore:
@@ -97,7 +97,7 @@ If you need to conduct AI model security research or enhance the security of the
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindArmour master | - Ubuntu 18.04 x86_64 - Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour 0.7.0-beta | - Ubuntu 18.04 x86_64 - Ubuntu 18.04 aarch64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.7/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -122,7 +122,7 @@ If you need to conduct AI model security research or enhance the security of the
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r0.7
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
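The command referenced by this step is not shown in the hunk; a hedged sketch of a typical from-source install of a Python package such as MindArmour (whether the project installs via `setup.py` directly is an assumption):

```bash
# Build and install MindArmour from the cloned r0.7 sources.
python setup.py install

# Alternatively, build a wheel first and install it with pip.
python setup.py bdist_wheel
pip install dist/mindarmour-*.whl
```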
diff --git a/install/mindspore_cpu_win_install.md b/install/mindspore_cpu_win_install.md
index d2cf00feab8578c530090516fb1f0b1542c6ac3a..626575438cdc29275fa07e8d7794877583f30522 100644
--- a/install/mindspore_cpu_win_install.md
+++ b/install/mindspore_cpu_win_install.md
@@ -20,7 +20,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh - [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404 - [CMake](https://cmake.org/download/) 3.14.1 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 **安装依赖:** 与可执行文件安装依赖相同 |
+| MindSpore 0.7.0-beta | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.7/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh - [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404 - [CMake](https://cmake.org/download/) 3.14.1 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 **安装依赖:** 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
@@ -62,7 +62,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. 在源码根目录下执行如下命令编译MindSpore。
diff --git a/install/mindspore_cpu_win_install_en.md b/install/mindspore_cpu_win_install_en.md
index ea2a2e2137d53f0c06071ab961db32aef42dc366..fe564d1530066e3d7eedd3c9d1146ce4492b4604 100644
--- a/install/mindspore_cpu_win_install_en.md
+++ b/install/mindspore_cpu_win_install_en.md
@@ -20,7 +20,7 @@ This document describes how to quickly install MindSpore in a Windows system wit
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh - [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404 - [CMake](https://cmake.org/download/) 3.14.1 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 **Installation dependencies:** same as the executable file installation dependencies. |
+| MindSpore 0.7.0-beta | Windows 10 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.7/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [MinGW-W64 GCC-7.3.0](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/7.3.0/threads-posix/seh/x86_64-7.3.0-release-posix-seh-rt_v5-rev0.7z) x86_64-posix-seh - [ActivePerl](http://downloads.activestate.com/ActivePerl/releases/5.24.3.2404/ActivePerl-5.24.3.2404-MSWin32-x64-404865.exe) 5.24.3.2404 - [CMake](https://cmake.org/download/) 3.14.1 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 **Installation dependencies:** same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
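Whichever way the dependencies are installed, a quick, illustrative check that the release described in this document is the one actually present in the environment:

```bash
# Print the installed MindSpore version (expected to match the 0.7.0-beta release documented here).
python -c "import mindspore; print(mindspore.__version__)"
```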
@@ -62,7 +62,7 @@ This document describes how to quickly install MindSpore in a Windows system wit
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. Run the following command in the root directory of the source code to compile MindSpore:
diff --git a/install/mindspore_d_install.md b/install/mindspore_d_install.md
index a0d6eb70a9f6d7f7c4452a593695e95c024c135f..fb352dc5108ba35153f5e4db38976545d829ba2d 100644
--- a/install/mindspore_d_install.md
+++ b/install/mindspore_d_install.md
@@ -32,7 +32,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)) - [gmp](https://gmplib.org/download/gmp/) 6.1.2 - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)) - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 - [gmp](https://gmplib.org/download/gmp/) 6.1.2 **安装依赖:** 与可执行文件安装依赖相同 |
+| MindSpore 0.7.0-beta | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)) - [gmp](https://gmplib.org/download/gmp/) 6.1.2 - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.7/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)) - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 - [gmp](https://gmplib.org/download/gmp/) 6.1.2 **安装依赖:** 与可执行文件安装依赖相同 |
- 确认当前用户有权限访问Ascend 910 AI处理器配套软件包(对应版本[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816))的安装路径`/usr/local/Ascend`,若无权限,需要root用户将当前用户添加到`/usr/local/Ascend`所在的用户组,具体配置请详见配套软件包的说明文档。
- GCC 7.3.0可以直接通过apt命令安装。
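For the permission note above, a minimal sketch of checking access to the Ascend package path and adding the current user to the owning group (the group name is read from the directory itself rather than assumed; a re-login is required afterwards):

```bash
# Inspect the owning user and group of the Ascend installation path.
ls -ld /usr/local/Ascend

# Add the current user to whatever group owns that path, then log in again for it to take effect.
sudo usermod -a -G "$(stat -c %G /usr/local/Ascend)" "$USER"
```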
@@ -81,7 +81,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. 在源码根目录下,执行如下命令编译MindSpore。
@@ -180,7 +180,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [node.js](https://nodejs.org/en/download/) >= 10.19.0 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 **安装依赖:** 与可执行文件安装依赖相同 |
+| MindInsight 0.7.0-beta | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r0.7/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [node.js](https://nodejs.org/en/download/) >= 10.19.0 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 **安装依赖:** 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
@@ -205,7 +205,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindinsight.git
+ git clone https://gitee.com/mindspore/mindinsight.git -b r0.7
```
> **不能**直接在仓库主页下载zip包获取源码。
@@ -245,7 +245,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindArmour master | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 |
+| MindArmour 0.7.0-beta | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.7/setup.py) | 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`setup.py`中的依赖项,其余情况需自行安装。
@@ -270,7 +270,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r0.7
```
2. 在源码根目录下,执行如下命令编译并安装MindArmour。
diff --git a/install/mindspore_d_install_en.md b/install/mindspore_d_install_en.md
index 827f23bb76e748d6304eabf3444d7974956bc0a1..47171835400f7245db99d0b114c7a7d85c8e957a 100644
--- a/install/mindspore_d_install_en.md
+++ b/install/mindspore_d_install_en.md
@@ -32,7 +32,7 @@ This document describes how to quickly install MindSpore in an Ascend AI process
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)) - [gmp](https://gmplib.org/download/gmp/) 6.1.2 - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.6/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)) - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 - [gmp](https://gmplib.org/download/gmp/) 6.1.2 **Installation dependencies:** same as the executable file installation dependencies. |
+| MindSpore 0.7.0-beta | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)) - [gmp](https://gmplib.org/download/gmp/) 6.1.2 - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.7/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - Ascend 910 AI processor software package(Version:[Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)) - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 - [gmp](https://gmplib.org/download/gmp/) 6.1.2 **Installation dependencies:** same as the executable file installation dependencies. |
- Confirm that the current user has the right to access the installation path `/usr/local/Ascend` of the Ascend 910 AI processor software package (Version: [Atlas Data Center Solution V100R020C10T200](https://support.huawei.com/enterprise/zh/ascend-computing/atlas-data-center-solution-pid-251167910/software/251661816)). If not, the root user needs to add the current user to the user group where `/usr/local/Ascend` is located. For the specific configuration, please refer to the software package instruction document.
- GCC 7.3.0 can be installed by using the apt command.
@@ -81,7 +81,7 @@ The compilation and installation must be performed on the Ascend 910 AI processo
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. Run the following command in the root directory of the source code to compile MindSpore:
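The compile command is outside this hunk; a hedged sketch for the Ascend backend, assuming the same `build.sh` entry point as the other platforms (the backend flag is an assumption):

```bash
# Build the Ascend 910 backend from the r0.7 sources.
bash build.sh -e ascend

# Install the resulting wheel.
pip install "$(find . -name 'mindspore*.whl' | head -n 1)"
```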
@@ -180,7 +180,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [node.js](https://nodejs.org/en/download/) >= 10.19.0 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 **Installation dependencies:** same as the executable file installation dependencies. |
+| MindInsight 0.7.0-beta | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r0.7/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [node.js](https://nodejs.org/en/download/) >= 10.19.0 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 **Installation dependencies:** same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -205,7 +205,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindinsight.git
+ git clone https://gitee.com/mindspore/mindinsight.git -b r0.7
```
> You are **not** supposed to obtain the source code from the zip package downloaded from the repository homepage.
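Once MindInsight built from this source is installed, the service is driven by its command-line tool; a sketch of starting and stopping it (the port value is illustrative, and the `stop` form mirroring `start` is an assumption):

```bash
# Start the MindInsight web service on a chosen port, then stop it when done.
mindinsight start --port 8080
mindinsight stop --port 8080
```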
@@ -247,7 +247,7 @@ If you need to conduct AI model security research or enhance the security of the
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindArmour master | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour 0.7.0-beta | - Ubuntu 18.04 aarch64 - Ubuntu 18.04 x86_64 - EulerOS 2.8 aarch64 - EulerOS 2.5 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.7/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -272,7 +272,7 @@ If you need to conduct AI model security research or enhance the security of the
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r0.7
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
diff --git a/install/mindspore_gpu_install.md b/install/mindspore_gpu_install.md
index 6c4e175a851cc2a9a2ef7b3ca9d35f2ee1fd8243..3c53f899a88f787c742d864b6bda2123f04bc34e 100644
--- a/install/mindspore_gpu_install.md
+++ b/install/mindspore_gpu_install.md
@@ -28,7 +28,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (可选,单机多卡/多机多卡训练需要) - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.7.6-1 (可选,单机多卡/多机多卡训练需要) - [gmp](https://gmplib.org/download/gmp/) 6.1.2 - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 - [gmp](https://gmplib.org/download/gmp/) 6.1.2 **安装依赖:** 与可执行文件安装依赖相同 |
+| MindSpore 0.7.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (可选,单机多卡/多机多卡训练需要) - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.7.6-1 (可选,单机多卡/多机多卡训练需要) - [gmp](https://gmplib.org/download/gmp/) 6.1.2 - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.7/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 - [gmp](https://gmplib.org/download/gmp/) 6.1.2 **安装依赖:** 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
- 为了方便用户使用,MindSpore降低了对Autoconf、Libtool、Automake版本的依赖,可以使用系统自带版本。
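Before building for the GPU backend, the dependencies listed above can be sanity-checked with standard tools; an illustrative sketch (the reported versions must match the table, e.g. CUDA 10.1):

```bash
# Confirm the CUDA toolkit, driver, and (optional) OpenMPI versions visible in the environment.
nvcc --version
nvidia-smi
mpirun --version   # only needed for single-node/multi-node multi-GPU training
```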
@@ -64,7 +64,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. 在源码根目录下执行如下命令编译MindSpore。
@@ -124,7 +124,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [node.js](https://nodejs.org/en/download/) >= 10.19.0 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 **安装依赖:** 与可执行文件安装依赖相同 |
+| MindInsight 0.7.0-beta | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - 其他依赖项参见[requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r0.7/requirements.txt) | **编译依赖:** - [Python](https://www.python.org/downloads/) 3.7.5 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [node.js](https://nodejs.org/en/download/) >= 10.19.0 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 **安装依赖:** 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`requirements.txt`中的依赖项,其余情况需自行安装。
@@ -149,7 +149,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindinsight.git
+ git clone https://gitee.com/mindspore/mindinsight.git -b r0.7
```
> **不能**直接在仓库主页下载zip包获取源码。
@@ -189,7 +189,7 @@
| 版本号 | 操作系统 | 可执行文件安装依赖 | 源码编译安装依赖 |
| ---------------------- | :------------------ | :----------------------------------------------------------- | :----------------------- |
-| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py) | 与可执行文件安装依赖相同 |
+| MindArmour 0.7.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - 其他依赖项参见[setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.7/setup.py) | 与可执行文件安装依赖相同 |
- 在联网状态下,安装whl包时会自动下载`setup.py`中的依赖项,其余情况需自行安装。
@@ -214,7 +214,7 @@
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r0.7
```
2. 在源码根目录下,执行如下命令编译并安装MindArmour。
diff --git a/install/mindspore_gpu_install_en.md b/install/mindspore_gpu_install_en.md
index 70ee23959882765e907b0db917d68a256d2b1d8a..c75107b6ef5d4e0ea6dfb65f902ddb3e2beda219 100644
--- a/install/mindspore_gpu_install_en.md
+++ b/install/mindspore_gpu_install_en.md
@@ -28,7 +28,7 @@ This document describes how to quickly install MindSpore in a NVIDIA GPU environ
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindSpore master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.7.6-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) - [gmp](https://gmplib.org/download/gmp/) 6.1.2 - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/master/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 - [gmp](https://gmplib.org/download/gmp/) 6.1.2 **Installation dependencies:** same as the executable file installation dependencies. |
+| MindSpore 0.7.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 - [OpenMPI](https://www.open-mpi.org/faq/?category=building#easy-build) 3.1.5 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) - [NCCL](https://docs.nvidia.com/deeplearning/sdk/nccl-install-guide/index.html#debian) 2.7.6-1 (optional, required for single-node/multi-GPU and multi-node/multi-GPU training) - [gmp](https://gmplib.org/download/gmp/) 6.1.2 - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindspore/blob/r0.7/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [patch](http://ftp.gnu.org/gnu/patch/) >= 2.5 - [Autoconf](https://www.gnu.org/software/autoconf) >= 2.69 - [Libtool](https://www.gnu.org/software/libtool) >= 2.4.6-29.fc30 - [Automake](https://www.gnu.org/software/automake) >= 1.15.1 - [CUDA 10.1](https://developer.nvidia.com/cuda-10.1-download-archive-base) - [CuDNN](https://developer.nvidia.com/rdp/cudnn-archive) >= 7.6 - [gmp](https://gmplib.org/download/gmp/) 6.1.2 **Installation dependencies:** same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during `.whl` package installation. In other cases, you need to manually install dependency items.
- For the convenience of users, MindSpore has relaxed its dependency on specific Autoconf, Libtool, and Automake versions; the default versions built into the system can be used.
@@ -64,7 +64,7 @@ This document describes how to quickly install MindSpore in a NVIDIA GPU environ
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. Run the following command in the root directory of the source code to compile MindSpore:
@@ -124,7 +124,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindInsight master | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/master/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [node.js](https://nodejs.org/en/download/) >= 10.19.0 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 **Installation dependencies:** same as the executable file installation dependencies. |
+| MindInsight 0.7.0-beta | - Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - For details about other dependency items, see [requirements.txt](https://gitee.com/mindspore/mindinsight/blob/r0.7/requirements.txt). | **Compilation dependencies:** - [Python](https://www.python.org/downloads/) 3.7.5 - [CMake](https://cmake.org/download/) >= 3.14.1 - [GCC](https://gcc.gnu.org/releases.html) 7.3.0 - [node.js](https://nodejs.org/en/download/) >= 10.19.0 - [wheel](https://pypi.org/project/wheel/) >= 0.32.0 - [pybind11](https://pypi.org/project/pybind11/) >= 2.4.3 **Installation dependencies:** same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `requirements.txt` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -149,7 +149,7 @@ If you need to analyze information such as model scalars, graphs, and model trac
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindinsight.git
+ git clone https://gitee.com/mindspore/mindinsight.git -b r0.7
```
> You are **not** supposed to obtain the source code from the zip package downloaded from the repository homepage.
@@ -191,7 +191,7 @@ If you need to conduct AI model security research or enhance the security of the
| Version | Operating System | Executable File Installation Dependencies | Source Code Compilation and Installation Dependencies |
| ---- | :--- | :--- | :--- |
-| MindArmour master | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore master - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/master/setup.py). | Same as the executable file installation dependencies. |
+| MindArmour 0.7.0-beta | Ubuntu 18.04 x86_64 | - [Python](https://www.python.org/downloads/) 3.7.5 - MindSpore 0.7.0-beta - For details about other dependency items, see [setup.py](https://gitee.com/mindspore/mindarmour/blob/r0.7/setup.py). | Same as the executable file installation dependencies. |
- When the network is connected, dependency items in the `setup.py` file are automatically downloaded during .whl package installation. In other cases, you need to manually install dependency items.
@@ -216,7 +216,7 @@ If you need to conduct AI model security research or enhance the security of the
1. Download the source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindarmour.git
+ git clone https://gitee.com/mindspore/mindarmour.git -b r0.7
```
2. Run the following command in the root directory of the source code to compile and install MindArmour:
diff --git a/tutorials/source_en/advanced_use/checkpoint_for_hybrid_parallel.md b/tutorials/source_en/advanced_use/checkpoint_for_hybrid_parallel.md
index 4843a034aa0846167e18bfd80404cedbb330477c..7cd04473cd325a82ea91134b48ceccd326fddd8e 100644
--- a/tutorials/source_en/advanced_use/checkpoint_for_hybrid_parallel.md
+++ b/tutorials/source_en/advanced_use/checkpoint_for_hybrid_parallel.md
@@ -262,7 +262,7 @@ User process:
3. Execute stage 2 training: There are two devices in stage 2 training environment. The weight shape of the MatMul operator on each device is \[4, 8]. Load the initialized model parameter data from the integrated checkpoint file and then perform training.
-> For details about the distributed environment configuration and training code, see [Distributed Training](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training_ascend.html).
+> For details about the distributed environment configuration and training code, see [Distributed Training](https://www.mindspore.cn/tutorial/en/r0.7/advanced_use/distributed_training_ascend.html).
>
> This document provides the example code for integrating checkpoint files and loading checkpoint files before distributed training. The code is for reference only.
diff --git a/tutorials/source_en/advanced_use/computer_vision_application.md b/tutorials/source_en/advanced_use/computer_vision_application.md
index 0ff68ce380132d8618de67ad3c7d4ac6ae98399d..6d001fb4eff0b38dfb30b935974b2e2a0cf93e22 100644
--- a/tutorials/source_en/advanced_use/computer_vision_application.md
+++ b/tutorials/source_en/advanced_use/computer_vision_application.md
@@ -18,7 +18,7 @@
-
+
## Overview
@@ -38,7 +38,7 @@ def classify(image):
The key point is to select a proper model. The model generally refers to a deep convolutional neural network (CNN), such as AlexNet, VGG, GoogleNet, and ResNet.
-MindSpore presets typical CNN models; developers can visit [model_zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official) to get more details.
+MindSpore presets typical CNN models; developers can visit [model_zoo](https://gitee.com/mindspore/mindspore/tree/r0.7/model_zoo/official) to get more details.
MindSpore supports the following image classification networks: LeNet, AlexNet, and ResNet.
@@ -61,7 +61,7 @@ Next, let's use MindSpore to solve the image classification task. The overall pr
5. Call the high-level `Model` API to train and save the model file.
6. Load the saved model for inference.
-> This example is for the hardware platform of the Ascend 910 AI processor. You can find the complete executable sample code at: .
+> This example is for the hardware platform of the Ascend 910 AI processor. You can find the complete executable sample code at: .
The key parts of the task process code are explained below.
@@ -145,7 +145,7 @@ CNN is a standard algorithm for image classification tasks. CNN uses a layered s
ResNet is recommended. First, it is deep enough with 34 layers, 50 layers, or 101 layers. The deeper the hierarchy, the stronger the representation capability, and the higher the classification accuracy. Second, it is learnable. The residual structure is used: the lower layer is directly connected to the upper layer through a shortcut connection, which solves the vanishing gradient problem caused by the network depth during backpropagation. In addition, the ResNet network has good performance in terms of recognition accuracy, model size, and parameter quantity.
-MindSpore Model Zoo has a ResNet [model](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py). The calling method is as follows:
+MindSpore Model Zoo has a ResNet [model](https://gitee.com/mindspore/mindspore/blob/r0.7/model_zoo/official/cv/resnet/src/resnet.py). The calling method is as follows:
```python
network = resnet50(class_num=10)
diff --git a/tutorials/source_en/advanced_use/customized_debugging_information.md b/tutorials/source_en/advanced_use/customized_debugging_information.md
index 0e2b71e874e68fd5c967ba61584668a0af117958..54943f331d68738eeba9574829ddd42e2ec7aac0 100644
--- a/tutorials/source_en/advanced_use/customized_debugging_information.md
+++ b/tutorials/source_en/advanced_use/customized_debugging_information.md
@@ -16,7 +16,7 @@
-
+
## Overview
@@ -263,7 +263,7 @@ val:[[1 1]
When the training result deviates from the expectation on Ascend, the input and output of the operator can be dumped for debugging through Asynchronous Data Dump.
-> `comm_ops` operators are not supported by Asynchronous Data Dump. `comm_ops` can be found in [Operator List](https://www.mindspore.cn/docs/en/master/operator_list.html).
+> `comm_ops` operators are not supported by Asynchronous Data Dump. `comm_ops` can be found in [Operator List](https://www.mindspore.cn/docs/en/r0.7/operator_list.html).
1. Turn on the switch to save graph IR: `context.set_context(save_graphs=True)`.
2. Execute training script.
diff --git a/tutorials/source_en/advanced_use/dashboard.md b/tutorials/source_en/advanced_use/dashboard.md
index 6287ce87cd8322b7dddc18868112d280eb658662..eb5aec6c096658ffa9796c9ea2f67d7d9ee33125 100644
--- a/tutorials/source_en/advanced_use/dashboard.md
+++ b/tutorials/source_en/advanced_use/dashboard.md
@@ -16,7 +16,7 @@
-
+
## Overview
diff --git a/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md b/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
index 877b07e0a6dcc8e7980a883927f5d1f235bcd213..07a648764388423aacd9c5bdc75cf33371253fe5 100644
--- a/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
+++ b/tutorials/source_en/advanced_use/debugging_in_pynative_mode.md
@@ -13,7 +13,7 @@
-
+
## Overview
diff --git a/tutorials/source_en/advanced_use/differential_privacy.md b/tutorials/source_en/advanced_use/differential_privacy.md
index 33635e67bdc654970158e0bafa8fe4184b0ee77d..4c0ca8a6d468ac4e2e8d0fdddaea84029a1112b6 100644
--- a/tutorials/source_en/advanced_use/differential_privacy.md
+++ b/tutorials/source_en/advanced_use/differential_privacy.md
@@ -16,7 +16,7 @@
-
+
## Overview
@@ -45,7 +45,7 @@ MindArmour differential privacy module Differential-Privacy implements the diffe
The LeNet model and MNIST dataset are used as an example to describe how to use the differential privacy optimizer to train a neural network model on MindSpore.
-> This example is for the Ascend 910 AI processor. You can download the complete sample code from .
+> This example is for the Ascend 910 AI processor. You can download the complete sample code from .
## Implementation
@@ -85,7 +85,7 @@ TAG = 'Lenet5_train'
### Configuring Parameters
-1. Set the running environment, dataset path, model training parameters, checkpoint storage parameters, and differential privacy parameters. Replace 'data_path' with your data path. For more configurations, see .
+1. Set the running environment, dataset path, model training parameters, checkpoint storage parameters, and differential privacy parameters. Replace 'data_path' with your data path. For more configurations, see .
```python
cfg = edict({
diff --git a/tutorials/source_en/advanced_use/distributed_training_ascend.md b/tutorials/source_en/advanced_use/distributed_training_ascend.md
index 6cfb3db2427102e17d88aa4ff1c02c56124ba1fb..d4531d1331b5fee4f22a6642970bee4913cdc44e 100644
--- a/tutorials/source_en/advanced_use/distributed_training_ascend.md
+++ b/tutorials/source_en/advanced_use/distributed_training_ascend.md
@@ -20,12 +20,12 @@
-
+
## Overview
This tutorial describes how to train the ResNet-50 network in data parallel and automatic parallel modes on MindSpore based on the Ascend 910 AI processor.
-> Download address of the complete sample code:
+> Download address of the complete sample code:
## Preparations
@@ -156,7 +156,7 @@ Different from the single-node system, the multi-node system needs to transfer t
## Defining the Network
-In data parallel and automatic parallel modes, the network definition method is the same as that in a single-node system. The reference code is as follows:
+In data parallel and automatic parallel modes, the network definition method is the same as that in a single-node system. The reference code is as follows:
## Defining the Loss Function and Optimizer
diff --git a/tutorials/source_en/advanced_use/graph_kernel_fusion.md b/tutorials/source_en/advanced_use/graph_kernel_fusion.md
index 76a9bd8642001f4807a1ab417f6725c4e30b6191..6d7b8256eb591d4cb85408b4dadf69c02f83b46b 100644
--- a/tutorials/source_en/advanced_use/graph_kernel_fusion.md
+++ b/tutorials/source_en/advanced_use/graph_kernel_fusion.md
@@ -14,7 +14,7 @@
-
+
## Overview
@@ -100,7 +100,7 @@ context.set_context(enable_graph_kernel=True)
2. `BERT-large` training network
- Take the training model of the `BERT-large` network as an example. For details about the dataset and training script, see . You only need to modify the `context` parameter.
+ Take the training model of the `BERT-large` network as an example. For details about the dataset and training script, see . You only need to modify the `context` parameter.
## Effect Evaluation
diff --git a/tutorials/source_en/advanced_use/hardware_resources.md b/tutorials/source_en/advanced_use/hardware_resources.md
index cb9dc16d3411acf5310897bc405badea0d46e9d3..b18d83ca7725e2bd17f8b87019ff4048fe663343 100644
--- a/tutorials/source_en/advanced_use/hardware_resources.md
+++ b/tutorials/source_en/advanced_use/hardware_resources.md
@@ -11,12 +11,12 @@
-
+
## Overview
Users can view hardware resources such as Ascend AI processor, CPU, memory, etc., so as to allocate appropriate resources for training.
-Just [Start MindInsight](https://www.mindspore.cn/tutorial/en/master/advanced_use/mindinsight_commands.html#start-the-service), and click "Hardware Resources" in the navigation bar to view it.
+Just [Start MindInsight](https://www.mindspore.cn/tutorial/en/r0.7/advanced_use/mindinsight_commands.html#start-the-service), and click "Hardware Resources" in the navigation bar to view it.
## Ascend AI Processor Board
diff --git a/tutorials/source_en/advanced_use/host_device_training.md b/tutorials/source_en/advanced_use/host_device_training.md
index 135518dff1493f955ed383367b9da18020b9898b..5563dcd3b8ec15f90a964d3fec331cafeda40656 100644
--- a/tutorials/source_en/advanced_use/host_device_training.md
+++ b/tutorials/source_en/advanced_use/host_device_training.md
@@ -12,7 +12,7 @@
-
+
## Overview
@@ -21,10 +21,10 @@ the number of required accelerators is too overwhelming for people to access, re
efficient method for addressing huge model problem.
In MindSpore, users can easily implement hybrid training by configuring trainable parameters and necessary operators to run on hosts, and other operators to run on accelerators.
-This tutorial introduces how to train [Wide&Deep](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/recommend/wide_and_deep) in the Host+Ascend 910 AI Accelerator mode.
+This tutorial introduces how to train [Wide&Deep](https://gitee.com/mindspore/mindspore/tree/r0.7/model_zoo/official/recommend/wide_and_deep) in the Host+Ascend 910 AI Accelerator mode.
## Preliminaries
-1. Prepare the model. The Wide&Deep code can be found at: , in which `train_and_eval_auto_parallel.py` is the main function for training,
+1. Prepare the model. The Wide&Deep code can be found at: , in which `train_and_eval_auto_parallel.py` is the main function for training,
the `src/` directory contains the model definition, data processing, and configuration files, and the `script/` directory contains the launch scripts for different modes.
2. Prepare the dataset. The dataset can be found at: . Use the script `src/preprocess_data.py` to transform dataset into MindRecord format.
diff --git a/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md b/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md
index c6a1274665f22082e9cf3ea220caf4389bc18bc9..87f0974da0e37d8baccddbdc17d2829be8cb8dd3 100644
--- a/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md
+++ b/tutorials/source_en/advanced_use/lineage_and_scalars_comparision.md
@@ -13,7 +13,7 @@
-
+
## Overview
diff --git a/tutorials/source_en/advanced_use/mindinsight_commands.md b/tutorials/source_en/advanced_use/mindinsight_commands.md
index c65d3d3aea20eb8ea5def821f1d4c8319126c0e3..1ec8105da84222d34d65ff6734ceec9b4d0ff4b6 100644
--- a/tutorials/source_en/advanced_use/mindinsight_commands.md
+++ b/tutorials/source_en/advanced_use/mindinsight_commands.md
@@ -13,7 +13,7 @@
-
+
## View the Command Help Information
diff --git a/tutorials/source_en/advanced_use/mixed_precision.md b/tutorials/source_en/advanced_use/mixed_precision.md
index 43bfbbabffc9af388c01cdb3b48a027d1db2ea05..ba4b301c12de236fc604a18fcf296c1440cab3d2 100644
--- a/tutorials/source_en/advanced_use/mixed_precision.md
+++ b/tutorials/source_en/advanced_use/mixed_precision.md
@@ -12,7 +12,7 @@
-
+
## Overview
diff --git a/tutorials/source_en/advanced_use/model_security.md b/tutorials/source_en/advanced_use/model_security.md
index 91b4e36a01e11299940b9d04a3d013b2f11e60f2..c38640465b4cd65ca5f7a0f44ca805828679f173 100644
--- a/tutorials/source_en/advanced_use/model_security.md
+++ b/tutorials/source_en/advanced_use/model_security.md
@@ -17,7 +17,7 @@
-
+
## Overview
@@ -31,7 +31,7 @@ At the beginning of AI algorithm design, related security threats are sometimes
This section describes how to use MindArmour in adversarial attack and defense by taking the Fast Gradient Sign Method (FGSM) attack algorithm and Natural Adversarial Defense (NAD) algorithm as examples.
-> The current sample is for CPU, GPU and Ascend 910 AI processor. You can find the complete executable sample code at:
+> The current sample is for CPU, GPU and Ascend 910 AI processor. You can find the complete executable sample code at:
> - `mnist_attack_fgsm.py`: contains attack code.
> - `mnist_defense_nad.py`: contains defense code.
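After the sample code referenced above is obtained and the MNIST dataset path inside each script is set, the two samples can be run directly; an illustrative invocation (no extra flags are assumed):

```bash
# Run the FGSM attack sample, then the NAD defense sample.
python mnist_attack_fgsm.py
python mnist_defense_nad.py
```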
diff --git a/tutorials/source_en/advanced_use/network_migration.md b/tutorials/source_en/advanced_use/network_migration.md
index 24c12287b21e6f720aea801536016955daa05322..6fe6d96a10bc61273ba915b1babbc4153e64ba31 100644
--- a/tutorials/source_en/advanced_use/network_migration.md
+++ b/tutorials/source_en/advanced_use/network_migration.md
@@ -19,7 +19,7 @@
-
+
## Overview
@@ -31,9 +31,9 @@ Before you start working on your scripts, prepare your operator assessment and h
### Operator Assessment
-Analyze the operators contained in the network to be migrated and figure out how MindSpore supports these operators based on the [Operator List](https://www.mindspore.cn/docs/en/master/operator_list.html).
+Analyze the operators contained in the network to be migrated and figure out how MindSpore supports these operators based on the [Operator List](https://www.mindspore.cn/docs/en/r0.7/operator_list.html).
-Take ResNet-50 as an example. The two major operators [Conv](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) and [BatchNorm](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) exist in the MindSpore Operator List.
+Take ResNet-50 as an example. The two major operators [Conv](https://www.mindspore.cn/api/en/r0.7/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d) and [BatchNorm](https://www.mindspore.cn/api/en/r0.7/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d) exist in the MindSpore Operator List.
If any operator does not exist, you are advised to perform the following operations:
@@ -59,17 +59,17 @@ Prepare the hardware environment, find a platform corresponding to your environm
MindSpore differs from TensorFlow and PyTorch in the network structure. Before migration, you need to clearly understand the original script and information of each layer, such as shape.
-> You can also use [MindConverter Tool](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/mindconverter) to automatically convert the PyTorch network definition script to MindSpore network definition script.
+> You can also use [MindConverter Tool](https://gitee.com/mindspore/mindinsight/tree/r0.7/mindinsight/mindconverter) to automatically convert the PyTorch network definition script to MindSpore network definition script.
The ResNet-50 network migration and training on the Ascend 910 is used as an example.
1. Import MindSpore modules.
- Import the corresponding MindSpore modules based on the required APIs. For details about the module list, see .
+ Import the corresponding MindSpore modules based on the required APIs. For details about the module list, see .
2. Load and preprocess a dataset.
- Use MindSpore to build the required dataset. Currently, MindSpore supports common datasets. You can call APIs in the original format, `MindRecord`, and `TFRecord`. In addition, MindSpore supports data processing and data augmentation. For details, see the [Data Preparation](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/data_preparation.html).
+ Use MindSpore to build the required dataset. Currently, MindSpore supports common datasets. You can call APIs in the original format, `MindRecord`, and `TFRecord`. In addition, MindSpore supports data processing and data augmentation. For details, see the [Data Preparation](https://www.mindspore.cn/tutorial/en/r0.7/use/data_preparation/data_preparation.html).
In this example, the CIFAR-10 dataset is loaded, which supports both single-GPU and multi-GPU scenarios.
@@ -81,7 +81,7 @@ The ResNet-50 network migration and training on the Ascend 910 is used as an exa
num_shards=device_num, shard_id=rank_id)
```
- Then, perform data augmentation, data cleaning, and batch processing. For details about the code, see .
+ Then, perform data augmentation, data cleaning, and batch processing. For details about the code, see .
3. Build a network.
@@ -216,7 +216,7 @@ The ResNet-50 network migration and training on the Ascend 910 is used as an exa
6. Build the entire network.
- The [ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py) network structure is formed by connecting multiple defined subnets. Follow the rule of defining subnets before using them and define all the subnets used in the `__init__` and connect subnets in the `construct`.
+ The [ResNet-50](https://gitee.com/mindspore/mindspore/blob/r0.7/model_zoo/official/cv/resnet/src/resnet.py) network structure is formed by connecting multiple defined subnets. Follow the rule of defining subnets before using them and define all the subnets used in the `__init__` and connect subnets in the `construct`.
7. Define a loss function and an optimizer.
@@ -237,7 +237,7 @@ The ResNet-50 network migration and training on the Ascend 910 is used as an exa
loss_scale = FixedLossScaleManager(config.loss_scale, drop_overflow_update=False)
```
- You can use a built-in assessment method of `Model` by setting the [metrics](https://www.mindspore.cn/tutorial/en/master/advanced_use/customized_debugging_information.html#mindspore-metrics) attribute.
+ You can use a built-in assessment method of `Model` by setting the [metrics](https://www.mindspore.cn/tutorial/en/r0.7/advanced_use/customized_debugging_information.html#mindspore-metrics) attribute.
```python
model = Model(net, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale, metrics={'acc'})
@@ -266,15 +266,15 @@ The accuracy optimization process is as follows:
#### On-Cloud Integration
-Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/use_on_the_cloud.html).
+Run your scripts on ModelArts. For details, see [Using MindSpore on Cloud](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/use_on_the_cloud.html).
### Inference Phase
-Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. Refer to the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/en/master/use/multi_platform_inference.html) for detailed steps.
+Models trained on the Ascend 910 AI processor can be used for inference on different hardware platforms. Refer to the [Multi-platform Inference Tutorial](https://www.mindspore.cn/tutorial/en/r0.7/use/multi_platform_inference.html) for detailed steps.
## Examples
-1. [Common dataset examples](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/loading_the_datasets.html)
+1. [Common dataset examples](https://www.mindspore.cn/tutorial/en/r0.7/use/data_preparation/loading_the_datasets.html)
-2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)
+2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/r0.7/model_zoo)
diff --git a/tutorials/source_en/advanced_use/nlp_application.md b/tutorials/source_en/advanced_use/nlp_application.md
index cd30dbe2e30aa377114f5877ee11ab87914c5d40..ac20c94f02a803f7ff33fcb26d4cb8cd3762f0ff 100644
--- a/tutorials/source_en/advanced_use/nlp_application.md
+++ b/tutorials/source_en/advanced_use/nlp_application.md
@@ -23,7 +23,7 @@
-
+
## Overview
@@ -88,7 +88,7 @@ Currently, MindSpore GPU and CPU supports SentimentNet network based on the long
Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture suited to processing and predicting important events separated by long intervals and delays in a time sequence. For details, refer to online documentation.
3. After the model is obtained, use the validation dataset to check the accuracy of the model.
-> The current sample is for the Ascend 910 AI processor. You can find the complete executable sample code at:
+> The current sample is for the Ascend 910 AI processor. You can find the complete executable sample code at:
> - `src/config.py`: some network configurations, including the batch size and the number of training epochs.
> - `src/dataset.py`: dataset-related definitions, including MindRecord file conversion and data preprocessing.
> - `src/imdb.py`: the utility class for parsing the IMDB dataset.
@@ -158,7 +158,7 @@ if args.preprocess == "true":
```
> After the conversion succeeds, you can find the `mindrecord` files under the directory `preprocess_path`. Usually, this operation does not need to be repeated as long as the dataset is unchanged.
-> You can find the complete definition of `convert_to_mindrecord` at:
+> You can find the complete definition of `convert_to_mindrecord` at:
> It consists of two steps:
>1. Process the text dataset, including encoding, word segmentation, alignment, and processing the original GloVe data to adapt to the network structure.
@@ -178,7 +178,7 @@ network = SentimentNet(vocab_size=embedding_table.shape[0],
weight=Tensor(embedding_table),
batch_size=cfg.batch_size)
```
-> You can find the complete definition of `SentimentNet` at:
+> You can find the complete definition of `SentimentNet` at:
### Pre-Training
@@ -217,7 +217,7 @@ else:
model.train(cfg.num_epochs, ds_train, callbacks=[time_cb, ckpoint_cb, loss_cb])
print("============== Training Success ==============")
```
-> You can find the complete definition of `lstm_create_dataset` at:
+> You can find the complete definition of `lstm_create_dataset` at:
### Validating the Model
diff --git a/tutorials/source_en/advanced_use/on_device_inference.md b/tutorials/source_en/advanced_use/on_device_inference.md
index fc4bc61442edf9e4066c4f1df87a5f3cc64f05d1..cd165fa5d937f115d01ee3e1e47451f34bdc7218 100644
--- a/tutorials/source_en/advanced_use/on_device_inference.md
+++ b/tutorials/source_en/advanced_use/on_device_inference.md
@@ -11,7 +11,7 @@
-
+
## Overview
@@ -62,7 +62,7 @@ The compilation procedure is as follows:
1. Download source code from the code repository.
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. Run the following command in the root directory of the source code to compile MindSpore Lite.
diff --git a/tutorials/source_en/advanced_use/parameter_server_training.md b/tutorials/source_en/advanced_use/parameter_server_training.md
index 880089e3e5a27e932cf11794685a34156022d4b2..e748b29ed3cb5093e7af5c7fd2c14abfb9adba39 100644
--- a/tutorials/source_en/advanced_use/parameter_server_training.md
+++ b/tutorials/source_en/advanced_use/parameter_server_training.md
@@ -14,7 +14,7 @@
-
+
## Overview
A parameter server is a widely used architecture in distributed training. Compared with the synchronous AllReduce training method, a parameter server has better flexibility, scalability, and node failover capabilities. Specifically, the parameter server supports both synchronous and asynchronous SGD training algorithms. In terms of scalability, model computing and update are separately deployed in the worker and server processes, so that resources of the worker and server can be independently scaled out and in horizontally. In addition, in an environment of a large-scale data center, various failures often occur in a computing device, a network, and a storage device, and consequently some nodes are abnormal. However, in an architecture of a parameter server, such a failure can be relatively easily handled without affecting a training job.
@@ -35,7 +35,7 @@ The following describes how to use parameter server to train LeNet on Ascend 910
### Training Script Preparation
-Learn how to train a LeNet using the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) by referring to .
+Learn how to train a LeNet using the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) by referring to .
### Parameter Setting
@@ -44,7 +44,7 @@ In this training mode, you can use either of the following methods to control wh
- Use `mindspore.nn.Cell.set_param_ps()` to set all weight recursions of `nn.Cell`.
- Use `mindspore.common.Parameter.set_param_ps()` to set the weight.
-On the basis of the [original training script](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/train.py), set all LeNet model weights to be trained on the parameter server:
+On the basis of the [original training script](https://gitee.com/mindspore/mindspore/blob/r0.7/model_zoo/official/cv/lenet/train.py), set all LeNet model weights to be trained on the parameter server:
```python
network = LeNet5(cfg.num_classes)
network.set_param_ps()
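If only specific weights should be trained through the parameter server, the per-parameter method listed above can be used instead of the whole-network call. A minimal sketch building on the snippet above (the attribute path `conv1.weight` is illustrative):

```python
# Alternative: mark a single weight for parameter server training
# instead of the whole network (`conv1` is an illustrative layer name).
network = LeNet5(cfg.num_classes)
network.conv1.weight.set_param_ps()
```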
diff --git a/tutorials/source_en/advanced_use/performance_profiling.md b/tutorials/source_en/advanced_use/performance_profiling.md
index 2ede33a3407d5613822c923b95cdd5baa24082b9..cb337104b7cac44176deee48c5870e5fd2e55e1c 100644
--- a/tutorials/source_en/advanced_use/performance_profiling.md
+++ b/tutorials/source_en/advanced_use/performance_profiling.md
@@ -18,7 +18,7 @@
-
+
## Overview
Performance data, such as operator execution time, is recorded in files and can be viewed on a web page; this helps the user optimize the performance of neural networks.
@@ -70,7 +70,7 @@ def test_profiler():
## Launch MindInsight
-For the MindInsight launch command, see [MindInsight Commands](https://www.mindspore.cn/tutorial/en/master/advanced_use/mindinsight_commands.html).
+For the MindInsight launch command, see [MindInsight Commands](https://www.mindspore.cn/tutorial/en/r0.7/advanced_use/mindinsight_commands.html).
### Performance Analysis
@@ -189,6 +189,6 @@ W/A/S/D can be applied to zoom in and out of the Timeline graph.
- To limit the amount of data generated by the Profiler, MindInsight suggests keeping the number of profiled steps below 10 for large neural networks.
  > For how to limit the step count, please refer to the data preparation tutorial:
- >
+ >
- Parsing the Timeline data is time-consuming, and data from only a few steps is usually enough for analysis. To speed up data parsing and UI display, the Profiler shows at most 20 MB of data (enough to contain information for 10+ steps of a large network).
diff --git a/tutorials/source_en/advanced_use/performance_profiling_gpu.md b/tutorials/source_en/advanced_use/performance_profiling_gpu.md
index 0bbe119e404d30ca4f605240089b3f2ebce72237..86fda8607854c7d693b7a3f5ea6c406cf6b0d404 100644
--- a/tutorials/source_en/advanced_use/performance_profiling_gpu.md
+++ b/tutorials/source_en/advanced_use/performance_profiling_gpu.md
@@ -14,7 +14,7 @@
-
+
## Overview
Performance data, such as operator execution time, is recorded in files and can be viewed on a web page; this helps the user optimize the performance of neural networks.
@@ -23,7 +23,7 @@ Performance data like operators' execution time is recorded in files and can be
> The GPU operation process is the same as that on the Ascend chip.
>
->
+>
## Preparing the Training Script
@@ -31,7 +31,7 @@ To enable the performance profiling of neural networks, MindSpore Profiler APIs
> The sample code is the same as that for the Ascend chip:
>
->
+>
Users can collect profiling data through a user-defined callback:
@@ -65,7 +65,7 @@ The code above is just a example. Users should implement callback by themselves.
## Launch MindInsight
-For the MindInsight launch command, see [MindInsight Commands](https://www.mindspore.cn/tutorial/en/master/advanced_use/mindinsight_commands.html).
+For the MindInsight launch command, see [MindInsight Commands](https://www.mindspore.cn/tutorial/en/r0.7/advanced_use/mindinsight_commands.html).
### Performance Analysis
diff --git a/tutorials/source_en/advanced_use/quantization_aware.md b/tutorials/source_en/advanced_use/quantization_aware.md
index 4ce552935947180891e8c1d9fadd4beb20eaf033..7285bd3ab91485350d11a285149c6c64e76fd8ed 100644
--- a/tutorials/source_en/advanced_use/quantization_aware.md
+++ b/tutorials/source_en/advanced_use/quantization_aware.md
@@ -20,7 +20,7 @@
-
+
## Background
@@ -51,7 +51,7 @@ Aware quantization training specifications
| Specification | Description |
| ------------- | ---------------------------------------- |
| Hardware | Supports hardware platforms based on the GPU or Ascend AI 910 processor. |
-| Network | Supports networks such as LeNet and ResNet50. For details, see . |
+| Network | Supports networks such as LeNet and ResNet50. For details, see . |
| Algorithm | Supports symmetric and asymmetric quantization algorithms in MindSpore fake quantization training. |
| Solution | Supports 4-, 7-, and 8-bit quantization solutions. |
@@ -76,7 +76,7 @@ Compared with common training, the quantization aware training requires addition
Next, the LeNet network is used as an example to describe steps 3 and 6.
-> You can obtain the complete executable sample code at .
+> You can obtain the complete executable sample code at .
### Defining a Fusion Network
@@ -175,7 +175,7 @@ The preceding describes the quantization aware training from scratch. A more com
2. Define a network.
3. Define a fusion network.
4. Define an optimizer and loss function.
- 5. Load a model file and retrain the model. Load an existing model file and retrain the model based on the fusion network to generate a fusion model. For details, see .
+ 5. Load a model file and retrain the model. Load an existing model file and retrain the model based on the fusion network to generate a fusion model. For details, see .
6. Generate a quantization network.
7. Perform quantization training.
@@ -183,7 +183,7 @@ The preceding describes the quantization aware training from scratch. A more com
The inference using a quantization model is the same as common model inference. The inference can be performed by directly using the checkpoint file or converting the checkpoint file into a common model format (such as ONNX or AIR).
-For details, see .
+For details, see .
- To use a checkpoint file obtained after quantization aware training for inference, perform the following steps:
diff --git a/tutorials/source_en/advanced_use/serving.md b/tutorials/source_en/advanced_use/serving.md
index a726e427cbdc9febb4b52c3892aa7770e87666de..40320639b37fe9862c58e905f8ebeaccbe83f3fc 100644
--- a/tutorials/source_en/advanced_use/serving.md
+++ b/tutorials/source_en/advanced_use/serving.md
@@ -16,7 +16,7 @@
- [REST API Client Sample](#rest-api-client-sample)
-
+
## Overview
@@ -49,7 +49,7 @@ The following uses a simple network as an example to describe how to use MindSpo
### Exporting Model
> Before exporting the model, you need to configure the MindSpore [base environment](https://www.mindspore.cn/install/en).
-Use [add_model.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/export_model/add_model.py) to build a network with only the Add operator and export the MindSpore inference deployment model.
+Use [add_model.py](https://gitee.com/mindspore/mindspore/blob/r0.7/serving/example/export_model/add_model.py) to build a network with only the Add operator and export the MindSpore inference deployment model.
```python
python add_model.py
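For reference, a hedged sketch of what such an export script might look like; the network class, input shapes, file name, and the MINDIR format below are assumptions for illustration and are not the exact contents of `add_model.py`:

```python
import numpy as np
import mindspore.nn as nn
import mindspore.ops.operations as P
from mindspore import Tensor, context
from mindspore.train.serialization import export

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

class AddNet(nn.Cell):
    """A network containing only the Add operator."""
    def __init__(self):
        super(AddNet, self).__init__()
        self.add = P.TensorAdd()

    def construct(self, x, y):
        return self.add(x, y)

x = Tensor(np.ones([2, 2]).astype(np.float32))
y = Tensor(np.ones([2, 2]).astype(np.float32))
# Export an inference model file for Serving (file name and format are illustrative).
export(AddNet(), x, y, file_name='tensor_add.mindir', file_format='MINDIR')
```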
@@ -66,7 +66,7 @@ If the server prints the `MS Serving Listening on 0.0.0.0:5500` log, the Serving
#### Python Client Sample
> Before running the client sample, add the path `/{your python path}/lib/python3.7/site-packages/mindspore/` to the environment variable `PYTHONPATH`.
-Obtain [ms_client.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/python_client/ms_client.py) and start the Python client.
+Obtain [ms_client.py](https://gitee.com/mindspore/mindspore/blob/r0.7/serving/example/python_client/ms_client.py) and start the Python client.
```bash
python ms_client.py
```
@@ -153,7 +153,7 @@ The client code consists of the following parts:
Status status = stub_->Predict(&context, request, &reply);
```
-For details about the complete code, see [ms_client](https://gitee.com/mindspore/mindspore/blob/master/serving/example/cpp_client/ms_client.cc).
+For details about the complete code, see [ms_client](https://gitee.com/mindspore/mindspore/blob/r0.7/serving/example/cpp_client/ms_client.cc).
### REST API Client Sample
1. Send data in the form of `data`:
diff --git a/tutorials/source_en/advanced_use/summary_record.md b/tutorials/source_en/advanced_use/summary_record.md
index 6ce5581ee586f4fc37a60efc981ec5b2f202d2d6..9706738c0244dc644839aa096ca495c3ef62cb7f 100644
--- a/tutorials/source_en/advanced_use/summary_record.md
+++ b/tutorials/source_en/advanced_use/summary_record.md
@@ -16,7 +16,7 @@
-
+
## Overview
@@ -127,10 +127,10 @@ model.eval(ds_eval, callbacks=[summary_collector])
In addition to the `SummaryCollector`, which automatically collects some summary data, MindSpore provides summary operators that enable custom collection of other data on the network, such as the input of each convolutional layer or the loss value in the loss function.
Summary operators currently supported:
-- [ScalarSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html?highlight=scalarsummary#mindspore.ops.operations.ScalarSummary): Record scalar data.
-- [TensorSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html?highlight=tensorsummary#mindspore.ops.operations.TensorSummary): Record tensor data.
-- [ImageSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html?highlight=imagesummary#mindspore.ops.operations.ImageSummary): Record image data.
-- [HistogramSummary](https://www.mindspore.cn/api/en/master/api/python/mindspore/mindspore.ops.operations.html?highlight=histogramsummar#mindspore.ops.operations.HistogramSummary): Convert tensor data into histogram data and record it.
+- [ScalarSummary](https://www.mindspore.cn/api/en/r0.7/api/python/mindspore/mindspore.ops.operations.html?highlight=scalarsummary#mindspore.ops.operations.ScalarSummary): Record scalar data.
+- [TensorSummary](https://www.mindspore.cn/api/en/r0.7/api/python/mindspore/mindspore.ops.operations.html?highlight=tensorsummary#mindspore.ops.operations.TensorSummary): Record tensor data.
+- [ImageSummary](https://www.mindspore.cn/api/en/r0.7/api/python/mindspore/mindspore.ops.operations.html?highlight=imagesummary#mindspore.ops.operations.ImageSummary): Record image data.
+- [HistogramSummary](https://www.mindspore.cn/api/en/r0.7/api/python/mindspore/mindspore.ops.operations.html?highlight=histogramsummar#mindspore.ops.operations.HistogramSummary): Convert tensor data into histogram data and record it.
The recording method is shown in the following steps.
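As a quick orientation before those steps, a minimal sketch of calling summary operators inside a custom cell; the loss-wrapper pattern, class name, and loss settings below are illustrative rather than the tutorial's exact code:

```python
import mindspore.nn as nn
import mindspore.ops.operations as P

class CrossEntropyWithSummary(nn.Cell):
    """Wrap a loss function and record its input and output with summary operators."""
    def __init__(self):
        super(CrossEntropyWithSummary, self).__init__()
        self.loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
        self.tensor_summary = P.TensorSummary()
        self.scalar_summary = P.ScalarSummary()

    def construct(self, logits, labels):
        # Record the raw logits tensor and the resulting scalar loss value.
        self.tensor_summary("logits", logits)
        loss = self.loss(logits, labels)
        self.scalar_summary("loss", loss)
        return loss
```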
@@ -328,7 +328,7 @@ Stop MindInsight command:
mindinsight stop
```
-For more parameter settings, see the [MindInsight related commands](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mindinsight_commands.html) page.
+For more parameter settings, see the [MindInsight related commands](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/mindinsight_commands.html) page.
## Notices
diff --git a/tutorials/source_en/quick_start/quick_start.md b/tutorials/source_en/quick_start/quick_start.md
index 4e37ef22432b2afc587d9ea73ff92c9b41d08e34..e9cab9fb0756e0f9c82edd59d5827193a4a40246 100644
--- a/tutorials/source_en/quick_start/quick_start.md
+++ b/tutorials/source_en/quick_start/quick_start.md
@@ -26,7 +26,7 @@
-
+
## Overview
@@ -40,7 +40,7 @@ During the practice, a simple image classification function is implemented. The
5. Load the saved model for inference.
6. Validate the model, load the test dataset and trained model, and validate the result accuracy.
-> You can find the complete executable sample code at .
+> You can find the complete executable sample code at .
This is a simple and basic application process. For other advanced and complex applications, extend this basic process as needed.
@@ -85,7 +85,7 @@ Currently, the `os` libraries are required. For ease of understanding, other req
import os
```
-For details about MindSpore modules, search on the [MindSpore API Page](https://www.mindspore.cn/api/en/master/index.html).
+For details about MindSpore modules, search on the [MindSpore API Page](https://www.mindspore.cn/api/en/r0.7/index.html).
### Configuring the Running Information
@@ -181,7 +181,7 @@ In the preceding information:
Perform the shuffle and batch operations, and then perform the repeat operation to ensure that data during an epoch is unique.
-> MindSpore supports multiple data processing and augmentation operations, which are usually combined. For details, see section "Data Processing and Augmentation" in the MindSpore Tutorials (https://www.mindspore.cn/tutorial/en/master/use/data_preparation/data_processing_and_augmentation.html).
+> MindSpore supports multiple data processing and augmentation operations, which are usually combined. For details, see section "Data Processing and Augmentation" in the MindSpore Tutorials (https://www.mindspore.cn/tutorial/en/r0.7/use/data_preparation/data_processing_and_augmentation.html).
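As an illustration of the shuffle, batch, and repeat chain described above, a minimal sketch; the dataset path, buffer size, and batch size are illustrative:

```python
import mindspore.dataset as ds

# Sketch: chain shuffle -> batch -> repeat so that data within an epoch stays unique.
mnist_ds = ds.MnistDataset("./MNIST_Data/train")
mnist_ds = mnist_ds.shuffle(buffer_size=10000)
mnist_ds = mnist_ds.batch(batch_size=32, drop_remainder=True)
mnist_ds = mnist_ds.repeat(count=1)
```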
## Defining the Network
diff --git a/tutorials/source_en/quick_start/quick_video.md b/tutorials/source_en/quick_start/quick_video.md
index 240a4e3aa65c75100f47bf5442ce5f1ae389ba67..3cbd604ef6304d1494162d0e7c975cfffdf47b56 100644
--- a/tutorials/source_en/quick_start/quick_video.md
+++ b/tutorials/source_en/quick_start/quick_video.md
@@ -10,7 +10,7 @@ Provides video tutorials from installation to try-on, helping you quickly use Mi
-
diff --git a/tutorials/source_en/quick_start/quick_video/customized_debugging.md b/tutorials/source_en/quick_start/quick_video/customized_debugging.md
index 94d0078ca6df2521d6214509257df6ab2676339d..d74a2695c6c24c3844520196a50bb165ea913fba 100644
--- a/tutorials/source_en/quick_start/quick_video/customized_debugging.md
+++ b/tutorials/source_en/quick_start/quick_video/customized_debugging.md
@@ -6,4 +6,4 @@
-**View the full tutorial**:
\ No newline at end of file
+**View the full tutorial**:
\ No newline at end of file
diff --git a/tutorials/source_en/quick_start/quick_video/mindInsight_dashboard.md b/tutorials/source_en/quick_start/quick_video/mindInsight_dashboard.md
index 9b3d33b5db358da1b58d69fc7d3775309d0f526f..47a0b2f341f76d461bc9a5da51a6fed4b84107bb 100644
--- a/tutorials/source_en/quick_start/quick_video/mindInsight_dashboard.md
+++ b/tutorials/source_en/quick_start/quick_video/mindInsight_dashboard.md
@@ -8,4 +8,4 @@
**Install now**:
-**See more**:
\ No newline at end of file
+**See more**:
\ No newline at end of file
diff --git a/tutorials/source_en/quick_start/quick_video/mindInsight_installation_and_common_commands.md b/tutorials/source_en/quick_start/quick_video/mindInsight_installation_and_common_commands.md
index 4f5fd939f30334d602558672bac4b91559e05ed3..9f015cc11673a57b0998e66709729c3bf35c7cf5 100644
--- a/tutorials/source_en/quick_start/quick_video/mindInsight_installation_and_common_commands.md
+++ b/tutorials/source_en/quick_start/quick_video/mindInsight_installation_and_common_commands.md
@@ -8,4 +8,4 @@
**Install now**:
-**More commands**:
\ No newline at end of file
+**More commands**:
\ No newline at end of file
diff --git a/tutorials/source_en/quick_start/quick_video/quick_start_video.md b/tutorials/source_en/quick_start/quick_video/quick_start_video.md
index eb4483b1f87c61abc1df0670ee17b5d6e9aa995d..d6a24b55351cacbb3c85c0c8c283059ae0575bd0 100644
--- a/tutorials/source_en/quick_start/quick_video/quick_start_video.md
+++ b/tutorials/source_en/quick_start/quick_video/quick_start_video.md
@@ -6,6 +6,6 @@
-**View code**:
+**View code**:
-**View the full tutorial**:
\ No newline at end of file
+**View the full tutorial**:
\ No newline at end of file
diff --git a/tutorials/source_en/quick_start/quick_video/saving_and_loading_model_parameters.md b/tutorials/source_en/quick_start/quick_video/saving_and_loading_model_parameters.md
index 12c9d36d2a243cf7d8399f01d1a4ddf66bc9afca..bb64d89fa2ca4472e2b68c82e5c43c8cc5650b11 100644
--- a/tutorials/source_en/quick_start/quick_video/saving_and_loading_model_parameters.md
+++ b/tutorials/source_en/quick_start/quick_video/saving_and_loading_model_parameters.md
@@ -6,4 +6,4 @@
-**View the full tutorial**:
\ No newline at end of file
+**View the full tutorial**:
\ No newline at end of file
diff --git a/tutorials/source_en/use/custom_operator.md b/tutorials/source_en/use/custom_operator.md
index cff9317dd13be9efd5d52200234cc4256eecf862..7e280e7bdad514165381653761be5f81cc4b73c8 100644
--- a/tutorials/source_en/use/custom_operator.md
+++ b/tutorials/source_en/use/custom_operator.md
@@ -16,7 +16,7 @@
-
+
## Overview
@@ -29,14 +29,14 @@ The related concepts are as follows:
- Operator implementation: describes the implementation of the internal computation logic for an operator through the DSL API provided by the Tensor Boost Engine (TBE). The TBE supports the development of custom operators based on the Ascend AI chip. You can apply for Open Beta Tests (OBTs) by visiting .
- Operator information: describes basic information about a TBE operator, such as the operator name and supported input and output types. It is the basis for the backend to select and map operators.
-This section takes a Square operator as an example to describe how to customize an operator. For details, see cases in [tests/st/ops/custom_ops_tbe](https://gitee.com/mindspore/mindspore/tree/master/tests/st/ops/custom_ops_tbe) in the MindSpore source code.
+This section takes a Square operator as an example to describe how to customize an operator. For details, see cases in [tests/st/ops/custom_ops_tbe](https://gitee.com/mindspore/mindspore/tree/r0.7/tests/st/ops/custom_ops_tbe) in the MindSpore source code.
## Registering the Operator Primitive
The primitive of an operator is a subclass inherited from `PrimitiveWithInfer`. The type name of the subclass is the operator name.
The definition of the custom operator primitive is the same as that of the built-in operator primitive.
-- The attribute is defined by the input parameter of the constructor function `__init__`. The operator in this test case has no attribute. Therefore, `__init__` has only one input parameter. For details about test cases in which operators have attributes, see [custom add3](https://gitee.com/mindspore/mindspore/tree/master/tests/st/ops/custom_ops_tbe/cus_add3.py) in the MindSpore source code.
+- The attribute is defined by the input parameter of the constructor function `__init__`. The operator in this test case has no attribute. Therefore, `__init__` has only one input parameter. For details about test cases in which operators have attributes, see [custom add3](https://gitee.com/mindspore/mindspore/tree/r0.7/tests/st/ops/custom_ops_tbe/cus_add3.py) in the MindSpore source code.
- The input and output names are defined by the `init_prim_io_names` function.
- The shape inference method of the output tensor is defined in the `infer_shape` function, and the dtype inference method of the output tensor is defined in the `infer_dtype` function.
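A condensed sketch of such a primitive registration, modeled on the Square operator described above (the method bodies are simplified for illustration):

```python
from mindspore.ops import prim_attr_register, PrimitiveWithInfer

class CusSquare(PrimitiveWithInfer):
    """Custom Square operator: y = x * x (sketch)."""

    @prim_attr_register
    def __init__(self):
        # The operator has no attributes, so __init__ only declares the I/O names.
        self.init_prim_io_names(inputs=['x'], outputs=['y'])

    def infer_shape(self, data_shape):
        # The output shape equals the input shape.
        return data_shape

    def infer_dtype(self, data_dtype):
        # The output dtype equals the input dtype.
        return data_dtype
```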
diff --git a/tutorials/source_en/use/data_preparation/converting_datasets.md b/tutorials/source_en/use/data_preparation/converting_datasets.md
index fd502276d5da25b02bf5ef7a7569e247fbe21155..df9ff04891cfcb85051d404c9999ebe33f4f6f59 100644
--- a/tutorials/source_en/use/data_preparation/converting_datasets.md
+++ b/tutorials/source_en/use/data_preparation/converting_datasets.md
@@ -16,7 +16,7 @@
-
+
## Overview
diff --git a/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md b/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
index 47b041273854d09d0b39a10e1609907f1d88076e..ea379d5cb9a7e1e010f285632ab0567775149aa2 100644
--- a/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
+++ b/tutorials/source_en/use/data_preparation/data_processing_and_augmentation.md
@@ -18,7 +18,7 @@
-
+
## Overview
diff --git a/tutorials/source_en/use/data_preparation/loading_the_datasets.md b/tutorials/source_en/use/data_preparation/loading_the_datasets.md
index 144ab6c4a0ed3e2a787d2f3419fd5de79a293b88..3fcf54d0613e5953f2fdf128f0610caa68357a6e 100644
--- a/tutorials/source_en/use/data_preparation/loading_the_datasets.md
+++ b/tutorials/source_en/use/data_preparation/loading_the_datasets.md
@@ -15,7 +15,7 @@
-
+
## Overview
@@ -152,7 +152,7 @@ MindSpore can also read datasets in the `TFRecord` data format through the `TFRe
## Loading a Custom Dataset
In real scenarios, there are various kinds of datasets. For a custom dataset, or a dataset that can't be loaded directly through the APIs, there are two ways.
-One is to convert the dataset to the MindSpore data format (for details, see [Converting Datasets to the Mindspore Data Format](https://www.mindspore.cn/tutorial/en/master/use/data_preparation/converting_datasets.html)); the other is to use the `GeneratorDataset` object.
+One is to convert the dataset to the MindSpore data format (for details, see [Converting Datasets to the Mindspore Data Format](https://www.mindspore.cn/tutorial/en/r0.7/use/data_preparation/converting_datasets.html)); the other is to use the `GeneratorDataset` object.
The following shows how to use `GeneratorDataset`.
1. Define an iterable object to generate a dataset. Two examples follow: a customized function that contains `yield`, and a customized class that implements `__getitem__`.
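For orientation, a compressed sketch of the `yield`-based variant; the column name and values are illustrative, and the tutorial's own examples follow the same pattern:

```python
import numpy as np
import mindspore.dataset as ds

def generator_func():
    # Each yield returns one row as a tuple of NumPy arrays.
    for i in range(5):
        yield (np.array([i], dtype=np.int32),)

dataset = ds.GeneratorDataset(generator_func, column_names=["data"])
for item in dataset.create_dict_iterator():
    print(item["data"])
```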
diff --git a/tutorials/source_en/use/defining_the_network.rst b/tutorials/source_en/use/defining_the_network.rst
index a530f34dbdf92434d8727839433770e23124cfbd..16fc6fb81b43f7f260f633b7463f61c9fa31c732 100644
--- a/tutorials/source_en/use/defining_the_network.rst
+++ b/tutorials/source_en/use/defining_the_network.rst
@@ -4,5 +4,5 @@ Defining the Network
.. toctree::
:maxdepth: 1
- Network List
+ Network List
custom_operator
\ No newline at end of file
diff --git a/tutorials/source_en/use/multi_platform_inference.md b/tutorials/source_en/use/multi_platform_inference.md
index fb6b5390094717cb7d794137093b5e2f0264bb93..a86e204d69eda4f0f31c3af315ecedcd2f6c6f41 100644
--- a/tutorials/source_en/use/multi_platform_inference.md
+++ b/tutorials/source_en/use/multi_platform_inference.md
@@ -20,7 +20,7 @@
-
+
## Overview
@@ -77,8 +77,8 @@ MindSpore supports the following inference scenarios based on the hardware platf
print("============== {} ==============".format(acc))
```
In the preceding information:
- `model.eval` is an API for model validation. For details about the API, see .
- > Inference sample code: .
+ `model.eval` is an API for model validation. For details about the API, see .
+ > Inference sample code: .
1.2 Remote Storage
@@ -101,14 +101,14 @@ MindSpore supports the following inference scenarios based on the hardware platf
```
In the preceding information:
- `hub.load_weights` is an API for loading model parameters. Please check the details in .
+ `hub.load_weights` is an API for loading model parameters. Please check the details in .
2. Use the `model.predict` API to perform inference.
```python
model.predict(input_data)
```
In the preceding information:
- `model.predict` is an API for inference. For details about the API, see .
+ `model.predict` is an API for inference. For details about the API, see .
## Inference on the Ascend 310 AI processor
@@ -116,7 +116,7 @@ MindSpore supports the following inference scenarios based on the hardware platf
The Ascend 310 AI processor is equipped with the ACL framework and supports models in the OM format, which must be converted from a model in ONNX or AIR format. For inference on the Ascend 310 AI processor, perform the following steps:
-1. Generate a model in ONNX or AIR format on the training platform. For details, see [Export AIR Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-air-model) and [Export ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-onnx-model).
+1. Generate a model in ONNX or AIR format on the training platform. For details, see [Export AIR Model](https://www.mindspore.cn/tutorial/en/r0.7/use/saving_and_loading_model_parameters.html#export-air-model) and [Export ONNX Model](https://www.mindspore.cn/tutorial/en/r0.7/use/saving_and_loading_model_parameters.html#export-onnx-model).
2. Convert the ONNX or AIR model file into an OM model file and perform inference.
- To perform inference in the cloud environment (ModelArts), see the [Ascend 910 training and Ascend 310 inference samples](https://support.huaweicloud.com/bestpractice-modelarts/modelarts_10_0026.html).
@@ -130,7 +130,7 @@ The inference is the same as that on the Ascend 910 AI processor.
### Inference Using an ONNX File
-1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-onnx-model).
+1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/en/r0.7/use/saving_and_loading_model_parameters.html#export-onnx-model).
2. Perform inference on a GPU by referring to the runtime or SDK document. For example, use TensorRT to perform inference on the NVIDIA GPU. For details, see [TensorRT backend for ONNX](https://github.com/onnx/onnx-tensorrt).
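For step 1 above, a minimal export sketch; the network object `net`, the input shape, and the file name are assumptions for illustration:

```python
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export

# `net` is assumed to be the trained network from the previous sections.
input_data = Tensor(np.ones([1, 3, 224, 224]).astype(np.float32))
export(net, input_data, file_name='model.onnx', file_format='ONNX')
```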
@@ -142,10 +142,10 @@ The inference is the same as that on the Ascend 910 AI processor.
### Inference Using an ONNX File
Similar to the inference on a GPU, the following steps are required:
-1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-onnx-model).
+1. Generate a model in ONNX format on the training platform. For details, see [Export ONNX Model](https://www.mindspore.cn/tutorial/en/r0.7/use/saving_and_loading_model_parameters.html#export-onnx-model).
2. Perform inference on a CPU by referring to the runtime or SDK document. For details about how to use the ONNX Runtime, see the [ONNX Runtime document](https://github.com/microsoft/onnxruntime).
## On-Device Inference
-MindSpore Lite is an inference engine for on-device inference. For details, see [Export MINDIR Model](https://www.mindspore.cn/tutorial/en/master/use/saving_and_loading_model_parameters.html#export-mindir-model) and [On-Device Inference](https://www.mindspore.cn/tutorial/en/master/advanced_use/on_device_inference.html).
+MindSpore Lite is an inference engine for on-device inference. For details, see [Export MINDIR Model](https://www.mindspore.cn/tutorial/en/r0.7/use/saving_and_loading_model_parameters.html#export-mindir-model) and [On-Device Inference](https://www.mindspore.cn/tutorial/en/r0.7/advanced_use/on_device_inference.html).
diff --git a/tutorials/source_en/use/saving_and_loading_model_parameters.md b/tutorials/source_en/use/saving_and_loading_model_parameters.md
index 2cd12c3bb6c74d402cec1127e594d441867b507d..026bcdc6a7de065632a0d9ae37480bf3db8fbf7c 100644
--- a/tutorials/source_en/use/saving_and_loading_model_parameters.md
+++ b/tutorials/source_en/use/saving_and_loading_model_parameters.md
@@ -18,7 +18,7 @@
-
+
## Overview
diff --git a/tutorials/source_zh_cn/advanced_use/bert_poetry.md b/tutorials/source_zh_cn/advanced_use/bert_poetry.md
index e4fc7f37ca8902a1b3d9a24647a95dc4c9cd08f8..583271a1101e60a075564a4b4581c9bad5622600 100644
--- a/tutorials/source_zh_cn/advanced_use/bert_poetry.md
+++ b/tutorials/source_zh_cn/advanced_use/bert_poetry.md
@@ -21,7 +21,7 @@
- [参考资料](#参考资料)
-
+
五千年历史孕育了深厚的中华文化,而诗词是中华文化不可或缺的一部分,欣赏过诗词就可以感受到当中纯净、辽阔的意境,极致的感性,恰恰弥补了节奏紧迫的现代生活带给我们的拥挤感、浮躁感,古语曰:熟读唐诗三百首,不会作诗也会吟,今天理科生MindSpore也来秀一秀文艺范儿!
@@ -123,9 +123,9 @@ pip install bert4keras
pip install bottle
```
-数据集为43030首诗词:可[下载](https://github.com/AaronJny/DeepLearningExamples/tree/master/keras-bert-poetry-generator)其中的`poetry.txt`。
+数据集为43030首诗词:可[下载](https://github.com/AaronJny/DeepLearningExamples/tree/r0.7/keras-bert-poetry-generator)其中的`poetry.txt`。
-BERT-Base模型的预训练ckpt:可在[MindSpore官网](https://www.mindspore.cn/docs/zh-CN/master/network_list.html)下载。
+BERT-Base模型的预训练ckpt:可在[MindSpore官网](https://www.mindspore.cn/docs/zh-CN/r0.7/network_list.html)下载。
### 训练
diff --git a/tutorials/source_zh_cn/advanced_use/checkpoint_for_hybrid_parallel.md b/tutorials/source_zh_cn/advanced_use/checkpoint_for_hybrid_parallel.md
index b19a874640f52b1ac06732b507f7e1f3c8bb3c0d..65f1006977d2ca543d70cb2ca2bdcfbbfb59eac5 100644
--- a/tutorials/source_zh_cn/advanced_use/checkpoint_for_hybrid_parallel.md
+++ b/tutorials/source_zh_cn/advanced_use/checkpoint_for_hybrid_parallel.md
@@ -28,7 +28,7 @@
-
+
## 概述
@@ -263,7 +263,7 @@ load_param_into_net(opt, param_dict)
3. 执行阶段2训练:阶段2为2卡训练环境,每卡上MatMul算子weight的shape为[4, 8],从合并后的CheckPoint文件加载初始化模型参数数据,之后执行训练。
-> 具体分布式环境配置和训练部分代码,此处不做详细说明,可以参考[分布式并行训练](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/distributed_training_ascend.html)
+> 具体分布式环境配置和训练部分代码,此处不做详细说明,可以参考[分布式并行训练](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/distributed_training_ascend.html)
章节。
>
> 本文档附上对CheckPoint文件做合并处理以及分布式训练前加载CheckPoint文件的示例代码,仅作为参考,实际请参考具体情况实现。
diff --git a/tutorials/source_zh_cn/advanced_use/computer_vision_application.md b/tutorials/source_zh_cn/advanced_use/computer_vision_application.md
index a2d6143cd2468b4dd7246146575e40acc8539d03..d89ec59161e763250a7b5e9fc05e5ae681d9d78a 100644
--- a/tutorials/source_zh_cn/advanced_use/computer_vision_application.md
+++ b/tutorials/source_zh_cn/advanced_use/computer_vision_application.md
@@ -18,8 +18,8 @@
-
-
+
+
## 概述
@@ -39,7 +39,7 @@ def classify(image):
选择合适的model是关键。这里的model一般指的是深度卷积神经网络,如AlexNet、VGG、GoogLeNet、ResNet等等。
-MindSpore实现了典型的卷积神经网络,开发者可以参考[model_zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official)。
+MindSpore实现了典型的卷积神经网络,开发者可以参考[model_zoo](https://gitee.com/mindspore/mindspore/tree/r0.7/model_zoo/official)。
MindSpore当前支持的图像分类网络包括:典型网络LeNet、AlexNet、ResNet。
@@ -63,7 +63,7 @@ MindSpore当前支持的图像分类网络包括:典型网络LeNet、AlexNet
6. 加载保存的模型进行推理
-> 本例面向Ascend 910 AI处理器硬件平台,你可以在这里下载完整的样例代码:
+> 本例面向Ascend 910 AI处理器硬件平台,你可以在这里下载完整的样例代码:
下面对任务流程中各个环节及代码关键片段进行解释说明。
@@ -148,7 +148,7 @@ tar -zvxf cifar-10-binary.tar.gz
ResNet通常是较好的选择。首先,它足够深,常见的有34层,50层,101层。通常层次越深,表征能力越强,分类准确率越高。其次,可学习,采用了残差结构,通过shortcut连接把低层直接跟高层相连,解决了反向传播过程中因为网络太深造成的梯度消失问题。此外,ResNet网络的性能很好,既表现为识别的准确率,也包括它本身模型的大小和参数量。
-MindSpore Model Zoo中已经实现了ResNet模型,可以采用[ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py)。调用方法如下:
+MindSpore Model Zoo中已经实现了ResNet模型,可以采用[ResNet-50](https://gitee.com/mindspore/mindspore/blob/r0.7/model_zoo/official/cv/resnet/src/resnet.py)。调用方法如下:
```python
network = resnet50(class_num=10)
diff --git a/tutorials/source_zh_cn/advanced_use/customized_debugging_information.md b/tutorials/source_zh_cn/advanced_use/customized_debugging_information.md
index b4d6da9a82d15848681dd01e67121b43be37afe6..620a143d7b66534c21fa701dfa144aa64975fc70 100644
--- a/tutorials/source_zh_cn/advanced_use/customized_debugging_information.md
+++ b/tutorials/source_zh_cn/advanced_use/customized_debugging_information.md
@@ -16,9 +16,9 @@
-
+
-
+
## 概述
@@ -267,7 +267,7 @@ val:[[1 1]
在Ascend环境上执行训练,当训练结果和预期有偏差时,可以通过异步数据Dump功能保存算子的输入输出进行调试。
-> 异步数据Dump不支持`comm_ops`类别的算子,算子类别详见[算子支持列表](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html)。
+> 异步数据Dump不支持`comm_ops`类别的算子,算子类别详见[算子支持列表](https://www.mindspore.cn/docs/zh-CN/r0.7/operator_list.html)。
1. 开启IR保存开关: `context.set_context(save_graphs=True)`。
2. 执行网络脚本。
diff --git a/tutorials/source_zh_cn/advanced_use/dashboard.md b/tutorials/source_zh_cn/advanced_use/dashboard.md
index 817e994f35e0c3afe1e4f05bd34a0eee9254b8f2..e14e420e6ce4da67b2cad3b1f8801fbb558c5663 100644
--- a/tutorials/source_zh_cn/advanced_use/dashboard.md
+++ b/tutorials/source_zh_cn/advanced_use/dashboard.md
@@ -16,8 +16,8 @@
-
-
+
+
## 概述
diff --git a/tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md b/tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md
index a8c87f9ba8f6df44d9f5c4193b4d2b14ba1db147..27a1da39b73531daceaab23d633b1440f0e88efa 100644
--- a/tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md
+++ b/tutorials/source_zh_cn/advanced_use/debugging_in_pynative_mode.md
@@ -13,9 +13,9 @@
-
+
-
+
## 概述
diff --git a/tutorials/source_zh_cn/advanced_use/differential_privacy.md b/tutorials/source_zh_cn/advanced_use/differential_privacy.md
index 0f09b27154658ff26c1f03bb73613eb28f4f7ca3..73bbcbf43e12609ea54a519b89b67faa53889ec9 100644
--- a/tutorials/source_zh_cn/advanced_use/differential_privacy.md
+++ b/tutorials/source_zh_cn/advanced_use/differential_privacy.md
@@ -16,7 +16,7 @@
-
+
## 概述
@@ -45,7 +45,7 @@ MindArmour的差分隐私模块Differential-Privacy,实现了差分隐私优
这里以LeNet模型,MNIST 数据集为例,说明如何在MindSpore上使用差分隐私优化器训练神经网络模型。
-> 本例面向Ascend 910 AI处理器,你可以在这里下载完整的样例代码:
+> 本例面向Ascend 910 AI处理器,你可以在这里下载完整的样例代码:
## 实现阶段
@@ -85,7 +85,7 @@ TAG = 'Lenet5_train'
### 参数配置
-1. 设置运行环境、数据集路径、模型训练参数、checkpoint存储参数、差分隐私参数,`data_path`数据路径替换成你的数据集所在路径。更多配置可以参考。
+1. 设置运行环境、数据集路径、模型训练参数、checkpoint存储参数、差分隐私参数,`data_path`数据路径替换成你的数据集所在路径。更多配置可以参考。
```python
cfg = edict({
diff --git a/tutorials/source_zh_cn/advanced_use/distributed_training_ascend.md b/tutorials/source_zh_cn/advanced_use/distributed_training_ascend.md
index d02d36a460a235f4c29c86c93503a6957e7f60ca..ecf984a971e3069053872ed10b95a05b917c67e3 100644
--- a/tutorials/source_zh_cn/advanced_use/distributed_training_ascend.md
+++ b/tutorials/source_zh_cn/advanced_use/distributed_training_ascend.md
@@ -20,14 +20,14 @@
-
+
## 概述
本篇教程我们主要讲解,如何在Ascend 910 AI处理器硬件平台上,利用MindSpore通过数据并行及自动并行模式训练ResNet-50网络。
> 你可以在这里下载完整的样例代码:
>
->
+>
## 准备环节
@@ -159,7 +159,7 @@ def create_dataset(data_path, repeat_num=1, batch_size=32, rank_id=0, rank_size=
## 定义网络
-数据并行及自动并行模式下,网络定义方式与单机一致。代码请参考:
+数据并行及自动并行模式下,网络定义方式与单机一致。代码请参考:
## 定义损失函数及优化器
diff --git a/tutorials/source_zh_cn/advanced_use/distributed_training_gpu.md b/tutorials/source_zh_cn/advanced_use/distributed_training_gpu.md
index 93318259a8b25586644223870241002283f9f629..01e807fb81617b16d43a1e1e1524374765bc616b 100644
--- a/tutorials/source_zh_cn/advanced_use/distributed_training_gpu.md
+++ b/tutorials/source_zh_cn/advanced_use/distributed_training_gpu.md
@@ -17,7 +17,7 @@
-
+
## 概述
@@ -31,7 +31,7 @@
> 数据集的下载和加载方式参考:
>
-> 。
+> 。
### 配置分布式环境
@@ -83,7 +83,7 @@ if __name__ == "__main__":
在GPU硬件平台上,网络的定义和Ascend 910 AI处理器一致。
-> 网络、优化器、损失函数的定义参考:。
+> 网络、优化器、损失函数的定义参考:。
## 运行脚本
@@ -91,7 +91,7 @@ if __name__ == "__main__":
> 你可以在这里找到样例的运行脚本:
>
-> 。
+> 。
>
> 如果通过root用户执行脚本,`mpirun`需要加上`--allow-run-as-root`参数。
diff --git a/tutorials/source_zh_cn/advanced_use/fuzzer.md b/tutorials/source_zh_cn/advanced_use/fuzzer.md
index 9ab97020ace0c7433fcc06a82a7c01da091415f8..a188101bb7d7c0365ec0dde1c9f3673fdcb9e2e2 100644
--- a/tutorials/source_zh_cn/advanced_use/fuzzer.md
+++ b/tutorials/source_zh_cn/advanced_use/fuzzer.md
@@ -12,7 +12,7 @@
- [运用Fuzzer](#运用Fuzzer)
-
+
## 概述
@@ -22,7 +22,7 @@ MindArmour的Fuzzer模块以神经元覆盖率作为测试评价准则。神经
这里以LeNet模型,MNIST数据集为例,说明如何使用Fuzzer。
-> 本例面向CPU、GPU、Ascend 910 AI处理器,你可以在这里下载完整的样例代码:
+> 本例面向CPU、GPU、Ascend 910 AI处理器,你可以在这里下载完整的样例代码:
## 实现阶段
@@ -60,7 +60,7 @@ context.set_context(mode=context.GRAPH_MODE, device_target=cfg.device_target)
### 运用Fuzzer
-1. 建立LeNet模型,加载MNIST数据集,操作同[模型安全]()
+1. 建立LeNet模型,加载MNIST数据集,操作同[模型安全]()
```python
...
@@ -101,7 +101,7 @@ context.set_context(mode=context.GRAPH_MODE, device_target=cfg.device_target)
数据变异方法一定要包含基于图像像素值变化的方法。
- 前两种图像变化方法的可配置参数,以及推荐参数范围请参考:对应的类方法,也可以均设置为`'auto_param': True`,变异参数将在推荐范围内随机生成。
+ 前两种图像变化方法的可配置参数,以及推荐参数范围请参考:对应的类方法,也可以均设置为`'auto_param': True`,变异参数将在推荐范围内随机生成。
基于对抗攻击方法的参数配置请参考对应的攻击方法类。
diff --git a/tutorials/source_zh_cn/advanced_use/gradient_accumulation.md b/tutorials/source_zh_cn/advanced_use/gradient_accumulation.md
index 5ea2290e4647d865fdef6ddaf50032efb22a5fe8..e30cdea257f7a99cd1ae8d2b59f1dfc54c055e2c 100644
--- a/tutorials/source_zh_cn/advanced_use/gradient_accumulation.md
+++ b/tutorials/source_zh_cn/advanced_use/gradient_accumulation.md
@@ -18,7 +18,7 @@
-
+
## 概述
@@ -30,7 +30,7 @@
最终目的是为了达到跟直接用N*Mini-batch数据训练几乎同样的效果。
-> 本教程用于GPU、Ascend 910 AI处理器, 你可以在这里下载主要的训练样例代码:
+> 本教程用于GPU、Ascend 910 AI处理器, 你可以在这里下载主要的训练样例代码:
## 创建梯度累积模型
@@ -59,11 +59,11 @@ from model_zoo.official.cv.lenet.src.lenet import LeNet5
### 加载数据集
-利用MindSpore的dataset提供的`MnistDataset`接口加载MNIST数据集,此部分代码由model_zoo中lenet目录下的[dataset.py]()导入。
+利用MindSpore的dataset提供的`MnistDataset`接口加载MNIST数据集,此部分代码由model_zoo中lenet目录下的[dataset.py]()导入。
### 定义网络
-这里以LeNet网络为例进行介绍,当然也可以使用其它的网络,如ResNet-50、BERT等, 此部分代码由model_zoo中lenet目录下的[lenet.py]()导入。
+这里以LeNet网络为例进行介绍,当然也可以使用其它的网络,如ResNet-50、BERT等, 此部分代码由model_zoo中lenet目录下的[lenet.py]()导入。
### 定义训练模型
将训练流程拆分为正向反向训练、参数更新和累积梯度清理三个部分:
@@ -253,7 +253,7 @@ if __name__ == "__main__":
**验证模型**
-通过model_zoo中lenet目录下的[eval.py](),使用保存的CheckPoint文件,加载验证数据集,进行验证。
+通过model_zoo中lenet目录下的[eval.py](),使用保存的CheckPoint文件,加载验证数据集,进行验证。
```shell
$ python eval.py --data_path=./MNIST_Data --ckpt_path=./gradient_accumulation.ckpt
diff --git a/tutorials/source_zh_cn/advanced_use/graph_kernel_fusion.md b/tutorials/source_zh_cn/advanced_use/graph_kernel_fusion.md
index c6a5a8ac65f8c5f4eac213ddba5ba59ed0e453a1..bc0468f8326558987b5d4f896a70ac304c6cbf58 100644
--- a/tutorials/source_zh_cn/advanced_use/graph_kernel_fusion.md
+++ b/tutorials/source_zh_cn/advanced_use/graph_kernel_fusion.md
@@ -14,7 +14,7 @@
-
+
## 概述
@@ -101,7 +101,7 @@ context.set_context(enable_graph_kernel=True)
2. `BERT-large`训练网络
以`BERT-large`网络的训练模型为例,数据集和训练脚本可参照
- ,同样我们只需修改`context`参数即可。
+ ,同样我们只需修改`context`参数即可。
## 效果评估
diff --git a/tutorials/source_zh_cn/advanced_use/hardware_resources.md b/tutorials/source_zh_cn/advanced_use/hardware_resources.md
index 406db6384677c418193258a498b005f84bb33553..761cca2dd586dbe4be0efd0b369c43ab78927174 100644
--- a/tutorials/source_zh_cn/advanced_use/hardware_resources.md
+++ b/tutorials/source_zh_cn/advanced_use/hardware_resources.md
@@ -11,11 +11,11 @@
-
+
## 概述
-用户可查看昇腾AI处理器、CPU、内存等系统指标,从而分配适当的资源进行训练。直接[启动MindInsight](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mindinsight_commands.html#id3),点击导航栏的“硬件资源”即可查看。
+用户可查看昇腾AI处理器、CPU、内存等系统指标,从而分配适当的资源进行训练。直接[启动MindInsight](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/mindinsight_commands.html#id3),点击导航栏的“硬件资源”即可查看。
## 昇腾AI处理器看板
diff --git a/tutorials/source_zh_cn/advanced_use/host_device_training.md b/tutorials/source_zh_cn/advanced_use/host_device_training.md
index 6ef3c35f91d6373ed5dbf1c30706ffc359fc5d69..8bb7ce4bd961a206819e681cb9ce8c15b3b5d321 100644
--- a/tutorials/source_zh_cn/advanced_use/host_device_training.md
+++ b/tutorials/source_zh_cn/advanced_use/host_device_training.md
@@ -12,17 +12,17 @@
-
+
## 概述
在深度学习中,工作人员时常会遇到超大模型的训练问题,即模型参数所占内存超过了设备内存上限。为高效地训练超大模型,一种方案便是分布式并行训练,也就是将工作交由同构的多个加速器(如Ascend 910 AI处理器,GPU等)共同完成。但是这种方式在面对几百GB甚至几TB级别的模型时,所需的加速器过多。而当从业者实际难以获取大规模集群时,这种方式难以应用。另一种可行的方案是使用主机端(Host)和加速器(Device)的混合训练模式。此方案同时发挥了主机端内存大和加速器端计算快的优势,是一种解决超大模型训练较有效的方式。
-在MindSpore中,用户可以将待训练的参数放在主机,同时将必要算子的执行位置配置为主机,其余算子的执行位置配置为加速器,从而方便地实现混合训练。此教程以推荐模型[Wide&Deep](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/recommend/wide_and_deep)为例,讲解MindSpore在主机和Ascend 910 AI处理器的混合训练。
+在MindSpore中,用户可以将待训练的参数放在主机,同时将必要算子的执行位置配置为主机,其余算子的执行位置配置为加速器,从而方便地实现混合训练。此教程以推荐模型[Wide&Deep](https://gitee.com/mindspore/mindspore/tree/r0.7/model_zoo/official/recommend/wide_and_deep)为例,讲解MindSpore在主机和Ascend 910 AI处理器的混合训练。
## 准备工作
-1. 准备模型代码。Wide&Deep的代码可参见:,其中,`train_and_eval_auto_parallel.py`为训练的主函数所在,`src/`目录中包含Wide&Deep模型的定义、数据处理和配置信息等,`script/`目录中包含不同配置下的训练脚本。
+1. 准备模型代码。Wide&Deep的代码可参见:,其中,`train_and_eval_auto_parallel.py`为训练的主函数所在,`src/`目录中包含Wide&Deep模型的定义、数据处理和配置信息等,`script/`目录中包含不同配置下的训练脚本。
2. 准备数据集。数据集下载链接:。利用脚本`src/preprocess_data.py`将数据集转换为MindRecord格式。
diff --git a/tutorials/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md b/tutorials/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md
index fae22dbbcf27c6e5ce5b12b62eb4c74aaeee0cba..af7ab7e19af59b9dafa5bc1b9bc13699254f4949 100644
--- a/tutorials/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md
+++ b/tutorials/source_zh_cn/advanced_use/lineage_and_scalars_comparision.md
@@ -13,8 +13,8 @@
-
-
+
+
## 概述
diff --git a/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md b/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md
index 4f685599e0c553b7fe47e3359cac8333d8da0e7b..a05388b89da744e76eae749df0c479fc5943392e 100644
--- a/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md
+++ b/tutorials/source_zh_cn/advanced_use/mindinsight_commands.md
@@ -13,7 +13,7 @@
-
+
## 查看命令帮助信息
diff --git a/tutorials/source_zh_cn/advanced_use/mixed_precision.md b/tutorials/source_zh_cn/advanced_use/mixed_precision.md
index c59cdca7c3b61eb87253d4e663257f9a770ea9a3..5ba9803713ab97dbe7652e5228b0812b7eae7098 100644
--- a/tutorials/source_zh_cn/advanced_use/mixed_precision.md
+++ b/tutorials/source_zh_cn/advanced_use/mixed_precision.md
@@ -12,8 +12,8 @@
-
-
+
+
## 概述
diff --git a/tutorials/source_zh_cn/advanced_use/model_security.md b/tutorials/source_zh_cn/advanced_use/model_security.md
index d77b577bcfc773e048a623885fb91b00f8511be3..6d415208ceae5b6c5a83c6650d60a723564a727d 100644
--- a/tutorials/source_zh_cn/advanced_use/model_security.md
+++ b/tutorials/source_zh_cn/advanced_use/model_security.md
@@ -17,8 +17,8 @@
-
-
+
+
## 概述
@@ -31,7 +31,7 @@ AI算法设计之初普遍未考虑相关的安全威胁,使得AI算法的判
这里通过图像分类任务上的对抗性攻防,以攻击算法FGSM和防御算法NAD为例,介绍MindArmour在对抗攻防上的使用方法。
-> 本例面向CPU、GPU、Ascend 910 AI处理器,你可以在这里下载完整的样例代码:
+> 本例面向CPU、GPU、Ascend 910 AI处理器,你可以在这里下载完整的样例代码:
> - `mnist_attack_fgsm.py`:包含攻击代码。
> - `mnist_defense_nad.py`:包含防御代码。
diff --git a/tutorials/source_zh_cn/advanced_use/network_migration.md b/tutorials/source_zh_cn/advanced_use/network_migration.md
index 0ffe5055850f858a1735168e56cd844cee110ecc..0948c9e37bd14cb9c2e67a7fdfc6f62b2d2afc9b 100644
--- a/tutorials/source_zh_cn/advanced_use/network_migration.md
+++ b/tutorials/source_zh_cn/advanced_use/network_migration.md
@@ -19,7 +19,7 @@
-
+
## 概述
@@ -31,9 +31,9 @@
### 算子评估
-分析待迁移的网络中所包含的算子,结合[MindSpore算子支持列表](https://www.mindspore.cn/docs/zh-CN/master/operator_list.html),梳理出MindSpore对这些算子的支持程度。
+分析待迁移的网络中所包含的算子,结合[MindSpore算子支持列表](https://www.mindspore.cn/docs/zh-CN/r0.7/operator_list.html),梳理出MindSpore对这些算子的支持程度。
-以ResNet-50为例,[Conv](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d)和[BatchNorm](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d)是其中最主要的两个算子,它们已在MindSpore支持的算子列表中。
+以ResNet-50为例,[Conv](https://www.mindspore.cn/api/zh-CN/r0.7/api/python/mindspore/mindspore.nn.html#mindspore.nn.Conv2d)和[BatchNorm](https://www.mindspore.cn/api/zh-CN/r0.7/api/python/mindspore/mindspore.nn.html#mindspore.nn.BatchNorm2d)是其中最主要的两个算子,它们已在MindSpore支持的算子列表中。
如果发现没有对应算子,建议:
- 使用其他算子替换:分析算子实现公式,审视是否可以采用MindSpore现有算子叠加达到预期目标。
@@ -57,17 +57,17 @@
MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差别,迁移前需要对原脚本有较为清晰的了解,明确地知道每一层的shape等信息。
-> 你也可以使用[MindConverter工具](https://gitee.com/mindspore/mindinsight/tree/master/mindinsight/mindconverter)实现PyTorch网络定义脚本到MindSpore网络定义脚本的自动转换。
+> 你也可以使用[MindConverter工具](https://gitee.com/mindspore/mindinsight/tree/r0.7/mindinsight/mindconverter)实现PyTorch网络定义脚本到MindSpore网络定义脚本的自动转换。
下面,我们以ResNet-50的迁移,并在Ascend 910上训练为例:
1. 导入MindSpore模块。
- 根据所需使用的接口,导入相应的MindSpore模块,模块列表详见。
+ 根据所需使用的接口,导入相应的MindSpore模块,模块列表详见。
2. 加载数据集和预处理。
- 使用MindSpore构造你需要使用的数据集。目前MindSpore已支持常见数据集,你可以通过原始格式、`MindRecord`、`TFRecord`等多种接口调用,同时还支持数据处理以及数据增强等相关功能,具体用法可参考[准备数据教程](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/data_preparation.html)。
+ 使用MindSpore构造你需要使用的数据集。目前MindSpore已支持常见数据集,你可以通过原始格式、`MindRecord`、`TFRecord`等多种接口调用,同时还支持数据处理以及数据增强等相关功能,具体用法可参考[准备数据教程](https://www.mindspore.cn/tutorial/zh-CN/r0.7/use/data_preparation/data_preparation.html)。
本例中加载了Cifar-10数据集,可同时支持单卡和多卡的场景。
@@ -79,7 +79,7 @@ MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差
num_shards=device_num, shard_id=rank_id)
```
- 然后对数据进行了数据增强、数据清洗和批处理等操作。代码详见。
+ 然后对数据进行了数据增强、数据清洗和批处理等操作。代码详见。
3. 构建网络。
@@ -212,7 +212,7 @@ MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差
6. 构造整网。
- 将定义好的多个子网连接起来就是整个[ResNet-50](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py)网络的结构了。同样遵循先定义后使用的原则,在`__init__`中定义所有用到的子网,在`construct`中连接子网。
+ 将定义好的多个子网连接起来就是整个[ResNet-50](https://gitee.com/mindspore/mindspore/blob/r0.7/model_zoo/official/cv/resnet/src/resnet.py)网络的结构了。同样遵循先定义后使用的原则,在`__init__`中定义所有用到的子网,在`construct`中连接子网。
7. 定义损失函数和优化器。
@@ -233,7 +233,7 @@ MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差
loss_scale = FixedLossScaleManager(config.loss_scale, drop_overflow_update=False)
```
- 如果希望使用`Model`内置的评估方法,则可以使用[metrics](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/customized_debugging_information.html#mindspore-metrics)属性设置希望使用的评估方法。
+ 如果希望使用`Model`内置的评估方法,则可以使用[metrics](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/customized_debugging_information.html#mindspore-metrics)属性设置希望使用的评估方法。
```python
model = Model(net, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale, metrics={'acc'})
@@ -261,14 +261,14 @@ MindSpore与TensorFlow、PyTorch在网络结构组织方式上,存在一定差
#### 云上集成
-请参考[在云上使用MindSpore](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/use_on_the_cloud.html),将你的脚本运行在ModelArts。
+请参考[在云上使用MindSpore](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/use_on_the_cloud.html),将你的脚本运行在ModelArts。
### 推理阶段
-在Ascend 910 AI处理器上训练后的模型,支持在不同的硬件平台上执行推理。详细步骤可参考[多平台推理教程](https://www.mindspore.cn/tutorial/zh-CN/master/use/multi_platform_inference.html)。
+在Ascend 910 AI处理器上训练后的模型,支持在不同的硬件平台上执行推理。详细步骤可参考[多平台推理教程](https://www.mindspore.cn/tutorial/zh-CN/r0.7/use/multi_platform_inference.html)。
## 样例参考
-1. [常用数据集读取样例](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html)
+1. [常用数据集读取样例](https://www.mindspore.cn/tutorial/zh-CN/r0.7/use/data_preparation/loading_the_datasets.html)
-2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)
+2. [Model Zoo](https://gitee.com/mindspore/mindspore/tree/r0.7/model_zoo)
diff --git a/tutorials/source_zh_cn/advanced_use/nlp_application.md b/tutorials/source_zh_cn/advanced_use/nlp_application.md
index f6e70eb436668bbc35d24f47979cfc02064c724c..653b7588ebfc715a7820814997fca4494da1131f 100644
--- a/tutorials/source_zh_cn/advanced_use/nlp_application.md
+++ b/tutorials/source_zh_cn/advanced_use/nlp_application.md
@@ -23,8 +23,8 @@
-
-
+
+
## 概述
@@ -89,7 +89,7 @@ $F1分数 = (2 * Precision * Recall) / (Precision + Recall)$
> LSTM(Long short-term memory,长短期记忆)网络是一种时间循环神经网络,适合于处理和预测时间序列中间隔和延迟非常长的重要事件。具体介绍可参考网上资料,在此不再赘述。
3. 得到模型之后,使用验证数据集,查看模型精度情况。
-> 本例面向GPU或CPU硬件平台,你可以在这里下载完整的样例代码:
+> 本例面向GPU或CPU硬件平台,你可以在这里下载完整的样例代码:
> - `src/config.py`:网络中的一些配置,包括`batch size`、进行几次epoch训练等。
> - `src/dataset.py`:数据集相关,包括转换成MindRecord文件,数据预处理等。
> - `src/imdb.py`: 解析IMDB数据集的工具。
@@ -158,7 +158,7 @@ if args.preprocess == "true":
```
> 转换成功后会在`preprocess_path`路径下生成`mindrecord`文件; 通常该操作在数据集不变的情况下,无需每次训练都执行。
-> `convert_to_mindrecord`函数的具体实现请参考
+> `convert_to_mindrecord`函数的具体实现请参考
> 其中包含两大步骤:
> 1. 解析文本数据集,包括编码、分词、对齐、处理GloVe原始数据,使之能够适应网络结构。
@@ -178,7 +178,7 @@ network = SentimentNet(vocab_size=embedding_table.shape[0],
weight=Tensor(embedding_table),
batch_size=cfg.batch_size)
```
-> `SentimentNet`网络结构的具体实现请参考
+> `SentimentNet`网络结构的具体实现请参考
### 预训练模型
@@ -217,7 +217,7 @@ else:
model.train(cfg.num_epochs, ds_train, callbacks=[time_cb, ckpoint_cb, loss_cb])
print("============== Training Success ==============")
```
-> `lstm_create_dataset`函数的具体实现请参考
+> `lstm_create_dataset`函数的具体实现请参考
### 模型验证
diff --git a/tutorials/source_zh_cn/advanced_use/on_device_inference.md b/tutorials/source_zh_cn/advanced_use/on_device_inference.md
index 5676dc2a41297277cfd163f36dfc4294c22f6ce7..77c4b1b1c71d2e8aed932ac1bc5bde5de01b3847 100644
--- a/tutorials/source_zh_cn/advanced_use/on_device_inference.md
+++ b/tutorials/source_zh_cn/advanced_use/on_device_inference.md
@@ -11,7 +11,7 @@
-
+
## 概述
@@ -61,7 +61,7 @@ MindSpore Lite的框架主要由Frontend、IR、Backend、Lite RT、Micro构成
1. 从代码仓下载源码。
```bash
- git clone https://gitee.com/mindspore/mindspore.git
+ git clone https://gitee.com/mindspore/mindspore.git -b r0.7
```
2. 在源码根目录下,执行如下命令编译MindSpore Lite。
diff --git a/tutorials/source_zh_cn/advanced_use/parameter_server_training.md b/tutorials/source_zh_cn/advanced_use/parameter_server_training.md
index a6578c7f845d414d314d9a44464daa988d75fe09..4ae1ffa21e40c3cca7386e530d07d2b38806b2e6 100644
--- a/tutorials/source_zh_cn/advanced_use/parameter_server_training.md
+++ b/tutorials/source_zh_cn/advanced_use/parameter_server_training.md
@@ -14,7 +14,7 @@
-
+
## 概述
Parameter Server(参数服务器)是分布式训练中一种广泛使用的架构,相较于同步的AllReduce训练方法,Parameter Server具有更好的灵活性、可扩展性以及节点容灾的能力。具体来讲,参数服务器既支持同步SGD,也支持异步SGD的训练算法;在扩展性上,将模型的计算与模型的更新分别部署在Worker和Server两类进程中,使得Worker和Server的资源可以独立地横向扩缩;另外,在大规模数据中心的环境下,计算设备、网络以及存储经常会出现各种故障而导致部分节点异常,而在参数服务器的架构下,能够较为容易地处理此类的故障而不会对训练中的任务产生影响。
@@ -36,7 +36,7 @@ Parameter Server(参数服务器)是分布式训练中一种广泛使用的架
### 训练脚本准备
-参考,使用[MNIST数据集](http://yann.lecun.com/exdb/mnist/),了解如何训练一个LeNet网络。
+参考,使用[MNIST数据集](http://yann.lecun.com/exdb/mnist/),了解如何训练一个LeNet网络。
### 参数设置
@@ -45,7 +45,7 @@ Parameter Server(参数服务器)是分布式训练中一种广泛使用的架
- 通过`mindspore.nn.Cell.set_param_ps()`对`nn.Cell`中所有权重递归设置
- 通过`mindspore.common.Parameter.set_param_ps()`对此权重进行设置
-在[原训练脚本](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/lenet/train.py)基础上,设置LeNet模型所有权重通过Parameter Server训练:
+在[原训练脚本](https://gitee.com/mindspore/mindspore/blob/r0.7/model_zoo/official/cv/lenet/train.py)基础上,设置LeNet模型所有权重通过Parameter Server训练:
```python
network = LeNet5(cfg.num_classes)
network.set_param_ps()
diff --git a/tutorials/source_zh_cn/advanced_use/performance_profiling.md b/tutorials/source_zh_cn/advanced_use/performance_profiling.md
index 00f65da84e55c7fb1252666a263457bc5bffaab5..3b5ae7911f5a36d1b5cae8d4a1efcfac10f98bb5 100644
--- a/tutorials/source_zh_cn/advanced_use/performance_profiling.md
+++ b/tutorials/source_zh_cn/advanced_use/performance_profiling.md
@@ -18,7 +18,7 @@
-
+
## 概述
将训练过程中的算子耗时等信息记录到文件中,通过可视化界面供用户查看分析,帮助用户更高效地调试神经网络性能。
@@ -70,7 +70,7 @@ def test_profiler():
## 启动MindInsight
-启动命令请参考[MindInsight相关命令](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mindinsight_commands.html)。
+启动命令请参考[MindInsight相关命令](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/mindinsight_commands.html)。
### 性能分析
@@ -189,6 +189,6 @@ Timeline主要包含如下几个部分:
> 如何控制step数目请参考数据准备教程:
>
- >
+ >
- Timeline数据的解析比较耗时,且一般几个step的数据即足够分析出结果。出于数据解析和UI展示性能的考虑,Profiler最多展示20M数据(对大型网络20M可以显示10+条step的信息)。
diff --git a/tutorials/source_zh_cn/advanced_use/performance_profiling_gpu.md b/tutorials/source_zh_cn/advanced_use/performance_profiling_gpu.md
index da2ee6df36fb65f3e275b5235f0ed3c48f21401f..bf0143779cd477d046f170609cf17946d276b731 100644
--- a/tutorials/source_zh_cn/advanced_use/performance_profiling_gpu.md
+++ b/tutorials/source_zh_cn/advanced_use/performance_profiling_gpu.md
@@ -14,7 +14,7 @@
-
+
## 概述
将训练过程中的算子耗时等信息记录到文件中,通过可视化界面供用户查看分析,帮助用户更高效地调试神经网络性能。
@@ -23,7 +23,7 @@
> 操作流程可以参考Ascend 910上profiler的操作:
>
->
+>
## 准备训练脚本
@@ -33,7 +33,7 @@
> 样例代码与Ascend使用方式一致可以参考:
>
->
+>
GPU场景下还可以用自定义callback的方式收集性能数据,示例如下:
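教程中`StopAtStep`回调的完整实现未包含在本diff中,下面给出一个按step区间收集性能数据的简要示意(非教程原文;`Profiler`的导入路径与默认参数以r0.7教程脚本为准,此处为假设):

```python
from mindspore.profiler import Profiler
from mindspore.train.callback import Callback

class StopAtStep(Callback):
    """在[start_step, stop_step]区间内收集性能数据的回调(示意)。"""
    def __init__(self, start_step, stop_step):
        super(StopAtStep, self).__init__()
        self.start_step = start_step
        self.stop_step = stop_step
        self.already_analysed = False

    def step_begin(self, run_context):
        cb_params = run_context.original_args()
        if cb_params.cur_step_num == self.start_step:
            self.profiler = Profiler()          # 到达起始step时开始收集,输出路径等使用默认值

    def step_end(self, run_context):
        cb_params = run_context.original_args()
        if cb_params.cur_step_num == self.stop_step and not self.already_analysed:
            self.profiler.analyse()             # 到达结束step时解析并输出性能数据
            self.already_analysed = True
```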
@@ -67,7 +67,7 @@ class StopAtStep(Callback):
## 启动MindInsight
-启动命令请参考[MindInsight相关命令](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mindinsight_commands.html)。
+启动命令请参考[MindInsight相关命令](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/mindinsight_commands.html)。
### 性能分析
diff --git a/tutorials/source_zh_cn/advanced_use/quantization_aware.md b/tutorials/source_zh_cn/advanced_use/quantization_aware.md
index d5cf8e129c840710ac43542d5669c13678832fe8..6649f7d3f3d377f258e4b756b4c3186157b0e239 100644
--- a/tutorials/source_zh_cn/advanced_use/quantization_aware.md
+++ b/tutorials/source_zh_cn/advanced_use/quantization_aware.md
@@ -20,7 +20,7 @@
-
+
## 背景
@@ -51,7 +51,7 @@ MindSpore的感知量化训练是在训练基础上,使用低精度数据替
| 规格 | 规格说明 |
| --- | --- |
| 硬件支持 | GPU、Ascend AI 910处理器的硬件平台 |
-| 网络支持 | 已实现的网络包括LeNet、ResNet50等网络,具体请参见。 |
+| 网络支持 | 已实现的网络包括LeNet、ResNet50等网络,具体请参见。 |
| 算法支持 | 在MindSpore的伪量化训练中,支持非对称和对称的量化算法。 |
| 方案支持 | 支持4、7和8比特的量化方案。 |
@@ -76,7 +76,7 @@ MindSpore的感知量化训练是在训练基础上,使用低精度数据替
接下来,以LeNet网络为例,展开叙述3、6两个步骤。
-> 你可以在这里找到完整可运行的样例代码: 。
+> 你可以在这里找到完整可运行的样例代码: 。
### 定义融合网络
@@ -175,7 +175,7 @@ net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=100
2. 定义网络。
3. 定义融合网络。
4. 定义优化器和损失函数。
- 5. 加载模型文件模型重训。加载已有模型文件,基于融合网络重新训练生成融合模型。详细模型重载训练,请参见
+ 5. 加载模型文件模型重训。加载已有模型文件,基于融合网络重新训练生成融合模型。详细模型重载训练,请参见
6. 转化量化网络。
7. 进行量化训练。
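对应上面第6步“转化量化网络”,下面给出一个简要示意(非教程原文;`qat`的导入路径为推测,`net`为前面步骤得到并完成重训的融合网络,各参数取值均为示例):

```python
from mindspore.train.quant import quant as qat   # 假设的导入路径,以教程脚本为准

# net为上一步定义并完成重训的融合网络,参数取值仅作示例
net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=10000)
```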
@@ -183,7 +183,7 @@ net = qat.convert_quant_network(net, quant_delay=0, bn_fold=False, freeze_bn=100
使用量化模型进行推理,与普通模型推理一致,分为直接checkpoint文件推理及转化为通用模型格式(ONNX、AIR等)进行推理。
-> 推理详细说明请参见。
+> 推理详细说明请参见。
- 使用感知量化训练后得到的checkpoint文件进行推理:
diff --git a/tutorials/source_zh_cn/advanced_use/second_order_optimizer_for_resnet50_application.md b/tutorials/source_zh_cn/advanced_use/second_order_optimizer_for_resnet50_application.md
index 55371f8a0c3c4901ce44be4f1e8ccba74a35af1a..aa8a6044e7b72a125d1ade9a0312f848c1b92f08 100644
--- a/tutorials/source_zh_cn/advanced_use/second_order_optimizer_for_resnet50_application.md
+++ b/tutorials/source_zh_cn/advanced_use/second_order_optimizer_for_resnet50_application.md
@@ -21,7 +21,7 @@
- [模型推理](#模型推理)
-
+
## 概述
@@ -32,7 +32,7 @@ MindSpore开发团队在现有的自然梯度算法的基础上,对FIM矩阵
本篇教程将主要介绍如何在Ascend 910 以及GPU上,使用MindSpore提供的二阶优化器THOR训练ResNet50-v1.5网络和ImageNet数据集。
> 你可以在这里下载完整的示例代码:
- 。
+ 。
### 示例代码目录结构
@@ -93,10 +93,10 @@ MindSpore开发团队在现有的自然梯度算法的基础上,对FIM矩阵
```
### 配置分布式环境变量
#### Ascend 910
-Ascend 910 AI处理器的分布式环境变量配置参考[分布式并行训练 (Ascend)](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/distributed_training_ascend.html#id4)。
+Ascend 910 AI处理器的分布式环境变量配置参考[分布式并行训练 (Ascend)](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/distributed_training_ascend.html#id4)。
#### GPU
-GPU的分布式环境配置参考[分布式并行训练 (GPU)](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/distributed_training_gpu.html#id4)。
+GPU的分布式环境配置参考[分布式并行训练 (GPU)](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/distributed_training_gpu.html#id4)。
## 加载处理数据集
@@ -155,11 +155,11 @@ def create_dataset(dataset_path, do_train, repeat_num=1, batch_size=32, target="
return ds
```
-> MindSpore支持进行多种数据处理和增强的操作,各种操作往往组合使用,具体可以参考[数据处理与数据增强](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/data_processing_and_augmentation.html)章节。
+> MindSpore支持进行多种数据处理和增强的操作,各种操作往往组合使用,具体可以参考[数据处理与数据增强](https://www.mindspore.cn/tutorial/zh-CN/r0.7/use/data_preparation/data_processing_and_augmentation.html)章节。
## 定义网络
-本示例中使用的网络模型为ResNet50-v1.5,先定义[ResNet50网络](https://gitee.com/mindspore/mindspore/blob/master/model_zoo/official/cv/resnet/src/resnet.py),然后使用二阶优化器自定义的算子替换`Conv2d`和
+本示例中使用的网络模型为ResNet50-v1.5,先定义[ResNet50网络](https://gitee.com/mindspore/mindspore/blob/r0.7/model_zoo/official/cv/resnet/src/resnet.py),然后使用二阶优化器自定义的算子替换`Conv2d`和
`Dense`算子。定义好的网络模型在源码`src/resnet_thor.py`脚本中,自定义的算子`Conv2d_thor`和`Dense_thor`在`src/thor_layer.py`脚本中。
- 使用`Conv2d_thor`替换原网络模型中的`Conv2d`
diff --git a/tutorials/source_zh_cn/advanced_use/serving.md b/tutorials/source_zh_cn/advanced_use/serving.md
index ce3d94b14df413bf807be11d05c58db289045154..d1c51e22a4418c418ee982baa108daf45bd090e2 100644
--- a/tutorials/source_zh_cn/advanced_use/serving.md
+++ b/tutorials/source_zh_cn/advanced_use/serving.md
@@ -16,7 +16,7 @@
- [REST API客户端示例](#rest-api客户端示例)
-
+
## 概述
@@ -50,7 +50,7 @@ ms_serving [--help] [--model_path=] [--model_name=] [--p
### 导出模型
> 导出模型之前,需要配置MindSpore[基础环境](https://www.mindspore.cn/install)。
-使用[add_model.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/export_model/add_model.py),构造一个只有Add算子的网络,并导出MindSpore推理部署模型。
+使用[add_model.py](https://gitee.com/mindspore/mindspore/blob/r0.7/serving/example/export_model/add_model.py),构造一个只有Add算子的网络,并导出MindSpore推理部署模型。
```python
python add_model.py
@@ -70,7 +70,7 @@ ms_serving --model_path={model directory} --model_name=tensor_add.mindir
#### Python客户端示例
> 执行客户端前,需将`/{your python path}/lib/python3.7/site-packages/mindspore`对应的路径添加到环境变量PYTHONPATH中。
-获取[ms_client.py](https://gitee.com/mindspore/mindspore/blob/master/serving/example/python_client/ms_client.py),启动Python客户端。
+获取[ms_client.py](https://gitee.com/mindspore/mindspore/blob/r0.7/serving/example/python_client/ms_client.py),启动Python客户端。
```bash
python ms_client.py
```
@@ -155,7 +155,7 @@ ms client received:
3. 调用gRPC接口和已经启动的Serving服务通信,并取回返回值。
```Status status = stub_->Predict(&context, request, &reply);```
-完整代码参考[ms_client](https://gitee.com/mindspore/mindspore/blob/master/serving/example/cpp_client/ms_client.cc)。
+完整代码参考[ms_client](https://gitee.com/mindspore/mindspore/blob/r0.7/serving/example/cpp_client/ms_client.cc)。
### REST API客户端示例
1. `data`形式发送数据:
diff --git a/tutorials/source_zh_cn/advanced_use/summary_record.md b/tutorials/source_zh_cn/advanced_use/summary_record.md
index e76ff8e730246d65547f08a1bec986b74c4371fe..1cfdc15a0c0788112cb38ce2b424ee9277cb355b 100644
--- a/tutorials/source_zh_cn/advanced_use/summary_record.md
+++ b/tutorials/source_zh_cn/advanced_use/summary_record.md
@@ -16,8 +16,8 @@
-
-
+
+
## 概述
@@ -129,10 +129,10 @@ model.eval(ds_eval, callbacks=[summary_collector])
MindSpore除了提供 `SummaryCollector` 能够自动收集一些常见数据,还提供了Summary算子,支持在网络中自定义收集其他的数据,比如每一个卷积层的输入,或在损失函数中的损失值等。
当前支持的Summary算子:
-- [ScalarSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html?highlight=scalarsummary#mindspore.ops.operations.ScalarSummary): 记录标量数据
-- [TensorSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html?highlight=tensorsummary#mindspore.ops.operations.TensorSummary): 记录张量数据
-- [ImageSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html?highlight=imagesummary#mindspore.ops.operations.ImageSummary): 记录图片数据
-- [HistogramSummary](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.ops.operations.html?highlight=histogramsummar#mindspore.ops.operations.HistogramSummary): 将张量数据转为直方图数据记录
+- [ScalarSummary](https://www.mindspore.cn/api/zh-CN/r0.7/api/python/mindspore/mindspore.ops.operations.html?highlight=scalarsummary#mindspore.ops.operations.ScalarSummary): 记录标量数据
+- [TensorSummary](https://www.mindspore.cn/api/zh-CN/r0.7/api/python/mindspore/mindspore.ops.operations.html?highlight=tensorsummary#mindspore.ops.operations.TensorSummary): 记录张量数据
+- [ImageSummary](https://www.mindspore.cn/api/zh-CN/r0.7/api/python/mindspore/mindspore.ops.operations.html?highlight=imagesummary#mindspore.ops.operations.ImageSummary): 记录图片数据
+- [HistogramSummary](https://www.mindspore.cn/api/zh-CN/r0.7/api/python/mindspore/mindspore.ops.operations.html?highlight=histogramsummary#mindspore.ops.operations.HistogramSummary): 将张量数据转为直方图数据记录
记录方式如下面的步骤所示。
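该部分步骤未完整出现在本diff中,下面给出一个在自定义损失函数中使用`ScalarSummary`算子记录损失值的简要示意(非教程原文,网络与损失函数的写法均为示例):

```python
import mindspore.nn as nn
from mindspore.ops import operations as P

class CrossEntropyLoss(nn.Cell):
    """在损失函数的construct中调用ScalarSummary记录标量(示意,假设label已为one-hot形式)。"""
    def __init__(self):
        super(CrossEntropyLoss, self).__init__()
        self.cross_entropy = P.SoftmaxCrossEntropyWithLogits()
        self.mean = P.ReduceMean(keep_dims=False)
        self.scalar_summary = P.ScalarSummary()

    def construct(self, logits, label):
        loss = self.cross_entropy(logits, label)[0]   # 取每个样本的损失值
        loss = self.mean(loss, (-1,))                  # 对batch求均值
        self.scalar_summary("loss", loss)              # 以"loss"为名记录标量,供MindInsight展示
        return loss
```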
@@ -332,7 +332,7 @@ mindinsight start --summary-base-dir ./summary
mindinsight stop
```
-更多参数设置,请点击查看[MindInsight相关命令](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mindinsight_commands.html)页面。
+更多参数设置,请点击查看[MindInsight相关命令](https://www.mindspore.cn/tutorial/zh-CN/r0.7/advanced_use/mindinsight_commands.html)页面。
## 注意事项
diff --git a/tutorials/source_zh_cn/advanced_use/synchronization_training_and_evaluation.md b/tutorials/source_zh_cn/advanced_use/synchronization_training_and_evaluation.md
index 773b4c4535380e34e324b64ef2abf4b429ed0b2d..981d9d8a7016a00a40989186aa95995b06e367e1 100644
--- a/tutorials/source_zh_cn/advanced_use/synchronization_training_and_evaluation.md
+++ b/tutorials/source_zh_cn/advanced_use/synchronization_training_and_evaluation.md
@@ -13,9 +13,9 @@
-
+
-
+
## 概述
@@ -26,11 +26,11 @@
2. 定义训练网络并执行。
3. 将不同epoch下的模型精度绘制出折线图并挑选最优模型。
-完整示例请参考[notebook](https://gitee.com/mindspore/docs/blob/master/tutorials/notebook/synchronization_training_and_evaluation.ipynb)。
+完整示例请参考[notebook](https://gitee.com/mindspore/docs/blob/r0.7/tutorials/notebook/synchronization_training_and_evaluation.ipynb)。
## 定义回调函数EvalCallBack
-实现思想:每隔n个epoch验证一次模型精度,由于在自定义函数中实现,如需了解详细用法,请参考[API说明](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.train.html?highlight=callback#mindspore.train.callback.Callback);
+实现思想:每隔n个epoch验证一次模型精度;由于该功能通过自定义回调函数实现,如需了解Callback的详细用法,请参考[API说明](https://www.mindspore.cn/api/zh-CN/r0.7/api/python/mindspore/mindspore.train.html?highlight=callback#mindspore.train.callback.Callback);
核心实现:回调函数的`epoch_end`内设置验证点,如下:
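下方示意对应这一核心实现(非教程原文;`eval_per_epoch`、`epoch_per_eval`等名称以及`Accuracy`指标名均为示例/假设):

```python
from mindspore.train.callback import Callback

class EvalCallBack(Callback):
    """每隔eval_per_epoch个epoch,在epoch_end中验证一次模型精度(示意)。"""
    def __init__(self, model, eval_dataset, eval_per_epoch, epoch_per_eval):
        self.model = model
        self.eval_dataset = eval_dataset
        self.eval_per_epoch = eval_per_epoch
        self.epoch_per_eval = epoch_per_eval    # 形如 {"epoch": [], "acc": []},用于后续绘制折线图

    def epoch_end(self, run_context):
        cb_param = run_context.original_args()
        cur_epoch = cb_param.cur_epoch_num
        if cur_epoch % self.eval_per_epoch == 0:
            acc = self.model.eval(self.eval_dataset, dataset_sink_mode=False)
            self.epoch_per_eval["epoch"].append(cur_epoch)
            self.epoch_per_eval["acc"].append(acc["Accuracy"])   # 假设Model定义时使用了Accuracy指标
            print(acc)
```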
diff --git a/tutorials/source_zh_cn/advanced_use/use_on_the_cloud.md b/tutorials/source_zh_cn/advanced_use/use_on_the_cloud.md
index 6fbc1722b67c71ce7dc3d81047c8d3002b4d7bfb..1d2427b85c0e5c937df81ab4a8cebb749d15abf2 100644
--- a/tutorials/source_zh_cn/advanced_use/use_on_the_cloud.md
+++ b/tutorials/source_zh_cn/advanced_use/use_on_the_cloud.md
@@ -24,7 +24,7 @@
-
+
## 概述
@@ -69,7 +69,7 @@ ModelArts使用对象存储服务(Object Storage Service,简称OBS)进行
### 执行脚本准备
新建一个自己的OBS桶(例如:`resnet50-train`),在桶中创建代码目录(例如:`resnet50_cifar10_train`),并将以下目录中的所有脚本上传至代码目录:
-> 脚本使用ResNet-50网络在CIFAR-10数据集上进行训练,并在训练结束后验证精度。脚本可以在ModelArts采用`1*Ascend`或`8*Ascend`两种不同规格进行训练任务。
+> 脚本使用ResNet-50网络在CIFAR-10数据集上进行训练,并在训练结束后验证精度。脚本可以在ModelArts采用`1*Ascend`或`8*Ascend`两种不同规格进行训练任务。
为了方便后续创建训练作业,先创建训练输出目录和日志输出目录,本示例创建的目录结构如下:
@@ -108,7 +108,7 @@ ModelArts使用对象存储服务(Object Storage Service,简称OBS)进行
### 适配OBS数据
MindSpore暂时没有提供直接访问OBS数据的接口,需要通过MoXing提供的API与OBS交互。ModelArts训练脚本在容器中执行,通常选用`/cache`目录作为容器数据存储路径。
-> 华为云MoXing提供了丰富的API供用户使用,本示例中仅需要使用`copy_parallel`接口。
+> 华为云MoXing提供了丰富的API供用户使用,本示例中仅需要使用`copy_parallel`接口。
1. 将OBS中存储的数据下载至执行容器。
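   该步骤对应的代码未出现在本diff中,下面给出使用`copy_parallel`把OBS数据并行拷贝到容器本地的简要示意(桶名与路径均为示例):

   ```python
   import moxing as mox

   # 将OBS桶中的数据集并行拷贝到容器的/cache目录(源路径与目标路径均为示例)
   mox.file.copy_parallel(src_url='obs://resnet50-train/cifar-10/', dst_url='/cache/data_path')
   ```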
diff --git a/tutorials/source_zh_cn/quick_start/linear_regression.md b/tutorials/source_zh_cn/quick_start/linear_regression.md
index 9f7ab617c692f9e04a200e87f2a5203a9d1bbae8..18e69319de898a00f15984d03a8725e86a8e601c 100644
--- a/tutorials/source_zh_cn/quick_start/linear_regression.md
+++ b/tutorials/source_zh_cn/quick_start/linear_regression.md
@@ -26,9 +26,9 @@
-
+
-
+
## 概述
@@ -42,7 +42,7 @@
4. 定义线性拟合过程的可视化函数
5. 执行训练
-本次样例源代码请参考:。
+本次样例源代码请参考:。
## 环境准备
@@ -306,7 +306,7 @@ class GradWrap(nn.Cell):
### 反向传播更新权重
-`nn.RMSProp`为完成权重更新的函数,更新方式大致为公式10,但是考虑的因素更多,具体信息请参考[官网说明](https://www.mindspore.cn/api/zh-CN/master/api/python/mindspore/mindspore.nn.html?highlight=rmsprop#mindspore.nn.RMSProp)。
+`nn.RMSProp`为完成权重更新的函数,更新方式大致为公式10,但是考虑的因素更多,具体信息请参考[官网说明](https://www.mindspore.cn/api/zh-CN/r0.7/api/python/mindspore/mindspore.nn.html?highlight=rmsprop#mindspore.nn.RMSProp)。
```python
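# 补充示意(非教程原文):按上文说明构造nn.RMSProp优化器完成权重更新,
# net为前文定义的网络,学习率取值仅为示例。
optim = nn.RMSProp(params=net.trainable_params(), learning_rate=0.02)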
diff --git a/tutorials/source_zh_cn/quick_start/quick_start.md b/tutorials/source_zh_cn/quick_start/quick_start.md
index 2cc7e51cff73c29199ed76c82c50bc28257f15f4..25a50a5c8c055413a9efea6ad436698b993598c1 100644
--- a/tutorials/source_zh_cn/quick_start/quick_start.md
+++ b/tutorials/source_zh_cn/quick_start/quick_start.md
@@ -25,9 +25,9 @@
-
+
-
+
## 概述
@@ -41,7 +41,7 @@
5. 加载保存的模型,进行推理。
6. 验证模型,加载测试数据集和训练后的模型,验证结果精度。
-> 你可以在这里找到完整可运行的样例代码: 。
+> 你可以在这里找到完整可运行的样例代码: 。
这是简单、基础的应用流程,其他高级、复杂的应用可以基于这个基本流程进行扩展。
@@ -87,7 +87,7 @@
import os
```
-详细的MindSpore的模块说明,可以在[MindSpore API页面](https://www.mindspore.cn/api/zh-CN/master/index.html)中搜索查询。
+详细的MindSpore的模块说明,可以在[MindSpore API页面](https://www.mindspore.cn/api/zh-CN/r0.7/index.html)中搜索查询。
### 配置运行信息
@@ -183,7 +183,7 @@ def create_dataset(data_path, batch_size=32, repeat_size=1,
先进行shuffle、batch操作,再进行repeat操作,这样能保证1个epoch内数据不重复。
-> MindSpore支持进行多种数据处理和增强的操作,各种操作往往组合使用,具体可以参考[数据处理与数据增强](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/data_processing_and_augmentation.html)章节。
+> MindSpore支持进行多种数据处理和增强的操作,各种操作往往组合使用,具体可以参考[数据处理与数据增强](https://www.mindspore.cn/tutorial/zh-CN/r0.7/use/data_preparation/data_processing_and_augmentation.html)章节。
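下面用一个简要示意说明上述顺序要点:先shuffle、batch,最后repeat,以保证1个epoch内样本不重复(数据集路径与各参数均为示例值):

```python
import mindspore.dataset as ds

mnist_ds = ds.MnistDataset("./MNIST_Data/train")       # 数据集路径为示例
mnist_ds = mnist_ds.shuffle(buffer_size=10000)         # 先打乱
mnist_ds = mnist_ds.batch(32, drop_remainder=True)     # 再组batch
mnist_ds = mnist_ds.repeat(1)                          # 最后repeat,1个epoch内数据不重复
```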
## 定义网络
diff --git a/tutorials/source_zh_cn/quick_start/quick_video.md b/tutorials/source_zh_cn/quick_start/quick_video.md
index d04f894bd84ee0a970f3120a60901a0632c98249..87192f943ccd33852318cd5fdd8ec390a27599d9 100644
--- a/tutorials/source_zh_cn/quick_start/quick_video.md
+++ b/tutorials/source_zh_cn/quick_start/quick_video.md
@@ -10,7 +10,7 @@