From 49b3d823f7c7c94d468ede68497b82e7501abe85 Mon Sep 17 00:00:00 2001
From: huanxiaoling <3174348550@qq.com>
Date: Mon, 21 Nov 2022 17:43:49 +0800
Subject: [PATCH] modify the contents in tutorials

---
 tutorials/source_en/advanced/mixed_precision.md       | 2 ++
 tutorials/source_zh_cn/advanced/mixed_precision.ipynb | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/tutorials/source_en/advanced/mixed_precision.md b/tutorials/source_en/advanced/mixed_precision.md
index 5efa9a9280..4cae439eca 100644
--- a/tutorials/source_en/advanced/mixed_precision.md
+++ b/tutorials/source_en/advanced/mixed_precision.md
@@ -270,6 +270,8 @@ for epoch in range(epochs):
 
 The `Model` interface provides the input `amp_level` to achieve automatic mixed precision, or the user can set the operator involved in the Cell to FP16 via `to_float(ms.float16)` to achieve manual mixed precision.
 
+> This method only supports Ascend and GPU.
+
 #### Automatic Mixed-Precision
 
 To use the automatic mixed-precision, you need to call the `Model` API to transfer the network to be trained and optimizer as the input. This API converts the network model operators into FP16 operators.
diff --git a/tutorials/source_zh_cn/advanced/mixed_precision.ipynb b/tutorials/source_zh_cn/advanced/mixed_precision.ipynb
index 8125ae9aad..bccba37634 100644
--- a/tutorials/source_zh_cn/advanced/mixed_precision.ipynb
+++ b/tutorials/source_zh_cn/advanced/mixed_precision.ipynb
@@ -410,6 +410,8 @@
     "\n",
     "`Model` 接口提供了入参 `amp_level` 实现自动混合精度，用户也可以通过 `to_float(ms.float16)` 把Cell中涉及的算子设置成FP16，实现手动混合精度。\n",
     "\n",
+    "> 该方式仅支持Ascend和GPU。\n",
+    "\n",
     "#### 自动混合精度\n",
     "\n",
     "使用自动混合精度，需要调用`Model`接口，将待训练网络和优化器作为输入传入，该接口会根据设定策略把对应的网络模型的算子转换成FP16算子。\n",
-- 
Gitee
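For context on what the patched note restricts to Ascend and GPU: manual mixed precision (`to_float(ms.float16)`) runs a Cell's compute-heavy operators in FP16 while keeping the rest of the pipeline in FP32. The following is a minimal backend-agnostic NumPy sketch of that idea, not MindSpore code; the function name and shapes are illustrative only.

```python
import numpy as np

def fp16_forward(x, w):
    # Cast inputs and weights to FP16 for the compute-heavy matmul,
    # analogous to what to_float(ms.float16) does for a Cell's operators.
    y = x.astype(np.float16) @ w.astype(np.float16)
    # Return the result in FP32: mixed-precision schemes keep master
    # weights and downstream accumulation in full precision.
    return y.astype(np.float32)

x = np.ones((2, 3), dtype=np.float32)
w = np.full((3, 4), 0.5, dtype=np.float32)
out = fp16_forward(x, w)
print(out.dtype)    # float32
print(out[0, 0])    # 1.5  (3 * 0.5, exactly representable in FP16)
```

Automatic mixed precision via `Model(..., amp_level=...)` applies this kind of casting per the chosen level rather than per-Cell by hand.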