diff --git a/tutorials/source_en/advanced/mixed_precision.md b/tutorials/source_en/advanced/mixed_precision.md
index 5efa9a9280e8e1452c915e0caaba4a52f5abceec..4cae439eca349d5142847d5d40fc6b7a79b845cd 100644
--- a/tutorials/source_en/advanced/mixed_precision.md
+++ b/tutorials/source_en/advanced/mixed_precision.md
@@ -270,6 +270,8 @@ for epoch in range(epochs):
 
 The `Model` interface provides the input `amp_level` to achieve automatic mixed precision, or the user can set the operator involved in the Cell to FP16 via `to_float(ms.float16)` to achieve manual mixed precision.
 
+> This method only supports Ascend and GPU.
+
 #### Automatic Mixed-Precision
 
 To use the automatic mixed-precision, you need to call the `Model` API to transfer the network to be trained and optimizer as the input. This API converts the network model operators into FP16 operators.
diff --git a/tutorials/source_zh_cn/advanced/mixed_precision.ipynb b/tutorials/source_zh_cn/advanced/mixed_precision.ipynb
index 8125ae9aadddc880fc92eaf46ce6e7d531838df8..bccba3763436f70b787d13dee3c200e5a1e13f1b 100644
--- a/tutorials/source_zh_cn/advanced/mixed_precision.ipynb
+++ b/tutorials/source_zh_cn/advanced/mixed_precision.ipynb
@@ -410,6 +410,8 @@
     "\n",
     "`Model` 接口提供了入参 `amp_level` 实现自动混合精度,用户也可以通过 `to_float(ms.float16)` 把Cell中涉及的算子设置成FP16,实现手动混合精度。\n",
     "\n",
+    "> 该方式仅支持Ascend和GPU。\n",
+    "\n",
     "#### 自动混合精度\n",
     "\n",
     "使用自动混合精度,需要调用`Model`接口,将待训练网络和优化器作为输入传入,该接口会根据设定策略把对应的网络模型的算子转换成FP16算子。\n",
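
The paragraphs patched above explain that `amp_level` / `to_float(ms.float16)` convert network operators to FP16. As background for why FP16 operators are normally paired with loss scaling, here is a minimal NumPy sketch (illustrative only, not MindSpore code; `grad_fp16` and the chosen loss scale are hypothetical for this example):

```python
import numpy as np

def grad_fp16(raw_grad, loss_scale=1.0):
    """Simulate a gradient passing through an FP16 operator.

    The scaled gradient is cast to float16 (as an FP16 operator would
    produce), then the loss scale is divided back out in float32.
    """
    scaled = np.float16(raw_grad * loss_scale)   # FP16 compute
    return np.float32(scaled) / loss_scale       # unscale in FP32

tiny = 1e-8  # below float16's smallest subnormal (~6e-8)

# Without scaling, the gradient underflows to zero in FP16.
print(grad_fp16(tiny, loss_scale=1.0))      # 0.0
# Scaling by 2**24 keeps it representable; unscaling roughly recovers it.
print(grad_fp16(tiny, loss_scale=2.0**24))  # ~1e-8
```

This underflow behavior is why automatic mixed precision implementations typically combine FP16 operator conversion with a loss-scale manager.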