From dc3cd5264c0897074a9a184dfa859fcea2ac7b90 Mon Sep 17 00:00:00 2001
From: huan <3174348550@qq.com>
Date: Mon, 11 Aug 2025 16:14:26 +0800
Subject: [PATCH] modify mapping files
---
.../mindspore/source_en/note/api_mapping/pytorch_api_mapping.md | 2 +-
.../source_zh_cn/note/api_mapping/pytorch_api_mapping.md | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
index 573cf08874..56a7722d2f 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
@@ -37,7 +37,7 @@ Because of the framework mechanism, MindSpore does not provide the following par
| :-------------: | :----------------------------------------------------------: |:--:|
| out | Indicates the output Tensor |Assigns the operation result to the out parameter; not supported in MindSpore.|
| layout | Indicates the memory distribution strategy |PyTorch supports torch.strided and torch.sparse_coo; not supported in MindSpore.|
-| device | Indicates the Tensor storage location |Including device type and optional device number, MindSpore currently supports operator or network-level device scheduling.|
+| device | Indicates the Tensor storage location |Including device type and optional device number. MindSpore supports the following options: <br> 1. After creation, a Tensor resides on the CPU by default; when operators execute, it is automatically copied to the device_target configured via [set_device](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mindspore/mindspore.set_device.html). <br> 2. To copy a Tensor manually after creation, call [Tensor.move_to](https://www.mindspore.cn/docs/en/r2.7.0/api_python/mindspore/Tensor/mindspore.Tensor.move_to.html).|
| requires_grad | Indicates whether to update the gradient |In MindSpore this can be controlled through the `Parameter.requires_grad` attribute.|
| pin_memory | Indicates whether to use page-locked (pinned) memory |Not supported in MindSpore.|
| memory_format | Indicates the memory format of the Tensor |Not supported in MindSpore.|
diff --git a/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md b/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
index 5817bd30e3..e48dfb7c62 100644
--- a/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/source_zh_cn/note/api_mapping/pytorch_api_mapping.md
@@ -37,7 +37,7 @@ mindspore.mint.argmax has only one API form, namely mindspore.mint.argmax(input, dim
|:-------------:| :------------------------------------------------: |:------------------------------------------------------------:|
| out | Indicates the output Tensor | Assigns the operation result to the out parameter; MindSpore currently has no such mechanism |
| layout | Indicates the memory distribution strategy | PyTorch supports the torch.strided and torch.sparse_coo modes; MindSpore currently has no such mechanism |
-| device | Indicates the Tensor storage location | Including device type and optional device number; MindSpore currently supports operator- or network-level device scheduling |
+| device | Indicates the Tensor storage location | Including device type and optional device number. MindSpore supports the following options: <br> 1. After creation, a Tensor resides on the CPU by default; when operators execute, it is automatically copied to the device_target configured via [set_device](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mindspore/mindspore.set_device.html). <br> 2. To copy a Tensor manually after creation, call the [Tensor.move_to](https://www.mindspore.cn/docs/zh-CN/r2.7.0/api_python/mindspore/Tensor/mindspore.Tensor.move_to.html) interface. |
| requires_grad | Indicates whether to update the gradient | In MindSpore this can be controlled through `Parameter.requires_grad` |
| pin_memory | Indicates whether to use page-locked (pinned) memory | MindSpore currently has no such mechanism |
| memory_format | Indicates the memory format of the Tensor | MindSpore currently has no such mechanism |
--
Gitee