From ced8f2f3ed2ba3ccfe793ea245e406e429d268e4 Mon Sep 17 00:00:00 2001
From: pengzirong
Date: Tue, 12 Apr 2022 10:25:06 +0800
Subject: [PATCH] Add the operators missing from the NPU custom operator list
 (already present in the detailed operator descriptions)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 ...\214\201\346\270\205\345\215\225_1.5.0.md" | 50 +++----------------
 1 file changed, 7 insertions(+), 43 deletions(-)

diff --git "a/docs/zh/PyTorch API\346\224\257\346\214\201\346\270\205\345\215\225_1.5.0.md" "b/docs/zh/PyTorch API\346\224\257\346\214\201\346\270\205\345\215\225_1.5.0.md"
index 5f8c769ba6..9181a82389 100644
--- "a/docs/zh/PyTorch API\346\224\257\346\214\201\346\270\205\345\215\225_1.5.0.md"
+++ "b/docs/zh/PyTorch API\346\224\257\346\214\201\346\270\205\345\215\225_1.5.0.md"
@@ -1252,6 +1252,13 @@ torch.npu.set_device()接口只支持在程序开始的位置通过set_device进
 | 55 | npu_linear |
 | 56 | npu_bert_apply_adam |
 | 57 | npu_giou |
+| 58 | npu_min.dim |
+| 59 | npu_nms_rotated |
+| 60 | npu_silu |
+| 61 | npu_reshape |
+| 62 | npu_rotated_iou |
+| 63 | npu_rotated_box_encode |
+| 64 | npu_rotated_box_decode |
 
 详细算子接口说明:
 
@@ -1465,8 +1472,6 @@ Change the format of a npu tensor.
 29
 ```
 
-> npu_format_cast_
-
 > npu_format_cast_.src(self, src) -> Tensor
 
 In-place Change the format of self, with the same format as src.
@@ -2957,47 +2962,6 @@ masked fill tensor along with one axis by range.boxes. It is a customized masked
         [5.3273, 6.3089, 3.9601, 3.2410]], device='npu:0')
 ```
 
-> npu_bert_apply_adam.old(Tensor(a!) var, Tensor(b!) m, Tensor(c!) v, lr, beta1, beta2, epsilon, grad, max_grad_norm, global_grad_norm, weight_decay, step_size=None, adam_mode=0) -> (Tensor(a!), Tensor(b!), Tensor(c!))
-
- count adam result.
-
-- Parameters:
-
-  - **var** (Tensor) - A Tensor. Support float16/float32.
-  - **m**(Tensor) - A Tensor. Datatype and shape are same as exp_avg.
-  - **v**(Tensor) - A Tensor. Datatype and shape are same as exp_avg.
-  - **lr** (Number) - A Tensor. Datatype is same as exp_avg.
-  - **beta1** (Number) - A Tensor. Datatype is same as exp_avg.
-  - **beta2** (Number) - A Tensor. Datatype is same as exp_avg.
-  - **epsilon** (Number) - A Tensor. Datatype is same as exp_avg.
-  - **grad**(Tensor) - A Tensor. Datatype and shape are same as exp_avg.
-  - **max_grad_norm** (Number) - A Tensor. Datatype is same as exp_avg.
-  - **global_grad_norm** (Number) - A Tensor. Datatype is same as exp_avg.
-  - **weight_decay** (Number) - A Tensor. Datatype is same as exp_avg.
-
-- constraints:
-
-  None
-
-- Examples:
-  ```python
-  >>> var_in = torch.rand(321538).uniform_(-32., 21.).npu()
-  >>> m_in = torch.zeros(321538).npu()
-  >>> v_in = torch.zeros(321538).npu()
-  >>> grad = torch.rand(321538).uniform_(-0.05, 0.03).npu()
-  >>> max_grad_norm = -1.
-  >>> beta1 = 0.9
-  >>> beta2 = 0.99
-  >>> weight_decay = 0.
-  >>> lr = 0.
-  >>> epsilon = 1e-06
-  >>> global_grad_norm = 0.
-  >>> var_out, m_out, v_out = torch.npu_bert_apply_adam(var_in, m_in, v_in, lr, beta1, beta2, epsilon, grad, max_grad_norm, global_grad_norm, weight_decay)
-  >>> var_out
-  tensor([ 14.7733, -30.1218,  -1.3647,  ..., -16.6840,   7.1518,   8.4872],
-        device='npu:0')
-  ```
-
 > npu_bert_apply_adam(lr, beta1, beta2, epsilon, grad, max_grad_norm, global_grad_norm, weight_decay, step_size=None, adam_mode=0, *, out=(var,m,v))
 
 count adam result.
--
Gitee
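
The hunk at old line 2957 drops the deprecated `npu_bert_apply_adam.old` overload, whose example passed `var`, `m`, and `v` as leading positional arguments; under the retained signature they move into the keyword-only `out` tuple. Below is a minimal sketch of the removed example rewritten against the new signature, reusing the tensor shapes and hyperparameter values from the deleted docs. The `out=` call shape is inferred from the signature line kept above and assumes the Ascend NPU build of PyTorch 1.5.0 that this manual documents; treat it as illustrative, not verified output.

```python
>>> # state tensors, same shapes as in the removed example
>>> var_in = torch.rand(321538).uniform_(-32., 21.).npu()
>>> m_in = torch.zeros(321538).npu()
>>> v_in = torch.zeros(321538).npu()
>>> grad = torch.rand(321538).uniform_(-0.05, 0.03).npu()
>>> # hyperparameters carried over unchanged from the deleted example
>>> max_grad_norm = -1.
>>> beta1 = 0.9
>>> beta2 = 0.99
>>> weight_decay = 0.
>>> lr = 0.
>>> epsilon = 1e-06
>>> global_grad_norm = 0.
>>> # var/m/v now travel through the keyword-only out= tuple
>>> # instead of leading the positional argument list
>>> var_out, m_out, v_out = torch.npu_bert_apply_adam(
...     lr, beta1, beta2, epsilon, grad, max_grad_norm,
...     global_grad_norm, weight_decay, out=(var_in, m_in, v_in))
```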