From 90975ce951d1f4164a701603d6024478ed6f2622 Mon Sep 17 00:00:00 2001
From: huanxiaoling <3174348550@qq.com>
Date: Wed, 16 Nov 2022 18:55:03 +0800
Subject: [PATCH] update en pytorch_diff

---
 .../note/api_mapping/pytorch_api_mapping.md   | 14 ++++----
 .../api_mapping/pytorch_diff/CosineDecayLr.md | 18 +++++++---
 .../pytorch_diff/ExponentialDecayLR.md        | 35 ++++++++++++++++---
 .../pytorch_diff/PiecewiseConstantLR.md       | 31 +++++++++++++---
 4 files changed, 78 insertions(+), 20 deletions(-)

diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
index bdf3e3abb9..7d2d8ac9cd 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_api_mapping.md
@@ -69,7 +69,7 @@ More MindSpore developers are also welcome to participate in improving the mappi
 | [torch.nn.BCEWithLogitsLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.BCEWithLogitsLoss.html) | [mindspore.nn.BCEWithLogitsLoss](https://mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.BCEWithLogitsLoss.html) | Difference comparison is under development. |
 | [torch.nn.CTCLoss](https://pytorch.org/docs/1.8.1/generated/torch.nn.CTCLoss.html) | [mindspore.ops.CTCLoss](https://www.mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.CTCLoss.html) |Difference comparison is under development. |
 | [torch.nn.Dropout](https://pytorch.org/docs/1.8.1/generated/torch.nn.Dropout.html) | [mindspore.nn.Dropout](https://mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.Dropout.html#mindspore.nn.Dropout) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/Dropout.html) |
-| [torch.nn.Dropout](https://pytorch.org/docs/1.8.1/generated/torch.nn.Dropout.html) | [mindspore.ops.dropout](https://www.mindspore.cn/docs/zh-CN/master/api_python/ops/mindspore.ops.dropout.html) | Difference comparison is under development. |
+| [torch.nn.Dropout](https://pytorch.org/docs/1.8.1/generated/torch.nn.Dropout.html) | [mindspore.ops.dropout](https://www.mindspore.cn/docs/en/master/api_python/ops/mindspore.ops.dropout.html) | Difference comparison is under development. |
 | [torch.nn.Unfold](https://pytorch.org/docs/1.8.1/generated/torch.nn.Unfold.html) | [mindspore.nn.Unfold](https://mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.Unfold.html) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/Unfold.html) |

 ### torch.nn.functional

@@ -80,12 +80,12 @@ More MindSpore developers are also welcome to participate in improving the mappi

 ### torch.optim

-| PyTorch 1.8.1 APIs | MindSpore APIs | Descriptions |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| [torch.optim.lr_scheduler.CosineAnnealingLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.CosineAnnealingLR) | [mindspore.nn.cosine_decay_lr](https://www.mindspore.cn/docs/zh-CN/master/api_python/nn/mindspore.nn.cosine_decay_lr.html#mindspore.nn.cosine_decay_lr) | Difference comparison is under development. |
-| [torch.optim.lr_scheduler.ExponentialLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.ExponentialLR) | [mindspore.nn.exponential_decay_lr](https://mindspore.cn/docs/zh-CN/master/api_python/nn/mindspore.nn.exponential_decay_lr.html#mindspore.nn.exponential_decay_lr) | Difference comparison is under development. |
-| [torch.optim.lr_scheduler.MultiStepLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.MultiStepLR) | [mindspore.nn.piecewise_constant_lr](https://mindspore.cn/docs/zh-CN/master/api_python/nn/mindspore.nn.piecewise_constant_lr.html#mindspore.nn.piecewise_constant_lr) | Difference comparison is under development. |
-| [torch.optim.lr_scheduler.StepLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.StepLR) | [mindspore.nn.piecewise_constant_lr](https://mindspore.cn/docs/zh-CN/master/api_python/nn/mindspore.nn.piecewise_constant_lr.html#mindspore.nn.piecewise_constant_lr) | Difference comparison is under development. |
+| PyTorch 1.8.1 APIs | MindSpore APIs | Descriptions |
+| ------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
+| [torch.optim.lr_scheduler.CosineAnnealingLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.CosineAnnealingLR) | [mindspore.nn.cosine_decay_lr](https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.cosine_decay_lr.html#mindspore.nn.cosine_decay_lr) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/CosineDecayLr.html) |
+| [torch.optim.lr_scheduler.ExponentialLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.ExponentialLR) | [mindspore.nn.exponential_decay_lr](https://mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.exponential_decay_lr.html#mindspore.nn.exponential_decay_lr) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/ExponentialDecayLR.html) |
+| [torch.optim.lr_scheduler.MultiStepLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.MultiStepLR) | [mindspore.nn.piecewise_constant_lr](https://mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.piecewise_constant_lr.html#mindspore.nn.piecewise_constant_lr) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/PiecewiseConstantLR.html) |
+| [torch.optim.lr_scheduler.StepLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.StepLR) | [mindspore.nn.piecewise_constant_lr](https://mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.piecewise_constant_lr.html#mindspore.nn.piecewise_constant_lr) | [diff](https://www.mindspore.cn/docs/en/master/note/api_mapping/pytorch_diff/PiecewiseConstantLR.html) |

 ### torch.utils

diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_diff/CosineDecayLr.md b/docs/mindspore/source_en/note/api_mapping/pytorch_diff/CosineDecayLr.md
index 5d908bbedd..59720217ea 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_diff/CosineDecayLr.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_diff/CosineDecayLr.md
@@ -9,11 +9,12 @@
 torch.optim.lr_scheduler.CosineAnnealingLR(
     optimizer,
     T_max,
     eta_min=0,
-    last_epoch=-1
+    last_epoch=-1,
+    verbose=False
 )
 ```

-For more information, see[torch.optim.lr_scheduler.CosineAnnealingLR](https://pytorch.org/docs/1.5.0/optim.html#torch.optim.lr_scheduler.CosineAnnealingLR)。
+For more information, see [torch.optim.lr_scheduler.CosineAnnealingLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.CosineAnnealingLR).

 ## mindspore.nn.cosine_decay_lr

@@ -31,9 +32,18 @@ For more information, see[mindspore.nn.cosine_decay_lr](https://www.mindspore.cn

 ## Differences

-`torch.optim.lr_scheduler.CosineAnnealingLR` Used to periodically adjust the learning rate, where the input parameter `T_max` represents 1/2 of the period. Assuming the initial learning rate is `lr`, in each period of `2*T_max`, the learning rate changes according to the specified calculation logic, for the formula detail, see the API docs; after the period ends, the learning rate returns to the initial value `lr` , and keep looping.
+PyTorch (torch.optim.lr_scheduler.CosineAnnealingLR): `torch.optim.lr_scheduler.CosineAnnealingLR` is used to adjust the learning rate periodically, where the input parameter `T_max` represents 1/2 of the period. Assuming the initial learning rate is `lr`, in each period of `2*T_max` the learning rate changes according to the specified calculation logic (see the API docs for the formula); after the period ends, the learning rate returns to the initial value `lr`, and the cycle repeats. When `verbose` is True, the relevant information is printed for each update.

-`mindspore.nn.cosine_decay_lr`: the learning rate adjustment has no periodic changes, and the learning rate value changes according to the specified calculation logic. The formula calculation logic is the same as that of `torch.optim.lr_scheduler.CosineAnnealingLR`.
+MindSpore (mindspore.nn.cosine_decay_lr): The learning rate adjustment of `mindspore.nn.cosine_decay_lr` has no periodic change; the learning rate value changes according to the specified calculation logic, and the formula is the same as that of `torch.optim.lr_scheduler.CosineAnnealingLR`.
+
+| Categories | Subcategories | PyTorch | MindSpore | Differences |
+| ---- | ----- | ------- | --------- | -------------------- |
+| Parameter | Parameter 1 | optimizer |  | Optimizer used in PyTorch. MindSpore does not have this parameter |
+|  | Parameter 2 | T_max | decay_steps | Number of steps over which decay is performed; same function, different parameter names |
+|  | Parameter 3 | eta_min | min_lr | Minimum learning rate; same function, different parameter names |
+|  | Parameter 4 | last_epoch |  | MindSpore does not have this parameter |
+|  | Parameter 5 | verbose |  | When `verbose` is True, PyTorch prints information about each update. MindSpore does not have this parameter |
+|  | Parameter 6 |  | max_lr | Maximum learning rate. In PyTorch this is the initial learning rate `lr`; in MindSpore it is set explicitly through `max_lr` |

 ## Code Example

diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_diff/ExponentialDecayLR.md b/docs/mindspore/source_en/note/api_mapping/pytorch_diff/ExponentialDecayLR.md
index bf9d95431c..728cb10ffd 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_diff/ExponentialDecayLR.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_diff/ExponentialDecayLR.md
@@ -8,11 +8,12 @@
 torch.optim.lr_scheduler.ExponentialLR(
     optimizer,
     gamma,
-    last_epoch=-1
+    last_epoch=-1,
+    verbose=False
 )
 ```

-For more information, see [torch.optim.lr_scheduler.ExponentialLR](https://pytorch.org/docs/1.5.0/optim.html#torch.optim.lr_scheduler.ExponentialLR).
+For more information, see [torch.optim.lr_scheduler.ExponentialLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.ExponentialLR).

 ## mindspore.nn.exponential_decay_lr

@@ -44,9 +45,33 @@ For more information, see [mindspore.nn.ExponentialDecayLR](https://www.mindspor

 ## Differences

-PyTorch: The function of calculating the learning rate for each step is lr*gamma^{epoch}. In the training stage, the optimizer should be passed into the lr scheduler, then the step method will be implemented.
-
-MindSpore: The function of calculating learning rate for each step is lr*decay_rate^{p}, which is implemented in two ways in MindSpore: `exponential_decay_lr` pregenerates a list of learning rates and passes the list to the optimizer; secondly, `ExponentialDecayLR` instance is passed into the optimizer, during the training process, the optimizer calls the instance taking the current step as the input to get the current learning rate.
+PyTorch (torch.optim.lr_scheduler.ExponentialLR): The learning rate is calculated as :math:`lr * gamma^{epoch}`. When used, the optimizer is passed in as input and the learning rate is updated by calling the `step` method. When `verbose` is True, the relevant information is printed for each update.
+
+MindSpore (mindspore.nn.exponential_decay_lr): The learning rate is calculated as :math:`lr * decay\_rate^{p}`. `exponential_decay_lr` pre-generates the list of learning rates and passes the list to the optimizer.
+
+| Categories | Subcategories | PyTorch | MindSpore | Differences |
+| ---- | ----- | ------- | --------- | -------------------- |
+| Parameter | Parameter 1 | optimizer |  | Optimizer used in PyTorch. MindSpore does not have this parameter |
+|  | Parameter 2 | gamma | decay_rate | Decay rate of the learning rate; same function, different parameter names |
+|  | Parameter 3 | last_epoch |  | MindSpore does not have this parameter |
+|  | Parameter 4 | verbose |  | When `verbose` is True, PyTorch prints information about each update. MindSpore does not have this parameter |
+|  | Parameter 5 |  | learning_rate | Initial value of the learning rate in MindSpore |
+|  | Parameter 6 |  | total_step | Total number of steps in MindSpore |
+|  | Parameter 7 |  | step_per_epoch | Number of steps per epoch in MindSpore |
+|  | Parameter 8 |  | decay_steps | Number of steps over which decay is performed in MindSpore |
+|  | Parameter 9 |  | is_stair | When `is_stair` is True in MindSpore, the learning rate decays once every `decay_steps` |
+
+MindSpore (mindspore.nn.ExponentialDecayLR): The learning rate is calculated as :math:`lr * decay\_rate^{p}`. An `ExponentialDecayLR` instance is passed to the optimizer, and during training the current learning rate is computed from it inside the computational graph.
+
+| Categories | Subcategories | PyTorch | MindSpore | Differences |
+| ---- | ----- | ------- | --------- | -------------------- |
+| Parameter | Parameter 1 | optimizer |  | Optimizer used in PyTorch. MindSpore does not have this parameter |
+|  | Parameter 2 | gamma | decay_rate | Decay rate of the learning rate; same function, different parameter names |
+|  | Parameter 3 | last_epoch |  | MindSpore does not have this parameter |
+|  | Parameter 4 | verbose |  | When `verbose` is True, PyTorch prints information about each update. MindSpore does not have this parameter |
+|  | Parameter 5 |  | learning_rate | Initial value of the learning rate in MindSpore |
+|  | Parameter 6 |  | decay_steps | Number of steps over which decay is performed in MindSpore |
+|  | Parameter 7 |  | is_stair | When `is_stair` is True in MindSpore, the learning rate decays once every `decay_steps` |

 ## Code Example

diff --git a/docs/mindspore/source_en/note/api_mapping/pytorch_diff/PiecewiseConstantLR.md b/docs/mindspore/source_en/note/api_mapping/pytorch_diff/PiecewiseConstantLR.md
index af91fe250b..f2afbaafd1 100644
--- a/docs/mindspore/source_en/note/api_mapping/pytorch_diff/PiecewiseConstantLR.md
+++ b/docs/mindspore/source_en/note/api_mapping/pytorch_diff/PiecewiseConstantLR.md
@@ -9,11 +9,12 @@
 torch.optim.lr_scheduler.StepLR(
     optimizer,
     step_size,
     gamma=0.1,
-    last_epoch=-1
+    last_epoch=-1,
+    verbose=False
 )
 ```

-For more information, see [torch.optim.lr_scheduler.StepLR](https://pytorch.org/docs/1.5.0/optim.html#torch.optim.lr_scheduler.StepLR).
+For more information, see [torch.optim.lr_scheduler.StepLR](https://pytorch.org/docs/1.8.1/optim.html#torch.optim.lr_scheduler.StepLR).

 ## torch.optim.lr_scheduler.MultiStepLR

@@ -41,9 +42,31 @@ For more information, see [mindspore.nn.piecewise_constant_lr](https://mindspore

 ## Differences

-PyTorch: `torch.optim.lr_scheduler.StepLR`calculates the learning rate by step_size, value of lr will multiply gamma for every fixed step size; `torch.optim.lr_scheduler.MultiStepLR`ccalculates the learning rate by a step size list, value of lr will multiply gamma when the step size is reached in the list. During the training stage, the optimizer is passed to the lr scheduler, which calls the step method to update the current learning rate.
+PyTorch (torch.optim.lr_scheduler.StepLR): Adjusts the learning rate in stages. `torch.optim.lr_scheduler.StepLR` takes `step_size` as input and multiplies the learning rate by `gamma` once every fixed `step_size`. When `verbose` is True, the relevant information is printed for each update.

-MindSpore: Step size list and the corresponding learning rate value list are passed to the function, and a list of learning rates will be returned as input to the optimizer.
+MindSpore (mindspore.nn.piecewise_constant_lr): Takes a list of milestone step values and a corresponding list of learning rate values; when a milestone step is reached, the learning rate switches to the corresponding value. A list of learning rates is eventually returned and used as input to the optimizer.
+
+| Categories | Subcategories | PyTorch | MindSpore | Differences |
+| ---- | ----- | ------- | --------- | -------------------- |
+| Parameter | Parameter 1 | optimizer |  | Optimizer used in PyTorch. MindSpore does not have this parameter |
+|  | Parameter 2 | step_size | milestone | PyTorch uses a fixed step interval, while MindSpore uses a list of steps at which the learning rate is updated |
+|  | Parameter 3 | gamma |  | Decay factor of the learning rate in PyTorch. MindSpore does not have this parameter |
+|  | Parameter 4 | last_epoch |  | MindSpore does not have this parameter |
+|  | Parameter 5 | verbose |  | When `verbose` is True, PyTorch prints information about each update. MindSpore does not have this parameter |
+|  | Parameter 6 |  | learning_rates | List of learning rate values set in MindSpore |
+
+PyTorch (torch.optim.lr_scheduler.MultiStepLR): `torch.optim.lr_scheduler.MultiStepLR` takes a list of milestone step values; each time a milestone is reached, the learning rate is multiplied by `gamma`. When used, the optimizer is passed in as input and the `step` method is called during training to update the values. When `verbose` is True, the relevant information is printed for each update.
+
+MindSpore (mindspore.nn.piecewise_constant_lr): Takes a list of milestone step values and a corresponding list of learning rate values; when a milestone step is reached, the learning rate switches to the corresponding value. A list of learning rates is eventually returned and used as input to the optimizer.
+
+| Categories | Subcategories | PyTorch | MindSpore | Differences |
+| ---- | ----- | ------- | --------- | -------------------- |
+| Parameter | Parameter 1 | optimizer |  | Optimizer used in PyTorch. MindSpore does not have this parameter |
+|  | Parameter 2 | milestones | milestone | List of steps at which the learning rate is updated; same function, different parameter names |
+|  | Parameter 3 | gamma |  | Decay factor of the learning rate in PyTorch. MindSpore does not have this parameter |
+|  | Parameter 4 | last_epoch |  | MindSpore does not have this parameter |
+|  | Parameter 5 | verbose |  | When `verbose` is True, PyTorch prints information about each update. MindSpore does not have this parameter |
+|  | Parameter 6 |  | learning_rates | List of learning rate values set in MindSpore |

 ## Code Example
--
Gitee
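A minimal sketch of the exponential-decay mapping described in ExponentialDecayLR.md above, assuming PyTorch 1.8.1 and a recent MindSpore release are installed; the decay values below are illustrative and not taken from the documentation:

```python
# Sketch only: illustrative values, not the documentation's own Code Example.
import torch
import mindspore.nn as nn
from mindspore import Tensor
import mindspore.common.dtype as mstype

# PyTorch: ExponentialLR multiplies the current lr by gamma on every scheduler.step().
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
for _ in range(3):
    optimizer.step()
    scheduler.step()
print(optimizer.param_groups[0]["lr"])  # 0.1 * 0.9 ** 3

# MindSpore, first way: exponential_decay_lr pre-generates the whole per-step list,
# which is then passed to an optimizer such as nn.Momentum(params, lr_list, 0.9).
# Positional arguments: learning rate, decay rate, total steps, steps per epoch, decay interval.
lr_list = nn.exponential_decay_lr(0.1, 0.9, 6, 2, 1)
print(lr_list)  # roughly [0.1, 0.1, 0.09, 0.09, 0.081, 0.081]

# MindSpore, second way: ExponentialDecayLR is a cell that the optimizer queries with
# the current global step inside the computational graph.
decay_lr = nn.ExponentialDecayLR(learning_rate=0.1, decay_rate=0.9, decay_steps=4)
print(decay_lr(Tensor(2, mstype.int32)))  # 0.1 * 0.9 ** (2 / 4)
```

The list-based form fixes the whole schedule before training starts, while the cell-based form computes the rate on the fly from the global step, which is why only the latter matches PyTorch's scheduler-driven update pattern.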
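A similar sketch for the piecewise-constant mapping in PiecewiseConstantLR.md, under the same assumptions; the milestone and learning-rate values are invented for illustration:

```python
# Sketch only: milestone and learning-rate values are invented for illustration.
import torch
import mindspore.nn as nn

# PyTorch: MultiStepLR multiplies the current lr by gamma each time a milestone is reached.
# The scheduler needs an optimizer, so a single dummy parameter is used here.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([param], lr=0.1)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[2, 4], gamma=0.5)

torch_lrs = []
for _ in range(6):
    torch_lrs.append(optimizer.param_groups[0]["lr"])
    optimizer.step()
    scheduler.step()
print(torch_lrs)  # roughly [0.1, 0.1, 0.05, 0.05, 0.025, 0.025]

# MindSpore: piecewise_constant_lr pre-generates the full list of per-step learning rates;
# each value is taken directly from learning_rates instead of being derived via gamma.
ms_lrs = nn.piecewise_constant_lr(milestone=[2, 4, 6], learning_rates=[0.1, 0.05, 0.025])
print(ms_lrs)  # [0.1, 0.1, 0.05, 0.05, 0.025, 0.025]
# The list is then passed to an optimizer, e.g. nn.Momentum(net.trainable_params(), ms_lrs, 0.9).
```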