diff --git a/docs/source_en/glossary.md b/docs/source_en/glossary.md
index 6abd2e7d4f8c1d396638ed0590f4ccbf19c9f1b0..f48ce25ef8cc104dad06c127e94a7ae488630bcd 100644
--- a/docs/source_en/glossary.md
+++ b/docs/source_en/glossary.md
@@ -18,6 +18,7 @@
 | EulerOS | Euler operating system, which is developed by Huawei based on the standard Linux kernel. |
 | FC Layer | Fully connected layer, which acts as a classifier in the entire convolutional neural network. |
 | FE | Fusion Engine, which connects to GE and TBE operators and has the capabilities of loading and managing the operator information library and managing convergence rules. |
+| Fine-tuning | A process that takes a network model already trained for one task and adapts it to perform a second, similar task. |
 | FP16 | 16-bit floating point, which is a half-precision floating point arithmetic format, consuming less memory. |
 | FP32 | 32-bit floating point, which is a single-precision floating point arithmetic format. |
 | GE | Graph Engine, MindSpore computational graph execution engine, which is responsible for optimizing hardware (such as operator fusion and memory overcommitment) based on the front-end computational graph and starting tasks on the device side. |
diff --git a/docs/source_zh_cn/glossary.md b/docs/source_zh_cn/glossary.md
index 615ac39b1b515c47d1e0b450109990b8f87081cd..fd2bce35b3e03aa7276137fafc93441e53a085f0 100644
--- a/docs/source_zh_cn/glossary.md
+++ b/docs/source_zh_cn/glossary.md
@@ -18,6 +18,7 @@
 | EulerOS | 欧拉操作系统,华为自研的基于Linux标准内核的操作系统。 |
 | FC Layer | Fully Conneted Layer,全连接层。整个卷积神经网络中起到分类器的作用。 |
 | FE | Fusion Engine,负责对接GE和TBE算子,具备算子信息库的加载与管理、融合规则管理等能力。 |
+| Fine-tuning | 基于面向某任务训练的网络模型,训练面向第二个类似任务的网络模型。 |
 | FP16 | 16位浮点,半精度浮点算术,消耗更小内存。 |
 | FP32 | 32位浮点,单精度浮点算术。 |
 | GE | Graph Engine,MindSpore计算图执行引擎,主要负责根据前端的计算图完成硬件相关的优化(算子融合、内存复用等等)、device侧任务启动。 |
diff --git a/tutorials/source_zh_cn/quick_start/quick_start.md b/tutorials/source_zh_cn/quick_start/quick_start.md
index c00b41ef18d8c89e7c8ec5f09c74ea552b6e22a9..4236798f6c51c640eaa7006bcac38889b536e287 100644
--- a/tutorials/source_zh_cn/quick_start/quick_start.md
+++ b/tutorials/source_zh_cn/quick_start/quick_start.md
@@ -317,7 +317,7 @@ if __name__ == "__main__":
 ### 配置模型保存
 
 MindSpore提供了callback机制,可以在训练过程中执行自定义逻辑,这里使用框架提供的`ModelCheckpoint`和`LossMonitor`为例。
-`ModelCheckpoint`可以保存网络模型和参数,以便进行后续的微调(fune-tune)操作,`LossMonitor`可以监控训练过程中`loss`值的变化。
+`ModelCheckpoint`可以保存网络模型和参数,以便进行后续的fine-tuning(微调)操作,`LossMonitor`可以监控训练过程中`loss`值的变化。
 
 ```python
 from mindspore.train.callback import ModelCheckpoint, CheckpointConfig
diff --git a/tutorials/source_zh_cn/use/saving_and_loading_model_parameters.md b/tutorials/source_zh_cn/use/saving_and_loading_model_parameters.md
index 53d358ced58c907867cd0764d904d9a48c9c1efc..1bd0c24b7c33f1b6a9300f00dccc4688d53b6033 100644
--- a/tutorials/source_zh_cn/use/saving_and_loading_model_parameters.md
+++ b/tutorials/source_zh_cn/use/saving_and_loading_model_parameters.md
@@ -23,7 +23,7 @@
 - 训练过程中,通过实时验证精度,把精度最高的模型参数保存下来,用于预测操作。
 - 再训练场景
   - 进行长时间训练任务时,保存训练过程中的CheckPoint文件,防止任务异常退出后从初始状态开始训练。
-  - Fine Tune(微调):训练一个模型并保存参数,然后针对不同任务进行Fine Tune操作。
+  - Fine-tuning(微调)场景,即训练一个模型并保存参数,基于该模型,面向第二个类似任务进行模型训练。
 
 MindSpore的CheckPoint文件是一个二进制文件,存储了所有训练参数的值。采用了Google的Protocol Buffers机制,与开发语言、平台无关,具有良好的可扩展性。
 CheckPoint的protocol格式定义在`mindspore/ccsrc/utils/checkpoint.proto`中。
@@ -117,7 +117,7 @@ acc = model.eval(dataset_eval)
 
 ### 用于再训练场景
 
-针对任务中断再训练及Fine Tune场景,可以加载网络参数和优化器参数到模型中。
+针对任务中断再训练及fine-tuning场景,可以加载网络参数和优化器参数到模型中。
 
 示例代码如下:
 ```python