# reinforcement_main

**Repository Path**: erpim/reinforcement_main

## Basic Information

- **Project Name**: reinforcement_main
- **Description**: A high-performance, scalable MindSpore reinforcement learning framework.
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 30
- **Created**: 2022-10-31
- **Last Updated**: 2022-11-02

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# MindSpore Reinforcement

[View English](./README.md)

[PyPI](https://pypi.org/project/mindspore-rl/)
[License](https://github.com/mindspore-ai/reinforcement/blob/master/LICENSE)
[Pull Requests](https://gitee.com/mindspore/reinforcement/pulls)

- [MindSpore Reinforcement](#mindspore-reinforcement)
    - [Overview](#overview)
    - [Installation](#installation)
        - [Version Dependency of MindSpore](#version-dependency-of-mindspore)
        - [Installing via pip](#installing-via-pip)
        - [Installing from Source](#installing-from-source)
        - [Verifying the Installation](#verifying-the-installation)
    - [Quick Start](#quick-start)
    - [Features](#features)
        - [Algorithms](#algorithms)
        - [Environments](#environments)
        - [Experience Replay](#experience-replay)
    - [Roadmap](#roadmap)
    - [Community](#community)
        - [Governance](#governance)
        - [Communication](#communication)
    - [Contributing](#contributing)
    - [License](#license)

## Overview

MindSpore Reinforcement is an open-source reinforcement learning framework that supports **distributed training** of agents with reinforcement learning algorithms. MindSpore Reinforcement offers a **clean API abstraction** for writing reinforcement learning algorithms: it decouples the algorithm from deployment and execution concerns, including the use of accelerators, the degree of parallelism, and the distribution of computation across clusters of workers. MindSpore Reinforcement translates a reinforcement learning algorithm into a series of compiled **computational graphs**, which are then executed efficiently by the MindSpore framework on CPUs, GPUs, or Ascend AI processors. The architecture of MindSpore Reinforcement is shown in the diagram below.

## Installation

MindSpore Reinforcement depends on the MindSpore training and inference framework. Install [MindSpore](https://gitee.com/mindspore/mindspore#%E5%AE%89%E8%A3%85) first, then install MindSpore Reinforcement, either via pip or by building from source.

### Version Dependency of MindSpore

Because MindSpore Reinforcement depends on MindSpore, download and install the matching whl package from the [MindSpore download page](https://www.mindspore.cn/versions), following the correspondence in the table below.

```shell
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{MindSpore-Version}/MindSpore/cpu/ubuntu_x86/mindspore-{MindSpore-Version}-cp37-cp37m-linux_x86_64.whl
```

| MindSpore Reinforcement | Branch | MindSpore |
| :---------------------: | :----------------------------------------------------------: | :-------: |
| 0.7.0 | [r0.7](https://gitee.com/mindspore/reinforcement/tree/r0.7/) | 2.0.0 |
| 0.6.0 | [r0.6](https://gitee.com/mindspore/reinforcement/tree/r0.6/) | 1.9.0 |
| 0.5.0 | [r0.5](https://gitee.com/mindspore/reinforcement/tree/r0.5/) | 1.8.0 |
| 0.3.0 | [r0.3](https://gitee.com/mindspore/reinforcement/tree/r0.3/) | 1.7.0 |
| 0.2.0 | [r0.2](https://gitee.com/mindspore/reinforcement/tree/r0.2/) | 1.6.0 |
| 0.1.0 | [r0.1](https://gitee.com/mindspore/reinforcement/tree/r0.1/) | 1.5.0 |

### Installing via pip

To install with pip, download the whl package from the [MindSpore Reinforcement download page](https://www.mindspore.cn/versions) and install it:

```shell
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/{MindSpore_version}/Reinforcement/any/mindspore_rl-{Reinforcement_version}-py3-none-any.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
```

> - When the machine is online, the dependencies of the MindSpore Reinforcement package are downloaded automatically during installation (see requirements.txt for details); otherwise, install the dependencies manually.
> - `{MindSpore_version}` is the MindSpore version number. See the [version page](https://www.mindspore.cn/versions) for the matching MindSpore and Reinforcement versions.
> - `{Reinforcement_version}` is the Reinforcement version number. For example, to download Reinforcement 0.1.0, set `{MindSpore_version}` to 1.5.0 and `{Reinforcement_version}` to 0.1.0.

### Installing from Source

Download the [source code](https://gitee.com/mindspore/reinforcement) and enter the `reinforcement` directory:

```shell
git clone https://gitee.com/mindspore/reinforcement.git
cd reinforcement/
bash build.sh
pip install output/mindspore_rl-{Reinforcement_version}-py3-none-any.whl
```

Here, `build.sh` is the build script in the `reinforcement` directory, and `{Reinforcement_version}` is the MindSpore Reinforcement version number.
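Because the wheel built above must match the MindSpore version installed on the machine (see the version table earlier), it can help to first confirm that MindSpore itself is functional. A minimal check using MindSpore's built-in `run_check` utility (available in recent MindSpore releases):

```python
import mindspore

# Prints the installed MindSpore version and runs a small computation
# to verify that the installation and the selected backend work.
mindspore.run_check()
```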
Install the dependencies:

```shell
cd reinforcement && pip install -r requirements.txt
```

### Verifying the Installation

Run the following command. If importing the Python module raises no error, the installation succeeded:

```python
import mindspore_rl
```

## Quick Start

The algorithm examples of MindSpore Reinforcement are located under `reinforcement/example/`. The simple [Deep Q-Learning (DQN)](https://www.mindspore.cn/reinforcement/docs/zh-CN/master/dqn.html) example below demonstrates how MindSpore Reinforcement is used.

The first, out-of-the-box option is to run the provided script directly:

```shell
cd reinforcement/example/dqn/scripts
bash run_standalone_train.sh
```

The second option is to use `config.py` and `train.py` directly, which allows more flexible configuration changes:

```shell
cd reinforcement/example/dqn
python train.py --episode 1000 --device_target GPU
```

The first option writes a `dqn_train_log.txt` log file to the current directory; the second prints the log to the screen:

```shell
Episode 0: loss is 0.396, rewards is 42.0
Episode 1: loss is 0.226, rewards is 15.0
Episode 2: loss is 0.202, rewards is 9.0
Episode 3: loss is 0.122, rewards is 15.0
Episode 4: loss is 0.107, rewards is 12.0
Episode 5: loss is 0.078, rewards is 10.0
Episode 6: loss is 0.075, rewards is 8.0
Episode 7: loss is 0.084, rewards is 12.0
Episode 8: loss is 0.069, rewards is 10.0
Episode 9: loss is 0.067, rewards is 10.0
Episode 10: loss is 0.056, rewards is 8.0
-----------------------------------------
Evaluate for episode 10 total rewards is 9.600
-----------------------------------------
```
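The per-episode log lines shown above have a fixed format, so training curves can be recovered from `dqn_train_log.txt` with a few lines of standard-library Python. A minimal sketch, assuming the log follows exactly the `Episode N: loss is X, rewards is Y` format printed by the DQN example:

```python
import re

# Matches lines such as: "Episode 0: loss is 0.396, rewards is 42.0"
PATTERN = re.compile(r"Episode (\d+): loss is ([\d.]+), rewards is ([\d.]+)")

episodes, losses, rewards = [], [], []
with open("dqn_train_log.txt", encoding="utf-8") as f:
    for line in f:
        m = PATTERN.match(line)
        if m:
            episodes.append(int(m.group(1)))
            losses.append(float(m.group(2)))
            rewards.append(float(m.group(3)))

print(f"parsed {len(episodes)} episodes; "
      f"mean reward {sum(rewards) / max(len(rewards), 1):.2f}")
```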
## Features

### Algorithms

| Algorithm | RL Version | Discrete Action Space | Continuous Action Space | CPU | GPU | Ascend | Example Environment |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| DQN  | >= 0.1 | ✔️ | /  | ✔️ | ✔️ | ✔️ | CartPole-v0 |
| PPO  | >= 0.1 | /  | ✔️ | ✔️ | ✔️ | ✔️ | HalfCheetah-v2 |
| AC   | >= 0.1 | ✔️ | /  | ✔️ | ✔️ | /  | CartPole-v0 |
| A2C  | >= 0.2 | ✔️ | /  | ✔️ | ✔️ | /  | CartPole-v0 |
| DDPG | >= 0.3 | /  | ✔️ | ✔️ | ✔️ | ✔️ | HalfCheetah-v2 |
| QMIX | >= 0.5 | ✔️ | /  | ✔️ | ✔️ | /  | SMAC |
| SAC  | >= 0.5 | /  | ✔️ | ✔️ | ✔️ | ✔️ | HalfCheetah-v2 |
| TD3  | >= 0.6 | /  | ✔️ | ✔️ | ✔️ | /  | HalfCheetah-v2 |
| C51  | >= 0.6 | ✔️ | /  | ✔️ | /  | /  | CartPole-v0 |
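Most example environments in the table are standard Gym environments (SMAC for QMIX). Before launching training, it can be worth confirming that the example environment can be constructed locally. A minimal sketch, assuming the classic `gym` package with the pre-0.26 reset/step API, which is consistent with the `-v0`/`-v2` environment IDs above:

```python
import gym

# CartPole-v0 is the example environment for DQN/AC/A2C/C51 above.
env = gym.make("CartPole-v0")
obs = env.reset()  # gym >= 0.26 returns (obs, info) here instead
obs, reward, done, info = env.step(env.action_space.sample())
env.close()
print("observation shape:", getattr(obs, "shape", None), "reward:", reward)
```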
### Experience Replay

| Category | Features | CPU | GPU | Ascend |
| :---: | :--- | :---: | :---: | :---: |
| UniformReplayBuffer | 1. FIFO (first in, first out)<br>2. Supports batch input | ✔️ | ✔️ | / |
| PriorityReplayBuffer | 1. Proportional-based prioritization<br>2. Sum tree for efficient sampling | ✔️ | ✔️ | ✔️ |
| ReservoirReplayBuffer | Unbiased sampling | ✔️ | ✔️ | ✔️ |
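In a proportional priority scheme like the PriorityReplayBuffer's, a transition is sampled with probability proportional to its priority, and a sum tree keeps both priority updates and sampling at O(log n): leaves hold per-transition priorities, internal nodes hold partial sums. The sketch below is a hypothetical plain-Python illustration of that structure, not the MindSpore Reinforcement implementation:

```python
import random

class SumTree:
    """Array-backed binary tree: leaves store per-transition priorities,
    internal nodes store the sum of their children's priorities."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity)  # tree[1] is the root (total sum)

    def update(self, index: int, priority: float) -> None:
        """Set the priority of leaf `index` and refresh all ancestor sums."""
        pos = index + self.capacity
        self.tree[pos] = priority
        pos //= 2
        while pos >= 1:
            self.tree[pos] = self.tree[2 * pos] + self.tree[2 * pos + 1]
            pos //= 2

    def sample(self) -> int:
        """Draw a leaf index with probability proportional to its priority."""
        value = random.uniform(0.0, self.tree[1])
        pos = 1
        while pos < self.capacity:  # descend until a leaf is reached
            left = 2 * pos
            if value <= self.tree[left]:
                pos = left
            else:
                value -= self.tree[left]
                pos = left + 1
        return pos - self.capacity

tree = SumTree(capacity=4)
for i, p in enumerate([1.0, 2.0, 3.0, 4.0]):
    tree.update(i, p)
# Index 3 (priority 4.0 out of a total of 10.0) is sampled ~40% of the time.
print(tree.sample())
```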