diff --git a/mindspore/2.3.0.rc1-cann8.0.RC1/22.03-lts-sp4/Dockerfile b/mindspore/2.3.0.rc1-cann8.0.RC1/22.03-lts-sp4/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..ef278427bc9d695ae85900486464f21a38402496
--- /dev/null
+++ b/mindspore/2.3.0.rc1-cann8.0.RC1/22.03-lts-sp4/Dockerfile
@@ -0,0 +1,12 @@
+ARG BASE=openeuler/cann:8.0.RC1-oe2203sp4
+FROM ${BASE}
+
+# Arguments
+ARG VERSION=2.3.0rc1
+
+# Change the default shell
+SHELL [ "/bin/bash", "-c" ]
+
+# Install mindspore
+RUN pip install --no-cache-dir \
+    mindspore==${VERSION}
\ No newline at end of file
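For reference, building this image locally follows the usual `docker build` pattern; a minimal sketch is shown below. The image tag and build-context path are illustrative assumptions based on this patch's layout and `meta.yml` key, not part of the patch itself:

```bash
# Build the MindSpore image from the repository root.
# BASE and VERSION default to the values declared in the Dockerfile above,
# so the --build-arg flags are optional and shown only for clarity.
docker build \
  --build-arg BASE=openeuler/cann:8.0.RC1-oe2203sp4 \
  --build-arg VERSION=2.3.0rc1 \
  -t openeuler/mindspore:2.3.0rc1-cann8.0.RC1-oe2203sp4 \
  mindspore/2.3.0.rc1-cann8.0.RC1/22.03-lts-sp4
```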
diff --git a/mindspore/README.md b/mindspore/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..fb5444b590df945b53b2a772c1208256f8d94ec5
--- /dev/null
+++ b/mindspore/README.md
@@ -0,0 +1,78 @@
+# Quick reference
+
+- The official MindSpore docker image.
+
+- Maintained by: [openEuler CloudNative SIG](https://gitee.com/openeuler/cloudnative).
+
+- Where to get help: [openEuler CloudNative SIG](https://gitee.com/openeuler/cloudnative), [openEuler](https://gitee.com/openeuler/community).
+
+# MindSpore | openEuler
+Current MindSpore docker images are built on [openEuler](https://repo.openeuler.org/). This repository is free to use and exempted from per-user rate limits.
+
+MindSpore is a new open source deep learning training/inference framework that can be used in mobile, edge, and cloud scenarios. MindSpore is designed to provide a friendly development experience and efficient execution for data scientists and algorithm engineers, native support for the Ascend AI processor, and software-hardware co-optimization. Meanwhile, MindSpore, as a global AI open source community, aims to further advance the development and enrichment of the AI software/hardware application ecosystem.
+
+For more details, please check out the [Architecture Guide](https://www.mindspore.cn/tutorials/en/master/beginner/introduction.html).
+
+# Supported tags and respective Dockerfile links
+The tag of each `mindspore` docker image consists of the complete software stack version. The details are as follows:
+| Tag | Currently | Architectures |
+|----------|-------------|------------------|
+|[2.3.0rc1-cann8.0.RC1-oe2203sp4](https://gitee.com/openeuler/openeuler-docker-images/blob/master/mindspore/2.3.0.rc1-cann8.0.RC1/22.03-lts-sp4/Dockerfile)| MindSpore 2.3.0rc1 with CANN 8.0.RC1 on openEuler 22.03-LTS-SP4 | amd64, arm64 |
+
+# Usage
+Users can select the corresponding `{Tag}` and container startup options based on their requirements.
+
+- Pull the `openeuler/mindspore` image
+
+  ```bash
+  docker pull openeuler/mindspore:{Tag}
+  ```
+
+- Start a MindSpore instance
+
+  ```bash
+  docker run \
+    --name my-mindspore \
+    --device /dev/davinci1 \
+    --device /dev/davinci_manager \
+    --device /dev/devmm_svm \
+    --device /dev/hisi_hdc \
+    -v /usr/local/dcmi:/usr/local/dcmi \
+    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+    -v /etc/ascend_install.info:/etc/ascend_install.info \
+    -it openeuler/mindspore:{Tag} bash
+  ```
+
+- Container startup options
+
+  | Option | Description |
+  |--|--|
+  | `--name my-mindspore` | Names the container `my-mindspore`. |
+  | `--device /dev/davinciX` | NPU device, where `X` is the physical ID number of the chip, e.g., davinci1. |
+  | `--device /dev/davinci_manager` | Davinci-related management device. |
+  | `--device /dev/devmm_svm` | Memory management-related device. |
+  | `--device /dev/hisi_hdc` | HDC-related management device. |
+  | `-v /usr/local/dcmi:/usr/local/dcmi` | Mounts the host's DCMI `.so` and interface file directory `/usr/local/dcmi` into the container. Adjust to your actual environment. |
+  | `-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi` | Mounts the host's `npu-smi` tool `/usr/local/bin/npu-smi` into the container. Adjust to your actual environment. |
+  | `-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/` | Mounts the host directory `/usr/local/Ascend/driver/lib64/` into the container. Adjust to the path where the driver's `.so` files are located. |
+  | `-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info` | Mounts the host's version information file `/usr/local/Ascend/driver/version.info` into the container. Adjust to your actual environment. |
+  | `-v /etc/ascend_install.info:/etc/ascend_install.info` | Mounts the host's installation information file `/etc/ascend_install.info` into the container. |
+  | `-it` | Starts the container in interactive mode with a terminal (bash). |
+  | `openeuler/mindspore:{Tag}` | Specifies the image to run; replace `{Tag}` with the specific version or tag of the `openeuler/mindspore` image you want to use. |
+
+- View the container logs
+
+  ```bash
+  docker logs -f my-mindspore
+  ```
+
+- To get an interactive shell
+
+  ```bash
+  docker exec -it my-mindspore /bin/bash
+  ```
+
+# Questions and answers
+If you have any questions or need additional features, please submit an issue or a pull request to [openeuler-docker-images](https://gitee.com/openeuler/openeuler-docker-images).
\ No newline at end of file
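A quick way to confirm that MindSpore is importable inside the running container is sketched below. The container name matches the README's `docker run` example; `mindspore.run_check()` is assumed to be available in this MindSpore release:

```bash
# Print the installed MindSpore version inside the running container.
docker exec -it my-mindspore python -c "import mindspore; print(mindspore.__version__)"

# Optional: run MindSpore's built-in installation self-check (assumed API).
docker exec -it my-mindspore python -c "import mindspore; mindspore.run_check()"
```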
diff --git a/mindspore/doc/image-info.yml b/mindspore/doc/image-info.yml
new file mode 100644
index 0000000000000000000000000000000000000000..fb4f37d4e8c55c3b2398b6fdad7268029e01bb97
--- /dev/null
+++ b/mindspore/doc/image-info.yml
@@ -0,0 +1,90 @@
+name: mindspore
+category: ai
+description: 昇思MindSpore是由华为于2019年8月推出的新一代全场景AI框架,2020年3月28日,华为宣布昇思MindSpore正式开源。昇思MindSpore是一个全场景AI框架,旨在实现易开发、高效执行、全场景统一部署三大目标。
+environment: |
+  本应用在Docker环境中运行,安装Docker执行如下命令
+  ```
+  yum install -y docker
+  ```
+tags: |
+  MindSpore镜像的Tag由其版本信息和基础镜像版本信息组成,详细内容如下
+
+  | Tag | Currently | Architectures |
+  |----------|-------------|------------------|
+  |[2.3.0rc1-cann8.0.RC1-oe2203sp4](https://gitee.com/openeuler/openeuler-docker-images/blob/master/mindspore/2.3.0.rc1-cann8.0.RC1/22.03-lts-sp4/Dockerfile)| MindSpore 2.3.0rc1 with CANN 8.0.RC1 on openEuler 22.03-LTS-SP4 | amd64, arm64 |
+
+download: |
+  拉取镜像到本地
+  ```
+  docker pull openeuler/mindspore:{Tag}
+  ```
+
+usage: |
+
+  - 启动容器
+    ```
+    docker run \
+      --name my-mindspore \
+      --device ${device} \
+      --device /dev/davinci_manager \
+      --device /dev/devmm_svm \
+      --device /dev/hisi_hdc \
+      -v /usr/local/dcmi:/usr/local/dcmi \
+      -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+      -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+      -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+      -v /etc/ascend_install.info:/etc/ascend_install.info \
+      -it openeuler/mindspore:{Tag} bash
+    ```
+    用户可根据自身需求选择对应的硬件设备{device}、对应版本的{Tag}以及容器启动的选项。
+
+  - 参数说明
+    | 配置项 | 描述 |
+    |--|--|
+    | `--name my-mindspore` | 容器名称。 |
+    | `--device /dev/davinci1` | NPU设备,X是芯片物理ID号,例如davinci1。 |
+    | `--device /dev/davinci_manager` | davinci相关的管理设备。 |
+    | `--device /dev/devmm_svm` | 内存管理相关设备。 |
+    | `--device /dev/hisi_hdc` | hdc相关管理设备。 |
+    | `-v /usr/local/dcmi:/usr/local/dcmi` | 将宿主机dcmi的.so和接口文件目录`/usr/local/dcmi`挂载到容器中,请根据实际情况修改。 |
+    | `-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi` | 将宿主机`npu-smi`工具`/usr/local/bin/npu-smi`挂载到容器中,请根据实际情况修改。 |
+    | `-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/` | 将宿主机目录`/usr/local/Ascend/driver/lib64/`挂载到容器中,请根据driver的驱动.so所在路径修改。 |
+    | `-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info` | 将宿主机版本信息文件`/usr/local/Ascend/driver/version.info`挂载到容器中,请根据实际情况修改。 |
+    | `-v /etc/ascend_install.info:/etc/ascend_install.info` | 将宿主机安装信息文件`/etc/ascend_install.info`挂载到容器中。 |
+    | `-it` | 以交互模式启动容器。 |
+    | `openeuler/mindspore:{Tag}` | 指定要运行的镜像为 `openeuler/mindspore`,其中 `{Tag}` 是需要替换的镜像标签。 |
+
+  - 容器测试
+
+    查看运行日志
+    ```
+    docker logs -f my-mindspore
+    ```
+
+    使用shell交互
+    ```
+    docker exec -it my-mindspore /bin/bash
+    ```
+
+license: Apache-2.0 license
+similar_packages:
+  - PyTorch: PyTorch是一个开源的Python机器学习库,基于torch,用于自然语言处理等应用程序。PyTorch既可以看作加入了GPU支持的numpy,同时也可以看成一个拥有自动求导功能的强大的深度神经网络。
+dependency:
+  - numpy
+  - protobuf
+  - asttokens
+  - pillow
+  - scipy
+  - decorator
+  - matplotlib
+  - opencv-python
+  - scikit-learn
+  - pandas
+  - packaging
+  - pycocotools
+  - tables
+  - easydict
+  - onnxruntime
+  - psutil
+  - astunparse
+  - safetensors
\ No newline at end of file
diff --git a/mindspore/doc/picture/logo.png b/mindspore/doc/picture/logo.png
new file mode 100644
index 0000000000000000000000000000000000000000..5582332707b3544481e1959a8486a08b8fb04738
Binary files /dev/null and b/mindspore/doc/picture/logo.png differ
diff --git a/mindspore/meta.yml b/mindspore/meta.yml
new file mode 100644
index 0000000000000000000000000000000000000000..3fcf3da5d381516db4a795bc884781596c2b5f7b
--- /dev/null
+++ b/mindspore/meta.yml
@@ -0,0 +1,2 @@
+2.3.0rc1-cann8.0.RC1-oe2203sp4:
+  path: mindspore/2.3.0.rc1-cann8.0.RC1/22.03-lts-sp4/Dockerfile
diff --git a/pytorch/2.2.0-cann8.0.RC1/22.03-lts-sp4/Dockerfile b/pytorch/2.2.0-cann8.0.RC1/22.03-lts-sp4/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..9d135c84e9bef43ba41308ec42161fed597e6647
--- /dev/null
+++ b/pytorch/2.2.0-cann8.0.RC1/22.03-lts-sp4/Dockerfile
@@ -0,0 +1,34 @@
+ARG BASE=openeuler/cann:8.0.RC1-oe2203sp4
+FROM ${BASE}
+
+# Arguments
+ARG VERSION=2.2.0
+ARG AUDIO_VERSION=2.2.0
+ARG NPU_VERSION=2.2.0
+ARG TORCH_VERSION=0.17.0
+
+# Change the default shell
+SHELL [ "/bin/bash", "-c" ]
+
+# Install pytorch, torch-npu and related packages
+RUN if [ "${VERSION}" == "2.1.0" ]; then \
+        TORCH_VERSION=0.16.0; \
+        AUDIO_VERSION=2.1.0; \
+        NPU_VERSION=2.1.0; \
+    elif [ "${VERSION}" == "2.2.0" ]; then \
+        TORCH_VERSION=0.17.0; \
+        AUDIO_VERSION=2.2.0; \
+        NPU_VERSION=2.2.0; \
+    else \
+        echo "Unsupported version: ${VERSION}. Feel free to submit an issue to us: https://github.com/cosdt/dockerfiles/issues"; \
+        exit 1; \
+    fi && \
+    # Uninstall the latest numpy and sympy first, as the right versions will be installed again \
+    # after installing the following packages \
+    pip uninstall -y numpy sympy && \
+    pip install --no-cache-dir --index-url https://download.pytorch.org/whl/cpu \
+        torch==${VERSION} \
+        torchvision==${TORCH_VERSION} \
+        torchaudio==${AUDIO_VERSION} && \
+    pip install --no-cache-dir \
+        torch-npu==${NPU_VERSION}
\ No newline at end of file
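The RUN step above pins matching torchvision, torchaudio, and torch-npu versions based on the selected `VERSION` (note that the `TORCH_VERSION` argument actually carries the torchvision version), so one Dockerfile covers both supported releases. A minimal build sketch follows; the image tags are illustrative assumptions, not part of the patch:

```bash
# Default build: PyTorch 2.2.0 with torchvision 0.17.0, torchaudio 2.2.0, torch-npu 2.2.0.
docker build \
  -t openeuler/pytorch:2.2.0-cann8.0.RC1-oe2203sp4 \
  pytorch/2.2.0-cann8.0.RC1/22.03-lts-sp4

# Alternative build: override VERSION to install the 2.1.0 stack instead.
docker build \
  --build-arg VERSION=2.1.0 \
  -t openeuler/pytorch:2.1.0-cann8.0.RC1-oe2203sp4 \
  pytorch/2.2.0-cann8.0.RC1/22.03-lts-sp4
```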
diff --git a/pytorch/README.md b/pytorch/README.md
index f236e04ef2b4b6dcba830ff61f325a02fe0d7456..717f006c29bbcc307ff700fbb2464c951550021a 100644
--- a/pytorch/README.md
+++ b/pytorch/README.md
@@ -1,30 +1,83 @@
-# pytorch
-
 # Quick reference
 
-- The official pytorch docker image.
+- The official PyTorch docker image.
 
-- Maintained by: [openEuler CloudNative SIG](https://gitee.com/openeuler/cloudnative)
+- Maintained by: [openEuler CloudNative SIG](https://gitee.com/openeuler/cloudnative).
 
-- Where to get help: [openEuler CloudNative SIG](https://gitee.com/openeuler/cloudnative), [openEuler](https://gitee.com/openeuler/community)
+- Where to get help: [openEuler CloudNative SIG](https://gitee.com/openeuler/cloudnative), [openEuler](https://gitee.com/openeuler/community).
 
-# Build reference
+# PyTorch | openEuler
+Current PyTorch docker images are built on [openEuler](https://repo.openeuler.org/). This repository is free to use and exempted from per-user rate limits.
 
-1. Build images and push:
-```shell
-docker buildx build -t "openeuler/pytorch:$TAG" --platform linux/arm64 ./$TAG --push
-```
+PyTorch is a Python package that provides two high-level features:
+- Tensor computation (like NumPy) with strong GPU acceleration
+- Deep neural networks built on a tape-based autograd system
 
-We are using `buildx` in here to generate ARM64 images on different host, see more in [Docker Buildx](https://docs.docker.com/buildx/working-with-buildx/)
+You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
 
-# How to use this image
-Please run container with this image on Ascend platform of ARM64.
-
-```shell
-docker run --device $DEVICE --device /dev/davinci_manager --device /dev/devmm_svm --device /dev/hisi_hdc -v /usr/local/dcmi:/usr/local/dcmi -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info -it openeuler/pytorch:$TAG
-```
+Learn more on the [PyTorch website](https://pytorch.org/docs/stable/index.html).
 
 # Supported tags and respective Dockerfile links
-- pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2: pytorch pytorch2.1.0.a1-cann7.0.RC1.alpha002, openEuler 22.03 LTS SP2
+The tag of each `pytorch` docker image consists of the complete software stack version. The details are as follows:
+| Tag | Currently | Architectures |
+|----------|-------------|------------------|
+|[pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2](https://gitee.com/openeuler/openeuler-docker-images/blob/master/pytorch/2.1.0.a1-cann7.0.RC1.alpha002/22.03-lts-sp2/Dockerfile)| PyTorch 2.1.0.a1 with CANN 7.0.RC1.alpha002 on openEuler 22.03-LTS-SP2 | arm64 |
+|[2.2.0-cann8.0.RC1-oe2203sp4](https://gitee.com/openeuler/openeuler-docker-images/blob/master/pytorch/2.2.0-cann8.0.RC1/22.03-lts-sp4/Dockerfile)| PyTorch 2.2.0 with CANN 8.0.RC1 on openEuler 22.03-LTS-SP4 | arm64, amd64 |
+
+# Usage
+Users can select the corresponding `{Tag}` and container startup options based on their requirements.
+
+- Pull the `openeuler/pytorch` image
+
+  ```bash
+  docker pull openeuler/pytorch:{Tag}
+  ```
+
+- Start a PyTorch instance
+
+  ```bash
+  docker run \
+    --name my-pytorch \
+    --device /dev/davinci1 \
+    --device /dev/davinci_manager \
+    --device /dev/devmm_svm \
+    --device /dev/hisi_hdc \
+    -v /usr/local/dcmi:/usr/local/dcmi \
+    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+    -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+    -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+    -v /etc/ascend_install.info:/etc/ascend_install.info \
+    -it openeuler/pytorch:{Tag} bash
+  ```
+
+- Container startup options
+
+  | Option | Description |
+  |--|--|
+  | `--name my-pytorch` | Names the container `my-pytorch`. |
+  | `--device /dev/davinciX` | NPU device, where `X` is the physical ID number of the chip, e.g., davinci1. |
+  | `--device /dev/davinci_manager` | Davinci-related management device. |
+  | `--device /dev/devmm_svm` | Memory management-related device. |
+  | `--device /dev/hisi_hdc` | HDC-related management device. |
+  | `-v /usr/local/dcmi:/usr/local/dcmi` | Mounts the host's DCMI `.so` and interface file directory `/usr/local/dcmi` into the container. Adjust to your actual environment. |
+  | `-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi` | Mounts the host's `npu-smi` tool `/usr/local/bin/npu-smi` into the container. Adjust to your actual environment. |
+  | `-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/` | Mounts the host directory `/usr/local/Ascend/driver/lib64/` into the container. Adjust to the path where the driver's `.so` files are located. |
+  | `-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info` | Mounts the host's version information file `/usr/local/Ascend/driver/version.info` into the container. Adjust to your actual environment. |
+  | `-v /etc/ascend_install.info:/etc/ascend_install.info` | Mounts the host's installation information file `/etc/ascend_install.info` into the container. |
+  | `-it` | Starts the container in interactive mode with a terminal (bash). |
+  | `openeuler/pytorch:{Tag}` | Specifies the image to run; replace `{Tag}` with the specific version or tag of the `openeuler/pytorch` image you want to use. |
+
+- View the container logs
+
+  ```bash
+  docker logs -f my-pytorch
+  ```
+
+- To get an interactive shell
+
+  ```bash
+  docker exec -it my-pytorch /bin/bash
+  ```
 
-## Operating System
-Linux/Unix, ARM64 architecture.
+# Questions and answers
+If you have any questions or need additional features, please submit an issue or a pull request to [openeuler-docker-images](https://gitee.com/openeuler/openeuler-docker-images).
\ No newline at end of file
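Once a container from this image is running, a quick smoke test of the stack wiring is sketched below. The container name follows the README example; `torch_npu.npu.is_available()` is assumed to be the NPU availability check exposed by the torch-npu package installed above:

```bash
# Check that torch and torch_npu import, and whether the Ascend NPU is visible.
docker exec -it my-pytorch python -c "import torch, torch_npu; print(torch.__version__); print(torch_npu.npu.is_available())"
```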
diff --git a/pytorch/doc/image-info.yml b/pytorch/doc/image-info.yml
index f4a53bf8fdac599dbec9317df980b9630b3c53a3..c23d6169df53e035e70399a49e649e0ebad1564e 100644
--- a/pytorch/doc/image-info.yml
+++ b/pytorch/doc/image-info.yml
@@ -1,17 +1,18 @@
-name: pyTorch
+name: pytorch
 category: ai
-description: pyTorch是一个开源的Ppython机器学习库,基于torch,用于自然语言处理等应用程序。pyTorch既可以看作加入了GPU支持的numpy,同时也可以看成一个拥有自动求导功能的强大的深度神经网络。
+description: PyTorch是一个开源的Python机器学习库,基于torch,用于自然语言处理等应用程序。PyTorch既可以看作加入了GPU支持的numpy,同时也可以看成一个拥有自动求导功能的强大的深度神经网络。
 environment: |
   本应用在Docker环境中运行,安装Docker执行如下命令
   ```
   yum install -y docker
   ```
 tags: |
-  pyTorch镜像的Tag由其版本信息和基础镜像版本信息组成,详细内容如下
+  PyTorch镜像的Tag由其版本信息和基础镜像版本信息组成,详细内容如下
   | Tag | Currently | Architectures |
   |----------|-------------|------------------|
-  |[pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2](https://gitee.com/openeuler/openeuler-docker-images/blob/master/pytorch/2.1.0.a1-cann7.0.RC1.alpha002/22.03-lts-sp2/Dockerfile)| pyTorch pytorch2.1.0.a1-cann7.0.RC1.alpha002 on openEuler 22.03-LTS-SP2 | arm64 |
+  |[pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2](https://gitee.com/openeuler/openeuler-docker-images/blob/master/pytorch/2.1.0.a1-cann7.0.RC1.alpha002/22.03-lts-sp2/Dockerfile)| PyTorch 2.1.0.a1 with CANN 7.0.RC1.alpha002 on openEuler 22.03-LTS-SP2 | arm64 |
+  |[2.2.0-cann8.0.RC1-oe2203sp4](https://gitee.com/openeuler/openeuler-docker-images/blob/master/pytorch/2.2.0-cann8.0.RC1/22.03-lts-sp4/Dockerfile)| PyTorch 2.2.0 with CANN 8.0.RC1 on openEuler 22.03-LTS-SP4 | arm64, amd64 |
 
 download: |
   拉取镜像到本地
   ```
@@ -22,9 +23,36 @@ download: |
 
 usage: |
   - 启动容器
     ```
-    docker run -d --name my-pytorch openeuler/pytorch:{Tag}
+    docker run \
+      --name my-pytorch \
+      --device ${device} \
+      --device /dev/davinci_manager \
+      --device /dev/devmm_svm \
+      --device /dev/hisi_hdc \
+      -v /usr/local/dcmi:/usr/local/dcmi \
+      -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+      -v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
+      -v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
+      -v /etc/ascend_install.info:/etc/ascend_install.info \
+      -it openeuler/pytorch:{Tag} bash
     ```
-    用户可根据自身需求选择对应版本的{Tag}、容器启动的选项。
+    用户可根据自身需求选择对应的硬件设备{device}、对应版本的{Tag}以及容器启动的选项。
+
+  - 参数说明
+    | 配置项 | 描述 |
+    |--|--|
+    | `--name my-pytorch` | 容器名称。 |
+    | `--device /dev/davinci1` | NPU设备,X是芯片物理ID号,例如davinci1。 |
+    | `--device /dev/davinci_manager` | davinci相关的管理设备。 |
+    | `--device /dev/devmm_svm` | 内存管理相关设备。 |
+    | `--device /dev/hisi_hdc` | hdc相关管理设备。 |
+    | `-v /usr/local/dcmi:/usr/local/dcmi` | 将宿主机dcmi的.so和接口文件目录`/usr/local/dcmi`挂载到容器中,请根据实际情况修改。 |
+    | `-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi` | 将宿主机`npu-smi`工具`/usr/local/bin/npu-smi`挂载到容器中,请根据实际情况修改。 |
+    | `-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/` | 将宿主机目录`/usr/local/Ascend/driver/lib64/`挂载到容器中,请根据driver的驱动.so所在路径修改。 |
+    | `-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info` | 将宿主机版本信息文件`/usr/local/Ascend/driver/version.info`挂载到容器中,请根据实际情况修改。 |
+    | `-v /etc/ascend_install.info:/etc/ascend_install.info` | 将宿主机安装信息文件`/etc/ascend_install.info`挂载到容器中。 |
+    | `-it` | 以交互模式启动容器。 |
+    | `openeuler/pytorch:{Tag}` | 指定要运行的镜像为 `openeuler/pytorch`,其中 `{Tag}` 是需要替换的镜像标签。 |
 
   - 容器测试
@@ -40,8 +68,7 @@ usage: |
 
 license: BSD 3-Clause license
 similar_packages:
-  - TensorFlow: TensorFlow是一个基于数据流编程(dataflow programming)的符号数学系统,被广泛应用于各类机器学习(machine learning)算法的编程实现,其前身是谷歌的神经网络算法库DistBelief。
-  - OpenCV: OpenCV是一个基于Apache2.0许可(开源)发行的跨平台计算机视觉和机器学习软件库,可以运行在Linux、Windows、Android和Mac OS操作系统上。
+  - MindSpore: 昇思MindSpore是由华为于2019年8月推出的新一代全场景AI框架,2020年3月28日,华为宣布昇思MindSpore正式开源。昇思MindSpore是一个全场景AI框架,旨在实现易开发、高效执行、全场景统一部署三大目标。
 dependency:
   - astunparse
   - expecttest
diff --git a/pytorch/meta.yml b/pytorch/meta.yml
index c3ea7b24c265afb46e15d802658c5d4dfb423664..3f7016d06dd308bbe7e1b314ac6b3d9abc4665e5 100644
--- a/pytorch/meta.yml
+++ b/pytorch/meta.yml
@@ -1,2 +1,5 @@
 pytorch2.1.0.a1-cann7.0.RC1.alpha002-oe2203sp2:
   path: pytorch/2.1.0.a1-cann7.0.RC1.alpha002/22.03-lts-sp2/Dockerfile
+  arch: aarch64
+2.2.0-cann8.0.RC1-oe2203sp4:
+  path: pytorch/2.2.0-cann8.0.RC1/22.03-lts-sp4/Dockerfile