diff --git a/ACL_PyTorch/built-in/cv/FLAVR_for_PyTorch/README.md b/ACL_PyTorch/built-in/cv/FLAVR_for_PyTorch/README.md
index f01218d88381c896a221b27d30bd9b2bae0a5a08..4bb006adb9b90c7fa21ed1a6149be508aba2cf08 100644
--- a/ACL_PyTorch/built-in/cv/FLAVR_for_PyTorch/README.md
+++ b/ACL_PyTorch/built-in/cv/FLAVR_for_PyTorch/README.md
@@ -4,8 +4,6 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
- [输入输出数据](#section540883920406)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
-
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
- [获取源码](#section4622531142816)
@@ -57,20 +55,6 @@ FLAVR使用3D卷积来学习帧间运动信息,是一种无光流估计的单
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 1.0.17 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.1.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.12.1 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
-
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/Flownet2_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/Flownet2_for_Pytorch/README.md
index f2ea0204d71b9a81a72d59f59cc3070db2b4e831..3a6d3f88dd1d2321f11993367ecdbec1896a1143 100644
--- a/ACL_PyTorch/built-in/cv/Flownet2_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/Flownet2_for_Pytorch/README.md
@@ -7,8 +7,6 @@
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
-
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
- [获取源码](#section4622531142816)
@@ -54,20 +52,6 @@ FlowNet提出了第一个基于CNN的光流预测算法,虽然具有快速的
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.2 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.9.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
-
diff --git a/ACL_PyTorch/built-in/cv/GoogleNet_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/GoogleNet_for_Pytorch/README.md
index dff9c08aff9d17608c428901c8554cbc9a4123c7..6c463a9166cce664f69e6cdb910386a9888c0297 100644
--- a/ACL_PyTorch/built-in/cv/GoogleNet_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/GoogleNet_for_Pytorch/README.md
@@ -32,16 +32,6 @@ commit_id=7d955df73fe0e9b47f7d6c77c699324b256fc41f
| output1 | batchsize x 1000 | FLOAT32 | ND |
-### 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- | 配套 | 版本 | 环境准备指导 |
- | ---- | ---- | ---------- |
- | 固件与驱动 | 22.0.1| [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | |
- | PyTorch | 1.13.1 | |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 |||
# 快速上手
@@ -207,3 +197,7 @@ python3 vision_metric_ImageNet.py result/ ImageNet/val_label.txt ./ result.json
| 64 | | 4886.97 | 877.98 |
| | **最优性能** | **6308.38** | **1258.53** |
+
+# 公网地址说明
+代码涉及公网地址参考 public_address_statement.md
+
diff --git a/ACL_PyTorch/built-in/cv/GoogleNet_for_Pytorch/public_address_statement.md b/ACL_PyTorch/built-in/cv/GoogleNet_for_Pytorch/public_address_statement.md
new file mode 100644
index 0000000000000000000000000000000000000000..2f276a2b64fd936f802cc908bc6df763a68ce0c6
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/GoogleNet_for_Pytorch/public_address_statement.md
@@ -0,0 +1,3 @@
+| 类型 | 开源代码地址 | 文件名 | 公网IP地址/公网URL地址/域名/邮箱地址 | 用途说明 |
+| ---- | ------------ | ------ | ------------------------------------ | -------- |
+|开发引入|/|googlenet_pth2onnx.py|https://github.com/pytorch/vision/blob/master/torchvision/models/googlenet.py|注释说明|
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/README.md b/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/README.md
index 075ecd1f2949f144e13984e0b64fdcb39e1b43b1..fc119f3b4e99046e3537573a541a78f2482fd7bf 100644
--- a/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/README.md
@@ -7,8 +7,6 @@
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
-
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
- [获取源码](#section4622531142816)
@@ -51,19 +49,6 @@
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- |---------| ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.10.1 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
@@ -112,7 +97,7 @@
2. 运行preprocess.py处理数据集
```
cd mmpose
- python3.7 tools/preprocess.py configs/body/2d_kpt_sview_rgb_img/associative_embedding/coco/hrnet_w32_coco_512x512.py --pre_data ./pre_data
+ python3 tools/preprocess.py configs/body/2d_kpt_sview_rgb_img/associative_embedding/coco/hrnet_w32_coco_512x512.py --pre_data ./pre_data
```
- 参数说明:
@@ -142,7 +127,7 @@
```
cd mmpose
- python3.7 tools/deployment/pytorch2onnx.py configs/body/2d_kpt_sview_rgb_img/associative_embedding/coco/hrnet_w32_coco_512x512.py ./hrnet_w32_coco_512x512-bcb8c247_20200816.pth --output-file ./hrnet.onnx
+ python3 tools/deployment/pytorch2onnx.py configs/body/2d_kpt_sview_rgb_img/associative_embedding/coco/hrnet_w32_coco_512x512.py ./hrnet_w32_coco_512x512-bcb8c247_20200816.pth --output-file ./hrnet.onnx
```
- 参数说明:
@@ -204,7 +189,7 @@
a. 使用run_infer.py进行推理, 该文件调用aclruntime的后端封装的python的whl包进行推理。
```
- python3.7 run_infer.py --data_path ./pre_data --out_put ./output --result ./result --batch_size 1 --device_id 0
+ python3 run_infer.py --data_path ./pre_data --out_put ./output --result ./result --batch_size 1 --device_id 0
```
- 参数说明:
@@ -219,7 +204,7 @@
b. 精度验证。
```
- python3.7 tools/postprocess.py configs/body/2d_kpt_sview_rgb_img/associative_embedding/coco/hrnet_w32_coco_512x512.py --dataset ./result --eval mAP --label_dir ./pre_data/label.json
+ python3 tools/postprocess.py configs/body/2d_kpt_sview_rgb_img/associative_embedding/coco/hrnet_w32_coco_512x512.py --dataset ./result --eval mAP --label_dir ./pre_data/label.json
```
- 参数说明:
diff --git a/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/public_address_statement.md b/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/public_address_statement.md
index c522bb9e8327effb7fac210336c126749a2419ea..29efc1c7b77bc4ed60b890523ad316f40398ff4e 100644
--- a/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/public_address_statement.md
+++ b/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/public_address_statement.md
@@ -1,4 +1,5 @@
| 类型 | 开源代码地址 | 文件名 | 公网IP地址/公网URL地址/域名/邮箱地址 | 用途说明 |
| ---- | ------------ | ------ | ------------------------------------ | -------- |
-|开发引入|/|HRNet_mmlab_for_pytorch/url.ini|https://github.com/open-mmlab/mmdeploy|获取源码|
\ No newline at end of file
+|开发引入|/|HRNet_mmlab_for_pytorch/url.ini|https://github.com/open-mmlab/mmdeploy|获取源码|
+|开发引入|/|associative_embedding.py|https://github.com/open-mmlab/mmpose/pull/382|注释说明|
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/run_infer.py b/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/run_infer.py
index 22cc55d33e8f73b45509a934712e7a186eab5bfa..12c7a720fb5f5a0c4b1dab1d775e527e09c124f3 100644
--- a/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/run_infer.py
+++ b/ACL_PyTorch/built-in/cv/HRNet_mmlab_for_pytorch/run_infer.py
@@ -47,7 +47,7 @@ def main():
shape = file.split('_')
shape_list.append([shape[0], shape[1]])
- command = 'python3.7 -m ais_bench --model "model/hrnet_bs{}.om" --input "{}/{}_{}" --output "{}" --outfmt NPY ' \
+ command = 'python3 -m ais_bench --model "model/hrnet_bs{}.om" --input "{}/{}_{}" --output "{}" --outfmt NPY ' \
'--dymDims x:{},3,{},{} --device {}'
for i in shape_list:
command1 = command.format(arg.batch_size, data_path1, i[0], i[1], arg.out_put, arg.batch_size, i[0], i[1],
diff --git a/ACL_PyTorch/built-in/cv/I3D_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/I3D_for_Pytorch/README.md
index d497fd873f0040084aa52606d91bc3a1cbccfc9e..e79829c8d59efc6b0338dca72a243c672774f833 100644
--- a/ACL_PyTorch/built-in/cv/I3D_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/I3D_for_Pytorch/README.md
@@ -26,19 +26,6 @@ url=https://github.com/open-mmlab/mmaction2
| output | FLOAT32 | batchsize x 30 x 400 | ND |
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.2 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | [CANN推理架构准备](https://www/hiascend.com/software/cann/commercial) |
- | Python | 3.7.5 | 创建anaconda环境时指定python版本即可,conda create -n ${your_env_name} python==3.7.5 |
- | PyTorch | 1.8.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
@@ -63,7 +50,7 @@ url=https://github.com/open-mmlab/mmaction2
```
cd ..
- pip3.7 install -r requirements.txt
+ pip3 install -r requirements.txt
```
## 准备数据集
@@ -189,7 +176,7 @@ url=https://github.com/open-mmlab/mmaction2
运行命令获取top1_acc,top5_acc和mean_acc,如出现找不到mmaction的错误,可将mmaction2下的mmaction文件移到mmaction2/tools。
```sh
mv ../i3d_inference.py ./
- python i3d_inference.py ./configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py --eval top_k_accuracy mean_class_accuracy --out result.json --batch_size 1 --model ../i3d_bs1.om --device_id 0 --show True
+ python3 i3d_inference.py ./configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py --eval top_k_accuracy mean_class_accuracy --out result.json --batch_size 1 --model ../i3d_bs1.om --device_id 0 --show True
```
- 参数说明:
- --eval:精度指标。
@@ -203,7 +190,7 @@ url=https://github.com/open-mmlab/mmaction2
可使用ais_bench推理工具的纯推理模型验证模型的性能,参考命令如下:
```
- python3.7 -m ais_bench --model=i3d_bs1.om --batchsize=1
+ python3 -m ais_bench --model=i3d_bs1.om --batchsize=1
```
# 模型推理性能&精度
diff --git a/ACL_PyTorch/built-in/cv/I3D_nonlocal/README.md b/ACL_PyTorch/built-in/cv/I3D_nonlocal/README.md
index 7177f2bd08d2a7321e5ddf379b8d886952182437..83d4018fce94879f1ca834aa78165af9548a1c18 100644
--- a/ACL_PyTorch/built-in/cv/I3D_nonlocal/README.md
+++ b/ACL_PyTorch/built-in/cv/I3D_nonlocal/README.md
@@ -5,7 +5,7 @@
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -41,20 +41,6 @@ url=https://github.com/open-mmlab/mmaction2
| output | FLOAT32 | batchsize x 10 x 400 | ND |
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.2 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | [CANN推理架构准备](https://www/hiascend.com/software/cann/commercial) |
- | Python | 3.7.5 | 创建anaconda环境时指定python版本即可,conda create -n ${your_env_name} python==3.7.5 |
- | PyTorch | 1.8.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
-
# 快速上手
## 获取源码
diff --git a/ACL_PyTorch/built-in/cv/InceptionV3_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/InceptionV3_for_Pytorch/README.md
index 0c5024bac14a8c316ec327afe0d3403eaef4eea5..078b3a4c2ef5dadc30c6d5124ba9236c445349ba 100644
--- a/ACL_PyTorch/built-in/cv/InceptionV3_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/InceptionV3_for_Pytorch/README.md
@@ -2,7 +2,7 @@
- [概述](#概述)
- [输入输出数据](#输入输出数据)
-- [推理环境](#推理环境)
+
- [快速上手](#快速上手)
- [获取源码](#获取源码)
- [准备数据集](#准备数据集)
@@ -35,17 +35,7 @@ InceptionV3 模型是谷歌 Inception 系列里面的第三代模型,在 Incep
----
-# 推理环境
-
-- 该模型推理所需配套的软件如下:
- | 配套 | 版本 | 环境准备指导 |
- | --------- | ------- | ---------- |
- | 固件与驱动 | 1.0.17 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
-
- 说明:请根据推理卡型号与 CANN 版本选择相匹配的固件与驱动版本。
----
diff --git a/ACL_PyTorch/built-in/cv/InceptionV4_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/InceptionV4_for_Pytorch/README.md
index 0fc37f4755727d8d75b9e56a2db2dba95f8fde15..3c80f5abae4fd9b67474aabe9c0bb554af2833c6 100644
--- a/ACL_PyTorch/built-in/cv/InceptionV4_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/InceptionV4_for_Pytorch/README.md
@@ -2,7 +2,7 @@
- [概述](#概述)
- [输入输出数据](#输入输出数据)
-- [推理环境](#推理环境)
+
- [快速上手](#快速上手)
- [获取源码](#获取源码)
- [准备数据集](#准备数据集)
@@ -38,17 +38,6 @@ InceptionV4中基本的Inception module还是沿袭了Inception v2/v3的结构
----
-# 推理环境
-
-- 该模型推理所需配套的软件如下:
-
- | 配套 | 版本 | 环境准备指导 |
- | --------- | ------- | ---------- |
- | 固件与驱动 | 1.0.17 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
-
- 说明:请根据推理卡型号与 CANN 版本选择相匹配的固件与驱动版本。
----
diff --git a/ACL_PyTorch/built-in/cv/InceptionV4_for_Pytorch/public_address_statement.md b/ACL_PyTorch/built-in/cv/InceptionV4_for_Pytorch/public_address_statement.md
index 76e63736a6d247556e54ac3cce903f895f80805c..0393b65d59684d35adbff14575a697cfb0bd6b05 100644
--- a/ACL_PyTorch/built-in/cv/InceptionV4_for_Pytorch/public_address_statement.md
+++ b/ACL_PyTorch/built-in/cv/InceptionV4_for_Pytorch/public_address_statement.md
@@ -1,3 +1,4 @@
| 类型 | 开源代码地址 | 文件名 | 公网IP地址/公网URL地址/域名/邮箱地址 | 用途说明 |
| ---- | ------------ | ------ | ------------------------------------ | -------- |
-|开发引入|/|InceptionV4_for_Pytorch/url.ini|http://data.lip6.fr/cadene/pretrainedmodels/inceptionv4-8e4777a0.pth|下载权重|
\ No newline at end of file
+|开发引入|/|InceptionV4_for_Pytorch/url.ini|http://data.lip6.fr/cadene/pretrainedmodels/inceptionv4-8e4777a0.pth|下载权重|
+|开发引入|/|inceptionv4_pth2onnx.py|https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/inceptionv4.py|注释说明|
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/LPRNet_for_PyTorch/README.md b/ACL_PyTorch/built-in/cv/LPRNet_for_PyTorch/README.md
index 84e548a4b0bf69ee0e241c825e0df7b783d3d903..1a1d4b2c0d5c39bf137bf97bb5b8cb1fefc5dfc8 100644
--- a/ACL_PyTorch/built-in/cv/LPRNet_for_PyTorch/README.md
+++ b/ACL_PyTorch/built-in/cv/LPRNet_for_PyTorch/README.md
@@ -49,21 +49,10 @@ LPRNet(License Plate Recognition Network)是一个实时的轻量化、高质量
# 推理环境准备
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ---------- | ------- | ---------------------------------------------------------- |
- | 固件与驱动 | 22.0.2 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.0 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.8.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
- 该模型需要以下依赖
- **表 2** 依赖列表
+ **表 1** 依赖列表
| 依赖名称 | 版本 |
| --------------------- | ----------------------- |
diff --git a/ACL_PyTorch/built-in/cv/MobileNetV2_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/MobileNetV2_for_Pytorch/README.md
index 0c5f083e651e2d034b17d9b1a6af79ccdbc0455e..6af144d65d090a6b86fc152673420116a92d47fe 100644
--- a/ACL_PyTorch/built-in/cv/MobileNetV2_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/MobileNetV2_for_Pytorch/README.md
@@ -3,7 +3,6 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -47,19 +46,7 @@ mobileNetV2是对mobileNetV1的改进,是一种轻量级的神经网络。mobi
-# 推理环境准备\[所有版本\]
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ |---------| ------------------------------------------------------------ |
-| 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 6.0.RC1 | - |
-| Python | 3.7.5 | - |
-| PyTorch | 1.8.0 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/MobileNetV2_for_Pytorch/public_address_statement.md b/ACL_PyTorch/built-in/cv/MobileNetV2_for_Pytorch/public_address_statement.md
index b6c42cc39893d1c3c1d04e0fc9f39cc8cea083fb..788db266c71882d47e6207a0ac02d93a56a214d2 100644
--- a/ACL_PyTorch/built-in/cv/MobileNetV2_for_Pytorch/public_address_statement.md
+++ b/ACL_PyTorch/built-in/cv/MobileNetV2_for_Pytorch/public_address_statement.md
@@ -1,3 +1,5 @@
| 类型 | 开源代码地址 | 文件名 | 公网IP地址/公网URL地址/域名/邮箱地址 | 用途说明 |
| ---- | ------------ | ------ | ------------------------------------ | -------- |
|开发引入|/|MobileNetV2_for_Pytorch/url.ini|https://download.pytorch.org/models/mobilenet_v2-b0353104.pth|下载权重|
+|开发引入|/|mobilenet.py|https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py|注释说明|
+|开发引入|/|mobilenet.py|`"MobileNetV2: Inverted Residuals and Linear Bottlenecks" <https://arxiv.org/abs/1801.04381>`_.|注释说明|
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/MobileNetV3_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/MobileNetV3_for_Pytorch/README.md
index 7e2c4d0eb34363c129a4b4dab5c3d29daa949744..c01e399bdf944ce61e727c64d5324252f18b747f 100644
--- a/ACL_PyTorch/built-in/cv/MobileNetV3_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/MobileNetV3_for_Pytorch/README.md
@@ -1,7 +1,7 @@
# MobileNetV3-推理指导
- [概述](#概述)
-- [推理环境准备](#推理环境准备)
+
- [快速上手](#快速上手)
- [获取源码](#获取源码)
- [准备数据集](#准备数据集)
@@ -39,17 +39,6 @@ MobileNetV3引入了MobileNetV1的深度可分离卷积,MobileNetV2的具有
| output | FLOAT16 | batchsize x 1000 | ND |
-# 推理环境准备
-- 该模型需要以下插件与驱动
- **表 1** 版本配套表
-
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------- |---------| ------------------------------------------------------------ |
-| 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 6.0.RC1 | - |
-| Python | 3.7.5 | - |
-| PyTorch | 1.10.1 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
@@ -184,3 +173,7 @@ MobileNetV3引入了MobileNetV1的深度可分离卷积,MobileNetV2的具有
| Ascend310P3 | 32 | ImageNet | 65.094/Top1 85.432/Top5 | 15442.12 fps |
| Ascend310P3 | 64 | ImageNet | 65.079/Top1 85.417/Top5 | 14863.88 fps |
+
+# 公网地址说明
+代码涉及公网地址参考 public_address_statement.md
+
diff --git a/ACL_PyTorch/built-in/cv/MobileNetV3_for_Pytorch/public_address_statement.md b/ACL_PyTorch/built-in/cv/MobileNetV3_for_Pytorch/public_address_statement.md
new file mode 100644
index 0000000000000000000000000000000000000000..7610f1dd5d3ab2028d128e0c200fda8aa3e46cfa
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/MobileNetV3_for_Pytorch/public_address_statement.md
@@ -0,0 +1,3 @@
+| 类型 | 开源代码地址 | 文件名 | 公网IP地址/公网URL地址/域名/邮箱地址 | 用途说明 |
+| ---- | ------------ | ------ | ------------------------------------ | -------- |
+|开发引入|/|data/dataset.py|"""See http://www.codinghorror.com/blog/archives/001018.html"""|注释说明|
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/PSENet_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/PSENet_for_Pytorch/README.md
index 89c19f6d9cc7dbb0b05a5e380251c562e493b56f..d4fbedf1d8e10aa186d23e2aad3513c349f86bda 100644
--- a/ACL_PyTorch/built-in/cv/PSENet_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/PSENet_for_Pytorch/README.md
@@ -5,7 +5,7 @@
- [输入输出数据](#section540883920406)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -52,19 +52,6 @@ PSENet(渐进式的尺度扩张网络)是一种文本检测器,能够很好地
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.6.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
diff --git a/ACL_PyTorch/built-in/cv/PSENet_for_Pytorch/public_address_statement.md b/ACL_PyTorch/built-in/cv/PSENet_for_Pytorch/public_address_statement.md
index c4c04cb504112da45540ad669537faebc18a220b..4cc98a09101dec0036ce09e5dd9fbfac8cd904a0 100644
--- a/ACL_PyTorch/built-in/cv/PSENet_for_Pytorch/public_address_statement.md
+++ b/ACL_PyTorch/built-in/cv/PSENet_for_Pytorch/public_address_statement.md
@@ -6,3 +6,6 @@
|开发引入|/|PSENet_for_Pytorch/url.ini|https://download.pytorch.org/models/resnet50-19c8e357.pth|下载权重|
|开发引入|/|PSENet_for_Pytorch/url.ini|https://download.pytorch.org/models/resnet101-5d3mb4d8f.pth|下载权重|
|开发引入|/|PSENet_for_Pytorch/url.ini|https://download.pytorch.org/models/resnet152-b121ed2d.pth|下载权重|
+|开发引入|/|fpn_resnet_nearest.py|http://www.apache.org/licenses/|license|
+|开发引入|/|fpn_resnet_nearest.py|http://www.apache.org/licenses/LICENSE-2.0|license|
+|开发引入|/|Post-processing/Algorithm_DetEva.py|It is slightly different from original algorithm(see https://perso.liris.cnrs.fr/christian.wolf/software/deteval/index.html)|注释说明|
diff --git a/ACL_PyTorch/built-in/cv/Pelee_for_Pytorch/ReadMe.md b/ACL_PyTorch/built-in/cv/Pelee_for_Pytorch/ReadMe.md
index 3221cca847a98072b57b4c26775592f4179d1a5e..54b6a60ac119fb52aceb38ab97c5431c9fd698e1 100644
--- a/ACL_PyTorch/built-in/cv/Pelee_for_Pytorch/ReadMe.md
+++ b/ACL_PyTorch/built-in/cv/Pelee_for_Pytorch/ReadMe.md
@@ -35,15 +35,6 @@ commit_id=1eab4106330f275ab3c5dfb910ddd79a5bac95ef
## 推理环境准备
-该模型需要以下插件与驱动
-
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| 固件与驱动 | [1.0.15](https://www.hiascend.com/hardware/firmware-drivers?tag=commercial) | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | [5.1.RC1](https://www.hiascend.com/software/cann/commercial?version=5.1.RC1) | |
-| PyTorch | [1.5.0](https://github.com/pytorch/pytorch/tree/v1.5.0) | |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | | |
-
| 依赖名称 | 版本 |
diff --git a/ACL_PyTorch/built-in/cv/R(2+1)D_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/R(2+1)D_for_Pytorch/README.md
index 368d10ba0491dde66278fa6cea1c4a0499eacc68..c28447e699a5845fdecfab488f118296f2674176 100644
--- a/ACL_PyTorch/built-in/cv/R(2+1)D_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/R(2+1)D_for_Pytorch/README.md
@@ -5,7 +5,7 @@
- **1.1 安装必要的依赖,测试环境可能已经安装其中的一些不同版本的库了,故手动测试时不推荐使用该命令安装**
```
-pip3.7 install -r requirements.txt
+pip3 install -r requirements.txt
```
- **1.2 获取,修改与安装开源模型代码**
@@ -20,7 +20,7 @@ cd ..
git clone https://github.com/open-mmlab/mmaction2 -b master
cd mmaction2
git reset --hard acce52d21a2545d9351b1060853c3bcd171b7158
-python3.7 setup.py develop
+python3 setup.py develop
```
注:若上述命令不能下载源码,则将https替换为git(如:git clone git://github.com/open-mmlab/mmcv -b master )
@@ -44,7 +44,7 @@ mkdir -p ./data/ucf101/videos
将/root/datasets/ucf101文件夹下的视频文件夹复制到videos下
cp -r /root/datasets/ucf101/* ./data/ucf101/videos
-python3.7 ./mmaction2/tools/data/build_rawframes.py ./data/ucf101/videos/ ./data/ucf101/rawframes/ --task rgb --level 2 --ext avi --use-opencv
+python3 ./mmaction2/tools/data/build_rawframes.py ./data/ucf101/videos/ ./data/ucf101/rawframes/ --task rgb --level 2 --ext avi --use-opencv
DATA_DIR_AN="./data/ucf101/annotations"
@@ -66,9 +66,9 @@ PYTHONPATH=. python3.7 ./mmaction2/tools/data/build_file_list.py ucf101 data/ucf
```
- **2.1 pth转ONNX**
```
-python3.7 ./mmaction2/tools/deployment/pytorch2onnx.py ./mmaction2/configs/recognition/r2plus1d/r2plus1d_r34_8x8x1_180e_ucf101_rgb2.py best_top1_acc_epoch_35.pth --verify --output-file=r2plus1d.onnx --shape 1 3 3 8 256 256
+python3 ./mmaction2/tools/deployment/pytorch2onnx.py ./mmaction2/configs/recognition/r2plus1d/r2plus1d_r34_8x8x1_180e_ucf101_rgb2.py best_top1_acc_epoch_35.pth --verify --output-file=r2plus1d.onnx --shape 1 3 3 8 256 256
-python3.7 -m onnxsim --input-shape="1,3,3,8,256,256" --dynamic-input-shape r2plus1d.onnx r2plus1d_sim.onnx
+python3 -m onnxsim --input-shape="1,3,3,8,256,256" --dynamic-input-shape r2plus1d.onnx r2plus1d_sim.onnx
```
- **2.2 ONNX转om**
1. 配置环境变量
@@ -91,18 +91,18 @@ python3.7 -m onnxsim --input-shape="1,3,3,8,256,256" --dynamic-input-shape r2plu
- **2.3 数据预处理**
```
-python3.7 r2plus1d_preprocess.py --config=./mmaction2/configs/recognition/r2plus1d/r2plus1d_r34_8x8x1_180e_ucf101_rgb2.py --bts=1 --output_path=./predata_bts1/
+python3 r2plus1d_preprocess.py --config=./mmaction2/configs/recognition/r2plus1d/r2plus1d_r34_8x8x1_180e_ucf101_rgb2.py --bts=1 --output_path=./predata_bts1/
```
- **2.4 模型性能测试**
```
-python3.7.5 -m ais_bench --model ./r2plus1d_bs4.om --loop 50
+python3 -m ais_bench --model ./r2plus1d_bs4.om --loop 50
```
- **2.5 模型精度测试**
模型推理数据集
```
-python3.7.5 -m ais_bench --model ./r2plus1d_bs4.om --input ./predata_bts1/ --output ./lcmout/ --outfmt NPY
+python3 -m ais_bench --model ./r2plus1d_bs4.om --input ./predata_bts1/ --output ./lcmout/ --outfmt NPY
--model:模型地址
--input:预处理完的数据集文件夹
--output:推理结果保存地址
@@ -110,7 +110,7 @@ python3.7.5 -m ais_bench --model ./r2plus1d_bs4.om --input ./predata_bts1/ --out
```
精度验证
```
-python3.7 r2plus1d_postprocess.py --result_path=./lcmout/2022_xx_xx-xx_xx_xx/sumary.json
+python3 r2plus1d_postprocess.py --result_path=./lcmout/2022_xx_xx-xx_xx_xx/sumary.json
--result_path:推理结果中的json文件
```
diff --git a/ACL_PyTorch/built-in/cv/Res2Net_v1b_101_for_PyTorch/README.md b/ACL_PyTorch/built-in/cv/Res2Net_v1b_101_for_PyTorch/README.md
index bdbd02bbc9d550870881422cae561e5121b7dbc3..df721ff9beda687886fb038665a72ec1a3160d5b 100755
--- a/ACL_PyTorch/built-in/cv/Res2Net_v1b_101_for_PyTorch/README.md
+++ b/ACL_PyTorch/built-in/cv/Res2Net_v1b_101_for_PyTorch/README.md
@@ -7,7 +7,7 @@
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -54,19 +54,7 @@
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.2 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 5.1.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.5.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
diff --git a/ACL_PyTorch/built-in/cv/ResNeXt50_for_Pytorch/ReadMe.md b/ACL_PyTorch/built-in/cv/ResNeXt50_for_Pytorch/ReadMe.md
index 0c3fb44c828a7631c73aa27685e8e2548abaf932..114b2a40a945462ab4d4401e9700a73a21b4a3bc 100644
--- a/ACL_PyTorch/built-in/cv/ResNeXt50_for_Pytorch/ReadMe.md
+++ b/ACL_PyTorch/built-in/cv/ResNeXt50_for_Pytorch/ReadMe.md
@@ -56,7 +56,9 @@
-(6)python3.7 vision_metric_ImageNet.py result/dumpOutput_device0/ ./val_label.txt ./ result.json
+(6)python3 vision_metric_ImageNet.py result/dumpOutput_device0/ ./val_label.txt ./ result.json
验证推理结果
+# 公网地址说明
+代码涉及公网地址参考 public_address_statement.md
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/ResNeXt50_for_Pytorch/public_address_statement.md b/ACL_PyTorch/built-in/cv/ResNeXt50_for_Pytorch/public_address_statement.md
new file mode 100644
index 0000000000000000000000000000000000000000..a2038173d10a5382a9f8d8165fc5c3f18c5b3290
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/ResNeXt50_for_Pytorch/public_address_statement.md
@@ -0,0 +1,3 @@
+| 类型 | 开源代码地址 | 文件名 | 公网IP地址/公网URL地址/域名/邮箱地址 | 用途说明 |
+| ---- | ------------ | ------ | ------------------------------------ | -------- |
+|开发引入|/|resnext50_pth2onnx.py|https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py|注释说明|
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/Resnet101_Pytorch_Infer/README.md b/ACL_PyTorch/built-in/cv/Resnet101_Pytorch_Infer/README.md
index dbd2d7c9f7e993ad38ed13310ec6e5dd7e9efc2d..7d1a05e63796b66d5490d656bcb3b1484893659c 100644
--- a/ACL_PyTorch/built-in/cv/Resnet101_Pytorch_Infer/README.md
+++ b/ACL_PyTorch/built-in/cv/Resnet101_Pytorch_Infer/README.md
@@ -43,14 +43,6 @@ commit_id=7d955df73fe0e9b47f7d6c77c699324b256fc41f
### 推理环境准备
-- 该模型需要以下插件与驱动
-
- | 配套 | 版本 | 环境准备指导 |
- | ----------------------------------------- | ------- | --------------------------------------------------------------------------------------------- |
- | 固件与驱动 | 22.0.4 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | |
- | PyTorch | 1.5.1 | |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | | |
- 该模型需要以下依赖。
diff --git a/ACL_PyTorch/built-in/cv/Resnet18_for_PyTorch/README.md b/ACL_PyTorch/built-in/cv/Resnet18_for_PyTorch/README.md
index ab6a788fd80cbeceb76f11ce8c8237e2c35a76e2..b442f62f32a1acd2e6d87605ba631cff21ba9712 100644
--- a/ACL_PyTorch/built-in/cv/Resnet18_for_PyTorch/README.md
+++ b/ACL_PyTorch/built-in/cv/Resnet18_for_PyTorch/README.md
@@ -27,14 +27,7 @@
## 推理环境准备
-- 该模型需要以下插件与驱动
-
- | 配套 | 版本 | 环境准备指导 |
- | ----------------------------------------- | ------- | --------------------------------------------------------------------------------------------- |
- | 固件与驱动 | 22.0.4 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | |
- | PyTorch | 1.5.1 | |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | | |
+
- 该模型需要以下依赖。
diff --git a/ACL_PyTorch/built-in/cv/Resnet34_for_Pytorch/ReadMe.md b/ACL_PyTorch/built-in/cv/Resnet34_for_Pytorch/ReadMe.md
index 3dfd2a1ddb933dbd709a8cfe483ba6ea3626bf45..42de679f624ef1f7440fe9ea9f5e764f0ef94ccf 100644
--- a/ACL_PyTorch/built-in/cv/Resnet34_for_Pytorch/ReadMe.md
+++ b/ACL_PyTorch/built-in/cv/Resnet34_for_Pytorch/ReadMe.md
@@ -18,19 +18,19 @@
1. 数据预处理,把ImageNet 50000张图片转为二进制文件(.bin)
```shell
- python3.7 pytorch_transfer.py resnet /home/HwHiAiUser/dataset/ImageNet/ILSVRC2012_img_val ./prep_bin
+ python3 pytorch_transfer.py resnet /home/HwHiAiUser/dataset/ImageNet/ILSVRC2012_img_val ./prep_bin
```
2. 生成数据集info文件
```shell
- python3.7 get_info.py bin ./prep_bin ./BinaryImageNet.info 256 256
+ python3 get_info.py bin ./prep_bin ./BinaryImageNet.info 256 256
```
3. 从torchvision下载[resnet34模型](https://ascend-repo-modelzoo.obs.cn-east-2.myhuaweicloud.com/model/1_PyTorch_PTH/ResNet34/PTH/resnet34-b627a593.pth)或者指定自己训练好的pth文件路径,通过pth2onnx.py脚本转化为onnx模型
```shell
- python3.7 pth2onnx.py ./resnet34-333f7ec4.pth ./resnet34_dynamic.onnx
+ python3 pth2onnx.py ./resnet34-333f7ec4.pth ./resnet34_dynamic.onnx
```
4. 支持脚本将.onnx文件转为离线推理模型文件.om文件
@@ -51,14 +51,14 @@
6. 精度验证,调用vision_metric_ImageNet.py脚本与数据集标签val_label.txt比对,可以获得Accuracy数据,结果保存在result.json中
```shell
- python3.7 vision_metric_ImageNet.py result/dumpOutput_device0/ ./val_label.txt ./ result.json
+ python3 vision_metric_ImageNet.py result/dumpOutput_device0/ ./val_label.txt ./ result.json
```
7. 模型量化:
a.生成量化数据:
'''shell
mkdir amct_prep_bin
- python3.7 pytorch_transfer_amct.py /home/HwHiAiUser/dataset/ImageNet/ILSVRC2012_img_val ./amct_prep_bin
+ python3 pytorch_transfer_amct.py /home/HwHiAiUser/dataset/ImageNet/ILSVRC2012_img_val ./amct_prep_bin
mkdir data_bs64
- python3.7 calibration_bin ./amct_prep_bin data_bs64 64
+ python3 calibration_bin ./amct_prep_bin data_bs64 64
b. 量化模型转换:
amct_onnx calibration --model resnet34_dynamic.onnx --save_path ./result/resnet34 --input_shape="actual_input_1:64,3,224,224" --data_dir "./data_bs64/" --data_type "float32"
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/Resnet50_Pytorch_Infer/README.md b/ACL_PyTorch/built-in/cv/Resnet50_Pytorch_Infer/README.md
index cee20cc10e167af37e46ef58fa936b1e2f67cd5a..f8a0cc54b7e48cdf8001674c4709bc4f05ef2d4d 100644
--- a/ACL_PyTorch/built-in/cv/Resnet50_Pytorch_Infer/README.md
+++ b/ACL_PyTorch/built-in/cv/Resnet50_Pytorch_Infer/README.md
@@ -3,7 +3,7 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -54,19 +54,6 @@ Resnet是残差网络(Residual Network)的缩写,该系列网络广泛用于目
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
-| 固件与驱动 | 1.0.15 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 5.1.RC2 | - |
-| Python | 3.7.5 | - |
-| PyTorch | >1.5.0 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/Resnet50_Pytorch_Infer_poc/README.md b/ACL_PyTorch/built-in/cv/Resnet50_Pytorch_Infer_poc/README.md
index b9502c0add8b03b842ee557c36ff8bb91ab30a5e..afe59a1c9c92c9f4ce069f9dfaa01dbaffaa79e7 100644
--- a/ACL_PyTorch/built-in/cv/Resnet50_Pytorch_Infer_poc/README.md
+++ b/ACL_PyTorch/built-in/cv/Resnet50_Pytorch_Infer_poc/README.md
@@ -3,7 +3,7 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -54,19 +54,7 @@ Resnet是残差网络(Residual Network)的缩写,该系列网络广泛用于目
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
-| 固件与驱动 | 23.0.RC2 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 6.3.203 | - |
-| Python | 3.7.5 | - |
-| PyTorch | >1.5.0 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/Resnet50_mlperf/README.md b/ACL_PyTorch/built-in/cv/Resnet50_mlperf/README.md
index 3f958e870968d9eb04a963d92ef996b43bbca05f..44c4971daab3b2bd8b6c2c8c8537dd13fc6626e3 100644
--- a/ACL_PyTorch/built-in/cv/Resnet50_mlperf/README.md
+++ b/ACL_PyTorch/built-in/cv/Resnet50_mlperf/README.md
@@ -3,7 +3,7 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -53,18 +53,7 @@ Resnet是残差网络(Residual Network)的缩写,该系列网络广泛用于目
| output | batchsize | INT64 |
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
-| 固件与驱动 | 1.0.15 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 6.3.RC1 | - |
-| Python | 3.7.5 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/Retinanet_Resnet18/README.md b/ACL_PyTorch/built-in/cv/Retinanet_Resnet18/README.md
index e7c168b453a88eaaab6919ca021b38c4505be9f0..af2daae6347106f3c560098007b2b74c09ebb881 100644
--- a/ACL_PyTorch/built-in/cv/Retinanet_Resnet18/README.md
+++ b/ACL_PyTorch/built-in/cv/Retinanet_Resnet18/README.md
@@ -4,7 +4,7 @@
- [输入输出数据](#section540883920406)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -44,18 +44,7 @@
| dets | FLOAT32 | 1 x 5 x 100 | ND |
| labels | INT64 | 1 x 100 | ND |
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
- | 配套 | 版本 | 环境准备指导 |
- | ---------- | ------- | ----------------------------------------------------------------------------------------------------- |
- | 固件与驱动 | 1.0.17 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 7.0.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.10.0 | - |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/Retinanet_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/Retinanet_for_Pytorch/README.md
index 52547cd95f5329c8b25cd029705b52b1b22a51b5..de146d61043cf6fd8dabc51a0a17058a98bfce3e 100644
--- a/ACL_PyTorch/built-in/cv/Retinanet_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/Retinanet_for_Pytorch/README.md
@@ -4,7 +4,7 @@
- [输入输出数据](#section540883920406)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -44,18 +44,7 @@
| dets | FLOAT32 | 1 x 5 x 100 | ND |
| labels | INT64 | 1 x 100 | ND |
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
- | 配套 | 版本 | 环境准备指导 |
- | ---------- | ------- | ----------------------------------------------------------------------------------------------------- |
- | 固件与驱动 | 1.0.17 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.10.0 | - |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/SAM/README.md b/ACL_PyTorch/built-in/cv/SAM/README.md
index 82a1048f09519be4190fdd620a205c9fa0135481..a822b9e272bee9d20b4ed4a54f3b66a412112730 100644
--- a/ACL_PyTorch/built-in/cv/SAM/README.md
+++ b/ACL_PyTorch/built-in/cv/SAM/README.md
@@ -3,7 +3,7 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -72,19 +72,6 @@ SAM 首先会自动分割图像中的所有内容,但是如果你需要分割
| low_res_masks | FLOAT32 | -1 x 1 x -1 x -1 | ND |
-# 推理环境准备\[所有版本\]
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ |---------| ------------------------------------------------------------ |
-| 固件与驱动 | 23.0.rc3.b070 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 7.0.T10 | - |
-| Python | 3.8.13 | - |
-| PyTorch | 1.13.1 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/SCNet/README.md b/ACL_PyTorch/built-in/cv/SCNet/README.md
index a78abff04c54c3b273b95396e90be287f1d86517..52f8730eb18504d0711d485e8a50922576eaeeaf 100644
--- a/ACL_PyTorch/built-in/cv/SCNet/README.md
+++ b/ACL_PyTorch/built-in/cv/SCNet/README.md
@@ -7,7 +7,7 @@
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -51,19 +51,7 @@
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
- | 配套 | 版本 | 环境准备指导 |
- |---------| ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.8.1 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
@@ -100,7 +88,7 @@
执行imagenet_torch_preprocess.py脚本,完成预处理。
```
- python3.7 imagenet_torch_preprocess.py /local/SCNet/imagenet/val ./pre_dataset
+ python3 imagenet_torch_preprocess.py /local/SCNet/imagenet/val ./pre_dataset
```
@@ -129,7 +117,7 @@
运行pth2onnx.py脚本。
```
- python3.7 pth2onnx.py scnet50_v1d-4109d1e1.pth scnet.onnx
+ python3 pth2onnx.py scnet50_v1d-4109d1e1.pth scnet.onnx
```
获得scnet.onnx文件。
@@ -209,7 +197,7 @@
调用vision_metric_ImageNet.py脚本与label比对,可以获得Accuracy Top5数据,结果保存在result.txt中。
```
- python3.7 vision_metric.py --benchmark_out ./output/subdir/ --anno_file /local/SCNet/imagenet/val_label.txt --result_file ./result.txt
+ python3 vision_metric.py --benchmark_out ./output/subdir/ --anno_file /local/SCNet/imagenet/val_label.txt --result_file ./result.txt
```
- 参数说明:
@@ -225,9 +213,9 @@
可使用ais_bench推理工具的纯推理模式验证不同batch_size的om模型的性能,参考命令如下:
```
- python3.7 -m ais_bench --model=./scnet_bs{batch size}.om --loop=1000 --batchsize={batch size}
+ python3 -m ais_bench --model=./scnet_bs{batch size}.om --loop=1000 --batchsize={batch size}
示例
- python3.7 -m ais_bench --model=./scnet_bs1.om --loop=1000 --batchsize=1
+ python3 -m ais_bench --model=./scnet_bs1.om --loop=1000 --batchsize=1
```
- 参数说明:
diff --git a/ACL_PyTorch/built-in/cv/SE-SSD_for_PyTorch/readme.md b/ACL_PyTorch/built-in/cv/SE-SSD_for_PyTorch/readme.md
index 89b492bb67367c95a5e1d0d4163a08d0f7586342..724f78a1de575cf10dd0daff3785f385abb1d627 100644
--- a/ACL_PyTorch/built-in/cv/SE-SSD_for_PyTorch/readme.md
+++ b/ACL_PyTorch/built-in/cv/SE-SSD_for_PyTorch/readme.md
@@ -4,7 +4,7 @@
- [输入输出数据](#section540883920406)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -44,17 +44,7 @@ SE-SSD(Self-Ensembling Single-Stage Object Detector)是一种基于自集成
| dir_cls_preds | FLOAT32 | batchsize x 200 x 176 x 4 | ND |
| iou_preds | FLOAT32 | batchsize x 200 x 176 x 2 | ND |
-# 推理环境准备
-
-- 该模型需要以下插件与驱动:
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 23.0.RC1 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.3.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.13.1 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | | |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/SE_ResNet50_Pytorch_Infer/README.md b/ACL_PyTorch/built-in/cv/SE_ResNet50_Pytorch_Infer/README.md
index 522873d50a8deb5f6673502ea95de84fa708d878..1ff2c8657fd368d9d6b78bc5efed61d5ce87c56f 100644
--- a/ACL_PyTorch/built-in/cv/SE_ResNet50_Pytorch_Infer/README.md
+++ b/ACL_PyTorch/built-in/cv/SE_ResNet50_Pytorch_Infer/README.md
@@ -5,7 +5,7 @@
- [输入输出数据](#section540883920406)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -48,19 +48,6 @@
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.6.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
diff --git a/ACL_PyTorch/built-in/cv/SFA3D_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/SFA3D_for_Pytorch/README.md
index 87252375cee0933473590c6c0f1552cab424fa01..00f29bcaf8f6f4a42b33baef761867a1a6e612e2 100644
--- a/ACL_PyTorch/built-in/cv/SFA3D_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/SFA3D_for_Pytorch/README.md
@@ -2,7 +2,7 @@
- [概述](#00)
- [输入输出数据](#00_1)
-- [推理环境准备](#01)
+
- [快速上手](#1)
- [获取源码](#1_0)
- [准备数据集](#1_1)
@@ -45,19 +45,6 @@ SFA3D(Super Fast and Accurate 3D Object Detection based on 3D LiDAR Point Clou
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | |
- | Python | 3.7.5 | - |
- | PyTorch | 1.6.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
@@ -305,7 +292,7 @@ SFA3D(Super Fast and Accurate 3D Object Detection based on 3D LiDAR Point Clou
可使用ais_bench推理工具的纯推理模式验证不同batch_size的om模型的性能,参考命令如下:
```
- python3.7 -m ais_bench --model=${om_model_path} --loop=20 --batchsize=${batch_size}
+ python3 -m ais_bench --model=${om_model_path} --loop=20 --batchsize=${batch_size}
```
2. trt纯推理。
diff --git a/ACL_PyTorch/built-in/cv/SSD_resnet34_for_POC/README.md b/ACL_PyTorch/built-in/cv/SSD_resnet34_for_POC/README.md
index 069305922c7ddd88e2e7da5c1ecfa9184d23deb6..209976e07bbd7962a0fa90c44eb366480c4b7762 100644
--- a/ACL_PyTorch/built-in/cv/SSD_resnet34_for_POC/README.md
+++ b/ACL_PyTorch/built-in/cv/SSD_resnet34_for_POC/README.md
@@ -7,7 +7,7 @@
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -57,18 +57,7 @@ SSD模型是用于图像检测的模型,通过基于Resnet34残差卷积网络
| scores | FLOAT32 | 1 x 200 | ND |
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.2 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.3.RC2 | - |
- | Python | 3.7.5 | - |
-
@@ -267,4 +256,9 @@ SSD模型是用于图像检测的模型,通过基于Resnet34残差卷积网络
| Model | batchsize | Accuracy |
| ----------- | --------- | -------- |
| ssd_resnet34 | 1 | map = 20% |
+
+
+# 公网地址说明
+代码涉及的公网地址请参考 public_address_statement.md。
+
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/SSD_resnet34_for_POC/public_address_statement.md b/ACL_PyTorch/built-in/cv/SSD_resnet34_for_POC/public_address_statement.md
new file mode 100644
index 0000000000000000000000000000000000000000..bd450e0c8b7c1872c187687e0dbfacded4640e43
--- /dev/null
+++ b/ACL_PyTorch/built-in/cv/SSD_resnet34_for_POC/public_address_statement.md
@@ -0,0 +1,3 @@
+| 类型 | 开源代码地址 | 文件名 | 公网IP地址/公网URL地址/域名/邮箱地址 | 用途说明 |
+| ---- | ------------ | ------ | ------------------------------------ | -------- |
+| 开发引入 | / | ssd_preprocess.py | # use the scales here: https://github.com/amdegroot/ssd.pytorch/blob/master/data/config.py | 注释说明 |
\ No newline at end of file
diff --git a/ACL_PyTorch/built-in/cv/STGCN_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/STGCN_for_Pytorch/README.md
index 0530a9f1e754b1b637d846c7345dd68c861875f5..39f98d7cca233b9d4b18845c3ccd4c0b001ad55f 100644
--- a/ACL_PyTorch/built-in/cv/STGCN_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/STGCN_for_Pytorch/README.md
@@ -2,7 +2,7 @@
- [概述](#概述)
- [输入输出数据](#输入输出数据)
-- [推理环境](#推理环境)
+
- [快速上手](#快速上手)
- [获取源码](#获取源码)
- [准备数据集](#准备数据集)
@@ -37,20 +37,6 @@ ST-GCN是一种图卷积神经网络,该模型可以实现对人体骨架图
----
-# 推理环境
-
-- 该模型推理所需配套的软件如下:
-
- | 配套 | 版本 | 环境准备指导 |
- | --------- | ------- | ---------- |
- | 固件与驱动 | 1.0.17 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Nvidia-Driver | 460.67 | |
- | CUDA | 10.0 | - |
- | CUDNN | 7.6.5.32 | - |
- | Python | 3.7.5 | - |
-
- 说明:请根据推理卡型号与 CANN 版本选择相匹配的固件与驱动版本。
----
diff --git a/ACL_PyTorch/built-in/cv/Shufflenetv2_for_Pytorch/ReadMe.md b/ACL_PyTorch/built-in/cv/Shufflenetv2_for_Pytorch/ReadMe.md
index dfdeba0a534251609be4504b58a3a7c946e7375e..026b28cb1382be0ac88ca57a614b44e634c1f49c 100644
--- a/ACL_PyTorch/built-in/cv/Shufflenetv2_for_Pytorch/ReadMe.md
+++ b/ACL_PyTorch/built-in/cv/Shufflenetv2_for_Pytorch/ReadMe.md
@@ -4,7 +4,7 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
- [输入输出数据](#section540883920406)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -51,19 +51,6 @@ Shufflenetv2是Shufflenet的升级版本,作为轻量级网络,通过遵循
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
- | 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
- | CANN | 6.0.RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.8.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
diff --git a/ACL_PyTorch/built-in/cv/SuperGlue_with_SuperPoint_for_Pytorch/README.md b/ACL_PyTorch/built-in/cv/SuperGlue_with_SuperPoint_for_Pytorch/README.md
index 6f9780f4640228f5003c1a2d3df51527513d17d6..e81755ac16a8ab2bf39b0a207dba5d07d4ba8431 100644
--- a/ACL_PyTorch/built-in/cv/SuperGlue_with_SuperPoint_for_Pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/SuperGlue_with_SuperPoint_for_Pytorch/README.md
@@ -4,7 +4,7 @@
- [输入输出数据](#section540883920406)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
- [获取源码](#section4622531142816)
@@ -66,19 +66,7 @@ SuperGlue网络用于特征匹配与外点剔除,其使用图神经网络对
| matching_scores0 | FLOAT32 | points_num0 x 1 | ND |
| matching_scores1 | FLOAT32 | points_num1 x 1 | ND |
-# 推理环境准备
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
- | 配套 | 版本 | 环境准备指导 |
- | ------------------------------------------------------------ | ------ | ------------------------------------------------------------ |
- | 固件与驱动 | 1.0.17 | [Pytorch框架推理环境准备](https://gitee.com/link?target=https%3A%2F%2Fwww.hiascend.com%2Fdocument%2Fdetail%2Fzh%2FModelZoo%2Fpytorchframework%2Fpies) |
- | CANN | 6.3RC1 | - |
- | Python | 3.7.5 | - |
- | PyTorch | 1.12.0 | - |
- | 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/resnet50_mmlab_for_pytorch/README.md b/ACL_PyTorch/built-in/cv/resnet50_mmlab_for_pytorch/README.md
index 160cdfdbb87dd92996fca7b51ca3c6d70785d4f7..2037be4c6b0ccfbb7c361ffbb951867d0b3fca09 100644
--- a/ACL_PyTorch/built-in/cv/resnet50_mmlab_for_pytorch/README.md
+++ b/ACL_PyTorch/built-in/cv/resnet50_mmlab_for_pytorch/README.md
@@ -3,7 +3,6 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -46,19 +45,7 @@ ResNet50是针对移动端专门定制的轻量级卷积神经网络,该网络
-# 推理环境准备\[所有版本\]
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ |---------| ------------------------------------------------------------ |
-| 固件与驱动 | 22.0.3 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 6.0.RC1 | - |
-| Python | 3.7.5 | - |
-| PyTorch | 1.8.0 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手
diff --git a/ACL_PyTorch/built-in/cv/resnet50_mmlab_for_pytorch_for_POC/README.md b/ACL_PyTorch/built-in/cv/resnet50_mmlab_for_pytorch_for_POC/README.md
index 9fdba2bf215c369520fcde3edca817c7e2956148..85166d84df42382e4585d64e2733549a7d83ad32 100644
--- a/ACL_PyTorch/built-in/cv/resnet50_mmlab_for_pytorch_for_POC/README.md
+++ b/ACL_PyTorch/built-in/cv/resnet50_mmlab_for_pytorch_for_POC/README.md
@@ -3,7 +3,7 @@
- [概述](#ZH-CN_TOPIC_0000001172161501)
-- [推理环境准备](#ZH-CN_TOPIC_0000001126281702)
+
- [快速上手](#ZH-CN_TOPIC_0000001126281700)
@@ -46,19 +46,7 @@ ResNet50是针对移动端专门定制的轻量级卷积神经网络,该网络
-# 推理环境准备\[所有版本\]
-
-- 该模型需要以下插件与驱动
-
- **表 1** 版本配套表
-| 配套 | 版本 | 环境准备指导 |
-| ------------------------------------------------------------ |---------| ------------------------------------------------------------ |
-| 固件与驱动 | 23.0.rc2 | [Pytorch框架推理环境准备](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
-| CANN | 6.3.RC1 | - |
-| Python | 3.7.5 | - |
-| PyTorch | 1.13.1 | - |
-| 说明:Atlas 300I Duo 推理卡请以CANN版本选择实际固件与驱动版本。 | \ | \ |
# 快速上手