diff --git a/README.md b/README.md index 47d502acbfb722b0487bb6eb4b2451ed375b2a94..48f776c255497bd93810f8efe32e3eaf1919f4cc 100644 --- a/README.md +++ b/README.md @@ -92,6 +92,8 @@ Lockzhiner Vision Module 拥有丰富的 IO 接口,其接口图片如下图所 * [凌智视觉模块条码检测与识别部署指南](./example/special/bar_code_recognition) * [凌智视觉模块人脸检测部署指南](example/vision/face_detection) * [凌智视觉模块人脸识别部署指南](example/vision/face_recognition) +* [凌智视觉模块猫狗识别部署指南](example/special/cat_and_dog_classification) +* [凌智视觉模块花卉识别部署指南](example/special/flower_classfication/) ## 🐛 Bug反馈 diff --git a/docs/introductory_tutorial/connect_device_using_ssh.md b/docs/introductory_tutorial/connect_device_using_ssh.md index a07bdd1a0cea508a2402e350b0e6c90c203fe8ee..752014fefbe2987379a100918239d27dd216bf51 100644 --- a/docs/introductory_tutorial/connect_device_using_ssh.md +++ b/docs/introductory_tutorial/connect_device_using_ssh.md @@ -51,7 +51,7 @@ SSH 是一种用于在不安全网络上安全地访问和传输数据的协议 键盘按下 **Win + Q** 呼出搜索框 -> 输入并点击设置 -![](images/connect_device_using_ssh/open_setting.png) +![](./images/connect_device_using_ssh/open_setting_win11.png) 点击 **网络和 Internet** -> 点击 **以太网** @@ -68,7 +68,19 @@ SSH 是一种用于在不安全网络上安全地访问和传输数据的协议 ### 3.2 Win10 设置本机 IP 地址 -待补充 +键盘按下 **Win + Q** 呼出搜索框 -> 输入并点击设置 +![](./images/connect_device_using_ssh/open-setting_win10.png) + +点击 **网络和 Internet** -> 点击 **以太网** -> 点击 **更改适配器** + +![](./images/connect_device_using_ssh/select_adapter_win10.png) + +连接设备点击新增的**虚拟网卡** +![](./images/connect_device_using_ssh/config_net_win10.png) + +点击 **编辑** 并配置 IP 地址。这里将 IPV4 地址设置为 **10.1.1.155**,子网掩码设置为 **255.0.0.0**。 + +![](./images/connect_device_using_ssh/config_ipv4_win10.png) ## 4 使用 SSH 连接设备 diff --git a/docs/introductory_tutorial/images/connect_device_using_ssh/click_internet_win10.png b/docs/introductory_tutorial/images/connect_device_using_ssh/click_internet_win10.png new file mode 100644 index 0000000000000000000000000000000000000000..d7579b990ddc4f5820c58bcc494f583aa847cbe6 Binary files /dev/null and b/docs/introductory_tutorial/images/connect_device_using_ssh/click_internet_win10.png differ diff --git a/docs/introductory_tutorial/images/connect_device_using_ssh/config_ipv4_win10.png b/docs/introductory_tutorial/images/connect_device_using_ssh/config_ipv4_win10.png new file mode 100644 index 0000000000000000000000000000000000000000..38d293920affc8e5a6ef71a7c76ced8f4e3aab43 Binary files /dev/null and b/docs/introductory_tutorial/images/connect_device_using_ssh/config_ipv4_win10.png differ diff --git a/docs/introductory_tutorial/images/connect_device_using_ssh/config_net_win10.png b/docs/introductory_tutorial/images/connect_device_using_ssh/config_net_win10.png new file mode 100644 index 0000000000000000000000000000000000000000..dbbb0a41399bb4afef879e652e9304ee2aad73ac Binary files /dev/null and b/docs/introductory_tutorial/images/connect_device_using_ssh/config_net_win10.png differ diff --git a/docs/introductory_tutorial/images/connect_device_using_ssh/open-setting_win10.png b/docs/introductory_tutorial/images/connect_device_using_ssh/open-setting_win10.png new file mode 100644 index 0000000000000000000000000000000000000000..66290245a43f528ade249002edb607ea89151140 Binary files /dev/null and b/docs/introductory_tutorial/images/connect_device_using_ssh/open-setting_win10.png differ diff --git a/docs/introductory_tutorial/images/connect_device_using_ssh/open_setting.png b/docs/introductory_tutorial/images/connect_device_using_ssh/open_setting_win11.png similarity index 100% rename from docs/introductory_tutorial/images/connect_device_using_ssh/open_setting.png rename 
to docs/introductory_tutorial/images/connect_device_using_ssh/open_setting_win11.png diff --git a/docs/introductory_tutorial/images/connect_device_using_ssh/select_adapter_win10.png b/docs/introductory_tutorial/images/connect_device_using_ssh/select_adapter_win10.png new file mode 100644 index 0000000000000000000000000000000000000000..4a55da4b0353bd2dc535442472de7ea621e5d907 Binary files /dev/null and b/docs/introductory_tutorial/images/connect_device_using_ssh/select_adapter_win10.png differ diff --git a/example/special/cat_and_dog_classification/README.md b/example/special/cat_and_dog_classification/README.md new file mode 100644 index 0000000000000000000000000000000000000000..79e003e27e51fa3ae60660d1e7229d089b44d235 --- /dev/null +++ b/example/special/cat_and_dog_classification/README.md @@ -0,0 +1,50 @@ +

凌智视觉模块猫狗分类识别部署指南

+ +发布版本:V0.0.0 + +日期:2024-11-13 + +文件密级:□绝密 □秘密 □内部资料 ■公开 + +--- + +**免责声明** + +本文档按**现状**提供,福州凌睿智捷电子有限公司(以下简称**本公司**)不对本文档中的任何陈述、信息和内容的准确性、可靠性、完整性、适销性、适用性及非侵权性提供任何明示或暗示的声明或保证。本文档仅作为使用指导的参考。 + +由于产品版本升级或其他原因,本文档可能在未经任何通知的情况下不定期更新或修改。 + +**读者对象** + +本教程适用于以下工程师: + +- 技术支持工程师 +- 软件开发工程师 + +**修订记录** + +| **日期** | **版本** | **作者** | **修改说明** | +| :--------- | -------- | -------- | ------------ | +| 2024/11/13 | 0.0.0 | 钟海滨 | 初始版本 | + +## 1 简介 + +猫狗分类是计算机视觉入门的常见任务,我们基于 [凌智视觉模块分类模型部署指南](../../vision/classification) 训练了凌智视觉模块专用的模型,该模型能够实现对猫狗的分类识别。 + +## 2 运行前的准备 + +- 请确保你已经下载了 [凌智视觉模块猫狗分类识别模型](https://gitee.com/LockzhinerAI/LockzhinerVisionModule/releases/download/v0.0.2/LZ-Dog-and-Cat-classfication.rknn) + +## 3 在凌智视觉模块上部署猫狗分类识别案例 + +下载模型后,请参考以下教程使用 Python 在凌智视觉模块上部署分类模型例程: + +- [凌智视觉模块猫狗分类识别 Python 部署指南](./python) + +## 4 模型性能指标 + +以下测试数据由模型连续执行 Predict 函数 1000 次的平均耗时计算得出 + +| 分类模型 | FPS(帧/s) | 精度(%) | +|:-------:|:----:|:----:| +|LZ-Dog-and-Cat-Classfication|35|| diff --git a/example/special/cat_and_dog_classification/python/README.md b/example/special/cat_and_dog_classification/python/README.md new file mode 100644 index 0000000000000000000000000000000000000000..b788cca34a0cd5670e2ee3a09dca9d5d92cb72a5 --- /dev/null +++ b/example/special/cat_and_dog_classification/python/README.md @@ -0,0 +1,108 @@ +
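上表中的帧率可以用一个简单的测速脚本自行复现。下面给出一个示意性的写法:对同一帧摄像头图像连续执行 1000 次 Predict 并统计平均耗时。该脚本并非官方基准测试工具,其中脚本文件名、"以固定帧作为输入"以及使用 time.perf_counter 计时都只是本示例的假设做法,仅供参考。

```python
import sys
import time

from lockzhiner_vision_module.cv2 import VideoCapture
from lockzhiner_vision_module.vision import PaddleClas

if __name__ == "__main__":
    if len(sys.argv) != 2:
        # 用法示例,模型文件名请按实际下载的文件替换
        print("Usage: python benchmark_classfication.py LZ-Dog-and-Cat-classfication.rknn")
        exit(1)

    model = PaddleClas()
    if model.initialize(sys.argv[1]) is False:
        print("Failed to initialize PaddleClas")
        exit(1)

    video_capture = VideoCapture()
    if video_capture.open(0) is False:
        print("Failed to open capture")
        exit(1)

    # 先取到一帧有效图像作为固定输入,避免把取帧时间计入推理耗时
    ret, mat = video_capture.read()
    while ret is False:
        ret, mat = video_capture.read()

    # 连续执行 1000 次 Predict,统计平均耗时并换算为 FPS
    total = 1000
    start = time.perf_counter()
    for _ in range(total):
        model.predict(mat)
    elapsed = time.perf_counter() - start

    print(f"Average latency: {elapsed / total * 1000:.2f} ms")
    print(f"FPS: {total / elapsed:.2f}")
```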

凌智视觉模块猫狗分类识别 Python 部署指南

+ +发布版本:V0.0.0 + +日期:2024-11-13 + +文件密级:□绝密 □秘密 □内部资料 ■公开 + +--- + +**免责声明** + +本文档按**现状**提供,福州凌睿智捷电子有限公司(以下简称**本公司**)不对本文档中的任何陈述、信息和内容的准确性、可靠性、完整性、适销性、适用性及非侵权性提供任何明示或暗示的声明或保证。本文档仅作为使用指导的参考。 + +由于产品版本升级或其他原因,本文档可能在未经任何通知的情况下不定期更新或修改。 + +**读者对象** + +本教程适用于以下工程师: + +- 技术支持工程师 +- 软件开发工程师 + +**修订记录** + +| **日期** | **版本** | **作者** | **修改说明** | +| :--------- | -------- | -------- | ------------ | +| 2024/11/13 | 0.0.0 | 钟海滨 | 初始版本 | + +## 1 简介 + +接下来让我们基于 Python 来部署猫狗分类识别案例,在开始本章节前: + +- 请确保你已经参考 [凌智视觉模块猫狗分类识别部署指南](../README.md) 正确下载了模型。 +- 请确保你已经参考 [凌智视觉模块摄像头部署指南](../../../periphery/capture/README.md) 正确下载了凌智视觉模块图片传输助手。 +- 请确保你已经按照 [开发环境搭建指南](../../../../docs/introductory_tutorial/python_development_environment.md) 正确配置了开发环境。 + +## 2 Python API 文档 + +同[分类模型 Python 部署 API 文档](../../../vision/classification/python/README.md) + +## 3 项目介绍 + +为了方便大家入手,我们做了一个简易的猫狗分类识别例程。该程序可以使用摄像头进行端到端推理,并可视化推理结果到凌智视觉模块图片传输助手。 + +```python +from lockzhiner_vision_module.cv2 import VideoCapture +from lockzhiner_vision_module.vision import PaddleClas, visualize +from lockzhiner_vision_module.edit import Edit +import sys + +if __name__ == "__main__": + args = sys.argv + if len(args) != 2: + print("Need model path. Example: python test_dog_and_cat_classfication.py LZ-Dog-and-Cat-Classfication.rknn") + exit(1) + + edit = Edit() + edit.start_and_accept_connection() + + model = PaddleClas() + if model.initialize(args[1]) is False: + print("Failed to initialize PaddleClas") + exit(1) + + video_capture = VideoCapture() + if video_capture.open(0) is False: + print("Failed to open capture") + exit(1) + + while True: + ret, mat = video_capture.read() + if ret is False: + continue + + result = model.predict(mat) + print(f"The label_id is {result.label_id} and the score is {result.score}") + + vis_mat = visualize(mat, result) + edit.print(vis_mat) +``` + +## 4 上传并测试 Python 程序 + +参考 [连接设备指南](../../../../docs/introductory_tutorial/connect_device_using_ssh.md) 正确连接 Lockzhiner Vision Module 设备。 + +![](../../../../docs/introductory_tutorial/images/connect_device_using_ssh/ssh_success.png) + +请使用 Electerm Sftp 依次上传以下两个文件: + +- 进入存放 **test_dog_and_cat_classfication.py** 脚本文件的目录,将 **test_dog_and_cat_classfication.py** 上传到 Lockzhiner Vision Module +- 进入存放 **LZ-Dog-and-Cat-Classfication.rknn** 模型存放的目录(模型存放在训练模型后下载的 output 文件夹内),将 **LZ-Dog-and-Cat-Classfication.rknn** 上传到 Lockzhiner Vision Module + +![](images/stfp.png) + +请使用 Electerm Ssh 并在命令行中执行以下命令: + +```bash +python test_dog_and_cat_classfication.py LZ-Dog-and-Cat-Classfication.rknn +``` + +运行程序后,使用凌智视觉模块图片传输助手连接设备,屏幕上开始打印标签索引和置信度,凌智视觉模块图片传输助手出现可视化的结果 + +![alt text](images/cat.png) + + + + diff --git a/example/special/cat_and_dog_classification/python/images/cat.png b/example/special/cat_and_dog_classification/python/images/cat.png new file mode 100644 index 0000000000000000000000000000000000000000..d39bebd46a9083bd30b881b4510e27977aabf5b6 Binary files /dev/null and b/example/special/cat_and_dog_classification/python/images/cat.png differ diff --git a/example/special/cat_and_dog_classification/python/images/stfp.png b/example/special/cat_and_dog_classification/python/images/stfp.png new file mode 100644 index 0000000000000000000000000000000000000000..ed0aa465776c65c295de9c8e051ba65573ef89a9 Binary files /dev/null and b/example/special/cat_and_dog_classification/python/images/stfp.png differ diff --git a/example/special/cat_and_dog_classification/python/test_dog_and_cat_classfication.py b/example/special/cat_and_dog_classification/python/test_dog_and_cat_classfication.py new file mode 
100644 index 0000000000000000000000000000000000000000..ac43bf881122aa59979188844f7951803850a5e4 --- /dev/null +++ b/example/special/cat_and_dog_classification/python/test_dog_and_cat_classfication.py @@ -0,0 +1,36 @@ +from lockzhiner_vision_module.cv2 import VideoCapture +from lockzhiner_vision_module.vision import PaddleClas, visualize +from lockzhiner_vision_module.edit import Edit +import sys + +if __name__ == "__main__": + args = sys.argv + if len(args) != 2: + print( + "Need model path. Example: python test_dog_and_cat_classfication.py LZ-Dog-and-Cat-Classfication.rknn" + ) + exit(1) + + edit = Edit() + edit.start_and_accept_connection() + + model = PaddleClas() + if model.initialize(args[1]) is False: + print("Failed to initialize PaddleClas") + exit(1) + + video_capture = VideoCapture() + if video_capture.open(0) is False: + print("Failed to open capture") + exit(1) + + while True: + ret, mat = video_capture.read() + if ret is False: + continue + + result = model.predict(mat) + print(f"The label_id is {result.label_id} and the score is {result.score}") + + vis_mat = visualize(mat, result) + edit.print(vis_mat) diff --git a/example/special/flower_classfication/README.md b/example/special/flower_classfication/README.md new file mode 100644 index 0000000000000000000000000000000000000000..0a875177c804b5fe25aa5022dcc837b92a63e565 --- /dev/null +++ b/example/special/flower_classfication/README.md @@ -0,0 +1,50 @@ +
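上面的猫狗分类例程只打印原始的 label_id 和 score。下面给出一个示意片段,演示如何把 label_id 映射为可读的类别名并按置信度过滤:其中"0 对应 cat、1 对应 dog"的顺序以及 0.5 的阈值都是本示例的假设,实际类别顺序取决于训练数据集,请以训练时的类别列表为准。

```python
# 示意代码:将分类结果的 label_id 映射为可读类别名,并按置信度过滤
# 假设:类别顺序为 0 -> cat、1 -> dog(请以训练数据集的实际类别顺序为准)
CLASS_NAMES = {0: "cat", 1: "dog"}
SCORE_THRESHOLD = 0.5  # 假设的置信度阈值,可按需调整


def describe(result):
    """把预测结果对象(带 label_id 和 score 字段)转成便于打印的文字描述。"""
    name = CLASS_NAMES.get(result.label_id, f"unknown({result.label_id})")
    if result.score < SCORE_THRESHOLD:
        return f"low confidence: {name} ({result.score:.2f})"
    return f"{name} ({result.score:.2f})"


if __name__ == "__main__":
    # 简单自测:用一个带 label_id/score 字段的替身对象验证输出格式
    from collections import namedtuple

    FakeResult = namedtuple("FakeResult", ["label_id", "score"])
    print(describe(FakeResult(label_id=1, score=0.93)))  # 预期输出: dog (0.93)
```

在例程的 while 循环中,可以把打印 label_id 的那一行替换为 `print(describe(result))`,即可直接看到类别名。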

凌智视觉模块花卉分类识别部署指南

+ +发布版本:V0.0.0 + +日期:2024-11-13 + +文件密级:□绝密 □秘密 □内部资料 ■公开 + +--- + +**免责声明** + +本文档按**现状**提供,福州凌睿智捷电子有限公司(以下简称**本公司**)不对本文档中的任何陈述、信息和内容的准确性、可靠性、完整性、适销性、适用性及非侵权性提供任何明示或暗示的声明或保证。本文档仅作为使用指导的参考。 + +由于产品版本升级或其他原因,本文档可能在未经任何通知的情况下不定期更新或修改。 + +**读者对象** + +本教程适用于以下工程师: + +- 技术支持工程师 +- 软件开发工程师 + +**修订记录** + +| **日期** | **版本** | **作者** | **修改说明** | +| :--------- | -------- | -------- | ------------ | +| 2024/11/13 | 0.0.0 | 钟海滨 | 初始版本 | + +## 1 简介 + +花卉种类繁多,为了应对由此带来的分类挑战,我们基于 [凌智视觉模块分类模型部署指南](../../vision/classification) 训练了凌智视觉模块专用的模型,该模型能够实现对五种常见花卉的精确分类,包括向日葵、蒲公英、雏菊、玫瑰和郁金香。 + +## 2 运行前的准备 + +- 请确保你已经下载了 [凌智视觉模块花卉分类识别模型](https://gitee.com/LockzhinerAI/LockzhinerVisionModule/releases/download/v0.0.2/LZ-Flower-Classfication.rknn) + +## 3 在凌智视觉模块上部署花卉分类识别案例 + +下载模型后,请参考以下教程使用 Python 在凌智视觉模块上部署分类模型例程: + +- [凌智视觉模块花卉分类识别 Python 部署指南](./python) + +## 4 模型性能指标 + +以下测试数据由模型连续执行 Predict 函数 1000 次的平均耗时计算得出 + +| 分类模型 | FPS(帧/s) | 精度(%) | +|:-------:|:----:|:----:| +|LZ-Flower-Classfication|35|| diff --git a/example/special/flower_classfication/python/README.md b/example/special/flower_classfication/python/README.md new file mode 100644 index 0000000000000000000000000000000000000000..6c853b30d7c553e9dfa61a5e9e27dc80b8da7ad2 --- /dev/null +++ b/example/special/flower_classfication/python/README.md @@ -0,0 +1,108 @@ +
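与猫狗示例类似,花卉模型输出的也是类别索引。下面的示意片段把索引映射为上文提到的五种花卉名称,索引顺序只是本示例的假设(通常由训练数据集的类别列表决定),请以实际训练配置为准。

```python
# 示意代码:花卉类别索引到名称的映射,索引顺序为假设值,请以训练时的类别列表为准
FLOWER_NAMES = {
    0: "daisy(雏菊)",
    1: "dandelion(蒲公英)",
    2: "rose(玫瑰)",
    3: "sunflower(向日葵)",
    4: "tulip(郁金香)",
}


def flower_name(label_id):
    """返回 label_id 对应的花卉名称,未知索引时原样返回编号。"""
    return FLOWER_NAMES.get(label_id, f"unknown({label_id})")
```

在花卉识别例程中,可以用 `print(flower_name(result.label_id), result.score)` 代替原有的打印语句。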

凌智视觉模块花卉分类识别 Python 部署指南

+ +发布版本:V0.0.0 + +日期:2024-11-13 + +文件密级:□绝密 □秘密 □内部资料 ■公开 + +--- + +**免责声明** + +本文档按**现状**提供,福州凌睿智捷电子有限公司(以下简称**本公司**)不对本文档中的任何陈述、信息和内容的准确性、可靠性、完整性、适销性、适用性及非侵权性提供任何明示或暗示的声明或保证。本文档仅作为使用指导的参考。 + +由于产品版本升级或其他原因,本文档可能在未经任何通知的情况下不定期更新或修改。 + +**读者对象** + +本教程适用于以下工程师: + +- 技术支持工程师 +- 软件开发工程师 + +**修订记录** + +| **日期** | **版本** | **作者** | **修改说明** | +| :--------- | -------- | -------- | ------------ | +| 2024/11/13 | 0.0.0 | 钟海滨 | 初始版本 | + +## 1 简介 + +接下来让我们基于 Python 来部署花卉分类识别案例,在开始本章节前: + +- 请确保你已经参考 [凌智视觉模块花卉分类识别部署指南](../README.md) 正确下载了模型。 +- 请确保你已经参考 [凌智视觉模块摄像头部署指南](../../../periphery/capture/README.md) 正确下载了凌智视觉模块图片传输助手。 +- 请确保你已经按照 [开发环境搭建指南](../../../../docs/introductory_tutorial/python_development_environment.md) 正确配置了开发环境。 + +## 2 Python API 文档 + +同[分类模型 Python 部署 API 文档](../../../vision/classification/python/README.md) + +## 3 项目介绍 + +为了方便大家入手,我们做了一个简易的花卉分类识别例程。该程序可以使用摄像头进行端到端推理,并可视化推理结果到凌智视觉模块图片传输助手。 + +```python +from lockzhiner_vision_module.cv2 import VideoCapture +from lockzhiner_vision_module.vision import PaddleClas, visualize +from lockzhiner_vision_module.edit import Edit +import sys + +if __name__ == "__main__": + args = sys.argv + if len(args) != 2: + print("Need model path. Example: python test_flower-classfication.py LZ-Flower-Classfication.rknn") + exit(1) + + edit = Edit() + edit.start_and_accept_connection() + + model = PaddleClas() + if model.initialize(args[1]) is False: + print("Failed to initialize PaddleClas") + exit(1) + + video_capture = VideoCapture() + if video_capture.open(0) is False: + print("Failed to open capture") + exit(1) + + while True: + ret, mat = video_capture.read() + if ret is False: + continue + + result = model.predict(mat) + print(f"The label_id is {result.label_id} and the score is {result.score}") + + vis_mat = visualize(mat, result) + edit.print(vis_mat) +``` + +## 4 上传并测试 Python 程序 + +参考 [连接设备指南](../../../../docs/introductory_tutorial/connect_device_using_ssh.md) 正确连接 Lockzhiner Vision Module 设备。 + +![](../../../../docs/introductory_tutorial/images/connect_device_using_ssh/ssh_success.png) + +请使用 Electerm Sftp 依次上传以下两个文件: + +- 进入存放 **test_flower-classfication.py** 脚本文件的目录,将 **test_flower-classfication.py** 上传到 Lockzhiner Vision Module +- 进入存放 **LZ-Flower-Classfication.rknn** 模型存放的目录(模型存放在训练模型后下载的 output 文件夹内),将 **LZ-Flower-Classfication.rknn** 上传到 Lockzhiner Vision Module + +![](images/stfp.png) + +请使用 Electerm Ssh 并在命令行中执行以下命令: + +```bash +python test_flower-classfication.py LZ-Flower-Classfication.rknn +``` + +运行程序后,使用凌智视觉模块图片传输助手连接设备,屏幕上开始打印标签索引和置信度,凌智视觉模块图片传输助手出现可视化的结果 + +![alt text](images/tulips.png) + + + + diff --git a/example/special/flower_classfication/python/images/daisy.png b/example/special/flower_classfication/python/images/daisy.png new file mode 100644 index 0000000000000000000000000000000000000000..5af34de99568b5dc6cb5716f5c6e9f2b708bfb2d Binary files /dev/null and b/example/special/flower_classfication/python/images/daisy.png differ diff --git a/example/special/flower_classfication/python/images/dandelion.png b/example/special/flower_classfication/python/images/dandelion.png new file mode 100644 index 0000000000000000000000000000000000000000..cc8daf50f38a16a8aa9af1f7586caa7a34735826 Binary files /dev/null and b/example/special/flower_classfication/python/images/dandelion.png differ diff --git a/example/special/flower_classfication/python/images/roses.png b/example/special/flower_classfication/python/images/roses.png new file mode 100644 index 0000000000000000000000000000000000000000..490425ed8a84d055ef1d6e9d5f05179ca5fed5b6 Binary
files /dev/null and b/example/special/flower_classfication/python/images/roses.png differ diff --git a/example/special/flower_classfication/python/images/stfp.png b/example/special/flower_classfication/python/images/stfp.png new file mode 100644 index 0000000000000000000000000000000000000000..02d7cc03282dd76568b3b8ebe8a5138890a58d46 Binary files /dev/null and b/example/special/flower_classfication/python/images/stfp.png differ diff --git a/example/special/flower_classfication/python/images/sunflower.png b/example/special/flower_classfication/python/images/sunflower.png new file mode 100644 index 0000000000000000000000000000000000000000..28338efd74af4ed09eb9a9141ccfd230bd91cf8d Binary files /dev/null and b/example/special/flower_classfication/python/images/sunflower.png differ diff --git a/example/special/flower_classfication/python/images/tulips.png b/example/special/flower_classfication/python/images/tulips.png new file mode 100644 index 0000000000000000000000000000000000000000..6ba24f31ec95a3f62ac63340b7be7423ff28913c Binary files /dev/null and b/example/special/flower_classfication/python/images/tulips.png differ diff --git a/example/special/flower_classfication/python/test_flower-classfication.py b/example/special/flower_classfication/python/test_flower-classfication.py new file mode 100644 index 0000000000000000000000000000000000000000..ac43bf881122aa59979188844f7951803850a5e4 --- /dev/null +++ b/example/special/flower_classfication/python/test_flower-classfication.py @@ -0,0 +1,36 @@ +from lockzhiner_vision_module.cv2 import VideoCapture +from lockzhiner_vision_module.vision import PaddleClas, visualize +from lockzhiner_vision_module.edit import Edit +import sys + +if __name__ == "__main__": + args = sys.argv + if len(args) != 2: + print( + "Need model path. Example: python test_flower-classfication.py LZ-Flower-Classfication.rknn" + ) + exit(1) + + edit = Edit() + edit.start_and_accept_connection() + + model = PaddleClas() + if model.initialize(args[1]) is False: + print("Failed to initialize PaddleClas") + exit(1) + + video_capture = VideoCapture() + if video_capture.open(0) is False: + print("Failed to open capture") + exit(1) + + while True: + ret, mat = video_capture.read() + if ret is False: + continue + + result = model.predict(mat) + print(f"The label_id is {result.label_id} and the score is {result.score}") + + vis_mat = visualize(mat, result) + edit.print(vis_mat) diff --git a/example/vision/detetcion/README.md b/example/vision/detetcion/README.md index db16b50a8eae776d24b3777d789a89fd7820e578..d9aa648df9ba2960b65b7ec66db58d98ae9d963c 100644 --- a/example/vision/detetcion/README.md +++ b/example/vision/detetcion/README.md @@ -1,8 +1,8 @@

凌智视觉模块检测模型部署指南

-发布版本:V0.0.0 +发布版本:V0.0.1 -日期:2024-09-11 +日期:2024-11-16 文件密级:□绝密 □秘密 □内部资料 ■公开 @@ -26,6 +26,7 @@ | **日期** | **版本** | **作者** | **修改说明** | | :--------- | -------- | -------- | ------------ | | 2024/09/11 | 0.0.0 | 郑必城 | 初始版本 | +| 2024/11/16 | 0.0.1 | 郑必城 | 优化模型训练文档 | ## 1 简介 @@ -73,6 +74,12 @@ Labelme 是一个 python 语言编写,带有图形界面的图像标注软件 ![](images/flags.png) +> 请注意: +> +> 1. flags.txt 中的标签应该填入真实的检测类别,例如这里我们识别的是"左转、右转、直行、停止、红、绿",所以依次填入他们的英文 +> +> 2. 遇到英文字符中存在空格的情况请以下划线代替 + 进入 **Dataset** 文件夹所在的目录,按住键盘`Shift`键后,单击鼠标右键,点击 **在此处打开 Powershell 窗口**。 ![](images/open_powershll.png) @@ -94,7 +101,7 @@ Labelme 是一个 python 语言编写,带有图形界面的图像标注软件 命令执行后将打开 **Labelme** 程序,如下图 -![](images/Labelme.png) +![](./images/labelme_bak.png) ### 2.3 标注并产出数据集 @@ -222,6 +229,10 @@ AI Studio 是基于百度深度学习开源平台飞桨的人工智能学习与 ![](images/config_project_2.png) +配置完数据集路径后,我们还需要配置目标检测类别数,这里我们检测 6 个类别的数据,因此将该参数配置为 6,如下图: + +![](images/config_project_3.png) + ### 3.8 开始训练 点击**运行全部 Cell** 开启训练,耐心等待训练完成,如下图: diff --git a/example/vision/detetcion/images/FPS.png b/example/vision/detetcion/images/FPS.png new file mode 100644 index 0000000000000000000000000000000000000000..02bc484d05b9104919eac338324867e817c2809d Binary files /dev/null and b/example/vision/detetcion/images/FPS.png differ diff --git a/example/vision/detetcion/images/config_project_3.png b/example/vision/detetcion/images/config_project_3.png new file mode 100644 index 0000000000000000000000000000000000000000..92ec053dccdaca5e052e000a90eab7511b4db5dc Binary files /dev/null and b/example/vision/detetcion/images/config_project_3.png differ diff --git a/example/vision/detetcion/images/img.png b/example/vision/detetcion/images/img.png new file mode 100644 index 0000000000000000000000000000000000000000..9a587898c0cb587d31032965cac0709552064257 Binary files /dev/null and b/example/vision/detetcion/images/img.png differ diff --git a/example/vision/detetcion/images/labelme.png b/example/vision/detetcion/images/labelme.png index 61f7186f690e2c8b0110740d493fef7c872d4e62..c324641d51a45a29dd61c0d7db20ef4592770e99 100644 Binary files a/example/vision/detetcion/images/labelme.png and b/example/vision/detetcion/images/labelme.png differ diff --git a/example/vision/detetcion/images/labelme_bak.png b/example/vision/detetcion/images/labelme_bak.png new file mode 100644 index 0000000000000000000000000000000000000000..61f7186f690e2c8b0110740d493fef7c872d4e62 Binary files /dev/null and b/example/vision/detetcion/images/labelme_bak.png differ diff --git a/example/vision/detetcion/images/results_and_fps.png b/example/vision/detetcion/images/results_and_fps.png new file mode 100644 index 0000000000000000000000000000000000000000..81c8fba405412f77620ad3ef71b68b966feff763 Binary files /dev/null and b/example/vision/detetcion/images/results_and_fps.png differ diff --git a/example/vision/detetcion/images/sftp.png b/example/vision/detetcion/images/sftp.png new file mode 100644 index 0000000000000000000000000000000000000000..4799519db7f1cf488ed6e14d028b5f3379f341f2 Binary files /dev/null and b/example/vision/detetcion/images/sftp.png differ diff --git a/example/vision/detetcion/images/start.png b/example/vision/detetcion/images/start.png new file mode 100644 index 0000000000000000000000000000000000000000..83e7ed7c5864515264fcb38c50e6aa6ef2219d16 Binary files /dev/null and b/example/vision/detetcion/images/start.png differ diff --git a/example/vision/detetcion/python/README.md b/example/vision/detetcion/python/README.md index 4e87a00096a1ff983d53ea17d00444a4aebfa9a5..dd0551cddec086ddca323ac76044814b6a518c3b 100644 --- 
a/example/vision/detetcion/python/README.md +++ b/example/vision/detetcion/python/README.md @@ -246,19 +246,21 @@ if __name__ == "__main__": - 进入存放 **test_detection.py** 脚本文件的目录,将 **test_detection.py** 上传到 Lockzhiner Vision Module - 进入存放 **LZ-Picodet.rknn(也可能是其他模型)** 模型存放的目录(模型存放在训练模型后下载的 output 文件夹内),将 **LZ-Picodet.rknn** 上传到 Lockzhiner Vision Module -![](images/stfp_0.png) - -![](images/stfp_1.png) +![](../images/sftp.png) 请使用 Electerm Ssh 并在命令行中执行以下命令: ```bash python test_detection.py LZ-Picodet.rknn ``` +![](../images/start.png) + +连接凌智视觉模块图片传输助手[凌智视觉模块图片传输助手下载地址](https://gitee.com/LockzhinerAI/LockzhinerVisionModule/releases/download/v0.0.0/LockzhinerVisionModuleImageFetcher.exe)后,选择连接设备 +![](../images/img.png) 运行程序后,屏幕上开始打印矩形框信息,标签信息和置信度,并在一段时间后输出 FPS 值 -![alt text](result_0.png) +![alt text](../images/results_and_fps.png) diff --git a/example/vision/face_detection/images/img.png b/example/vision/face_detection/images/img.png new file mode 100644 index 0000000000000000000000000000000000000000..5006d38434c3673d196f67a50ccf56d8f515651b Binary files /dev/null and b/example/vision/face_detection/images/img.png differ diff --git a/example/vision/face_detection/images/results.png b/example/vision/face_detection/images/results.png new file mode 100644 index 0000000000000000000000000000000000000000..e2276adbf4d3b19e859cf4dba621842060935b2a Binary files /dev/null and b/example/vision/face_detection/images/results.png differ diff --git a/example/vision/face_detection/images/start.png b/example/vision/face_detection/images/start.png new file mode 100644 index 0000000000000000000000000000000000000000..dd1ceb9225b17cc96060b2a881aa1dfb885fc1d5 Binary files /dev/null and b/example/vision/face_detection/images/start.png differ diff --git a/example/vision/face_detection/images/upload.png b/example/vision/face_detection/images/upload.png new file mode 100644 index 0000000000000000000000000000000000000000..e1113951fd97a836adb74ac6c99883d4c08ae9dd Binary files /dev/null and b/example/vision/face_detection/images/upload.png differ diff --git a/example/vision/face_detection/python/README.md b/example/vision/face_detection/python/README.md index fc693f3b17bb71369db512588ddd273127fa35d0..6495c8d14473e8db3ad10b9637a51eaccc372c66 100644 --- a/example/vision/face_detection/python/README.md +++ b/example/vision/face_detection/python/README.md @@ -265,19 +265,24 @@ if __name__ == "__main__": - 进入存放 **test_retina_face.py** 脚本文件的目录,将 **test_retina_face.py** 上传到 Lockzhiner Vision Module - 进入存放 **LZ-RetinaFace.rknn(也可能是其他模型)** 模型存放的目录(模型存放在训练模型后下载的 output 文件夹内),将 **LZ-RetinaFace.rknn** 上传到 Lockzhiner Vision Module -![](images/stfp_0.png) +![](../images/upload.png) + -![](images/stfp_1.png) 请使用 Electerm Ssh 并在命令行中执行以下命令: ```bash python test_retina_face.py LZ-RetinaFace.rknn ``` +![](../images/start.png) + +连接凌智视觉模块图片传输助手[凌智视觉模块图片传输助手下载地址](https://gitee.com/LockzhinerAI/LockzhinerVisionModule/releases/download/v0.0.0/LockzhinerVisionModuleImageFetcher.exe)后,选择连接设备 + +![](../images/img.png) 运行程序后,屏幕上开始打印矩形框信息和置信度,并在一段时间后输出 FPS 值 -![alt text](result_0.png) +![alt text](../images/results.png)
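补充说明:上述各例程都通过 Electerm Sftp 手动上传脚本和模型。如果更习惯命令行,也可以参考下面的示意脚本在 PC 端通过 SFTP 上传文件。脚本依赖第三方库 paramiko(pip install paramiko),其中设备 IP、用户名、密码、远端目录以及待上传的文件名都是假设值,请按实际环境修改,本脚本并非官方提供的工具。

```python
# 示意代码:在 PC 端通过 SFTP 将脚本与模型上传到 Lockzhiner Vision Module
# 假设:设备 IP、用户名、密码、远端目录与文件名均需按实际情况替换;需要先 pip install paramiko
import paramiko

DEVICE_IP = "10.1.1.100"  # 假设的设备 IP,请替换为设备实际地址
USERNAME = "root"         # 假设的登录用户名
PASSWORD = "password"     # 假设的登录密码
FILES = [
    ("test_detection.py", "/root/test_detection.py"),
    ("LZ-Picodet.rknn", "/root/LZ-Picodet.rknn"),
]

if __name__ == "__main__":
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(DEVICE_IP, username=USERNAME, password=PASSWORD)

    sftp = client.open_sftp()
    for local_path, remote_path in FILES:
        print(f"Uploading {local_path} -> {remote_path}")
        sftp.put(local_path, remote_path)

    sftp.close()
    client.close()
```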