diff --git a/README.md b/README.md
index 12382f995e82bf5794c836e2e86a65713f65e286..e60d2692b2f204cd81657e52904024d5928278ac 100644
--- a/README.md
+++ b/README.md
@@ -88,7 +88,7 @@ Lockzhiner Vision Module has a rich set of IO interfaces; the interface diagram is shown in the figure
* [Lockzhiner Vision Module Handwritten Digit Classification Deployment Guide](./example/special/digit_handwritten_recognition)
* [Lockzhiner Vision Module Cat and Dog Classification Deployment Guide](example/special/cat_and_dog_classification)
* [Lockzhiner Vision Module Flower Classification Deployment Guide](example/special/flower_classfication/)
-
+* [Lockzhiner Vision Module Mask-Wearing Classification Model Deployment Guide](example/special/maskwear_classfication)
### 👍 Object Detection Examples
Object Detection is one of the key tasks in the computer vision field of deep learning. It aims to identify all objects of interest in an image or video and to accurately locate each object's bounding box. Unlike classification, object detection must not only predict the class of each object but also annotate its position in the image. In general, the annotation process for object detection is relatively complex, so it suits scenarios that require both classifying and localizing targets.
diff --git a/example/special/maskwear_classfication/README.md b/example/special/maskwear_classfication/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..c27f6cde136932accf1fe819ef330bcdb92913aa
--- /dev/null
+++ b/example/special/maskwear_classfication/README.md
@@ -0,0 +1,51 @@
+
+Lockzhiner Vision Module Mask-Wearing Classification Deployment Guide
+
+Release version: V0.0.0
+
+Date: 2024-11-20
+
+Document classification: □ Top Secret □ Secret □ Internal ■ Public
+
+---
+
+**Disclaimer**
+
+This document is provided **as is**. 福州凌睿智捷电子有限公司 (hereinafter referred to as **the Company**) makes no express or implied representations or warranties as to the accuracy, reliability, completeness, merchantability, fitness for purpose, or non-infringement of any statements, information, or content in this document. This document is intended only as a reference for usage guidance.
+
+Due to product version upgrades or other reasons, this document may be updated or modified from time to time without notice.
+
+**Intended Audience**
+
+This tutorial is intended for the following engineers:
+
+- Technical support engineers
+- Software development engineers
+
+**Revision History**
+
+| **Date**   | **Version** | **Author** | **Description** |
+|:-----------| ----------- | ---------- | --------------- |
+| 2024/11/20 | 0.0.0       | 钟海滨     | Initial version |
+
+## 1 Introduction
+
+Mask-wearing classification helps identify and manage public health risks. Based on the [Lockzhiner Vision Module Classification Model Deployment Guide](../../vision/classification), we trained a model dedicated to the Lockzhiner Vision Module that can recognize three cases: no mask, mask worn correctly, and mask worn incorrectly.
+
+
+## 2 Preparation Before Running
+
+- Make sure you have downloaded the [Lockzhiner Vision Module Mask-Wearing Classification Model](https://gitee.com/LockzhinerAI/LockzhinerVisionModule/releases/download/v0.0.2/LZ-Maskwear-Classification.rknn).
+
+## 3 Deploying the Mask-Wearing Classification Example on the Lockzhiner Vision Module
+
+After downloading the model, follow the tutorial below to deploy the classification model example on the Lockzhiner Vision Module using Python:
+
+- [Lockzhiner Vision Module Mask-Wearing Classification Python Deployment Guide](./python/README.md)
+
+## 4 Model Performance Metrics
+
+The test data below is the average time over 1000 runs of the model's Predict function.
+
+| Classification Model       | FPS (frames/s) | Accuracy (%) |
+|:--------------------------:|:--------------:|:------------:|
+| LZ-Maskwear-Classification | 35             |              |
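+
+For reference, the minimal sketch below illustrates this timing methodology: grab one frame from the camera, then average the latency of 1000 Predict calls. It reuses the lockzhiner_vision_module APIs shown in the Python guide and is only an illustrative sketch, not the official benchmark script.
+
+```python
+# Illustrative sketch of the benchmark described above (assumptions: same
+# lockzhiner_vision_module APIs as the Python example; camera index 0).
+import time
+
+from lockzhiner_vision_module.cv2 import VideoCapture
+from lockzhiner_vision_module.vision import PaddleClas
+
+model = PaddleClas()
+if model.initialize("LZ-Maskwear-Classification.rknn") is False:
+    exit(1)
+
+video_capture = VideoCapture()
+if video_capture.open(0) is False:
+    exit(1)
+ret, mat = video_capture.read()
+if ret is False:
+    exit(1)
+
+runs = 1000
+start = time.time()
+for _ in range(runs):
+    model.predict(mat)
+elapsed = time.time() - start
+print(f"Average latency: {elapsed / runs * 1000:.2f} ms, FPS: {runs / elapsed:.2f}")
+```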
diff --git a/example/special/maskwear_classfication/python/README.md b/example/special/maskwear_classfication/python/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..dd3a49be88ff9261f6f0087d90ceb15ceb945d98
--- /dev/null
+++ b/example/special/maskwear_classfication/python/README.md
@@ -0,0 +1,121 @@
+Lockzhiner Vision Module Mask-Wearing Classification Python Deployment Guide
+
+Release version: V0.0.0
+
+Date: 2024-11-20
+
+Document classification: □ Top Secret □ Secret □ Internal ■ Public
+
+---
+
+**Disclaimer**
+
+This document is provided **as is**. 福州凌睿智捷电子有限公司 (hereinafter referred to as **the Company**) makes no express or implied representations or warranties as to the accuracy, reliability, completeness, merchantability, fitness for purpose, or non-infringement of any statements, information, or content in this document. This document is intended only as a reference for usage guidance.
+
+Due to product version upgrades or other reasons, this document may be updated or modified from time to time without notice.
+
+**Intended Audience**
+
+This tutorial is intended for the following engineers:
+
+- Technical support engineers
+- Software development engineers
+
+**Revision History**
+
+| **Date**   | **Version** | **Author** | **Description** |
+|:-----------| ----------- | ---------- | --------------- |
+| 2024/11/20 | 0.0.0       | 钟海滨     | Initial version |
+
+## 1 Introduction
+
+Next, let's deploy the mask-wearing classification example using Python. Before starting this section:
+
+- Make sure you have downloaded the model as described in the [Lockzhiner Vision Module Mask-Wearing Classification Deployment Guide](../README.md).
+- Make sure you have downloaded the Lockzhiner Vision Module image transfer assistant as described in the [Lockzhiner Vision Module Camera Deployment Guide](../../../periphery/capture/README.md).
+- Make sure you have set up the development environment as described in the [Development Environment Setup Guide](../../../../docs/introductory_tutorial/python_development_environment.md).
+
+## 2 Python API Documentation
+
+Same as the [Classification Model Python Deployment API Documentation](../../../vision/classification/python/README.md).
+
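+For quick reference, the minimal sketch below condenses the subset of that API used in this example: model initialization, single-frame prediction, and reading the label index and confidence. It is only an illustration of the calls demonstrated in the full program in the next section.
+
+```python
+# Minimal usage sketch of the classification API relied on by this example.
+from lockzhiner_vision_module.cv2 import VideoCapture
+from lockzhiner_vision_module.vision import PaddleClas
+
+model = PaddleClas()
+model.initialize("LZ-Maskwear-Classification.rknn")  # returns False on failure
+
+video_capture = VideoCapture()
+video_capture.open(0)
+ret, mat = video_capture.read()       # grab one frame from the camera
+
+result = model.predict(mat)           # run classification on the frame
+print(result.label_id, result.score)  # predicted class index and confidence
+```
+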
+## 3 Project Overview
+
+To make it easy to get started, we provide a simple mask-wearing classification example. The program performs end-to-end inference using the camera and visualizes the inference results in the Lockzhiner Vision Module image transfer assistant.
+
+```python
+from lockzhiner_vision_module.cv2 import VideoCapture
+from lockzhiner_vision_module.vision import PaddleClas, visualize
+from lockzhiner_vision_module.edit import Edit
+import time
+import sys
+
+
+# Class names, in the same order as the label indices produced by the model
+labels = ['mask_weared_incorrect', 'without_mask', 'with_mask']
+
+if __name__ == "__main__":
+    args = sys.argv
+    if len(args) != 2:
+        print("Need model path. Example: python test_mask_classification.py LZ-Maskwear-Classification.rknn")
+        exit(1)
+
+    # Start the connection to the image transfer assistant for visualization
+    edit = Edit()
+    edit.start_and_accept_connection()
+
+    # Load the classification model
+    model = PaddleClas()
+    if model.initialize(args[1]) is False:
+        print("Failed to initialize PaddleClas")
+        exit(1)
+
+    # Open the camera
+    video_capture = VideoCapture()
+    if video_capture.open(0) is False:
+        print("Failed to open capture")
+        exit(1)
+
+    while True:
+        read_index = 0
+        total_time_s = 0  # accumulated per-frame processing time, in seconds
+        for i in range(30):
+            start_time = time.time()
+            ret, mat = video_capture.read()
+            if ret is False:
+                continue
+
+            # Run classification on the current frame
+            result = model.predict(mat)
+
+            end_time = time.time()
+            total_time_s += end_time - start_time
+            read_index += 1
+            print(result.label_id, result.score)
+
+            # Draw the predicted label on the frame and send it to the assistant
+            vis_mat = visualize(mat, result, labels)
+            # vis_mat = visualize(mat, result)
+            edit.print(vis_mat)
+        if read_index > 0:
+            print(f"FPS is {1.0 / (total_time_s / read_index)}")
+
+```
+
+## 4 Upload and Test the Python Program
+
+Refer to the [Device Connection Guide](../../../../docs/introductory_tutorial/connect_device_using_ssh.md) to connect the Lockzhiner Vision Module device.
+
+
+
+Use Electerm Sftp to upload the following two files in turn:
+
+- Go to the directory containing the **test_mask_classification.py** script and upload **test_mask_classification.py** to the Lockzhiner Vision Module.
+- Go to the directory containing the **LZ-Maskwear-Classification.rknn** model (the model is in the output folder downloaded after training) and upload **LZ-Maskwear-Classification.rknn** to the Lockzhiner Vision Module.
+
+
+
+Use Electerm Ssh and run the following command in the terminal:
+
+```bash
+python test_mask_classification.py LZ-Maskwear-Classification.rknn
+```
+
+After the program starts, connect to the device with the Lockzhiner Vision Module image transfer assistant. The terminal starts printing the label index and confidence score, and the visualized result appears in the image transfer assistant.
+
diff --git a/example/special/maskwear_classfication/python/images/img.png b/example/special/maskwear_classfication/python/images/img.png
new file mode 100644
index 0000000000000000000000000000000000000000..6b5f8094855075d18758cc0ca04f3fe26a52d1eb
Binary files /dev/null and b/example/special/maskwear_classfication/python/images/img.png differ
diff --git a/example/special/maskwear_classfication/python/images/img_1.png b/example/special/maskwear_classfication/python/images/img_1.png
new file mode 100644
index 0000000000000000000000000000000000000000..db5559f5528f5f9039c84067fdd6f588c4532ef3
Binary files /dev/null and b/example/special/maskwear_classfication/python/images/img_1.png differ
diff --git a/example/special/maskwear_classfication/python/images/img_2.png b/example/special/maskwear_classfication/python/images/img_2.png
new file mode 100644
index 0000000000000000000000000000000000000000..3e6e49c3d34c3da37239efd08742589b6e021286
Binary files /dev/null and b/example/special/maskwear_classfication/python/images/img_2.png differ
diff --git a/example/special/maskwear_classfication/python/test_mask_classification.py b/example/special/maskwear_classfication/python/test_mask_classification.py
new file mode 100644
index 0000000000000000000000000000000000000000..be03f4b41275e38b01b163ffc1c6784d5c01748c
--- /dev/null
+++ b/example/special/maskwear_classfication/python/test_mask_classification.py
@@ -0,0 +1,46 @@
+from lockzhiner_vision_module.cv2 import VideoCapture
+from lockzhiner_vision_module.vision import PaddleClas, visualize
+from lockzhiner_vision_module.edit import Edit
+import time
+import sys
+
+
+# Class names, in the same order as the label indices produced by the model
+labels = ['mask_weared_incorrect', 'without_mask', 'with_mask']
+
+if __name__ == "__main__":
+    args = sys.argv
+    if len(args) != 2:
+        print("Need model path. Example: python test_mask_classification.py LZ-Maskwear-Classification.rknn")
+        exit(1)
+
+    # Start the connection to the image transfer assistant for visualization
+    edit = Edit()
+    edit.start_and_accept_connection()
+
+    # Load the classification model
+    model = PaddleClas()
+    if model.initialize(args[1]) is False:
+        print("Failed to initialize PaddleClas")
+        exit(1)
+
+    # Open the camera
+    video_capture = VideoCapture()
+    if video_capture.open(0) is False:
+        print("Failed to open capture")
+        exit(1)
+
+    while True:
+        read_index = 0
+        total_time_s = 0  # accumulated per-frame processing time, in seconds
+        for i in range(30):
+            start_time = time.time()
+            ret, mat = video_capture.read()
+            if ret is False:
+                continue
+
+            # Run classification on the current frame
+            result = model.predict(mat)
+
+            end_time = time.time()
+            total_time_s += end_time - start_time
+            read_index += 1
+            print(result.label_id, result.score)
+
+            # Draw the predicted label on the frame and send it to the assistant
+            vis_mat = visualize(mat, result, labels)
+            # vis_mat = visualize(mat, result)
+            edit.print(vis_mat)
+        if read_index > 0:
+            print(f"FPS is {1.0 / (total_time_s / read_index)}")
diff --git a/utils/create_classification_dataset.py b/utils/create_classification_dataset.py
new file mode 100644
index 0000000000000000000000000000000000000000..795dd291d3ceebb5fc8035596d0b72f747308ec7
--- /dev/null
+++ b/utils/create_classification_dataset.py
@@ -0,0 +1,135 @@
+import os
+import json
+from PIL import Image
+import shutil
+from tqdm import tqdm
+import matplotlib.pyplot as plt
+
+
+# Root folder containing the source images (one subfolder per class)
+image_root_folder = r'E:\face_mask'
+
+# Output folder for the generated label files
+output_folder = './Dataset/annotations'
+
+# Output folder for the copied images
+images_folder = './Dataset/images'
+
+# Text file listing the label names
+flags_txt = './Dataset/flags.txt'
+
+# Make sure the output folders exist
+os.makedirs(output_folder, exist_ok=True)
+os.makedirs(images_folder, exist_ok=True)
+
+# Dynamically build the mapping from folder name to flag
+folder_to_flag = {}
+# Collected label names
+flag_names = []
+
+
+for folder_name in os.listdir(image_root_folder):
+    folder_path = os.path.join(image_root_folder, folder_name)
+    if os.path.isdir(folder_path):
+        folder_to_flag[folder_name] = folder_name
+        flag_names.append(folder_name)
+
+# Write the label names to the flags file (the "with" block closes the file automatically)
+with open(flags_txt, 'w', encoding='utf-8') as f:
+    for flag_name in flag_names:
+        f.write(flag_name + '\n')
+print('Label file created successfully')
+# Dynamically generate the flags dictionary (all flags default to False)
+flags = {key: False for key in folder_to_flag.values()}
+
+# Per-folder counters used to number the copied images
+folder_counters = {folder_name: 0 for folder_name in folder_to_flag.keys()}
+
+# Image count per category
+category_counts = {folder_name: 0 for folder_name in folder_to_flag.keys()}
+
+# Walk every class subfolder under the image root folder
+for folder_name in os.listdir(image_root_folder):
+    folder_path = os.path.join(image_root_folder, folder_name)
+
+    # Make sure it is a directory
+    if os.path.isdir(folder_path):
+        # Collect all image files in the folder and count them
+        files = [f for f in os.listdir(folder_path) if f.lower().endswith(('.png', '.jpg', '.jpeg', '.bmp', '.gif'))]
+        total_files = len(files)
+
+        # Wrap the iterator with tqdm to show a progress bar
+        for filename in tqdm(files, desc=f'Processing {folder_name}', total=total_files):
+            # Build the full path of the source image
+            image_path = os.path.join(folder_path, filename)
+
+            # Current counter value for this folder
+            counter = folder_counters[folder_name]
+
+            # Build the new file name
+            new_filename = f'{folder_name}_{counter:04d}.jpg'
+            new_image_path = os.path.join(images_folder, new_filename)
+
+            # Image path stored inside the label file
+            save_path_name = os.path.join('..\\images\\', new_filename)
+
+            # Copy the image into the images folder
+            shutil.copy(image_path, new_image_path)
+
+            # Open the image to get its width and height
+            with Image.open(image_path) as img:
+                width, height = img.size
+
+            # Initialize the flags for this image
+            current_flags = flags.copy()
+
+            # Set the flag of the current category to True
+            flag_key = folder_to_flag.get(folder_name, None)
+            if flag_key is not None:
+                current_flags[flag_key] = True
+
+            # Build the label file data structure (labelme-style JSON)
+            label_data = {
+                "version": "5.5.0",
+                "flags": current_flags,
+                "shapes": [],
+                "imagePath": save_path_name,
+                "imageData": None,
+                "imageHeight": height,
+                "imageWidth": width
+            }
+
+            # Build the label file path
+            label_filename = os.path.splitext(new_filename)[0] + '.json'
+            label_path = os.path.join(output_folder, label_filename)
+
+            # Write the label data to the file
+            with open(label_path, 'w') as f:
+                json.dump(label_data, f, indent=4)
+
+            # Increment the counter
+            folder_counters[folder_name] += 1
+
+            # Update the category count
+            category_counts[folder_name] += 1
+
+# print('Label files creation complete.')
+
+# Print the per-category image counts
+# for category, count in category_counts.items():
+# print(f'Category {category} has {count} images.')
+
+# Plot a bar chart of image counts per category
+categories = list(category_counts.keys())
+counts = list(category_counts.values())
+
+plt.figure(figsize=(10, 6))
+plt.bar(categories, counts, color='skyblue')
+plt.xlabel('Categories')
+plt.ylabel('Number of Images')
+plt.title('Image Count by Category')
+plt.xticks(rotation=45)
+plt.tight_layout()
+
+# Save the bar chart
+plt.savefig('category_counts.png')
+# plt.show()
\ No newline at end of file
diff --git a/utils/extera_images_from_video.py b/utils/extera_images_from_video.py
new file mode 100644
index 0000000000000000000000000000000000000000..1ab68972c1a3e72c4659f00ce76eefa24b2a1323
--- /dev/null
+++ b/utils/extera_images_from_video.py
@@ -0,0 +1,84 @@
+import os
+import cv2
+import matplotlib.pyplot as plt
+
+def create_directory_structure(source_root, target_root):
+    """Create an output directory tree that mirrors the source directory tree."""
+    for root, dirs, files in os.walk(source_root):
+        relative_path = os.path.relpath(root, source_root)
+        target_path = os.path.join(target_root, relative_path)
+        os.makedirs(target_path, exist_ok=True)
+
+# "interval" is the frame-sampling interval: save one frame every "interval" frames
+
+def video_capture(video_path, output_root, source_root, interval=10):
+    cap = cv2.VideoCapture(video_path)
+    if not cap.isOpened():
+        print(f"Error opening video file: {video_path}")
+        return
+    frame_count = 0
+    saved_frame_count = 0
+    relative_video_dir = os.path.dirname(os.path.relpath(video_path, start=source_root))
+    output_dir = os.path.join(output_root, relative_video_dir)
+    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
+    while True:
+        ret, frame = cap.read()
+        if not ret:
+            break
+        if frame_count % interval == 0:
+            frame_filename = f"{os.path.splitext(os.path.basename(video_path))[0]}_{frame_count:04d}.jpg"
+            frame_path = os.path.join(output_dir, frame_filename)
+            cv2.imwrite(frame_path, frame)
+            saved_frame_count += 1
+        frame_count += 1
+    cap.release()
+    print(f"Processed and saved {saved_frame_count} out of {frame_count} frames from {video_path}")
+
+def count_files_in_directories(root_directory):
+    """Count the number of files in each subdirectory, ignoring the root directory itself."""
+    counts = {}
+    for root, dirs, files in os.walk(root_directory):
+        relative_path = os.path.relpath(root, root_directory)
+        # Skip the root directory
+        if relative_path == '.':
+            continue
+        counts[relative_path] = len(files)
+    return counts
+
+def plot_category_counts(category_counts):
+    """Plot a bar chart of the number of images per category."""
+    categories = list(category_counts.keys())
+    counts = list(category_counts.values())
+    plt.figure(figsize=(10, 6))
+    plt.bar(categories, counts, color='skyblue')
+    plt.xlabel('Categories')
+    plt.ylabel('Number of Images')
+    plt.title('Image Count per Category')
+    plt.xticks(rotation=45, ha='right')
+    plt.tight_layout()
+    plt.show()
+
+if __name__ == '__main__':
+    # Root folder containing the source videos
+    video_root = r'C:\Users\Administrator\Desktop\new'
+    # Root folder for the extracted images
+    output_root = 'face_add'
+    if not os.path.exists(output_root):
+        os.mkdir(output_root)
+    # Mirror the video folder structure in the output folder
+    create_directory_structure(video_root, output_root)
+
+    # Collect all video files
+    video_files = [os.path.join(root, file) for root, _, files in os.walk(video_root) for file in files if file.lower().endswith(('.mp4', '.avi', '.mov', '.mkv'))]
+
+    # Extract frames from every video file
+    for video_file in video_files:
+        video_capture(video_file, output_root, video_root)
+
+    # Count the images per category after extraction
+    category_counts = count_files_in_directories(output_root)
+    for category, count in category_counts.items():
+        print(f"Category '{category}' contains {count} images.")
+
+    # Plot the bar chart
+    plot_category_counts(category_counts)
\ No newline at end of file