2024-09-02
Deep Learning

Contents

1. Dataset
2. Introduction to YOLOv10
3. Converting the VOC data to YOLO format
4. Training
5. Validation
6. All files: data, model, and training outputs
7. Bigger and stronger

If you need help, see here:

text
https://docs.qq.com/sheet/DUEdqZ2lmbmR6UVdU?tab=BB08J2

1. Dataset

Safety helmet wearing detection. Dataset: https://github.com/njvisionpower/Safety-Helmet-Wearing-Dataset. Baseline model:

image.png

2. Introduction to YOLOv10

Heard of YOLOv10? Here is an overview: https://www.jiqizhixin.com/articles/2024-05-28-7

Paper:

https://arxiv.org/abs/2405.14458

Code:

https://github.com/THU-MIG/yolov10

3. Converting the VOC data to YOLO format

Rearrange the dataset directory so it looks like this:

bash
VOC2028 # tree -L 1
.
├── images
├── labels
├── test.txt
├── train.txt
├── trainval.txt
└── val.txt

2 directories, 4 files
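If you downloaded the dataset in its stock VOC layout, the rearrangement is just a few moves. Below is a minimal Python sketch, assuming the archive unpacks to the usual VOC folders (JPEGImages, Annotations, ImageSets/Main); adjust the names if yours differ:

python
# Sketch: rearrange a stock VOC-style download into the layout above.
# Assumes the usual VOC folders (JPEGImages, Annotations, ImageSets/Main).
import shutil
from pathlib import Path

root = Path("/ssd/xiedong/yolov10/VOC2028")

# JPEGImages -> images, Annotations -> labels (the XMLs; they are converted to txt below)
shutil.move(str(root / "JPEGImages"), str(root / "images"))
shutil.move(str(root / "Annotations"), str(root / "labels"))

# Lift the split lists out of ImageSets/Main into the dataset root
for txt in (root / "ImageSets" / "Main").glob("*.txt"):
    shutil.move(str(txt), str(root / txt.name))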

Rewrite the split lists as absolute image paths:

python
# List of split files to process
file_names = ['test.txt', 'train.txt', 'trainval.txt', 'val.txt']

for file_name in file_names:
    # Open the file for reading
    with open(file_name, 'r') as file:
        # Read all lines
        lines = file.readlines()

    # Open (or create) another file for the modified content; the new name marks it as modified
    new_file_name = 'modified_' + file_name
    with open(new_file_name, 'w') as new_file:
        # Go through each line and modify it
        for line in lines:
            # Strip the trailing newline, prepend the absolute images/ path, append '.jpg', then add the newline back
            modified_line = '/ssd/xiedong/yolov10/VOC2028/images/' + line.strip() + '.jpg\n'
            # Write the modified line to the new file
            new_file.write(modified_line)

print("All files processed.")

Convert the XML annotations to YOLO txt labels:

python
import traceback
import xml.etree.ElementTree as ET
import os
import shutil
import random
import cv2
import numpy as np
from tqdm import tqdm


def convert_annotation_to_list(xml_filepath, size_width, size_height, classes):
    in_file = open(xml_filepath, encoding='UTF-8')
    tree = ET.parse(in_file)
    root = tree.getroot()
    # size = root.find('size')
    # size_width = int(size.find('width').text)
    # size_height = int(size.find('height').text)
    yolo_annotations = []
    # if size_width == 0 or size_height == 0:
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes:
            classes.append(cls)
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = [float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text),
             float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text)]
        # clamp boxes that run past the image border
        if b[1] > size_width:
            b[1] = size_width
        if b[3] > size_height:
            b[3] = size_height
        txt_data = [((b[0] + b[1]) / 2.0) / size_width, ((b[2] + b[3]) / 2.0) / size_height,
                    (b[1] - b[0]) / size_width, (b[3] - b[2]) / size_height]
        # clamp normalized values into [0, 1]
        if txt_data[0] > 1:
            txt_data[0] = 1
        if txt_data[1] > 1:
            txt_data[1] = 1
        if txt_data[2] > 1:
            txt_data[2] = 1
        if txt_data[3] > 1:
            txt_data[3] = 1
        yolo_annotations.append(f"{cls_id} {' '.join([str(round(a, 6)) for a in txt_data])}")
    in_file.close()
    return yolo_annotations


def main():
    classes = []
    root = r"/ssd/xiedong/yolov10/VOC2028"
    img_path_1 = os.path.join(root, "images")
    xml_path_1 = os.path.join(root, "labels")
    dst_yolo_root_txt = xml_path_1
    index = 0
    img_path_1_files = os.listdir(img_path_1)
    xml_path_1_files = os.listdir(xml_path_1)
    for img_id in tqdm(img_path_1_files):
        # stem: the part before the first dot
        xml_id = img_id.split(".")[0] + ".xml"
        if xml_id in xml_path_1_files:
            try:
                img = cv2.imdecode(np.fromfile(os.path.join(img_path_1, img_id), dtype=np.uint8), 1)  # img is the decoded image array
                new_txt_name = img_id.split(".")[0] + ".txt"
                yolo_annotations = convert_annotation_to_list(
                    os.path.join(xml_path_1, img_id.split(".")[0] + ".xml"),
                    img.shape[1], img.shape[0], classes)
                with open(os.path.join(dst_yolo_root_txt, new_txt_name), 'w') as f:
                    f.write('\n'.join(yolo_annotations))
            except:
                traceback.print_exc()
    # classes
    print(f"Conversion finished, classes: {classes}")


if __name__ == '__main__':
    main()
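Each line written above is `class_id cx cy w h`, with the coordinates normalized by the image size. A quick way to spot-check the conversion is to draw the boxes of one label file back onto its image; here is a small sketch (the image path is only an example, point it at any file under VOC2028/images):

python
# Sketch: spot-check one converted label by drawing its boxes back onto the image.
import cv2

img_path = "/ssd/xiedong/yolov10/VOC2028/images/000000.jpg"   # example image
txt_path = img_path.replace("images", "labels").rsplit(".", 1)[0] + ".txt"

img = cv2.imread(img_path)
h, w = img.shape[:2]

with open(txt_path) as f:
    for line in f:
        cls_id, cx, cy, bw, bh = line.split()
        cx, cy, bw, bh = float(cx) * w, float(cy) * h, float(bw) * w, float(bh) * h
        x1, y1 = int(cx - bw / 2), int(cy - bh / 2)
        x2, y2 = int(cx + bw / 2), int(cy + bh / 2)
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(img, cls_id, (x1, max(y1 - 5, 0)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("check_labels.jpg", img)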

vim voc2028x.yaml

yaml
train: /ssd/xiedong/yolov10/VOC2028/modified_train.txt
val: /ssd/xiedong/yolov10/VOC2028/modified_val.txt
test: /ssd/xiedong/yolov10/VOC2028/modified_test.txt

# Classes
names:
  0: hat
  1: person
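One thing worth double-checking: the conversion script assigns class ids in the order it first encounters class names, so that order has to match the `names` mapping here (hat = 0, person = 1). A small sketch to count the ids across the generated labels and eyeball the result:

python
# Sketch: count class ids in the generated YOLO txt labels to confirm
# they line up with the names mapping in voc2028x.yaml (0: hat, 1: person).
from collections import Counter
from pathlib import Path

counts = Counter()
for txt in Path("/ssd/xiedong/yolov10/VOC2028/labels").glob("*.txt"):
    for line in txt.read_text().splitlines():
        if line.strip():
            counts[line.split()[0]] += 1

print(counts)  # the id order must match the yaml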

4. Training

Environment:

bash
git clone https://github.com/THU-MIG/yolov10.git
cd yolov10
conda create -n yolov10 python=3.9 -y
conda activate yolov10
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install -e . -i https://pypi.tuna.tsinghua.edu.cn/simple
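Before launching a long run, a quick sanity check that the install worked and the GPUs are visible (a sketch; run it inside the yolov10 env):

python
# Quick environment check: the repo installs its own ultralytics fork, and training needs CUDA.
import torch
import ultralytics

print(ultralytics.__version__)       # version of the bundled ultralytics fork
print(torch.cuda.is_available())     # should be True
print(torch.cuda.device_count())     # number of visible GPUs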

Training:

bash
yolo detect train data="/ssd/xiedong/yolov10/voc2028x.yaml" model=yolov10s.yaml epochs=200 batch=64 imgsz=640 device=1,3
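The same run can also be started from Python. The repo's bundled ultralytics fork exposes a YOLOv10 class, so a sketch like the following should be equivalent to the CLI call above (treat it as an approximation, not the exact launcher used here):

python
# Sketch: Python-API equivalent of the CLI training command above.
# Assumes the YOLOv10 class from the repo's bundled ultralytics fork.
from ultralytics import YOLOv10

model = YOLOv10("yolov10s.yaml")  # train from the yolov10s architecture definition
model.train(
    data="/ssd/xiedong/yolov10/voc2028x.yaml",
    epochs=200,
    batch=64,
    imgsz=640,
    device=[1, 3],
)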

image.png

After training starts:

image.png

After training finishes:

image.png

5. Validation

bash
yolo val model="/ssd/xiedong/yolov10/runs/detect/train2/weights/best.pt" data="/ssd/xiedong/yolov10/voc2028x.yaml" batch=32 imgsz=640 device=1,3

image.png

The mAP@50 averages 0.94, which is well above the baseline.

Prediction:

bash
yolo predict model=yolov10n/s/m/b/l/x.pt
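To run the trained weights programmatically instead of via the CLI, here is a small sketch that loads best.pt and prints the detections. The image path is only an example, and the results are read through the usual ultralytics Results fields:

python
# Sketch: run the trained weights on one image and read out the detections.
from ultralytics import YOLOv10

model = YOLOv10("/ssd/xiedong/yolov10/runs/detect/train2/weights/best.pt")
results = model.predict("test_image.jpg", imgsz=640, conf=0.25)  # example image

for r in results:
    for box in r.boxes:
        cls_id = int(box.cls[0])
        conf = float(box.conf[0])
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(r.names[cls_id], round(conf, 3), [round(v, 1) for v in (x1, y1, x2, y2)])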

Export:

bash
# End-to-End ONNX
yolo export model=yolov10n/s/m/b/l/x.pt format=onnx opset=13 simplify
# Predict with ONNX
yolo predict model=yolov10n/s/m/b/l/x.onnx

# End-to-End TensorRT
yolo export model=yolov10n/s/m/b/l/x.pt format=engine half=True simplify opset=13 workspace=16
# Or
trtexec --onnx=yolov10n/s/m/b/l/x.onnx --saveEngine=yolov10n/s/m/b/l/x.engine --fp16
# Predict with TensorRT
yolo predict model=yolov10n/s/m/b/l/x.engine
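If you want to use the exported ONNX file outside the yolo CLI, a minimal onnxruntime sketch is below. The post-processing assumes the end-to-end export emits ready-made detections shaped (1, N, 6) as [x1, y1, x2, y2, score, class]; that layout is an assumption on my side, so print the output shape from your own model first. The file and image names are placeholders:

python
# Sketch: run an exported end-to-end ONNX model with onnxruntime.
# Assumes a single output of ready-made detections shaped (1, N, 6)
# as [x1, y1, x2, y2, score, class]; verify against your model.
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]

img = cv2.imread("test_image.jpg")
x = cv2.resize(img, (640, 640))                      # letterboxing skipped for brevity
x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
x = x.transpose(2, 0, 1)[None]                       # HWC -> NCHW, add batch dim

(out,) = sess.run(None, {inp.name: x})               # assumes one output
print(out.shape)                                     # check the actual layout first
for x1, y1, x2, y2, score, cls_id in out[0]:
    if score > 0.25:
        print(int(cls_id), round(float(score), 3),
              [round(float(v), 1) for v in (x1, y1, x2, y2)])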

Demo:

bash
wget https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10s.pt
python app.py
# Please visit http://127.0.0.1:7860

image.png

image.png

6. All files: data, model, and training outputs

Everything from this YOLOv10 safety-helmet detection training run (data, trained model, and all outputs) can be downloaded here:

text
https://docs.qq.com/sheet/DUEdqZ2lmbmR6UVdU?tab=BB08J2

image.png

image.png

7. Bigger and stronger

I also trained a model with yolov10m at an input size of 1280.

image.png

