Research Report

A Grading Identification Method for Tea Buds Based on Improved YOLOv7-tiny

  • HONG Konglin
  • WU Minghui
  • GAO Bo
  • FENG Yening
  • School of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, Shanghai 201620, China

HONG Konglin, male, master's degree candidate, mainly engaged in research on image processing and object detection.

Received date: 2023-09-24

Revised date: 2023-11-13

Online published: 2024-03-13

Funding

Natural Science Foundation of Shanghai (21ZR1425900)


Abstract

Grading and recognizing tea buds in the natural growing environment is fundamental to the intelligent harvesting of premium tea. To address the low recognition accuracy and poor robustness caused by complex environmental factors such as lighting, occlusion, and dense foliage, an improved model based on YOLOv7-tiny is proposed. First, a convolutional block attention module (CBAM) was added to the small object detection layer of YOLOv7-tiny to strengthen the model's focus on small object features and reduce the interference of complex environments with tea bud recognition. The spatial pyramid pooling structure was then adjusted to lower the computational cost and improve detection speed. In addition, a loss function combining Intersection over Union (IoU) with the Normalized Gaussian Wasserstein Distance (NWD) was used to alleviate the sensitivity of the IoU mechanism to position deviations and further improve the model's robustness for small object detection. Experimental results show that the proposed model achieves a detection accuracy of 91.15%, a recall of 88.54%, and a mean average precision of 92.66%, with a model size of 12.4 MB. Compared with the original model, accuracy, recall, and mean average precision improve by 2.83, 2.00, and 1.47 percentage points, respectively, while the model size increases by only 0.1 MB. Comparative experiments with other models show that the proposed model produces fewer missed and false detections and higher confidence scores in bud grading across multiple scenarios. The improved model can be applied to bud grading and recognition for premium tea harvesting robots.
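To make the loss formulation concrete, the sketch below shows one way an IoU term can be mixed with the Normalized Gaussian Wasserstein Distance (NWD) of Wang et al. for bounding-box regression, the combination described in the abstract. This is a minimal illustration, not the authors' implementation: the (cx, cy, w, h) box format, the use of plain IoU, the mixing weight `alpha`, and the normalizing constant `C` are assumptions made for clarity.

```python
# Minimal sketch (not the paper's code) of an IoU + NWD box regression loss.
# Assumptions: boxes are (cx, cy, w, h) tensors of shape (N, 4); `alpha` and
# the dataset-dependent constant `C` are illustrative values only.
import torch

def iou_xywh(pred, target, eps=1e-7):
    """Plain IoU between (cx, cy, w, h) boxes."""
    p_x1, p_y1 = pred[:, 0] - pred[:, 2] / 2, pred[:, 1] - pred[:, 3] / 2
    p_x2, p_y2 = pred[:, 0] + pred[:, 2] / 2, pred[:, 1] + pred[:, 3] / 2
    t_x1, t_y1 = target[:, 0] - target[:, 2] / 2, target[:, 1] - target[:, 3] / 2
    t_x2, t_y2 = target[:, 0] + target[:, 2] / 2, target[:, 1] + target[:, 3] / 2
    inter_w = (torch.min(p_x2, t_x2) - torch.max(p_x1, t_x1)).clamp(min=0)
    inter_h = (torch.min(p_y2, t_y2) - torch.max(p_y1, t_y1)).clamp(min=0)
    inter = inter_w * inter_h
    union = pred[:, 2] * pred[:, 3] + target[:, 2] * target[:, 3] - inter + eps
    return inter / union

def nwd_xywh(pred, target, C=12.8, eps=1e-7):
    """NWD (Wang et al., 2021): each box is modelled as a 2-D Gaussian with
    mean (cx, cy) and covariance diag(w^2/4, h^2/4); the squared 2nd-order
    Wasserstein distance then reduces to a squared L2 distance between the
    vectors (cx, cy, w/2, h/2)."""
    p = torch.stack([pred[:, 0], pred[:, 1], pred[:, 2] / 2, pred[:, 3] / 2], dim=1)
    t = torch.stack([target[:, 0], target[:, 1], target[:, 2] / 2, target[:, 3] / 2], dim=1)
    w2 = ((p - t) ** 2).sum(dim=1)               # squared Wasserstein distance
    return torch.exp(-torch.sqrt(w2 + eps) / C)  # C: assumed normalizing constant

def box_loss(pred, target, alpha=0.5):
    """Weighted mix of the IoU and NWD terms; alpha=0.5 is an assumed value."""
    return alpha * (1.0 - iou_xywh(pred, target)) + (1.0 - alpha) * (1.0 - nwd_xywh(pred, target))

if __name__ == "__main__":
    pred = torch.tensor([[50.0, 50.0, 10.0, 12.0]])
    gt = torch.tensor([[52.0, 49.0, 9.0, 12.0]])
    print(box_loss(pred, gt))  # small box with a small offset -> moderate loss
```

In such a formulation, `C` is typically tuned to the average size of the targets in the training set, and `alpha` controls how much the NWD term, which degrades more smoothly than IoU under small positional offsets, dominates for very small buds.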

Cite this article

HONG Konglin, WU Minghui, GAO Bo, FENG Yening. A grading identification method for tea buds based on improved YOLOv7-tiny[J]. Journal of Tea Science, 2024, 44(1): 62-74. DOI: 10.13305/j.cnki.jts.2024.01.006

