Research Paper

Research on Tea Bud Recognition Based on Improved YOLOv8n

  • YANG Xiaowei,
  • SHEN Qiang,
  • LUO Jinlong,
  • ZHANG Tuo,
  • YANG Ting,
  • DAI Yuqiao,
  • LIU Zhongying,
  • LI Qin,
  • WANG Jialun
  • 1. Guizhou Tea Research Institute, Guiyang 550006, China;
    2. Tea Processing and Mechanical Function Laboratory, Guizhou Tea Industry Technology System, Guiyang 550006, China;
    3. Zunyi Comprehensive Test Station, National Tea Industry Technology System, Guiyang 550006, China

Received date: 2024-09-02

Revised date: 2024-10-22

Online published: 2025-01-08

Abstract

Accurate recognition of tea buds in complex natural environments is one of the key technologies for intelligent tea bud picking by agricultural robots. To address the low recognition accuracy of tea buds in the complex environment of tea gardens, a tea bud recognition method based on an improved YOLOv8n was proposed. RGB images of tea buds were collected with an Honor mobile phone and annotated, and the labeled data were divided into training, validation and test sets at a ratio of 8∶1∶1. To extract bud features effectively while reducing redundant computation and memory access, FasterNet was used to replace the backbone network of the YOLOv8n model for feature extraction. To suppress background information from the tea garden environment and enhance the feature extraction ability for tea buds, a global attention mechanism (GAM) module was introduced at the end of the backbone network (after the SPPF module). To further improve recognition accuracy, a Context Guided (CG) module was introduced into the Neck network to learn joint features of the local appearance of tea buds and their surrounding context. The improved YOLOv8n algorithm was trained and tested on the constructed tea bud dataset. Ablation experiments verify that the FasterNet backbone, the GAM attention mechanism and the CG module each effectively improved the recognition accuracy of the YOLOv8n model. The mean average precision (mAP) of the improved YOLOv8n model on the multi-category tea bud dataset was 94.3%. Compared with the original YOLOv8n model, the mAP values for single bud, one bud with one leaf, and one bud with two leaves increased by 2.2, 1.6 and 2.7 percentage points, respectively. The improved YOLOv8n model was also compared with the YOLOv3-tiny, YOLOv3, YOLOv5m, YOLOv7-tiny, YOLOv7 and YOLOv8n models and achieved higher accuracy in identifying tea buds. These results demonstrate that the improved YOLOv8n model can effectively improve the accuracy of tea bud recognition and provide technical support for intelligent tea picking robots.
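To make the backbone modification concrete, below is a minimal PyTorch sketch of the global attention mechanism (GAM) block that the abstract places after the SPPF module. It follows the channel-then-spatial attention design of Liu et al. (arXiv:2112.05561); the reduction ratio r, the layer sizes and the shape check are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a GAM block (channel attention followed by spatial attention).
# Assumptions for illustration only: reduction ratio r=4, 7x7 spatial convolutions.
import torch
import torch.nn as nn


class GAM(nn.Module):
    """Channel attention (MLP over the channel dimension) followed by
    spatial attention (two 7x7 convolutions), as in arXiv:2112.05561."""

    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        hidden = max(channels // r, 1)
        # Channel attention: treat each spatial position's channel vector with an MLP.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )
        # Spatial attention: 7x7 convolution bottleneck over the feature map.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Channel attention weights: permute to (B, H*W, C), apply the MLP, permute back.
        attn = self.channel_mlp(x.permute(0, 2, 3, 1).reshape(b, -1, c))
        attn = torch.sigmoid(attn.reshape(b, h, w, c).permute(0, 3, 1, 2))
        x = x * attn
        # Spatial attention weights applied to the channel-reweighted features.
        return x * torch.sigmoid(self.spatial(x))


if __name__ == "__main__":
    # Shape check: a GAM block placed after SPPF leaves the tensor shape unchanged,
    # so it can be dropped into the backbone without altering the Neck definitions.
    feat = torch.randn(1, 256, 20, 20)
    print(GAM(256)(feat).shape)  # torch.Size([1, 256, 20, 20])
```

Because the block is shape-preserving, inserting it after SPPF only requires registering the module in the model configuration; the FasterNet backbone swap and the CG module in the Neck would be wired in the same way, replacing or following existing layers without changing the detection head.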

Cite this article

YANG Xiaowei, SHEN Qiang, LUO Jinlong, ZHANG Tuo, YANG Ting, DAI Yuqiao, LIU Zhongying, LI Qin, WANG Jialun. Research on Tea Bud Recognition Based on Improved YOLOv8n[J]. Journal of Tea Science, 2024, 44(6): 949-959. DOI: 10.13305/j.cnki.jts.2024.06.005
