
Journal of Tea Science ›› 2024, Vol. 44 ›› Issue (6): 949-959. doi: 10.13305/j.cnki.jts.2024.06.005

• Research Paper •

Research on Tea Bud Recognition Based on Improved YOLOv8n

YANG Xiaowei1,2, SHEN Qiang1,2,*, LUO Jinlong1,2, ZHANG Tuo1,2, YANG Ting1,2, DAI Yuqiao1,2, LIU Zhongying1,2, LI Qin1,2, WANG Jialun1,3,*   

  1. Guizhou Tea Research Institute, Guiyang 550006, China;
    2. Tea Processing and Mechanical Function Laboratory, Guizhou Tea Industry Technology System, Guiyang 550006, China;
    3. Zunyi Comprehensive Test Station, National Tea Industry Technology System, Guiyang 550006, China
  • Received: 2024-09-02 Revised: 2024-10-22 Online: 2024-12-15 Published: 2025-01-08

Abstract: Accurate recognition of tea buds in complex natural environments is one of the key technologies for intelligent picking of tea buds by agricultural robots. To address the low recognition accuracy of tea buds in the complex environment of tea gardens, a tea bud recognition method based on an improved YOLOv8n was proposed. An Honor mobile phone was used to collect RGB images of tea buds, and the images were annotated. The labeled data were divided into training, validation, and test sets at an 8∶1∶1 ratio. To extract bud features effectively while reducing redundant computation and memory access, FasterNet replaced the backbone network of the YOLOv8n model for feature extraction. To suppress background information from the tea garden environment and enhance the feature extraction of tea buds, a global attention mechanism (GAM) module was introduced at the end of the backbone network (after the SPPF module). To further improve recognition accuracy, a Context Guided (CG) module was introduced into the neck network to learn joint features of the local features of tea buds and their surrounding environment. The improved YOLOv8n algorithm was trained and tested on the constructed tea bud dataset. Ablation experiments verified that the FasterNet backbone, the GAM attention mechanism, and the CG module each improved the recognition accuracy of the YOLOv8n model. The mean average precision (mAP) of the improved YOLOv8n model on the multi-category tea bud dataset was 94.3%. Compared with the original YOLOv8n model, the mAP for single buds, one bud and one leaf, and one bud and two leaves increased by 2.2, 1.6, and 2.7 percentage points, respectively. The improved YOLOv8n model was also compared with the YOLOv3-tiny, YOLOv3, YOLOv5m, YOLOv7-tiny, YOLOv7, and YOLOv8n models.
The experimental results show that the improved YOLOv8n model identifies tea buds more accurately than these baseline models, effectively improving the accuracy of tea bud recognition and providing technical support for intelligent tea-picking robots.
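For readers unfamiliar with the GAM module mentioned above, the following PyTorch sketch illustrates the general structure of a global attention mechanism block: channel attention applied via an MLP over permuted features, followed by spatial attention via two 7×7 convolutions. This is not the authors' released code; the reduction rate and layer choices are assumptions based on common GAM implementations.

```python
import torch
import torch.nn as nn

class GAM(nn.Module):
    """Sketch of a Global Attention Mechanism (GAM) block.
    Channel attention: MLP over features permuted to channels-last.
    Spatial attention: two 7x7 convolutions with channel reduction."""

    def __init__(self, channels: int, rate: int = 4):
        super().__init__()
        hidden = channels // rate
        # Channel attention: per-position MLP on the channel dimension
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )
        # Spatial attention: 7x7 convs preserve the feature-map size
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention weights, computed channels-last then permuted back
        attn = self.channel_mlp(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x = x * torch.sigmoid(attn)
        # Spatial attention weights
        x = x * torch.sigmoid(self.spatial(x))
        return x

# Example: a feature map shaped like a typical SPPF output (hypothetical sizes)
feat = torch.randn(1, 256, 20, 20)
out = GAM(256)(feat)
print(out.shape)  # torch.Size([1, 256, 20, 20])
```

Because both attention stages are elementwise reweightings, the block preserves the input shape, which is why it can be dropped in after the SPPF module without altering the rest of the network.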

Key words: deep learning, tea buds, image recognition, YOLOv8n, attention mechanisms, picking robot
