Objective To improve the accuracy and robustness of deep learning methods for defect detection of injection molded products and to reduce their dependence on high-quality labeled data, a defect detection method that fuses general visual knowledge with task-specific features is constructed. Methods The visual prior knowledge learned by SAM2 from large-scale natural image data is exploited, and Adapter modules are used for parameter-efficient fine-tuning so that the model captures the specific characteristics of injection molding defects; a U-Net decoding structure is then combined to achieve high-precision semantic segmentation of defect regions. A semi-supervised auxiliary training strategy introduces defect-free samples to strengthen the model's generalization ability. Results Compared with mainstream segmentation methods, the proposed method significantly improves segmentation performance: mIoU increases by 7.13%, Recall by 7.87%, PA by 0.67%, and mPrecision by 4.83%. Conclusion Under scarce defect samples, the proposed defect detection model fusing SAM2 and U-Net, together with the semi-supervised auxiliary training strategy, markedly improves injection molding defect detection performance, reduces reliance on high-quality annotated data, and has good application value in the plastics industry.
Abstract
To enhance the accuracy and robustness of deep learning methods in defect detection of injection molding products while reducing their dependence on high-quality labeled data, this work constructs a defect detection method that integrates general visual knowledge with task-specific features. Specifically, the Segment Anything Model 2 (SAM2) was utilized to transfer visual priors learned from large-scale natural image datasets. Adapter modules were introduced to enable efficient parameter fine-tuning, allowing the model to capture the distinctive characteristics of injection molding defects. At the same time, the U-Net decoding structure was employed to achieve high-precision semantic segmentation of defect regions. A semi-supervised auxiliary training strategy was adopted, introducing non-defective samples to enhance the model's generalization ability. Compared with mainstream segmentation methods, the proposed defect detection method significantly improved segmentation performance: mIoU increased by 7.13%, Recall by 7.87%, Pixel Accuracy (PA) by 0.67%, and mPrecision by 4.83%. Under the condition of scarce defect samples, the proposed defect detection model combining SAM2 and U-Net, along with the semi-supervised auxiliary training strategy, significantly improves the performance of injection molding product defect detection, reduces reliance on high-quality annotated data, and demonstrates promising application value in the plastics industry.
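The three ingredients named in the abstract can be illustrated with a minimal PyTorch sketch. This is a hypothetical reconstruction, not the authors' released code: the class names, the bottleneck adapter layout (down-projection, GELU, up-projection with a residual), the decoder stub, and the defect-free auxiliary loss are all assumptions consistent with common Adapter/SAM-Adapter practice, while the real SAM2 encoder and full U-Net skip connections are omitted.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add.
    Only these small layers would be trained; frozen SAM2 encoder weights stay fixed."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # zero-init so the adapter starts as identity
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

class DecoderHead(nn.Module):
    """U-Net-style decoding stub: conv + 2x upsample + 1x1 conv to class logits.
    A real U-Net decoder would also concatenate encoder skip features."""
    def __init__(self, in_ch: int, num_classes: int = 2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch // 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(in_ch // 2, num_classes, 1),
        )

    def forward(self, feat):
        return self.block(feat)

def defect_free_aux_loss(logits):
    """Semi-supervised auxiliary term (assumed form): a defect-free image should be
    predicted as all background (class 0), so its pseudo-label map is all zeros."""
    target = torch.zeros(logits.shape[0], *logits.shape[2:], dtype=torch.long)
    return nn.functional.cross_entropy(logits, target)

# Smoke test on dummy tensors standing in for frozen SAM2 encoder outputs.
tokens = torch.randn(1, 16, 256)        # (batch, tokens, dim) transformer features
adapted = Adapter(256)(tokens)          # same shape, lightly adapted
feat = torch.randn(1, 256, 32, 32)      # (batch, C, H, W) spatial feature map
logits = DecoderHead(256)(feat)         # (1, 2, 64, 64) class logits
loss = defect_free_aux_loss(logits)     # scalar auxiliary loss
```

The zero-initialized up-projection is a common adapter trick: at the start of fine-tuning the adapted network behaves exactly like the frozen backbone, so training only gradually deviates from the pretrained visual priors.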
Key words
injection molding products /
defect detection /
semantic segmentation /
semi-supervised learning /
Segment Anything Model 2 (SAM2) /
U-Net