DeepLabv3: Fabric defect detection model based on semantic segmentation
Received: 2024-01-24  Revised: 2024-04-02
DOI:
Keywords: fabric defect detection; DeepLabv3++; multi-scale lightweight backbone network; convolutional attention; multi-level feature fusion
Funding:
Author: Chen Xiaomeng*, Zhejiang Sci-Tech University, 310018
Abstract:
      To address the low accuracy on small targets and the slow detection speed in fabric defect detection tasks, a novel DeepLabv3++ model is proposed. First, a multi-scale lightweight backbone network is designed within the DeepLabv3++ model to extract features from defects of varying shapes and sizes. Second, a convolutional attention module and a spatial-channel attention module are introduced to capture the boundary information of small targets and to focus on defect regions, respectively. Next, two types of multi-level feature fusion modules are added in the decoder to reduce the loss of detail information. Finally, the model is trained and evaluated on a fabric defect dataset collected at an industrial site. Experimental results show that the proposed DeepLabv3++ achieves higher accuracy and faster detection than other mainstream semantic segmentation models, with only 4.1M (million) parameters, a mean intersection over union of 90.01%, and a mean pixel accuracy of 95.05%, meeting the industrial-site requirement for a balance between detection accuracy and speed.
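The abstract refers to a convolutional attention module and a spatial-channel attention module that capture small-target boundary information and emphasize defect regions. The paper's exact module design is not reproduced here; the sketch below is a minimal, hypothetical PyTorch block in the CBAM style (channel attention followed by spatial attention), meant only to illustrate the general mechanism. The class names, the reduction ratio of 16, and the 7x7 spatial kernel are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ChannelAttention(nn.Module):
        # Squeeze spatial dims with avg/max pooling, then pass through a shared 1x1-conv MLP.
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1, bias=False),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1, bias=False),
            )

        def forward(self, x):
            avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
            mx = self.mlp(F.adaptive_max_pool2d(x, 1))
            return torch.sigmoid(avg + mx)          # (N, C, 1, 1) channel weights

    class SpatialAttention(nn.Module):
        # 7x7 conv over the channel-wise avg and max maps -> (N, 1, H, W) spatial weights.
        def __init__(self, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

        def forward(self, x):
            avg = x.mean(dim=1, keepdim=True)
            mx, _ = x.max(dim=1, keepdim=True)
            return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

    class ConvAttentionBlock(nn.Module):
        # Channel attention followed by spatial attention, applied to a feature map.
        def __init__(self, channels):
            super().__init__()
            self.ca = ChannelAttention(channels)
            self.sa = SpatialAttention()

        def forward(self, x):
            x = x * self.ca(x)   # re-weight channels
            x = x * self.sa(x)   # re-weight spatial positions (small-defect boundaries)
            return x

    # Example: refine a 64-channel encoder feature map.
    feat = torch.randn(1, 64, 128, 128)
    refined = ConvAttentionBlock(64)(feat)

The reported metrics, mean intersection over union (mIoU, 90.01%) and mean pixel accuracy (mPA, 95.05%), follow the standard per-class definitions IoU = TP / (TP + FP + FN) and class accuracy = TP / (TP + FN). A small NumPy sketch of how they can be computed from integer label maps; the helper name miou_and_mpa is assumed for illustration:

    import numpy as np

    def miou_and_mpa(pred, target, num_classes):
        # Confusion matrix: rows = ground-truth class, columns = predicted class.
        mask = (target >= 0) & (target < num_classes)
        cm = np.bincount(
            num_classes * target[mask].astype(int) + pred[mask].astype(int),
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)

        tp = np.diag(cm).astype(float)
        iou = tp / np.maximum(cm.sum(1) + cm.sum(0) - tp, 1)  # TP / (TP + FP + FN)
        pa = tp / np.maximum(cm.sum(1), 1)                     # TP / (TP + FN) per class
        return iou.mean(), pa.mean()                           # mIoU, mPA

    # Example with a 2-class (background / defect) toy prediction.
    gt = np.array([[0, 0, 1], [0, 1, 1]])
    pr = np.array([[0, 1, 1], [0, 1, 1]])
    print(miou_and_mpa(pr, gt, num_classes=2))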
