Research on an Urban Streetscape Semantic Segmentation Method Based on the Transformer Architecture
DOI:
Author:
Affiliation:

School of Electrical and Electronic Engineering, Hubei University of Technology

Author biography:

Corresponding author:

CLC number:

TP391

Fund project:

Supported by the National Natural Science Foundation of China (62202148), the Natural Science Foundation of Hubei Province (2019CFB530), the Major Special Project of the Hubei Provincial Department of Science and Technology (2019ZYYD020), the Research Project of the Xiangyang Industrial Research Institute of Hubei University of Technology (XYYJ2022C05), and the China Scholarship Council (201808420418).



Abstract:

Some Transformer networks, when segmenting urban street-scene images, do not fully exploit the multi-scale features and contextual information available in the network, which leads to defects such as holes inside large objects and imprecise edge segmentation of small objects. To address this problem, this paper proposes Trans-AsfNet, a Transformer-based segmentation method centered on extracting multi-scale features and aggregating contextual information. The method introduces Swin Transformer as the feature extraction backbone to strengthen long-range dependencies among features. It proposes an Adaptive Subspace Feature Fusion (ASFF) module to enhance the network's ability to extract multi-scale features, and designs an Efficient Global Context Aggregation (EGCA) module to improve its ability to aggregate contextual information. Rich multi-scale information is used for feature decoding and information compensation, and context from different scales is then aggregated to reinforce the semantic understanding of targets, thereby eliminating holes in large objects and improving the pixel-level edge segmentation accuracy of small objects. Trans-AsfNet is validated on the CamVid urban street-scene dataset; the experimental results show that the network largely eliminates segmentation hole defects, improves the segmentation of small-object edges, and achieves an MIoU of 69.5% on the CamVid test set.
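The two ideas the abstract names, multi-scale feature fusion and global context aggregation, can be illustrated with a minimal NumPy sketch. This is not the paper's published formulation: the function names, the softmax-weighted sum used for fusion, and the average-pooled context used for aggregation are hypothetical stand-ins chosen only to make the data flow concrete.

```python
import numpy as np


def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def adaptive_feature_fusion(features, branch_logits):
    """Hypothetical stand-in for the ASFF idea: fuse same-shape
    multi-scale feature maps (already rescaled to a common size)
    with per-branch weights normalized by a softmax. In a real
    network the logits would be learned; here they are given."""
    w = softmax(np.asarray(branch_logits, dtype=float))  # one weight per branch
    stacked = np.stack(features)                         # (branches, C, H, W)
    return np.tensordot(w, stacked, axes=1)              # weighted sum -> (C, H, W)


def global_context_aggregation(x):
    """Hypothetical stand-in for the EGCA idea: compute a global
    context vector by average pooling over the spatial dimensions
    and broadcast it back onto every position."""
    ctx = x.mean(axis=(-2, -1), keepdims=True)  # (C, 1, 1) global context
    return x + ctx                              # context-enriched features
```

With equal branch logits the fusion reduces to a plain average of the branches, which is a quick sanity check on the weighting.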

History
  • Received: 2023-05-08
  • Revised: 2023-06-30
  • Accepted: 2023-07-12
  • Published online:
  • Publication date: