Abstract: When Transformer networks segment urban street-view images, multi-scale features and context information are not fully exploited, causing defects such as holes in large targets and imprecise edge segmentation of small targets. This paper proposes Trans-AsfNet, a Transformer-based method that extracts multi-scale features and aggregates context information to address these problems. The method introduces Swin Transformer as the feature extraction backbone to strengthen long-range dependencies. An adaptive subspace feature fusion (ASFF) module is proposed to strengthen the network's multi-scale feature extraction, and an effective global context aggregation (EGCA) module is designed to improve the network's ability to aggregate context; the resulting rich multi-scale information is used for feature decoding and information compensation. Context information at different scales is then aggregated to strengthen the semantic representation of the target, eliminating holes in large targets and improving the edge segmentation accuracy of small targets. Trans-AsfNet is evaluated on the CamVid urban street-view dataset; the experiments show that it largely eliminates the segmentation holes of the DeepLabv3 network, improves the segmentation of small-target edges, and reaches 69.5% MIoU on the CamVid test set.
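The core idea of subspace-based multi-scale fusion can be sketched in a few lines. The snippet below is an illustrative simplification, not the paper's ASFF module: it assumes same-shaped feature maps from different scales, splits their channels into subspaces, and weights each subspace per scale with a softmax over the scales (the function names `softmax` and `subspace_fuse` and the pooling-based descriptor are hypothetical choices for this sketch).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def subspace_fuse(features, num_subspaces=4):
    """Fuse same-shaped (C, H, W) feature maps from several scales.

    Channels are split into `num_subspaces` groups; each group gets a
    scalar descriptor per scale (global average pooling), and a softmax
    across scales turns the descriptors into convex fusion weights.
    """
    stacked = np.stack(features)                     # (S, C, H, W)
    S, C, H, W = stacked.shape
    g = C // num_subspaces                           # channels per subspace
    # One descriptor per (scale, subspace): global average pool.
    desc = stacked.reshape(S, num_subspaces, g, H, W).mean(axis=(2, 3, 4))
    w = softmax(desc, axis=0)                        # weights across scales
    w = np.repeat(w, g, axis=1)[:, :, None, None]    # broadcast to (S, C, 1, 1)
    return (stacked * w).sum(axis=0)                 # (C, H, W)
```

Because the weights are a softmax over scales, each output channel is a convex combination of the corresponding channels of the inputs, so the fused map stays within the range of the input features.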