A novel retinal vascular image segmentation method based on STB and FSASC technology
DOI:
Author:
Affiliation:

Southwest University of Science and Technology (SWUST)

Author biography:

Corresponding author:

CLC number:

TP391

Fund projects:

The National Natural Science Foundation of China (No. 62071399); Fund of Robot Technology Used for Special Environment Key Laboratory of Sichuan Province (13zxtk08); Doctoral Fund of Southwest University of Science and Technology (17zx7159)

Abstract:

    Retinal vessel segmentation is an important and difficult task in medical image analysis: conventional methods struggle to detect the fine, densely packed vascular structures in retinal images. To address this problem, this paper proposes a novel high-precision retinal vessel segmentation method that combines Swin Transformer blocks (STB) with full-scale attention skip connections (FSASC). The method builds a U-shaped encoder-decoder network whose encoder uses STBs to realize self-attention from the local to the global scale, allowing the model to focus on key vascular features. Full-scale attention skip connections fuse features across levels, providing a simple yet powerful mechanism for learning multi-scale semantic and spatial information. In addition, a new joint weighted loss function tailored to retinal vessel segmentation is designed to further optimize the model. The method was evaluated on the public DRIVE and STARE datasets. Experimental results show that it achieves high-quality, high-precision segmentation of retinal vascular structures and outperforms U-Net and other methods in both overall segmentation accuracy and the segmentation of fine vessel details.
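The joint weighted loss mentioned in the paper is not specified in the abstract. A common formulation for vessel segmentation, where vessel pixels are a small minority of the image, combines a class-weighted binary cross-entropy term with a soft Dice term; the sketch below is an illustrative example of that pattern under assumed weights (`pos_weight`, `alpha`), not the paper's actual loss:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|); insensitive to class imbalance
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def weighted_bce(pred, target, pos_weight=3.0, eps=1e-7):
    # Weighted binary cross-entropy: up-weights the sparse vessel (positive) class
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(pos_weight * target * np.log(pred)
                    + (1.0 - target) * np.log(1.0 - pred))

def joint_loss(pred, target, alpha=0.5):
    # Convex combination of the pixel-wise and region-overlap terms
    return alpha * weighted_bce(pred, target) + (1.0 - alpha) * dice_loss(pred, target)
```

Here `pred` holds per-pixel vessel probabilities and `target` the binary ground-truth mask; the mixing weight `alpha` and the positive-class weight would normally be tuned on a validation split.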

History
  • Received: 2023-10-08
  • Revised: 2023-12-28
  • Accepted: 2024-01-03
  • Published online:
  • Publication date: