Image super-resolution reconstruction based on the AMRMA model

Affiliation:

Southwest University of Science and Technology (SWUST)

CLC number:

TP391

Fund project:

National Natural Science Foundation of China (62071399); Special Environment Robot Sichuan Provincial Key Laboratory Project (13zxtk08); SWUST Doctoral Research Fund (17zx7159)


Abstract:

Existing CNN-based image super-resolution reconstruction methods usually operate either on full-resolution representations or on progressively lower-resolution representations. The former yields spatially accurate but contextually weak reconstructions, while the latter produces semantically reliable but spatially less accurate outputs. To address this, this paper proposes a new super-resolution reconstruction model based on an across-multi-resolution information flow and multi-attention mechanism (AMRMA). The method uses cross-multi-resolution information flow with an information-interaction mechanism for multi-scale feature extraction and aggregation, employs multiple attention mechanisms to capture contextual information and enhance the high-frequency content of the image, and optimizes the model parameters with a newly designed weighted loss function. Experiments on five public datasets, including Set5, show that, compared with classic and recent methods such as Bicubic, SRCNN, VDSR, RDN and MuRNet, the proposed method improves PSNR and SSIM by 0.33 dB and 0.0048 respectively and achieves better super-resolution reconstruction results.
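As a rough illustration of the two ideas named in the abstract, the sketch below shows a toy two-stream block in PyTorch that keeps a full-resolution and a half-resolution feature stream, exchanges information between them once, and applies squeeze-and-excitation style channel attention to each stream. All module names, channel counts and the specific attention form are assumptions made for illustration; this is not the AMRMA architecture described in the paper.

```python
# Illustrative sketch only: a toy cross-resolution block with channel attention.
# The exact AMRMA design is not specified in the abstract; everything below is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (one possible attention form)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global average pooling
            nn.Conv2d(channels, channels // reduction, 1), # channel squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), # channel excitation
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # re-weight each channel


class CrossResolutionBlock(nn.Module):
    """Two parallel streams (full and 1/2 resolution) that exchange features once,
    so that spatial detail and contextual information can complement each other."""

    def __init__(self, channels=64):
        super().__init__()
        self.conv_hi = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_lo = nn.Conv2d(channels, channels, 3, padding=1)
        self.att_hi = ChannelAttention(channels)
        self.att_lo = ChannelAttention(channels)

    def forward(self, x_hi, x_lo):
        h = F.relu(self.conv_hi(x_hi))  # full-resolution stream
        l = F.relu(self.conv_lo(x_lo))  # half-resolution stream
        # information exchange between the two resolutions
        h_mixed = h + F.interpolate(l, size=h.shape[-2:], mode="bilinear",
                                    align_corners=False)
        l_mixed = l + F.avg_pool2d(h, kernel_size=2)
        # channel attention on each stream
        return self.att_hi(h_mixed), self.att_lo(l_mixed)


if __name__ == "__main__":
    block = CrossResolutionBlock(64)
    hi = torch.randn(1, 64, 48, 48)   # full-resolution features
    lo = torch.randn(1, 64, 24, 24)   # half-resolution features
    out_hi, out_lo = block(hi, lo)
    print(out_hi.shape, out_lo.shape)  # (1, 64, 48, 48) and (1, 64, 24, 24)
```

A full network of this kind would stack several such blocks and fuse the streams before the final upsampling layer; the abstract does not state how many resolutions or blocks AMRMA actually uses.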

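The quoted gains (0.33 dB PSNR and 0.0048 SSIM) refer to the two standard distortion metrics. A minimal reference computation of PSNR is sketched below using the standard definition PSNR = 10·log10(MAX² / MSE); SSIM is usually taken from an existing implementation such as skimage.metrics.structural_similarity rather than re-implemented. The toy images here are stand-ins, not data from the paper.

```python
# Minimal PSNR reference computation (standard definition, not paper-specific code).
import numpy as np


def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # stand-in "ground truth" image and a slightly perturbed "reconstruction"
    hr = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    sr = np.clip(hr.astype(np.int16) + rng.integers(-5, 6, size=hr.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(hr, sr):.2f} dB")
```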

History
  • Received: 2023-10-30
  • Revised: 2024-01-26
  • Accepted: 2024-02-01