Face Image Restoration Method Based on Global-Local Feature Fusion

DOI:
Author:
Affiliation: Guizhou Minzu University
Corresponding author:
CLC number: TP391.41
Fund project:

Abstract:

Existing algorithms produce artifacts and pay insufficient attention to contextual information when restoring face images with large, irregular damaged regions. To address this, we propose a face image restoration method based on global-local feature fusion. First, a wavelet mask-shuffle down-sampling module is employed to strengthen the model's ability to learn local edge and texture features, resolving the insufficient extraction of local facial detail during restoration. Second, a global channel-weighted attention mechanism is designed to extract global features, letting the model focus on the feature channels most important to the current task and effectively reducing the computation spent on unnecessary information; to guarantee a high-quality final output, the model can selectively filter and adjust the information flow. Finally, a multi-scale pooling module adaptively fuses the two kinds of extracted features, so the model can better filter out noise while retaining useful signals, which improves the method's applicability and robustness in complex environments. Through end-to-end learning, the model optimizes global and local features jointly, enriching the semantic information in the final feature maps. Experiments on the CelebA-HQ high-resolution face dataset show that, qualitatively, our method produces clearer and more plausible results than the compared methods, and quantitatively it achieves significant advantages in the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Learned Perceptual Image Patch Similarity (LPIPS), and L1 loss metrics. In summary, the proposed method performs better on the face image restoration task.
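The abstract does not spell out the wavelet mask-shuffle down-sampling module, so the following is only a rough illustration of its wavelet component: a single-level Haar decomposition that halves the spatial resolution while separating the low-frequency approximation from the detail bands that carry edge and texture cues. This is a generic sketch under that assumption, not the authors' module, and the class and variable names are illustrative.

```python
import torch
import torch.nn as nn


class HaarDownsample(nn.Module):
    """Single-level 2D Haar decomposition: (B, C, H, W) -> (B, 4C, H/2, W/2)."""

    def forward(self, x):
        # Split the feature map into its four 2x2 polyphase components.
        a = x[:, :, 0::2, 0::2]  # top-left samples
        b = x[:, :, 0::2, 1::2]  # top-right samples
        c = x[:, :, 1::2, 0::2]  # bottom-left samples
        d = x[:, :, 1::2, 1::2]  # bottom-right samples
        ll = (a + b + c + d) / 2.0  # low-frequency approximation
        lh = (a + b - c - d) / 2.0  # horizontal detail band
        hl = (a - b + c - d) / 2.0  # vertical detail band
        hh = (a - b - c + d) / 2.0  # diagonal detail band
        # The detail bands retain the edge/texture information that plain
        # strided convolutions tend to blur away.
        return torch.cat([ll, lh, hl, hh], dim=1)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 256, 256)
    print(HaarDownsample()(feats).shape)  # torch.Size([1, 256, 128, 128])
```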
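The global channel-weighted attention described above is reminiscent of squeeze-and-excitation style channel reweighting: a global pooling step summarizes each channel, a small bottleneck MLP predicts per-channel weights, and the feature map is rescaled so that channels relevant to the current restoration task are emphasized. The sketch below is one assumed, generic form of such a mechanism and does not reproduce the paper's exact design.

```python
import torch
import torch.nn as nn


class GlobalChannelAttention(nn.Module):
    """Channel reweighting driven by a global descriptor (SE-style sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial summary per channel
        self.fc = nn.Sequential(             # bottleneck MLP predicting channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # down-weight less informative channels


if __name__ == "__main__":
    x = torch.randn(2, 128, 64, 64)
    print(GlobalChannelAttention(128)(x).shape)  # torch.Size([2, 128, 64, 64])
```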
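How the multi-scale pooling module fuses the two feature streams is likewise only summarized in the abstract. As one plausible reading, the sketch below pools the concatenated global/local features at several scales (in the spirit of pyramid pooling), derives a blending mask from the pooled context, and mixes the two streams adaptively. The module name, pooling sizes, and weighting scheme are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScalePoolingFusion(nn.Module):
    """Fuse global and local feature maps using multi-scale pooled context."""

    def __init__(self, channels: int, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.pool_sizes = pool_sizes
        # One 1x1 conv per pyramid level, applied to the concatenated streams.
        self.reduce = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, kernel_size=1) for _ in pool_sizes]
        )
        # Predict a per-pixel blending mask between the two streams.
        self.to_weight = nn.Conv2d(len(pool_sizes) * channels, 1, kernel_size=1)

    def forward(self, f_global, f_local):
        x = torch.cat([f_global, f_local], dim=1)
        h, w = x.shape[-2:]
        context = []
        for size, conv in zip(self.pool_sizes, self.reduce):
            pooled = F.adaptive_avg_pool2d(x, size)  # coarse context at this scale
            context.append(
                F.interpolate(conv(pooled), size=(h, w), mode="bilinear", align_corners=False)
            )
        alpha = torch.sigmoid(self.to_weight(torch.cat(context, dim=1)))
        return alpha * f_global + (1.0 - alpha) * f_local  # adaptive blend


if __name__ == "__main__":
    g = torch.randn(1, 64, 32, 32)
    l = torch.randn(1, 64, 32, 32)
    print(MultiScalePoolingFusion(64)(g, l).shape)  # torch.Size([1, 64, 32, 32])
```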

History
  • Received: 2024-06-18
  • Revised: 2024-07-30
  • Accepted: 2024-09-10
  • Published online:
  • Publication date: