Abstract: Current face image inpainting methods often produce blurred textures and structural distortions when handling images with large missing regions. To address this issue, we propose a gated convolution-based network for restoring face images with extensive occlusions. First, we improve the standard gated convolution by enabling dynamic selection of convolutional kernels and adaptive weight allocation, thereby enhancing the model's representational capacity. Second, we design a Dynamic Multi-Scale Fusion Gated Residual Module to effectively integrate global structural priors with local texture details. Third, we construct a Multi-Branch Dynamic Multi-Scale Gated Discriminator to enforce facial structural consistency and contour coherence during reconstruction. Extensive experiments are conducted on the CelebA-HQ and FFHQ datasets under large-area irregular masks with missing ratios in the range (0.4, 0.6]. Compared with the second-best method, our approach achieves PSNR gains of 1.3451 dB and 1.6587 dB, SSIM improvements of 0.0283 and 0.0345, LPIPS reductions of 0.0297 and 0.0400, and FID scores lowered by 1.608 and 4.8797 on the two datasets, respectively. Quantitative and qualitative results demonstrate that the proposed method effectively reconstructs large missing regions and delivers superior inpainting performance.