Hyperspectral image classification based on three-dimensional dilated convolution and graph convolution
DOI:
Authors:
Affiliations:

1. Huzhou University; 2. Liaoning Technical University

Author biography:

Corresponding author:

CLC number:

Fund project:

General Scientific Research Project of the Zhejiang Provincial Department of Education (Y202248546)


Abstract:

To address the unsatisfactory classification results caused by the limited number of labeled samples and the insufficient extraction of diverse features in hyperspectral image classification, this paper proposes a hyperspectral image classification method based on three-dimensional dilated convolution and graph convolution. First, dilated convolutions at different scales are introduced to build a three-dimensional dilated convolution network that extracts multi-scale deep spatial-spectral features. Second, a graph convolutional neural network is built by aggregating the neighborhood features of graph nodes, yielding contextual features that encode spatial structure. Finally, to improve the representation of diverse features, the deep spatial-spectral features are fused with the spatial contextual features, and Softmax is applied for classification. The proposed method makes full use of the diverse features of hyperspectral images, has strong feature-learning capability, and effectively improves classification accuracy. It was compared experimentally with seven related methods on the Indian Pines and Pavia University hyperspectral datasets; the results show that it achieves the best performance, with overall classification accuracies of 99.33% and 99.41%, respectively.
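The first step, multi-scale feature extraction with three-dimensional dilated convolutions, can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the branch width, the dilation rates (1, 2, 3), and the 9×9 patch with 30 bands are assumptions chosen for demonstration.

import torch
import torch.nn as nn

class MultiScaleDilated3D(nn.Module):
    """Parallel 3D convolution branches with different dilation rates."""
    def __init__(self, in_ch=1, branch_ch=8, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding = dilation keeps the output the same size as
                # the input for a 3x3x3 kernel
                nn.Conv3d(in_ch, branch_ch, kernel_size=3,
                          padding=d, dilation=d),
                nn.BatchNorm3d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # x: (batch, 1, bands, H, W); concatenate the multi-scale
        # responses along the channel axis
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Example: a batch of 9x9 spatial patches with 30 spectral bands.
x = torch.randn(4, 1, 30, 9, 9)
print(MultiScaleDilated3D()(x).shape)  # torch.Size([4, 24, 30, 9, 9])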
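The second step, aggregating the neighborhood features of graph nodes, corresponds to a standard graph convolution layer in the style of Kipf and Welling, where node features are propagated through the symmetrically normalized adjacency matrix D^(-1/2)(A + I)D^(-1/2). How the paper constructs the graph is not stated in the abstract, so the dense random adjacency below is only a placeholder.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: normalize adjacency, aggregate, transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) binary adjacency
        a_hat = adj + torch.eye(adj.size(0))       # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        norm = d_inv_sqrt @ a_hat @ d_inv_sqrt     # D^-1/2 (A+I) D^-1/2
        return torch.relu(norm @ self.linear(x))   # aggregate neighbors

# Example: 5 graph nodes with 16-dimensional features.
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()    # symmetrize
adj.fill_diagonal_(0.)
print(GCNLayer(16, 32)(x, adj).shape)  # torch.Size([5, 32])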
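The final step fuses the two feature streams and classifies with Softmax. The abstract does not detail the fusion scheme, so the sketch below assumes simple concatenation followed by a linear layer; the 16 output classes match the Indian Pines dataset.

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Concatenate CNN and GCN features, then classify with Softmax."""
    def __init__(self, cnn_dim, gcn_dim, n_classes):
        super().__init__()
        self.fc = nn.Linear(cnn_dim + gcn_dim, n_classes)

    def forward(self, cnn_feat, gcn_feat):
        fused = torch.cat([cnn_feat, gcn_feat], dim=-1)  # feature fusion
        return torch.softmax(self.fc(fused), dim=-1)     # class probabilities

# Example: fuse 24-dim spatial-spectral and 32-dim graph-context features.
probs = FusionClassifier(24, 32, 16)(torch.randn(4, 24), torch.randn(4, 32))
print(probs.shape)  # torch.Size([4, 16])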

History
  • Received: 2023-01-29
  • Revised: 2023-04-17
  • Accepted: 2023-04-25
  • Online:
  • Published: