Construction and analysis of an eye movement dataset for language information processing in the deaf population

Affiliations:

1. Technical College for the Deaf, Tianjin University of Technology; 2. School of Foreign Languages, Southeast University

Fund project:

Key Project of the National Social Science Fund of China (19AZD037)

Abstract:

    Existing public eye-tracking datasets rarely include data from deaf individuals, leaving a critical gap in related research. To address this, we constructed the Multimodal Deaf Eye Tracking Dataset (MDETD), which incorporates both spoken and signed language modalities. The dataset includes eye-tracking data from 27 deaf and 34 hearing participants, comprising approximately 1.47 million samples annotated into four categories: fixation, saccade, smooth pursuit, and noise. After data preprocessing and feature extraction, a 1D-CNN-BLSTM model was employed for eye movement classification. The model achieved F1 scores of 97.1% and 78.2% on the fixation and saccade classes, respectively, demonstrating strong classification performance. Further analysis revealed that deaf participants exhibited a significantly higher proportion of fixation behaviors, indicating enhanced visual concentration, while hearing participants showed more frequent saccades, suggesting differences in language modality usage and visual processing strategies. This work contributes empirical evidence to the understanding of visual cognition in the deaf population and provides a valuable resource for eye movement classification and cross-modal language processing research.
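The abstract describes labeling raw gaze samples as fixation, saccade, smooth pursuit, or noise after feature extraction. As a minimal illustration of what such sample-level classification involves, the sketch below implements a classical velocity-threshold (I-VT) baseline over point-to-point angular speed. This is not the authors' 1D-CNN-BLSTM model; the 30 deg/s threshold, the two-class output, and the toy gaze trace are assumptions for illustration only.

```python
import numpy as np

def classify_ivt(x, y, t, saccade_thresh=30.0):
    """Label each gaze sample as 'fixation' or 'saccade' by its
    point-to-point angular speed (deg/s). A classical I-VT baseline,
    NOT the paper's 1D-CNN-BLSTM classifier; the 30 deg/s threshold
    is an assumed, commonly cited default.
    x, y: gaze angles in degrees; t: sample timestamps in seconds."""
    vx = np.gradient(x, t)        # horizontal velocity, deg/s
    vy = np.gradient(y, t)        # vertical velocity, deg/s
    speed = np.hypot(vx, vy)      # combined angular speed
    return np.where(speed > saccade_thresh, "saccade", "fixation")

# Toy trace: steady gaze, a rapid 5-degree shift, then steady gaze again.
t = np.linspace(0.0, 0.05, 6)                  # 6 samples over 50 ms
x = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 5.0])   # jump mid-trace
y = np.zeros(6)
print(classify_ivt(x, y, t))
```

Velocity thresholding of this kind separates fixations from saccades well but cannot distinguish smooth pursuit from fixation, which is one motivation for learned sequence models such as the 1D-CNN-BLSTM used in the paper.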

History
  • Received: 2025-03-18
  • Revised: 2025-05-29
  • Accepted: 2025-06-19