Isolated sign recognition using ASL datasets with consistent text-based gloss labeling and curriculum learning
Date Issued
2022-06-04
Author(s)
Dafnis, Konstantinos M.
Chroni, Evgenia
Neidle, Carol
Metaxas, Dimitris
Permanent Link
https://hdl.handle.net/2144/45988
Citation (published version)
K. Dafnis, E. Chroni, C. Neidle, D. Metaxas. 2022. "Isolated Sign Recognition using ASL Datasets with Consistent Text-based Gloss Labeling and Curriculum Learning." 7th International Workshop on Sign Language Translation and Avatar Technology: The Junction of the Visual and the Textual.
Abstract
We present a new approach for isolated sign recognition, which combines a spatial-temporal Graph Convolution Network (GCN) architecture for modeling human skeleton keypoints with late fusion of both the forward and backward video streams, and we explore the use of curriculum learning. We employ a type of curriculum learning that dynamically estimates, during training, the order of difficulty of each input video for sign recognition; this involves learning a new family of data parameters that are dynamically updated during training. The research makes use of a large combined video dataset for American Sign Language (ASL), including data from both the American Sign Language Lexicon Video Dataset (ASLLVD) and the Word-Level American Sign Language (WLASL) dataset, with modified gloss labeling of the latter—to ensure 1-1 correspondence between gloss labels and distinct sign productions, as well as consistency in gloss labeling across the two datasets. This is the first time that these two datasets have been used in combination for isolated sign recognition research. We also compare the sign recognition performance on several different subsets of the combined dataset, varying in, e.g., the minimum number of samples per sign (and therefore also in the total number of sign classes and video examples).
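The late fusion of forward and backward video streams described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`late_fusion`, `predict`) and the choice of simple logit averaging are assumptions, and the two direction-specific models are stand-ins for the paper's GCN branches.

```python
import numpy as np

def late_fusion(logits_forward, logits_backward):
    """Fuse per-class logits from the two temporal directions.

    A common late-fusion choice is a simple average; the paper may use
    a different combination rule.
    """
    return (np.asarray(logits_forward, dtype=float)
            + np.asarray(logits_backward, dtype=float)) / 2.0

def predict(model_forward, model_backward, frames):
    """Classify one sign video from its skeleton-keypoint frames.

    frames: array of shape (T, K, C) -- T frames, K keypoints, C coords.
    The backward branch sees the frames in reversed temporal order.
    """
    logits_fwd = model_forward(frames)
    logits_bwd = model_backward(frames[::-1])  # reversed video stream
    fused = late_fusion(logits_fwd, logits_bwd)
    return int(np.argmax(fused))  # index of the predicted sign class
```

With averaging, a class favored strongly by one direction but weakly by the other is ranked by its mean evidence across both temporal views.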
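The curriculum-learning idea of per-sample "data parameters" can be illustrated with a toy loss. In this family of methods, each training video gets its own learnable temperature that scales its logits before the softmax, so the effective difficulty ordering is estimated dynamically during training. The sketch below is an assumption-laden simplification: `per_sample_loss` and the single scalar `sigma` per example are illustrative, and the paper's actual parameterization and update rule may differ.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def per_sample_loss(logits, label, sigma):
    """Cross-entropy with a per-sample data parameter sigma.

    sigma acts as a learnable temperature on this example's logits:
    a small sigma sharpens the distribution (the sample counts more as
    an "easy" example), while a large sigma softens it, down-weighting
    the gradient from a hard or noisy example. In training, sigma is
    updated by gradient descent alongside the model weights.
    """
    probs = softmax(np.asarray(logits, dtype=float) / sigma)
    return -np.log(probs[label])
```

For a sample the model already classifies confidently, a sharper temperature yields a lower loss, so the learned `sigma` values induce an ordering of examples by difficulty without any hand-designed schedule.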