*More projects will be added periodically.
We used motion-capture data recorded from human signers to train machine-learning models that predict realistic timing parameters for ASL animation, with a focus on where to insert prosodic breaks (pauses), how long those pauses should last, and how the signing rate should vary across a sentence, based on sentence syntax and other features. Our goal is to automate this aspect of animation synthesis and to produce understandable output.
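As a rough illustration of the modeling task, the minimal Python sketch below trains a classifier to decide whether to insert a pause after each sign. This is not the project's actual model; the boundary features and training data are hypothetical.

```python
# Minimal sketch (hypothetical features and data, not the project's actual
# model): a classifier decides whether to insert a prosodic pause after each
# sign, based on syntactic features of the inter-sign boundary.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per inter-sign boundary; hypothetical features:
# [is_clause_boundary, is_phrase_boundary, signs_since_last_pause, sentence_length]
X_train = np.array([
    [1, 1, 6, 12],
    [0, 1, 3, 12],
    [0, 0, 1, 12],
    [1, 1, 8, 15],
    [0, 0, 2, 15],
])
# Labels would come from where human signers actually paused (here, made up).
y_train = np.array([1, 0, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Decide whether to insert a pause at a boundary in an unseen sentence.
print(clf.predict([[1, 1, 7, 14]]))
```

A companion regressor could be trained the same way to predict the duration of each inserted pause and the local signing rate.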
HCI; NLP; AI; Accessibility.
Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, and Matt Huenerfauth. 2018. "Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations." In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18), Galway, Ireland.
Sedeeq Al-khazraji. 2018. "Using Data-Driven Approach for Modeling Timing Parameters of American Sign Language." In Proceedings of the Doctoral Consortium of the 20th ACM International Conference on Multimodal Interaction (ICMI 2018).
Sedeeq Al-khazraji, Sushant Kafle, and Matt Huenerfauth. 2018. "Modeling and Predicting the Location of Pauses for the Generation of Animations of American Sign Language." In Proceedings of the 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, The 11th International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.
Signers associate items under discussion with locations around their bodies, and they may point to these locations later in the discourse to refer to the items again. We model and predict the most natural locations for these spatial reference points (SRPs), based on recordings of human signers' movements, and we evaluated ASL animations generated from the model in a user-based study.
We analyzed the spatial distribution of SRPs established by an ASL signer in a motion-capture dataset and modeled it with a Gaussian Mixture Model (GMM) over the three most commonly pointed-to clusters, which improved the pointing feature of an existing ASL animation tool.
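A minimal sketch of this modeling step, using scikit-learn's GaussianMixture on synthetic stand-in data (the coordinates and cluster placements below are illustrative assumptions, not the recorded motion-capture values):

```python
# Minimal sketch: fit a 3-component GMM to 3D pointing locations, as a
# stand-in for the SRP modeling described above. Data here is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic (x, y, z) pointing locations forming three clusters in the
# signing space, e.g., the signer's left, right, and center.
points = np.vstack([
    rng.normal([-0.3, 1.2, 0.4], 0.05, size=(50, 3)),
    rng.normal([0.3, 1.2, 0.4], 0.05, size=(50, 3)),
    rng.normal([0.0, 1.3, 0.5], 0.05, size=(50, 3)),
])

# Each mixture component models one commonly pointed-to region.
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(points)

# Component means are candidate natural SRP locations for an animation tool.
print(gmm.means_)
```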
HCI; NLP; AI; Accessibility.
Jigar Gohel, Sedeeq Al-khazraji, and Matt Huenerfauth. 2018. "Modeling the Use of Space for Pointing in American Sign Language Animation." Journal on Technology and Persons with Disabilities, California State University, Northridge.
The goal of this research is to develop technologies to generate animations of a virtual human character performing American Sign Language.
Collecting a motion-capture corpus of ASL and modeling data to produce accurate animations.
The motion-capture corpus of American Sign Language collected during this project is available for non-commercial use by the research community.
This material is based upon work supported in part by the National Science Foundation under award number 0746556.
Animated ASL can provide controlled perceptual stimuli for display in experimental studies with ASL signers, supporting research on ASL linguistics.
HCI; NLP; Accessibility.
Automatic Speech Recognition (ASR) converts human speech into text displayed on a screen. We are studying its promise for making spoken content accessible to people who are deaf or hard of hearing (DHH), and we are conducting an experimental study of interactions between hearing and DHH participants across different scenarios.
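As a rough illustration of the underlying technology, here is a minimal sketch using the open-source SpeechRecognition package; this is not necessarily the ASR system used in our study, and the audio file name is hypothetical:

```python
# Minimal sketch of ASR-based captioning with the SpeechRecognition package.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Transcribe one recorded utterance; a live captioning tool would instead
# stream microphone audio and display text incrementally.
with sr.AudioFile("utterance.wav") as source:  # hypothetical recording
    audio = recognizer.record(source)

try:
    caption = recognizer.recognize_google(audio)  # Google Web Speech API
    print(caption)  # the text that would be displayed on screen
except sr.UnknownValueError:
    print("[inaudible]")  # the recognizer could not interpret the speech
```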
HCI; ASR; Accessibility.