More details about my Ph.D. projects.
We used motion-capture data recorded from humans to train machine learning models that predict realistic timing parameters for ASL animation, with a focus on inserting prosodic breaks (pauses), adjusting the durations of those pauses, and adjusting the differential signing rate of ASL animations, based on sentence syntax and other features. Our goal is to automate this aspect of animation synthesis and to produce understandable output.
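As a minimal sketch of this kind of prediction task, the snippet below trains a classifier to decide whether a prosodic pause should follow a sign, using syntactic features. The feature names and training data are hypothetical illustrations, not taken from our corpus or models.

```python
# Hypothetical sketch: predicting pause insertion from syntactic features.
# Feature names and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [signs_since_last_pause, is_clause_boundary, syntactic_depth]
X = np.array([
    [2, 0, 3],
    [7, 1, 1],
    [4, 0, 2],
    [9, 1, 1],
    [3, 0, 4],
    [8, 1, 2],
])
# Label: 1 = insert a prosodic pause after this sign, 0 = no pause
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)
# Query a new context: 6 signs since last pause, at a clause boundary
print(model.predict([[6, 1, 1]]))
```

In practice, such a model would be trained on features extracted from annotated motion-capture recordings rather than hand-written rows.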
HCI; NLP; AI; Accessibility.
Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, and Matt Huenerfauth. 2018. "Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations." The 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18), Galway, Ireland.
Sedeeq Al-khazraji. 2018. "Using Data-Driven Approach for Modeling Timing Parameters of American Sign Language." In Proceedings of the Doctoral Consortium of the 20th ACM International Conference on Multimodal Interaction (ICMI '18).
Sedeeq Al-khazraji, Sushant Kafle, and Matt Huenerfauth. 2018. "Modeling and Predicting the Location of Pauses for the Generation of Animations of American Sign Language." In Proceedings of the 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, The 11th International Conference on Language Resources and Evaluation (LREC 2018).
Signers associate items under discussion with locations around their body, which the signer may point to later in the discourse to refer to these items again. We model and predict the most natural locations for spatial reference points (SRPs), based on recordings of human signers’ movements. We evaluated ASL animations generated from the model in a user-based study.
Analyzed the spatial distribution of spatial reference points established by an ASL signer in a motion-capture dataset and modeled them using a Gaussian Mixture Model (GMM) with components for the three most commonly pointed-to clusters, which improved the pointing feature of an existing ASL animation tool.
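To illustrate the modeling step, the sketch below fits a three-component GMM to 2-D pointing locations. The coordinates are synthetic placeholders for signing-space positions, not values from the actual corpus.

```python
# Illustrative sketch: fitting a 3-component GMM to pointing locations.
# The generated coordinates are synthetic, not from the motion-capture data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic clusters of pointing targets (e.g., left, right, and center
# regions of the signing space)
left = rng.normal([-0.4, 0.0], 0.05, size=(50, 2))
right = rng.normal([0.4, 0.0], 0.05, size=(50, 2))
center = rng.normal([0.0, 0.2], 0.05, size=(50, 2))
points = np.vstack([left, right, center])

gmm = GaussianMixture(n_components=3, random_state=0).fit(points)
print(gmm.means_)  # estimated centers of the three SRP clusters
```

The fitted component means can then serve as natural candidate locations when an animation system needs to place a spatial reference point.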
HCI; Machine Learning; NLP; AI; Accessibility.
Jigar Gohel, Sedeeq Al-khazraji, and Matt Huenerfauth. 2018. "Modeling the Use of Space for Pointing in American Sign Language Animation." Journal on Technology and Persons with Disabilities, California State University, Northridge.
The goal of this research is to develop the EMBR system, a new software platform for generating animations of a virtual human character performing American Sign Language. We are investigating how to create tools that enable researchers to build dictionaries of animations of individual signs and to efficiently assemble them into sentences and longer passages.
HCI; Full Stack Developer; Software Engineering.
Abhishek Kannekanti, Sedeeq Al-khazraji, and Matt Huenerfauth. 2019 (to appear). "Design and Evaluation of a User-Interface for Authoring Sentences of American Sign Language Animation." 21st International Conference on Human-Computer Interaction, Orlando, Florida, USA.
Collecting a motion-capture corpus of ASL and modeling the data to produce accurate animations.
NLP; Data Pre-processing.
The motion-capture corpus of American Sign Language collected during this project is available for non-commercial use by the research community.
This material is based upon work supported in part by the National Science Foundation under award number 0746556.
Animated ASL can provide useful perceptual stimuli for linguistic research: this technology can generate stimuli for display in experimental studies with ASL signers, enabling the study of ASL linguistics.
HCI; NLP; Accessibility.
Developing an Android research tool, using IBM Watson, Java, and Python, that investigates the benefit of automatic captioning for hearing and hard-of-hearing individuals in group meetings.
HCI; ASR; Mobile Development; Accessibility.