My Projects

*More projects will be added periodically.

Modeling Speed & Timing Of American Sign Language (ASL) Animations

We used motion-capture data recorded from human signers to train machine learning models that predict realistic timing parameters for ASL animation, with a focus on inserting prosodic breaks (pauses), adjusting the durations of those pauses, and adjusting the differential signing rate, based on sentence syntax and other features. Our goal is to automate this aspect of animation synthesis and to produce understandable output.

  • Analyzed ASL recording data using Python and engineered features based on sentence syntax to train and evaluate classification models that predict where to insert pauses in ASL animations. The model outperformed the baseline with an F1 score of 80%.
  • Trained a gradient-boosted regression tree model to adjust signing speed in ASL animations; it lowered RMSE by 23.8% compared to state-of-the-art models (see the sketch after this list).
  • Designed and built the machine learning models, with 15 different features, using Python and Matlab.
  • Designed user experiments to test the validity and effectiveness of the system.
  • Handled all the data collection and analysis.
  • Lead author on the ASSETS 2018 paper.
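Below is a minimal, hypothetical sketch of how such a gradient-boosted regression model could be set up with scikit-learn; the feature matrix, target values, and hyperparameters are illustrative placeholders, not the project's actual corpus features or settings.

```python
# Hypothetical sketch: synthetic stand-ins for the syntax-derived features
# and per-sentence signing-rate targets described above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 15))      # 15 engineered features per sentence (placeholder)
y = rng.normal(loc=1.0, size=500)   # target signing rate (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_train, y_train)

# Evaluate with RMSE, the metric reported above.
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"RMSE: {rmse:.3f}")
```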

Research Areas

HCI; NLP; AI; Accessibility.


Awards

2018 SIGACCESS Best Paper Award

Publications

Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, and Matt Huenerfauth. 2018. "Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations." The 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18), Galway, Ireland.

[Available on ACM Digital Library]


Sedeeq Al-khazraji. 2018. "Using Data-Driven Approach for Modeling Timing Parameters of American Sign Language." In Proceedings of the Doctoral Consortium of the 20th ACM International Conference on Multimodal Interaction (ICMI '18).

[Available on ACM Digital Library]


Sedeeq Al-khazraji, Sushant Kafle, and Matt Huenerfauth. 2018. "Modeling and Predicting the Location of Pauses for the Generation of Animations of American Sign Language." In Proceedings of the 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, The 11th International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.

[Available on LREC]

Modeling the Use of Space for Pointing in American Sign Language Animation

Signers associate items under discussion with locations around their body and may later point to those locations to refer to the items again. We model and predict the most natural locations for spatial reference points (SRPs), based on recordings of human signers' movements. We evaluated ASL animations generated from the model in a user-based study.

Analyzed the spatial distribution of spatial reference points established by an ASL signer in a motion-capture dataset and modeled them with a Gaussian Mixture Model (GMM) over the three most common pointing clusters, which improved the pointing feature of an existing ASL animation tool.
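As a rough illustration of the modeling idea, the sketch below fits a three-component Gaussian Mixture Model to synthetic 3-D pointing locations; the coordinates and cluster placements are invented for demonstration, not the project's motion-capture data.

```python
# Illustrative only: synthetic (x, y, z) wrist positions at pointing events,
# grouped around three invented locations in the signing space.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
points = np.concatenate([
    rng.normal(loc=[-0.3, 1.2, 0.4], scale=0.05, size=(100, 3)),  # left of signer
    rng.normal(loc=[0.0, 1.2, 0.5], scale=0.05, size=(100, 3)),   # center
    rng.normal(loc=[0.3, 1.2, 0.4], scale=0.05, size=(100, 3)),   # right of signer
])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=1)
gmm.fit(points)

# The fitted component means could then serve as default SRP target
# locations when the animation tool plans a pointing sign.
print(gmm.means_)
```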


Research Areas

HCI; NLP; AI; Accessibility.


Publications

Jigar Gohel, Sedeeq Al-khazraji, Matt Huenerfauth. 2018. "Modeling the Use of Space for Pointing in American Sign Language Animation." Journal on Technology and Persons with Disabilities, California State University, Northridge.

[Available on ScholarWorks]

ASL Animation Tools & Technologies

The goal of this research is to develop technologies to generate animations of a virtual human character performing American Sign Language.

  • Investigating how to create tools that enable researchers to build dictionaries of animations of individual signs and to efficiently assemble them into sentences and longer passages.
  • Supervising master's students in the UX design process.
  • Supervising a master's student in system implementation and GUI design.
  • Programming backend parts of the system.

Research Areas

HCI; Accessibility.


Read More

Generating ASL Animation from Motion-Capture Data

Collecting a motion-capture corpus of ASL and modeling the data to produce accurate animations.

  • Transforming the motion-capture corpus to the ELAN annotation platform.
  • Writing a tool to clean and preprocess the data (sketched below).
  • Supervising the data annotation process.
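A hedged sketch of what reading and lightly cleaning ELAN annotations might look like, assuming the third-party pympi library; the file name and the cleaning rule are illustrative, not the project's actual pipeline.

```python
# Sketch under assumptions: reads an ELAN (.eaf) file with pympi and drops
# empty annotation values. "session01.eaf" is a placeholder file name.
from pympi import Elan

eaf = Elan.Eaf("session01.eaf")

for tier in eaf.get_tier_names():
    for begin_ms, end_ms, value in eaf.get_annotation_data_for_tier(tier):
        label = value.strip()
        if not label:          # skip empty/whitespace-only annotations
            continue
        print(f"{tier}: {begin_ms}-{end_ms} ms -> {label}")
```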

Research Areas

NLP; Accessibility.


Data & Corpora

The motion-capture corpus of American Sign Language collected during this project is available for non-commercial use by the research community.

Read About The Project
Learn About The Corpus

Linguistic Stimuli for ASL Research

Animated ASL can provide useful perceptual stimuli for linguistic research: the technology can generate stimuli for display in experimental studies with ASL signers.

I am programming the generation of animated ASL stimuli for linguistic research experiments, including minor variations in handshape, location, orientation, or movement.
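One way to script such controlled variations is sketched below; the parameter names and values are hypothetical stand-ins, and the actual animation system's interface is not shown.

```python
# Hypothetical sketch: enumerate single-parameter variations of a base sign
# to produce stimulus specifications for an experiment.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SignSpec:
    gloss: str
    handshape: str
    location: str
    orientation: str
    movement: str

base = SignSpec("EXAMPLE", handshape="flat-B", location="neutral",
                orientation="palm-in", movement="straight")

# Vary one parameter at a time around the base form (values are invented).
variations = {
    "handshape": ["bent-B", "open-5"],
    "location": ["chest", "shoulder"],
}

stimuli = [replace(base, **{param: val})
           for param, values in variations.items()
           for val in values]

for spec in stimuli:
    print(spec)
```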

Research Areas

HCI; NLP; Accessibility.


Read More

Automatic Speech Recognition (ASR) for Meetings

Automatic Speech Recognition (ASR) converts human speech into text displayed on a screen. We are studying its promise for making spoken content accessible to people who are deaf or hard of hearing (DHH). An experimental study is being conducted with hearing and DHH participants across different scenarios.

  • Supervised REU students during different project stages and guided them through experiments, including:
    • Recording experiment videos
    • Transferring data (audio, video, and textual annotations) to the ELAN software
    • Data preprocessing and cleaning
    • Using cloud services
    • Analyzing the results and writing the reports

  • Developing research tools (an Android app and server-side services) to investigate automatic captioning in group meetings (see the sketch below).
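A sketch of what the server-side transcription step might look like, using Google Cloud Speech-to-Text as one possible cloud service; the source does not say which service the project uses, and the function and file name below are illustrative.

```python
# Assumption-laden sketch: transcribe a short WAV clip with the
# google-cloud-speech client library (v2 API).
from google.cloud import speech

def transcribe_clip(path: str) -> str:
    """Return caption text for a short 16 kHz LINEAR16 WAV clip."""
    client = speech.SpeechClient()
    with open(path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        enable_automatic_punctuation=True,
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

print(transcribe_clip("meeting_clip.wav"))  # placeholder file name
```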

Research Areas

HCI; ASR; Accessibility.