My Projects

More details about my Ph.D. projects.


Modeling Speed & Timing Of American Sign Language (ASL) Animations

We used motion-capture data recorded from human signers to train machine learning models that predict realistic timing parameters for ASL animation, with a focus on inserting prosodic breaks (pauses), setting the durations of those pauses, and adjusting the signing rate of individual signs, based on sentence syntax and other features. Our goal is to automate this aspect of animation synthesis and to produce understandable output.

  • Analyzed ASL recordings using Python and engineered features based on sentence syntax to train and evaluate classification models that predict where to insert pauses in ASL animations. My model outperformed the baseline, achieving an F1-score of 80%.
  • Trained a Gradient Boosted Regression Trees model to adjust signing speed in ASL; it lowered RMSE by 23.8% compared to state-of-the-art models (see the sketch after this list).
  • Designed and built the machine learning models with 15 different features using Python and MATLAB.
  • Designed user experiments to test the validity and effectiveness of the system.
  • Handled all of the data collection and analysis.
  • Lead author on the ASSETS 2018 paper.
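Below is a minimal sketch of the two models described above, assuming scikit-learn and randomly generated stand-in data; the features and labels are illustrative placeholders, not the engineered syntactic features from our corpus.

    # Minimal sketch of the two models described above: stand-in data and
    # features; the real features were engineered from ASL sentence syntax.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
    from sklearn.metrics import f1_score, mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((500, 15))            # one row per candidate pause location
    y_pause = rng.integers(0, 2, 500)    # 1 = insert a prosodic break here
    y_rate = rng.random(500)             # relative signing rate of the sign

    X_tr, X_te, yp_tr, yp_te, yr_tr, yr_te = train_test_split(
        X, y_pause, y_rate, test_size=0.2, random_state=0)

    # Classifier: where to insert pauses.
    clf = GradientBoostingClassifier().fit(X_tr, yp_tr)
    print("pause F1:", f1_score(yp_te, clf.predict(X_te)))

    # Gradient Boosted Regression Trees: per-sign signing rate.
    reg = GradientBoostingRegressor().fit(X_tr, yr_tr)
    print("rate RMSE:", mean_squared_error(yr_te, reg.predict(X_te)) ** 0.5)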

Research Areas

HCI; NLP; AI; Accessibility.


Awards

2018 SIGACCESS Best Paper Award

Publications

Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, and Matt Huenerfauth. 2018. "Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations." In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '18), Galway, Ireland.

[Available on ACM Digital Library]


Sedeeq Al-khazraji. 2018. "Using Data-Driven Approach for Modeling Timing Parameters of American Sign Language." In Proceedings of the Doctoral Consortium of the 20th ACM International Conference on Multimodal Interaction.

[Available on ACM Digital Library]


Sedeeq Al-khazraji, Sushant Kafle, and Matt Huenerfauth. 2018. "Modeling and Predicting the Location of Pauses for the Generation of Animations of American Sign Language." In Proceedings of the 8th Workshop on the Representation and Processing of Sign Languages: Involving the Language Community, The 11th International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.

[Available on LREC]


Funding Support:

This material is based upon work supported by the National Science Foundation under awards 1400802, 1462280, and 1746056.
  • Amount of funding: $581,496. Matt Huenerfauth, PI. June 2008 to May 2014. “CAREER: Learning to Generate American Sign Language Animation through Motion-Capture and Participation of Native ASL Signers.” National Science Foundation, Faculty Early Career Development (CAREER) Award Program, CISE Directorate, IIS Division, HCC Cluster.
  • Amount of funding: $537,997. Matt Huenerfauth, PI. September 2014 to August 2019. “CHS: Medium: Collaborative Research: Immediate Feedback to Support Learning American Sign Language through Multisensory Recognition.” National Science Foundation, CISE Directorate, IIS Division.
Read More About The Project

Modeling the Use of Space for Pointing in American Sign Language Animation

Signers associate items under discussion with locations around their body and may later point to these locations to refer to the items again. We model and predict the most natural locations for spatial reference points (SRPs), based on recordings of human signers' movements. We evaluated ASL animations generated from the model in a user-based study.

We analyzed the spatial distribution of the reference points established by an ASL signer in a motion-capture dataset and modeled them with a Gaussian Mixture Model (GMM) over the three most common pointing clusters, which improved the pointing feature of an existing ASL animation tool.
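As a rough illustration of this approach, the sketch below fits a three-component Gaussian mixture to synthetic 3D pointing-target coordinates with scikit-learn; the actual model was fit to motion-capture recordings, and the coordinates here are stand-ins.

    # Sketch: fit a 3-component GMM to 3D locations of spatial reference
    # points (synthetic stand-in for the motion-capture data).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Stand-in (x, y, z) coordinates of pointing targets around the signer.
    points = np.concatenate([
        rng.normal(loc=c, scale=0.05, size=(100, 3))
        for c in ([-0.3, 0.0, 0.4], [0.3, 0.0, 0.4], [0.0, 0.1, 0.5])
    ])

    gmm = GaussianMixture(n_components=3, covariance_type='full').fit(points)
    print("cluster means:\n", gmm.means_)
    # A new pointing target can then be snapped to its most likely cluster:
    print("cluster of a sample point:", gmm.predict([[0.28, 0.02, 0.41]])[0])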


Research Areas

HCI; Machine Learning; NLP; AI; Accessibility.


Publications

Jigar Gohel, Sedeeq Al-khazraji, Matt Huenerfauth. 2018. "Modeling the Use of Space for Pointing in American Sign Language Animation." Journal on Technology and Persons with Disabilities, California State University, Northridge.

[Available on ScholarWorks]


ASL Animation Tools & Technologies

The goal of this research is to develop the EMBR system, a new software platform for generating animations of a virtual human character performing American Sign Language, and to investigate how to create tools that enable researchers to build dictionaries of animations of individual signs and to efficiently assemble them into sentences and longer passages (a sketch of this assembly idea follows the list below).

  • Managing and collaborating with a full-stack team of user experience researchers, designers, and software engineers.
  • Supervising master's students in the design of the system's GUI.
  • Programming backend components of the EMBR system.
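The following is a hypothetical sketch of the sentence-assembly idea: look up per-sign animation clips in a dictionary and lay them out on a timeline, inserting pauses at prosodic breaks. All names and durations are illustrative; the actual system drives the EMBR animation engine.

    # Hypothetical sketch of assembling a sentence timeline from a sign
    # dictionary (illustrative only; the real system drives the EMBR engine).
    from dataclasses import dataclass

    @dataclass
    class Clip:
        gloss: str
        start_ms: int
        duration_ms: int

    # Per-sign clip durations, as they might come from an animation dictionary.
    DICTIONARY = {"JOHN": 600, "LIKE": 500, "PIZZA": 700}

    def assemble(glosses, pause_after=(), pause_ms=300):
        """Lay out sign clips back to back, pausing after marked glosses."""
        timeline, t = [], 0
        for g in glosses:
            dur = DICTIONARY[g]
            timeline.append(Clip(g, t, dur))
            t += dur + (pause_ms if g in pause_after else 0)
        return timeline

    for clip in assemble(["JOHN", "LIKE", "PIZZA"], pause_after={"JOHN"}):
        print(clip)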

Research Areas

HCI; Full-Stack Development; Software Engineering.


Publications

Abhishek Kannekanti, Sedeeq Al-khazraji, Matt Huenerfauth. 2019 (to appear). "Design and Evaluation of a User-Interface for Authoring Sentences of American Sign Language Animation." 21st International Conference on Human-Computer Interaction, Orlando, Florida, USA.

[To be Available on Springer]


Generating ASL Animation from Motion-Capture Data

Collecting a motion-capture corpus of ASL and modeling data to produce accurate animations.

  • Migrating the motion-capture corpus to the new ELAN platform.
  • Writing a tool to clean and preprocess the data (see the sketch after this list).
  • Supervising the data annotation process.
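For illustration, here is a minimal sketch of reading time-aligned annotations out of an ELAN .eaf file using only the Python standard library; the .eaf format is XML, and the tier name in the usage comment is a placeholder.

    # Sketch: extract time-aligned annotations from an ELAN .eaf file
    # (.eaf is XML; the tier name used below is illustrative).
    import xml.etree.ElementTree as ET

    def read_tier(eaf_path, tier_id):
        root = ET.parse(eaf_path).getroot()
        # Map time-slot ids to millisecond values.
        times = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE", 0))
                 for ts in root.iter("TIME_SLOT")}
        rows = []
        for tier in root.iter("TIER"):
            if tier.get("TIER_ID") != tier_id:
                continue
            for ann in tier.iter("ALIGNABLE_ANNOTATION"):
                value = ann.findtext("ANNOTATION_VALUE", default="").strip()
                rows.append((times[ann.get("TIME_SLOT_REF1")],
                             times[ann.get("TIME_SLOT_REF2")],
                             value))
        return rows

    # e.g. read_tier("session01.eaf", "main gloss")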

Research Areas

NLP; Data Pre-processing.


Data & Corpora

The motion-capture corpus of American Sign Language collected during this project is available for non-commercial use by the research community.


Funding Support:

This material is based upon work supported in part by the National Science Foundation under award number 0746556.



Learn About The Corpus

Linguistic Stimuli for ASL Research

Animated ASL can provide useful perceptual stimuli for linguistic research: the technology can produce stimuli for display in experimental studies with ASL signers, to study ASL linguistics.

I am programming the generation of animated ASL stimuli for linguistic research experiments, including stimuli with minor variations in handshape, location, orientation, or movement.
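A hypothetical sketch of how such a stimulus set can be enumerated: start from a base sign specification and generate every variant that differs from it in exactly one parameter. The parameter names and values below are illustrative.

    # Hypothetical sketch: enumerate single-parameter variants of a base sign
    # specification for use as linguistic stimuli (values are illustrative).
    BASE = {"handshape": "B", "location": "chest", "orientation": "palm-in",
            "movement": "arc"}
    VARIANTS = {"handshape": ["B", "5", "A"],
                "location": ["chest", "chin"],
                "orientation": ["palm-in", "palm-out"],
                "movement": ["arc", "straight"]}

    def single_parameter_variants(base, variants):
        """Yield copies of `base` that differ from it in exactly one parameter."""
        for param, values in variants.items():
            for value in values:
                if value != base[param]:
                    yield {**base, param: value}

    for spec in single_parameter_variants(BASE, VARIANTS):
        print(spec)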

Research Areas

HCI; NLP; Accessibility.


Funding Support:

  • $23,616, Hearing Health Foundation.
  • $10,000 Research Seed Funding, Golisano College of Computing and Information Sciences, Rochester Institute of Technology.
  • $10,000 Scholarship Portfolio Development Initiative, National Technical Institute for the Deaf, Rochester Institute of Technology.


Read More

Automatic Speech Recognition (ASR) for Meetings

Developing an Android research tool, using IBM Watson, Java, and Python, to investigate the benefits of automatic captioning for mixed groups of hearing and hard-of-hearing individuals in meetings.
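As a minimal sketch of the server-side captioning step, the snippet below transcribes an audio file with the current ibm-watson Python SDK; the credentials, service URL, and file name are placeholders, and the SDK version may differ from the one the project used.

    # Sketch: transcribe meeting audio with IBM Watson Speech to Text
    # (ibm-watson SDK; API key, URL, and file name are placeholders).
    from ibm_watson import SpeechToTextV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    authenticator = IAMAuthenticator("YOUR_API_KEY")
    stt = SpeechToTextV1(authenticator=authenticator)
    stt.set_service_url("https://api.us-east.speech-to-text.watson.cloud.ibm.com")

    with open("meeting_audio.wav", "rb") as audio:
        response = stt.recognize(audio=audio, content_type="audio/wav").get_result()

    for result in response["results"]:
        print(result["alternatives"][0]["transcript"])

In a setup like the one described above, the Android app would send meeting audio to a server-side service of this kind and display the returned captions.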

  • Supervised REU students during different project stages and guided them through the experiments, including:
    • Developing different parts of the project (Android app and server-side services)
    • Recording experiment videos
    • Transferring data (audio, video, and textual annotations) into the ELAN software
    • Data preprocessing and cleaning
    • Using cloud services
    • Analyzing the results and writing reports

Research Areas

HCI; ASR; Mobile Development; Accessibility.