American Sign Language (ASL) has long been a major communication tool for the deaf and mute community, yet it remains rare for the general public to master the language.
While smartphones have made it easier for the deaf and mute community to communicate with the general public, we have yet to see a product that allows ASL users to express themselves through software with the speed and efficiency of tools such as speech recognition or language translators.
The good news is, we may not be far from having one. A student in India has built a prototype that could help the general public communicate better with the deaf and mute community.
In a LinkedIn post, Priyanjali Gupta, a third-year computer science student at Vellore Institute of Technology in Tamil Nadu, shared how she developed an artificial intelligence (AI) model that translates ASL signs into English in real time.
According to Priyanjali, the model was developed with the TensorFlow Object Detection API, a software interface built on top of one of the world's most popular machine-learning libraries, designed by Google.
So far, the model translates signs using transfer learning from a pre-trained model called ssd_mobilenet.
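For readers curious what transfer learning looks like in this toolchain: the TensorFlow Object Detection API is configured through a protobuf text file, usually called pipeline.config, and fine-tuning an SSD-MobileNet checkpoint on a small custom dataset mostly comes down to editing a few fields in it. The fragment below is an illustrative sketch, not taken from Gupta's project; the paths, batch size, and file names are assumptions.

```
model {
  ssd {
    num_classes: 6  # Hello, I love you, Thank you, Please, Yes, No
  }
}
train_config {
  batch_size: 4
  # Point at the downloaded ssd_mobilenet checkpoint to reuse its weights
  fine_tune_checkpoint: "pretrained/ssd_mobilenet_v2/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
}
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "annotations/train.record"
  }
}
```

The key idea is that the backbone keeps the general-purpose visual features it learned on a large dataset, while training on the small sign dataset only has to adapt the detection heads to the six new classes.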
Priyanjali also shared a video demonstrating the model: hand signs picked up by the webcam are detected by the AI and immediately translated into English words.
“The dataset is made manually by running the Image Collection Python file that collects images from your webcam for all the mentioned below signs in the American Sign Language: Hello, I love you, Thank you, Please, Yes, and No,” she explained.
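A minimal sketch of the kind of image-collection script she describes might look like the following, using OpenCV to grab webcam frames and save them into one folder per sign. The label list comes from her quote; the folder layout, image count, and function names are assumptions for illustration, not her actual code.

```python
import os
import time
import uuid

# The six signs named in the article
SIGNS = ["hello", "i_love_you", "thank_you", "please", "yes", "no"]
IMAGES_PER_SIGN = 15  # assumed count, enough for a small transfer-learning demo

def image_path(root: str, sign: str, image_id: str) -> str:
    """Build the save path for one captured frame, e.g. data/hello/hello.<id>.jpg."""
    return os.path.join(root, sign, f"{sign}.{image_id}.jpg")

def collect_images(root: str = "data") -> None:
    """Capture IMAGES_PER_SIGN webcam frames for each sign, pausing between shots."""
    import cv2  # OpenCV; imported here so the path helper stays usable without it

    cap = cv2.VideoCapture(0)  # default webcam
    try:
        for sign in SIGNS:
            os.makedirs(os.path.join(root, sign), exist_ok=True)
            print(f"Collecting images for '{sign}', get into position...")
            time.sleep(3)  # time to pose before capture starts
            for _ in range(IMAGES_PER_SIGN):
                ok, frame = cap.read()
                if not ok:
                    continue
                cv2.imwrite(image_path(root, sign, uuid.uuid4().hex), frame)
                time.sleep(1)  # short pause so consecutive frames differ
    finally:
        cap.release()

if __name__ == "__main__":
    collect_images()
```

Each saved image would then be hand-labelled with bounding boxes before being fed to the object-detection training pipeline.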
Netizens were impressed by her innovation, and many were curious about the design and methodology behind the model.
Responding to one critical comment, she acknowledged that she had built her model on top of a pre-trained one, but said she was confident the open-source community would eventually build on such concepts to develop AI better suited to more complex tasks in the same vein.
“To build a deep-learning model from scratch just for sign detection is a really hard problem, but not impossible,” she wrote.
“And currently I’m just an amateur student but I am learning and I believe sooner or later our own open-source community which is much more experienced and learned than me will find a solution, and maybe we can have deep-learning models solely for sign languages.”
While the design is nowhere close to being widely adopted, it is still a promising idea that could help us better understand the deaf and mute community. Hopefully, this technology will soon be developed further and made available to the public.
What do you think about this? Share your thoughts!