Researchers in Japan are developing a system that recognizes sign language and automatically converts it into Japanese characters, requiring only a commercial motion sensor such as the Microsoft Kinect.
Mizuho Information & Research Institute Inc and Chiba University aim to improve communication between hearing-impaired and hearing people, with a prototype planned for October 2013 and a full-fledged version in 2014.
The system uses four steps to achieve its goal:
- Senses the movements of the signer’s forearms (wrists, elbows, etc.)
- Compares the movements with motion data for each word
- Automatically estimates the meanings of the movements
- Displays Japanese characters on a monitor in real time.
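The article does not describe the recognition algorithm itself, but the comparison step above can be sketched as template matching: an observed sequence of joint positions is scored against stored motion data for each word, and the closest match wins. The following is a minimal illustration of that idea; the template words, joint coordinates, and the simple Euclidean distance metric are all hypothetical assumptions, not details of the actual system.

```python
import math

# Hypothetical motion templates: word -> list of per-frame joint coordinates
# (wrist_x, wrist_y, elbow_x, elbow_y). A real system would use full Kinect
# skeleton streams and a far richer recognition model.
TEMPLATES = {
    "hello": [(0.0, 1.0, 0.0, 0.5), (0.2, 1.1, 0.1, 0.5), (0.4, 1.0, 0.2, 0.5)],
    "thanks": [(0.0, 0.8, 0.0, 0.4), (0.0, 0.6, 0.0, 0.4), (0.0, 0.4, 0.0, 0.4)],
}

def distance(seq_a, seq_b):
    """Mean Euclidean distance between two frame sequences (truncates to shorter)."""
    total = 0.0
    for fa, fb in zip(seq_a, seq_b):
        total += math.sqrt(sum((a - b) ** 2 for a, b in zip(fa, fb)))
    return total / min(len(seq_a), len(seq_b))

def recognize(observed):
    """Return the template word whose motion data best matches the observed frames."""
    return min(TEMPLATES, key=lambda word: distance(observed, TEMPLATES[word]))

# A slightly noisy observation of the "hello" motion:
observed = [(0.05, 1.02, 0.0, 0.5), (0.21, 1.08, 0.1, 0.5), (0.38, 1.01, 0.2, 0.5)]
print(recognize(observed))  # prints "hello"
```

In practice, sequence comparison would need to handle variable signing speed (e.g. with dynamic time warping) rather than frame-by-frame distances, but the structure mirrors the four steps: capture frames, compare against per-word motion data, pick the best estimate, and display the result.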
Mizuho Information & Research Institute will be responsible for the system’s application, while Chiba University will provide the technique for recognizing sign language and prepare motion data for each word.
If the system reaches fruition, it could provide an ingenious new way for hearing-impaired and hearing people to interact, without either having to spend months, or possibly years, learning sign language. This is especially true for web-based communication, where microphones are of little use to deaf participants.