Baidu AI Cloud launched a sign language platform on Thursday that can generate digital avatars for sign language translation and live interpretation within minutes.
Released as a new offering of Baidu AI Cloud’s digital avatar platform XiLing, the product aims to help break down communication barriers for the deaf and hard-of-hearing (DHH) community by making automated sign language translation more accessible. An AI sign language interpreter built on the platform will serve at the upcoming 2022 Beijing Winter Paralympic Games.
Also released with the platform on Thursday were two all-in-one AI sign language translators, providing one-stop solutions with a streamlined set-up process and plug-and-use features.
AI has significantly reduced the production and operational costs of digital avatars, making it possible for AI sign language to scale up and serve more DHH individuals, said Tian Wu, Baidu Corporate Vice President.
Today, China is home to 27.8 million DHH individuals but faces a massive shortage of qualified professionals to serve their needs, with no more than 10,000 sign language interpreters. The gap is especially apparent in medical and legal settings.
For DHH individuals who want to study or socialize online without barriers, the XiLing AI sign language platform can be integrated into commonly used mobile applications, websites, and mini programs within a few hours, performing functions such as sign language video and livestream synthesis, text-to-sign language translation, and audio-to-sign language translation.
The all-in-one translators are tailored for offline scenarios to improve the accessibility of public services. Baidu’s translators come in two models: a fully offline version (V3) and a cloud-connected version (P3). Both support automatic speech recognition (ASR), speech translation and portrait rendering.
Compared to translation between spoken languages, sign language translation is more complicated, mainly because sign language is not a word-for-word rendering of verbal speech. To make AI sign language comprehensible, Baidu’s scientists had to resolve three key challenges: the clarity of speech recognition, the accuracy of sign language translation and the fluency of sign language movements.
To address speech recognition clarity, the XiLing AI sign language platform uses Baidu’s home-grown SMLTA speech recognition model, which achieves end-to-end speech recognition by jointly modeling acoustics and language.
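SMLTA itself is proprietary, but the idea of "integrating acoustics and language" can be loosely illustrated with a toy decoder that scores each candidate word by combining an acoustic score with a small language model. Everything below (the vocabulary, probabilities, and weighting) is made up for illustration and is not Baidu's model:

```python
import math

# Toy vocabulary and per-step acoustic log-probabilities (hypothetical values).
VOCAB = ["sign", "sing", "language"]
acoustic = [  # one dict of acoustic log-probs per decoding step
    {"sign": math.log(0.55), "sing": math.log(0.40), "language": math.log(0.05)},
    {"sign": math.log(0.05), "sing": math.log(0.05), "language": math.log(0.90)},
]

# Toy bigram language-model log-probs, keyed by (previous word, word).
lm = {
    ("<s>", "sign"): math.log(0.5), ("<s>", "sing"): math.log(0.5),
    ("sign", "language"): math.log(0.9), ("sing", "language"): math.log(0.1),
}

def decode(acoustic_steps, lm_weight=0.6):
    """Greedy decoding that sums acoustic and language scores at each step."""
    prev, out = "<s>", []
    for step in acoustic_steps:
        best = max(
            VOCAB,
            key=lambda w: step[w] + lm_weight * lm.get((prev, w), math.log(1e-6)),
        )
        out.append(best)
        prev = best
    return out

print(decode(acoustic))  # -> ['sign', 'language']
```

Here the acoustic scores alone barely distinguish "sign" from "sing" at the first step; the language-model term tips the decision toward the sequence "sign language", which is the benefit of modeling acoustics and language together.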
For the accuracy and refinement of sign language translation, Baidu has built the first neural network-based sign language translation model with a controllable degree of refinement. The model automatically learns translation knowledge, such as word-order adjustment, word mapping and length control, from real data, generating natural sign language that conforms to the habits of DHH people.
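Baidu's model learns these operations from data with a neural network; the rule-based toy below only illustrates what the three named operations do. The word map, the dropped function words, and the negation-reordering rule are all hypothetical examples, not Baidu's learned rules:

```python
# Hypothetical word-to-gloss map; None marks function words to drop (length control).
WORD_MAP = {
    "i": "ME", "not": "NOT", "understand": "UNDERSTAND",
    "do": None, "the": None, "report": "REPORT",
}

def text_to_gloss(sentence):
    """Toy text-to-sign-gloss translation: word mapping, length control,
    and a single word-order adjustment rule."""
    glosses = []
    for word in sentence.lower().split():
        gloss = WORD_MAP.get(word, word.upper())  # word mapping
        if gloss is not None:                     # length control: drop function words
            glosses.append(gloss)
    # Word-order adjustment: in many sign languages negation moves to the end.
    if "NOT" in glosses:
        glosses.remove("NOT")
        glosses.append("NOT")
    return glosses

print(text_to_gloss("I do not understand the report"))
# -> ['ME', 'UNDERSTAND', 'REPORT', 'NOT']
```

The point of making such a model's refinement "controllable" is that rules like these can be applied more or less aggressively depending on context, trading literal fidelity for signing that follows DHH conventions.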
To ensure the fluency of sign language actions, the AI sign language platform has organized nearly 11,000 actions based on the National Universal Sign Language Dictionary with its “action fusion algorithm,” so that the digital avatar’s gestures have the same coherence and expressiveness as human sign language. In addition, with the help of 4D scanning technology, mouth-shape generation has reached an accuracy of up to 98.5%.
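The details of Baidu's action fusion algorithm are not public. A minimal sketch of the general idea behind fusing dictionary actions into fluent sequences is to insert interpolated transition poses between the end of one sign clip and the start of the next, so the avatar does not jump between gestures. The joint representation and all values below are toy assumptions:

```python
def blend(pose_a, pose_b, t):
    """Linearly interpolate between two joint-angle poses (0 <= t <= 1)."""
    return [a + t * (b - a) for a, b in zip(pose_a, pose_b)]

def fuse(clip_a, clip_b, transition_frames=3):
    """Join two sign clips, inserting interpolated poses as a smooth transition."""
    last, first = clip_a[-1], clip_b[0]
    transition = [
        blend(last, first, (i + 1) / (transition_frames + 1))
        for i in range(transition_frames)
    ]
    return clip_a + transition + clip_b

# Two toy sign clips, each a list of 2-joint poses (angles in degrees).
clip1 = [[0.0, 10.0], [0.0, 20.0]]
clip2 = [[40.0, 60.0], [40.0, 70.0]]
fused = fuse(clip1, clip2)
print(len(fused))  # 2 original + 3 transition + 2 original = 7 frames
```

A production system would blend in a learned pose space with easing curves rather than linear interpolation over raw joint angles, but the structure (dictionary clips joined by synthesized transitions) is the same.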