
Google DeepMind Announces SignGemma for ASL-to-Text Translation

Image: A woman seated at a wooden desk signs “how” in American Sign Language, both hands held in curved thumbs-up shapes with knuckles touching, while a laptop in front of her displays the translated text “Hi, how are you today?”

Image Source: ChatGPT-4o


Google DeepMind has introduced SignGemma, an upcoming open model designed to translate sign languages into spoken-language text. Set for release later this year, SignGemma will join the Gemma family of models, with a focus on accessibility, inclusion, and multilingual capability.

The model is trained to handle a variety of sign languages, but its strongest and most thoroughly tested capability is translating American Sign Language (ASL) into English text.

A Step Toward More Inclusive AI

Sign language understanding remains a complex challenge for AI systems, requiring deep alignment between visual motion, grammar, and language intent. With SignGemma, DeepMind is aiming to bridge that gap—starting with ASL—to make digital communication more accessible for Deaf and Hard of Hearing communities.

The model’s multilingual training base suggests broader applicability in the future, but DeepMind notes that its current strength is ASL-to-English translation. It will be released as an open model, allowing researchers, developers, and accessibility advocates to evaluate and build on it directly.

Collaboration Ahead of Launch

As development continues, DeepMind is actively inviting feedback and participation from:

  • Developers exploring sign-language applications

  • Researchers working in accessibility, language, or AI

  • Deaf and Hard of Hearing communities worldwide

The team emphasizes that these voices are essential to ensuring SignGemma is both accurate and meaningfully useful in real-world settings.

Those interested in contributing or testing early versions can share feedback through the feedback form linked in DeepMind’s announcement.

What This Means

AI tools like SignGemma represent a major opportunity to bring underrepresented languages—like sign languages—into the center of digital interaction. While major language models have made progress in text and speech, sign language remains a complex, highly visual mode of communication that has often been left behind in mainstream AI development.

For the Deaf and Hard of Hearing communities—who have long been underserved by both tech design and linguistic resources—sign language translation models could help shift how accessibility is approached. Rather than treating sign language as an afterthought, SignGemma signals a potential move toward building tools where sign language is a starting point, not a retrofit.

By making the model open and inviting community collaboration, DeepMind is also acknowledging that inclusion isn’t just about access—it’s about agency. The value of a model like SignGemma won’t come from how well it generalizes in a lab, but from how well it reflects the realities and needs of the people it’s built to serve.

True inclusion in AI doesn’t begin with translation—it begins with participation.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.