Scooped by
Charles Tiayon
December 13, 2025 9:41 AM
Google Translate will now let users hear live speech translations in any headphones on its Android app, expanding a feature that was once only available with Pixel Buds. "Google Translate brings real-time speech translations to any headphones. Live speech translations were once only on the Pixel Buds. Google Translate’s latest update brings live speech translations, originally available only on the Pixel Buds, to any headphones you want, with support for over 70 languages. It’s rolling out today in beta and just requires a compatible Android phone with the Translate app (unlike Apple’s similar feature, which requires AirPods). It’s one of a few new features coming to Google Translate, along with improved text translations. Using Gemini, Translate will now offer more accurate translations of phrases like idioms and slang, which have a different meaning than what they literally sound like word for word, such as the expression “stealing my thunder.” [Image caption: Android users will soon have an option to hear real-time translations through their headphones. Image: Google] Today’s update also includes an expansion of the Practice feature in Translate, bringing it to 20 new countries and adding more supported languages. The Practice feature, which launched in beta in August, is a bit like Duolingo, but baked into Google Translate. It uses AI to make customized language learning sessions based on your skill level, including vocabulary practice and listening comprehension. Live speech-to-speech translation is rolling out today in the US, Mexico, and India on Android and will make its way over to the iOS Translate app next year. Improved text translations are rolling out today in the US and Mexico on both the Android and iOS Translate apps, as well as on the web version of Translate. Practice is still a beta feature in Translate, so it may not be available to everyone yet." 
Stevie Bonifield Dec 12, 2025 https://www.theverge.com/news/843483/google-translate-live-speech-translations-headphones #Metaglossia #metaglossia_mundus
Are humans the only beings on the planet that use language to communicate?
"Burg Giebichenstein
Kunsthochschule Halle
“Language can only deal meaningfully with a special, restricted segment of reality. The rest, and it is presumably the much larger part, is silence.” George Steiner
Are humans the only beings on the planet that use language to communicate? Can we decipher the nonhuman world around us without harnessing it to our own socialization, syntax, and lexicon? Is interspecies communication even possible? Translation has been described as a precondition that underlies all (human) cultural transactions upon which communication is based. It also is inherently political and stands at the forefront of so many of today’s questions around identity, gender, post-colonial criticism, feminist critique, machine translation and canon creation, yet its connection within the context of the nonhuman turn, interspecies communication, and eco-criticism has not yet been fully explored.
Whether we are talking about classic linguistic and literary translation or any number of related fields, including language and literature, cultural studies, performance, and visual and media arts, the core question that translators and theorists of translation have been debating for centuries remains the same: is it possible to translate without interpreting? Is linguistic and cultural equivalence even possible? These questions become all the more urgent in the limit-case of interspecies communication. Can we apply empathic modes of translation to nonhuman articulations, wherein translation involves a form of metamorphosis, not of text, but of the translator? As such, translators are something of a hybrid species, with one foot in each culture and language, whose very existence revolves around traveling between worlds. There is something of a mythical being about translators, akin to a chameleon or centaur. In this course, we will not be engaging in a scientific exploration of interspecies communication, but examining theories around empathic translation, a process that sees translation not merely as the transformation of a text, but of the translator themself.
Emerging and classical theories of translation can offer a paradigm for engaging with plant and animal articulation: not language as such, but different forms of articulation perceived through the senses, one in which our hearing and seeing, “once intertwined and attentive to the calls and cries of animals, all but disappeared with the invention of the alphabet, retreating into a kind of silence.”
In David Abram's words: “By giving primacy to perception we can see the natural world, not as inert and passive, but as dynamic and participatory. The winds, rivers and birds speak in their own way (if we listen), the sounds of nature not only have informed indigenous languages, but language in general--humans are but one being intertwined with other beings and ‘presences.’ This perspective sees the landscape as a sensuous field, and human perception as but one point of view that is in reciprocity, in expressive communication, with other points of view and ways of being.”
How can theories of translation help us make sense of this new view of a world teeming with language and sentience? What theories abound in reference to the multiplicity of “language,” even as Walter Benjamin would argue for a “universal (human) language”? What practical tools does translation studies offer, and what bridges can it forge between the disciplines? The first half of the seminar focuses on key theoretical concepts relevant to the history and practice of translation. In the second half, students will engage in translation experiments that intersect with their own artistic/design practice. A final project should be considered a first draft of something that could develop later into a larger project.
The course will be taught in English and German.
This seminar is ideally suited to students interested in: Literature, Translation Theory / Translation / Cultural Studies / Critical Theory, Creative Writing/ Post-humanism, Trans-humanism, Eco-criticism, the More-than-Human Turn.
Teachers
Dr. Zaia Alexander"
https://www.burg-halle.de/en/course/l/talk-with-the-animals-translation-in-a-more-than-human-world
#Metaglossia
#metaglossia_mundus
#métaglossie
"Advances made through No Language Left Behind (NLLB) have demonstrated that high-quality machine translation (MT) can scale to 200 languages. Large Language Models (LLMs) were later adopted for MT, increasing quality but not necessarily extending language coverage. Current systems remain constrained by limited coverage and a persistent generation bottleneck: while crosslingual transfer enables models to understand many undersupported languages to some degree, they often cannot generate them reliably, leaving most of the world’s 7,000 languages—especially endangered and marginalized ones—outside the reach of modern MT. Early explorations in extreme scaling offered promising proofs of concept but did not yield sustained solutions. We present Omnilingual Machine Translation (OMT), the first MT system supporting more than 1,600 languages. This scale is enabled by a comprehensive data strategy that integrates large public multilingual corpora with newly created datasets, including manually curated MeDLEY bitext, synthetic backtranslation, and mining, substantially expanding coverage across long-tail languages, domains, and registers. To ensure both reliable and expansive evaluation, we combined standard metrics with a suite of evaluation artifacts: the BLASER 3 quality estimation model (reference-free), the OmniTOX toxicity classifier, the BOUQuET dataset (a newly created, largest-to-date multilingual evaluation collection built from scratch and manually extended across a wide range of linguistic families), and the Met-BOUQuET dataset (faithful multilingual quality estimation at scale). We explore two ways of specializing an LLM for machine translation: as a decoder-only model (OMT-LLaMA) or as a module in an encoder–decoder architecture (OMT-NLLB). The former is a model built on LLaMA3, with multilingual continual pretraining and retrieval-augmented translation for inference-time adaptation. 
The latter is a model built on top of a multilingual aligned space (OmniSONAR, itself also based on LLaMA3), and introduces a training methodology that can exploit non-parallel data, allowing us to incorporate the decoder-only continual pretraining data into the training of an encoder–decoder architecture. Notably, all our 1B to 8B parameter models match or exceed the MT performance of a 70B LLM baseline, revealing a clear specialization advantage and enabling strong translation quality in low-compute settings. Moreover, our evaluation of English-to-1,600 translations further shows that while baseline models can interpret undersupported languages, they frequently fail to generate them with meaningful fidelity; OMT-LLaMA models substantially expand the set of languages for which coherent generation is feasible. Additionally, OMT models improve in cross-lingual transfer, coming close to solving the “understanding” part of the MT puzzle for the 1,600 languages evaluated. Beyond strong out-of-the-box performance, we find that finetuning and retrieval-augmented generation offer additional pathways to improve quality for a given subset of languages when targeted data or domain knowledge is available. Our leaderboard and main human-created evaluation datasets (BOUQuET and Met-BOUQuET) are dynamically evolving towards omnilinguality and freely available.
Authors:
Omnilingual MT Team
Belen Alastruey
Niyati Bafna
Andrea Caciolai
Kevin Heffernan
Artyom Kozhevnikov
Christophe Ropers
Eduardo Sánchez
Charles-Eric Saint-James
Ioannis Tsiamas
Chierh CHENG
Joe Chuang
Paul-Ambroise Duquenne
Mark Duppenthaler
Nate Ekberg
Cynthia Gao
Pere Lluís Huguet Cabot
João Maria Janeiro
Jean Maillard
Gabriel Mejia Gonzalez
Holger Schwenk
Edan Toledo
Arina Turkatenko
Albert Ventayol-Boada
Rashel Moritz
Alexandre Mourachko
Surya Parimi
Mary Williamson
Shireen Yates
David Dale
Marta R. Costa-jussa
Publisher
arXiv
Research Topics
Natural Language Processing (NLP)" https://ai.meta.com/research/publications/omnilingual-mt-machine-translation-for-1600-languages/ #metaglossia_mundus #metaglossia
"Between Languages: How English dubs are rewriting emotion across global anime hits. Moving past the dated ‘sub vs dub’ debate, the English voice casts of ‘Frieren: Beyond Journey’s End’, ‘Jujutsu Kaisen’ Season 3, and ‘Sentenced to Be a Hero’ break down how dubbing carries subtext and emotion for a global anime audience. Published - March 18, 2026 05:02 pm IST. Ayaan Paul Chowdhury
Whether it is a teenager carrying the weight of a citywide massacre, an immortal mage learning to recognise grief too late, or a goddess who swings between divine poise and child-like exuberance, the current anime slate feels packed with a range of curious characters. Once treated as an auxiliary track for international markets, the English language dub for a lot of these popular anime is now far closer to the centre of that exchange, shaped by a diverse group of talented voice actors.
Crunchyroll’s Winter 2026 lineup brings together returning giants and new experiments, with Jujutsu Kaisen’s third season continuing its descent into the Culling Game arc, Frieren: Beyond Journey’s End’s sophomore run refining its study of time and memory, and the new Sentenced to Be a Hero reframing fantasy heroism as institutional punishment. What links the three is the degree to which their English casts are asked to navigate tone, emotion and cultural specificity, across languages that do not always align cleanly.
In Jujutsu Kaisen, Adam McArthur approaches its main protagonist Yuji Itadori with a grounded kind of pragmatism. “If you boil down what he’s feeling, it’s immense guilt,” he says, describing the aftermath of the Shibuya Incident, where Yuji becomes complicit in a catastrophic mass murder under Sukuna’s control. The third season pushes him into the Culling Games, a sprawling death tournament engineered to destabilise Japan’s cursed energy system, with Yuji positioned between execution orders and moral obligation.
McArthur’s task involves holding together the memory of a character who once operated with an unguarded, shounen-MC optimism while allowing that optimism to persist in altered form. “What I love about Yuji is he is always going to be Yuji. He’s going to choose good even when bad things happen to him. He continues to do that.”
The cost of sustaining that emotional register revealed itself in the routine he describes. “I’d go in and record those scenes. The director would be like, ‘okay thanks!’, I’d get in my car, cry on the way home, try to act normal, only to come back next week and do it again. It was the scene with Nanami, the scene with Nobara, all of it. It’s not just one episode. It keeps coming back,” he recalls. But McArthur does not frame that strain as a burden. “It’s tough, but it’s also rewarding. You don’t always get to do that with characters in animation. You get to do the light stuff and the really heavy stuff, and that’s a treat.”
Kayleigh McKee approaches Yuta Okkotsu through a more technical lens. “He’s a year older, he has more friends and more support, so I wanted to keep that friendliness from the movie but make him come across as more competent,” she says, describing Yuta’s reintroduction after Jujutsu Kaisen: 0 as a villain-apparent tasked with killing Yuji. “When he first appears, he almost seems like a villain, but he’s only portraying a villain to Yuji. So, I was portraying a character who was trying to portray himself as a villain. That was a unique challenge, and I had a lot of fun with it. I tried to sprinkle in little bits where, if you know he’s not fully selling it, you can pick up on that.” Her work extends to Kirara, a confident and mischievous trans Jujutsu sorcerer whose playful exterior masks her strategic combat style. As one of the few openly trans women working prominently in anime dubbing, McKee occupies a space that has historically remained opaque even within an industry long accustomed to gender-fluid casting traditions — Mayumi Tanaka’s Luffy, Masako Nozawa’s Goku, and Junko Takeuchi’s Naruto stand as defining performances that have shaped these iconic shounen MCs so completely that questions of gender fall away in the act of listening.
“I’ve portrayed non-binary characters, binary trans characters, cis characters, creatures, monsters,” she says. “I’ve seen overwhelming support from fans. People will say things like they want to see me voicing an entire series, which is flattering, but what it really comes down to is skill, which is something anyone could develop. I just had more of an existential reason to put the work into it,” she smiles. McKee also situates that effort within the industry’s response. “Most directors use me as a utility artist. They know I can do this range without hesitation. As long as I can portray the role in a respectful and representational way, then why not use me. That’s very flattering.”
The contrast between Japanese and English performances remains a constant negotiation. Mallorie Rodak describes her approach to Frieren as more than mere imitation. The elven mage, who has lived for over a millennium, speaks with an insouciance that risks reading as absence if handled too literally, and Rodak adjusts by introducing minimal inflections that suggest interiority without overt signalling. “Frieren is a difficult character because of her lack of emotion,” she says. “There’s a sliding scale. You don’t want to be completely devoid of emotion because then it feels boring, but you also don’t want to infuse too much emotion because it won’t feel like someone who’s been alive for a thousand years.” She traces that balance back to the original performance. “Atsumi Tanezaki’s work was a big inspiration. We hear the Japanese voices before we record, so the depth she brought to the character really informed how I approached it.”
Restraint seems to define Frieren more broadly — the series emerged from its first season as one of the most critically celebrated anime in recent years, with its contemplative pacing and attention to detail distinguishing it within a crowded field. This second outing continues Frieren’s journey north, while maintaining the episodic structure that allows smaller interactions to accumulate into something larger. The English dub follows that structure closely, with Jill Harris and Jordan Dash Cruz locating Fern and Stark’s emotional cores through specific moments.
Harris points to a brief exchange involving a cute head pat as the key to understanding Fern, whose outward composure conceals a need for validation that shapes her behaviour. “There’s a flashback where Heiter says that Fern needs a lot of praise,” she says. “And Frieren gives him a head pat, and it just clicked for me. Fern often seems stoic, but I think she often feels unappreciated. When she passes the mage exam and tells Frieren the spell she chose, and Frieren gives her a little head pat, Fern is just beaming. She wants to do a good job. She wants to make people happy. She wants head pats”, Harris smiles.
Cruz approaches Stark by identifying the gap between his self-perception and the way others see him. “For me, it was the dragon fight and also the episode with his brother,” he says. “You get to see exactly how strong Stark is, but you also witness why he views himself negatively. His relationship with his father also explains why he believes he is weak even when everyone else sees him as strong. Once you know that, it informs everything. You understand why he reacts the way he does, why he doubts himself.”
The actors’ reflections converge around the genre’s ability to hold more tender emotions within heightened settings. “When I watch something, it’s almost always fantasy or sci-fi,” Rodak says. “There’s an immersion that feels separate from reality, but the relationships and emotions are universal.” Cruz extends that idea through identification. “You can put yourself in these characters. You feel like you’re going on the journey with them.” Harris frames it in terms of accessibility. “My parents don’t watch anime, but they watched Frieren. It has a human element that appeals to everyone.”
Though Frieren refined the genre’s introspective potential, 2026’s latest offering, Sentenced to Be a Hero, pushes in the opposite direction, using its premise to interrogate the structures that define heroism itself. The series centres on Xylo Forbartz, a condemned figure forced into endless cycles of combat as part of a penal system that weaponises heroism as a cruel form of punishment. Emi Lo describes the appeal of this inversion. “Nothing is as it seems,” she says. “Heroes are criminals. Goddesses are weapons. It makes you want to keep looking into the world because everything you expect is flipped.”
Lo’s performance as Teoritta — the self-proclaimed “Goddess of Swords”, who forges a contract with Xylo and commands an arsenal of summoned blades while oscillating between divine authority and childish enthusiasm — reflects that instability. “She’s a gremlin,” Lo chuckles. “When she summons a sword, she’s excited and energetic, and that comes from her desire to help. She just believes she needs to help. If you think about that as childhood innocence, where a kid hones in on one emotion, that helped me balance it.”
Dawn M. Bennett approaches Patausche Kivia with an emphasis on evolution. The disciplined captain of the Holy Knights enters the story as a staunch believer in order and duty, a figure shaped by doctrine who gradually learns to question it. “What resonated with me was her ability to listen,” Bennett says. “I expected her to be very set in her ways, always arguing with Xylo, but she becomes more open-minded. She realises that if she wants to do the right thing, she has to question her own beliefs.”
The idea of questioning preconceptions seems to align quite well within the wider moment of anime’s current global circulation, where streaming platforms have expanded access while also increasing demand for localisation. Increased visibility, expanded distribution, and a more diverse pool of performers have altered the expectations placed on English-language adaptations, pushing them toward a level of nuance that parallels their Japanese counterparts while retaining their own distinct rhythms. The actors navigating this space approach translation as an ongoing negotiation, where each line becomes an opportunity to locate meaning within the gaps between languages. And though the distance between languages persists, within that distance there is often room for a different kind of truth to take hold.
Rodak recalls how a single, three-word line from Frieren — “Aura, kill yourself” — circulated online, drawing viewers who had not previously engaged with the series. “People would come up to me and say they weren’t going to watch the show, but they saw that clip with Aura and it convinced them,” she says. “There’s something cold about the way Frieren delivers that line, her back is turned, she’s walking away. It shows her power for the first time. That reaction, the memes, all of it made that scene one of my favourites,” Rodak beams.
Jujutsu Kaisen Season 3, Frieren: Beyond Journey’s End Season 2 and Sentenced to Be a Hero are currently streaming on Crunchyroll, with new episodes airing weekly" https://www.thehindu.com/entertainment/movies/english-anime-dubs-voice-cast-frieren-beyond-journeys-end-jujutsu-kaisen-sentenced-to-be-a-hero/article70757642.ece #metaglossia_mundus #metaglossia
"Korean fiction has experienced a rapid surge in popularity in the English-speaking world in recent years. Many attribute this to the Korean Wave that's been sweeping through cinema and music. Whatever the reason, Korean writers have been winning major literary awards and attracting the spotlight for their achievements. With so much amazing fiction to choose from, there are tons of great options for readers. We've compiled a list of some of the best Korean fiction in multiple genres from the 2010s and 2020s, including powerhouse authors like Han Kang and Bora Chung alongside rising stars like Sang Young Park.
Cursed Bunny
by Bora Chung translated by Anton Hur
Originally published in Korean in 2017, Cursed Bunny was a finalist for the 2023 National Book Award for Translated Literature. This haunting collection of short stories is deliciously eerie, sometimes veering into body horror and at other times utilizing surrealism and even absurdism.
Kim Jiyoung, Born 1982
by Cho Nam-Joo translated by Jamie Chang
Kim Jiyoung, Born 1982's Korean release in 2016 coincided with the global #MeToo movement. Featuring a sort of Korean everywoman figure as its protagonist, the novel dives right into a powerful critique of misogyny in the contemporary era. The book's interrogation of gender inequality is enacted both through its unique premise (a woman takes on the consciousness of a myriad other women) and its unsettling narration, delivered by the male psychiatrist evaluating her case.
Love in the Big City
by Sang Young Park translated by Anton Hur
Longlisted for the 2022 International Booker Prize (among others), Love in the Big City has garnered enough popularity that it was recently made into an independent film. Told in sections organized around different relationships in the protagonist's life, it has a surprisingly lighthearted feel to it for a book that contends with homophobia, dysfunctional families, unhealthy relationships, and loneliness. It's a thought-provoking read about a young gay man's quest for love.
Untold Night and Day
by Bae Suah translated by Deborah Smith
Untold Night and Day is a fascinatingly disorienting work of literary fiction. It begins firmly enough, grounded in reality, but as the story unfolds, characters and experiences begin to collapse in on each other. If you're a reader who enjoys an unconventional and potentially challenging read, this book is perfect for you.
Welcome to the Hyunam-dong Bookshop
by Hwang Bo-reum translated by Shanna Tan
The popularity of "healing fiction," a label some have applied to select contemporary Korean and Japanese fiction with "cozy" and fabulist elements, has been growing over the past several years. Welcome to the Hyunam-dong Bookshop falls into this category and makes for a comforting and introspective read that focuses on the balance between ambition and happiness.
The Hole
by Hye-young Pyun translated by Sora Kim-Russell
This novel is a psychological thriller at its finest. Grappling with some of the darker aspects of life—such as control, guilt, and loss—this deeply uncomfortable story of a man who has been paralyzed in a car accident pushes readers to reflect on the consequences of living. As he is subjected to abuse and neglect, the lines between truth and lies blur in terrifying ways.
Greek Lessons
by Han Kang translated by Deborah Smith and Emily Yae Won
Han Kang was awarded the Nobel Prize in Literature “for her intense poetic prose that confronts historical traumas and exposes the fragility of human life.” Although it was the first book Kang published after her smash hit The Vegetarian, Greek Lessons wasn't published in English until over a decade later. It grapples with themes of loss and trauma using prose that exhibits the author's roots as a poet."
By Kobo • March 15, 2026
https://www.kobo.com/blog/the-best-korean-fiction-in-translation
#metaglossia
#metaglossia_mundus
"Translation technology tools play a pivotal role in overcoming linguistic and other communication-related obstacles in different types of crises. Drawing on real-life examples, the chapter explores translation technology as an agency-enabling solution that facilitates access to instant, accessible information. The complex interaction between translation tools, human actors, and society is mapped through the lenses of sociological theories of agency. The chapter highlights how the development and deployment of such technologies can affect both crisis preparedness and containment, frequently amplifying the voices of governments and technology providers at the expense of those directly affected. To establish inclusive, technology-enabled communication, the chapter offers recommendations for contextually relevant crisis policies and management strategies, advocating for adaptable approaches and positioning human translators as safeguards against overreliance on AI tools. It also underlines the need for transparent, trustworthy communication channels and balancing sociocultural factors and power dynamics, ensuring that crisis communication is inclusive and people-centred." https://www.taylorfrancis.com/chapters/edit/10.4324/9781003271314-15/translation-technologies-automation-crisis-situations-khetam-al-sharou-mieke-vandenbroucke-gert-vercauteren #metaglossia #metaglossia_mundus
The GDELT Project, which collects and analyses global news and social data in real time, is disclosing experiments using AI to process large volumes of news and policy documents. It continuously gathers content in more than 100 languages and updates key datasets about events, relationships and images about every 15 minutes. GDELT also runs a platform translating news written in 65 languages. Recent tests include extracting leadership-change announcements and converting a 3,100-page U.S. bill into an infographic.
"Reporter: Jinju Hong, 2026-03-16 13:05:00
GDELT unveils AI experiments translating multilingual news, extracting leadership changes and turning a 3,100-page U.S. defence bill into an infographic. The GDELT Project, which collects and analyses global news and social data in real time, is releasing various experiments that use artificial intelligence to analyse large volumes of news and policy documents.
An online outlet, Gigazine, reported on March 15 local time that the GDELT Project is a global archive that continuously collects content published in more than 100 languages worldwide, including broadcasts, newspapers and web news, and builds it into a database. It links various elements, including people, organisations, places, events and news sources, into a single network. It provides data on events around the world, their background and trends in public opinion.
The project was founded by data scientist Kalev Leetaru and political scientist Philip Schrodt, and it collects news and social media (SNS) data from 1979 to the present. The collected data are used as a basis for analysing global political, economic and social trends by quantitatively coding social events and reactions to them.
GDELT in particular releases large datasets so researchers and journalists can use them for analysis. The data consist of three streams: event data that classify physical activity worldwide into more than 300 categories; relationship data that record people, organisations, places, topics and emotions; and data that analyse the visual story of news images. The data are updated about every 15 minutes.
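The datasets described above are exposed through public, keyless HTTP endpoints, so researchers can query them programmatically. As a minimal sketch, the following builds a request URL for GDELT's DOC 2.0 article-search API; the endpoint and parameter names (`mode`, `timespan`, `maxrecords`, `format`) reflect GDELT's published API, but the example search string is purely illustrative, and both should be verified against GDELT's own documentation before use.

```python
from urllib.parse import urlencode

# Base endpoint of GDELT's DOC 2.0 API (public, no API key required).
DOC_API = "https://api.gdeltproject.org/api/v2/doc/doc"

def build_doc_query(query: str, timespan: str = "15min",
                    mode: str = "artlist", max_records: int = 25) -> str:
    """Build a URL asking GDELT for recent articles matching `query`.

    timespan="15min" mirrors the ~15-minute update cadence of the
    event, relationship, and image datasets described above.
    """
    params = {
        "query": query,
        "mode": mode,          # "artlist" = a list of matching articles
        "timespan": timespan,  # how far back to search
        "maxrecords": max_records,
        "format": "json",
    }
    return DOC_API + "?" + urlencode(params)

# Hypothetical search: recent Spanish-language coverage of machine translation.
url = build_doc_query('"machine translation" sourcelang:spanish')
print(url)
```

Fetching the resulting URL (e.g. with `urllib.request.urlopen`) would return a JSON article list that can be polled on the same 15-minute cycle as the underlying datasets.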
GDELT also operates a translingual platform that processes global news written in 65 languages through real-time translation using its own translation system.
Recently, it has also been actively conducting analysis experiments using AI. The GDELT Project disclosed an experiment that uses a Gemini-based model to automatically extract announcements of leadership changes at governments or companies from global news and organise them into a knowledge graph. In the process, AI was used to generate reports that go beyond organising personnel information, inferring the political and economic background of each change.
In another experiment, work was carried out to input the roughly 3,100-page U.S. National Defense Authorization Act into AI and convert the entire bill into a single infographic. In the process, various analyses were also performed, including topic analysis of the bill, organisation of related bills and generation of expected questions.
GDELT also disclosed a large-scale translation experiment. According to a February 2026 announcement, it translated about 3 million TV news broadcasts accumulated over 25 years using AI. The cost to translate a total of 62 billion characters of broadcast data, amounting to about 6 billion seconds of footage, was about $74,634. Using past methods, the same work is estimated to have required millions of dollars.
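The reported figures imply a striking unit cost, which can be sanity-checked with a few lines of arithmetic using only the numbers quoted in the article:

```python
# Figures as reported in GDELT's February 2026 announcement
total_cost_usd = 74_634            # total AI translation cost
total_chars = 62_000_000_000       # 62 billion characters translated
total_seconds = 6_000_000_000      # ~6 billion seconds of broadcast footage

cost_per_million_chars = total_cost_usd / (total_chars / 1_000_000)
broadcast_hours = total_seconds / 3600
cost_per_hour = total_cost_usd / broadcast_hours

print(f"~${cost_per_million_chars:.2f} per million characters")  # ~$1.20
print(f"~${cost_per_hour:.4f} per broadcast hour")               # ~$0.0448
```

Even a hundredfold higher unit cost would put the whole job at roughly $7.5 million, which is consistent with the article's estimate that past methods would have required millions of dollars.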
Such projects are assessed as examples showing the possibility that AI can comprehensively analyse vast amounts of news and policy documents. Experts say such data-based analysis could become a new tool for understanding global political and economic trends." https://www.digitaltoday.co.kr/en/view/39425/ai-translates-25-years-of-news-in-100-countries-summarises-3100-page-bill-in-big-data-test #metaglossia #metaglossia_mundus
"The world's first Tibetan large language model and its application, DeepZang, has been officially unveiled in Lhasa, Southwest China's Xizang Autonomous Region. This model fills the gap in indigenous large language models at both the national and ethnic levels, while also facilitating the innovation and inheritance of Tibetan ethnic culture in the AI era, the company's chairman told the Global Times.
Developed independently by CHOKNOR Information Technology Co., Ltd. in Xizang, the model and its application are the first Tibetan large language model to complete national filing for generative AI in China, filling a technological gap in this field globally, according to local media Tibet.cn.
The World Record Certification Agency (WRCA) also awarded the certification of "the World's first Tibetan large language model" at DeepZang's launch event, chinanews.com reported on Monday.
Tenzin Norbu, chairman of the CHOKNOR company, told the Global Times on Monday that this open-source large model platform is China's first ethnic language AI open platform designed for multilingual and multimodal capabilities. The DeepZang platform supports over 80 languages, including Tibetan, Putonghua, English, Mongolian and Uygur, enabling an integrated approach to listening, speaking, translating, recognizing and thinking, Tenzin added.
The DeepZang model marks a strategic leap for China to take the lead in the AI field for ethnic languages, officially inaugurating high-quality AI development of the Tibetan language in Xizang and ushering in the era of AI for Tibetan, Tibet.cn reported.
The DeepZang application was also launched on Sunday, supporting intelligent interactions in Tibetan, Putonghua and English. Users can speak or type a sentence to access real-time mutual translation, Tibetan-language Q&A and cultural knowledge inquiries, according to the report.
Shortly after its launch on Sunday, the app recorded an average of 4,000 downloads per hour, the Global Times learned from the company.
Tenzin said the company has built a high-quality parallel corpus of nearly 70 million precise Tibetan-Putonghua language pairs. Additionally, they have completed large-scale speech data collection across the three major Tibetan dialect regions, establishing China's largest and most accurately annotated Tibetan speech database to date, he added.
As shown in a video released by the Xizang Daily, several users voice-inputted instructions in different Tibetan dialects, and the application achieved accurate recognition and delivered prompt responses with high efficiency.
Tenzin said the development of this large language model has filled the gap in Tibetan large language models at the national and ethnic levels, and it also gives full play to the Tibetan cultural value, facilitating the innovation and inheritance of Tibetan ethnic culture in the AI era.
An official from Lhasa people's government was quoted by Tibet.cn as saying that the successful development of DeepZang has provided a valuable exploratory model for the global AI community in the processing of low-resource languages. It stands as a testament that modern information technology can effectively underpin the preservation and development of traditional cultures, the official added.
"Through this large language model and its application, we also aim to provide an authentic platform for global users seeking to learn about Tibetan culture, history and politics, thereby preventing the dissemination of distorted ideologies and values," Tenzin said.
In another video posted by the Lhasa Women's Federation on its official WeChat account, a student from Xizang University said that DeepZang's translation function is very useful, though the translation of some four-character idioms is still not fully developed.
Tenzin said that the model is currently limited by the scope of its corpus data, and the company will continue to refine and update it based on user feedback.
In the future, this large language model is set to extend its capabilities to sectors including education, healthcare and ecology, delivering convenient and efficient services to enterprises and government agencies..."
https://www.globaltimes.cn/page/202603/1357052.shtml
#metaglossia
#metaglossia_mundus
"A Stanford engineer has demonstrated that frontier language models can run directly on everyday edge devices using convex optimization, eliminating reliance on cloud servers and costly GPUs. The breakthrough, unveiled at NeurIPS 2024, enables secure, lower-cost, personalized AI with early international commercial deployments.
United States, March 12, 2026 -- A Stanford engineer has shown that the world’s most advanced "frontier" language models can now run directly on regular edge and local devices. This removes the sole reliance on cloud servers and costly specialized hardware.
This engineer used advanced mathematical optimization techniques to show that sophisticated and helpful "frontier" language models can run on the personal devices people already have. This change means the industry no longer has to rely on the cloud or expensive specialized GPU hardware.
Breaking the Cloud Dependency
Running advanced neural networks usually means using an army of cloud computing resources, which requires expensive GPU farms, a steady internet connection, and per-token API fees. Miria K. Feng, a PhD candidate in electrical engineering at Stanford University, has successfully merged the potential of mathematical convex optimization techniques with large-scale deep learning applications for far more accessible and personalizable AI. Powerful frontier models running on your local devices mean greater security, since your data stays local, and lower costs, since there are no per-token fees to pay to a few large tech conglomerates.
Using mathematical optimization to reformulate neural networks is not a new idea; it was proposed by Turing Award winner Yoshua Bengio. But the practical deployment of these elegant theoretical techniques in large-scale AI was first publicly announced in Miria's work at NeurIPS 2024. This quiet breakthrough has led to frontier models that efficiently run personalizable inference on everyday edge devices that we already carry in our back pockets.
“The goal was to prove that you don't need a GPU cluster or fiber internet connection to use frontier technology,” said Miria. “We use principled convex optimization techniques in conjunction with machine learning to cut the computing power needed without sacrificing quality in results. This dramatically reduces barriers to entry for global users and helps safeguard user privacy since data is not being constantly shared on the cloud."
From Academic Research to Market Launch
Early deployments of Miria's innovations in Canada, Singapore, and Japan to build accessible, everyday, personalized AI tools were a resounding success. Her commercial deployments span widely, from Toyota Motor Corporation in Nagoya, Japan, to FCS Solutions in Singapore.
Meanwhile, Miria is continuing her cutting-edge doctoral work at Stanford University as a Rambus Corporation Fellow, with beta tests in the hospitality sector set to go live in Los Angeles and Las Vegas in 2026. Official news about partnerships is expected later this year.
A Multidisciplinary Approach
Her unique background shapes Miria's technical work. She is a Kiwanis Music Festival gold medalist and concert pianist and is currently a student of Melinda Lee Masur. Her top national performances in the Pascal, Fermat, and Euclid mathematics competitions continue to give her a creative yet principled approach to engineering. She paid her own way through school and has lived in several countries, which led her to focus on "equitable access" and to build tools that work for everyone, regardless of local infrastructure or income.
About the researcher: Miria K. Feng is a doctoral researcher in the Department of Electrical Engineering at Stanford University, focusing on electrical engineering and convex optimization for deep learning. As a Stanford Graduate Fellowship winner and a Rambus Corporation Fellow, she connects theoretical math optimization with real-world applications through refreshing innovation.
Contact Info: Name: Miria Feng, Organization: 9-Figure Media, Website: https://9figuremedia.com/"
https://markets.businessinsider.com/news/stocks/new-technology-brings-advanced-language-models-to-everyday-devices-1035923587 #metaglossia #metaglossia_mundus
"China adopts a law promoting Mandarin as the 'common national language'
On Thursday, China's National People's Congress approved a so-called 'ethnic unity' law that human rights advocates consider harmful to the country's minority languages and cultures.
China adopted the law on 'promoting ethnic unity and progress' on Thursday during its annual political event, the Two Sessions (a parliamentary gathering at which the Chinese government sets its broad economic and political directions for the year ahead), approving it without debate. The new law, passed by the National People's Congress (NPC), formalises policies to promote Mandarin as the 'common national language' in education, official business and public places.
Beijing presents the law as a tool of modernisation and prosperity, asserting that it will strengthen 'the sense of common community of the Chinese nation' and improve minorities' employment prospects through mastery of Mandarin. Academics and human rights advocates, however, see it as the legal consolidation of a policy of forced assimilation, according to the BBC.
According to the British outlet, the measure contains clauses that weaken the status of China's other official languages in favour of Mandarin. It thus targets the 55 official minorities, who make up about 9% of China's 1.4 billion inhabitants.
Language as the main target
In some regions such as Tibet and Inner Mongolia, home to large ethnic minority groups, government policies had already mandated Mandarin as the language of instruction. Yalkun Uluyol, a China researcher at the NGO Human Rights Watch, described the new law to AFP as a 'radical shift' from a policy of the era of former leader Deng Xiaoping, which guaranteed minorities the right to use their own languages. Educational institutions must now use Mandarin as the main language of instruction, and teenagers will be required to have 'basic proficiency' in Mandarin by the end of compulsory schooling.
Tensions over language had flared well before the law's adoption. In 2020, in Inner Mongolia, the abrupt withdrawal of Mongolian-language textbooks triggered rare but powerful protests. Some parents even kept their children at home in protest, viewing the measure as a direct threat to their cultural identity. The crackdown was immediate and massive, followed by re-education campaigns. Students in the region can now study Mongolian for only one hour a day, as a mere foreign language, according to the Associated Press.
The law also provides for sanctions against parents or guardians in China who pass on to their children ideas deemed contrary to 'ethnic harmony'. The text also establishes an unprecedented legal basis for prosecuting individuals or organisations based outside China whose actions harm 'ethnic unity', a mechanism of particular concern to the Uyghur, Tibetan and Mongolian communities in exile, often among those most criticised by the regime.
China, where the Han are by far the majority ethnic group, recognises 55 minorities within its borders, encompassing several hundred languages and dialects. The Chinese government has for decades been accused of pursuing policies to forcibly assimilate these minorities into the Han majority." Joséphine Guilhem de Pothuau, 13 March 2026 https://www.lefigaro.fr/international/la-chine-adopte-une-loi-qui-promeut-le-mandarin-comme-langue-commune-nationale-20260313 #metaglossia #metaglossia_mundus
"English PEN’s flagship translation grant programme, PEN Translates, announced its latest round of winners, awarding grants to 18 titles from 14 publishers across 12 languages and 16 regions. Three of those titles come from African writers — from Egypt, Sudan, and Mauritius — and one of them makes history as the first Mauritian title ever to receive a PEN Translates award.
The Egyptian title is The Field by Hamdi Abu Golayyel, translated from the Arabic by Robin Moger and published by Saqi Books. Abu Golayyel — who passed away in June 2023 — was one of Egypt’s most distinctive literary voices, born in Fayoum and widely described as a chronicler of the lives of Egypt’s marginalised and working class. Three of his novels have previously been translated into English: Thieves in Retirement (tr. Marilyn Booth, 2006), A Dog with No Tail (tr. Robin Moger, 2009), which won the Naguib Mahfouz Medal for Literature in 2008, and The Men Who Swallowed the Sun (tr. Humphrey Davies, 2022), whose translator was joint winner of the 2022 Saif Ghobash Banipal Prize for Arabic Literary Translation. The Field will be a welcome return of his work to English-language readers.
The Sudanese title is Under the Neem Tree by Rania Mamoun, a Sudanese activist and bestselling writer of poetry, fiction, and nonfiction, translated from the Arabic by Elisabeth Jaquette and published by Comma Press. Jaquette previously translated Mamoun’s Thirteen Months of Sunrise (Comma Press, 2019), which was shortlisted for the 2020 Warwick Prize for Women in Translation and was itself a PEN Translates award winner, making this a continuation of a celebrated translating partnership.
The most historic of the three is The Rasta’s Song by Sharon Paul from Mauritius, translated from the French and Mauritian Creole by Nadiyah Abdullatif and published by Balestier Press. This is the first time a title from Mauritius has ever received a PEN Translates award, a milestone that reflects both the programme’s expanding geographic reach and the growing recognition that Francophone and Creole-language African literatures deserve a place in the global translation conversation. The inclusion of Mauritian Creole as a source language is itself significant: it joins Slovak as one of two languages appearing in the PEN Translates portfolio for the first time in this round.
PEN Translates has now supported over 400 books translated from over 90 languages, awarding over £1.2m in grants since its inception. Books are selected on the basis of outstanding literary quality, the strength of the publishing project, and their contribution to UK bibliodiversity. The programme’s Translation Advisory Co-chair Nichola Smalley described this round as giving “hope for the future of UK translation publishing”. For African literature specifically, three grants in a single round, including a historic first, is a result worth celebrating!" by Blessing Uwisike February 26, 2026 https://share.google/B1Gq1t8fiiOoFaf2z #metaglossia #metaglossia_mundus
Creators can now upload language-specific thumbnails, enabling viewers to see previews in their preferred language and improving discoverability globally
"YouTube has introduced a new feature that allows creators to upload translated thumbnails for their videos, a move aimed at helping content reach audiences across different languages more effectively.
The update enables creators to add multiple thumbnail versions for a single video in different languages. When viewers browse the platform, the thumbnail displayed will automatically match their language preferences, allowing them to see a preview image that feels more familiar and relevant.
For instance, a viewer whose interface language is Hindi may see a Hindi-language thumbnail, while someone browsing in Spanish could see a Spanish version of the same video’s thumbnail. Despite the different preview images, both viewers would still be watching the same underlying video.
The feature is designed to complement YouTube’s existing multi-language audio capabilities, which allow creators to upload alternative audio tracks in different languages for the same video. By adding translated thumbnails to the mix, the platform is extending localisation beyond audio to the visual entry point of a video.
Creators can add these translated thumbnails through YouTube Studio, where they can upload different thumbnail images mapped to specific languages. Once added, YouTube automatically determines which version to display based on the viewer’s language settings.
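The selection logic described above amounts to a lookup from the viewer's language setting to a thumbnail, with the video's default thumbnail as a fallback. A minimal illustrative sketch (hypothetical function and data; this is not YouTube's actual API or implementation):

```python
# Hypothetical sketch: thumbnails keyed by language code, resolved
# against the viewer's language preference with a default fallback.
def pick_thumbnail(thumbnails_by_lang, viewer_lang, default_thumbnail):
    # Try an exact match first (e.g. "pt-BR"), then the base
    # language ("pt"), then fall back to the default thumbnail.
    if viewer_lang in thumbnails_by_lang:
        return thumbnails_by_lang[viewer_lang]
    base = viewer_lang.split("-")[0]
    if base in thumbnails_by_lang:
        return thumbnails_by_lang[base]
    return default_thumbnail

thumbs = {"hi": "thumb_hi.png", "es": "thumb_es.png"}
print(pick_thumbnail(thumbs, "es-MX", "thumb_default.png"))  # thumb_es.png
print(pick_thumbnail(thumbs, "fr", "thumb_default.png"))     # thumb_default.png
```

Either way the underlying video is the same; only the preview image changes per viewer.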
The company says the feature is intended to help creators improve discoverability and engagement among global audiences, particularly for channels that publish content aimed at viewers in multiple regions."
https://www.buzzincontent.com/news/youtube-rolls-out-translated-thumbnails-to-help-creators-reach-multilingual-audiences-11205414
#metaglossia #metaglossia_mundus
"Poet, translator and New York University English and Spanish and Portuguese Prof. Urayoán Noel shared his work and discussed ideas in a Thursday poetry reading and Q&A event with Northwestern community members as part of the English department’s Unsettling Sound series.
Noel is the author of several books in English and Spanish. He performed for about 15 attendees in University Hall.
He started by reading poems from his 2021 collection, “Transversal,” performing with voice and volume changes and reciting both the English and Spanish versions. Throughout the event, he performed poetry with instrumental music in the background.
“Now poetry is just a name for this, our faint embodied sound, for music once it’s not around, for ash in lockstep with the flame, for streets still summoning the same old shadows,” Noel said in one poem named “Juliécimas.”
Noel then transitioned into his ongoing series, “Wokitokiteki,” which he described to the audience as a “walking poetic improvisation project.” He said he creates the content while walking through neighborhoods in Puerto Rico and those in U.S. states with significant Puerto Rican populations.
In honor of his visit to the Chicagoland area, he delivered one piece inspired by Humboldt Park, a historically Puerto Rican cultural hub in Chicago, specifically referencing his observations from the walk.
As a translator, Noel has repeatedly translated works from Garifuna and Guatemalan poet Wingston González. Noel recited poems from an unpublished translation of González’s 2015 book, “Translaciones.”
When explaining his relationship with González, he said they share an affinity for performance and improvisation, despite their cultural differences.
Citing inspiration from “The Traffic in Meaning: Translation, Contagion, Infiltration” by Mary Louise Pratt, he talked about how translation is less about producing equivalences and more about understanding and representing the experiences of others.
Later, Noel read from his 2025 autobiographical prose work, “Cuaderno de Isabela/Isabela Notebook,” and handed out copies to attendees.
“Tell me if there’s a city like the one with the horse staring at the sea in front of windows with iron bars and flanked by piles of car tires…” Noel read from one poem, “Pueblo,” whose title translates as “Town.”
Noel then transitioned into a Q&A with the audience. He discussed the Wokitokiteki project and the concept of improvisation. He also compared product versus process.
Noel also talked about his philosophy on teaching poetry and writing to students. He said emphasizing the process of writing poetry is essential, as the product is “tied to racial capitalist ideas” of generating something to sell.
“We can always do things to become better writers, but I can’t tell you what you need to write,” Noel said. “What I can share with you is the process. How did my process get me from A to B?”
NU Spanish and Portuguese Prof. Emily Maguire, who went to graduate school with Noel at NYU, said she believes he is an impressive performer.
She said he is one of the most proficient bilingual people she has ever met.
“He has a tremendous facility in both Spanish and English, but he is also someone who has a tremendous gift for performing live and a real ability to capture an audience and move and entertain in surprising and creative ways,” she said.
Spanish and Portuguese Prof. Julia Oliver Rajan, who is Puerto Rican, said though she was initially unfamiliar with Noel, she enjoyed his performance.
“It resonated with me the vibrancy of his poetry,” Rajan said. “The way he described Puerto Rico, the struggles of Puerto Rico — I liked those things in his poetry.”
In the Q&A, Noel spoke about what it is like to translate works from poets who are from a different culture or who are dead, both of which he has done in his career.
He said to be a translator it was crucial to embrace these discrepancies, calling translation the “least messed-up kind of appropriation.”
“You’re not going to do away with the fundamental tension of ‘Oh, this person is dead, and I’m here telling their story,’ especially if they’re from community X, and I’m from community Y,” Noel said. “But to me, that shouldn’t dissuade us, because there’s way more work that needs to be translated than there are translators.”
Email: r.ahmed@u.northwestern.edu"
Taariq Ahmed, Assistant Campus Editor
March 13, 2026 https://dailynorthwestern.com/2026/03/13/campus/poet-translator-and-professor-urayoan-noel-shares-work-in-reading-qa-event/ #metaglossia #metaglossia_mundus
"Zigbang (CEO: Ahn Seong-woo), a comprehensive property-technology company, announced on December 13 that it has added AI-based real-time speech recognition and multilingual translation to its virtual office platform, Soma. The features are designed to help international teams collaborate across language barriers.
Soma is a metaverse-based virtual office platform developed by Zigbang. It recreates the spatial environment of a physical office online, encouraging natural exchanges and collaboration even in remote or hybrid work settings. With this update, users can see spoken content in the virtual space as real-time text and translate what their interlocutor says into the language of their choice. Soma currently supports more than 50 languages and 145 locales. The generated text can be saved locally and used, for example, to draft meeting minutes.
The feature incorporates context-aware translation technology that takes the preceding conversation into account. By going beyond simple word-for-word substitution to produce translations that preserve the flow of the dialogue, the aim is to minimise the miscommunication that can arise in multinational meetings.
Conversation data is designed to be processed between participants rather than stored on a server. Speech recognition and translation use a dual-stream architecture that simultaneously ensures stable, real-time message delivery, making it suitable for enterprise environments that demand strong security.
With this update, Zigbang plans to evolve Soma into a platform that helps improve processes by learning organisational decision-making flows and work contexts, going beyond simply recording meetings and extracting tasks. Over the longer term, the company is pursuing research to create a "digital me" environment in which work can continue even in the user's absence, through AI agents that learn an individual's professional language and judgement.
A Zigbang representative explained: "Introducing this AI-based multilingual feature is a first step toward lowering language barriers," adding: "We plan to expand the features progressively so that Soma can evolve beyond a simple virtual space into a platform that improves how organisations collaborate."
With the recent spread of remote and hybrid work, AI-based real-time translation and collaboration tools are attracting growing interest as key solutions for improving productivity and efficiency in global enterprise environments.
Source: Zigbang adds AI-based real-time multilingual speech recognition and translation to Virtual Office Soma - VentureSquare https://www.venturesquare.net/fr/1049186/"
https://www.venturesquare.net/fr/1049186/
#metaglossia #metaglossia_mundus
"… In Locronan (29), the Breton translation of signs installed as part of a heritage trail has raised the hackles of the commune's Breton speakers.
Jean-Marc Louboutin is part of the collective demanding the removal of the signs on a heritage trail installed in Locronan. The reason? The "catastrophic" translation of the texts into the Breton language. (Photo: Aude Flambard)
"This is truly an example of misuse of artificial intelligence," says Anne Gouerou. She belongs to a collective of Breton-speaking residents of Locronan (29) that formed after the discovery of signs installed in early March as part of a heritage trail retracing the history of cinema in the small Finistère commune..." By Paul Bohec with Aude Flambard, 13 March 2026 at 4:50 p.m. https://www.letelegramme.fr/finistere/locronan-29180/un-massacre-de-la-langue-a-locronan-la-traduction-bretonne-de-ces-panneaux-fait-bondir-des-habitants-7003989.php #metaglossia #metaglossia_mundus
"Many of the immigrants detained at Northwest State Correctional Facility in Swanton have the same question for the volunteer attorneys who’ve visited to provide in-person counsel.
“One of the questions that we got asked the most often was, ‘Where am I? What state am I in?’” said Emma Matters, an immigration attorney with the Vermont Asylum Assistance Project. “Even that very, very basic information that you assume someone has access to, people go without if they don’t have someone coming in and conversing with them in their language and explaining to them just what is going on.”
Matters says the experience underscores the disadvantage that immigrants who don’t speak English face when they’re detained in facilities that can’t communicate in a language they understand. And she says prohibitions on language-access devices at the Vermont Department of Corrections have in some cases prevented attorneys from providing the basic legal services that immigrants need to fight their cases.
“Without someone who’s able to provide them with that information, let them know what’s being put in front of them or what might be put in front of them, people end up being vulnerable to life-changing harm,” Matters said.
The number of people arrested and detained by Immigration and Customs Enforcement is up tenfold in New England since the start of President Donald Trump’s second term in office. Some of them have ended up in two prisons operated by the Department of Corrections, which contracts with the Department of Homeland Security to provide temporary lodging for immigrant detainees.
Local immigration attorneys almost universally support the state’s decision to lodge detained immigrants, at Northwest, for men, and Chittenden Regional Correctional Facility, in South Burlington, for women.
“We need these beds,” Jill Martin Diaz, executive director of the Vermont Asylum Assistance Project, told lawmakers in January. “Because there is absolutely no substitute to me getting in my car and driving up the road ... flashing my attorney credential and being able to meet with my client face-to-face.”
Mae Nagusky / Vermont Public File
Chittenden Regional Correctional Facility is Vermont's only women's prison and one of two facilities that routinely houses immigrant detainees. Attorneys are raising concerns about what they say is a lack of language translation services available as they meet with clients.
But a Department of Corrections policy that prohibits attorneys from using their own translation services in state facilities has hindered their ability to help, attorneys say.
“DOC policies and deficiencies are preventing low bono and volunteer attorneys from being able to speak with their clients who are in detention and is thereby depriving them of access to their due process rights,” said Hillary Rich, a staff attorney at the Vermont ACLU who spent two years practicing asylum law in Laredo and San Antonio, Texas.
No outside devices
Matters said the Vermont Asylum Assistance Project has been making regular trips to the state prisons since last year to meet with newly detained immigrants. She said the organization explains their rights, advises them of potential claims, and provides referrals to lawyers who might provide representation.
VAAP attorneys had previously been allowed to bring in their own “tools of interpretation,” including laptops or cell phones on which they could call out to access live translation services.
“It’s very hard to know in advance what type of language capabilities we’re going to need on that day,” Matters said. “We see people detained who speak a wide variety of languages, including rare and Indigenous languages.”
But in October, officials at VAAP say, the department told them they could no longer bring those devices into the facilities. The single DOC landline that attorneys now have access to drops calls frequently, Matters said. And she said it bottlenecks a process that previously allowed multiple attorneys to work several cases simultaneously. The process became so inefficient that VAAP has cut the number of trips it makes to state prisons in half.
“The numerical reality of that is that … between tens and hundreds of people who would otherwise have access to legal screenings, basic know your rights, and case advice and potential referral out to legal services, go without,” she said.
Peter Hirschfeld / Vermont Public File
Elected officials and nonprofit leaders gathered in the Statehouse in May 2025 to announce the launch of the Vermont Immigration Legal Defense Fund. Jill Martin Diaz, at the podium, with Vermont Asylum Assistance Project, said the money would be used to train and hire legal professionals to provide pro bono assistance to noncitizens facing immigration proceedings.
Corrections Commissioner Jon Murad said in an interview Wednesday that the department has a longstanding policy that prohibits people from bringing “anything with cellular capacity” into a state prison.
“What if it’s misplaced? What if it disappears? What if it is then transferred over to the direct control of people in our custody?” said Murad, who joined the department in August. “That is a risk, and one that we don’t want to countenance.”
VAAP’s ability to bring in its own devices up until October, according to Murad, might have been related to a lapse in policy enforcement.
'Set up to fail'
The commissioner said the department has since taken steps to lower the language barrier, by providing attorneys with DOC-owned devices that have translation capabilities.
Murad said DOC had six such devices at Northwest and three at Chittenden Regional. Matters said VAAP attorneys who visited DOC facilities as recently as March 6 have not been told about the new devices.
“That was brand new information to me and to all of my colleagues,” Matters said Wednesday.
The DOC devices don’t have cellular capacity – a shortcoming Matters said would likely render them useless to VAAP attorneys.
“We require live interpretation services. We need to be speaking to a human,” Matters said.
Murad said the department is working on a plan that would give lawyers the ability to make calls to translation services on DOC-owned devices, though he said he doesn’t have a timeline for that yet. He said the department has undertaken other efforts to facilitate access to counsel for detained immigrants – it sends VAAP a daily list of names of new arrivals at facilities, so the organization is aware of individuals who might need assistance.
Rich, of the ACLU, said a DOC policy she obtained in February through a public records request shows that immigrant detainees are responsible for coordinating their own remote hearings.
“Which for a limited English proficient detainee who does not have counsel and doesn’t even know what state they’re in is going to prove impossible,” she said. “These folks are being set up to fail in their immigration court systems by the deficiencies in DOC procedures.”
Rich said Northwest and Chittenden Regional are subject to public accommodations laws that include language-access requirements. She said the Department of Corrections might be violating those laws.
“Lawsuits are just one tool in our toolbox,” Rich said, “but of course it is a tool we are very comfortable wielding when necessary.”
Peter Hirschfeld https://www.vermontpublic.org/local-news/2026-03-12/lawyers-raise-alarm-about-language-translation-services-for-vermonts-detained-immigrants #metaglossia #metaglossia_mundus
"Theorizing “Global Criticality” and the Politics of Just Translation
07 May 2026 18:00 to 19:30
Bush House, Strand Campus, London
Professor Emily Apter is giving the keynote lecture at the annual conference of the Department of Interdisciplinary Humanities, King's College London. This lecture is open to the public.
"Translation and justice, the focus of my book What is Just Translation? Changing Languages in the Political Present, engages Gayatri Chakravorty Spivak’s notion of “global criticality” as a rubric for a vision of language politics that straddles the fields of law, global language policy, non-monolingual pedagogies and reparations applied to forms of linguistic injustice and cultural appropriationism. I associate “global criticality” with translational workarounds - ways of working micropolitically with language and intermedial forms of expression. These microforms stand in contradistinction to one-size-fits-all paradigms or “isms” that are anchored in colonial Euro-chronology and beholden to reductive bipolarities between major and minor, metropole and periphery, written and performative. As a micropolitics of language, “global criticality” flows into Spivak’s notion of “living translation:” a triple play on living with translation, living life in translation, and “live” translation, which vivifies life itself."
About the speaker
Professor Emily Apter is Julius Silver Professor of Comparative Literature and French Literature, Thought and Culture at New York University. Her books include: Unexceptional Politics: On Obstruction, Impasse and the Impolitic (Verso, 2018); Against World Literature: On the Politics of Untranslatability (2013); Dictionary of Untranslatables: A Philosophical Lexicon (co-edited with Barbara Cassin, Jacques Lezra and Michael Wood) (2014); and The Translation Zone: A New Comparative Literature (2006). Since 2000 she has edited the book series Translation/Transnation with Princeton University Press. Essays have appeared in New Literary History, October, Public Culture, Crisis and Critique, History and Theory, Diacritics, PMLA, Comparative Literature, Critique, Les Temps qui Restent, Representations, Art Journal, Third Text, Paragraph, boundary 2, Artforum, Esprit Créateur and Critical Inquiry. In 2019 she was the Daimler Fellow at the American Academy in Berlin. In 2017–18 she served as President of the American Comparative Literature Association. In fall 2014 she was a Humanities Council Fellow at Princeton University and in 2003–2004 she was a Guggenheim Fellowship recipient. In 2022 she co-edited and introduced Gayatri Chakravorty Spivak’s Living Translation, a collection of Spivak’s contributions to translation theory. Her book What is Just Translation? Changing Languages in the Political Present is nearing completion. Her next book project, on the conceptualization of the unborn (or “prepersons”) is provisionally titled Conception: The Laws."
https://www.kcl.ac.uk/events/theorizing-global-criticality-and-the-politics-of-just-translation
#metaglossia
#metaglossia_mundus
"Vozo AI, an AI-powered video localization platform, today announced the beta launch of Visual Translate, a generative AI capability that automatically localizes on‑screen text while maintaining the original design, layout and animation. This release addresses a long-standing gap in AI video translation: while subtitles and dubbing translate what viewers hear, most tools still fail to translate the text viewers see within the video itself.
Vozo Visual Translate localizes on-screen text in videos.
In many videos—such as training materials, product demos, and explainer content—key information appears directly within visuals, including slide text, labels, callouts, diagrams, and charts. When that content remains in the original language, international viewers may understand the narration but still miss critical context.
Visual Translate closes this gap by automatically:
• Working directly from the video itself—no original project files required
• Detecting and translating on-screen text within videos
• Preserving the original layout, style, and animations
• Allowing text, fonts, colors, and positions to be edited and customized
The result is a fully localized video where both narration and visuals are translated coherently, giving international audiences the same clarity as native viewers.
During the alpha phase, a multinational manufacturing company used Visual Translate to localize slide-based training videos for global teams and distributor networks. By translating visual content directly within the video into nine languages, rather than manually editing, the company reduced localization time by over 96%—turning a two-day process into just 30 minutes.
By automating what was once a highly manual process, Visual Translate marks a shift in AI video translation—moving beyond basic dubbing and subtitles toward truly complete, scalable localization that preserves how meaning is conveyed visually. The capability is particularly valuable for education, corporate training, and marketing, where critical information often appears in step-by-step instructions, labels, and other visual elements rather than narration alone.
“Most video translation tools focus on speech,” said Dr. CY Zhou, Founder and CEO of Vozo AI. “But in many videos, meaning is conveyed visually—through slides, diagrams, and on-screen text. Visual Translate fills that missing layer, enabling truly complete video localization and allowing ideas and knowledge to move across languages with far greater clarity and impact.”
Visual Translate is currently available in beta.
About Vozo AI
Vozo AI is an AI-powered video localization platform that enables teams and enterprises to scale video content across languages and markets. By translating both spoken audio and visual content, Vozo ensures that meaning is preserved across the entire video experience, delivering truly native viewing for global audiences. For more information, visit www.vozo.ai.
https://lasvegassun.com/news/2026/mar/12/beyond-dubbing-vozo-ai-launches-visual-translate-f/ #metaglossia #metaglossia_mundus
"The European Commission’s Directorate-General for Translation (DG Translation) has invited students from the European Master’s in Translation (EMT) network to take part in a project assessing how well AI language models work in EU languages.
Students on EMT programmes — a network of university master’s courses in translation recognised by the EU — will be able to contribute to work aimed at improving the evaluation of AI models across different European languages, according to a statement published by the European Commission on Wednesday.
The project will involve examining how AI models perform and how that performance is measured, with a focus on making the tools better suited to EU languages.
Focus on evaluation of AI for EU languages The Commission said the work brings together language professionals and AI engineers, and the project will give students insight into how linguistic skills and AI development can be combined.
It added that participants will also be able to explore potential career paths linked to language technology as part of the project." Thursday 12 March 2026 By The Brussels Times Newsroom https://www.brusselstimes.com/2018989/ai-translation-tools-under-scrutiny-in-new-eu-backed-student-project #metaglossia #metaglossia_mundus
"The poet and writer Coleman Barks died last month at the age of 88. He was well known for his translations of the works of the 13th-century Persian mystic poet Jalaluddin Rumi. Coleman Barks even appears on a Coldplay album, "A Head Full of Dreams," reading a translation of Rumi’s “The Guest House.”
Here & Now's Lisa Mullins talks to Coleman Barks's sister, Elizabeth Barks Cox, who is also a writer, about his life and work.
This segment aired on March 12, 2026." https://www.wbur.org/hereandnow/2026/03/12/coleman-barks-obituary #metaglossia #metaglossia_mundus
Not so fast: A University of Houston professor of psychology is disputing a high-profile study claiming that people who live in multilingual countries show healthier brain aging, arguing instead that those countries are wealthy ones whose superior healthcare systems explain their longer life expectancies.
"University of Houston professor of psychology Arturo Hernandez is disputing a high-profile study published in the journal Nature Aging claiming that people who live in multilingual countries show healthier brain aging. Though the study got lots of attention, Hernandez reports in the journal Brain and Language that the findings warrant cautious interpretation and reframing of public health implications.
“We took a closer look and argued that the study’s conclusions go further than the data can support,” said Hernandez.
According to Hernandez, the countries with high multilingualism in Europe also happen to be the wealthiest, with the best healthcare systems and the longest life expectancies, sometimes by as much as six years. When those structural differences are accounted for, the apparent language effect largely disappears.
“There is a real temptation in science to find individual behavioral solutions: learn a language, do a puzzle, take a supplement - are all suggested as solutions to problems that are fundamentally structural,” said Hernandez. “When those solutions get oversold, it can erode public trust in science and distract from the harder work of building the conditions that actually support healthy aging: Access to healthcare, good nutrition, economic stability. We wanted to make sure the public gets an accurate picture of what the evidence shows.”
In the original article, researchers examined records in 27 European countries and claimed that multilingualism protects against accelerated aging whereas monolingualism increased risk of accelerated aging.
Countries with high multilingualism, like Luxembourg (82.5 years) and the Netherlands (82.5 years), have some of the highest life expectancies in the world. Meanwhile, countries with low multilingualism, such as Bulgaria (75.8 years) and Romania (76.3 years), lag nearly six or seven years behind.
“A six-year gap in life expectancy is unlikely to be explained by language. World-class healthcare, superior early-childhood nutrition, higher occupational safety, and lower chronic stress offer a more parsimonious account—the same structural forces that produce longevity in general,” said Hernandez, who points to Japan as another example.
As a largely monolingual society, it boasts an exceptional life expectancy of 84.5 years. “Low inequality, a healthy diet, and a robust universal healthcare system account for that advantage far better than language ever could,” said Hernandez.
“As scientists, we do a disservice to the public when we promote individual behavioral hacks as substitutes for structural resources. Learning a language is a beautiful, culturally enriching endeavor. It connects us to others and expands our world. But we must be careful not to overpromise it as a clinical intervention for aging,” Hernandez said.
Journal: Brain and Language
Article Title: Multilingualism and aging: Country-level patterns may not support individual-level causal claims
Article Publication Date: 9-Mar-2026"
https://www.eurekalert.org/news-releases/1119284
#metaglossia_mundus
#metaglossia
2026 Graduation Ceremony - Faculty of Translation and Interpreting - UNIGE
Université de Genève
The graduation ceremony will take place on Friday 27 November 2026 at 6 pm, in the Uni-Mail hall.
🎉 Open by invitation only, this festive event for the 2025–2026 graduates will be an occasion to celebrate their outstanding achievement and to share memorable moments from their studies.
The ceremony will be livestreamed on the day.
10 March 2026
https://www.unige.ch/fti/a-la-une/ceremonie-de-remise-des-diplomes-2026
#metaglossia_mundus
#metaglossia
#métaglossie
"Published: March 11, 2026 12.16am SAST
Isabel Tello Fons, Universitat de València
https://theconversation.com/la-ia-puede-traducir-palabras-pero-no-voces-narrativas-275284
“–Tú también te enojarías si tuvieras una peluca como la mía —prosiguió el Avispón–. Se meten con uno, y uno, que no le gusta que le tomen la ‘peluca’, pues se enfada… ¡natural! Y entonces es cuando me entra la murria, me arrebujo debajo de un árbol y me quedo tieso de frío. Y, para aliviarme, cojo un pañuelo amarillo y me lo ato alrededor de la cara… ¡Oséase, como ahora! ¡Natural!”.
This is how Ramón Buckley translated the voice of the Wasp in Lewis Carroll’s novel Through the Looking-Glass, and What Alice Found There (A través del Espejo y lo que Alicia encontró allí). The original recreates London’s cockney dialect, closely tied to the working class, which Buckley transformed into a castizo Madrid dialect, preserving the whiny, coarse tone of Carroll’s character:
“You’d be cross too, if you’d a wig like mine,” the Wasp went on. “They jokes, at one. And they worrits one. And then I gets cross. And I gets cold. And I gets under a tree. And I gets a yellow handkerchief. And I ties up my face –as at the present”.
When we read a translated novel, we do not just follow a story: we hear voices. Voices that reveal who the characters are, where they come from, and what place they occupy in their community. But what happens to those voices when they pass from one language to another? How are the dialects, accents, rhythms, and registers that form part of characters’ deep identity translated? Addressing these questions is one of the most complex and least visible challenges in literature.
Voices that matter
Characters’ way of “speaking”, what we call linguistic variation, encompasses features such as local vocabulary, slang, expressions specific to a community, forms of an older stage of the language, or particular ways of constructing sentences. These features are not ornaments; they are characterization devices that serve important narrative and stylistic functions.
A local dialect may carry a claim to identity; a rural accent may convey humor, tenderness, or hierarchy; youth slang may signal closeness or belonging to a group; and historical speech places the reader in another era. If these voices disappear in translation, the character becomes flatter and the story loses part of its original fabric.
For example, in The Adventures of Huckleberry Finn, Mark Twain differentiated his characters through seven distinct dialects, and in Oliver Twist, Dickens used the argot of thieves and ruffians to portray the speech of the London underworld.
No direct equivalents
One of the greatest challenges of literary translation is that dialects are not interchangeable. There is no Spanish “equivalent” of the English of the American South, nor a dialect here that corresponds exactly to Liverpool’s. Each linguistic variety is anchored in its territory, history, and social context.
That is why a literal translation of a foreign dialect would sound strange or even comical. If we replaced an English dialect with a real Spanish one, we would turn Huckleberry into an Andalusian, Canarian, or Mexican boy and manipulate his original identity. Yet if that way of speaking is ignored and rendered in the standard language, his linguistic personality is lost.
Literary translation seeks equivalent effects: the reader should perceive the same social and emotional nuance as someone reading the original, even if different resources are used to achieve it.
The most human kind of translation
The literary translator’s task is not mechanical; it is an exercise in listening and interpretation. The translator asks questions such as: What effect does this voice produce on the reader of the original? Which linguistic features can achieve that effect in the translation? To what extent should a variety be marked at all?
The best solution may not be to aim at a specific dialect, but to use a register slightly removed from the standard language, hinting at a social origin without culturally displacing the character. At other times, a lexical feature or a grammatical structure may be enough to recreate the atmosphere.
Every decision requires judgment and responsibility. Literature represents real social groups, and treating them with respect demands an ethical outlook.
As I have found in my forthcoming research, that ethical outlook is something AI, for now, does not possess. AI does not “understand” the social implications of a character’s way of speaking. It does not know when a dialect conveys marginalization or when it marks social hierarchy. It works by detecting statistical patterns, not human intentions.
When it is asked to translate non-standard voices, there are usually two outcomes. Either the translated text comes out “clean”, and a character who spoke with a local accent ends up speaking normatively, diluting their personality; or the AI imitates dialectal markers but mixes incompatible slangs or deforms words without criteria, creating unwanted stereotypes, that is, caricatures.
Compared with the reflection and meticulousness that translating linguistic variation demands, then, AI generates quick responses that do not yet have the sensitivity to handle ambiguity, irony, or cultural allusion.
Why we need decisions
Tools like AI can be very useful in the preliminary and complementary phases of translation, because they make it possible to locate information quickly, compare real usage in large corpora, and identify patterns of style. However, if they tend to level out voices, they will also level out experiences. Used without oversight, we will lose linguistic diversity and, with it, human diversity.
Linguistic varieties are not merely deviations from the standard: they are often minority or minoritized languages, vulnerable or at risk. Protecting them helps preserve our cultural heritage and a valuable plurality.
For voices to reach the reader without losing their identity, someone has to listen to them and recreate them. That is an essentially human task. That is why, every time a literary translation lets us hear a different world, we are also saving a part of our cultural diversity.
A version of this article was published in the magazine Telos, of Fundación Telefónica."
https://theconversation.com/la-ia-puede-traducir-palabras-pero-no-voces-narrativas-275284
#Metaglossia
#metaglossia_mundus
#métaglossie
"In late 2025, generative AI crossed another critical threshold. Following GPT-5.1 in November, OpenAI released GPT-5.2 on 11 December — a model designed to generate adaptive, discipline-specific academic prose with fewer stylistic traces and greater structural variation. For universities, the concern was immediate: if AI can write fluently, unpredictably, and in discipline-appropriate academic language, does detectability still hold?
Early results show that it does.
How StrikePlagiarism responds to GPT-5.2
The release of GPT-5.2 reinforced a broader challenge facing higher education: AI development now outpaces institutional policy cycles. For StrikePlagiarism, this moment required immediate empirical validation rather than theoretical assumptions.
Within days of GPT-5.2 entering academic use, StrikePlagiarism.com was tested against newly generated and paraphrased GPT-5.2 texts under realistic academic conditions. The results were unambiguous:
• Over 97% detection accuracy across GPT-5.2 outputs
• False results below 1%, preserving academic fairness
• Consistent performance after paraphrasing and stylistic diversification
Rather than relying on surface-level markers, StrikePlagiarism.com analysed behavioural consistency across longer academic texts — identifying patterns that remain statistically improbable in authentic student work. Reports delivered probability-based, side-by-side comparisons, providing educators with interpretable evidence rather than automated verdicts.
Why GPT-5.2 remains detectable
GPT-5.2 demonstrates strong control over academic conventions and avoids obvious repetition. However, analysis across extended submissions consistently revealed:
• non-random reasoning structures,
• unusually uniform transitions between claims,
• absence of natural cognitive drift.
Individually, these signals are subtle. Taken together, they form a measurable behavioural profile. Detection no longer depends on awkward phrasing or stylistic errors, but on identifying improbably stable reasoning across complex texts. Fluency improves — invisibility does not.
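The article does not disclose StrikePlagiarism.com's actual model, so as a purely illustrative sketch of the general idea, one crude proxy for "absence of natural cognitive drift" is how uniform a text's sentence rhythm is. The toy functions below (hypothetical names, not any vendor's API) score a passage by the coefficient of variation of its sentence lengths; unusually even rhythm yields a score near zero.

```python
import re
import statistics

def sentence_lengths(text):
    # Crude sentence splitter, for illustration only.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    # Coefficient of variation of sentence length: human prose tends to
    # drift and vary, so a very uniform rhythm scores near zero.
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0
```

For example, `burstiness("The cat sat here. The dog ran fast. The bird flew high.")` returns 0.0, while more varied prose scores higher. Production detectors presumably combine many such behavioural signals with calibrated statistics rather than any single heuristic like this one.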
Core advantages of StrikePlagiarism.com’s AI detection approach
StrikePlagiarism.com was designed to support institutions operating at scale, across disciplines and languages:
• Multilingual AI-content detection at scale: AI-generated content is detected across 100+ languages, enabling consistent integrity standards in international and multilingual academic environments.
• Proven accuracy against advanced generative models: Detection accuracy exceeds 97%, including paraphrased and stylistically diversified GPT-5.2 texts — demonstrating reliability under real academic conditions.
• Ultra-low false-positive rates: False results remain below 1%, protecting students from incorrect attribution and ensuring that detection strength never compromises fairness.
Why AI detection is critical right now
GPT-5.2 makes one reality clear: the primary risk for universities is no longer obvious AI misuse, but large volumes of academically convincing AI-generated work entering assessment unnoticed. This is not a future concern — it is a present operational challenge.
StrikePlagiarism addresses this challenge at an institutional level. By combining high-accuracy AI behaviour analysis with transparent, probability-based reporting, StrikePlagiarism.com enables universities to respond now, not retrospectively. When academic decisions must be defensible at the moment they are made, evidence-based AI detection becomes essential infrastructure rather than an optional safeguard." 97% accuracy against GPT-5.2: inside StrikePlagiarism.com’s detection results | THE Campus Learn, Share, Connect https://share.google/hA12nxsAaMGdPGqDX #Metaglossia #metaglossia_mundus #métaglossie
"Since the start of this year’s Amar Ekushey Book Fair, readers and publishers have noticed a rise in Bengali translations of world literature classics, with the growing popularity of these works clearly reflected in readers’ enthusiastic response.
Readers who feel less comfortable studying English texts but are keen to enjoy literary works from diverse cultures and languages, transcending national boundaries, are searching for and purchasing translated works, said the publishers. According to them, they have published more Bengali translations of classics from other languages in response to readers' demand, but translation of Bangla literary classics into foreign languages has not increased as expected.
Baatighar always brings out a good number of translations. Publisher of Baatighar, Dipankar Das, said, “There is always a demand for translated literature. It will increase further. One may not read English comfortably, but has a penchant for world literature. Translation helps them get the taste of world literature.”
There are considerable allegations about the quality of many newly published translations. With the increasing popularity of translated books, the number of substandard translations is also increasing. Responding to this complaint, Dipankar said that if a reader does not understand a translation, then questions can be raised about its quality. Baatighar, however, publishes books by ensuring quality.
Salesperson of Seba Prokashoni, Azizul Hakim, said the sale of translated books has been going well for the last several years. “Translations and novels are our bestsellers. A big portion of our new books this time is translations. Average sales of these translated copies are getting better every day,” he said.
Small or big, almost all publishing houses are bringing translations of novels, thrillers, detective series, biographies, and theoretical and historical books by world-famous authors into Bangla.
Also, many translated books are published without the permission of the original authors. As a result, the editing of these books is not done properly.
Read More 5,000 tons of diesel to arrive from India today
Translator Mostaq Sharif has translated “After Dark” by Japanese author Haruki Murakami. According to him, translated literature is enjoying a good moment, as demand for such works is gradually increasing. He said, “Good to see that translation works are getting a good response from readers. It indicates the changing taste of readers. But the substandard works of translation are deceiving the bookworms. Publishers and readers need to be careful while selecting and purchasing books.” On the 12th day of the book fair, a discussion programme titled “Shahidullah Kaiser” was held at the main stage of the book fair at 3pm with Syed Azizul Haque Chowdhury in the chair."
https://www.daily-sun.com/metropolis/862418 #Metaglossia #metaglossia_mundus #métaglossie
"Grants and Prizes for Promoting Italian Books and Translations (New Zealand)
UPCOMING GRANTS IN MARCH 2026!
Deadline: 31-Mar-2026
The Ministry of Foreign Affairs and International Cooperation offers prizes and grants to promote Italian language and culture abroad through literary translations, scientific works, and audiovisual productions. Each prize is valued at €5,000, targeting high-quality translations, publications, dubbing, or subtitling of works created or published since January 2025. Eligible applicants include publishers, translators, production companies, and cultural institutions, with applications due by 31 March 2026.
Overview
This initiative aims to strengthen the global dissemination of Italian culture by supporting:
Translation and publication of Italian literary and scientific works into foreign languages
Production, dubbing, and subtitling of Italian short and feature films, as well as television series
Promotion of contemporary Italian literature and audiovisual content
Expansion of cultural exchange and international reach
The program ensures that both literary and audiovisual works maintain high-quality standards and reach wider international audiences.
Prize Details
Maximum number of prizes for 2026: 10
Prize value: €5,000 each
Language distribution:
Spanish: 5 prizes
Arabic: 1 prize
Chinese: 1 prize
French: 1 prize
English: 1 prize
German: 1 prize
Eligible Works
Literary and scientific works (including e-books) translated and published in a foreign language on or after 1 January 2025
Audiovisual productions (short/feature films, TV series) produced, dubbed, or subtitled on or after 1 January 2025"
https://www2.fundsforngos.org/arts-culture/grants-and-prizes-for-promoting-italian-books-and-translations-new-zealand/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Zoom announced on Tuesday, March 10, that it is bringing real-time audio translation to Zoom Meetings, allowing users to understand speakers in different languages during calls. The video communications platform also unveiled a new feature aimed at detecting synthetic audio or video in Zoom Meetings.
The new features coming to Zoom Meetings are among a handful of new AI-powered capabilities coming to Zoom’s enterprise-grade offerings, including Zoom Workplace, Zoom Phone, and Zoom CX.
The live voice translation feature will let Zoom users speak in their native language while others on the call can hear the translated speech in their preferred language in real-time. The feature is currently available in five languages, with support for more languages coming soon.
Zoom first gained widespread recognition during the COVID-19 pandemic, when remote work and virtual meetings became the norm. Since then, the company has looked to become more than a video conferencing platform by launching AI-powered productivity tools and customer support products for enterprises. It has sought to define its competitive edge in the crowded AI industry by taking a federated approach where multiple AI models, including Zoom’s own models and those from OpenAI, Anthropic, and Meta, are dynamically selected to provide cost-effective solutions.
The company’s new on-call deepfake risk detection feature arrives as AI-driven online scams continue to surge. It could play a key role in protecting users from ‘digital arrest’ scams, many of which rely on deceptive video calls to trick victims.
“The next phase of enterprise AI will be defined by the ability to move from conversation to action. Zoom’s agentic AI platform is designed to orchestrate action across systems, turning every meeting, call, and customer interaction into a trigger for workflow automation,” said Velchamy Sankarlingam, president of Product & Engineering at Zoom.
Alongside these Zoom Meeting features, the company also introduced a suite of AI-powered office apps such as AI Docs, Slides, and Sheets that can be used to generate document drafts, spreadsheets with data, or presentations based on meeting transcripts and data from other services.
Zoom further said that AI avatars, announced last year, will start becoming available to users later this month. The feature lets users create photo-realistic, AI-generated avatars of themselves to appear in online meetings on their behalf. Zoom’s AI Companion 3.0, its latest AI assistant on the web unveiled last year, will soon be accessible through a desktop app. The AI assistant is also being integrated across the Zoom Workplace app, Zoom Business Services, and Workvivo, its app for employee communication.
In addition, AI Companion 3.0 can be integrated with third-party platforms such as Slack, ServiceNow, Box, Google Drive, and OneDrive, enabling the AI assistant to synthesise enterprise data across applications and provide insights from multiple data sources.
Amid the rising popularity of AI agents, Zoom is letting users create and deploy custom as well as pre-built AI agents through no-code, natural language prompts. These custom AI agents can act on users’ behalf to automate workflows across third-party systems such as Salesforce, Slack, and ServiceNow, the company said.
For developers, Zoom announced a new suite of enterprise‑grade AI APIs which can be used to build apps that leverage the transcription, translation, summarization, deep reasoning, and image‑processing technologies powering Zoom’s own products.
In Zoom Phone, the company is rolling out agentic workflows that help enterprise clients automatically execute tasks such as drafting emails or sending out summaries. It is also adding new SMS capabilities for the 24/7 virtual receptionist to handle customer engagements via text, answer questions, collect information, support scheduling flows, and escalate to a human when needed." https://indianexpress.com/article/technology/tech-news-technology/zoom-new-live-voice-translation-deepfake-detection-video-calls-10575835/ #Metaglossia #metaglossia_mundus #métaglossie