Scooped by
Charles Tiayon
"A new analysis of genetic studies proposes that the cognitive capacity for language was already present at least 135,000 years ago, with language likely becoming a social tool around 100,000 years ago. The study challenges long-standing debates about the timing of language emergence. The research was conducted by a team led by Shigeru Miyagawa, a linguist at the Massachusetts Institute of Technology (MIT), alongside Rob DeSalle and Ian Tattersall from the American Museum of Natural History (AMNH).

Genetics and human language evolution

Previous attempts to determine the origins of language have relied on fossil records, cultural artifacts, or linguistic reconstruction. This study took a different approach. The team examined genetic evidence to trace the earliest known divergence of human populations, reasoning that all human languages likely share a common origin.

“The logic is very simple. Every population branching across the globe has human language, and all languages are related,” Miyagawa explained. “I think we can say with a fair amount of certainty that the first split occurred about 135,000 years ago, so human language capacity must have been present by then, or before.”

The study systematically reviewed 15 genetic studies conducted over the past 18 years, including Y chromosome analysis (which traces paternal lineage), mitochondrial DNA studies (which track maternal ancestry), and whole-genome studies (which examine broader genetic variation).

Human populations branched out

Together, these genetic studies suggest that human populations began splitting around 135,000 years ago, meaning that before this divergence, Homo sapiens was a single, undivided population. Since every group that branched out maintained the ability to communicate through language, this strongly suggests that language had already developed by this time. A 2017 study attempted a similar genetic approach but had access to fewer datasets.
With more recent genetic research available, the current study provides a more precise estimate for when language capacity was present. “Quantity-wise, we have more studies, and quality-wise, it’s a narrower window [of time],” said Miyagawa, who is also affiliated with the University of São Paulo.

Language as a unique human trait

Miyagawa has long argued that all human languages share fundamental similarities, making it likely that they evolved from a common source. His past research has explored unexpected linguistic connections, such as similarities between English, Japanese, and Bantu languages. Some scholars propose that language capacity dates back millions of years, based on the vocal abilities of primates. However, Miyagawa believes this perspective is flawed. He emphasizes that human language is unique, not just because of vocal ability, but because of its combination of words and grammar, which creates an infinitely generative system of communication.

“Human language is qualitatively different because there are two things – words and syntax – working together to create this very complex system,” he explained. “No other animal has a parallel structure in their communication system. And that gives us the ability to generate very sophisticated thoughts and to communicate them to others.”

From thought to communication

The study also suggests that language did not begin as a social tool but instead may have first developed as an internal cognitive system. “Language is both a cognitive system and a communication system,” Miyagawa said. “My guess is that prior to 135,000 years ago, it did start out as a private cognitive system, but relatively quickly that turned into a communications system.”

Human use of social language

If language was cognitively present before 135,000 years ago, when did it become an active part of human social life? The archaeological record offers clues.
Around 100,000 years ago, early humans began engaging in symbolic activities, such as making meaningful markings on objects and using fire to produce ocher, a decorative red pigment. Such behaviors suggest that humans were using symbols to convey meaning – a crucial aspect of language. These findings reinforce the argument that language was the driving force behind the emergence of modern human behavior. “Behaviors compatible with language and the consistent exercise of symbolic thinking are detectable only in the archaeological record of Homo sapiens,” the authors said.

A catalyst for human advancement

One of the study’s co-authors, Ian Tattersall, has previously proposed that language played a transformative role in human evolution. He argues that once language emerged, it triggered a cascade of innovations, from symbolic art to more complex social structures. “Language was the trigger for modern human behavior. Somehow, it stimulated human thinking and helped create these kinds of behaviors,” Miyagawa notes. “If we are right, people were learning from each other [due to language] and encouraging innovations of the types we saw 100,000 years ago.”

However, not all researchers agree. Some scholars propose a gradual development of complex behaviors, arguing that language was just one of many factors shaping human evolution. Others believe that cultural changes – such as tool use and social coordination – may have influenced linguistic development rather than the other way around.

The origins of human language

Despite the ongoing debate, Miyagawa and his colleagues believe their study marks an important step forward in understanding how and when language emerged. “Our approach is very empirically based, grounded in the latest genetic understanding of early Homo sapiens,” Miyagawa concluded.
“I think we are on a good research arc, and I hope this will encourage people to look more at human language and evolution.”

By integrating genetic evidence with archaeological findings, this research provides a clearer timeline for when language capacity emerged. While many questions remain, the study reinforces the idea that language was central to shaping human history, allowing our ancestors to develop complex cultures, communicate across generations, and ultimately, create the societies we live in today. The study is published in the journal Frontiers in Psychology."

03-17-2026
By Eric Ralls, Earth.com staff writer
https://www.earth.com/news/when-humans-created-the-first-language-and-communication-skills/
#metaglossia #metaglossia_mundus
Are humans the only beings on the planet that use language to communicate?
"Burg Giebichenstein
Kunsthochschule Halle
“Language can only deal meaningfully with a special, restricted segment of reality. The rest, and it is presumably the much larger part, is silence.” George Steiner
Are humans the only beings on the planet that use language to communicate? Can we decipher the nonhuman world around us without harnessing it to our own socialization, syntax, and lexicon? Is interspecies communication even possible? Translation has been described as a precondition that underlies all (human) cultural transactions upon which communication is based. It also is inherently political and stands at the forefront of so many of today’s questions around identity, gender, post-colonial criticism, feminist critique, machine translation and canon creation, yet its connection within the context of the nonhuman turn, interspecies communication, and eco-criticism has not yet been fully explored.
Whether we are talking about classic linguistic and literary translation or any number of related fields – language and literature, cultural studies, performance, visual and media arts – the core question that translators and theorists of translation have been debating for centuries remains the same: is it possible to translate without interpreting? Is linguistic and cultural equivalence even possible? These questions become all the more urgent in the limit-case of interspecies communication. Can we apply empathic modes of translation to nonhuman articulations, wherein translation involves a form of metamorphosis, not of text, but of the translator? As such, translators are something of a hybrid species, with one foot in each culture and language, whose very existence revolves around traveling between worlds. There is something of the mythical being about translators, akin to a chameleon or centaur. In this course, we will not be engaging in a scientific exploration of interspecies communication, but examining theories around empathic translation – a process that sees translation not merely as the transformation of a text, but of the translator themself.
Emerging and classical theories of translation can offer a paradigm for engaging with plant and animal articulation – not language as such, but different forms of articulation perceived through the senses – one in which our hearing and seeing, “once intertwined and attentive to the calls and cries of animals, all but disappeared with the invention of the alphabet, retreating into a kind of silence.”
In David Abram's words: “By giving primacy to perception we can see the natural world, not as inert and passive, but as dynamic and participatory. The winds, rivers and birds speak in their own way (if we listen), the sounds of nature not only have informed indigenous languages, but language in general--humans are but one being intertwined with other beings and ‘presences.’ This perspective sees the landscape as a sensuous field, and human perception as but one point of view that is in reciprocity, in expressive communication, with other points of view and ways of being.”
How can theories of translation help us make sense of this new view of a world teeming with language and sentience? What theories abound in reference to the multiplicity of “language,” even as Walter Benjamin would argue for a “universal (human) language”? What practical tools does translation studies offer, and what bridges can it forge between the disciplines? The first half of the seminar focuses on key theoretical concepts relevant to the history and practice of translation. In the second half, students will engage in translation experiments that intersect with their own artistic/design practice. A final project should be considered a first draft of something that could develop later into a larger project.
The course will be taught in English and German.
This seminar is ideally suited to students interested in: Literature, Translation Theory / Translation / Cultural Studies / Critical Theory, Creative Writing/ Post-humanism, Trans-humanism, Eco-criticism, the More-than-Human Turn.
Teachers
Dr. Zaia Alexander"
https://www.burg-halle.de/en/course/l/talk-with-the-animals-translation-in-a-more-than-human-world
#Metaglossia
#metaglossia_mundus
#métaglossie
"McGill's team wins the 20th edition of the Jeux de la traduction!
Wonderful news: after placing 2nd last year, the McGill team has won the 20th edition of the Jeux de la traduction, the Canadian interuniversity translation competition! Warm congratulations to Noah Bourdon (captain), Alexandre Baraton, Jeanne Bergeon, Rose Langlois, Raphael Schmieder-Gropen, Austin Witter and Catherine Zich, who study in the Département des littératures de langue française and the École d'Éducation Permanente.
The team distinguished itself in several events:
Winning team in the overall standings
Winning team in the relay event (EN-FR, FR-EN)
1st place, individual event (EN-FR): Rose
1st place, comics (FR-EN): Austin and Noah
1st place, social media (FR-EN): Alexandre and Jeanne
1st place, audiovisual (EN-FR): Austin, Noah and Rose
2nd place, advertising (EN-FR): Raphaël and Rose
2nd place, song (FR-EN): Alexandre, Jeanne and Raphaël
3rd place, song (EN-FR): Alexandre, Jeanne and Raphaël
Karolina Roman, a doctoral student in the DLLF, and professors Audrey Coussy and Catherine Leclerc helped the team train. This 20th edition was held at the Université du Québec à Trois-Rivières, March 13 to 15, 2026." Category: Department of French Language and Literature. Last updated: Wed, 03/18/2026 - 21:34. Source site: /litterature https://www.mcgill.ca/litterature/fr/channels/news/lequipe-de-mcgill-remporte-la-20e-edition-des-jeux-de-la-traduction-372014 #metaglossia #metaglossia_mundus #métaglossie
What constitutes language – and what doesn't
What do we mean when we talk about language? And how do we differentiate ourselves from animals and plants – and from artificial intelligence? A linguist, a biologist and a digital humanities researcher provide answers.
"The marine biologist David Gruber recently told the New York Times that he and his team had managed to decipher a kind of alphabet of sperm whales, and that this alphabet was also accompanied by a whale-specific version of words. If this is true, then Gruber's particular research interest is understandable: deciphering these words, this language that is inaccessible to us, with the help of artificial intelligence. The biologist is hoping for nothing less than a new Copernican revolution, "the realisation that we are not the only beings with a rich inner and communal life".
Language requires conscious understanding
There is no question in today's research that animals and even plants communicate with each other. But whether the exchange of information that takes place should and may actually be called language continues to give rise to controversy – with people sometimes getting quite emotional about the issue. Matthias Erb from the Institute of Plant Sciences at the University of Bern, a specialist in the effects of plant scents, has made a clear decision in this regard: “I never talk about 'language' in connection with plants, I don't use the word.” For him, language is a “complex communication system”. The communication part, yes, he allows that this also applies to plants. It basically just requires a transmitter that “sends” information in order to trigger something in the recipient. Consciousness is not necessarily required on either side.
Espionage instead of cooperation
Erb's reticence is not only due to philosophical considerations. He still remembers the “talking trees”, a popular science phenomenon that took an all too rapid and rather unfortunate turn towards the esoteric in the 1990s. “This brought our field of research to a halt for almost 20 years; communication via scents was a taboo subject.” Erb is therefore rather critical of the fact that communication among trees is currently experiencing a small renaissance thanks to the work of bestselling author and forester Peter Wohlleben and his "Wood Wide Web". In general, the scene has a tendency to thoroughly misunderstand some signal paths. You often hear the example of trees “warning” their neighbours through chemical signals when a pest infestation occurs. Erb has a completely different view of this process: warning others would actually put trees at a competitive disadvantage. “I would rather call it espionage.” Finding out about the infestation of the neighbouring tree gives the neighbour a knowledge advantage that the originator of the signal would have preferred to avoid. For this reason, Erb is convinced that this is not a deliberate act of communication; nature finds ways to make the best possible use of information, regardless of any intention to send it. It is good at it – you could say it is in its nature.
«Trees don't use chemical signals to warn their neighbours of pest infestations - I would rather call it espionage.»
- Matthias Erb
Communication, understood in this rather broad manner, can take surprising forms. The colour of flowers, for example: for Erb, there is definitely something like an “intention to send information”, even if it only manifests itself evolutionarily, over long periods of time. The colour pigments are produced explicitly for this purpose, which reminds him of the scent molecules he investigates in his research. However, in order to be able to call this “language”, the biologist believes that conscious understanding is required. And here we would probably still be rather cautious in general: who would be prepared to attribute consciousness to plants?
Language models merely generate character strings
In the meantime, the issue has become rather muddled in another, related area: who would be prepared to attribute consciousness to machines? After all, they are proving that they are now capable of language. GPT and its ilk, which have only been around for a good five years, deliver texts in all tones and for all situations with an almost rage-inducing naturalness. Tobias Hodel, Professor of Digital Humanities at the Walter Benjamin Kolleg since the summer of 2025 and a specialist in texts and artificial intelligence, insists on a small but crucial difference: “large language models produce text, not language.” He calls what an AI generates character strings – meaningful sequences of “tokens”, as they are called in technical jargon. But something is still missing for it to be deemed proper language, Hodel believes. The language model only pretends to produce language. And what about us? We are only too happy to accept the illusion. Hodel calls it “positionality”: our access to language is always linked to a “social and cultural experience”. Language is therefore never simply there; it is always received by a particular entity – a detail that is undoubtedly of little interest to language models in their stubborn and zealous reproduction of patterns.
«Our access to language is always linked to a social and cultural experience.»
- Tobias Hodel
So is there a fundamental misunderstanding here? Erb is also bothered by the fact that we make use of human concepts to describe something that has little to do with human language. The biologist believes that we are also using projections. The primary aim should be to “understand nature better”, and he sees no reason why a forest should function in a similar way to a human community: “trees are completely different to us, they don't have a central nervous system, it all works in a 'wonderfully modular' way.” Unfortunately, this line of thought only leads further down the slippery slope when it comes to language models: could it perhaps be that we are not projecting anything into the machines at all, because we – oh shock! – function in a similar way to these famous neural networks? That our brain black boxes harbour similar secrets to the AI black boxes, of which no one can say exactly how they do what they do? What if our language, in essence, is nothing more than “strings of characters”, if our thoughts are magically “produced” when we speak, as Kleist once described it?
Is the world made of language?
When it comes to such distinctions (or indistinguishability), we inevitably end up reaching for the dusty old philosophical tool box at some point. We find ourselves once again facing big questions that have long been considered obsolete, or at least are hardly discussed in recent philosophical discourse. Is the world made of language? Can we even gain an understanding of the world beyond language? The Austrian philosopher Ludwig Wittgenstein was categorical on this point: “The limits of my language mean the limits of my world.” And can we really define consciousness – and its relationship to language – clearly? One thought experiment among many, again using the example of colours: language models could speak very eloquently about colours, but they could never gain a real understanding of “blue” or “green”, it is often said, because they lack the sensory experience to do so.
About the person
Prof. Dr. Tobias Hodel is Associate Professor of Digital Humanities at the Walter Benjamin Kolleg of the University of Bern.

This, though, is an argument that blind people must see as an affront – they also have a multifaceted understanding of colours and gain it through language. Ask yourself what proportion of “world knowledge” you have acquired yourself and what proportion you have acquired indirectly through language (be it in conversations or through texts). It is therefore only logical that Silicon Valley started talking about “reasoning models” a year or two ago. With language comes the ability for machines to reason, as if by magic – Hegel would not necessarily have disagreed. But Hodel does: “all these promises – artificial general intelligence (AGI), reasoning – ultimately come from advertising language.” This is also not without a strange logic: language always potentially means manipulation, deception and exaggeration. Silicon Valley may just be falling for its own magic tricks.
Linguistically gifted animals that can express what they want
Either way, we are currently experiencing a strange and, for many, disconcerting moment in human history. Our position of “human exceptionalism” is being contested from two sides. Are we the only living beings that can speak? And what if machines are suddenly more than just “stochastic parrots”, as an influential paper from the Google ethics department put it (a publication which earned its author Timnit Gebru not thanks but the termination of her employment)? Is language still suitable as a distinguishing feature – as what ultimately sets humans apart? The question can also be turned around to yield the most pragmatic definition of language: it is the tool that only humans have at their disposal. It is what shapes us, our interactions, our knowledge, our emotions. We may be animals, but we are the linguistically gifted ones.
Magazine uniFOKUS
This article first appeared in uniFOKUS, the University of Bern print magazine. Four times a year, uniFOKUS focuses on one specialist area from different points of view. Current focus topic: Language.

This has been the axiom for centuries in the history of philosophy, and no one has really dared to seriously object. Linguist and Director of the Institute of Linguistics Linda Konnerth puts it this way: “human languages are communication systems with which we can express everything we want to express.” This includes, in particular, fictional or long-past events. It is not just about exchanging information, but also about emotions, attitudes to what is being said, and the fact that “we often want to remain vague and not communicate everything explicitly”.
More than an evolutionary advantage?
There is no doubt that language represents an evolutionary advantage. What is in doubt, however, is where this advantage lies exactly: is it the ability to coordinate and divide up foraging and other tasks? Or to strengthen the relationship between children and parents? And did the development of language really run parallel to the development of sapience? It is also debated whether animal sounds are intentional communication or simply an expression of alarm or fear. Meerkats, for example, easily become so excited that they emit their warning call even when no other meerkats are nearby. Chimpanzees, on the other hand, curb their alarm calls when they see that the group has already spotted the danger.
«Human languages are communication systems with which we can express everything we want to express.»
- Linda Konnerth
In any case, once language was there, it proved to be a success story. It spread and diversified: “there are around 7,000 languages in the world today,” says Konnerth, and not all of them are documented. This is why a lot of research in general linguistics is being carried out on indigenous minority languages far away from the political centres; the University of Bern is establishing close cooperation with universities in the Global South in order to sustainably expand basic linguistic research.
Researching languages before they disappear
And language is dynamic. New language forms are still emerging today – sociolects and contact varieties, Konnerth calls them. But more is needed before you can call something a new language: the language form must be used for all purposes and passed on as a native language. In this respect, another dynamic is much more significant: “in general, the number of languages is falling rapidly.” The reasons for this are the growth of communication infrastructure and the resulting language contact. Obviously, parents also have a socio-economic interest in having their children grow up with a national language or other majority languages, which increases the pressure on minority languages. Estimates suggest that a quarter of today's languages will no longer be spoken by 2100 and that the rate of language extinction is likely to triple. It is therefore a race against time: “the main thing is that we want to understand how different languages are and how these differences develop.”
About the person
Prof. Dr. Linda Konnerth is Assistant Professor of Historical Linguistics and Managing Director of the Institute of Linguistics at the University of Bern.

Viewed as a whole, today and in evolutionary terms, “small” languages with up to 100,000 speakers are the norm and therefore of particular interest to linguists. In this respect, Switzerland, with all its living dialects, is a small paradise for a linguist. Konnerth thus sees an ethical dimension to her research. She points, for example, to indigenous North American languages, which can be “revitalised” thanks to linguistic documentation – documentation of significant importance for the descendants of these speaker communities.
Researching languages with the help of AI
Konnerth also sees great potential in the use of AI – but for the time being, the technology is mainly helping with the more tedious, routine tasks in research. Because linguists primarily work with spoken language, the bottleneck lies in transcribing the audio recordings, and this is where AI is getting better and better. This is where Konnerth's and Hodel's approaches are similar. For the AI expert, automatic text extraction also represents a “huge opportunity”. Literary research projects can now deal with unimagined volumes of text: “we can suddenly analyse 5,000 books instead of 50”. However, he is still a little cautious when it comes to the hopes pinned on language models: at the moment, specific AI models are still better than the general models, and as a researcher you also have to ask yourself what it costs to train and operate such giant models, “also ecologically”. In this respect, the use of language models is for him also a “question of decency”, especially for a literary connoisseur. It is also about an awareness of how many different text genres and tonalities there actually are – not just the “plastic texts” spewed out by the machine. He compares it to furniture handmade by an artisan carpenter versus mass-produced IKEA products. Hodel is convinced that this is ultimately not a bad thing for the humanities and that they will become more relevant: “after all, it is the domain of the humanities to make it clear how different kinds of knowledge stand in the world.”
Protection against language?
Matthias Erb also expects positive developments for his field of research: there will certainly be some breakthroughs in the next few years, especially at the molecular level. The mechanisms still need to be investigated, for example on the receptor side: how exactly do the fragrance molecules get into the plants, where do they dock, what happens next? Research may primarily uncover chemical phenomena and abstract correlations, but Erb is aware that it also has to deal with language – and that a little caution is always required, especially when “selling” the results: “human language has its limits, of course, but we have to use it to communicate our results effectively.”
About the person
Prof. Dr. Matthias Erb is Head of the Biotic Interactions Section at the Institute of Plant Sciences of the University of Bern.

Which brings us back to language and manipulation, a topic that we are currently grappling with in connection with fake news and the vulnerability of democracies. In this context, linguistic competence also means competence with fact and fiction. The fact that anything can be told – and that we love being told stories – leads Hodel to wonder whether we might need “protective layers against language”, “because now we realise how impressionable we are”. But doesn't language always reflect its own limits and ambiguity in a playful way? A unique moment on the beach, a beautiful experience that will stay with us forever: we find it “indescribable”, and that itself is the best description.
Words that decay like musty mushrooms
At the beginning of the 19th century, Kleist put an optimistic spin on it in his essay “On the gradual formation of thoughts during speech”: “language is then no longer a fetter, like a brake on the wheel of the mind, but like a second wheel on its axis, running parallel to it.” A hundred years later, Hofmannsthal wrote in the Chandos letter: “I have completely lost the ability to think or speak coherently about anything. [...] the abstract words, which the tongue must naturally use in order to express any kind of judgment, decayed in my mouth like musty mushrooms.” Language as the wheel that keeps the cart turning, versus language that is forever in our way – words that decay like musty mushrooms: there is no better way to illustrate the great rift that language means to us. If you want to boil this topic down to a more concise form, please contact the language AI you trust.
About the author
Roland Fischer is a freelance science journalist.
Keywords: uniFOKUS, Language. This article is part of the online magazine of the University of Bern."
What constitutes language - and what doesn't https://share.google/LBjI0nHGuT4S7ybCd
"Advances made through No Language Left Behind (NLLB) demonstrated that high-quality machine translation (MT) can scale to 200 languages. Later, Large Language Models (LLMs) were adopted for MT, increasing quality but not necessarily extending language coverage. Current systems remain constrained by limited coverage and a persistent generation bottleneck: while crosslingual transfer enables models to understand many undersupported languages to some degree, they often cannot generate them reliably, leaving most of the world’s 7,000 languages—especially endangered and marginalized ones—outside the reach of modern MT. Early explorations in extreme scaling offered promising proofs of concept but did not yield sustained solutions. We present Omnilingual Machine Translation (OMT), the first MT system supporting more than 1,600 languages. This scale is enabled by a comprehensive data strategy that integrates large public multilingual corpora with newly created datasets, including manually curated MeDLEY bitext, synthetic backtranslation, and mining, substantially expanding coverage across long-tail languages, domains, and registers. To ensure both reliable and expansive evaluation, we combined standard metrics with a suite of evaluation artifacts: the BLASER 3 quality estimation model (reference-free), the OmniTOX toxicity classifier, the BOUQuET dataset (a newly created, largest-to-date multilingual evaluation collection built from scratch and manually extended across a wide range of linguistic families), and the Met-BOUQuET dataset (faithful multilingual quality estimation at scale). We explore two ways of specializing an LLM for machine translation: as a decoder-only model (OMT-LLaMA) or as a module in an encoder–decoder architecture (OMT-NLLB). The former is a model built on LLaMA3, with multilingual continual pretraining and retrieval-augmented translation for inference-time adaptation.
The latter is a model built on top of a multilingual aligned space (OmniSONAR, itself also based on LLaMA3), and introduces a training methodology that can exploit non-parallel data, allowing us to incorporate the decoder-only continual pretraining data into the training of an encoder–decoder architecture. Notably, all our 1B to 8B parameter models match or exceed the MT performance of a 70B LLM baseline, revealing a clear specialization advantage and enabling strong translation quality in low-compute settings. Moreover, our evaluation of English-to-1,600 translations further shows that while baseline models can interpret undersupported languages, they frequently fail to generate them with meaningful fidelity; OMT-LLaMA models substantially expand the set of languages for which coherent generation is feasible. Additionally, OMT models improve in cross-lingual transfer, coming close to solving the “understanding” part of the MT puzzle for the 1,600 languages evaluated. Beyond strong out-of-the-box performance, we find that finetuning and retrieval-augmented generation offer additional pathways to improve quality for a given subset of languages when targeted data or domain knowledge is available. Our leaderboard and main human-created evaluation datasets (BOUQuET and Met-BOUQuET) are dynamically evolving towards omnilinguality and freely available.
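The retrieval-augmented translation mentioned in the abstract can be illustrated with a minimal sketch. To be clear about assumptions: the similarity function, the example store, the `=>` prompt format, and the function names below are all hypothetical stand-ins for whatever the OMT-LLaMA pipeline actually uses; only the general idea (retrieve similar bitext pairs and prepend them as few-shot demonstrations at inference time) comes from the abstract.

```python
# Inference-time retrieval-augmented translation, sketched with a toy
# retriever: pick the k bitext pairs whose source side overlaps most
# with the input sentence, and prepend them as few-shot examples.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity (a hypothetical stand-in for a real retriever)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def build_prompt(source: str, bitext: list[tuple[str, str]], k: int = 2) -> str:
    """Select the k nearest (source, target) pairs and format a translation prompt."""
    nearest = sorted(bitext, key=lambda p: jaccard(source, p[0]), reverse=True)[:k]
    demos = "\n".join(f"{s} => {t}" for s, t in nearest)
    return f"{demos}\n{source} =>"

bitext = [
    ("good morning", "bonjour"),
    ("good night", "bonne nuit"),
    ("thank you very much", "merci beaucoup"),
]
prompt = build_prompt("good morning everyone", bitext)
# The prompt contains the two closest pairs, then the new source:
# "good morning => bonjour\ngood night => bonne nuit\ngood morning everyone =>"
```

In the paper's setting, the retrieved pairs would come from the mined and curated corpora, and the assembled prompt would be fed to the specialized OMT-LLaMA model rather than printed.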
Written by
Omnilingual MT Team
Belen Alastruey
Niyati Bafna
Andrea Caciolai
Kevin Heffernan
Artyom Kozhevnikov
Christophe Ropers
Eduardo Sánchez
Charles-Eric Saint-James
Ioannis Tsiamas
Chierh CHENG
Joe Chuang
Paul-Ambroise Duquenne
Mark Duppenthaler
Nate Ekberg
Cynthia Gao
Pere Lluís Huguet Cabot
João Maria Janeiro
Jean Maillard
Gabriel Mejia Gonzalez
Holger Schwenk
Edan Toledo
Arina Turkatenko
Albert Ventayol-Boada
Rashel Moritz
Alexandre Mourachko
Surya Parimi
Mary Williamson
Shireen Yates
David Dale
Marta R. Costa-jussa
Publisher
arXiv
Research Topics
Natural Language Processing (NLP)" https://ai.meta.com/research/publications/omnilingual-mt-machine-translation-for-1600-languages/ #metaglossia_mundus #metaglossia
"Between Languages: How English dubs are rewriting emotion across global anime hits Moving past the dated ‘sub vs dub’ debate, the English voice casts of ‘Frieren: Beyond Journey’s End’, ‘Jujutsu Kaisen’ Season 3, and ‘Sentenced to Be a Hero’ break down how dubbing carries subtext and emotion to sound for a global anime audience Published - March 18, 2026 05:02 pm IST Ayaan Paul Chowdhury
Whether it is a teenager carrying the weight of a citywide massacre, an immortal mage learning to recognise grief too late, or a goddess who swings between divine poise and child-like exuberance, the current anime slate feels packed with a range of curious characters. Once treated as an auxiliary track for international markets, the English language dub for a lot of these popular anime is now far closer to the centre of that exchange, shaped by a diverse group of talented voice actors.
Crunchyroll’s Winter 2026 lineup brings together returning giants and new experiments, with Jujutsu Kaisen’s third season continuing its descent into the Culling Game arc, Frieren: Beyond Journey’s End’s sophomore run refining its study of time and memory, and the new Sentenced to Be a Hero reframing fantasy heroism as institutional punishment. What links the three is the degree to which their English casts are asked to navigate tone, emotion and cultural specificity, across languages that do not always align cleanly.
In Jujutsu Kaisen, Adam McArthur approaches its main protagonist Yuji Itadori with a grounded kind of pragmatism. “If you boil down what he’s feeling, it’s immense guilt,” he says, describing the aftermath of the Shibuya Incident, where Yuji becomes complicit in a catastrophic mass murder under Sukuna’s control. The third season pushes him into the Culling Games, a sprawling death tournament engineered to destabilise Japan’s cursed energy system, with Yuji positioned between execution orders and moral obligation.
McArthur’s task involves holding together the memory of a character who once operated with an unguarded, shounen-MC optimism while allowing that optimism to persist in altered form. “What I love about Yuji is he is always going to be Yuji. He’s going to choose good even when bad things happen to him. He continues to do that.”
The cost of sustaining that emotional register revealed itself in the routine he describes. “I’d go in and record those scenes. The director would be like, ‘okay thanks!’, I’d get in my car, cry on the way home, try to act normal, only to come back next week and do it again. It was the scene with Nanami, the scene with Nobara, all of it. It’s not just one episode. It keeps coming back,” he recalls. But McArthur does not frame that strain as a burden. “It’s tough, but it’s also rewarding. You don’t always get to do that with characters in animation. You get to do the light stuff and the really heavy stuff, and that’s a treat.”
Kayleigh McKee approaches Yuta Okkotsu through a more technical lens. “He’s a year older, he has more friends and more support, so I wanted to keep that friendliness from the movie but make him come across as more competent,” she says, describing Yuta’s reintroduction after Jujutsu Kaisen: 0 as a villain-apparent tasked with killing Yuji. “When he first appears, he almost seems like a villain, but he’s only portraying a villain to Yuji. So, I was portraying a character who was trying to portray himself as a villain. That was a unique challenge, and I had a lot of fun with it. I tried to sprinkle in little bits where, if you know he’s not fully selling it, you can pick up on that.” Her work extends to Kirara, a confident and mischievous trans Jujutsu sorcerer whose playful exterior masks her strategic combat style. As one of the few openly trans women working prominently in anime dubbing, McKee occupies a space that has historically remained opaque even within an industry long accustomed to gender-fluid casting traditions — Mayumi Tanaka’s Luffy, Masako Nozawa’s Goku, and Junko Takeuchi’s Naruto stand as defining performances that have shaped these iconic shounen MCs so completely that questions of gender fall away in the act of listening.
“I’ve portrayed non-binary characters, binary trans characters, cis characters, creatures, monsters,” she says. “I’ve seen overwhelming support from fans. People will say things like they want to see me voicing an entire series, which is flattering, but what it really comes down to is skill, which is something anyone could develop. I just had more of an existential reason to put the work into it,” she smiles. McKee also situates that effort within the industry’s response. “Most directors use me as a utility artist. They know I can do this range without hesitation. As long as I can portray the role in a respectful and representational way, then why not use me. That’s very flattering.”
The contrast between Japanese and English performances remains a constant negotiation. Mallorie Rodak describes her approach to Frieren as more than mere imitation. The elven mage, who has lived for over a millennium, speaks with an insouciance that risks reading as absence if handled too literally, and Rodak adjusts by introducing minimal inflections that suggest interiority without overt signalling. “Frieren is a difficult character because of her lack of emotion,” she says. “There’s a sliding scale. You don’t want to be completely devoid of emotion because then it feels boring, but you also don’t want to infuse too much emotion because it won’t feel like someone who’s been alive for a thousand years.” She traces that balance back to the original performance. “Atsumi Tanezaki’s work was a big inspiration. We hear the Japanese voices before we record, so the depth she brought to the character really informed how I approached it.”
Restraint seems to define Frieren more broadly — the series emerged from its first season as one of the most critically celebrated anime in recent years, with its contemplative pacing and attention to detail distinguishing it within a crowded field. This second outing continues Frieren’s journey north, while maintaining the episodic structure that allows smaller interactions to accumulate into something larger. The English dub follows that structure closely, with Jill Harris and Jordan Dash Cruz locating Fern and Stark’s emotional cores through specific moments.
Harris points to a brief exchange involving a cute head pat as the key to understanding Fern, whose outward composure conceals a need for validation that shapes her behaviour. “There’s a flashback where Heiter says that Fern needs a lot of praise,” she says. “And Frieren gives him a head pat, and it just clicked for me. Fern often seems stoic, but I think she often feels unappreciated. When she passes the mage exam and tells Frieren the spell she chose, and Frieren gives her a little head pat, Fern is just beaming. She wants to do a good job. She wants to make people happy. She wants head pats,” Harris smiles.
Cruz approaches Stark by identifying the gap between his self-perception and the way others see him. “For me, it was the dragon fight and also the episode with his brother,” he says. “You get to see exactly how strong Stark is, but you also witness why he views himself negatively. His relationship with his father also explains why he believes he is weak even when everyone else sees him as strong. Once you know that, it informs everything. You understand why he reacts the way he does, why he doubts himself.”
The actors’ reflections converge around the genre’s ability to hold more tender emotions within heightened settings. “When I watch something, it’s almost always fantasy or sci-fi,” Rodak says. “There’s an immersion that feels separate from reality, but the relationships and emotions are universal.” Cruz extends that idea through identification. “You can put yourself in these characters. You feel like you’re going on the journey with them.” Harris frames it in terms of accessibility. “My parents don’t watch anime, but they watched Frieren. It has a human element that appeals to everyone.”
Though Frieren refined the genre’s introspective potential, 2026’s latest offering, Sentenced to Be a Hero pushes in the opposite direction, using its premise to interrogate the structures that define heroism itself. The series centres on Xylo Forbartz, a condemned figure forced into endless cycles of combat as part of a penal system that weaponises heroism as a cruel form of punishment. Emi Lo describes the appeal of this inversion. “Nothing is as it seems,” she says. “Heroes are criminals. Goddesses are weapons. It makes you want to keep looking into the world because everything you expect is flipped.”
Lo’s performance as Teoritta — the self-proclaimed “Goddess of Swords”, who forges a contract with Xylo and commands an arsenal of summoned blades while oscillating between divine authority and childish enthusiasm — reflects that instability. “She’s a gremlin,” Lo chuckles. “When she summons a sword, she’s excited and energetic, and that comes from her desire to help. She just believes she needs to help. If you think about that as childhood innocence, where a kid hones in on one emotion, that helped me balance it.”
Dawn M. Bennett approaches Patausche Kivia with an emphasis on evolution. The disciplined captain of the Holy Knights enters the story as a staunch believer in order and duty, a figure shaped by doctrine who gradually learns to question it. “What resonated with me was her ability to listen,” Bennett says. “I expected her to be very set in her ways, always arguing with Xylo, but she becomes more open-minded. She realises that if she wants to do the right thing, she has to question her own beliefs.”
The idea of questioning preconceptions seems to align quite well within the wider moment of anime’s current global circulation, where streaming platforms have expanded access while also increasing demand for localisation. Increased visibility, expanded distribution, and a more diverse pool of performers have altered the expectations placed on English-language adaptations, pushing them toward a level of nuance that parallels their Japanese counterparts while retaining their own distinct rhythms. The actors navigating this space approach translation as an ongoing negotiation, where each line becomes an opportunity to locate meaning within the gaps between languages. And though the distance between languages persists, within that distance there is often room for a different kind of truth to take hold.
Rodak recalls how a single three-word line from Frieren — “Aura, kill yourself” — circulated online, drawing viewers who had not previously engaged with the series. “People would come up to me and say they weren’t going to watch the show, but they saw that clip with Aura and it convinced them,” she says. “There’s something cold about the way Frieren delivers that line, her back is turned, she’s walking away. It shows her power for the first time. That reaction, the memes, all of it made that scene one of my favourites,” Rodak beams.
Jujutsu Kaisen Season 3, Frieren: Beyond Journey’s End Season 2 and Sentenced to Be a Hero are currently streaming on Crunchyroll, with new episodes airing weekly" https://www.thehindu.com/entertainment/movies/english-anime-dubs-voice-cast-frieren-beyond-journeys-end-jujutsu-kaisen-sentenced-to-be-a-hero/article70757642.ece #metaglossia_mundus #metaglossia
"Korean fiction has experienced a rapid surge in popularity in the English-speaking world in recent years. Many attribute this to the Korean Wave that's been sweeping through cinema and music. Whatever the reason, Korean writers have been winning major literary awards and attracting the spotlight for their achievements. With so much amazing fiction to choose from, there are tons of great options for readers. We've compiled a list of some of the best Korean fiction in multiple genres from the 2010s and 2020s, including powerhouse authors like Han Kang and Bora Chung alongside rising stars like Sang Young Park.
Cursed Bunny
by Bora Chung translated by Anton Hur
Originally published in Korean in 2017, Cursed Bunny was a finalist for the 2023 National Book Award for Translated Literature. This haunting collection of short stories is deliciously eerie, sometimes veering into body horror and at other times utilizing surrealism and even absurdism.
Kim Jiyoung, Born 1982
by Cho Nam-Joo translated by Jamie Chang
Kim Jiyoung, Born 1982's Korean release in 2016 coincided with the global #MeToo movement. Featuring a sort of Korean everywoman figure as its protagonist, the novel dives right into a powerful critique of misogyny in the contemporary era. The book's interrogation of gender inequality is enacted both through its unique premise (a woman takes on the consciousness of a myriad other women) and its unsettling narration, delivered by the male psychiatrist evaluating her case.
Love in the Big City
by Sang Young Park translated by Anton Hur
Longlisted for the 2022 International Booker Prize (among others), Love in the Big City has garnered enough popularity that it was recently made into an independent film. Told in sections organized around different relationships in the protagonist's life, it has a surprisingly lighthearted feel to it for a book that contends with homophobia, dysfunctional families, unhealthy relationships, and loneliness. It's a thought-provoking read about a young gay man's quest for love.
Untold Night and Day
by Bae Suah translated by Deborah Smith
Untold Night and Day is a fascinatingly disorienting work of literary fiction. It begins firmly enough, grounded in reality, but as the story unfolds, characters and experiences begin to collapse in on each other. If you're a reader who enjoys an unconventional and potentially challenging read, this book is perfect for you.
Welcome to the Hyunam-dong Bookshop
by Hwang Bo-reum translated by Shanna Tan
The popularity of what some have termed "healing fiction," a label applied to select contemporary Korean and Japanese fiction with "cozy" and fabulist elements, has been growing over the past several years. Welcome to the Hyunam-dong Bookshop falls into this category and makes for a comforting and introspective read that focuses on the balance between ambition and happiness.
The Hole
by Hye-young Pyun translated by Sora Kim-Russell
This novel is a psychological thriller at its finest. Grappling with some of the darker aspects of life—such as control, guilt, and loss—this deeply uncomfortable story of a man who has been paralyzed in a car accident pushes readers to reflect on the consequences of living. As he is subjected to abuse and neglect, the lines between truth and lies blur in terrifying ways.
Greek Lessons
by Han Kang translated by Deborah Smith and Emily Yae Won
Han Kang was awarded the Nobel Prize in Literature “for her intense poetic prose that confronts historical traumas and exposes the fragility of human life.” Although it was the first book Kang published after her smash hit The Vegetarian, Greek Lessons wasn't published in English until over a decade later. It grapples with themes of loss and trauma using prose that exhibits the author's roots as a poet."
By Kobo • March 15, 2026
https://www.kobo.com/blog/the-best-korean-fiction-in-translation
#metaglossia
#metaglossia_mundus
"Translation technology tools play a pivotal role in overcoming linguistic and other communication-related obstacles in different types of crises. Drawing on real-life examples, the chapter explores translation technology as an agency-enabling solution that facilitates access to instant, accessible information. The complex interaction between translation tools, human actors, and society is mapped through the lenses of sociological theories of agency. The chapter highlights how the development and deployment of such technologies can affect both crisis preparedness and containment, frequently amplifying the voices of governments and technology providers at the expense of those directly affected. To establish inclusive, technology-enabled communication, the chapter offers recommendations for contextually relevant crisis policies and management strategies, advocating for adaptable approaches and positioning human translators as safeguards against overreliance on AI tools. It also underlines the need for transparent, trustworthy communication channels and balancing sociocultural factors and power dynamics, ensuring that crisis communication is inclusive and people-centred." https://www.taylorfrancis.com/chapters/edit/10.4324/9781003271314-15/translation-technologies-automation-crisis-situations-khetam-al-sharou-mieke-vandenbroucke-gert-vercauteren #metaglossia #metaglossia_mundus
The GDELT Project, which collects and analyses global news and social data in real time, is disclosing experiments using AI to process large volumes of news and policy documents. It continuously gathers content in more than 100 languages and updates key datasets about events, relationships and images about every 15 minutes. GDELT also runs a platform translating news written in 65 languages. Recent tests include extracting leadership-change announcements and converting a 3,100-page U.S. bill into an infographic.
" 기자명Jinju Hong 2026-03-16 13:05:00
GDELT unveils AI experiments translating multilingual news, extracting leadership changes and turning a 3,100-page U.S. defence bill into an infographic. (홍진주)] The GDELT Project, which collects and analyses global news and social data in real time, is releasing various experiments that use artificial intelligence to analyse large volumes of news and policy documents.
An online outlet, Gigazine, reported on March 15 local time that the GDELT Project is a global archive that continuously collects content published in more than 100 languages worldwide, including broadcasts, newspapers and web news, and builds it into a database. It links various elements, including people, organisations, places, events and news sources, into a single network. It provides data on events around the world, their background and trends in public opinion.
The project was founded by data scientist Kalev Leetaru and political scientist Philip Schrodt, and it collects news and social media data from 1979 to the present. The collected data serve as a basis for analysing global political, economic and social trends by quantitatively coding social events and reactions to them.
GDELT in particular releases large datasets so researchers and journalists can use them for analysis. The data consist of three streams: event data that classify physical activity worldwide into more than 300 categories; relationship data that record people, organisations, places, topics and emotions; and data that analyse the visual story of news images. The data are updated about every 15 minutes.
GDELT also operates a translingual platform that processes global news written in 65 languages through real-time translation using its own translation system.
Recently, it has also been actively conducting analysis experiments using AI. The GDELT Project disclosed an experiment that uses a Gemini-based model to automatically extract announcements of leadership changes at governments or companies from global news and organise them into a knowledge graph. In the process, AI was used to generate reports by going beyond organising personnel information and inferring the political and economic background.
In another experiment, work was carried out to input the roughly 3,100-page U.S. National Defense Authorization Act into AI and convert the entire bill into a single infographic. In the process, various analyses were also performed, including topic analysis of the bill, organisation of related bills and generation of expected questions.
GDELT also disclosed a large-scale translation experiment. According to a February 2026 announcement, it translated about 3 million TV news broadcasts accumulated over 25 years using AI. The cost to translate a total of 62 billion characters of broadcast data amounting to about 6 billion seconds was about $74,634. This is work that is estimated to have required millions of dollars using past methods.
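As a quick sanity check on the figures reported above (pure arithmetic on the quoted numbers, not additional reporting), the implied unit cost and total audio duration work out as follows:

```python
# Figures as reported: 62 billion characters, ~6 billion seconds of
# broadcast data, translated for about $74,634.
total_chars = 62_000_000_000
total_cost_usd = 74_634
total_seconds = 6_000_000_000

# Implied cost per million characters translated
cost_per_million_chars = total_cost_usd / (total_chars / 1_000_000)

# How much continuous audio 6 billion seconds represents
years_of_audio = total_seconds / (365.25 * 24 * 3600)

print(f"~${cost_per_million_chars:.2f} per million characters")  # ~$1.20
print(f"~{years_of_audio:.0f} years of continuous audio")        # ~190 years
```

At roughly $1.20 per million characters, the reported total is plausible for current AI translation pricing, which illustrates why work once estimated at millions of dollars became feasible.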
Such projects are assessed as examples showing the possibility that AI can comprehensively analyse vast amounts of news and policy documents. Experts say such data-based analysis could become a new tool for understanding global political and economic trends." https://www.digitaltoday.co.kr/en/view/39425/ai-translates-25-years-of-news-in-100-countries-summarises-3100-page-bill-in-big-data-test #metaglossia #metaglossia_mundus
"The world's first Tibetan large language model and its application, DeepZang, has been officially unveiled in Lhasa, Southwest China's Xizang Autonomous Region. This model fills the gap in indigenous large language models at both the national and ethnic levels, while also facilitating the innovation and inheritance of Tibetan ethnic culture in the AI era, the company's chairman told the Global Times.
Developed independently by CHOKNOR Information Technology Co., Ltd. in Xizang, the model and its application are the first Tibetan large language model to complete national filing for generative AI in China, filling a technological gap in this field globally, according to local media Tibet.cn.
The World Record Certification Agency (WRCA) also awarded the certification of "the World's first Tibetan large language model" at DeepZang's launch event, chinanews.com reported on Monday.
Tenzin Norbu, chairman of the CHOKNOR company, told the Global Times on Monday that this open-source large model platform is China's first ethnic language AI open platform designed for multilingual and multimodal capabilities. The DeepZang platform supports over 80 languages, including Tibetan, Putonghua, English, Mongolian and Uygur, enabling an integrated approach to listening, speaking, translating, recognizing and thinking, Tenzin added.
The DeepZang model marks a strategic leap for China to take the lead in the AI field for ethnic languages, officially inaugurating the high-quality AI development of Tibetan-language in Xizang and dawning the era of AI for the Tibetan language, Tibet.cn reported.
The DeepZang application was also launched on Sunday, supporting intelligent interactions in Tibetan, Putonghua and English. Users can speak or type a sentence to access real-time mutual translation, Tibetan-language Q&A and cultural knowledge inquiries, according to the report.
Shortly after its launch on Sunday, the app recorded an average of 4,000 downloads per hour, the Global Times learned from the company.
Tenzin said the company has built a high-quality parallel corpus of nearly 70 million precise Tibetan-Putonghua language pairs. Additionally, they have completed large-scale speech data collection across the three major Tibetan dialect regions, establishing China's largest and accurately annotated Tibetan speech database to date, he added.
As shown in a video released by the Xizang Daily, several users voice-inputted instructions in different Tibetan dialects, and the application achieved accurate recognition and delivered prompt responses with high efficiency.
Tenzin said the development of this large language model has filled the gap in Tibetan large language models at the national and ethnic levels, and it also gives full play to the Tibetan cultural value, facilitating the innovation and inheritance of Tibetan ethnic culture in the AI era.
An official from Lhasa people's government was quoted by Tibet.cn as saying that the successful development of DeepZang has provided a valuable exploratory model for the global AI community in the processing of low-resource languages. It stands as a testament that modern information technology can effectively underpin the preservation and development of traditional cultures, the official added.
"Through this large language model and its application, we also aim to provide an authentic platform for global users seeking to learn about Tibetan culture, history and politics, thereby preventing the dissemination of distorted ideologies and values," Tenzin said.
In another video posted by the Lhasa Women's Federation on its official WeChat account, a student from Xizang University said that DeepZang's translation function is very useful, though the translation of some four-character idioms is still not fully developed.
Tenzin said that the model is currently limited by the scope of its corpus data, and the company will continue to refine and update it based on user feedback.
In the future, this large language model is set to extend its capabilities to sectors including education, healthcare and ecology, delivering convenient and efficient services to enterprises and government agencies..."
https://www.globaltimes.cn/page/202603/1357052.shtml
#metaglossia
#metaglossia_mundus
" Stanford engineer has demonstrated that frontier language models can run directly on everyday edge devices using convex optimization, eliminating reliance on cloud servers and costly GPUs. The breakthrough, unveiled at NeurIPS 2024, enables secure, lower-cost, personalized AI with early international commercial deployments.
United States, March 12, 2026 -- A Stanford engineer has shown that the world’s most advanced "frontier" language models can now run directly on regular edge and local devices. This removes the pure reliance on cloud servers and costly specialized hardware.
This engineer used advanced mathematical optimization techniques to show that sophisticated and helpful "frontier" language models can run on the personal devices people already have. This change means the industry no longer has to rely on the cloud or expensive specialized GPU hardware.
Breaking the Cloud Dependency
Running advanced neural networks usually means using an army of cloud computing resources, which requires expensive GPU farms, a steady internet connection, and per-token API fees. Miria K. Feng, a PhD candidate in electrical engineering at Stanford University, has successfully merged the potential of mathematical convex optimization techniques with large-scale deep learning applications for far more accessible and personalizable AI. Powerful frontier models running on your local devices mean greater security, since your data stays local and reduces the cost of paying per-token fees to a few large tech conglomerates.
The combination of using mathematical optimization to reformulate neural networks is not new and was proposed by Turing Award winner Yoshua Bengio. But the practical deployment of these elegant theoretical techniques in large-scale AI was first publicly announced in Miria's work at NeurIPS 2024. This quiet breakthrough has led to frontier models that efficiently run personalizable inference on everyday edge devices that we already carry in our back pockets.
“The goal was to prove that you don't need a GPU cluster or fiber internet connection to use frontier technology,” said Miria. “We use principled convex optimization techniques in conjunction with machine learning to cut the computing power needed without sacrificing quality in results. This dramatically reduces barriers to entry for global users and helps safeguard user privacy since data is not being constantly shared on the cloud."
From Academic Research to Market Launch
Early deployments in Canada, Singapore, and Japan to build accessible, everyday, personalized AI tools were a resounding success for Miria's innovations. Her commercial deployments span widely, from Toyota Motor Corporation in Nagoya, Japan, to FCS Solutions in Singapore.
Meanwhile, Miria is continuing her cutting-edge doctoral work at Stanford University as a Rambus Corporation Fellow, with beta tests in the hospitality sector set to go live in Los Angeles and Las Vegas in 2026. Official news about partnerships is expected later this year.
A Multidisciplinary Approach
Miria's technical work is shaped by her unique background. She is a Kiwanis Music Festival gold medalist and concert pianist, currently a student of Melinda Lee Masur. Her top national performances in the Pascal, Fermat, and Euclid mathematics competitions continue to give her a creative yet principled approach to engineering. She paid her own way through school and has lived in several countries, which led her to focus on "equitable access" and to build tools that work for everyone, regardless of local infrastructure or income.
About the company: Miria K. Feng is a doctoral researcher in the Department of Electrical Engineering at Stanford University, focusing on convex optimization for deep learning. As a Stanford Graduate Fellowship winner and a Rambus Corporation Fellow, she connects theoretical math optimization with real-world applications.
Contact Info: Name: Miria Feng Email: Send Email Organization: 9-Figure Media Website: https://9figuremedia.com/"
https://markets.businessinsider.com/news/stocks/new-technology-brings-advanced-language-models-to-everyday-devices-1035923587 #metaglossia #metaglossia_mundus
"La Chine adopte une loi qui promeut le mandarin comme «langue commune nationale»
L’Assemblée nationale populaire chinoise Chine a approuvé jeudi une loi dite d’«unité ethnique» que les défenseurs des droits de l’Homme estiment délétère pour les langues et les cultures minoritaires dans le pays.
"China on Thursday adopted a law on the "promotion of ethnic unity and progress" during the Two Sessions, its annual political event (a parliamentary gathering at which the Chinese government sets its broad economic and political direction for the year ahead); the law was approved without debate during the parliamentary session. The new law, passed by China's National People's Congress (NPC), formalises policies to promote Mandarin as the "common national language" in education, official business and public spaces.
Beijing presents the law as a tool for modernisation and prosperity, saying it will strengthen "the Chinese nation's sense of shared community" and improve minorities' job prospects through Mandarin proficiency. Scholars and human rights advocates, however, see it as the legal consolidation of a policy of forced assimilation, according to the BBC.
According to the British outlet, the measure contains clauses that weaken the status of China's other official languages in favour of Mandarin. It thus targets the 55 officially recognised minorities, who account for roughly 9% of China's 1.4 billion people.
Language as the main target
In some regions, such as Tibet and Inner Mongolia, home to large ethnic minority groups, government policies had already mandated the use of Mandarin as the language of instruction. Yalkun Uluyol, a China researcher at the NGO Human Rights Watch, described the new law to AFP as a "radical shift" from a policy dating to the era of former leader Deng Xiaoping, which guaranteed minorities the right to use their own languages. Educational institutions will now have to use Mandarin as their main language of instruction, and adolescents will be required to have "a basic command" of Mandarin by the end of compulsory schooling.
Tensions over language had flared well before the law's adoption. In 2020, in Inner Mongolia, the abrupt withdrawal of Mongolian-language textbooks sparked rare but powerful protests. Some parents even kept their children at home in protest, viewing the measure as a direct threat to their cultural identity. The crackdown was immediate and sweeping, followed by re-education campaigns. Students in the region may now study Mongolian for only one hour a day, as a mere foreign language, according to the Associated Press.
The law also provides for penalties against parents or guardians in China who pass on to their children ideas deemed contrary to "ethnic harmony". It also creates an unprecedented legal basis for prosecuting individuals or organisations based outside China whose actions harm "ethnic unity", a mechanism of particular concern to the Uyghur, Tibetan and Mongolian communities in exile, often among those most criticised by the regime.
China, where the Han form the overwhelming ethnic majority, recognises 55 minorities within its borders, encompassing several hundred languages and dialects. For decades the Chinese government has been accused of pursuing policies to forcibly assimilate these minorities into the Han majority." Joséphine Guilhem de Pothuau 13 March 2026 https://www.lefigaro.fr/international/la-chine-adopte-une-loi-qui-promeut-le-mandarin-comme-langue-commune-nationale-20260313 #metaglossia #metaglossia_mundus
"English PEN’s flagship translation grant programme, PEN Translates, announced its latest round of winners, awarding grants to 18 titles from 14 publishers across 12 languages and 16 regions. Three of those titles come from African writers — from Egypt, Sudan, and Mauritius — and one of them makes history as the first Mauritian title ever to receive a PEN Translates award.
The Egyptian title is The Field by Hamdi Abu Golayyel, translated from the Arabic by Robin Moger and published by Saqi Books. Abu Golayyel — who passed away in June 2023 — was one of Egypt’s most distinctive literary voices, born in Fayoum and widely described as a chronicler of the lives of Egypt’s marginalised and working class. Three of his novels have previously been translated into English: Thieves in Retirement (tr. Marilyn Booth, 2006), A Dog with No Tail (tr. Robin Moger, 2009), which won the Naguib Mahfouz Medal for Literature in 2008, and The Men Who Swallowed the Sun (tr. Humphrey Davies, 2022), whose translator was joint winner of the 2022 Saif Ghobash Banipal Prize for Arabic Literary Translation. The Field will be a welcome return of his work to English-language readers.
The Sudanese title is Under the Neem Tree by Rania Mamoun, a Sudanese activist and bestselling writer of poetry, fiction, and nonfiction, translated from the Arabic by Elisabeth Jaquette and published by Comma Press. Jaquette previously translated Mamoun’s Thirteen Months of Sunrise (Comma Press, 2019), which was shortlisted for the 2020 Warwick Prize for Women in Translation and was itself a PEN Translates award winner, making this a continuation of a celebrated translating partnership.
The most historic of the three is The Rasta’s Song by Sharon Paul from Mauritius, translated from the French and Mauritian Creole by Nadiyah Abdullatif and published by Balestier Press. This is the first time a title from Mauritius has ever received a PEN Translates award, a milestone that reflects both the programme’s expanding geographic reach and the growing recognition that Francophone and Creole-language African literatures deserve a place in the global translation conversation. The inclusion of Mauritian Creole as a source language is itself significant: it joins Slovak as one of two languages appearing in the PEN Translates portfolio for the first time in this round.
PEN Translates has now supported over 400 books translated from over 90 languages, awarding over £1.2m in grants since its inception. Books are selected on the basis of outstanding literary quality, the strength of the publishing project, and their contribution to UK bibliodiversity. The programme’s Translation Advisory Co-chair Nichola Smalley described this round as giving “hope for the future of UK translation publishing”; for African literature specifically, three grants in a single round, including a historic first, is a result worth celebrating!" by Blessing Uwisike February 26, 2026 https://share.google/B1Gq1t8fiiOoFaf2z #metaglossia #metaglossia_mundus
Creators can now upload language-specific thumbnails, enabling viewers to see previews in their preferred language and improving discoverability globally
"YouTube has introduced a new feature that allows creators to upload translated thumbnails for their videos, a move aimed at helping content reach audiences across different languages more effectively.
The update enables creators to add multiple thumbnail versions for a single video in different languages. When viewers browse the platform, the thumbnail displayed will automatically match their language preferences, allowing them to see a preview image that feels more familiar and relevant.
For instance, a viewer whose interface language is Hindi may see a Hindi-language thumbnail, while someone browsing in Spanish could see a Spanish version of the same video’s thumbnail. Despite the different preview images, both viewers would still be watching the same underlying video.
The feature is designed to complement YouTube’s existing multi-language audio capabilities, which allow creators to upload alternative audio tracks in different languages for the same video. By adding translated thumbnails to the mix, the platform is extending localisation beyond audio to the visual entry point of a video.
Creators can add these translated thumbnails through YouTube Studio, where they can upload different thumbnail images mapped to specific languages. Once added, YouTube automatically determines which version to display based on the viewer’s language settings.
The company says the feature is intended to help creators improve discoverability and engagement among global audiences, particularly for channels that publish content aimed at viewers in multiple regions."
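The selection behaviour described above — show the thumbnail matching the viewer's language settings, otherwise fall back to the default — can be sketched in a few lines. The data structure and function name here are illustrative assumptions, not YouTube's actual implementation or API:

```python
def pick_thumbnail(thumbnails: dict, default: str, viewer_locale: str) -> str:
    """Return the thumbnail for the viewer's language, falling back to the
    base language (e.g. 'es' for 'es-MX') and then to the default image.
    `thumbnails` maps language codes to image URLs (hypothetical structure)."""
    if viewer_locale in thumbnails:        # exact match, e.g. "pt-BR"
        return thumbnails[viewer_locale]
    base = viewer_locale.split("-")[0]     # strip the region: "pt-BR" -> "pt"
    if base in thumbnails:
        return thumbnails[base]
    return default                         # no localized version uploaded

thumbs = {"hi": "thumb_hi.png", "es": "thumb_es.png"}
print(pick_thumbnail(thumbs, "thumb_default.png", "es-MX"))  # thumb_es.png
print(pick_thumbnail(thumbs, "thumb_default.png", "fr"))     # thumb_default.png
```

The base-language fallback matters in practice: a creator who uploads one Spanish thumbnail should reach viewers browsing in es-MX, es-AR, and es-ES alike.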
https://www.buzzincontent.com/news/youtube-rolls-out-translated-thumbnails-to-help-creators-reach-multilingual-audiences-11205414
#metaglossia #metaglossia_mundus
"Taariq Ahmed, Assistant Campus Editor March 13, 2026 Poet, translator and New York University English and Spanish and Portuguese Prof. Urayoán Noel shared his work and discussed ideas in a Thursday poetry reading and Q&A event with Northwestern community members as part of the English department’s Unsettling Sound series.
Noel is the author of several books in English and Spanish. He performed for about 15 attendees in University Hall.
He started by reading poems from his 2021 collection, “Transversal,” performing with voice and volume changes and reciting both the English and Spanish versions. Throughout the event, he performed poetry with instrumental music in the background.
“Now poetry is just a name for this, our faint embodied sound, for music once it’s not around, for ash in lockstep with the flame, for streets still summoning the same old shadows,” Noel said in one poem named “Juliécimas.”
Noel then transitioned into his ongoing series, “Wokitokiteki,” which he described to the audience as a “walking poetic improvisation project.” He said he creates the content while walking through neighborhoods in Puerto Rico and those in U.S. states with significant Puerto Rican populations.
In honor of his visit to the Chicagoland area, he delivered one piece inspired by Humboldt Park, Illinois, a historically Puerto Rican cultural hub, specifically referencing his observations from the walk.
As a translator, Noel has repeatedly translated works from Garifuna and Guatemalan poet Wingston González. Noel recited poems from an unpublished translation of González’s 2015 book, “Translaciones.”
When explaining his relationship with González, he said they share an affinity for things like performance and improvisation, despite their cultural differences.
Citing inspiration from “The Traffic in Meaning: Translation, Contagion, Infiltration” by Mary Louise Pratt, he talked about how translation is less about producing equivalences and more about understanding and representing the experiences of others.
Later, Noel read from his 2025 autobiographical prose work, “Cuaderno de Isabela/Isabela Notebook,” and handed out copies to attendees.
“Tell me if there’s a city like the one with the horse staring at the sea in front of windows with iron bars and flanked by piles of car tires…” Noel said in one poem, translated to be “Pueblo” or “Town.”
Noel then transitioned into a Q&A with the audience. He discussed the Wokitokiteki project and the concept of improvisation. He also contrasted product with process.
Noel also talked about his philosophy on teaching poetry and writing to students. He said emphasizing the process of writing poetry is essential, as the product is “tied to racial capitalist ideas” of generating something to sell.
“We can always do things to become better writers, but I can’t tell you what you need to write,” Noel said. “What I can share with you is the process. How did my process get me from A to B?”
NU Spanish and Portuguese Prof. Emily Maguire, who went to graduate school with Noel at NYU, said she believes he is an impressive performer.
She said he is one of the most proficient bilingual people she has ever met.
“He has a tremendous facility in both Spanish and English, but he is also someone who has a tremendous gift for performing live and a real ability to capture an audience and move and entertain in surprising and creative ways,” she said.
Spanish and Portuguese Prof. Julia Oliver Rajan, who is Puerto Rican, said though she was initially unfamiliar with Noel, she enjoyed his performance.
“It resonated with me the vibrancy of his poetry,” Rajan said. “The way he described Puerto Rico, the struggles of Puerto Rico — I liked those things in his poetry.”
In the Q&A, Noel spoke about what it is like to translate works from poets who are from a different culture or who are dead, both of which he has done in his career.
He said to be a translator it was crucial to embrace these discrepancies, calling translation the “least messed-up kind of appropriation.”
“You’re not going to do away with the fundamental tension of ‘Oh, this person is dead, and I’m here telling their story,’ especially if they’re from community X, and I’m from community Y,” Noel said. “But to me, that shouldn’t dissuade us, because there’s way more work that needs to be translated than there are translators.”
Email: r.ahmed@u.northwestern.edu"
March 13, 2026 https://dailynorthwestern.com/2026/03/13/campus/poet-translator-and-professor-urayoan-noel-shares-work-in-reading-qa-event/ #metaglossia #metaglossia_mundus
"Zigbang (CEO: Ahn Seong-woo), a comprehensive property-technology company, announced on 13 December that it has added real-time AI-based speech recognition and multilingual translation to its virtual office platform, Soma. The features are designed to make collaboration easier for international teams by removing language barriers.
Soma is a metaverse-based virtual office platform developed by Zigbang. It recreates the spatial environment of a physical office online, encouraging natural interaction and collaboration even in remote or hybrid work settings. With this update, users can see spoken content displayed as real-time text in the virtual space and have what their counterpart says translated into the language of their choice. Soma currently supports more than 50 languages and 145 locales. The generated text can be saved locally and used, for example, to draft meeting minutes.
The feature incorporates context-aware translation technology that takes the preceding conversation into account. By going beyond simple word-for-word substitution to produce translations that preserve the flow of the dialogue, the aim is to minimise the miscommunication that can arise in multinational meetings.
Conversation data is designed to be processed between participants rather than stored on a server. Speech recognition and translation use a dual-stream architecture that simultaneously ensures stable, real-time message delivery, making it suitable for enterprise environments that require strong security.
With this update, Zigbang plans to evolve Soma into a platform that helps improve processes by learning organisational decision flows and work contexts, going beyond simply recording meetings and extracting action items. In the longer term, the company is pursuing research into a "digital me" environment in which work can continue even in the user's absence, through AI agents that learn an individual's professional language and judgment.
A Zigbang representative said: "The introduction of this AI-based multilingual feature is a first step towards lowering language barriers," adding: "We plan to expand its capabilities progressively so that Soma can evolve beyond a simple virtual space into a platform that improves how organisations collaborate."
With the recent spread of remote and hybrid work, real-time AI-based translation and collaboration tools are attracting growing interest as key solutions for improving productivity and efficiency in global corporate environments.
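The context-aware translation Soma describes — each utterance translated with awareness of the preceding dialogue — can be sketched with a rolling context window. This is a minimal illustration under assumed names; the backend here is a stub standing in for a real machine-translation service, not Zigbang's implementation:

```python
from collections import deque

class ContextualTranslator:
    """Translate utterances one at a time while keeping a rolling window of
    the preceding dialogue as context (a sketch; `_mt_backend` is a stub)."""

    def __init__(self, target_lang: str, window: int = 5):
        self.target_lang = target_lang
        self.context = deque(maxlen=window)   # rolling conversation history

    def translate(self, utterance: str) -> str:
        context = " | ".join(self.context)    # prior lines, oldest first
        translated = self._mt_backend(utterance, context)
        self.context.append(utterance)        # remember source-side history
        return translated

    def _mt_backend(self, text: str, context: str) -> str:
        # Stub: a real system would send both `text` and `context` to an
        # MT model so pronouns and ellipses resolve against earlier lines.
        return f"[{self.target_lang}] {text}"

t = ContextualTranslator("fr", window=3)
print(t.translate("Hello everyone"))  # [fr] Hello everyone
```

Bounding the window (here, the last five utterances) keeps latency predictable for real-time use while still giving the model enough dialogue to translate coherently.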
Source: Zigbang adds real-time AI-based multilingual speech recognition and translation to its virtual office Soma - VentureSquare https://www.venturesquare.net/fr/1049186/"
#metaglossia #metaglossia_mundus
"… In Locronan (29), the Breton translation of panels installed as part of a heritage trail has raised the hackles of the town's Breton speakers.
Jean-Marc Louboutin is part of the collective demanding the removal of the panels of a heritage trail installed in Locronan. The reason? The "catastrophic" translation of the texts into Breton. (Photo: Aude Flambard) "This really is an example of misuse of artificial intelligence," says Anne Gouerou. She belongs to a collective of Breton-speaking residents of Locronan (29) that formed after the discovery of panels installed in early March as part of a heritage trail retracing the history of cinema in the small Finistère town..." By Paul Bohec with Aude Flambard, 13 March 2026 https://www.letelegramme.fr/finistere/locronan-29180/un-massacre-de-la-langue-a-locronan-la-traduction-bretonne-de-ces-panneaux-fait-bondir-des-habitants-7003989.php #metaglossia #metaglossia_mundus
"Many of the immigrants detained at Northwest State Correctional Facility in Swanton have the same question for the volunteer attorneys who’ve visited to provide in-person counsel.
“One of the questions that we got asked the most often was, ‘Where am I? What state am I in?’” said Emma Matters, an immigration attorney with the Vermont Asylum Assistance Project. “Even that very, very basic information that you assume someone has access to, people go without if they don’t have someone coming in and conversing with them in their language and explaining to them just what is going on.”
Matters says the experience underscores the disadvantage that immigrants who don’t speak English face when they’re detained in facilities that can’t communicate in a language they understand. And she says prohibitions on language-access devices at the Vermont Department of Corrections have in some cases prevented attorneys from providing the basic legal services that immigrants need to fight their cases.
“Without someone who’s able to provide them with that information, let them know what’s being put in front of them or what might be put in front of them, people end up being vulnerable to life-changing harm,” Matters said.
The number of people arrested and detained by Immigration and Customs Enforcement is up tenfold in New England since the start of President Donald Trump’s second term in office. Some of them have ended up in two prisons operated by the Department of Corrections, which contracts with the Department of Homeland Security to provide temporary lodging for immigrant detainees.
Local immigration attorneys almost universally support the state’s decision to lodge detained immigrants at Northwest, for men, and at Chittenden Regional Correctional Facility in South Burlington, for women.
“We need these beds,” Jill Martin Diaz, executive director of the Vermont Asylum Assistance Project, told lawmakers in January. “Because there is absolutely no substitute to me getting in my car and driving up the road ... flashing my attorney credential and being able to meet with my client face-to-face.”
Mae Nagusky / Vermont Public File: Chittenden Regional Correctional Facility is Vermont's only women's prison and one of two facilities that routinely houses immigrant detainees. Attorneys are raising concerns about what they say is a lack of language translation services available as they meet with clients.
But a Department of Corrections policy that prohibits attorneys from using their own translation services in state facilities has hindered their ability to help, attorneys say.
“DOC policies and deficiencies are preventing low bono and volunteer attorneys from being able to speak with their clients who are in detention and is thereby depriving them of access to their due process rights,” said Hillary Rich, a staff attorney at the Vermont ACLU who spent two years practicing asylum law in Laredo and San Antonio, Texas.
No outside devices
Matters said the Vermont Asylum Assistance Project has been making regular trips to the state prisons since last year to meet with newly detained immigrants. She said the organization explains their rights, advises them of potential claims, and provides referrals to lawyers who might provide representation.
VAAP attorneys had previously been allowed to bring in their own “tools of interpretation,” including laptops or cell phones on which they could call out to access live translation services.
“It’s very hard to know in advance what type of language capabilities we’re going to need on that day,” Matters said. “We see people detained who speak a wide variety of languages, including rare and Indigenous languages.”
But in October, officials at VAAP say, the department told them they could no longer bring those devices into the facilities. The single DOC landline that attorneys now have access to drops calls frequently, Matters said. And she said it bottlenecks a process that previously allowed multiple attorneys to work several cases simultaneously. The process became so inefficient that VAAP has cut the number of trips it makes to state prisons in half.
“The numerical reality of that is that … between tens and hundreds of people who would otherwise have access to legal screenings, basic know your rights, and case advice and potential referral out to legal services, go without,” she said.
Peter Hirschfeld / Vermont Public File: Elected officials and nonprofit leaders gathered in the Statehouse in May 2025 to announce the launch of the Vermont Immigration Legal Defense Fund. Jill Martin Diaz, at the podium, with Vermont Asylum Assistance Project, said the money would be used to train and hire legal professionals to provide pro bono assistance to noncitizens facing immigration proceedings.
Corrections Commissioner Jon Murad said in an interview Wednesday that the department has a longstanding policy that prohibits people from bringing “anything with cellular capacity” into a state prison.
“What if it’s misplaced? What if it disappears? What if it is then transferred over to the direct control of people in our custody?” said Murad, who joined the department in August. “That is a risk, and one that we don’t want to countenance.”
VAAP’s ability to bring in its own devices up until October, according to Murad, might have been related to a lapse in policy enforcement.
'Set up to fail'
The commissioner said the department has since taken steps to lower the language barrier, by providing attorneys with DOC-owned devices that have translation capabilities.
Murad said DOC had six such devices at Northwest and three at Chittenden Regional. Matters said VAAP attorneys who visited DOC facilities as recently as March 6 have not been told about the new devices.
“That was brand new information to me and to all of my colleagues,” Matters said Wednesday.
The DOC devices don’t have cellular capacity – a shortcoming Matters said would likely render them useless to VAAP attorneys.
“We require live interpretation services. We need to be speaking to a human,” Matters said.
Murad said the department is working on a plan that would give lawyers the ability to make calls to translation services on DOC-owned devices, though he said he doesn’t have a timeline for that yet. He said the department has undertaken other efforts to facilitate access to counsel for detained immigrants – it sends VAAP a daily list of names of new arrivals at facilities, so the organization is aware of individuals who might need assistance.
Rich, of the ACLU, said a DOC policy she obtained in February through a public records request shows that immigrant detainees are responsible for coordinating their own remote hearings.
“Which for a limited English proficient detainee who does not have counsel and doesn’t even know what state they’re in is going to prove impossible,” she said. “These folks are being set up to fail in their immigration court systems by the deficiencies in DOC procedures.”
Rich said Northwest and Chittenden Regional are subject to public accommodations laws that include language-access requirements. She said the Department of Corrections might be violating those laws.
“Lawsuits are just one tool in our toolbox,” Rich said, “but of course it is a tool we are very comfortable wielding when necessary.”
Peter Hirschfeld https://www.vermontpublic.org/local-news/2026-03-12/lawyers-raise-alarm-about-language-translation-services-for-vermonts-detained-immigrants #metaglossia #metaglossia_mundus
"Theorizing “Global Criticality” and the Politics of Just Translation
07 May 2026 18:00 to 19:30
Bush House, Strand Campus, London
Professor Emily Apter is giving the keynote lecture at the annual conference of the Department of Interdisciplinary Humanities, King's College London. This lecture is open to the public.
"Translation and justice, the focus of my book What is Just Translation? Changing Languages in the Political Present, engages Gayatri Chakravorty Spivak’s notion of “global criticality” as a rubric for a vision of language politics that straddles the fields of law, global language policy, non-monolingual pedagogies and reparations applied to forms of linguistic injustice and cultural appropriationism. I associate “global criticality” with translational workarounds - ways of working micropolitically with language and intermedial forms of expression. These microforms stand in contradistinction to one-size-fits-all paradigms or “isms” that are anchored in colonial Euro-chronology and beholden to reductive bipolarities between major and minor, metropole and periphery, written and performative. As a micropolitics of language, “global criticality” flows into Spivak’s notion of “living translation:” a triple play on living with translation, living life in translation, and “live” translation, which vivifies life itself."
About the speaker
Professor Emily Apter is Julius Silver Professor of Comparative Literature and French Literature, Thought and Culture at New York University. Her books include: Unexceptional Politics: On Obstruction, Impasse and the Impolitic (Verso, 2018); Against World Literature: On the Politics of Untranslatability (2013); Dictionary of Untranslatables: A Philosophical Lexicon (co-edited with Barbara Cassin, Jacques Lezra and Michael Wood) (2014); and The Translation Zone: A New Comparative Literature (2006). Since 2000 she has edited the book series Translation/Transnation with Princeton University Press. Essays have appeared in New Literary History, October, Public Culture, Crisis and Critique, History and Theory, Diacritics, PMLA, Comparative Literature, Critique, Les Temps qui Restent, Representations, Art Journal, Third Text, Paragraph, boundary 2, Artforum, Esprit Créateur and Critical Inquiry. In 2019 she was the Daimler Fellow at the American Academy in Berlin. In 2017–18 she served as President of the American Comparative Literature Association. In fall 2014 she was a Humanities Council Fellow at Princeton University and in 2003–2004 she was a Guggenheim Fellowship recipient. In 2022 she co-edited and introduced Gayatri Chakravorty Spivak’s Living Translation, a collection of Spivak’s contributions to translation theory. Her book What is Just Translation? Changing Languages in the Political Present is nearing completion. Her next book project, on the conceptualization of the unborn (or “prepersons”) is provisionally titled Conception: The Laws."
https://www.kcl.ac.uk/events/theorizing-global-criticality-and-the-politics-of-just-translation
#metaglossia
#metaglossia_mundus
"Vozo AI, an AI-powered video localization platform, today announced the beta launch of Visual Translate, a generative AI capability that automatically localizes on‑screen text while maintaining the original design, layout and animation. This release addresses a long-standing gap in AI video translation: while subtitles and dubbing translate what viewers hear, most tools still fail to translate the text viewers see within the video itself.
Vozo Visual Translate localizes on-screen text in videos.
In many videos—such as training materials, product demos, and explainer content—key information appears directly within visuals, including slide text, labels, callouts, diagrams, and charts. When that content remains in the original language, international viewers may understand the narration but still miss critical context.
Visual Translate closes this gap by automatically:
• Working directly from the video itself—no original project files required
• Detecting and translating on-screen text within videos
• Preserving the original layout, style, and animations
• Allowing text, fonts, colors, and positions to be edited and customized
The result is a fully localized video where both narration and visuals are translated coherently, giving international audiences the same clarity as native viewers.
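The pipeline implied by the steps above — detect on-screen text, translate it, and re-render it in place so layout is preserved — can be sketched in miniature. All names below are hypothetical stand-ins, not Vozo's actual implementation; detection (OCR) and rendering are represented only as inputs and outputs:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TextRegion:
    """One piece of detected on-screen text with its bounding box."""
    text: str
    x: int
    y: int
    w: int
    h: int

def localize_frame(regions: List[TextRegion],
                   translate: Callable[[str], str]) -> List[TextRegion]:
    """Replace each detected text region with its translation while keeping
    the original position and size, so the translated text can be rendered
    back over the frame without disturbing the layout."""
    return [TextRegion(translate(r.text), r.x, r.y, r.w, r.h) for r in regions]

# A detector would produce regions like this for each video frame.
detected = [TextRegion("Step 1: Open the app", 40, 20, 300, 32)]
spanish = localize_frame(detected, lambda s: "[es] " + s)  # stub translator
print(spanish[0].text, spanish[0].x, spanish[0].y)  # [es] Step 1: Open the app 40 20
```

Keeping the bounding box attached to the translated string is the key design point: it is what lets a renderer preserve the original position, style, and animation timing that the article emphasizes.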
During the alpha phase, a multinational manufacturing company used Visual Translate to localize slide-based training videos for global teams and distributor networks. By translating visual content directly within the video into nine languages, rather than manually editing, the company reduced localization time by over 96%—turning a two-day process into just 30 minutes.
By automating what was once a highly manual process, Visual Translate marks a shift in AI video translation—moving beyond basic dubbing and subtitles toward truly complete, scalable localization that preserves how meaning is conveyed visually. The capability is particularly valuable for education, corporate training, and marketing, where critical information often appears in step-by-step instructions, labels, and other visual elements rather than narration alone.
“Most video translation tools focus on speech,” said Dr. CY Zhou, Founder and CEO of Vozo AI. “But in many videos, meaning is conveyed visually—through slides, diagrams, and on-screen text. Visual Translate fills that missing layer, enabling truly complete video localization and allowing ideas and knowledge to move across languages with far greater clarity and impact.”
Visual Translate is currently available in beta.
About Vozo AI
Vozo AI is an AI-powered video localization platform that enables teams and enterprises to scale video content across languages and markets. By translating both spoken audio and visual content, Vozo ensures that meaning is preserved across the entire video experience, delivering truly native viewing for global audiences. For more information, visit www.vozo.ai. https://lasvegassun.com/news/2026/mar/12/beyond-dubbing-vozo-ai-launches-visual-translate-f/ #metaglossia #metaglossia_mundus
"The European Commission’s Directorate-General for Translation (DG Translation) has invited students from the European Master’s in Translation (EMT) network to take part in a project assessing how well AI language models work in EU languages.
Students on EMT programmes — a network of university master’s courses in translation recognised by the EU — will be able to contribute to work aimed at improving the evaluation of AI models across different European languages, according to a statement published by the European Commission on Wednesday.
The project will involve examining how AI models perform and how that performance is measured, with a focus on making the tools better suited to EU languages.
Focus on evaluation of AI for EU languages
The Commission said the work brings together language professionals and AI engineers, and the project will give students insight into how linguistic skills and AI development can be combined.
It added that participants will also be able to explore potential career paths linked to language technology as part of the project." Thursday 12 March 2026 By The Brussels Times Newsroom https://www.brusselstimes.com/2018989/ai-translation-tools-under-scrutiny-in-new-eu-backed-student-project #metaglossia #metaglossia_mundus
"The poet and writer Coleman Barks died last month at the age of 88. He was well known for his translations of the works of the 13th-century Persian mystic poet Jalaluddin Rumi. Coleman Barks even appears on a Coldplay album, "A Head Full of Dreams," reading a translation of Rumi’s “The Guest House.”
Here & Now's Lisa Mullins talks to Coleman Barks's sister, Elizabeth Barks Cox, who is also a writer, about his life and work.
This segment aired on March 12, 2026." https://www.wbur.org/hereandnow/2026/03/12/coleman-barks-obituary #metaglossia #metaglossia_mundus
Not so fast: A University of Houston professor of psychology is disputing a high-profile study claiming that people who live in multilingual countries show healthier brain aging, arguing instead that the pattern reflects the longer life expectancies of wealthy countries with the best healthcare systems.
"University of Houston professor of psychology Arturo Hernandez is disputing a high-profile study published in the journal Nature Aging claiming that people who live in multilingual countries show healthier brain aging. Though the study got lots of attention, Hernandez reports in the journal Brain and Language that the findings warrant cautious interpretation and reframing of public health implications.
“We took a closer look and argued that the study’s conclusions go further than the data can support,” said Hernandez.
According to Hernandez, the countries with high multilingualism in Europe also happen to be the wealthiest, with the best healthcare systems and the longest life expectancies, sometimes by as much as six years. When those structural differences are accounted for, the apparent language effect largely disappears.
“There is a real temptation in science to find individual behavioral solutions: learn a language, do a puzzle, take a supplement - are all suggested as solutions to problems that are fundamentally structural,” said Hernandez. “When those solutions get oversold, it can erode public trust in science and distract from the harder work of building the conditions that actually support healthy aging: Access to healthcare, good nutrition, economic stability. We wanted to make sure the public gets an accurate picture of what the evidence shows.”
In the original article, researchers examined records in 27 European countries and claimed that multilingualism protects against accelerated aging, whereas monolingualism increases the risk of it.
Countries with high multilingualism, like Luxembourg (82.5 years) and the Netherlands (82.5 years), have some of the highest life expectancies in the world. Meanwhile, countries with low multilingualism, such as Bulgaria (75.8 years) and Romania (76.3 years), lag nearly six or seven years behind.
“A six-year gap in life expectancy is unlikely to be explained by language. World-class healthcare, superior early-childhood nutrition, higher occupational safety, and lower chronic stress offer a more parsimonious account—the same structural forces that produce longevity in general,” said Hernandez, who points to Japan as another example.
As a largely monolingual society, it boasts an exceptional life expectancy of 84.5 years. “Low inequality, a healthy diet, and a robust universal healthcare system account for that advantage far better than language ever could,” said Hernandez.
“As scientists, we do a disservice to the public when we promote individual behavioral hacks as substitutes for structural resources. Learning a language is a beautiful, culturally enriching endeavor. It connects us to others and expands our world. But we must be careful not to overpromise it as a clinical intervention for aging,” Hernandez said.
Journal: Brain and Language
Article title: Multilingualism and aging: Country-level patterns may not support individual-level causal claims
Article publication date: 9-Mar-2026"
https://www.eurekalert.org/news-releases/1119284
#metaglossia_mundus
#metaglossia
Graduation ceremony 2026 - Faculty of Translation and Interpreting - UNIGE
University of Geneva
The graduation ceremony will take place on Friday 27 November 2026 at 6 p.m. in the Uni-Mail hall.
🎉 Open by invitation only, this festive event for the 2025–2026 graduates will be an occasion to celebrate their success and share memorable moments from their studies.
The ceremony will be streamed live on the day.
10 March 2026
https://www.unige.ch/fti/a-la-une/ceremonie-de-remise-des-diplomes-2026
#metaglossia_mundus
#metaglossia
#métaglossie
"Published: March 11, 2026 12.16am SAST
Isabel Tello Fons, Universitat de València
https://theconversation.com/la-ia-puede-traducir-palabras-pero-no-voces-narrativas-275284
“–Tú también te enojarías si tuvieras una peluca como la mía —prosiguió el Avispón–. Se meten con uno, y uno, que no le gusta que le tomen la ‘peluca’, pues se enfada… ¡natural! Y entonces es cuando me entra la murria, me arrebujo debajo de un árbol y me quedo tieso de frío. Y, para aliviarme, cojo un pañuelo amarillo y me lo ato alrededor de la cara… ¡Oséase, como ahora! ¡Natural!”.
This is how Ramón Buckley translated the voice of the Wasp in Lewis Carroll's novel Through the Looking-Glass, and What Alice Found There. The original recreates London's cockney dialect, closely tied to the working class, which Buckley transformed into a castizo Madrid dialect, preserving the whining, common tone of Carroll's character:
“You’d be cross too, if you’d a wig like mine,” the Wasp went on. “They jokes, at one. And they worrits one. And then I gets cross. And I gets cold. And I gets under a tree. And I gets a yellow handkerchief. And I ties up my face –as at the present”.
When we read a translated novel, we do not just follow a story: we hear voices. Voices that reveal who the characters are, where they come from and what place they occupy in their community. But what happens to those voices when they pass from one language to another? How do we translate the dialects, accents, rhythms and registers that form part of a character's deep identity? Tackling these questions is one of the most complex and least visible challenges in literature.
Voices that matter
The way characters "speak", what we call linguistic variation, covers features such as local vocabulary, slang, expressions particular to a community, forms of an older stage of the language, or particular ways of building sentences. These features are not ornaments; they are characterization devices that fulfil important narrative and stylistic functions.
A regional dialect may serve an identity-affirming function; a rural accent may convey humor, tenderness or hierarchy; youth slang may signal closeness or group belonging; and historical speech places the reader in another era. If these voices disappear in translation, the character becomes flatter and the story loses part of its original fabric.
For example, in The Adventures of Huckleberry Finn, Mark Twain distinguished his characters by means of seven different dialects, and in Oliver Twist, Dickens used the argot of thieves and ruffians to portray the speech of the London underworld.
No direct equivalents
One of the greatest challenges of literary translation is that dialects are not interchangeable. There is no Spanish "equivalent" of the English of the American South, nor a dialect here that corresponds exactly to that of Liverpool. Each linguistic variety is anchored in its own territory, history and social context.
That is why a literal translation of a foreign dialect would sound strange or even comic. If we swapped an English dialect for a real Spanish one, we would turn Huckleberry into an Andalusian, Canarian or Mexican boy and manipulate his original identity. Yet if that way of speaking is ignored and rendered in the standard language, his linguistic personality is lost.
Literary translation seeks equivalent effects: the reader should perceive the same social and emotional nuance as the reader of the original, even if different resources are used to achieve it.
The most human kind of translation
The literary translator's task is not mechanical; it is an exercise in listening and interpretation. The translator asks questions such as: what effect does this voice produce on the reader of the original? Which linguistic features can achieve that effect in the translation? To what extent should a variety be marked at all?
The best solution may not be to point to a specific dialect, but to use a register slightly removed from the standard language, suggesting a social origin without culturally displacing the character. At other times, a single lexical feature or grammatical structure may be enough to recreate the atmosphere.
Every decision requires judgment and responsibility. Literature represents real social groups, and treating them with respect demands an ethical outlook.
As I have found in my research (forthcoming), that ethical outlook is something AI does not, for now, possess. AI does not "understand" the social implications of how a character speaks. It does not know when a dialect conveys marginalization or when it marks social hierarchy. It works by detecting statistical patterns, not human intentions.
When it is asked to translate non-standard voices, there are usually two outcomes. Either the translated text comes out "clean", and a character who spoke with a local accent ends up speaking normatively, diluting their personality; or the AI imitates dialectal markers but mixes incompatible slangs or deforms words without criteria, creating unwanted stereotypes, that is, caricatures.
Thus, faced with the reflection and meticulousness that translating linguistic variation demands, AI generates quick answers that still lack the sensitivity to handle ambiguity, irony or cultural allusion.
Why we need decisions
Tools like AI can be very useful in the preliminary and complementary phases of translation: they can locate information quickly, compare real usage in large corpora, and identify stylistic patterns. But if they tend to level out voices, they will also level out experiences. Used without oversight, they will cost us linguistic diversity and, with it, human diversity.
Linguistic varieties are not mere deviations from the standard: they are often minority or minoritized languages, vulnerable or at risk. Protecting them helps preserve our cultural heritage and a valuable plurality.
For voices to reach the reader without losing their identity, someone must listen to them and recreate them. That is an essentially human task. So every time a literary translation lets us hear a different world, we are also saving a part of our cultural diversity.
A version of this article was published in Telos, the magazine of Fundación Telefónica."
#Metaglossia
#metaglossia_mundus
#métaglossie
"In late 2025, generative AI crossed another critical threshold. Following GPT-5.1 in November, OpenAI released GPT-5.2 on 11 December — a model designed to generate adaptive, discipline-specific academic prose with fewer stylistic traces and greater structural variation. For universities, the concern was immediate: if AI can write fluently, unpredictably, and in discipline-appropriate academic language, does detectability still hold?
Early results show that it does.
How StrikePlagiarism responds to GPT-5.2
The release of GPT-5.2 reinforced a broader challenge facing higher education: AI development now outpaces institutional policy cycles. For StrikePlagiarism, this moment required immediate empirical validation rather than theoretical assumptions.
Within days of GPT-5.2 entering academic use, StrikePlagiarism.com was tested against newly generated and paraphrased GPT-5.2 texts under realistic academic conditions. The results were unambiguous:
- Over 97% detection accuracy across GPT-5.2 outputs
- False results below 1%, preserving academic fairness
- Consistent performance after paraphrasing and stylistic diversification
Rather than relying on surface-level markers, StrikePlagiarism.com analysed behavioural consistency across longer academic texts, identifying patterns that remain statistically improbable in authentic student work. Reports delivered probability-based, side-by-side comparisons, providing educators with interpretable evidence rather than automated verdicts.
Why GPT-5.2 remains detectable
GPT-5.2 demonstrates strong control over academic conventions and avoids obvious repetition. However, analysis across extended submissions consistently revealed:
- non-random reasoning structures
- unusually uniform transitions between claims
- absence of natural cognitive drift
Individually, these signals are subtle. Taken together, they form a measurable behavioural profile. Detection no longer depends on awkward phrasing or stylistic errors, but on identifying improbably stable reasoning across complex texts. Fluency improves; invisibility does not.
Core advantages of StrikePlagiarism.com’s AI detection approach
StrikePlagiarism.com was designed to support institutions operating at scale, across disciplines and languages:
Multilingual AI-content detection at scale: AI-generated content is detected across 100+ languages, enabling consistent integrity standards in international and multilingual academic environments.
Proven accuracy against advanced generative models: Detection accuracy exceeds 97%, including paraphrased and stylistically diversified GPT-5.2 texts, demonstrating reliability under real academic conditions.
Ultra-low false-positive rates: False results remain below 1%, protecting students from incorrect attribution and ensuring that detection strength never compromises fairness.
Why AI detection is critical right now
GPT-5.2 makes one reality clear: the primary risk for universities is no longer obvious AI misuse, but large volumes of academically convincing AI-generated work entering assessment unnoticed. This is not a future concern — it is a present operational challenge.
StrikePlagiarism addresses this challenge at an institutional level. By combining high-accuracy AI behaviour analysis with transparent, probability-based reporting, StrikePlagiarism.com enables universities to respond now, not retrospectively. When academic decisions must be defensible at the moment they are made, evidence-based AI detection becomes essential infrastructure rather than an optional safeguard."
97% accuracy against GPT-5.2: inside StrikePlagiarism.com’s detection results | THE Campus Learn, Share, Connect
https://share.google/hA12nxsAaMGdPGqDX
#Metaglossia #metaglossia_mundus #métaglossie
|
"A new analysis of genetic studies proposes that the cognitive capacity for language was already present at least 135,000 years ago, with language likely becoming a social tool around 100,000 years ago.
The study challenges long-standing debates about the timing of language emergence.
The research was conducted by a team led by Shigeru Miyagawa, a linguist at the Massachusetts Institute of Technology (MIT), alongside Rob DeSalle and Ian Tattersall from the American Museum of Natural History (AMNH).
Genetics and human language evolution
Previous attempts to determine the origins of language have relied on fossil records, cultural artifacts, or linguistic reconstruction. This study took a different approach.
The team examined genetic evidence to trace the earliest known divergence of human populations, reasoning that all human languages likely share a common origin.
“The logic is very simple. Every population branching across the globe has human language, and all languages are related,” Miyagawa explained.
“I think we can say with a fair amount of certainty that the first split occurred about 135,000 years ago, so human language capacity must have been present by then, or before.”
The study systematically reviewed 15 genetic studies conducted over the past 18 years.
These included Y-chromosome analyses (which trace paternal lineage), mitochondrial DNA studies (which track maternal ancestry), and whole-genome studies (which examine broader genetic variation).
Human populations branched out
Together, these genetic studies suggest that human populations began splitting around 135,000 years ago, meaning that before this divergence, Homo sapiens was a single, undivided population.
Since every group that branched out maintained the ability to communicate through language, this strongly suggests that language had already developed by this time.
A 2017 study attempted a similar genetic approach but had access to fewer datasets. With more recent genetic research available, the current study provides a more precise estimate for when language capacity was present.
“Quantity-wise, we have more studies, and quality-wise, it’s a narrower window [of time],” said Miyagawa, who is also affiliated with the University of São Paulo.
Language as a unique human trait
Miyagawa has long argued that all human languages share fundamental similarities, making it likely that they evolved from a common source.
His past research has explored unexpected linguistic connections, such as similarities between English, Japanese, and Bantu languages.
Some scholars propose that language capacity dates back millions of years, based on the vocal abilities of primates.
However, Miyagawa believes this perspective is flawed. He emphasizes that human language is unique, not just because of vocal ability, but because of its combination of words and grammar, which creates an infinitely generative system of communication.
“Human language is qualitatively different because there are two things – words and syntax – working together to create this very complex system,” he explained.
“No other animal has a parallel structure in their communication system. And that gives us the ability to generate very sophisticated thoughts and to communicate them to others.”
From thought to communication
The study also suggests that language did not begin as a social tool but instead may have first developed as an internal cognitive system.
“Language is both a cognitive system and a communication system,” Miyagawa said. “My guess is that prior to 135,000 years ago, it did start out as a private cognitive system, but relatively quickly that turned into a communications system.”
Human use of social language
If language was cognitively present before 135,000 years ago, when did it become an active part of human social life? The archaeological record offers clues.
Around 100,000 years ago, early humans began engaging in symbolic activities, such as making meaningful markings on objects and using fire to produce ocher, a decorative red pigment.
Such behaviors suggest that humans were using symbols to convey meaning – a crucial aspect of language.
These findings reinforce the argument that language was the driving force behind the emergence of modern human behavior.
“Behaviors compatible with language and the consistent exercise of symbolic thinking are detectable only in the archaeological record of Homo sapiens,” the authors said.
A catalyst for human advancement
One of the study’s co-authors, Ian Tattersall, has previously proposed that language played a transformative role in human evolution.
He argues that once language emerged, it triggered a cascade of innovations, from symbolic art to more complex social structures.
“Language was the trigger for modern human behavior. Somehow, it stimulated human thinking and helped create these kinds of behaviors,” Miyagawa notes.
“If we are right, people were learning from each other [due to language] and encouraging innovations of the types we saw 100,000 years ago.”
However, not all researchers agree. Some scholars propose a gradual development of complex behaviors, arguing that language was just one of many factors shaping human evolution.
Others believe that cultural changes – such as tool use and social coordination – may have influenced linguistic development rather than the other way around.
The origins of human language
Despite the ongoing debate, Miyagawa and his colleagues believe their study marks an important step forward in understanding how and when language emerged.
“Our approach is very empirically based, grounded in the latest genetic understanding of early Homo sapiens,” Miyagawa concluded.
“I think we are on a good research arc, and I hope this will encourage people to look more at human language and evolution.”
By integrating genetic evidence with archaeological findings, this research provides a clearer timeline for when language capacity emerged.
While many questions remain, the study reinforces the idea that language was central to shaping human history, allowing our ancestors to develop complex cultures, communicate across generations, and ultimately, create the societies we live in today.
The study is published in the journal Frontiers in Psychology."
03-17-2026
By Eric Ralls, Earth.com staff writer
https://www.earth.com/news/when-humans-created-the-first-language-and-communication-skills/
#metaglossia
#metaglossia_mundus