"Google has released WAXAL, an open speech dataset covering 21 Sub-Saharan African languages including Acholi, Hausa, Igbo, Luganda, Shona, Swahili, and Yoruba. The dataset contains over 11,000 hours of speech data from nearly 2 million recordings, developed over three years in partnership with Makerere University in Uganda, the University of Ghana, Digital Umuganda in Rwanda, and other African organizations. The name WAXAL comes from the Wolof word for “speak,” and the collection is now available here. The dataset includes approximately 1,250 hours of transcribed speech for automatic speech recognition and over 20 hours of studio recordings for text-to-speech synthesis. Participants were asked to describe pictures in their native languages to capture natural speech patterns, while professional voice actors provided high-quality audio recordings. Google partnered with Media Trust and Loud n Clear for voice recording work, with the framework allowing partner organizations to retain ownership of the data they collected while making it available to researchers globally. For African writers, artists, and technologists, the implications are worth examining carefully. Speech recognition and text-to-speech technologies determine whose voices get translated into text, whose stories can be transcribed, and which languages digital assistants will understand. The availability of this data could enable African developers to build tools on their own terms rather than waiting for Silicon Valley to decide which markets merit investment. The potential applications for writers and literary communities are tangible. Imagine transcription tools that actually work for interviews conducted in Igbo or Yoruba, making the labor of documentation less extractive. Audiobook production in Swahili or Hausa without requiring English as an intermediary language. Voice-based storytelling platforms where oral traditions can be archived and shared without being filtered through colonial languages. Translation tools built by and for African language speakers rather than optimized for European language pairs. The dataset provides the foundation for developers to build them now. That said, questions remain about how this data will be used and who ultimately benefits. Large tech companies have a pattern of extracting resources from the continent, whether raw materials or, increasingly, cultural and linguistic data, while the tools built from these resources often serve markets elsewhere first. The dataset’s open license means anyone can use it, but whether African developers and researchers will have the computational resources and infrastructure to compete with well-funded corporations is another matter. WAXAL offers infrastructure that didn’t exist before, but infrastructure alone doesn’t redistribute power." https://brittlepaper.com/2026/02/speech-data-for-21-african-languages-is-now-open-access-what-does-that-mean-for-us/ #Metaglossia #metaglossia_mundus #métaglossie
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
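To see why tokenization matters for a morphologically rich language, it helps to compare how off-the-shelf multilingual tokenizers split a single Arabic word that packs several morphemes. The sketch below is a generic illustration of that segmentation problem, not Fanar's actual tokenizer; the models named are just two widely used multilingual vocabularies.

```python
# Illustrative sketch: how generic subword tokenizers segment an Arabic word.
# This is NOT Fanar's tokenization pipeline, only a way to make the
# morphological segmentation problem visible.
from transformers import AutoTokenizer

text = "سيكتبونها"  # roughly "they will write it": one word, several morphemes

for name in ["bert-base-multilingual-cased", "xlm-roberta-base"]:
    tok = AutoTokenizer.from_pretrained(name)
    pieces = tok.tokenize(text)
    print(f"{name}: {len(pieces)} pieces -> {pieces}")
```

An Arabic-first vocabulary of the kind ALLaM or Fanar describe would be trained so that frequent prefixes, stems and clitics surface as their own tokens; the comparison above simply shows what a generic multilingual vocabulary does with the same word.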
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister institutions in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion-parameter model developed by the Kenyan foundation Jacaranda Health to provide new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"21 February - International Mother Language Day Council of Europe Strasbourg 20 February 2026 The Council of Europe’s commitment to linguistic diversity in everyday life and education Through mother-tongue-based multilingual education, societies can foster greater inclusion, preserve minority and indigenous languages, and support equitable access to education. International Mother Language Day, marked globally on 21 February, shines a spotlight on these goals.
With the idea that “home language is an asset rather than an obstacle to learning” in mind, schools across Europe often mark the day with activities such as multilingual storytelling, poetry recitations, and sharing traditions in pupils' home languages, with students’ linguistic backgrounds viewed as assets that promote well-being, intercultural dialogue, and academic success..."
Source: The Council of Europe’s commitment to linguistic diversity in everyday life and education – Portal. The 2026 International Mother Language Day theme is youth voices on multilingual education.
https://share.google/swekdPETlx7yMcOGh #Metaglossia #metaglossia_mundus #métaglossie
"February 20, 2026 Source: The University of Osaka Summary: Human language may seem messy and inefficient compared to the ultra-compact strings of ones and zeros used by computers—but our brains actually prefer it that way. New research reveals that while digital-style encoding could theoretically compress information more tightly, it would demand far more mental effort from both speaker and listener. Instead, language is built around familiar words and predictable patterns that reflect our real-world experiences, allowing the brain to constantly anticipate what comes next and narrow down meaning step by step.
Image caption: Human language isn’t maximally compressed like computer code, but that’s a feature, not a flaw. Its familiar structure helps the brain predict meaning in real time, making communication far less cognitively demanding than a stream of digital bits. (Credit: Shutterstock)
Human language is remarkably rich and intricate. Yet from the standpoint of information theory, the same ideas could theoretically be transmitted in a far more compressed format. That raises an intriguing question: why do people not communicate in a digital system of ones and zeros like computers do?
Michael Hahn, a linguist based in Saarbrücken, set out to answer that question with Richard Futrell from the University of California, Irvine. Together, they created a model explaining why human language looks the way it does. Their research was recently published in Nature Human Behaviour.
Human Language and Information Efficiency
Roughly 7,000 languages are spoken across the globe. Some are used by only a few remaining speakers, while others such as Chinese, English, Spanish and Hindi are spoken by billions. Despite their differences, all languages serve the same essential purpose. They communicate meaning by combining words into phrases, which are then arranged into sentences. Each part carries its own meaning, and together they create a clear message.
"This is actually a very complex structure. Since the natural world tends towards maximizing efficiency and conserving resources, it's perfectly reasonable to ask why the brain encodes linguistic information in such an apparently complicated way instead of digitally, like a computer," explains Michael Hahn. In theory, encoding speech as binary sequences of ones and zeros would be more efficient because it compresses information more tightly than spoken language. So why do humans not communicate like R2-D2 from Star Wars? Hahn and Futrell believe they have found the answer.
Language Is Built Around Real World Experience
"Human language is shaped by the realities of life around us," says Michael Hahn. "If, for instance, I was to talk about half a cat paired with half a dog and I referred to this using the abstract term 'gol', nobody would know what I meant, as it's pretty certain that no one has seen a gol -- it simply does not reflect anyone's lived experience. Equally, it makes no sense to blend the words 'cat' and 'dog' into a string of characters that uses the same letters but is impossible to interpret," he continues.
A scrambled form such as "gadcot" technically contains letters from both words, but it is meaningless to listeners. By contrast, the phrase "cat and dog" is instantly understandable because both animals are familiar concepts. Human language works because it connects directly to shared knowledge and lived experience.
The Brain Prefers Familiar Patterns
Hahn summarizes the findings this way: "Put simply, it's easier for our brain to take what might seem to be the more complicated route." Although natural language is not maximally compressed, it places far less strain on the brain. That is because the brain processes words in constant interaction with what we already know about the world.
A purely digital code might transmit information faster, but it would be detached from everyday experience. Hahn compares this to commuting to work: "On our usual commute, the route is so familiar to us that the drive is almost like on autopilot. Our brain knows exactly what to expect, so the effort it needs to make is much lower. Taking a shorter but less familiar route feels much more tiring, as the new route demands that we be far more attentive during the drive." From a mathematical perspective, he adds, "The number of bits the brain needs to process is far smaller when we speak in familiar, natural ways."
In other words, speaking and understanding binary code would require much more mental effort from both the speaker and the listener. Instead, the brain constantly estimates how likely certain words and phrases are to appear next. Because we use our native language daily over decades, these patterns become deeply embedded, making communication smoother and less demanding.
How Predictive Processing Shapes Speech
Hahn offers a clear illustration: "When I say the German phrase 'Die fünf grünen Autos' (Engl.: 'the five green cars'), the phrase will almost certainly make sense to another German speaker, whereas 'Grünen fünf die Autos' (Engl.: 'green five the cars') won't," he says.
When someone hears "Die fünf grünen Autos," the brain starts interpreting meaning immediately. The word "Die" signals certain grammatical possibilities. A German listener can instantly narrow the options, ruling out masculine or neuter singular nouns. The next word, "fünf," suggests something countable, excluding abstract ideas such as love or thirst. Then "grünen" indicates that the noun will be plural and green in color. At that point, the object could be cars, bananas or frogs. Only when the final word, "Autos," is spoken does the meaning fully settle into place. With each word, the brain reduces uncertainty until only one interpretation remains.
In contrast, "Grünen fünf die Autos" disrupts this predictable pattern. The expected grammatical signals appear in the wrong order, so the brain cannot easily build meaning from the sequence.
Implications for AI and Language Models
Hahn and Futrell were able to demonstrate these patterns mathematically. Their findings, published in Nature Human Behaviour, show that human language prioritizes reducing cognitive load over maximizing compression.
These insights may also inform improvements in large language models (LLMs), the systems behind generative AI tools such as ChatGPT or Microsoft's Copilot. By better understanding how the human brain processes language, researchers could design AI systems that align more closely with natural communication patterns." https://www.sciencedaily.com/releases/2026/02/260219040811.htm #Metaglossia #metaglossia_mundus #métaglossie
"Exposing biases, moods, personalities, and abstract concepts hidden in large language models A new method developed at MIT could root out vulnerabilities and improve LLM safety and performance. Jennifer Chu | MIT News Publication Date:February 19, 2026
By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they’re far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it’s not obvious exactly how these models represent abstract concepts to begin with from the knowledge they contain.
Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode for a concept of interest. What’s more, the method can then manipulate, or “steer” these connections, to strengthen or weaken the concept in any answer a model is prompted to give.
The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model’s representations for personalities such as “social influencer” and “conspiracy theorist,” and stances such as “fear of marriage” and “fan of Boston.” They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.
In the case of the “conspiracy theorist” concept, the team successfully identified a representation of this concept within one of the largest vision language models available today. When they enhanced the representation, and then prompted the model to explain the origins of the famous “Blue Marble” image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.
The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs that could then be turned up or down to improve a model’s safety or enhance its performance.
“What this really says about LLMs is that they have these concepts in them, but they’re not all actively exposed,” says Adityanarayanan “Adit” Radhakrishnan, assistant professor of mathematics at MIT. “With our method, there’s ways to extract these different concepts and activate them in ways that prompting cannot give you answers to.”
The team published their findings today in a study appearing in the journal Science. The study’s co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adserà of the University of Pennsylvania.
A fish in a black box
As use of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as “hallucination” and “deception.” In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has “hallucinated,” or constructed erroneously as fact.
To find out whether a concept such as “hallucination” is encoded in an LLM, scientists have often taken an approach of “unsupervised learning” — a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as “hallucination.” But to Radhakrishnan, such an approach can be too broad and computationally expensive.
“It’s like going fishing with a big net, trying to catch one species of fish. You’re gonna get a lot of fish that you have to look through to find the right one,” he says. “Instead, we’re going in with bait for the right species of fish.”
He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks — a broad category of AI models that includes LLMs — implicitly use to learn features.
Since the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts, in LLMs, which are by far the most widely used type of neural network and perhaps the least well-understood.
“We wanted to apply our feature learning algorithms to LLMs to, in a targeted way, discover representations of concepts in these large and complex models,” Radhakrishnan says.
Converging on a concept
The team’s new approach identifies any concept of interest within a LLM and “steers” or guides a model’s response based on this concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, insects, and even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for locations (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).
The researchers then searched for representations of each concept in several of today’s large language and vision models. They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.
A standard large language model is, broadly, a neural network that takes a natural language prompt, such as “Why is the sky blue?” and divides the prompt into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model takes these vectors through a series of computational layers, creating matrices of many numbers that, throughout each layer, are used to identify other words that are most likely to be used to respond to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.
The team’s approach trains RFMs to recognize numerical patterns in an LLM that could be associated with a specific concept. As an example, to see whether an LLM contains any representation of a “conspiracy theorist,” the researchers would first train the algorithm to identify patterns among LLM representations of 100 prompts that are clearly related to conspiracies, and 100 other prompts that are not. In this way, the algorithm would learn patterns associated with the conspiracy theorist concept. Then, the researchers can mathematically modulate the activity of the conspiracy theorist concept by perturbing LLM representations with these identified patterns.
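A much-simplified version of this idea can be sketched with an off-the-shelf model: collect hidden states for prompts that do and do not express a concept, take the difference of their means as a "concept direction," and nudge the model's activations along it at generation time. The code below is only that simplified stand-in, not the team's recursive feature machine; the prompts, layer choice and scaling factor are arbitrary illustrative values.

```python
# Simplified stand-in for concept extraction and steering: a mean-difference
# "concept direction" in GPT-2's hidden states, added back in via a forward
# hook. This is NOT the recursive feature machine method from the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

LAYER, SCALE = 6, 8.0  # arbitrary illustrative choices

def mean_hidden(prompts):
    # Average the chosen layer's hidden state over tokens and prompts.
    vecs = []
    for p in prompts:
        ids = tokenizer(p, return_tensors="pt").input_ids
        with torch.no_grad():
            hidden = model(ids, output_hidden_states=True).hidden_states[LAYER]
        vecs.append(hidden.mean(dim=1).squeeze(0))
    return torch.stack(vecs).mean(dim=0)

# Tiny illustrative prompt sets; the paper used 100 of each per concept.
concept_prompts = ["The moon landing was staged.", "They are hiding the truth."]
neutral_prompts = ["The moon orbits the Earth.", "Water boils at 100 degrees."]
direction = mean_hidden(concept_prompts) - mean_hidden(neutral_prompts)

def steer(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    if isinstance(output, tuple):
        return (output[0] + SCALE * direction,) + output[1:]
    return output + SCALE * direction

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tokenizer("The Blue Marble photo of Earth was taken", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
handle.remove()
```

The paper's RFM-based method learns richer, task-specific patterns than a single mean-difference vector, but the perturb-and-generate step it describes follows the same shape as this hook.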
The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified representations and manipulated an LLM to give answers in the tone and perspective of a “conspiracy theorist.” They also identified and enhanced the concept of “anti-refusal,” showing that, whereas a model would normally be programmed to refuse certain prompts, it instead answered them, for instance by giving instructions on how to rob a bank.
Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of “brevity” or “reasoning” in any response an LLM generates. The team has made the method’s underlying code publicly available.
“LLMs clearly have a lot of these abstract concepts stored within them, in some representation,” Radhakrishnan says. “There are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks.”
This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research." https://news.mit.edu/2026/exposing-biases-moods-personalities-hidden-large-language-models-0219 #Metaglossia #metaglossia_mundus #métaglossie
"Messes traduites par IA et police d’écriture de Michel-Ange : la basilique Saint-Pierre fête ses 400 ans
Par Jean Lannoy
Journaliste Multimédia CathoBel
Publié le 19 février 2026
Pour marquer les 400 ans de sa dédicace, la basilique Saint-Pierre de Rome déploie plusieurs innovations numériques. Traduction des messes en 60 langues grâce à l'intelligence artificielle, QR codes, smart pass pour les pèlerins et création d'une police d'écriture inspirée de Michel-Ange : le Vatican assume un tournant technologique au service de l'accueil et de la mission.
Traduire la liturgie en 60 langues
Dédicacée en 1626, la basilique Saint-Pierre est l'un des lieux les plus visités au monde. Chaque jour, des fidèles de toutes nationalités participent aux célébrations. Tous ne comprennent pas l'italien ou le latin. À l'occasion du quatrième centenaire, un système de traduction assistée par intelligence artificielle sera proposé lors des grandes célébrations. Concrètement, des QR codes disposés à l'entrée permettront aux participants d'accéder, via leur smartphone, à une traduction audio ou écrite de la messe dans près de 60 langues.
Le cardinal Mauro Gambetti, archiprêtre de la basilique, explique l'intention : "Depuis des siècles, la basilique Saint-Pierre accueille des fidèles de toutes les nations et de toutes les langues. En mettant à disposition un outil qui aide le plus grand nombre à comprendre les paroles de la liturgie, nous voulons servir la mission universelle de l'Église."
Comment fonctionne le dispositif ?
Le système repose sur un modèle de langage entraîné à reconnaître et à traduire des discours complexes. Le système de traduction utilise Lara, une IA développée par l’entreprise de services linguistiques Translated, en collaboration avec Carnegie-AI LLC et le professeur Alexander Waibel, pionnier de la traduction vocale assistée par IA.
La célébration est captée, analysée en temps réel, puis restituée dans différentes langues. Contrairement aux anciens logiciels de traduction automatique, ces modèles ne fonctionnent pas mot à mot. Ils interprètent des phrases entières en tenant compte du contexte. Les responsables insistent en indiquant que le modèle de langage a été entraîné pour intégrer le vocabulaire liturgique et les références bibliques. L'objectif affiché n'est pas une traduction mécanique, mais une restitution fidèle du sens. Si les traductions restent un lieu possible d'erreurs, Lara est programmé pour viser la précision plutôt qu'une volonté de plaire, réduisant les risques d'hallucinations. Le Vatican insiste toutefois sur le fait que l'intelligence artificielle reste un outil. Elle vise à faciliter la participation et la communion entre les personnes. Chez les traducteurs, l'annonce fait toutefois grincer des dents.
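Stripped of its liturgical tuning, the underlying pipeline is a familiar chain: speech recognition followed by text translation. The sketch below shows that generic chain with openly available models (Whisper for Italian speech, an open Italian-to-English translation model); it is only an illustration of the architecture, not the Lara system deployed at the basilica, and the audio file name is a hypothetical placeholder.

```python
# Generic sketch of a live-Mass translation chain: speech recognition
# followed by machine translation. This illustrates the architecture only;
# it is not the Lara system used at St Peter's.
from transformers import pipeline

# Step 1: transcribe the celebrant's Italian speech.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Step 2: translate the transcript into the listener's language.
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")

def translate_segment(audio_path: str) -> str:
    italian_text = asr(audio_path)["text"]
    # Whole sentences are translated at once so the model can use context,
    # rather than working word by word.
    return translate(italian_text)[0]["translation_text"]

# Example usage with a hypothetical audio file captured from the live feed.
print(translate_segment("homily_segment.wav"))
```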
For the 400th anniversary, the basilica is also introducing a smart pass designed to manage visitor flows and centralize practical information. In a place that welcomes thousands of people a day, digital tools make it easier to guide pilgrims, avoid certain queues and structure the welcome. The stated aim is to improve the visitor experience in an international context. A wider digital ecosystem will also allow pilgrims to share their prayers and testimonies.
Previously inaccessible areas
Previously inaccessible areas of the monumental complex will be opened to the public, notably the entire terrace below the dome, with its three fan-shaped sections, and the octagonal rooms, which house the models of the basilica by Antonio da Sangallo the Younger and of Michelangelo's dome, as well as works from the archives of the basilica's museum.
A typeface inspired by Michelangelo
Another initiative is the creation of a typeface inspired by the letters engraved beneath the dome designed by Michelangelo. This typeface, "Michelangelus," developed by Studio Gusto, will be included in Microsoft's Office suite. The American tech giant recently collaborated with the Roman basilica on a digital twin of the building.
Read also: Artificial intelligence: a digital replica of St Peter's Basilica for the Jubilee
A survey of the dome
Thanks to sponsorship from the Italian energy group Eni, a complete three-dimensional mapping of the "Cupolone" has been carried out. This modelling has, among other things, produced unprecedented data on the dome's foundations, using technologies able to probe more than a hundred metres below ground level. The operation, which lasted two months, represented an investment of "several hundred thousand euros," said Claudio Granata, a representative of Eni. In parallel, around ten sensors have been installed in the basilica to continuously monitor the "state of health" of the building, said Alberto Capitanucci, head of the project for the Fabric of Saint Peter, the Vatican body responsible for the maintenance and conservation of the basilica.
Four centuries of history in the heart of Rome
The present basilica stands on the site of the tomb of the apostle Peter, the first bishop of Rome, martyred in the first century. A first, Constantinian basilica was built there in the fourth century. Judged fragile at the beginning of the 16th century, it was replaced by the building we know today. In 1506, Pope Julius II laid the first stone of a colossal construction project that would mobilize the greatest artists of their time. Donato Bramante drew up the first plans. Michelangelo redesigned the structure and conceived the monumental dome, completed after his death. Carlo Maderno extended the nave and built the façade. Gian Lorenzo Bernini laid out the elliptical square and its imposing colonnade in the 17th century. At 186 metres long and 136 metres high under the dome, St Peter's Basilica remains one of the largest churches in the world. It houses major works such as Michelangelo's Pietà, Bernini's baldachin and the papal tombs. Today it is one of the most visited places in the world, with nearly 12 million pilgrims and tourists each year."
https://www.cathobel.be/2026/02/messes-traduites-par-ia-et-police-decriture-de-michel-ange-la-basilique-saint-pierre-fete-ses-400-ans/
#Metaglossia
#metaglossia_mundus
#métaglossie
BYU teaches courses in more foreign languages than any college in America
"BYU at 150: The Language University
By Sharman Gill, February 18, 2026
The BYU College of Humanities shows how BYU in its 150th year has become the language university. When Jake Jackson discovered he would be serving a Hmong-speaking mission in Minnesota for The Church of Jesus Christ of Latter-day Saints, he was confused. He had never heard of Hmong. Was it related to Mongolian? No. Even more surprisingly, he would learn that he had been living near a Hmong community while growing up in Delta, a small town in west-central Utah.
“My mom grew up with friends whose families had resettled in the area after the Vietnam War; essentially, there was a Hmong community in my backyard,” said Jackson, now a BYU junior studying International Relations. “I have since made friends with several Hmong families in Utah, and so these connections have come full circle.”
Note: To celebrate efforts and achievements that have helped BYU become a worldwide gathering place, the campus community is coming together this Friday, February 20th, at 6 pm for BYU 150: Night of Light, World on Campus. During this Night of Light, participants can embark on scavenger hunts across campus, explore the BYU museums, enjoy performances representing languages and cultures from around the world, and more.
Jackson is pursuing Hmong studies at BYU. In addition to teaching Hmong 101, he works as a research assistant, translating interviews on Hmong culture and economy. His language skills have led him to become a certified interpreter for the U.S. Department of Justice and an interpreter for the Church of Jesus Christ’s General Conference. After graduation, his next step is to become an attorney.
Jackson is one of thousands of BYU students — more than 60% — who speak a second language. Remarkably, at least 121 languages are spoken on campus.
In BYU’s sesquicentennial year, it is timely to celebrate campus as a significant gathering place for world languages – a place where common ideals foster interactions across boundaries, not unlike the way sport brings together languages and cultures for the 2026 Winter Olympics and the FIFA World Cup 2026.
A Prophetic Call for BYU to Become the Language Capital of the World
BYU’s language initiatives have accelerated since the 1975 centennial address statement by the twelfth president of the Church of Jesus Christ, Spencer W. Kimball: “BYU should become the acknowledged language capital of the world.”
Over the last 50 years, BYU has risen to this prophetic call. According to the most recent MLA survey, BYU ranks #1 in both language course offerings and advanced language enrollments. In addition, students benefit from exceptional opportunities to earn language certificates, gain immersive language experiences and collaborate with professors on first-class research.
This ongoing achievement is well-earned by many dedicated faculty and students, as well as BYU’s coordinating hub, the Center for Language Studies (CLS).
“We have an amazing team here,” said Ray Clifford, Director of CLS since 2004, and Associate Dean in the College of Humanities. “Our center supervises 80 teachers, and that is in addition to languages taught in the language departments; everybody puts forth so much effort.”
BYU Leads in Number of Languages Offered
As the membership of the Church of Jesus Christ has grown larger outside the United States than within it, BYU now offers an astounding 84 non-English language courses – more than any other U.S. university. According to the MLA survey, Harvard ranks second with 78 languages, and Berkeley ranks third with 59 languages. Among BYU’s offerings are some of the world’s rarer languages, including Hmong, Navajo, Kiribati, and several Central American indigenous languages.
When asked about why BYU focuses on minimally spoken languages, Jackson reflected on his experiences speaking Hmong and the service implications.
“It demonstrates the Christ-centered principle of reaching out to the one among the 100,” Jackson said. “My experience with communicating with Hmong individuals, and especially the elderly, is that if we can speak their language, it means the world to them; these relationships have enriched my life and seem to bring joy to the lives of those with whom I have interacted.”
BYU Leads in Advanced Enrollments and Language Certificates
BYU has the highest number of advanced enrollments in the U.S., with an average trend over the last decade that exceeds other universities by 50%.
Returned missionaries contribute to this high enrollment after having been immersed in worldwide languages and cultures for at least 18 months. Clifford explained that these returned missionaries are about halfway to becoming proficient in their mission languages. BYU helps them hone their skills and prepare for lifelong learning and service.
Any BYU student, regardless of their major, can demonstrate high language proficiency by earning a Language Certificate. These certificates, available in 24 languages, are awarded by the CLS based on completion of upper-level language courses and proficiency ratings certified by ACTFL, an internationally recognized language proficiency and testing organization. The CLS awards language certificates to over 500 BYU students each year from more than 120 majors across campus.
A Language Certificate is recognized on BYU transcripts and boosts the value of the university degree. While an advanced rating for a Language Certificate is impressive, many students earn the even higher “Superior” level of language proficiency, showing language abilities well beyond AI translation technology.
“AI communicates facts rather than communicates in a way that truly connects people,” said Josh Perkey, assistant dean of communications in the College of Humanities. “When our students are assessed at a superior level, they can understand sarcasm; they can read between the lines and connect to others in ways that the large language models have not been coded for.”
BYU Provides Abundant Immersion Programs
BYU uniquely offers year-round immersive housing for students who want to grow as confident and proficient world language speakers. Also, BYU coordinates 127 Study Abroad and internship programs through the David M. Kennedy Center for International Studies, such as past field school programs in Hmong communities around the world.
Fiona Bates (’24) studied French, Spanish, and Chinese at BYU. She highlighted her Study Abroad experiences as a way to learn about common humanity. As she served Alzheimer’s patients in France, interacted with people along the Camino de Santiago pilgrimage route, and completed intensive Mandarin training in Taiwan, she learned to better appreciate cultural differences and similarities.
“I met so many amazing people from all over the world,” Bates said. “I’ve learned that there is so much we have in common with other people . . . I think [understanding that] is beautiful and what we need more of — seeking to understand differences; when we do that, I think that brings us closer together.”
These rich immersion experiences prepared Bates for her current work as a medical interpreter at a pediatric hospital.
Photo: Ellen Knell, Associate Director of BYU's Center for Language Studies
BYU Engages in Rigorous Language Research
While BYU is known for taking language studies abroad, significant research about language and the brain is happening right on campus. The Language Sciences Laboratory is a dedicated research space equipped with high-tech research tools — fNIRS, EEG, eye-tracking devices, and sound isolation booths.
Linguistics professors are at the forefront of BYU’s language research, publishing in top-tier journals. In recent months, Jeff Green has published studies in Brain and Language on perception and American Sign Language, as well as an eye-tracking study on Chinese character processing in Language Awareness. In addition, Studies in Second Language Learning and Teaching recently published Dan Dewey’s research examining heart rate variability and personality traits as predictors of Arabic language proficiency.
While most research is completed on campus, some professors are transporting the equipment for outreach. For example, Ellen Knell – Associate Director of BYU’s Center for Language Studies – and her graduate students are involving a local high school AP student and her Dual-Language Immersion class.
“The fNIRS technology is transportable — just like a cap you put on the head,” Knell said. “We take this technology into schools to learn about language proficiency and age, and to involve high school students in the research process.”
BYU Connects People for Employment
As a place where students attain “Advanced” and even the fully professional level of “Superior” language proficiency, BYU is a hotspot for job recruiters and those seeking to learn how to better teach languages.
“Various groups, delegates, and foreign dignitaries are consistently on campus to see how we are teaching and assessing language, and to recruit people with language skills,” Knell said.
Clifford remembers a thank-you note from a student that represents the broader marketability of a language-enhanced degree.
“We had a student graduate who had interviews from four international accounting firms, and he said he spent more time talking about his language skills than his accounting skills,” Clifford said. “He had his choice of offers.”
BYU Connects People for Service
On-campus cultural exchange is ongoing; BYU receives many international visitors each year, including diplomats and distinguished leaders of nations.
“I remember being in a conference room with a group of students who greeted the President of Kiribati and his wife in Kiribati, performed a local dance with cultural costumes, and presented them with student-written books for their elementary school children,” Knell said. “Communication and building relationships go hand-in-hand.”
Molly McCall, Assistant Director of the Center for Language Studies, described a recent lunch with students and a campus guest, the Armenian Ambassador to the United States.
“The ambassador explained Armenia’s relationship with the United States and talked about some amazing things that recently happened for his country,” McCall said. “As introductions were made around the table, his eyes lit up, and he got a big smile on his face when the students spoke in Armenian about places they had lived in his home country during their missions; they shared dialogue in his native language and he extended an invitation to them to attend the UN Biodiversity Conference Armenia is hosting later this year.”
The deeper cultural connections have led to meaningful service opportunities. For example, the Ukrainian class raised thousands of dollars to help refugee women and children after Ukraine was invaded. And the Haitian Creole professor informed the BYU humanities community about the recent crisis in Haiti and provided support to the local Haitian community.
Language and language learning are fundamentally about relationships, explained Christopher Oscarson, Dean of Humanities. In the relational process, individuals can change for the greater good.
“We form unique ties by communicating in another person’s native tongue, and additionally, we are ourselves changed by the experience of learning languages; language learners develop charity, patience, humility, perspective and gratitude as they extend the limits of their imagination to embrace others and their world views,” Oscarson said. “Simply put, learning languages can help us to be more Christlike and prepare us to serve others.”
A Prophetic Call for Worldwide Service
BYU, as the premier language university, aligns with continuing prophetic direction. Fifty years after President Kimball declared BYU “the language capital of the world,” President Dallin H. Oaks (eighteenth president of the Church) issued a call for worldwide service — a call that relies on cultural and linguistic proficiencies.
“Our ministry is a ministry of all the children of God on the face of the earth,” President Oaks said. “We pray for all. We seek to serve all. And we invoke the blessings of the Lord Jesus Christ upon all who seek to serve Him, to do so in worthiness and commitment and optimism. We do not have the answers to all the world’s problems. They have not been revealed. But what we do know is that we are all children of heavenly parents, and that we are called to serve all of the children of God.”" https://news.byu.edu/intellect/byu-at-150-the-language-university #Metaglossia #metaglossia_mundus #métaglossie
Whether writing a paper or a book, find out how to improve your academic writing at each stage of the process
"Communicating the worth of your work to the academic world – and beyond – starts with writing. Writing for a journal, turning your work into a book or reviewing existing research all require distinct skills – and the development of those skills can make the difference between publication and recognition or research fading into obscurity.
GenAI may offer a shortcut to summarising large bodies of research and to getting words down on paper but a chatbot cannot generate an effective piece of academic writing from a single prompt, yet. Explore how GenAI can act as a supplement, rather than a substitute, for well-honed writing skills, and how to teach students to understand the difference.
Here, we’ll delve into the nuts and bolts of academic writing, offering advice for writing abstracts, citations and literature reviews, alongside conquering bigger projects such as writing a book. Find out how to develop a sustainable writing routine by making the process happier and how to pass on your writing skills to students.
Strategies to improve your academic writing
Finishing your draft or manuscript is not the end of the process but the beginning. From full-scale structural edits to the nitty gritty of proofreading, here’s how to improve your argument and develop your voice. Reading the work of others, and having them – or GenAI tools, used mindfully – read yours, can help you shape your thinking on the page and better present your original scholarship.
What is developmental editing, and why does your scholarly manuscript need it? Academics might find it hard to see the flaws in their work but to be a writer is to be edited – embrace it. Princeton University’s Laura Portwood-Stacer outlines the importance of developmental editing.
Peer feedback is the secret weapon for better academic writing: Harness the power of your academic community to hone your arguments, sharpen your writing and develop your critical thinking. Dina Nasr and Rayan Awadalla of Dubai Medical University offer advice.
Yes, GenAI can make academic writing easier without making us less scholarly: Generative AI does not change scholarship’s foundations of judgement, authorship and care but it does require academics to apply them more intentionally when writing. The University of Southern Queensland’s Nicole Brownlie shows how.
How mathematical practices can improve your writing: Writing is similar to three specific mathematical practices: modelling, problem-solving and proving, writes Caroline Yoon of the University of Auckland. Learn how to use these to improve academic writing.
Where it all begins: writing your abstract
An effective abstract gives readers an overview of your research paper and provides them with a guide to follow your arguments. It distils the purpose of your work, the methodologies used and your conclusions into a succinct passage. Michael Willis of Wiley describes it as a “shop-window view” to sell your research. Here’s how to make yours enticing and informative, as well as concise.
How to write an abstract for a research paper: Ankitha Shetty of Manipal Academy of Higher Education outlines three elements to include in your research paper abstract and some tips for making yours stand out.
Read this before you write your abstract: The abstract is arguably the most important element of a scholarly article, so it should be informative, meaningful and impactful. Wiley’s Michael Willis offers two objectives, and practical tips, to keep in mind.
Making the abstract concrete: Yinchun Lee and Steven Bateman of Xi’an Jiaotong-Liverpool University share strategies for writing effective abstracts for conference and research papers.
How to write a literature review
Literature reviews collate and analyse the existing research on a topic, to show your understanding of the field and where your research sits within it. It’s the foundation on which your own scholarship will be built, so use it to form the narrative of your research, develop your distinctive voice and strengthen your critical writing. Find out how here.
Streamline the literature review process with these tips: How to make the research, reading and referencing processes smooth from Natalie K. D. Seedan of the University of the West Indies.
A practical guide to writing a literature review: From organising key search terms to checking citations, this video by Bareq Ali Abdulhadi offers simple, practical tips to crafting a literature review that will lay a sound foundation for your academic paper.
Using literature reviews to strengthen research: tips for PhDs and supervisors: The Royal Literary Fund’s Anne Wilson explains how to develop a narrative and context for new research through your literature review, with tips for early career researchers and their supervisors.
The nuts and bolts of research papers
Break down the structure of a research paper into component parts. Find here all you need to know about introductions, citations and explaining methodology, and how GenAI tools can help.
Great citations: how to avoid referencing questionable evidence: Researchers don’t always stick to careful citation practices and occasionally cite evidence that has been called into question or even retracted by publishers. Elsevier’s Dmitry Malkov provides practical tips on how to avoid citing faulty evidence and maintain good citation hygiene.
Writing your first journal article? Here’s how to get the structure right: By structuring your journal article effectively, you improve your chances of getting published and growing your opportunities to disseminate your work. Natalie K. D. Seedan of the University of the West Indies explains.
Be the conductor of your own GenAI orchestra for academic writing: Instead of using a single GenAI tool to create a one-note research paper, why not tune up an orchestra of machine assistants? Aditi Jhaveri, Nora Binte Hussin and Siyang Zhou of Hong Kong University of Science and Technology analyse the tools available.
Tips for turning your academic expertise into a book
Writing a book requires a different approach to shorter articles and papers. To engage the reader and maintain your motivation over several hundred pages, you must craft a narrative arc – and be clear on why this book is important for you to write in the first place. Find out how to propose your book to academic publishers and discover tips for turning your thesis into a book, as well as staying relevant across the length of time it will take you to complete it.
Anatomy of an academic book proposal: Pitch your book to publishers with an irresistible proposal. Here are all the elements you’ll need, writes Richard Baggaley from the University of Westminster.
How to figure out your book: Want to use summer’s student-free time to work on that academic manuscript? Dive into these tips and exercises by K. Anne Amienne and Daniela Blei of Scholars & Writers to craft a more engaging next draft.
‘Prune the tree to let the fruit stand out’: Universidad Austral’s Damián Fernández Pedemonte provides his tips on turning your thesis into a book – how to edit yourself, how to pitch and when to use GenAI.
Keeping your research relevant in an accelerating news cycle: When publishing is slow but world events move quickly, how can scholars ensure their work will be read and cited and contribute to academic discussion? Yasmin Y. Ortiga of Singapore Management University and Jenny Gavacs of Whetson Editing show how to stay relevant.
A happier – and more streamlined – writing process
With dozens of other responsibilities competing for our time and attention, it’s easy for writing to be pushed aside or become a chore. Here’s how to find the joy, the time and the motivation to write regularly and hone your skills – and what a slower, more human process can offer.
Science isn’t a solo sport – let’s write accordingly: Generosity in authorship, sharing imperfect drafts and writing daily are academic habits that make research clearer, fairer and more impactful, says Virginia Tech’s Audrey Ruple.
‘Academic writing equals chaos’: If you have stalled in your latest writing project, the University of Winchester’s Glenn Fosbraey shares three tips for breaking through blocks, getting organised and finishing the final draft.
Can we use AI for academic writing? It depends: The University of Bristol’s Marios Kremantzis and Eleonora Pantano consider how researchers can use AI responsibly, without compromising scholarly rigour or integrity.
Has AI cost academia the joy of text? Rather than asking what writing can be outsourced to AI, we might begin by asking which parts of the process need to remain slow, imperfect and human, argue four academics from the University of West London and the University of Worcester.
Helping students improve academic writing
With a reported 92 per cent of students admitting using GenAI in assessments, the craft of academic writing risks falling by the wayside, yet many would argue writing is vital for developing wider thinking, analytical and communication skills. Give your students the tools to balance machine intelligence with their own critical thinking, develop their style and use feedback to improve.
Exploration of style: practical ways educators can teach academic writing in the sciences: First-year students need to adapt their writing style when transitioning to university. Rui Xue Zhang and Xinzhi Li of Macau University of Science and Technology show how educators can support freshmen to develop flexible, analytical and evidence-based writing.
Conversations with bots: teaching students how – and when – to use GenAI for academic writing: Joseph Tinsley and Huimin He of Xi’an Jiaotong-Liverpool University outline their four-step process that teaches students how to use GenAI tools to brainstorm ideas, understand and act on feedback and edit their essays in line with assessment rubrics.
‘The process of writing forces the writer to be present’: Writing is hard and uncomfortable but the craft of turning thoughts into words should not be lost to the frictionless ease of generative AI, write the University of Southern Queensland’s Jackie Webb and Christina Birnbaum.
Peer feedback: a burden for students or route to better academic writing? Asking students to give anonymous feedback on each other’s work can not only result in better writing skills but offer them opportunities to try new approaches and refine assessment tasks, writes the University of Southampton’s Alison Daniell."
https://www.timeshighereducation.com/campus/take-your-academic-writing-skills-next-level
#Metaglossia
#metaglossia_mundus
#métaglossie
"“The Azerbaijani language is not only a means of communication, but a model of national thinking,” said Vusala Mahirghizi, Director General of APA Media Group and Chair of the Language Commission of the Azerbaijan Press Council, at the conference titled “The Language of the Media in the Context of Scientific Discussion,” APA reports.
Vusala Mahirghizi noted that President Ilham Aliyev, in his speeches and interviews on the protection of the Azerbaijani language, has also described the language as an important attribute of our national identity.
She said that the media must continuously cooperate with linguists in order to preserve the language: “The National Academy of Sciences has prepared orthographic, orthoepic, lexical, and terminological dictionaries of the Azerbaijani language. Of course, this is very important. However, for both the media and broader public accessibility, it is essential to digitize these dictionaries and integrate them into artificial intelligence tools. The main issue is the digitization of the orthoepic dictionary. This would be a very valuable resource both for television presenters and for those who are newly learning our language. Unfortunately, people sometimes hear incorrect pronunciations of words on television programs and social media, and they memorize them in that way.”
The media executive also noted that online media currently use tools that check the correct spelling of Azerbaijani words: “The most helpful feature here is that once a news item is written, the text is reviewed by tools within a few seconds. However, those tools related to the Azerbaijani language were created in the past, and the updated rules have not been incorporated into them. I believe that the Institute of Linguistics could update or recreate these electronic tools. This could help quickly eliminate mistakes that occur in the media and in widespread use regarding the Azerbaijani language.”
Gunay Elshan"
https://en.apa.az/media/digitization-of-azerbaijani-dictionaries-essential-for-media-and-ai-says-apa-media-chief-492187
#Metaglossia
#metaglossia_mundus
#métaglossie
"When Translation Becomes Invisible: Voice Translation and the New Language Frontier
Successful consumer technology innovations almost always make their way into the corporate world before long. Smartphones, cloud apps and video calls quickly made the jump from consumer to professional life. Once people discover a better user experience and quality is possible, they will demand that work tools meet the same standards as those they use in their personal lives. It is what has already happened with our own products. Individuals adopted DeepL Translator on their own devices for speed and quality, then organisations standardised it.
This is what makes Apple’s new Live Translation for AirPods feature so exciting. When translation changes from an intermediate, deliberate step — an app you open or a button you push — to a voice you just hear, expectations shift from “can I translate this?” to “why isn’t it already translated?” That’s the tipping point where mainstream consumer adoption creates the momentum that will accelerate enterprise change.
The pattern is set: translation will be a native capability, not a separate task. And this creates a cultural domino-effect that is set to change the way organizations think about language internally and in their interactions with customers.
The moment millions experience in‑ear, hands‑free translation, the expectations around language move. Whatever the context, if translation “just happens” in daily life, it will be judged by that standard in professional life, too.
The opportunities are immediate and compelling for international businesses. Customer relationships change when agents and customers can converse naturally across languages in real time, without stilted handoffs. Global sales teams can prospect and negotiate in new markets without waiting to build bilingual staffing. Cross‑border collaboration within companies—design in Berlin, research in Tokyo, marketing in São Paulo—will feel less like coordination and more like conversation. Internal teams with global memberships immediately feel more connected when English no longer needs to act as the default intermediary.
Trust, not just polish
Consumer polish isn’t the same thing as enterprise readiness, though. Businesses operate under accuracy, privacy, and accountability constraints that personal devices and applications don’t face. A mistranslated phrase in an industrial or clinical briefing is an enormous risk, not an inconvenience. A fuzzy summary of a legal decision is a liability, not a funny story to tell friends after a holiday.
The translation layer must be engineered for trust as deliberately as it is for fluency. That means clear processor roles, heightened security, data‑residency choices, and auditable traces for all kinds of communication.
This is why consumer zeitgeist should be seen as a catalyst, not a competitor for enterprise providers like DeepL. Devices make translation feel natural and private; enterprises need it to also be dependable and governed. The opportunity is to stitch these two sets of requirements together, so in‑ear immediacy sits on top of verifiable privacy, accuracy, and clear audit trails.
Future-proofing with Language AI
Organisations should treat Language AI as strategy, not a bolt‑on. They should build around inclusion—make live translation ambient by default, not an exception. It’s a key ingredient in allowing every employee to excel, not just those with the ‘right’ language skills.
Beyond day‑to‑day communication, live translation can redefine organisational design. When language is no longer a filter, talent pools widen. You hire the best engineer, not the best English‑speaking engineer. Leadership pipelines diversify. Culture scales globally with fewer compromises to voice and nuance. Translation stops being an accommodation and becomes a growth capability.
Agentic AI: from understanding to action
A second trend heightens the stakes: agentic AI. These systems don’t just translate; they understand instructions and act across tools. DeepL recently introduced DeepL Agent, aimed at automating business workflows and increasing productivity – content creation and analysis, comparing documents, localisation, research, follow‑ups—so comprehension turns into action inside enterprise systems. It’s the operational counterpart to ambient translation: if the ear makes understanding effortless, the agent makes the next steps automatic.
The progression is so natural. In a multilingual meeting, participants hear translated speech in their ears or see it on their screens. In the background, an agent captures decisions, drafts minutes, localises collateral, and updates key records within apps—quietly, and within policy. The conversation doesn’t end at understanding; it triggers everything required to turn discussion into actions.
The next normal
Virtual voice translation is no longer something down the line; it’s here and already in use. Contact centres can offer help across the globe. Field sales can move faster in new regions without waiting for bilingual hires. Product teams spread across time zones can collaborate and ship solutions with fewer misunderstandings and less busy work. Each step compounds into faster cycles and broader reach.
The likely cultural shift within companies is as significant as the technical one. When translation is effortless, people will use it more. Teams talk more. Markets feel closer. Customers expect to be served in their language without hesitation. That changed behaviour pulls Language AI from the background of operations to the foreground of growth.
From support function to growth driver
This is a real turning point. The future of translation and the future of business are converging. Translation will be less about overcoming barriers and more about creating opportunities—new segments to enter, new partnerships to forge, new products to make legible to the world on day one.
Success won’t just be about technology. Humans will stay present to do what they do best, taking the judgements and making the decisions—eyes up, hands‑free—while Language AI does the heavy lifting of understanding, rendering, and initiating tasks with transparent controls.
Consumer tools have shown the world how live translation can feel. Enterprise Language AI will show what it can achieve—reliable, governed, and woven into the fabric of work. If we get the chemistry right — earbud ease on top, enterprise integrity beneath — we’ll cut friction within businesses and in their interaction with the world, drive productivity and deliver more positive outcomes without even noticing the tech."
By Stefan Mesken, Chief Scientist, DeepL
https://aijourn.com/when-translation-becomes-invisible-voice-translation-and-the-new-language-frontier/
#Metaglossia
#metaglossia_mundus
#métaglossie
"La locution latine Ego translator fait écho à la formule de Paul Valéry Ego scriptor, que le poète retint comme titre d’une des rubriques de ses Cahiers. Elle désigne « l’écriveur », terme que Valéry préfère à « écrivain », se regardant écrire et témoignant, dans une attitude réflexive, de sa démarche créatrice, aussi bien que de son être profond, indissociable de l’écriture. De manière distincte mais complémentaire, notre proposition de journée d’étude se rattache aussi à la démarche singulière d'Henri Meschonnic qui, dès les premières lignes de Poétique du traduire, qualifiait la théorie de la traduction d’« accompagnement réflexif » et situait au contraire le principe de son travail de traducteur et de théoricien dans la formule lapidaire suivante : « l’expérience est première ». Suivant ces modèles, notre titre Ego translator voudrait à son tour référer à la personne du traducteur/de la traductrice se désignant lui-même/elle-même et se décrivant sur divers plans de son expérience.
Event organiser:
Stefano Magni, MCF HDR (CAER)
Design, organisation and coordination:
Arnaud Gingold (doctoral candidate, CAER), Silvia Tedeschi (doctoral candidate, CAER), Pierre Troullier (agrégé, doctoral candidate, CIELAM)
—
Programme
Monday 23 February 2026
13:30-14:00: welcome of speakers
14:00-17:00
● Introduction (10 min)
● “Logos sauvage : une expérience de la traduction”, Arnaud Gingold (15 min.)
● “Traduire, écrire, s’absenter”, Christophe Mileschi (40 min.)
Break (20 min.)
● “Le littéralisme serait-il l’approche traductive la plus subversive pour traduire les poèmes des surréalistes ? Quelques réflexions sur le choix mimétique de Diana Grange Fiori traductrice de Francis Picabia”, Emanuela Nanni (40 min.)
● “Ce que traduire veut dire : anatomie d’un travail situé dans le marché linguistique”, Annalisa Romani (40 min.)
—
Tuesday 24 February 2026
9:30-10:00: welcome of speakers
10:00-13:00:
● “Eugenio Montale et Oscar Vladislas de Lubicz-Milosz : La berline arrêtée dans la nuit”, Silvia Tedeschi (15 min.)
● “Chinois-français la traduction / d’une possibilité”, Pierre Vinclair (40 min.)
Break (10 min)
● [Title to be announced], Antonio Werli (40 min.)
● “Portrait du poète en purgatif : traduction live d’un extrait de satire de James Joyce”, Pierre Troullier (15 min.)
● Conclusion (10 min.)
—
Location
Multimedia Building - Colloquium Room 1
Aix-Marseille Université - Campus Schuman
29 avenue Robert Schuman
13621 Aix-en-Provence Cedex 01
The campus can be reached from Marseille by bus 50 and from the Aix bus station by bus A.
Contact: Arnaud Gingold
Reference URL: https://caer.univ-amu.fr/
Address: Aix-Marseille Université
Attached document: https://www.fabula.org/actualites/documents/132848_3cc6daaaab9e9cb5bbbd23174a1f6173.pdf"
Published on 19 February 2026 by the Faculty of Arts, University of Lausanne (Source: Pierre Troullier)
https://www.fabula.org/actualites/132848/ego-translator-experiences-de-la-traduction.html
#Metaglossia
#metaglossia_mundus
#métaglossie
"Publié avec le soutien du Conseil scientifique de la faculté des Lettres de Sorbonne Université, de l’UMR 8224 Eur’ORBEM (CNRS - Sorbonne Université) et du GDR 3607 Connaissance de l’Europe médiane (CNRS).
Éditeur : Presses universitaires de Rennes
Lieu d’édition : Rennes
Publication sur OpenEdition Books : 17 février 2026
ISBN numérique : 979-10-413-1119-4
DOI : 10.4000/15ppn
Collection : Interférences
Année d’édition : 2026
ISBN (Édition imprimée) : 979-10-413-0823-1
Nombre de pages : 658
What became of French-language literature in the twentieth century in four linguistic spaces of Central Europe: Hungarian, Polish, Slovak and Czech? To answer this question, the volume brings together historians, sociologists of literature, comparatists, literary historians and translators in order to shed light on the context and the flows of translation into these languages. It presents the trajectories of Central European cultural mediators and examines emblematic cases of literary transfer (from Baudelaire to Romain Rolland by way of Jules Verne). At the centre of the antagonisms running through world literature, translation could be a consolatory practice, throwing a bridge across the darkest conflicts. Its study contributes to an understanding of the relations between Central Europe and the Western world through the crises, conflicts and ideological confrontations that marked Europe from the end of the nineteenth century to the present day..."
Authors
Antoine Marès (ed.)
Paris 1 Panthéon-Sorbonne.
IdRef : 027008878
Antoine Marès, a historian of East-Central Europe and emeritus professor (Paris 1 Panthéon-Sorbonne), has, together with Clara Royer, brought together leading Central European and French specialists on the subject, who offer here new syntheses and analyses of the many facets of the circulation of French-language belles-lettres at the heart of Europe.
Clara Royer (ed.)
Sorbonne Université.
IdRef : 134043162
Clara Royer, a specialist in Hungarian literature and professor at Sorbonne Université, has, together with Antoine Marès, brought together leading Central European and French specialists on the subject, who offer here new syntheses and analyses of the many facets of the circulation of French-language belles-lettres at the heart of Europe.
https://lnkd.in/eTTgAUza
#Metaglossia
#metaglossia_mundus
#métaglossie
"« La diversité linguistique est la source même de la capacité d’innovation de l’Inde »
Nicolas Idier
Ecrivain et historien
Sindhuja Veeraragavan
Traductrice littéraire originaire du Tamil Nadu et diplômée de l’université Jawaharlal Nehru, à Delhi, et de l’université de Stirling, en Ecosse.
Publié le 17 février 2026
À l’occasion du quatrième voyage officiel d’Emmanuel Macron en Inde, et alors que le pays accueille le sommet international sur l’intelligence artificielle du 19 au 20 février 2026 à Delhi, une traductrice et un écrivain alertent sur les dangers de l’IA comme outil de traduction et sur l’importance du plurilinguisme.
À l’heure où l’Inde prend une place croissante dans le nouvel équilibre d’un monde marqué par une accélération technologique sans précédent, nous affirmons que la source même de sa capacité d’innovation est la diversité linguistique, terreau de son pluralisme culturel et de l’histoire de la pensée démocratique décrite par Amartya Sen, lauréat du prix Nobel d’économie en 1998.
Cette diversité – 22 langues officielles, plus de 270 langues qualifiées de maternelles selon le recensement de 2011 –, a fait naître une manière de réfléchir et de travailler ensemble, que résume une autre grande voix de l’Inde – la romancière Arundhati Roy – en réponse à la question poème de Pablo Neruda, « dans quelle langue tombe la pluie sur les villes tourmentées ? » : la langue de la traduction.
Un filtre prescripteur
En Inde comme partout dans le monde, traduire est un acte profondément politique, non seulement dans le choix des œuvres que l’on transporte d’une langue à l’autre, mais aussi dans celui du mot pour transmettre le sens. En laissant l’intelligence artificielle procéder à ce choix, ce n’est pas seulement la diversité des langues que l’on met en péril, mais toute une longue histoire vers l’Indépendance – un concept qui devrait nous faire collectivement réfléchir – que l’on interromprait.
L’acte de traduire a le pouvoir de créer et de briser les chaînes. Et pendant un certain temps, les chaînes semblaient bien se briser. Après la traduction de Thaïs d’Anatole France par l’écrivain hindi Munshi Premchand (1880-1936) ou de La Marseillaise par Subramanya Bharati (1882-1921), poète tamoul révolutionnaire, la seconde moitié du XXe siècle a vu circuler en Inde et dans le monde (car l’Inde est un monde) des auteurs comme Perumal Murugan, banni en tamoul mais traduit en anglais, ou encore Arundhati Roy, passant de l’anglais à l’hindi (et à quarante autres langues).
Dans la longue et chaotique histoire des langues et du savoir, l’IA n’est pas neutre. Derrière son apparente omnipotence linguistique se cache le risque d’une régression sociale majeure. La traduction est un filtre prescripteur, capable d’inverser certaines positions de pouvoir. Or, dans l’entraînement des modèles d’IA, les langues non anglaises (et en particulier non occidentales) ne représentent qu’une petite fraction.
Un potentiel émancipateur
L’utilisation de l’IA conduira à une nouvelle recentralisation du pouvoir narratif, en réimposant des catégories grossières d’altérité que les traducteurs postcoloniaux et féministes ont passé des décennies à remettre en question. Ce qui serait alors perdu, ce n’est pas seulement l’élégance ou le style – ce que dès 1946 George Orwell avait pressenti dans son clairvoyant manifeste La politique et la langue –, mais le potentiel émancipateur de la diversité linguistique.
Plus fondamentalement, quand bien même il serait possible de former l’IA à écrire et à traduire, l’espace silencieux entre deux mots lui échappera toujours. C’est pourtant dans cet espace vide, ce silence que se déploie la liberté d’interprétation, premier et seul gage de l’innovation véritable.
Le maintien de la diversité des langues repose sur chacun de nos actes. En Inde, où nous refusons le monopole d’une langue hindi épurée de sa composante ourdoue et sanskritisée au détriment de la profondeur historique et agnostique du sanskrit. Et partout ailleurs, en nous inscrivant à des cours de langue vivante, plutôt que de télécharger un logiciel de traduction automatique, en lisant des œuvres garanties « sans IA », en encourageant les enfants dans cette voie royale du plurilinguisme.
« Une certaine lecture du monde »
Car l’IA traduit peut-être, mais elle traduit sans humanité. Si l’on n’y prend pas garde, elle nous entraînera dans un balbutiement mécanique, qui finira par rétrécir le monde des idées et lever de nouvelles frontières entre des communautés étanches les unes aux autres – une définition du système des castes malheureusement encore prégnant dans l’Inde de 2026.
Dans son discours tenu à la conférence des Ambassadeurs, Emmanuel Macron reliait le « grand partenariat géographique » de l’Indo-Pacifique à « une certaine lecture du monde ». Pour répondre aux inquiétudes légitimes de toute langue menacée, que ce soit par le suprématisme politique ou technologique, et véritablement installer l’Inde comme contrepoids d’un déséquilibre qui menace la grammaire de notre humanité, faisons de cette lecture du monde une défense et illustration de la diversité linguistique et inscrivons en son cœur l’art très humain de la traduction."
https://www.la-croix.com/a-vif/la-diversite-linguistique-est-la-source-meme-de-la-capacite-d-innovation-de-linde-20260217
#Metaglossia
#metaglossia_mundus
#métaglossie
"L'Albertine Translation Prize pour David Broder et sa traduction d'Algérie 1962
La Villa Albertine, établissement culturel du ministère de l’Europe et des Affaires étrangères basé aux États-Unis, a décerné son Albertine Translation Prize, à David Broder, pour sa traduction, à paraitre chez Verso Books, du livre Algérie 1962, de Malika Rahal (paru en janvier 2022 chez La Découverte).
Le 16/02/2026 à 12:36 par Dépêche
Chaque année, la Villa Albertine décerne son Albertine Translation Prize, destiné à encourager la traduction et la diffusion d'œuvres françaises outre-Atlantique. Il est doté à hauteur de 5000 $, une somme qui revient au traducteur.
Le lauréat de cette année, David Broder, est un écrivain, journaliste et traducteur installé à Rome. Il apparait régulièrement dans l'hebdomadaire italien Internazionale.
En Algérie, l'année 1962 est à la fois la fin d'une guerre et la difficile transition vers la paix. Mettant fin à une longue colonisation française marquée par une combinaison rare de violence et d'acculturation, elle voit l'émergence d'un État algérien d'abord soucieux d'assurer sa propre stabilité et la survie de sa population. Si, dans les pays du Sud, cette date est devenue le symbole de l'ensemble des indépendances des peuples colonisés, en France, 1962 est connue surtout par les expériences des pieds-noirs et des harkis.
En Algérie, l'historiographie de l'année 1962 se réduit pour l'essentiel à la crise politique du FLN et aux luttes fratricides qui l'ont accompagnée. Mais on connaît encore très mal l'expérience des habitants du pays qui y restent alors. D'où l'importance de ce livre, qui entend restituer la façon dont la période a été vécue par cette majorité. L'année 1962 est scandée par trois moments : cessez-le-feu d'Evian du 19 mars, Indépendance de juillet, proclamation de la République algérienne le 25 septembre.
– Le résumé de l'éditeur pour Algérie 1962
Malika Rahal, historienne, chargée de recherche au CNRS, est spécialiste de l'histoire contemporaine de l'Algérie. Elle dirige, depuis 2022, l'Institut d'histoire du temps présent (IHTP).
Parallèlement à cette récompense, la Villa Albertine apporte un soutien financier à plusieurs projets de traduction, au nombre de 17 en 2025. L'Albertine Translation Prize est sponsorisé par Van Cleef and Arpels, la Florence Gould Foundation, l'Albertine Foundation et l'Institut français.
À LIRE - En dessous du volcan de Malcolm Lowry : une nouvelle traduction signée Claro en 2028
L'année dernière, deux prix de traduction avaient été décernés à Ève Hill-Agnus et Gregory Elliot, pour leur travail respectif sur Ultramarins, de Mariette Navarro (Quidam) et Pourquoi la guerre ? de Frédéric Gros (Albin Michel)."
https://actualitte.com/article/129317/prix-litteraires/l-albertine-translation-prize-pour-david-broder-et-sa-traduction-d-algerie-1962
#Metaglossia
#metaglossia_mundus
#métaglossie
"Zoomers, we have bad news: your six or seven minutes in the vanguard of the cultural zeitgeist are officially over. The New York Times has published a fresh guide to Gen Z slang. It’s all downhill from here.
The NYT’s guide is particularly notable for its insistence that a significant amount of today’s slang isn’t new at all. The paper is clearly wedded to this idea; in December last year, they insisted that the term “brain rot” should be attributed to Henry David Thoreau, who used it in his 1854 glamping memoir Walden. They’ve doubled down on this approach, reporting that the hot new slang among zoomers includes archaic words like “yap,” “skedaddle,” and (deep breath) “goon.”
Critics might argue that the NYT’s grasp of some cutting-edge slang isn’t quite as solid as it might like to believe: take, for example, its assertion that “Calling someone a ‘goon’ is no longer just a 1920s habit.” It feels somehow dirty to direct someone as venerable and respectable as the Gray Lady to the less salubrious parts of the internet, but… that ain’t how “goon” is being used in 2026, ma’am. (The paper of record doesn’t have an entirely flawless record on this front.)
The whole everything-old-is-new-again angle feels a little dubious, too. The article interviews one Brianne Hughes, a historical linguist: “Old terms often return subconsciously amid a sort of inventory-taking whenever a significant milestone arrives—like the turn of a decade or the anniversary of a cultural event. ‘It’s just a reason to go back through the old photos of the language, and being like, Oh yeah, I remember, that was pretty fun,’ Hughes said.”
It’d be interesting to know whether there are studies and research that support this view—none are cited—because intuitively, it seems perfectly feasible for a term as generic as “brain rot” to be coined in 2026 as it was in 1854, and that more generally, words and phrases are just as likely to be created anew over and over again as they are to re-emerge from some sort of linguistic deep freeze.
Does it even matter? Well, for our money, the process by which slang emerges, spreads, and is eventually re-subsumed into the blob of “proper” language is one of the most fascinating parts of linguistics. It’s a process one can track via Green’s Dictionary of Slang, an exhaustive dictionary of argot that, while not quite as venerable as the NYT, is pretty much the definitive source on its subject. As it happens, the entire dictionary has recently been made available in its entirety online for free, which is excellent news for anyone given to researching etymology for shits and giggles.
And what does Green’s have to say about “brain rot”? Nothing. The term is yet to appear in the slang dictionary of record, which suggests that Thoreau’s usage never quite made it out of the 19th-century skibidi toilet."
https://gizmodo.com/largest-dictionary-of-english-slang-is-now-free-online-to-help-you-talk-like-a-zoomer-2000720203
#Metaglossia
#metaglossia_mundus
#métaglossie
"A group of 52 biblical “specialists” have released a new version of the Bible in which inclusive language and “political correctness” have replaced some “divisive” teachings of Christianity in order to present a “more just language” for groups such as feminists and homosexuals.
According to the AFP news agency, the new version of the Sacred Scriptures was presented at a book fair in Frankfurt. Entitled, “The Bible in a More Just Language,” the translation has Jesus no longer referring to God as “Father,” but as “our Mother and Father who are in heaven.”
Likewise, Jesus is no longer referred to as the “Son” but rather as the “child” of God. The title “Lord” is replaced with “God” or “the Eternal One.” The devil, however, is still referred to with masculine pronouns.
“One of the great ideas of the Bible is justice. We have made a translation that does justice to women, Jews, and those who are disregarded,” said Pastor Hanne Koehler, who led the team of translators.
Last December, Martin Dreyer, pastor and founder of the sect "Jesus Freaks," published the "Volksbibel" (The People's Bible), in a supposed attempt to make the message of Christianity more "accessible." Jesus "returns" instead of resurrects, and multiplies "hamburgers" instead of the fish and loaves. In the parable of the prodigal son, the younger son squanders his inheritance at dance clubs and ends up "cleaning bathrooms at McDonald's.""
https://www.ewtnnews.com/world/europe/controversial-politically-correct-version-of-the-bible-published-in-germany?redirectedfrom=cna
#Metaglossia
#metaglossia_mundus
#métaglossie
"Portuguese-English Translator - Remote
Watching America
Remote, Volunteer can be anywhere in the world
Details
Start Date:
March 1, 2026
End Date:
July 7, 2026
Available Times:
Weekdays (daytime, evenings), Weekends (daytime, evenings)
Time Commitment:
A few hours per month
Recurrence:
Recurring
Benefits:
Training Provided
Good For:
Age 55+, International Volunteers
Participation Requirements:
Attend Orientation
Description
Watching America (www.watchingamerica.com) is currently seeking volunteer translators (at least 18 years old) to translate foreign news stories in Portuguese about the United States into English. You do not need prior translating experience to do this. You must be available to find and translate at least one article every two weeks (a minimum of 26 articles per year). Applicants will be asked to translate a test article. While this is a volunteer position, it allows for flexibility in regards to how much or how little you would like to work above the minimum, helps build your resume, comes with an offer of a solid letter of recommendation from Robin Koerner, president and publisher of Watching America, and gets your name on an internationally acclaimed website. All languages are needed and any new ideas you may have for the project are welcome!
Watching America (WatchingAmerica.com) is an internationally renowned website that translates foreign news about the U.S. from all over the world, to enable Americans to see how they and their policies are perceived globally. Over its thirteen years of existence, it has gained mainstream recognition in the U.S. and abroad and has been talked about, among others, by the BBC, NPR, Foreign Policy Magazine, The Guardian and The Village Voice. It depends on a team of volunteers who benefit from the prestige of the project in developing their careers elsewhere, and who have a passion for the idea of breaking down the final barrier among peoples of the world — the language barrier."
https://www.idealist.org/en/volunteer-opportunity/ea19f6e59f074d0ebe9f3d28171c0677-portuguese-english-translator-remote-watching-america-washington
#Metaglossia
#metaglossia_mundus
#métaglossie
"Singing Grass and Paper Republic are delighted to announce a new prize for translated fiction from Chinese to English designed to showcase literary translators of contemporary voices. This exciting initiative will invite participants to translate a short extract from the acclaimed Mao Dun prize-winning author Liu Zhenyun whose new novel, Salty Jokes (咸的玩笑), has just taken China by storm. The winning translator will receive £1500 and 2 runners-up £500 each. The submissions will be judged by an international Jury of translation experts...
Nicky Harman <n.harmanic@gmail.com>"
https://u.osu.edu/mclc/2026/02/16/new-chinese-to-english-translation-prize/
#Metaglossia
#metaglossia_mundus
#métaglossie
The Vatican has teamed up with Translated, a language service provider, to create live translations in 60 languages.
"The Vatican is leaning into AI. AI-assisted live translations are being introduced for Holy Mass attendees — the holy masses if you will. The Papal Basilica of Saint Peter in the Vatican has teamed up with Translated, a language service provider, to create live translations in 60 languages.
"Saint Peter’s Basilica has, for centuries, welcomed the faithful from every nation and tongue. In making available a tool that helps many to understand the words of the liturgy, we wish to serve the mission that defines the centre of the Catholic Church, universal by its very vocation," Cardinal Mauro Gambetti, O.F.M. Conv., Archpriest of the Papal Basilica of Saint Peter in the Vatican, said in a statement. "I am very happy with the collaboration with Translated. In this centenary year, we look to the future with prudence and discernment, confident that human ingenuity, when guided by faith, may become an instrument of communion."
Visitors to the Vatican will have the option to scan a QR code. They will then have access to live audio and text translations of the liturgy. It doesn't require an app and should work right on a web page.
The technology stems from Lara, a translation AI tool Translated launched in 2024. Translated claims that Lara works with the "sensitivity of over 500,000 native-speaking professional translators.""
https://www.engadget.com/ai/the-vatican-introduces-an-ai-assisted-live-translation-service-163014907.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAFpq9f8Z0g7iNreJNLxLfc8MqR0g4I4bNKXzInJhEnuvGiC7laPUept1zQTNlxIFHGqlSBFF5ihbeauHeDL8xcPOtVrYDiQ4WSlDCWAy6GD07P3lzY8wIqswh5OXOzYAkbmZnAR3pni3SVamnb2cTEU341ojKbXqmjNSvSmPSLA0
#Metaglossia
#metaglossia_mundus
#métaglossie
"Interpreters from across Canada work daily to preserve Indigenous languages.
As February marks Indigenous Languages Month, NNSL Media spoke with six interpreters at the NWT Legislative Assembly who make politics understandable in Inuinnaqtun, Inuktitut, Dene Suline, Sahtuot’ine and Dene Zhatie.
Necessity
Many found themselves interpreting out of necessity.
Tuppittia Qitsualik, a speaker of Inuinnaqtun, said that without her mother, she felt compelled to rise to the responsibility of interpreting for her unilingual father.
Dennis Drygeese, a speaker of Dene Suline Yatie, recalls that while growing up, his mother, a teacher, would travel often. As a result, he often stayed with his grandparents, who “didn’t speak a word of English.” If ever they needed help speaking to the Co-op manager, or to social services, Drygeese would be there to translate on their behalf.
Sarah Cleary, who is fluent in Sahtuot’ine Yati, began her translation in a similar way, saying, “When I was young, the Elders in my community always asked me to interpret, and I’d do my best. Sometimes I’d have to use sign language.”
Suzie Napayok-Short, an Inuktitut speaker, said, “As I was growing up, the Qallunaat — that’s the white man — had authority in pretty much everything our parents had to do, and as I grew I realized they really need to know exactly what is being said here.”
Preservation
Preservation of identity and culture are at the heart of many of the interpreters’ work.
Mary Jane Cazon, a speaker of Dene Zhatie, recalls her parents telling her that retaining her language and her culture would lead her into the future.
Drygeese shared a similar sentiment, stating: “My grandmother told me, ‘Hang on to your words, my boy, because one day it’s going to be really useful to you,’ so here I am.”
Regarding the importance of preserving one’s language, Joe Ototkiak, a speaker of Inuinnaqtun, said, “Language is a cornerstone. Without language, there’s really no identity.” He added, “I just hope and pray that Inuinnaqtun keeps thriving, it’s a language that was at the brink of disappearing.”
Information
Some translators spoke of the value that their work has in ensuring unilingual Elders fully understand the ways of western politics.
Otokiak said, “It’s about making sure our Elders have some idea of what’s going on around them, and informing them of new things on the horizon.”
Cleary recalled a time when her father, being a seasonal worker, was denied ration money. Sarah went to the local church with the goal of calling Yellowknife for assistance, but the church’s priest, on Sarah’s behalf, went and spoke with the agent that had denied her father the money for food. Sarah’s interpreting was able to help feed her family that winter.
Napayok-Short recalls that as a child, many of her community members didn’t fully understand the settler-imposed rules. She saw that as her opportunity to help her people and decided to take up the task of interpreting.
“I thought, ‘I can get really good at these languages if I keep listening and learning, so that’s what I’m gonna do — that’s going to be my way of helping my family.’”
Revitalization
Many interpreters remain hopeful and optimistic that their languages will be preserved, and even adopted more widely.
Cazon noted that she and her husband are both fluent in their mother tongue. They make an effort to speak the language in their home and around their grandchildren.
“One day, hopefully, they’ll really be able to pick it up and be able to be fluent, just like us,” she said.
Cleary envisions a future where, due to the officialization of so many Indigenous languages within the NWT, youth in Indigenous communities will be able to be educated in their mother tongue. “If they wanted to go south and enter a university, because our language is official, they would have their entrance requirement,” she said.
NWT News/North"
https://www.pentictonherald.ca/spare_news/article_30c6349e-06bb-5a33-8c56-c52e27823a6f.html
#Metaglossia
#metaglossia_mundus
#métaglossie
Rozenn Milin, historian, sociologist and journalist, discusses her thesis entitled "La honte et le châtiment, ou l'interdiction de parler sa langue maternelle à l'école. L'exemple du breton et des langues africaines" ("Shame and punishment, or the ban on speaking one's mother tongue at school: the example of Breton and African languages").
"In Morlaix, the historian Rozenn Milin presented her thesis on the banning of mother tongues, in particular Breton, at school: a language policy born under the Terror and aimed at imposing French.
Rozenn Milin presented her thesis on the bans on mother tongues, including Breton.
On Thursday 12 February, the teachers of the bilingual (Breton-French) stream at the Lycée Tristan-Corbière invited Rozenn Milin, historian, sociologist and journalist, to speak about her thesis entitled "La honte et le châtiment, ou l'interdiction de parler sa langue maternelle à l'école. L'exemple du breton et des langues africaines". A subject which, as the teacher Jean Roualec-Quéré recalled, ties in with the theme studied this year by the pupils, on the appropriation and transmission of Breton identity. The lecture was held in French, as it was also aimed at pupils and teachers outside the bilingual stream..."
https://www.letelegramme.fr/finistere/morlaix-29600/la-honte-et-le-chatiment-quand-lecole-interdisait-le-breton-et-les-langues-africaines-6983224.php
#Metaglossia
#metaglossia_mundus
#métaglossie
"Robbie Meredith Education and arts correspondent, BBC News NI
New sign language laws in Northern Ireland will have a "huge impact", a deaf teenager has said.
Ellie-May, 14, gave evidence at Stormont to assembly members on the Communities Committee as they scrutinised the Sign Languages Bill.
The bill's aim is to make information and services, including from government departments and public bodies, accessible to people from the deaf community.
The committee's scrutiny of the bill is now completed which means that it can now have its final stages in the assembly before becoming law.
Plans for a sign language bill for Northern Ireland have been in the pipeline for a number of years.
The Sign Language Bill (Northern Ireland) 2025 was subsequently introduced in the Assembly by Communities Minister Gordon Lyons in February 2025.
The bill would give official and equal recognition of British Sign Language (BSL) and Irish Sign Language (ISL) as languages of Northern Ireland and promote the use and understanding of sign languages.
It means that public bodies will have to, by law, "take reasonable steps to ensure that the sorts of information and services provided by it are as accessible to individuals in the deaf community as to those individuals who are not in the deaf community".
The Department for Communities estimated that at least 5000 people in Northern Ireland use either British Sign Language (BSL) or Irish Sign Language (ISL) as their preferred way to communicate.
'Very tough'
Ellie-May is one of only three deaf pupils at her post-primary school.
"It is very tough because you're deaf and you're surrounded by hearing people at a mainstream school, and you have a lot to concentrate on," she said.
"When you're lip reading you're absorbing information and your brain is processing differently from a hearing person."
Ellie-May would like to see more young people in school learn to sign.
"Sign language is very good because obviously you can communicate to people better and you can just sign," she said.
Gemma McMullan, whose two-year-old son George is deaf, said deaf children will "be recognised as part of the community" due to the new laws.
George was first identified as deaf at only eight weeks old and has been attending sign language classes at Action Deaf Youth.
"Sign language isn't just a communication tool, it's going to be recognised as an official language and that's really important," she said.
"Our deaf children are going to be seen and they're going to be recognised as part of the community and they're going to have more access to education, healthcare, things like that so it's really important that our children are recognised and using their own language as part of their identity."
Julie Graham from Action Deaf Youth, an organisation for deaf children and young people, said it is a "massive, a massive movement forward".
"What's crucial is particularly to look at deaf children and young people because this will transform their futures."
The British Deaf Association's Majella McAteer told BBC News NI that the finger spelling differs between the two sign languages.
"We spell out words with two hands in British Sign Language and with one hand in Irish Sign Language, but our numbers as well in BSL use one hand, but our numbers in ISL use two hands. So there are big differences," she said.
"Here in Northern Ireland it's something that's very unique to have two sign languages that are used and the majority of our deaf community will know both languages.
"Some will use British sign language, some will predominantly use ISL, it depends where you were educated."
"But we're really proud of it because it just shows the rich culture that we have here."
The committee chair, Sinn Fein MLA Colm Gildernew, said the bill was a landmark one.
"In practical terms it's going to mean that all public bodies will have to provide services to people in their first languages," he said.
"It gives official recognition to both BSL and ISL and it recognises that those languages are the first language of people when they're using services."" https://www.bbc.com/news/articles/c20l6l9d4gwo #Metaglossia #metaglossia_mundus #métaglossie
"Avant que le français ne prenne l’importance qu’on lui connaît aujourd’hui, les langues de l’écrit restent, au sortir de l’Antiquité, le latin et le grec. Les parlers vernaculaires, de racine romane, sont quant à eux multiples et complexes. Mais un tournant majeur s’opère à partir du XIIIe siècle. Le français devient alors l’une des langues les plus parlées de l’Occident, acquérant une forte valeur symbolique d’union à un moment de consolidation du pouvoir royal. C’est à cette période qu’émerge un nouveau genre littéraire, celui du roman arthurien, dont l’essor est remarquable dès son apparition. Cet attrait nouveau pour la fiction est illustré dans l’exposition par un précieux vestige répertoriant des extraits de romans du poète champenois Chrétien de Troyes. Réutilisés au XVIIIe comme reliures de dossiers d’un notaire, ces Fragments d’Annonay, datés de la fin du XIIe-début du XIIIe, constituent un témoignage émouvant de l’émergence du français à l’écrit.
L’usage de la langue continue d’évoluer au gré des goûts et des transformations sociales. Au XIVe siècle, le style courtois connaît un succès considérable, en lien avec le développement des milieux urbains et de cour. Les thèmes amoureux et légers investissent fables et poésies, jusque dans la forme même des manuscrits, comme l’illustre l’élégant Chansonnier cordiforme de Montchenu (vers 1475) dont les enluminures colorées et chantantes représentent un couple, vêtu à la mode de l’époque, absorbé dans une conversation galante.
L’écrit comme laboratoire de la créativité et de la connaissance La Renaissance et les Temps modernes marquent un moment de stabilisation et d’institutionnalisation de la langue française. Le traité de Villers-Cotterêts, signé en 1539 dans l’ancien château de François Ier, impose par exemple l’usage du français dans les textes juridiques et administratifs. La langue devient alors un outil central pour penser le monde et diffuser les connaissances. Les philosophes des Lumières incarnent pleinement cette ambition, par leur volonté de passer la société au crible de la raison. On remarque avec plaisir les manuscrits autographes de Diderot et de Montesquieu, rares témoins matériels d’idéaux cherchant à éradiquer l’obscurantisme au profit de la vérité. Si le XVIIIe siècle est celui des Lumières, le XIXe s’impose comme le siècle du roman. Les écrivains conservent davantage les traces de leurs travaux préparatoires, faisant du manuscrit un espace de réflexion et d’expérimentation. Graphie, mise en page, ratures ou reprises deviennent autant d’indices de la psyché de l’auteur et de sa manière de concevoir l’œuvre, de jouer avec les mots. Il est ainsi amusant de comparer l’écriture fluide et appliquée d’Alexandre Dumas ou de Colette au tâtonnement presque chaotique de Gustave Flaubert. Le théâtre n’est pas en reste et révèle lui aussi des pratiques d’écriture spécifiques. La langue s’y déploie sous la plume des dramaturges et l’on ne peut être qu’intrigué par la technique astucieuse et ingénieuse mise en œuvre par l’auteur libanais Wajdi Mouawad, dont les notes, présentées sous la forme d’un schéma, témoignent d’un rapport très fort entre écriture, intrigue et mise en scène.
L’écriture comme miroir de l’âme Le manuscrit peut également se faire le témoin de réflexions plus personnelles, d’une quête de soi et du désir de coucher sur le papier des sentiments intimes, destinés à soi-même ou à des proches. L’exposition revient ainsi sur la pratique très codifiée de la correspondance au Moyen Âge, qui relève alors davantage d’un jeu intellectuel impersonnel, souvent dicté et retranscrit par un tiers. C’est au XVIIe siècle, dans le sillon de l’essor des salons littéraires, que la correspondance devient plus spontanée, plus personnelle. Les lettres écrites par Madame de Sévigné à sa fille, la comtesse de Grignan, conservent encore aujourd’hui le souvenir ému de leur séparation, ainsi que la trace sensible de la main de l’épistolière, quatre siècles plus tard, 2026 marquant les 400 ans de sa naissance...
Il a fallu attendre une période relativement récente pour que l’ego et l’intime deviennent de véritables sujets d’écriture et d’introspection. Les exemples les plus marquants dans l’exposition datent des XIXe et XXe siècles, à l’image du Journal de deuil de Roland Barthes (1977-1978), qui consigne, en 330 fiches, le cheminement intérieur d’un fils confronté à la perte de sa mère. Qu’ils soient le fait de grandes figures ou d’auteurs anonymes, tous ces écrits tissent un lien d’intimité puissant, presque familier, avec celles et ceux qui ont fait vivre et évoluer la langue de Molière.
« Trésors et secrets d’écriture. Manuscrits de la Bibliothèque nationale de France, du Moyen Âge à nos jours » Cité internationale de la langue française, 1 Pl. Aristide Briand, 02600 Villers-Cotterêts Jusqu’au 1er mars
https://www.connaissancedesarts.com/arts-expositions/tresors-de-la-bnf-plus-de-100-manuscrits-racontent-lhistoire-de-la-langue-francaise-au-chateau-de-villers-cotterets-11209938/
#Metaglossia
#metaglossia_mundus
#métaglossie
" By Lindy-Ann Edwards-Alleyne, Public Information Assistant, UNIC Caribbean
Colonialism is often the starting point to explain the linguistic makeup of the wider Caribbean, but the true language landscape is much more diverse. According to the University of the West Indies (UWI) St. Augustine Department of Modern Languages and Linguistics, while European languages dominate the linguistic grouping of ‘official languages’, there is a plethora of others, including Amerindian, African, Creole, and Asian languages, which also contribute to the region’s linguistic diversity and heritage.
A multilingual Caribbean space relies heavily on translation services to facilitate written communication among speakers of different languages and transfers of knowledge within the public, private and academic sectors. At the University of the West Indies, there are Translation Bureaus at the St. Augustine, Mona and Cave Hill campuses, and post-graduate programmes that contribute to the sustained growth of a highly qualified cadre of professional translators. At the same time, online translation tools and applications, especially those powered by artificial intelligence, offer an alternative for accessing translation services. Tools like Google Translate or Chat GPT have become modern-day quick references for the translation of words, paragraphs and in some cases complete documents from one language into another. For our observance of International Translation Day earlier this year, UNIC Caribbean explored the impact of artificial intelligence translation tools on the practice of translation in the Caribbean.
We spoke with Dr. Rossana Herrero-Martín, Coordinator of the UWI Cave Hill Translation Bureau, Mr. Eric Maitrejean, Coordinator of the UWI St. Augustine Caribbean Interpreting and Translation Bureau and Mrs. Lyndell Logan-Salina, retired translator and interpreter about the world of professional translation and the impact of AI-powered tools on its relevance and sustainability.
Translation: the process and product
Herrero-Martín describes the process of translation as not merely the swapping of words from one language into another but rather a “purposeful, ethical act of intercultural communication: rendering meaning, intent, tone and function of a source text into a target language so it works for its new audience.” The skill set of a professional translator is underpinned by a perfect command of the target language, including an advanced-level understanding of style and register, as well as cultural nuance and contexts. The process of translation also includes background research, revision and proofreading so that the translated product adheres to international standards. According to Maitrejean and Logan-Salina, exposure to and familiarity with a broad range of topics and issues - including historical and current events - are assets for students of translation, enhancing their capacity to recognise and appropriately replicate topic-related or contextual nuance and references. Logan-Salina says the translator’s ability to weave together these various linguistic, cultural and contextual threads is directly linked to the quality of the translated product.
The professional translator’s fluency also extends to the use of translation tools and technology, including machine and computer aides and software designed to support their work. Artificial intelligence within the context of translation studies, refers to the ability of computer systems to translate text from one language to another. Tools powered by this technology can potentially enhance the efficiency of the professional translator’s workflow, from drafting to revision, and even the inclusion of terminology options to convey appropriate nuance and context.
“But machines doing that now!”
Before she entered the profession, says Logan-Salina, many outside the field anticipated the imminent takeover of the translation process by computers. Since the turn of the century, artificial intelligence has enhanced the capacity and output of translation tools and applications. These tools have become widely available and accessible via online and mobile applications, making translating from one language into another not just easily available, but also cost effective for those purchasing this service.
So, does the availability of AI portend the demise of translation as a human endeavour? For Maitrejean and Herrero-Martín, as Coordinators of Caribbean Translation Bureaus, reinvention and evolution is what comes next for the profession of translation. According to Maitrejean, it is the responsibility of language professionals to upskill and improve their capacity to utilise AI tools as part of their professional toolkit, in order to add value in sectors where “a very simple phone app” can replace their services.
Herrero-Martín envisions translators as not merely remaining relevant but also leading the modern-day process and workflows that utilise these tools to ensure adherence to international standards of quality and compliance. Output that is rapidly (and often very cheaply) churned out by online translation tools still requires professional review and editing for linguistic, cultural or contextual nuance. This is where Herrero-Martín predicts a sustained and even increased demand for professional translators. She believes that despite the advances made by AI translation tools, core human skills – “judgement, ethics, cultural intelligence, writing craft” – will remain irreplaceable.
Translators and AI – preserving the multilingualism of the Caribbean
The linguistic landscape of the wider Caribbean includes many of what Herrero-Martín refers to as “under resourced” languages; that is, languages for which there are few or no resources that can be accessed by translators globally. Within the context of the 21st century, this lack of data available to train AI translation algorithms and models severely reduces the options for translation from and into these languages, potentially undermining the value of these languages as vehicles of knowledge and culture for their respective communities.
For Herrero-Martín, Logan-Salina and Maitrejean, professional translators possess a unique skillset as builders of corpora, including dictionaries, glossaries and other language resources. For them, this skillset is critical for curating the resources required to ensure these languages become potentially more visible to AI-powered algorithms, and therefore more accessible globally. Herrero-Martín says it is imperative that this resource-building and curation prioritize community language sovereignty and the safeguarding of this intangible heritage:
“When carried out responsibly and in close collaboration with language communities, translation can be a powerful instrument of preservation, revitalization, and visibility for Caribbean languages, including indigenous languages.” The unique role of professional translators as community partners, language curators and potential facilitators of resource access to AI-powered algorithms and models should be considered “not just a technical act but an act of cultural justice.”
The expert contributions of qualified translators working in the Caribbean facilitate communication while preserving the region’s multilingual identity. Artificial intelligence technology has transformed the process of translation and offers translators tools to enhance their professional experience while continuing to practice the art of translation. By keeping pace with the advantages offered by AI and adopting and adapting its tools to address the linguistic needs of the region, Caribbean translators strengthen their roles as protectors of the region’s multiple language identities and linguistic diversity."
https://www.un.org/ht/node/239755
#Metaglossia
#metaglossia_mundus
#métaglossie