Scooped by
Charles Tiayon
March 12, 6:22 PM
2026 Graduation Ceremony - Faculty of Translation and Interpreting - UNIGE, University of Geneva. The graduation ceremony will take place on Friday 27 November 2026 at 6 p.m. in the Uni-Mail hall. Open by invitation only, this festive event for the 2025-2026 graduating class will be an occasion to celebrate their outstanding achievement and share memorable moments from their studies. The ceremony will be live-streamed on the day. 10 March 2026 https://www.unige.ch/fti/a-la-une/ceremonie-de-remise-des-diplomes-2026 #metaglossia_mundus #metaglossia #métaglossie
Are humans the only beings on the planet that use language to communicate?
Â
"Burg Giebichenstein
Kunsthochschule Halle
Â
"Language can only deal meaningfully with a special, restricted segment of reality. The rest, and it is presumably the much larger part, is silence." George Steiner

Are humans the only beings on the planet that use language to communicate? Can we decipher the nonhuman world around us without harnessing it to our own socialization, syntax, and lexicon? Is interspecies communication even possible? Translation has been described as a precondition underlying all (human) cultural transactions upon which communication is based. It is also inherently political and stands at the forefront of so many of today's questions around identity, gender, post-colonial criticism, feminist critique, machine translation and canon creation, yet its connection to the nonhuman turn, interspecies communication, and eco-criticism has not yet been fully explored.
Â
Whether we are talking about classic linguistic and literary translation or any number of related fields, including language and literature, cultural studies, performance, and the visual and media arts, the core question that translators and theorists of translation have been debating for centuries remains the same: is it possible to translate without interpreting? Is linguistic and cultural equivalence even possible? These questions become all the more urgent in the limit-case of interspecies communication. Can we apply empathic modes of translation to nonhuman articulations, wherein translation involves a form of metamorphosis, not of text, but of the translator? As such, translators are something of a hybrid species, with one foot in each culture and language, whose very existence revolves around traveling between worlds. Translators have something of the mythical being about them, akin to a chameleon or centaur. In this course, we will not be engaging in a scientific exploration of interspecies communication, but examining theories of empathic translation, a process that sees translation not merely as the transformation of a text, but of the translator themself.

Emerging and classical theories of translation can offer a paradigm for engaging with plant and animal articulation: not language as such, but different forms of articulation perceived through the senses, one in which our hearing and seeing, "once intertwined and attentive to the calls and cries of animals, all but disappeared with the invention of the alphabet, retreating into a kind of silence."

In David Abram's words: "By giving primacy to perception we can see the natural world, not as inert and passive, but as dynamic and participatory. The winds, rivers and birds speak in their own way (if we listen); the sounds of nature have informed not only indigenous languages, but language in general. Humans are but one being intertwined with other beings and 'presences.' This perspective sees the landscape as a sensuous field, and human perception as but one point of view that is in reciprocity, in expressive communication, with other points of view and ways of being."

How can theories of translation help us make sense of this new view of a world teeming with language and sentience? What theories abound in reference to the multiplicity of "language," even as Walter Benjamin would argue for a "universal (human) language"? What practical tools does translation studies offer, and what bridges can it forge between the disciplines? The first half of the seminar focuses on key theoretical concepts relevant to the history and practice of translation. In the second half, students will engage in translation experiments that intersect with their own artistic/design practice. The final project should be considered a first draft of something that could later develop into a larger work.
Â
The course will be taught in English and German.
Â
This seminar is ideally suited to students interested in Literature, Translation Theory, Translation, Cultural Studies, Critical Theory, Creative Writing, Post-humanism, Trans-humanism, Eco-criticism, and the More-than-Human Turn.
Â
Teachers
Dr. Zaia Alexander"
https://www.burg-halle.de/en/course/l/talk-with-the-animals-translation-in-a-more-than-human-world
#Metaglossia
#metaglossia_mundus
#métaglossie
Â
"The poet and writer Coleman Barks died last month at the age of 88. He was well known for his translations of the works of the 13th-century Persian mystic poet Jalaluddin Rumi. Coleman Barks even appears on a Coldplay album, "A Head Full of Dreams," reading a translation of Rumi's "The Guest House."
Here & Now's Lisa Mullins talks to Coleman Barks's sister, Elizabeth Barks Cox, who is also a writer, about his life and work.
This segment aired on March 12, 2026." https://www.wbur.org/hereandnow/2026/03/12/coleman-barks-obituary #metaglossia #metaglossia_mundus
Not so fast: A University of Houston professor of psychology is disputing a high-profile study claiming that people who live in multilingual countries show healthier brain aging, arguing instead that wealthy countries, with the best healthcare systems, offer longer life expectancies.
"University of Houston professor of psychology Arturo Hernandez is disputing a high-profile study published in the journal Nature Aging claiming that people who live in multilingual countries show healthier brain aging. Though the study got lots of attention, Hernandez reports in the journal Brain and Language that the findings warrant cautious interpretation and a reframing of their public health implications.
Â
"We took a closer look and argued that the study's conclusions go further than the data can support," said Hernandez.

According to Hernandez, the countries with high multilingualism in Europe also happen to be the wealthiest, with the best healthcare systems and the longest life expectancies, sometimes by as much as six years. When those structural differences are accounted for, the apparent language effect largely disappears.

"There is a real temptation in science to find individual behavioral solutions: learn a language, do a puzzle, take a supplement - all are suggested as solutions to problems that are fundamentally structural," said Hernandez. "When those solutions get oversold, it can erode public trust in science and distract from the harder work of building the conditions that actually support healthy aging: access to healthcare, good nutrition, economic stability. We wanted to make sure the public gets an accurate picture of what the evidence shows."
Â
In the original article, researchers examined records in 27 European countries and claimed that multilingualism protects against accelerated aging whereas monolingualism increased risk of accelerated aging.Â
Â
Countries with high multilingualism, like Luxembourg (82.5 years) and the Netherlands (82.5 years), have some of the highest life expectancies in the world. Meanwhile, countries with low multilingualism, such as Bulgaria (75.8 years) and Romania (76.3 years), lag nearly six or seven years behind.Â
Â
"A six-year gap in life expectancy is unlikely to be explained by language. World-class healthcare, superior early-childhood nutrition, higher occupational safety, and lower chronic stress offer a more parsimonious account: the same structural forces that produce longevity in general," said Hernandez, who points to Japan as another example.

As a largely monolingual society, it boasts an exceptional life expectancy of 84.5 years. "Low inequality, a healthy diet, and a robust universal healthcare system account for that advantage far better than language ever could," said Hernandez.

"As scientists, we do a disservice to the public when we promote individual behavioral hacks as substitutes for structural resources. Learning a language is a beautiful, culturally enriching endeavor. It connects us to others and expands our world. But we must be careful not to overpromise it as a clinical intervention for aging," Hernandez said.
Â
Journal
Brain and Language
Â
Article Title
Multilingualism and aging: Country-level patterns may not support individual-level causal claims
Â
Article Publication Date
9-Mar-2026"
https://www.eurekalert.org/news-releases/1119284
#metaglossia_mundus
#metaglossia
Â
"Published: March 11, 2026 12.16am SAST
Isabel Tello Fons, Universitat de València
Â
"—Tú también te enojarías si tuvieras una peluca como la mía —prosiguió el Avispón—. Se meten con uno, y uno, que no le gusta que le tomen la 'peluca', pues se enfada… ¡natural! Y entonces es cuando me entra la murria, me arrebujo debajo de un árbol y me quedo tieso de frío. Y, para aliviarme, cojo un pañuelo amarillo y me lo ato alrededor de la cara… ¡Oséase, como ahora! ¡Natural!".

This is how Ramón Buckley translated the voice of the Wasp in Lewis Carroll's novel Through the Looking-Glass, and What Alice Found There. The original recreates the London cockney dialect, closely tied to the working class, which Buckley transformed into a traditional Madrid "castizo" dialect, preserving the whiny, common tone of the character in Carroll's work:

"You'd be cross too, if you'd a wig like mine," the Wasp went on. "They jokes, at one. And they worrits one. And then I gets cross. And I gets cold. And I gets under a tree. And I gets a yellow handkerchief. And I ties up my face - as at the present".
Â
When we read a translated novel, we do not just follow a story: we hear voices. Voices that reveal who the characters are, where they come from and what place they occupy in their community. But what happens to those voices when they pass from one language to another? How do we translate the dialects, accents, rhythms and registers that form part of characters' deep identity? Addressing these questions is one of the most complex and least visible challenges in literature.

Voices that matter
The way characters "speak", what we call linguistic variation, encompasses features such as local vocabulary, slang, expressions particular to a community, forms of an older stage of the language, or distinctive ways of constructing sentences. These features are not ornaments; they are resources of characterization, fulfilling important narrative and stylistic functions.

A local dialect might serve a vindicating function; a rural accent might convey humour, tenderness or hierarchy; youth slang might signal closeness or belonging to a group; and historical speech places the reader in another era. If these voices disappear in translation, the character becomes flatter and the story loses part of its original fabric.

For example, in Adventures of Huckleberry Finn, Mark Twain differentiated his characters through seven distinct dialects, and in Oliver Twist, Dickens used the argot of thieves and ruffians to render the speech of the London underworld.

No direct equivalents
One of the greatest challenges of literary translation is that dialects are not interchangeable. There is no Spanish "equivalent" of the English of the American South, nor a Spanish dialect that corresponds exactly to that of Liverpool. Each linguistic variety is anchored in its own territory, history and social context.

That is why a literal translation of a foreign dialect would sound strange or even comical. If we swapped an English dialect for a real Spanish one, we would turn Huckleberry into an Andalusian, Canarian or Mexican boy and manipulate his original identity. Yet if that way of speaking is ignored and rendered in the standard language, his linguistic personality is lost.

Literary translation seeks equivalent effects: the reader should perceive the same social and emotional nuance as someone reading the original, even if different resources are used to achieve it.

Translation at its most human
The literary translator's task is not mechanical; it is an exercise in listening and interpretation. The translator asks questions such as: what effect does this voice produce on the reader of the original? Which linguistic features will achieve that effect in the translation? To what extent should a variety be marked at all?

The best solution may not be to point to a specific dialect, but to use a register slightly removed from the standard language, hinting at a social origin without culturally displacing the character. At other times, a lexical feature or a grammatical structure may be enough to recreate the atmosphere.

Every decision requires judgment and responsibility. Literature represents real social groups, and treating them with respect demands an ethical eye.

As I have found in my research (forthcoming), that ethical eye is something AI does not, for now, possess. AI does not "understand" the social implications of a character's way of speaking. It does not know when a dialect conveys marginalization or when it marks social hierarchy. It works by detecting statistical patterns, not human intentions.

When it is asked to translate non-standard voices, there tend to be two outcomes. Either the translated text comes out "clean", and a character who spoke with a local accent ends up speaking normatively, diluting their personality; or the AI imitates dialect markers but mixes incompatible slangs or deforms words arbitrarily, creating unwanted stereotypes, that is, caricatures.

Faced with the reflection and meticulousness that translating linguistic variation demands, AI produces quick answers that do not yet have the sensitivity to handle ambiguity, irony or cultural allusion.

Why we need decisions
Tools like AI can be very useful in the preliminary and complementary phases of translation: they make it possible to locate information quickly, compare real usage in large corpora, and identify patterns of style. However, if they tend to flatten voices, they will also flatten experiences. Used without oversight, they will cost us linguistic diversity and, with it, human diversity.

Linguistic varieties are not merely deviations from the standard: they are often minority or minoritized languages, vulnerable or at risk. Protecting them helps preserve our cultural heritage and a valuable plurality.

For voices to reach the reader without losing their identity, someone must listen to them and recreate them. That is an essentially human task. That is why, every time a literary translation lets us hear a different world, we are also saving a part of our cultural diversity.

A version of this article was published in Telos, the magazine of Fundación Telefónica."
https://theconversation.com/la-ia-puede-traducir-palabras-pero-no-voces-narrativas-275284
#Metaglossia
#metaglossia_mundus
#métaglossie
"In late 2025, generative AI crossed another critical threshold. Following GPT-5.1 in November, OpenAI released GPT-5.2 on 11 December, a model designed to generate adaptive, discipline-specific academic prose with fewer stylistic traces and greater structural variation. For universities, the concern was immediate: if AI can write fluently, unpredictably, and in discipline-appropriate academic language, does detectability still hold?
Early results show that it does.
How StrikePlagiarism responds to GPT-5.2
The release of GPT-5.2 reinforced a broader challenge facing higher education: AI development now outpaces institutional policy cycles. For StrikePlagiarism, this moment required immediate empirical validation rather than theoretical assumptions.
Within days of GPT-5.2 entering academic use, StrikePlagiarism.com was tested against newly generated and paraphrased GPT-5.2 texts under realistic academic conditions. The results were unambiguous:
Over 97% detection accuracy across GPT-5.2 outputs
False results below 1%, preserving academic fairness
Consistent performance after paraphrasing and stylistic diversification

Rather than relying on surface-level markers, StrikePlagiarism.com analysed behavioural consistency across longer academic texts, identifying patterns that remain statistically improbable in authentic student work. Reports delivered probability-based, side-by-side comparisons, providing educators with interpretable evidence rather than automated verdicts.
Why GPT-5.2 remains detectable
GPT-5.2 demonstrates strong control over academic conventions and avoids obvious repetition. However, analysis across extended submissions consistently revealed:
non-random reasoning structures,
unusually uniform transitions between claims,
absence of natural cognitive drift.

Individually, these signals are subtle. Taken together, they form a measurable behavioural profile. Detection no longer depends on awkward phrasing or stylistic errors, but on identifying improbably stable reasoning across complex texts. Fluency improves; invisibility does not.
Core advantages of StrikePlagiarism.comâs AI detection approach
StrikePlagiarism.com was designed to support institutions operating at scale, across disciplines and languages:
Multilingual AI-content detection at scale
AI-generated content is detected across 100+ languages, enabling consistent integrity standards in international and multilingual academic environments.

Proven accuracy against advanced generative models
Detection accuracy exceeds 97%, including paraphrased and stylistically diversified GPT-5.2 texts, demonstrating reliability under real academic conditions.

Ultra-low false-positive rates
False results remain below 1%, protecting students from incorrect attribution and ensuring that detection strength never compromises fairness.

Why AI detection is critical right now
GPT-5.2 makes one reality clear: the primary risk for universities is no longer obvious AI misuse, but large volumes of academically convincing AI-generated work entering assessment unnoticed. This is not a future concern â it is a present operational challenge.
StrikePlagiarism addresses this challenge at an institutional level. By combining high-accuracy AI behaviour analysis with transparent, probability-based reporting, StrikePlagiarism.com enables universities to respond now, not retrospectively. When academic decisions must be defensible at the moment they are made, evidence-based AI detection becomes essential infrastructure rather than an optional safeguard." 97% accuracy against GPT-5.2: inside StrikePlagiarism.com's detection results | THE Campus Learn, Share, Connect https://share.google/hA12nxsAaMGdPGqDX #Metaglossia #metaglossia_mundus #métaglossie
"Since the start of this year's Amar Ekushey Book Fair, readers and publishers have noticed a rise in Bengali translations of world literature classics, with the growing popularity of these works clearly reflected in readers' enthusiastic response.

Readers who feel less comfortable studying English texts but have an immense interest in enjoying literary works from diverse cultures and languages, transcending national boundaries, are searching for and purchasing translated works, said the publishers. According to them, they have published more Bengali translations of classics from other languages in response to readers' demand, but translation of Bangla literary classics into foreign languages has not increased as expected.

Baatighar always brings a good number of translations. Publisher of Baatighar, Dipankar Das, said, "There is always a demand for translated literature. It will increase further. One may not read English comfortably, but has a penchant for world literature. Translation helps them get the taste of world literature."

There are considerable allegations about the quality of many newly published translations. With the increasing popularity of translated books, the number of substandard translations is also rising. Responding to this complaint, Dipankar said that if a reader does not understand a translation, questions can be raised about its quality. Baatighar, however, publishes books by ensuring quality.

Salesperson of Seba Prokashoni, Azizul Hakim, said the sale of translated books has been going well for the last several years. "Translations and novels are our bestsellers. A big portion of our new books this time is translations. Average sales of these translated copies are getting better every day," he said.

Small or big, almost all publishing houses are bringing translations of novels, thrillers, detective series, biographies, theoretical works and historical books by world-famous authors into Bangla.

Also, many translated books are published without the permission of the original authors. As a result, the editing of these books is not done properly.

Translator Mostaq Sharif has translated "After Dark" by Japanese author Haruki Murakami. According to him, translated literature is enjoying a good moment as demand for such works gradually increases. He said, "Good to see that translation works are getting a good response from readers. It indicates the changing taste of readers. But the substandard works of translation are deceiving the bookworms. Publishers and readers need to be careful while selecting and purchasing books."

On the 12th day of the book fair, a discussion programme titled "Shahidullah Kaiser" was held at the main stage at 3pm with Syed Azizul Haque Chowdhury in the chair."
https://www.daily-sun.com/metropolis/862418 #Metaglossia #metaglossia_mundus #métaglossie
"Grants and Prizes for Promoting Italian Books and Translations (New Zealand)
UPCOMING GRANTS IN MARCH 2026!
Â
Deadline: 31-Mar-2026
Â
The Ministry of Foreign Affairs and International Cooperation offers prizes and grants to promote Italian language and culture abroad through literary translations, scientific works, and audiovisual productions. Each prize is valued at €5,000, targeting high-quality translations, publications, dubbing, or subtitling of works created or published since January 2025. Eligible applicants include publishers, translators, production companies, and cultural institutions, with applications due by 31 March 2026.
Overview
This initiative aims to strengthen the global dissemination of Italian culture by supporting:
Â
Translation and publication of Italian literary and scientific works into foreign languages
Â
Production, dubbing, and subtitling of Italian short and feature films, as well as television series
Â
Promotion of contemporary Italian literature and audiovisual content
Â
Expansion of cultural exchange and international reach
Â
The program ensures that both literary and audiovisual works maintain high-quality standards and reach wider international audiences.
Â
Prize Details
Maximum number of prizes for 2026: 10
Â
Prize value: €5,000 each
Â
Language distribution:
Â
Spanish: 5 prizes
Â
Arabic: 1 prize
Â
Chinese: 1 prize
Â
French: 1 prize
Â
English: 1 prize
Â
German: 1 prize
Â
Eligible Works
Literary and scientific works (including e-books) translated and published in a foreign language on or after 1 January 2025
Â
Audiovisual productions (short/feature films, TV series) produced, dubbed, or subtitled on or after 1 January 2025"
https://www2.fundsforngos.org/arts-culture/grants-and-prizes-for-promoting-italian-books-and-translations-new-zealand/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Zoom announced on Tuesday, March 10, that it is bringing real-time audio translation to Zoom Meetings, allowing users to understand speakers in different languages during calls. The video communications platform also unveiled a new feature aimed at detecting synthetic audio or video in Zoom Meetings.
The new features coming to Zoom Meetings are among a handful of new AI-powered capabilities coming to Zoom's enterprise-grade offerings, including Zoom Workplace, Zoom Phone, and Zoom CX.
The live voice translation feature will let Zoom users speak in their native language while others on the call hear the translated speech in their preferred language in real time. The feature is currently available in five languages, with support for more languages coming soon.
Zoom first gained widespread recognition during the COVID-19 pandemic, when remote work and virtual meetings became the norm. Since then, the company has looked to become more than a video conferencing platform by launching AI-powered productivity tools and customer support products for enterprises. It has sought to define its competitive edge in the crowded AI industry by taking a federated approach where multiple AI models, including Zoom's own models and those from OpenAI, Anthropic, and Meta, are dynamically selected to provide cost-effective solutions.
The company's new on-call deepfake risk detection feature arrives as AI-driven online scams continue to surge. It could play a key role in protecting users from "digital arrest" scams, many of which rely on deceptive video calls to trick victims.
"The next phase of enterprise AI will be defined by the ability to move from conversation to action. Zoom's agentic AI platform is designed to orchestrate action across systems, turning every meeting, call, and customer interaction into a trigger for workflow automation," said Velchamy Sankarlingam, president of Product & Engineering at Zoom.
Alongside these Zoom Meeting features, the company also introduced a suite of AI-powered office apps such as AI Docs, Slides, and Sheets that can be used to generate document drafts, spreadsheets with data, or presentations based on meeting transcripts and data from other services.
Zoom further said that AI avatars, announced last year, will start becoming available to users later this month. The feature lets users create photo-realistic, AI-generated avatars of themselves to appear in online meetings on their behalf. Zoom's AI Companion 3.0, its latest AI assistant on the web unveiled last year, will soon be accessible through a desktop app. The AI assistant is also being integrated across the Zoom Workplace app, Zoom Business Services, and Workvivo, its app for employee communication.
In addition, AI Companion 3.0 can be integrated with third-party platforms such as Slack, ServiceNow, Box, Google Drive, and OneDrive, enabling the AI assistant to synthesise enterprise data across applications and provide insights from multiple data sources.
Amid the rising popularity of AI agents, Zoom is letting users create and deploy custom as well as pre-built AI agents through no-code, natural language prompts. These custom AI agents can act on users' behalf to automate workflows across third-party systems such as Salesforce, Slack, and ServiceNow, the company said.
For developers, Zoom announced a new suite of enterprise-grade AI APIs which can be used to build apps that leverage the transcription, translation, summarization, deep reasoning, and image-processing technologies powering Zoom's own products.
In Zoom Phone, the company is rolling out agentic workflows that help enterprise clients automatically execute tasks such as drafting emails or sending out summaries. It is also adding new SMS capabilities for the 24/7 virtual receptionist to handle customer engagements via text, answer questions, collect information, support scheduling flows, and escalate to a human when needed." https://indianexpress.com/article/technology/tech-news-technology/zoom-new-live-voice-translation-deepfake-detection-video-calls-10575835/ #Metaglossia #metaglossia_mundus #métaglossie
County officials say the technology might limit burnout among call takers, but AI researchers are skeptical.
Â
"Onondaga County begins using AI to translate, transcribe and summarize 911 calls
Â
by Laura Robertson
March 11, 2026
Â
The old Onondaga County court building is home to the county legislature. Credit: Mike Greenlar | Central Current
Onondaga County's 911 center recently began using artificial intelligence technology to assist with calls. The technology allows for live transcription, location identification, and call summarization, and will in the future provide live translation, according to county executive spokesperson Justin Sayles.
Â
The county will spend $350,000 this year on the technology, said Sayles, and the county will consider annual renewals of the product. The funding was approved as part of the Onondaga County Department of Emergency Communications' 2026 budget.
Â
The technology was developed by Prepared, a company that makes AI products.
Â
The county's 911 center has a significant staffing shortage, according to Sayles and Emergency Communications Commissioner Julie Corn. County officials hope AI will help prevent burnout.
Â
"911 call-takers take call after call, day after day with the stress that there is zero room for error," said Sayles. "Having tools that aid in their success and support them to be their best is one way we can limit burnout."
Â
Â
While the county expects the technology to help with burnout, some experts are more skeptical about AI's utility for 911 call takers. Concerns range from AI's ability to triage emergency calls to its ability to effectively translate for non-English-speaking callers.
Â
In an October presentation to the public safety committee, Corn said that the use of the technology "falls right in line with the county executive's initiative to have AI programs in his vision."
Â
The AI 911 system works like this: Non-emergency phone calls, those that come into the 911 center on ten-digit numbers, will be transferred directly to an AI bot. If "key emergency words or scenarios" are mentioned, the AI bot is trained to transfer the call back to a human, said Sayles. He added that other calls would be determined to be non-emergencies "in the same way they are now" and transferred to the bot.
Â
Prepared will provide a live transcription of emergency 911 calls, but a human will write the messages sent to emergency responders. Call takers will still be expected to take notes on their calls. The call recordings, notes and AI-drafted transcript will all be saved separately and can be compared, Sayles said. Only the notes can be edited, he said.
Â
Ben Winters, the director of AI and data privacy at the Consumer Federation of America, said he would be "very worried" about the potential for false negatives if a bot were to triage emergency calls.
Â
Winters also said AI is not equally good at all forms of transcription. When people are rushed, crying, using headphones or on speaker, it is more likely to miss what is being said. He added that 911 callers might not feel comfortable sharing exactly what is going on, and that call-takers are trained to try to get needed information from callers.
Â
Winters said the redundancy in record keeping was good but questioned when the AI transcription might actually be used.
Â
âWhat is the record that they go with?â he asked. âWhat are the ones they report and act on?â
Â
Onondaga County is also linguistically diverse. As of 2024, the most commonly spoken languages among people with low English proficiency include Ukrainian, Nepali and Burmese. Sayles said that the technology would be able to translate all these languages.

But the AI tools powering translation are not always representative of how native speakers speak, said Aliya Bhatia, a policy analyst at the Center for Democracy and Technology who researches multilingual AI. The AI powering translation services may have fewer natively created digitized examples of some languages to train on, she said, which could mean that translations might feature overly complicated or outdated words that people do not use regularly and might not understand.

Bhatia gave an example: if AI translates the English word "vaccine" to an equivalent of the older "inoculation," the listener might not understand those words. She added that translation tools sometimes even make up new words when they don't know the correct one.

Translation tools should be developed and evaluated with local language speakers and community-based organizations, Bhatia said.

"AI-based translation tools may come in handy when we need legible translations in a pinch, but we shouldn't confuse them as capable of the fluent, nuanced, and accurate translations people need when they are seeking emergency and life-saving services," said Bhatia.

The county currently translates calls using Voiance, a program that uses live interpreters. The county will still contract with Voiance, said Sayles. In the future, a bot will likely translate live, but in the meantime, the county will work to ensure the change in translation methods doesn't leave service gaps.

Prepared is used in other 911 departments across the country, including in Baltimore. Sayles said Onondaga County had "solid working relationships" with other counties using Prepared. So far, those counties have raised no concerns, he said.

Prepared was recently purchased by Axon Enterprise, a company that develops technology for the military and police.

In a press release shortly after the acquisition, Axon touted Prepared as a means of "owning the first 120 seconds" of an emergency call. Axon believes the technology could help supervisors to see "risk patterns and coaching opportunities" in their callers, the press release said.

One of Axon's other AI products, the controversial Draft One generative AI police report writer, has been accused of being "designed to defy transparency" by the nonprofit Electronic Frontier Foundation.

Sayles did not directly say whether the county would integrate other Axon AI programs, like Draft One, into its operations, but said the county is "constantly evaluating opportunities to integrate AI" into county operations.

"The AI technology continues to improve," said Sayles. "At the end of the day, the call taker is still fully trained in listening to calls and capturing and translating information for dispatch.""
https://centralcurrent.org/onondaga-county-begins-using-ai-to-translate-transcribe-and-summarize-911-calls/
#Metaglossia
#metaglossia_mundus
"...Although Africa is home to more than 2,000 languages, the vast majority of the machine learning models that power today's AI systems are trained primarily on English, Mandarin, and a few other globally dominant languages. For millions of people on the continent, this means the next generation of digital tools risks remaining inaccessible.
Nigeria is now taking a major step to change that.
The National Information Technology Development Agency (NITDA) has entered into a partnership with NKENNEAi, an artificial intelligence platform for African languages, to accelerate the development of infrastructure designed specifically for African languages.
The collaboration aims to develop scalable translation and language technologies capable of supporting government services, health systems, financial platforms, and digital applications for Nigeria's entire multilingual population.
With more than 500 languages spoken in Nigeria, language remains one of the main barriers to digital inclusion and access to technology. The NKENNEAi-NITDA partnership aims to close this gap by developing AI systems trained specifically on African languages and their tonal structures.
From language learning to African AI infrastructure
NKENNEAi grew out of NKENNE, one of the fastest-growing African language-learning platforms.
Founded with the goal of preserving and teaching African languages, NKENNE has become a global platform with more than 400,000 users learning languages such as Igbo, Yoruba, Swahili, Hausa, Twi, Somali, and Nigerian Pidgin.
As the platform grew, its expanding corpus of text and voice data in African languages laid the groundwork for something much larger: the development of artificial intelligence systems capable of understanding African languages at scale.
That research led to the creation of NKENNEAi, a multilingual AI platform focused on building the infrastructure needed for African-language artificial intelligence.
Michael Odokara-Okigbo, CEO of NKENNEAi, said NKENNE's growth revealed a much larger opportunity: building the technological foundations of AI for African languages. "NKENNE began as a cultural mission to preserve and teach African languages," Odokara-Okigbo explained. "As our community grew to hundreds of thousands of learners, we realized that the data and linguistic knowledge we were developing could power something on an entirely different scale. NKENNEAi's goal is to build the infrastructure that will allow African languages to exist, grow, and thrive within artificial intelligence systems."
A different approach to training AI on African languages
Most global AI models struggle with African languages because they lack the training data and linguistic frameworks needed to understand tone, dialectal variation, and contextual meaning.
NKENNEAi is taking a different approach.
The company is building specialized data-processing pipelines, linguistic annotation systems, and speech datasets designed specifically for African languages, enabling machine learning models that accurately capture tonal meaning and dialectal nuance.
This methodology includes large-scale bilingual sentence datasets for machine translation, annotated speech datasets for speech-to-text systems, tone-aware linguistic labeling that preserves meaning across dialects, and community-driven linguistic validation with native speakers.
By combining linguistic expertise with machine learning infrastructure, NKENNEAi is developing tone-sensitive AI models capable of understanding African languages with far greater accuracy than traditional translation systems.
The platform supports technologies such as text-to-text AI machine translation, speech-to-text transcription, text-to-speech synthesis, and multilingual AI APIs for developers and businesses.
These tools allow startups, governments, and businesses to integrate African-language support directly into their digital platforms.
The system currently focuses on languages such as Yoruba, Igbo, Hausa, Swahili, and Nigerian Pidgin, with plans to extend coverage to other African languages.
Backed by global research support
NKENNEAi's development has also benefited from international research funding, including several grants from the US National Science Foundation (NSF).
Under the NSF's Small Business Innovation Research (SBIR) program, funding was awarded to ESM Global Productions, the company behind NKENNEAi, to advance the development of a multilingual AI translation platform for African languages.
In 2024, the company received a $1 million NSF Phase II grant to scale its African-language translation API and to continue developing speech and language models designed specifically for tonal languages.
This work supports the development of multilingual translation models for African languages, speech-to-text systems trained on African voice datasets, text-to-speech voice models, and scalable APIs enabling the integration of African languages into digital platforms.
Together, these efforts are helping build one of the largest structured datasets and AI training pipelines focused specifically on African languages.
Aligned with Nigeria's national AI ambitions
The partnership with NITDA fits into Nigeria's broader digital strategy, led by the Federal Ministry of Communications, Innovation and Digital Economy under Dr Bosun Tijani, Nigeria's Minister of Communications, Innovation and Digital Economy.
Before joining government, Tijani co-founded Co-Creation Hub (CcHUB), one of Africa's most influential technology innovation hubs. His ministry has led initiatives to position Nigeria as a global hub for artificial intelligence and digital innovation.
Programs such as the "3 Million Technical Talent" initiative aim to train millions of Nigerians in digital and AI-related skills while strengthening the country's technology ecosystem.
Through its collaboration with NKENNEAi, NITDA is exploring how locally developed AI infrastructure can help serve Nigeria's multilingual population while strengthening the country's national AI capabilities.
Building the workforce for African-language AI
Beyond model-building, the partnership also aims to develop the workforce needed to sustain African-language AI systems.
Planned initiatives include training AI data annotators, natural language processing engineers, and public-sector technical teams who will support the development of language datasets and the deployment of the system.
These programs aim to ensure that African-language AI is not only built for the continent but built by the people who understand its languages and cultures best.
Why this matters
Africa's digital economy is expanding rapidly, but language remains one of the main barriers to digital access.
Millions of Africans interact more easily in their indigenous languages than in English, yet most digital platforms remain designed primarily for English-speaking users.
Without linguistic accessibility, services such as healthcare communication, financial tools, and government platforms remain hard to reach for much of the population.
AI-powered language infrastructure could change that by allowing platforms to communicate with users in the languages they speak every day.
Competing in the global language AI race
Global technology companies are beginning to recognize the importance of African languages.
Companies like Google have recently extended their AI and search support to languages such as Yoruba and Hausa, reflecting growing interest in African language technologies.
But while global companies are starting to integrate African languages into their systems, NKENNEAi is focused entirely on building AI infrastructure designed specifically for Africa's linguistic complexity.
"African languages are deeply tonal, contextual, and culturally rich," said Odokara-Okigbo. "Building AI that truly understands them requires infrastructure designed specifically for these languages. Our mission is to ensure that Africa does not merely consume artificial intelligence, but builds the foundational systems behind it."
Building Africa's linguistic future
The partnership between NKENNEAi and NITDA marks an important step toward ensuring that African languages are fully represented in the global AI ecosystem.
The initiative will be rolled out in stages, through a phased approach that includes pilot integrations with government agencies, expansion to additional languages, workforce training programs, and the eventual development of broader national language AI infrastructure.
By combining government support with private-sector innovation, the initiative aims to position Nigeria as a global leader in AI infrastructure for African languages.
For NKENNEAi, the mission goes further still: building the technological foundations that keep African languages vibrant, accessible, and usable in the age of artificial intelligence." https://techcabal.com/fr/2026/03/09/Nitda-s%27associe-%C3%A0-Nkennea/ #Metaglossia #metaglossia_mundus #métaglossie
"Raising the question of gender-fair language means stepping into a heated debate that, in practice, often oscillates between hostility on principle and sheer unfamiliarity. Yet the subject deserves better, because building greater equality between the sexes also requires understanding how "our language habits" make us see the world "through a masculine prism." That is the reflection the three psycholinguists Pascal Gygax, Sandrine Zufferey and Ute Gabriel invite us into in Et si on arrĂȘtait de penser au masculin ?, an expanded version of their earlier book Le cerveau pense-t-il au masculin ?, published in 2021. A scientific, clear and accessible approach that invites us to recover a language that is less androcentric and more inclusive. A particularly welcome read for March 8, International Women's Rights Day.
A scientific approach that speaks to non-specialists
When a debate divides people, the authors explain, one should opt for an "evidence-based" approach, that is, an approach "grounded in scientific data." To examine social phenomena related to language, these data come from psycholinguistics, "a discipline at the crossroads of linguistics (the study of language) and experimental psychology (the study of observable behaviors and mental processes)."
Such a project might sound daunting: unfamiliar concepts? complex terminology? arduous demonstrations? One would be wrong to worry, because everything is done to make the scientific evidence accessible to non-specialists, the book's primary intended audience. Sprinkled with "little psychology experiments to try with those around you," which could also be used in classroom activities, and punctuated with short recaps ("What to take away from this chapter?") summarizing the essentials of each part, it is at once well documented, illuminating and playful.
And it reads with great curiosity and interest, as some of the study reports show, in surprising and instructive ways, how much "language [is] master of our thought." For the same monetary value, do we rate our satisfaction the same way if we are rewarded with a "small coin" or "a coin"? Do we interpret infants' cries the same way depending on whether we are told they are girls or boys? Do we find a magic trick equally successful if we believe a man or a woman performed it? So many experiments that show how a word activates representations in our brain that are difficult to control, even when they are irrelevant.
Language biases, products of gender stereotypes
This link between language and thought, the book explains, was brought to light in the early 20th century by the "Sapir-Whorf hypothesis," also known as "cultural relativism." These two anthropologists argued that the way we perceive the world, colors in particular, depends on the language we have to name it; this theory inspired the Orwellian concept of Newspeak in 1984. Though the hypothesis has been debated, it made it possible to question the role of language and to understand that language creates biases that "influence not only how we see the world, but also how we think and how we behave."
These language biases reflect the society we live in. Since that society tends "to regard men [...] as the norm of our species" and "masculine norms [...] as neutral standards," our language practices are subject to androcentrism and shaped by gender stereotypes. We thus automatically associate certain words and traits with girls/women or boys/men, our brains having grown used to activating them, and we learn, without realizing it, to look at the world through a distorting, masculine prism with discriminatory effects.
For example, "there is a stereotype according to which women are chattier than men." In reality, the scientific literature shows the opposite: women speak for less time and less often, yield the floor more readily, interrupt conversations less... But the stereotype is so powerful that it distorts our perception. Two researchers showed that when we listen to a conversation between two people, even if each has strictly the same speaking time, we perceive the woman's share, though identical to the man's, as greater. A lesson worth remembering when thinking about chatter in the classroom...
Ending the pseudo-generic value of the masculine
"Certain more formal aspects related to French grammar," a gendered language, further reinforce these inegalitarian biases. This is especially true of the masculine grammatical gender. An ambiguous form, the masculine is supposed to cover two different senses: a "specific" sense, referring only to the masculine, and a "generic" sense, also called universal, which, neutralizing the gendered sense, would designate women as well as men.
But does the generic masculine really activate as many feminine representations as masculine ones? All the experiments conducted in experimental psychology, and the book recounts several, show that it does not. In practice, "the so-called 'generic' interpretation of the masculine is theoretically possible, but very difficult for our brain to adopt," so strongly does it clash with the language biases the brain acquires very early. One research team has even shown that "the masculine = man association begins to appear in children around the age of three."
In reality, if the masculine won out, including in agreement rules from the 17th century onward, it is not because it carries the value of a degendered universal neutral, but because, according to the grammarians of the time, it is "nobler [and] prevails alone against two or more feminines" owing to "the superiority of the male over the female." It is this context, "steeped in androcentrism, even misogyny," as Eliane Viennot, professor emerita of French Renaissance literature and specialist in the history of the language, showed in her book Non, le masculin ne l'emporte pas sur le féminin, that drove "the evolution of the language toward a dominant masculine" by defeminizing it.
A plea for a less exclusionary language
This constant use of the masculine is not without consequences. For while the use of the universal masculine does not create sexism, it "amplifies it like a magnifying glass." In terms of career guidance, the effects are devastating. It is hard, when you are a girl or do not recognize yourself in certain masculine social norms, to picture yourself in a profession gendered only in the masculine, especially one traditionally practiced by men. By contrast, every study shows that feelings of legitimacy, efficacy and success are considerably strengthened when the linguistic form makes both the feminine and the masculine explicit. Naming a profession in the feminine as well thus opens a mental door toward a possible future for everyone.
The book therefore invites us to demasculinize language, to make it non-exclusionary. Either with neutralization tools: epicene terms (words identical in the feminine and the masculine), degendered rephrasing, linguistic innovations... Or with refeminization tools: double marking (whose effectiveness, amply documented, is no longer disputed), contracted forms using the midpoint... Ongoing studies show that, contrary to some of the criticism leveled at them, these forms are not off-putting, since the eye gets used to them quickly (on average by the third occurrence). They may even pose fewer difficulties for people with dyslexia than many of the arbitrary complexities of the French language.
In any case, the midpoint is only one tool among others; the authors themselves do not use it in the book. Many other devices exist; one just has to take them up. For while making everyone visible by opposing linguistic sexism will not suffice to erase gender inequalities, it is a necessary step in confronting the power relations that this masculine prism continues to impose.
Claire Berest
Et si on arrĂȘtait de penser au masculin ? Comment voir le monde sous un autre genre, Pascal Gygax, Sandrine Zufferey and Ute Gabriel. On the Ă©ditions Le Robert website.
Interview with Eliane Viennot available on the Café pédagogique website.
"Guide de communication égalitaire: a tool to support educational teams." Article available on the Café pédagogique website." https://www.cafepedagogique.net/2026/03/09/le-prisme-masculin-de-la-langue-et-ses-effets/ #Metaglossia #metaglossia_mundus #métaglossie
"Last year at the Vatican Library, I had the chance to see a portion of the Bible with an incredible history. It wasn't the famous Codex Vaticanus but a translation of the Gospels into Persian from the 1740s.
While a translation of the Gospels into the language of a Muslim empire is itself noteworthy, the history behind this particular text is even more remarkable. It represents one of two times when the ruler of Iran (or Persia, as it was called by the West before 1935) praised the Bible and furthered its spread in the region.
At a time when Iran is often associated with hostility toward Christianity, these episodes remind us that God can work through unlikely and even evil leaders. I find encouragement, and a prompting to pray, when I reflect on unexpected ways God used infamous Iranian leaders to spread the gospel. Let me introduce you to two of them.
Nader Shah (1688-1747)
Iran's most ruthless leader in its history arguably was Nader Shah, who ruled Persia from 1736 to 1747 and led a constant stream of military campaigns. His sack of Delhi in 1739 perhaps best demonstrated his military might and brutality. After taking the city, a revolt arose that the shah crushed, resulting in the deaths of up to 20,000 civilians.
The shah, characterized as a "notorious despot and mass murderer who wrought destruction on a large scale and ruined his country," also brought together Jewish, Catholic, and Armenian scholars in Persia to translate the Old and New Testaments. This included the copy of the Gospels that Catholic missionaries sent to the Vatican Library.
I find encouragement, and a prompting to pray, when I reflect on how God has used Iranian leaders to support the spread of the gospel.
After the missionaries completed translating the Gospels, they went to present the translation to Nader Shah. As they waited an hour for an audience with the shah, they saw 18 people led to his chamber who later were carried out as lifeless bodies, having been strangled. With a trepidation reminiscent of Esther approaching the Persian King Ahasuerus, they entered the shah's court expecting martyrdom. However, the shah received the Persian translation and rewarded them with silver equivalent to a few years' wages.
Fath-Ali Shah Qajar (1772-1834)
If Nader Shah was one of the most ruthless leaders of Iran, Fath-Ali Shah Qajar was perhaps one of the most opulent. He ruled for a relatively stable period over three decades from 1797 to 1834. He's easily recognizable in portraits with his long beard, thin waist, and bejeweled attire.
In 1812, evangelical missionary Henry Martyn completed a translation of the New Testament into Persian. Martyn, who knew William Wilberforce, Charles Simeon, and William Carey, worked tirelessly in Shiraz, Persia, to translate the New Testament.
When he finished, he attempted to present a beautiful bound copy to Fath-Ali Shah. Martyn reached the shah's encampment but couldn't enter his court to present the New Testament. However, one secretary read to the shah three tracts Martyn had written to present the gospel to Muslims. Martyn died four months later, at the young age of 31, while trying to return to England.
While Martyn didn't live to see it, the British ambassador to Persia presented his Persian New Testament to Fath-Ali Shah in 1814. After reviewing the New Testament, the shah sent a letter commending it. He asserted that Martyn had translated the text "in a style most befitting sacred books, that is, in an easy and simple diction." He said he'd command his attendants to read him the New Testament from beginning to end and support its distribution around Persia. Those who were "virtuously engaged" in spreading the New Testament and teaching its meaning, the shah said, would be "deservedly honored with . . . royal favor."
While there are certainly elements of diplomatic flattery in this letter, the shah's approval had far-reaching consequences. Throughout the 19th century, missionaries like Peter Gordon and William Glen distributed hundreds of copies across Persia with a relative degree of freedom.
God's Sovereignty and Iranian Leaders
These two stories of Persian leaders supporting the Bible's translation and distribution are surprising in light of current religious restrictions in Iran. But it's not that surprising in light of biblical history.
In the Old Testament, the Lord sovereignly uses Persian leaders to protect his people and further his covenant plan for redemption. King Ahasuerus circulates a letter that saves the Jewish people from certain destruction (Est. 8:11-13). Nehemiah receives a letter of support from the Persian King Artaxerxes to help rebuild the walls of Jerusalem (Neh. 2). King Cyrus sends incredible amounts of gold and silver to support the rebuilding of the temple in Jerusalem (Ezra 1:2-4).
In the Old Testament the Lord sovereignly uses Persian leaders to protect his people and further his covenant plan for redemption.
God sovereignly works to move kings and rulers, even the most pagan kings and the most ruthless rulers, to do his will. In Ezra 1:1, we see that the Lord "stirred up the spirit of Cyrus king of Persia." The connection between God's sovereignty and his directing of a Persian king is crystal clear in Isaiah 44:24-45:25. This passage first emphasizes that it is the Lord "who made all things, who alone stretched out the heavens" (v. 24). Turning to Cyrus, the Lord states that he "shall fulfill all [God's] purpose" (v. 28). In the next verse, Cyrus is referred to as God's anointed and the one "whose right hand [God has] grasped" (45:1).
Let's pray for the next ruler of Iran. Pray that, as the Lord has done before in history, he'd use the next leader to protect his people and further the spread of the gospel message. Both Christians and Muslims have suffered greatly in Iran in recent decades, yet the gospel is still advancing.
We should pray for an end to suffering in Iran. But we can also trust that amid uncertainty, missiles, and war, our sovereign God guides the hand and thwarts the will of rulers."
https://www.thegospelcoalition.org/article/iran-leaders-praised-bible/ #Metaglossia #metaglossia_mundus #métaglossie
"Adil Semeykhanuly has reportedly been sentenced to six-and-a-half years for his ânegative interpretationâ of Kazakh poet Abai Kunanbaev.
by Serikzhan Bilash and Tilek Niyazbek
Â
A respected Kazakh language editor and cultural researcher, Adil Semeykhanuly, has reportedly been sentenced to six and a half years in prison in Chinaâs Xinjiang region after more than a year of detention and house arrest, according to Kazakh language media and colleagues familiar with the case.
Â
Colleagues say Adil Semeykhanuly received a six-and-a-half-year prison sentence over allegations of a "negative interpretation" of the teachings of the Kazakh poet Abai Kunanbaev (1845–1904). Semeykhanuly, a long-time editor at the Shynzhan (Шынжаң) newspaper and a recognised scholar of Abai, was first detained in January 2024. Sources say he spent seven to eight months in custody before being placed under house arrest due to insufficient evidence, according to relatives.
Â
On 20 August 2025, he was reportedly sentenced on charges that he "negatively propagated the teachings of Abai" and "formed a separate public opinion," accusations observers describe as politically broad and vague.
Â
Kazakh outlets report that four other Kazakh intellectuals working in the same media environment were also arrested and later sentenced:
Â
Tegis Zäybekuly – Deputy Editor of the Kazakh Editorial Department; arrested in October 2024. His sentence remains unknown.
Murat Ybyraiuly – Translator-Reporter; arrested in August 2023, charges not publicly disclosed; sentenced to 5.5 years.
Oñalğan Múlikuly – Translator-Reporter; arrested in January 2023; sentenced in 2024 to 7 years.
Janibek Jaudatuly – Translator; arrested in January 2023; sentenced in 2024 to 7.5 years.
Â
Colleagues describe the series of arrests as part of an intensifying crackdown on Kazakh-language publishing, translation work, and cultural expression in Xinjiang.
Â
Semeykhanulyâs participation in a 2005 Chinese delegation to Kazakhstan for the 160th anniversary of Abai in Semey was reportedly cited as one of the incidents that Chinese authorities scrutinized. He was widely regarded as a mentor to young journalists and a prolific contributor of cultural essays.
Â
Â
Monument to Abai in Beijing, with Chinese and Kazakh flags.
A survey of Chinese public court databases, government bulletins, and legal notices found no official records confirming the arrests or sentences of Semeykhanuly or the four other intellectuals. The absence of public documentation is common in politically sensitive cases in Xinjiang, where legal processes remain opaque. The news has, however, been confirmed by Kazakh sources.
Â
The case also stands in contrast to cultural diplomacy between the two nations: China maintains an Abai monument in Beijing, and state media frequently describe the poet as a "bridge of friendship." Kazakhstan hosts at least five Confucius Institutes established in partnership with Chinese universities. These public gestures of mutual cultural respect sit uneasily alongside the sentencing of an Abai scholar over an alleged "negative interpretation" of the poet's teachings.
Â
Human rights organizations continue to report systemic pressure on Uyghur, Kazakh, and other Turkic intellectuals, including charges related to ideology, cultural activity, or perceived separatism. Families often report limited access to information and fear repercussions for speaking publicly.
Â
As of publication, Chinese authorities have not acknowledged the reported arrests or sentences. Requests for comment were sent to Xinjiang regional authorities and the Chinese Embassy in Kazakhstan."
by Serikzhan Bilash | Mar 9, 2026 | News China
https://bitterwinter.org/kazakh-scholar-sentenced-in-xinjiang-for-misinterpreting-a-poet/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Literature can bring culture and emotions to life and open up new perspectives – and the same is true for learning German. In this episode, we explore how texts, stories and theatre can enrich the experience of learning the language. We also talk about a theatre project our studio guest, Jonas Teupert, lecturer and director of the German program at the University of Melbourne, staged together with students. Also present is the student who played the lead role in the production: Anindo Minifie. Published 9 March 2026 2:29pm By Julia Grewe" https://www.sbs.com.au/language/german/en/podcast-episode/how-literature-and-theatre-bring-language-to-life-episode-6/7k6cxjh80 #Metaglossia #metaglossia_mundus #métaglossie
"Held on the theme "Terminology development in the Ghanaian language," the workshop and lecture were attended by students from 21 colleges of education, graduate students from several universities, traditional leaders, entrepreneurs, and policymakers, among other stakeholders.
Â
Prof Appah noted that Ghana had adequate human, linguistic and institutional resources for terminology development but was held back by inadequate funding.
Â
He made a case for the introduction of a government-sponsored national terminology programme and a register to streamline the development of terminologies.
Â
Through the programme, government would provide funding for research and other critical activities for the gathering, development, and dissemination of the terminologies, he proposed.
Â
Prof. Appah made a direct call on the Ghana Tertiary Education Commission (GTEC), Ghana National Research Fund, and GetFund to help fund their activities on creating terminologies.
Â
While appealing to government, he entreated the Linguistic Association of Ghana to demonstrate their seriousness by forming a research team to start work.
Â
Prof Appah stressed that local terminologies would help to decolonise education in Ghana, demystifying complex concepts taught in a foreign language and removing barriers to learning.
Â
"The people and teachers of the languages we teach who don't speak English are not participating in knowledge creation and so if you don't have the capacity to think, practice, read, and access knowledge in your own language, then you lack linguistic sovereignty," he added.
Â
The UG principal proposed a teacher education and assessment reform that would promptly adopt new creations.
Â
Dr Vincent Erskine Aziaku, Head of Department of Ghanaian languages and Linguistics, explaining the purpose of the workshop, maintained that Ghana remained under colonisation as it continued to depend on a foreign language.
Â
The problem, he noted, had been the lack of terminologies, intimating that "terminology development is the only way we can succeed in having our language."
Â
Dr Samuel Owoahene Acheampong, Faculty of Ghanaian Languages Education, University of Education, Winneba (UEW), underscored the need for standardisation to ensure coherence and consistency in the terminologies.
Â
He appealed to government to put together a standardisation council to verify all terminologies to ensure authors were not producing contradictory contents.
Â
Mr Scoon Boakye Appiah, Founder and CEO of AyaPrep, an education technology company, entreated stakeholders to leverage technology to promote the use of Ghanaian language in teaching and learning.
Â
GNA
Edited by Alice Tettey/Linda Asante Agyei
Provided by SyndiGate Media Inc."
Â
https://www.msn.com/en-xl/africa/ghana/stakeholders-advocate-national-terminology-programme-for-ghanaian-languages/ar-AA1WNFfh
#Metaglossia
#metaglossia_mundus
#métaglossie
The Sarojini Naidu Centre for Women's Studies (SNCWS), Jamia Millia Islamia, in collaboration with The Book Review Literary Trust successfully organised a one-day national symposium on "Writing, Reviewing, Translating: Women, Words, and Worlds" on February 17 at Mir Anis Hall, JMI.
Â
Chandra Chari, Founder Editor of The Book Review Literary Trust addressed the gathering about the origins and objectives of The Book Review journal and its sustained commitment to fostering critical literary culture in India. She underscored the importance of book reviewing as a vital intellectual practice and emphasised the role of women in shaping contemporary literary discourse.
Â
The first session, titled "Reviewing, Writing, Publishing Women – A Critical Exploration of Gendered Literary Landscapes," was moderated by Dr. Aakriti Mandhwani. The panel featured Dr. Semeen Ali, Rachna Kalra, Dr. Malvika Maheshwari, Dr. Sucharita Sengupta, and Dr. Kanupriya Dhingra. The speakers reflected on questions of identity and authorship, editorial gatekeeping, the politics of literary knowledge, and the sustainability of women's writing in South Asia. Discussions highlighted the need to move beyond reductive categorisations of "women's writing," to encourage mentorship and alternative platforms, and to view reviewing as both scholarship and resistance.
Â
A session "Writing the City," moderated by Dr Faiz Ullah, explored literary engagements with urban spaces, particularly Delhi. Speakers Ananya Vajpeyi, Ekta Chauhan, and Aishwarya Jha reflected on the city as a site of memory, transformation, and affect. The discussion examined urban villages, shifting cityscapes, nostalgia, and the interplay between lived experience and literary imagination.
Â
It was followed by a session titled "Writing/Translating Women," which was moderated by Dr. Amina Hussain, Assistant Professor, SNCWS. The panel included renowned Hindi author Mridula Garg, noted translator Prof. Arjumand Ara, Dr. Deeba Zafir, and Dr. Firdous Azmat Siddiqui. The speakers addressed the epistemic marginalisation of women's writing, the complexities of translation, intersectional concerns of caste and class, and representations of Muslim women in literature and history. The session emphasised that writing must provoke critical reflection, that translation demands ethical responsibility, and that marginal voices must be represented with nuance and sensitivity.
Â
The symposium reaffirmed Jamia Millia Islamia's commitment to fostering inclusive and critical academic spaces that foreground women's voices in literature, scholarship, and translation, and to promoting dialogue that bridges disciplines..."
Â
TNN | Mar 7, 2026, 12:30 IST https://share.google/RqcgYC2ybChNrLPSg
#Metaglossia
#metaglossia_mundus
#métaglossie
"Modern, AI-native platforms designed around Arabic constraints are now seen as essential for governing quality, ensuring consistency, and speeding up the localization of all written assets.
RIYADH: Faced with a globalized workforce and cross-border operations, companies across the Middle East are now embedding live translation into the fabric of daily work, adopting a hybrid human-artificial intelligence strategy to break down language barriers.
For years, multilingual translation in the region was a logistical feature reserved for annual shareholder meetings, flagship conferences, or international trade shows. Today, it has become a daily operational necessity.
Nour Al-Hassan, founder and CEO of Tarjama and Arabic.AI, said in an interview with Arab News that "as companies in the Middle East expand globally, multilingual communication is no longer occasional, it is part of everyday work."
Tarjama is a MENA-based language technology company that launched Arabic.AI, an advanced, specialized platform for the Arabic language, to deliver high-quality, culturally nuanced, and industry-specific translation and content solutions.
Tarjama and Arabic.AI's founder and CEO Nour Al-Hassan. (Supplied)
Al-Hassan's sentiments were echoed by Edward Crook, vice president of strategy at AI-powered neural machine translation service, DeepL, who told Arab News: "In the UAE and Saudi Arabia, 84 percent of professionals have integrated AI translation into their daily workflows, signalling a rapid shift from using language AI tools for big events to making them a staple of daily operations."
Oddmund Braaten, CEO of multilingual event technology company Interprefy, told Arab News that such language support was previously used only for major external sessions, with in-person interpreters brought in for specific language pairs.
"What has changed is that live translation is now part of everyday operations," he said, adding that the organization runs recurring virtual trainings and frequent internal briefings, and multilingual access is built in by default.
"This has allowed Arabic- and English-speaking teams, along with additional language groups, to participate on equal terms," Braaten explained.
According to the CEO, internal training uses remote simultaneous interpretation with live captions, especially for technical content. For larger audiences, "AI speech translation is added to extend language coverage."
This same language experience is maintained for both in-person and remote participants.
As companies across the UAE, Saudi Arabia, Qatar, and Bahrain diversify their economies and attract talent from across the globe, the demand for inclusive communication has moved from the event stage to the weekly team huddle, the training webinar, and the internal strategy update.
This transition from occasional to essential is underscored by new research. A study by Interprefy revealed that 82 percent of Middle Eastern business event organizers now report high demand for multilingual services.
Crucially, 61 percent see clear value in using live translation for webinars, 55 percent for business meetings, and 54 percent for internal "all hands" sessions.
This aligns with broader regional adoption trends that were observed from 2024 onwards. According to a separate survey by DeepL, 84 percent of professionals in Saudi Arabia and the UAE have already integrated AI translation tools into their daily workflows.
The drivers are enhancing productivity, developing new language skills, and, for 46 percent of professionals, successfully expanding business into new markets. Crook added that the primary drivers are "both internal and external: from developing language skills, to boosting time efficiency, and managing supplier relationships."
To meet this surging, everyday demand, businesses are increasingly adopting a pragmatic, hybrid approach. They are moving beyond a one-size-fits-all model to a two-track system that balances nuance with scale, and cost with critical accuracy.
According to Interprefy, for sensitive negotiations, confidential board discussions, legal proceedings, or complex technical workshops, the expertise of professional human interpreters remains irreplaceable.
This ensures subtlety, cultural nuance, and absolute accuracy where the stakes are highest.
This approach is mirrored by companies such as Tarjama. As Al-Hassan explained: "Tarjama combines professional human translation with its AI-driven CleverSo platform to deliver a hybrid model that mirrors the evolution of translation tools, enhancing productivity without replacing human expertise."
For the constant stream of daily interactions, such as project check-ins, company-wide broadcasts, training modules, and supplier communications, AI-powered live translation and captions provide scalable, instantaneous, and cost-effective understanding.
This layer ensures that language is never a barrier to participation, collaboration, or swift decision-making in fast-moving environments.
He explained how this hybrid model is practically implemented, noting it usually takes one of three forms.
In some cases, professional interpreters cover the main spoken languages, while AI speech translation is added for languages spoken by a small number of participants. In other situations, professional interpreters are combined with live captions or subtitles. In a third scenario, all three are used together.
This foundational shift extends beyond spoken communication to the very systems that manage a companyâs multilingual content. As organizations generate more material for diverse audiences, they require specialized technology that handles the regionâs dominant language with native fluency.
For the written word, financial statements and marketing campaigns, Arabic-first Translation Management Systems are becoming critical. As highlighted in a 2025 report by Tarjama on its CleverSo platform, generic systems built for Latin scripts struggle with right-to-left layout, segmentation, and Arabic user interface needs, leading to inaccurate translations that hurt conversion and trust.
Al-Hassan emphasized the need for specialized systems, stating: "High expectations for Arabic quality, multiple dialects, and regulatory requirements mean generic tools are not enough. Businesses now need specialized systems that fit into daily workflows and handle language with consistency, security, and cultural awareness."
Modern, AI-native platforms designed around Arabic constraints are now seen as essential for governing quality, ensuring consistency, and speeding up the localization of all written assets, from scanned PDFs to mobile app strings.
Regarding quality for high-stakes content, Al-Hassan added: "Our approach is built on a fundamental principle: quality cannot be inspected in; it must be designed in from the very beginning."
The trend is set to intensify. With 77 percent of Saudi and Emirati professionals believing AI will positively impact daily work efficiency by 2029, the integration of intelligent language tools is becoming a benchmark for competitive, inclusive, and globally agile businesses.
Crook confirmed this outlook, saying that some "77 percent believe AI will be the fundamental driver of workplace efficiency by 2029."
When justifying the investment in everyday multilingual communication, business leaders point to measurable returns. Braaten shared that leaders justify it by removing friction, reducing risk, and enabling effective contribution at scale. The returns are visible in productivity, with fewer follow-up meetings and faster team alignment, as well as in employee inclusion and retention.
Oddmund Braaten, CEO of multilingual event technology company Interprefy. (Supplied)
He also noted that 85 percent of organizers report attendee frustration when multilingual support is not available.
Clients of companies like Tarjama quantify the return on investment on two levels. Al-Hassan stated that internally, they measure faster turnaround times and lower costs, while externally, they look at quicker market entry and faster campaign launches. For most, the real value combines improved internal efficiency with accelerated growth across markets.
As AI translation becomes ubiquitous, Tarjama sees the next competitive frontier in consultancy-driven localization of complex business, government, and advisory content, addressing challenges around regulatory compliance and scalable market launches.
This shift is operational, as explained by Braaten who gave an example of a GCC-headquartered organization now using live translation more frequently. According to the CEO, the firm, with teams and stakeholders across the Middle East and Europe, is now "delivering ongoing professional training rather than just a limited number of annual events."
Al-Hassan describes this as a shift from "conference-scale" to "workflow-scale" translation, where "translation is built into business systems" and "content moves through the workflow and becomes multilingual as part of the process."" Miguel Hadchity 01 February 2026 https://www.arabnews.com/node/2631332/amp #Metaglossia #metaglossia_mundus #métaglossie
" WAXAL: A large-scale open resource for African language speech technology March 6, 2026
WAXAL provides a critical, open-access foundation for African speech technology. Featuring a large corpus of ASR and TTS data for 27 native languages under a highly permissive license, WAXAL empowers the African AI ecosystem to build robust speech systems that better reflect the region's unique linguistic diversity.
Voice-enabled technologies like virtual assistants and automated transcription have transformed how we interact with computers. However, their benefits disproportionately favor a handful of high-resource languages. This divide has left hundreds of millions of people, particularly in Sub-Saharan Africa, home to over 2,000 distinct languages, unable to access essential technology in their native tongues. Several years ago, the team at Google Research set out to help tackle this problem.
To address this critical need, we introduce WAXAL: a large-scale, openly accessible speech dataset that initially covers 27 Sub-Saharan African languages spoken by over 100 million speakers across more than 26 countries. Developed through a multi-year effort beginning in 2021, in collaboration with African academic and community organizations, WAXAL provides the high-quality, permissively licensed data necessary to build robust speech systems. Setting a foundational milestone, this initial release features approximately 1,846 hours of transcribed natural speech for automatic speech recognition (ASR) and over 565 hours of high-fidelity recordings for text-to-speech (TTS). We are releasing these resources under a Creative Commons license (CC-BY-4.0) to catalyze research and enable inclusive voice-enabled technologies tailored to the unique linguistic characteristics of the continent. We intend for the WAXAL collection to continuously evolve and expand to include additional languages as part of our ongoing effort to bridge the digital divide.
Introducing WAXAL
By addressing critical data scarcity for over 100 million speakers, WAXAL aims to empower the regional AI research ecosystem. To support the development of robust speech technologies, the corpus integrates two specialized datasets designed to provide comprehensive coverage for both speech recognition and synthesis tasks.
WAXAL-ASR (Spontaneous Understanding): Comprising approximately 1,846 hours of transcribed audio, this dataset captures natural, unscripted speech. Instead of reading scripts, diverse participants were asked to describe visual stimuli covering 50+ topics in their native language. This image-prompted elicitation captured authentic linguistic variations, including tonal nuances and code-switching. This method successfully yielded more natural speech than traditional methods.
Examples from Googleâs Open Images used as prompts to elicit natural speech for the ASR dataset.
WAXAL-TTS (High-Fidelity Generation): Designed to facilitate the creation of natural-sounding synthetic voices, this dataset contains over 565 hours of high-quality, phonetically balanced audio. The TTS collection process was highly collaborative: local community members worked in pairs to draft scripts of 10,000–20,000 words, alternating reader and recorder roles. To ensure professional-grade acoustics, some participants used project funding to build custom studio boxes. The resulting recordings were then segmented, matched with the script text, and reviewed for accuracy and quality.
TTS recording box at University of Ghana.
The WAXAL corpus's dual focus on unscripted ASR data and high-fidelity TTS audio is designed to enable the development of full-duplex conversational systems. Specifically, the ASR component facilitates the modeling of varied, spontaneous speech input typical of real-world scenarios, while the high-quality TTS component provides the clean reference data required for generating clear, natural output. The table below lists the 27 languages currently included in the dataset:
Breakdown of the current WAXAL dataset, showing the 27 initial Sub-Saharan African languages and the availability of ASR and TTS data for each.
Anchoring in the African AI ecosystem
Crucial to the WAXAL project was our commitment to working with, and contributing directly to, the African AI ecosystem. The data collection effort was led entirely by African academic and community organizations, guided by Google experts on world-class data collection practices. This collaborative approach ensured the corpus was built by and for the community it serves; with a shared methodology, each partner focused on a specific subset of languages. Our partners included Makerere University, which collected ASR and/or TTS data for nine different languages, and the University of Ghana, which focused its efforts on eight languages, using the ASR image-prompted data collection methodology outlined above. Additional key collaborators were Digital Umuganda, in partnership with Addis Ababa University, who were instrumental in leading the ASR collection for several regional languages. For the high-quality, studio-recorded voices, Media Trust, Loud n Clear and African Institute for Mathematical Sciences Senegal spearheaded the TTS recordings across various regional languages.
This framework is fundamentally rooted in the principle that our partners retain ownership of the collected data toward the shared commitment to make all datasets openly available for the broader community. This deep collaboration and open-access philosophy have already enabled notable derivative research and publications.
Through this framework, our partners have already enabled new research, such as the development of a cookbook for community-driven collection of impaired speech. This research resulted in the first open-source dataset for Akan speakers with conditions like cerebral palsy and stammering, and demonstrated that in-person, image-prompted elicitation is more effective than text-based prompts for these populations. This work provides a vital roadmap for developing inclusive speech technologies in low-resource environments.

Furthermore, the initiative supported a major study that introduced a 5,000-hour speech corpus for five Ghanaian languages – Akan, Ewe, Dagbani, Dagaare, and Ikposo. This work established infrastructure for building robust ASR and TTS systems tailored to the linguistic diversity of West Africa by using a controlled crowdsourcing approach to capture natural, spontaneous intonations. Other essential research has focused on benchmarking four state-of-the-art models (Whisper, XLS-R, MMS, and W2v-BERT) across 13 African languages. This study analyzed how performance scales with increased training data, offering key insights into data efficiency and highlighting that scaling benefits are strongly dependent on linguistic complexity and domain alignment. Finally, a systematic literature review was published, cataloging 74 datasets across 111 African languages to map the current frontier of speech technology. This review emphasized the urgent need for multi-domain conversational corpora and the adoption of linguistically informed metrics, such as Character Error Rate (CER), to better evaluate performance in morphologically rich and tonal language contexts.

Conclusion and future directions
WAXAL represents a key milestone in bridging the digital divide, offering a high-quality, open-access speech resource for 27 Sub-Saharan African languages.
Developed through deep collaboration with African academic and community organizations, this initiative empowers the continentâs AI ecosystem and preserves linguistic diversity. We hope WAXAL will continue to serve as a vital resource for the digital preservation of African languages and a foundation for future innovations. Google remains committed to this effort, with plans to continuously expand the WAXAL dataset." Tavonga Siyavora, Senior Product Manager, and Abdoulaye Diack, Program Manager, Google Research
https://research.google/blog/waxal-a-large-scale-open-resource-for-african-language-speech-technology/ #Metaglossia #metaglossia_mundus #métaglossie
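The Character Error Rate cited in the WAXAL review is simply the Levenshtein edit distance between the reference transcript and the model's hypothesis, divided by the number of reference characters. A minimal generic sketch (not code from the WAXAL project; the function names are illustrative):

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance:
    # minimum number of insertions, deletions, and substitutions
    # needed to turn string a into string b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    # Character Error Rate: edit operations per reference character.
    # Values above 1.0 are possible when the hypothesis is much longer
    # than the reference.
    if not reference:
        raise ValueError("reference must be non-empty")
    return edit_distance(reference, hypothesis) / len(reference)
```

Because CER counts characters rather than whitespace-delimited words, it is better suited than Word Error Rate to morphologically rich and agglutinative languages, where a single "word" can carry several morphemes.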
"No, USCIS does not require certified translations to be notarized. What's more important is the certification statement by the translator or translation agency confirming that the translated document is accurate and complete to the best of their knowledge.
Â
Some people still go ahead and notarize their documents. While it's not an official requirement, it simply adds an extra level of authenticity to your document.
Â
In a notarized translation, a notary public verifies the identity of the person signing the translator's certification. The notary does not verify the translation itself; they only confirm that the translator signed the certification in their presence.
Â
Common USCIS Translation Mistakes That Cause Delays
Â
Even small translation errors can affect your USCIS application. If authorities cannot verify the information on your documents, they may issue a Request for Evidence (RFE), which can delay your application.
Â
Here are common mistakes that may affect your application:
Â
1. Partial translations
Â
Translating only some parts of your documents can jeopardize your application. USCIS requires a full translation of the entire document, including stamps, annotations, and seals, so that officers can review it in context.
Â
2. Missing certification statement
Â
Every translated document must include a signed certificate confirming the translation is complete and accurate. Without the statement, USCIS may treat the document as invalid.
Â
3. Incorrect name spellings
Â
If your names don't match across your documents, it can raise concerns about your identity or relationship claims, especially if you're applying for a family visa.
Â
4. Formatting inconsistencies
Â
Translated documents that do not follow the structure and formatting of the original document can make it difficult for officers to locate key information. This can also result in delays or RFEs.
Â
5. Using unauthorized translators
Â
If a translation is found inaccurate or the translator's credentials are in question, USCIS may reject the application or request additional documents. To avoid this, it's always best to use professional translation services that specialize in USCIS document translation.
Â
FAQs
Â
Can a family member translate my documents for USCIS?
Â
USCIS does not explicitly ban family members from translating your documents. However, it raises concerns about bias. It's best to use an independent translator or translation agency to avoid any issues.
Â
Do I need to submit both the original document and the translation?
Â
Absolutely. When submitting your documents to USCIS, always submit the translation alongside the original documents so officers can verify the information.
Â
What should the translator's certification statement include?
Â
The certification statement must state that the translation is complete and accurate and that the translator is fluent in both languages. It should also include the translator's name, signature, and date.
Â
Will USCIS reject my application if the translation is incorrect?
Â
There's a high chance that they will. Alternatively, they may issue a Request for Evidence (RFE) requesting additional documents or a corrected translation, which can increase the processing time for your document. It's always important to submit accurate documents from the start.
Â
How long does it take to get a certified translation for USCIS?
Â
The turnaround time depends on the provider you're using. Translayte provides USCIS translations in 12 hours or less, depending on the length of your document and the language pair.
Â
Media Contact:
Â
Sophia Orji
Content Manager
Email: sophia.orji@translayte.com
Website: https://translayte.com
Â
SOURCE: BDXL Ltd"
Â
https://www.bignewsnetwork.com/news/278905951/uscis-translation-requirements-2026
#Metaglossia
#metaglossia_mundus
#métaglossie
"BENNINGTON -- The Bennington Writing Seminars, the MFA in Writing program at Bennington College, announced the launch of a new dual-genre concentration in Literary Translation. Applicants and current students studying Fiction, Nonfiction, or Poetry will be able to add Literary Translation as a secondary concentration, lengthening the program from four to five terms.
"Bennington College has a great history as a center for the translation of literature," said Bennington Writing Seminars Executive Director Mark Wunderlich, "and we are happy to now offer instruction in literary translation in our graduate writing program. Students will now be able to spend two terms studying with some of the finest translators in the field and leave with a fully translated work."
Bennington College alum and Bennington Writing Seminars faculty member and National Book Award-winning translator Bruna Dantas Lobato designed the program to enable students to engage with a global literary community.
âWe translate literature to engage with the world and its many languages, to be in conversation with and open to modes of thinking and being besides our own,â she said. âLiterary translation is the rewriting of a literary text in a new language and all the transformations that act entails, as the text travels to a new cultural, linguistic, and aesthetic context. Translation broadens and deepens our understanding of humanity and language, shows us there are more possibilities beyond our reach, and pushes us to challenge our own perspective. It is thanks to translation and translators that readers arenât cut off from the rest of the world, living in intellectual isolation.â
The dual-genre option will extend the program from four to five terms: three in the student's main genre and two in Literary Translation. Dual-genre applicants may apply to study Fiction, Nonfiction, or Poetry as their main genre, and Literary Translation, Fiction, Nonfiction, or Poetry as their secondary genre.
Applications are now open, and the coursework begins next January.
Lobato is a writer and translator. Her fiction has appeared in The New Yorker, Guernica, A Public Space, The Dial, and The Common. She was awarded the 2023 National Book Award in Translated Literature for The Words that Remain by StĂȘnio Gardel. Originally from Natal, Brazil, she lives in Iowa and teaches at Grinnell College. Her debut novel, "Blue Light Hours," is out now from Grove Atlantic." https://www.benningtonbanner.com/local-news/bennington-college-launches-literary-translation-program/article_b3a9cf80-c81a-474e-9327-1951d9b23f16.html #Metaglossia #metaglossia_mundus #mĂ©taglossie
"The High-Performance Language Technologies (HPLT) project is developing very large-scale multilingual resources for large language models and machine translation.
Massive text collections for pre-training are the “crude oil” of the large language model (LLM) era. The process of “refining” high-quality datasets from web data at scale presupposes computational infrastructure and technological muscle that is often characteristic of corporate environments, as evidenced, for example, by some notable generally available pre-training datasets: C4,¹ FineWeb 1 & 2,²,³ MADLAD-400,⁎ or Nemotron-CC.⁔ With a few notable exceptions, this line of work tends to capitalise on the English language.
Â
Here, we present the open-source results⁶,⁹,¹⁰ of the European R&D consortium HPLT, a project funded under the auspices of the Horizon Europe programme in 2022–2025. Together with a myriad of additional results, HPLT has produced massive pre-training datasets of high-quality texts in close to 200 distinct language–script combinations. Its 2025 monolingual data release, HPLT 3.0, comprises some 30 trillion sub-word tokens in total, of which close to half represent languages other than English. We make this resource publicly available under the most permissive terms of use possible. We further share a state-of-the-art and open-source data preparation pipeline, an innovative multilingual evaluation framework, as well as hundreds of language models pre-trained on HPLT data.
Â
Â
Fig. 1
Furthermore, the project has produced novel bilingual datasets for more than 50 language pairs, hundreds of associated machine translation models, open-source pipelines for data preparation, model training, and evaluation, as well as synthesised additional pre-training data for underrepresented languages by machine translation of very high-quality English documents. In our view, it is the totality of generally available and very large-scale resources and the documentation of the underlying processes that bears promise of “democratising” the current LLM and MT landscape.
Â
Organisation
The HPLT consortium comprised partners from all around Europe: five universities (Charles University in Prague and the Universities of Edinburgh, Helsinki, Oslo, and Turku), two national HPC centres (CESNET in the Czech Republic and Sigma2 in Norway), and a language engineering company (Prompsit). The project received about €4.1m from the Horizon Europe programme and ÂŁ960,000 from UK Research and Innovation, and ran from September 2022 through December 2025. The project was coordinated by Jan Hajič (Charles University), with technical coordination by Kenneth Heafield (Edinburgh) and Stephan Oepen (Oslo) in its first and second halves, respectively.
Â
Data curation
HPLT has gathered and processed more than ten petabytes of raw web data. The project has released more than 30 trillion tokens (word-like units) of high-quality textual data, accompanied by rich metadata, for close to 200 distinct languages. The process of extracting, cleaning, annotating, and filtering texts from raw web archives, composed of about a dozen modules, is schematically depicted in Fig. 1.
Â
Raw web archives were drawn from three sources: the Internet Archive (IA), host of the iconic Wayback Machine; the non-profit Common Crawl Foundation (CC); and the ArchiveBot volunteer infrastructure for long-term web archiving. Sub-tasks such as the extraction of “running text” from marked-up document formats, language identification at the document and paragraph levels, “fuzzy” near-deduplication, annotation with a wealth of text quality and regulatory compliance signals, and final filtering based on all available information each directly impact the practical utility of the final datasets. Here, text quality versus overall volume present separate and typically antithetical dimensions for optimisation, creating a rich space for different design choices and trade-offs. This remains an active area of research. The open-source HPLT processing pipelines are highly flexible and parameterisable, with default values representing the current state of knowledge.
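The “fuzzy” near-deduplication step mentioned above can be illustrated with a toy sketch based on word-shingle Jaccard similarity. This is a didactic approximation only: a production pipeline at HPLT's scale relies on scalable fuzzy-hashing techniques rather than the quadratic pairwise comparison shown here, and the threshold value is an arbitrary choice for illustration.

```python
def shingles(text, n=3):
    """Lower-cased word n-grams used as a crude document fingerprint."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def near_dedup(docs, threshold=0.8):
    """Keep each document unless it is near-identical to one already kept."""
    kept, fingerprints = [], []
    for doc in docs:
        fp = shingles(doc)
        if all(jaccard(fp, seen) < threshold for seen in fingerprints):
            kept.append(doc)
            fingerprints.append(fp)
    return kept
```

A document that only appends a few words to an already-seen page shares most of its shingles with it and is dropped, while genuinely new text survives the filter.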
Â
Monolingual statistics
To put the HPLT monolingual data into perspective, Table 1 (below) presents document and token counts (see note) for the English and multilingual (non-English) partitions of the data, as well as counts for a small sample of individual languages. For ease of comparison, these statistics are accompanied by average document lengths and per-language proportions, and contrasted with corresponding figures for three other publicly available multilingual datasets mentioned above.
Â
Â
Table 1: Note: For the purpose of comparable statistics across languages and different datasets, all token counts are computed using the Gemma-3 tokenizer,⁞ a SentencePiece model with a vocabulary of 256K sub-words, providing good coverage for all target languages.
As is evident from these numbers, HPLT 3.0 is by far the largest publicly available such dataset, and its multilingual breadth compares favourably to other widely used resources. In Gemma-3 tokens, the multilingual HPLT 3.0 partition is about 2–3 times larger than FineWeb and the earlier version HPLT 2.0, respectively, and five times larger than the older MADLAD-400 dataset. In terms of average document length, which often correlates with text quality, HPLT 3.0 and 2.0 pattern alike, markedly ahead of FineWeb but well behind MADLAD-400. For a small selection of European languages, the table shows languages ranging from a “mere” billion available tokens to others with hundreds of billions.
Â
In-depth analytics
Training data quality is arguably the most important factor in model quality, but in-depth data inspection at scale is a challenging endeavour. HPLT has developed an open-source tool, HPLT Analytics, to compute a broad range of fine-grained statistics and enable interactive visualisation and exploration. The datasets are internally structured into documents, paragraph-like segments, and tokens. Descriptive frequency and length statistics, combined with basic correlation analysis against metadata like internet domains or predicted text register labels, can reveal distributional trends or outliers. Annotations are predominantly available at the document level, but in some cases also for smaller units. Contrasting the distributions of document versus segment language predictions, for example, allows insights into both the degree of in-document “code switching” and uncertainty in language identification, typically among closely related languages.
Â
Multilingual evaluation
As an additional tool to gauge data quality and experimentally inform design choices in training data preparation (as well as in language model training), the project has developed a framework for automated large-scale multilingual evaluation, dubbed HPLT-e. In its current state of development, the framework comprises 127 language understanding and generation tasks across the nine European languages highlighted in Table 1.
Â
This selection allowed for both the availability of native speakers in the project team and a minimum level of diversity in terms of language resources, families, and scripts. Tasks in HPLT-e are often drawn from pre-existing benchmark suites, but with an emphasis on natively constructed (rather than translated) tasks, and each is extended with three to seven human-written prompts to mitigate the methodological challenge of prompt sensitivity. Similar to Penedo et al.,²,³ we pretrain separate “smallish” (2B parameters) GPT-like models per language using an otherwise fixed pretraining setup, and evaluate them at regular checkpoint intervals in a zero-shot regime, carefully selecting tasks that meet a range of evaluation-signal criteria, i.e. tasks that can be expected to act as informative and reliable indicators of training data quality. Such criteria include monotonicity and relative stability of model performance as pretraining progresses, ranking consistency across pretraining intervals, and multiple indicators of limited prompt sensitivity.

Fig. 2 shows a comparison of the four datasets introduced above using HPLT-e. To aggregate scores across different prompts, tasks, and languages, per-task scores are maximised across prompts and min-max normalised relative to a task-specific random baseline. Per-task scores are then averaged across task categories within each language and, finally, across languages. An alternative approach to overall aggregation is Borda's count, using Vote'n'Rank,⁷ which is essentially the average of per-language counts of a model outranking all the others. Models trained on all four datasets for up to 100B tokens show a monotonic performance improvement on our selected tasks. Models pretrained on (the comparatively smaller) MADLAD-400 achieve the highest multilingual score, followed by HPLT 3.0, while HPLT 2.0 and FineWeb perform on par.
These results are corroborated by rank-based aggregation across tasks and languages, which yields the same ordering: MADLAD-400, then HPLT 3.0, then HPLT 2.0 and FineWeb.
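The aggregation scheme described above (maximise over prompts, min-max normalise against a task-specific random baseline, average within categories, then across categories and languages) can be sketched in a few lines. The data layout, task names, and scores below are invented for illustration and do not reflect actual HPLT-e data or its internal format.

```python
def normalise(score, baseline, maximum=1.0):
    """Min-max normalise a score relative to the task's random baseline."""
    return (score - baseline) / (maximum - baseline)

def aggregate(results_by_language):
    """Max over prompts, normalise per task, average over categories, then languages."""
    lang_scores = {}
    for lang, tasks in results_by_language.items():
        by_category = {}
        for info in tasks.values():
            best = max(info["prompts"])            # maximise across prompts
            norm = normalise(best, info["baseline"])
            by_category.setdefault(info["category"], []).append(norm)
        # average within each category, then across categories
        cat_means = [sum(v) / len(v) for v in by_category.values()]
        lang_scores[lang] = sum(cat_means) / len(cat_means)
    # finally, average across languages
    return sum(lang_scores.values()) / len(lang_scores)
```

Normalising against the random baseline keeps tasks with different chance levels (e.g. two-way versus four-way classification) on a comparable scale before averaging.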
Â
Language models
While training data creation has taken centre stage in the HPLT work plan, the project has also developed a wealth of language models of different sizes and architectures supporting various languages and language groups.
Â
In addition to large language models trained from scratch for Finnish and Norwegian, a common theme in this work was a strong emphasis on smaller, specialised models that are efficient to run. In total, publicly available project results comprise hundreds of language models, including the following sub-groups:
Â
55 monolingual encoder-only (BERT-like) models for a typologically diverse set of languages. When fine-tuned as embedders for âclassicâ language understanding tasks, these models uniformly show superior performance to standard multilingual models.
57 monolingual encoder–decoder (T5-like) models, again for a typologically broad set of languages. These models exhibit competitive performance in both embedding and generation benchmarks, thus offering a novel platform for experimentation.
38 monolingual decoder-only (GPT-like) reference models, each with 2.15B parameters and trained to 100B tokens. These models can serve a number of purposes, including as baselines for mono- and multilingual training, references for the comparison of HPLT and other data, and tools for contrasting HPLT data quality across different languages.
Two larger (13B parameters), continually pretrained generative models, for Finnish and Norwegian, built on the fully open-source OLMo 2 platform. These models compare favourably to language-specific adaptations of the Mistral NeMo model, suggesting that fully transparent foundation models can yield competitive results to their merely open-weight counterparts.
Mining for bilingual text
Another wealth of open-source results from HPLT relates to machine translation (MT), notably large collections of parallel texts derived from mining the monolingual datasets for translational correspondences at the sentence or document levels. These resources are created using the additional processing block called the Bitextor Pipeline in Fig. 1. The pipeline applies a multi-stage text extraction procedure that identifies documents with identical content in different languages using various matching and alignment techniques implemented as an open-source toolbox.¹ Heavy parallel computing makes it possible to run such bitext mining at the scale provided by the monolingual web crawls coming from HPLT. Traditionally, parallel texts are provided as sentence-aligned bitexts that can directly be fed into machine translation training. HPLT provides three releases of parallel text corpora with a language coverage of 57 language pairs. The data is collected in an English-centric manner, aligning documents with English counterparts in our dataset. Pivoting on those English documents, we can then also derive multilingual parallel text collections spanning 1,446 language pairs. In total, HPLT provides 2.7 million sentence alignments released from our repository of parallel corpora, OPUS.²
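The English-pivoting idea (deriving, say, German–French pairs from two English-centric collections) can be sketched as follows. The data structure and the toy sentences are illustrative only and do not reflect the actual HPLT/OPUS release formats.

```python
def pivot_pairs(aligned_to_english):
    """Derive non-English parallel pairs by pivoting on shared English sentences.

    aligned_to_english maps a language code to a dict of
    {english_sentence: translation}; all data here is a toy example.
    """
    languages = list(aligned_to_english)
    pairs = {}
    for i, src in enumerate(languages):
        for tgt in languages[i + 1:]:
            # English sentences for which both languages have a translation
            shared = aligned_to_english[src].keys() & aligned_to_english[tgt].keys()
            pairs[(src, tgt)] = [
                (aligned_to_english[src][en], aligned_to_english[tgt][en])
                for en in sorted(shared)
            ]
    return pairs
```

With n non-English languages aligned to English, pivoting yields up to n·(n−1)/2 additional language pairs, which is how 57 English-centric pairs can fan out into over a thousand combinations.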
Â
Â
Fig. 2
Machine Translation
Mirroring the interplay of data creation and model building in the LLM track, HPLT has worked intensely on the development and evaluation of new translation models for 100 language pairs, combined with novel infrastructures for automated training at scale and integration of benchmarking results into the OPUS dashboard. A special focus is placed on efficiency, emphasising the need for compact translation models that can run locally on edge devices. Specialised models that are several orders of magnitude smaller than common general-purpose language models enable fast inference without losing translation performance, and enable secure deployments that are independent of external services and online connections. Translation models trained on data that includes HPLT show competitive performance, especially for lesser-resourced languages. To further reduce computational costs, we also developed a pipeline for systematic multilingual knowledge distillation that supports the transfer from expensive teacher models to compact student models that can be as small as 20 megabytes.
Â
Computational infrastructure
All work in HPLT has been exceedingly compute- and storage-intensive, made possible through a combination of resources covered by the project grant and of additional substantial resources allocated to consortium members from national (Czech, Finnish, and Norwegian) quotas and through the EuroHPC system. “Bulk” storage for very large-scale web data, in total close to 21 petabytes, was distributed over facilities in the Czech Republic (CESNET), Norway (Sigma2), and Finland (LUMI). Exclusive access to dedicated compute nodes tightly integrated with the storage systems made possible a first stage of lightweight document and metadata extraction (see Fig. 1), reducing the data volume for further processing by about a factor of three.
Â
In addition to some experimentation on national superclusters, the EuroHPC LUMI system served as the main “workhorse” for HPLT, where the consortium used combined allocations of around 60 million CPU and about 11.5 million GPU hours over the 40-month project duration, the theoretical equivalent, on average, of more than 2,000 active CPUs at all times."
6th March 2026
https://www.innovationnewsnetwork.com/hplt-high-performance-large-language-models-for-europe/67406/
#MetaglossiaÂ
#metaglossia_mundusÂ
#mĂ©taglossieÂ
"What is language validation? Language validation, also known as linguistic validation, is a crucial part of the translation process in which a person fluent in the target language confirms that translated content is technically accurate and captures the cultural nuances of your original training content. Without this step, you may risk employees misunderstanding the translated version in their native language.
Without proper language validation, your training program could include inaccuracies that confuse learners and erode their trust. For example, a football analogy in translated content can confuse learners, since American football is a completely different sport from football... well, everywhere else.
Poor translations can also cause bigger problems. For example, imagine your sales training course uses the common American idiom, “You ROCK!” Americans interpret that as supportive encouragement, but a direct translation will sound more like calling the sales leader an inanimate, hard collection of minerals one might throw at an enemy. Sure, the translation is technically correct, but it doesn't make sense. Validators who speak the target language natively catch these simple errors and correct them before the final version creates a lot of confusion.
How to choose the right validation method for your document

Different assets require different levels of validation. You may worry that a professional translator is your only option, but the truth is, it depends.
To help you pick the best validation method for your assets, we put together a handy framework that covers validation approaches for low-, mid-, and high-risk documents.
Low-risk documents

Low-risk documents are informational or supplemental materials where minor translation errors won't impact learner performance, create compliance issues, or lead to legal problems.
Examples include:

Internal training announcements
Course welcome pages
Module introductions

Translation validation tips for low-risk documents

Here are some easy translation validation tips and options to help you translate low-risk documents.
1. Use free online translation tools for forward translation

Use Google Translate, DeepL Translator, or Reverso for a quick, free validation check. Translate your low-risk training content into the target language and scan the output to surface obvious issues like missing information and incorrect terminology. Google Translate is especially useful to validate simple text, like headings, short instructions, and summaries, where the impact of errors is low. Still, the output might contain grammatical errors and awkward literal translations.
2. Use a second tool for backward translation

Reverse translation, also known as back translation, is a quality assurance method in which a translated text is retranslated into the original language to check that it still matches the original.
Translate a section of your training into the target language. Articulate customers can use Localization for this first pass and expect a highly accurate first draft. Then, use a second online translation tool to translate it back into the original language. Compare the two versions to spot meaning shifts, missing details, or overly literal phrasing.
This approach works best for summaries, introductions, and labels. But mid- and high-risk documents require a more robust approach.
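The comparison step of the back-translation check above can be roughed out with Python's standard library. Note the heavy caveats: `similarity` is only a crude surface-level proxy (real meaning drift needs a human reviewer or a semantic comparison), the 0.7 threshold is an arbitrary illustration, and the forward/backward translation calls themselves are left to whatever MT tool you use.

```python
import difflib

def similarity(original, back_translated):
    """Rough surface similarity between the original text and its back translation."""
    return difflib.SequenceMatcher(None, original.lower(), back_translated.lower()).ratio()

def check_drift(original, back_translated, threshold=0.7):
    """Flag a segment for human review when the back translation drifts too far."""
    return similarity(original, back_translated) < threshold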
Mid-risk documents

Mid-risk documents are assets that guide actions or influence behavior. This means translation errors could impact learner performance, but are not as likely to cause legal or safety issues.
Examples include:
Procedural guidelines
Internal playbooks or messaging frameworks
Quizzes or assessments that reinforce procedural knowledge

Even if you don't have an expert, you'll benefit from submitting these to a competent reviewer to ensure clarity, tone, and cultural appropriateness.
Translation validation tips for mid-risk documents

Reverse translation tricks aren't enough for mid-risk documents like procedural guidelines, internal playbooks, and messaging frameworks, where the impact of error is higher. So, you'll need to bring in a competent, though not necessarily expert, validator.
Here are a few validator options when skipping the review process wonât suffice.
1. Consult fluent speakers in the target languages

A colleague fluent in the source and target languages can help validate mid-risk documents because they understand the organization's intent and the target language's cultural nuances.
For example, an employee based in Canada who speaks fluent Brazilian Portuguese can review step-by-step guidelines for accuracy, tone, inconsistent terminology, and awkward literal translations.
However, being based outside Brazil, they may miss locally preferred expressions or subtler cultural nuances.
2. Have a local employee review content

An employee who speaks the target language and lives in the target country (e.g., a Brazilian Portuguese native speaker who lives in Brazil) can flag language that sounds awkward, overly formal, or unnatural to local employees. However, because they're immersed in the linguistic and cultural norms of the target country rather than the source language, they may miss subtle departures from the original meaning.
3. Ask a trusted community source

A friend or family member who speaks the target language can provide feedback on mid-risk documents, specifically on confusing or complex language, awkward phrasing, and whether instructions are clear from an outsider's perspective.
However, they lack the organization-specific knowledge needed to ensure content aligns with the businessâs needs and training intent. They may also fail to pick up on nuances of the target language and cultural references.
High-risk documents

High-risk documents are materials where translation errors could create serious problems, including legal, safety, compliance, or financial risks.

Examples include:

Compliance courses
Workplace safety or regulatory training modules
Legal documents

These documents require professional translators or validators to ensure accuracy, maintain regulatory compliance, and protect learners and businesses from risk.
High-risk documents: When to use a professional validator

Consider using a professional translator or validator when handling high-risk content like compliance course content, legal documents, and workplace safety modules.
Technically inaccurate or culturally misaligned translations of these content types can lead to legal, safety, and compliance issues that put your organization and employees at risk.
For example, say the original version of the organizationâs mandatory data privacy and security training course forbids employees from sharing customersâ personal information outside of the company without written consent. But the Japanese translation erroneously allows this.
Employees sharing sensitive data could pose security risks to customers, and the company could also face severe backlash and costly regulatory fines.
A professional translator helps prevent these problems by ensuring mandatory instructions, policies, and safety procedures are translated correctly and that content aligns with local regulations like the GDPR for EU and UK-based employees.
Pros and cons of linguistic validation methods

Now that you have a list of validation options, let's compare them by considering the pros and cons of each.

Free online translation tools

Pros:
Best for low-risk content (e.g., course introductions, announcements, optional resources)
Free and fast

Cons:
Lacks context and cultural nuance
Grammatical or phrasing issues

Reverse translation

Pros:
Best for low-risk content
Helps surface meaning drift and missing information

Cons:
Time-consuming
Requires using multiple translation tools
Lacks context and cultural nuance

Native-speaking colleague

Pros:
Best for mid-risk content (e.g., procedural guidelines, internal playbooks, messaging frameworks)
Fluent in source language and target language
Deep understanding of organization tone and terminology
Can review accuracy, tone, terminology, and awkward phrasing

Cons:
May miss local phrasing or cultural nuance

Local employee

Pros:
Best for mid-risk content
Deep understanding of local language and workplace norms
Can validate tone, clarity, and natural/local usage

Cons:
May miss subtle misalignment with original source language

Trusted community source

Pros:
Best for mid-risk content
Can validate confusing or complex language and awkward phrasing

Cons:
Lacks organizational knowledge and training context
May miss professional or industry-specific nuance
May miss target language nuance

Professional translator or validator

Pros:
Best for high-risk content (e.g., compliance courses, legal documents, workplace safety modules)
High accuracy and cultural alignment
Ensures consistency across programs

Cons:
Higher cost
Longer turnaround time

When choosing a translation validation method, think about your content's risk level first. Then factor in which free or low-cost resources you have available that will deliver the quality you need to avoid major translation errors.
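The risk-first decision process above can be expressed as a small lookup table. The document types and tier assignments follow the article's examples; the function itself, and its conservative default for unknown document types, are an illustrative sketch rather than a prescribed implementation.

```python
# Validation methods recommended per risk tier, following the article's framework.
VALIDATION_METHODS = {
    "low": ["free online translation tools", "reverse translation"],
    "mid": ["fluent colleague", "local employee", "trusted community source"],
    "high": ["professional translator or validator"],
}

# Example document types mapped to risk tiers (from the article's examples).
DOCUMENT_RISK = {
    "course welcome page": "low",
    "module introduction": "low",
    "procedural guideline": "mid",
    "internal playbook": "mid",
    "compliance course": "high",
    "legal document": "high",
}

def recommend_validation(document_type):
    """Return (risk tier, validation options); unknown types default to high risk."""
    risk = DOCUMENT_RISK.get(document_type, "high")  # be conservative when unsure
    return risk, VALIDATION_METHODS[risk]
```

Defaulting unknown document types to the high-risk tier mirrors the article's advice: when compliance or safety might be involved, err on the side of professional review.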
What to look for in a quality translation

A quality translation isn't just technically accurate. It also captures the original meaning of the source content, aligns with organizational goals, and fits the workplace training context.
Here are five things to look for when verifying translation quality:
Preservation of original meaning: Make sure the translated course or content captures the intended meaning of the source material, including tone, context, style, and cultural references.
Adherence to rules for grammar and mechanics: Check that the grammar, spelling, and sentence structure are correct and consistent in the target language.
Alignment with company brand and training goals: Translation quality also depends on how well the content aligns with your organization's brand voice, audience, and training goals.
Workplace training and e-learning relevance: The translated content should use language, examples, and references that align with e-learning and workplace training.
Consistent terminology usage: Quality translation ensures specific words and phrases are the same in every language to prevent confusion. This is easier when you have a custom translation glossary, as is the case with Articulate Localization, a localization solution embedded in Articulate's course authoring platform.

With a top-notch translation, you save hours on rewrites, boost learner confidence, and ensure a better learning experience for your global workforce.
Validate training with the right resources

Just because you don't have access to a professional validator or linguist doesn't mean your goal of translating course content is unattainable. Quite the opposite, actually.
With the right resources, you can ensure your training course or program delivers essential information to learners in a way that makes sense to them linguistically and culturally.
Knowing which resources to use when makes the process even easier. While generic tools work for low-risk content like module introductions, mid- and high-risk documents require more robust options, especially when compliance and safety are concerned.
So weigh your options carefully. And should you reach the point of needing a professional validator, check out our blog post on how to find the right one."
https://www.articulate.com/blog/translation-validation-tips-when-you-dont-have-a-professional-validator/ #Metaglossia #metaglossia_mundus #métaglossie
"Tilde, the language technology company from Latvia, has adapted its large language model TildeOpen LLM for translation and integrated it into a machine translation platform that provides reliable high-quality translations into 34âŻEuropean languages.Â
Â
Until now, the model was mainly a significant scientific achievement in the development of artificial intelligence for European languages, but it had not yet been adapted for everyday use by a wider audience. Now, it is available to the public for both private translation needs and daily work.
Â
Starting today, anyone can use the translation platform, which provides exceptionally high-quality and secure translation into 34 European languages, including Latvian, Lithuanian, and Estonian, with accurate use of terminology and more natural, fluent sentences, reducing the post-editing workload for machine-translated texts.
Â
TildeOpen provides quality that is competitive with much larger global models, such as ChatGPT-4.1, even though it is about 60 times smaller. Detailed results of the comparative tests are available in TildeBench, a ranking of large language models.
Â
Organisations can deploy TildeOpen on premises or in Europe-based clouds, thus maintaining full control of their data. Unlike many global AI solutions, the data is never transferred outside Europe. This is especially important for public bodies and enterprises that handle sensitive information. At the same time, the model can be customised to suit individual needs, thus providing particularly accurate and reliable translations.
Â
TildeOpen was published as an open-source foundational model for European languages on the Hugging Face platform in autumn 2025. It was developed in Tilde's research laboratory on behalf of the European Commission. The model has 30 billion parameters and is trained on hundreds of billions of words in European languages, including 29 billion Latvian text units. This is the largest known amount of data used in the development of Latvian artificial intelligence. The model was developed after winning the Large AI Grand Challenge contest organised by the European Commission, using the LUMI supercomputer in Finland."
6 March, 2026
Avots: Press release
https://labsoflatvia.com/en/news/tildes-artificial-intelligence-marks-a-new-era-for-translation-in-european-languages
#MetaglossiaÂ
#metaglossia_mundusÂ
#mĂ©taglossieÂ