Scooped by
Charles Tiayon
November 4, 2011 10:25 PM
Accessibility will be provided through audio description services for visually impaired viewers and Libras (Brazilian Sign Language) interpretation for hearing-impaired viewers at feature-film screenings.
Are humans the only beings on the planet that use language to communicate?
"Burg Giebichenstein
Kunsthochschule Halle
“Language can only deal meaningfully with a special, restricted segment of reality. The rest, and it is presumably the much larger part, is silence.” George Steiner
Are humans the only beings on the planet that use language to communicate? Can we decipher the nonhuman world around us without harnessing it to our own socialization, syntax, and lexicon? Is interspecies communication even possible? Translation has been described as a precondition that underlies all (human) cultural transactions upon which communication is based. It is also inherently political and stands at the forefront of so many of today’s questions around identity, gender, post-colonial criticism, feminist critique, machine translation and canon creation, yet its connection to the nonhuman turn, interspecies communication, and eco-criticism has not yet been fully explored.
Whether we are talking about classic linguistic and literary translation, or any number of related fields including language and literature, cultural studies, performance, and the visual and media arts, the core question that translators and theorists of translation have been debating for centuries remains the same: is it possible to translate without interpreting? Is linguistic and cultural equivalence even possible? These questions become all the more urgent in the limit-case of interspecies communication. Can we apply empathic modes of translation to nonhuman articulations, wherein translation involves a form of metamorphosis, not of text, but of the translator? As such, translators are something of a hybrid species, with one foot in each culture and language, whose very existence revolves around traveling between worlds. There is something of the mythical being about translators, akin to a chameleon or centaur. In this course, we will not be engaging in a scientific exploration of interspecies communication, but examining theories around empathic translation: a process that sees translation not merely as the transformation of a text, but of the translator themself.
Emerging and classical theories of translation can offer a paradigm for engaging with plant and animal articulation, not language as such, but different forms of articulation perceived through the senses, one in which our hearing and seeing, “once intertwined and attentive to the calls and cries of animals, all but disappeared with the invention of the alphabet, retreating into a kind of silence.”
In David Abram's words: “By giving primacy to perception we can see the natural world, not as inert and passive, but as dynamic and participatory. The winds, rivers and birds speak in their own way (if we listen), the sounds of nature not only have informed indigenous languages, but language in general--humans are but one being intertwined with other beings and ‘presences.’ This perspective sees the landscape as a sensuous field, and human perception as but one point of view that is in reciprocity, in expressive communication, with other points of view and ways of being.”
How can theories of translation help us make sense of this new view of a world teeming with language and sentience? What theories abound in reference to the multiplicity of “language,” even as Walter Benjamin would argue for a “universal (human) language”? What practical tools does translation studies offer, and what bridges can it forge between the disciplines? The first half of the seminar focuses on key theoretical concepts relevant to the history and practice of translation. In the second half, students will engage in translation experiments that intersect with their own artistic/design practice. A final project should be considered a first draft of something that could develop later into a larger project.
The course will be taught in English and German.
This seminar is ideally suited to students interested in: Literature, Translation Theory / Translation / Cultural Studies / Critical Theory, Creative Writing / Post-humanism, Trans-humanism, Eco-criticism, the More-than-Human Turn.
Teachers
Dr. Zaia Alexander"
https://www.burg-halle.de/en/course/l/talk-with-the-animals-translation-in-a-more-than-human-world
#Metaglossia
#metaglossia_mundus
#métaglossie
"Published: March 11, 2026 12.16am SAST
Isabel Tello Fons, Universitat de València
“–Tú también te enojarías si tuvieras una peluca como la mía —prosiguió el Avispón–. Se meten con uno, y uno, que no le gusta que le tomen la ‘peluca’, pues se enfada… ¡natural! Y entonces es cuando me entra la murria, me arrebujo debajo de un árbol y me quedo tieso de frío. Y, para aliviarme, cojo un pañuelo amarillo y me lo ato alrededor de la cara… ¡Oséase, como ahora! ¡Natural!”.
This is how Ramón Buckley rendered the voice of the Wasp in A través del Espejo y lo que Alicia encontró allí (Through the Looking-Glass, and What Alice Found There), by Lewis Carroll. The original recreates London cockney dialect, closely tied to the working class, which Buckley transformed into a colloquial Madrid dialect, preserving the whiny, coarse tone of the character in Carroll's work:
“You’d be cross too, if you’d a wig like mine,” the Wasp went on. “They jokes, at one. And they worrits one. And then I gets cross. And I gets cold. And I gets under a tree. And I gets a yellow handkerchief. And I ties up my face –as at the present”.
When we read a translated novel, we are not just following a story: we are hearing voices. Voices that reveal who the characters are, where they come from and what place they occupy in their community. But what happens to those voices when they pass from one language to another? How do you translate the dialects, accents, rhythms and registers that form part of characters' deep identity? Addressing these questions is one of the most complex and least visible challenges of literature.
Voices that matter
The way characters "speak," what we call linguistic variation, covers features as different as local vocabulary, slang, expressions specific to a community, forms of an older stage of the language or particular ways of building sentences. These features are not ornaments; they are characterisation devices that serve important narrative and stylistic functions.
The dialect of a place may serve to assert an identity; a rural accent may convey humour, tenderness or hierarchy; youth slang may signal closeness or belonging to a group; and period speech places the reader in another era. If these voices disappear in translation, the character becomes flatter and the story loses part of its original texture.
For example, in The Adventures of Huckleberry Finn, Mark Twain distinguished his characters through seven different dialects, and in Oliver Twist, Dickens used the argot of thieves and ruffians to render the speech of the London underworld.
No direct equivalents
One of the greatest challenges of literary translation is that dialects are not interchangeable. There is no Spanish "equivalent" of the English of the American South, nor a dialect here that corresponds exactly to that of Liverpool. Each linguistic variety is anchored in its territory, history and social context.
That is why a literal translation of a foreign dialect would sound strange or even comical. And if we swapped an English dialect for a real Spanish one, we would turn Huckleberry into an Andalusian, Canarian or Mexican boy and manipulate his original identity. Yet if that way of speaking is ignored and rendered in the standard language, his linguistic personality is lost.
Literary translation seeks equivalent effects: the reader should perceive the same social and emotional nuance as someone reading the original, even if different resources are used to achieve it.
The most human kind of translation
The literary translator's task is not mechanical; it is an exercise in listening and interpretation. The translator asks questions such as: what effect does this voice produce on the reader of the original, which linguistic features can achieve that effect in the translation, and how far should a variety be marked at all.
Sometimes the best solution is not to aim at a specific dialect but to use a register slightly removed from the standard language, hinting at a social origin without culturally displacing the character. At other times a lexical feature or a grammatical structure is enough to recreate the atmosphere.
Every decision requires judgement and responsibility. Literature represents real social groups, and treating them with respect demands an ethical eye.
As I have found in my research (forthcoming), that ethical eye is something AI does not, for now, possess. AI does not "understand" the social implications of the way a character speaks. It does not know when a dialect conveys marginalisation or when it marks social hierarchy. It works by detecting statistical patterns, not human intentions.
When it is asked to translate non-standard voices, there tend to be two outcomes. Either the translated text comes out "clean," and a character who spoke with a local accent ends up speaking normatively, so their personality is diluted; or the AI imitates dialect markers but mixes incompatible slangs or distorts words arbitrarily. This creates unwanted stereotypes, in other words caricatures.
So, compared with the reflection and meticulousness that translating linguistic variation demands, AI produces quick answers that do not yet have the sensitivity needed to handle ambiguity, irony or cultural allusion.
Why we need decisions
Tools such as AI can be very useful in the preliminary and complementary phases of translation, because they allow us to find information quickly, compare real usage across large corpora and identify patterns of style. But if they tend to even out voices, they will also even out experiences. Used without oversight, they will cost us linguistic diversity and, with it, human diversity.
Linguistic varieties are not merely deviations from the standard: they are often minority or minoritised languages, vulnerable or at risk. Protecting them helps preserve our cultural heritage and a valuable plurality.
For these voices to reach the reader without losing their identity, someone has to listen to them and recreate them. That is an essentially human task. And that is why, every time a literary translation lets us hear a different world, we are also saving a part of our cultural diversity.
A version of this article was published in Telos, the magazine of Fundación Telefónica."
https://theconversation.com/la-ia-puede-traducir-palabras-pero-no-voces-narrativas-275284
#Metaglossia
#metaglossia_mundus
#métaglossie
"In late 2025, generative AI crossed another critical threshold. Following GPT-5.1 in November, OpenAI released GPT-5.2 on 11 December — a model designed to generate adaptive, discipline-specific academic prose with fewer stylistic traces and greater structural variation. For universities, the concern was immediate: if AI can write fluently, unpredictably, and in discipline-appropriate academic language, does detectability still hold?
Early results show that it does.
How StrikePlagiarism responds to GPT-5.2
The release of GPT-5.2 reinforced a broader challenge facing higher education: AI development now outpaces institutional policy cycles. For StrikePlagiarism, this moment required immediate empirical validation rather than theoretical assumptions.
Within days of GPT-5.2 entering academic use, StrikePlagiarism.com was tested against newly generated and paraphrased GPT-5.2 texts under realistic academic conditions. The results were unambiguous:
Over 97% detection accuracy across GPT-5.2 outputs
False results below 1%, preserving academic fairness
Consistent performance after paraphrasing and stylistic diversification
Rather than relying on surface-level markers, StrikePlagiarism.com analysed behavioural consistency across longer academic texts — identifying patterns that remain statistically improbable in authentic student work. Reports delivered probability-based, side-by-side comparisons, providing educators with interpretable evidence rather than automated verdicts.
Why GPT-5.2 remains detectable
GPT-5.2 demonstrates strong control over academic conventions and avoids obvious repetition. However, analysis across extended submissions consistently revealed:
non-random reasoning structures,
unusually uniform transitions between claims,
absence of natural cognitive drift.
Individually, these signals are subtle. Taken together, they form a measurable behavioural profile. Detection no longer depends on awkward phrasing or stylistic errors, but on identifying improbably stable reasoning across complex texts. Fluency improves — invisibility does not.
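The vendor does not publish its scoring method, so the following is only a toy illustration, not StrikePlagiarism.com's algorithm: one crude way to quantify "unusually uniform transitions between claims" is to measure lexical overlap between consecutive sentences and check how little it varies across a text. The function name and the sample text below are invented for the example.

```python
import re
from statistics import mean, pstdev

def transition_profile(text: str) -> dict:
    """Toy 'behavioural profile': lexical overlap between consecutive sentences.

    A high mean overlap with a very low spread is one simplistic proxy for
    'improbably stable' transitions. Purely illustrative, not a detector.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    bags = [set(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    overlaps = []
    for a, b in zip(bags, bags[1:]):
        union = a | b
        overlaps.append(len(a & b) / len(union) if union else 0.0)
    if not overlaps:
        return {"mean_overlap": 0.0, "spread": 0.0}
    return {"mean_overlap": round(mean(overlaps), 3),
            "spread": round(pstdev(overlaps), 3)}

print(transition_profile(
    "The method is efficient. The method is efficient and scalable. "
    "The method is efficient, scalable and robust."
))
```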
Core advantages of StrikePlagiarism.com’s AI detection approach
StrikePlagiarism.com was designed to support institutions operating at scale, across disciplines and languages:
Multilingual AI-content detection at scale
AI-generated content is detected across 100+ languages, enabling consistent integrity standards in international and multilingual academic environments.
Proven accuracy against advanced generative models
Detection accuracy exceeds 97%, including paraphrased and stylistically diversified GPT-5.2 texts — demonstrating reliability under real academic conditions.
Ultra-low false-positive rates
False results remain below 1%, protecting students from incorrect attribution and ensuring that detection strength never compromises fairness.
Why AI detection is critical right now
GPT-5.2 makes one reality clear: the primary risk for universities is no longer obvious AI misuse, but large volumes of academically convincing AI-generated work entering assessment unnoticed. This is not a future concern — it is a present operational challenge.
StrikePlagiarism addresses this challenge at an institutional level. By combining high-accuracy AI behaviour analysis with transparent, probability-based reporting, StrikePlagiarism.com enables universities to respond now, not retrospectively. When academic decisions must be defensible at the moment they are made, evidence-based AI detection becomes essential infrastructure rather than an optional safeguard."
97% accuracy against GPT-5.2: inside StrikePlagiarism.com’s detection results | THE Campus Learn, Share, Connect
https://share.google/hA12nxsAaMGdPGqDX
#Metaglossia
#metaglossia_mundus
#métaglossie
"Since the start of this year’s Amar Ekushey Book Fair, readers and publishers have noticed a rise in Bengali translations of world literature classics, with the growing popularity of these works clearly reflected in readers’ enthusiastic response. Readers who feel less comfortable in studying English texts but have immense interests to enjoy the taste of literary works from diverse cultures and languages, transcending national boundaries, are searching for and purchasing translation works, said the publishers. According to them, they have published more Bengali translations of classics from other languages responding to the demand of readers, but translation of classics of Bangla literature in foreign languages has not increased in the expected way. Baatighar always brings a good number of translations. Publisher of Baatighar, Dipankar Das, said, “There is always a demand for translated literature. It will increase further. One may not read English comfortably, but has a penchant for world literature. Translation helps them get the taste of world literature.” There are considerable allegations about the quality of many newly published translations. With the increasing popularity of translated books, the number of substandard translations is also increasing. Responding to this complaint, Dipankar said if a reader does not understand the translation, then questions can be raised about quality. Baatighar, however, publishes books by ensuring quality. Salesperson of Seba Prokashoni, Azizul Hakim, said the sale of translated books has been going well for the last several years. “Translations and novels are our bestsellers. A big portion of our new books this time is translations. Average sales of these translated copies are getting better every day,” he said. Small or big, almost all publishing houses are bringing translations of novels, thrillers, detective series, biographies, theories and historical books of world-famous authors into Bangla. Also, many translated books are published without the permission of the original authors. As a result, the editing of these books is not done properly.
Translator Mostaq Sharif has translated “After Dark” by Japanese author Haruki Murakami. According to him, translated literature is enjoying a good period as demand for such works grows steadily. He said, “Good to see that translation works are getting a good response from readers. It indicates the changing taste of readers. But the substandard works of translation are deceiving the bookworms. Publishers and readers need to be careful while selecting and purchasing books.” On the 12th day of the book fair, a discussion programme titled “Shahidullah Kaiser” was held at the main stage at 3pm, with Syed Azizul Haque Chowdhury in the chair."
https://www.daily-sun.com/metropolis/862418
#Metaglossia
#metaglossia_mundus
#métaglossie
"Grants and Prizes for Promoting Italian Books and Translations (New Zealand)
UPCOMING GRANTS IN MARCH 2026!
Deadline: 31-Mar-2026
The Ministry of Foreign Affairs and International Cooperation offers prizes and grants to promote Italian language and culture abroad through literary translations, scientific works, and audiovisual productions. Each prize is valued at €5,000, targeting high-quality translations, publications, dubbing, or subtitling of works created or published since January 2025. Eligible applicants include publishers, translators, production companies, and cultural institutions, with applications due by 31 March 2026.
Overview
This initiative aims to strengthen the global dissemination of Italian culture by supporting:
Translation and publication of Italian literary and scientific works into foreign languages
Production, dubbing, and subtitling of Italian short and feature films, as well as television series
Promotion of contemporary Italian literature and audiovisual content
Expansion of cultural exchange and international reach
The program ensures that both literary and audiovisual works maintain high-quality standards and reach wider international audiences.
Prize Details
Maximum number of prizes for 2026: 10
Prize value: €5,000 each
Language distribution:
Spanish: 5 prizes
Arabic: 1 prize
Chinese: 1 prize
French: 1 prize
English: 1 prize
German: 1 prize
Eligible Works
Literary and scientific works (including e-books) translated and published in a foreign language on or after 1 January 2025
Audiovisual productions (short/feature films, TV series) produced, dubbed, or subtitled on or after 1 January 2025"
https://www2.fundsforngos.org/arts-culture/grants-and-prizes-for-promoting-italian-books-and-translations-new-zealand/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Zoom announced on Tuesday, March 10, that it is bringing real-time audio translation to Zoom Meetings, allowing users to understand speakers in different languages during calls. The video communications platform also unveiled a new feature aimed at detecting synthetic audio or video in Zoom Meetings.
The new features coming to Zoom Meetings are among a handful of new AI-powered capabilities coming to Zoom’s enterprise-grade offerings, including Zoom Workplace, Zoom Phone, and Zoom CX.
The live voice translation feature will let Zoom users speak in their native language while others on the call can hear the translated speech in their preferred language in real-time. The feature is currently available in five languages, with support for more languages coming soon.
Zoom first gained widespread recognition during the COVID-19 pandemic, when remote work and virtual meetings became the norm. Since then, the company has looked to become more than a video conferencing platform by launching AI-powered productivity tools and customer support products for enterprises. It has sought to define its competitive edge in the crowded AI industry by taking a federated approach where multiple AI models, including Zoom’s own models and those from OpenAI, Anthropic, and Meta, are dynamically selected to provide cost-effective solutions.
The company’s new on-call deepfake risk detection feature arrives as AI-driven online scams continue to surge. It could play a key role in protecting users from ‘digital arrest’ scams, many of which rely on deceptive video calls to trick victims.
“The next phase of enterprise AI will be defined by the ability to move from conversation to action. Zoom’s agentic AI platform is designed to orchestrate action across systems, turning every meeting, call, and customer interaction into a trigger for workflow automation,” said Velchamy Sankarlingam, president of Product & Engineering at Zoom.
Alongside these Zoom Meeting features, the company also introduced a suite of AI-powered office apps such as AI Docs, Slides, and Sheets that can be used to generate document drafts, spreadsheets with data, or presentations based on meeting transcripts and data from other services.
Zoom further said that AI avatars, announced last year, will start becoming available to users later this month. The feature lets users create photo-realistic, AI-generated avatars of themselves to appear in online meetings on their behalf. Zoom’s AI Companion 3.0, its latest AI assistant on the web unveiled last year, will soon be accessible through a desktop app. The AI assistant is also being integrated across the Zoom Workplace app, Zoom Business Services, and Workvivo, its app for employee communication.
In addition, AI Companion 3.0 can be integrated with third-party platforms such as Slack, ServiceNow, Box, Google Drive, and OneDrive, enabling the AI assistant to synthesise enterprise data across applications and provide insights from multiple data sources.
Amid the rising popularity of AI agents, Zoom is letting users create and deploy custom as well as pre-built AI agents through no-code, natural language prompts. These custom AI agents can act on users’ behalf to automate workflows across third-party systems such as Salesforce, Slack, and ServiceNow, the company said.
For developers, Zoom announced a new suite of enterprise‑grade AI APIs which can be used to build apps that leverage the transcription, translation, summarization, deep reasoning, and image‑processing technologies powering Zoom’s own products.
In Zoom Phone, the company is rolling out agentic workflows that help enterprise clients automatically execute tasks such as drafting emails or sending out summaries. It is also adding new SMS capabilities for the 24/7 virtual receptionist to handle customer engagements via text, answer questions, collect information, support scheduling flows, and escalate to a human when needed."
https://indianexpress.com/article/technology/tech-news-technology/zoom-new-live-voice-translation-deepfake-detection-video-calls-10575835/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Posted inOnondaga County
Onondaga County begins using AI to translate, transcribe and summarize 911 calls
County officials say the technology might limit burnout among call takers, but AI researchers are skeptical.
by Laura Robertson
March 11, 2026
Onondaga County’s 911 center recently began using artificial intelligence technology to assist with calls. The technology allows for live transcription, location, and call summarization, and will in the future provide live translation, according to county executive spokesperson Justin Sayles.
The county will spend $350,000 this year on the technology, said Sayles, and the county will consider annual renewals of the product. The funding was approved as part of the Onondaga County Department of Emergency Communications’ 2026 budget.
The technology was developed by Prepared, a company that makes AI products.
The county’s 911 center has a significant staffing shortage, according to Sayles and Emergency Communications Commissioner Julie Corn. County officials hope AI will help prevent burnout.
“911 call-takers take call after call, day after day with the stress that there is zero room for error,” said Sayles. “Having tools that aid in their success and support them to be their best is one way we can limit burnout.”
While the county expects the technology to help with burnout, some experts are more skeptical about AI’s utility for 911 call takers. Concerns range from AI’s ability to triage emergency calls to its ability to effectively translate for non-English speaking callers.
In an October presentation to the public safety committee, Corn said that the use of the technology “falls right in line with the county executive’s initiative to have AI programs in his vision.”
The AI 911 system works like this: Non-emergency phone calls — those that come into the 911 center on ten-digit numbers — will be transferred directly to an AI bot. If “key emergency words or scenarios” are mentioned, the AI bot is trained to transfer the call back to a human, said Sayles. He added that other calls would be determined to be non-emergencies “in the same way they are now” and transferred to the bot.
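The county has not published the bot's actual rules, so the snippet below is only a hedged sketch of the routing policy described in this paragraph: ten-digit non-emergency calls go to a bot, and any mention of key emergency words sends the call back to a human. The keyword list and function name are hypothetical, not Prepared's real implementation.

```python
# Hypothetical sketch of the keyword-escalation routing described above.
# Prepared's real logic, keywords and interfaces are not public.
EMERGENCY_KEYWORDS = {"fire", "gun", "shooting", "bleeding", "unconscious",
                      "overdose", "chest pain", "not breathing"}

def route_call(is_ten_digit_line: bool, transcript: str) -> str:
    """Return 'human' or 'bot' for an incoming call under the described policy."""
    if not is_ten_digit_line:
        return "human"   # true 911 calls always reach a human call taker
    text = transcript.lower()
    if any(keyword in text for keyword in EMERGENCY_KEYWORDS):
        return "human"   # key emergency words trigger a transfer back
    return "bot"         # otherwise the non-emergency bot handles the call

print(route_call(True, "My neighbor's dog keeps barking all night"))  # -> bot
print(route_call(True, "There is smoke and a fire in my kitchen"))    # -> human
```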
Prepared will provide a live transcription of emergency 911 calls, but a human will write the messages sent to emergency responders. Call takers will still be expected to take notes on their calls. The call recordings, notes and AI-drafted transcript will all be saved separately and will be able to be compared, Sayles said. Only the notes will be able to be edited, he said.
Ben Winters, the director of AI and data privacy at the Consumer Federation of America, said he would be “very worried” about the potential for false negatives if a bot were to triage emergency calls.
Winters also said AI is not equally good at all forms of transcription. When people are rushed, crying, using headphones or on speaker, it is more likely to miss what is being said. He added that 911 callers might not feel comfortable sharing exactly what is going on, and that call-takers are trained to try to get needed information from callers.
Winters said the redundancy in record keeping was good but questioned when the AI transcription might actually be used.
“What is the record that they go with?” he asked. “What are the ones they report and act on?”
Onondaga County is also linguistically diverse. As of 2024, the most commonly spoken languages among people with low English proficiency include Ukrainian, Nepali and Burmese. Sayles said that the technology would be able to translate all these languages.
But the AI tools powering translation are not always representative of how native speakers speak, said Aliya Bhatia, a policy analyst at the Center for Democracy and Technology who researches multilingual AI. The AI powering translation services may have fewer natively-created digitized examples of some languages to train on, she said, which could mean that translations might feature overly complicated or outdated words that people do not use regularly and might not understand.
Bhatia gave an example: If AI translates the English word “vaccine” to an equivalent of the older “inoculation,” the listener might not understand those words. She added that translation tools sometimes even make up new words when they don’t know the correct one.
Translation tools should be developed and evaluated with local language speakers and community-based organizations, Bhatia said.
“AI-based translation tools may come in handy when we need legible translations in a pinch but we shouldn’t confuse them as capable of the fluent, nuanced, and accurate translations people need when they are seeking emergency and life-saving services,” said Bhatia.
The county currently translates calls using Voiance. The program uses live interpreters. The county will still contract with Voiance, said Sayles. In the future, a bot will likely translate live, but in the meantime, the county will work to ensure the change in translation methods doesn’t leave service gaps.
Prepared is used in other 911 departments across the country, including in Baltimore. Sayles said Onondaga County had “solid working relationships” with other counties using Prepared. So far, those counties have raised no concerns, he said.
Prepared was recently purchased by Axon Enterprise, a company that develops technology for the military and police.
In a press release shortly after the acquisition, Axon boosted Prepared as a means of “owning the first 120 seconds” of an emergency call. Axon believes the technology could help supervisors to see “risk patterns and coaching opportunities” in their callers, the press release said.
One of Axon’s other AI products, the controversial Draft One generative AI police report writer, has been accused of being “designed to defy transparency” by the nonprofit Electronic Frontier Foundation.
Sayles did not directly say whether the county would integrate other Axon AI programs — like Draft One — into its operations but said the county is “constantly evaluating opportunities to integrate AI” into county operations.
“The AI technology continues to improve,” said Sayles. “At the end of the day, the call-taker is still fully trained in listening to calls and capturing and translating information for dispatch.”"
https://centralcurrent.org/onondaga-county-begins-using-ai-to-translate-transcribe-and-summarize-911-calls/
#Metaglossia
#metaglossia_mundus
#métaglossie
"...Bien que l'Afrique abrite plus de 2 000 langues, la grande majorité des modèles d'apprentissage automatique qui alimentent les systèmes d'IA actuels sont principalement entraînés sur l'anglais, le mandarin et quelques autres langues dominantes à l'échelle mondiale. Pour des millions de personnes sur le continent, cela signifie que la prochaine génération d'outils numériques risque de rester inaccessible.
Le Nigeria franchit aujourd'hui une étape majeure pour changer cela.
L'espace Agence nationale de développement des technologies de l'information (NITDA) a conclu un partenariat avec NKENNEAi, une plateforme d'intelligence artificielle pour les langues africaines, afin d'accélérer le développement d'infrastructures conçues spécifiquement pour les langues africaines.
Cette collaboration vise à développer des technologies de traduction et de langage évolutives, capables de soutenir les services gouvernementaux, les systèmes de santé, les plateformes financières et les applications numériques pour l'ensemble de la population multilingue du Nigéria.
Avec plus de 500 langues parlées au Nigéria, la langue demeure l'un des principaux obstacles à l'inclusion numérique et à l'accès aux technologies. Le partenariat NKENNEAi–NITDA vise à combler cette lacune en développant des systèmes d'IA spécifiquement entraînés sur les langues africaines et leurs structures tonales.
De l'apprentissage des langues à l'infrastructure africaine de l'IA
NKENNEAi est né de NKENNE, l'une des plateformes d'apprentissage des langues africaines à la croissance la plus rapide.
Fondée dans le but de préserver et d'enseigner les langues africaines, NKENNE est devenue une plateforme mondiale comptant plus de 400 000 utilisateurs apprenant des langues telles que l'igbo, le yoruba, le swahili, le haoussa, le twi, le somali et le pidgin nigérian.
À mesure que la plateforme s'est développée, son corpus croissant de données textuelles et vocales en langues africaines a jeté les bases de quelque chose de bien plus vaste : le développement de systèmes d'intelligence artificielle capables de comprendre les langues africaines à grande échelle.
Ces recherches ont mené à la création de NKENNEAi, une plateforme d'IA multilingue axée sur la construction de l'infrastructure nécessaire à l'intelligence artificielle des langues africaines.
Michael Odokara-Okigbo, PDG de NKENNEAi, a déclaré que la croissance de NKENNE révélait une opportunité bien plus vaste : bâtir les fondements technologiques de l’IA pour les langues africaines. « NKENNE a débuté comme une mission culturelle visant à préserver et à enseigner les langues africaines », a expliqué M. Odokara-Okigbo. « Alors que notre communauté s’étendait à des centaines de milliers d’apprenants, nous avons réalisé que les données et les connaissances linguistiques que nous développions pouvaient alimenter un projet d’une tout autre envergure. NKENNEAi a pour objectif de construire l’infrastructure qui permettra aux langues africaines d’exister, de se développer et de prospérer au sein des systèmes d’intelligence artificielle. »
Une approche différente pour l'entraînement de l'IA aux langues africaines
La plupart des modèles d'IA mondiaux ont des difficultés avec les langues africaines car ils ne disposent pas des données d'entraînement et des cadres linguistiques nécessaires pour comprendre le ton, les variations dialectales et le sens contextuel.
NKENNEAi développe une approche différente.
L'entreprise développe des chaînes de traitement de données spécialisées, des systèmes d'annotation linguistique et des ensembles de données vocales conçus spécifiquement pour les langues africaines, permettant la création de modèles d'apprentissage automatique qui capturent avec précision le sens tonal et les nuances dialectales.
Cette méthodologie comprend des ensembles de données de phrases bilingues à grande échelle pour la traduction automatique, des ensembles de données vocales annotées pour les systèmes de transcription vocale, un étiquetage linguistique sensible au ton qui préserve le sens à travers les dialectes et une validation linguistique menée par la communauté avec des locuteurs natifs.
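NKENNEAi has not published its data formats, so the kind of tone-aware bilingual record described above can only be illustrated with a small, purely hypothetical sketch in Python; the schema, field names and example sentence pair below are invented and are not NKENNEAi's actual pipeline.

```python
# Hypothetical record structure for a tone-annotated bilingual sentence pair.
# NKENNEAi's real schema is not public; this only illustrates the idea of keeping
# translation, dialect and tone-validation information together for training.
from dataclasses import dataclass, field

@dataclass
class BilingualRecord:
    source_lang: str                    # e.g. "yo" for Yoruba
    target_lang: str                    # e.g. "en" for English
    source_text: str                    # sentence with tone diacritics preserved
    target_text: str                    # reference translation
    dialect: str = "standard"           # dialect or regional label
    tone_marks_verified: bool = False   # checked by a native speaker
    notes: list = field(default_factory=list)

record = BilingualRecord(
    source_lang="yo",
    target_lang="en",
    source_text="Báwo ni?",             # invented example pair
    target_text="How are you?",
    tone_marks_verified=True,
)
print(record)
```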
By combining linguistic expertise with machine learning infrastructure, NKENNEAi is developing tone-aware AI models capable of understanding African languages far more accurately than traditional translation systems.
The platform supports technologies such as text-to-text AI machine translation, speech-to-text transcription, text-to-speech synthesis and multilingual AI APIs for developers and businesses.
These tools allow startups, governments and companies to integrate African-language support directly into their digital platforms.
The system currently focuses on languages such as Yoruba, Igbo, Hausa, Swahili and Nigerian Pidgin, with plans to extend coverage to other African languages.
Backed by global research support
NKENNEAi's development has also benefited from international research funding, including several grants from the US National Science Foundation (NSF).
Under the NSF's Small Business Innovation Research (SBIR) program, funding was awarded to ESM Global Productions, the company behind NKENNEAi, to advance the development of a multilingual AI translation platform for African languages.
In 2024, the company received a $1 million NSF Phase II grant to scale its African-language translation API and continue developing speech and language models designed specifically for tonal languages.
This work supports the development of multilingual translation models for African languages, speech-to-text systems trained on African speech datasets, text-to-speech voice models and scalable APIs that allow African languages to be integrated into digital platforms.
Together, these efforts are contributing to one of the largest structured datasets and AI training pipelines focused specifically on African languages.
Aligned with Nigeria's national AI ambitions
The partnership with NITDA fits within Nigeria's broader digital strategy, led by the Federal Ministry of Communications, Innovation and Digital Economy under Dr Bosun Tijani, Nigeria's Minister of Communications, Innovation and Digital Economy.
Before joining government, Tijani co-founded Co-creation Hub (CcHUB), one of Africa's most influential technology innovation hubs. His ministry has driven initiatives to position Nigeria as a global hub for artificial intelligence and digital innovation.
Programs such as the "3 Million Technical Talent" initiative aim to train millions of Nigerians in digital and AI-related skills while strengthening the country's technology ecosystem.
Through its collaboration with NKENNEAi, NITDA is exploring how locally developed AI infrastructure can help serve Nigeria's multilingual population while strengthening the country's national AI capabilities.
Building the workforce for African-language AI
Beyond model building, the partnership also aims to develop the workforce needed to sustain African-language AI systems.
Planned initiatives include training AI data annotators, natural language processing engineers and public-sector technical teams who will support the development of language datasets and the deployment of the system.
These programs aim to ensure that African-language AI is not only built for the continent, but built by the people who understand its languages and cultures best.
Why this matters
Africa's digital economy is expanding rapidly, but language remains one of the main barriers to digital access.
Millions of Africans interact more easily in their indigenous languages than in English, yet most digital platforms are still designed primarily for English-speaking users.
Without language accessibility, services such as healthcare communication, financial tools and government platforms remain hard to reach for much of the population.
AI-based language infrastructure could change that by allowing platforms to communicate with users in the languages they speak every day.
Competing in the global language AI race
Global technology companies are beginning to recognize the importance of African languages.
Companies such as Google have recently extended their AI and search support to languages like Yoruba and Hausa, reflecting growing interest in African language technologies.
But while global companies are starting to add African languages to their systems, NKENNEAi is focused entirely on building AI infrastructure designed specifically for Africa's linguistic complexity.
"African languages are deeply tonal, contextual and culturally rich," said Odokara-Okigbo. "Building AI that can truly understand them requires infrastructure designed specifically for these languages. Our mission is to make sure Africa does not merely consume artificial intelligence, but builds the foundational systems behind it."
Building Africa's linguistic future
The partnership between NKENNEAi and NITDA marks an important step toward ensuring that African languages are fully represented in the global AI ecosystem.
The initiative will be rolled out in phases, following a step-by-step approach that includes pilot integrations with government agencies, expansion to additional languages, workforce training programs and the eventual development of broader national language AI infrastructure.
By combining government support with private-sector innovation, the initiative aims to position Nigeria as a global leader in artificial intelligence infrastructure for African languages.
For NKENNEAi, the mission goes even further: building the technological foundations that ensure African languages remain vibrant, accessible and usable in the age of artificial intelligence."
https://techcabal.com/fr/2026/03/09/Nitda-s%27associe-%C3%A0-Nkennea/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Aborder la question du langage égalitaire, c’est s’engager dans un débat houleux qui bien souvent oscille en réalité entre hostilité de principe et méconnaissance. Pourtant le sujet mérite mieux, car pour construire davantage d’égalité entre les sexes, il faut aussi comprendre comment « nos habitudes langagières » nous font voir le monde « au travers d’un prisme masculin ». C’est à cette réflexion que nous invitent les trois psycholinguistes Pascal Gygax, Sandrine Zufferey et Ute Gabriel dans Et si on arrêtait de penser au masculin ?, version enrichie de leur précédent ouvrage Le cerveau pense-t-il au masculin ? publié en 2021. Une approche scientifique, claire et accessible qui invite à retrouver une langue moins androcentrée et plus inclusive. Une lecture particulièrement bienvenue à l’occasion du 8 mars, journée internationale des droits des femmes.
Une approche scientifique qui sait s’adresser aux non-spécialistes
Lorsqu’un débat divise, expliquent les auteurices, il faut opter pour une démarche « evidence based », autrement dit pour une démarche « qui s’appuie sur des données scientifiques ». Pour interroger les phénomènes sociaux en lien avec le langage, ces données scientifiques sont issues de la psycholinguistique, « discipline à la croisée de la linguistique (l’étude du langage) et de la psychologie expérimentale (étude des comportements et processus mentaux observables) ».
Un tel projet pourrait effrayer : concepts inconnus ? terminologie complexe ? démonstrations ardues ? On aurait tort, car tout est mis en œuvre pour que ces données scientifiques soient accessibles aux non-spécialistes, à qui l’ouvrage est d’ailleurs destiné en première intention. Emaillé de « petites expériences de psychologie à faire avec [son] entourage », tout à fait exploitables aussi en animation avec des élèves, ponctué de courts bilans (« Que retenir de ce chapitre ? »), qui récapitulent l’essentiel de chaque partie, il est à la fois documenté, éclairant, et ludique.
Et il se lit avec beaucoup de curiosité et d’intérêt, tant certains comptes-rendus d’études montrent de manière surprenante, et édifiante, combien « le langage [est] maitre de notre pensée ». A valeur monétaire identique, évalue-t-on sa satisfaction de la même manière si on reçoit en récompense une « petite pièce » ou « une pièce » ? Interprète-t-on identiquement les pleurs de nourrissons selon que l’on apprend qu’il s’agit de filles ou de garçons ? Trouve-t-on autant réussi un tour de magie si l’on croit que c’est un homme ou que c’est une femme qui le réalise ? … Autant d’expériences qui permettent de comprendre comment un mot active dans notre cerveau des représentations difficiles à contrôler, même quand elles ne sont pas pertinentes.
Les biais langagiers, produits des stéréotypes de genre
Ce lien entre langage et pensée, explique l’ouvrage, a été mis au jour au début du XXe siècle par « l’hypothèse Sapir-Whorf » aussi appelée « relativisme culturel ». Ces deux anthropologues défendaient l’idée que la façon dont on perçoit le monde – notamment les couleurs – dépend du langage dont on dispose pour les désigner ; théorie à l’origine du concept orwellien de Novlangue dans 1984. Si cette hypothèse a été discutée, elle a permis d’interroger le rôle du langage et de comprendre que celui-ci créée des biais qui « influence[nt] non seulement notre manière de voir le monde, mais également notre manière de penser et notre manière de nous comporter ».
Ces biais langagiers sont le reflet de la société dans laquelle nous vivons. Celle-ci ayant tendance « à considérer les hommes […] comme la norme de notre espèce », et « les normes masculines […] comme des standards neutres », nos pratiques langagières sont soumises à l’androcentrisme et influencées par des stéréotypes de genre. Nous associons ainsi de manière automatique certains mots et caractéristiques aux filles/femmes ou aux garçons/hommes, notre cerveau s’étant habitué à les activer, et apprenons à regarder, sans nous en rendre compte, le monde à travers un prisme déformant, masculin, aux effets discriminants.
Par exemple, « il existe un stéréotype selon lequel les femmes sont plus bavardes que les hommes ». En réalité la littérature scientifique démontre l’inverse, les femmes parlent moins longtemps, et moins souvent, cèdent davantage la parole, interrompent moins les conversations… Mais ce stéréotype est si puissant qu’il fausse notre perception. Deux chercheuses ont ainsi montré que lorsqu’on écoute une conversation entre deux personnes, même si chacune d’entre elles a strictement le même temps de parole, on ressent celui dont dispose la femme, pourtant identique à celui dont dispose l’homme, comme supérieur. Leçon à retenir quand on s’intéresse à la question du bavardage en classe…
En finir avec la pseudo valeur générique du masculin
« Certains aspects plus formels liés à la grammaire du français », langue genrée, renforcent par ailleurs ces biais inégalitaires. C’est en particulier le cas du genre grammatical masculin. Forme ambigüe, le masculin est en effet censé recouvrir deux sens différents : un sens « spécifique », qui renvoie uniquement à du masculin, et un sens « générique », appelé aussi universel, qui, neutralisant ce sens genré, désignerait aussi bien des femmes que des hommes.
Mais le masculin générique actionne-t-il vraiment autant de représentations féminines que de représentations masculines ? Toutes les expériences menées en psychologie expérimentale – l’ouvrage en relate plusieurs – montrent qu’il n’en est rien. Dans les faits « l’interprétation dite « générique » du masculin est théoriquement possible, mais elle est très difficile à adopter pour notre cerveau » tant elle s’oppose aux biais langagiers auxquels celui-ci s’est habitué très tôt. Une équipe de recherche a d’ailleurs montré que « l’association masculin = homme commence à apparaitre chez les enfants vers l’âge de trois ans ».
En réalité, si le masculin l’a emporté, y compris dans les règles d’accord à partir du XVIIe, ce n’est pas parce qu’il a valeur de neutre universel dégenré, mais parce qu’il est, selon les grammairiens de l’époque, « plus noble [et] prévaut seul contre deux ou plusieurs féminins » en raison de la « supériorité du mâle sur la femelle ». C’est ce contexte « imprégné d’androcentrisme, voire de misogynie », comme l’a montré *Eliane Viennot, professeuse émérite de littérature française de la Renaissance et spécialiste de l’histoire de la langue dans son ouvrage Non, le masculin ne l’emporte pas sur le féminin, qui a favorisé « l’évolution de la langue vers un masculin dominant » en la déféminisant.
Plaidoyer pour un langage moins exclusif
Cette utilisation constante du masculin n’est pas sans conséquence. Car si l’utilisation du masculin universel ne créée pas le sexisme, il « l’amplifie comme une loupe ». En termes d’orientation, les effets sont dévastateurs. Difficile lorsqu’on est une fille ou qu’on ne se reconnait pas dans certaines normes sociales masculines, de se projeter dans un métier uniquement genré au masculin, surtout s’il est traditionnellement exercé par les hommes. En revanche toutes les études montrent que le sentiment de légitimité, efficacité, réussite est considérablement renforcé lorsque la forme langagière explicite à la fois le féminin et le masculin. Mettre une profession aussi au féminin c’est donc ouvrir une porte mentale vers un futur possible pour tous et toutes.
L’ouvrage invite donc à démasculiniser le langage, pour le rendre non exclusif. Soit en utilisant des outils de neutralisation : termes épicènes (mots identiques au féminin et au masculin), reformulation dégenrée, innovations langagières… Soit en utilisant des outils de reféminisation : double flexion (dont l’efficacité, largement documentée, n’est plus contestée), formes contractées avec utilisation du point médian… Des études en cours montrent que, contrairement à certaines critiques qui leur ont été faites, ces formes ne sont pas excluantes car l’œil s’y habitue rapidement (en moyenne à la 3e occurrence). Il est même possible qu’elles posent moins de difficultés aux personnes atteintes d’un trouble dyslexique que bien des complexités arbitraires de la langue française.
Quoi qu’il en soit, le point médian n’est qu’un outil parmi d’autres, les auteurices n’y ont d’ailleurs pas recours dans l’ouvrage. De nombreux autres procédés existent, il faut juste s’en emparer. Car si visibiliser chacun et chacune en s’opposant au sexisme langagier ne suffira pas à effacer les inégalités de genre, c’est une étape nécessaire pour s’attaquer aux rapports de pouvoir que continue d’imposer ce prisme du masculin.
Claire Berest
Et si on arrêtait de penser au masculin ? Comment voir le monde sous un autre genre, Pascal Gygax, Sandrine Zufferey and Ute Gabriel. Available on the Éditions Le Robert website.
Interview with Éliane Viennot available on the Café pédagogique website.
"Guide de communication égalitaire : un outil pour accompagner les équipes éducatives" ("Egalitarian communication guide: a tool to support educational teams"). Article available on the Café pédagogique website."
https://www.cafepedagogique.net/2026/03/09/le-prisme-masculin-de-la-langue-et-ses-effets/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Last year at the Vatican Library, I had the chance to see a portion of the Bible with an incredible history. It wasn’t the famous Codex Vaticanus but a translation of the Gospels into Persian from the 1740s.
While a translation of the Gospels into the language of a Muslim empire is itself noteworthy, the history behind this particular text is even more remarkable. It represents one of two times when the ruler of Iran (or Persia, as it was called by the West before 1935) praised the Bible and furthered its spread in the region.
At a time when Iran is often associated with hostility toward Christianity, these episodes remind us that God can work through unlikely and even evil leaders. I find encouragement—and a prompting to pray—when I reflect on unexpected ways God used infamous Iranian leaders to spread the gospel. Let me introduce you to two of them.
Nader Shah (1688–1747)
Arguably the most ruthless leader in Iran's history was Nader Shah, who ruled Persia from 1736 to 1747 and led a constant stream of military campaigns. His sack of Delhi in 1739 perhaps best demonstrated his military might and brutality. After he took the city, a revolt arose, which the shah crushed, resulting in the deaths of up to 20,000 civilians.
The shah, characterized as a “notorious despot and mass murderer who wrought destruction on a large scale and ruined his country,” also brought together Jewish, Catholic, and Armenian scholars in Persia to translate the Old and New Testaments. This included the copy of the Gospels that Catholic missionaries sent to the Vatican Library.
I find encouragement—and a prompting to pray—when I reflect on how God has used Iranian leaders to support the spread of the gospel.
After the missionaries completed translating the Gospels, they went to present the translation to Nader Shah. As they waited an hour for an audience with the shah, they saw 18 people led to his chamber who later were carried out as lifeless bodies, having been strangled. With a trepidation reminiscent of Esther approaching the Persian King Ahasuerus, they entered the shah’s court expecting martyrdom. However, the shah received the Persian translation and rewarded them with silver equivalent to a few years’ wages.
Nader Shah’s motivations for developing a Persian translation of the Bible are unclear. He may have sought to understand Judaism and Christianity in his empire more fully. Perhaps he hoped to syncretize the religions. Whatever his motivations, he was the unlikely catalyst for the first effort to translate the whole Bible into Persian.
Fath-Ali Shah Qajar (1772–1834)
If Nader Shah was one of the most ruthless leaders of Iran, Fath-Ali Shah Qajar was perhaps one of the most opulent. He ruled for a relatively stable period of over three decades, from 1797 to 1834. He’s easily recognizable in portraits with his long beard, thin waist, and bejeweled attire.
In 1812, evangelical missionary Henry Martyn completed a translation of the New Testament into Persian. Martyn, who knew William Wilberforce, Charles Simeon, and William Carey, worked tirelessly in Shiraz, Persia, to translate the New Testament.
When he finished, he attempted to present a beautiful bound copy to Fath-Ali Shah. Martyn reached the shah’s encampment but couldn’t enter his court to present the New Testament. However, one secretary read to the shah three tracts Martyn had written to present the gospel to Muslims. Martyn died four months later, at the young age of 31, while trying to return to England.
While Martyn didn’t live to see it, the British ambassador to Persia presented his Persian New Testament to Fath-Ali Shah in 1814. After reviewing the New Testament, the shah sent a letter commending it. He asserted that Martyn had translated the text “in a style most befitting sacred books, that is, in an easy and simple diction.” He said he’d command his attendants to read him the New Testament from beginning to end and support its distribution around Persia. Those who were “virtuously engaged” in spreading the New Testament and teaching its meaning, the shah said, would be “deservedly honored with . . . royal favor.”
While there are certainly elements of diplomatic flattery in this letter, the shah’s approval had far-reaching consequences. Throughout the 19th century, missionaries like Peter Gordon and William Glen distributed hundreds of copies across Persia with a relative degree of freedom.
God’s Sovereignty and Iranian Leaders
These two stories of Persian leaders supporting the Bible’s translation and distribution are surprising in light of current religious restrictions in Iran. But they are not so surprising in light of biblical history.
In the Old Testament, the Lord sovereignly uses Persian leaders to protect his people and further his covenant plan for redemption. King Ahasuerus circulates a letter that saves the Jewish people from certain destruction (Est. 8:11–13). Nehemiah receives a letter of support from the Persian King Artaxerxes to help rebuild the walls of Jerusalem (Neh. 2). King Cyrus sends incredible amounts of gold and silver to support the rebuilding of the temple in Jerusalem (Ezra 1:2–4).
In the Old Testament the Lord sovereignly uses Persian leaders to protect his people and further his covenant plan for redemption.
God sovereignly works to move kings and rulers—even the most pagan kings and the most ruthless rulers—to do his will. In Ezra 1:1, we see that the Lord “stirred up the spirit of Cyrus king of Persia.” The connection between God’s sovereignty and his directing of a Persian king is crystal clear in Isaiah 44:24–45:25. This passage first emphasizes that it’s the Lord “who made all things, who alone stretched out the heavens” (v. 24). Turning to Cyrus, the Lord states that he “shall fulfill all [God’s] purpose” (v. 28). In the next verse, Cyrus is referred to as God’s anointed and the one “whose right hand [God has] grasped” (45:1).
Let’s pray for the next ruler of Iran. Pray that, as the Lord has done before in history, he’d use the next leader to protect his people and further the spread of the gospel message. Both Christians and Muslims have suffered greatly in Iran in recent decades, yet the gospel is still advancing.
We should pray for an end to suffering in Iran. But we can also trust that amid uncertainty, missiles, and war, our sovereign God guides the hand and thwarts the will of rulers.
Free eBook by Tim Keller: ‘The Freedom of Self-Forgetfulness’
Imagine a life where you don’t feel inadequate, easily offended, desperate to prove yourself, or endlessly preoccupied with how you look to others. Imagine relishing, not resenting, the success of others. Living this way isn’t far-fetched. It’s actually guaranteed to believers, as they learn to receive God’s approval, rather than striving to earn it.
In Tim Keller’s short ebook, The Freedom of Self-Forgetfulness: The Path To True Christian Joy, he explains how to overcome the toxic tendencies of our age, not by diluting biblical truth or denying our differences, but by rooting our identity in Christ.
TGC is offering this Keller resource for free, so you can discover the “blessed rest” that only self-forgetfulness brings." https://www.thegospelcoalition.org/article/iran-leaders-praised-bible/ #Metaglossia #metaglossia_mundus #métaglossie
"Adil Semeykhanuly has reportedly been sentenced to six-and-a-half years for his “negative interpretation” of Kazakh poet Abai Kunanbaev.
by Serikzhan Bilash and Tilek Niyazbek
A respected Kazakh language editor and cultural researcher, Adil Semeykhanuly, has reportedly been sentenced to six and a half years in prison in China’s Xinjiang region after more than a year of detention and house arrest, according to Kazakh language media and colleagues familiar with the case.
Colleagues say Adil Semeykhanuly received a 6½-year prison sentence over allegations of a “negative interpretation” of the teachings of the Kazakh poet Abai Kunanbaev (1845–1904). Semeykhanuly, a long-time editor at the “Шынжаң” (Shynzhan) newspaper and a recognised scholar of Abai, was first detained in January 2024. Sources say he spent seven to eight months in custody before being placed under house arrest due to insufficient evidence, according to relatives.
On 20 August 2025, he was reportedly sentenced on charges that he “negatively propagated the teachings of Abai” and “formed a separate public opinion,” accusations observers describe as politically broad and vague.
Kazakh outlets report that four other Kazakh intellectuals working in the same media environment were also arrested and later sentenced:
Tegis Zäybekuly — Deputy Editor of the Kazakh Editorial Department; arrested in October 2024. His sentence remains unknown.
Murat Ybyraiuly — Translator–Reporter; arrested in August 2023, charges not publicly disclosed; sentenced to 5.5 years.
Oñalğan Múlikuly — Translator–Reporter; arrested in January 2023; sentenced in 2024 to 7 years.
Janibek Jaudatuly — Translator; arrested in January 2023; sentenced in 2024 to 7.5 years.
Colleagues describe the series of arrests as part of an intensifying crackdown on Kazakh-language publishing, translation work, and cultural expression in Xinjiang.
Semeykhanuly’s participation in a 2005 Chinese delegation to Kazakhstan for the 160th anniversary of Abai in Semey was reportedly cited as one of the incidents that Chinese authorities scrutinized. He was widely regarded as a mentor to young journalists and a prolific contributor of cultural essays.
Monument to Abai in Beijing, with Chinese and Kazakh flags.
A survey of Chinese public court databases, government bulletins, and legal notices found no official records confirming the arrests or sentences of Semeykhanuly or the four other intellectuals. The absence of public documentation is common in politically sensitive cases in Xinjiang, where legal processes remain opaque. The news has, however, been confirmed by Kazakh sources.
The case also stands in contrast to cultural diplomacy between the two nations: China maintains an Abai monument in Beijing, and state media frequently describe the poet as a “bridge of friendship.” Kazakhstan hosts at least five Confucius Institutes established in partnership with Chinese universities. These public gestures of mutual cultural respect sit uneasily alongside the sentencing of an Abai scholar over an alleged “negative interpretation” of the poet’s teachings.
Human rights organizations continue to report systemic pressure on Uyghur, Kazakh, and other Turkic intellectuals, including charges related to ideology, cultural activity, or perceived separatism. Families often report limited access to information and fear repercussions for speaking publicly.
As of publication, Chinese authorities have not acknowledged the reported arrests or sentences. Requests for comment were sent to Xinjiang regional authorities and the Chinese Embassy in Kazakhstan."
by Serikzhan Bilash | Mar 9, 2026 | News China
https://bitterwinter.org/kazakh-scholar-sentenced-in-xinjiang-for-misinterpreting-a-poet/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Literature can bring culture and emotions to life and open up new perspectives — and the same is true for learning German. In this episode, we explore how texts, stories and theatre can enrich the experience of learning the language. We also talk about a theatre project our studio guest, Jonas Teupert, lecturer and director of the German program at the University of Melbourne, staged together with students. Also present is the student who played the lead role in the production: Anindo Minifie. Published 9 March 2026 2:29pm By Julia Grewe" https://www.sbs.com.au/language/german/en/podcast-episode/how-literature-and-theatre-bring-language-to-life-episode-6/7k6cxjh80 #Metaglossia #metaglossia_mundus #métaglossie
"Held on the theme: “Terminology development in the Ghanaian language,” the workshop and lecture was attended by students for 21 colleges of education, graduate students from several universities, traditional leaders, entrepreneurs, policymakers among other stakeholders.
Prof Appah noted that Ghana had adequate human, linguistic and institutional resources for the cause but was obstructed by inadequate funding.
He made a case for the introduction of a government-sponsored national terminology programme and a register to streamline the development of terminologies.
Through the programme, government would provide funding for research and other critical activities for the gathering, development, and dissemination of the terminologies, he proposed.
Prof. Appah made a direct call on the Ghana Tertiary Education Commission (GTEC), Ghana National Research Fund, and GetFund to help fund their activities on creating terminologies.
While appealing to government, he entreated the Linguistic Association of Ghana to demonstrate their seriousness by forming a research team to start work.
Prof Appah stressed that local terminologies would help to decolonise education in Ghana by demystifying complex concepts taught in a foreign language and clearing impediments.
“The people and teachers of the languages we teach who don’t speak English are not participating in knowledge creation and so if you don’t have the capacity to think, practice, read, and access knowledge in your own language, then you lack linguistic sovereignty,” he added.
The UG principal proposed a teacher education and assessment reform that would promptly adopt new creations.
Dr Vincent Erskine Aziaku, Head of Department of Ghanaian languages and Linguistics, explaining the purpose of the workshop, maintained that Ghana remained under colonisation as it continued to depend on a foreign language.
The problem, he noted, had been the lack of terminologies, intimating that “terminology development is the only way we can succeed in having our language.”
Dr Samuel Owoahene Acheampong, Faculty of Ghanaian Languages Education, University of Education, Winneba (UEW), underscored the need for standardisation to ensure coherence and consistency in the terminologies.
He appealed to government to put together a standardisation council to verify all terminologies to ensure authors were not producing contradictory contents.
Mr Scoon Boakye Appiah, Founder and CEO of AyaPrep, an education technology company, entreated stakeholders to leverage technology to promote the use of Ghanaian language in teaching and learning.
GNA
Edited by Alice Tettey/Linda Asante Agyei
Provided by SyndiGate Media Inc."
https://www.msn.com/en-xl/africa/ghana/stakeholders-advocate-national-terminology-programme-for-ghanaian-languages/ar-AA1WNFfh
#Metaglossia
#metaglossia_mundus
#métaglossie
"“Writing, Reviewing, Translating: Women, Words, and Worlds” on February 17 at Mir Anis Hall, JMI.
The Sarojini Naidu Centre for Women’s Studies (SNCWS), Jamia Millia Islamia, in collaboration with The Book Review Literary Trust successfully organised a one-day national symposium on “Writing, Reviewing, Translating: Women, Words, and Worlds” on February 17 at Mir Anis Hall, JMI.
Chandra Chari, Founder Editor of The Book Review Literary Trust addressed the gathering about the origins and objectives of The Book Review journal and its sustained commitment to fostering critical literary culture in India. She underscored the importance of book reviewing as a vital intellectual practice and emphasised the role of women in shaping contemporary literary discourse.
The first session, titled “Reviewing, Writing, Publishing Women – A Critical Exploration of Gendered Literary Landscapes,” was moderated by Dr. Aakriti Mandhwani. The panel featured Dr. Semeen Ali, Rachna Kalra, Dr. Malvika Maheshwari, Dr. Sucharita Sengupta, and Dr. Kanupriya Dhingra. The speakers reflected on questions of identity and authorship, editorial gatekeeping, the politics of literary knowledge, and the sustainability of women’s writing in South Asia. Discussions highlighted the need to move beyond reductive categorisations of “women’s writing,” to encourage mentorship and alternative platforms, and to view reviewing as both scholarship and resistance.
A session “Writing the City,” moderated by Dr Faiz Ullah, explored literary engagements with urban spaces, particularly Delhi. Speakers Ananya Vajpeyi, Ekta Chauhan, and Aishwarya Jha reflected on the city as a site of memory, transformation, and affect. The discussion examined urban villages, shifting cityscapes, nostalgia, and the interplay between lived experience and literary imagination.
It was followed by a session titled “Writing/Translating Women,” which was moderated by Dr. Amina Hussain, Assistant Professor, SNCWS. The panel included renowned Hindi author Mridula Garg, noted translator Prof. Arjumand Ara, Dr. Deeba Zafir, and Dr. Firdous Azmat Siddiqui. The speakers addressed the epistemic marginalisation of women’s writing, the complexities of translation, intersectional concerns of caste and class, and representations of Muslim women in literature and history. The session emphasised that writing must provoke critical reflection, that translation demands ethical responsibility, and that marginal voices must be represented with nuance and sensitivity.
The symposium reaffirmed Jamia Millia Islamia’s commitment to fostering inclusive and critical academic spaces that foreground women’s voices in literature, scholarship, and translation, and to promoting dialogue that bridges disciplines..."
TNN | Mar 7, 2026, 12:30 IST https://share.google/RqcgYC2ybChNrLPSg
#Metaglossia
#metaglossia_mundus
#métaglossie
"Modern, AI-native platforms designed around Arabic constraints are now seen as essential for governing quality, ensuring consistency, and speeding up the localization of all written assets.
RIYADH: Faced with a globalized workforce and cross-border operations, companies across the Middle East are now embedding live translation into the fabric of daily work, adopting a hybrid human-artificial intelligence strategy to break down language barriers.
For years, multilingual translation in the region was a logistical feature reserved for annual shareholder meetings, flagship conferences, or international trade shows. Today, it has become a daily operational necessity.
Nour Al-Hassan, founder and CEO of Tarjama and Arabic.AI, said in an interview with Arab News that “as companies in the Middle East expand globally, multilingual communication is no longer occasional, it is part of everyday work.”
Tarjama is a MENA-based language technology company that launched Arabic.AI, an advanced, specialized platform for the Arabic language, to deliver high-quality, culturally nuanced, and industry-specific translation and content solutions.
Al-Hassan’s sentiments were echoed by Edward Crook, vice president of strategy at AI-powered neural machine translation service, DeepL, who told Arab News: “In the UAE and Saudi Arabia, 84 percent of professionals have integrated AI translation into their daily workflows, signalling a rapid shift from using language AI tools for big events to making them a staple of daily operations.”
Oddmund Braaten, CEO of multilingual event technology company Interprefy, told Arab News that such language support was previously used only for major external sessions, with in-person interpreters brought in for specific language pairs.
“What has changed is that live translation is now part of everyday operations,” he said, adding that the organization runs recurring virtual trainings and frequent internal briefings, and multilingual access is built in by default.
“This has allowed Arabic- and English-speaking teams, along with additional language groups, to participate on equal terms,” Braaten explained.
According to the CEO, internal training uses remote simultaneous interpretation with live captions, especially for technical content. For larger audiences, “AI speech translation is added to extend language coverage.”
This same language experience is maintained for both in-person and remote participants.
As companies across the UAE, Saudi Arabia, Qatar, and Bahrain diversify their economies and attract talent from across the globe, the demand for inclusive communication has moved from the event stage to the weekly team huddle, the training webinar, and the internal strategy update.
This transition from occasional to essential is underscored by new research. A study by Interprefy revealed that 82 percent of Middle Eastern business event organizers now report high demand for multilingual services.
Crucially, 61 percent see clear value in using live translation for webinars, 55 percent for business meetings, and 54 percent for internal “all hands” sessions.
This aligns with broader regional adoption trends that were observed from 2024 onwards. According to a separate survey by DeepL, 84 percent of professionals in Saudi Arabia and the UAE have already integrated AI translation tools into their daily workflows.
The drivers are enhancing productivity, developing new language skills, and, for 46 percent of professionals, successfully expanding business into new markets. Crook added that the primary drivers are “both internal and external: from developing language skills, to boosting time efficiency, and managing supplier relationships.”
To meet this surging, everyday demand, businesses are increasingly adopting a pragmatic, hybrid approach. They are moving beyond a one-size-fits-all model to a two-track system that balances nuance with scale, and cost with critical accuracy.
According to Interprefy, for sensitive negotiations, confidential board discussions, legal proceedings, or complex technical workshops, the expertise of professional human interpreters remains irreplaceable.
This ensures subtlety, cultural nuance, and absolute accuracy where the stakes are highest.
This approach is mirrored by companies such as Tarjama. As Al-Hassan explained: “Tarjama combines professional human translation with its AI-driven CleverSo platform to deliver a hybrid model that mirrors the evolution of translation tools, enhancing productivity without replacing human expertise.”
For the constant stream of daily interactions, such as project check-ins, company-wide broadcasts, training modules, and supplier communications, AI-powered live translation and captions provide scalable, instantaneous, and cost-effective understanding.
This layer ensures that language is never a barrier to participation, collaboration, or swift decision-making in fast-moving environments.
He explained how this hybrid model is practically implemented, noting it usually takes one of three forms.
In some cases, professional interpreters cover the main spoken languages, while AI speech translation is added for languages spoken by a small number of participants. In other situations, professional interpreters are combined with live captions or subtitles. In a third scenario, all three are used together.
This foundational shift extends beyond spoken communication to the very systems that manage a company’s multilingual content. As organizations generate more material for diverse audiences, they require specialized technology that handles the region’s dominant language with native fluency.
For the written word, from financial statements to marketing campaigns, Arabic-first Translation Management Systems are becoming critical. As highlighted in a 2025 report by Tarjama on its CleverSo platform, generic systems built for Latin scripts struggle with right-to-left layout, segmentation, and Arabic user interface needs, leading to inaccurate translations that hurt conversion and trust.
Al-Hassan emphasized the need for specialized systems, stating: “High expectations for Arabic quality, multiple dialects, and regulatory requirements mean generic tools are not enough. Businesses now need specialized systems that fit into daily workflows and handle language with consistency, security, and cultural awareness.”
Modern, AI-native platforms designed around Arabic constraints are now seen as essential for governing quality, ensuring consistency, and speeding up the localization of all written assets, from scanned PDFs to mobile app strings.
Regarding quality for high-stakes content, Al-Hassan added: “Our approach is built on a fundamental principle: quality cannot be inspected in; it must be designed in from the very beginning.”
The trend is set to intensify. With 77 percent of Saudi and Emirati professionals believing AI will positively impact daily work efficiency by 2029, the integration of intelligent language tools is becoming a benchmark for competitive, inclusive, and globally agile businesses.
Crook confirmed this outlook, saying that some “77 percent believe AI will be the fundamental driver of workplace efficiency by 2029.”
When justifying the investment in everyday multilingual communication, business leaders point to measurable returns. Braaten shared that leaders justify it by removing friction, reducing risk, and enabling effective contribution at scale. The returns are visible in productivity, with fewer follow-up meetings and faster team alignment, as well as in employee inclusion and retention.
He also noted that 85 percent of organizers report attendee frustration when multilingual support is not available.
Clients of companies like Tarjama quantify the return on investment on two levels. Al-Hassan stated that internally, they measure faster turnaround times and lower costs, while externally, they look at quicker market entry and faster campaign launches. For most, the real value combines improved internal efficiency with accelerated growth across markets.
As AI translation becomes ubiquitous, Tarjama sees the next competitive frontier in consultancy-driven localization of complex business, government, and advisory content, addressing challenges around regulatory compliance and scalable market launches.
This shift is operational, as explained by Braaten who gave an example of a GCC-headquartered organization now using live translation more frequently. According to the CEO, the firm — with teams and stakeholders across the Middle East and Europe — is now “delivering ongoing professional training rather than just a limited number of annual events.”
Al-Hassan describes this as a shift from “conference-scale” to “workflow-scale” translation, where “translation is built into business systems” and “content moves through the workflow and becomes multilingual as part of the process.”" Miguel Hadchity 01 February 2026 https://www.arabnews.com/node/2631332/amp #Metaglossia #metaglossia_mundus #métaglossie
" WAXAL: A large-scale open resource for African language speech technology March 6, 2026
WAXAL provides a critical, open-access foundation for African speech technology. Featuring a large corpus of ASR and TTS data for 27 native languages under a highly permissive license, WAXAL empowers the African AI ecosystem to build robust speech systems that better reflect the region's unique linguistic diversity.
Voice-enabled technologies like virtual assistants and automated transcription have transformed how we interact with computers. However, their benefits disproportionately favor a handful of high-resource languages. This divide has left hundreds of millions of people — particularly in Sub-Saharan Africa, home to over 2,000 distinct languages — unable to access essential technology in their native tongues. Several years ago, the team at Google Research set out to help tackle this problem.
To address this critical need, we introduce WAXAL: a large-scale, openly accessible speech dataset that initially covers 27 Sub-Saharan African languages spoken by over 100 million speakers across more than 26 countries. Developed through a multi-year effort beginning in 2021, in collaboration with African academic and community organizations, WAXAL provides the high-quality, permissively licensed data necessary to build robust speech systems. Setting a foundational milestone, this initial release features approximately 1,846 hours of transcribed natural speech for automatic speech recognition (ASR) and over 565 hours of high-fidelity recordings for text-to-speech (TTS). We are releasing these resources under a Creative Commons license (CC-BY-4.0) to catalyze research and enable inclusive voice-enabled technologies tailored to the unique linguistic characteristics of the continent. We intend for the WAXAL collection to continuously evolve and expand to include additional languages as part of our ongoing effort to bridge the digital divide.
Introducing WAXAL
By addressing critical data scarcity for over 100 million speakers, WAXAL aims to empower the regional AI research ecosystem. To support the development of robust speech technologies, the corpus integrates two specialized datasets designed to provide comprehensive coverage for both speech recognition and synthesis tasks.
WAXAL-ASR (Spontaneous Understanding): Comprising approximately 1,846 hours of transcribed audio, this dataset captures natural, unscripted speech. Instead of reading scripts, diverse participants were asked to describe visual stimuli covering 50+ topics in their native language. This image-prompted elicitation captured authentic linguistic variations, including tonal nuances and code-switching. This method successfully yielded more natural speech than traditional methods.
Examples from Google’s Open Images used as prompts to elicit natural speech for the ASR dataset.
WAXAL-TTS (High-Fidelity Generation): Designed to facilitate the creation of natural-sounding synthetic voices, this dataset contains over 565 hours of high-quality, phonetically balanced audio. The TTS collection process was highly collaborative: local community members worked in pairs to draft scripts of 10,000–20,000 words, alternating reader and recorder roles. To ensure professional-grade acoustics, some participants used project funding to build custom studio boxes. The resulting recordings were then segmented, matched with the script text, and reviewed for accuracy and quality.
TTS recording box at University of Ghana.
The WAXAL corpus's dual focus on unscripted ASR data and high-fidelity TTS audio is designed to enable the development of full-duplex conversational systems. Specifically, the ASR component facilitates the modeling of varied, spontaneous speech input typical of real-world scenarios, while the high-quality TTS component provides the clean reference data required for generating clear, natural output. The table below lists the 27 languages currently included in the dataset:
Breakdown of the current WAXAL dataset, showing the 27 initial Sub-Saharan African languages and the availability of ASR and TTS data for each.
Anchoring in the African AI ecosystem
Crucial to the WAXAL project was our commitment to working with, and contributing directly to, the African AI ecosystem. The data collection effort was led entirely by African academic and community organizations, guided by Google experts on world-class data collection practices. This collaborative approach ensured the corpus was built by and for the community it serves; with a shared methodology, each partner focused on a specific subset of languages. Our partners included Makerere University, which collected ASR and/or TTS data for nine different languages, and the University of Ghana, which focused its efforts on eight languages, using the ASR image-prompted data collection methodology outlined above. Additional key collaborators were Digital Umuganda, in partnership with Addis Ababa University, who were instrumental in leading the ASR collection for several regional languages. For the high-quality, studio-recorded voices, Media Trust, Loud n Clear and African Institute for Mathematical Sciences Senegal spearheaded the TTS recordings across various regional languages.
This framework is fundamentally rooted in the principle that our partners retain ownership of the collected data toward the shared commitment to make all datasets openly available for the broader community. This deep collaboration and open-access philosophy have already enabled notable derivative research and publications.
Through this framework, our partners have already enabled new research, such as the development of a cookbook for community-driven collection of impaired speech. This research resulted in the first open-source dataset for Akan speakers with conditions like cerebral palsy and stammering, and demonstrated that in-person, image-prompted elicitation is more effective than text-based prompts for these populations. This work provides a vital roadmap for developing inclusive speech technologies in low-resource environments.
Furthermore, the initiative supported a major study that introduced a 5,000-hour speech corpus for five Ghanaian languages — Akan, Ewe, Dagbani, Dagaare, and Ikposo. This work established infrastructure for building robust ASR and TTS systems tailored to the linguistic diversity of West Africa by using a controlled crowdsourcing approach to capture natural, spontaneous intonations.
Other essential research has focused on benchmarking four state-of-the-art models (Whisper, XLS-R, MMS, and W2v-BERT) across 13 African languages. This study analyzed how performance scales with increased training data, offering key insights into data efficiency and highlighting that scaling benefits are strongly dependent on linguistic complexity and domain alignment.
Finally, a systematic literature review was published, cataloging 74 datasets across 111 African languages to map the current frontier of speech technology. This review emphasized the urgent need for multi-domain conversational corpora and the adoption of linguistically informed metrics, such as Character Error Rate (CER), to better evaluate performance in morphologically rich and tonal language contexts.
Conclusion and future directions
WAXAL represents a key milestone in bridging the digital divide, offering a high-quality, open-access speech resource for 27 Sub-Saharan African languages. Developed through deep collaboration with African academic and community organizations, this initiative empowers the continent’s AI ecosystem and preserves linguistic diversity. We hope WAXAL will continue to serve as a vital resource for the digital preservation of African languages and a foundation for future innovations. Google remains committed to this effort, with plans to continuously expand the WAXAL dataset."
Tavonga Siyavora, Senior Product Manager, and Abdoulaye Diack, Program Manager, Google Research
https://research.google/blog/waxal-a-large-scale-open-resource-for-african-language-speech-technology/ #Metaglossia #metaglossia_mundus #métaglossie
"No, USCIS does not require certified translations to be notarized. What's more important is the certification statement by the translator or translation agency confirming that the translated document is accurate and complete to the best of their knowledge.
Some people still go ahead and notarize their documents. While it's not an official requirement, it simply adds an extra level of authenticity to your document.
In a notarized translation, a notary public verifies the identity of the person signing the translator's certification. The notary does not verify the translation itself; they only confirm that the translator signed the certification in their presence.
Common USCIS Translation Mistakes That Cause Delays
Even small translation errors can affect your USCIS application. If authorities cannot verify the information on your documents, they may issue a Request for Evidence (RFE), which can delay your application.
Here are common mistakes that may affect your application:
1. Partial translations
Translating some parts of your documents can jeopardize your application. USCIS requires a full translation of the entire document, including stamps, annotations, and seals, so that officers can review it in context.
2. Missing certification statement
Every translated document must include a signed certificate confirming the translation is complete and accurate. Without the statement, USCIS may treat the document as invalid.
3. Incorrect name spellings
If your names don't match across your documents, it can raise concerns about your identity or relationship claims, especially if you're applying for a family visa.
4. Formatting inconsistencies
Translated documents that do not follow the structure and formatting of the original document can make it difficult for officers to locate key information. This can also result in delays or RFEs.
5. Using unauthorized translators
If a translation is found inaccurate or the translator's credentials are in question, USCIS may reject the application or request additional documents. To avoid this, it's always best to use professional translation services that specialize in USCIS document translation.
FAQs
Can a family member translate my documents for USCIS?
USCIS does not explicitly ban family members from translating your documents. However, it raises concerns about bias. It's best to use an independent translator or translation agency to avoid any issues.
Do I need to submit both the original document and the translation?
Absolutely. When submitting your documents to USCIS, always submit the translation alongside the original documents so officers can verify the information.
What should the translator's certification statement include?
The certification statement must state that the translation is complete and accurate and that the translator is fluent in both languages. It should also include the translator's name, signature, and date.
Will USCIS reject my application if the translation is incorrect?
There's a high chance that they will. Alternatively, they may issue a Request for Evidence (RFE) requesting additional documents or a corrected translation, which can increase the processing time for your document. It's always important to submit accurate documents from the start.
How long does it take to get a certified translation for USCIS?
The turnaround time depends on the provider you're using. Translayte provides USCIS translations in 12 hours or less, depending on the length of your document and the language pair.
Media Contact:
Sophia Orji
Content Manager
Email: sophia.orji@translayte.com
Website: https://translayte.com
SOURCE: BDXL Ltd"
https://www.bignewsnetwork.com/news/278905951/uscis-translation-requirements-2026
#Metaglossia
#metaglossia_mundus
#métaglossie
"BENNINGTON -- The Bennington Writing Seminars, the MFA in Writing program at Bennington College, announced the launch of a new dual-genre concentration in Literary Translation. Applicants and current students studying Fiction, Nonfiction, or Poetry will be able to add Literary Translation as a secondary concentration, lengthening the program from four to five terms.
“Bennington College has a great history as a center for the translation of literature,” said Bennington Writing Seminars Executive Director Mark Wunderlich, “and we are happy to now offer instruction in literary translation in our graduate writing program. Students will now be able to spend two terms studying with some of the finest translators in the field and leave with a fully translated work.”
Bennington College alum and Bennington Writing Seminars faculty member and National Book Award-winning translator Bruna Dantas Lobato designed the program to enable students to engage with a global literary community.
“We translate literature to engage with the world and its many languages, to be in conversation with and open to modes of thinking and being besides our own,” she said. “Literary translation is the rewriting of a literary text in a new language and all the transformations that act entails, as the text travels to a new cultural, linguistic, and aesthetic context. Translation broadens and deepens our understanding of humanity and language, shows us there are more possibilities beyond our reach, and pushes us to challenge our own perspective. It is thanks to translation and translators that readers aren’t cut off from the rest of the world, living in intellectual isolation.”
Dual-genre will extend the program from four to five semesters, three in the student's main genre and two in Literary Translation. Dual-genre applicants may apply to study Fiction, Nonfiction, or Poetry as their main genre, and Literary Translation, Fiction, Nonfiction, or Poetry as their secondary genre.
Applications are now open, and the coursework begins next January.
Lobato is a writer and translator. Her fiction has appeared in The New Yorker, Guernica, A Public Space, The Dial, and The Common. She was awarded the 2023 National Book Award in Translated Literature for The Words that Remain by Stênio Gardel. Originally from Natal, Brazil, she lives in Iowa and teaches at Grinnell College. Her debut novel, "Blue Light Hours," is out now from Grove Atlantic." https://www.benningtonbanner.com/local-news/bennington-college-launches-literary-translation-program/article_b3a9cf80-c81a-474e-9327-1951d9b23f16.html #Metaglossia #metaglossia_mundus #métaglossie
"The High-Performance Language Technologies (HPLT) project is developing very large-scale multilingual resources for large language models and machine translation.
Massive text collections for pre-training are the ‘crude oil’ of the large language model (LLM) era. The process of ‘refining’ high-quality datasets from web data at scale presupposes computational infrastructure and technological muscle that is often characteristic of corporate environments, as evidenced, for example, by some notable generally available pre-training datasets: C4,¹ FineWeb 1 & 2,²,³ MADLAD-400,⁴ or Nemotron-CC.⁵ With a few notable exceptions, this line of work tends to capitalise on the English language.
Here, we present the open-source results⁶,⁹,¹⁰ of the European R&D consortium HPLT – a project that has been funded under the auspices of the Horizon Europe programme in 2022–2025. Together with a myriad of additional results, HPLT has produced massive pre-training datasets of high-quality texts in close to 200 distinct language–script combinations. Its 2025 monolingual data release, HPLT 3.0, comprises some 30 trillion sub-word tokens in total, of which close to half represent languages other than English. We make this resource publicly available under the most permissive terms of use possible. We further share a state-of-the-art and open-source data preparation pipeline, an innovative multilingual evaluation framework, as well as hundreds of language models pre-trained on HPLT data.
Fig. 1
Furthermore, the project has produced novel bilingual datasets for more than 50 language pairs, hundreds of associated machine translation models, open-source pipelines for data preparation, model training, and evaluation, as well as synthesised additional pre-training data for underrepresented languages by machine translation of very high-quality English documents. In our view, it is the totality of generally available and very large-scale resources and the documentation of the underlying processes that bears promise of ‘democratising’ the current LLM and MT landscape.
Organisation
The HPLT consortium comprised partners from five different universities (Charles University in Prague and the Universities of Edinburgh, Helsinki, Oslo, and Turku), two national HPC centres (CESNET in the Czech Republic and Sigma2 in Norway), and a language engineering company (Prompsit) from all around Europe. The project has received about €4.1m from the Horizon Europe programme and £960,000 from UK Research and Innovation, and ran from September 2022 through December 2025. The project was coordinated by Jan Hajič (Charles University), with technical coordination by Kenneth Heafield (Edinburgh) and Stephan Oepen (Oslo) in its first and second halves, respectively.
Data curation
HPLT has gathered and processed more than ten petabytes of raw web data. The project has released more than 30 trillion tokens (word-like units) of high-quality textual data, accompanied by rich metadata, for close to 200 distinct languages. The process of extracting, cleaning, annotating, and filtering texts from raw web archives is schematically depicted in Fig. 1; the pipeline is composed of about a dozen modules.
Raw web archives were drawn from three sources: the Internet Archive (IA), host of the iconic Wayback Machine; the non-profit Common Crawl Foundation (CC); and the ArchiveBot volunteer infrastructure for long-term web archiving. Sub-tasks such as the extraction of ‘running text’ from marked-up document formats, language identification at the document and paragraph levels, ‘fuzzy’ near-deduplication, annotation with a wealth of text quality and regulatory compliance signals, and final filtering based on all available information each directly impact the practical utility of the final datasets. Here, text quality versus overall volume present separate and typically antithetical dimensions for optimisation, creating a rich space for different design choices and trade-offs. This remains an active area of research. The open-source HPLT processing pipelines are highly flexible and parameterisable, where default values represent the current state of knowledge.
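HPLT's own deduplication tooling ships with its open-source pipeline; purely as an illustration of what ‘fuzzy’ near-deduplication involves, the sketch below uses the datasketch library's MinHash LSH (an assumed, commonly used approach, not necessarily the one HPLT employs) to drop documents whose word-shingle sets are nearly identical.

```python
# Minimal near-deduplication sketch using MinHash + LSH (datasketch).
# Documents are shingled into word 5-grams; documents that collide with an
# already-seen document above the Jaccard threshold are treated as near-duplicates.
from datasketch import MinHash, MinHashLSH

def shingles(text: str, n: int = 5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for sh in shingles(text):
        m.update(sh.encode("utf-8"))
    return m

def deduplicate(docs: dict[str, str], threshold: float = 0.8) -> list[str]:
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for doc_id, text in docs.items():
        m = minhash(text)
        if not lsh.query(m):          # no sufficiently similar document seen yet
            lsh.insert(doc_id, m)
            kept.append(doc_id)
    return kept

docs = {
    "a": "the quick brown fox jumps over the lazy dog near the river bank today",
    "b": "The quick brown fox jumps over the lazy dog near the river bank today",
    "c": "an entirely different document about web archives and language identification",
}
print(deduplicate(docs))  # 'b' differs only in capitalisation, so only 'a' and 'c' are kept
```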
Monolingual statistics
To put the HPLT monolingual data into perspective, Table 1 (below) presents document and token counts (see note) for the English and multilingual (non-English) partitions of the data, as well as counts for a small sample of individual languages. For ease of comparison, these statistics are accompanied with average document lengths and per-language proportions, and contrasted with corresponding figures for three other publicly available multilingual datasets mentioned above.
Table 1: Note: For the purpose of comparable statistics across languages and different datasets, all token counts are computed using the Gemma-3 tokenizer,⁸ a SentencePiece model with a vocabulary of 256K sub-words, providing good coverage for all target languages.
As is evident from these numbers, HPLT 3.0 is by far the largest publicly available such dataset, and its multilingual breadth compares favourably to other widely used resources. In Gemma-3 tokens, the multilingual HPLT 3.0 partition is about 2–3 times larger than FineWeb and the earlier HPLT 2.0, respectively, and five times larger than the older MADLAD-400 dataset. In terms of average document length, which often is correlated with text quality, HPLT 3.0 and 2.0 pattern alike, markedly ahead of FineWeb but well behind MADLAD-400. For a small selection of European languages, the table shows languages ranging from a ‘mere’ billion available tokens to others with hundreds of billions.
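For orientation, comparable token counts can be reproduced with any SentencePiece-based tokenizer. The sketch below uses Hugging Face transformers with an assumed Gemma-3 checkpoint name; the exact identifier and access requirements are assumptions here, not something specified by the HPLT release.

```python
# Illustrative token counting with a SentencePiece-based tokenizer via
# Hugging Face transformers. The checkpoint name is an assumed example;
# substitute whatever tokenizer you actually use for comparable statistics.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")  # assumed model id

def count_tokens(documents: list[str]) -> tuple[int, float]:
    """Return the total token count and the average document length in tokens."""
    lengths = [len(tokenizer(doc, add_special_tokens=False)["input_ids"]) for doc in documents]
    total = sum(lengths)
    return total, total / len(lengths)

docs = ["Tämä on lyhyt suomenkielinen esimerkkidokumentti.",
        "Dette er et kort norsk eksempeldokument."]
print(count_tokens(docs))
```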
In-depth analytics
Training data quality arguably is the most important factor in model quality, but in-depth data inspection at scale is a challenging endeavour. HPLT has developed an open-source tool, HPLT Analytics, to compute a broad range of fine-grained statistics and enable interactive visualisation and exploration. The datasets are internally structured in documents, paragraph-like segments, and tokens. Descriptive frequency and length statistics, combined with basic correlation analysis with metadata like internet domains or predicted text register labels, can reveal distributional trends or outliers. Annotations are predominantly available at the document level, but in some cases also for smaller units. Contrasting the distributions of document versus segment language predictions, for example, allows insights into both degrees of in-document ‘code switching’ and uncertainty in language identification, typically among closely related languages.
Multilingual evaluation
As an additional tool to gauge data quality and experimentally inform design choices in training data preparation (as well as in language model training), the project has developed a framework for automated large-scale multilingual evaluation, dubbed HPLT-e. In its current state of development, the framework comprises 127 language understanding and generation tasks across the nine European languages highlighted in Table 1.
This selection allowed both availability of native speakers in the project team and a minimum level of diversity in terms of language resources, families, and scripts. Tasks in HPLT-e are often drawn from pre-existing benchmark suites, but emphasise natively constructed (rather than translated) tasks, extending each with three to seven human-written prompts to mitigate the methodological challenge of prompt sensitivity. Similar to Penedo et al.,²,³ we pretrain separate ‘smallish’ (2B parameters) GPT-like models per language using an otherwise fixed pretraining setup, and evaluate them at regular checkpoint intervals in a zero-shot regime, carefully selecting tasks that meet a range of evaluation signal criteria, i.e. can be expected to act as informative and reliable indicators of training data quality. Such criteria include monotonicity and relative stability of model performance as pretraining progresses, ranking consistency across pretraining intervals, and multiple indicators of limited prompt sensitivity.
Fig. 2 shows a comparison of the four datasets introduced above using HPLT-e. To aggregate scores across different prompts, tasks, and languages, per-task scores are maximised across prompts and min-max normalised relative to a task-specific random baseline. Per-task scores are then averaged across task categories within each language and, finally, across languages. An alternative approach to overall aggregation is Borda’s count, using Vote’n’Rank,⁷ which is essentially the average of per-language counts of a model outranking all the others.
Models trained on all four datasets for up to 100B tokens show a monotonic performance improvement on our selected tasks. Models pretrained on (the comparatively smaller) MADLAD-400 achieve the highest multilingual score, followed by HPLT 3.0, while HPLT 2.0 and FineWeb perform on par. These results are corroborated by rank-based aggregation across tasks and languages, which yields the same ordering: MADLAD-400, then HPLT 3.0, then HPLT 2.0 and FineWeb.
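A minimal sketch of the kind of score aggregation described above, assuming per-task accuracies are already available in a nested dictionary keyed by language, task category, task, and prompt; the data layout, the random-baseline dictionary, and the maximum score of 1.0 are illustrative assumptions, not the HPLT-e implementation.

```python
# Illustrative aggregation of multilingual evaluation scores:
# maximise over prompts, min-max normalise against a random baseline,
# then average over task categories within a language and over languages.

def normalise(score: float, random_baseline: float, maximum: float = 1.0) -> float:
    """Min-max normalise a score relative to the task's random baseline."""
    return (score - random_baseline) / (maximum - random_baseline)

def aggregate(results: dict, baselines: dict) -> float:
    """results[lang][category][task][prompt] -> accuracy; baselines[task] -> random accuracy."""
    language_scores = []
    for lang, categories in results.items():
        category_scores = []
        for category, tasks in categories.items():
            task_scores = []
            for task, prompts in tasks.items():
                best = max(prompts.values())             # maximise across prompts
                task_scores.append(normalise(best, baselines[task]))
            category_scores.append(sum(task_scores) / len(task_scores))
        language_scores.append(sum(category_scores) / len(category_scores))
    return sum(language_scores) / len(language_scores)   # average across languages

# Toy example: two languages, one task category each, one task, two prompts.
results = {
    "fin": {"qa": {"task_a": {"p1": 0.41, "p2": 0.46}}},
    "nob": {"qa": {"task_a": {"p1": 0.38, "p2": 0.44}}},
}
baselines = {"task_a": 0.25}
print(round(aggregate(results, baselines), 3))
```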
Language models
While training data creation has taken centre stage in the HPLT work plan, the project has also developed a wealth of language models of different sizes and architectures supporting various languages and language groups.
In addition to large language models trained from scratch for Finnish and Norwegian, a common theme in this work was strong emphasis on smaller, specialised models that are efficient to run. In total, publicly available project results comprise hundreds of language models, including the following sub-groups:
55 monolingual encoder-only (BERT-like) models for a typologically diverse set of languages. When fine-tuned as embedders for ‘classic’ language understanding tasks, these models uniformly show superior performance to standard multilingual models.
57 monolingual encoder–decoder (T5-like) models, again for a typologically broad set of languages. These models exhibit competitive performance in both embedding and generation benchmarks, thus, offering a novel platform for experimentation.
38 monolingual decoder-only (GPT-like) reference models, each with 2.15B parameters and trained to 100B tokens. These models can serve a number of purposes, including as baselines for mono- and multilingual training, references for the comparison of HPLT and other data, and tools for contrasting the HPLT data quality across different languages.
Two larger (13B parameters), continually pretrained generative models, for Finnish and Norwegian, built on the fully open-source OLMo 2 platform. These models compare favourably to language-specific adaptations of the Mistral NeMo model, suggesting that fully transparent foundation models can yield competitive results to their merely open-weight counterparts.
Mining for bilingual text
Another wealth of open-source results from HPLT relates to machine translation (MT), notably large collections of parallel texts derived from mining the monolingual datasets for translational correspondences at the sentence or document levels. These resources are created using the additional processing block called Bitextor Pipeline in Fig. 1. The pipeline applies a multi-stage text extraction procedure that identifies documents with identical content in different languages using various matching and alignment techniques implemented as an open-source toolbox.¹ Heavy parallel computing makes it possible to run such bitext mining on a scale provided by the monolingual web crawls coming from HPLT. Traditionally, parallel texts are provided as sentence-aligned bitexts that can directly be fed into machine translation training. HPLT provides three releases of parallel text corpora with a language coverage of 57 language pairs. The data is collected in an English-centric manner, aligning documents with English counterparts in our dataset. Pivoting on those English documents, we can then also derive multilingual parallel text collections spanning 1,446 language pairs. In total, HPLT provides 2.7 million sentence alignments released from our repository of parallel corpora, OPUS.²
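The Bitextor pipeline itself combines several matching and alignment strategies; purely as a toy illustration of sentence-level mining, the sketch below scores candidate sentence pairs with a multilingual sentence-embedding model (LaBSE via sentence-transformers, an assumption rather than the HPLT toolchain) and keeps, for each source sentence, the best-scoring target above a cosine-similarity threshold.

```python
# Toy sentence-level bitext mining: embed source and target sentences with a
# multilingual encoder and keep the best-scoring pairs above a threshold.
# This is a simplified illustration, not the HPLT/Bitextor pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def mine_pairs(src: list[str], tgt: list[str], threshold: float = 0.8):
    src_emb = model.encode(src, normalize_embeddings=True)
    tgt_emb = model.encode(tgt, normalize_embeddings=True)
    sims = src_emb @ tgt_emb.T                      # cosine similarity matrix
    pairs = []
    for i, row in enumerate(sims):
        j = int(np.argmax(row))                     # best target for each source sentence
        if row[j] >= threshold:
            pairs.append((src[i], tgt[j], float(row[j])))
    return pairs

english = ["The agreement was signed yesterday.", "Prices rose sharply in March."]
norwegian = ["Prisene steg kraftig i mars.", "Avtalen ble undertegnet i går."]
for s, t, score in mine_pairs(english, norwegian):
    print(f"{score:.2f}  {s}  <->  {t}")
```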
Fig. 2
Machine Translation
Mirroring the interplay of data creation and model building in the LLM track, HPLT has worked intensely on the development and evaluation of new translation models for 100 language pairs, combined with novel infrastructures for automated training at scale and integration of benchmarking results into the OPUS dashboard. A special focus is set on efficiency, emphasising the need for compact translation models that can run locally on edge devices. Specialised models that are several orders of magnitude smaller than common general-purpose language models enable fast inference without losing translation performance and enable secure deployments that are independent from external services and online connections. Translation models trained including HPLT data show competitive performance in comparison, especially for lesser-resourced languages. To further reduce computational costs, we also developed a pipeline for systematic multilingual knowledge distillation that supports the transfer from expensive teacher models to compact student models that can be as small as 20 megabytes in size.
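HPLT's distillation pipeline is its own open-source tooling; as a generic sketch of the underlying idea only, the PyTorch snippet below shows a word-level distillation loss in which a small student model is trained to match the temperature-softened output distribution of a larger teacher. The temperature, interpolation weight, and toy tensor shapes are arbitrary illustrative values.

```python
# Generic word-level knowledge-distillation loss for sequence models (PyTorch).
# The student is trained on a mix of cross-entropy against gold labels and
# KL divergence against the teacher's temperature-softened distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      gold_ids: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """logits: (batch, seq_len, vocab); gold_ids: (batch, seq_len)."""
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.view(-1, vocab), gold_ids.view(-1))
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * ce + (1.0 - alpha) * kd

# Toy shapes: batch of 2, sequence length 4, vocabulary of 16.
student = torch.randn(2, 4, 16, requires_grad=True)
teacher = torch.randn(2, 4, 16)
gold = torch.randint(0, 16, (2, 4))
loss = distillation_loss(student, teacher, gold)
loss.backward()
print(float(loss))
```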
Computational infrastructure
All work in HPLT has been exceedingly compute- and storage-intensive, made possible through a combination of resources covered by the project grant and of additional substantial resources allocated to consortium members from national (Czech, Finnish, and Norwegian) quotas and through the EuroHPC system. ‘Bulk’ storage for very large-scale web data, in total close to 21 petabytes, was distributed over facilities in the Czech Republic (CESNET), Norway (Sigma2), and Finland (LUMI). Exclusive access to dedicated compute nodes tightly integrated with the storage systems made possible a first stage of lightweight document and metadata extraction (see Fig. 1), reducing the data volume for further processing by about a factor of three.
In addition to some experimentation on national superclusters, the EuroHPC LUMI system served as the main ‘workhorse’ for HPLT, where the consortium used combined allocations of around 60 million CPU and about 11.5 million GPU hours over the 40-month project duration, which is the theoretical equivalent – on average – of more than 2,000 active CPUs at all times."
6th March 2026
https://www.innovationnewsnetwork.com/hplt-high-performance-large-language-models-for-europe/67406/
#Metaglossia
#metaglossia_mundus
#métaglossie
"What is language validation? Language validation, also known as linguistic validation, is a crucial part of the translation process in which a person fluent in the target language confirms that translated content is technically accurate and captures the cultural nuances of your original training content. Without this step, you may risk employees misunderstanding the translated version in their native language.
Without proper language validation, your training program could include inaccuracies that confuse learners and erode their trust. For example, a football analogy in translated content can confuse learners, since American football is a completely different sport from football…well, everywhere else.
Poor translations can also cause bigger problems. For example, imagine your sales training course uses the common American idiom, “You ROCK!” Americans interpret that as a supportive encouragement, but a direct translation will sound more like calling the sales leader an inanimate, hard collection of minerals one might throw at an enemy. Sure, the translation is technically correct, but it doesn’t make sense. Validators who speak the target native language catch these simple errors and correct them before the final version creates a lot of confusion.
How to choose the right validation method for your document
Different assets require different levels of validation. You may worry that a professional translator is your only option, but the truth is, it depends.
To help you pick the best validation method for your assets, we put together a handy framework that covers validation approaches for low-, mid-, and high-risk documents.
Low-risk documents
Low-risk documents are informational or supplemental materials where minor translation errors won’t impact learner performance, create compliance issues, or lead to legal problems.
Examples include:
- Internal training announcements
- Course welcome pages
- Module introductions
Translation validation tips for low-risk documents
Here are some easy translation validation tips and options to help you translate low-risk documents.
1. Use free online translation tools for forward translation
Use Google Translate, DeepL Translator, or Reverso for a quick, free validation check. Translate your low-risk training content into the target language and scan the output to surface obvious issues like missing information and incorrect terminology. Google Translate is especially useful to validate simple text, like headings, short instructions, and summaries, where the impact of errors is low. Still, the output might contain grammatical errors and awkward literal translations.
2. Use a second tool for backward translation
Reverse translation, also known as back translation, is a quality assurance method where a translated text is retranslated into the original language to ensure the back translation holds up to the original.
Translate a section of your training into the target language. Articulate customers can use Localization for this first pass and expect a highly accurate first draft. Then, use a second online translation tool to translate it back into the original language. Compare the two versions to spot meaning shifts, missing details, or overly literal phrasing.
This approach works best for summaries, introductions, and labels. But mid and high-risk documents require a more robust approach.
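As a rough sketch of the back-translation check described above: the snippet below runs a round trip through two translation calls and compares the result to the original. The translate_forward and translate_back functions are placeholders for whichever tools you actually use for each pass (Google Translate, DeepL, Articulate Localization, and so on), and the word-overlap score is only a crude stand-in for a human read-through, not a real quality metric.

```python
# Sketch of a reverse-translation (back-translation) spot check.
# translate_forward / translate_back are placeholders: wire them up to
# whichever translation tools you actually use for each pass.

def translate_forward(text: str, source: str, target: str) -> str:
    raise NotImplementedError("call your first translation tool here")

def translate_back(text: str, source: str, target: str) -> str:
    raise NotImplementedError("call a second, independent tool here")

def overlap_score(original: str, round_trip: str) -> float:
    """Crude word-overlap score between the original and the back translation."""
    a, b = set(original.lower().split()), set(round_trip.lower().split())
    return len(a & b) / max(len(a | b), 1)

def back_translation_check(original: str, source: str = "en", target: str = "pt",
                           warn_below: float = 0.6) -> None:
    forward = translate_forward(original, source, target)
    round_trip = translate_back(forward, target, source)
    score = overlap_score(original, round_trip)
    print(f"Original:   {original}")
    print(f"Round trip: {round_trip}")
    if score < warn_below:
        print(f"Possible meaning drift (overlap {score:.2f}); review this passage.")
```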
Mid-risk documents
Mid-risk documents are assets that guide actions or influence behavior. This means translation errors could impact learner performance, but are not as likely to cause legal or safety issues.
Examples include:
- Procedural guidelines
- Internal playbooks or messaging frameworks
- Quizzes or assessments that reinforce procedural knowledge
Even if you don’t have an expert, you’ll benefit from submitting these to a competent reviewer to ensure clarity, tone, and cultural appropriateness.
Translation validation tips for mid-risk documents
Reverse translation tricks aren’t enough for mid-risk documents like procedural guidelines, internal playbooks, and messaging frameworks, where the impact of error is higher. So, you’ll need to bring in a competent, though not necessarily expert, validator.
Here are a few validator options when skipping the review process won’t suffice.
1. Consult fluent speakers in the target languages
A colleague fluent in the source and target languages can help validate mid-risk documents because they understand the organization’s intent and the target language’s cultural nuances.
For example, an employee based in Canada who speaks fluent Brazilian Portuguese can review step-by-step guidelines for accuracy, tone, inconsistent terminology, and awkward literal translations.
However, because they are not based in Brazil, they may miss locally preferred expressions or subtler cultural nuances.
2. Have a local employee review content
An employee who speaks the target language and lives in the target country (e.g., a Brazilian Portuguese native speaker who lives in Brazil) can flag language that sounds awkward, overly formal, or unnatural to local employees. Because they’re immersed in the linguistic and cultural norms of the target country, they may miss subtle differences from the original language.
3. Ask a trusted community source
A friend or family member who speaks the target language can provide feedback on mid-risk documents, specifically confusing or complex language, awkward phrasing, and whether instructions are clear from an outsider’s perspective.
However, they lack the organization-specific knowledge needed to ensure content aligns with the business’s needs and training intent. They may also fail to pick up on nuances of the target language and cultural references.
High-risk documents
High-risk documents are materials where translation errors could create serious problems, including legal, safety, compliance, or financial risks.
Examples include:
- Compliance courses
- Workplace safety or regulatory training modules
- Legal documents
These documents require professional translators or validators to ensure accuracy, maintain regulatory compliance, and protect learners and businesses from risk.
High-risk documents: When to use a professional validator
Consider using a professional translator or validator when handling high-risk content like compliance course content, legal documents, and workplace safety modules.
Technically inaccurate or culturally misaligned translations of these content types can lead to legal, safety, and compliance issues that put your organization and employees at risk.
For example, say the original version of the organization’s mandatory data privacy and security training course forbids employees from sharing customers’ personal information outside of the company without written consent. But the Japanese translation erroneously allows this.
Employees sharing sensitive data could pose security risks to customers, and the company could also face severe backlash and costly regulatory fines.
A professional translator helps prevent these problems by ensuring mandatory instructions, policies, and safety procedures are translated correctly and that content aligns with local regulations like the GDPR for EU and UK-based employees.
Pros and cons of linguistic validation methods
Now that you have a list of validation options, let’s compare them by considering the pros and cons of each.

Free online translation tools
Pros:
- Best for low-risk content (e.g., course introductions, announcements, optional resources)
- Free and fast
Cons:
- Lacks context and cultural nuance
- Grammatical or phrasing issues

Reverse translation
Pros:
- Best for low-risk content
- Helps surface meaning drift and missing information (a code sketch of this round-trip check appears below, after this comparison)
Cons:
- Time-consuming
- Requires using multiple translation tools
- Lacks context and cultural nuance

Native-speaking colleague
Pros:
- Best for mid-risk content (e.g., procedural guidelines, internal playbooks, messaging frameworks)
- Fluent in source language and target language
- Deep understanding of organization tone and terminology
- Can review accuracy, tone, terminology, and awkward phrasing
Cons:
- May miss local phrasing or cultural nuance

Local employee
Pros:
- Best for mid-risk content
- Deep understanding of local language and workplace norms
- Can validate tone, clarity, and natural/local usage
Cons:
- May miss subtle misalignment with the original source language

Trusted community source
Pros:
- Best for mid-risk content
- Can validate confusing or complex language and awkward phrasing
Cons:
- Lacks organizational knowledge and training context
- May miss professional or industry-specific nuance
- May miss target-language nuance

Professional translator or validator
Pros:
- Best for high-risk content (e.g., compliance courses, legal documents, workplace safety modules)
- High accuracy and cultural alignment
- Ensures consistency across programs
Cons:
- Higher cost
- Longer turnaround time

When choosing a translation validation method, think about your content’s risk level first. Then factor in which free or low-cost resources you have available that will deliver the quality you need to avoid major translation errors.
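Part of the reverse-translation check described above can be scripted. The sketch below is a minimal, illustrative example only: the translate callable is a placeholder for whatever translation tool you actually use (no specific service or library is assumed), and the similarity score is just a rough flag for possible meaning drift, not a substitute for a human reading the flagged segments.

```python
# Minimal reverse-translation (back-translation) check.
# Assumption: `translate` is a placeholder callable wrapping whatever
# translation tool you use; nothing here relies on a specific service.
from difflib import SequenceMatcher
from typing import Callable

Translate = Callable[[str, str, str], str]  # (text, source_lang, target_lang) -> translated text


def reverse_translation_check(original: str, source: str, target: str,
                              translate: Translate) -> float:
    """Translate source -> target -> source and return how similar the
    round-trip wording is to the original (1.0 means identical)."""
    forward = translate(original, source, target)
    back = translate(forward, target, source)
    return SequenceMatcher(None, original.lower(), back.lower()).ratio()


# Usage idea: flag low-similarity segments for manual review instead of
# treating the score as proof of accuracy.
# score = reverse_translation_check(segment, "en", "pt", translate=my_tool)
# if score < 0.6:
#     print("Possible meaning drift; review manually:", segment)
```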
What to look for in a quality translation
A quality translation isn’t just technically accurate. It also captures the original meaning of the source content, aligns with organizational goals, and fits the workplace training context.
Here are five things to look for when verifying translation quality:
- Preservation of original meaning: Make sure the translated course or content captures the intended meaning of the source material, including tone, context, style, and cultural references.
- Adherence to rules for grammar and mechanics: Check that the grammar, spelling, and sentence structure are correct and consistent in the target language.
- Alignment with company brand and training goals: Translation quality also depends on how well the content aligns with your organization’s brand voice, audience, and training goals.
- Workplace training and e-learning relevance: The translated content should use language, examples, and references that align with e-learning and workplace training.
- Consistent terminology usage: Quality translation ensures specific words and phrases are rendered consistently in every language to prevent confusion. This is easier when you have a custom translation glossary, as is the case with Articulate Localization, a localization solution embedded in Articulate’s course authoring platform. (A minimal sketch of such a glossary check appears below.)

With a top-notch translation, you save hours on rewrites, boost learner confidence, and ensure a better learning experience for your global workforce.
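As a companion to the terminology point above, here is a minimal sketch of a glossary-consistency check. It assumes nothing about Articulate Localization’s actual implementation: the glossary entries and segments are illustrative, and the naive substring matching would need refinement (word boundaries, inflection) before real use.

```python
# Minimal glossary-consistency check: flag translated segments where a source
# term appears but its approved target-language rendering does not.
# The glossary and segments below are illustrative examples, not real data.

def check_glossary(segments: list[tuple[str, str]],
                   glossary: dict[str, str]) -> list[str]:
    """segments: (source_text, translated_text) pairs; returns warning strings."""
    warnings = []
    for source_text, translated_text in segments:
        for source_term, target_term in glossary.items():
            if (source_term.lower() in source_text.lower()
                    and target_term.lower() not in translated_text.lower()):
                warnings.append(
                    f"Expected '{source_term}' to be rendered as '{target_term}' "
                    f"in: {translated_text!r}"
                )
    return warnings


if __name__ == "__main__":
    glossary = {"dashboard": "painel"}  # illustrative en -> pt-BR entry
    segments = [
        ("Open the dashboard.", "Abra o painel."),    # consistent
        ("Close the dashboard.", "Feche o quadro."),  # flagged for review
    ]
    for warning in check_glossary(segments, glossary):
        print(warning)
```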
Validate training with the right resources
Just because you don’t have access to a professional validator or linguist doesn’t mean your goal of translating course content is unattainable. Quite the opposite, actually.
With the right resources, you can ensure your training course or program delivers essential information to learners in a way that makes sense to them linguistically and culturally.
Knowing which resources to use when makes the process even easier. While generic tools work for low-risk content like module introductions, mid- and high-risk documents require more robust options, especially when compliance and safety are concerned.
So weigh your options carefully. And should you reach the point of needing a professional validator, check out our blog post on how to find the right one."
https://www.articulate.com/blog/translation-validation-tips-when-you-dont-have-a-professional-validator/ #Metaglossia #metaglossia_mundus #métaglossie
"Tilde, the language technology company from Latvia, has adapted its large language model TildeOpen LLM for translation and integrated it into a machine translation platform that provides reliable high-quality translations into 34 European languages.
Until now, the model was mainly a significant scientific achievement in the development of artificial intelligence for European languages, but it had not yet been adapted for everyday use by a wider audience. Now, it is available to the public for both private translation needs and daily work.
Starting today, anyone can use the translation platform, which provides exceptionally high-quality and secure translation into 34 European languages, including Latvian, Lithuanian, and Estonian, with accurate terminology and more natural, fluent sentences that reduce the post-editing workload for machine-translated texts.
TildeOpen delivers quality competitive with much larger global models, such as ChatGPT-4.1, even though it is about 60 times smaller. Detailed results of the comparative tests are available in TildeBench, Tilde’s ranking of large language models.
Organisations can deploy TildeOpen on premises or in Europe-based clouds, thus maintaining full control of their data. Unlike many global AI solutions, the data is never transferred outside Europe. This is especially important for public bodies and enterprises that handle sensitive information. At the same time, the model can be customised to suit individual needs, thus providing particularly accurate and reliable translations.
TildeOpen was published as an open-source foundational model for European languages on the Hugging Face platform in autumn 2025. It was developed in Tilde’s research laboratory on behalf of the European Commission. The model has 30 billion parameters and is trained on hundreds of billions of words in European languages, including 29 billion Latvian text units. This is the largest known amount of data used in the development of Latvian artificial intelligence. The model was developed after winning the Large AI Grand Challenge contest organised by the European Commission, using the LUMI supercomputer in Finland."
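Because the foundational model is published openly on Hugging Face, a team could in principle load it with the standard transformers library. The snippet below is only a rough sketch under explicit assumptions: the repository ID is a placeholder (check Hugging Face for TildeOpen’s actual listing), a 30-billion-parameter model needs substantial GPU memory or a quantized variant, and a foundational model that is not instruction-tuned may need few-shot examples rather than the bare prompt shown.

```python
# Rough sketch: loading an open foundational LLM from Hugging Face and
# prompting it for a translation. The model ID below is a placeholder;
# verify the actual TildeOpen repository name on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TildeAI/TildeOpen-30b"  # placeholder repository ID (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package and enough GPU memory.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A bare instruction prompt; a non-instruction-tuned foundational model may
# respond better to a few-shot prompt with example translation pairs.
prompt = "Translate into Latvian: The meeting has been moved to Friday.\nTranslation:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```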
6 March, 2026
Source: Press release
https://labsoflatvia.com/en/news/tildes-artificial-intelligence-marks-a-new-era-for-translation-in-european-languages
#Metaglossia
#metaglossia_mundus
#métaglossie
"The 2026 Finnish State Award for Foreign Translators has been presented to the distinguished Danish translator Siri Nordborg Møller (b. 1981), whose work has greatly broadened the international visibility of Finnish literature. The EUR 15,000 award has been granted annually since 1975 by the Ministry of Education and Culture based on a proposal from FILI – Finnish Literature Exchange.
Over the course of her remarkable career, Siri Nordborg Møller has translated more than 130 Finnish books. She began her work as a translator in 2006 while studying for a Master’s degree in Finnish at the University of Copenhagen. Since then, she has translated a wide range of acclaimed fiction, including seven books by Leena Krohn; works by Matias Riikonen, Johanna Sinisalo and Pajtim Statovci; crime novels by Satu Rämö, Max Seeck and Arttu Tuominen; poetry by Sirkka Turkka; graphic novels by JP Ahonen and Tommi Musturi; nonfiction by Mia Kankimäki; and many internationally appealing children’s and young adult books, such as Aino Havukainen and Sami Toivonen’s Tatu and Patu series and works by Siiri Enoranta, Vilja-Tuulia Huotarinen, Siri Kolu and Timo Parvela. She is currently working on Arttu Tuominen’s Delta crime series and Naraka, a fantasy novel by Elina Pitkäkangas.
According to Nordborg Møller, versatility has always been central to her work. She aims to accept every translation assignment offered to her so she can present Danish readers with as broad a range of Finnish literature as possible.
“Siri Nordborg Møller has an exceptionally wide-ranging body of work, and she has been highly productive as a translator. Through her active contribution she has introduced the richness and diversity of Finnish literature to readers in Denmark and increased the international presence of Finnish writing. Translators play a vital role in bringing Finnish literature to new audiences abroad,” says Minister of Science and Culture Mari-Leena Talvitie.
Finnish literature is currently being published in Danish in impressive numbers. “According to FILI’s statistics, Danish had the third‑highest number of Finnish titles published last year, after German and Estonian. Altogether, 28 Finnish books appeared in Danish, seven of them translated by Siri Nordborg Møller,” says FILI’s Director Tiia Strandén.
Siri Nordborg Møller notes that every book brings its own inspiration and challenge. From time to time, she encounters novels whose language resonates so strongly that the translation process becomes pure joy.
“Such works include Anu Kaaja’s Katie‑Kate, Katri Lipson’s The Ice Cream Man, Matias Riikonen’s Matara, Johanna Sinisalo’s Not Before Sundown, Siiri Enoranta’s Summer Storm, Vilja‑Tuulia Huotarinen’s Light Light Light and all of Leena Krohn’s works,” Nordborg Møller says.
She has also translated a substantial body of children’s and young adult literature. Picture books with minimal text may appear simple to translate, but they often contain names and wordplay that demand great ingenuity.
“The Tatu and Patu books are in a class of their own. Their humour and chaos have to be conveyed to Danish readers with energy and wit. The most challenging of all was Tatu and Patu: Monster‑Monster and Other Strange Stories, written entirely in rhyme. Translating it was enormous fun, but at times it felt almost impossible.”
Inquiries and requests for interview: Hannele Jyrkkä, Communications Manager, FILI – Finnish Literature Exchange hannele.jyrkka@finlit.fi, tel. +358 50 322 2387"
Ministry of Education and Culture
Publication date: 4.3.2026 10.06
Type: Press release
https://valtioneuvosto.fi/en/-/1410845/siri-nordborg-m-ller-wins-finnish-state-award-for-foreign-translators
#Metaglossia
#metaglossia_mundus
#métaglossie
"BRUSSELS — Dozens of wannabe EU translators who were forced last year to resit a grueling entry exam because of a technical blunder have now been incorrectly disqualified, they said.
Some of the nearly 10,000 would-be Eurocrats who did the online test last year and who had to repeat the exercise a few months later because of a “set-up defect” were told they were being disregarded because they hadn’t completed all the exams. They say this was an error and that they’ve done everything that was requested.
“I did sit all of them! So I do not understand! How can they be so careless? What do we do?” wrote one applicant on a Facebook group for candidates. Messages in this group and a separate private Whatsapp chat suggest dozens of people are affected. POLITICO has chosen not to name the people who wrote messages because the Facebook group is private.
The tests are run by the European Personnel Selection Office (EPSO), an interinstitutional body that organizes recruitment for institutions including the European Commission, the European Parliament and the Council of the EU. The exams are a gateway to a career in the EU civil service.
“I regret to inform you that your participation [in the process] has come to an end, since you failed to sit at least one of the tests scheduled for the competition,” according to letters sent to two candidates POLITICO spoke to, and screenshotted by several others on the Facebook group for linguist candidates.
There are scores of messages from candidates online who received that message and say they did take part in all of the required exams. Some of those candidates say they contacted TestWe, the platform that runs the online tests, which confirmed to them they had completed all of their tests.
“This is just SOOOO ridiculous,” wrote another person on Facebook, who said she had also been falsely identified as not completing all of the tests.
Two candidates who were affected told POLITICO they are aware of dozens of people who received the email.
“I was already very annoyed when I had to resit the test,” said one candidate who sat the Spanish-language competition last year and asked to remain anonymous. “Now we see all these errors, all these inconsistencies. I have proof of all the exams I sat. I just don’t think it’s fair.”
“We had to wait 1 year for this crap,” one frustrated person with an anonymous username wrote on the Facebook group.
Another candidate who took part in the Greek language competition, and who asked not to be named because they are considering taking legal action, said: “I took it for granted that this was just a mix up with the emails they sent. But it’s been more than a week now and we don’t have any news.”
In a statement to POLITICO, a Commission spokesperson said that EPSO took all complaints seriously.
It is “currently carrying out thorough checks on each individual complaint, in close cooperation with its service provider, TestWe,” the spokesperson said. “At this stage, it would be premature to indicate how many candidates may be concerned. All cases are being examined individually, and candidates who have not yet submitted a request can still do so. EPSO is committed to reducing delays as much as possible while ensuring that each request is handled carefully.”
‘Now or never’
The translator tests include exams on language knowledge and verbal and numerical reasoning. Successfully passing those tests and getting onto the EPSO reserve list allows people to apply for specific open positions within the institutions.
The competitions to get on the reserve list only take place once every several years.
“You feel that if you lose this chance, most probably, with all the transformations in the industry like AI, it’s now or never for many of the candidates,” said the Greek-language candidate.
To complicate things further, the reserve lists featuring the successful candidates for some languages — Dutch, Maltese and Danish — of the most recent competitions have already been published, leading candidates to worry that those people have an advantage for jobs.
“The ones who did not have this issue will actually engage in the recruitment process and might have more chances, and that could create an issue as well,” the Greek candidate added.
“How is it so difficult to arrange a test?” wrote another anonymous user on the Facebook group.
This article has been updated"
March 4, 2026 4:02 am CET
By Mari Eccles
https://www.politico.eu/article/eu-translators-botched-their-entry-exam-again/
#Metaglossia
#metaglossia_mundus
#métaglossie