Scooped by
Charles Tiayon
August 13, 2024 12:49 AM
Harnessing AI to Preserve the World’s Endangered Languages

Introduction
The world’s linguistic diversity is under threat. According to UNESCO, over 40% of the approximately 7,000 languages spoken globally are endangered, with many at risk of disappearing forever. As globalization and the dominance of major world languages like English, Mandarin, and Spanish continue to grow, the race is on to preserve the unique cultural treasures embodied in these minority tongues before they are lost to future generations. Fortunately, advances in artificial intelligence (AI) and machine learning are providing powerful new tools in the fight to save endangered languages. From high-tech documentation efforts to community-driven language revitalization programs, AI is playing a critical role in reversing the tide of linguistic extinction. In this article, we’ll explore some of the innovative ways that AI is being leveraged to preserve the world’s endangered languages.

The Power of AI in Language Preservation
At the heart of the endangered language crisis is a lack of comprehensive data. Many minority and indigenous languages have never been thoroughly documented, with no written grammars, dictionaries, or recorded oral histories available. This lack of linguistic data makes it extremely challenging to develop the educational resources, language-learning tools, and computational applications needed to support language revitalization efforts. This is where artificial intelligence is creating a significant transformation. Advanced speech recognition, natural language processing, and machine learning algorithms are enabling the rapid digitization and documentation of endangered language materials at unprecedented scales. Researchers are deploying AI-powered audio and video recording devices to capture spoken language data from fluent elders, while AI-assisted transcription and translation tools are allowing this data to be efficiently processed and annotated.

One pioneering example is the Endangered Languages Documentation Programme (ELDP) at SOAS University of London. This initiative has used AI-powered recording devices and transcription software to build a vast digital archive of endangered language materials, including over 4,000 hours of audio and video recordings in more than 300 languages. By automating the data collection and processing workflow, the ELDP has been able to significantly accelerate the documentation of these at-risk tongues. Similarly, the Wikitongues project has leveraged AI-powered speech recognition to create an online repository of crowdsourced video recordings of people speaking over 1,000 different languages. This growing digital library allows linguists, educators, and community members to access authentic language data and collaborate on preserving their linguistic heritage.

Revitalizing Endangered Languages with AI
Beyond just documenting endangered languages, AI is also playing a crucial role in revitalizing them. Intelligent language-learning chatbots, for instance, are being developed to provide interactive, conversational practice for endangered language speakers, particularly younger generations who may not have had the opportunity to learn from fluent elders. These AI assistants can be customized with culturally relevant content and designed to encourage frequent use, helping to foster intergenerational transmission of endangered languages.

In New Zealand, the Te Hiku Media organization has created an AI-powered language app called “Te Reo Hāpai” that teaches conversational Māori through interactive games and lessons. Similarly, in Canada, the FirstVoices initiative has developed a suite of mobile apps powered by AI speech recognition that allow Indigenous language learners to practice their skills through voice-enabled activities. Multilingual AI systems are also proving useful for language preservation, as they can facilitate communication and collaboration between speakers of different endangered languages. For example, the Universal Dependencies project is using AI-driven multilingual natural language processing to create vast datasets of syntactically annotated text in over 100 languages, including many at-risk minority tongues. This linguistic data can then be leveraged to build machine translation systems, educational resources, and other computational tools to support endangered language communities.

Ethical Considerations
Of course, the integration of AI into language preservation efforts also raises important ethical and practical considerations. There are valid concerns about data privacy, intellectual property rights, and the potential for AI-powered tools to be misused or to inadvertently cause harm to vulnerable language communities. Careful design, rigorous testing, and close collaboration with local stakeholders are essential to ensure that AI is deployed responsibly and equitably in this domain.

Conclusion
The urgent need to preserve the world’s endangered languages has never been more pressing. With over 40% of the approximately 7,000 languages spoken globally now classified as at-risk, the race is on to document, revitalize, and transmit these vital cultural artifacts to future generations before they disappear forever. Fortunately, the rapid advancement of artificial intelligence (AI) and machine learning technologies is providing powerful new tools to aid in this critical effort. From automated language documentation and digitization to interactive AI-powered language learning apps, the integration of AI into language preservation initiatives is transforming the landscape of endangered language conservation. As we continue to explore the remarkable potential of AI to support endangered language communities, it will be essential to do so in a responsible and ethical manner – one that prioritizes the needs, rights, and cultural autonomy of these vulnerable linguistic groups. Only then can we truly harness the full power of AI to safeguard the rich diversity of human expression and ensure that no language is left behind.

You may also like: AI and the Revival of Extinct Languages

FAQ
Q1. What is AI’s role in endangered language preservation?
A1. AI is revolutionizing endangered language preservation through technologies like automated language documentation, AI-powered language learning apps, and multilingual AI systems that facilitate communication and collaboration between speakers of different minority languages.
Q2. What are some examples of AI-powered language preservation initiatives?
A2. Examples include the Endangered Languages Documentation Programme at SOAS University of London, the Wikitongues project, the Te Reo Hāpai Māori language app in New Zealand, and the FirstVoices initiative in Canada.
Q3. What ethical considerations arise with using AI for language preservation?
A3. Key concerns include data privacy, intellectual property rights, and the potential for AI tools to be misused or cause unintended harm to vulnerable language communities. Careful design, rigorous testing, and close collaboration with local stakeholders are essential.
Q4. How can AI help reverse the tide of linguistic extinction?
A4. By automating and streamlining the documentation, revitalization, and transmission of endangered languages, AI technologies are providing new hope for safeguarding the rich cultural diversity embodied in the world’s minority tongues.
Q5. What is the current state of endangered language preservation globally?
A5. According to UNESCO, over 40% of the approximately 7,000 languages spoken globally are currently endangered, with many at serious risk of disappearing forever due to factors like globalization and the dominance of major world languages.
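For readers curious what the AI-assisted transcription step described above can look like in practice, here is a minimal sketch using the openly available Whisper model through the Hugging Face transformers pipeline. The model choice and the audio file name are illustrative assumptions, not the actual tooling used by ELDP or Wikitongues.

```python
# Minimal sketch of AI-assisted transcription for language-documentation work.
# Assumptions: "openai/whisper-small" as the speech model and a local WAV file;
# real documentation projects may rely on entirely different tools.
from transformers import pipeline

def transcribe_recording(audio_path: str) -> dict:
    """Transcribe a field recording and return text plus rough segment timestamps."""
    asr = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-small",  # multilingual checkpoint (assumed choice)
    )
    # return_timestamps keeps segment timing, which is useful for later annotation
    return asr(audio_path, return_timestamps=True)

if __name__ == "__main__":
    result = transcribe_recording("elder_interview_001.wav")  # hypothetical file name
    print(result["text"])
    for chunk in result.get("chunks", []):
        print(chunk["timestamp"], chunk["text"])
```

In a real documentation project such an automatic transcript would only be a first draft for fluent speakers and linguists to correct and annotate.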
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies like OpenAI with GPT, Google with Gemini, Meta’s LLaMa, Anthropic’s Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts, with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
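One concrete way this skew can be observed is in tokenization: a tokenizer trained mostly on English text tends to shatter sentences in under-represented languages into many more pieces, which hurts both cost and quality downstream. The sketch below is a rough illustration rather than anything from the article; it compares an English-centric tokenizer with a multilingual one on a pair of example sentences (the Swahili sentence is an illustrative sample, not drawn from the article).

```python
# Compare how an English-centric tokenizer (GPT-2) and a multilingual one
# (XLM-RoBERTa) fragment comparable content in English and Swahili.
# The sample sentences are illustrative only.
from transformers import AutoTokenizer

samples = {
    "English": "The children are playing football outside.",
    "Swahili": "Watoto wanacheza mpira nje.",
}

tokenizers = {
    "gpt2 (English-centric)": AutoTokenizer.from_pretrained("gpt2"),
    "xlm-roberta-base (multilingual)": AutoTokenizer.from_pretrained("xlm-roberta-base"),
}

for tok_name, tok in tokenizers.items():
    for lang, text in samples.items():
        n_tokens = len(tok.tokenize(text))
        print(f"{tok_name:32s} {lang:8s} -> {n_tokens} tokens")

# A noticeably higher token count for the Swahili sentence under the
# English-centric tokenizer is one symptom of the data skew discussed above.
```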
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
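The article does not describe the Esethu Framework's internal data format, so the snippet below is only a hypothetical sketch of the kind of consent-aware metadata record an ethically collected speech clip might carry; every field name here is invented for illustration.

```python
# Hypothetical manifest entry for an ethically collected speech recording.
# Field names are illustrative; they are NOT the Esethu Framework's actual schema.
import json

record = {
    "clip_id": "isixhosa_read_000123",
    "language": "isiXhosa",
    "dialect": "Eastern Cape",              # illustrative value
    "speaker": {
        "pseudonym": "spk_042",              # no personally identifying data stored
        "consent_form_id": "consent_2024_042",
        "revenue_share": True,               # speakers share in downstream revenue
    },
    "audio": {
        "path": "audio/isixhosa_read_000123.wav",
        "sample_rate_hz": 16000,
        "duration_s": 7.4,
    },
    "transcript": "…",                       # verbatim read-speech prompt
    "license": "community-negotiated",       # terms set with the speaker community
}

print(json.dumps(record, indent=2, ensure_ascii=False))
```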
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"Les patients ne maîtrisant pas les langues nationales peuvent, dans les hôpitaux, recourir à des médiateurs interculturels. En 2024, ce dispositif a représenté une dépense de 7,666 millions d’euros pour l’État fédéral, selon des chiffres communiqués par le ministre des Affaires sociales Frank Vandenbroucke (Vooruit) en réponse à une question du sénateur Vlaams Belang Klaas Slootmans.
Au total, 127 médiateurs interculturels étaient actifs dans les hôpitaux en 2024, pour 136.518 interventions. Le système a été utilisé pour un large éventail de langues, principalement le turc, l’arabe et le berbère. Le russe et le bulgare figurent également parmi les langues fréquemment rencontrées, relève Klaas Slootmans.
« Ces chiffres montrent une nouvelle fois comment la Belgique évolue de plus en plus vers un État multiculturel de facilités, où le multilinguisme est financé de manière structurelle, tandis que la population propre se retrouve de plus en plus souvent, littéralement et figurativement, laissée dans le froid », réagit le sénateur. « Cela contraste fortement avec la situation des néerlandophones, qui dans les hôpitaux bruxellois ne sont même pas aidés dans leur propre langue. »
Les dépenses ont fortement augmenté au cours de la dernière décennie. En 2013, 74 médiateurs avaient réalisé 80.760 interventions pour un coût de 2,662 millions d’euros. En 2019, le dispositif comptait 117 médiateurs, avec 114.060 interventions et une facture de 3,822 millions d’euros. Pendant la période du Covid (2020-2021), le nombre de médiateurs est resté stable à 117, tandis que les coûts sont montés à 4,049 millions d’euros puis à 5,102 millions d’euros.
En 2024, 127 médiateurs ont effectué 136.518 interventions, pour un coût total de 7,666 millions d’euros.
« En à peine dix ans, le coût a donc presque triplé, ce qui est manifestement la conséquence directe d’une politique migratoire fédérale défaillante, qui impose à la Flandre des factures exorbitantes. Alors que nos hôpitaux ploient sous les pénuries de personnel et les listes d’attente, nous injectons des millions dans des interprètes pour ceux qui refusent de s’adapter », fustige Klaas Slootmans."
https://www.medi-sphere.be/fr/actualites/les-couts-lies-a-la-traduction-dans-les-hopitaux-ont-presque-triple-en-dix-ans.html
#Metaglossia
#metaglossia_mundus
#métaglossie
"Google, in partnership with African research institutions, has launched WAXAL, a large-scale open speech dataset designed to enhance artificial intelligence tools for African languages, writes Delight Sunny for Techpoint Africa.
The dataset includes speech data for 21 Sub-Saharan African languages, such as Hausa, Yoruba, Igbo, Luganda, Swahili and Acholi. According to Google, WAXAL is designed to support over 100 million speakers who have largely been left out of voice-based technologies due to a lack of quality language data.
“The ultimate impact of WAXAL is the empowerment of people in Africa. This dataset provides the critical foundation for students, researchers and entrepreneurs to build technology on their own terms, in their own languages, finally reaching over 100 million people,” Aisha Walcott-Bryant, head of Google Research Africa, says." Techpoint Africa 06 February 2026 https://www.universityworldnews.com/post-mobile.php?story=20260206102126739 #Metaglossia #metaglossia_mundus #métaglossie
"Nida’s Functional Equivalence Theory and Koller’s Equivalence Theory have formed two representative theoretical paradigms centered around the issue of “equivalence”, exerting a profound influence on the development and practice of translation theory. This article attempts to introduce Nida’s “Analysis-Transfer-Restructuring-Testing” model of Functional Equivalence Theory and the multi-level equivalence types of Koller’s Equivalence Theory, which include “denotative, connotative, text-normative, pragmatic, and formal-aesthetic” aspects, as well as to compare the connections and differences between Nida’s and Koller’s translation theories. The aim is to help readers gain a deeper understanding of Nida’s and Koller’s translation theories and to provide some reference for the more reasonable and flexible selection and application of translation theories in practice.
Keywords
Nida’s Functional Equivalence Theory, Koller’s Equivalence Theory, Comparative Study" Advances in Applied Sociology > Vol.16 No.2, February 2026
A Comparative Study of Nida’s and Koller’s Translation Theories Yan Li School of Language and Cultures, Youjiang Medical University for Nationalities, Baise, China. DOI: 10.4236/aasoci.2026.162005
Abstract https://www.scirp.org/journal/paperinformation?paperid=149427 #Metaglossia #metaglossia_mundus #métaglossie
"Freelance Simultaneous and Conference Interpreter Required Copy
Conference IPs
Homebased
Freelance/Self-Employed
About Clear Voice
Clear Voice are a fast-growing language services provider operating in the Public and Private Sector and attracting clients needing simultaneous and conference interpreting services.
We are seeking highly skilled Freelance Simultaneous and Conference Interpreters both in the UK and Overseas to provide high-level language interpretation services (in person and video remote interpreting) in live, multilingual environments. This is a self-employed, contract-based opportunity to support our clients’ needs.
Clear Voice is an equal opportunities organisation and welcomes applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation or age.
Assignments include but are not limited to:
Commercial/Business
Events
Asylum and Refugee Services
Modern Day Slavery
Social Housing / Social Security
Legal
Medical
Employment / Education
Academic
Retail/e-commerce
Marketing
Media & Broadcasting
What we expect from our linguists
Essential Skills
Native or near-native fluency in both source and target languages
Proficiency in English language (when appropriate to supply evidence)
Ability to provide accurate, real-time interpretation from the source language to the target language and vice versa
Experience in interpreting in various formats including booth interpreting, remote simultaneous interpreting and whispered interpreting
Collaboration with event organisers, Clear Voice, technical staff, and other interpreters to ensure smooth delivery
Knowledge and understanding of simultaneous interpreting
Ability to maintain confidentiality and impartiality at all times
Ability to prepare for assignments by reviewing terminology, briefings, and materials
Adherence to professional ethics and standards of practice
Familiarity with interpretation equipment and Remote Simultaneous Interpreting (RSI) platforms such as Interprefy, Zoom, Kudo, or similar
Qualifications and Experience
Minimum 2 years’ proven professional experience in simultaneous and/or conference interpreting
Accreditation or certification from a recognised interpreting body (e.g., NAATI, AIIC, ATA, UN, EU, or equivalent).
Master’s degree in Interpreting or conference interpreting
Post-graduate qualification in conference interpreting
Full qualified membership of a relevant professional body is desirable, but not essential.
You must be also able to provide:
Evidence that you are eligible to work in the UK, if UK based
2 Professional References
Provision of a valid DBS certificate (if UK based) or Police"
https://careers.clearvoice.org.uk/job/7IE7c2dazkXRZfP5pa0tsL
#Metaglossia
#metaglossia_mundus
#métaglossie
"Bad Bunny’s Super Bowl halftime show will be the first time a Super Bowl performance will be interpreted in Puerto Rican Sign Language, also known as LSPR.
Puerto Rican partially deaf performer Celimar Rivera Cosme will make history while interpreting the performance. Rivera Cosme of Puerto Rico, who will lead a “multilingual signing program” during Bad Bunny’s Super Bowl halftime performance, will prove that Puerto Rican Deaf culture has a place on one of music’s biggest stages.
LSPR incorporates Spanish and is a dialect of American Sign Language. It comes with its own grammar, rhythm, and cultural identity inspired by Puerto Rico’s history and Deaf community.
“In a historic first, the signed rendition of the Apple Music Halftime Show will feature a multilingual signing program incorporating Puerto Rican Sign Language, led by Deaf Puerto Rican performer Celimar Rivera Cosme,” an NFL release said.
Ahead of her performance, Rivera Cosme told ABC News through an interpreter in Spanish, “I feel incredibly proud because everything that Bad Bunny is doing is making history. And it means that sign language is also going to make history there.”
“The most important thing is to emphasize that we have our own language, our own identity and our own culture.”
Rivera Cosme does more than interpretation as she puts in a lot of emotion and storytelling while doing her work, making sure that deaf audiences have almost the same experience as other fans while at concerts.
“Interpretation is one thing, but I’m not going there to interpret, I’m going to perform. In Puerto Rico, we’re very used to seeing interpreters everywhere,” she said to ABC News. “But the Super Bowl is different — you have to add your flow, your vibe, your style, and your attitude, and bring all of that together with the interpretation. The body’s movement is different.”
She recounted how shocked she was when she got the call from the NFL for the job.
“I said, ‘Well, this means a great responsibility for me, especially for my deaf community, because it’s great that they chose me, but I want to shine, and I want the Puerto Rican deaf community to shine with me too,'” she said.
But this will not be the first time Rivera Cosme will be interpreting for Bad Bunny. In 2022, while the deaf community in Puerto Rico was advocating to have interpreters at concerts, it asked the team of Bad Bunny if they could include interpreters at his world tour stop in Puerto Rico. He agreed and Rivera Cosme was brought on board for his 2022 World’s Hottest Tour, where she earned acclaim. She also interpreted during Bad Bunny’s El Choli residency in Puerto Rico, a cultural celebration of Puerto Rican identity.
“But the Super Bowl is a very big stage where many people will have their eyes on this event, and I’m very proud of that and of representing our Puerto Rican Sign Language,” Rivera Cosme said.
Super Bowl LX will be played at Levi’s Stadium in Santa Clara, California, on February 8, with Bad Bunny headlining the halftime show and becoming the first solo Spanish performer. Jay-Z had to come to the defense of Bad Bunny recently after a section of people, including President Donald Trump, expressed displeasure with the Puerto Rican musician being selected to headline the Halftime Show.
Bad Bunny, whose real name is Benito Antonio Martínez Ocasio, performs in Spanish and has a global fanbase. The 31-year-old is also a vocal Trump critic, so it came as no surprise that the president and other Conservatives condemned his selection."
by Mildred Europa Taylor
February 05, 2026,
https://face2faceafrica.com/article/meet-the-puerto-rican-deaf-interpreter-set-to-make-history-during-bad-bunny-super-bowl-halftime-show
#Metaglossia
#metaglossia_mundus
#métaglossie
...the prosecution argued that the accused does not require a Malayalam interpreter, as the court continues to face difficulty securing two interpreters for the language.
"WOODLAND, Calif. — During a hearing Wednesday in Superior Court of California in Yolo County, the prosecution argued that the accused does not require a Malayalam interpreter, as the court continues to face difficulty securing two interpreters for the language.
Judge Paul K. Richardson opened the trial readiness conference by underscoring the shortage of Malayalam interpreters, saying, “The people that help schedule interpreters here in Yolo County have indicated that there are not sufficient numbers of interpreters in the (accused’s) requested language,” which complicates scheduling future court dates and has the potential to prolong the case.
“Maybe if we can secure the interpreters, knowing that we’re gonna need two to alternate, and then set the case,” Deputy Public Defender Vincent Maher said.
After a brief discussion about the scheduling conflict, Deputy District Attorney Aloysius Patchen said, “There were approximately six court dates without an interpreter and (the accused) testified in English for an entire day at the last hearing.”
Patchen went on to reference the accused’s ex-wife, who is also the victim in the domestic abuse case, stating that during their relationship they spoke only English and that the accused had “studied in English since grade school all the way to his master’s.”
“I think it’s highly inappropriate for people, especially the prosecutor, to be commenting on my client’s need,” Maher said.
Maher added, “Nobody is in a better position than me to determine my client’s understanding of legalese or the nuances of the English language; I really am at a loss that a prosecutor at every single appearance can make derogatory remarks about somebody who needs an interpreter.”
Maher emphasized that it was ludicrous to “berate (his) client as a fraud because he wants to have an interpreter that speaks his native language.”
Following further discussion, Richardson said two Malayalam interpreters would be present at the next court hearing, despite the difficulty in securing them and the prosecution’s insistence that the accused understands English and does not need a Malayalam interpreter.
The trial-setting conference is scheduled for March 4 at 9 a.m."
By Nancy Carrillo
February 5, 2026
https://davisvanguard.org/2026/02/court-struggles-with-interpreters/
#Metaglossia
#metaglossia_mundus
#métaglossie
"When I hear someone talking about translated books, my first thought is excitement.
With translated works of literature, I’m able to learn about other cultures and experience stories from around the world. How beautiful is that?
Prior to mass publications of printed books as well as translations becoming more accessible, we wouldn’t have this vast amount of knowledge from authors that speak languages other than our own.
As someone who, unfortunately, doesn’t know another spoken or written language other than English, I’m incredibly grateful for the gift that translators give us. When done right, the people who translate works are able to encapsulate the essence of the story. A work of literature that surpasses language barriers is truly a magical moment.
Thus far, I have primarily read works from English speakers, given my existence in a primarily English-speaking country. While I have read books from authors outside of the U.S., most of the works have not been translated, but one novel I read ages ago has always stuck with me.
While I don’t quite remember the premise years after reading, I remember the feeling the novel left me with. It felt philosophical yet incredibly grounding. It was a warm hug that nurtured my middle school soul. I loved Paulo Coelho’s novel “The Alchemist” so much I finished it in one sitting. Granted, it’s not a particularly long novel, but still, I was astounded that a book could make me feel that way.
Originally published in Portuguese, “The Alchemist” has been translated into over 60 languages since 1988. After reading the novel, I was so intrigued that I did what anyone obsessed with a book would do, google absolutely everything about it.
According to an interview with The Guardian from the early 2000s, Coelho said he wrote the novel in roughly two weeks. The all-encompassing feeling I had while reading was echoed by the author in his writing of the novel. I was incredibly moved by this piece.
Since then, there have been works that have consumed me, but hardly any to the same extent that “The Alchemist” did.
Last year I was wandering around Auntie’s Bookstore — as I frequently do — when I saw that one of their tables was filled with translated works by women. I was so shocked I stood there in awe for a long time. In that moment, I was so appreciative of the emphasis Auntie’s deliberately put on underrepresented literature.
Since then, I’ve continued to be pleasantly surprised at Auntie’s. They continue to perpetuate the values that are traditionally associated with small, independent bookstores, and I love that about them. Auntie’s being a welcoming and queer-friendly space that puts on community events and represents marginalized authors is incredibly reminiscent of home.
I’m incredibly fortunate that I was raised in a place that valued a diversity of perspectives and cultures. My local bookstore has always felt like home; a comfort washes over me whenever I step inside.
Through my time frequenting independent bookstores and libraries, I’ve been exposed to a diversity of opinions, perspectives and cultures. For this, I’m incredibly grateful. While my reading list is still mostly centered on English literature, I aim to increase the diversity of authorship in the coming years. Whether that is from authors of color, queer stories or translated works, I’m excited for what the future holds.
I have amassed quite the list of books on my "to be read" list, as so many novels pique my interest, but alas, there are only so many hours in the day to be spent reading for pleasure rather than out of desperate necessity for assignments.
Recommended to me have been many translated novels, including “Tender is the Flesh” by Agustina Bazterrica, “The Shadow of the Wind” by Carlos Ruiz Zafón and “Siddhartha” by Hermann Hesse. I cannot wait until my schedule allows me the time to read these novels.
Reading has always been my escape, my reprieve from daily life. I have been continuously amazed by the ability authors have to share other people’s experiences and stories through literature. It’s magical. When traveling across languages, I find it to be even more special. Charlie Oltman is a news editor."
https://www.gonzagabulletin.com/opinion/translated-works-of-literature-are-essential-conduits-of-diversity/article_561d8e54-4042-460d-8595-27b06a8af3e6.html #Metaglossia #metaglossia_mundus #métaglossie
"Ever since COVID-19, video conferencing apps have been a staple web tool in almost every working professional and student's life. Platforms like Google Meet, Microsoft Teams, Zoom, and more have constantly gained new bells and whistles since, with Google now looking to abolish language barriers with its latest Google Meet feature expansion.
Google Meet on computers supports real-time speech translation. Unlike translated captions, speech translation allows participants to speak in their native language, while the platform translates the speech in real-time into spoken sentences, essentially preserving the "flow of conversation by creating an audio translation dubbed over the original speech that mimics the speaker’s tone and speaking cadence."
Up until now, this feature has been limited to Google Meet on computers. Leaks recently suggested that the feature might make its way to the platform's mobile apps, and that's exactly what's happening now.
In a new Workspace updates post announcing speech translation's general availability for businesses, the tech giant also announced the feature's mobile expansion.
You can't try it out just yet
Speech translation will roll out to the Meet Android and iOS apps in the coming months.
In addition to the mobile expansion, the tech giant also indicated that it will make visual updates to the speech translation user interface. This applies to the feature's UI on computers. Additionally, Google will also make refinements to translation accuracy and nuance, which should apply to the feature on all surfaces.
It's worth noting that speech translation is limited to Business Standard and Plus, Enterprise Standard and Plus, Frontline Plus, and Google AI Pro and Ultra customers. Those with a Google AI Ultra for Business add-on or a Google AI Pro for Education add-on can also access the feature."
Karandeep Singh Oberoi
Feb 4, 2026, 5:24 PM EST
https://www.androidpolice.com/google-meet-speech-translation-mobile-expansion/
#Metaglossia
#metaglossia_mundus
#métaglossie
"New study explores the value placed on support from family, friends, and society by those in various kinds of intercultural relationships.
Friends approving of romantic partners matters: research shows it's not just socially convenient but also linked to relationship quality and even physical stress responses. Gaining that approval can be even more important for couples who are crossing cultural boundaries, where relationships may attract scrutiny, disapproval, or misunderstanding.
Drawing on data from over 750 people in intercultural relationships, Hanieh Naeimi and team explore whose approval matters most for relationship quality. Writing in Social Psychological and Personality Science, they show that approval from friends, family, and society matter in different ways for different couples depending on cultural background and relationship stage.
Participants represented a wide range of backgrounds, with around half identifying as White and the rest as Black, Latin American, East Asian, South Asian, Middle Eastern, Native American, or multicultural. Their relationships varied widely in length, from under a year to several decades, and ranged all the way from just dating to committed relationships to marriage. To examine differences between couples, participants were grouped into three broad pairings: couples where both partners were White, couples with one White partner and one from a minority background, and couples where both were from minority backgrounds but with different cultural backgrounds.
All participants completed the same online survey. Firstly, they reported how much approval they felt from their family, friends, and society more generally using brief rating scales, before reporting on their relationship quality, including how satisfied and how committed they felt.
Feeling that friends approved of a relationship was the strongest predictor of both relationship satisfaction and commitment, with higher friend approval linked to higher satisfaction and stronger commitment for about 95% of couples. Family approval also mattered, but less significantly and consistently. It was particularly pertinent for Latin American and Middle Eastern participants and for couples in relationships of a year or less, likely because partners are still integrating each other into familial networks. Interestingly, approval from society at large had no measurable effect on either satisfaction or commitment.
Couples where both partners were from minority backgrounds benefitted the most from friend approval. Overall, the support of friends explained about 23% of the differences in relationship satisfaction and 26% of differences in commitment across all couples — suggesting that while family and societal approval sometimes help, feeling supported by friends is the most reliable predictor of a happy, committed intercultural relationship.
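The figure of roughly 23% of the differences being explained is a statement about variance explained (R squared). The short sketch below illustrates that idea on simulated data; the numbers are synthetic stand-ins and do not reproduce the study's dataset or results.

```python
# Illustration of "percent of differences explained" (R^2) on synthetic data.
# The data below are simulated, not the study's; only the concept carries over.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 750
friend_approval = rng.normal(5.0, 1.5, size=n)        # rating-scale style scores
noise = rng.normal(0.0, 1.3, size=n)                  # everything else affecting satisfaction
satisfaction = 2.0 + 0.45 * friend_approval + noise   # satisfaction partly tracks approval

X = friend_approval.reshape(-1, 1)
model = LinearRegression().fit(X, satisfaction)
r_squared = model.score(X, satisfaction)              # share of variance explained
print(f"Share of variation in satisfaction explained by friend approval: {r_squared:.0%}")
```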
Because the data for this study was collected at a single point in time, we can't tell if social approval actually improves relationships, or if happier couples just feel more supported. Grouping couples into broad cultural categories may also mean more nuanced differences were missed.
Overall, though, the findings suggest that feeling validated and accepted by one's peers can strengthen a sense of satisfaction and commitment in a romantic relationship, helping couples grow even when family or societal approval is mixed.
Read the paper in full:
Naeimi, H., Muise, A., Di Bartolomeo, A., West, A., & Impett, E. A. (2025). With a Little Help From My Friends: Social Approval and Relationship Quality in Intercultural Romantic Relationships. Social Psychological and Personality Science. https://doi.org/10.1177/1948550625138944"
03 February 2026
By Emily Reynolds
https://www.bps.org.uk/research-digest/intercultural-relationships-whose-support-matters-most
#metaglossia
#metaglossia_mundus
#métaglossie
"Espresso Translations New York Earns ATC Membership and Elevates Translation Services NY Standards PRESS RELEASE GlobeNewswire Feb. 2, 2026, 05:49 PM New York, NY, Feb. 02, 2026 (GLOBE NEWSWIRE) -- Certified translation demands are rising fast across New York and the USA, and clients are responding by choosing agencies backed by recognised industry standards. Espresso Translations New York has recently become an accredited member of the Association of Translation Companies (ATC), a milestone that strengthens its credibility for official and business-critical multilingual communication.
For clients comparing the top certified translation agencies in New York and the USA, the ATC serves as a key industry benchmark, listing vetted language service providers in its member directory. This makes ATC membership especially relevant for organisations and individuals who want added confidence that a translation agency operates with professional standards and quality controls.
That is exactly why Espresso Translations New York supports organisations and individuals who require accurate multilingual work for legal, commercial, academic, and international use. These projects often involve court filings, immigration applications, academic transcripts and diplomas, business contracts, and regulated financial or compliance documents, where even small errors can create delays or complications. To meet that standard, the agency delivers translation solutions that protect meaning and context while keeping language clear, consistent, and reliable.
“Espresso Translations New York stands out by pairing recognised accreditation with practical language solutions for high-stakes needs,” said a spokesperson for this New York-based translation provider. “As a translation agency New York business community trusts, the company delivers certified translations, localisation, transcription, subtitling, editing, and multilingual support.”
This full-service capability helps clients manage multiple language requirements through one consistent provider, especially when accuracy and presentation quality must remain consistent across formats. Espresso Translation’s language solutions also support more than 150 languages, helping organisations expand internationally and assisting individuals with official documentation that requires precision and cultural sensitivity.
Quality standards remain a key differentiator in the USA translation market. The ATC member listing notes that Espresso Translations is ISO-certified and supported by a large network of linguists. This signals structured workflows and quality oversight for clients who want more than basic translation delivery. These credentials matter because certified translation work often supports decisions where errors can create delays, confusion, or compliance issues. Espresso Translations applies professional review processes that help reduce risks tied to inaccuracies, formatting inconsistencies, or wording that fails to match the intent of the original text.
As part of its full-suite translation services New York businesses count on, the company also provides professional transcription services for clients who need accurate spoken content converted into clear, usable text. This capability supports legal, academic, and commercial needs, ensuring interviews, meetings, recorded statements, and multilingual audio are documented accurately. Alongside ATC-recognised standards, the agency reinforces its position as a trusted provider for certified translation and multilingual projects.
Espresso Translations continues to raise expectations for certified translation services in New York and the USA by combining real-world delivery with recognised professional standards. With ATC membership strengthening credibility, the agency gives organisations and individuals added confidence that every project is handled with accuracy and accountability.
To learn more about Espresso Translations and request certified translation support from a translation agency New York clients trust, visit www.espressotranslations.com.
About Espresso Translations
Founded in 2018, Espresso Translations is a New York-based language services provider delivering certified translations, localisation, transcription, subtitling, and multilingual editing for businesses and individuals worldwide. The agency supports 150+ languages and applies structured quality processes to produce accurate, culturally sensitive communication." https://markets.businessinsider.com/news/stocks/espresso-translations-new-york-earns-atc-membership-and-elevates-translation-services-ny-standards-1035775617 #Metaglossia #metaglossia_mundus #métaglossie
"The Army’s Western Hemisphere Institute for Security Cooperation marks one of the first military components reporting impacts from the platform.
The Pentagon recently tapped California-based tech company LILT to supply artificial intelligence-powered translation options to military forces worldwide.
According to a press release viewed by DefenseScoop before its public release Tuesday, the Defense Department’s Chief Digital and AI Office awarded LILT a flexible other transaction contract to expand its platform for military domain-specific vocabularies, after it was prototyped and proven via the Defense Innovation Unit.
“The platform enables rapid and accurate translation of text, video, and audio content into or out of English,” per the release. “This specialized, unique AI platform supports U.S. military missions that span from understanding foreign technical documentation and training materials to facilitating foreign partner exercises and supporting direct action missions.”
The Army’s Western Hemisphere Institute for Security Cooperation (WHINSEC) marks one of the first military components reporting impacts from the platform.
Advertisement
It historically would take personnel a full year to translate materials for the institute’s Command and General Staff Officer Course (CGSOC), which is delivered in Spanish and underpins instruction to students from a dozen Latin American and Caribbean partner nations. The course lasts 11 months.
In 2025, WHINSEC officials used LILT technology to translate the entire current-year CGSOC curriculum over the course of just a few weeks.
“LILT AI Translation has catapulted WHINSEC into the future of AI and its nexus with security cooperation and professional military education across the Western Hemisphere. We can now rapidly integrate international forces from countries that partner with the U.S. at all echelons: in the classrooms, during multinational exercises, and foreseeably in coalition combat outposts,” Army Col. Eldridge Singleton, 9th WHINSEC commandant, said in a statement. “That’s what we train for at CGSOC.”
The press release did not disclose the total value of the contract. A Pentagon spokesperson did not respond to DefenseScoop’s request for more information.
The announcement comes on the heels of DOD leaders launching a new AI Acceleration Strategy, which directed personnel to integrate the emerging technology into their daily operations." Brandi Vincent February 3, 2026 Brandi Vincent is a Senior Reporter at DefenseScoop, where she reports on disruptive technologies and associated policies impacting Pentagon and military personnel. Prior to joining SNG, she produced a documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland. She was named Best New Journalist at the 2024 Defence Media Awards."
https://defensescoop.com/2026/02/03/ai-enabled-translation-military-use-cdao-lilt-contract/ #Metaglossia #metaglossia_mundus #métaglossie
"Los chicos tienen un vocabulario que los adultos no entienden. El nuevo diccionario adolescente: un traductor para entender qué quieren decir los chicos hoy Miércoles 04 de Febrero de 2026, 07:08
“Dejá de tirar beef amigo”, “Es bait, no caigas”, “Amiga, devoraste con ese outfit, serviste face card”, “Subió una foto en blanco y negro sin poner descripción… está farmeando aura de misterioso”, “Cómo carreaste ayer en la fiesta, tenés alto rizz, estás en tu prime”, “No sé cómo no te diste cuenta, si ese pibe es una red flag andante”, “Tengo que hacerte un update del chisme porque te falta lore”, “Pegué un re glow up desde que entré a mi fitness era, siento que estoy llegando a mi peak”.
Estas son solo algunas de las palabras que integran el “nuevo diccionario” de los sub-18: una serie de códigos que los hermana y que, a su vez, los diferencia del mundo adulto que no puede llegar a comprender de qué hablan cuando los están escuchando.
“Hay dos cosas que yo siempre digo sobre el léxico joven. Primero, que la lengua cambia, es una constatación; no es tan violento el cambio, pero en el léxico es muy fácil incorporar una palabra nueva porque es el área de la lengua más susceptible a cambiar. Por otro lado, hay que recordar que a cierta edad hay una necesidad de diferenciarse identitariamente con los adultos o figuras de autoridad. Se crea un grupo de pertenencia que se une y se ata a la lengua”, opina Santiago Kalinowski, director del departamento de Investigaciones Lingüísticas y Filológicas de la Academia Argentina de Letras.
Algunas palabras clave
AURA: una especie de puntuación invisible de qué tan “cool”, respetado o imponente sos. Ejemplo: “Esto me da +1000 de aura”.
BAIT: viene del inglés y significa literalmente “cebo” o “carnada”. En el mundo de los adolescentes e Internet, se refiere a contenido creado específicamente para provocar una reacción, engañar a alguien o generar una pelea. Ejemplo: “Esto es bait, no caigas”.
BEEF: conflicto o pelea entre personas. Ejemplo: “Estos dos tienen un beef”.
BEIGE FLAG: significa “bandera beige” y es un término nuevo para describir algo raro o aburrido de una persona que no llega a ser malo (red flag) ni bueno (green flag).
BRAIN ROT: podredumbre cerebral por estar expuesto en demasía a contenido digital “basura”.
CARREAR: viene del inglés “to carry” (cargar o llevar a cuestas) y nació originalmente en el mundo de los videojuegos, en donde el “carry” es la persona que tiene todo el peso del equipo. Ejemplo: “La música estaba malísima, pero mi amigo empezó a bailar y carreó la noche”.
DELULU: es una abreviatura de la palabra inglesa delusional (delirante). Se usa para describir a alcés guien que tiene fantasías o expectativas poco realistas, especialmente en el amor o respecto a sus metas personales.
DEVORAR: se usa para decir que alguien hizo algo extremadamente bien, con mucha confianza o con un estilo impecable.
FACE CARD: se usa para hablar de la belleza o el atractivo en el rostro de alguien, a veces figura en oraciones en inglés como “face card never declines”, o en español “la face card de esa chica es increíble”.
FARMEAR: hacer lo mismo mil veces para ganar puntos o ítems y así subir de nivel o progresar. Se usa en el mundo de los videojuegos. Se puede combinar con “aura” y decir: “Estoy farmeando aura”.
FOMO: en inglés significa “fear of missing out”, o sea la ansiedad o miedo de quedarse afuera de algo. Ejemplo: “Literalmente solo vine por FOMO, estaba cansadísima”.
GLOW UP: es estar en tu mejor momento.
LORE: Es la historia de fondo de alguien o de algo, da más contexto o historia a una situación puntual. Ejemplo: “No te puedo explicar por qué es así, es que no conotodo su lore”.
PRIME: estar en tu mejor momento. Ejemplo: “Estoy en mi prime”. Es como “glow up”.
RED FLAG: traducido del inglés significa “bandera roja”. Se suele utilizar cuando en una relación amorosa hay señales que indican comportamientos que no nos gustan o que podrían considerarse tóxicos. Ejemplo: “No viste las red flags”, “Era una red flag andante”.
SERVIR: esta expresión se usa cuando alguien, especialmente una mujer, se luce haciendo algo. Ejemplo: “Ella sirvió rostro (face card) ayer”, “ella sirvió actuación”.
SIX SEVEN: los adolescentes lo usan como una jerga viral sin un significado fijo; sirve más como un código de identidad ligado a desconcertar a los adultos.
VIBE: la vibra que algo o alguien te genera. Ejemplo: “Este chico me da malas vibes”.
W MOMENTO: es común en el lenguaje de los videojuegos, transmisiones en vivo y redes sociales. La “w” viene de la palabra “win” (victoria). Un “W moment” es un momento de victoria o una situación donde alguien hizo algo genial.
Los orígenes
El origen de estas palabras es variado: algunas provienen del mundo gamer (como farmear aura, o carrear la partida), otras nacieron a partir de memes virales (six seven), otras están más asociadas a la comunidad LGBTQ+ (devorar, servir, slay) y otras a redes sociales como Tik Tok (Ok mañana). Es un vocabulario que viene en gran parte del inglés, el idioma predominante.
“Hay una jerga gamer y cada tanto una palabra que circula ahí pega un salto y se usa en otros ambientes, como en el streaming. La palabra “lore” nadie la explica en los mundos gamers, pero cuando uno ve streams y la usan, la explican. Es una palabra vieja en inglés que significa “saber”, entre culta y arcaica, que por la trayectoria sinuosa de los juegos de rol como el Minecraft empieza a circular como algo más popular, y trata de saltar a un ámbito más general”, analiza el experto.
Añade que se ve una trayectoria al ámbito de chimento, por ejemplo. “Hoy hablan de Luciano Castro y el lore que hay detrás”, ilustra Kalinowski. La farándula, entonces, es una de las áreas en donde palabras como lore -es decir, la historia de fondo o trama de un personaje- cobran relevancia. Pero no todas las palabras que usan los de la Generación Alfa (2010- 2024/5) -algunas las comparten con los más chicos de la Generación Z- están amplificadas por los medios.
La mayoría son indescifrables para los papás, los hermanos mayores, los abuelos y otros adultos. En este último tiempo surgieron reels de Instagram o videos de Tik Tok en donde los sub-18 ponen a sus padres frente al desafío viral de leer palabras de su generación sin tener idea de lo que están diciendo.
“Hay que advertir que lo que sucede en esos años de la adolescencia son rasgos del cronolecto -propios de la edad-, en donde los chicos buscan diferenciarse de las generaciones mayores. Lo que suele pasar es que a ciertas edades se establece un código para hablar con la gente de ese grupo y después la enorme mayoría de esas palabras se dejan de usar, no es que se incorporan al léxico general”, agrega Kalinowski.
Para el especialista, de hecho, son muy pocas las palabras surgidas del léxico adolescente que realmente “trascienden” y se logran incorporar de forma permanente al léxico general. El caso más emblemático es el del “re”, un prefijo devenido en adverbio de grado.
De las palabras gamer como lore saltamos a las que provienen de otros espacios del mundo digital. Por ejemplo “six seven”, la palabra del año 2025 según Dictionary.com que irrumpió en TikTok como uno de los memes más repetidos entre jóvenes y adolescentes, no significa nada puntual. Es una palabra que nació como un código y que tiene su fundamento en irritar a los adultos, quienes no entienden por qué se usa.
Es habitual, entonces, que los padres escuchen a sus hijos adolescentes decir six seven en toda clase de situaciones. A una pregunta cualquiera, como “qué hora es” o “qué es eso” los chicos responderán “six seven”.
Hay otras que son más curiosas, porque no se usan tanto de forma oral pero sí en comentarios, sobre todo en TikTok. “Ok mañana”, frase que aparece ante chistes que no funcionan, suele acompañarse de un corazón violeta y es uno de los ejemplos; “ella jura” es otro.
Hoy en día los chicos ya no dicen que alguien tiene “flow” cuando quieren decir que alguien emana una energía especial que no se puede comprar. Ahora dicen que esa persona tiene “aura”, un giro de tuerca más místico o “épico” si se quiere.
El aura -término asociado al mundo del anime, cómics de superhéroes y al deporte, pero con presencia en el mundo gamer ahora- se conforma de “puntos invisibles” de prestigio que una persona suma o resta a través de acciones o actitudes.
Los chicos dirán que alguien tiene “+1000 de aura”, por ejemplo, cuando hace algo calificado como épico. Los gamers irán un poco más allá y podrán decir que está “farmeando aura” (farmear, que viene de la palabra en inglés farming): así se refieren a alguien que hace determinadas cosas para acumular “puntaje” de respeto social.
“Siempre se identifica la tecnología en estos tipos de comportamientos, es una dinámica que ves repetitivamente, hoy ves ciertos formatos que dominan el momento (los reels de Instagram o shorts de YouTube) y si ese contenido se crea deslocalizadamente entonces ahí el propio medio favorece la transferencia del léxico más rápido. Sobre todo del español y del inglés, que se convirtió en la lengua más influyente después de la Segunda Guerra Mundial”, concluye Kalinowski. /La Radio 1029 /Clarin" https://www.contextotucuman.com/nota/373167/el-nuevo-diccionario-adolescente-un-traductor-para-entender-que-quieren-decir-los-chicos-hoy.html #Metaglossia #metaglossia_mundus #métaglossie
"Des progrès inégalés ont été réalisés ces dernières années pour rendre la Bible accessible dans toutes les langues. Elle l’est déjà pour quelque 6 milliards de personnes, et les efforts se poursuivent à un rythme rapide.
Des progrès remarquables : c’est le constat qui s’impose à l’heure du bilan. Il y a plus de deux siècles, des sociétés bibliques et leurs partenaires se sont fixé pour défi de traduire la Bible dans toutes les langues connues. Plus de 6 milliards de personnes, soit 6 personnes sur 8 dans le monde, ont aujourd’hui accès à l’intégralité de la Bible dans la langue qu’ils comprennent le mieux. Au total, la Bible a été traduite dans quelque 760 à 790 langues – les chiffres varient un peu selon les sources. En comptant les traductions partielles, plus de la moitié des 7 398 langues parlées dans le monde sont couvertes.
On observe une nette accélération de l’avancement des travaux depuis quelques années. Au 1er août 2025, il ne restait plus que 544 langues sur liste d'attente pour une traduction des Écritures. L’année précédente, il y en avait 985 et en 1999, elles étaient plus de 5 000 à ne faire l’objet d’aucun projet.
Le Lévitique en Banda-Linda
Comment le travail est-il organisé ? Basé dans la région de Lucerne, le Suisse Christoph Müller travaille depuis le début des années 2000 en tant que conseiller en traduction sur des projets en Afrique centrale. Il vient tout juste de terminer le contrôle d’une traduction du Lévitique en Banda-Linda, langue parlée dans la République Centrafricaine.
Interrogé par téléphone, il explique : « Le conseiller doit pouvoir justifier d’une formation biblique et de connaissances en linguistique. Après une formation continue plus ou moins longue, en fonction de son profil et de ses disponibilités, il commence à exercer sous la supervision d’un consultant plus expérimenté, puis se met à travailler de manière plus ou moins autonome. Il ne doit pas forcément parler la langue dans laquelle la traduction est effectuée, même si, à la longue, il finit naturellement par avoir quelques notions. »
Une approche collaborative
Le rôle du conseiller est d’épauler et de former l’équipe de traducteurs locaux. Ces derniers doivent, en principe, avoir au moins un niveau bac et des connaissances bibliques. « Il y a une cinquantaine d’années, il arrivait encore que des missionnaires effectuent des traductions dans une langue qui n’était pas la leur, mais ce modèle a été abandonné, poursuit Christoph Müller. De nos jours, les traducteurs sont recrutés dans la population locale. » Cette approche collaborative vise à valoriser les compétences des communautés concernées, pour un meilleur résultat final. Le contrôle est réalisé en présence des traducteurs et d’un locuteur natif qui n’a pas participé au projet ; il sert d’interprète au consultant chargé de la vérification du travail. Un examen supplémentaire a lieu au sein de la communauté.
Toutes ces traductions sont le fruit d’une intense collaboration entre différentes organisations d’inspiration chrétienne, comme la Société internationale de linguistique (SIL), pour qui travaille Christoph Müller, ou l’association à but non lucratif Wycliffe, à Bienne. Celle-ci participe à des projets de traduction dans plus de 100 langues. Pour cela, elle dispose de quelque quatre-vingts collaborateurs – dont de précieux consultants envoyés sur le terrain.
Des questions épineuses
Dans la pratique, les difficultés sont nombreuses. Que signifie tel mot dans son contexte historique ? Que voulait dire ce texte à l’époque ? S'agit-il d’une figure de style et si oui, que signifie-t-elle ? Prenons par exemple ce verset : « Priez pour que vous ne deviez pas fuir en hiver » (Marc 13 :18). Comment traduire le mot « hiver » pour des lecteurs qui vivent dans une région où l’on ne connaît pas cette saison ? Certains traducteurs ont trouvé une parade en employant l’expression « temps froid ».
L'inévitable évolution de la langue pose un autre problème. Par exemple, de nos jours, on ne peut raisonnablement plus écrire : « Nous nous mîmes à genoux et nous priâmes, nous partîmes et nous arrivâmes » (Actes des Apôtres, 21 :5-8). Les Sociétés bibliques doivent donc périodiquement actualiser leurs traductions pour préserver la lisibilité des contenus. Ces dernières années, elles ont publié 13 révisions de la Bible intégrale et du Nouveau Testament dans des langues majeures, notamment une Bible pour les 78,6 millions d’Indiens parlant le tamoul.
Transformation digitale
En 2024, date du dernier rapport annuel de l’Alliance biblique universelle (ABU), 74 premières traductions des Écritures, soit 16 Bibles intégrales et 16 Nouveaux Testaments, ont été publiées. Un nouveau cap a ainsi été franchi : 100 millions de personnes ont reçu pour la première fois au moins une partie des Écritures dans leur langue. Le Nouveau Testament, par exemple, est maintenant disponible dans plus de 1 700 langues.
Pour accélérer encore la cadence, les grandes sociétés bibliques ont effectué leur transition numérique dans les années 2000. Selon l’ABU, la création de la bibliothèque biblique numérique (DBL), en 2011, a généré « une dynamique sans précédent ». Actuellement, plus de 2000 textes bibliques dans 2250 langues différentes et quelque 13 000 vidéos dans 42 langues des signes sont archivés dans cette bibliothèque. Beaucoup d’entre elles sont disponibles gratuitement via l’application YouVersion.
Bibles adaptées
Quelque 370 nouveaux textes dans 191 nouvelles langues y ont été ajoutés en 2024, portant le nombre total de textes disponibles à plus de 3 780. En même temps, les efforts pour préserver la diversité linguistique se sont poursuivis. Ainsi, un Nouveau Testament a été publié pour la première fois dans une dizaine de langues minoritaires comme le same du Sud, avec 600 locuteurs dans les régions septentrionales de la Norvège et de la Suède, et l’enggano, parlé par moins de 900 individus en Indonésie.
Les personnes qui ont des difficultés de lecture ne sont pas oubliées. Par exemple, la Société biblique espagnole a publié en 2024 une Bible en langage simplifié, avec un vocabulaire réduit le vocabulaire à moins de 4 000 mots – soit nettement moins que les 16 000 à 18 000 mots contenus dans les versions traditionnelles. Ce projet a obtenu le label « Easy-to-Read » de l’organisation Inclusion Europe.
Braille et langues des signes
La traduction des Écritures pour les personnes sourdes et malentendantes est une tâche autrement plus complexe, car des notions comme la miséricorde n’ont pas d’équivalents dans les langues des signes, ce qui oblige les traducteurs à faire preuve de créativité. Parmi les derniers projets menés à terme : la traduction d’une partie du livre de Josué et de la première épître aux Corinthiens dans une dizaine de langues des signes différentes.
Une quantité de textes en braille et de fichiers audio ont également été déposés dans la DBL ces dernières années. En 2024, ce sont notamment de nouvelles traductions dans des langues indiennes en braille – dont l’ao naga, le saurashtra, le rabha, le biate et le mara – qui ont été rendues accessibles.
Un travail sans fin
Si l’objectif des Sociétés bibliques s’est clairement rapproché depuis quelques années, beaucoup de travail reste à faire. Environ 1,5 milliard de personnes dans le monde n’ont toujours pas accès à une Bible complète dans leur langue, et 129 millions ne disposent purement et simplement d’aucune traduction, même partielle.
Parmi les langues non traduites, beaucoup sont parlées par des groupes de locuteurs peu nombreux, de sorte qu’elles sont menacées d’extinction. « Le travail de traduction de la Bible ne sera jamais vraiment terminé, car les langues et la société évoluent constamment ; les révisions seront donc toujours nécessaires », conclut Judith Sawers, responsable communication pour la région Centrafrique au sein de l’organisation SIL."
https://www.reformes.ch/eglises/2026/02/le-nombre-de-traductions-de-la-bible-explose-bible-traduction
#Metaglossia
#metaglossia_mundus
#métaglossie
A book exhibition was held yesterday, Tuesday, February 2, 2026, at the headquarters of the Association for Research and Education for Development (ARED) in Dakar. The gathering is part of the run-up to the 2026 edition of the Sheikh Hamad bin Khalifa Al Thani Award for Translation and International Understanding, a prestigious international distinction organized by the Qatari foundation of the same name.
"The works on display testify to a production the organizers describe as "very rich". They include translations from Pulaar into Arabic and vice versa, as well as scientific and technical works. Emphasis, however, was placed on translations from English into Pulaar and from Pulaar into English, a particularly strategic axis for the upcoming international competition. The final deliberation is expected toward the end of March.
Taking the floor, the director general of ARED, Mamadou Amadou Ly, explained the purpose of the gathering. According to him, the exhibition aimed to bring together researchers, writers, and publishers working in the Pulaar language, in particular those engaged in translation between Pulaar and other languages, especially Arabic and English.
"Pulaar has been selected this year as an international language and must compete on the same footing as languages such as English. It was important to show, ahead of the competition, the richness of our production," he stressed.
Founded in 1991, ARED is a non-governmental organization specializing in education. It works to improve the quality of teaching through the use of national languages, relying on training, publishing, action research, and pedagogical innovation. This exhibition is thus part of its ongoing mission to promote African languages as tools of knowledge and development." https://fr.allafrica.com/stories/202602030469.html #Metaglossia #metaglossia_mundus #métaglossie
"Gros plan sur ToumAI une start-up marocaine qui travaille à intégrer les langues et dialectes africains dans les usages que l’on fait de l’intelligence artificielle. L'entreprise, qui compte une quinzaine d'employés, propose une application permettant de mieux communiquer entre clients et plateforme téléphonique de service.
Ce n'est pas toujours évident lorsque l'on parle hausa, bambara ou le darija d'utiliser des plateformes de service téléphonique pour gérer son compte bancaire, réserver un hôtel ou bien régler ses problèmes de connexion internet. Les serveurs téléphoniques ou les plateformes d'appel ne parlent pas toutes ces langues, loin s'en faut. Aussi, la start-up ToumAI a-t-elle mis au point une application qui permet, avec l'intelligence artificielle, de répondre aux demandes des clients dans différentes langues africaines.
Youcef Rahmani, l'un des trois fondateurs de ToumAI et de l'application HolistiCX : « On a, à peu près, huit langues aujourd'hui qui sont vraiment prêtes en production et déployées d'ailleurs sur des applications. Donc vraiment, notre objectif, c'est finalement de couvrir les langues les plus parlées. Dans un premier temps, le swahili, le lingala en Afrique de l'Est et en Afrique du Sud, le zoulou, le xhosa. il y a plusieurs langues, souvent mal considérées par les logiciels et l' IA et que nous, on souhaite couvrir. »
« L'IA ne comprend pas votre accent » Une intelligence artificielle qui s'adapte à l'oral, aux vocables africains, mais également aux intonations, si importantes, à la bonne compréhension d'une langue afin de mieux répondre aux besoins de celui qui l'exprime. Un service qui comble, pour toute une clientèle, un risque de fracture numérique et linguistique, souligne Youcef Rahmani
« En fait, c'est exactement ça ! ToumAI est né d'une de cette frustration. C'est que pendant longtemps, l'intelligence artificielle a été développée pour quelques langues et quelques cultures. Beaucoup de personnes se sont retrouvées exclues sans même que l'on s'en aperçoive. Au début, on ne s'en rendait pas compte parce que l'IA était reléguée à des rôles un peu obscurs pour le commun des mortels. Mais, lorsque Chatgpt est sorti, je pense que c'est là où les gens se sont rendus compte de la puissance de l'IA. Et en fait, pendant très longtemps, Chatgpt ne parlait pas vraiment d'autres langues à part les langues principales. Quand une IA ne comprend pas votre accent ou votre façon de parler ou votre langue, ce n'est pas juste un problème technique, ça devient un problème d'accès aux services, parce que l'on va avoir des logiciels qui intègrent de l'IA un peu partout, mais sans justement prendre en compte les différences culturelles et linguistiques. Nous avons voulu partir de la voix parce que la voix c'est l'univers où tout le monde parle. Et même quand on ne lit pas ou même lorsque l'on ne maîtrise pas le digital, on peut s'exprimer et obtenir des choses en parlant.»
HolistiCX a séduit des entreprises bancaires comme le marocain Attijariwafa Bank, des opérateurs téléphoniques comme Orange ou Inoui, et certaines entreprises immobilières comme Héritage. Les clients de ces entreprises ne se sentent plus exclus des services à cause d'une inadaptation technologique à la langue. Une solution pertinente qu'a accompagné le fonds de soutien Digital Africa et Malek Lagha, cheffe de projet.
« C'est très pertinent parce qu’ils ont besoin de collecter de la donnée et qu’aujourd’hui, les logiciels en IA n'arrivent pas à traduire des langues africaines. C'est un marché qui a beaucoup de potentiel. C'est de la donnée très précieuse et qui compte également pour beaucoup d'entreprises dans le monde. Aujourd'hui, ToumAI a conçu une IA pensée dès le départ pour des contextes complexes, capable de comprendre non seulement les mots, mais les intonations. Je pense que c'est pour ça que c'est pertinent pour des entreprises internationales. »
Pour le moment, la solution proposée par ToumAI concerne huit langues africaines, mais le projet futur est d'adapter une quarantaine d'autres langues à des solutions IA." https://www.rfi.fr/fr/podcasts/l-afrique-en-marche/20260203-toumai-rend-les-langues-africaines-intelligibles-pour-l-intelligence-artificielle #Metaglossia #metaglossia_mundus #métaglossie
"In undergraduate Classics, translation is an unforgiving exercise, demanding almost mathematical precision. I’ve spent excruciating hours poring over lexicons and grammar books, only to face reproof for neglecting the odd particle. When so many English versions of ancient texts already exist, not to mention digital translation resources, it’s easy to question why we bother.
Yet translation should be more than mechanic substitution. It demands that the translator acts as a conduit, conveying the intricacies of emotion, style, and intention, while negotiating the hurdles of linguistic complexity. It involves a degree of compromise, balancing fidelity to the original with creative interpretation. When a piece of literature is transposed into the idiom of a new age, a new culture, each adaptation becomes a radical re-reading, not a straightforward reproduction. Rather than representing the work as a historical artefact, mute and moribund on the page, the process of translation can shore up unmined meanings. In ancient languages, with a comparatively restricted vocabulary, each word is capable of being expressed in English in multiple ways, giving rise to vastly divergent interpretations. Word choice becomes a declaration of intent. As the translator Emily Wilson points out, the Odyssey’s opening line, which Fagles translates as “Sing to me of the man, Muse, the man of twists and turns”, could equally be rendered as “Tell me about a straying husband”, a very different framework for the same Greek words.
Things inevitably slip through the cracks; wordplay in particular demands more than a literal translation. For instance, Wilde’s The Importance of Being Earnest in French translation frequently becomes L’importance d’être Constant, replicating the pun by renaming the protagonist, yet losing out on the connotations of deceptiveness. Moreover, there are concepts so tethered to their specific language that they defy straightforward translation. How far the unfamiliar should be domesticated is a consequential choice – is it better to retain culturally specific allusions, or facilitate understanding through parallels or explanations? English translations of Elena Ferrante’s Neapolitan Quartet embed words of dialect, a deliberate choice to ensure that the work remains firmly rooted in its original context, with its particular local colour. The rhythms of each language, which determine much of literature’s emotional impact, are likewise impossible to reproduce exactly. Pushkin’s Eugene Onegin, for example, is widely regarded as untranslatable, owing to the intricacy of its rhyme scheme, and the unique musicality of the Russian. The best that can be achieved is adaptation: Deborah Smith in her translations of Han Kang – The Vegetarian (2015) and Human Acts (2016) – attempts to emulate the cadence of the Korean, its repetitions and underspecifications, resulting in a stark prose that enhances the tragedy.
The insistence on preserving the original essentially untampered with is futile; excessive hand-wringing over what is being lost in the process can only stunt creativity. Translation is, in a sense, a work of realignment – nothing can remain fixed. Since utter fidelity to the original source is impossible, the objective should be to create something that works in one’s own language, a discrete piece of art, so that the translator is effectively another writer of the same book. The boundaries of language are always permeable – a good translator is an unscrupulous gerrymanderer.
After all, translation is an inherently malleable concept, and does not necessarily signify replication of the source material. Language is not exclusively about designation, but the meanings hovering between statements, the conveyance of a mood, a perspective, an intention. There is no need, then, for translation to adhere to semantic, generic, or even formal boundaries. In this expansive spirit, Louis and Celia Zukofsky wrote homophonic translations of the poet Catullus, rendering not only the meaning but also the actual sounds of the Latin into English (miser Catulle becomes “Miss her, Catullus?”). Anne Carson went even further in her ‘translation’ of Catullus’ poem 101, an elegy for the death of his brother; Carson’s version constitutes a single long sheet of paper folded concertina style into a box entitled Nox, an epitaphic reflection on her own brother’s passing. How far then can we push the definition of translation? What’s to stop any response to a literary work being considered a translation – is Petersen’s Troy (2004), for example, a translation of the Iliad (despite it being a terrible film)?
The politics of translation are similarly complex. To translate a literary work into another language is, in a sense, to appropriate it from its original context for the enjoyment of another set of people. Taken further, a French translation of, say, an Arabic text could be viewed as an implicitly colonial act, while the ubiquity of English translations raises the spectre of global monolingualism. But surely this kind of engagement can be part of a dialogue, not an act of imperialistic plunder? Accessibility is the most fundamental objective of translation; widening the reach of a literary work is a conservationist practice, sustaining and invigorating its author’s voice, rather than an attenuation of its power. It is not sufficient for a translator to be merely a linguistic intermediary; the practice demands cultural proficiency and a profound understanding of the recipient language. The art of translation is one of bridging cultural divides, so that literature may resonate with readers worldwide. Such interaction eases the discomfort of translingual encounters and fosters cross-cultural understanding.
It is this notion of a participatory culture via translation that enriches the literary tradition – Goethe wrote that “every literature grows bored if it is not refreshed by foreign participation”. Translation does more than keep the original alive (although sometimes I wish we’d just let Latin die); it also vivifies the recipient language, traversing linguistic boundaries to provide access to unfamiliar cultures, concepts, and perspectives. The translator is literary critic, co-author, cultural ambassador, and, most importantly, close reader, engaging in a fundamentally creative practice. So perhaps it’s misguided to ask what gets lost in translation. The more pertinent question is what may be found."
Beatrix Arnold
1st February 2026
https://cherwell.org/2026/02/01/the-art-of-translation/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Investment in interpreter readiness supports better patient experience and outcomes, patient safety, and equitable care delivery at scale.
SUNRISE, FL, UNITED STATES, February 2, 2026 /EINPresswire.com/ -- Equiti announced today that its Martti solution’s 18,000+ qualified interpreters completed over 800,000 hours of specialized healthcare training in 2025, reinforcing the company’s commitment to reliable, high-quality language access for health systems nationwide.
This milestone reflects a year of extraordinary investment in interpreter preparedness as Martti supported millions of clinical encounters across over 320 spoken languages and dialects with an average connect time of just 19.4 seconds.
This investment is reflected in Martti’s interpreter training standards: each interpreter completes 120 hours of healthcare-specific training, triple the industry standard.
“Interpreter training is foundational to safe, effective communication in healthcare,” said Kerry Moreno, Vice President of Operations and Language Services at Equiti. “Completing more than 800,000 hours of training in a single year represents our commitment to ensuring interpreters are both fluent in their languages and fully equipped for the intensive clinical, cultural, and compliance demands of healthcare environments.”
Martti’s interpreter training programs are designed specifically for healthcare delivery, emphasizing clinical terminology, care workflows, patient privacy, cultural sensitivity, and regulatory compliance.
In 2025, Martti set a new standard by increasing its already stringent training requirements by 50% – from 80 to 120 hours – three times the industry standard of 40 hours. These expanded training requirements help ensure interpreters can support complex clinical communication across departments, specialties, and patient populations.
This investment directly supports health system priorities, including:
- Medically qualified, highly trained healthcare interpreters equipped to support the clinical and cultural needs of patients
- Faster access to interpreters, reducing delays at triage, treatment, and discharge
- Interpretation that delivers a better patient experience, improved patient safety, and reduced risk
- Lower administrative burden for clinical teams working with interpreters trained in healthcare-specific workflows
- More consistent compliance, helping organizations remain audit-ready without added operational strain
The 800,000-hour training milestone was achieved following Equiti’s launch of the unified Martti platform, bringing together the best capabilities from Voyce and Martti to support healthcare organizations at scale. By aligning interpreter teams and standardizing operations, Equiti reinforced Martti as the trusted, healthcare-focused platform delivering high-quality interpretation, industry-leading language coverage, and fast connection times to providers and patients.
“Equal access to care starts with communication,” Moreno added. “By investing deeply in interpreter training, we’re helping health systems embed health equity into daily operations, reinforcing a dependable standard of care.”
To learn more about Martti’s interpreter training programs and language access solutions, visit www.martti.io." https://www.einpresswire.com/article/886037036/equiti-s-martti-interpreters-complete-more-than-800-000-hours-of-specialized-healthcare-training-in-2025 #Metaglossia #metaglossia_mundus #métaglossie
"Google has collaborated with African universities and research institutions to launch WAXAL, an open-source speech database designed to support the development of voice-based artificial intelligence for African languages.
African institutions, including Makerere University in Uganda, the University of Ghana, Digital Umuganda in Rwanda, and the African Institute for Mathematical Sciences (AIMS), participated in the data collection for this initiative. The dataset provides foundational data for 21 Sub-Saharan African languages, including Hausa, Luganda, Yoruba, and Acholi.
WAXAL is designed to support the development of speech recognition systems, voice assistants, text-to-speech tools, and other voice-enabled applications across sectors such as education, healthcare, agriculture, and public services.
“This dataset provides the critical foundation for students, researchers, and entrepreneurs to build technology on their own terms, in their own languages,” said Aisha Walcott-Bryantt, Head of Google Research Africa.
WAXAL’s launch comes amid growing efforts across Africa to develop language technologies that reflect local cultures and realities.
In September 2025, the Nigerian government unveiled N-ATLAS, an open-source language model capable of recognising and transcribing spoken words and generating text, in Yoruba, Hausa, Igbo, and Nigerian-accented English.
Similar initiatives are emerging in the private sector, where startups such as South Africa’s Lelapa AI are building tools like Vulavula, which offers speech recognition, translation, and sentiment analysis.
By making this speech dataset openly accessible, WAXAL provides the fuel for a growing wave of homegrown efforts to bring African languages into the digital age.
Although Sub-Saharan Africa is home to more than 2,000 languages, reports suggest that fewer than 5% of those languages have the resources needed for Natural Language Processing (NLP), which allows computers to understand and comprehend human language. This lack of representation in training datasets limits the effectiveness of speech recognition and text-to-speech systems for African users.
Developed over three years with funding and technical support from Google, WAXAL addresses a major gap in global AI development.
WAXAL provides speech data for 21 Sub-Saharan African languages, including Fulani (Fula), Hausa, Igbo, Ikposo (Kposo), Swahili, and Yoruba. The dataset contains more than 11,000 hours of speech drawn from nearly two million individual recordings.
Under the project’s partnership model, contributing institutions retain ownership of the data they collected, while making it openly available to researchers and developers worldwide.
“For AI to have a real impact in Africa, it must speak our languages and understand our contexts,” Joyce Nakatumba-Nabende, Senior Lecturer at Makerere University’s School of Computing and Information Technology, said.
“The WAXAL dataset gives our researchers the high-quality data they need to build speech technologies that reflect our unique communities.”" Opeyemi Kareem 2nd Feb, 2026 https://techcabal.com/2026/02/02/google-joins-push-to-localise-ai-for-african-languages-with-speech-database/ #Metaglossia #metaglossia_mundus #métaglossie
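For readers who want to experiment with an open speech corpus like the one described above, the work typically starts with loading the audio and its transcriptions. The minimal Python sketch below assumes the data is exposed as a Hugging Face-style audio dataset; the dataset ID "google/waxal-hausa" and the column names "audio" and "transcription" are illustrative placeholders, not identifiers confirmed by the WAXAL release.

```python
# Minimal sketch: loading an open speech corpus for ASR experiments.
# The dataset ID and column names are illustrative assumptions, not the
# official WAXAL identifiers; substitute whatever the actual release documents.
from datasets import Audio, load_dataset


def load_speech_corpus(dataset_id: str = "google/waxal-hausa"):  # hypothetical ID
    ds = load_dataset(dataset_id, split="train")
    # Decode audio to 16 kHz, the sampling rate most ASR models expect.
    return ds.cast_column("audio", Audio(sampling_rate=16_000))


if __name__ == "__main__":
    corpus = load_speech_corpus()
    example = corpus[0]
    print(example["audio"]["array"].shape)  # raw waveform samples
    print(example.get("transcription"))     # reference text, if the corpus provides it
```

From there, a typical next step is fine-tuning or evaluating an existing multilingual ASR model on the loaded examples; the exact recipe depends on how the released corpus is actually structured.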
"Écritures et mémoires : la littérature traduite comme archive transnationale de l'histoire
Numéro thématique de la Revue internationale de traduction moderne
Coordination : Ramila Demane Debbih (Université Constantine 1, Algérie) et Jean-Pierre Castellani (Université de Tours, France)
Date limite de soumission : 30 juin 2026
Publication prévue : décembre 2026
Langues acceptées : français, anglais, arabe, espagnol
—
How does translated literature produce historical knowledge? This question, at the crossroads of translation studies, literary studies, and historiography, remains insufficiently theorized despite its contemporary urgency. Many readers discover and understand the history of distant countries or events more through translated fiction than through academic work alone. A contemporary Algerian reader who did not live through the war of independence gains access to the sensory texture of that period by reading Mammeri, Dib, Feraoun, or Kateb Yacine. Likewise, a Maghrebi reader who reads translated Ukrainian novels about the current war (Oksana Zaboujko, Serhiy Jadan) gains an intimate understanding of the conflict that journalistic reporting cannot offer.
This thematic issue aims to interrogate head-on the epistemological status of translated literature as a source and mode of production of historical knowledge. It starts from the premise that translating literary works with historical content is not a mere linguistic transfer but a hermeneutic act of interpretation that reconfigures knowledge of the past for new readerships.
—
Rationale
Translated literature does not merely recount history: it produces a specific form of historical knowledge that the classical discourses of historiography (institutional archives, academic monographs, school textbooks) cannot generate. Through the historical novel, literarized testimony, memorial autofiction, or narrative poetry, translated works give access to dimensions of historical experience (affective, bodily, subjective, polyphonic, contradictory) that are often absent from official accounts.
Fiction as a producer of historical knowledge
Fiction is not a transparent "vehicle" that conveys historical information merely as ornament. It constitutes a form of thought that articulates the relations between event, subjectivity, temporality, and truth in a different way. Where academic historiography favors critical distance and documentary objectification, literature works through narrative embodiment and the singularization of voices. It gives access to what might be called an embodied truth of history: not the factual truth of dates and data, but the phenomenological truth of the lived experience of the past.
Thus, Primo Levi's If This Is a Man, Scholastique Mukasonga's Inyenzi ou les Cafards (Cockroaches), Ahmadou Kourouma's Allah Is Not Obliged, and Roberto Bolaño's By Night in Chile become, in translation, historical references that are sometimes mobilized more than academic work on the same periods.
Translation as hermeneutic reconfiguration
Translation does not transmit this knowledge neutrally. The translator makes choices (lexical, syntactic, paratextual) that transform the accessibility, intelligibility, and emotional resonance of the source text. Translating a historical novel means constantly negotiating with cultural and temporal otherness: should opaque historical references be made explicit? Should untranslatable concepts be naturalized? Should linguistic strangeness be preserved in order to safeguard historical otherness?
The paratext (translator's notes, prefaces, glossaries) then becomes an essential pedagogical device that turns the translation into a didactic mediation. The translator thus becomes a kind of parallel historian who accompanies the literary text with an informal critical apparatus.
Transnational archives and the geopolitics of memory
By circulating from one linguistic area to another, translated literature creates transnational archives of plural memories. These archives are symbolic: they consist in the gradual sedimentation, in the collective imagination of a given linguistic community, of historical narratives that come from elsewhere.
However, this circulation is deeply asymmetrical. Some histories travel massively (the European genocides, the world wars), while others remain linguistically confined (African colonial wars, post-Soviet conflicts). Pivot languages (French, English, Spanish) play a central role: a work translated into English acquires worldwide visibility, whereas a work available only in Arabic, Swahili, or Portuguese remains largely inaccessible.
This asymmetry raises an ethical and political question: what is the responsibility of translators, publishers, and academic institutions in the equitable circulation of historical memories? Can a form of "translational justice" be conceived? How can South-South translations be encouraged?
Thematic strands
Contributions may fall within one of the following strands (non-exhaustive list):
Strand 1: Epistemology of translated historical fiction
The historical novel as a source of knowledge: what truths does fiction convey?
The status of translated literarized testimony: document or creation?
The legitimacy of narrative knowledge vis-à-vis historiographical knowledge
Translation as a hermeneutic interpretation of the historical event
Narrative temporalities and memory in translation
Strand 2: Translation as the construction of transnational archives
When translated fiction becomes a historical reference for foreign readers
Memorial patrimonialization and canonization through translation
"Minor" literatures and the invisibilization of certain histories
Retranslation and the updating of historical knowledge
Comparative reception studies
Strand 3: Translation strategies and the mediation of historical knowledge
The paratext as a pedagogical device (notes, prefaces, glossaries)
Translating historical and cultural references: explicitation, naturalization, or preserving strangeness?
The untranslatability of historical concepts and lexical creation
The translator as cultural and historical mediator
The ethics of translation: how far should one contextualize?
Strand 4: Geopolitics and ideologies of historical literary translation
Asymmetries in the circulation of memories: which histories travel? (South-South, South-North, North-South)
Why are some histories over-translated and others under-translated?
The role of publishers, series, and institutions in the selection of works
Censorship, self-censorship, and manipulation in translation
Can a form of "translational justice" be envisaged in matters of historical memory?
Translation and the decolonization of knowledge
Corpora and approaches
Contributions may address any translated literary corpus with a historical dimension (novel, short story, autofiction, literarized testimony, narrative poetry), without geographical or temporal restriction. Comparative approaches, case studies, and reception analyses are particularly encouraged.
Examples of possible corpora (non-exhaustive):
Maghrebi literatures in translation (the war of independence, the Black Decade, migrations)
Literatures of genocide and armed conflict in translation
Postcolonial novels and their transnational circulation
Literatures of dictatorship (Latin America, Eastern Europe, the Arab world)
Contemporary works on current conflicts (Ukraine, Syria, Palestine, etc.).
—
Submission guidelines
Accepted languages: French, English, Arabic, Spanish
Article format:
Length: 25,000 to 40,000 characters (spaces included), notes and bibliography included
Format: Times New Roman 12, 1.5 line spacing
Referencing style: APA 7th edition or MLA (author's choice, applied consistently)
Articles must be accompanied by an abstract in French and in English (150-200 words each) + 5 keywords in each language
Bio-bibliographical note on the author (100 words maximum)
Procedure:
Double-blind peer review by international reviewers
Notification of acceptance/revision: 8 weeks after submission
Timeline
Circulation of the call: February 1, 2026
Submission deadline: June 30, 2026
Notification to authors: September 15, 2026
Receipt of final versions: October 31, 2026
Publication: December 2026.
—
Indicative bibliography
Berman, Antoine (1984). L'épreuve de l'étranger : Culture et traduction dans l'Allemagne romantique. Paris : Gallimard.
Berman, Antoine (1999). La traduction et la lettre, ou l'auberge du lointain. Paris : Seuil.
Casanova, Pascale (1999). La République mondiale des lettres. Paris : Seuil.
Casanova, Pascale (2015). La langue mondiale : Traduction et domination. Paris : Seuil.
De Certeau, Michel (1975). L'écriture de l'histoire. Paris : Gallimard.
Lavocat, Françoise (2016). Fait et fiction : Pour une frontière. Paris : Seuil.
Lukács, Georg (1965). Le roman historique. Paris : Payot.
Meschonnic, Henri (1999). Poétique du traduire. Lagrasse : Verdier.
Nora, Pierre (dir.) (1984-1992). Les lieux de mémoire, 3 volumes. Paris : Gallimard.
Ricœur, Paul (2000). La mémoire, l'histoire, l'oubli. Paris : Seuil.
Sapiro, Gisèle (dir.) (2008). Translatio : Le marché de la traduction en France à l'heure de la mondialisation. Paris : CNRS Éditions.
Venuti, Lawrence (2008). The Translator's Invisibility: A History of Translation, 2e édition. London/New York : Routledge.
White, Hayden (2017). L'histoire s'écrit : Essais, recensions, interviews. Paris : Éditions de la Sorbonne.
Scientific coordination
Ramila Demane Debbih
Associate Professor of French Literature
Université Constantine 1, Frères Mentouri
Laboratoire Langues et Traduction
Algeria
Jean-Pierre Castellani
Professor Emeritus
Université de Tours
France
Contact and submission:
Email: dds.ramila@gmail.com / jeanpierrecastellani@hotmail.com
Submission platform: https://asjp.cerist.dz/en/PresentationRevue/571
Note to contributors:
This bibliography is indicative, not exhaustive. Authors are encouraged to draw on any other references relevant to their specific topic."
https://www.fabula.org/actualites/132493/ecritures-et-memoires-la-litterature-traduite-comme-archive-transnationale-de-l-histoire.html
#Metaglossia
#metaglossia_mundus
#métaglossie
""...This innovative product empowers developers to integrate real-time voice transcription and translation capabilities into their applications, significantly enhancing multilingual support for businesses.
The DeepL Voice API allows businesses to stream audio and receive transcriptions in the source language, along with translations into up to five target languages. The API provides a seamless experience for users, ensuring that language barriers do not hinder effective communication.
DeepL Voice API will be widely available for customers with spoken communication at their core, with contact centres and business process outsourcing (BPO) providers being the earliest adopters of this solution.
Transforming Multilingual Support
The DeepL Voice API turns language support from a staffing problem many contact centers face, into an easy-to-use solution that fits well with current systems. By adding real-time transcription and translation to how agents work, supervisors can handle issues better, and agents can assist customers in different languages without needing to pass them on to a colleague or revert to written communication to allow for translation.
On the operational side, the Voice API provides clear transcripts and translations that help with quality checks and training of customer service teams. This allows for quicker reviews, fairer evaluations across different locations, and clearer feedback on agent performance and gaps in knowledge. By minimizing issues caused by language barriers, like longer calls, repeated contacts, and expensive misunderstandings, the DeepL Voice API changes the overall experience for the end user. ...
Real-time translation helps teams maintain service levels during nights, weekends, and holidays, when fewer specialized language agents are available.
Two-way understanding, not just text on screen
Agents can follow the conversation through live translated audio, alongside on-screen transcription and translation, so they can respond naturally and confidently in the moment...
The launch also includes a six week early access program for voice-to-voice capabilities, set to run from mid-February. This feature will allow agents to hear translated audio while communicating with customers in their preferred languages in real-time, further streamlining the customer experience.
Availability
The DeepL Voice API is available to all DeepL API Pro customers starting February 2. Interested businesses can get started by accessing the DeepL API documentation or contacting sales for Voice API access.
For more information on DeepL Voice API, including supported languages, please visit the website.
SOURCE DeepL" News provided by DeepL Feb 02, 2026, 06:00 ET https://lnkd.in/eFZxBCRm #Metaglossia #metaglossia_mundus #métaglossie
"AI Translation Triumphs Over Human Translators in Korean Literary Contest 12 of 16 English professors preferred ChatGPT's version of Joseon-era poem, sparking debate on AI's role in cultural translation
By Park Jin-seong
Published 2026.02.02. 00:57, updated 2026.02.02. 13:59
Artificial intelligence (AI) and humans have engaged in a competition to translate Korean literary works into English. Who emerged victorious?
Recently, the Literature Translation Institute of Korea under the Ministry of Culture, Sports and Tourism conducted a blind test involving 16 domestic English literature professors. The test compared an English version translated by a professional translator and one translated by ChatGPT for the Joseon-era poet Jang Yu’s poem “Be Cautious When Alone (Shindokjam),” which is set to be exported to English-speaking regions. Without revealing which translation was done by whom, the professors were shown the original Korean text and the two translations and asked which was better. The results showed that 12 professors chose the ChatGPT translation, two selected the human translation, and two declared “undecidable.”
Graphics by Baek Hyeong-seon
◇AI Translation Wins… “Exceeding the Threshold of Language Learning”
Professors who favored the AI translation praised ChatGPT’s deep understanding of Korean history and culture, as well as its effective preservation of the original’s rhythm and style. For example, in the passage “Above, the sky / Below, the earth / If you think I don’t know what I’ve done / Who are you trying to deceive?”, the human translator rendered “sky” as “Sky,” while ChatGPT translated it as “Heaven.” Considering the author was a Confucian scholar, “Heaven,” which incorporates the concept of a deity, was deemed more appropriate than the physical space “Sky.” Other evaluations included, “The parallelism of the original text was well-expressed in English literary terms” and “The concise word count preserves the original’s essence.”
Professors who preferred the human translation noted that it had “fewer non-grammatical sentences” and “a more natural title translation.” A professor who declared the translations undecidable stated, “The difference was not significant enough to insist on which translation is better if one of them is AI-generated,” adding, “It should be noted that AI has advanced to the point where it is difficult to distinguish from human translation.”
Experts assess that large language models (LLMs) like ChatGPT, designed to understand context well, inherently excel in translation and have reached a mature stage due to accumulated learning. Choi Byung-ho, a research professor at Korea University’s Human-Inspired AI Research Center, stated, “At least in Korean-to-English translation, it has reached a level where it can replace human translators,” adding, “It should be considered that all publicly available web data has already been learned, and the volume of learning has surpassed the threshold.”
The test was conducted by the office of Democratic Party of Korea lawmaker Min Hyung-bae, a member of the National Assembly’s Culture, Sports, and Tourism Committee. Min stated, “AI is already an irreversible reality,” adding, “While utilizing AI’s efficiency, it is time to contemplate the cultural context and ethics unique to humans.”
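The blind-test protocol described above is straightforward to reproduce for other text pairs. The sketch below is an illustration of the general method only, with made-up raters and texts rather than the Literature Translation Institute's actual procedure: present the two candidate translations in random order, record each rater's preference, and tally the results.

```python
# Illustrative sketch of a blind A/B preference test between two translations.
# Raters and texts here are dummies; only the shape of the protocol matters.
import random
from collections import Counter


def run_blind_test(translation_a: str, translation_b: str, raters) -> Counter:
    """Each rater is a callable taking (first_text, second_text) and
    returning 'first', 'second', or 'undecided'."""
    tally = Counter()
    for rate in raters:
        pair = [("A", translation_a), ("B", translation_b)]
        random.shuffle(pair)  # hide which system produced which text
        choice = rate(pair[0][1], pair[1][1])
        if choice == "undecided":
            tally["undecided"] += 1
        elif choice == "first":
            tally[pair[0][0]] += 1
        else:
            tally[pair[1][0]] += 1
    return tally


if __name__ == "__main__":
    # Dummy panel of 16 raters who always prefer whichever text is shown first.
    panel = [lambda a, b: "first"] * 16
    print(run_blind_test("human version ...", "machine version ...", panel))
```

Randomizing presentation order for every rater is what keeps the comparison blind, since neither position nor labeling reveals which version was machine-generated.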
◇The Stronghold of ‘World Literature Collections’ Cracks… Absurd Mistranslations Like “Kingbatne” Also Emerge
As AI’s translation capabilities have improved, publishers outside what was previously the stronghold of major houses like Minumsa and Munhakdongne are now moving in to produce “world literature collections.” These publishers are using AI to translate works by masters whose copyright protection periods (70 years after the author’s death) have expired, without incurring translation costs.
Translation examples in the Odyssey /Courtesy of Park Jin-seong
A publisher that has been releasing science and technology academic books for over 30 years recently became a topic of discussion in the publishing industry. From October of last year, it released 12 books, including “The Little Prince” and “The Metamorphosis,” in just three months. This publisher, without professional translators, used Gemini for translation and had human editors review the content. The issue arose with the translation of Homer’s epic “The Odyssey,” which included, “Futile conversation is useless. Alppano? (None of my business),” a translation that sparked controversy. Newly coined terms like “Kingbatne! (I’m pissed!)” and “Seubuljae (self-inflicted disaster)” also appeared in the classical work’s translation. The publisher’s representative explained, “We couldn’t afford to produce books if we had to pay translation fees, so we used AI translation. We intentionally left the newly coined terms as they are, thinking they might be necessary for fun intergenerational communication.”
◇“Human Translators Will Utilize AI”
Overseas, business models where humans and AI collaborate have emerged. The UK-based GlobeScribe started a service last summer offering to translate a book for 100 dollars (approximately 145,000 Korean won). Considering that the minimum translation fee for a novel of around 1,000 manuscript pages in South Korea ranges from 3 million to 4 million Korean won, this is extremely affordable. The publisher uses AI to translate most of the text, with human translators refining parts with high literary value or complexity. UK translators are protesting. Ian Giles, chairman of the UK Society of Authors' (SoA) Translators Association, stated in an interview with The Guardian, “It is completely wrong to claim that AI can match or even surpass the delicate work of human translators by replacing authors.”
Domestic translator No Seung-young stated, “Given that AI learns and uses translation sentences refined by humans, most existing translated works will likely be sufficient with AI in the near future,” adding, “However, the importance of proofreading to prevent poor translations from AI has increased.”"
https://www.chosun.com/english/travel-food-en/2026/02/02/TTXCFMS2MJEINIZANZ5WVZB54Q/
"...While political discourse dominates headlines,... something different taking place. “People are independent. They can think for themselves, and they can have their own opinions,” he said. “In these small but growing communities, this is kind of a global village.”
Over time, Jerusalang has become more than a social gathering; it has developed into a network. Through the events, people have found jobs, apartments, travel companions, and romantic partners. One couple, Oland recalled, made a pact to keep attending events together, no matter what the outcome of their relationship was.
Participants hail from all over – Iran, Turkey, China, Korea, and beyond. Conversations often turn to heritage and upbringing, uncovering unexpected stories and highlighting how varied people’s experiences are.
Running Jerusalang has also shaped Oland himself. Drawing on his background in languages and business, he has learned more about marketing, community management, and social media strategy. “I don’t think there was an event where I didn’t learn something new about someone or about something,” he said. Most of all, he has learned how to build and sustain a community – and what people are looking for when they come to his events.
Jerusalang has begun expanding beyond its original format, hosting various events such as a multilingual open mic night where participants sang and played music from their cultures, which included performances in Cantonese, Chinese, Italian, and Arabic. “Other cities deserve this community as well,” he said. He also believes it is especially important for Israelis to learn more Arabic. In a city often defined by divisions, Jerusalang provides a bridging space where interaction is not forced but is casual, human, and authentic."
By BATSHEVA SHULMAN
FEBRUARY 1, 2026 12:18
Updated: FEBRUARY 1, 2026 12:24
https://www.jpost.com/israel-news/article-884851
#Metaglossia : Gaza on my mind! Where are you?🤔🤔🤔😓😓😓😓
#metaglossia_mundus
#métaglossie
"Recently, it was announced that the Cambridge Dictionary has added around 6,000 new words, including Parasocial, Broligrachy, Delulu, Skibidi, Slop, Memify, Tradwife, Work-Wife, Work-Spouse, Gen Z, and Gen Alpha, among others. There has been heightened enthusiasm in sociological circles that the word “Parasocial” has been selected as Word of the Year 2025 by Cambridge. The term was first used by sociologists, Donald Horton and Richard Wohl, in 1956 to describe how viewers developed one-sided relationships with television personalities.
The factors that influence inclusion of a word in a dictionary include the popularity of the word in digital culture, new technological terminology, opening in new knowledge domains, changes in social setups, transformation of work relationships and even the pandemic. Here, the question arises, “Who decides what counts as a ‘real’ word to form part of a dictionary? How are the words shortlisted and selected?”
Colin McIntosh, Lexical Programme Manager, Cambridge Dictionary, clarifies, “It’s not every day you get to see words like Skibidi and Delulu make their way into the Dictionary. We only add words where we think they’ll have staying power. Internet culture is changing the English language, and the effect is fascinating to observe and capture in the Dictionary.” The words are included based on criteria broadly encompassing frequency of their use, consistency of their meanings, significant enough spread, and commonality of definitions across regions.
Though the selection criteria seem quite straightforward, they carry complexity from a sociological angle: how does one decode the “staying power” and “significant enough spread” of a word while shortlisting it for a dictionary, and how is neutrality in the selection process ascertained?
Qarshiboyeva, M. and Abduraxmanova (2024) have underlined the problems, complexities and challenges of English lexicography. As English is a popular language in both the West and the East, words are coined in abundance in both regions. However, there is a chance of bias in giving more space and accommodation to the words germinated in the West due to a colonial superiority syndrome. The bias of considering the Indian subcontinent as less important continuously haunts the professional ethics of lexicography.
Here, the model of Little Tradition and Great Tradition conceptualised by Robert Redfield proves handy and relevant. Redfield puts forward that the culture of folk themes, oral traditions, dialects, and local deities forms the Little Tradition. The culture professed by priests and the hierarchy of religious leaders, covering a legitimate form of all reflective, systematic and textually elaborated rituals/epics, may on the other hand be considered the Great Tradition.
Further, while shortlisting words, the possibility of analogically considering the West as the Great Tradition cannot be ruled out. Indian words like Jugaad took a comparatively long time to make their way into dictionaries, even when set against slurs and slang from Western English. The colonial supremacy syndrome definitely needs to be counterweighed.
The connotation of the same word may differ across regions, and there is a risk of overlooking definitions of the word from the “little traditions” or supposedly less significant regions. For example, the recently added word “work-wife” means “a woman with whom someone has a close, but not romantic, relationship at work, in which the two people help and trust each other in the same way that a married couple does.” If someone thinks of using this word in Indian work culture, even in a corporate office situated in India, it may be considered too offensive, as the word wife/spouse cannot be separated from a romantic relationship in the Indian context.
The digital space is the new catchment area for the selection of words. Though the digital space is rich from a lexicographical angle, there also exist algorithmic biases, which may creep into the selection of words that enjoy less acceptance among the general public but are algorithmically abundant. For example, the word “Skibidi” first appeared in videos on the YouTube platform as Skibidi toilet. The word, as accepted in the dictionary, means a word that can have different meanings, such as “cool” or “bad”, or can be used with no real meaning as a joke. TikTok and other platforms promoted its use as online slang, so it gained acceptance much more easily. Indian terms, by contrast, e.g. “Timepass” and “Dadagiri”, which were popular enough in the Indian subcontinent but had a smaller digital footprint, struggled for a comparatively longer period before finally finding a place in the Oxford dictionary, only in 2017..." Vijay Pal is an independent researcher with interests in the sociology of everyday life, education and gender issues." https://doingsociology.org/2026/02/02/lexicography-through-a-sociological-lens-vijay-pal/ #Metaglossia #metaglossia_mundus #métaglossie
Develop AI-powered education tools in local languages, starting with Twi, Ewe, Dagbani, and Hausa.
"Ghana, Google ink AI education deal
Jan 30, 2026, 2:23pm GMT+1
Ghana is partnering with Google to develop AI-powered education tools in local languages, starting with Twi, Ewe, Dagbani, and Hausa. The program will later be scaled up to cover all 12 approved Ghanaian languages.
Of the 7,000 languages used worldwide today, formal instruction is available in only 351, leaving many students with huge learning gaps. A shift to bilingual and multilingual education in Africa, which is home to around 2,000 languages, rather than teaching exclusively in the colonial language (usually English or French), has already underscored the benefits of children learning in their mother tongue, the UN said.
Ghana’s education minister emphasized the importance of including Hausa — spoken by around 22 million people across West Africa — in the AI initiative. “Language accessibility determines who can benefit from digital transformation,” TechAfrica News founder Akim Benamara wrote in a column last year.
— Paige Bruton"
Ghana, Google ink AI education deal | Semafor https://share.google/ZjUopsgMQ4Y7yVyWe
#Metaglossia
#metaglossia_mundus
#métaglossie