Scooped by
Charles Tiayon
Today, 12:16 AM
The bidirectional system recognises 250 ASL signs, translating signed input via webcam into spoken or written language, and converting spoken or typed words into sign language video. "Talksign, a Nigeria- and UK-based AI company, has launched Talksign-1, a sign language translation model that converts American Sign Language (ASL) into speech and text in under 100 milliseconds. The bidirectional system recognises 250 ASL signs, translating signed input via webcam into spoken or written language, and converting spoken or typed words into sign language video. The model was trained on the WLASL2000 dataset and achieves 84.7% accuracy on single-sign recognition. It analyses about one second of signing to balance speed and accuracy but does not yet support continuous sentence-level translation or fingerspelling, limiting use to isolated signs. Founded in November 2025 by Edidiong Ekong and Kazi Mahathir Rahman, Talksign aims to address accessibility gaps faced by the deaf and hard-of-hearing community, particularly as many digital tools still assume users can hear and speak. The technology is designed to enable more direct communication between deaf and hearing individuals without always relying on human interpreters. Potential applications include education, healthcare, workplaces, and public spaces such as transport systems, emergency alerts, and live broadcasts. The company worked with deaf educators, native ASL users, and accessibility advocates during development. For privacy, gesture analysis is performed in the user’s browser, with only processed data sent to servers. Talksign notes the tool should not be used as the sole authority in medical, legal, or safety-critical situations.
Currently limited to ASL, Talksign plans to expand support to other sign languages, increase vocabulary size, and add continuous signing and fingerspelling in future versions." The TechAfrica News Podcast https://techafricanews.com/2026/02/17/talksign-launches-ai-model-translating-asl-to-speech-in-under-100ms/ #Metaglossia #metaglossia_mundus #métaglossie
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"By Lance Eliot, Contributor. Dr. Lance B. Eliot is a world-renowned AI scientist and consultant.
Feb 22, 2026, 03:15am EST
Consider using AI to act as a logic-to-emotion translator when a touchy moment requires some help. (Image: Getty)
In today’s column, I examine the use of generative AI and large language models (LLMs) to aid in communicating with people who are predominantly emotionally based and not conventionally amenable to logic.
Here’s the deal. It seems that there are increasingly large swaths of society that are primarily operating on an unbridled emotional basis. Attempts to use logic with them as a means of communication are fraught with great difficulty, tremendous frustration, and outright hardship. The more you try using logic, the worse things seem to become. They are only attuned to emotions and emotional language. Period, end of story.
What you need is a helpful real-time translator. The aim is to translate from logical ideas and statements into emotional forms of conveyance. Generative AI can do this. You can then use the generated emotional language as a means of engaging in a dialogue with the emotionally based person. It’s fine to convey the generated verbiage in your own words. You don’t need to strictly abide by the AI-produced wording. The heralded proposition is to get you in the ballpark of what will resonate with the emotionally charged receiver.
This use of AI can be extremely handy, though it isn’t a cure-all and won’t magically bring you eye-to-eye or mind-to-mind with someone who appears to be absent from logical reasoning and fortitude. As they say, sometimes something is better than nothing. Give it a whirl.
Let’s talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities.
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement." https://www.forbes.com/sites/lanceeliot/2026/02/22/talking-sensibly-with-highly-emotional-people-by-using-ai-such-as-chatgpt-to-translate-logic-into-emotional-language-they-will-understand/ #Metaglossia #metaglossia_mundus #métaglossie
"A translation into Maltese of Italian writer and philosopher Umberto Eco’s (1932-2016) novel Il Nome della Rosa was recently published by Klabb Kotba Maltin.
The Maltese version of this masterpiece has been given the title Isem il-Warda and has been carried out by Adrian Stivala, a university academic and translator, who took nearly 10 years to complete it.
Stivala admits the novel proved to be a challenge to translate on many levels.
Read more on Times2..." https://timesofmalta.com/article/maltese-translation-il-nome-della-rosa-published.1124468 #Metaglossia #metaglossia_mundus #métaglossie
"Language Support for New and Existing Projects and Languages
translatewiki.net is a platform for gathering translations of interface messages for Wikimedia and other open-source projects. This quarter, 17 new projects, including Lingua Libre, Broomstick and Lexica, and 2 languages, Shughni [1] and Hyam [2], were added. In addition, 3 new languages from the African continent, Bole [3], Jju [4] and Bono [5], primarily spoken in Ghana and Nigeria and totaling over 2 million native speakers, were added to MediaWiki. This lays the groundwork for communities to contribute to future Wikimedia projects in their native languages.
Volunteers also made important contributions, including adding Hokkien Hàn-lô writing script [6] and changing numeral symbols in Levantine Arabic’s interface from Eastern Arabic to Western Arabic numerals.[7]
Update on the CapX Translat-a-thon
The Capacity Exchange (CapX) platform hosted a Wikimedia Translat-a-thon in partnership with the Language Diversity Hub, bringing together volunteers from around the world to collaboratively translate and localize the CapX tool, documentation, and capacity directory. A key highlight was the Capacity Exchange translation tool, built specifically for this event to make it easier to translate capacities into many languages. Over the course of two weeks, 43 contributors worked together to produce 5,559 translations across 48 languages.[8]
12 languages selected for the Language Diversity Hub mentoring program
The Language Diversity Hub has selected 12 languages (4 with an existing Wikipedia and 8 currently in Incubator) for its 2025-2026 mentorship program, aimed at helping advance their Wikimedia projects. The program will offer personalized mentorship tailored to each community’s needs, technical support around project infrastructure, workflows, and content development, and peer-learning opportunities to connect communities across regions, languages, and stages of growth.[9]
Universal Language Selector rewrite plans
Universal Language Selector (ULS) is a MediaWiki extension. Its main features include language selection, input methods, web fonts, language search, and other language-related settings. The current codebase is quite old and suffers from performance issues.
The goal of the rewrite is to include ULS in MediaWiki core, so it can become the default language selector across MediaWiki, including mobile skins, and provide a more consistent user experience. The rewrite will use the modern Codex design system and extend ULS to offer clearer entry points for other tools or for modifying its behavior. Overall, these changes aim to address existing performance and accessibility issues.[10][11]
Community meetings and events
Sign up to attend the upcoming Language Community Meeting in February 2026. In case you missed the language community meeting in November 2025, you can catch up by watching the video recording and reading the notes. This meeting was co-organized by the Language and Product Localization team and the Language Diversity Hub and featured over 20 attendees, including contributors from the Wikitongues project and Fante Wikimedia Community, who joined as session presenters.
The ESEAP conference (for the East, Southeast Asia, and the Pacific region) will take place in Kaohsiung, Taiwan, on 15–17 May 2026. The focus of this conference will be on language diversity, Indigenous knowledge, minority languages, and technical support.
Get involved
If you are looking for technical tasks, take a look at the easy tasks related to internationalization, localization and translation of MediaWiki on Wikimedia Phabricator. If you are looking for tools to edit and translate articles and interface messages, you can use Content translation and Special:Translate on Translatewiki.net. These tools make it easier to work with content in different languages. Please share any feedback on the talk pages of these language tools.
Stay tuned for the next release! You can subscribe to this newsletter.
References
1. phab:T409846
2. phab:T405473
3. phab:T409708
4. phab:T408150
5. phab:T406198
6. phab:T392749
7. phab:T382781
8. https://diff.wikimedia.org/2026/01/12/many-tongues-one-movement-the-capx-translat-a-thon-2025/
9. https://diff.wikimedia.org/2025/12/17/announcing-selected-communities-for-the-2025-2026-ldh-mentorship-program/
10. phab:T395997
https://diff.wikimedia.org/2026/02/17/language-and-internationalization-newsletters-10/ #Metaglossia #metaglossia_mundus #métaglossie
"The Africa Institute invites applications for the Global Africa Translation Fellowship to support the translation of African and African Diaspora works into English, Arabic, and other languages. Grants of $1,000–$5,000 enable scholars and translators from the Global South to make significant texts accessible to a global audience while working remotely.
About the Fellowship
The Global Africa Translation Fellowship is part of the African Languages and Translation Program and aims to:
Promote accessibility of African and African Diaspora literature, poetry, and critical theory to global audiences.
Support retranslations of classics, previously untranslated works, or newly conceived translation projects.
Encourage scholarly and high-quality translation practices, ensuring fidelity and contextual understanding.
Key Features
Non-Residential Fellowship: Fellows work remotely and do not need to relocate to the Africa Institute or Global Studies University in Sharjah, UAE.
Funding: Grants range from $1,000 to $5,000 depending on project scope and quality, disbursed in two installments.
Project Scope: Supports poetry, prose, critical theory, retranslations, and previously untranslated texts. Projects must be feasible for completion within the fellowship period.
Archival Requirement: Fellows submit a copy of the completed translation for archival purposes; translations will not be published or used without consent.
Who is Eligible?
Applicants from across the Global South.
Scholars, translators, or researchers with experience in African and African Diaspora studies.
Projects may be works-in-progress or newly proposed translations, provided they can be completed during the fellowship.
How to Apply
Prepare CV/Resume: Two-page document including institutional affiliation, highest degree, and key publications or works.
Project Summary: Two-page single-spaced summary outlining project significance, justification, and proposed completion dates.
Sample Translation: Four- to five-page double-spaced sample showing the original text alongside the proposed translation.
Copyright Documentation: Clarify copyright status. If the work is not public domain, provide the copyright notice and letter confirming English-language rights.
Submit Application: Follow the Africa Institute’s submission guidelines before the deadline.
Why It Matters
The fellowship fosters the global visibility of African and African Diaspora texts, contributing to:
Preservation and dissemination of cultural heritage.
Expansion of academic and literary resources in English, Arabic, and other languages.
Support for translators and scholars from underrepresented regions in the Global South.
Funding Details
Grant Amount: $1,000–$5,000 depending on project scale.
Disbursement: Two installments, the first at project start and the second upon completion.
Support Provided: Financial assistance to complete translations remotely.
FAQs
Who can apply? Scholars, translators, and researchers from the Global South with experience in African and African Diaspora studies.
What types of works are eligible? Poetry, prose, critical theory, previously untranslated works, and retranslations of classics.
Is relocation required? No, the fellowship is non-residential.
What is the grant amount? $1,000–$5,000, depending on project scope.
Are there submission requirements? Yes, including CV, project summary, sample translation, and copyright documentation.
Is prior experience required? Applicants should demonstrate ability to complete high-quality translations.
What happens after the translation is completed? A copy is submitted for archival purposes; no publication occurs without fellow consent.
Conclusion
The Global Africa Translation Fellowship 2026 provides crucial support to translators and scholars, enabling them to make African and African Diaspora texts accessible to a wider audience. By funding high-quality translations and promoting cultural exchange, the fellowship strengthens global understanding of African literature and scholarly works.
For more information, visit The Africa Institute." https://www2.fundsforngos.org/arts-culture/entries-open-global-africa-translation-fellowship-2026/amp/ #Metaglossia #metaglossia_mundus #métaglossie
"Language barriers slow down the international diffusion of knowledge, study finds
by Ingrid Fadelli, Phys.org
edited by Gaby Clark, reviewed by Robert Egan
Rapid technological and scientific advances have fueled a huge wave of innovation over the past decades. The speed of global innovation is known to be dependent on the exchange of knowledge and skills between different nations worldwide.
Throughout history, discoveries made in some parts of the world have sparked development in other geographical regions. Therefore, if technical documents, patents and research papers are only available in one language and are not promptly and accurately translated into other languages, this can slow down the diffusion of knowledge and consequently international innovation.
Researchers at Motu Economic and Public Policy Research in New Zealand and the Research Institute of Economy in Japan recently carried out a study exploring the extent to which delays in the translation of patents and associated language barriers influence the speed and reach of technological progress. Patents are exclusive, time-restricted legal rights for the fabrication and sale of new inventions or technological solutions that governments can grant inventors.
The team's paper, published in Nature Human Behaviour, specifically focuses on the translation of patents issued in Japan into English. The results reported in the paper suggest that language barriers accounted for approximately half of the delay in the diffusion of knowledge from Japan to the United States within the years considered in the study.
"Language barriers and translation costs are persistent obstacles to communication and have particularly pronounced economic impacts in technical domains," wrote Kyle Higham and Sadao Nagaoka in their paper. "We provide causal evidence on the effects of language barriers on the speed and extent of knowledge diffusion by exploiting a change in US patent policy that resulted in earlier disclosure of English-language technical knowledge from Japan."
Graph summarizing the average time to first citation to the US–JP twin cohort. The points indicate the (log-transformed) lag to first citation, by source, averaged by week. The dashed lines indicate the same, averaged over the 26-week periods before and after the AIPA came into effect. Credit: Higham & Nagaoka. (Nature Human Behaviour, 2026).
Tracking patent citations in the U.S. and Japan
As part of their study, the researchers collected and analyzed a sample of 2,770 citations of Japanese inventions by US-based inventors. First, they identified a reform in U.S. patent policy that had sped up the time it took to translate and disclose technical knowledge originating from Japan.
In their analyses, they looked at how long it took for U.S. inventors to cite patents and inventions originating from Japan both before and after the policy change. In addition, they tried to determine what types of firms most benefited from an earlier diffusion of Japanese inventions in English documents. Finally, they explored the possibility that the quality of inventions influenced the speed with which they became accessible to English-speaking inventors.
"We find that language barriers accounted for almost half the diffusion lag of Japan-originating knowledge to US-based inventors, relative to Japan-based inventors," wrote Higham and Nagaoka.
"This acceleration is significant only for firms with limited ability to translate (small research and development scale, or little involvement in the Japanese market) and is more pronounced for the diffusion of high-quality inventions, suggesting difficulties in quality-targeted translation. Thus, early publication of patent applications provides a substantial public good for cumulative innovation through accelerated access to translated foreign patents."
Implications for future technological and scientific progress
The results of the team's analyses suggest that language barriers significantly slow down the pace of global innovation, while the prompt translation of patents and technical documents speeds up international advancement. This effect appeared to be most pronounced for smaller U.S. firms who had fewer resources (e.g., had no in-house translators) and thus heavily relied on the public dissemination of documents translated in English.
The researchers found that Japanese inventions that were more impactful and of a higher quality were typically cited earlier in English documents than inventions of a lower quality. Overall, their analyses confirmed that the early disclosure and translation of patents speed up global innovation.
In the future, this study could encourage governments to update patent diffusion policies or introduce new measures aimed at accelerating cumulative innovation and supporting the global exchange of knowledge. In addition, it could inspire other teams to carry out more research focusing on the translation of patents in a broader range of languages.
Publication details
Kyle Higham et al, Language barriers and the speed of international knowledge diffusion, Nature Human Behaviour (2026). DOI: 10.1038/s41562-025-02367-3.
Journal information: Nature Human Behaviour "
https://phys.org/news/2026-02-language-barriers-international-diffusion-knowledge.html
#Metaglossia
#metaglossia_mundus
#métaglossie
"Wordly has introduced Workspaces – a new version of its AI translation and captioning platform for event planners to use when the show is over and they're back in the office.
The company said Workspaces allows planners to manage live translation and captioning for conference keynotes, workshops and staff meetings under a single account that can be used across departments.
Rather than setting up the service separately for each event, organisations can extend the platform to internal meetings, training sessions and other gatherings.
“Inclusion and accessibility shouldn’t end when the last session wraps,” said Lakshman Rathnam, founder and CEO of Wordly. He said Workspaces gives event organisers control to extend language access beyond conferences into planning meetings, trainings and everyday operations.
According to the company, the platform is designed to support a range of activities that event planners typically oversee, including staff meetings, sales kickoffs and employee onboarding sessions. It can also be used for external-facing events such as industry conferences, customer webinars, board meetings and investor communications. Day-to-day applications include project standups, cross-functional collaboration meetings and employee interviews.
Rathnam said the move reflects Wordly’s shift from focusing primarily on large events to serving broader enterprise needs. He described Workspaces as supporting “global, multilingual organizations across large-scale conferences and daily business operations,” where demand for live translation and captions continues to grow.
Alongside the product update, Wordly introduced a refreshed brand identity and website.
Dave Deasy, the company’s chief marketing officer, said the new branding reflects its focus on addressing language and accessibility barriers as it expands beyond traditional event settings.
Workspaces is now available to existing and new clients."
https://www.meetings-conventions-asia.com/News/Technology/Wordly-takes-AI-translation-out-of-conference-venues
#Metaglossia
#metaglossia_mundus
#métaglossie
"On the occasion of International Mother Language Day, celebrated on 21 February at UNESCO's initiative, the non-profit Wikilinguila organised a conference in Kinshasa devoted to promoting and valorising four national languages of the Democratic Republic of the Congo (Lingala, Swahili, Tshiluba and Kikongo).
Held under the theme "Using technology for multilingual learning: challenges and opportunities", the gathering brought together students, teachers, researchers, cultural actors, communication professionals, content creators and members of civil society committed to defending Congolese linguistic identity in the digital age.
Alongside the conference, Wikilinguila officially launched the "WikiForMotherTongue" (Wiki for the mother tongue) campaign, an initiative aimed at strengthening the presence of national languages on the Internet, notably through Wikimedia projects.
According to the organisers, the objective is twofold: to raise public awareness of the importance of mother tongues in learning and in the transmission of knowledge, while encouraging the production and translation of digital content in national languages.
Mother tongue: a foundation of identity and an educational challenge
In his address, Professor Munkulu di Déni, encyclopaedist, founder and perpetual secretary of the Académie Kongolaise, stressed the existential dimension of language. "Language is not merely an instrument of communication; it is the place where our relationship to the world is woven. Each of us is born twice: biologically, and then into a language."
For him, human intelligence remains the matrix of language. Learning a language is not limited to memorising vocabulary or grammatical rules; it means internalising a worldview, an affective memory and a cultural imagination.
In the Congolese context, national and local languages carry ecological knowledge, philosophical traditions and specific forms of sociability that do not always translate fully into other languages.
Technology and multilingualism: between opportunity and vigilance
The speakers also underlined the growing role of digital technologies and artificial intelligence in language learning. Mobile applications, learning platforms, machine translation tools and conversational assistants offer new possibilities for accessing knowledge.
Professor Ikalw'OfoLo, a linguist and professional translator, highlighted the potential of these tools: "Technology is an amplifier of voice and memory. It allows us to preserve our work, disseminate it widely and strengthen advocacy for Congolese languages."
Several challenges were nonetheless raised: the digital divide, which limits access to technological tools; the risk of linguistic homogenisation in favour of dominant languages; the qualitative limits of machine translation; and the ethical issues surrounding data protection and algorithmic choices.
For Professor Munkulu, the point is not to oppose human and artificial intelligence but to think through their complementarity. "Humans bring meaning, context and ethics. The machine brings speed and processing capacity. Multilingual learning becomes a hybrid space."
"Technology can become a new fire"
The address by Mwene Kha Utanda Mwanta Nswan-a-Murund, Ngwej'I-Kabongo drew on traditional imagery to illustrate contemporary issues. Evoking the circle around the fire, the ancestral place of transmission, he posed a central question: "Will this screen extinguish the fire... or become a new fire for our peoples?"
For him, the DRC does not need to become multilingual; it always has been. The challenge now is to reawaken that wisdom and make technology a tool of transmission rather than a factor of identity rupture. "A people can lose its lands or its riches. But when it loses its languages, it loses its way of dreaming," he concluded.
A plea for educational reform
The conference was not limited to Kinshasa. Similar activities were organised simultaneously in Goma, Mbuji-Mayi and Matadi. Two practical workshops devoted to the valorisation and transmission of languages in the digital age are also announced for 28 February and 3 March.
Wikilinguila, a non-profit association, is initially focusing its work on the four national languages, with the ambition of gradually extending its scope to local languages, in order to ensure their documentation and transmission in the digital space.
Beyond the digital sphere, the organisers called on the public authorities to effectively integrate national languages into the Congolese education system. They believe that every Congolese should be able to master at least two of the four national languages, in addition to French. They advocate education policies that recognise the mother tongue as a lever for inclusion and for improved school results, in line with UNESCO guidance.
Through WikiForMotherTongue, Wikilinguila aims to turn digital tools into spaces for documenting and valorising Congolese languages, notably via Wikipédia and other free platforms.
At a time when African languages remain under-represented online, the initiative raises the broader question of linguistic and cultural sovereignty in the age of artificial intelligence.
James Mutuba"
Monday 23 February 2026 - 08:42
https://actualite.cd/index.php/2026/02/23/rdc-wikilinguila-plaide-pour-un-multilinguisme-numerique-ancre-dans-les-langues
#Metaglossia
#metaglossia_mundus
#métaglossie
Padma Shri-winning poet Shafi Shauq warns that institutions, not people, threaten Kashmiri. He discusses power, translation, and survival.
"How Institutions cause the death of languages: Shafi Shauq
Kashmiri poet, linguist, and critic on his formative years reading folk tales and world classics, why institutions suppress the growth of Kashmiri, and the spoken word as the true measure of a language’s life.
Published : Feb 22, 2026
Majid Maqbool
Shauq’s work explores Kashmiri language, poetry, and literary history, helping preserve and share the region’s cultural heritage. | Photo Credit: By Special Arrangement
Shafi Shauq is one of the most authoritative voices in Kashmiri letters—poet, fiction writer, critic, lexicographer, and translator whose work has brought Kashmiri literature to readers across languages and continents. His oeuvre spans literary criticism, lexicography, poetry, fiction, translation, and language studies, with over 100 books authored, edited, or translated across Kashmiri, English, Urdu, and Hindi.
Born in 1950 in Kaprin village of Shopian district in south Kashmir, Shauq completed his PhD in English from the University of Kashmir, where he subsequently taught for 33 years, retiring in 2010 as Professor and Dean of the Faculty of Arts.
In January, Shauq was named a Padma Shri recipient for 2026, India’s fourth-highest civilian award, in recognition of his contributions to literature and education, particularly the preservation of Kashmiri language and poetry.
His standout works include the anthology series The Best of Kashmiri Literature (volumes on Lal Ded, Nund Reshi, and Kashmiri Sufi poetry), poetry collections such as Remembering the Skies and Aa ki Naa, and foundational linguistic texts: Kaeshur Lugaat (Kashmiri Dictionary), Kaeshryuk Grammar (Grammar of the Kashmiri Language), Kaeshir Zaban ti Adibuk Tawaariekh (History of Kashmiri Language and Literature), and Zabaan ti Adab (Language and Literature).
He has previously received the Sahitya Akademi Award (2006), the Sahitya Akademi Translation Award (2007), the Bharati Bhasha Samman (CIIL), and the Jammu & Kashmir State Award for Outstanding Contribution to Languages and Literature. He has represented Indian literature at forums in China, Germany, the UK, Canada, Brazil, and elsewhere.
In this interview, Shauq talks about growing up in a south Kashmir village, the teachers and family members who shaped his literary sensibility, and his early reading of Kashmiri folk tales and world classics. He also speaks about the institutional constraints that stifle the Kashmiri language and insists that no language dies as long as its speakers remain.
Edited excerpts:
You grew up in Kaprin village in south Kashmir’s Shopian district. Which Kashmiri writers, stories, or texts captured your imagination as a young reader?
As a child I got interested in Kashmiri folk tales and mathnavis that my father read out on winter nights when there was no other means of entertainment. My sense of literature took shape when my elder brother Naji Munawar—a renowned writer—made me read some world classics: Alice in Wonderland by Lewis Carroll, Nehru's Letters from a Father to His Daughter, and Arabian Nights. I was also influenced by Urdu writers like Prem Chand and Krishen Chandra. In Kashmiri, I read the poems of Abdul Ahad Azad during my school days, and they left an indelible mark on my imagination. I also read and enjoyed the children's poems written by Naji Munawar.
You’ve authored, edited, and translated over a hundred works. In your view, how do we attract younger readers to Kashmiri literature? Can the spoken language alone keep it alive?
I am glad that my books, in original and in translation, serve students and scholars. Wider readership comes only when we write books of information, science, and diverse fields of knowledge.
Kaeshur, like any other living language, is essentially a speaking language—a medium of understanding and explanation. It does not depend on written texts. It has been the identity of the Kaeshir (Kashmiri) people for thousands of years, long before written texts existed, and it will continue to be the medium of communication as long as Kashmiris are on the face of this earth. The rumour of the language’s extinction serves certain individuals’ ends. The spoken form of a language is the index of its community’s life; written texts are fast being replaced by newer forms of communication—speech to text and its reverse.
You’ve said that research and information-based writing are as vital as poetry. What drew you, alongside your poetry, to documenting indigenous knowledge?
Composing poetry is, and must be, an instinctual drive that draws strength from the playful creativity of human beings—it has a universal grammar and pattern. Poetry, in whatever form, will therefore always be an expression of the human condition in space and time.
Information-based knowledge is the result of an individual’s concerted and conscious effort; it is always subject to scrutiny and questioning. It has to be verifiable. Much has been done to document indigenous knowledge, and it is worth noting that a great deal of this work was done by European scholars—as I discuss in my book Europeans on Kashmir. Some local scholars of Kashmir too have produced significant works in documenting folklore and folk literature.
What challenges did you face in rendering Kashmiri idioms, proverbs, and imagery into English?
The most challenging task in translating Kashmiri literature is placing it in its proper context, with close attention to cultural content. My translation volumes—on Lal Ded, Nund Reshi, Sufi poetry, and love poetry—reflect my attempt to find dynamic equivalents for idioms, proverbs, and imagery. The greatest hurdle was overcoming fixed expressions and shibboleth phrases. Even so, I am satisfied with the work—success is always subject to scrutiny.
In your Best of Kashmiri Literature volumes, you introduced poets like Lal Ded and Nund Reshi to modern audiences. Can you recommend lesser-known Kashmiri poetry collections that would help readers outside the Valley understand its lyrical tradition?
I would point to the works of George Grierson, Aurel Stein, and Hinton Knowles.
What advice do you have for writers eager to write in Kashmiri but worried about readership and reach in the digital era?
Printed readable texts may not survive the onslaught of digital media, yet I hope our younger writers are alert to this new challenge. Without sacrificing the creative integrity of form and content, they might write in newer genres and forms. AI has yet to generate poetry and fiction in Kashmiri, but that time is approaching fast. ChatGPT and Copilot can already produce verse in perfect rhyme—that ability is now within reach.
You’ve spoken about integrating digital tools into Kashmiri. How can technology help revitalise the language?
Unfortunately, some writers today face serious hurdles in this direction. The biggest is the subordination of the language to unnecessary diacritical marks devised by certain people. AI and other tools do not treat anything as sacred. I support universal standards that do not treat the visual medium as a mere reflection of oral and aural media. English spelling is one illustration of the fact that spelling and pronunciation are arbitrary—and that has not killed English.
How crucial is institutional support—in schools, colleges, and universities—for the growth of Kashmiri language and literature?
No rhetoric is needed here, but the fact is that institutions cause the death of languages and literatures by subjecting them to pedagogical patterns and time-honoured rules.
Why has Kashmiri literature struggled while Malayalam or Tamil have flourished through prolific translation and government support?
Kashmiri literature has in fact always been rooted in translation, going back to the 14th century. Mehmood Gami gave the process new life when he adapted major Persian classics into Kashmiri. His mathnavis were enormously popular, and other narrative writers followed his lead, accepting the mathnavi as their medium of expression. Hundreds of long narratives based on translations were produced and later published—works like Prakash Ram’s Ramayana, Lakhiman Joo Bulbul’s Saamnaama, Maqbool Kralawari’s Gulrez, and Wahab Parrey’s Shahnaamah.
In the 20th century, translation received fresh impetus and most world classics were rendered into Kashmiri. European scholars—Aurel Stein, George Abraham Grierson, and Hinton Knowles—translated Kashmiri classics into English and German, and preserved texts of all the Kashmiri classics.
Why have Kashmiris been unable to establish a dedicated literary journal to nurture young writers under established mentors?
I say this not to discourage such efforts, but it is a hard reality that journals and popular magazines require a very large readership and subscriber base. Social media could play a role—without becoming vulgar or bathetic.
How has reading and writing sustained you through decades of turbulence in Kashmir? Are there particular books—Kashmiri, English, or otherwise—that have been especially meaningful?
I say with firm conviction that studying the masters in all genres of literature is the fundamental principle of originality. Any piece of literature is, and must be, the product of individual genius and the art learnt by effort. Hard work and the study of masters across languages is indispensable. No work of literature exists in isolation from others.
Majid Maqbool is an independent journalist and writer based in Kashmir. Bookmarks is a fortnightly column where writers reflect on the books that shaped their ideas, work, and ways of seeing the world."
https://frontline.thehindu.com/interviews/shafi-shauq-kashmiri-language-interview/article70662904.ece
#Metaglossia
#metaglossia_mundus
#métaglossie
First published dictionary of the Kumiay language
"• It concerns the Kumiay language of Baja California, spoken in the community of San José de la Zorra. Speakers Rosa María Silva Vega and Beatriz Carrillo Espinoza joined forces with Carlos Ivanhoe Gil Burgoin to produce this written tool, which contributes to the preservation, dissemination and care of a language already showing signs of slow extinction.
• Tipey Aa Karkwar 'Ith is a valuable reference and study resource for the community of San José de la Zorra itself, for researchers and public-policy makers, and for anyone interested in the memory and cultural heritage of the native peoples of Baja California.
On Saturday 21 February, in the framework of International Mother Language Day, the Centro Cultural Tijuana (Cecut) will host the presentation of the first bilingual dictionary of the Kumiay language spoken in the community of San José de la Zorra, one of the oldest in Baja California.
The work, titled Tipey Aa Karkwar 'Ith: Un diccionario bilingüe del kumiay contemporáneo de San José de la Zorra, was developed by Carlos Ivanhoe Gil Burgoin, under the imprint of Editorial UABC, in collaboration with native speakers Rosa María Silva Vega and Beatriz Carrillo Espinoza.
Kumiay, the ancestral language of a wide region spanning Tijuana, Tecate, Ensenada and San Diego, faces a process of gradual disappearance. This dictionary therefore represents a historic effort to preserve, document and revitalise a language that still lives in the indigenous communities of north-western Mexico.
With more than 2,000 Kumiay entries, the dictionary offers meanings, usage examples, pronunciation and grammatical notes. It also includes an introductory study that contextualises the cultural and historical value of this language, a pillar of Kumiay identity.
At the presentation, the author will be accompanied by María de los Ángeles Carrillo Silva, Etna Pascacio Montijo and Alejandra Velasco Pegueros, recognised researchers and promoters of indigenous languages.
The event will take place in the Sala Federico Campbell of the Cecut at 5:00 p.m., with free admission.
The launch is considered a milestone for the linguistic and cultural memory of Baja California, offering a living tool for future generations and strengthening the legacy of the original peoples who gave the region its identity."
CULTURE
19 February 2026
Jose Luis
https://verazinforma.com/se-presentara-en-cecut-el-primer-diccionario-publicado-de-la-lengua-kumiay/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Prosody – a superpower for effective communication
21/02/2026
Languages are more than a means of communication: they define our identity and help us integrate into society. Multilingual education promotes inclusivity, mutual understanding, and peace, while helping to preserve minority and indigenous languages. This year’s International Mother Language Day highlights the important role young people play in advancing multilingual education.
In this context, UNRIC spoke with Dr. Lieke van Maastricht from Radboud University in the Netherlands about prosody – the sound patterns of language, such as lexical stress, intonation, speech rhythm or accents. In our globalized world, more people are communicating with non-native speakers of their mother tongue. A foreign accent is a natural part of language learning; it reflects the diversity of human expression and is a reminder of the courage it takes to embrace new languages and cultures. Find out what prosody is and why it matters in language acquisition.
What is prosody and why is it important?
Prosody concerns the sounds of a language that span beyond individual vowels and consonants, such as lexical stress (which syllable stands out in a word), intonation (the rising and falling patterns of voice frequency), and speech rhythm. These larger speech sounds shape our impression of a language (e.g., Swedish is often perceived as singsong-y, Italian is said to sound staccato). However, prosody is not only nice to have because it makes languages sound interesting or melodic, but also a must-have for effective communication. We use prosody to convey both meaning and emotion in speech. For example, when someone is anxious, the pitch of their voice and speech rate often increases, whereas a reduced pitch range and speech rate can be a sign of depression or boredom. Moreover, producing the right intonation pattern can make the difference between a question and a statement, or between irony and sincerity. Placing word stress incorrectly can lead to a completely different lexical meaning or misinterpretation of an entire sentence.
Why do language learners often transfer intonation, rhythm or word stress from their mother tongue into a new language?
The tricky thing is that languages can differ enormously in terms of their prosody. For example, while in French the last syllable of a word is always the most prominent (e.g., nu-mé-RO, té-lé-PHONE, ki-lo-MÈTRE, capital letters indicate word stress), in Spanish, lexical stress placement is flexible (e.g., te-LÉ-fo-no, ki-LÓ-me-tro) and can change the meaning of a word or even whole sentence (e.g., NÚ-me-ro refers to the noun ‘the number’, nu-ME-ro means ‘I number’ in the present tense, and nu-me-RÓ means ‘(s)he numbered’ in the past tense).
As speakers are often unaware of the prosodic patterns of both their native language and the one they are trying to learn, they tend to subconsciously copy those they know from their native language to their new language. Many educational methods lack instruction in prosody, and it is often left out of teacher training as well. So even if foreign language teachers are (more) aware of its relevance to communication, they still often do not have the tools to teach it properly in class. This is unfortunate, as research shows that with training, we can improve our prosodic proficiency.
Prosody is a must-have for effective communication. We use prosody to convey both meaning and emotion in speech.
Is it possible to eliminate a foreign accent? Can improving prosody significantly reduce it?
Research has shown that improving your prosody will help reduce a foreign accent. Native speakers also consider learners with better prosody to be easier to understand. Interestingly, the results for actual comprehension (e.g., in the form of a transcription task or a comprehension test) are mixed. Some studies show that improving prosody also improves intelligibility, while others do not. Importantly, in our increasingly globalized world, more and more people are becoming accustomed to communicating with non-native speakers of their mother tongue. And while a reduced foreign accent and perceived ease of comprehension remain important contributors to the impression that we have of people, it is a relief to know that foreign language errors, be it in prosody or other areas, do not necessarily impede our capability to understand each other.
Accents in foreign languages are often associated with social judgements. Why do accents carry stigma? In your view, is accent bias more about actual language ability, or about social attitudes and perceptions?
This is not my area of expertise, but research has shown that speakers with a foreign accent are also more likely to be judged negatively by mother-tongue speakers regarding their competence, intelligence, and charisma. Given that errors in prosody contribute to the perception of a foreign accent, we can conclude that incorrectly producing the prosody of a foreign language has important societal implications, for instance, when it comes to finding a job and maintaining social relationships. What is worse, this can become a bit of a vicious circle: if native speakers perceive communication with a learner to be difficult or the speaker themselves as less charismatic or intelligent, this reduces that learner’s chances of receiving valuable input from native speakers and practice in their foreign language, which would naturally reduce their chances of improving in the language, including its prosody. Alternatively, we know that learners with good prosody are also perceived as more fluent and proficient, which may even distract native listeners from errors they make in other areas of the language. This could be seen as a virtuous circle. In conclusion, it is crucial that we start providing teachers and learners with the necessary tools to practice and assess prosody in their classrooms.
Your research focuses on prosody. Could you tell us more about what you are currently working on?
My current research focuses on two things: on the one hand, I am involved in several studies assessing the benefits of hand gestures, facial expressions, and other forms of nonverbal communication on prosody acquisition by foreign language learners. Since prominence in speech – e.g., both at the level of words (stress) and sentences (intonation) – often co-occurs with hand gestures, head movements, and facial expressions (e.g., eyebrow raises), there have been quite a few studies looking at whether learners can use this nonverbal information to learn the prosody of a foreign language. So far, the results of these studies have been mixed; some researchers find benefits, but others do not. The usefulness of gestures in these contexts appears to also depend on the complexity of the task, the nature of the prosodic cue in question, and the characteristics of individual learners (e.g., with respect to working memory capacity or musical aptitude).
On the other hand, I am trying to create a test to assess the prosodic accuracy of speech. Such a test would not only be useful for teachers and learners of a foreign language but also for clinical therapists working with patients struggling with prosody production or the creators of technology aimed at understanding or producing speech (e.g., such as Siri or Alexa, voice-activated virtual assistants).
Could you briefly explain what the Prosodic Proficiency Test Battery is and what it aims to do?
The test is still under development, but when it is ready, it will be freely available to anyone. As such, it would help foreign language teachers and learners in a few different ways: it would allow them to evaluate prosodic accuracy, either in their own speech (for learners) or in their students’ speech (for teachers). The test can be used for one-time assessments or to track prosodic development over time. You could also use it to see which specific contexts are challenging for a speaker and determine their learning goals. For instance, they may produce word stress quite well, but are not yet able to use sentence intonation correctly in the context of indirect questions. This provides teachers and learners with a concrete area to focus on.
Having a foreign accent is a completely normal aspect of learning and speaking a new language.
What would you say to learners who feel embarrassed about their accent in a language they have acquired?
Having a foreign accent is a completely normal aspect of learning and speaking a new language. It doesn’t necessarily impede communication and is nothing to be ashamed of. I like to see it as a reminder that I was brave enough to open myself up to a new language and culture, which is always enriching. Luckily, as more and more people speak a foreign language, we are also becoming increasingly accepting of variation in speech, both when spoken by learners and by native speakers with a regional accent. In my opinion, these differences among speakers should be celebrated, as they highlight our ability to remain in communication despite our different (language) backgrounds.
On International Mother Language Day, what message should we remember about accents and multilingualism?
Perhaps International Mother Language Day is a nice day to pay special attention to the accents that you hear around you, both regional ones and foreign ones. Try to guess where a speaker is from based on their accent. Do you think that your guess is based on the vowels and consonants that you hear, or because of the rhythm and melody of the speech? Once you become attuned to prosody, you will realize that it is all around you and that you use it to extract valuable information from speech about the speaker and their message. It’s like a secret superpower!
Dr Lieke van Maastricht
Dr. Lieke van Maastricht is a linguist specializing in phonetics and foreign language learning, as well as a Spanish teacher at Radboud University.
https://unric.org/en/prosody-a-superpower-for-effective-communication/
#Metaglossia
#metaglossia_mundus
#métaglossie
"As Artificial Intelligence (AI) tools gain popularity in document translation, legal and immigration professionals are warning of the growing risk that automated translations are being used for high-stakes filings where even minor errors can lead to costly delays, rejected applications, or legal consequences.
In response to the rising demand for accountable certified human translation, Interpreters Unlimited (IU), a national language service provider, is doubling down on humans. IU has just launched Certified Translation by Interpreters Unlimited™, a secure online portal, developed in partnership with AcudocX, designed to streamline access to legally accepted document translation services across the United States.
Through the platform, individuals and organizations can securely upload files, receive clear rapid pricing and turnaround details, approve projects, and track progress from start to finish entirely online. The service is designed primarily for individuals who need translations for immigration paperwork, birth and marriage certificates, asylum materials, medical records, academic transcripts, and legal filings. It also supports professionals working in healthcare, law, education, government, and corporate environments.
While AI-based translation tools offer speed and convenience, they cannot provide certification, legal accountability, or guaranteed acceptance by courts and federal agencies such as U.S. Citizenship and Immigration Services (USCIS). Translations handled through Certified Translation by Interpreters Unlimited™ do just that, meeting strict quality standards and including certified documentation that is accepted by courts, government agencies, school systems, and other official entities.
Immigration attorneys and legal aid organizations have increasingly reported cases where applicants submitted documents translated through automated tools, only to see their filings delayed or rejected because of critical errors. In several documented instances, mistranslated dates, locations, and medical histories triggered Requests for Evidence (RFEs) or denials, forcing families to restart the process and wait months longer than expected to reunite.
"A mistranslated date, medical term, or legal phrase can change the outcome of any immigration case or court proceeding," said Shamus Sayed, CEO of Interpreters Unlimited. "Technology can assist with efficiency, but when someone's legal status, health records, or professional future are on the line, human expertise and accountability are essential." For many users, the portal is more than a convenience; in the age of ICE and stricter immigration policies, it is a lifeline, helping families reunite, students pursue opportunity, and immigrants move forward with confidence instead of confusion.
Certified Translation by Interpreters Unlimited™ pairs qualified professional linguists with translation management technology to deliver faster certified translations in more than 70 languages. This approach eliminates the delays and minimizes the back-and-forth communication of the traditional translation process, allowing customers to move forward faster with confidence. Each project is assigned to a linguist based on the language pair, ensuring cultural and contextual accuracy alongside linguistic precision, and all certified translations meet acceptance standards for courts, government agencies, schools, and official institutions.
The launch reflects a broader industry shift: as AI tools become more widespread, demand is simultaneously increasing for verified, accountable human translation in high-risk settings such as immigration, healthcare, and legal proceedings. "AI can summarize or approximate meaning," Sayed added. "But it cannot assume responsibility for the accuracy of a sworn translation. When families are applying for visas, students are submitting credentials, or patients are providing medical histories, precision matters, and that's where we come in."
Individuals and organizations can access the new portal at www.interpreters.com/certified-translation-portal.
Interpreters Unlimited
8943 Calliandra Rd
San Diego, CA 92126
Press Contact: Marc Westray
marc.westray@interpreters.com"
https://www.openpr.com/news/4396382/in-the-age-of-ai-interpreters-unlimited-doubles-down-on-human
#Metaglossia
#metaglossia_mundus
#métaglossie
"...ALTA is the only organization in the United States devoted exclusively to literary translators. Translation currently accounts for only 3% of published books in the United States, and public funding for translation is scarce. ALTA fills vital gaps for translators, many of whom work in isolation and without institutional backing. Immigrants are the backbone of its membership—14% of members are heritage speakers of a language they translate, 32% were born outside the United States, and 62% have at least one parent or grandparent who was born outside the United States.
Poet-translators represent an underserved segment of the literary translation field, receiving far fewer publishing resources, funding, or opportunities for awards than fiction translators. While ALTA doesn’t limit translation to poetry, all of its programs are inclusive of poetry, with several specifically dedicated to the art. ALTA provides two translation awards for poetry, two poetry-focused mentorships for emerging translators, five poetry translation workshops, more than 60 virtual pitch sessions with poetry publishers, and hosts multiple poetry readings at the annual conference. ALTA also partners with the University of Arizona’s Poetry Center to co-present public events and teaching programs that foreground poetry in translation.
A cornerstone of ALTA’s work is its Emerging Translator Mentorship Program, which pairs early-career translators with experienced mentors in their language of focus. After nine months, it concludes with a reading at the annual ALTA conference and an opportunity to submit work to publishers through ALTA’s First Look program. Since its inception in 2015, it has supported more than 90 translators working from more than 30 languages, with a strong emphasis on underrepresented languages and a significant focus on poetry. Many mentees have gone on to publish poetry books, assume leadership roles in the field, and receive major awards...
ALTA’s annual conference is the only one in the United States dedicated exclusively to literary translation. Each year, it convenes 500 translators, publishers, educators, students, and literary professionals for a vibrant mix of panels, readings, workshops, and public events. Poetry plays a central role in the gathering. The Declamación reading is one of the conference’s longest-running and most popular events, and dedicated workshops and sessions support the craft and visibility of poetry translation. At its 48th annual conference, held in Tucson in November 2025, ALTA collaborated with local partners to ensure that poetry events reached multilingual communities across Southern Arizona, where nearly half of the residents speak a language other than English at home.
Receiving a general operating support grant from the Poetry Foundation has helped ALTA sustain and strengthen its core infrastructure, ensuring that ALTA can continue to fulfill its mission with particular attention to poetry. With unrestricted funding from the General Operating Support grant, ALTA was able to strengthen staffing, administrative capacity, and fundraising efforts. The impact of this support extends beyond day-to-day operations, strengthening the diversity of the literary field by supporting access to global voices and visibility of poetry in translation..."
https://www.poetryfoundation.org/articles/1775196/meet-our-grantee-partner-american-literary-translators-association
#Metaglossia
#metaglossia_mundus
#métaglossie
"The Vice-President of India, Shri C. P. Radhakrishnan, released the updated versions of the Constitution of India in Tamil and Gujarati, along with the 8th Edition of the Legal Glossary (English–Hindi), at Uprashtrapati Bhavan in New Delhi today, on the occasion of International Mother Language Day..."
21 FEB 2026 7:34PM by PIB Delhi
https://www.pib.gov.in/PressReleasePage.aspx?PRID=2231312&reg=3&lang=1
#Metaglossia
#metaglossia_mundus
#métaglossie
"AI Manga Translator is a digital platform that uses artificial intelligence to translate manga and comic content into multiple languages in real time. The tool enables readers to access works originally published in different languages without manual translation, facilitating a more seamless global reading experience.
By analyzing text within speech bubbles and integrating contextual understanding, the AI can preserve narrative flow and stylistic nuances while converting dialogue and captions. For publishers, translators, and content platforms, this solution demonstrates the potential of AI to streamline localization processes and expand audience reach. The technology reduces the time and cost associated with traditional translation methods while supporting accessibility for international readers. Overall, AI Manga Translator exemplifies how AI applications can enhance media consumption and cross-cultural engagement in the publishing industry.
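The bubble-detection, translation, and re-insertion flow described above can be sketched in a few lines. This is not AI Manga Translator's actual code; `detect_bubbles` and `translate` are hypothetical stand-ins for the vision and machine-translation models a real system would call.

```python
# Minimal sketch of a speech-bubble localization pipeline.
# All functions are stand-ins; a production system would run real
# OCR/vision and MT models at these steps.

def detect_bubbles(page):
    """Stand-in detector; a real system would run an OCR/vision model."""
    return page["bubbles"]

def translate(text, target_lang):
    """Stand-in machine-translation call with a tiny demo lexicon."""
    demo = {("こんにちは", "en"): "Hello"}
    return demo.get((text, target_lang), text)

def localize_page(page, target_lang="en"):
    """Translate each bubble in reading order, keeping its position."""
    return [
        {"bbox": bubble["bbox"], "text": translate(bubble["text"], target_lang)}
        for bubble in detect_bubbles(page)
    ]

page = {"bubbles": [{"bbox": (10, 10, 80, 40), "text": "こんにちは"}]}
print(localize_page(page))  # [{'bbox': (10, 10, 80, 40), 'text': 'Hello'}]
```

Keeping each bubble's bounding box alongside its translated text is what lets such a system re-render the dialogue in place, preserving the visual-speech alignment the article highlights.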
Image Credit: AI Manga Translator

Trend Themes
1. Real-time Visual-text Translation - Enables instantaneous cross-language consumption of illustrated narratives, reducing localization lag and supporting simultaneous global releases.
2. Contextual Style Preservation - Focused on retaining authorial voice and visual-speech alignment, this trend preserves cultural nuance and artistic intent during automated translation.
3. Automated Localization Workflows - Transforms manual, time-intensive localization pipelines into scalable systems that lower costs and accelerate content monetization across markets.

Industry Implications
1. Publishing - Traditional and digital publishers can reach broader multilingual audiences through integrated AI translation that shortens time-to-market for serialized content.
2. Streaming Platforms - Content platforms hosting visual narratives gain the ability to offer localized manga and comics at scale, enhancing user retention and regional engagement.
3. Language Services - Translation and localization vendors face a shift toward hybrid AI-human workflows that prioritize contextual editing and quality assurance over full manual translation."
Ellen Smith — February 21, 2026 — Tech
https://www.trendhunter.com/trends/ai-manga-translator
#Metaglossia
#metaglossia_mundus
#métaglossie
"The theme of this conference, "Beyond Human Language: The impact of AI on Linguistics,” asks how language is created, implicated, manipulated, used, learned, taught, and bought and sold by the integration of Artificial Intelligence into all aspects of our lives. It is not an overstatement to say that the arrival of AI upends every aspect of Linguistics, from the way language gets created to the way language is used to the way language goes out of use. The relationship between language and AI emerges in a social and political context that forges new ways of human existence.
This conference aims to move beyond the polarized debate over AI (complete acceptance to outright rejection) to consider the implications and impact of AI on language and linguistic inquiry in real time—our time. To the degree that language and AI are enmeshed in our lives, this conference calls on linguists and scholars to shed light on the effects of this relationship, both good and bad, both revolutionary and mundane.
All topics in linguistics and language scholarship are also welcome for presentation at this conference, including
• Language Evolution and AI
• AI and Speech Registers/Speech Communities
• AI and Writing/Writing Systems
• AI and Language Learning/Teaching
• AI and Cognition
• AI and Translation/Interpretation
• AI and Linguistic Research and Scholarship
• Computational Linguistics
• Linguistic Analysis, Intelligibility, and Recognizability Of AI-Produced Material
• Distinctions Between Large Language Models (LLMs) and Human Language
• AI and Linguistic/Semiotic Creativity, Authorship, Personal Writing, and Writer’s Voice
• AI and Minoritized, Regional, Indigenous, and Racialized Language Varieties
• Language and AI Under Different Political Circumstances (e.g., Colonialism, Capitalism, Autocracy)
• Prompt Engineering
• All Other Language and Linguistics-Related Concerns
Questions? Please email Shonna Trinch at strinch@jjay.edu."
https://www.jjay.cuny.edu/news-events/events/beyond-human-language-impact-ai-linguistics
#Metaglossia
#metaglossia_mundus
#métaglossie
Cambridge Core - Applied Linguistics - Conceptualising China through translation
"This monograph provides an innovative methodology for investigating how China has been conceptualised historically by tracing the development of four key cultural terms (filial piety, face, fengshui, and guanxi) between English and Chinese. It addresses how specific ideas about what constitutes the uniqueness of Chinese culture influence the ways users of these concepts think about China and themselves.
Adopting a combination of archival research and mining of electronic databases, it documents how the translation process has been bound up in the production of new meaning.
In uncovering how both sides of the translation process stand to be transformed by it, the study demonstrates the dialogic nature of translation and its potential contribution to cross-cultural understanding. It also aims to develop a foundation on which other area studies might build broader scholarship about global knowledge production and exchange."

Book Information
Format: Paperback
Pages: 272
Price: £25.00
Published Date: January 2026
Series: Alternative Sinology

https://www.cambridge.org/core/books/conceptualising-china-through-translation/B2B9283914859BB64A1E131D407D49CA
#Metaglossia
#metaglossia_mundus
#métaglossie
"Date: 23 February 2026, 2:15–4:00 p.m.
Location: Engelska parken, Large Conference Room 16-3062
Type: Seminar
Lecturer: Daniele Monticelli
Organizer: Slaviska seminariet (the Slavic Seminar)
Contact: Julie Hansen
Daniele Monticelli, Professor of Semiotics and Translation Studies at Tallinn University, Estonia, will give a presentation entitled: Periodizing, Articulating, Narrating and Computing: Methodological Challenges of Writing a “Big Translation History” This seminar is part of the Higher Slavic Seminar in cooperation with the research environment "Translation in Theory and Practice."
Abstract:
This talk asks how translation history can be written as an integral strand of cultural, intellectual and political history. What can the study of translation reveal about the formation, development and transformation of languages, literary cultures and national identities? Which methods and tools best serve these inquiries, and what are their limits?
Drawing on a large-scale, multi-author project that maps literary translation in Estonian history, I reflect on core methodological dilemmas for long-span translation histories: periodization, the articulation of findings into a coherent narrative, and the trade-off between thematic focus and comprehensiveness, close and distant reading. Special attention is given to the affordances and challenges of digital‑humanities approaches in big translation histories. Combining archival research and close reading with computational methods, the project has produced the Estonian Translation Database (ETD; ≈70,000 entries) and the Network of Estonian Translated Literature (NETL), which together trace texts, translators, institutions and circulation patterns across five historical periods.
I discuss practical problems encountered such as uneven and unstable metadata, selection and synchronization choices, and the risks and rewards of datification. Using examples from network analysis and visualization, I show how these tools reveal hidden agents and flows while also exposing the exclusions and distortions caused by missing data. The talk will appeal to scholars in translation studies, literary and cultural history, and digital humanities, and to anyone interested in how translation shapes broader historical processes.
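As an illustrative toy (not the ETD/NETL data or tooling), even a simple degree count over translation records shows how distant-reading methods can surface prolific but little-discussed agents, and how missing records would silently distort the picture. All names and records below are invented.

```python
# Toy example: counting connections in hypothetical translation records
# of the form (translator, source language, publisher). Real projects
# like the one described would work over tens of thousands of entries.
from collections import Counter

records = [
    ("A. Tamm", "German", "Publisher X"),
    ("A. Tamm", "Russian", "Publisher X"),
    ("A. Tamm", "French", "Publisher Y"),
    ("M. Kask", "Russian", "Publisher Y"),
]

# Degree of each translator node: how many translations they account for.
translations_per_translator = Counter(t for t, _, _ in records)

# Edge weights between publishers and source languages.
publisher_language_edges = Counter((pub, lang) for _, lang, pub in records)

print(translations_per_translator.most_common(1))  # [('A. Tamm', 3)]
```

A single dropped record for "A. Tamm" would change the ranking, which is the kind of exclusion and distortion from missing data the abstract warns about.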
Daniele Monticelli is Professor of Semiotics and Translation Studies at Tallinn University, Estonia. His research is characterized by a wide and interdisciplinary range of interests which include philosophy of language, semiotic of culture, critical theory and translation history. His work in translation history has focused on translation in contexts of radical cultural and social change with particular focus on the role of translation in the (de)construction of national identities in the 19th and the beginning of the 20th century, and translation under Communism in the USSR and Eastern Europe. Currently, he coordinates the research grant “Translation in History, Estonia 1850-2000: Institutions, Agents, Texts and Practices”, which brings together researchers from different disciplines with the aim of writing the first comprehensive history of translation in Estonia. He is co-editor of Between Cultures and Texts: Itineraries in Translation History (2011), Translation under Communism (2022) and the Routledge Handbook of the History of Translation Studies (2024). In the last fifteen years he has authored several literary translations from Estonian into Italian.
The seminar will be held in English."
https://www.uu.se/institution/moderna-sprak/kalendarium/arkiv/2026-02-23-periodizing-articulating-narrating-and-computing-methodological-challenges-of-writing-a-big-translation-history
#Metaglossia
#metaglossia_mundus
#métaglossie
"New SDK enables CX platforms to embed multilingual voice translation into live customer conversations
Krisp, a voice AI technology company, has announced the launch of its Voice Translation SDK, which allows customer experience (CX) platform developers to integrate real-time multilingual voice-to-voice translation into live customer conversations. The SDK supports over 60 languages and is optimized for synchronous interactions where clarity and conversational continuity are critical, enabling multilingual support without the need for human interpreters.
Why it matters
In global customer experience, language barriers can directly impact speed and customer satisfaction. Real-time voice translation technology can help eliminate these barriers, changing the economics of global support and improving the overall customer experience.
The details
Krisp's Voice Translation SDK is engineered to balance the competing constraints of latency, accuracy, and conversational flow in live, two-way conversations. It supports any combination of over 60 languages and applies local Noise Cancellation before audio is processed in the cloud, isolating the primary speaker and improving recognition accuracy. The SDK also supports custom vocabulary and domain-specific dictionaries, enabling teams to enforce terminology and maintain consistency across professional environments.
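The processing order the article describes (local noise suppression before cloud recognition, plus a custom-vocabulary pass ahead of translation) can be sketched generically. Every function below is a hypothetical stand-in; this is not the Krisp SDK's API.

```python
# Generic sketch of a voice-translation pipeline's stage ordering.
# None of these functions belong to any real SDK.

def suppress_noise(audio: bytes) -> bytes:
    """Stand-in for on-device noise cancellation, run before upload."""
    return audio

def recognize(audio: bytes) -> str:
    """Stand-in for cloud speech-to-text."""
    return "our sla covers that"

def apply_vocabulary(text: str, terms: dict) -> str:
    """Enforce domain terminology on the transcript before translating."""
    return " ".join(terms.get(word, word) for word in text.split())

def translate(text: str, target: str) -> str:
    """Stand-in for cloud machine translation."""
    return f"[{target}] {text}"

terms = {"sla": "SLA"}
clean = suppress_noise(b"\x00raw-audio")
text = apply_vocabulary(recognize(clean), terms)
print(translate(text, "es"))  # [es] our SLA covers that
```

The design point worth noticing is that terminology enforcement happens on the transcript, before translation, so domain terms survive the trip into the target language intact.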
Krisp's Voice Translation technology has been live in production CX environments since 2025 as part of the company's Call Center AI platform. The Voice Translation SDK was launched on February 18, 2026.

The players
Krisp
A voice AI technology company that pioneered the world's first real-time Voice Productivity software. Krisp's technology enhances digital voice communication through audio cleansing, noise cancellation, accent conversion, live speech-to-speech translation, and agent assist.
Davit Baghdasaryan
Co-Founder and CEO of Krisp.
What they’re saying
“In global customer experience, every language barrier directly impacts speed and customer satisfaction. Real-time voice translation has to work inside live production environments at scale. By making Voice Translation available as an SDK, we're enabling CX platforms to embed multilingual voice directly into live systems. Removing language friction changes the economics of global support.”
— Davit Baghdasaryan, Co-Founder and CEO of Krisp (businesswire.com)
What’s next
The Voice Translation SDK is available for Windows, macOS, and Web developers, allowing integration into both native and browser-based applications."
Published on Feb. 21, 2026
https://nationaltoday.com/us/ca/san-francisco/news/2026/02/21/krisp-launches-real-time-voice-translation-sdk/
#Metaglossia
#metaglossia_mundus
#métaglossie
"(Agence Ecofin) - With artificial intelligence expanding rapidly in Africa, international collaboration is becoming a strategic lever for ensuring local, sovereign and inclusive adoption of these technologies, while supporting economic development and the creation of digital skills on the continent.
Kenya, India and Italy announced on Thursday, 19 February, the creation of a trilateral strategic partnership aimed at developing and deploying sovereign artificial intelligence solutions at scale on the African continent. The initiative was formalised in New Delhi with the signing of a trilateral letter of strategic intent on the sidelines of the AI Impact Summit 2026.
This cooperation framework aims to move AI adoption from isolated experiments to structured "AI diffusion pathways", with the goal of establishing 100 deployment mechanisms by 2030 in order to broaden the socio-economic impact of these technologies in Africa.
Deploying AI adapted to African realities
The partnership focuses primarily on developing multilingual voice AI solutions designed to operate in low-connectivity environments, with particular attention to data sovereignty and local ownership of the technologies.
The target sectors include agriculture, health, education, public services and livelihoods. The initiative provides for shared technological infrastructure, notably shared voice models and accessible computing capacity, in order to lower barriers to entry for African innovators.
This approach rests on the complementarity of the partners: Indian expertise in digital public goods, Kenya's innovation ecosystem positioned as a regional technology hub, and Italian industrial know-how in artificial intelligence technologies.
Towards a sovereign African AI infrastructure
The collaboration is led by the EkStep Foundation, the Digital Economy Directorate of Kenya's Ministry of ICT and Italy's Ministry of Enterprises and Made in Italy, in partnership with the United Nations Development Programme. It builds on the AI Hub for Sustainable Development backed by the G7 and is aligned with Italy's Mattei Plan for Africa. The initiative also follows the Nairobi AI Forum 2026, which facilitated African innovators' access to computing resources and funding mechanisms.
Through this partnership, the signatories aim to lay the foundations of a sovereign, inclusive and sustainable artificial intelligence infrastructure, driven by African actors and adapted to the continent's economic and linguistic realities. According to the African Development Bank, inclusive deployment of artificial intelligence could add up to $1 trillion to Africa's GDP by 2035, notably through expected productivity gains in key sectors such as agriculture, health, education and public services."
Kenya, India and Italy launch a partnership for sovereign voice AI in Africa
Samira Njoya
Edited by Sèna D. B. de Sodji
https://www.agenceecofin.com/actualites-numerique/2002-136014-le-kenya-l-inde-et-l-italie-lancent-un-partenariat-pour-l-ia-vocale-souveraine-en-afrique
#Metaglossia
#metaglossia_mundus
#métaglossie
"“The Chosen” is officially breaking its own records.
The first season of “The Chosen” is now available in 125 languages — a significant stride toward the biblical drama’s goal of reaching 600 languages and 1 billion viewers worldwide.
For the second time, the biblical drama has been recognized by Guinness World Records as the most-translated season of a streaming series in history. The feat comes shortly after the series received the same record in September for translations in 86 languages.
Michael Empric, the official adjudicator and spokesperson for Guinness World Records, presented the award for the world record to “The Chosen” cast at ChosenCon in Charlotte, North Carolina, on Friday, before thousands of attendees at the event.
“When we see numbers like this, they are able to provide a measuring stick for what’s happening, and we want to celebrate it,” Dallas Jenkins, creator and director of the series, told the audience. “All those languages mean lives changed. And so we don’t want to stop at 125.”
Some of the 125 subbed languages include Arabic, Bulgarian, Danish, Dutch, Flemish, Haitian, Hindi, Korean, Slovenian, Turkish and Vietnamese. Another 240 more language translations are currently in progress.
“The Chosen” has now surpassed — twice — the translations available for television hits such as “Friends,” “SpongeBob SquarePants” and “Baywatch.”
The achievement was largely made possible by the Come and See Foundation, an organization which helps fund and translate “The Chosen,” and aims to expand the series into 600 languages, which would make the show accessible to 95% of the world’s population.
“Because of global supporters and the extraordinary work of Come and See and their partners, I’m pretty sure this won’t be the last translation record we break,” said Dallas Jenkins, the creator and director of “The Chosen.”
While accepting the award, “The Chosen” team recognized the support given to the Come and See Foundation through donations, prayer and sharing the series — all of which is required to make the translation process possible.
The Come and See Foundation’s translation team is made up of theologians and linguists, as well as pastors and Bible experts, who all make certain the translations are accurate to each diverse culture, as previously reported by the Deseret News. For the dubbed translations, there is a team of voice actors who record the dialogue in each new language.
The translation team behind “The Chosen” plans to maintain its current pace and aims to make every translation available in a dubbed version, so viewers can hear the series in their own language, Wendi Lord, the lead of the translation team at Come and See, told the Deseret News.
It is no small feat: Translating six seasons of the series into 600 unique languages is equivalent to translating 33,000 episodes.
“We’re all about appealing to each person,” Lord said. “We reach people individually, day by day, and we see transformation in their personal lives and their faith walk because of the work we are doing.”
More than 280 million viewers in 175 countries have seen “The Chosen.” And the biblical drama’s reach continues to expand across the globe.
The fifth season of the series, which was largely crowdfunded, received donations from 105,000 individuals or groups from 150 different countries.
Expanding the series’ global reach, with the ultimate aim of reaching 1 billion viewers, and sharing the story of Jesus with a worldwide audience remain central to the Come and See Foundation’s mission.
“Breaking our own Guinness World Record would not be possible without the generosity of thousands of supporters from all around the world,” said James Barnett, CEO of Come and See.
“We are committed to translating and dubbing ‘The Chosen’ into 600 languages and making it accessible for all. People need to hear the story of Jesus in their own language, the language they speak, dream and pray in,” Barnett continued. “This recognition is a testament to our team of over 200 expert linguists, local theologians, pastors and biblical scholars from around the globe, working tirelessly to ensure our goal becomes a reality.”"
Margaret Darby
https://www.deseret.com/entertainment/2026/02/20/the-chosen-record-breaking-language-translations/
#Metaglossia
#metaglossia_mundus
#métaglossie
"A master's degree in translation is being relaunched three years after the controversy sparked by the suspension of most programs at the University of Ottawa's School of Translation and Interpretation.
"The master's will be launched this fall," confirms Elizabeth Marshman, associate professor at the University of Ottawa's School of Translation and Interpretation, in a recent interview with Le Droit. "Registration is open until March 1."
"It was time to review our programs, to reorient and modernize them, and to better define our niche to meet demand," she adds. "There had been no admissions for a while. We want to relaunch. The employers I have spoken to are also very pleased to see the program return."
The new program follows on from the former accelerated bachelor's degree. It is aimed at students who already hold a degree and is designed to give them the skills required to practise a language profession, such as translation.
But in the context of the rise of artificial intelligence, the master's program starting this fall takes a different form from the previous master's in translation studies.
Technologies such as generative AI will be integrated into the program in order to understand how they work, along with best practices, limitations and appropriate uses, according to the institution.
"We are going to explore in this master's: we will focus on judgment, on the experience of working with and without tools," says Ms. Marshman.
Programs still suspended
Nevertheless, the undergraduate certificate in translation, the bachelor's in translation, and the master's and doctorate in translation studies attached to the School of Translation and Interpretation remain suspended and "are not currently accepting new applications for admission," the University of Ottawa specifies.
"Courses were offered after the start of the pause to allow students already enrolled in the programs to complete their studies," the institution adds.
That said, the master's in conference interpreting and the course "Introduction to Translation," offered as an elective in the first year of a bachelor's degree, remain open to students.
Satisfaction and wariness
For its part, the Union of Student Workers of the University of Ottawa (SÉUO 2626) says it is "pleased" with the announcement.
Nicholas Dallaire, president of the Union of Student Workers of the University of Ottawa (SÉUO 2626). (Université d'Ottawa)
"It's essential," stresses its president, Nicholas Dallaire, in an interview with Le Droit. "A bilingual university without a translation program, in the nation's capital, is hard to justify. In my view, a school of translation is necessary, even if the University regularly invokes budget cuts, including in so-called classic programs."
"We often hear about a drop in funding for arts departments," the union president laments.
Mr. Dallaire also fears that the new master's in translation will be delivered "mostly online." The University of Ottawa administration maintains, however, that the program will be offered primarily in person.
A source close to the School of Translation and Interpretation, who requested anonymity, is more doubtful.
"We need translators, that's clear."
But, in her view, "the question of the form of the training must be asked," especially since the master's will not be able to count, for now, on graduates of a bachelor's in translation, which remains suspended.
She also believes that "a master's requires higher-level skills and a solid team," now weakened, and fears "a decision by the University that is above all symbolic or political" in relaunching the program.
"For now, we are delighted to once again contribute to training professional translators with the new master's," the University of Ottawa administration confirms."
Sébastien Pierroz
20 February 2026
https://www.ledroit.com/actualites/actualites-locales/ottawa/2026/02/20/apres-le-tolle-luniversite-dottawa-relance-sa-maitrise-en-traduction-ZIR3NH646FFVRLCNXFYCKARBIE/
#Metaglossia
#metaglossia_mundus
#métaglossie
"Where are the most endangered languages in the world?
February 21 is World Mother Language Day. Al Jazeera looks at the most spoken languages and which ones are endangered.
More than 7,000 languages are spoken around the world today and at least 3,000 of them, or 40 percent, are endangered.
English is the most widely spoken language, with approximately 1.5 billion speakers in 186 countries. Two out of every 10 English speakers are native, while the remaining 80 percent speak English as their second, third or higher language, according to Ethnologue, a database which catalogues the world’s languages.
Mandarin Chinese is the second most spoken language with almost 1.2 billion speakers. However, when accounting for native speakers, it is the largest language in the world, owing to China’s large population.
Hindi comes in third at 609 million speakers, followed by Spanish (559 million), and Standard Arabic (335 million).
Scripts in the world’s most popular languages
There are 293 known scripts – sets of graphic characters used to write a language – according to The World’s Writing Systems, a reference book about global scripts.
More than 156 scripts are still in use today, while more than 137 historical scripts, including Egyptian Hieroglyphs and Aztec pictograms, are no longer in use.
The Latin script, which is used to write English, French, Spanish, German and more, is used in at least 305 of the world’s 7,139 known living human languages. More than 70 percent of the world’s population use it.
Which are the most endangered languages?
Of the 7,159 languages spoken worldwide, 3,193 (44 percent) are endangered, 3,479 (49 percent) are stable, and 487 (7 percent) are institutional, meaning they are used by governments, schools and the media.
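As a quick sanity check on the shares quoted above, the raw Ethnologue counts can be recomputed; the one-decimal percentages below are mine, while the article rounds to whole numbers:

```python
# Ethnologue counts cited above
total = 7159
endangered = 3193
stable = 3479
institutional = 487

# The three categories should account for every language counted
assert endangered + stable + institutional == total

for label, count in [("endangered", endangered),
                     ("stable", stable),
                     ("institutional", institutional)]:
    print(f"{label}: {count / total * 100:.1f}%")
# endangered: 44.6%, stable: 48.6%, institutional: 6.8%
```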
A language becomes endangered when its users begin to pass on a more dominant language to the children in the community. Many are used as second languages.
According to Ethnologue, some 337 languages are said to be dormant while 454 are extinct.
Dormant languages are those that no longer have proficient speakers, but the language still has social uses and the language is part of the identity of an ethnic community. Extinct languages are those that have no speakers and no social uses or groups that claim it as part of their heritage or identity.
According to Ethnologue, 88.1 million people speak an endangered language as their mother tongue. There are:
1,431 languages with fewer than 1,000 first-language speakers
463 with fewer than 100 speakers
110 with fewer than 10 speakers
Just 25 countries are home to some 80 percent of the world’s endangered languages. Oceania has the most endangered languages, followed by Asia, Africa and the Americas.
Some endangered languages include:
Oceania
In Australia, Yugambeh, an endangered Aboriginal language, is spoken by the Yugambeh people, primarily across the Gold Coast, Scenic Rim and Logan in eastern Australia.
In recent years, a strong community-led revitalisation programme and the use of learning apps have made the language more accessible to younger generations.
Asia
Japan’s Ainu (Ainu Itak) is a critically endangered language. According to UNESCO, it cannot be linked with certainty to any language family. The exact number of Ainu speakers is unknown; however, a 2006 survey found that, of 23,782 Ainu, 304 knew the language.
Africa
In Ethiopia, Ongota is a critically endangered language.
It was spoken by a community on the west bank of the Weito River in southwest Ethiopia. There are only about 400 members of the community left, with a handful of elders speaking the language.
Americas
In North and Central America, almost all Indigenous languages are endangered. Louisiana Creole, a French-based creole with African and Indigenous influences, is a seriously endangered language in the United States, spoken mostly by elders.
Leco is an endangered Indigenous language spoken in Bolivia and is considered a language isolate, one with no genetic relationship to other languages. It is now spoken only by elders, out of a Leco ethnic population of only about 13,500.
Europe
Cornish (Kernewek), spoken in southwest England, was classified as extinct by UNESCO until, following revival efforts, its status was changed to endangered in 2010. According to the 2021 England and Wales census, it is spoken as a first language by 563 people."
By AJLabs
Published On 21 Feb 2026
21 Feb 2026
https://www.aljazeera.com/news/2026/2/21/where-are-the-most-endangered-languages-in-the-world
#Metaglossia
#metaglossia_mundus
#métaglossie
UNESCO estimates that some 8,324 languages are spoken or signed around the world, of which nearly 7,000 remain in use. However, only a few hundred have any real place in education systems and public life, and fewer than a hundred are present in the digital world.
"His Excellency Dr Nasser bin Hamad Al Hanzab, President of the Executive Board of the United Nations Educational, Scientific and Cultural Organization (UNESCO) and Permanent Delegate of the State of Qatar to the organisation, affirmed that learning in one's mother tongue plays a central role in improving school performance, building self-confidence and fostering active integration into more inclusive societies. In an address delivered at a ceremony held at UNESCO headquarters to mark International Mother Language Day, celebrated every year on 21 February, His Excellency explained that promoting multilingualism helps consolidate the values of mutual respect, openness and understanding between peoples. He noted that member states, with UNESCO's support, are working to put in place policies to integrate mother tongues into education systems, so that all languages can contribute to the exchange of knowledge and to the promotion of sustainable development and peace. The President of UNESCO's Executive Board pointed out that, despite these international efforts, many languages remain threatened with extinction, a considerable loss for humanity because of the deeply rooted knowledge, traditions and cultures they carry. Established by UNESCO in 1999 and adopted by the United Nations in 2002, International Mother Language Day aims to highlight the importance of linguistic diversity and its central role in education, culture and the strengthening of social cohesion. It also recognises the richness of the world's linguistic heritage and serves as a reminder of the importance of protecting languages as pillars of identity and vehicles for transmitting knowledge and traditions across generations.
The day also commemorates the students who sacrificed their lives to defend their right to speak their mother tongue, reaffirming the essential role of young people in preserving languages and protecting cultural heritage. The theme chosen by UNESCO for this edition, "Youth voices on multilingual education", highlights the profound changes in the contemporary linguistic landscape. According to the United Nations, a language disappears every two weeks, taking with it an entire cultural and intellectual heritage. UNESCO estimates that some 8,324 languages are spoken or signed around the world, of which nearly 7,000 remain in use. However, only a few hundred have any real place in education systems and public life, and fewer than a hundred are present in the digital world."
Paris, 21 February /QNA/
https://qna.org.qa/fr-FR/news/news-details?id=le-president-du-conseil-executif-de-lunesco-met-en-avant-la-protection-des-langues-maternelles-et-leur-role-dans-le-dialogue-entre-les-peuples&date=21/02/2026
#Metaglossia
#metaglossia_mundus
#métaglossie
"21 February - International Mother Language Day
Council of Europe, Strasbourg, 20 February 2026
The Council of Europe's commitment to linguistic diversity in everyday life and education
Through mother-tongue-based multilingual education, societies can foster greater inclusion, preserve minority and indigenous languages, and support equitable access to education. International Mother Language Day, marked globally on 21 February, shines a spotlight on these goals.
“Home language is an asset rather than an obstacle to learning”: with this concept in mind, schools across Europe often mark the day with activities such as multilingual storytelling, poetry recitations, and the sharing of traditions in pupils' home languages, with students' linguistic backgrounds viewed as assets that promote well-being, intercultural dialogue, and academic success..."
The 2026 International Mother Language Day theme: youth voices on multilingual education.
https://share.google/swekdPETlx7yMcOGh
#Metaglossia
#metaglossia_mundus
#métaglossie
The bidirectional system recognises 250 ASL signs, translating signed input via webcam into spoken or written language, and converting spoken or typed words into sign language video.
"Talksign, a Nigeria- and UK-based AI company, has launched Talksign-1, a sign language translation model that converts American Sign Language (ASL) into speech and text in under 100 milliseconds. The bidirectional system recognises 250 ASL signs, translating signed input via webcam into spoken or written language, and converting spoken or typed words into sign language video.
The model was trained on the WLASL2000 dataset and achieves 84.7% accuracy on single-sign recognition. It analyses about one second of signing to balance speed and accuracy but does not yet support continuous sentence-level translation or fingerspelling, limiting use to isolated signs.
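The one-second analysis window described above can be pictured as a sliding frame buffer that only classifies once it is full. The sketch below is purely illustrative: the class name, frame rate, and stub classifier are my assumptions, not Talksign's implementation.

```python
from collections import deque

FPS = 30        # assumed webcam frame rate
WINDOW = FPS    # roughly one second of frames, as described above

class IsolatedSignRecognizer:
    """Minimal sketch of windowed, isolated-sign recognition.

    `classify_clip` stands in for a real model (e.g. one trained on
    WLASL2000); here it is a stub so the sketch stays self-contained.
    """

    def __init__(self, classify_clip):
        self.frames = deque(maxlen=WINDOW)  # sliding one-second buffer
        self.classify_clip = classify_clip

    def push(self, frame):
        """Add a frame; emit a prediction once a full window is buffered."""
        self.frames.append(frame)
        if len(self.frames) == WINDOW:
            return self.classify_clip(list(self.frames))
        return None  # still filling the buffer


# Stub classifier: pretend every full one-second clip is the sign "HELLO"
recognizer = IsolatedSignRecognizer(lambda clip: ("HELLO", 0.847))

result = None
for t in range(WINDOW):
    result = recognizer.push(f"frame-{t}")  # a real system would push pixel arrays

print(result)  # ('HELLO', 0.847) once one second of frames has arrived
```

Because the buffer slides one frame at a time, each new frame after the first second yields a fresh prediction, which is one way a per-sign model can keep its end-to-end latency low.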
Founded in November 2025 by Edidiong Ekong and Kazi Mahathir Rahman, Talksign aims to address accessibility gaps faced by the deaf and hard-of-hearing community, particularly as many digital tools still assume users can hear and speak. The technology is designed to enable more direct communication between deaf and hearing individuals without always relying on human interpreters.
Potential applications include education, healthcare, workplaces, and public spaces such as transport systems, emergency alerts, and live broadcasts. The company worked with deaf educators, native ASL users, and accessibility advocates during development.
For privacy, gesture analysis is performed in the user’s browser, with only processed data sent to servers. Talksign notes the tool should not be used as the sole authority in medical, legal, or safety-critical situations.
Currently limited to ASL, Talksign plans to expand support to other sign languages, increase vocabulary size, and add continuous signing and fingerspelling in future versions.
The TechAfrica News Podcast"
https://techafricanews.com/2026/02/17/talksign-launches-ai-model-translating-asl-to-speech-in-under-100ms/
#Metaglossia
#metaglossia_mundus
#métaglossie