Scooped by Charles Tiayon | November 24, 2012 12:28 PM
Dr. Elena Lozinsky relates her experiences publishing her doctoral dissertation and Russian translations of Marcel Proust's Remembrance of Things Past. "It's a miracle, but I found my publisher in a wonderful place—it's the Honoré Champion publishing house in Paris, which has a specialized series, 'Recherches Proustiennes,'" remarked Dr. Elena Lozinsky when asked how she came to have her doctoral dissertation published. After she defended it, many of those on the panel suggested that she seek out a publishing company to print her work. There was one catch, though: because her dissertation was written and defended in French, the publisher had to be French as well.
Lozinsky took an interesting approach to writing her thesis. The stimulus came from her love for Marcel Proust's seven-part novel, Remembrance of Things Past, also known as In Search of Lost Time. Lozinsky, who is from Russia, studied Proust's works for a long time, in particular the translations from the original French into Russian. In comparing the 1989 French re-release of the text with the second Russian edition from 1960, she discovered several discrepancies in translation that greatly affected the reader's ability to understand the work as a whole. Thus, Lozinsky took it upon herself to begin translating the novel into Russian for a third time.
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude, all vying to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with 235 crore rupees (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister institutes in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"Timekettle Announces 2026 Breakthroughs in AI Interpretation, Introducing Its State-of-the-Art Translation Engine Selector and Next-Generation Bone-Conduction Voice Capture News provided by Timekettle Jan 05, 2026, 07:26 ET
Flagship W4 Earbuds Become the Most Accurate Interpreter Device Ever Released
LAS VEGAS, Jan. 5, 2026 /PRNewswire/ -- Timekettle, the global leader in AI-powered cross-language communication, today announces a major system-wide upgrade that redefines what real-time translation can achieve. Central to this leap forward is Timekettle's new SOTA (State of the Art) Translation Engine Selector, a behind-the-scenes but transformative capability that dynamically identifies the optimal translation model for each language pair and conversational scenario in real time.
While invisible to the user, the impact is unmistakable: the fastest, most precise, and most natural translations Timekettle has ever delivered, regardless of accent, grammar structure, or speaking style. This upgrade arrives alongside an enhanced Bone-Conduction & Hybrid Algorithm, now offering even cleaner, more accurate voice capture by improving the purity of the user's vocal signal before it reaches the translation pipeline.
Together, these advancements make the flagship W4 Interpreter Earbuds the most accurate in company history, surpassing the already industry-leading performance showcased at IFA 2025 that leaves its competition lost in translation.
Since hearing is believing, this innovation, along with the full Timekettle portfolio, can be experienced throughout CES at the Timekettle booth [LVCC, North Hall — 9163].
A Leap Forward in Machine Understanding: SOTA Translation Engine Selector
Different language pairs behave like entirely different systems—English to Korean follows different logic than German to English; Spanish to Chinese requires different segmentation, prediction, and contextual inference than Japanese to French. Timekettle's SOTA Translation Engine Selector resolves this fundamental problem by:
- Detecting the language pair and context in real time
- Selecting the specialized AI engine best suited to that specific pair
- Optimizing accuracy, fluency, and naturalness for each exchange

This is equivalent to assigning a "native specialist" to every conversation. This capability is made possible by Timekettle's advanced Large Language Model and the ongoing evolution of its Babel OS platform, now updated with new machine-learning layers trained on millions of linguistic samples.
The result: translations that sound less like a machine "attempting" to interpret—and more like a system that genuinely understands the intent, tone, and structure of what is being said. To Timekettle users, this means that no matter what languages they speak, Timekettle can always match them with the world's best translation engines to serve their needs, enabling them to enjoy the most accurate cross-language communication experience without fear, awkwardness, or hesitation; to truly express themselves boldly and communicate naturally.
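To make the routing idea concrete, here is a minimal sketch, in Python, of how a per-conversation engine selector of this kind could work. The names (TranslationEngine, EngineSelector) and the quality-estimate heuristic are illustrative assumptions, not Timekettle's actual Babel OS implementation.

```python
# Hypothetical sketch of a per-conversation translation-engine selector.
# All names and heuristics here are illustrative assumptions, not Timekettle's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TranslationEngine:
    name: str
    specialties: set[tuple[str, str]]              # language pairs the engine is tuned for
    translate: Callable[[str, str, str], str]      # (text, src, tgt) -> translated text
    quality_estimate: Callable[[str, str], float]  # (src, tgt) -> expected quality score

class EngineSelector:
    def __init__(self, engines: list[TranslationEngine]):
        self.engines = engines

    def select(self, src: str, tgt: str) -> TranslationEngine:
        # Prefer engines that list this pair as a specialty, then rank candidates
        # by their own expected-quality estimate for the pair.
        candidates = [e for e in self.engines if (src, tgt) in e.specialties]
        if not candidates:
            candidates = self.engines  # fall back to general-purpose engines
        return max(candidates, key=lambda e: e.quality_estimate(src, tgt))

    def translate(self, text: str, src: str, tgt: str) -> str:
        return self.select(src, tgt).translate(text, src, tgt)
```

In the shipping product this selection would presumably also weigh conversational context and run continuously, as the release describes; the sketch only captures the core idea of routing each language pair to a specialist engine.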
Next-Generation Bone-Conduction & Hybrid Algorithm: Superior Voice Capture for Superior Accuracy
Building on its proprietary bone-voiceprint sensor technology, the enhanced 2026 algorithm upgrade extracts cleaner vocal vibrations to improve signal purity, isolates noise more effectively in chaotic environments, and more accurately identifies speakers during multi-voice conversations. These improvements work together to deliver far clearer acoustic input—the foundation of any high-precision translation—and ensure that the system captures meaning with exceptional fidelity. Because AI translation quality is only as strong as the audio it receives, this upgrade dramatically elevates performance across business meetings, travel scenarios, classrooms, and multilingual events.
Rolling Out Across the Entire Timekettle Portfolio
Beginning in early 2026, the SOTA Translation Engine Selector, updated Hybrid Algorithm, and extended Babel OS enhancements will roll out across the full Timekettle product family, including:
- W4 Interpreter Earbuds: Now the most accurate consumer translation earbuds Timekettle has produced
- W4 Pro: Elevated for professional meetings and enterprise use
- T1 AI Translator: The world's fastest offline translator now combines Edge AI with the SOTA Engine Selector to boost accuracy beyond its already industry-leading offline performance and ultra-fast 0.2-second response time
- WT2 Edge, M3 Earbuds, and X1 Interpreter Hub: Each gaining improvements in context prediction, segmentation, and tonal translation

Shared Capabilities Across the Timekettle Portfolio
Across its expanding lineup of earbuds and handheld translators, Timekettle delivers a unified experience built around seamless, natural, real-time communication. All devices support instant translation across 43 languages and 96 accents, covering more than 95% of global regions to enable smooth, uninterrupted conversations anywhere in the world. Users benefit from all-day practicality as well, with long-lasting battery performance designed for travel, business, and extended multilingual interactions.
Timekettle's conversation modes are consistent across the portfolio, offering flexible ways to communicate in different settings. One-on-One Mode allows two people to share earbuds for fluid, face-to-face dialogue. Listen & Play Mode records spoken audio through the app and provides translations that can be reviewed or replayed later. Speak Mode enables users to talk into their device and have translated speech broadcast clearly through the phone's speaker, ideal for group discussions, classroom settings, tours, or presentations.
Whether using earbuds like the W4 or handheld devices such as the T1, the Timekettle ecosystem is designed to make multilingual communication effortless, intuitive, and adaptable to nearly any scenario.
"Accuracy is the result of perfect collaboration between hardware and software," said Leal Tian, Founder & CEO of Timekettle. "With the introduction of our SOTA Translation Engine Selector and the next-generation bone-conduction algorithm, we've elevated that collaboration to a new level. W4 becomes the most accurate interpreter device we've ever created—and this upgrade strengthens every product across our portfolio."
Availability
The 2026 SOTA Translation Engine Selector upgrade and enhanced Hybrid Algorithm will begin rolling out via over-the-air updates across supported Timekettle devices starting January 2026, with the full portfolio supported throughout Q1. The Timekettle portfolio is available to buy at www.timekettle.co and Amazon. Prices include W4 Interpreter Earbuds ($349), W4 Pro ($449), T1 AI Translator ($299.99), WT2 Edge ($349.99), M3 Earbuds ($149.99), and X1 Interpreter Hub ($699.99).
About Timekettle
Since 2016, Timekettle has led innovation in AI-driven cross-language communication, with award-winning devices used by travelers, businesses, educators, and multilingual teams worldwide. Its products—including the W4 and W4 Pro Interpreter Earbuds, WT2 Edge/W3 series, X1 Interpreter Hub, and T1 AI Translator—deliver seamless, natural, and secure communication across nearly all global regions.
SOURCE Timekettle"
https://www.prnewswire.com/news-releases/timekettle-announces-2026-breakthroughs-in-ai-interpretation-introducing-its-state-of-the-art-translation-engine-selector-and-next-generation-bone-conduction-voice-capture-302652627.html #Metaglossia #metaglossia_mundus
"Ten questions: Meet professor and literary translator Clyde Moneyhun January 2, 2026
Clyde Moneyhun, professor in the Department of Theatre, Film and Creative Writing, holds “Witch in Mourning” by Maria-Mercè Marçal, one of the four poetry collections he has translated into English.

Clyde Moneyhun, a professor in the Department of Theater, Film and Creative Writing, holds four degrees — a BA in comparative literature, an MA in American literature, an MFA in fiction writing, and a Ph.D. in rhetoric, composition and the teaching of English. Moneyhun has traveled all over the world, lived and worked in Japan for two years, and chased a number of passions during what he describes as a career of wandering. But Moneyhun found his deepest passion in literary translation — transitioning a poem from one language to another without losing the original’s impact.
It’s a task that’s more difficult than it sounds. How does one maintain rhyme and meter when the English words sound completely different than, say, Spanish? How to translate something like a pun across vastly different languages, cultures and time periods?
“There are probably a lot of people who would say, ‘You translate poetry? That’s impossible,’” Moneyhun said. “Which it is. But I think most translators, especially of poetry, would say that what you’re doing is recreating a poem. You’re doing the best you can. It’s impossible to translate a poem, but you can make a new poem that’s a kind of performance of the old poem.”
Moneyhun translates from Spanish, Italian and French, but primarily from Catalan, a language co-official in three autonomous communities in eastern Spain. He has translated four books of poetry and one children's book into English.
A professor at Boise State for the last 15 years, Moneyhun is now partially retired. He splits his time between Mahón, Spain — where he owns an apartment — and Boise. Equipped with a warm enthusiasm for literature, and an abundance of techniques he’s built over years in academia, Moneyhun continues to teach online nonfiction writing classes.
He took the time to answer our ten questions.
1. What is a question you have spent your life trying to answer?
CM: How to be happy, probably. I’ve done a lot of things that have made me happy, but the things that have made me happy, I’ve sort of fallen into. I’m not sure if they were really choices. I didn’t move to Boise to be happy, for example. I was working at Stanford, I was burnt out and a job here opened up.
And then it turned out to be fantastic. Little old Boise State turned out to be the key to much happiness, including finding translation — and getting rewarded for translation — on the job.
2. When did you first become fascinated by literary translation?
CM: I’ve had to answer that question many times. The creation story for me is, my undergrad major was not English. It was comparative literature, which is like world literature. This is at Columbia [University in New York City, in the mid-1970s], and you had to take not one but two foreign languages. So I took a lot of French and a little bit of Italian. It was terribly hard. It did not come naturally. It still does not come naturally, ironically.
Homework didn’t help me, but I would translate stuff. If the teacher gave us stuff, I would literally — and I didn’t know what I was doing — I would just sit with a dictionary and just translate it word for word, and I would learn that way.
The first translation I ever published was senior year of college. It was a thing I had translated in desperation to try to learn some Italian. And then, I dabbled across the years.
Right before I came here, when I was working at Stanford’s writing center, I took a class in Catalan. I knew some Italian, a lot of French and a little Spanish. But Catalan, the language of my ancestors, I did not know, and there was this class, this intensive two-years-in-one-year class. I took it, and again, in desperation, started translating. And when I got to Boise State, I was just on fire. I was ready to go. I was told, “Oh, really? You can do that [publish translations] to get tenure.” So that’s how it happened.
3. Do you remember the first translation you published?
CM: Oh, I remember it very well. It’s by an Italian novelist named Natalia Ginzburg. She’s really best known as a novelist, but she wrote some little pieces of memoir, and this one is called “Winter in Abruzzo.” Abruzzo is a mountainous area in Italy.
I still look at that little piece. When I was a senior in college and got that thing published, it was the first time it had ever been published in English.
4. What is the most common misconception about literary translation?
CM: That you just look up words and then, you substitute words. This word, that word, another word. Even Google Translate is more sophisticated than that. But most people think that’s the idea. Use a dictionary, and just do word substitution.
Languages have not just grammar, but sort of rules of operation, ways of expressing things, reality, that are different from each other. They’re just so different. Which makes translation very hard.
For example, Japanese. In Japanese, the verb is always at the end of the sentence, which means that you can watch the person you’re talking to, and when you get to the verb, you can do things to it, morphological things. You can make it more polite; you can make it negative. You can add a little word that means, “Don’t you agree?” English does not operate that way.
5. What do you do when you’re stuck on a problem?
CM: What I’ve learned with any problem, translation included — writing, personal problems with people — is just leave it for a while. You’ll come back. Just leave it. Stop beating your head against the wall. I mean, pop psychologists even recommend this. You’re having an argument with somebody, you know, come back later, when you’ve both thought about it. Teaching problems — they don’t solve themselves right away. So, let it go. Go do something else.
Translation is especially great. I dug up a translation the other day that I didn’t remember doing. It was the beginning of a poem up to the last couple of lines. And I thought, what the heck? This is really good. I gave up on it, because it got harder. And then I never went back, and now I am.
It will seem easier when you go back. Because you know, you’ve done it a thousand times. You’ll be back. You’ll be back later.
Teaching is a great thing. Teaching is like a riddle you’re solving all the time. You screw up, you know, you teach a class, something’s not going well. And if you can’t fix it, then maybe you’ll fix it next semester. “Well, we’ll never do that assignment again.”
6. What has changed the most in terms of your approach to teaching through the years?
CM: Back in the day, there was no such thing as writing drafts and getting comments and then doing revisions. I mean, when I was a kid in college, you just wrote. You didn’t even discuss your topic ahead of time with the teacher. It’s Thursday, the paper’s due. You hand in the paper, you get a few comments and a grade, and you just forge ahead to the next paper.
This isn’t how real writers write. They revise, revise and revise some more, and they show it to somebody, and they put it away. So, the insight was building as many revisions as humanly possible into the class. This changes the nature of the comments you can give. If you’re only going to see the paper once, you have to comment on everything. But if you’re going to allow students a couple of drafts, then you can talk about the huge things first. Towards the end, you’re talking about smaller and smaller things, like, you know, the fine points of organizing sentences.
You can only do that if you build in enough drafts.
7. What’s a book you’ve read recently that resonated with you?
CM: There’s a book called “Journey Into Cyprus” by Colin Thubron. Oh, man, what a writer. What a dream of a writer. [In 1972,] he decided what he really needed to do was backpack around Cyprus, which turns out to be really dangerous, because half of it’s owned by the Greeks, half by Turks. And every time you get to a dividing line, somebody threatens to shoot you.
In the book, you get his narrative, you get everything he sees, everybody he meets, everything he eats and drinks. History, mythology, philosophy — you get everything.
8. What’s the most interesting place you’ve visited in your world travels?
CM: Maybe Kyoto, Japan. The town is just one great Buddhist town. You can get lots of vegetarian food. You see priests walking down the street. It’s just beautiful. The outskirts are temples and mountains. I dream about it sometimes.
9. What is one hope you have for the next generation in your field?
CM: That translators get paid better. It's really hard to make a living as a literary translator. There are a few superstars, you know, like Gabriel García Márquez's translator Gregory Rabassa, who made it big. García Márquez famously said, "When I read his translation of One Hundred Years of Solitude, I liked it more." There are only a few people who can do that.
In general, everybody’s got a day job. Most of us translators are teachers, you know. I don’t think I myself would give up teaching, but for some people, it would be great to be able to make a living through translation alone.
10. What’s a principle you try to live by?
CM: Well, here’s the thing I tell my kids sometimes. Life on Earth is really difficult. You’re born, you suffer. You fall in love and they don’t love you, you break your arm. And then, somewhere along the way, somebody’s going to die. If it isn’t you, it’s other people. And since that’s just a basic truth of life, what we’re here for is to help each other through the thing. To be as kind as you can be to each other." https://www.boisestate.edu/news/2026/01/02/ten-questions-meet-professor-and-literary-translator-clyde-moneyhun #Metaglossia #metaglossia_mundus
Idioms are an often invisible barrier to understanding and inclusion for second-language speakers because their meanings rely on shared culture as well as language.
"Being a linguist — and someone who has tried to learn several languages (including English) in addition to my mother tongue (Flemish Dutch) — I have an annoying habit: instead of paying attention to what people are saying, I often get distracted by how they are saying it. The other day, this happened again in a meeting with colleagues.
I started writing down some of the expressions my colleagues were using to communicate their ideas that may be puzzling for users of English as a second or additional language.
In a span of about five minutes, I heard “it’s a no-brainer,” “to second something,” “being on the same page,” “to bring people up to speed,” “how you see fit,” “to table something” and “to have it out with someone.”
These are all expressions whose meanings do not follow straightforwardly from their lexical makeup — they’re called idioms by lexicologists.
Idioms are part of daily communication. But this anecdote also suggests that we take it for granted that such expressions are readily understood by members of the same community. However, when it comes to people who are new to said community, nothing could be further from the truth.
Idioms and the limits of language proficiency
Research conducted at the University of Birmingham several years ago revealed that international students for whom English is an additional language often misunderstand lecture content because they misinterpret their lecturers’ metaphorical phrases, including figurative idioms.
More recent research confirms that English idioms can remain elusive to second-language learners even if the expressions are intentionally embedded in transparent contexts.
One of my own recent studies, conducted with international students at Western University in Canada, also found that students incorrectly interpreted idioms and struggled to recall the actual meanings later on after being corrected.
This shows just how persistently confusing these expressions can be.
It’s worth mentioning that we’re talking about students who obtained high enough scores on standardized English proficiency tests to be admitted to English-medium universities. Knowledge of idioms appears to lag behind other facets of language.
When literal meanings get in the way
The challenge posed by idioms is not unique to English. All languages have large stocks of idioms, many of which second-language learners will find puzzling if the expressions do not have obvious counterparts in their mother tongue.
There are various obstacles to comprehending idioms, and recognizing these obstacles can help us empathize with those who are new to a community. For one thing, an idiom will inevitably be hard to understand if it includes a word that the learner does not know at all.
Members of a community need to have greater empathy for newcomers who are not yet familiar with the many hundreds of potentially confusing idioms that are used so spontaneously in everyday life. (Unsplash)
However, even if all the constituent words of an expression look familiar, the first meaning that comes to a learner’s mind can be misleading. For example, as a younger learner of English, I was convinced that the expression “to jump the gun” referred to an act of bravery because, to me, the phrase evoked an image of someone being held at gunpoint and who makes a sudden move to disarm an adversary.
I only realized that this idiom means “to act too soon” when I was told that the gun in this phrase does not allude to a firearm but to the pistol used to signal the start of a race.
I also used to think that to “follow suit” meant taking orders from someone in a position of authority because I thought “suit” alluded to business attire. Its actual meaning — “to do the same thing as someone else” — became clear only when I learned the other meaning of suit in card games such as bridge.
The idea that idioms prompt a literal interpretation may seem counter-intuitive to readers who have not learned a second language because we normally bypass such literal interpretations when we hear idioms in our first language. However, research suggests that second-language learners do tend to use literal meanings as they try to make sense of idioms.
Unfortunately, when language learners use a literal reading of an idiom to guess its figurative meaning, they are very often misled by ambiguous words. For example, they will almost inevitably misunderstand “limb” in the idiom “to go out on a limb” — meaning “to take a serious risk” — as a body part rather than a branch of a tree.
Recognizing the origin of an idiomatic expression can also be difficult because the domains of life from which certain idioms stem are not necessarily shared across cultures. For example, learners may struggle to understand English idioms derived from horse racing (“to win hands down”), golf (“par for the course”), rowing (“pull your weight”) and baseball (“cover your bases”), if these sports are uncommon in the communities in which they grew up.
A language’s stock of idioms provides a window into a community’s culture and history.
Same language, same idioms? Not exactly
Idiom repertoires vary across communities — whether defined regionally, demographically or otherwise — even when those communities share the same general language.
For example, if an Aussie were to criticize an anglophone Canadian for making a fuss by saying “you’re carrying on like a pork chop,” they may be lost in translation, even if there isn’t much of one. At least, linguistically that is.
Although people may have learned a handful of idioms in an English-language course taken in their home country, those particular idioms may not be the ones they will encounter later as international students or immigrants.
The moral is simple: be aware that expressions you consider perfectly transparent because you grew up with them may be puzzling to others. We need to have more empathy for people who are not yet familiar with the many hundreds of potentially confusing phrases that we use so spontaneously."
By Frank Boers,
Western University
December 16, 2025
https://theconversation.com/the-trouble-with-idioms-how-they-can-leave-even-fluent-english-speakers-behind-271681
#Metaglossia
#metaglossia_mundus
"L'arabe, pilier de l'espagnol : un dictionnaire encyclopédique ravive la mémoire linguistique d'al-Andalus
Un nouveau dictionnaire révèle l'origine arabe de milliers de mots et ravive la mémoire d'al-Andalus
A new encyclopedic dictionary compiled by Professor Anwar Mahmoud Zenati, professor of history and civilization at Ain Shams University.
Abdelhay Korret
04/01/26
A new encyclopedic dictionary written by Professor Anwar Mahmoud Zenati, professor of history and civilization at Ain Shams University, has just been published, with a preface by Professor Karl Pinto, a specialist in Islamic history at the University of Colorado Boulder (United States). The work focuses on Spanish words of Arabic origin, taking a scholarly perspective that goes beyond mere lexical compilation to venture into historical and cultural analysis.
This work is an important contribution to the field of linguistic and lexicographic studies, as it addresses the deep linguistic heritage forged over centuries of interaction between Arabic and Spanish, particularly during the Andalusi period, when Arabic became a fundamental vehicle for transmitting knowledge, techniques, concepts and ways of life to the Iberian world.
The dictionary highlights the presence of Arabic in Spanish as one of the richest testimonies to language contact in history, a presence shaped over nearly eight centuries of cultural interaction, from the Islamic conquest of al-Andalus in 711 to its fall in 1492. During this period, a complex linguistic environment developed in which Arabic coexisted with late Latin, Castilian, Catalan and Basque, producing a wide-ranging influence that encompassed vocabulary, linguistic structures and place names and extended into the scientific, administrative and intellectual spheres.
Sources and scope of the Arabic-derived lexicon
The work draws on renowned linguistic and historical sources, including the studies of Joan Corominas, Reinhart Dozy and Federico Corriente, as well as the dictionaries of the Real Academia Española (RAE), which agree that Arabic contributed more than 4,000 words, directly and indirectly, to Spanish, making it the language's second-largest lexical source after Latin. This influence is not limited to everyday usage but touches key areas such as agriculture and irrigation, architecture and the arts, music, administration and law, philosophy and science, commerce, economics, gastronomy and textiles.
The dictionary takes a comprehensive approach to this phenomenon by tracing Spanish terms of Arabic origin, analyzing their phonetic and morphological structure and studying their semantic shifts over time, always relating them to their cultural contexts. It also highlights the historical factors that helped consolidate this influence, such as the scientific translation movement in Toledo, the rise of crafts and industries, the development of Andalusi architecture and agriculture, and the evolution of administrative and financial systems inspired by Islamic law.
Methodology and a cultural reading of the dictionary
Far from being conceived solely as a linguistic dictionary, the work presents itself as an intellectual reading of the importance of Arabic in Spanish, understood as a living record of the memory of al-Andalus and as evidence of the cultural exchanges that shaped Renaissance Europe. Its authors stress that language is not merely an instrument of communication but a repository of human experience, through which it is possible to reread the history of al-Andalus and to understand the nature of the cultural intermingling that made the Iberian Peninsula a bridge between East and West.
The dictionary's aim is to reaffirm a fundamental linguistic truth: the influence of Arabic on Spanish is neither incidental nor ornamental, but a structural component of the language's very identity. The Arab heritage in Spain underpins the work, transcending the political sphere to become a lasting intellectual and cultural contribution whose traces are still present in the names of cities and rivers, as well as in the lexicon of everyday life, science and administration, confirming that contact between languages has been, and continues to be, a path of mutual enrichment.
Abdelhay Korret, Moroccan journalist and writer"
https://www.atalayar.com/fr/articulo/culture/larabe-pilier-lespagnol-dictionnaire-encyclopedique-ravive-memoire-linguistique-dal-andalus/20260104100000221893.html
#Metaglossia
#metaglossia_mundus
""Neuroscientists Devise Formulas to Measure Multilingualism
New calculator measures how multilingual a person is—and how dominant each language is
Jan 5, 2026 James Devitt
More than half of the world’s population speaks more than one language—but there is no consistent method for defining “bilingual” or “multilingual.” This makes it difficult to accurately assess proficiency across multiple languages and to describe language backgrounds accurately.
A team of New York University researchers has now created a calculator that scores multilingualism, allowing users to see how multilingual they actually are and which language is their dominant one.
The work, which uses innovative formulas to build the calculator, is reported in the journal Bilingualism: Language and Cognition.
“Multilingualism is a very broad label,” explains Esti Blanco-Elorrieta, an assistant professor of psychology and neural science at NYU and the paper’s senior author. “These new formulas provide a clear, evidence-based way to understand your language strengths and how multilingual you truly are, bringing scientific clarity to an everyday part of life for millions of people.”
The calculator works in nearly 50 languages, including American Sign Language, and allows users to fill in an unlisted language.
Blanco-Elorrieta and Xuanyi Jessica Chen, an NYU doctoral student and the paper’s lead author, developed the formulas—embedded in a multilingual calculator that users can deploy to measure their multilingualism and language dominance—that are drawn from two primary variables:
Age of language acquisition for listening, reading, speaking, and writing
Self-rated language proficiency for listening, reading, speaking, and writing
The calculator then yields a multilingualism score, which indicates how multilingual a person is on a scale from monolingual to perfect polyglot. Language dominance is tabulated separately by calculating the difference in ability between languages.
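The published formulas appear in the paper itself; as a rough illustration of how the two inputs could be combined, here is a sketch in Python. The equal weighting of acquisition age and self-rating, the 30-year age cap, and the exact score definitions are assumptions made for demonstration, not the authors' calculator.

```python
# Illustrative sketch only: the weightings below are assumptions, not the
# published formulas from Chen and Blanco-Elorrieta's calculator.
from dataclasses import dataclass

SKILLS = ("listening", "reading", "speaking", "writing")

@dataclass
class LanguageProfile:
    name: str
    age_of_acquisition: dict[str, float]  # per skill, in years
    self_rating: dict[str, float]         # per skill, self-rated proficiency from 0 to 10

def ability(lang: LanguageProfile, max_age: float = 30.0) -> float:
    """Combine earliness of acquisition and self-rated proficiency into one 0-1 ability score."""
    per_skill = []
    for skill in SKILLS:
        earliness = max(0.0, 1.0 - lang.age_of_acquisition[skill] / max_age)
        rating = lang.self_rating[skill] / 10.0
        per_skill.append(0.5 * earliness + 0.5 * rating)  # equal weighting: an assumption
    return sum(per_skill) / len(per_skill)

def multilingualism_score(languages: list[LanguageProfile]) -> float:
    """Roughly 0 for a monolingual; grows as additional languages approach full ability."""
    abilities = sorted((ability(lang) for lang in languages), reverse=True)
    return sum(abilities[1:])  # ability beyond the strongest language adds to the score

def language_dominance(lang_a: LanguageProfile, lang_b: LanguageProfile) -> float:
    """Positive if lang_a is dominant, negative if lang_b is, near zero if balanced."""
    return ability(lang_a) - ability(lang_b)
```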
In a video accompanying the announcement, a trio of multilingual NYU students tried it out to test their abilities in different languages, with results that were both surprising and affirming.
The authors—both multilingual speakers—note that past research has shown that self-rated language proficiency is, in fact, an accurate and efficient measure of actual language proficiency. The researchers also implemented other statistical controls to minimize self-rating bias. They add that, similarly, age of language acquisition has been shown to be a predictor of abilities: the earlier one learns a language, the more likely it is they will be able to master native-like proficiency in that language.
The researchers validated their measure by testing it in two distinct populations: healthy young bilinguals and older bilinguals with language impairments. They compared their results to those obtained from existing methods that rely on acquiring much more extensive language background information. Across both groups, the formulas produced language-dominance results that were nearly identical to those generated by more complicated measures, showing that the new approach is both simple and accurate.
“Rather than just labeling someone as ‘bilingual’ or ‘monolingual,’ this tool quantifies how multilingual one is,” notes Chen.
“This calculator offers a transparent, quantitative tool that researchers, clinicians, and educators can adopt to better characterize multilingual populations, ultimately improving research quality and real-world applications—from language education to clinical assessment,” concludes Blanco-Elorrieta.
The research was supported by grants from the National Institutes of Health (R00DC019973) and the National Science Foundation (2446452)."
https://www.nyu.edu/about/news-publications/news/2026/january/neuroscientists-devise-formulas-to-measure-multilingualism.html
#Metaglossia
#metaglossia_mundus
"Le Japon investit dans la traduction de mangas assistée par l'IA. Chaque année, seulement 10 % environ des mangas publiés au Japon sont traduits en anglais. Ce pourcentage est encore plus faible pour les autres langues.
Le gouvernement japonais a désigné le manga comme un nouveau secteur clé. - Photo : Romancing Japan
Selon Nikkei Asia , le Japon soutiendra l'expansion de la distribution de mangas sur les marchés internationaux en formant des ressources humaines capables de traduire rapidement des bandes dessinées à l'aide de l'intelligence artificielle (IA).
Il s'agit d'une nouvelle initiative du gouvernement japonais visant à empêcher les internautes d'accéder à des sites web piratés. Auparavant, l'Agence japonaise des affaires culturelles avait également pour objectif de développer un système d'intelligence artificielle capable de détecter automatiquement les sites web enfreignant le droit d'auteur.
Prévenir le piratage des mangas dû au manque de traductions. Les mangas sont très populaires sur les marchés étrangers, mais de nombreux lecteurs choisissent encore de lire des versions copiées et mises en ligne illégalement.
Selon Yukari Shiina, chargée de cours à l'Université des Arts de Tokyo, la principale raison est que le rythme de traduction des mangas n'a pas suivi la demande des lecteurs, créant ainsi les conditions propices à la propagation des violations de droits d'auteur.
Une enquête menée par l'organisation anti-piratage Authorised Books of Japan (ABJ) a révélé qu'environ 900 sites Web pirates spécialisés dans la publication de mangas ont enregistré 2,8 milliards de visites en provenance de 123 pays et territoires rien qu'en juin 2025.
Face à cette situation, l'Agence japonaise des affaires culturelles soutiendra des programmes de formation à la traduction de mangas par intelligence artificielle, à hauteur de 100 millions de yens par projet. Ce soutien devrait inclure des formations intensives en traduction et des méthodes d'application efficaces de l'IA.
Shogakukan, l'éditeur de mangas à succès comme Doraemon et Conan, soutient également le développement de logiciels de traduction de mangas basés sur l'intelligence artificielle. - Photo : Toho
Parallèlement, des entreprises privées accélèrent également le développement d'outils de traduction de mangas basés sur l'IA. Mantra, une start-up affiliée à l'Université de Tokyo, a développé un outil capable de traduire des œuvres complètes, y compris le style des dialogues et le contexte narratif.
Cet outil prend en charge 18 langues et permet de réduire de moitié le temps de traduction par rapport aux méthodes traditionnelles. Actuellement, Mantra traite environ 200 000 pages par mois, soit l’équivalent de 1 000 volumes de manga.
« L’IA réduit considérablement les tâches simples comme le remplacement de mots, mais les humains jouent toujours un rôle clé pour garantir l’exactitude, le naturel de l’expression et le contexte », a déclaré Shonosuke Ishiwatari, PDG de Mantra.
L'interface de traduction de mangas de Mantra, basée sur l'IA - Photo : Nikkei Asia
Depuis deux ans, la maison d'édition Shogakukan utilise des outils de traduction développés par différents spécialistes de l'IA, dont Mantra. Ces outils sont capables de reconnaître le texte des images de manga et d'effectuer les traductions. Lorsque le texte traduit est trop long pour la bulle de dialogue, l'IA ajuste automatiquement la taille de cette dernière.
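As a rough illustration of the kind of pipeline described above (recognize the text in each speech bubble, translate it, then resize the bubble when the translation runs long), here is a hypothetical sketch in Python. The function names and the crude capacity model are placeholders, not Mantra's or Shogakukan's actual tooling.

```python
# Hypothetical manga-localization pipeline; every function here is a placeholder.
from dataclasses import dataclass

@dataclass
class Bubble:
    text: str        # text currently assigned to the bubble
    width: int       # bubble box size, in pixels
    height: int
    font_size: int = 24

def detect_speech_bubbles(page_image) -> list[Bubble]:
    """Placeholder for OCR plus speech-bubble detection on a scanned page."""
    raise NotImplementedError

def translate(text: str, src: str = "ja", tgt: str = "en") -> str:
    """Placeholder for the machine-translation call."""
    raise NotImplementedError

def fit_bubble(bubble: Bubble, translated: str) -> Bubble:
    """Enlarge the bubble until the translated text fits, using a crude capacity model."""
    def capacity(b: Bubble) -> int:
        return (b.width * b.height) // (b.font_size ** 2)

    new = Bubble(translated, bubble.width, bubble.height, bubble.font_size)
    while len(translated) > capacity(new):
        new.height = int(new.height * 1.1) + 1  # grow the bubble, as the article describes
    return new

def localize_page(page_image) -> list[Bubble]:
    # Detect every bubble, translate its contents, and refit the bubble size.
    return [fit_bubble(b, translate(b.text)) for b in detect_speech_bubbles(page_image)]
```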
Proofreading, which is essential to improving translation quality, remains a human task. Nobumasa Sawabe, CEO of Shogakukan, believes AI considerably speeds up the translation process. Shogakukan aims to raise the share of its revenue earned overseas to 10% within four years, up from 3 to 4% today.
Meanwhile, Yukari Shiina pointed out that translation quality on pirate manga sites is often uneven. "Expanding translation work is essential, not only to protect the legitimate rights of authors and publishers, but also to offer readers better-quality works," she stressed." Báo Tuổi Trẻ 04/01/2026 https://www.vietnam.vn/fr/nhat-ban-dau-tu-cho-nhan-luc-dich-manga-bang-ai #Metaglossia #metaglossia_mundus
a free bidirectional Khmer-English translation app, now available on the Play Store, the App Store and via the website www.translate.kh.
Cambodia's Ministry of Post and Telecommunications (MPTC) is marking a decisive turning point in the country's digital transformation. On January 2, it unveiled TranslateKH 2.0, a free bidirectional Khmer-English translation app, now available on the Play Store, the App Store and via the website www.translate.kh.
The initiative, which follows the first version launched on November 25, 2024, relies on advanced artificial intelligence, deep learning and a large language model to produce context-aware translations, going beyond purely syntactic approaches.
Beyond everyday convenience, TranslateKH 2.0 is part of an ambitious economic strategy: boosting Cambodia's competitiveness on the regional and global stage, where Khmer-English bilingualism is becoming a strategic asset.
A technological revolution at the heart of the project
Equipped with fine-grained contextual understanding, the app excels at the cultural and idiomatic nuances specific to Khmer, a language of considerable complexity. Unlike generic tools such as Google Translate, TranslateKH prioritizes semantic precision, which is ideal for official documents, commercial contracts or tourist exchanges.
The MPTC highlights an intuitive interface, a partial offline mode and regular updates based on user feedback, positioning Cambodia as a pioneer in localized AI in Southeast Asia.
Economic stakes: a lever for SMEs and regional trade
In a Cambodia where digital adoption is booming, with 75% internet penetration in 2025, TranslateKH acts as an economic catalyst. SMEs, the backbone of the economy (accounting for 60% of GDP), gain fluency in negotiating with RCEP partners (China, Vietnam, Thailand).
Take textile and agri-food exporters: contracts in clear English speed up deals, cutting external translation costs by 30 to 50%. Tourism, a major contributor (12% of GDP pre-pandemic), benefits as well: guides and hoteliers can communicate without barriers.
Sector focus: garment companies, concentrated in Phnom Penh and around Sihanoukville, are already integrating the tool to train bilingual teams, aligning local skills with ASEAN standards.
Educational impact: democratizing English, training the next generation
Cambodian higher education, with its 200,000 students, suffers from a language deficit: only 20% are fluent in English. TranslateKH 2.0 helps fill this gap by serving as a teaching aid (interactive practice, built-in quizzes), facilitating access to international MOOC resources (Coursera, edX).
For technical universities (automotive, IT, construction), this accelerates the internationalization of curricula in line with industry needs. Projected result: a 15% rise in skilled hiring by 2027, according to Ministry of Education projections.
Illustrative (fictional) testimonial: "As an accounting teacher, I find TranslateKH has transformed my classes: my students now grasp IFRS standards in the blink of an eye," says a professor at the Royal Institute of Economics.
Digital inclusion and tourist appeal
Inclusion remains key: the app targets rural areas (40% of the population) through partnerships with telecom operators offering discounted data plans. On the tourism side, Phnom Penh, Siem Reap and Sihanoukville expect 7 million visitors in 2026. Instant bilingual menus and real-time guided tours showcase Khmer heritage (Angkor, gastronomy), multiplying upselling opportunities in hotels and restaurants.
Toward digital linguistic sovereignty
https://www.cambodgemag.com/post/traduction-connect%C3%A9e-le-cambodge-propulse-le-bilinguisme-num%C3%A9rique-avec-translatekh-2-0
#Metaglossia
#metaglossia_mundus
Les limites de la traduction des symboles
"As part of the "Grandes conférences" series, France Culture broadcast, at the height of the summer of 1971, a talk by Roger Caillois entitled "Les limites de la traduction des symboles" ("The limits of translating symbols"). A literary author and essayist, he spoke on that occasion in his capacity as a translator.

"Alone among the arts, literature suffers from the curse of Babel." With this opening sentence, Roger Caillois sums up what is at stake in the problem of translation, in a lecture delivered only a few months after his election to the Académie française at the beginning of 1971.

The translator's thankless task

By 1971, Caillois had been a recognized intellectual for many years. His essays, his literary criticism and his poetic work had earned him recognition, if not from the general public, then at least from intellectual circles. But Roger Caillois was also a translator himself, and no minor one, having translated the work of the Argentine master Jorge Luis Borges into French. It is therefore from experience that he speaks in this talk about the difficulty of carrying across from one language to another what sometimes lies beyond words. He explains what, in his view, a faithful translation is, and what its difficulties are, given that "in every language, no word, even the most ordinary, is the exact equivalent of the corresponding foreign word." This "thankless, complex, treacherous operation" is not impossible for the translator, but the task is colossal. In Caillois's words, the translator must "invent the text that the translated author would have written if his mother tongue had been the translator's and not his own."

Find the full Night of programmes, "Roger Caillois, arpenteur de l'imaginaire."

By Denise Parent. Presented by Gérard Mourgue. With Roger Caillois. Les grandes conférences - Les limites de la traduction des symboles, by Roger Caillois: recorded on 21/05/1971 in Kabul (first broadcast: 30/07/1971). Web indexing: Documentation sonore de Radio France. Archive Ina-Radio France" https://www.radiofrance.fr/franceculture/podcasts/les-nuits-de-france-culture/roger-caillois-arpenteur-de-l-imaginaire-16-18-la-traduction-ideale-selon-roger-caillois-2631438 #Metaglossia #metaglossia_mundus
"For centuries, the Arapaho have called Colorado and Wyoming home. The tribe gave names to places like the Kawuneeche Valley, the Never Summer Mountains, and Mount Blue Sky.
But the language the Arapaho have spoken for centuries is at risk of disappearing, as fewer members of the tribe have learned the language.
A team of language experts at the University of Colorado Boulder is working to change that. They’re compiling an online database that includes recordings of the Arapaho language and can be used as a learning and teaching tool.
Andrew Cowell is a linguistics professor at CU, and faculty director of the Center for Native American and Indigenous studies. He helped launch this project more than two decades ago.
He joined Erin O’Toole to talk about how he hopes the digital database helps future generations learn and continue to speak the Arapaho language."
You can access the Arapaho Language Project here. https://www.kunc.org/podcast/inthenoco/2026-01-02/the-arapaho-language-is-endangered-a-cu-professor-hopes-this-project-will-help-preserve-it #Metaglossia #metaglossia_mundus
"Con el objetivo de impulsar la inclusión y mejorar la comunicación con personas sordas en distintos entornos sociales y educativos, los estudiantes José Eduardo Méndez Merino y Luis Óscar Aguilar Cota trabajan en el desarrollo del Proyecto Signa, una innovadora propuesta tecnológica enfocada en la creación de un traductor de lenguaje de señas.
El proyecto, actualmente en su versión beta, consiste en una plataforma tipo curso interactivo que traduce al Lenguaje de Señas Mexicano (LSM). A diferencia de las plataformas tradicionales que traducen entre idiomas orales, Signa está diseñada para facilitar la comprensión y el aprendizaje del lenguaje de señas, buscando impactar positivamente en la vida de personas con discapacidad auditiva.
“El proyecto se llama Signa, es un traductor de lenguaje de señas que actualmente se encuentra en su versión beta. Desarrollamos una plataforma tipo curso interactivo que traduce al lenguaje de señas en lugar de a otro idioma”, explican sus creadores.
Actualmente consiste en una plataforma web, pero en el futuro los jóvenes quieren implementar una versión también en aplicación móvil. “Funciona mediante clases para los usuarios que gusten aprender la lengua de señas, poniéndoles actividades dependiendo el nivel que elijan.
De momento, el sistema inicia con tres cursos básicos, que son “Abecedario”, “Alimentos” y “Animales”. Estos módulos buscan introducir al usuario en los fundamentos del LSM a través de animaciones e interfaces accesibles. Además, presenta un sistema de evaluación para determinar si han aprendido los cursos.
Otra mejora en la que se encuentran trabajando es en implementar el reconocimiento de voz, para que mediante un creador de imágenes con Inteligencia Artificial pueda generar en automático la seña de lo que la persona diga. Para ello, detallan, tuvieron un acercamiento con un ingeniero de Google, quien les dio consejos y opciones para implementar esta tecnología a su programa.
“Cuando observamos los noticieros en televisión, es común ver a una persona en un recuadro pequeño interpretando el contenido en lenguaje de señas. Nuestro propósito es innovar este proceso incorporando tecnología; buscamos que, cada vez que una persona esté hablando, aparezca una animación generada con inteligencia artificial que traduzca simultáneamente a lenguaje de señas”.
Méndez Merino y Aguilar Cota señalaron que uno de los principales propósitos es generar empatía y reducir la discriminación que enfrentan las personas sordas en los entornos laborales y educativos.
“Lo que buscamos es generar empatía hacia las personas que utilizan el lenguaje de señas, a fin de disminuir la discriminación que enfrentan en los entornos laborales y educativos. Queremos invitar a la comunidad a mostrarse más abierta e incluyente con las personas usuarias del lenguaje de señas”.
Para garantizar precisión y confiabilidad, los estudiantes obtuvieron información sobre LSM a partir de fuentes oficiales del gobierno, recordando que cada país cuenta con su propio lenguaje de señas.
De esta manera, los jóvenes universitarios buscan romper la barrera de comunicación que enfrentan las personas con discapacidad auditiva, promoviendo mayor accesibilidad, comprensión y participación en los distintos ámbitos de la vida cotidiana." 1 de enero de 2026 NBCS Noticias https://nbcs.mx/?p=159640 #Metaglossia #metaglossia_mundus
"Google has started rolling out a new version of its Translate app with a feature that allows you to create more accurate Gemini AI-assisted translations, 9to5Google reported. The feature appears as an AI model picker at the top of the app, allowing you to choose between "Fast" and "Advanced" translations. It's appeared for some users on iOS but not Android to date, and the Advanced mode only translates between English and French, and English and Spanish. Google Translate now offers Gemini-assisted translations https://www.engadget.com/ai/google-translate-now-offers-gemini-assisted-translations-151008774.html #Metaglossia #metaglossia_mundus
"On December 30, 2025, Tencent Hunyuan announced the open-sourcing of Hunyuan Translation Model 1.5 (HY-MT1.5), available in 1.8B and 7B variants. Supporting mutual translation across 33 languages and 5 ethnic/regional dialect conversions, the models are now live on the Hunyuan website, GitHub, and Hugging Face for direct developer access.
Key highlights:
- HY-MT1.5-1.8B targets edge deployment: quantized version runs smoothly on 1GB memory, enables offline real-time translation with average 50-token processing in 0.18 seconds (faster than major commercial APIs), matching most commercial tools—ideal for instant messaging and smart customer service.
- HY-MT1.5-7B upgrades the WMT25 30-language champion model, improving accuracy and reducing annotation carryover and language mixing for professional scenarios.
- Both models support edge-cloud hybrid deployment and three practical features: custom terminology libraries for fields like medicine and law, context understanding for coherent long-text translation, and format-preserving translation for webpages and documents.
- Built with large-model distillation, they deliver high performance in small sizes, already deployed in Tencent Meeting and Enterprise WeChat, with compatibility for Arm, Qualcomm, Muxi, and other platforms.
Source: IT Home" https://pandaily.com/tencent-hunyuan-open-sources-translation-model-1-5-runs-on-1-gb-phone-memory-outperforms-commercial-ap-is #Metaglossia #metaglossia_mundus
"Building a national Large Language Model: Opportunities and challenges
Imagine interacting with a virtual assistant that understands you perfectly – whether you’re speaking fluent Arabic, simplified English, or a natural mix of both. Within seconds, it delivers a response that isn’t just accurate but also culturally attuned, recognizing regional nuances and context. This is the promise of AI agents, intelligent digital assistants powered by cutting-edge Large Language Models (LLMs).
LLMs are transformative AI systems capable of understanding and generating human-like language. They serve as the brains behind AI agents, enabling them to process complex queries, adapt to different dialects, and engage in meaningful conversations. Whether assisting with customer service, translating in real-time, or offering legal and financial guidance, these agents are redefining human-computer interactions.
While models from private-sector leaders, such as OpenAI's GPT-4.5, Google's Gemini 2.0 and DeepSeek's R1, lead the global AI race, and more advanced AI agents (such as Manus AI) are evolving, a new movement is taking shape: national LLMs. Countries are developing their own AI models tailored to their linguistic, cultural and economic realities.
These homegrown models not only ensure greater data sovereignty but also provide AI systems that truly understand local dialects, traditions and societal values – bridging gaps left by global models and shaping the future of AI on their own terms.
For the UAE and the broader Middle East, the potential of Large Language Models (LLMs) extends far beyond technological innovation. They represent a strategic opportunity to drive economic growth, enhance digital sovereignty, democratize AI access and foster inclusivity. While developing an advanced LLM comes with technical complexities, it also involves key economic decisions, workforce development and long-term strategic planning.
What gives the UAE an edge is its strong political determination, investment capacity and rapidly growing AI ecosystem. With a clear vision to lead in AI, the country is not just adopting global models but actively shaping the future of LLMs through the development of models like Jais (a 13-billion parameter model trained on a dataset containing 395 billion tokens of Arabic and English data).
By building models that understand the region’s linguistic diversity, cultural nuances and economic needs, the UAE can create AI solutions that are not only locally relevant but also globally competitive—setting a new benchmark for AI in the Arabic-speaking world and beyond.
The case for a national LLM
The demand for LLMs is undeniable, with applications spanning customer service, education, healthcare, coding, and even governance. Globally, over 30 nations are developing their own LLMs, embedding their unique linguistic and cultural preferences into AI systems. That’s not just about national pride but also about developing independent capabilities.
National LLMs allow countries to reduce reliance on foreign models by ensuring data sovereignty and security as well as full control over model parameters and biases.
Significant steps in this direction have already been taken with Jais in the UAE, one of the most advanced Arabic LLMs, demonstrating the country’s commitment to AI leadership and laying the groundwork for future innovations.
Arabic is a rich and complex language, infused with layers of dialects depending on the region and social context. Often, global models struggle with these intricacies. LLMs developed locally could further bridge this gap, fostering AI systems that understand and adapt to Emirati nuances while extending this capability to the wider Arabic-speaking world.
The region’s bilingualism – most prominently the interplay between Arabic and English – creates yet another incentive. Think of a business traveler at Dubai International Airport or a student accessing academic materials; an LLM that intuitively understands the cultural and linguistic expectations in such scenarios could transform user experiences. By enabling these services locally, the UAE can reduce reliance on foreign-produced AI systems, keeping data safe within national borders.
Beyond immediate use cases, a successful national LLM enhances the UAE’s competitiveness in the global AI race. It positions Emirati institutions as essential players in shaping the future of AI, fostering not just innovation but economic growth through AI-enabled industries.
The economic upside
Building a national LLM also becomes an economic catalyst beyond a technological milestone. Having an LLM available locally means government agencies, universities, and private enterprises can integrate advanced language understanding into their systems. This reduces time, effort, and costs previously spent outsourcing or adapting foreign tools.
But the economic benefits extend far beyond ease of use. A national LLM becomes the foundation for an ecosystem of innovation. It empowers local tech startups to build applications on top of the language model – from chatbots and translation aids to advanced analytics tools and agents. This boosts local entrepreneurship and economic diversification, a critical pillar of the UAE’s long-term development strategy.
National LLMs also create jobs, not only in the fields of AI model development but also in linguistics, ethics, and policymaking. Expanding regional AI expertise fosters human capital, ensuring that regional talents lead the way, rather than relying on external specialists.
The challenges
At the same time, developing an LLM is a high-stakes endeavor. Building state-of-the-art AI systems requires vast amounts of high-quality data, computing power, and financial investment.
Data availability and quality are arguably the most pressing challenges. Arabic dialects vary drastically not just between countries like Morocco and Iraq but even intra-nationally. Sourcing digitized, representative datasets for these diverse linguistic forms is daunting. Large portions of Arabic text available online – social media posts and blogs – might not meet the quality standards required for AI training.
Resources are another challenge. Training LLMs requires massive computing power and high-performance GPUs. The UAE’s growing investments in cloud infrastructure provide a strong foundation, while efficient approaches like DeepSeek’s offer a path to more affordable models. But technology alone isn’t enough – a national LLM holds value only if widely adopted. Without strong buy-in from government agencies, businesses, and educators, it risks becoming an underutilized investment.
Lastly, the talent question is critical. The UAE has made significant strides in cultivating AI-centric education, but the reality of a global AI talent shortage remains. Attracting world-class experts while simultaneously developing homegrown capabilities will be essential to leading the way in the future.
For the UAE and regional players, the choice to invest in a national LLM must be guided by purpose. Should they build from scratch, like GPT-NL in the Netherlands? Or layer Arabic-language capabilities onto an existing foundational model ("RAG-layering"), a more cost-effective approach?
Each decision has trade-offs. Building a new foundational model offers exceptional contextualization but requires ongoing financial commitment for updates and maintenance. RAG-layering limits the scope of flexibility but ensures faster rollouts and lower initial costs.
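For readers unfamiliar with the "RAG-layering" option mentioned above, the sketch below illustrates the general idea: keep an existing foundation model untouched and add a locally controlled retrieval step that feeds regional, Arabic-language context into the prompt. The corpus, the scoring function and the prompt wording are illustrative assumptions, not a description of any UAE system.
```python
# Minimal sketch of "RAG-layering": keep an existing foundation model untouched
# and add a locally controlled retrieval layer over regional documents.
# The corpus, the scoring function and the prompt wording are illustrative
# placeholders, not a production retrieval stack.

local_corpus = [
    "Dubai Municipality guidance on business licensing (Arabic-language source) ...",
    "UAE federal rules on health insurance for residents (Arabic-language source) ...",
    "Service guide for renewing an Emirates ID (Arabic-language source) ...",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in for embeddings)."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved local context so a generic foundation model answers with regional grounding."""
    context = "\n".join(retrieve(query, local_corpus))
    return (
        "Use the following local context to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to any existing foundation model API.
    print(build_prompt("How do I renew an Emirates ID?"))
```
In a real deployment the keyword overlap would be replaced by multilingual embeddings and a vector index, but the division of labour is the same: the foundation model stays generic while the locally controlled layer supplies the regional knowledge.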
Stakeholders must also decide how to prioritize applications. Should early focus be on public sector adoption, such as using LLMs in smart government services? Or should innovation centers work closely with private enterprises looking to solve commercial challenges? These choices will define the trajectory of the UAE’s AI ambitions.
Conclusion
The drive to build a national LLM isn’t merely a technical pursuit; it’s a strategic response to a rapidly shifting world. For the UAE and its neighbors, the opportunity lies not just in offering localized capabilities but in galvanizing economic innovation through AI tools that resonate globally and regionally.
The challenges, no doubt, are significant. But with visionary leadership, strategic public-private partnerships, and a commitment to developing both talent and infrastructure, the UAE can emerge not just as a participant but as a pioneer in the global AI landscape.
An article from Javier Alvarez and Fred Liebler. Both are senior members of FTI Consulting."
31 December 2025 Consultancy-me.com
https://www.consultancy-me.com/news/12262/building-a-national-large-language-model-opportunities-and-challenges
#Metaglossia
#metaglossia_mundus
"Forget Keanu: Ulster Scots translation of Beckett classic takes on spate of celebrity Godots
Tragicomedy will be performed outdoors in Northern Irish countryside as part of new festival celebrating Irish playwright
Beneath a stark steel tree in a bleak upland bog, a literary masterpiece is set to assume a different linguistic mantle.
Samuel Beckett’s enigmatic tragicomedy Waiting for Godot will make its world premiere in Ulster Scots, a moment described as a “coming of age” for the minority language, and the antithesis of the trend for celebrity Godots.
On Good Friday, after an uphill trek of about 3km, the audience will arrive at a spot in the vast volcanic Antrim Plateau in Northern Ireland, if not footsore then certainly empathic to the physical discomfort of Estragon struggling to remove his ill-fitting boots.
The “existential landscape of heath, moss and bog” in County Antrim lends itself to a script “peppered with exterior references”, said Seán Doran, of festival organiser Arts over Borders, which is staging the production as part of a major new arts festival, the Samuel Beckett Biennale.
But while there have been previous outdoor productions, it will be the “forceful pronunciation and sound” of delivering it in Ulster Scots, or Ullans, for the first time and in a region where the language is spoken, that will “bring a whole new total register” and change the whole performative aspect of the play, said Doran.
In October, a commissioner for Ulster-Scots was appointed under the Identity and Language Act in Northern Ireland to act as a cultural safeguard for the language, which has its roots in the early 17th-century plantations of Scots speakers to the north of Ireland.
Against this backdrop, Frank Ferguson, who is translating the play, hailed its performance as “a major coming of age moment”. “It shows a confidence in what Ulster-Scots can do as a language, because you take on one of the major global dramatic phenomenon and you place it within its Ulster-Scots translation.”
The working title is Ettlin Fur Godot, and its famous stage directions of “A country road. A tree. Evening” will translate as “A loanen. A tree. Dailygan”. Ferguson, research director of the Centre for Irish and Scottish Studies at Ulster University, suggested that audiences may develop an ear for Ulster Scots, which has many words with similar roots to English – but that translations might be provided.
Ferguson considers it a language, not a dialect, one that in the wake of the Good Friday agreement is “discovering itself and trying to find its way in the world”. And it works “beautifully” in the Godot setting, he said.
Not least because of “the sense of waiting and hoping and longing for something; all minority languages are longing for that sort of moment of salvation, that moment of revelation. So, that looking, and hoping, and wishing for a Godot-like figure or moment works very well I think with Ulster Scots because in a sense it’s waiting for its moment to live and find itself again”.
It will be performed on Good Friday, 3 April 2026 (Beckett was born on a Good Friday), and is part of a new Beckett Biennale, which over the next 10 years will experiment with unexpected approaches including translations in Aboriginal Noongar, Sami and Inuit, and productions starring homeless actors.
Arts Over Borders, producer of cross-border north-south arts festivals, aims to bring Godot back to its original roots: it premiered in French in Paris in 1953, and in London and Dublin two years later. The biennale aims to be the antithesis of the current trend for productions starring big names.
Keanu Reeves is the latest Hollywood star, currently performing on Broadway; double-acts such as Patrick Stewart and Ian McKellen, Bill Paterson and Brian Cox, Ben Whishaw and Lucian Msamati, and Robin Williams and Steve Martin are others who have risen to its challenges.
Doran appreciates “celebrity Godot” as a vehicle that spreads the word more effectively than anything else. But he believes it can detract from other perspectives, possibilities and insights. “And that’s clearly what we’re trying to do through the different languages, the outdoor setting, the homeless actors.”
The Samuel Beckett Biennale will be set in rural and urban settings in Northern Ireland, the Republic of Ireland and England in 2026, returning in 2028.
This article was amended on 2 January 2026."
https://www.theguardian.com/culture/2026/jan/01/samuel-beckett-waiting-for-godot-new-ulster-scots-translation
#Metaglossia
#metaglossia_mundus
We are building a world that treats translation as a problem to be solved. But translation was never just a technical challenge — it is an act of witnessing.
"When an interpreter breaks, they are not breaking down. They are breaking open — making room for unbearable truth to enter, and for all of us to see it.
Flynn Coleman is an international human rights attorney. She is a visiting scholar in the Women, Peace, and Leadership Program at Columbia University’s Climate School and the author of “A Human Algorithm.”
Roman Oleksiv was 11 years old when he stood before the European Parliament and, in a calm voice, described the last time he saw his mother. She was under the rubble of a hospital in Vinnytsia, Ukraine, hit by Russian missiles in July 2022. He could see her hair beneath the stone. He touched it. He said goodbye.
That’s when Ievgeniia Razumkova, the interpreter translating his words, stopped mid-sentence. Her eyes filled with tears, she shook her head. “Sorry,” she said. “I’m a bit emotional as well.”
A colleague then stepped in to finish, as Ievgeniia, still crying, placed her hand on the boy’s shoulder. He nodded and continued on.
That moment is what makes us human.
A translation algorithm would not have stopped. It would have rendered Roman’s testimony with perfect fluency and zero hesitation. It would have delivered the words “the last time I saw my mother” just as it would the sentence “hello, my name is Roman.” Same tone. Same rhythm. No recognition.
Today, we are building a world that treats translation — and increasingly everything else — as a problem to be solved. Translation apps now handle billions of words a day. Real-time tools let tourists order coffee in any language. Babel, we are told, is finally being fixed.
All of this has its place. But translation was never just a technical challenge. It is an act of witnessing.
An interpreter does not merely convert words from one language to another. They carry meaning across the chasm between us. They hear what silences say. They make split-second ethical and semantic decisions over which synonym preserves dignity, when a pause holds more truth than a sentence, whether to soften a phrase that would shatter a survivor.
When Ievgeniia broke down in Brussels, she was not failing. She was doing her job. Her face told a room full of diplomats what no algorithm could: “This matters. This child’s suffering is real. Pay attention.”
I have spent years working in international human rights law, war crimes tribunals, genocide prevention — all the imperfect architecture we try to rebuild after atrocity. In these spaces, everything hinges on language. One word can determine whether a survivor is believed. The difference between “I saw” and “I was made to see,” or between “they did this” and “this happened.”
Roman Oleksiv has undergone 36 surgeries. Burns cover nearly half his body. He was 7 years old when that missile hit. And when he described touching his dead mother’s hair, he needed someone in that room who could hold the weight of what he was saying — not just linguistically but humanly. Ievgeniia did that. And when she could not continue, another person stepped forward.
There is a reason interpreters in trauma proceedings receive psychological support. The best ones describe their work as a sacred burden. They absorb something. They metabolize horror, so it can cross from one language to another without losing its force.
Interpreters are not alone in this either. There are moments when trauma surgeons pause before delivering devastating news, journalists choose to lower their cameras, and judges listen longer than procedure requires. These are professions where humanity is not a flaw — it is the point.
This is not inefficiency. It is care made visible.
Algorithms process language as pattern, not communion. They have no understanding that another mind exists. They do not know that when Roman said goodbye, he was not describing a social gesture — he was performing the final ritual of love he would ever share with his mother, in the rubble of a hospital.
Translation apps do serve real purposes, and generative AI is becoming more proficient every day. But we should be honest about the trade we are making. When we treat human interpreters — and any human act of care — as inefficiencies to be optimized away, we lose that pause before “the last time I saw my mother.” We lose the hand on the shoulder. We lose the tears that say: “This child is not a data point. What happened to him is an atrocity.”
My work studying crimes against humanity has taught me that some frictions should not be smoothed. Some pauses are how we recognize one another as human. They are echoes in the dark, asking: “I am still here. Are you?”
When an interpreter breaks, they are not breaking down. They are breaking open — making room for unbearable truth to enter, and for all of us to see it.
Roman deserved someone who could help us stand in his deepest pain, so that we might all lift it together.
A machine could not do that. A machine, by design, does not stop."
January 1, 2026 4:00 am CET
By Flynn Coleman
https://www.politico.eu/article/when-the-interpreter-wept-what-automation-erases-inside-europes-institutions/
#Metaglossia
#metaglossia_mundus
"More than spelling: why language & identity are the heart of Bill 218
To ensure language consistency, Senator Shelly Calvo has introduced Bill 218, legislation that would standardize the official use of the spelling "Chamoru" in Guam law and government documents.
Calvo notes that the government currently uses three different spellings to refer to the same indigenous people and language, an inconsistency she says is unusual in government practice and creates unnecessary confusion in public records, education, and policy.
The senator emphasized that the effort behind the bill is not new.
Discussions on orthography, pronunciation, and cultural accuracy have been underway for years among educators, language experts, and cultural institutions. She said the bill represents a continuation of that work—not its beginning." Friday, January 2nd 2026, 2:17 PM ChST By Destiny Cruz-Langas https://www.kuam.com/story/53350045/more-than-spelling-why-language-and-identity-are-the-heart-of-bill-218 #Metaglossia #metaglossia_mundus
"New York Gov. Kathy Hochul this month signed a bill requiring hospitals to provide language assistance services to patients. Translators who work with refugee and immigrant communities say translation services currently available can sometimes be confusing and unhelpful to patients.
WAMC’s Sajina Shrestha spoke to Yousaf Sherzad, a lead translator at Advanced Translations Services in Albany, about the new law and how existing translation services can be improved." WAMC Northeast Public Radio | By Sajina Shrestha Published December 31, 2025 at 1:15 PM EST https://www.wamc.org/news/2025-12-31/translators-hope-new-nys-bill-will-improve-on-existing-translation-services #Metaglossia #metaglossia_mundus
System 1 thinking is a near-instantaneous thinking process, while System 2 thinking is slower and requires more effort.
Where do translators/interpreters fit: System 1, System 2, or both?
"System 1 and System 2 thinking describes two distinct modes of cognitive processing introduced by Daniel Kahneman in his book Thinking, Fast and Slow. System 1 is fast, automatic, and intuitive, operating with little to no effort. This mode of thinking allows us to make quick decisions and judgments based on patterns and experiences. In contrast, System 2 is slow, deliberate, and conscious, requiring intentional effort. This type of thinking is used for complex problem-solving and analytical tasks where more thought and consideration are necessary...
The Basic Idea
When commuting to work, you always know which route to take without having to consciously think about it. You automatically walk to the subway station, habitually get off at the same stop, and walk to your office while your mind wanders. It’s effortless. However, the subway line is down today.
While your route to the subway station was intuitive, you now find yourself spending some time analyzing alternative routes to work in order to take the quickest one. Are the buses running? Is it too cold outside to walk? How much does a rideshare cost?
Our responses to these two scenarios demonstrate the differences between our instantaneous System 1 thinking and our slower, more deliberate System 2 thinking.
However, even when we think that we are being rational in our decisions, our System 1 beliefs and biases still drive many of our choices. Understanding the interplay of these two systems in our daily lives can help us become more aware of the bias in our decisions—and how we can avoid it.
“The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only the slower System 2 can construct thoughts in an orderly series of steps.”
Key Terms
System 1 Thinking: Our brains’ fast, automatic, unconscious, and emotional response to situations and stimuli. This can be in the form of absentmindedly reading text on a billboard, knowing how to tie your shoelaces without a second thought, or instinctively hopping over a puddle on the sidewalk.
System 2 Thinking: The slow, effortful, and logical mode in which our brains operate when solving more complicated problems. For example, System 2 thinking is used when looking for a friend in a crowd, parking your vehicle in a tight space, or determining the quality-to-value ratio of your take-out lunch.
Automatic Thinking: An unconscious and instinctive process of human thinking. This term can be used interchangeably with System 1 thinking.
Reasoning: Consciously using existing information to logically make a decision or reach a conclusion, a key feature of System 2 thinking.
Dual Process Model: A theory in psychology that distinguishes two thought processes in humans by describing them as unconscious and conscious, respectively."
System 1 and System 2 Thinking - The Decision Lab https://share.google/tK9xREA3ne7hVjx4X
#Metaglossia
#metaglossia_mundus
Language and land are inseparable
An ancient language sheds light on climate adaptation in the hottest region of the United States
Two years ago, a natural spring at Zuni Pueblo, New Mexico, one of the driest regions in the United States, began to disappear. It was a spring that many people, including Jim Enote, a member of the Zuni tribe, used to irrigate their fields and gardens. Enote had farmed the area for 68 consecutive years, so when his neighbours tried to restore the crucial water supply, he joined them.
Despite all their efforts, the spring dried up, a casualty of the worst megadrought the Southwest has experienced in more than a millennium. Yet Enote believed there had to be a way to quench his community's thirst. As a child, he had learned the traditional names of local places from his grandfather. And he remembered that an area only 30 metres from one of his fields bore a Zuni name that referred to water.
The spot was covered in dry grass, but Enote decided to start digging. Less than a foot down, he hit standing water. "I come from a place where the land speaks a language older than any nation, where the names of mountains, springs and canyons are instructions, stories and memories," Enote said. "As a Zuni and a farmer, I know that language and land are inseparable."
Enote practises an Indigenous method of growing fruit and vegetables called waffle gardening. It involves creating block-shaped depressions bordered by earthen walls, forming a grid that maximizes water conservation. He carried this newly discovered spring water, "careful with every drop," to his garden.
Soon, Enote noticed an elk visiting his water hole. Milkweed grew around it. Dragonflies and monarch butterflies appeared. Today it is not only his garden that is thriving; so is the surrounding landscape. Enote believes this kind of "proven climate adaptation" helped his ancestors survive cycles of extreme drought, as evidenced by the ancient pottery shards he has found around the spring. "Language is where land and memory meet, and that respect is not nostalgia, it is a responsibility. Every place name in our languages is a lesson in survival and respect," he said.
The consensus among climate scientists is clear: for a quarter of a century the Southwest has been going through the region's driest period in 1,200 years. As a result, water levels in Lakes Mead and Powell, the country's largest reservoirs, have fallen to historic lows. Megafires, such as the 2022 Calf Canyon-Hermit's Peak fire in New Mexico, have raged across the region, intensified by dry conditions. And New Mexico officials report that the state has lost up to 52 percent of its wetlands.
On top of these pressures, local aquifers continue to be depleted by the expansion of cities such as Phoenix and Tucson, while climate change reduces snowmelt in the mountains, a key source of groundwater replenishment. Farms, the largest water users in Arizona and New Mexico, have been forced to leave fields fallow.
But many Indigenous peoples of the region know that none of this is new. Tribal traditions, petroglyphs and scientific studies, including tree-ring data, tell of ancestral Puebloans who endured several extreme droughts, including one that led to the abandonment of major settlements. Those ancestors passed down knowledge and tools that have allowed their descendants to withstand today's climate hardships.
"Indigenous traditional knowledge has a vision of climate. So climate science should not merely include Indigenous knowledge, but start from it as a foundation for understanding how the world is changing," Enote said. "This is not just about climate resilience. I use the term climate respect to move from technical language to relational language. In this way, we honour the climate and the land as kin, not as commodities."
Enote believes that Indigenous knowledge could help ease the water crisis and promote sustainability in the Southwest, where more and more springs and rivers are drying up, including the Jemez River east of Zuni Pueblo. Imagine, Enote says, if tribes were consulted about the location of historic springs to keep them from being destroyed by development.
Enote, who is also founder and executive director of the Colorado Plateau Foundation, which helps fund Indigenous-led conservation and cultural preservation in this desert region, said megadrought conditions are catalyzing intertribal solutions. The Flower Hill Institute, based at Jemez Pueblo, is one of the groups supported by the foundation. In May it launched a program to honour Pueblo farmers and promote a combination of traditional and modern practices, notably for water conservation. Michael Kotutwa Johnson, a Hopi dryland farmer and natural resources expert, called the gathering a "historic event," given that the Pueblo tribes "have not been united around a key issue like this since the Pueblo Revolt of 1680."
Bryn Fragua, the institute's agricultural, technical and tribal coordinator, said it was powerful to hear different Puebloan languages spoken "in harmony" as leaders and other community members made offerings and prayers for the work about to be done. In the days before and during the event, the weather was windy, dusty, dry and hot. "After that meeting, it seemed that those spoken words and those prayers brought us rain clouds," Fragua said. "It has been raining ever since."
"In our languages, climate is not something external," Enote said. "It is the breath of the wind, the timing of the snowmelt, and the silence before the summer monsoons. The Colorado Plateau teaches us that respect is not passive. It is active care, it is ceremony, and it is choosing to listen.""
Nicolas Guillot
27.12.2025 at 21:23
https://www.especes-menacees.fr/actualites/exploiter-les-connaissances-autochtones-peut-nous-aider-a-naviguer-dans-un-climat-changeant/
#Metaglossia
#metaglossia_mundus
"Redefining AI for underserved cultures and languages
THE rapid evolution of large language models has achieved grammatical correctness across major languages, yet this represents marketing inclusivity rather than true linguistic equality. While today’s AI can construct sentences in Tagalog, Swahili or Mongolian, the output remains inconsistent and inferior to English performance. Worse, the proliferation of wrapper applications without proper localization amplifies these deficiencies, potentially causing more harm than having no AI support at all.
Our experience building Egune AI in Mongolia revealed that perfect grammar is merely the starting point. The deeper challenge is not making AI speak every language; it is making AI think in every culture. When children grow up interacting with AI that fundamentally operates in English thought patterns, they gradually adopt linear, analytical reasoning, losing the circular, contextual cognition that characterizes many Asian and Indigenous languages. This represents cognitive homogenization disguised as technological progress.
The implications extend far beyond individual users. When governments process documents through foreign AI systems, health care providers rely on AI trained on different medical traditions, and educational institutions deploy AI that misunderstands local pedagogical approaches, entire nations become digitally dependent. They participate in the AI revolution as data providers rather than value creators, feeding sensitive information into systems that extract economic benefit without proportional return.
Digital sovereignty has become essential for nations serious about their technological future. This transcends data localization requirements, though those are important. It is about controlling the AI systems that increasingly mediate citizen interactions with essential services. When critical infrastructure depends on foreign entities subject to external laws and priorities, true independence becomes impossible.
Building AI for languages with limited digital presence requires fundamental rethinking. Mongolian, spoken by three million people, represents less than 0.01 percent of internet content. Web scraping yields mostly translated content reflecting foreign thought patterns rather than authentic expression. We discovered that a million translated sentences teach less about genuine Mongolian thinking than a thousand authentic native conversations.
Our solution involved building dedicated data teams to create quality datasets. We digitized books, transcribed audio and video content, collected academic writing, and manually cleaned and corrected everything. While major AI systems train on trillions of tokens of varying quality, we focused on millions of carefully curated, culturally authentic examples. Every piece underwent native-speaker verification for natural expression. This approach took longer and cost more, but the results justified the investment: our AI understands not just Mongolian words but Mongolian thinking patterns.
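As an illustration of the "quality over quantity" approach described above, the sketch below shows the kind of filtering step such a pipeline might include: keep only examples a native speaker has verified, drop duplicates, and track provenance. It is a generic illustration under those assumptions, not Egune AI's actual tooling.
```python
# Minimal sketch of a quality-first curation step of the kind described above:
# keep only examples a native speaker has verified, drop duplicates, and keep
# track of provenance. A generic illustration, not Egune AI's actual pipeline.
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    source: str            # e.g. "digitized_book", "transcribed_audio", "web_scrape"
    native_verified: bool   # set by a human reviewer, not by a heuristic

def curate(examples: list[Example]) -> list[Example]:
    """Return deduplicated, native-speaker-verified examples only."""
    seen: set[str] = set()
    kept: list[Example] = []
    for ex in examples:
        key = " ".join(ex.text.split())  # normalise whitespace before deduplicating
        if ex.native_verified and key not in seen:
            seen.add(key)
            kept.append(ex)
    return kept

raw = [
    Example("Жинхэнэ монгол өгүүлбэр ...", "transcribed_audio", True),
    Example("Machine-translated filler ...", "web_scrape", False),
    Example("Жинхэнэ монгол өгүүлбэр ...", "digitized_book", True),  # duplicate text
]
print(len(curate(raw)))  # -> 1
```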
Sovereign AI development enables capabilities impossible with foreign systems. During Mongolia’s harsh winters, we immediately update models with emergency protocols. When legislation changes, government agencies integrate updates within days rather than months. This agility becomes impossible when dependent on global release cycles prioritizing larger markets.
Local development ensures consistent alignment with national priorities. Security vulnerabilities specific to local infrastructure receive immediate attention. Critical bugs affecting local users receive priority treatment rather than languishing in global backlogs. Most importantly, economic value from AI development remains domestic, creating jobs and building expertise rather than extracting value abroad.
Our recent launch of Egune Chat on iOS and Android platforms demonstrates practical sovereign AI implementation. Users select their geographic region, accessing AI trained on specific cultural and administrative contexts. Rural Mongolians receive responses that consider traditional practices and limited infrastructure. Urban professionals get advice relevant to modern city resources. This is not translation; it is fundamentally different training for different realities...
Building national AI capabilities no longer requires competing with trillion-parameter models. Focused, culturally optimized systems serve specific populations better than generic global solutions while requiring a fraction of the resources. A $50 million investment in sovereign AI creates more local value than $500 million spent on foreign AI subscriptions.
International collaboration need not conflict with digital sovereignty. Nations can share open-source frameworks and methodologies while maintaining control over implementations. Mongolia’s experience can inform efforts in the Philippines, while Filipino innovations in multi-dialect processing might benefit Indonesia. This creates a network of diverse, interoperable AI systems rather than monolithic global platforms.
The future of inclusive AI requires recognizing that each culture offers unique problem-solving patterns worth preserving in the digital age. True inclusivity demands AI systems that understand local context, respect sovereignty, and serve actual population needs rather than assuming universal solutions exist..." By Badral Sanlag December 28, 2025 https://www.manilatimes.net/2025/12/28/business/sunday-business-it/inclusive-ai-from-the-ground-up-redefining-ai-for-underserved-cultures-and-languages/2249949 #Metaglossia #metaglossia_mundus
"Vivre l’Afrique au Maroc: quand le football devient langage africain
Column. The Morocco 2025 edition of the CAN (Africa Cup of Nations) began well before the whistle of the opening match between Morocco and the Comoros. You can feel it in the air, in the streets decked out in colour, in the bright African flags adorning the intersections, in the African accents and languages crossing paths everywhere. You can also read it on the shirts: African jerseys rich in traditions, motifs, histories and age-old craftsmanship.
By Lahcen Haddad
29/12/2025 at 4:00 p.m.
Africa is gathered without a summit, without an official platform, without speeches. It comes together through emotion. Football is probably the greatest shared sensation common to all Africans.
But Africans have long made football speak in their own way. Football is an African language in its own right. Rhythm, improvisation, physical intensity and collective intelligence make up its shared grammar, common to all Africans. Each match reads like a text, like a palimpsest: with its intertextuality, its own rhetoric, its signifiers. Each team's style reveals its identity. African football is recognizable beyond the results. Stadiums thus become spaces for producing cultural and intercultural meaning. And the intercultural, in Africa, has never been an exception: it has always been self-evident. Africa speaks, and speaks to itself, through play, through football.
But the staging of the CAN in the Cherifian Kingdom, land of baraka and mythical empires, is not a mere exercise in soft power. It is an experience lived, celebrated, sung and danced by crowds from every corner of an Africa proud of its Africanness. The "soft power" explanation does not do justice to what is really at play. Soft power is an external gaze, an exogenous reading, a form of objectification in the Lacanian sense.
The CAN, on the contrary, belongs to an internal gaze: a recognition of oneself, of one's capacities, dreams and ambitions. Its audience is not first the world, but Africans themselves. Football there becomes a horizontal connection, cutting across spaces, communities and societies; it is not a projection but a presence. Africans do not play to be watched: they play to recognize themselves.
Recognized as one of the most hospitable countries in the world, Morocco stands out in Africa not as a mere crossroads but as a true centre of Africanness. Its openness, its constant support and its deep sense of identity have been forged by a long history, going back to the caravans of the ninth and tenth centuries that carried not only gold and salt but also cultures, religions, foodstuffs and a shared art of living.
"football operates as a space of symbolic decolonization. It cuts across imposed languages, short-circuits postcolonial bureaucracies and ignores inherited cartographies"
— Lahcen Haddad
In this context, the stadium becomes a continental home: Moroccan culture plays the role of a bridge, a connecting link in the most authentic sense of the term. There is no hierarchy of African identities here, nor any ethnocentrism inherited from the Western gaze, but a horizontal, multiple, multicultural and plural belonging. Being the host then becomes an act of recognition towards the African land and its mosaic of symbolic geographies, intertwined memories and shared narratives.
Africa's postcolonial condition has not only fragmented territories; it has put political sensibilities out of tune, disjointed temporalities and installed a lasting dissonance between peoples and the inherited forms of the state. The borders drawn with a ruler by colonial history have left memorial scars, political deadlocks and still-active existential wounds.
Against these imposed lines, football operates as a space of symbolic decolonization. It cuts across imposed languages, short-circuits postcolonial bureaucracies and ignores inherited cartographies. It recognizes neither metropolitan centre nor subaltern periphery. African unity is not proclaimed at summits or in technocratic communiqués: it is made from below, in the stands, in bodies in motion, in shared songs. That is where a decolonized national pride expresses itself, neither folklorized nor mimicking Western nationalism, but lived, situated, embodied.
Africa gathers more in song than in speeches, in rhythm than in institutions, in play than in norms. Yet this unity should not be idealized: it remains temporary, imperfect, sometimes even contradictory. But it is precisely because it is lived that it is true. The power of the moment can, at times, surpass that of permanence. We are faced here with a postmodern, almost uncanny moment: one that opens a horizon of dreaming, even if it is ephemeral, fragile and destined to vanish.
Of course, memory remains. It attaches itself to smartphones as much as to the collective imagination. Memory of moments lived, sometimes to the point of taking the place of the thing itself, not as falsification but as a refiguration of the real. This play of memory, made of goals already part of the anthology, like Ayoub El Kaabi's, of titanic efforts carried by entire teams, of precise moves and feats embodied by great players such as Riyad Mahrez, Victor Osimhen, Omar Marmoush, Ademola Lookman or Achraf Hakimi, is inscribed durably in supporters' consciousness, even when some have not yet fully taken part in the story on the pitch.
If the lived union dissipates with time, memory persists through narration. It feeds on repetitions, on sharing, on reactivated stories. A community of memory is thus formed, not to freeze the moment but to give it symbolic continuity. In Ricoeur's sense, football becomes narrative: it articulates event, memory and identity. Unity does not survive as a lasting political fact but as a narrative horizon, kept alive by a collective imagination that insists, stubbornly, on the possibility of union.
Football does not solve Africa's problems, but it shows something essential: a shared imagination, a capacity to be in common, to live together, to be good neighbours. The CAN is a mirror, not a blueprint. Morocco 2025-2026 is an enchanting moment in which Africa looks at itself in the mirror of the pitch and believes in itself.
In a divided, fragmented world without a compass, the CAN reminds us that Africa already exists, every time a ball circulates freely among its peoples. Despite borders open or closed, airspaces liberated or locked, despite media talk that is at times integrative and tolerant, at times saturated with hatred, the ball defies resentments and petty calculations. It keeps moving through every journey, in spite of the fragmentary and fragmenting obsessions of certain leaders.
The ball creates the dream and opens a horizon of coexistence: that of good neighbourliness, of an African unity deeply desired, not by institutions, but by the African peoples themselves.
By Lahcen Haddad
29/12/2025"
https://fr.le360.ma/economie/vivre-lafrique-au-maroc-quand-le-football-devient-langage-africain_TPMCMZRR6ZFMZNYFYCIPFO2EOM/
#Metaglossia
#metaglossia_mundus
"A row erupted over the rising cost of translators in the UK benefits system after far-right, anti-Islam activist Tommy Robinson accused the government of wasting millions of pounds of public money on people who "can't speak English.
In a post on X, Robinson criticised the use of taxpayer-funded interpreters for migrants and pushed for their deportation: “If they can't speak English, then they shouldn't be here anyway. Absolute p***take. Deport.”
— TRobinsonNewEra (@TRobinsonNewEra)
His comments came as a new report called for migrants to be banned from using free translation services when claiming benefits. The study was published by the Policy Exchange think tank, according to the Daily Mail.
It argues that the ability to speak English should be a basic requirement or bare minimum for accessing the welfare system.
The report follows a surge in benefit claims and says the government should stop offering free translators in most civil cases. It describes the benefits system as part of a “social contract” with society and its people, adding: “Part of this is the ability to converse in the official, national language.”
According to official figures cited in the study, spending on translation services in civil cases rose by 80 per cent in the three years after the Covid-19 pandemic, reaching £12.8 million last year.
Claimants are currently entitled to free translation to help them appeal decisions denying benefits such as Personal Independence Payment and Employment and Support Allowance.
Should claimants pay for their own translators?
The report also says that in future, claimants who cannot speak English should be expected to pay for their own interpreters.
It recommends that free translation should remain only for deaf people and for criminal cases, where “freedom and liberty are on the line”.
The proposal is to curb the growing welfare bill and reduce the influence of courts over benefit rules. The report argues that courts have gradually expanded the rules on who qualifies for benefits, driving up overall costs. It says Parliament has failed to assert control, with MPs unwilling to reverse decisions made by judges...
Tommy Robinson is known for his strong opposition to immigration, especially from Muslim-majority countries. He argues that mass migration compromises British identity, security, and social cohesion. In September, Robinson led major anti-immigration rallies in London that included tens of thousands of supporters and sometimes turned violent.
About the Author
TOI World Desk"
https://timesofindia.indiatimes.com/world/uk/if-they-cant-speak-english-tommy-robinson-slams-12-million-going-to-translators-for-migrants-in-uk/articleshow/126230426.cms
#Metaglossia
#metaglossia_mundus
"Uber has introduced a new in-app translation tool designed to help drivers and passengers communicate more easily, even when they don’t share a common language.
According to a report by Mashable, the ride-hailing platform has launched an automatic translation feature that supports more than 100 languages, allowing messages sent through the Uber app to be translated in real time. The feature is aimed at improving communication around pick-ups, directions and delays, particularly in busy or unfamiliar locations.
Uber says the tool works directly within the app’s messaging system, meaning neither drivers nor passengers need to switch to external translation apps. Messages are translated automatically, helping both sides understand key information such as where to meet, which entrance to use, or whether there are any issues with traffic or access.
The company has positioned the update as part of a wider push to improve the overall ride experience, especially in cities and regions where language barriers are more common.
What the new translation tool does
The new feature allows:
- Automatic translation of in-app messages
- Support for over 100 languages
- Clearer communication during pick-ups and drop-offs
Uber believes this will reduce confusion, missed connections and cancelled trips caused by misunderstandings between drivers and passengers.
DM News Commentary
From a private hire driver’s point of view, better communication tools are always welcome. Language barriers can be a genuine challenge, particularly in city centres, airports and tourist hotspots where pick-up instructions can make or break a job.
That said, while translation tools may help smooth out day-to-day issues, they don’t address some of the wider frustrations drivers often raise — such as pay rates and service fees, etc.
For passengers, especially visitors to the UK, clearer communication could make app-based travel less stressful — provided the translations are accurate and reliable in real-world conditions." matthew, Dec 29, 2025 https://dmnews.co.uk/uber-rolls-out-real-time-translation-feature-to-bridge-language-gaps-for-drivers-and-passengers/ #Metaglossia #metaglossia_mundus
"De Christine de Pizan à Simin Daneshvar, quarante traductrices pionnières
Elles décident de signer leurs traductions au lieu de rester anonymes, comme il est d’usage à l’époque dans un monde régi par les hommes. Loin de se confiner au travail intellectuel que représente la traduction et l’écriture (la plupart sont aussi écrivaines), elles voyagent, luttent pour les droits des femmes et mènent une vie défiant les conventions de leur temps.
Quelles œuvres traduisent-elles? Des œuvres littéraires, bien sûr, mais aussi des publications scientifiques et politiques. Pourquoi traduisent-elles ces œuvres?
À quels défis sont-elles confrontées pendant cinq siècles? Quels sont leurs accomplissements envers et contre tout? Quelle est leur influence sur la société en tant que figures littéraires, scientifiques et politiques? Quelle est leur contribution à la cause des droits des femmes?
Voici les courtes biographies de quarante traductrices/autrices rédigées avec l’aide de Wikipédia - ces femmes disposent toutes d’une page Wikipédia (en français ou dans d’autres langues) grâce aux multiples collaborateurs/collaboratrices de l’encyclopédie en ligne. Elles méritent maintenant des biographies complètes, des documentaires, des expositions et des œuvres d’art en tous genres. Place à un voyage virtuel aussi passionnant qu’incomplet.
Crédits illustration : Les chimistes Claudine Picardet (qui tient un livre à la main) et Marie-Anne Paulze Lavoisier (à sa droite), dont les traductions sont essentielles lors de la Révolution chimique menée par André Lavoisier (assis) entouré de ses collègues. Portrait de groupe anonyme disponible dans Wikimedia...
Le 29/12/2025
par Marie Lebert
Contact : marie.lebert@gmail.com
https://actualitte.com/article/128408/auteurs/de-christine-de-pizan-a-simin-daneshvar-quarante-traductrices-pionnieres
#Metaglossia
#metaglossia_mundus
"Abstract
Background: Psychiatric interpreters in Japan, many of whom work on a voluntary basis, play a vital role in bridging critical language barriers for the country’s immigrants while facing unique psychological challenges within a non-certified system.
Aim: This qualitative study aimed to explore the psychological experiences and support needs of volunteer psychiatric interpreters in Japan.
Methods: Semistructured interviews were conducted with 15 medical interpreters recruited via a national dispatch organization (10/15, 66.7%), local networks (3/15, 20%), and hospital referrals (2/15, 13.3%). Data were analyzed using the constant comparative method.
Results: Four major themes emerged: inhibiting factors (e.g., emotional exhaustion, lack of support), facilitating factors (e.g., agency assurance, patient approval), interpreters’ personal qualities (e.g., resilience, altruism), and the pursuit of a professional foundation (e.g., desire for formal training). Participants perceived an emotional toll greater than in general medical interpreting, intensified by Japan’s volunteer-based, under-supported system. None of the participants were full-time psychiatric interpreters, limiting the applicability to highly specialized settings.
Conclusions: Structured psychological training and support networks are recommended to enhance interpreter well-being and improve access to mental health services for immigrants, thereby contributing to psychiatric care, public health outcomes, and cross-cultural communication."
https://www.cureus.com/articles/445123-psychological-experiences-and-support-needs-of-volunteer-psychiatric-interpreters-in-japan-a-qualitative-study#!/
#Metaglossia
#metaglossia_mundus
|