Scooped by
Charles Tiayon
April 28, 2023 11:30 PM
In new research, a team of linguists asked native speakers of various languages to rate recordings of other languages for pleasantness. One from New Guinea rose to the top in a recent study.

Credit: Annelisa Leinbach / Big Think; Adobe Stock

KEY TAKEAWAYS
- It's a common stereotype that certain languages sound beautiful, but is this actually true? Researchers asked 820 participants to listen to audio clips spoken in one of 228 languages and rate the language's pleasantness.
- They found negligible differences between the pleasantness scores of each language, suggesting that certain languages are not intrinsically beautiful to the human ear.
- Familiarity with a language tends to make it more pleasant to the listener.
Are certain languages intrinsically beautiful?

It’s often said that French is silky, German is brutish, Italian is sexy, and Mandarin is angry. But do those stereotypes of these diverse languages hold empirically across cultures? Are some languages intrinsically beautiful?

To find out, a trio of researchers from Lund University in Sweden and the Russian Academy of Sciences recruited 820 participants from the research subject site Prolific to listen to 50 spoken recordings randomly selected from 228 languages. The audio clips were taken from the film Jesus, which has been translated into more than 2,000 languages; for this reason, it is commonly used in linguistics research. The subjects were native speakers of English, Chinese (either Mandarin, Hakka, or Cantonese), or Semitic languages (Arabic, Hebrew, or Maltese).

After listening to different recordings, they were asked, “How much do you like the sound of this language?” They could then respond on a scale ranging from “not at all” to “very much.” Participants were also asked if they recognized the language. If they marked yes, they were asked to identify it.

The familiarity effect

Analyzing data from the surveys, the researchers found that subjects rated languages that they recognized 12.2% higher, even when they had actually misidentified the language. The researchers expected this strong familiarity effect. So how did participants score unrecognized languages?
“There were only negligible differences between world regions when the language was not recognized,” the authors reported, “suggesting that languages spoken in different parts of the world do not sound intrinsically beautiful or unpleasant, regardless of the listeners’ own first language.” Controlling for familiarity, the vast majority of languages scored within 2% to 3% of each other in pleasantness.

Though not statistically separate from the pack, a couple of languages did surface at the top and bottom. At the very top was Tok Pisin, an English-adjacent Creole language spoken throughout Papua New Guinea. Six percentage points down from Tok Pisin, at the bottom, was Chechen, which is spoken by approximately 1.7 million people in the North Caucasus of Eastern Europe.

The researchers also monitored different acoustic characteristics of the recordings to see if these would affect how the languages were rated. Overall, there was a possible slight preference for nontonal languages, they found. (In tonal languages, altering the tone of a spoken word changes the word’s meaning.) The researchers also noticed that increasingly higher vocal pitches slightly lowered the score of the linked language. Additionally, if the clip featured a male speaker, the associated language scored about 4 points lower. On the other hand, if the clip featured a “breathy female voice,” the language was rated as much more pleasant.

“Voices are more appealing if they sound healthy and sex-typical,” the researchers commented, “presumably because we have evolved to look for signs of fitness in the voice, creating some universal standards of auditory beauty analogous to the appeal of… symmetrical faces and unblemished skin.”

The experiment was fairly well designed but had its drawbacks. For example, it could have benefited from a greater number of raters from additional language backgrounds.
Moreover, the spoken phrases they rated could have been better standardized to control for differences in speaking styles, loudness, and vocal characteristics. Still, overall, the study constitutes a fascinating exploration of the spoken word, revealing that a language’s beauty is likely not intrinsic, but rather exists in the ear of the listener.
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude, all vying to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with far smaller computational demands. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
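The skew described above is, at bottom, a counting fact about training data. As a purely illustrative sketch (the sentences and language tags below are invented, not drawn from any real corpus), a few lines of Python show how token share concentrates in the dominant language:

```python
from collections import Counter

# Invented mini-corpus of (language, sentence) pairs, standing in
# for the web text that large language models are trained on.
corpus = [
    ("en", "the model answers fluently"),
    ("en", "benchmarks favour english prompts"),
    ("en", "most web text is english"),
    ("sw", "lugha yangu ni kiswahili"),
    ("xh", "ndithetha isixhosa"),
]

# Count tokens contributed by each language.
tokens = Counter()
for lang, sentence in corpus:
    tokens[lang] += len(sentence.split())

# English dominates even this tiny sample; at web scale the
# imbalance is far more extreme.
total = sum(tokens.values())
shares = {lang: n / total for lang, n in tokens.items()}
print(shares)
```

A model trained to match such a distribution inevitably sees far fewer examples of the minority languages, which is one mechanical reason it stumbles on them.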
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University and the Qatar Computing Research Institute have developed the Fanar platform and its LLMs, Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with 235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister institutes in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion-parameter model developed by the Kenyan foundation Jacaranda Health to give new and expectant mothers AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"Literary Notes: Glossaries of Federal Board’s Urdu textbooks: inaccurate, misleading
Rauf Parekh Published July 21, 2025 Updated a day ago
MUHAMMAD Ahsan Khan is a 90-year-old scholar from Lahore. A voracious reader, expert lexicographer and true connoisseur of the Urdu language, he loves words. That’s why whenever he comes across some erroneous expressions or incorrectly explained words of Urdu, he gets irritated, makes a phone call to me and expresses his concern.
Last week, too, Ahsan Khan Sahib gave me a ring. Sounding shocked, he lamented that even the textbooks of Urdu are now full of errors and the glossaries in them are misleading. He then asked me to have a look at some Urdu textbooks, especially the ones published by the Federal Textbook Board in collaboration with the National Book Foundation, Islamabad.
On his suggestion, I got copies of Urdu textbooks for class IX, X, XI and XII, published by the Federal Textbook Board, and leafed through them. The back title of each book says “Approved by Government of Pakistan; Ministry of Federal Education & Professional Training; National Curriculum Council Secretariat”.
Let us have a quick look at the glossaries appended to these textbooks, published by the Federal Textbook Board, to see why Ahsan Sahib was so annoyed. The book titled Model Darsi Kitab (Model Textbook): Urdu, for Class XI, has listed in its glossary, for instance, “pesh khaima” (page 130). The meanings given are “nateeja, kisi kaam se pehle aane vali soorat”, which can be translated into English as ‘result, something happening before some work is done’. Aside from what pesh khaima actually means, what makes one wonder is how a result can occur before something is done.
Another entry on the same page is “tees maar khani dikhana”, and instead of an explanation the same expression is repeated as the meaning. It simply means that nobody has bothered even to proofread it. On the next page, an entry is “dastak”. The meanings given are “darvaza khatkhatana”. What the compilers could not understand is that the word dastak means ‘knock at a door’ (noun) and not ‘to knock on a door’ (verb), but the definition given has turned the noun into an infinitive.
“Laa ubaali tabiyet” is the entry on page 131. The meanings given are “josh vala tabiyet”. Firstly, tabiyet is a feminine noun but the compilers think it is masculine, hence vala, instead of vali. Secondly, the meanings given are exactly opposite: laa ubaali is an Arabic phrase which literally means ‘I don’t care’. The correct meaning would be ‘careless disposition’.
The book for class X says “khalish” means “khwahish jo poori na ho” (page 167). But the literal meaning of khalish, a Persian word, is ‘prick of a thorn’, and it signifies a mental prick, continued resentment or concern. “Safed posh”, says the book on page 169, is someone clothed in white. But it is a metaphor, and it refers to someone not rich but maintaining a certain standard and good reputation. The class IX book says “maoof hona” means “sochne smajhne ke qabil” (page 122). Apparently, the words na hona are missing, and this has reversed the sense. On page 123, the explanation of the word “bhaar” is incomplete and the word bhatti is spelt as “bathi”, again proving that proofreading is something unheard of at the Federal Board.
Several entries in these books are given in plural form or in the oblique case, while a glossary, and a dictionary too, must list words in the singular and avoid oblique cases. A glossary sometimes needs to explain different shades of meaning or use synonyms, separated with commas or semicolons. But the compilers are, perhaps, laa ubaali (according to their own definition of the phrase), leaving punctuation marks out and listing more than one shade of meaning in one go, without separating the nuances. Josh, you know!
Here I have restricted myself to the glossaries only, though there are many other lapses in the text as well. For instance, in the class X book, Zahra Nigah’s name has been written as “Zahra Niga” in the table of contents, but in the glossary it has been spelt as “Zahra Nigar” (page 173). The signs showing tashdeed (phonetic doubling of consonants) and izaafat (the enclitic possessive or adjectival compound) are also missing from the glossaries, making pronunciation difficult for the students.
The facts mentioned here are based on a cursory reading and a thorough checking may reveal that much of the glossary in each book is inaccurate and misleading. One hopes the Federal Board would contact a senior lexicographer like Ahsan Khan or Saleemur Rahman for putting the books in order.
Two of these books, which I purchased from the National Book Foundation’s Karachi office, have been stamped with the words “Test Edition”, with the print line saying that about 100,000 or so copies have been printed. One wonders if textbooks can be published in large quantities, sold and taught on a trial-and-error basis!
drraufparekh@yahoo.com
Published in Dawn, July 21st, 2025"
https://www.dawn.com/news/1925538
#metaglossia_mundus
Hymnbook translation team seeks people fluent in Bulgarian, Czech, Danish, Dutch, Estonian, European Portuguese, Finnish, Haitian, Hungarian, Latvian, Lithuanian, Norwegian, Polish, Romanian, Swedish, Thai and Ukrainian.
"21 July 2025 - SALT LAKE CITY
Call for Applications: Paid and Volunteer Roles for Translations of ‘Hymns—For Home and Church’
Individuals fluent in Bulgarian, Czech, Danish, Dutch, Estonian, European Portuguese, Finnish, Haitian, Hungarian, Latvian, Lithuanian, Norwegian, Polish, Romanian, Swedish, Thai and Ukrainian are needed
On June 18, 2018, The Church of Jesus Christ of Latter-day Saints announced plans to publish a new unified hymnal. The new hymnbook, “Hymns—For Home and Church,” is a major, multi-year project involving thousands of employees and volunteers worldwide. The global hymnbook will become available digitally and in print in dozens of languages over the coming years.
Languages and Roles
The hymnbook language translation team is seeking talented individuals fluent in Bulgarian, Czech, Danish, Dutch, Estonian, European Portuguese, Finnish, Haitian, Hungarian, Latvian, Lithuanian, Norwegian, Polish, Romanian, Swedish, Thai and Ukrainian for three open roles:
Translator or content reviewer (paid, independent contractor, 5+ hours a week)
Singer (paid, independent contractor, 1–10 hours a week)
Community feedback volunteer (unpaid, up to 1 hour a week)
Latter-day Saints and friends of the Church with any background in translation, music, poetry or other creative writing experience in these languages are needed (additional languages may be added later), and all levels of experience are welcome to apply as soon as possible. Translation team members may live anywhere.
Job Descriptions
Translator or Content Reviewer
A translator is responsible for creating the initial translation of a song from English into the target language and for incorporating feedback from reviewers.
A content reviewer is responsible for reviewing the work of a translator, with a focus on meaning and language acceptability.
Applicants should possess solid English skills, be able to write creatively in the target language, have a general familiarity with music and poetry, possess basic technical skills, meet deadlines consistently, and communicate clearly and regularly with other team members.
Singer
A singer is responsible for singing translations of hymns and creating an unofficial audio recording throughout the translation process. These recordings will not be officially published by the Church. They will only be used internally by the translation team and may be shared with volunteers via a survey to solicit feedback on the translations. (The Church will recruit singers for the published audio versions of the hymnbook later.)
A singer does not need to have professional vocal experience, but should generally be familiar with music, be able to sing a melody clearly and on pitch, meet deadlines, and communicate clearly and regularly with other team members.
Community Feedback Volunteer
A community feedback volunteer is asked to complete 5- to 20-minute digital surveys in the target language for hymn translations. Surveys include questions about the meaning, language, and musicality of a hymn translation, and the volunteer has the opportunity to provide feedback before the hymn is published. Where possible, some projects have volunteers meet to sing and offer feedback in person.
Access to an electronic device (phone, tablet, computer) is required. English is not required.
See this link for more details and a link to the online application form."
https://newsroom.churchofjesuschrist.org/article/call-for-applications-paid-volunteer-roles-translations-hymns-for-home-and-church
#metaglossia_mundus
"Translating Günter Grass with Olivier Mannoni
Translation workshop
Thu, 02.10.2025
7:00–8:30 p.m.
Discovering “Prendre la pose”, a posthumous narrative
Goethe-Institut Paris, Paris
Language
In French and German
Price
Free admission
Registration required by email: bibliotheque-paris@goethe.de
Part of the series: Anna & Günter Grass"
https://www.goethe.de/ins/fr/fr/sta/par/ver.cfm?event_id=26796527
#metaglossia_mundus
"Tangier – The École supérieure Roi Fahd de Traduction de Tanger (ESRFT) is organizing, on 11 and 12 November, an international congress on the theme “The Moroccan Sahara in the mirror of translation: questions and approaches”.
This scientific congress, to be held on the occasion of the celebration of the 50th anniversary of the glorious Green March, aims to deepen reflection on the role of translation in affirming Morocco’s historical rights, and to explore translators’ contributions to disseminating the historical and legal evidence relating to this question, an ESRFT press release says...
It will bring together eminent researchers and academics to explore a wide range of historical, diplomatic, legal and economic texts and archival documents, national and foreign, that attest to the Moroccan character of the Sahara, with the aim of raising intellectual and cultural questions related to the field of translation.
“This congress will be a genuine advocacy effort for our national cause, thanks to scientific contributions that will highlight the role of translation in upholding the Kingdom’s sovereignty over all of its southern territories,” the same source notes, adding that the event will be an occasion to affirm the role of translation in supporting the Moroccan character of the Sahara.
The school recalls that translation has continually played a strategic role in defending national causes and extending their reach internationally, stressing that, thanks to its effectiveness as an intercultural mediator, it helps to highlight the fundamental issues preoccupying peoples and nations, while strengthening official positions in various international forums through historical and legal evidence.
“The question of the Moroccan Sahara perfectly illustrates this reality, asserting itself beyond the political, economic and diplomatic challenges and dilemmas of North Africa. It has thus achieved a remarkable string of successes, occupying a privileged place before international legal bodies,” the same source adds.
In this regard, foreign historical archives offer an inexhaustible source of French, Spanish, German, British and American documents, full of irrefutable evidence supporting this legitimacy.
They also provide testimonies and confessions from Moroccans held in the Tindouf camps, which constitute an essential body of material that deserves to be explored and examined through the act of translation.
Also at issue are other texts from colonial anthropology and sociology, as well as travel narratives written by travellers who came to Morocco for a variety of reasons."
21 July 2025
https://www.mapexpress.ma/actualite/culture-et-medias/congres-international-sahara-marocain-au-miroir-traduction-les-11-12-novembre-tanger/
#metaglossia_mundus
"Translator Nguyen Le Chi honoured as a ‘Friend of Chinese Literature’
On 21 July, in Nanjing (China), translator Nguyen Le Chi, founder of the joint-stock company Chibooks, received the title of ‘Friend of Chinese Literature’ from Truong Hong Sam, president of the China Writers Association. This is the first time a Vietnamese translator has received the title.
Hà Nội Mới
21/07/2025
The title of ‘Friend of Chinese Literature’ recognizes the work of translating Chinese literature carried out by seasoned literary translators from various countries, selected by the China Writers Association. Nguyen Le Chi received the title after more than 25 years of hard work in translation and publishing, which has brought many Chinese literary works to Vietnamese readers.
Translators honoured with the title of ‘Friends of Chinese Literature’. Photo: Chibooks
Alongside Nguyen Le Chi, 14 other foreign translators also received the title at the 7th International Conference on the Translation of Chinese Literature for Sinologists, held from 20 to 24 July in Nanjing. Organized every two years by the China Writers Association since 2010, the conference brought together many renowned writers and translators from around the world.
Under the theme “Translation for the Future”, this year’s conference features 39 leading Chinese writers, including Liu Zhenyun, Dongxi and Tat Phiyu, as well as 39 literary translators from countries and territories including Vietnam, Thailand, Korea, Japan, Iran, Italy, Mexico, Spain, Turkey, the Netherlands and Poland.
Translator Nguyen Le Chi with the certificate of ‘Friend of Chinese Literature’. Photo: Chibooks
Speaking to the Chinese press, Nguyen Le Chi said: “Reading and translating beautiful stories has been my lifelong passion. Being able to access and translate excellent literary works, especially Chinese ones, is a great stroke of luck. I hope to continue my journey of seeking out, discovering and sharing precious stories, serving as a literary bridge between Vietnamese and Chinese readers. At the same time, I also hope to introduce Vietnamese literature to readers in China and around the world.”"
https://www.vietnam.vn/fr/dich-gia-nguyen-le-chi-duoc-vinh-danh-la-nguoi-ban-cua-van-hoc-trung-quoc
#metaglossia_mundus
In the universe of AI-generated texts, there are words that come up again and again...
"The words that give away generative Artificial Intelligence (AI): a plea for lexical vigilance
21/07/2025
Ethics, Professional Conduct and Values; Professional Practices
Filed under Education, evolution, Training, social work
In the universe of texts generated by artificial intelligence, certain words appear with such frequency that they become signatures. “Défi” (“challenge”), “crucial”, “mettre en lumière” (“shed light on”): these expressions, ubiquitous in some writing, are as intriguing as they are revealing of the deep mechanics of automatic text generation.
Why do these terms come up so often? Are they the product of an algorithmic bias? Are they the consequence of the influence of English-to-French translation? Are they the result of a desire to give standardized statements some relief? It is time to question this tendency and to call for a more conscious use of language, at a moment when AI is entering the production of social and journalistic content.
The AI’s imprint on vocabulary: from repetition to standardization
J’ai pu constater que des textes produits par des modèles d’IA révèlent une surreprésentation de certains mots et expressions. Il apparait dans notre secteur que des termes comme “crucial”, “défi”, “mettre en lumière”, mais aussi “pivotal”, “insights”, “delve into” (en anglais), sont utilisés bien plus fréquemment par les IA que par les humains.
Cette surutilisation n’est pas anodine : elle résulte du mode de fonctionnement même des modèles de langage, qui s’appuient sur des probabilités pour prédire le mot le plus “attendu” à chaque étape de la génération de texte.
Vous le savez sans doute, les modèles d’IA sont entraînés sur d’immenses corpus. Dans ceux-ci, certains mots sont déjà surreprésentés dans des contextes formels ou académiques. Ils reproduisent donc les schémas linguistiques dominants, privilégiant des termes perçus comme sérieux, universels.
Ainsi les mots comme “crucial” ou “défi” servent à donner de l’importance à un propos. Ils permettent de dramatiser ou de structurer un argumentaire. En fait, cela correspondrait à une attente implicite des textes institutionnels ou journalistiques.
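The probability-driven word choice described above can be illustrated with a toy counting model: a bigram table rather than a neural network, trained here on a mini-corpus invented purely for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count, for each word, which word follows it and how often."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def most_expected(model, prev: str) -> str:
    """Return the most frequent continuation of `prev`, as a greedy decoder would."""
    counts = model.get(prev)
    return counts.most_common(1)[0][0] if counts else ""
```

Trained on enough institutional prose, a greedy decoder of this kind will keep re-emitting exactly the formulas that dominate its corpus, which is the mechanism the article describes.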
Machine translation: a driver of uniformity?
AI systems translate your request into English, then translate their answers back into French. So do not be surprised if the standard responses lean on Anglo-Saxon terms. The same goes for academic sources: the AI will surface work by researchers you have never heard of, simply because they work in the United States. Their work may never have been translated, and some never rose above local renown. Yet the AI gives them outsized weight compared with researchers from France or other European countries. This, too, is how a cultural dominance is set to spread.
The question of English-to-French translation is central. AI systems, designed mostly in English and trained on English-language texts, then translate their output into other languages, often literally. Thus "challenge" becomes "défi", "crucial" stays "crucial", and "shed light on" becomes "mettre en lumière". This direct translation encourages the recurrence of certain expressions, sometimes at the expense of each language's own stylistic richness. In short, we risk seeing our language grow poorer as we abandon the nuances and synonyms of these catch-all words, which carry general ideas in which everyone finds what they already think. They are consensus words.
The limits of neural translation
It must be acknowledged that the progress of machine translation is undeniable. Thanks to certain translation tools, we can now converse in many languages. All of them pass through an English translation before being translated again into the target language. The problem is that the models still struggle to capture every cultural and idiomatic nuance.
They favor lexical safety: better to choose an all-purpose word than to risk an awkward turn of phrase. This bias shows up in French-language output, where style becomes uniform and spontaneity is lost.
One study explains that AI translation, now mechanized, is no longer able to analyze the deeper meaning of an author's message. This leads the AI into translation errors that human translation avoids.
The standardization of vocabulary is not merely a stylistic detail. It raises the question of the diversity of voices and of AI's capacity to reflect the plurality of human experience. By multiplying "crucial challenges" and instances of "shedding light", we impoverish the debate, smooth away the rough edges of reality, and make discourse interchangeable and predictable.
The effect on the reader's perception
Faced with texts in which the same formulas recur endlessly, readers eventually develop a kind of weariness, even mistrust. That is certainly my case! Some tools for detecting AI-generated text in fact rely on analyzing the frequency of these words to identify a text's origin.
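Frequency-based detection heuristics of the kind mentioned above can be sketched in a few lines; the marker list below is a tiny illustrative sample, not the lexicon of any real detector.

```python
import re
from collections import Counter

# Illustrative "AI marker" words from the article; real detectors use
# much larger, statistically derived lexicons.
MARKERS = {"crucial", "défi", "pivotal", "insights", "delve"}

def marker_rate(text: str) -> float:
    """Return marker-word occurrences per 1,000 tokens of the input text."""
    tokens = re.findall(r"\w+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[m] for m in MARKERS)
    return 1000 * hits / len(tokens)
```

A high rate does not prove machine authorship, of course; it is one weak signal among several that such tools combine.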
Far from strengthening credibility, this standardization undermines the authenticity of what is said. It must be acknowledged that the web is now saturated with articles written by artificial intelligence that, moreover, are not identified as such. It is "open bar" for anyone who wants to push whichever theses suit them. That is rather a problem.
We must learn to restore nuance
In the field of social work, where every human situation is singular, language must be alive, nuanced, and respectful of lived experience. Are we going to see social-work reports that almost all use the same words, the same turns of phrase, the same arguments? I fear so. And we will no doubt have to resist it.
Social workers and other helping professionals may have a role to play in preserving this linguistic diversity. Their experience, their ability to listen, to reformulate, to find words for the unsayable, are bulwarks against the impoverishment of discourse.
Let us value human expertise
It seems worth recalling how essential it is that AI not replace sensitivity and creativity. We must keep our ability to grasp the subtleties of our language. Our words are not those of technocracy or of marketing. Social-sector professionals, in constant interaction with people from varied backgrounds, must go on knowing how to adapt their vocabulary.
Despite the lure of AI, they must avoid clichés and give meaning to every word. Their work deserves recognition and support, especially as digital tools gain ground in the sector. Otherwise, it is the people they serve who will suffer.
Rethinking our relationship to language
The point, of course, is not to ban certain words, but to question how we use them. We all stand to gain from enriching our lexical palette. We must cultivate diversity of style. Every writer, whether human or AI-assisted, should be invited to exercise discernment. We can vary our formulations and prefer precision to automated emphasis. Let us not yield to the sirens of technology in the name of an intellectual convenience that could turn against us.
A few avenues for action
I believe this subject needs attention today. How? Four steps seem necessary to me for dealing with what AI offers us:
First, become aware of AI's "marker" words in the field of social work and use them sparingly.
Favor human proofreading, especially in sensitive or highly relational contexts.
To that end, keep training professionals in nuanced, creative writing, for example through AI-free writing workshops.
Finally, managers can also support initiatives that showcase the diversity of voices, styles, and experiences.
Conclusion: for a living language in the service of social ties
Lexical vigilance is not a luxury but a necessity in a world where artificial intelligence is trying to impose itself everywhere. What is at stake is preserving the richness of our language and preventing discourse from freezing into ready-made formulas.
It is also a way of giving humans their full place in the production of meaning. Social workers, supported by committed professionals, are the guarantors of an authentic voice, capable of expressing the world's complexity without hiding behind standardized words.
That is the task we must collectively take up, so that language remains a tool of emancipation, dialogue, and social transformation.
https://dubasque.org/les-mots-qui-trahissent-lintelligence-artificielle-ia-generative-plaidoyer-pour-une-vigilance-lexicale/
#metaglossia_mundus
New study explores ethical and technical advances in transcribing early modern texts using HTR tools like Transkribus.
"Early modern text transcription revolutionized by ethical machine learning tools
Over recent years, digitization efforts have made sixteenth- and seventeenth-century printed books more widely available than ever before. Scholars are now able to search digital transcriptions for keywords without leaving their desks or having to visit physical archives. Still, as easy as access is, most digitized material remains untranscribed due to limitations of time, labor, and funds.
A new article published in The Sixteenth Century Journal by Serena Strecker and Kimberly Lifton addresses both the technical and the ethical dimensions of this issue. The authors discuss alternatives to traditional transcription methods, which often relied on outsourced laborers—such as graduate students or workers—to manually transcribe historical texts.
Optical Character Recognition (OCR) software, while effective for transcribing late 19th- and 20th-century texts, is ill-suited to the inconsistencies common in early modern print. Early modern scholars have thus turned increasingly to Handwritten Text Recognition (HTR) technology. Transkribus, the most effective HTR software, supports both publicly available transcription models and personally trained ones, offering a new solution to the transcription challenge.
Strecker and Lifton conducted a case study using Transkribus on a sample of four sixteenth-century German exempla collections. Their experiments showed that even publicly available HTR models can generate highly accurate transcriptions of early modern printed text. Additionally, if scholars use Transkribus's public models to generate training data, they can develop their own models tailored to their source materials in a five-step process.
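Transcription accuracy of this kind is commonly measured as Character Error Rate (CER): the edit distance between the model's output and a ground-truth transcription, normalized by the reference length. A minimal sketch:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[len(b)]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance normalized by reference length."""
    if not reference:
        return 0.0 if not hypothesis else 1.0
    return levenshtein(reference, hypothesis) / len(reference)
```

A CER of 0.05, for example, means roughly one character error in every twenty characters of ground truth.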
Handwriting by Wilhelm Moritz Keferstein around 1864, examples of letters extracted from the handwritten chronicle of the Zoological Museum of Göttingen. Credit: F. Welter-Schultes
This approach not only maximizes transcription accuracy but also guarantees ethical compliance. It is “no longer necessary nor desirable” to employ outsourced workers, the authors argue. Instead, they promote a shift toward empowering individual researchers to produce their own transcriptions, which avoids reinforcing inequalities in academia and reproducing the long-lasting effects of colonial labor practices."
by Dario Radley July 21, 2025
https://archaeologymag.com/2025/07/early-modern-text-transcription-machine-learning-tools/
#metaglossia_mundus
"Real-time English-French translations can be provided for 500 to 1,000 financial news stories per day, the company claims.
The service effectively removes language barriers and delays for French-speaking wealth managers, institutional investors, and online trading platforms, it adds.
The translations have to be very precise, given the complexity of some of the subjects—equities, bonds, FX, commodities, macroeconomics, central banks and politics.
Smart metadata will enable personalized content delivery by asset class, company, region, or topic, Dow Jones says.
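A metadata-driven delivery pipeline of the kind described could be sketched as a simple facet filter; the field names below are illustrative assumptions, not Dow Jones's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Story:
    headline: str
    asset_class: str
    region: str
    topics: List[str] = field(default_factory=list)

def matching_stories(stories: List[Story], *,
                     asset_class: Optional[str] = None,
                     region: Optional[str] = None,
                     topic: Optional[str] = None) -> List[Story]:
    """Keep only the stories matching every facet the subscriber specified."""
    return [s for s in stories
            if (asset_class is None or s.asset_class == asset_class)
            and (region is None or s.region == region)
            and (topic is None or topic in s.topics)]
```

Any facet left as `None` is treated as "no preference", so one function covers delivery by asset class, company, region, or topic.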
The switch will lead to the “upskilling” of human translators at some existing services into new roles involving prompt engineering and quality assurance, the company continues.
“The successful integration of AI into our French language service highlights our commitment to delivering unparalleled financial news with both speed, accuracy and precision to a broader global audience,” says Joe Cappitelli, general manager of Dow Jones Newswires.
Cappitelli adds: “This launch enables us to provide critical insights to French-speaking investors and professionals, reinforcing our leadership in leveraging advanced technology to meet evolving market demands and expanding our reach into key non-English speaking markets.”"
Dow Jones Unveils AI-Driven French Translation Service
by Ray Schultz
https://www.mediapost.com/publications/article/407522/dow-jones-unveils-ai-driven-french-translation-ser.html
#metaglossia_mundus
Explore the top Japanese literature to read in Summer 2025. From heartwarming fiction to classic crime and literary gems, discover the best new and re-released titles available in the UK.
"Discover Japan in Translation – Must-Read Japanese Literature for Summer 2025
July 20, 2025
This summer, readers can immerse themselves in a rich tapestry of Japanese literature—from long-awaited translations to re-released classics. From cozy slice-of-life stories to twisty mysteries, these new and revived titles offer a little something for every reader. Whether you’re discovering Japanese fiction for the first time or returning to its literary treasures, these books promise stories you’ll carry long after the beach towel is folded away.
Perfect for sunny days, long commutes, or reflective evenings, here are some of the top 2025 Japanese literature titles to add to your reading list.
The Passengers on the Hankyu Line
The Passengers on the Hankyu Line by Hiro Arikawa (Doubleday, Transworld UK)
Hiro Arikawa (The Travelling Cat Chronicles) returns with an interconnected tale of strangers whose lives intertwine during their daily commute. Set on the Hankyu Railway line in Osaka, this novel gently weaves together the personal struggles, regrets, and small victories of its characters. An artist unsure of her path, a retiree grieving his wife, a teenager facing bullying: all find fleeting moments of connection aboard the same train. With signature warmth, The Passengers on the Hankyu Line is a meditation on how even the briefest encounters can change a life.
Inspector Imanishi Investigates
Inspector Imanishi Investigates – Seicho Matsumoto (Penguin Classics)
Originally published in 1961, this landmark crime novel follows the dogged Inspector Imanishi as he investigates the murder of a man found on a Tokyo railway track. What seems like a random killing turns into a decades-spanning mystery involving rural villages, poetry, and shifting identities. Matsumoto, a pioneer of the social mystery genre in Japan, uses the case to examine postwar Japanese society with acute psychological insight. This reissue offers modern readers a slow-burning, cerebral detective story that reveals as much about the human condition as it does the mechanics of crime.
The Healing Hippo Of Hinode Park
The Healing Hippo of Hinode Park by Michiko Aoyama (Doubleday)
Michiko Aoyama, acclaimed for What You Are Looking for is in the Library, delivers another tender exploration of modern life in The Healing Hippo of Hinode Park. The story centers on a mysterious statue of a hippopotamus in a Tokyo park, said to offer emotional relief to those who sit beside it. Told through the lives of five people—each burdened by regrets or personal losses—the novel gently connects them through fate, chance, and quiet kindness. Aoyama’s style is thoughtful and full of warmth, ideal for readers seeking optimism and emotional resonance.
Murder in the House of Omari
Murder in the House of Omari by Taku Ashibe (Pushkin Vertigo)
A brilliant new locked-room mystery, Murder in the House of Omari resurrects the Golden Age detective story with a Japanese twist. When a body is discovered in a sealed room within the sprawling Omari estate, amateur sleuth Kyosuke Kamizu is called to investigate. Tensions erupt as the brutal murder appears to have been committed under seemingly impossible circumstances. Rich with classic tropes—hidden clues, secret passages, and unreliable witnesses—Ashibe’s plotting is both nostalgic and fresh. It’s a must-read for fans of Agatha Christie or Yukito Ayatsuji.
Strange Houses
Strange Houses by Uketsu (Pushkin Vertigo)
A breakout hit from the online horror author, Strange Houses is a chilling collection of short stories exploring supernatural forces inhabiting ordinary homes. Each eerie tale centers on a different residence across Japan. From haunted floor plans to staircases that lead to nowhere, Uketsu uses domestic spaces to unsettle and disorient. With creeping tension and unnerving originality, this debut in English cements Uketsu as a fresh new voice in Japanese horror.
The Restaurant of Lost Recipes
The Restaurant of Lost Recipes – Hisashi Kashiwai (Pan)
In this companion volume to The Menu of Happiness, Hisashi Kashiwai returns to Kyoto’s most mysterious restaurant. Each visitor requests a dish from their past—sometimes obscure, sometimes forgotten. Through precise research and intuition, the chef recreates meals that unlock lost emotions and mend relationships. The result is an anthology of quiet transformation through taste and nostalgia. Perfect for readers drawn to the magic of memory and cuisine.
Thousand Cranes
Thousand Cranes by Yasunari Kawabata (Vintage Archive)
A masterpiece from Nobel Laureate Yasunari Kawabata – first published in 1952 – Thousand Cranes returns in a fresh edition. Set in post-war Japan, it follows Kikuji, a young man entangled in complex emotional relationships, including those once linked to his late father. Through the ritual of the tea ceremony, Kawabata explores themes of beauty, shame, and longing. His spare, lyrical prose captures fleeting moments with haunting precision. This classic reaffirms Kawabata’s place among the most poetic voices in world literature.
The Man Who Died Seven Times
The Man Who Died Seven Times – Yasuhiko Nishizawa (Pushkin Vertigo)
Yasuhiko Nishizawa’s imaginative mystery blends science fiction and classic detective tropes in a gripping tale of identity and death. The story begins with a man found dead—only for the same man to turn up alive, and then die again, in a different city. Detective Saegusa is drawn into a bizarre investigation involving doppelgängers, experimental memory technology, and philosophical questions about the soul. As the body count rises, so do the existential stakes. Fast-paced and thought-provoking, The Man Who Died Seven Times offers a genre-bending take on modern crime fiction.
The Cat Who Saved the Library
The Cat Who Saved the Library – Sosuke Natsukawa (Picador)
Sosuke Natsukawa (The Cat Who Saved Books) returns with another charming bibliophilic tale. This standalone story introduces a magical feline named Tora who appears in a struggling suburban library. There, he forms a bond with a retired teacher, leading them on surreal adventures within the pages of forgotten books. As they fight to preserve the library from closure, the characters rediscover courage, connection, and the power of stories. Heartwarming, gently fantastical, and steeped in a love for reading, this is a perfect comfort read.
The Story of a Single Woman
The Story of a Single Woman – Chiyo Uno (Pushkin Press)
Chiyo Uno was one of the most pioneering female voices in modern Japanese literature, and this newly reissued novella from Pushkin Press brings her sharp wit and psychological nuance to a fresh audience. The Story of a Single Woman follows Ayako, a young woman navigating love, independence, and self-identity in 1920s Tokyo. Breaking with conventions of her time, Ayako is intelligent, sexually autonomous, and often critical of the men around her. Uno’s crisp, elegant prose captures a woman’s inner life with candor and clarity, making this a quietly radical and deeply modern classic"
https://theartsshelf.com/2025/07/20/discover-japan-in-translation-must-read-japanese-literature-for-summer-2025/
#metaglossia_mundus
"...DeepL, an artificial intelligence (AI)-based translation solution, will be integrated into Zoom, a video conferencing platform, following Microsoft (MS) Teams, expanding the scope of support.
Companies using Zoom will be able to conduct natural multilingual video conferences through DeepL's real-time voice translation function without a separate translation function.
"We used to support only MS Teams, but now we can support Zoom," DeepL Chief Technology Officer Sebastian Enderline said at a press conference held at Grand Intercontinental Seoul Parnas in Gangnam-gu, Seoul on the 21st. "500,000 companies around the world are using Zoom to hold meetings every day. Through this, many companies will be able to hold meetings in multiple languages," he said.
In the future, companies using Zoom will be able to integrate DeepL into real-time meetings by adding DeepL's voice translation app, "DeepL Voice," within Zoom. Voice input supports a total of 16 languages, and subtitle translation supports 35 languages.
When video conference participants speak naturally in their native languages, the system translates the speech into each user's preferred language and delivers it to the other participants as text.
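The per-participant flow can be sketched generically; the `translate` callback below is a hypothetical stand-in for any translation backend, not the actual DeepL integration, which ships as a Zoom app.

```python
from typing import Callable, Dict

# (text, source_lang, target_lang) -> translated text
Translator = Callable[[str, str, str], str]

def caption_for_participants(utterance: str, src_lang: str,
                             preferences: Dict[str, str],
                             translate: Translator) -> Dict[str, str]:
    """Fan one spoken utterance out as subtitles in each listener's language."""
    captions = {}
    for participant, lang in preferences.items():
        # Speakers of the source language see the original text untouched.
        captions[participant] = utterance if lang == src_lang \
            else translate(utterance, src_lang, lang)
    return captions
```

Each participant registers a preferred language once, and every utterance is then routed through translation only for listeners who need it.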
DeepL also announced several feature updates alongside the Zoom collaboration. Three languages were added for voice input: Mandarin Chinese, Ukrainian, and Romanian, and subtitles recently began to support Vietnamese and Hebrew.
In addition, after a meeting ends, the full minutes and translations can be downloaded so that companies can use them in follow-up work.
DeepL is a language solution company founded in 2017 based in Cologne, Germany. DeepL operates "DeepL Voice," which translates voice conversations in real-time and provides subtitles along with its core product, translator service.
The function provided on the video conferencing platform is "DeepL Voice for Meeting," and if participants talk in their own languages, they can immediately check the translated content for their own language with subtitles.
While developing AI technology specialized in translation, the company has continued to grow in specialized areas, earning a place on Forbes' list of the top 50 AI companies. Its cumulative investment amounts to about 420 million dollars (about 580 billion won), and it has 200,000 corporate customers around the world.
In particular, the Korean market, along with Japan, is DeepL's biggest focus in Asia. Law firm Sejong and others are already using DeepL's solution in their work.
"Korea has many multinational distribution and manufacturing companies operating overseas, as well as world-leading companies," CTO Enderline said. "Korea is a key market for DeepL because there are many excellent companies that DeepL can support."
JEONG Hojun jeong.hojun@mk.co.kr
Input : 2025-07-21 14:08:14
https://www.mk.co.kr/en/it/11373212
#metaglossia_mundus
"Microsoft introduces two new initiatives focused on preserving European language and culture
Expansion of Microsoft’s European digital commitments aims to strengthen partnerships and improve digital accessibility
BY AMBER HICKMAN | 21 JULY 2025
Microsoft has launched two new initiatives to promote European culture and linguistic diversity in Europe as part of its European digital commitments.
The first initiative targets improving the representation of languages within Europe, including the European Union's 24 official languages and a variety of additional languages recognised at the national level, many of which currently account for less than 0.6 per cent of web content.
“As the world digitises, much of Europe’s linguistic and cultural diversity risks being left behind,” said Brad Smith, vice chair and president of Microsoft in an online blog post. “The majority of online web content, the primary source of training data for today’s large language models, is in English. Much of it reflects an American perspective. The European Commission has warned that the continent’s ambition to digitise its vast cultural corpus remains ‘significantly out of reach’. As Europe’s leaders have recognised, without urgent action, this imbalance is not just a cultural concern, it’s a commercial one. AI that doesn’t understand Europe’s languages, histories and values can’t fully serve its people, its businesses, or its future.”
To help bridge the language gap, Microsoft will collaborate with European partners to increase the availability of multilingual data. With the ICube Laboratory at the University of Strasbourg in France, Microsoft will support AI training efforts by creating and deploying a team of employees from the Microsoft Open Innovation Center (MOIC) and its AI for Good lab. This team will be supported by a global internal network of more than 70 Microsoft engineers, data scientists and policy professionals.
The team will start by accessing Microsoft’s own store of multilingual data and making it accessible and transparent to the European public including open-source developers.
MOIC will also partner with Common Crawl, one of the largest free and open repositories of web-crawled data. MOIC will fund work at Common Crawl and leverage native speakers to annotate European language data.
Meanwhile, the second initiative is focused on digitally safeguarding Europe's cultural heritage. For this, Microsoft has partnered with the French Ministry of Culture and French firm Iconem to create a digital twin of Notre-Dame in Paris, France.
Microsoft is also working with the Bibliothèque Nationale de France to digitise nearly 1,500 cinematic model sets from shows at the Opera National de Paris between 1800 and 1914."
https://www.technologyrecord.com/article/microsoft-introduces-two-new-initiatives-focused-on-preserving-european-language-and-culture
#metaglossia_mundus
"Microsoft Rewards is planning to pay people to use Bing instead of its competitors, Fortune.com reports.
People with a Microsoft account can sign up for the rewards program, which gives users points when they search on Bing or purchase an item in the Microsoft Store.
Points can later be redeemed for items.
According to Wired, Google still has a stronghold on the search engine market with close to 86% of all usage. Bing is in second place with about 10%.
The rewards program, according to the report, will have different levels: Level 1 members can earn points for up to 10 searches a day, and Level 2 users for up to 50 searches per day. The daily search allowance is refreshed each day..."
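Under the caps reported above (and assuming, purely for illustration, one point per search; the report does not state a points-per-search value), daily earnings reduce to a simple minimum.

```python
def daily_points(searches: int, level: int, points_per_search: int = 1) -> int:
    """Points earned in one day under the per-level daily search cap.

    The caps (10 for Level 1, 50 for Level 2) come from the report;
    points_per_search is an illustrative assumption.
    """
    caps = {1: 10, 2: 50}
    return min(searches, caps[level]) * points_per_search
```

Searches beyond the cap simply earn nothing until the allowance refreshes the next day.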
https://businessamlive.com/microsoft-sets-to-pay-you-for-using-bing-search-engine-instead-of-google/
#metaglossia_mundus
MICA’s new School of Applied Creativity aims to teach what AI can’t — emotion, context and cultural intelligence, says CEO Jaya Deshmukh
"...If AI is capable of writing scripts, designing logos, editing videos, generating music, and planning campaigns, what should a creative school focus on teaching? And more importantly, what essential skills should a creative professional acquire?...
MICA, formerly known as the Mudra Institute of Communications, Ahmedabad, long recognised as India’s premier institute for strategic marketing and communication, has joined the conversation with a twist. It has launched the School of Applied Creativity, an imagination-focused school for the AI era, aimed not at coders or animators, but at future storytellers, thinkers, creators and cultural strategists.
Deshmukh, CEO of MICA, said the school was set up in response to a fundamental and pressing question: “What will it take to thrive creatively in a world where machines can do almost everything?”
“We’re moving from the Information Age to the Imagination Age,” she adds. “AI gives you instant access to knowledge, but it lacks context, emotion, culture. That’s where human creativity still matters — and that’s what we aim to cultivate.”
MICA’s leap from communication to imagination
The School of Applied Creativity is set to begin its first academic year in July 2026, with an expected batch size of 80–100 students. It will offer:
A redesigned one-year CCC (Crafting Creative Communication) programme.
A new two-year PG programme in Media, Entertainment & Content, with modules in experience design, talent management, and digital storytelling.
These programmes aim to go beyond skill-building by fostering creative thinking...
How does MICA’s school stand out?
With the government pushing initiatives like IICT and global tech collaborations on the rise, where does MICA fit in?
In contrast to IICT’s technically focused, production-driven methodology, MICA’s new school presents itself as a hub for applied creativity — a setting where “thinking” comes before action.
“We’re not competing with institutes teaching tools. Our aim is to nurture creators who ask better questions, not just execute faster solutions,” said Deshmukh....
Admissions will be through MICAT, the institute’s long-standing creativity and aptitude-based test... The school said it welcomes applicants from diverse academic backgrounds, not just design, media or communication streams.
“Imagination doesn’t require a prior degree,” Deshmukh said.
Details around the fee structure are still being finalised, but the institute indicates that scholarships and funding options will be available.
Is there a market for creative thinking?
The decision to open the school comes at a time when traditional higher education faces significant challenges, particularly in creative disciplines where AI is swiftly taking over execution-level tasks. In such a landscape, differentiation can come from originality rather than mere output...
https://www.afaqs.com/news/advertising/ai-can-do-it-all-but-human-creativity-still-matters-mica-ceo-9517363
#metaglossia_mundus
"Surtitling performances – providing live translations projected above the stage – ensures that all audiences can enjoy the shows, a cornerstone of its cultural policy. First appearing in the 1980s, surtitles were initially used in opera.
The Avignon Festival has consistently demonstrated its commitment to linguistic diversity, annually featuring a guest language and welcoming many international artists to the stages of the Cité des Papes.
Introduced in Avignon under the creative direction of Olivier Py (2013-2022), surtitling became standard practice with the arrival of his successor, Tiago Rodrigues, who has championed ever-greater linguistic diversity. With a multilingual program, performances in foreign languages – and in French – are surtitled. In 2023, the festival created a dedicated position to "oversee the surtitling process from start to finish," explained Clara Moulin-Tyrode, deputy production director of the festival.
While companies presenting work at the festival are asked to provide their own surtitling solutions, the festival itself allocates "€60,000 to €70,000" to supplement these efforts. The process is coordinated with other venues on tour. France's national performing arts agency (ONDA) and France's international cultural agency offer their support, while partnerships are formed with specialized organizations such as Panthéa, a leading European surtitling company for live performance..."
The challenge of surtitling at the Avignon Festival
By Calypso Joyeux, 20 July 2025
https://www.lemonde.fr/en/culture/article/2025/07/20/the-challenge-of-surtitling-at-the-avignon-festival_6743560_30.html
#metaglossia_mundus
"Frontiers in Psychology Sec. Psychology of Language
Volume 16 - 2025 | doi: 10.3389/fpsyg.2025.1618531
This article is part of the Research Topic Rethinking the Embodiment of Language: Challenges and Future Horizons View all 3 articles
Embodied Empathy in Translation Studies: Enhancing Global Readers' Cognitive and Emotional Engagement with Translations of Traditional Chinese Medicine Terminology
Provisionally accepted
Tong Zhou* and Jinghui Wang, Tsinghua University, Beijing, China
The embodied nature of language comprehension has gained increasing theoretical and empirical support in the fields of linguistics, cognitive science and psycholinguistics (Gallese, 2018;Shtyrov et al., 2023;Garello et al., 2024;Visani et al., 2025). Embodied language comprehension theory holds that language is not a mere abstract symbol system, but is deeply rooted in the sensorimotor and affective experiences of individuals (Niedenthal, 2007;Zwaan & Taylor, 2006). As Gibbs (2006) points out, whereas traditional beliefs tend to view meaning as an abstract entity divorced from bodily experience, embodiment theory emphasizes that the process of language comprehension is frequently accompanied by a bodily simulation of the motor, perceptual, and affective states embedded in linguistic content (Barsalou, 1999;Gallese & Lakoff, 2005;Glenberg & Kaschak, 2002;Hauk et al., 2004). Embodiment is particularly crucial in cross-cultural contexts, as language is not only a medium for information transfer, but also a carrier of cultural and experiential worldviews.Within this theoretical framework, cross-cultural discourse provides unique challenges and opportunities for embodied language processing research. Traditional Chinese medicine (TCM) discourse is a typical example of this, with a language system that highly integrates bodily metaphors, physiological imagery, and culturally embedded affective models Pritzker & Hui, 2014;Unschuld, 2012;Wang & Chen, 2023). While conveying medical knowledge, TCM expressions encode an embodied worldview that intertwines physical health and emotional states (Tiquia, 2011). 
As TCM does not dichotomize mind and body, but views emotions as a unity of physicality and experience, its texts provide rich resources for exploring how readers mobilize embodied meanings in cross-cultural contexts.

Several international organizations have played a pivotal role in the process of standardizing TCM terminology by issuing translation guidelines and reference documents. The World Health Organization (WHO), the International Organization for Standardization (ISO), and the World Federation of Chinese Medicine Societies (WFCMS) have developed translation frameworks for core TCM terminology (Han et al., 2025). Nevertheless, despite relatively coordinated efforts, inconsistencies in terminology translation still exist and a universally recognized uniform standard has not yet been formed (Zhou et al., 2023). Inconsistency in the use of terminology hinders the reader's understanding of the translated text and his or her effective grasp of the cultural meaning and deeper connotations carried by TCM terminology (Ye & Zhang, 2017). In addition to the problem of terminological inconsistency, the presentation of the translation itself constitutes another obstacle. Many TCM terms tend to become overly abstract and academic in the English translation process, thus weakening their embodied and experiential dimensions. For global readers, whose bodily experiences and cultural frameworks often differ from those of TCM contexts, understanding such expressions requires linguistic decoding along with the simulation of unfamiliar bodily states and emotional experiences. Nevertheless, the disconnection between language, culture, and bodily experience often hinders the successful realization of this simulation process (Beard, 2001; Overgaard, 2017; Rizzolatti & Sinigaglia, 2020).

Against the above background, this paper proposes the concept of embodied empathy as an innovative theoretical perspective for exploring the reception of cross-cultural translation.
Specifically, this paper explores how translators can, through linguistic choices in translating TCM texts, stimulate readers' sensory arousal and emotional resonance, thus engaging their embodied cognitive resources.

The target readership of this paper is a global audience of non-Chinese-speaking backgrounds who encounter TCM mainly through translations and lack direct experience of, and familiarity with, its linguistic, cultural and embodied underpinnings. Of particular interest are general public readers, who are more inclined to connect emotionally with the vivid and experiential dimensions of the language than are professional TCM practitioners, who value terminological equivalence and clinical precision. Accordingly, the embodied empathy framework and translation strategies proposed in this paper are primarily intended for intercultural communication and public engagement contexts, rather than for clinical, regulatory, or professional translation purposes. While this paper principally develops a conceptual framework and proposes translation strategies based on theoretical analysis, it does not include empirical validation of the embodied empathy model. The aim is to offer a hypothesis-generating perspective on how embodied mechanisms may inform TCM translation. Future research is needed to empirically examine the effectiveness of these strategies, for example, through reader-response studies, cognitive-affective testing, or neurocognitive experiments.
Recent research in cognitive linguistics and neuroscience emphasizes that language comprehension is not based on purely abstract symbolic representations, but is deeply rooted in sensorimotor and affective systems. Pulvermüller (2018) introduces the concept of Action Perception Circuits (APCs), suggesting that language representations and bodily experiences are integrated at the neural level through such networks; this is supported by Hauk et al. (2004), who show that reading action verbs activates motor cortical areas associated with the corresponding body parts, revealing the somatotopic organization of semantic meaning. Gallese and Cuccio (2015) build on the theory of embodied simulation by suggesting that language comprehension involves mirroring mechanisms, that is, the simulation of bodily movements and emotional states through pre-reflective neural processes. Their notion of "paradigmatic knowledge" highlights that language users mobilize internalized motor patterns when comprehending linguistic meaning, even in the absence of actual movement. From an enactivist perspective, van Elk et al. (2010) argue that comprehension is not based on representational cognitive structures, but rather is generated dynamically through embodied interactions with the environment, focusing more on procedural than declarative knowledge.

Even in the comprehension of abstract concepts, embodiment still plays an important role. Borghi et al. (2017) propose a "multiple representation view", which integrates sensorimotor simulation with metaphorical, emotional and social contexts. Overall, these perspectives collectively point to a dynamic view of language comprehension as an emotionally and perceptually grounded, socially embedded, and interactively generated process, a cognitive schema that also provides theoretical underpinnings for the notion of embodied empathy in translation (Zwaan et al., 2002; Glenberg, 1997; Kaschak et al., 2005; Shiang et al., 2024; Kiefer & Pulvermüller, 2012).
Within an embodied cognition framework, empathy is increasingly viewed as a sensorimotor-affective process that integrates perspective-taking and emotional resonance. Recent research in psychology and psychotherapy suggests that empathy is a multidimensional construct, comprising cognitive and affective components that are interrelated yet remain relatively independent (Gladstein, 1983; Goldstein & Michaels, 1985; Hoffman, 1977; Strayer, 1987; Basch, 1983; Bohart & Greenberg, 1997).

In this context, cognitive empathy refers to an individual's ability to adopt or infer another's perspectives, thoughts, and affective states, whereas affective empathy involves affective resonance with, or direct experience of, another's emotions (Gladstein, 1983). This dual model lays the groundwork for more embodied empathy research, which seeks to integrate the two empathic mechanisms through a sensorimotor and affective simulation lens.

Simulation theory suggests that empathy arises from the internal reproduction of another person's state, and that its neural mechanisms are mediated primarily by the mirror neuron system (Gallese, 2003). In his model of "shared multimodal space", Gallese (2003) states that observing another person's actions or emotions activates the observer's own neural circuits, which resonate at the body level. At the level of language comprehension, Gallese and Cuccio (2015) further propose a tripartite model in which bodily action, affective states, and linguistic structures work together to shape the process of comprehension. Through sensorimotor and affective activation during reading, the reader generates analogical empathy and enters the linguistic world as a perceptual experience. This view is echoed in the psychological and psychotherapeutic traditions. Rogers (1980) emphasizes that empathy encompasses cognitive insight as well as affective attunement, allowing one to experience another person's world as if it were one's own.
Barrett-Lennard (1993) points out that the validity of such affective resonance depends on its successful intersubjective communication. In clinical practice, Cooper (2001) identifies three forms of embodied empathy: allowing the body's resonance to unfold naturally, interpreting bodily cues, and maintaining awareness of one's own embodied state. Nonetheless, he also cautions that empathy is always approximate, and that its alignment is fleeting but influential.

Viewed collectively, these perspectives underscore the multidimensional nature of empathy, in which bodily simulation, emotional resonance, perspective taking and intersubjective communication are intertwined. In language comprehension and translation, particularly with culturally embodied texts like TCM, embodied empathy bridges the gap between semantic access and affective simulation, enabling meaning to emerge through cognitive understanding and affective engagement. Integrating insights from embodied language comprehension and empathy theory, this paper proposes a novel theoretical framework, embodied empathy, for translation studies, shedding light on how experiential meaning can be conveyed across linguistic and cultural boundaries. Embodied empathy, as conceptualized in this paper, refers to the target reader's capacity to simulate the bodily sensations, emotional state and cultural logic embedded in the source-language expression. Its realization depends on the translator's effective mobilization of the reader's sensorimotor system and emotional schema through the translated text, so that the comprehension process transcends the purely cognitive level and finally achieves an embodied experience of meaning. Operationally, this concept can be indicated by the degree to which a translation evokes sensory imagery, emotional resonance, and culturally grounded associations in the reader's experience.
Such effects may be preliminarily assessed through reader-response studies, experiential feedback, or cognitive-affective measures that examine whether the translation stimulates embodied simulation beyond mere semantic comprehension.

Within this framework, the success of translation depends not only on semantic equivalence, but also on experiential equivalence as reflected in the reader's embodied engagement with the source text (Alexieva, 2018). A number of central questions need to be carefully considered in the translation process: Can readers from global biomedical backgrounds simulate the relevant bodily-emotional states when reading TCM translations? Are their pre-existing embodied experiences sufficient to support the corresponding emotional resonance? If not, how can the translator intervene to bridge this experiential gap? In this sense, the translator functions as an intertextual interpreter and a cross-modal mediator bridging embodied worlds. His or her task is to reconstruct a perceptual-affective ecosystem in the target language, so that it retains the physiological realism, sensory vividness and emotional tension of the source text. This requirement is particularly pressing in the translation of TCM, because the bodily processes described in TCM are deeply rooted in cosmological and cultural metaphors that often lack direct counterparts in global epistemologies.

To operationalize the theoretical framework, this paper defines three interrelated dimensions: sensorimotor simulation, affective resonance and cultural mapping. These three dimensions work synergistically to facilitate readers' cognitive understanding of the translated content as well as their deeper engagement at the physical and emotional levels.
In highly culturally saturated domains such as TCM, embodied empathy provides a crucial theoretical reference for assessing whether translations are able to achieve a balance between semantic equivalence and experiential depth.

Sensorimotor simulation refers to the ability of readers to simulate bodily actions or perceptual experiences (e.g., movement, temperature, texture) on a mental level during language comprehension. This simulation does not involve actual bodily movements, but is achieved by the activation of the corresponding sensorimotor pathways in the brain (Pulvermüller, 2018; Hauk et al., 2004). For instance, when a reader reads the expression "a chilling gust of wind", the reader not only grasps the semantic content of "cold", but may also internally recreate the tactile sensation of the cold wind on the skin. If the translation does not sufficiently evoke similar sensory associations in the target readers, the embodied experience may only be partially activated, even if the semantic rendering is equivalent.

Affective resonance enables readers to emotionally attune to the tone of a text, comprehend its affective connotations, and ultimately experience the emotional state evoked by the language (Gallese, 2003; Rogers, 1980; Cooper, 2001). For example, the expression "a fire rising in the chest with nowhere to escape" not merely conveys the abstract emotional concept of anger, but also evokes an embodied emotional intensity that allows the reader to feel the agitation rising within. Translations that effectively convey emotional content and stimulate readers' emotional involvement are more likely to generate deeper empathy among target-language readers.

Cultural mapping is defined as the ability of readers to project and interpret culture-specific physical and emotional metaphors within their own conceptual framework.
This process determines whether the translation can effectively fit into the reader's cultural-cognitive schema, thus enabling the resonance of meaning and experience (Borghi et al., 2017). In TCM, for example, expressions such as "ascending of liver qi pattern" (World Health Organization, 2022) draw upon cosmological and metaphorical systems rooted in Yin-yang and Five-elements theory. Here, the term "liver" is not merely an anatomical reference, but a culturally coded symbol of emotional regulation and systemic flow. Translators need to consider how to reconstruct these embodied metaphors so that target readers can build conceptual bridges to the source culture's embodied worldview.

Though sensorimotor simulation, affective resonance, and cultural mapping can be analytically distinguished as three operational dimensions, they function interactively in actual discourse processing. Embodied empathy is not a simple sum of these components, but an emergent phenomenon arising from their dynamic and mutually reinforcing interplay. As such, embodied empathy serves as a conceptual bridge linking the embodied-cognitive mechanisms of language comprehension with the cross-cultural dynamics of translation reception. It foregrounds the affective and perceptual fidelity of translation, inviting target readers to intellectually comprehend and emotionally engage with the embodied meanings of the text. Such a perspective pushes translation studies towards a more integrated mode of understanding, in which language, body and culture are seen as an inseparable organic whole in the process of meaning construction. On the basis of the theoretical model of embodied empathy, this paper further outlines three main types of challenges in cross-cultural translation, corresponding to the aforementioned operational dimensions: sensorimotor simulation, affective resonance, and cultural mapping.
These challenges are particularly prominent in the translation of TCM, as the language is highly intertwined with bodily experience, emotional logic, and cosmological worldview (Hsiao et al., 2008). It should be emphasized that these three types of challenges are not independent of each other, but often show complex interactions and overlapping relationships in the actual translation process. The first challenge lies in the fact that the translation may only partially activate the target reader's sensorimotor system, which can lead to a reduced sense of bodily vividness compared to the original. TCM language frequently encodes embodied states, such as cold, heat, pressure, and flow, through rich, image-based metaphors. Yet in translation, conceptual clarity often takes precedence over sensory immediacy, and translators often render vivid, somatically charged language into abstract biomedical terms (Ye & Zhang, 2017). As sensorimotor grounding is central to embodied comprehension (Pulvermüller, 2018), any loss in sensory imagery can limit the reader's cognitive-affective immersion in the translated text.

For example, the expression 寒湿困脾证 (hán shī kùn pí zhèng), standardized as "cold dampness affecting the spleen pattern" in the WHO's 2022 terminology guideline, exemplifies a significant case of sensorimotor attenuation in translation (World Health Organization, 2022). In the source term, 寒 (hán, "cold") and 湿 (shī, "dampness") are not abstract meteorological descriptors but directly invoke tactile and thermal experiences: cold connotes chill, stiffness, and contraction; damp suggests heaviness, stickiness, and stagnation. The character 困 (kùn, "affect") further reinforces this embodied image by implying functional sluggishness and visceral inertia, capturing a bodily state of internal blockage.

However, the English translation, while semantically equivalent, tends to render these vividly embodied sensations in more technical biomedical terms.
The structure "cold dampness affecting the spleen pattern" adopts a clinical and formulaic register, which may reduce the tactile immediacy and perceptual intensity present in the original. In particular, the somatic weight of 困 (kùn, "affect"), a key experiential cue, becomes less salient, potentially leading to a flattened sensorimotor profile.

This example highlights a central concern in the embodied empathy framework: when the translation has limited capacity to activate the target reader's sensorimotor system, the resulting text may remain intelligible but offer reduced experiential engagement. Such attenuation undermines the capacity of translation to convey the perceptual and bodily dimensions essential to embodied understanding. A second challenge involves the disruption of emotional resonance in translation. Numerous TCM expressions convey emotional experience through embodied metaphors that reflect intertwined physiological and affective states, such as frustration, repressed anger, or anxiety (Brehaut et al., 2007). However, when these expressions are rendered into emotionally neutral or clinically detached formulations, the original affective intensity is significantly diminished. As a result, the target-language reader may cognitively grasp the semantic content but is unlikely to engage emotionally with the experience described. As empathy theory suggests (Gallese, 2003; Rogers, 1980), affective resonance is not merely ornamental; it constitutes a foundational mechanism for empathic understanding. When the emotional tenor of the original is flattened, the pathways for embodied empathy are obstructed, limiting the translation's capacity to foster experiential engagement.

The expression 肝火上炎证 (gān huǒ shàng yán zhèng), translated by the World Health Organization in 2022 as "upward flaming of liver fire pattern," exemplifies a highly embodied and emotionally charged linguistic construct in TCM.
It encapsulates a deep entwinement of somatic perception, emotional experience, and culturally embedded metaphor. In TCM discourse, the liver is not merely a physiological organ but a symbolic and functional center of emotional regulation, particularly associated with the emotion of anger. The canonical principle of the liver reflects this culturally coded mind-body integration, where internal emotional turbulence directly affects physiological processes. The term 火 (huǒ, "fire") functions dually as a pathological agent and a metaphorical representation of affective arousal, signaling sensations of heat, agitation, and impulsivity. Meanwhile, 上炎 (shàng yán, "upward flaming") introduces directional and kinetic imagery, simulating an embodied experience of internal energy surging upward during emotional outbursts.

By contrast, the standardized English translation, "upward flaming of liver fire pattern", as adopted in WHO guidelines, while terminologically equivalent, tends to attenuate the affective intensity conveyed in the original. Although flaming nominally retains the imagery of fire, its metaphorical resonance in English appears relatively diffuse and may not strongly evoke stable emotional associations. The phrase "liver fire pattern" adopts a clinicalized structure that places emphasis on biomedical taxonomy, which may limit the experiential richness embedded in the source term. Consequently, the translated expression functions more as a diagnostic label, with reduced physiological and emotional immediacy compared to the original. The dynamic force, directional intensity, and emotional agitation present in the Chinese expression are correspondingly diminished. When affective schemas related to anger or heat are not fully engaged, the potential for activating embodied empathy becomes considerably constrained.

This case illustrates the challenge of affective disconnection, as articulated in the proposed theoretical framework.
Despite semantic equivalence, the translation may only partially elicit embodied emotional engagement, leaving readers intellectually informed while generating attenuated affective resonance. As a consequence, the mechanism of embodied empathy remains less fully activated. The third challenge stems from fundamental differences between the cultural models of body and health embedded in TCM and those assumed by global biomedical readers. Unlike the Western anatomical framework, TCM conceptualizes organs such as the liver and spleen as part of a dynamic system governed by qi and yin-yang balance (Runhu, 2015; Zhao et al., 2023). These expressions are rooted in cosmological and metaphorical assumptions that lack direct counterparts in Western epistemologies. Consequently, even precise lexical renderings may fall short in eliciting embodied understanding if the reader lacks the necessary cultural-cognitive schema, ultimately disturbing the alignment between language, body, and worldview.

Consider the term 命门之水 (mìng mén zhī shuǐ), rendered in the WHO terminology guidelines (2022) as "the water of the gate of life". This expression exemplifies a culturally and somatically embedded TCM construct whose meaning extends far beyond its surface lexicon. According to the interpretations provided by both classical texts and contemporary scholarship, 命门 (mìng mén, "gate of life") denotes not merely an anatomical site located between the kidneys, but rather serves as a vital energetic hub associated with reproductive function, metabolic regulation, and life force (Li, 2005; Meng, 2006; Chen & Zheng, 2013). 水 (shuǐ, "water") refers to stored yin essence, conceptually balancing the yang fire, and representing inner cooling, moistening, and sustaining forces.
From the perspective of embodied cognition, the phrase encodes sensations of thermal regulation and vital flow, reflecting a worldview in which physiological and cosmological orders are intertwined.

While the WHO translation maintains a close literal correspondence, it may not fully convey the cultural and embodied dimensions embedded in the original. In English, "gate of life" does not readily evoke a familiar reference to an organ or energy center, which may render the term less accessible or metaphorically resonant. Similarly, the term "water" has limited affinity with the TCM concepts of "yin vitality" and "essence". Despite the lexical equivalence of this translation, its ability to activate the cultural-bodily schema of the target readers is relatively limited, and the concept may remain abstract, making it difficult to realize the in-depth communication of embodied experience.

This case exemplifies the challenge of cultural incommensurability in translation, where the worldview encoded in the source-language terminology is difficult to map perceptually or emotionally onto the target-language culture. The result is a state of semantic suspension: although the translation achieves equivalence at the lexical level, it may not comprehensively stimulate embodied or affective experiences, thereby limiting the possibility of deeper intercultural understanding.

Taken together, the three types of challenges, namely sensorimotor attenuation, affective disconnection and cultural incommensurability, constitute the most prominent embodied barriers in cross-cultural translation of TCM. As a typological framework, this triad reveals the key disruptions in embodied empathy and provides a diagnostic tool for assessing whether a translation goes beyond basic semantic equivalence and realizes a higher level of translation adequacy.
From the perspective of embodied empathy, translation is not merely a matter of semantic transfer, but a reconstructive act that seeks to preserve and reawaken the source text's perceptual, somatic, and affective dimensions (Robinson, 2014; Chávez, 2009). In response to the preceding analysis of the three embodiment-based challenges, the following section introduces three corresponding translation strategies developed in this paper to address each of these barriers. This paper primarily develops its strategies for intercultural communication targeting non-specialist global readers, aiming to enhance experiential engagement and cultural resonance. While the framework may offer insights for professional translation practice, its main focus is not on terminological adaptation for clinical or regulatory purposes. To mitigate the effects of sensorimotor attenuation, this paper introduces the strategy of sensory activation. This approach emphasizes the restoration of perceptual immediacy and bodily vividness in translation by foregrounding sensory metaphors, tactile descriptors, and experiential analogies. Instead of relying solely on technical formulations, the translator can reconstruct the source text's somatic imagery, which helps re-engage the reader's sensorimotor pathways and strengthen embodied empathy.

In this paper, the standardized WHO translation of 寒湿困脾证 (hán shī kùn pí zhèng) as "cold dampness affecting the spleen pattern" is rendered as "a condition of internal heaviness and chill that obstructs the spleen's activity", which exemplifies the sensory activation strategy within the embodied empathy framework. Compared to more abstract biomedical formulations, this rendering appears to offer greater potential for sensorimotor simulation, as it seeks to preserve the tactile and kinesthetic dimensions embedded in the source term.
The phrase "internal heaviness and chill" evokes direct bodily imagery, allowing target readers to mentally simulate sensations of cold, weight, and stagnation. Additionally, the verb "obstructs" introduces a dynamic element of physiological resistance, reflecting the somatic inertia implied by 困 (kùn, "obstruct"). Hence, this translation facilitates an embodied mode of comprehension, reinforcing the experiential and empathetic connection between reader and text.

Rather than focusing solely on semantic equivalence, this strategy aims to enhance sensory immediacy and bodily vividness by reintroducing concrete perceptual cues and kinetic imagery, elements that may be attenuated in the process of terminological standardization. In counteracting the abstraction typical of biomedical renderings, the approach enables readers to simulate the somatic experiences encoded in the source term. This, in turn, enhances the translation's potential to evoke embodied empathy and maintain the perceptual-affective integrity of the original TCM expression. In light of the challenge of affective disconnection in translation, this paper advances the strategy of affective reenactment, which aims to reactivate the emotional dynamics embedded in the source text by reconstructing its affective imagery and emotional trajectory. The goal is to preserve the perceptual-affective interplay encoded in TCM terminology while enhancing the experiential depth and empathetic efficacy of cross-cultural translations.

When the source term contains implicit emotional triggers, such as anger or worry, but the target language lacks corresponding body-emotion mappings, translators may render the causal logic of emotional arousal more explicit. This can guide target readers in understanding the affective motivations underlying physiological expressions.
Additionally, affective reenactment involves amplifying and concretizing the somatic imagery, directionality, and dynamic movement inherent in the original expression, while favoring emotionally potent and kinetically charged verbs over neutral technical formulations, thereby enhancing embodied expressivity.

An illustrative example is the WHO's standardized translation of 肝火上炎证 (gān huǒ shàng yán zhèng) as "upward flaming of liver fire pattern". While this version maintains terminological equivalence, it may somewhat reduce the emotional intensity conveyed in the original. In contrast, the alternative translation proposed in this paper, "a surge of blazing heat rising from the liver in response to unexpressed anger", demonstrates the affective reenactment strategy. It retains the thermal and directional aspects of 上炎 (shàng yán, "upward flaming"), explicitly articulating the underlying emotional causality linked to anger ("in response to unexpressed anger"). The word "surge" evokes a sudden, forceful upward motion, and the adjective "blazing" triggers vivid somatic sensations, jointly enriching the embodied dimension of the expression.

Through the reconstruction of the connection between emotion, body, and cultural metaphor, this approach engages the reader's affective repertoires. Instead of leaving the reader emotionally detached, the translation activates the internal dynamics of emotional tension embedded in the source, reviving its expressive force. In this way, affective reenactment acts both as a remedy for potential emotional flattening and as a channel for restoring the experiential depth essential to embodied empathy. Anchored in the theoretical framework of embodied empathy, cultural grounding seeks to bridge the interpretive gap caused by the target culture's lack of embodied background knowledge. It does so through a dual mechanism: explicitation and the construction of cultural-bodily schemas.
Cultural grounding aims, for one thing, to help target-language readers establish cognitive footholds within their own cultural frameworks, and for another, to re-embody abstract or symbolically dense TCM expressions in ways that evoke sensory and cultural associations.

More specifically, when source-language terms lack perceptible cultural or bodily referents in the target culture and may not readily engage readers' sensorimotor or conceptual schemas, translators are encouraged to actively reconstruct an embodied understanding. This may involve cultural explanation, analogical reasoning based on bodily functions, or symbolic cues, allowing readers to simulate the philosophical and physiological worldview embedded in the source expression through their own experiential frameworks.

To implement the cultural grounding strategy, this paper proposes a composite approach termed cultural-embodied explication, which integrates several complementary methods. First, embodied explicitation involves articulating the bodily sensations, energetic processes, or physiological mechanisms implied in the source term by using lexicon rich in perceptual or functional resonance, such as flow, pressure, essence, or surge, to enhance the somatic vividness of the translation. Second, cultural mapping compensation addresses the absence of equivalent philosophical or somatic frameworks in the target culture by introducing analogous concepts (e.g., energy flow, yin-yang balance, meditative vitality) that provide accessible interpretive bridges. Third, metaphorical analogy and contextual elaboration entail supplementing the translation with metaphorical parallels or brief contextual explanations to reconstruct the cultural and embodied meanings underlying the original expression.

This paper proposes a revised translation of 命门之水 (mìng mén zhī shuǐ) as "the yin essence stored at the life-gate, a vital reservoir of cooling energy that balances the body's inner fire".
Unlike the WHO's literal translation "the water of the gate of life", which maintains lexical equivalence but may not comprehensively engage the target reader's cultural-bodily schema, the revised version employs the strategy of cultural grounding through culturally and somatically enriched explication. Specifically, the term "yin essence" explicitly introduces the somatic and cosmological dimension of 水 (shuǐ, "water"), which in TCM refers not to physical water but to an abstract, nourishing, cooling substance essential for sustaining life. The phrase "stored at the life-gate" locates 命门 (mìng mén) as a symbolic and functional center rather than an anatomical site, while the elaboration "a vital reservoir of cooling energy that balances the body's inner fire" draws on familiar metaphors of energy balance and inner equilibrium to evoke the yin-yang dynamic. This contextual elaboration helps readers simulate the embodied sensations of thermal regulation and energetic flow, making the expression experientially intelligible. Instead of merely conveying lexical meaning, the translation incorporates embodied explicitation (e.g., "yin essence", "cooling energy"), cultural mapping compensation (e.g., analogies to energy centers or balance models in holistic health), and contextual elaboration to anchor the source term in culturally resonant schemas. This multidimensional approach helps to restore the vividness and philosophical depth of the terminology, thereby eliminating the semantic suspension and reactivating the interpretative channels necessary for the realization of embodied empathy and cross-cultural understanding.

To summarize, the three proposed strategies of sensory activation, affective reenactment and cultural grounding constitute an organic theoretical framework for addressing the challenges of embodiment in TCM translation.
These approaches emphasize sensory immediacy, emotional resonance and culturally embedded bodily schemas, transcending the singular pursuit of semantic equivalence. Such a shift transforms translation into a medium of cross-cultural embodiment, enabling readers to intellectually comprehend and viscerally engage with the conceptual and philosophical logic of TCM.

Embodied Empathy Framework in TCM Translation: Theoretical Contributions, Practical Implications, and Scope Limitations

The theoretical innovation of this paper lies in extending the concept of embodiment from cognitive linguistics into the field of translation studies, with a particular focus on the cross-cultural translation of TCM terminology. Centered on the tripartite coupling of language, body, and culture, the proposed framework of embodied empathy in translation offers an operational model that moves beyond conventional applications of embodiment, typically limited to everyday verbs or sensory expressions, and demonstrates its relevance in highly abstract and culturally embedded professional discourse. While sharing the common objective of enhancing the accuracy and effectiveness of TCM translation, this paper seeks to contribute to ongoing efforts by identifying specific areas where embodied mechanisms may enrich cross-cultural comprehensibility. This paper fills a critical gap in current translation studies by uncovering the somatic schemas and experiential underpinnings embedded in even the most technical TCM terminology, which in turn opens up a novel pathway for reconstructing experiential depth and emotional expressivity in cross-cultural medical discourse.

The findings of this paper resonate with and extend several key arguments in the field of TCM translation.
Notably, Li's (1996, 2008) advocacy of the principle of ethnic specificity underscores the epistemological incommensurability between Chinese and Western medical paradigms, emphasizing the importance of preserving the culturally embedded worldview of TCM terminology. The analysis in this paper echoes these viewpoints, pointing out that embodied empathy helps translators retain the emotional and bodily dimensions of culturally specific terms in the translation process, thus reinforcing their unique conceptual identities in the target language. Meanwhile, the cultural schema theory discussed by Zhou and Wang (2013) is further substantiated in this paper. By highlighting the role of bodily simulation in translation reception, this paper presents the dynamic activation mechanism of cultural knowledge in the process of cross-cultural comprehension from a sensorimotor perspective. Moreover, the reader-oriented approach proposed in this paper complements the emphasis on translators' cognitive differences discussed by Li and Sang (2023) by focusing on how linguistic strategies can help non-Chinese readers simulate unfamiliar embodied experiences. In line with Yu et al.'s (2022) concept of the balanced application of foreignization and domestication strategies, the embodied empathy framework proposed in this paper provides a hybrid path of cultural specificity and acceptability for translation practice.
Finally, this paper also responds to Xie's (2000) call for greater international collaboration in the standardization of TCM terminology, providing a new theoretical perspective for Chinese and global readers to reach a shared experiential understanding through translation.

Although the translation strategies proposed in this paper may appear to diverge from established professional norms, or even conflict with the standardization principles advocated in the WHO International Standard Terminologies on Traditional Chinese Medicine (2022), they nonetheless offer significant theoretical and practical value. A close analysis of the handbook's structure reveals a multilayered presentation of terms. The "English term" column usually adopts highly abstract and academic expressions, which often lack embodied resonance and are less likely to stimulate the sensory engagement of target-language readers. In contrast, the "Synonyms" column tends to adopt more vivid expressions with sensory imagery, dynamic metaphors, and affective undertones, reflecting an intention to retain the dimension of embodied meaning in the translation process. Taking "伤寒类病, shāng hán lèi bìng" as an example, the "English term" column translates it as "cold damage", which is semantically close to the literal meaning but may not fully capture the cultural specificity or experiential immediacy associated with the term in TCM contexts, especially in reference to the core embodied experience of acute febrile illnesses induced by exogenous cold pathogens. In contrast, the translations provided in the "Synonyms" column, such as "exogenous febrile disorders" and "cold injury", are more effective in activating readers' cultural schemas and bodily imaginations. Among them, "exogenous febrile disorders" highlights the characteristic acute fever of exogenous diseases, while "cold injury" draws on people's more intuitive associations of cold as a cause of illness.
Similarly, "胃火证, wèi huǒ zhèng" is expanded to "stomach heat exuberance pattern" and "excess stomach heat pattern" in the "Synonyms" column, which more concretely portray the physical characteristics of internal heat exuberance.

The above examples reveal a potential compensatory mechanism commonly observed in terminology systems such as those of the WHO, ISO, and WFCMS, which attempt to balance the expressive tensions of embodied language with the technical demands of terminological standardization. Specifically, compilers often include complementary expressions in the "Synonyms" column to enhance both conceptual clarity and perceptual accessibility. This practice suggests that translating TCM's highly symbolic and experiential lexicon requires more than rigid adherence to standard equivalence; it calls for a flexible approach that negotiates between embodied empathy and regulatory consistency. A notable example of this balancing act is the WHO's standardized term 峻下逐水药 (jùn xià zhú shuǐ yào), rendered as "drastic water-expelling medicines". Here, "drastic" aptly conveys the forceful purgative effect, while "water-expelling" specifies its function. Yet from the perspective of embodied empathy, this translation remains relatively clinical, potentially falling short of evoking the visceral intensity and dynamic bodily impact traditionally linked to powerful, aggressive purgation. This example illustrates a partially successful integration of standardization with experiential resonance, while also highlighting opportunities for further enriching TCM translations through the enhancement of sensory and affective dimensions.

Though this paper constructs a theoretical framework for the translation of TCM terminology based on embodied empathy and proposes three corresponding strategies, several limitations remain and warrant further discussion.
To begin with, the number of TCM terms analyzed is relatively small, focusing chiefly on a few representative diagnostic expressions. As such, the findings may not fully capture the overall complexity of the TCM terminological system. However, given that the primary objective of this paper is to propose and preliminarily validate the applicability of embodied empathy theory in the context of TCM translation, the use of illustrative cases serves to clearly demonstrate the translation challenges and responsive strategies associated with embodied dimensions. Additionally, TCM comprises a wide range of text types, including classical theoretical texts, clinical guidelines, pharmacopeias, and popular science materials, each with distinct communicative functions and embodied representational features, as well as differing audience expectations (Luo & Deng, 2017). This paper has not yet explored how these variations may necessitate differentiated translation strategies, which will be addressed in future research.

Lastly, this paper principally focuses on intercultural communication targeting general, non-specialist readers, with an emphasis on fostering experiential resonance and perceptual engagement in translation. It does not directly address issues related to terminological adaptation or clinical applicability within professional medical contexts. Nevertheless, it is acknowledged that the practical demands of standardization, especially in clinical or regulatory environments, require translators to carefully navigate the balance between preserving embodied meaning and ensuring terminological consistency.

In such contexts, translators may adopt a layered approach: maintaining standardized terminology for core diagnostic or regulatory expressions, while enriching surrounding explanatory content or supplementary materials with embodied, culturally resonant language.
This dual strategy allows for both compliance with professional norms and the retention of experiential depth. Introducing embodiment as a complementary evaluative dimension thus enhances existing translation paradigms centered on semantic equivalence and standardization, while also offering insights into the communicative transition between expert discourse and public understanding. This paper has proposed an embodied empathy-oriented framework for the cross-cultural translation of TCM terminology, addressing three core challenges: sensorimotor attenuation, affective disconnection, and cultural incommensurability. In response, it introduces three targeted translation strategies (sensory activation, affective reenactment, and cultural grounding), each designed to retain the perceptual, emotional, and culturally situated dimensions of TCM discourse. These strategies aim to move beyond semantic equivalence, advocating for a translation approach that reactivates the embodied experiences encoded in the source text. The restoration of somatic imagery, emotional intensity, and culturally resonant schemas enables the model to foster deeper experiential engagement and improve the target reader's empathetic and cognitive access to TCM concepts.

While this approach may depart from the terminological standardization logic emphasized in institutional guidelines such as the WHO 2022 manual, it reveals the interpretive and experiential gaps that can arise from overly abstract or clinical renderings.
In this light, this paper offers both a theoretical extension of embodiment theory into the field of translation studies and a practical response to the growing need for culturally sensitive, experientially rich translations in global TCM communication. It argues for a more dynamic balance between professionalization and perception, between standardization and resonance, providing a foundation upon which future interdisciplinary and reader-oriented research may continue to build.

Although limited in scope and reader context, the findings highlight the importance of balancing technical consistency with embodied intelligibility. Building on this paper, future research may proceed along several fruitful lines. First, empirical validation is needed to assess the effectiveness of embodied empathy in translation reception. This could involve reader response studies, eye-tracking methodologies, or sensory perception experiments to examine how different translation strategies impact cognitive and affective engagement. Moreover, comparative studies across languages and cultures could investigate the diversity of bodily metaphors and perceptual schemas, contributing to a more inclusive and empirically grounded model of linguistic embodiment."

https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1618531/abstract #metaglossia_mundus
"Alexander Dolitsky: Why cross-cultural literacy matters more than ever
By SENIOR CONTRIBUTOR,
Cross–cultural communication requires knowledge of how people from different cultures communicate with each other. Studying other languages helps us understand what people and societies have in common, and it has profound implications for developing a critical awareness of social relationships. Indeed, understanding these relationships and the way other cultures function is the groundwork of successful business, foreign affairs, and interpersonal relationships.
Elements of language are culturally relevant and should be considered. There are, however, several challenges that come with language socialization. Sometimes people over-generalize or label cultures with stereotypical and subjective characterizations. For instance, one may stereotype by saying that Americans eat hamburgers and French fries at McDonald's daily, and that Russians eat borshch (beet and cabbage soup) for breakfast and drink vodka before bedtime. Both stereotypes are far from the truth.
With increasing international trade and travel, it is unavoidable that different cultures will meet, conflict, cooperate and blend together. People from different cultures often find it difficult to communicate, not only due to language barriers but also because of differing cultures, styles, customs, and traditions. These differences contribute to some of the biggest challenges of effective cross–cultural communication.
Cultures provide people with ways of thinking, seeing, hearing, behaving, understanding and interpreting the world. Thus, the same words or gestures can mean very different things to people from different cultures—even when they speak the same language (e.g., Canada, Australia, New Zealand, England, South Africa and the United States).
The quote “Two nations divided by a common language,” often attributed to George Bernard Shaw, highlights the differences in vocabulary, pronunciation, and cultural nuances that can exist between speakers of the same language. When languages are different, however, and translation is needed just to communicate, the potential for misunderstandings significantly increases.
From the mid–1980s to the early 2000s, I was an unofficial Russian translator in Alaska for the US and State of Alaska governments, as well as for various public institutions and private individuals. The most challenging aspect of translation was relaying specialized terminology, such as that used by the US Coast Guard, medical professionals, and political protocol, and, especially, jokes and humorous expressions. Often, I had to provide cultural and historical background before translating a joke.
Once, a member of the Russian delegation, in an informal setting over dinner, told a joke to his Alaskan counterparts:
“Archaeologists found an ancient sarcophagus in Egypt with human-made artifacts and skeletal remains. Experts around the world thoroughly investigated this finding to identify the person buried in the sarcophagus but had no success. So, they invited a KGB (Soviet Committee for State Security) agent, Major Ivan Ivanov, to investigate the matter. Major Ivanov spent nearly three hours in solitude with the skeleton and, finally, with confidence in his voice, reported to the archaeologists that the remains belonged to the Egyptian Pharaoh Ramses the Second. The archaeologists were impressed by this quick revelation and asked Ivanov, “How certain are you of this remarkable conclusion?” Ivanov replied with great pride, ‘After three hours of bulldozer interrogation, the skeleton itself revealed his identity to me!’”
The Russian jokester was a large, broad-shouldered man, his voice deep and curt. No one among the Alaskan delegation laughed after hearing the joke. They sat still at the table, holding crystal shots of vodka, and just stared with alarm at the joke-teller.
I had to provide the Alaskans with some background about the notorious brutality of the Soviet KGB. Unfortunately, in the process of explaining the joke, the humor disappeared.
In teaching Russian language at the University of Alaska Southeast for 16 years, my very first message to students was to emphasize that a language must always be understood and learned in a cultural context. As an example, I shared with them a personal and rather humorous story of my arrival in the United States, in Philadelphia, during the winter of 1978.
In the early years of my immigration, I watched a lot of TV to learn English, American traditions and lifestyles. Many advertisements described food items and dishes, including various salads, using the word “delicious.”
It was a new experience for me because there were no TV ads for commercial products in the former Soviet Union due to a lack of commercial competition. The government controlled standardized prices for commercial products throughout the entire country.
So, I understood the word “delicious” as a name of the salad (a noun) rather than the quality of the salad (an adjective). In fact, food dishes have a particular name in Russia — Chicken Kiev, Salad Stolichniy (salad capital), Borshch (beet and cabbage soup), Beef Stroganoff (meat stew), Blini (Russian for pancakes), etc.
Later that year, my uncle from Canada, accompanied by his wife and daughter, visited me in Philadelphia. As a welcome greeting to America, they invited me to a fancy restaurant downtown. When the waiter asked for my order, I requested a steak, shot of vodka and “delicious” salad, hoping my order would match the “delicious” salad that I had seen on TV.
The puzzled waiter leaned slightly and whispered to me, “Sir, all our food is delicious.” Then, I clarified to the waiter, “I want a delicious salad.” The confused waiter served me a cabbage with mustard.
So, that evening in the fancy restaurant, I enjoyed a delicious steak and stuffed myself with a cut-in-half cabbage with mustard. This was a prime lesson in cross-cultural miscommunication.
Indeed, the demographics and cultural complexity of our nation are changing rapidly. It is only a matter of time before ethnic minorities take the lead in shaping the cultural and ethnic landscape of our nation and eventually become a significant majority. These demographic and cultural changes are unavoidable. However, our society should learn to make inclusive and, yet, conservative cross–cultural adjustments without undermining the fundamental core of American Judeo–Christian religious, cultural and moral values."
https://www.newsbreak.com/must-read-alaska-560697/4119970336659-alexander-dolitsky-why-cross-cultural-literacy-matters-more-than-ever
#metaglossia_mundus
"Localization Translation Tools Market to Grow at 9.8% CAGR with Innovations from Lokalise, MotionPoint, GlobalLink, Weglot, RWG, Phrase, Transifex and Alconost
07-20-2025 05:29 AM CET | Advertising, Media Consulting, Marketing Research
Press release from: STATS N DATA
The Localization Translation Tools market is experiencing unprecedented growth as businesses increasingly recognize the importance of reaching diverse audiences across the globe. With the rapid expansion of global commerce and the digital economy, the demand for effective Localization Translation Tools has surged. These tools encompass a wide range of applications, including Translation Management Systems (TMS), Computer-Assisted Translation (CAT) Tools, and Globalization Management Systems (GMS), which facilitate the accurate and efficient translation of content in multiple languages.
In recent years, several catalysts have fueled the growth of this market. Technological breakthroughs, particularly in artificial intelligence (AI) and machine learning, have revolutionized the way translations are performed. AI-powered Localization Tools, such as Neural Machine Translation (NMT) Software, have greatly enhanced translation accuracy and speed, making it easier for companies to localize their content. Additionally, strategic partnerships between technology providers and language service providers (LSPs) have expanded the capabilities and reach of localization solutions.
As businesses navigate the complexities of globalization, actionable insights are crucial for executives, investors, and decision-makers. Understanding the dynamics of the Localization Software Market, including emerging trends and potential challenges, will enable organizations to make informed decisions that drive growth and enhance customer engagement.
You can access a sample PDF report here: https://www.statsndata.org/download-sample.php?id=82498
The Localization Translation Tools market is experiencing significant growth, driven by the increasing globalization of businesses and the rising demand for multilingual content across various sectors. As organizations expand their reach into international markets, the need for efficient and effective translation solutions becomes paramount.
This market is projected to grow at a compound annual growth rate (CAGR) of 9.8% from 2025 to 2032, reflecting a robust trend toward the adoption of advanced localization technologies. Factors contributing to this growth include the acceleration of digital transformation, the proliferation of e-commerce, and the necessity for businesses to communicate with diverse customer bases in their native languages.
Additionally, the emergence of artificial intelligence and machine learning is enhancing the capabilities of localization tools, allowing for more accurate and contextually relevant translations. As companies increasingly prioritize customer experience and engagement, the demand for high-quality localization services becomes critical.
By 2032, the Localization Translation Tools market is expected to surpass a valuation of several billion dollars, underscoring its vital role in facilitating effective communication in an interconnected world. The integration of innovative features such as real-time translation, collaboration tools, and automated workflows further supports the market's expansion, enabling organizations to streamline their localization processes and achieve faster time-to-market for their products and services.
Overall, the Localization Translation Tools market is poised for dynamic growth, driven by technological advancements and the increasing importance of cultural adaptation in global business strategies.
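As a quick sanity check on the headline figure, a 9.8% CAGR compounds to roughly a 1.9x increase over the seven years from 2025 to 2032. A minimal sketch of the arithmetic (the base value of 100 is an illustrative index, not a figure from the report):

```python
def project_cagr(base: float, rate: float, years: int) -> float:
    """Compound a base value at a fixed annual growth rate (CAGR)."""
    return base * (1 + rate) ** years

# Illustrative only: index the 2025 market at 100 and apply the 9.8% CAGR
# cited in the press release over the 7 years to 2032. The result is about
# 192, i.e. the market roughly doubles over the forecast horizon.
print(round(project_cagr(100.0, 0.098, 7), 1))
```

The same formula run in reverse (taking the years-th root of the ratio of end to start values) is how a CAGR figure like 9.8% is derived from two market valuations in the first place.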
Several key drivers are shaping the Localization Translation Tools market. One of the foremost is the growing emphasis on sustainability. Companies are now more conscious of their environmental impact and are seeking tools that not only facilitate localization but also promote sustainable practices. Digitization is another significant driver, as businesses increasingly move towards digital platforms that necessitate quick and efficient localization to cater to global audiences.
Shifting consumer expectations are also transforming the landscape. Today's consumers demand personalized experiences in their native languages, prompting businesses to invest in high-quality Localization Translation Tools. The integration of AI into localization processes has further accelerated this trend, with solutions like Workflow Automation for Localization and Real-time Collaboration Translation Tools becoming essential in meeting consumer demands.
Emerging technologies, such as predictive analytics and big data analytics for localization, are providing businesses with deeper insights into market trends and consumer preferences. Organizations are leveraging these insights to enhance their Localization Quality Assurance (QA) Tools, ensuring that translations are not only accurate but also contextually relevant.
Market Segmentation
The Localization Translation Tools market can be segmented based on type and application, allowing for a comprehensive understanding of its various components.
Segment by Type:
• Translation Management System (TMS): These platforms streamline the localization process by managing translation projects, workflows, and resources, enabling efficient collaboration among teams.
• Computer-Assisted Translation (CAT) Tools: These tools assist translators by providing translation memory (TM) and terminology management capabilities, enhancing consistency and speed in translations.
Segment by Application:
• Website Localization: Adapting websites to meet the language and cultural preferences of local audiences.
• Software Localization: Customizing software applications to ensure usability and functionality in different languages and regions.
• Business Localization: Tailoring business communications and documentation for international markets.
• Game Localization: Modifying video games to resonate with local cultures and languages.
• Others: This category includes various specialized localization applications, such as multimedia localization, e-learning localization, and legal localization tools.
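The translation memory (TM) lookup that underpins CAT tools can be illustrated in a few lines. The sketch below is a simplified, hypothetical example using Python's standard-library difflib for fuzzy matching; the stored segments and the 0.75 match threshold are invented for illustration, and production CAT tools use far more sophisticated segmentation, alignment, and scoring:

```python
from difflib import SequenceMatcher

# Hypothetical translation memory: previously translated source segments
# mapped to their stored target-language translations.
TM = {
    "Click the Save button": "Cliquez sur le bouton Enregistrer",
    "Your changes have been saved": "Vos modifications ont été enregistrées",
}

def tm_lookup(segment: str, threshold: float = 0.75):
    """Return (stored translation, similarity score) for the closest TM
    entry at or above the threshold, or None if nothing is close enough."""
    best_target, best_score = None, 0.0
    for source, target in TM.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score > best_score:
            best_target, best_score = target, score
    return (best_target, best_score) if best_score >= threshold else None

# An identical segment is an exact (100%) match; a near-duplicate comes back
# as a "fuzzy" match that the translator can post-edit rather than retranslate.
print(tm_lookup("Click the Save button"))
print(tm_lookup("Click the Cancel button"))
```

The consistency and speed gains attributed to CAT tools come from reusing stored translations for exact and fuzzy matches like these; terminology management adds a glossary check on top of the same lookup.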
Get 30% Discount On Full Report: https://www.statsndata.org/ask-for-discount.php?id=82498
Competitive Landscape
The Localization Translation Tools market is characterized by a competitive landscape featuring several key players that are driving innovation and offering diverse solutions to meet the needs of businesses. Prominent companies include:
• Lokalise: Known for its user-friendly cloud-based localization platform, Lokalise focuses on workflow automation and real-time collaboration, making it a popular choice for agile localization.
• MotionPoint: Specializing in website localization, MotionPoint uses AI-driven solutions to help businesses expand their online presence in multiple languages effectively.
• GlobalLink: A comprehensive localization solution, GlobalLink offers a suite of tools for managing translation projects and integrating with various content management systems (CMS).
• Weglot: Weglot provides a seamless website localization solution that allows businesses to translate and manage their content in multiple languages effortlessly.
• RWG: RWG focuses on providing customized localization solutions and has developed partnerships with various technology providers to enhance its offerings.
• Phrase: Known for its API-first localization platform, Phrase enables businesses to integrate localization seamlessly into their development workflows.
• Transifex: This platform offers cloud-based localization solutions that cater to software, mobile apps, and multimedia content, ensuring rapid translations.
• Alconost: Alconost specializes in game localization and provides tailored solutions for developers looking to reach global audiences.
• Unbabel: Unbabel combines AI and human translation to deliver high-quality translations for customer support and business communications.
• Gridly: Gridly offers a unique approach to localization through its spreadsheet-like interface, allowing for easy management of translation projects.
• memoQ: A robust CAT tool, memoQ provides advanced features for translation memory, terminology management, and QA, making it a favorite among professional translators.
• LingoHub: LingoHub focuses on collaborative localization, offering tools for teams to work together efficiently on translation projects.
• OneSky: OneSky provides an all-in-one localization platform that caters to developers, marketers, and translators, facilitating quick and effective localization.
• Wordbee: Wordbee offers a comprehensive suite of localization tools, including project management and translation memory capabilities.
• Alocai: Alocai focuses on AI-powered localization solutions, enabling businesses to automate and enhance their translation processes.
• SandVox: SandVox provides specialized tools for website localization, ensuring that businesses can effectively reach international audiences.
Each of these players is actively launching new products, expanding their services, or forming strategic partnerships to enhance their market position and address evolving customer needs.
Localization Translation Tools Market: Transforming Challenges into Triumphs
In the fast-paced world of global commerce, a leading player in the localization translation tools market faced a daunting challenge that threatened to disrupt its operations and erode its competitive advantage. As businesses increasingly expanded their reach into diverse international markets, the demand for swift and accurate translation services surged. However, the key player found itself grappling with an outdated translation workflow that was not only time-consuming but also prone to errors. With languages evolving and cultural nuances becoming increasingly complex, their existing processes struggled to keep pace, leading to delays in product launches and customer dissatisfaction. The pressure mounted as competitors began to leverage more advanced technologies, leaving the key player at a critical crossroads where the need for innovation became imperative to survive and thrive.
Recognizing the urgency of the situation, the key player turned to comprehensive data analysis to uncover insights that would illuminate a path forward. STATS N DATA, a leader in market analysis and strategic consulting, stepped in to dissect the existing workflows and pinpoint inefficiencies. Through a meticulous examination of data patterns and user feedback, they developed a ground-breaking strategy that would revolutionize the player's approach to localization. This strategy included the integration of advanced machine translation technologies, enhanced collaboration tools for linguists, and the implementation of a continuous feedback loop that captured customer insights in real-time. By advocating for a more agile and collaborative translation process, STATS N DATA empowered the key player to embrace innovation while maintaining the human touch that is often essential in localization.
The results of this transformative strategy were nothing short of remarkable. Within just a few months of implementation, the key player reported a significant increase in market share, reclaiming its position as a leader in the localization translation tools market. The efficiency of the translation process improved dramatically, with project turnaround times decreasing by nearly 40%. This newfound agility allowed the company to launch products in new markets faster than ever before, ultimately leading to a staggering 25% increase in revenue year-over-year. Moreover, customer satisfaction scores soared as clients praised the accuracy and cultural relevance of the translations. The key player not only navigated its way through a potential crisis but emerged stronger and more competitive, illustrating the profound impact that strategic data-driven decisions can have in the localization translation tools market.
The Localization Translation Tools market presents numerous opportunities for growth, particularly in untapped niches and evolving buyer personas. As globalization continues to expand, businesses are seeking comprehensive localization solutions that cater to specific industries, such as legal, medical, and marketing localization. By identifying these niches, companies can offer customized Localization Solutions that meet the unique needs of different sectors.
However, challenges remain, including regulatory hurdles that complicate localization efforts in certain regions. Businesses must navigate varying language regulations and compliance requirements, which can pose significant obstacles. Additionally, supply-chain gaps, particularly in the availability of skilled translators, can hinder effective localization. To address these challenges, organizations need to invest in training and development programs for translators and explore innovative solutions such as Machine Translation Post-Editing (MTPE) Tools to improve efficiency.
Technological Advancements
Technological advancements are reshaping the Localization Translation Tools market, driving innovation and enhancing capabilities. AI technologies play a pivotal role in this transformation. AI-powered Localization Tools streamline translation processes, reduce turnaround times, and improve accuracy. Digital twins and virtual reality applications are emerging as valuable tools for localization, particularly in industries like gaming and e-learning.
The Internet of Things (IoT) is also influencing localization strategies, as smart devices and wearables require tailored content for diverse users. Additionally, blockchain technology is gaining traction in the localization space, offering secure and transparent solutions for managing translation workflows and ensuring data integrity.
As these technologies continue to evolve, they will further enhance the capabilities of Localization Translation Tools, enabling businesses to meet the demands of a rapidly changing global marketplace.
Research Methodology and Insights
At STATS N DATA, our research methodology combines top-down and bottom-up approaches to provide robust insights into the Localization Translation Tools market. We employ primary data collection methods, including surveys and interviews with industry experts, alongside secondary data analysis of market trends and developments.
Our multi-layer triangulation process ensures that our insights are reliable and comprehensive. By synthesizing data from various sources, we deliver actionable intelligence that empowers businesses to make informed decisions in the Localization Software Market.
In conclusion, the Localization Translation Tools market is poised for significant growth, driven by technological advancements, shifting consumer expectations, and the increasing importance of effective localization strategies. As organizations continue to prioritize global engagement, the demand for high-quality Localization Translation Tools will only increase, offering ample opportunities for innovation and development in this dynamic landscape.
For customization requests, please visit: https://www.statsndata.org/request-customization.php?id=82498
Q: What are the best localization translation tools?
A: The best localization translation tools vary based on specific needs and preferences, but some of the most widely recognized tools include SDL Trados Studio, memoQ, Smartling, Lokalise, and Transifex. SDL Trados Studio is favored for its advanced features and extensive translation memory capabilities, making it popular among professional translators. memoQ offers strong collaboration features and is praised for its user-friendly interface. Smartling is known for its cloud-based solution and seamless integration with various content management systems. Lokalise excels in managing translations for mobile and web applications, while Transifex is designed for developers and focuses on continuous localization. Each tool has its strengths, so the best choice depends on the specific requirements of the project.
Q: How does AI impact localization software?
A: AI significantly impacts localization software by enhancing translation quality and efficiency. Machine learning algorithms can improve machine translation (MT) results, helping to produce more fluent and contextually accurate translations. AI-driven tools can also assist in automating repetitive tasks, such as content extraction and formatting, which speeds up the localization process. Additionally, AI can analyze user data to provide insights into translation preferences and trends, enabling more tailored localization efforts. The integration of AI allows for continuous improvement of translation memories and glossaries, as the system learns from past translations and user feedback, ultimately leading to better consistency and quality.
Q: What is the role of a TMS in localization workflows?
A: A Translation Management System (TMS) plays a crucial role in localization workflows by centralizing and streamlining the translation process. It serves as a platform where project managers can upload content, assign tasks to translators, and track progress in real time. A TMS typically integrates with other tools such as CAT (Computer-Assisted Translation) tools, machine translation engines, and content management systems, which enhances collaboration and efficiency. By organizing projects, managing resources, and facilitating communication among stakeholders, a TMS helps ensure that projects are completed on time and within budget, while maintaining high-quality standards.
Q: How can CAT tools improve translation efficiency?
A: Computer-Assisted Translation (CAT) tools enhance translation efficiency through features such as translation memory, terminology management, and collaborative functionalities. Translation memory stores previously translated segments, allowing translators to reuse existing translations, which saves time and increases consistency. Terminology management helps ensure that specific terms are translated uniformly across projects, reducing ambiguity and errors. Many CAT tools also offer collaborative features that allow multiple translators to work on the same project simultaneously, which accelerates the translation process. Additionally, CAT tools often include quality assurance checks, which help identify and rectify errors before finalizing translations.
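As a concrete illustration of the translation-memory idea described in this answer, here is a minimal sketch of a fuzzy lookup against stored segments using Python's standard-library difflib. It is not taken from any particular CAT tool; the sample segments and the 0.8 similarity threshold are illustrative assumptions:

```python
import difflib

def tm_lookup(segment, memory, threshold=0.8):
    """Return the best translation-memory match for a source segment.

    `memory` maps previously translated source segments to their targets.
    Returns (matched_source, target, score), or None if no stored
    segment clears the similarity threshold.
    """
    best = None
    for src, tgt in memory.items():
        score = difflib.SequenceMatcher(None, segment, src).ratio()
        if score >= threshold and (best is None or score > best[2]):
            best = (src, tgt, score)
    return best

# Illustrative English-to-French memory
memory = {
    "Save your changes": "Enregistrez vos modifications",
    "Delete this file": "Supprimez ce fichier",
}

# A near-match is found and its stored translation offered for reuse
print(tm_lookup("Save all your changes", memory))
```

Real CAT tools use far more sophisticated segment indexing, but the principle is the same: a close-enough prior translation is surfaced for reuse rather than retranslated from scratch.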
Q: What are the benefits of cloud-based localization platforms?
A: Cloud-based localization platforms offer numerous benefits, including accessibility, scalability, and cost-effectiveness. Since they are hosted in the cloud, users can access them from anywhere with an internet connection, facilitating remote collaboration among teams spread across different locations. These platforms can easily scale to accommodate varying project sizes, allowing businesses to add or reduce resources as needed. Furthermore, cloud-based solutions typically require lower upfront costs since they operate on a subscription model, making them more affordable for companies of all sizes. They also often come with built-in security measures, ensuring that sensitive content is protected during the localization process.
Q: How do localization tools handle different file formats?
A: Localization tools are designed to handle a wide variety of file formats, including but not limited to XML, JSON, HTML, and various document formats such as Microsoft Word and Excel. These tools use specific import and export functions to extract translatable text from files while preserving the original formatting. Many localization platforms have built-in capabilities to manage localization for software and applications, which require specific file types for strings and resources. By supporting multiple file formats, localization tools enable teams to work efficiently with diverse content types, ensuring that all materials are localized accurately and consistently.
Q: What are the essential features of a comprehensive localization solution?
A: A comprehensive localization solution should include several essential features to effectively manage the translation process. Firstly, it should have a robust translation memory system to store and reuse previous translations, which helps maintain consistency. Secondly, strong terminology management is crucial for ensuring that specific terms are used correctly across projects. User-friendly collaboration tools are also important, allowing translators, project managers, and clients to communicate effectively. Additionally, the solution should support various file formats and integration capabilities with other systems, such as content management systems and TMS. Quality assurance features, reporting tools, and analytics are also vital for tracking progress and ensuring high translation quality.
Q: How to choose the right localization tool for an enterprise?
A: Choosing the right localization tool for an enterprise involves several steps. First, assess the specific needs of your organization, including the types of content you need to localize, the languages required, and the volume of translation work. Next, consider the features offered by various tools, such as translation memory, collaboration capabilities, and support for different file formats. Evaluate the integration capabilities with other tools your organization uses, such as content management systems and project management software. It is also important to consider user experience and the learning curve for your team. Finally, compare pricing models and ensure that the selected tool fits within your budget while meeting your localization goals.
Q: What is the cost of localization software?
A: The cost of localization software can vary widely based on the features offered, the scale of use, and the pricing model. Many localization tools operate on a subscription basis, with monthly or annual fees that can range from a few hundred to several thousand dollars, depending on the number of users and the level of service. Some tools may also charge based on usage, such as per word translated or per project. Enterprise-level solutions often require custom pricing based on the specific needs and scale of the organization. It is important to evaluate the total cost of ownership, which includes not just the subscription fees but also potential costs for training, support, and integration with existing systems.
Q: How do localization tools ensure quality and consistency?
A: Localization tools ensure quality and consistency through several mechanisms. Most tools include translation memory, which helps maintain consistency by allowing translators to reuse previous translations for identical or similar segments. Terminology management systems also play a key role by ensuring that specific terms are translated uniformly across different projects. Quality assurance features, such as automated checks for errors, inconsistencies, and adherence to style guides, help identify issues before finalization. Additionally, many tools facilitate peer reviews and collaboration, allowing translators to provide feedback on each other's work. By integrating these features, localization tools help uphold high standards of quality and consistency throughout the translation process.
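One of the automated quality-assurance checks mentioned above, verifying that placeholders survive translation, can be sketched as follows. The placeholder pattern here ({name}-style and %s/%d tokens) is an illustrative assumption; real QA modules support many more formats:

```python
import re

# Matches {name}-style and printf-style (%s, %d) placeholders
PLACEHOLDER = re.compile(r"\{[^}]+\}|%[sd]")

def check_placeholders(source, target):
    """Flag a translation whose placeholders do not match the source's.
    Returns a list of problems; an empty list means the pair passes."""
    src = sorted(PLACEHOLDER.findall(source))
    tgt = sorted(PLACEHOLDER.findall(target))
    if src != tgt:
        return [f"placeholder mismatch: source {src} vs target {tgt}"]
    return []

print(check_placeholders("Hello, {name}!", "Bonjour, {name} !"))  # passes
print(check_placeholders("{count} files", "fichiers"))            # flagged
```

Checks like this run automatically on every segment, catching errors that are easy for a human reviewer to miss.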
Q: What are the latest trends in the localization translation tools market?
A: The localization translation tools market is experiencing several notable trends. One significant trend is the increasing adoption of AI and machine learning technologies, which are enhancing machine translation capabilities and automating various aspects of the localization process. Another trend is the rise of cloud-based solutions, which allow for greater flexibility, scalability, and collaborative features. Additionally, there is a growing focus on continuous localization, particularly in software development environments, where updates and new features need to be localized rapidly. The demand for localization tools that support agile methodologies is also increasing, as businesses seek to streamline their workflows. Finally, there is a heightened emphasis on user experience, with tools becoming more user-friendly and accessible to non-experts.
Q: How does machine translation integrate with localization tools?
A: Machine translation (MT) integrates with localization tools through built-in engines or API connections, allowing for automated translation of content as part of the localization workflow. Many localization platforms offer options to use various MT engines, such as Google Translate or Microsoft Translator, to generate initial translations. These translations can then be refined by human translators, who can leverage translation memory and terminology management features to improve accuracy and consistency. The integration of MT helps speed up the localization process, particularly for large volumes of content, while still allowing for human oversight to ensure quality. This hybrid approach is becoming increasingly common in modern localization strategies.
Q: What are the challenges in the localization software market?
A: The localization software market faces several challenges. One major challenge is the need for tools that can handle the increasing complexity of global content, including varied languages, cultural nuances, and different file formats. Additionally, ensuring quality and consistency across translations remains a persistent issue, particularly as the demand for rapid localization grows. Another challenge is the integration of new technologies, such as AI and machine learning, which require ongoing development and adaptation. Furthermore, many organizations still face internal resistance to adopting new localization tools or processes, often due to a lack of understanding of their benefits. Finally, maintaining data security and compliance with various regulations poses additional challenges for localization software providers.
Q: How do localization tools support agile development?
A: Localization tools support agile development by enabling continuous localization practices that align with the iterative nature of agile methodologies. They allow for quick updates and translations of new features or content, which is essential for maintaining pace with rapid development cycles. Many localization platforms offer integration with version control systems and development environments, facilitating seamless updates and localization of content as it changes. Collaboration features enable cross-functional teams, including developers, designers, and translators, to work together efficiently. Additionally, the use of translation memory and automated workflows helps streamline the localization process, reducing bottlenecks and ensuring timely delivery of localized content.
Q: What are the key drivers for growth in the localization market?
A: Key drivers for growth in the localization market include the increasing globalization of businesses and the rising demand for localized content across various industries. As companies expand into new markets, they require localization solutions to adapt their products and communications to different languages and cultures. The growth of digital content, particularly in e-commerce, software, and media, further fuels the demand for localization services. Technological advancements, such as AI and machine learning, are also driving growth by improving the efficiency and quality of localization processes. Furthermore, the shift towards remote work and global collaboration has increased the need for cloud-based localization solutions, contributing to market expansion.
Related Reports:
Investment Trading Software Market
https://www.statsndata.org/report/investment-trading-software-market-7710
Legal Protection Insurance Market
https://www.statsndata.org/report/legal-protection-insurance-market-6884
Animal Pain Medicines Market
https://www.statsndata.org/report/animal-pain-medicines-market-223042
Ultra-low Ash Polypropylene Market
https://www.statsndata.org/report/ultra-low-ash-polypropylene-market-148416
Conveyer Belts Market
https://www.statsndata.org/report/conveyer-belts-market-35254
John Jones
Sales & Marketing Head | Stats N Data
Email: sales@statsndata.org
Website: www.statsndata.org
STATS N DATA is a trusted provider of industry intelligence and market research, delivering actionable insights to businesses across diverse sectors. We specialize in helping organizations navigate complex markets with advanced analytics, detailed market segmentation, and strategic guidance. Our expertise spans industries including technology, healthcare, telecommunications, energy, food & beverages, and more.
Committed to accuracy and innovation, we provide tailored reports that empower clients to make informed decisions, identify emerging opportunities, and achieve sustainable growth. Our team of skilled analysts leverages cutting-edge methodologies to ensure every report addresses the unique challenges of our clients.
At STATS N DATA, we transform data into knowledge and insights into success. Partner with us to gain a competitive edge in today's fast-paced business environment. For more information, visit https://www.statsndata.org or contact us today at sales@statsndata.org"
https://www.openpr.com/news/4110740/localization-translation-tools-market-to-grow-at-9-8-cagr-with
#metaglossia_mundus
"The DWP has allocated more than £500,000 for translation services during the current year. The details emerged following an inquiry from Reform UK MP Lee Anderson regarding the Government's expenditure on services this year.
In a written parliamentary question, he specifically requested information about the DWP's spending on translating documents into languages beyond English and other native UK languages, and how much had been spent each year from 2023 onwards.
The MP also requested details about which languages the department had commissioned translations for. DWP minister Andrew Western provided the response. His figures revealed that the DWP has expended £546,323.38 on translation services during the current year to date.
The previous year saw the department spend a total of £882,118 for these services, while 2023's total expenditure reached £707,777.12. Mr Western also shared details of which languages the department had provided translation services into.
These include:
Oromo (Afan)
Urdu
Pakistani Punjabi
Kinyarwanda
Castilian
Greek
Romanian
Bengali
Pashtu
Swedish
Lithuanian
German
Tamil
Gujarati
Spanish
Georgian
Danish
Portuguese (Brazilian)
Tagalog (Filipino)
Thai
Italian
Kazakh
Norwegian
Hungarian
Welsh
Flemish
French
Somali
Tigrinya
Malay (Malaysian)
Arabic
Braille (Unified English)
Estonian
Traditional Chinese
Japanese
Sinhalese
Farsi (Persian)
Slovenian
Latvian (Lettish)
Spanish (LatAm)
Vietnamese
Dari
Serbian
English (Easy Read)
Tanzanian Swahili
BSL (British Sign Language)
Amharic
Bahasa Indonesia
Ukrainian
Croatian
Korean
Catalan
Galician
Finnish
Kurdish / Kurdish Sorani
Kurdish Kurmanji
Turkish
Bosnian
Macedonian
Nepali
Uzbek
Russian
Dutch
Maltese
Hebrew
Portuguese
Polish
Czech
Albanian
Bulgarian
Hindi
Simplified Chinese
Slovak
Icelandic
Indian Punjabi
Sindhi.
Former Reform UK MP Rupert Lowe previously asked in Parliament whether the DWP might adopt a policy of not offering translation and interpretation services for speakers of non-UK languages when accessing its services. Mr Western also responded to outline the Government's position on this matter, providing his response in May of this year.
He said: "DWP has a statutory duty to provide language services to its customers in line with the Equality Act. The aim of the service is to provide spoken and written translation services for staff and customers who are deaf, hard of hearing or do not speak English as a first language in order to access DWP services.""
Nicholas Dawson
13:33, 19 Jul 2025
https://www.walesonline.co.uk/news/cost-of-living/dwp-spent-21million-translating-documents-32093747
#metaglossia_mundus
"Promoting multilingualism at the United Nations: Vietnam contributes the Vietnamese translation of the Pact for the Future
19/07/2025 11:38
New York (VNA) – At the "Multilingualism in Action" event held on 18 July at United Nations headquarters in New York, Ambassador Dô Viêt Hung, head of Vietnam's Permanent Mission to the United Nations, presented the Vietnamese version of the "Pact for the Future" to the President of the 79th session of the General Assembly, Philemon Yang.
Adopted by world leaders at the Summit of the Future in September 2024, the "Pact for the Future" sets out a shared vision and directions for cooperation in key areas, with the aim of accelerating progress on the Sustainable Development Goals (SDGs), strengthening the multilateral system, and enhancing the role and effectiveness of the world's largest multilateral organization.
Philemon Yang praised the commitment of the countries that helped translate the document into 33 languages beyond the six official UN languages, giving more than 3.5 billion people around the world, including Vietnamese speakers, access to it.
He stressed that multilingualism is a priority of the 79th session and essential to making UN action more inclusive. The Vietnamese translation is now available on the official United Nations website at https://www.un.org/pga/79/multilingualism-in-action-translations-of-the-pact-for-the-future-in-global-languages-2
Earlier, after presenting his credentials, Ambassador Dô Hung Viêt met several senior UN officials between 4 and 17 July, including Deputy Secretary-General Amina J. Mohammed; the Under-Secretary-General for Management Strategy, Policy and Compliance, Catherine Pollard; the Under-Secretary-General for Legal Affairs, Elinor Hammarskjold; and the Under-Secretary-General and Executive Secretary of the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP), Armida Alisjahbana.
At these meetings, the Vietnamese diplomat reaffirmed Vietnam's strong commitment to multilateralism and the United Nations agenda, notably initiatives promoting peace, sustainable development, and an effective response to global challenges. -VNA
https://fr.vietnamplus.vn/promotion-du-multilinguisme-aux-nations-unies-le-vietnam-contribue-avec-la-traduction-du-pacte-pour-lavenir-post248292.vnp
#metaglossia_mundus
UK’s Power Vocabulary tool powers language learning in India through stories
PTI | Updated: July 19, 2025 15:33 IST

London, Jul 19 (PTI) An innovative English learning tool devised by a UK-based Indian-origin edtech entrepreneur using classic stories is now being used by children in India to help enhance literacy outcomes and bridge gaps in speech.

Power Vocabulary, conceptualised by Champs Learning founder Nishikant Kothikar, was originally designed for the England school curriculum but has proved very effective in India, where English proficiency is seen as vital for academic success and competitive exams.

“The majority of people use the same 500 words, so our children tend to follow what they hear. The only way to expand your vocabulary is by reading. The more you read, the wider the scope, the better your language skills become,” said Kothikar.

Learning from his own educational experiences, the edtech expert was inspired to create Power Vocabulary to help pupils move beyond rote memorisation to a more engaging learning method by associating words with images and stories. The module claims to be unlike conventional vocabulary apps or rote word lists because it is story-driven and visual-based.

Power Vocabulary functions by offering words selected from classic children’s literature, visual imagery for strong word associations, audio pronunciation guides with easy phonetic cues and a structured weekly learning model aligned with book-based modules. Children who use the online tool engage with eight classic books across eight modules, reinforced by weekly videos placing vocabulary in story context. This multi-sensory approach enhances memory retention, making language learning fun, intuitive, and lasting. Its simplicity and use of children’s adventure stories are said to captivate learners and motivate continued reading.

Building on its initial success, Power Vocabulary is being scaled independently through ITTRP Pvt Ltd – a multinational technology partner with offices in the UK, US and India.
“Users maximise learning by completing exercises each week and taking assessments after five weeks. Motivated learners are encouraged to create their own sentences using new words to deepen understanding,” said Champs Learning, which offers structured academic support in Maths, English, Science, and Reasoning for primary and secondary students across the UK and India. (This story has not been edited by THE WEEK and is auto-generated from PTI)
MyJoyOnline
Published: Jul 18, 2025
"In the words of South African political firebrand Julius Malema, “They don’t have borders in Europe, but when it comes to us, they say no, you must have borders.”
These powerful words cut deep into the very soul of Africa’s post-colonial condition, a condition where artificial lines drawn by European rulers at the Berlin Conference of 1884-85 continue to fragment and weaken the continent.
The division of Africa is not an accidental relic of history, it is a strategic construct. A divided Africa fuels Europe’s dominance and impedes Africa’s capacity to chart its destiny. In contrast, a united Africa, a United States of Africa, could rebalance global power through its immense human capital, strategic geography, and vast mineral wealth.
The Colonial Legacy: Borders Born in Berlin
Africa is the only continent whose borders were drawn without the consent of its people. European powers, with rulers and compasses, partitioned entire civilizations during the Berlin Conference, creating 54 nations from what could have been one vast continental force. These borders ignored ethnic, linguistic, and cultural realities, ensuring internal strife, fragmentation, and dependence.
Europe no longer functions this way. With the creation of the European Union (EU), European countries enjoy the free movement of people, goods, services, and capital. Yet, the same powers that championed unity at home, enforce division abroad, particularly in Africa. Why? Because unity in Africa threatens their interests.
How Division Makes Europe Thrive
• Raw Material Access at Minimum Cost
Africa is home to 60% of the world’s arable land, 90% of platinum group metals, 40% of global gold reserves, and vast deposits of cobalt, coltan, diamonds, and lithium. Countries like the Democratic Republic of Congo (DRC), despite being among the richest in resources, are some of the poorest in development due to persistent instability and exploitation. A fragmented Africa allows Western powers and multinational corporations to exploit resources through backdoor deals with individual governments or warlords, something that would be far more difficult in a united Africa with a centralized policy on resource extraction.
• Weapons and War: A Profitable Chaos
Division allows for regional conflicts, and where there are wars, there are weapons. The top arms exporters in the world, mostly Western nations, have consistently sold weapons to African governments and rebels alike. A united and peaceful Africa would starve the European arms industry of billions.
• Currency Control and Economic Dependency
Several African countries, particularly in West and Central Africa, still use the CFA Franc, a colonial currency controlled by the French Treasury. This means France directly benefits from monetary policies affecting African nations. A common African currency backed by mineral reserves, as Malema proposes, would be catastrophic for the euro, the dollar, and the pound, whose value depends on global resource control rather than domestic production.
Practical Examples of European Interference in African Unity
• Nkrumah’s Dream Sabotaged
Kwame Nkrumah’s vision for a United States of Africa was fiercely resisted, not just by some African leaders, but by Western nations who feared a geopolitical giant that could reject neo-colonial arrangements. Nkrumah was overthrown in 1966 with alleged support from the CIA.
• Libya and Gaddafi’s Fate
Muammar Gaddafi, a key proponent of the African Union and an advocate for an African gold-backed currency, was violently removed in 2011. Leaked emails from Hillary Clinton’s office revealed that Gaddafi’s plans to introduce a gold dinar threatened Western monetary systems, making him a target.
• France’s Role in Francophone Africa
France continues to wield economic and military influence in its former colonies, including through bases, CFA currency control, and direct intervention in elections and policy. This interference is a textbook case of neocolonialism disguised as diplomacy.
The Vision of Julius Malema: A Call for Pan-Africanism
Malema’s call for a borderless Africa echoes the dreams of Marcus Garvey, W.E.B. Du Bois, Kwame Nkrumah, Thomas Sankara, and PLO Lumumba. His assertion that a united Africa could collapse the dollar is not hyperbole, it is geopolitical reality. With a unified Africa:
• A continental army could protect mineral-rich regions from foreign interference.
• A common passport could unleash intra-African trade and innovation.
• A central African bank could finally leverage the continent’s $100 trillion worth of natural resources.
• A pan-African digital infrastructure could challenge Silicon Valley monopolies on data.
The Path Forward: What Must Be Done
• Economic Integration First
The African Continental Free Trade Area (AfCFTA) must be accelerated. Trade between African nations is under 15%, while intra-European trade is above 70%. We need unified customs and tariff systems.
• Cultural Reconnection
African unity isn’t only economic, it is spiritual and cultural. Reviving African languages, philosophies, and historical consciousness can weaken colonial mindsets that pit us against one another.
• Pan-African Education
Universities must teach Pan-African history and economics. The youth must be taught that Ghana’s fate is tied to Nigeria’s, just as South Africa’s destiny is bound to Ethiopia’s.
• Political Commitment to Confederation
Like the EU, Africa must begin as a confederation with shared sovereignty in trade, defense, and diplomacy, and gradually move toward federalism. This confederation can then become the United States of Africa, not as a dream, but as destiny.
Conclusion: One Africa or No Africa
“If we become a United States of Africa… we can collapse the dollar,” Malema boldly asserts. And he is right. The world’s balance of power is tilted because Africa is fragmented. Europe’s prosperity is a pyramid whose foundation is Africa’s division. But the time is ripe for a paradigm shift. Africa must rise. Not in isolated islands of prosperity, but as one unified continent. Not just in slogans and flags, but in shared governance, economic solidarity, and cultural pride. The formation of a United States of Africa is not merely Pan-African idealism, it is the geopolitical necessity of the 21st century.
As PLO Lumumba once said, “Africa must write its own narrative.” And that narrative begins the moment we say no to borders that were never ours and yes to unity that has always been our strength.
References
• Malema, Julius. (Various Speeches), Economic Freedom Fighters (EFF)
• Nkrumah, Kwame. Africa Must Unite (1963)
• Clinton Emails on Libya and Gold Dinar Project (U.S. State Dept.)
• AfCFTA Reports (African Union, 2023)
• Lumumba, PLO. (2021). Pan-Africanism and the Struggle for Africa’s Unity"
https://cedirates.com/news/borders-that-bleed-us-why-a-divided-africa-sustains-europes-power-and-how-the-usa-can-shift-the-global-balance/
#metaglossia_mundus
"Nigerian scholar champions AI-Human integration for global language solutions
By Raji Rasak
Mr Nehemiah David, a Nigerian scholar and First Class Honours graduate of French Education from the University of Benin, says he is championing Artificial Intelligence (AI) human integration for global language solutions.
David disclosed this in a telephone interview with the News Agency of Nigeria (NAN) in Lagos on Friday.
According to him, he is among a growing cohort of African professionals shaping the future of translation and interpretation in a digital age.
“As AI transforms global communication, I am advocating for a model that prioritises human insight in tandem with technological advancement.
“Machines can translate the sentence but only humans can translate the soul of the message,” he said.
David said that he combined academic excellence with practical field experience.
According to him, looking ahead, he sees translation not simply as a technical service but as foundational to global cooperation.
“In the coming decade, translation won’t just support development, it will drive it.
“It will be essential infrastructure for public policy, international business, and inclusive education,” he said.
“With over a decade in language services, my contributions span translation, interpretation, and education across both national and international sectors.
“From interpreting at African Union summits and environmental policy forums to mentoring French language learners across Nigeria, my approach bridges linguistic precision with cultural understanding,” he said.
He said that his research had garnered interest from AI think tanks, language education networks, and ethics panels across Francophone West Africa.
“Translation is not merely a service; it is a responsibility.
“Done right, it fosters understanding; mishandled, it risks deepening divides,” he said.
David, however, proposed an integrated model where AI functions as a tool, but with human interpreters guiding its application.
Citing examples ranging from Microsoft’s multilingual meeting platforms to robotic interpreters in Japanese airports, he said that “AI should be the engine, and humans the pilot.”
David encouraged adoption of blended systems that retained the speed of machines while preserving the depth of human expression.
His recommendations included national frameworks to train interpreters in AI post-editing, “curriculum redesign to embed AI in tertiary language programmes, and collaboration between technologists, linguists, and policy stakeholders,” he said.
He also called on the Nigerian Institute of Translators and Interpreters (NITI) to champion these reforms at scale...
Edited by Vivian Ihechu"
https://nannews.ng/2025/07/18/nigerian-scholar-champions-ai-human-integration-for-global-language-solutions/
#metaglossia_mundus
"How AI is Changing Our Understanding of Human Decision-Making
Understanding human decision-making has been a central goal in psychology for decades. Researchers have long sought to design cognitive models that explain how people think and predict their behavior. Now, the rise of artificial intelligence (AI) is fundamentally transforming this field. Recent breakthroughs in AI are revealing new insights into the mental processes that underlie our choices. At the center of this transformation is an innovative approach called “Centaur Mode,” where AI and human intelligence work together in ways that highlight the nature of human cognition.
The Dawn of a New Era in Cognitive Science
Centaur is a foundation AI model of human cognition that can predict and simulate human behavior with striking accuracy. The model is trained on more than ten million individual decisions made by over 60,000 participants across 160 psychological experiments. Created by researchers at Helmholtz Munich, the model is designed to bridge the gap between traditional cognitive theories and modern AI capabilities. The name "Centaur" comes from the mythological creature with the upper body of a human and the lower body of a horse, reflecting the model's ability to combine human-like decision-making with the predictive power of artificial intelligence. The model can simulate human behavior in situations it has never encountered before. When researchers test it on new psychological experiments, Centaur responds in ways that mirror real human choices. This capability suggests that AI can now capture fundamental patterns in how humans make decisions across different contexts.
The Foundation: Psych-101 Dataset
The secret behind Centaur's success lies in its training data. The researchers created Psych-101, a dataset containing over 10 million individual decisions from more than 60,000 participants across 160 psychological experiments. This comprehensive collection contains trial-by-trial data from psychological studies, including memory games, gambling tasks, and problem-solving scenarios. Each experiment was carefully transcribed into text to prepare the data. This natural language format allows researchers to process human behavioral data using large language models in a way that preserves the rich context of experimental settings. This approach enables the model to understand not only how people decide, but also the circumstances under which they make those decisions.
How Centaur Works
Centaur is built on Meta's Llama 3.1 70B language model and fine-tuned using a technique called quantized low-rank adaptation (QLoRA). This method modified only 0.15% of the base model's parameters while achieving remarkable improvements in predicting human behavior.
The training process involved showing the model complete transcripts of psychological experiments, including everything participants were told, what they saw, and what they did. The model learned to predict human choices by analyzing patterns across millions of decisions, gradually developing an understanding of human cognitive processes.
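The efficiency of low-rank adaptation (the technique underlying QLoRA) comes from simple parameter arithmetic: the frozen base matrix stays untouched while only two small factor matrices are trained. The sketch below illustrates that arithmetic with purely hypothetical dimensions; it is not the actual Llama 3.1 70B configuration reported in the study.

```python
# Illustrative sketch of the parameter arithmetic behind low-rank
# adaptation (LoRA), the technique underlying QLoRA. The dimensions
# below are hypothetical examples, not the real model's configuration.

def lora_trainable_fraction(d_in: int, d_out: int, rank: int) -> float:
    """Fraction of a (d_out x d_in) weight matrix's parameters LoRA trains.

    LoRA freezes the base matrix W and learns a low-rank update
    delta_W = B @ A, where B is (d_out x rank) and A is (rank x d_in),
    so only rank * (d_in + d_out) parameters are trainable.
    """
    base_params = d_out * d_in
    adapter_params = rank * (d_in + d_out)
    return adapter_params / base_params

# Example: a hypothetical 8192 x 8192 projection with rank-16 adapters.
frac = lora_trainable_fraction(8192, 8192, 16)
print(f"{frac:.4%}")  # → 0.3906% of that layer's weights are trainable
```

Applied across a model's projection layers, small ranks like this are how a fine-tune can touch only a fraction of a percent of the parameters, consistent with the 0.15% figure the article reports.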
Breaking Performance Barriers
Centaur has shown impressive performance across multiple metrics. It achieved 64% accuracy in predicting human behavior, significantly outperforming previous models that could only predict certain aspects of human behavior with much lower accuracy. In rigorous testing across 160 experiments, Centaur consistently outperformed traditional cognitive models, including established theories like Prospect Theory and reinforcement learning frameworks.
Perhaps most remarkably, Centaur demonstrated its ability to generalize beyond its training data. The model successfully predicted human behavior in experiments with modified cover stories, structural changes, and entirely new domains it had never encountered before. This generalization ability suggests that Centaur has learned fundamental principles of human cognition rather than merely memorizing specific patterns.
Key Findings
One of the most striking discoveries from Centaur research is the alignment of the model's internal representations with human neural activity. This discovery suggests that when AI learns to predict human behavior, it develops internal processes that mirror aspects of human cognition. Despite being trained only on behavioral data, Centaur showed improved ability to predict human brain activity measured through fMRI scans.
This unexpected neural alignment suggests that the model may have uncovered genuine insights into how the human brain processes information. The fact that a model trained purely on behavioral choices can predict neural responses indicates that behavior and brain activity share underlying computational principles.
This discovery suggests that human decision-making may be more predictable than previously thought. The patterns that Centaur learns from human choices reveal underlying structures in how we process information and make decisions. These patterns are observed across various types of decisions, ranging from simple memory tasks to complex problem-solving scenarios.
The research also shows that AI can capture human cognitive biases. When Centaur makes predictions, it exhibits the same systematic errors and shortcuts that humans use in decision-making. This finding suggests that these biases are not flaws in human thinking but rather integral parts of how our cognitive systems work. They represent efficient strategies that our brains use to navigate complex environments with limited resources.
Centaur reveals that our choices are not random or purely logical. They follow patterns that can be learned and predicted, but these patterns are complex and context-dependent. The model demonstrates that human decision-making involves a complex interplay of cognitive processes that interact in sophisticated ways.
A New Window into Human Thinking
Traditional psychology has long sought to understand human decision-making through isolated studies and theoretical models. The Centaur approach represents a different path. By training AI on massive amounts of human behavioral data, researchers can now test theories about decision-making at unprecedented scales. When AI makes predictions about human behavior, researchers can compare these predictions with actual human choices to identify gaps in current psychological theories. This process creates a feedback loop where AI helps us understand ourselves better.
Beyond feedback, Centaur can be used for scientific discovery. The researchers demonstrated this by pairing the model with reasoning language models such as DeepSeek-R1 to generate new hypotheses about human decision-making strategies. This approach, known as scientific regret minimization, enables researchers to identify patterns in human behavior that existing theories cannot explain.
Centaur represents a new paradigm in cognitive science, where AI models serve as both subjects of study and tools for generating new theoretical insights. The combination of large-scale behavioral data and AI's capabilities opens possibilities for discoveries that would be impossible through traditional experimental approaches alone.
Challenges and Future Directions
While the development of Centaur is a significant advancement in cognitive science, critical challenges remain. The model's predictions are based on patterns from psychological experiments, which may not fully capture the complexity of real-world decision-making. Human choices in laboratory settings may differ from those in natural environments, where the stakes are higher and contexts are more complex.
There are also questions about the generalizability of these findings across different populations and cultures. The psychological studies used to train Centaur primarily involved participants from specific demographic groups. Understanding how decision-making patterns vary across different cultures and contexts remains an active area of research.
The ethical implications of AI systems that can predict human behavior also require careful consideration. While these tools can provide valuable insights, they also raise questions about privacy and the potential for manipulation. As AI becomes better at understanding human decision-making, we need frameworks to ensure these capabilities are used responsibly.
The development of Centaur represents just the beginning of a new era in cognitive science. The researchers plan to expand the dataset to include more diverse populations, demographic information, and psychological characteristics. Future versions may incorporate multimodal data, including visual and auditory information, to capture a more complete picture of human cognition.
The success of Centaur also points toward the development of more sophisticated cognitive architectures that combine domain-specific and domain-general modules. This could lead to AI systems that not only predict human behavior but also exhibit more human-like reasoning capabilities.
The Bottom Line
Centaur represents a shift in how we study human cognition. By combining the scale and power of modern AI with the rich tradition of psychological research, it offers new insights into human decision-making. While challenges remain, the model's success in predicting behavior across diverse domains suggests that we are entering a new era where AI and cognitive science can work together to unlock the mysteries of the human mind.
Dr. Tehseen Zia
July 18, 2025
Dr. Tehseen Zia is a Tenured Associate Professor at COMSATS University Islamabad, holding a PhD in AI from Vienna University of Technology, Austria."
https://www.unite.ai/how-ai-is-changing-our-understanding-of-human-decision-making/
#metaglossia_mundus
"Large language models function through sophisticated retrieval rather than genuine reasoning, according to research published across multiple studies in 2025.
The debate surrounding artificial intelligence capabilities has intensified following key research findings published in March and April 2025. Subbarao Kambhampati, a professor at Arizona State University and former president of the Association for the Advancement of Artificial Intelligence, has challenged prevailing assumptions about large language models through extensive technical documentation.
According to Kambhampati's research, published as Can Large Language Models Reason and Plan? in March 2025, these systems excel at what he terms "universal approximate retrieval" rather than principled reasoning. "LLMs are trained to predict the distribution of the n-th token given n-1 previous tokens," the research states, explaining that current models function as sophisticated n-gram systems trained on web-scale language corpora.
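The "sophisticated n-gram system" framing above can be made concrete with a toy bigram model: a predictor that chooses the next token purely by retrieving counts from its training corpus, with no reasoning involved. This is a deliberately minimal sketch of the retrieval-from-corpus idea, not a claim about how real transformer LLMs are implemented.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predicts the next token by retrieving
# counts from its training corpus. Real LLMs condition on far longer
# contexts via neural networks; this only illustrates the framing of
# prediction as approximate retrieval from training data.

def train_bigram(corpus):
    """Count which token follows which across a list of sentences."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent continuation seen in training, if any."""
    if token not in counts:
        return None  # nothing to retrieve: the model has no answer
    return counts[token].most_common(1)[0][0]

corpus = ["stack the red block", "stack the blue block"]
model = train_bigram(corpus)
print(predict_next(model, "the"))    # retrieved from training data
print(predict_next(model, "green"))  # unseen token: retrieval fails
```

The obfuscation result described below mirrors the last line: once the surface names no longer match anything in the training distribution, a retrieval-based predictor has nothing to fall back on.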
The study examined GPT-3, GPT-3.5, and GPT-4 performance across planning instances derived from International Planning Competition domains, including the well-known Blocks World environment. While GPT-4 achieved 30% empirical accuracy in Blocks World tasks—an improvement over earlier versions—this performance collapsed when researchers obfuscated action and object names. Standard artificial intelligence planners experienced no difficulty with such modifications.
Technical challenges expose fundamental limitations
Testing methodology revealed critical distinctions between pattern recognition and genuine reasoning capabilities. When researchers reduced the effectiveness of approximate retrieval through name obfuscation, model performance "plummeted precipitously," according to the findings. These results suggest that improved performance stems from enhanced retrieval over larger training corpora rather than actual planning abilities.
Yann LeCun, VP and Chief AI Scientist at Meta, supported this perspective through social media commentary on April 16, 2025. "To invent new knowledge and new artifacts, or simply to deal with new situations for which they have not been explicitly trained, AI systems need to learn mental models of the world," LeCun stated. "Manipulating our mental model of the world is what we call thinking."
The technical analysis differentiates between knowledge acquisition and reasoning application. Many research papers claiming planning abilities actually confuse general planning knowledge extraction for executable plans, according to Kambhampati's analysis. When evaluated on abstract plans such as "wedding plans" without execution requirements, these distinctions become less apparent to casual observers.
This confusion stems from the fundamental difference between declarative knowledge about planning processes and procedural capability to execute those plans. Large language models excel at retrieving and synthesizing information about planning methodologies, step sequences, and best practices from their training data. However, this knowledge extraction differs significantly from the computational reasoning required to generate executable plans that account for resource constraints, temporal dependencies, and goal interactions. The research demonstrates that when models produce planning outputs, they often rely on pattern matching from similar scenarios in their training corpus rather than systematic reasoning through problem constraints.
The distinction becomes particularly evident in domains requiring precise execution sequences where subgoal interactions create complex dependencies. Abstract planning scenarios like event organization or project management may appear successful when models generate reasonable-sounding step lists that human evaluators can easily correct or adapt. However, when these same models face formal planning problems with explicit preconditions, effects, and goal states, their performance deteriorates substantially. According to the research findings, this degradation occurs because executable planning requires verification of logical consistency across action sequences, a capability that extends beyond the approximate retrieval mechanisms underlying current language model architectures.
Marketing professionals encounter similar challenges when evaluating AI tools for campaign planning and optimization tasks. Systems may generate comprehensive marketing strategies that include audience segmentation, channel selection, and budget allocation recommendations based on extracted knowledge from successful campaigns in their training data. While these outputs may contain valuable insights and appear strategically sound, they often lack the precise logical reasoning necessary to ensure budget constraints are respected, timing dependencies are maintained, and conflicting objectives are properly balanced across campaign elements.
LLM-Modulo framework provides alternative approach
Research teams propose the LLM-Modulo framework as a constructive application of current capabilities. This approach leverages language models' idea generation abilities while maintaining external verification systems. "The cleanest approach—one we advocate—is to let an external model-based plan verifier do the back prompting and to certify the correctness of the final solution," the research documentation explains.
This framework acknowledges models' value as knowledge sources while avoiding attribution of autonomous reasoning capabilities. Similar to historical knowledge-based AI systems, LLMs effectively replace human knowledge engineers by providing problem-specific information, albeit with relaxed correctness requirements compared to traditional approaches.
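The generate-and-verify loop at the heart of this framework can be sketched schematically. In the stub below, the generator and verifier are stand-ins: in the actual framework the generator is an LLM and the verifier is an external, model-based plan checker that certifies correctness. The toy sorting task and function names are illustrative inventions, not part of the published framework.

```python
# Schematic of an LLM-Modulo-style generate-and-verify loop. The
# generator and verifier here are toy stubs, not real components.

def llm_modulo(generate, verify, max_rounds=5):
    """Back-prompt the generator with verifier critiques until a
    candidate is certified correct or the round budget runs out."""
    feedback = None
    for _ in range(max_rounds):
        candidate = generate(feedback)
        ok, critique = verify(candidate)
        if ok:
            return candidate  # certified by the external verifier
        feedback = critique   # back-prompt with the critique
    return None  # no certified solution within budget

# Hypothetical toy task: produce a correctly ordered plan of steps.
def toy_generate(feedback):
    # First attempt is wrong; a critique steers the retry.
    return [3, 1, 2] if feedback is None else sorted([3, 1, 2])

def toy_verify(plan):
    ok = plan == sorted(plan)
    return ok, None if ok else "steps are out of order"

print(llm_modulo(toy_generate, toy_verify))  # → [1, 2, 3]
```

The key design point is that correctness guarantees live entirely in the verifier: the generator is treated as a fallible idea source, which matches the article's characterization of LLMs as knowledge sources rather than autonomous reasoners.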
Fine-tuning experiments showed limited improvement in planning performance. Such modifications essentially convert planning tasks into memory-based approximate retrieval, functioning more like compilation from System 2 to System 1 processing rather than demonstrating actual planning capability.
Self-verification claims lack empirical support
Recent studies from Kambhampati's laboratory challenge claims about self-improvement capabilities in large language models. Research on plan verification and constraint verification indicates that self-verification performance actually worsens due to false positives and false negatives during solution evaluation.
The assumption that models excel at verification compared to generation lacks justification for LLM systems, unlike established computational complexity principles. "While for many computational tasks (e.g. those in class NP), the verification is often of lower complexity than generation, that fact doesn't seem particularly relevant for LLMs which are generating (approximately retrieving) guesses," the research explains.
Human-in-the-loop prompting presents additional complications through potential Clever Hans effects. In such scenarios, humans with knowledge of correct solutions unconsciously guide models toward accurate responses, making it difficult to attribute success to model capabilities versus human steering.
Knowledge closure limitations
The discussion extends beyond individual model limitations to broader epistemological constraints. According to Kambhampati's analysis shared on social media in April 2025, "Neither LLMs nor LRMs have the ability to go beyond the humanity's knowledge closure—which is needed for true discoveries."
Large Reasoning Models (LRMs) such as OpenAI's o1 and o3 series represent attempts to address reasoning limitations through verifier signal compilation. However, this approach essentially creates synthetic "reasoning webs" to supplement training data rather than enabling genuine knowledge discovery beyond human capabilities.
The research emphasizes that verifiers themselves remain products of human knowledge, creating inherent bounds on system capabilities. "LLMs/LRMs are great as force multipliers," Kambhampati noted. "But if you really want your AI agent to learn and go beyond the humanity's knowledge closure, you also need your agent to act in the world (not simulator), and learn from that."
Marketing implications for AI adoption
These findings carry significant implications for marketing professionals evaluating AI-powered solutions. Understanding limitations becomes crucial when assessing AI tools that claim advanced reasoning capabilities for campaign optimization and customer analysis.
Recent PPC Land research indicates that marketing teams increasingly rely on AI for complex decision-making processes, from campaign optimization to customer segmentation strategies. However, the fragility revealed in reasoning studies presents concerns for applications requiring logical consistency.
Marketing analytics applications face particular challenges, as minor changes in data presentation or problem formulation could yield dramatically different insights. This undermines confidence in AI-driven marketing intelligence platforms that depend on consistent logical processing.
The research suggests marketing professionals should distinguish between scenarios requiring genuine reasoning versus those suitable for approximate retrieval and pattern recognition. Many marketing applications may benefit from standard language models rather than more expensive reasoning-enhanced alternatives.
Current trends show 80% of companies have chosen to block LLM access to their websites, reflecting growing concerns about AI capabilities and limitations across industries. This statistic underscores the importance of understanding actual versus claimed AI capabilities when developing marketing technology strategies.
Technical architecture insights
Language model architecture provides additional context for understanding these limitations. Modern systems use transformer models processing text through attention mechanisms, learning patterns from vast training datasets to understand context and generate responses. The development process includes training phases, continuous improvement through techniques like Supervised Fine-Tuning and Reinforcement Learning with Human Feedback, and inference phases generating outputs based on user inputs.
These systems function as "giant non-veridical memories akin to an external System 1 for us all," according to the research characterization. This framing helps explain both capabilities and limitations observed across different evaluation scenarios.
Future development considerations
The research concludes that while LLMs demonstrate remarkable approximate retrieval abilities suitable for various applications, attributing reasoning or planning capabilities creates false expectations. Effective deployment requires understanding these systems as powerful idea generation tools requiring external verification rather than autonomous reasoning agents.
This perspective maintains the potential value of current technologies while establishing realistic boundaries for application development. As marketing professionals increasingly integrate AI tools into their workflows, these distinctions become essential for successful implementation and appropriate expectation management." Luis Rijo Jul 19, 2025 https://ppc.land/large-language-models-lack-true-reasoning-capabilities-researchers-argue/ #metaglossia_mundus
Tech giant Meta has apologized and said it has fixed an auto-translation issue that led one of its social media platforms to mistakenly announce the death of Indian politician Siddaramaiah.
By Amarachi Orie, CNN Fri July 18, 2025
The office of Siddaramaiah, chief minister of the Indian state of Karnataka, criticized Meta's "frequently inaccurate" auto-translation tool.
The chief minister of the southwestern Indian state of Karnataka posted on Instagram Tuesday in the local Kannada language, saying he was paying his respects to the late Indian actress B. Saroja Devi. He also paid tribute to the actress on Facebook and X.
However, Meta’s auto-translation tool inaccurately translated the Instagram post to suggest that Siddaramaiah, who uses just one name, was the one who “passed away.”
“Chief Minister Siddaramaiah passed away yesterday multilingual star, senior actress B. Took darshan of Sarojadevi’s earthly body and paid his last respects,” the erroneous, garbled translation read, CNN affiliate News 18 reported.
A Meta spokesperson told news agency Press Trust of India Thursday: “We fixed an issue that briefly caused this inaccurate Kannada translation. We apologize that this happened.”
Politician calls for use of ‘grossly misleading’ tool to be halted
Also on Thursday, Siddaramaiah criticized the auto-translation tool as “dangerous” in posts on Facebook and X, adding that such “negligence” from tech giants “can harm public understanding & trust.”
His posts included a photo of an email his office sent to Meta (META) voicing “a serious concern” about the auto-translation tool on its platforms, “particularly Facebook and Instagram.”
The email, which had the subject line “Urgent Request to Address Faulty Auto-Translation of Kannada Content on Meta Platforms,” urged the tech company to “temporarily suspend” its auto-translation tool for content written in Kannada “until the translation accuracy is reliably improved.”
His office also requested that Meta work with Kannada language experts to improve the feature.
Kannada is the official language of Karnataka and is also spoken in bordering Indian states. Some 45 million people spoke Kannada as their first language in the early 2010s, and another 15 million spoke it as their second language, based on the latest available data...
The email from Siddaramaiah’s office calls Meta’s auto-translation from Kannada to English “frequently inaccurate and, in some cases, grossly misleading.”
“This poses a significant risk, especially when public communications, official statements, or important messages from the Chief Minister and the Government are incorrectly translated. It can lead to misinterpretation among users, many of whom may not realise that what they are reading is an automated and faulty translation rather than the original message,” it continues.
“Given the sensitivity of public communication, especially from a constitutional functionary like the Chief Minister, such misrepresentations due to flawed translation mechanisms are unacceptable,” it adds.
As of Friday, the auto-translation of Siddaramaiah’s Instagram post reads: “The multilingual star, senior actress B Sarojadevi who passed away yesterday, paid his last respects,” which still appears to be inaccurate.
CNN has reached out to Meta and the Karnataka chief minister’s office for further comment.
The incident comes months after the US tech giant apologized for a technical error that led some Instagram users to see graphic, violent videos." https://edition.cnn.com/2025/07/18/tech/meta-apologizes-siddaramaiah-scli-intl #metaglossia_mundus
|
"In new research, a team of linguists asked native speakers of various languages to rate recordings of other languages for pleasantness.
It’s often said that French is silky, German is brutish, Italian is sexy, and Mandarin is angry. But do those stereotypes of these diverse languages hold empirically across cultures? Are some languages intrinsically beautiful?
To find out, a trio of researchers from Lund University in Sweden and the Russian Academy of Sciences recruited 820 participants from the research subject site Prolific to listen to 50 spoken recordings randomly selected from 228 languages. The audio clips were taken from the film Jesus, which has been translated into more than 2,000 languages. For this reason, it is commonly used in linguistics research.