Scooped by
Charles Tiayon
November 28, 2011 3:00 AM
Le Nouveau Dictionnaire du Jazz, published this month (éditions Robert Laffont, Bouquins collection), is more than just an update: it is a complete overhaul of the Dictionnaire du Jazz, which dated from 1988. By...
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
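The "low resource" gap Nawale describes shows up even before a model sees its first training batch, at the tokenizer stage: a subword vocabulary learned from an English-heavy crawl merges common English words into single tokens while fragmenting under-represented languages into many pieces. The toy byte-pair-encoding sketch below illustrates the effect; the corpus, word choices, and merge count are invented for illustration and are not drawn from any real model's tokenizer.

```python
from collections import Counter

def _merge(word, pair):
    """Replace every adjacent occurrence of `pair` in `word` with one symbol."""
    out, i = [], 0
    while i < len(word):
        if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
            out.append(word[i] + word[i + 1])
            i += 2
        else:
            out.append(word[i])
            i += 1
    return tuple(out)

def learn_bpe(corpus, num_merges):
    """Learn byte-pair-encoding merges from a whitespace-split corpus."""
    vocab = Counter()
    for word in corpus.split():
        vocab[tuple(word) + ("</w>",)] += 1
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair wins
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():
            new_vocab[_merge(word, best)] += freq
        vocab = new_vocab
    return merges

def tokenize(word, merges):
    """Apply the learned merges, in order, to a single word."""
    symbols = tuple(word) + ("</w>",)
    for pair in merges:
        symbols = _merge(symbols, pair)
    return list(symbols)

# An "English-heavy crawl": English text vastly outweighs the Swahili sample.
corpus = ("the language model learns from english text on the web " * 200
          + "habari ya asubuhi " * 2)
merges = learn_bpe(corpus, num_merges=40)

print(tokenize("the", merges))     # merges into very few tokens
print(tokenize("habari", merges))  # stays fragmented into many pieces
```

Because every merge is chosen by raw frequency, the 40 merge slots are consumed almost entirely by English word fragments, so the Swahili word remains split near the character level, which is one concrete mechanism behind the performance gaps the researchers describe.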
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
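The case for small models under infrastructure constraints comes down to simple arithmetic: weight memory scales linearly with parameter count. The back-of-envelope sketch below compares an InkubaLM-class 0.4B model with a 7B model like UlizaLlama; the 2-bytes-per-parameter figure assumes fp16 weights, and the calculation covers weights only (activations, KV cache, and runtime overhead are excluded), so treat it as a rough lower bound rather than a deployment spec.

```python
def model_memory_gb(params_billions, bytes_per_param=2):
    """Rough memory footprint of model weights alone, in GiB.
    bytes_per_param=2 assumes fp16; quantized models need less."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

# InkubaLM-class SLM vs a 7B model, both at fp16:
print(round(model_memory_gb(0.4), 2))  # ~0.75 GiB of weights
print(round(model_memory_gb(7.0), 2))  # ~13 GiB of weights
```

At fp16 the 0.4B model fits comfortably on a modest GPU or even a phone-class device, while the 7B model already needs data-center-grade hardware, which is exactly the constraint compact African models are designed around.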
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"SSC Combined Hindi Translator Exam 2025 notification has been released. The registration process begins at ssc.gov.in.
Staff Selection Commission has released the SSC Combined Hindi Translator Exam 2025 notification. Candidates who want to apply for the Combined Hindi Translators Examination, 2025, can find the direct link through the official website of SSC at ssc.gov.in.
SSC Combined Hindi Translator Exam 2025: Notification out, registration begins
The last date to apply is June 26, 2025. The last date for making online fee payment is June 27, 2025.
The correction window will open on July 1 and close on July 2, 2025. The computer-based exam (Paper 1) will be held on August 12, 2025.
This recruitment drive will fill approximately 437 Group ‘B’ Non-Gazetted posts of Junior Hindi Translator, Junior Translation Officer, Junior Translator, Senior Hindi Translator, Senior Translator and Sub-Inspector (Hindi Translator) for various Ministries/ Departments/Organisations of the Government of India.
Eligibility Criteria
Candidates who want to apply for the posts can check the educational qualification and age limit through the Detailed Notification available here.
Selection Process
The examination will consist of two papers. Paper-I will consist of objective-type multiple-choice questions only. Based on the marks scored in Paper-I (the computer-based examination), candidates will be shortlisted, category-wise, to appear in Paper-II (descriptive paper).
The application fee is ₹100/-. Women candidates and candidates belonging to Scheduled Castes (SC), Scheduled Tribes (ST), Persons with Benchmark Disabilities (PwBD) and Ex-Servicemen eligible for reservation are exempted from payment of fee. Fee can be paid only through online payment modes, namely BHIM UPI, Net Banking, or by using Visa, MasterCard, Maestro, or RuPay Debit card. Online fee can be paid by candidates up to June 27, 2025.
The correction charges can be paid only through online payment modes, namely BHIM UPI, Net Banking, or by using Visa, MasterCard, Maestro, or RuPay Debit card. There will be negative marking of 0.25 marks for each wrong answer in Paper-I. Candidates are, therefore, advised to keep this in mind while answering the questions. For more details, candidates can check the official website of SSC."
Detailed notification 👇🏿👇🏿👇🏿
https://www.hindustantimes.com/education/employment-news/ssc-combined-hindi-translator-exam-2025-notification-out-registration-begins-at-sscgovin-101749178183810.html
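The 0.25-mark penalty per wrong answer means guessing carries a real cost. A minimal sketch of the net-score arithmetic follows; the notification specifies only the penalty, so the one-mark-per-question value here is an assumption for illustration, and unattempted questions are taken to score zero.

```python
def paper1_score(correct, wrong, marks_per_question=1.0, penalty=0.25):
    """Net Paper-I score: full marks per correct answer minus the
    0.25-mark penalty per wrong answer (marks_per_question is assumed)."""
    return correct * marks_per_question - wrong * penalty

# A candidate attempting 80 of 100 questions, 65 of them correctly:
print(paper1_score(correct=65, wrong=15))  # 65 - 3.75 = 61.25
```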
#metaglossia_mundus
"Translate your captions into another language Use Descript’s Translate feature to generate translated captions, subtitles, and transcripts from your original script.
If you want to add a translated voiceover, see our additional guide on dubbing speech. ... Descript supports caption translation into a wide range of languages. Dubbing (translated voiceover using AI Speakers) is a separate feature with its own set of supported languages. Learn more about dubbing in a different language in this dedicated Dub speech article.
Supported languages for translated captions: Brazilian Portuguese, Catalan, Chinese (Mandarin), Croatian, Czech, Danish, Dutch, English (US), Finnish, French (FR), Greek (Beta), Hindi (Beta), Hungarian, Italian, Japanese, Korean, Latvian, Lithuanian, Malay, Norwegian, Polish, Romanian, Slovak, Slovenian, Spanish (US), Swedish, Turkish
Before getting started
Style and apply captions before translating — Translating will copy the styling from your original captions layer. If you change the style later, you'll need to re-translate to apply those updates to your translated versions.
Custom transitions and animations aren’t supported in translated videos — To include transitions between scenes in your translated version, use Smart Transitions. Make sure to apply them before translating.
Performance tip — Translating more than 4 languages at a time may cause slowdowns or errors.
AI usage limits for Translate
Translate uses a separate pool of AI time from other features. The amount of translation time available depends on your plan. Learn more about AI usage limits..." 👇🏿👇🏿👇🏿 https://help.descript.com/hc/en-us/articles/27177566394509-Translate-your-captions-into-another-language #metaglossia_mundus
"Providing a British Sign Language interpreter for some Isle of Man TT races makes the event "inclusive", those behind the service have said.
Dubbed "Deaf TT", the service sees live captions for radio commentary provided on the event's website as well as British Sign Language interpreted coverage at a section of the grandstand during the final two days of racing.
Launched in 2016 by the Manx Deaf Society, the initiative aims to make the event more accessible for those who cannot follow live spoken commentary.
Chief executive Lucy Buxton said it enabled those with a hearing impairment to be on the "same playing field" as they were given the information as it happened.
"If you have to rely on someone else to get that information you sometimes feel as though you are an afterthought," she said.
In the past, those who could not follow live coverage would rely on a summary of the race released in the evening, or would need to be told what had happened.
Ms Buxton said that as the TT was "so fast", as soon as someone explained what had just happened "they have missed the next thing".
The service provided the "same access possibility, whether people choose to follow it or not", and meant that the TT "becomes quite an inclusive event", she continued.
IOM TT
Nine seats are provided in the grandstand for the service
Ms Buxton said the idea came about following a trip to see a Formula One race in Italy with her husband when a lack of English commentary demonstrated what it was like not to be able to follow what was being said.
The service was developed after similar frustrations were raised by the society's members in relation to the TT.
The government now provides nine free seats at the Grandstand on both Friday and Saturday of race week, as well as organising the captions for commentary on the TT website.
The signed commentary is provided by a visiting British Sign Language (BSL) interpreter from the UK organisation Cosign.
As place names and people's names are fingerspelt in BSL, organisers had to create their own vocabulary for brevity due to the fast-paced racing.
That included Cronk y Voddy becoming "road jump jump" and May Hill in Ramsey turning into "house of the vampires" due to a gothic looking house on the corner, Ms Buxton said.
Similarly, signs were created for the riders who were often mentioned in commentary; John McGuinness, for example, is now signed as a pint.
BSL interpreter for the event Carol Kyle said it was a "privilege to make the races accessible to those who live on the island or visit" for the event.
She said that as the TT was a time trial, it could be "extremely challenging" explaining that the person ahead on the road was not necessarily the person winning.
"But the enthusiasm of the commentators is very infectious, and hopefully I translate that enthusiasm, anticipation and speed so people get that feeling of the TT," she added."
https://www.bbc.com/news/articles/cvgdzvv8kddo
#metaglossia_mundus
Interpreter Richard “Smitty” Smith will retire after 46 years at RIT’s National Technical Institute for the Deaf.
"Richard Smith remembers pivotal moments that led to his career path as an ASL-English interpreter. When he was 16 years old, he had the opportunity to see a sign-language interpreter at work. After that, he signed up for free sign-language classes offered at RIT.
“It was these moments, along with others, that made me realize this was the work I wanted to do and that NTID was where I wanted to be,” said Smith.
Forty-six years later, “Smitty,” as he is affectionately known on campus, will retire June 30 from his role as curriculum support/materials development coordinator in NTID’s Department of American Sign Language-English Interpretation.
Throughout his career, Smith, who started at NTID in 1979, has earned several awards, including the Alice Beardsley Professional Interpreter of the Year Award, the Department of Interpreting Services Interpreter Emeritus Award, the Genesee Valley Region Registry of Interpreters Service Award, and the Rochester Deaf Kitchen Most Valuable Interpreter Award. He has also twice received the NTID Advisory Group Award.
“NTID has given me far more than I have contributed—a life, a career, and a purpose,” he said. “I was fortunate to be in the right place at the right time and couldn’t be more grateful.”
How has the field of ASL interpreting changed throughout the years?
The field has changed quite a bit. There is more research and new and different ways of looking at things and more Deaf colleagues interpreting and taking on leadership roles. There is also more open dialogue, which is truly wonderful. When I first started, there were no degree programs offered for interpreters—just training programs. Now, interpreters can earn a doctoral degree or specialty certificates, such as interpreting in healthcare.
What will you miss most after you retire?
Hands down, I will miss seeing all of the students—Deaf, hard-of-hearing, and hearing—grow and become leaders. I will also miss talking about ASL and the work that we do as interpreters.
What are a few of your most memorable interpreting assignments?
I had the opportunity to work in China and see the Great Wall. I was also able to interpret in the beautiful city of St. Petersburg in Russia. It was also a pleasure to work as an interpreter on the social work team within Interpreting Services for eight years. Those were the best years.
What are your plans during retirement?
I have plans to go to the gym, volunteer at the Rochester Deaf Kitchen food pantry, and continue with my hobby of making raised garden beds, rolling plant stands, and holiday ornaments from scrap wood and recycled materials. Oh, and I plan to take naps."
https://www.rit.edu/news/interpreter-richard-smitty-smith-retire-after-46-years-ntid
#metaglossia_mundus
The accredited program trained 13 bilingual interpreters in professional standards for healthcare, social services, government and education.
"Language Learning Access Lab trains local interpreters
Students from the program listen during one of the presentations.
The Language Access Lab announced the completion of its first community interpreter training program, which trains bilingual individuals to interpret in healthcare, social services, government and education settings.
Irene Paxia, the lab's executive director, said it’s important to have professional interpreters available who are trained in ethics, cultural mediation, terminology and best practices.
Paxia said the first person who signed up for the program works at a local hospital and often gets asked to interpret for patients.
“They wanted to do this to interpret for their support of nursing staff, and just to make patients more comfortable when they go into their rooms, with things as simple as providing a glass of water," she said. "So, they are thrilled that they can do that.”
The first cohort of 13 individuals was trained under a licensed instructor with support from two bilingual coaches – one for Spanish and one for Burmese.
The Language Access Lab also offers free presentations for organizations and administrators to understand the dos and don'ts of interpreting, with three hour-long sessions.
The training was supported through a partnership with the St. Joseph Community Health Foundation and the Indiana University Language Roadmap."
By Ella Abbott, a multimedia reporter for 89.1 WBOI June 6, 2025 https://www.wboi.org/arts-culture/2025-06-06/language-learning-access-lab-trains-local-interpreters #metaglossia_mundus
Publishing houses And Other Stories, Pushkin, and Charco have leveraged trends and attention from book awards to attract readers in the U.S.
"
U.K. Publishing in 2025: Three Indie Presses Find Success in Translation
By Adam Critchley | Jun 06, 2025
The U.S. has become an increasingly important market for small literary U.K. publishers. That is especially true for publishers working in translation, and even more so for those that have attracted attention from award juries. Look no further than And Other Stories, publisher of The Book of Disappearance, written by Palestinian author Ibtisam Azem and translated by Sinan Antoon, which was longlisted for the 2025 International Booker Prize, and Heart Lamp by Banu Mushtaq, translated from the Kannada by Deepa Bhasthi, which was also longlisted for the prize.
Mushtaq’s book, a collection of short stories about Muslim women living in India to which PW gave a starred review, quickly sold out in U.S. bookstores and online. Before reprints could be delivered, its e-book edition was deeply discounted to take full advantage of the short window of attention that is all-important for a publisher like And Other Stories.
Stefan Tobler
Founder and publisher Stefan Tobler says North America accounts for around half of And Other Stories’ sales. He argues that the long-held belief that American readers are not interested in buying books in translation, especially from cultures and languages they are unfamiliar with, is a myth.
“I honestly think that nowadays most English-language literary readers are perfectly happy to read translations,” Tobler says, noting that the choke point to growing the market further rests not with readers who are uninterested in buying the books but with publishers who are unable to publish more of them. “The blockage seems to be industry-side: about finding the right books for the press and the right translators. For many of the literatures of the world, once you get beyond the ‘big’ languages, there is a relatively small pool of literary translators into English, and far fewer editors who can read the books in their original languages.”
When the publisher can find a book that resonates with readers, it can and does sell in significant numbers. Noting that translations from Spanish have a strong built-in market in the U.S., Tobler points to the success of Signs Preceding the End of the World by Mexican author Yuri Herrera, translated from the Spanish by Lisa Dillman. The novel, about a girl who is trying to bring her migrant brother back from the U.S. to Mexico, has sold more than 100,000 copies—and more than 50,000 in North America—since its publication in 2015, according to the publisher. The And Other Stories catalog now includes a larger range of Spanish-language writers, including Argentine César Aira, Uruguayan Mario Levrero, and Mexican Pulitzer Prize winner Cristina Rivera Garza, who runs the MFA creative writing program in Spanish at the University of Houston.
Typically, the press’s books are published simultaneously in the U.S. and U.K. “For a while, we were acquiring almost exclusively books that we could publish both in North America and Europe,” Tobler says, “but we are making more and more exceptions to that.” The press has begun acquiring some North America–only rights, such as for That Reminds Me by Derek Owusu, published in 2023.
U.S.-born Adam Freudenheim, the publisher and managing director of London-based Pushkin Press, concurs with Tobler. In 2022, Pushkin was named Independent Publisher of the Year at the British Book Awards; since then, it has seen more and more reasons to expand its footprint stateside.
“My sense is that in recent years the U.S. market—like the U.K. market—has become more and more receptive to works in translation, both commercial and literary,” Freudenheim says. That is particularly the case, he adds, for books translated from Japanese, Korean, and Spanish. As an example, Freudenheim points to Pushkin’s most popular series, Seishi Yokomizo’s mid-20th-century Japanese novels featuring detective Kosuke Kindaichi; The Honjin Murders, translated by Louise Heal Kawai, is the press’s bestselling title in the States.
Pushkin has recently employed a strategy of selling off North American rights to some of its most popular titles. Those include Agustina Bazterrica’s Tender Is the Flesh, translated from the Spanish by Sarah Moses, published by Scribner in the U.S., where more than 750,000 copies have been sold across formats, according to Pushkin; The Passenger by Ulrich Alexander Boschwitz, translated from the German by Philip Boehm, which is published in the U.S. by Metropolitan and is Pushkin’s first book to hit 100,000 English-language copies sold worldwide; and At Night All Blood Is Black by David Diop, translated from the French by Anna Moschovakis, which won the 2021 International Booker Prize and is published in the U.S. by FSG.
More recently, Freudenheim says, the press’s relationship with the U.S. has grown deeper: in May 2024, Pushkin acquired New Hampshire–based Steerforth Press and Hanover Publisher Services, which had previously provided it with distribution services, and with which Pushkin had a working relationship for more than a decade. At the time, Freudenheim called the acquisition “a natural step.” Today, he says U.S. sales continue to grow, noting that the market now accounts for 20%–25% of Pushkin’s overall sales.
Edinburgh-based Charco Press is another publisher in Consortium’s distribution stable to attract the attention of prize juries with literary translations, and its Spanish-language titles and translations in particular have proven a good niche for Charco in the U.S. Caroline Casey, Charco’s director for North American publicity and marketing, says that “U.S. sales have grown dramatically since we launched in 2016, and now account for 55% of our overall sales, outpacing those in the U.K.”
Casey attributes much of the stateside growth to the press’s smart packaging and marketing amid a burgeoning interest in Spanish-language titles. “When I’m visiting stores, I consistently hear that they want to build out their Spanish-language sections, and our sales have been steadily increasing, as more people become aware of the Charco brand,” she says. “One thing that Samuel McDowell and Carolina Orloff, Charco’s cofounders, set out to do was create a unified look and sensibility, which was such a smart marketing decision. If you’ve loved a Charco book, you’re able to recognize another one in a bookstore. It has really helped us build an identifiable list with booksellers and readers.”
The company’s bestselling author to date is Argentine novelist Claudia Piñeiro, whose novel Elena Knows was a finalist for the International Booker Prize in 2022. Earlier this year, Piñeiro, who has had six novels translated into English—including three by Charco—was featured as an Author of the Day at the 2025 London Book Fair. “We’ve made a real commitment to Piñeiro’s work, and people have really responded to it,” Casey says.
Among the other Latin American authors in Charco’s catalog of translations are Argentina’s Selva Almada and Roque Larraquy, Chile’s Diamela Eltit, Mexico’s Margo Glantz, and Uruguayan centennial poet Ida Vitale. In addition to the translations, Charco publishes Larraquy’s work in the Spanish and distributes it throughout the Americas. And under its Spanish-language OriginalES imprint, the press also publishes Colombia’s José Eustasio Rivera, Cuba’s Karla Suárez, Mexico’s Ave Barrera, and Uruguay’s Fernanda Trías, among others.
Adam Critchley is a writer in Mexico City"
https://www.publishersweekly.com/pw/by-topic/international/international-book-news/article/97931-u-k-publishing-in-2025-three-indie-presses-find-success-in-translation.html
#metaglossia_mundus
"On Thursday 5 June, the 12th edition of the dubbing and subtitling adaptation prizes of the Association des traducteurs adaptateurs de l'audiovisuel (ATAA) was held at SACEM's offices.
The jury for the dubbing adaptation of a theatrical film comprised Julia Briand (language production manager, Netflix), Adèle Masquelier (writer, 2017 film dubbing finalist), Taric Mehani (artistic director), Françoise Monier (writer, 2023 film dubbing winner) and Xavier Varaillon (writer, 2021 series dubbing winner).
The jury for the subtitling adaptation of a series comprised Muriel Blanc-Pignol (writer, 2024 series subtitling winner), Ariane Carbonell (French language manager, Netflix), Florian Etcheverry (journalist), Valérie Julia (writer, 2024 series subtitling finalist), Antoine Leduc (writer, 2024 series subtitling finalist) and Blandine Raguenet (writer).
ATAA's next event, the 8th edition of the Prize for Audiovisual Documentary Translation, will take place on 6 November 2025.
The winners
Prize for the dubbing adaptation of a live-action film: Sandrine Chevalier for Empire of Light (The Walt Disney Company - Searchlight Pictures / Dubbing Brothers)
Prize for the dubbing adaptation of an animated film: Franck Hervé for Kung Fu Panda 4 (Universal Pictures / Deluxe Media Paris) and Anne Fombeurre and Abel-Antoine Vial for Marcel le coquillage (avec ses chaussures) (L'Atelier Distribution / Dubbing Brothers)
Prize for the subtitling adaptation of a series: Laure-Hélène Cesari and Mona Guirguis for Brassic 5 (Canal+ / Imagine)"
https://ecran-total.fr/2025/06/06/le-palmares-de-la-13e-edition-des-prix-de-lataa/ #metaglossia_mundus
The folds and ridges of the human brain are more complex than any other in the animal kingdom, and a new study shows that this complexity may be linked to the brain's level of connectivity and our reasoning abilities.
"Your Brain Wrinkles Are Way More Important Than We Ever Realized HUMANS 07 June 2025 By David Nield
Research led by a team from the University of California, Berkeley (UC Berkeley) looked at the brain shapes and neural activity of 43 young people, and in particular the lateral prefrontal cortex (LPFC) and lateral parietal cortex (LPC) – parts of the brain that handle reasoning and high-level cognition.
The grooves and folds on the brain are known as sulci, with the smallest grooves known as tertiary sulci. These are the last to form as the brain grows, and the research team wanted to see how these grooves related to cognition.
The researchers looked at sulcal length and sulcal depth. (Jannsen et al., J. Neurosci., 2022)
"The hypothesis is that the formation of sulci leads to shortened distances between connected brain regions, which could lead to increased neural efficiency, and then, in turn, individual differences in improved cognition with translational applications," says neuroscientist Kevin Weiner, from UC Berkeley.
The analysis revealed that each sulcus had its own distinct connectivity pattern, and that the physical structure of some of these grooves was linked to the level of communication between brain areas – and not just areas that were close to each other.
It adds to the findings of a 2021 study carried out by some of the same researchers, which found that the depth of certain sulci is associated with cognitive reasoning. Now we have more data to help scientists understand why that might be.
Between 60 and 70 percent of the brain's cortex (or outer layer) is hidden away inside folds, and these patterns change with age too. Tertiary sulci can vary significantly between individuals as well.
The researchers developed models that could identify sulci with high levels of accuracy. (Häkkinen et al., J. Neurosci., 2025)
"While sulci can change over development, getting deeper or shallower and developing thinner or thicker gray matter – probably in ways that depend on experience – our particular configuration of sulci is a stable individual difference: their size, shape, location and even, for a few sulci, whether they're present or absent," says neuroscientist Silvia Bunge, from UC Berkeley.
It's clear from this research that the peaks and valleys of these brain structures are much more important than previously realized. They're not just random folds used to pack brains inside skulls – and may have evolved in certain directions over time.
Going forward, the researchers have big plans when it comes to studying brain grooves. Eventually, it's possible that a map of these sulci could help in assessing brain development in children and spotting neurological disorders.
There's a lot more work to do before that can happen though, and the researchers are emphasizing that brain fold length and depth are just two of many factors involved when it comes to our cognitive abilities.
"Cognitive function depends on variability in a variety of anatomical and functional features," says Bunge.
"Importantly, we know that experience, like quality of schooling, plays a powerful role in shaping an individual's cognitive trajectory, and that it is malleable, even in adulthood."
The research has been published in the Journal of Neuroscience." https://www.sciencealert.com/your-brain-wrinkles-are-way-more-important-than-we-ever-realized
"Japanese police are now allowing interpreters to take part in interrogations of foreign suspects by telephone. The measure comes as crimes involving foreign nationals are on the rise across the country.
Nearly 4,200 police officers and staff who speak foreign languages, along with around 9,600 private-sector interpreters, attend interrogations in person and sign the depositions.
But police face the need to respond quickly in emergencies and to call on interpreters specialising in rare languages.
The National Police Agency has amended its criminal-investigation rules to allow interpreters living in remote areas to provide their services from a distance.
In principle, they are still required to interpret in person if they can.
But if they cannot, and live in remote areas, they may go to a police station near their home and interpret by telephone for people in an interrogation room at another station.
The new rules take effect on 1 July.
According to the National Police Agency, foreign nationals committed 21,794 criminal offences last year, with a total of 12,170 people reportedly involved.
These figures are well below past peaks, but they have been rising in recent years."
https://www3.nhk.or.jp/nhkworld/fr/news/20250605_16/
#metaglossia_mundus
"Artificial intelligence, dehumanisation and precarity: translators on the front line of technology-driven degradation of work
Professional translators have for decades had a front-row seat to the impact of new technologies on the world of work. That position has not only forced them to adapt to an ever more uncertain and shifting landscape; it has also let them observe, over time, how each technological innovation, while supplying useful tools for their trade, generally adds a further layer of complexity, dehumanisation and loss of control over the conditions in which they translate, and even over the very nature of their work.
In a profession where command of languages is fundamental, with all the richness of usage and nuance, linguistic codes, subtleties of tone and meaning, double meanings, and the countless cases where only genuine cultural immersion and deep knowledge of a text's context can yield a quality result that reads as faithfully and naturally as the original, the translator's entire value lies precisely in accumulated experience, sensitivity and personal judgement.
Yet technologies presented as progress appear in practice to be steering the profession towards an ultra-capitalist logic in which profitability matters more than quality and the worker is no longer at the centre of things (nor, at times, even at the margins); an evolution that seems to anticipate rather precisely where many other specialised professions could be heading.
With the growing capabilities of artificial intelligence (AI), it is not only the world of translation that is being transformed, but also that of office workers, auditors, lawyers, recruiters, managers, advertisers, analysts, journalists and many kinds of artists and creative professionals, who see looming on the horizon a threat that many translators are already facing today.
How technology is reshaping the profession
According to one of the largest recent surveys of the sector (more than 7,000 respondents in 178 countries), there are roughly 640,000 professional translators in the world, three quarters of them self-employed. It is precisely this majority that is experiencing a rapid technology-driven deterioration of the profession. For this article, we spoke with a dozen translators from several countries in Europe and the Americas, all of whom work in part as subcontractors regularly engaged by translation agencies.
Jean-Jacques (a pseudonym, like all the translators quoted), who can claim nearly 30 years of professional experience with French, English, Spanish and Dutch, among other languages, explains that after passing a series of tests, freelance translators can be added to the pool of collaborators of these large companies, whose "clients generally need regular translations and a degree of security in their operations", he tells Equal Times. "Obviously, they take a commission as an intermediary and can squeeze translators' rates, because they generally provide steadier work." It is the largest agencies (which capture a fifth of the market) that have changed the job beyond recognition, because "they integrate numerous technologies aimed at speeding up the work, cutting their costs and maintaining or increasing their profit margins", he says.
Jean-Jacques has always been open-minded and curious about technological advances in his field. He is the opposite of a technophobe, and yet he has seen for himself how working conditions, and the very nature of his work, have steadily deteriorated.
AI and translation have been closely linked since the 1940s, he reminds us. He began noticing the limits of machine translation systems as early as 2003, and watched with interest as neural-network-based translation allowed the big agencies, from 2016 onwards, to integrate it into their computer-assisted translation (CAT) tools.
"Their main task is to segment texts into translation units. These are usually sentences, but a unit can also be a single word, part of a report title, for instance. The CAT software then presents the document to be translated as a grid, with the source-language sentence on the left and a box on the right for the translation." CAT tools then store each translated segment in databases called "translation memories" (TMs), which grow as many translators contribute to them. These memories can reach considerable sizes: the European institutions, for example, make their translation memories publicly available, and they contain billions of segments whose translations are already locked in for future use.
As a result, "if a sentence identical or similar to the one being translated appears in the translation memory, CAT tools can suggest it to the translator to speed up the work", he explains. "Obviously, the agencies jumped at the opportunity, and this ability to retrieve previously translated text also means that when a text is sent to a translator along with a full translation memory, they can cut the rate for every sentence that already exists in the memory", with "different rates" per word "depending on whether each segment is a perfect match or merely similar to the translation memory". Thus, if the rate paid for a new sentence is 100 per cent, it can drop to 30-50 per cent of the base rate for a sentence with a high similarity score.
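The tiered fuzzy-match pricing Jean-Jacques describes can be sketched in a few lines of Python. This is a minimal illustration, not any agency's actual scheme: the thresholds and discount factors below are assumptions (the article only says a near match can pay 30-50 per cent of the full rate), and `payable_rate` and `invoice` are hypothetical helper names.

```python
# Hypothetical fuzzy-match pricing grid, loosely modelled on the tiers the
# article describes: full rate for new text, heavy discounts for segments
# that closely match the translation memory. All numbers are illustrative.

def payable_rate(similarity: float, base_rate: float = 0.10) -> float:
    """Per-word rate paid for a segment, given its best translation-memory
    match score (0.0-1.0) and a base rate (e.g. EUR per word)."""
    if not 0.0 <= similarity <= 1.0:
        raise ValueError("similarity must be between 0 and 1")
    if similarity >= 0.95:    # exact or near-exact match: steepest discount
        factor = 0.30
    elif similarity >= 0.75:  # high fuzzy match
        factor = 0.50
    else:                     # no usable match: full rate
        factor = 1.00
    return base_rate * factor

def invoice(segments: list[tuple[int, float]], base_rate: float = 0.10) -> float:
    """Total payment for a list of (word_count, similarity) pairs."""
    return sum(words * payable_rate(sim, base_rate) for words, sim in segments)
```

Under this sketch, a 10-word sentence with no memory match pays ten times the base rate, while the same sentence at a 96 per cent match pays less than a third of that, which is the economics driving the pre-translation workflow the article goes on to describe.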
This way of working, with automatic pre-processing of texts, has already become the industry norm, where "working with agencies means automatically accepting this kind of pricing". What is more, the mental load and the very nature of the work are completely distorted in favour of a dehumanised, alienating way of operating.
Since companies automatically pre-translate everything that can be pre-translated, instead of a clean text to translate freely, "we almost always receive files that are already segmented, containing many sentences already translated, with machine-translation suggestions for the segments that have no equivalent in the translation memory". The translator thereby ceases to be a translator: "it is no longer so much about translating as about reviewing and correcting the segments proposed by the machine when exact matches exist", Jean-Jacques laments.
Beyond the mental load of correcting rather than translating freely, "these tools do not understand the text, so they can propose translations that are very close but do not fit the context they appear in". In other words, "even if I am paid less because the text appears to match, I have to correct the errors and rewrite it entirely". On top of that, machine-translation tools, like other generative AIs, suffer from "hallucinations" and "can suddenly add or remove parts of a sentence, which makes correcting this kind of text extremely tedious, because absolutely everything has to be checked".
Cash cows
Rosa, a translator from English and French into Spanish with two decades of similar experience, fully shares Jean-Jacques's view. "Many people do not realise that a machine cannot replace a human being, or that machine translation leaves a great deal to be desired", she says. Although she is happy to work with her main direct clients, she explains that with agencies, "only profitability counts, and from that standpoint translation has become a commodity extracted from us at the lowest possible price, at the translators' expense; we are treated like cash cows". Sadly, she notes, "nothing matters to them: not the quality of the product, as long as it is vaguely acceptable (and I spend my time correcting real horrors), nor how we are treated; they think only of their precious profits, as fast as possible. These agencies are thus the ones that demand the most, pay the least and treat us the worst."
"One of my clients is a large company that outsources the management of its translations to an agency, which in turn subcontracts the translations to a translation agency. The latter uses freelance translators to do its translations on an infernal platform", she explains. "In the end, I find myself translating segments in tiny boxes, where some terms are also underlined in different colours (to exploit the system's translation memories and, of course, to avoid paying us for those terms), in the middle of a page with fifty thousand functions, which is extremely confusing and visually exhausting, and which is part of an extremely complicated global system." This wastes "an enormous amount of time" and energy on tasks that have nothing to do with translation itself, for a fee "half of what I earn under normal conditions".
"I accept it out of necessity, but I am sorely tempted to tell them where to go", she says, because "they try to automate everything to the maximum to cut their own costs, while we have to invest our time in a host of administrative, IT and bureaucratic tasks just to get through a simple translation or revision".
"And if the machinery seizes up because of a correction or an error, the chaos is such that, between the messages and the alerts, the reviewer, as the person ultimately responsible, has to fill in several documents explaining what happened and how to prevent it happening again (humble yourself, repent!), and is penalised with a period without work or revisions. It is sheer madness", she explains.
"These are big companies that know nothing about translation but everything about profit", Rosa adds. "As an extreme example, there are fully automated platforms" that pay up to seven times less than normal rates and that recruit by sending "an automated email announcing that a translation is available on the platform. You then have to rush in and, like a pack of hungry dogs, try to grab a piece of the prey to share among several people, because you translate whichever segments are still free" (that is, words or sentences out of context) "and, within minutes or even seconds, the translation is wrapped up".
After a review, "if something of yours is corrected, you are threatened, also automatically, with being disconnected from the system if further corrections prove necessary; corrections that are sometimes highly debatable". And this tendency to put profitability before quality is the most destructive aspect of all: "there are platforms that auction off translations, and whoever is willing to accept the lowest price wins the job. On top of that, the translation then goes to a reviewer who is paid according to the corrections they make, which are deducted from the first translator's fee", she laments. "I do not know what kind of translation that produces, but allow me to doubt its quality, and, without any doubt, the quality of life and satisfaction of the translators."
AI, a tool and a threat
The technological advance that makes automated pre-translation possible, and that in some companies and organisations is already beginning to replace human work on the most predictable or impersonal texts, such as accounting tables and bureaucratic documents, is the neural-network AI that Jean-Jacques saw emerge in 2016.
"In general, AI without human supervision can perhaps be used for monotonous tasks but, even in very technical fields, it often gets things wrong", José F. Morales, professor of computational logic and researcher in the AI department of the Universidad Politécnica de Madrid (UPM) and at the IMDEA Software Institute, tells Equal Times.
A translation AI "will struggle to grasp a text's connotations or to put emotion where it belongs", and its use is already beginning to degrade languages: "Some strange uses of English are being normalised by machine translators and AI, which makes them appear in more and more texts and leads people to regard them as correct. Then the AIs themselves feed on these texts and are trained on them, and the loop closes", a vicious circle that could worsen exponentially in the coming years, he stresses.
Yet he assures us that it is a useful tool and that "for translation, as for almost everything, AI always works the same way. We should treat AI as if it were a student we are supervising: we can ask it to do a piece of work provided we have our own criteria for judging whether its output is good or not."
There is also a readability problem, Rosa points out: the style produced by overuse of AI can be torture for a human reader. "A machine cannot replace a human, except for impersonal, repetitive texts devoid of any literary style", she says, and if translators are sidelined to the point of making the profession untenable, "I worry about the future, not only my own but that of the art of writing, because in the long run, without intervention, journalism and literature could suffer the same fate".
For her part, Alina, a translator who works mainly from Russian into Spanish and English, and who also knows some Arabic, Swedish, German, Ukrainian, Tatar and Belarusian, sees things clearly: "The subject of AI confronts us with an eternal dichotomy, because it is at once a tool and a threat", she says. Indeed, "the fact that AI learns from us makes my head spin", since "we translators are ourselves teaching it to translate, to improve... We are teaching the machine to replace us."
What to do: resist, like Hollywood
In search of answers on how to meet this challenge, Equal Times spoke with sociologist Lindsay Weinberg, director of the Tech Justice Lab at Purdue University (Indiana, United States), and Robert Ovetz, a political scientist specialising in non-profit organisations and labour movements who teaches at San José State University (California, United States). Both suggest flatly rejecting AI's encroachment, with the red line drawn at defending creative work as something that can only be done (and paid for) by a human being, taking inspiration from the successful resistance of Hollywood's writers and screenwriters.
If they push back, translators, "like us university professors", will probably be portrayed as "anxious technophobes who refuse to move with the times" rather than as defenders "of the quality and integrity of our working conditions", Weinberg notes. It will also be claimed that AI is "superior, more efficient or more reliable than a human translator", when countless examples prove that "this is not the case". Against a mechanical conception of work reduced to zeros and ones, translation "demands cultural sensitivity and contextual awareness, two highly qualitative kinds of knowledge that are impervious to automation".
Furthermore, freelance translators are by definition isolated from one another, while the professional associations in each country or region are generally far from constituting a union-like or united force capable of mounting organised resistance. And while some countries seem sufficiently aware of the nature of the problem, others have a more naive perception of AI.
"They are very fragmented, they work from home, they go through intermediaries" and, moreover, "they have no direct contact with the users of their work", Ovetz observes. "If they want to change that, they need to map the structure of their work and identify who sits at the end of their product's supply chain", that is, "they need to identify where the work assigned to them comes from, how it is distributed and where it goes", since "the key to organising is understanding the supply chain, so as to find the weak points where it can be disrupted and their interests defended".
And that is where translators can "find allies", because "some of these clients have unionised staff, so they could act jointly with them, or get them to change how the work is allocated, or how they are paid, or how the finished products are delivered", as is possible with clients that are not agencies.
Indeed, he recommends insisting that translators have access to the original text, and highlighting "how the automation process degrades the final product". The "tactic would be to organise around that and to discredit the automated work process", bearing firmly in mind that "the technology is used to rationalise the task, break it into small parts and outsource or automate different elements of it, with whatever remains handed to human translators".
As Jason Resnikoff, professor of contemporary history at the University of Groningen in the Netherlands, pointed out at a recent European Trade Union Institute conference on automation, "narratives of technological progress are generally unfavourable to workers, and resisting those narratives is dangerous in itself", but unions and workers should do so all the same.
"The employer will say, for instance, something like 'these people are not realistic, they do not grasp how the economy is changing'", says Weinberg.
"I think unions are going to have to take the hit for a while", Resnikoff believes. "It is up to unions to find ways of refusing employer-imposed changes to the means of production without opposing progress, which obliges them to impose their own definition of progress."
The cause of the bossless
It may again be worth keeping in mind that idea of a "just society" so that automation stops being, as it has been since the twentieth century, a "disruptor of the fabric of employment", Resnikoff argues. To this is added the question of how to reverse the further degradation of the translator's status, as with other highly skilled jobs, a degradation that Elena, a translator from English and French into Spanish with more than three decades of experience, captures very well.
A little like the sans-culottes of 1789, "I consider myself a 'sans-patron', without a boss", she says, because "for tax purposes we are treated as a business, but I think we are in the wrong category". Elena sees herself as an "involuntary independent or self-employed worker" since, like many freelance colleagues, she does work that is in practice similar to an employee's, but as "a worker who has been disconnected from an employing company and classified as self-employed", and all this "while losing the rights and protections that come with salaried employment".
That is why she calls on unions to also defend the cause of "employees without a boss": "We are treated as a business when we are not one, because as individual freelancers, however hard we work, we are not going to generate profits the way a company can, because our work trades our time for money, and our minds and efforts have a physical limit." That is precisely what employers, through AI, are trying to remove from the equation, disregarding the fact that without human translators it is impossible to achieve the quality needed to make texts accurate and useful, or even intelligible or simply readable for their human readers.
Hence the importance of organising and of making everyone aware of the weakness of this system, Ovetz insists: translators must tirelessly reach out to newcomers "in their field and prepare them for what they are going to encounter". And he warns: "they must reach out to these people and bring them into their efforts, otherwise everyone will be divided".
This article has been translated from Spanish by Charles Katsidonis"
By José Álvarez Díaz
6 June 2025
https://www.equaltimes.org/intelligence-artificielle?lang=en
#metaglossia_mundus
"Google’s conversational ‘Search Live’ is now starting to roll out
Of all the wild demos we saw at Google I/O 2025, the new “Search Live” experience was easily one of the most futuristic. The idea of having a real-time, back-and-forth conversation with Google Search felt like a true step into the next generation of AI-powered search.
And now, as reported by 9to5Google, it looks like Google is beginning to test this powerful new feature in the wild. Powered by Project Astra (the same underlying tech as Gemini Live), Search Live aims to transform the standard query-and-response of search into a fluid, ongoing conversation.
How Search Live works
For users who have the feature rolled out to them, there are a couple of new ways to launch this experience in the Google app. The first is a new waveform icon that appears under the main search bar, replacing the previous Google Lens shortcut on the left. The second is a new circular button that shows up to the right of the text field when you’re in an AI Mode conversation.
Once launched, you’re taken to a clean, fullscreen interface with a curved, Google-colored waveform that animates as you speak. You’ll find pill-shaped buttons for “Mute” and “Transcript,” along with a three-dot menu that reveals four different voice options to choose from: Cosmo, Neso, Terra, and Cassini.
The interaction is designed to be truly conversational. You can ask your question, and Search Live will respond while also surfacing a scrollable carousel of the websites it used to inform its answer. It can even ask you clarifying questions to refine your query, and you’re free to ask as many follow-ups as you need. In a particularly neat touch, you can even exit the Google app and your conversation can continue running in the background.
What’s here now and what’s on the way
It’s important to set expectations, as this initial rollout doesn’t include the full vision we saw demoed at I/O just yet. What is available now is the voice-based conversational experience. You can talk to Search, get spoken answers, and see sourced links.
What is not yet available is the powerful camera capability that allows you to stream video from your phone’s camera and ask questions about what you’re seeing in real-time. For that, you’ll still need to rely on your Gemini app for the time being.
This is a massive step forward for what Google Search can be. Moving from a simple tool for finding information to a true, interactive, conversational partner is a profound shift. We can’t wait to get our hands on this and are even more excited for when the full experience, camera and all, becomes widely available. For now, keep an eye on your Google app – you might just get an early look at the future of search." By Robby Payne June 6, 2025 https://chromeunboxed.com/googles-conversational-search-live-is-now-starting-to-roll-out/ #metaglossia_mundus
Podcast: When we think of creativity, we usually think of the arts — the ability to compose a song, write a novel, express ourselves through painting, dance, or theater. It’s the mysterious spark that ignites our imaginations, allowing us to communicate and experience emotions, ideas, and worlds that we could otherwise never touch.
But creative thinking isn’t just limited to artists and their work. Increasingly, researchers are discovering that it plays a key role in human intelligence, problem-solving, and even our well-being. On this episode, we explore what creativity is, how it works, and how we can use it in unexpected ways. We hear about why one musician says AI programs aren’t a threat, but a means of democratizing music; what research has revealed about the power of creativity to shape the brains and success of children; and how the burgeoning field of “design thinking” is helping to improve our health care system.
...
Cognitive scientist and psychologist Keith Holyoak explains how and why creative thinking — and especially the ability to draw analogies — is central to human intelligence and problem-solving. His latest book is “The Human Edge: Analogy and the Roots of Creative Intelligence.”
We talk with emergency physician and researcher Bon Ku about expanding our definition of creativity beyond the arts, and how “design thinking” is helping to improve patient outcomes, make hospitals more efficient, and expand access to health care. He co-authored ”Health Design Thinking: Creating Products and Services for Better Health.”
Air Date: June 6, 2025
Listen 49:10
https://whyy.org/episodes/exploring-the-secrets-of-human-creativity/
#metaglossia_mundus
"There are an estimated 100,000 words in modern Chinese, yet the Hanyu Da Cidian dictionary (the most inclusive available Chinese dictionary) contains over 370,000 words, including less frequently used or specialised terms.
Meanwhile, English, the lingua franca, has approximately 170,000 words in modern use, and a whopping 600,000 entries in the Oxford English Dictionary.
Even with this volume of words and meanings from both languages, however, some things remain untranslatable.
But that’s where translators step in.
Translators work on translation – the process of converting the meaning of a text from one language to another. They balance staying true to the original meaning and making a text sound natural in the target language, ensuring the final result communicates the same message, feeling, and tone as the original.
Localisation, on the other hand, involves more than translating.
It’s adapting the tone, style, and context of content to align with the cultural norms and preferences of the target audience. This ensures that the message is not only understood but also resonates emotionally.
In short, you can translate without localising, but not the other way around. Translating without localising simply results in your message missing the mark, or worse, changing it entirely.
Either way, neither is an easy task – Amy Tang can tell you that.
Tang pursued a BA in Political Science and Government at the National Cheng Kung University and holds an MA from the Graduate Institute of Translation and Interpretation at the National Taiwan Normal University. Source: Amy Tang
The start of a translator and localisation expert’s journey
Born and raised in Taiwan, Tang’s learning of new languages began when she enrolled in English after-school classes.
“I never really thought about why I liked the language; I just happened to be good at it, and it was easy for me to pick up,” admits Tang. “I got a sense of achievement out of it, so it got me interested in exploring more.”
That said, the more she learned about the languages, the more Tang discovered the magic and beauty behind them.
“There are a lot of small details you pick up that you would not know if you’re not a professional in this field,” she says.
Tang currently speaks five languages: her mother tongue is Mandarin, she picked up English as a child, took up French in college, learned German during a university semester abroad in Austria, and most recently, dabbled in Korean for a work project.
Most notable among Tang's credentials is her MA in Language Interpretation and Translation from the National Taiwan Normal University.
This particular postgraduate degree was prompted by Tang’s undergraduate studies when she pursued a BA in Political Science and Government at the National Cheng Kung University.
There, she took courses in the foreign language department and learned about translation and interpretation – a subject that sparked her interest.
“I think it’s cool to be able to translate and interpret on the spot,” she says. “I took an introduction course, and then took more courses and realised that this was a field I wanted to study about or at least enhance my skills in.”
By the end of her postgraduate studies, Tang, already working as a freelance translator for English to Mandarin (and vice versa) projects, came across a job opportunity to work in a translation localisation company and jumped on board.
We caught up with Tang to learn more about working in translation and localisation and the best ways to help you get started on your own translation journey.
If you’re looking to venture into your own career in this field, these are Tang’s translation tips to take note of. Source: Amy Tang
What is your favourite part about translating from Mandarin to English?
One thing that I’ve been enjoying is discovering all the redundancies in Mandarin. It’s a non-structural language, so if you read or hear it, you can still understand it, no matter how redundant it is. But when you translate it, you realise that so many redundant words are used.
There are also many hidden things that you need to look out for, like missing subjects, for instance. When you hear a sentence in Mandarin, you know what the speaker means, but in English, you need to narrow it down to a certain word or a specific aspect to be able to translate it.
Mandarin has a lot of words that cover a broad idea, but English has a lot of specialised words to choose from. So, that’s one thing I definitely find interesting when translating from Mandarin to English.
What is the hardest part about translation?
The hardest part about translation is learning to be flexible. Many translators will get very fixed on translating a certain word into the target language, which is not how it should be.
Of course, it depends on the type of text that you’re working on – if you’re working on marketing pieces, you’ll need to be a little more creative; if you’re working on legal documents, you need to stay close to what is meant in the original text.
But flexibility plays a big role here – you do not just translate word for word. You need to make sure that you understand what the text means in your source language and what it would look like in your target language.
You also need to focus on your audience. Can the audience understand you in this specific field or context? That’s the real challenging part.
You can tell a person is really good at their work if you, as the reader, don’t need to ask a lot of clarifying questions. That’s part of mindreading in translation – knowing what the audience wants.
Overlooking the packed hemicycle in Brussels, in booths half-obscured by tinted glass, a brigade of interpreters feeds precious audio streams of the proceedings to lawmakers from the 27 nations of the European Parliament. Source: AFP
What translation tips do you have for aspiring translators?
There is a common myth that people believe – if you know the source language, you’ll know how to translate it into your mother tongue or another language you know, but that’s not true.
To be able to translate, you need to ask yourself if you’re good enough in your mother tongue or your native language. Many ignore that, thinking they know enough, so it’s okay. The truth is that many people are not good at their native language or mother tongue, especially in written form. You can see grammar mistakes, awkward sentences or even incorrect word choices.
When translating for the written form, you have to make sure you’re using the right tone, grammar, and punctuation. You need to be very detailed, and you need to know the language well.
Second, you must ensure that your understanding of the source language is up to a certain proficiency, preferably completely fluent or near native-level.
For instance, I’ve worked with Mandarin-to-English translators who are native English speakers themselves, and we know it’s harder for them to learn Mandarin. So, their understanding of Mandarin is very critical, or else we will see mistranslations here and there and confuse the readers or audience.
But people often think that because they know the language, there’s no need to work on it further. In this case, you’ll find out that your understanding of the source language isn’t enough.
That’s the last thing – to be a translator, you need to do a lot of research and be willing to learn new things. You need to be curious to look up all the information you need, or else you’ll give the wrong or inaccurate translation.
Do you think having a good command of translation or interpretation skills is crucial in the localisation industry?
Video game localisation is very nuanced, where you have to strike a delicate balance between staying true to the source language and being very natural and creative in the target language. In my personal opinion, I think it’s harder compared to other translation tasks because you need to follow the source closely and need to make it sound natural and not stiff – you have to find a middle ground.
So, localisation does not translate directly to being creative – it’s about focusing on your target audience and meeting their expectations of your product.
I’ve talked to some translators in the gaming industry, and they said that even though we believe creative translations read better, many players actually expect something closer to the original text. So that’s where strategies in localisation apply because, through that, the audience will feel more connected to the game.
The same applies to novels or films; novel translators have said the same.
If you translate a Japanese or English novel into Mandarin, you don’t make it read like a Mandarin novel written by a Mandarin author. You translate the original text into Mandarin and make sure people can tell that the story was originally written in another language to give people the feeling of reading something different.
If the translation is presented as a Mandarin classic or modern literature, your audience doesn’t want that. The readers expect elements from a foreign language, so you need to balance the translation with the localisation.
But again, I think it really depends on what you’re working on.
During her undergraduate studies, Tang participated in an exchange programme at Austria’s Johannes Kepler Universität Linz. Source: Amy Tang
Does having the experience of being abroad help build a better understanding of how to localise content?
I would say yes because even though it’s just a half-year exchange programme, you get to know a lot more about different cultures and people.
And because you’re studying abroad, you’ll have a lot of local classmates and professors, so you’ll understand how they speak and work on stuff or their expectations and values. This way, you learn how to communicate better and interact with them. In the localisation industry, this allows me to work with linguists from different parts of the world and really understand their approaches to translation and their work style.
So, studying abroad definitely gave me an edge when I started my career in localisation with a big international team."
https://studyinternational.com/news/translation-tips-localisation/
#metaglossia_mundus
VLC introduces cutting-edge AI subtitling and translation capabilities
"The widely used VLC media player, developed by the nonprofit organization VideoLAN, has surpassed 6 billion global downloads and introduced a preview of its AI-driven subtitle system and translation in over 100 languages.
During CES 2025, VideoLAN president Jean-Baptiste Kempf demonstrated the revolutionary feature, which leverages open-source AI models to seamlessly generate and translate subtitles locally on users’ devices. Kempf explained that this capability operates directly within the executable, without relying on cloud services. “What’s important is that this is running on your machine locally, offline, without any cloud services. It runs directly inside the executable,” he said.
This development builds on prior efforts involving external plug-ins that employed OpenAI’s Whisper speech recognition system. Unlike earlier versions, the new feature is embedded directly into the VLC player, enabling automatic subtitle generation and real-time translation without external dependencies. Users can expect enhanced accessibility and localization for global content, though no specific release date for the public rollout has been announced.
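The pipeline VLC describes, a local speech model producing timed segments that the player displays as subtitles, can be sketched in outline. Below is a minimal, hypothetical Python sketch that converts timed transcript segments (of the kind a local Whisper-style model emits) into the SRT subtitle format; the segment data is invented for illustration and none of this is VLC's actual implementation:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as the HH:MM:SS,mmm timestamp SRT requires."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Turn (start, end, text) tuples into an SRT subtitle document."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Invented example segments, as a local speech model might emit them:
segments = [(0.0, 2.4, "Welcome to the demo."),
            (2.4, 5.1, "Subtitles are generated locally.")]
print(segments_to_srt(segments))
```

Because every step here runs locally on plain data structures, the same offline, no-cloud property the VideoLAN team emphasizes holds for the whole sketch.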
The unveiling comes as VideoLAN marks a major milestone—exceeding 6 billion downloads of VLC. Kempf proudly noted that “VLC’s user base continues to expand, even amidst fierce competition from streaming platforms.” This growth underscores the enduring appeal of versatile, privacy-conscious media tools.
Key Features and Implications of VLC’s AI Subtitling Functionality
• Real-time Subtitle Creation: Automated generation of subtitles during video playback.
• Offline Operation: Full functionality without requiring an internet connection.
• Multi-Language Support: Translation and subtitling available in over 100 languages.
• Enhanced Privacy: Local processing ensures user data remains private.
• Seamless Integration: Directly embedded in the VLC executable for intuitive use.
Comparison with Other Solutions
Unlike many streaming services and dedicated subtitling software that depend on cloud-based AI for translation, VLC’s approach prioritizes user autonomy and privacy by handling all processes locally. While cloud services often require continuous internet access and potentially expose sensitive user data, VLC’s offline capabilities set it apart as a secure and flexible option.
Competitors offering cloud-dependent subtitling solutions may deliver rapid updates and more extensive language models, but VLC’s embedded system ensures robust functionality without compromising user privacy. This distinction may appeal to professionals, content creators, and viewers in regions with limited internet access or stringent data privacy requirements.
Source : sentisight.ai"
https://www.daily-sun.com/post/808493
#metaglossia_mundus
Ram Mohan advocates for AI approaches that help preserve endangered languages and promote digital inclusion.
"How Multilingual AI Can Protect Language and Improve Global Technology RAM MOHAN / JUN 5, 2025 Ram Mohan is Chairman and Founder of the Coalition on Digital Impact (CODI), an independent, global coalition founded to empower global communities to access and navigate the Internet in their native languages.
The modern ability to digitize medical records, financial transactions, and cultural documents means that important items can be recorded for posterity. But when it comes to language, technology’s rapid advancement could lead to the extinction of global languages.
For example, in southwestern Ethiopia, the Ongota language has virtually no digital presence, lacking a standardized script, keyboard support, app localization, digitized content, or representation in AI language models. Ongota speakers—and their knowledge systems—are effectively invisible in the digital world, and the language will likely die with the handful of elderly people who still speak it.
Current estimates predict that over half of the world’s languages will become extinct within 75 years, with some estimates suggesting that one language dies every two weeks. AI has created new possibilities for communication, with instant translation services and voice assistants. But while AI has the potential to bridge linguistic divides, it also poses a threat to the very languages it seeks to translate and understand.
As governments and private companies race to develop AI, large language models (LLMs) like ChatGPT are being trained with a handful of dominant, data-rich languages, such as English, Spanish, and Mandarin. Ignoring countless languages and the communities they represent has created an AI landscape that is heavily skewed toward the Western world. As AI becomes more integrated into sectors like education, healthcare, and governance, the prevalence of a handful of languages in these systems risks leading to linguistic homogenization.
Harnessing AI to Tackle Today’s Challenges The digital access movement encompasses all efforts to address economic, cultural, and technical disparities that limit access to the internet and the opportunities it creates. But language has been overlooked, despite being a key factor in forming meaningful connections online.
Language preservation must move beyond oral traditions and become part of the digital age. We must work to design AI systems that embrace linguistic diversity and integrate multilingualism into the internet infrastructure to avoid erasure and improve our digital world. AI systems can be part of the solution, helping to document and revive endangered languages through text, speech recognition, and translation tools. So, how can we build a world where everyone can navigate the internet in their own language?
Internet pioneer Vint Cerf says, “Meaningful spoken and written linguistic access is necessary to realize the Internet's full potential. Absent effective action, many communities around the world will be wrongly excluded. This cannot be an acceptable outcome."
The newly launched Coalition on Digital Impact (CODI), an independent, global alliance of internet leaders and like-minded organizations, is rising to meet this challenge. Through education, advocacy, and awareness efforts, CODI and its members are working to reduce barriers to internet access, build a more secure infrastructure for the future of the web, and advance policies that promote equitable access to the digital world.
Speakers of minority languages are largely unable to reap the benefits of AI. A child in a remote village may struggle to find educational content in their native tongue or may face challenges in using voice-activated devices or translation services. Without digital tools, the practical advantages of maintaining a language diminish, hastening its extinction.
Universal Acceptance: A Small Change With Outsized Benefits For AI to be a force for good in the digital age, we need to foster a culture of multilingualism and 'Universal Acceptance' online. One solution is for organizations to implement Universal Acceptance (UA) protocols, the principle that all valid domain names, email addresses, and internet protocols should be accepted and recognized by all internet systems, regardless of language, script, or other characteristics. Non-Latin scripts such as Arabic, Chinese, and Cyrillic are increasingly being used for domain names, and UA ensures that systems can handle these characters and scripts. UA-compliance can be achieved through simple code edits or features like voice-to-text or live captions ― small changes with far-reaching benefits.
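A concrete starting point for UA is handling internationalized domain names correctly. As a minimal sketch (not CODI's or any standard's reference implementation), Python's built-in `idna` codec, which implements the older IDNA 2003 rules rather than the UTS #46/IDNA 2008 processing that production UA work targets, shows the round trip between a non-Latin domain label and its ASCII-compatible form:

```python
# Convert an internationalized domain name to its ASCII-compatible
# (Punycode) form and back, using Python's built-in "idna" codec.
# Note: this codec implements IDNA 2003; modern Universal Acceptance
# work follows UTS #46 / IDNA 2008 (e.g. the third-party "idna" package).

def to_ascii(domain: str) -> str:
    """Encode a Unicode domain into its xn-- ASCII-compatible form."""
    return domain.encode("idna").decode("ascii")

def to_unicode(domain: str) -> str:
    """Decode an xn-- ASCII-compatible domain back to Unicode."""
    return domain.encode("ascii").decode("idna")

print(to_ascii("bücher.example"))   # → xn--bcher-kva.example
```

A UA-compliant system accepts either form anywhere a domain is expected; the "simple code edits" mentioned above often amount to routing user input through a conversion like this instead of rejecting non-ASCII characters outright.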
Designing products and services for a wide range of users, without the need for adaptations or specialized features, is essential for global market participation. A 2017 study found that the Universal Acceptance of internet domain names could be a USD $9.8 billion+ opportunity. UA represents the future of the web ― one that is backed by a more democratic, secure infrastructure ― and an enormous economic opportunity. But this future will depend on global collaboration and investment.
A truly multilingual internet requires addressing language barriers, literacy challenges, cultural sensitivities, and social stigmas that impede technology adoption. AI will play a decisive role in the future of global languages – it will either preserve them or accelerate their extinction. By developing AI systems that reflect linguistic diversity and building multilingualism and Universal Acceptance into internet systems, we can create a digital world where every linguistic group has a place. The time to act is now." Ram Mohan https://www.techpolicy.press/how-multilingual-ai-can-protect-language-and-improve-global-technology/ #metaglossia_mundus
"Nonbinary Hebrew transforms the language for everyone As a living language, Hebrew is constantly evolving to adapt to the needs of its speakers. That must include nonbinary people too.
How do people who go by they/them pronouns in English refer to themselves in Hebrew? What do you call the Jewish rite of passage ceremony for a nonbinary tween? What plural words should we use for a group of people who each have a different gender?
I was faced with these wonderfully productive dilemmas when I studied Hebrew in college eight years ago. I had come out a year earlier as nonbinary, which means for me that my gender does not fit neatly into the boxes of man or woman. I asked Eyal Rivlin, my Hebrew professor at University of Colorado at Boulder, about the established conventions for nonbinary people. After he and I did research by asking friends, family and colleagues and examining literature, we realized there was no comprehensive system for speaking Hebrew without using the masculine or feminine.
We decided to experiment and created the Nonbinary Hebrew Project in 2018.
Hebrew is both ancient and modern. As a living language, it is constantly changing, evolving and growing to adapt to the needs of its speakers. It is used daily for conversation, prayer, ritual and study. As such, it is critical that everyone who needs or wants to use Hebrew can do so in a way that is affirming.
One of the aspects of Hebrew that distinguishes it from English is its use of what linguists call grammatical gender. English has some familiar uses of it, such as the personal pronouns she/her and he/him. But in many languages, including Hebrew, almost all parts of speech are gendered, including verbs, nouns and adjectives. This grammatical gender is often chosen based on the gender of the speaker or the subject of the sentence.
Some queer communities in Israel use “lashon me’orevet,” or “language that crosses over,” in which they intentionally challenge this convention by switching grammatical gender mid-sentence or in every other sentence for the same speaker or subject. This is affirming for many people — but I sought to create a third option for people like myself for whom the grammatical masculine or feminine did not entirely affirm my identity.
The system that Eyal and I created is intuitive to Hebrew speakers because it creates a third parallel system of grammatical gender for use alongside the masculine and feminine options.
In traditional Hebrew, for example, grammatically masculine words are usually considered the default, while grammatically feminine words often end with an additional “-et” or “-ah.” In our new, more expansive option, many singular nouns, verbs and adjectives end with “-eh” to distinguish them from the masculine or feminine.
In another example from traditional Hebrew, grammatically masculine plural words end in “-im,” with groups referred to with the pronouns “hem” or “atem,” while feminine plural words end in “-ot,” with groups referred to with “hen” or “aten.” In our new system, plural words can use the ending “-imot” or “-emen,” both of which combine existing plural endings and were already used by some people before our project.
Since the Nonbinary Hebrew Project’s creation, many people across the world have applied our system for their own uses, such as Yizkor (memorial) prayers, baby-naming ceremonies and wedding blessings. There are other innovations as well, such as using “bet mitzvot,” instead of a bar or bat mitzvah. I can now start the day with the prayer “Modet Ani” (rather than “modeh” or “modah”). And when I’m called to the Torah, I can receive a blessing that truly honors me.
Another application I have been excited to see is the use of our system to refer to the Divine without using the masculine or feminine.
There is a rich history in Hebrew texts of gender being used in playful ways, for both people and the Divine. Many names for the Divine, such as Rock or Fountain of Life, are already beyond traditional notions of binary gender.
The Nonbinary Hebrew Project opens up possibilities for radical joy, euphoria and recognition of the other as a sibling rather than a stranger. We connect with one another through shared language, and our project aims to provide another tool for weaving communities closer together.
If you’re interested in learning more, come to one of our workshops this Friday evening, June 6, at Congregation Sha’ar Zahav in San Francisco or stay tuned for our virtual workshops by checking the calendar at nonbinaryhebrew.com. You can find applied uses of the system on the website, as well as grammar charts, podcasts and news articles.
This Pride Month, and all year long, let’s find joy together by using language to uplift one another and honor each other’s light." BY LIOR GROSS JUNE 5, 2025 https://jweekly.com/2025/06/05/nonbinary-hebrew-transforms-the-language-for-everyone/ #metaglossia_mundus
"Amazon Lex extends custom vocabulary feature to additional languages
Posted on: Jun 4, 2025
Amazon Lex now extends custom vocabulary support to multiple languages, including Chinese, Japanese, Korean, Portuguese, Catalan, French, German, and Spanish locales. This enhancement enables you to improve speech recognition accuracy for domain-specific terminology, proper nouns, and rare words across a wider range of languages, creating more natural and accurate conversational experiences. With custom vocabulary, you can provide Amazon Lex with specific phrases that should be recognized during audio conversations, even when the spoken audio might be ambiguous. For example, you can ensure technical terms like "Cognito" or industry-specific vocabulary like "solvency" are correctly transcribed during bot interactions, providing consistent speech recognition capabilities that work both for intent recognition and improving slot value elicitation.
This feature is now available in all AWS Regions where Amazon Lex operates for the supported languages."
https://aws.amazon.com/about-aws/whats-new/2025/06/amazon-lex-custom-vocabulary-additional-languages/
#metaglossia_mundus
In the near future, the proportion of natural-language data on the Web could shrink to the point of being eclipsed by text generated by artificial intelligence.
Conversational agents such as ChatGPT sometimes make our daily lives easier by taking over tedious tasks. But these intelligent bots come at a cost. Their disastrous carbon and water footprint is now well known. Another deeply worrying aspect is less so: artificial intelligence pollutes written text and disrupts the linguistic ecosystem, at the risk of complicating the study of language.
A study published in 2023 shows that the use of artificial intelligence (AI) in scientific publications has increased significantly since the launch of ChatGPT (version 3.5). The phenomenon extends beyond academia and permeates a substantial share of digital content, including the collaborative encyclopedia Wikipedia and the US publishing platform Medium.
The problem lies first in the fact that these texts are sometimes inaccurate, since AI tends to invent answers when they are absent from its training data. It also lies in their impersonal, uniform style.
AI textual contamination threatens digital spaces where content production is massive and lightly regulated (social networks, online forums, e-commerce platforms, and so on). Customer reviews, blog posts, student assignments, and teachers' course materials are also prime terrain where AI-generated content can quietly slip in and end up published.
The trend is such that we are entitled to speak of textual pollution. Linguists have good reason to worry. In the near future, the proportion of natural-language data on the Web could shrink to the point of being eclipsed by AI-generated text. Such contamination will skew linguistic analyses and lead to biased representations of actual human language use. At best, it will add an extra layer of complexity to the composition of the linguistic samples that linguists will have to untangle.
What impact on language? This contamination is not immediately detectable to the untrained eye. With practice, however, one notices that ChatGPT's language is riddled with verbal tics that betray its algorithmic origin. It overuses emphatic adjectives such as "crucial", "essential", "important" and "fascinating", as well as vague expressions ("many…", "generally…"), and very often answers with bulleted or numbered lists. It is possible to influence the conversational agent's style, but the default behavior prevails in most uses.
A Forbes article published in December 2024 highlights the impact of generative AI on our vocabulary and the risks to linguistic diversity. Because it uses few local expressions and regional idioms, AI may foster the homogenization of language. If you ask an AI model to write a text in English, the vocabulary used will likely be closer to a standard global English and avoid expressions typical of the different English-speaking regions.
AI could also considerably simplify human vocabulary by favoring certain words over others, which would lead in particular to a gradual simplification of syntax and grammar. Count the number of occurrences of the adjectives "nuanced" and "complex" in the chatbot's outputs and compare that figure with your own usage to see for yourself.
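The counting exercise suggested above can be sketched directly. A minimal Python sketch that tallies how often such telltale adjectives appear in a text; the word list and the sample sentence are invented for illustration, not drawn from any published detector:

```python
import re
from collections import Counter

# Adjectives the article singles out as overused by chatbots
# (an illustrative, not exhaustive, list).
TELLTALE = {"crucial", "essential", "important", "fascinating",
            "nuanced", "complex"}

def telltale_counts(text: str) -> Counter:
    """Count occurrences of the telltale adjectives in a text."""
    words = re.findall(r"[a-zàâçéèêëîïôûùüÿ'-]+", text.lower())
    return Counter(w for w in words if w in TELLTALE)

sample = ("This nuanced question is crucial. A complex, nuanced landscape "
          "raises essential and fascinating issues.")
print(telltale_counts(sample))
```

Comparing such counts between a suspect text and a reference corpus of human writing gives a crude but honest first signal, which is precisely the kind of frequency evidence corpus linguists work from.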
Ce qui inquiète les linguistes La linguistique étudie le langage comme faculté qui sous-tend l’acquisition et l’usage des langues. En analysant les occurrences linguistiques dans les langues naturelles, les chercheurs tentent de comprendre le fonctionnement des langues, qu’il s’agisse de ce qui les distingue, de ce qui les unit ou de ce qui en fait des créations humaines. La linguistique de corpus se donne pour tâche de collecter d’importants corpus textuels pour modéliser l’émergence et l’évolution des phénomènes lexicaux et grammaticaux.
Les théories linguistiques s’appuient sur des productions de locuteurs natifs, c’est-à-dire de personnes qui ont acquis une langue depuis leur enfance et la maîtrisent intuitivement. Des échantillons de ces productions sont rassemblés dans des bases de données appelées corpus. L’IA menace aujourd’hui la constitution et l’exploitation de ces ressources indispensables.
Pour le français, des bases comme Frantext (qui rassemble plus de 5 000 textes littéraires) ou le French Treebank (qui contient plus de 21 500 phrases minutieusement analysées) offrent des contenus soigneusement vérifiés. Cependant, la situation est préoccupante pour les corpus collectant automatiquement des textes en ligne. Ces bases, comme frTenTen ou frWaC, qui aspirent continuellement le contenu du Web francophone, risquent d’être contaminées par des textes générés par l’IA. À terme, les écrits authentiquement humains pourraient devenir minoritaires.
Les corpus linguistiques sont traditionnellement constitués de productions spontanées où les locuteurs ignorent que leur langue sera analysée, condition sine qua non pour garantir l’authenticité des données. L’augmentation des textes générés par l’IA remet en question cette conception traditionnelle des corpus comme archives de l’usage authentique de la langue.
Alors que les frontières entre la langue produite par l’homme et celle générée par la machine deviennent de plus en plus floues, plusieurs questions se posent : quel statut donner aux textes générés par l’IA ? Comment les distinguer des productions humaines ? Quelles implications pour notre compréhension du langage et son évolution ? Comment endiguer la contamination potentielle des données destinées à l’étude linguistique ?
An average, disembodied language
One can sometimes have the illusion of conversing with a human, as in the film "Her" (2013), but it is an illusion. AI, fed by our instructions (the famous "prompts"), manipulates millions of data points to generate probable sequences of words, with no real human understanding. Today's AI lacks the richness of a human voice. Its style is recognizable because it is average. It is everybody's style, and therefore nobody's.
Trailer for Spike Jonze's film "Her" (2013). From expressions drawn from countless texts, AI computes an average language. The process begins with a vast corpus of textual data spanning a wide range of linguistic styles, subjects and contexts. As it trains, the AI refines its "understanding" of the language (by understanding, read knowledge of which words occur near which others), but it flattens out what makes each way of speaking unique. The AI predicts the most common words and so loses the originality of each voice.
Although ChatGPT can imitate accents and dialects (with a risk of caricature) and change style on demand, what is the point of studying an imitation with no reliable link to authentic human experience? What sense is there in generalizing from an artificial language that is itself the product of a dehumanized generalization?
Because linguistics is a human science and the grammatical phenomena we study are intrinsically human, our mission as linguists requires studying authentically human texts, connected to human experience and social contexts. Unlike the exact sciences, we value linguistic irregularities as much as regularities. Take the revealing example of the French expression "après que": grammar books prescribe the indicative after it, yet it is frequently used with the subjunctive in everyday speech. Such departures from the norm perfectly illustrate the social and human nature of language.
The threat of the ouroboros
The contamination of linguistic datasets with AI-generated content poses major methodological challenges. The most insidious danger in this scenario is the emergence of what might be called a "linguistic ouroboros": a self-consuming cycle in which large language models learn from texts they themselves have produced.
This self-reinforcing loop could lead to a progressive distortion of what we regard as natural language, since each generation of AI models learns the artifacts and biases of its predecessors and amplifies them.
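The self-reinforcing loop described above can be illustrated with a toy simulation. This is a deliberately simplified sketch, not a model of any real training pipeline: the five-word "language", the `sharpening` exponent standing in for a model's bias toward frequent forms, and the entropy measure are all illustrative assumptions.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a word-frequency distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def next_generation(dist, sharpening=1.5):
    # A model retrained on its own output over-produces already-frequent
    # forms; raising each probability to a power > 1 and renormalizing is
    # a crude stand-in for that bias.
    raised = {w: p ** sharpening for w, p in dist.items()}
    total = sum(raised.values())
    return {w: p / total for w, p in raised.items()}

# Toy "language": five expressions with unequal starting frequencies.
dist = {"common": 0.40, "usual": 0.30, "plain": 0.20, "rare": 0.07, "quirky": 0.03}
entropies = [entropy(dist)]
for _ in range(5):
    dist = next_generation(dist)
    entropies.append(entropy(dist))

# Lexical diversity shrinks with every self-training cycle.
print(round(entropies[0], 3), round(entropies[-1], 3))
```

Each cycle concentrates probability mass on the already-common expressions, so entropy falls and the rare, "quirky" forms vanish first: a miniature version of the drift the article warns about.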
The result could be a progressive drift of the models away from authentic human language, creating a kind of linguistic "uncanny valley" in which AI-generated text becomes at once more widespread and less representative of genuine human communication." https://theconversation.com/vocabulaire-et-diversite-linguistique-comment-lia-appauvrit-le-langage-252944 #metaglossia_mundus
"...BookTranslate.ai is the Gutenberg Revolution of translation." — Balint Taborski
EGER, HUNGARY, June 5, 2025 /EINPresswire.com/ -- BookTranslate.ai, a groundbreaking AI-powered literary translation platform, has outperformed legendary human translator Donald Keene in a blind test — and it’s now open to publishers, authors, and translators worldwide.
Unlike generic AI tools, BookTranslate.ai doesn’t just translate — it analyzes, refines, and critiques its own output across three advanced stages. The system includes a sophisticated AI Literary Analysis Engine, an Advanced Multi-Pass Refinement System, and a final-stage AI Review Engine (“The Finalizer”). This full-stack process delivers translations of exceptional fidelity and style — even surpassing acclaimed human translations in blind literary evaluations.
Founded by Hungarian indie publisher, translator, and software developer Balint Taborski (Praxeum Publishing), BookTranslate.ai emerged from the critical need for high-fidelity translations of complex literary and philosophical works. "Traditional translation is often prohibitively expensive and slow for small, niche, budget-constrained publishers like myself, while generic machine translation tools lacked the nuance for publishable quality," says Taborski. "I needed to cut costs without sacrificing quality. I needed a system that could not only translate with precision but also preserve authorial intent and literary integrity. What began as an internal tool has evolved into a platform that can empower authors and publishers worldwide."
BookTranslate.ai's unique three-stage process ensures unparalleled quality:
1. AI Literary Analysis Engine (Preprocessor): Before translation, this engine performs a deep, holistic analysis of the entire manuscript, understanding its genre, style, themes, and structure. It generates a custom "Translation Blueprint" – detailed instructions and an AI-suggested glossary – to guide the subsequent translation with unparalleled contextual awareness.
2. Advanced Multi-Pass Refinement System: Each paragraph, guided by the Blueprint, undergoes five distinct AI-driven passes of translation and proofreading. This iterative process refines grammar, fluency, tone, and stylistic consistency, aiming for output that is ~98% publication-ready.
3. AI Final Review Engine (The "Finalizer"): This final stage acts as an expert AI second opinion. The Finalizer meticulously compares the AI-translated text against the original source, identifying subtle errors, inconsistencies, or nuanced improvement opportunities that even a robust multi-pass system might overlook. Its suggestions, which users can review and selectively apply, further elevate the translation's polish towards perfection.
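The three stages above can be pictured as a simple orchestration loop. The following is a hypothetical sketch only, not BookTranslate.ai's actual (proprietary) implementation: the function names, the `blueprint` dictionary, and the stub passes are all illustrative assumptions.

```python
from typing import Callable, Dict, List

Pass = Callable[[str, Dict], str]            # one refinement pass over a paragraph
Finalizer = Callable[[str, str], List[str]]  # compares source vs. output, returns notes

def translate_book(paragraphs: List[str], blueprint: Dict,
                   passes: List[Pass], finalizer: Finalizer) -> List[Dict]:
    """Sketch of stages 2 and 3: run every paragraph through each
    refinement pass (guided by the stage-1 blueprint), then collect the
    finalizer's suggestions for human review."""
    results = []
    for src in paragraphs:
        text = src
        for refine in passes:  # e.g. draft, grammar, fluency, tone, consistency
            text = refine(text, blueprint)
        results.append({"source": src, "translation": text,
                        "suggestions": finalizer(src, text)})
    return results

# Stub passes standing in for calls to a language model.
draft = lambda t, bp: f"[{bp['target_language']}] {t}"
tidy = lambda t, bp: " ".join(t.split())
no_notes: Finalizer = lambda src, out: []

out = translate_book(["Bonjour  le monde."], {"target_language": "en"},
                     [draft, tidy], no_notes)
print(out[0]["translation"])  # [en] Bonjour le monde.
```

The design point is separation of concerns: the blueprint carries document-wide context into every per-paragraph pass, while the finalizer produces suggestions rather than silent edits, matching the press release's description of user-reviewable output.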
The platform's capabilities have been validated through multiple public demonstrations and blind tests. Notably, in an evaluation of Osamu Dazai's Ningen Shikkaku (No Longer Human), Google's Gemini 2.5 Pro model assessed BookTranslate.ai's translation as "Best," outperforming both DeepL and Donald Keene's acclaimed version, stating it "delivered the most fluent, faithful, and emotionally rich translation." Gemini also remarked, when reviewing BookTranslate.ai's unedited English translation of a 19th century classic of French political philosophy, Édouard Laboulaye’s The State and Its Limits, "Given the consistency and elegance, it's more straightforward to attribute it to a talented human translator."
Additional compelling case studies, including blind analyses of translations for works by H.P. Lovecraft and Eugen von Böhm-Bawerk, demonstrating the system's versatility and the AI Finalizer's impact, are available on the BookTranslate.ai website.
"BookTranslate.ai is the Gutenberg Revolution of translation," Taborski adds. "It's not just about AI generating text; it's about AI demonstrating a deep literary understanding, refining its own work with meticulous care, and even providing critical editorial feedback. An entire AI editorial board is collaborating on the manuscript under the hood. This allows us to make great literature and important ideas accessible across languages at a fraction of the cost, without sacrificing quality. Soon, we will look upon traditional human translation much as we now look upon the hand-copying of books."
BookTranslate.ai invites authors, publishers, and translators to experience the future of literary translation. Full book examples, a free trial for up to 3000 characters, and an on-request full-chapter translation offer are available at https://booktranslate.ai.
About BookTranslate.ai:
BookTranslate.ai is an advanced AI translation platform specializing in books and long-form texts. Its proprietary three-stage system—comprising pre-translation literary analysis, multi-pass AI refinement, and an AI final review engine—delivers publishable-quality translations that preserve authorial voice and nuance. Founded by indie publisher Balint Taborski, BookTranslate.ai aims to democratize access to global literature and ideas. Balint Taborski BookTranslate.ai +36 70 218 1428 hello@booktranslate.ai
https://www.pahomepage.com/business/press-releases/ein-presswire/819508395/booktranslate-ai-stuns-publishing-world-ai-outperforms-legendary-translator-in-blind-test/ #metaglossia_mundus
"TransPerfect Launches New App for Mobile Interpretation Supports Intuitive Access to a Phone or Video Interpreter in Seconds
NEW YORK, June 05, 2025 (GLOBE NEWSWIRE) -- TransPerfect, the world's largest provider of language and AI solutions for global business, today announced it has launched the TransPerfect Interpretation App, enabling users to connect to a live, expertly trained video or phone interpreter in seconds.
Available on the web, iOS, Android, and Microsoft devices, the app makes accessing over-the-phone interpretation and video remote interpretation simple and fast. It supports 200+ languages, including American Sign Language, enabling users across multiple industries to quickly and effectively respond to customer needs. It’s ideal for hospitals, medical and benefit providers, financial branches, legal, retail stores, first responders, and others in any situation where customer interactions happen outside of a contact center.
“This app marks a significant leap forward in making interpretation services more accessible and easier to use,” commented TransPerfect Connect Vice President Steven Cheeseman. “With on-demand access to professional interpreters, we’re empowering organizations to overcome language barriers in real time. It’s fast, intuitive, and purpose-built to support the critical work our clients perform every day.”
To use the app, simply select a language from the menu and tap to choose audio or video interpretation. If a video interpreter is not available, the call will automatically route to an audio interpreter.
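The automatic video-to-audio fallback described above amounts to a small routing rule. Here is a minimal sketch of such logic; the `route_call` function and the `pool` data structure are illustrative assumptions, not TransPerfect's actual system.

```python
def route_call(language: str, requested: str, pool: dict) -> str:
    """Pick a channel for a new interpretation call.

    `pool` maps (language, channel) to the number of free interpreters.
    A video request falls back to audio automatically when no video
    interpreter for that language is available.
    """
    if requested == "video" and pool.get((language, "video"), 0) > 0:
        return "video"
    if pool.get((language, "audio"), 0) > 0:
        return "audio"
    return "hold"  # no interpreter free yet

pool = {("Spanish", "video"): 0, ("Spanish", "audio"): 3}
print(route_call("Spanish", "video", pool))  # audio
```

In this sketch a video request for Spanish routes to audio because no video interpreter is free, mirroring the fallback behavior the announcement describes.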
Available to all existing TransPerfect customers, the app enables users to:
Capture necessary data (account numbers, claim numbers, medical record numbers, etc.)
Adjust the audio, mute lines, and turn video on and off
Add additional parties, regardless of whether they have the application installed
Capture user satisfaction at the end of every call (customizable by client)
TransPerfect President and Co-CEO Phil Shawe stated, “TransPerfect customers now have easy access to a live interpreter—wherever, whenever, and in any language.”
To learn more about the launch, visit https://go.transperfect.com/mobile_interpretation or email us at mobileinterp@transperfect.com..." June 05, 2025 10:00 ET | Source: TransPerfect https://www.globenewswire.com/news-release/2025/06/05/3094498/0/en/TransPerfect-Launches-New-App-for-Mobile-Interpretation.html #metaglossia_mundus
Google reveals details behind the development of AI Mode. Learn design insights and what marketers need to know.
Google built AI Mode to handle longer, conversational search queries.
Google claims AI search delivers "more qualified clicks."
Google has no data to share to verify claims of quality improvements.
SEJ STAFF Matt G. Southern
Google has shared new details about how it designed and built AI Mode.
In a blog post, the company reveals the user research, design challenges, and testing that shaped its advanced AI search experience.
These insights may help you understand how Google creates AI-powered search tools. The details show Google’s shift from traditional keyword searches to natural language conversations.
User Behavior Drove AI Mode Creation
Google built AI Mode in response to the ways people were using AI Overviews.
Google’s research showed a disconnect between what searchers wanted and what was available.
Claudia Smith, UX Research Director at Google, explains:
“People saw the value in AI Overviews, but they didn’t know when they’d appear. They wanted them to be more predictable.”
The research also found people started asking longer questions. Traditional search wasn’t built to handle these types of queries well.
This shift in search behavior led to a question that drove AI Mode’s creation, explains Product Management Director Soufi Esmaeilzadeh:
“How do you reimagine a Search gen AI experience? What would that look like?”
AI “Power Users” Guided Development Process
Google’s UX research team identified the most important use cases as: exploratory advice, how-to guides, and local shopping assistance.
This insight helped the team understand what people wanted from AI-powered search.
Esmaeilzadeh explained the difference:
“Instead of relying on keywords, you can now pose complex questions in plain language, mirroring how you’d naturally express yourself.”
According to Esmaeilzadeh, early feedback suggests that the team’s approach was successful:
“They appreciate us not just finding information, but actively helping them organize and understand it in a highly consumable way, with help from our most intelligent AI models.”
Industry Concerns Around AI Mode
While Google presents an optimistic development story, industry experts are raising valid concerns.
John Shehata, founder of NewzDash, reports that sites are already “losing anywhere from 25 to 32% of all their traffic because of the new AI Overviews.” For news publishers, health queries show 26% AI Overview penetration.
Mordy Oberstein, founder of Unify Brand Marketing, analyzed Google’s I/O demonstration and found the examples weren’t as complex as presented. He shows how Google combined readily available information rather than showcasing advanced AI reasoning.
Google’s claims about improved user engagement have not been verified. During a recent press session, Google executives claimed AI search delivers “more qualified clicks” but admitted they have “no data to share” on these quality improvements.
Further, Google’s reporting systems don’t differentiate between clicks from traditional search, AI overviews, and AI mode. This makes independent verification impossible.
Shehata believes that the fundamental relationship between search and publishers is changing:
“The original model was Google: ‘Hey, we will show one or two lines from your article, and then we will give you back the traffic. You can monetize it over there.’ This agreement is broken now.”
What This Means
For SEO professionals and content marketers, Google’s insights reveal important changes ahead.
The shift from keyword targeting to conversational queries means content strategies need to focus on directly answering user questions rather than optimizing for specific terms.
The focus on exploratory advice, how-to content, and local help shows these content types may become more important in AI Mode results.
Shehata recommends that publishers focus on content with “deep analysis of a situation or an event” rather than commodity news that’s “available on hundreds and thousands of sites.”
He also notes a shift in success metrics: “Visibility, not traffic, is the new metric” because “in the new world, we will get less traffic.”
Looking Ahead
Esmaeilzadeh said significant work continues:
“We’re proud of the progress we’ve made, but we know there’s still a lot of work to do, and this user-centric approach will help us get there.”
Google confirmed that more AI Mode features shown at I/O 2025 will roll out in the coming weeks and months. This suggests the interface will keep evolving based on user feedback and usage patterns.
Matt G. Southern Senior News Writer at Search Engine Journal https://www.searchenginejournal.com/google-shares-details-behind-ai-mode-development/548474/
#metaglossia_mundus
Ngũgĩ’s most important contributions were on language. His novels, his essays, and his unyielding belief in the dignity of African languages reshaped world literature. To call him merely a “great African writer” would be to shrink his genius – he was one of the most vital thinkers of our age, a voice who spoke from Kenya but to humanity.
"Ngũgĩ wa Thiong’o: The Writer Who Made Language a Battlefield
Ngũgĩ wa Thiong’o, the literary giant from Kenya and unflinching advocate for African languages, passed away on 28 May 2025, leaving behind a legacy that bridged continents and generations. For those of us who knew him – whether through his words or in solidarity campaigns – his life was a testament to the power of art as resistance.
I am deeply saddened to learn of Ngũgĩ’s passing. Ngũgĩ did not just write books – he declared war. War on colonialism and neocolonialism, on linguistic shame, on the very idea that African thought must be filtered through colonial grammar to be “literature”. Always assuming that we would soon have our next conversation, I felt bereft when news of his death broke.
When I helped organise London protests for his release in 1978, we shouted his banned titles outside Kenya’s High Commission. We prepared banners saying “Kenyatta sheds Petals of Blood”, Waingereza walifunga Kimathi, Mzee anafunga Ngũgĩ! (The British locked up Kimathi, Mzee [Kenyatta] locks up Ngũgĩ!). Years later, when I visited him at his home near Kamĩrĩĩthu, the site of the people’s theatre at which Ngaahika Ndeenda was performed, and which the regime destroyed, he told me: “You thought you were demanding my freedom? You were demanding yours.” He showed me the manuscript he’d written on scraps of paper while in prison. “They gave me a pen to confess,” he laughed. “I used it to imagine”. “They thought they buried me in prison,” he laughed, “but they planted a seed”. His warmth and clarity in person mirrored the fearlessness of his writing.
I always resented the fact that so much of his writing was relegated to the African Writers Series. True, the series brought attention to the wealth of writers from the African continent. But many of them were, like Ngũgĩ, literary giants, not merely “African writers”. Their contribution to art, politics, and thinking were and are of universal relevance. The failure to recognise this universalism is perhaps one of the reasons that, despite being nominated several times, Ngũgĩ was denied the Nobel prize for literature.
Ngũgĩ’s most important contributions were on language. His novels, his essays, and his unyielding belief in the dignity of African languages reshaped world literature. To call him merely a “great African writer” would be to shrink his genius – he was one of the most vital thinkers of our age, a voice who spoke from Kenya but to humanity. Ngũgĩ’s decision to write in Gĩkũyũ was not symbolic – it was an act of intellectual insurrection. He tore down the lie that creativity required the approval of imperial languages. “To think in my mother tongue,” he once told me, “is to dream in freedom”. His Gĩkũyũ works (Mũrogi wa Kagogo, Matigari) were not “translations from English” – they were the originals, the canon itself. Beyond “African Writer”, Western obituaries will box him into “African literature”. But Ngũgĩ belonged to the same pantheon as Dostoevsky, Marquez, and Orwell – writers who exposed the machinery of power. Wizard of the Crow is as universal as 1984; Decolonising the Mind, perhaps as urgent as Fanon.
In his Foreword to the Daraja Press edition of Mau Mau From Within: The Story of the Kenya Land and Freedom Army, Ngũgĩ wrote: “We don’t have to use the vocabulary of the colonial to describe our struggles, especially today when there is a worldwide movement to overturn monuments to slavery and colonialism.”
For those of us who stood beside Ngũgĩ – in protest, in lecture halls, or in his Limuru home – he was also a generous comrade, always leaning forward to listen, even as the world often leaned away from him. I admired how attentive Ngũgĩ always was to aspiring younger writers. At conferences where we both spoke, I watched him corner young African writers, urging them to “write dangerously”. He’d ask: “Who owns your language? Who profits from your silence?” His belief in them was visceral. “Talent is common,” he told them, “What’s rare is the courage to use it.”
In his piercing 2020 interview with Daraja Press, Ngũgĩ laid bare his life’s mission: “Colonialism stole our land, but language was the theft of our dreams.” He recounted how, as a child, British teachers beat Gĩkũyũ out of his classmates – “not just with canes, but with the lie that our words were small, ugly things.” His entire oeuvre, from his early writings as “James” Ngugi to Petals of Blood, Decolonising the Mind and Wizard of the Crow, was a counterattack: “I write in Gĩkũyũ not to be provincial, but to prove that the universal lives in the particular.”
Kenya today claims Ngũgĩ as a national treasure – yet how recklessly it squandered him in his prime. No streets are named for the man whose books were banned in schools; politicians quote Decolonising the Mind while gutting funding for indigenous-language education. When I last visited Nairobi, a young poet told me, “They praise Ngũgĩ now because he’s no longer dangerous.” But Ngũgĩ’s legacy is danger: the kind that outlives regimes.
Kiumbi kiaguo ikio ki, kiaguo ikio. “A thing that forgets its origins will soon die.” Ngũgĩ, they jailed you, exiled you, misnamed you – yet here you remain.
Rest well, comrade and friend.
Sincere condolences to Ngugi’s family."
Firoze Manji
June 5, 2025
https://www.theelephant.info/opinion/2025/06/05/ngugi-wa-thiongo-the-writer-who-made-language-a-battlefield/
#metaglossia_mundus
By breaking with English to write in Kikuyu, he embodied a radical vision of decolonization.
"The Living Legacy of Ngũgĩ wa Thiong’o: Decolonizing the Mind and Languages
Published: June 5, 2025 3.59pm SAST
Christophe Premat, Stockholm University
Ngũgĩ wa Thiong’o died on 28 May 2025 at the age of 87. His name will go down in history not only as that of a great Kenyan novelist but also as that of a radical thinker of decolonization. Like Valentin-Yves Mudimbe, who died a few weeks earlier, he interrogated the very conditions of possibility of African knowledge in a postcolonial context. But where Mudimbe scrutinized the "colonial libraries" to expose their presuppositions, Ngũgĩ sought to transform the very practice of writing: by ceasing to write in English in favor of his mother tongue, Kikuyu, he made a powerful political gesture, an act of rupture.
As a specialist in postcolonial theory, I analyze how these critical trajectories have sought to rethink the way knowledge is produced and transmitted in Africa.
For Ngũgĩ, colonial domination does not stop at borders, institutions or laws. It takes root in mental structures, in the way a people represents itself, its values, its past, its future.
Language and power: a geopolitics of the imagination
In Decolonising the Mind (1986), Ngũgĩ wa Thiong’o explains why he decided to abandon English, the language in which he had nonetheless achieved international success. He makes a claim that has become central to debates on colonial legacies: "The truly powerful are those who know their mother tongue and learn, at the same time, to speak the language of power." For as long as Africans are forced to think, dream and write in a language imposed on them, liberation will remain incomplete.
Through language, the colonizers conquered far more than land: they imposed a certain vision of the world. By controlling words, they controlled symbols, narratives, cultural hierarchies. For Ngũgĩ, colonial bilingualism is not an enrichment but a fracture: it separates the language of everyday life (the vernacular) from that of school, thought, law and literature. He sees in it a structural violence, a "dissociation between mind and body", which makes a full appropriation of African experience impossible.
A stubborn alienation
Ngũgĩ's analysis highlights the dead ends of postcolonial language policies, which have often continued to make European languages the languages of state, knowledge and prestige while relegating African languages to the private sphere. In this sense one can speak of diglossia, that is, a situation in which two languages coexist with distinct social functions. This institutionalized diglossia produces a hierarchy of languages that reflects, at a deeper level, a hierarchy of cultures.
Far from calling for a nostalgic return to the past or a closing-off of identity, Ngũgĩ wanted to free the potential of African languages: to allow them to express the contemporary world, to invent a modernity that is not a mere copy of European models. He took up the historic task that writers in other "minor" languages have set themselves: to do for Kikuyu what Shakespeare did for English, or Tolstoy for Russian.
The point is not only to write in African languages, but to make these languages vehicles of philosophy, science and institutions: in short, of civilization. The choice to write in Kikuyu was not without consequences. In 1977, Ngũgĩ co-wrote with Ngũgĩ wa Mirii a play, Ngaahika Ndeenda ("I Will Marry When I Want"), performed in Kikuyu in a community theatre in Kamiriithu, near Nairobi.
The play, performed by non-professional actors, fiercely denounced social inequalities and the survivals of colonialism in Kenya. Its popular success worried the authorities: a few weeks after the premiere, Ngũgĩ was arrested without trial and jailed for nearly a year. On his release, barred from teaching and kept under close surveillance, he chose exile.
This de facto banishment would last more than twenty years. It was in this period of rupture that he began writing, in Kikuyu, his novel Caitaani Mutharaba-Ini (Devil on the Cross), drafted in prison on toilet paper.
A body of thought that remains timely
Ngũgĩ's work illuminates how contemporary African societies remain caught in logics of symbolic domination despite political independence. Globalization has replaced the most brutal forms of imperialism, but it often perpetuates those logics of symbolic domination. In the cultural field, the former colonial powers continue to exert considerable influence through diplomatic, educational and publishing networks.
La Francophonie, for example, presents itself as a space of linguistic cooperation, but it often perpetuates asymmetries in the validation of cultural production. Presenting colonial languages as languages of communication that transcend the enclosures of vernacular languages is an illusion that Ngũgĩ denounced fiercely.
Thinkers such as Jean-Godefroy Bidima and Seloua Luste Boulbina have shown how postcolonial language policies tend to make some languages official at the expense of others, creating a new kind of officialese often cut off from popular realities. The Algerian-French philosopher speaks in this regard of a plurilingual public space to be instituted: a space not reduced to opposing colonial and vernacular languages, but one that reinvents usages, hybridizations, the ruses of language.
This reflection echoes Ngũgĩ's position: writing in one's own language is not enough; one must also produce a language equal to social and political struggles.
For an active memory of Ngũgĩ wa Thiong’o
At a time when debates on decolonization are multiplying, often emptied of their substance or co-opted by institutional logics, rereading Ngũgĩ wa Thiong’o brings us back to the essential: emancipation requires a change in how one sees oneself, and that change begins in language. True liberation is not only political or economic; it is also cultural, cognitive, symbolic.
By refusing to think through imported categories, and by assuming the risk of a radical gesture, Ngũgĩ wa Thiong’o opened the way to an authentically African way of thinking, rooted and universal. His work reminds us that it is not enough to speak in Africa's name; one must also speak from Africa, with its languages, its imaginaries, its struggles. At the moment of his passing, his message remains more alive than ever."
https://theconversation.com/lheritage-vivant-de-ngugi-wa-thiongo-decoloniser-lesprit-et-les-langues-258209
#metaglossia_mundus
"New York, the World’s Most Linguistically Diverse Metropolis
In Language City, Ross Perlin, a linguist, takes readers on a tour of the city’s communities with endangered tongues.
June 05, 2025
As co-director of the Endangered Language Alliance, Ross Perlin has managed a variety of projects focused on language documentation, language policy, and public programming around urban linguistic diversity. Photo by Cecil Howell.
Half of all 7,000-plus human languages may disappear over the next century, and—because many have never been recorded—when they’re gone, it will be forever. Ross Perlin, a lecturer in Columbia’s Department of Slavic Languages and co-director of the Endangered Language Alliance, is racing against time to map little-known languages across New York. In his new book, Language City, Perlin follows six speakers of endangered languages deep into their communities, from the streets of Brooklyn and Queens to villages on the other side of the world, to learn how they are maintaining and reviving their languages against overwhelming odds. He explores the languages themselves, from rare sounds to sentence-long words to bits of grammar that encode entirely different worldviews.
Seke is spoken by 700 people from five ancestral villages in Nepal, and a hundred others living in a single Brooklyn apartment building. N’ko is a radical new West African writing system now going global in Harlem and the Bronx. After centuries of colonization and displacement, Lenape, the city’s original indigenous language and the source of the name Manhattan, “the place where we get bows,” has just one native speaker, along with a small band of revivalists. Also profiled in the book are speakers of the indigenous Mexican language Nahuatl, the Central Asian minority language Wakhi, and Yiddish, braided alongside Perlin’s own complicated family legacy. On the 100th anniversary of a notorious anti-immigration law that closed America’s doors for decades, and the 400th anniversary of New York’s colonial founding, Perlin raises the alarm about growing political threats and the onslaught of languages like English and Spanish.
Perlin talks about the book with Columbia News, along with why New York is so linguistically diverse, and his current and upcoming projects.
How did this book come about?
I’m a linguist, writer, and translator, and have been focused on language endangerment and documentation for the last two decades. Initially, I worked in southwest China on a dictionary, a descriptive grammar, and corpus of texts for Trung, a language spoken in a single remote valley on the Burmese border. While finishing up my dissertation, I came back to New York and joined the Endangered Language Alliance (ELA), which had recently been founded to focus on urban linguistic diversity.
ELA’s key insight was a paradox: Even as language loss accelerates everywhere—with as many as half of the world’s 7,000 languages now considered endangered—cities are more linguistically diverse than ever before, thanks to migration, urbanization, and diaspora. This means that speakers of endangered, indigenous, and primarily oral languages are now often right next door, and that fieldwork can happen in a different key, with linguists and communities making common cause as neighbors, for the long term.
For the last 12 years, I’ve been ELA’s co-director, together with Daniel Kaufman. Leading a small, independent nonprofit has been its own adventure. We do everything from in-depth research on little-studied languages (projects on Jewish, Himalayan, and indigenous Latin American languages, for example, with much of the research happening right here in the five boroughs) to public events, classes, collaborations with city agencies, children's books, and the first-ever language map of New York City. At some point, I knew that all my notes on everything I was learning and seeing around the city and beyond had to become a book. The urgency only grew as a series of unprecedented crises started hitting the immigrant and diaspora communities we work with. The crises are still unfolding.
Can you share some details about the six people you portray in the book, and the endangered languages they speak?
Language City is both the story of New York’s languages—the past, present, and future of the world's most linguistically diverse city—and the story of six specific people doing extraordinary things to keep their embattled mother tongues alive. They come from all over the world, but converge in New York. They represent a variety of strategies for language maintenance and revitalization in the face of tremendous odds. All of them are people I’ve known and worked with for years. Between them, in all their multilingualism, they actually speak about 30 languages, but they are also regular people you might run into on the subway.
Rasmina is one of the youngest speakers of Seke, a language from five villages in Nepal, and now a sixth, vertical village in the middle of Brooklyn. Husniya, a speaker of Wakhi from Tajikistan, has gone through every stage of her life and education in a different language, and can move easily along New York City’s new Silk Road. Originally from Moldova, Boris is a Yiddish novelist, poet, editor, and one-man linguistic infrastructure.
Ibrahima, a language activist from Guinea, champions N’ko, a relatively new writing system created to challenge the dominance of colonial languages in West Africa. Irwin is a Queens chef who writes poetry in, and cooks through, his native Nahuatl, the language of the Aztecs. And Karen is a keeper of Lenape, the original language of the land on which New York was settled.
How did New York end up as such a linguistically diverse city?
Four hundred years ago, this Lenape-speaking archipelago became nominally Dutch under the West India Company. In fact, the territory evolved into an unusual commercial entrepôt at the fulcrum of three continents, where the languages of Native Americans, enslaved Africans, and European refugees and traders were all in the mix. A reported 18 languages were spoken by the first 400-500 inhabitants. This set the template, but the defining immigration waves of the 19th and 20th centuries were of another order, as New York became a global center for business, politics, and culture, as well as the pre-eminent gateway to the U.S. and a bridge between hemispheres. From the half-remembered myth of Ellis Island to the very present reality of 200,000 asylum seekers arriving in the last few years, I argue—in what I think is the first linguistic history of any city—that applying the lens of language helps us understand the city (and all cities) in profoundly new ways. From Frisian and Flemish 400 years ago, to Fujianese and Fulani today, deep linguistic diversity, although always overlooked, has been fundamental.
What did you teach in the spring semester?
The class I’ve taught every year for the last six years—Endangered Languages in the Global City. I designed it around our research at ELA, and it’s completely unique to Columbia. Hundreds of students sign up, testifying to the massive interest Columbia students have in learning about linguistic diversity, though we can admit only a fraction of them. Many of the endangered language speakers and activists featured in Language City have visited the class, and we also take students out to some of the city’s most linguistically diverse neighborhoods, where many students have done strikingly original fieldwork.
The spring semester was something of a departure: I stepped in to teach Language and Society, as well as a course I designed last year called Languages of Asia, which focuses on the continent’s lesser-known language families and linguistic areas. I’ve also been supervising senior theses together with Meredith Landman, who directs the undergraduate program in linguistics. Across the first half of the 20th century, Columbia and Barnard became foundational sites for the study of linguistic diversity and for documenting languages, thanks to Franz Boas, who, in 1902, became the head of Columbia’s anthropology department—the first in the country—and his students. There is hope today for Columbia’s formative role to be revived: The University's local and global position makes this as urgent and achievable as ever. Additionally, the linguistics major was recently (at last) restored, and an ever-growing number of students are doing remarkable work on languages from around the world.
What are you working on now?
ELA keeps on ELA’ing, with a huge range of projects. On any given day, we might be recording and working with speakers of languages originally from Guatemala, Nepal, or Iran. With 700+ languages and counting, our language map (languagemap.nyc) is continually being updated, and we have people here making connections between language and health, education, literacy, technology, and translation. This fall, I’ll be in Berlin as a fellow of the American Academy, researching, via an obscure film (in 2,000 languages and counting), how missionary linguists and Bible translators shape global linguistic diversity.
Any summer plans related to language preservation at Columbia or elsewhere?
Ongoing work at ELA: The sky, if not for the budget, would be the limit. There have been recent sessions on a little-known language from Afghanistan, with a woman who is probably the only speaker in New York. Someone whose family still remembers a highly endangered Jewish language variant from Iran recently got in touch. And we met a speaker of a Mongolic language of western China at a restaurant. I also hope soon to resume some scouting of urban linguistic diversity in the Caucasus, past and present, with support from the Harriman Institute."
https://news.columbia.edu/news/new-york-worlds-most-linguistically-diverse-metropolis
#metaglossia_mundus
"Newswise — Creativity often emerges from the interplay of disparate ideas—a phenomenon known as combinational creativity. Traditionally, tools like brainstorming, mind mapping, and analogical thinking have guided this process. Generative Artificial Intelligence (AI) introduces new avenues: large language models (LLMs) offer abstract conceptual blending, while text-to-image (T2I) and text-to-3D (T2-3D) models turn text prompts into vivid visuals or spatial forms. Yet despite their growing use, little research has clarified how these tools function across different stages of creativity. Without a clear framework, designers are left guessing which AI tool fits best. Given this uncertainty, in-depth studies are needed to evaluate how various AI dimensions contribute to the creative process.
A research team from Imperial College London, the University of Exeter, and Zhejiang University has tackled this gap. Their new study (DOI: 10.1016/j.daai.2025.100006), published in May 2025 in Design and Artificial Intelligence, investigates how generative AI models with different dimensional outputs support combinational creativity. Through two empirical studies involving expert and student designers, the team compared the performance of LLMs, T2I, and T2-3D models across ideation, visualization, and prototyping tasks. The results provide a practical framework for optimizing human-AI collaboration in real-world creative settings.
To map AI's creative potential, the researchers first asked expert designers to apply each AI type to six combinational tasks—including splicing, fusion, and deformation. LLMs performed best in language-based combinations such as interpolation and replacement but struggled with spatial tasks. In contrast, T2I and T2-3D excelled at visual manipulations, with 3D models especially adept at physical deformation. In a second study, 24 design students used one AI type to complete a chair design challenge. Those using LLMs generated more conceptual ideas during early, divergent phases but lacked visual clarity. T2I models helped externalize these ideas into sketches, while T2-3D tools offered robust support for building and evaluating physical prototypes. The results suggest that each AI type offers unique strengths, and the key lies in aligning the right tool with the right phase of the creative process.
“Understanding how different generative AI models influence creativity allows us to be more intentional in their application,” said Prof. Peter Childs, co-author and design engineering expert at Imperial College London. “Our findings suggest that large language models are better suited to stimulate early-stage ideation, while text-to-image and text-to-3D tools are ideal for visualizing and validating ideas. This study helps developers and designers align AI capabilities with the creative process rather than using them as one-size-fits-all solutions.”
The study's insights are poised to reshape creative workflows across industries. Designers can now match AI tools to specific phases—LLMs for generating diverse concepts, T2I for rapidly visualizing designs, and T2-3D for translating ideas into functional prototypes. For educators and AI developers, the findings provide a blueprint for building more effective, phase-specific design tools. By focusing on each model’s unique problem-solving capabilities, this research elevates the conversation around human–AI collaboration and paves the way for smarter, more adaptive creative ecosystems.
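The phase-to-tool alignment the study reports can be summarized as a simple lookup. This is a hypothetical sketch for illustration only: the function name, dictionary, and phase labels are my own shorthand for the article's framing, not code from the paper.

```python
# Sketch of the study's reported framework: each creative phase is routed
# to the generative model type the authors found best suited to it.
PHASE_TO_MODEL = {
    "ideation": "LLM",        # divergent, language-based concept blending
    "visualization": "T2I",   # externalizing ideas as images and sketches
    "prototyping": "T2-3D",   # building and evaluating physical forms
}

def recommend_model(phase: str) -> str:
    """Return the model type the study associates with a creative phase."""
    try:
        return PHASE_TO_MODEL[phase]
    except KeyError:
        raise ValueError(
            f"Unknown phase: {phase!r}; expected one of {sorted(PHASE_TO_MODEL)}"
        )
```

For example, `recommend_model("ideation")` yields `"LLM"`, mirroring the article's claim that language models best support early, divergent concept generation.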
###
References
DOI: 10.1016/j.daai.2025.100006
Original Source URL: https://doi.org/10.1016/j.daai.2025.100006
Funding information: The first and third authors would like to acknowledge the China Scholarship Council (CSC)."
Released: 5-Jun-2025 7:35 AM EDT
Source Newsroom: Chinese Academy of Sciences
https://www.newswise.com/articles/designing-with-dimensions-rethinking-creativity-through-generative-ai
#metaglossia_mundus