Scooped by Charles Tiayon, June 29, 9:54 PM
"La créolisation de la langue française Le mot "créole" provient du mot criollo, du portugais crioulo. Ce terme désigne ce qui est né dans la colonie par opposition à ce qui vient d'Europe. Jean-Luc Mélenchon l'utilise à mauvais escient Notre langue est partagée par bien des pays, et vu la natalité de certains le français va devenir après le chinois, une des langues la plus parlée dans le Monde. En 2022 selon l’Observatoire de la langue française, le français serait toujours la cinquième langue la plus parlée après l’anglais, le chinois, l’hindi et l’espagnol, avec 321 millions de locuteurs, même si le rythme de croissance se ralentit… 29 pays ont en effet le français comme langue officielle, ce n’est pas rien
Emmanuel Macron énonçait même le 28 novembre 2018 lors de son discours au Burkina Faso que « le français sera la première langue de l’Afrique et peut-être du monde si nous savons y faire dans les prochaines décennies ». Poudre de Perlin Pinpin... En vérité (comme il le dit au lieu d'user de dans les faits) il y a plus de locuteurs ailleurs qu'en France même. Lors d’un colloque sur la francophonie, organisé à l’Assemblée Nationale, le 18 juin 2025, Jean-Luc Mélenchon a notamment évoqué l’opportunité qu’il y aurait à donner un autre nom à la langue française, langue créole selon lui : « Si quelqu'un pouvait trouver un autre nom pour qualifier notre langue, il serait le bienvenu » S’il s’en était tenu à cette formule je n’aurais pas applaudi mais c'est audible, même si je préfère reconnaître toujours l’histoire de la langue et de conserver son nom d’origine… Même si la terminologie serait fausse à un moment donné (ce qui reste à démontrer) du fait du nombre de citoyens réellement français, garder toujours l’appellation contrôlée me paraît saine. J’aime bien raconter que Jean-Marie Chauvet, quand il a «découvert» la grotte, a estimé qu’une représentation sur un pendant rocheux, mi homme, mi animal, selon lui, devait être baptisé « panneau du sorcier ». La plupart des scientifiques estiment cette interprétation erronée. Mais comme dit Jean Clottes, comme le nom est donné on n'en change pas. Jean-Luc Mélenchon aurait dit ensuite « Si nous voulons que le français soit une langue commune, il faut qu'elle soit une langue créole » Faut-il se demander au nom de quelle compétence prononce-t-il une chose pareille? Peut-être cherche-t-il simplement à faire dire des conneries à ses adversaires? Ces propos ont vivement fait réagir la droite, notamment Gérald Darmanin qui a insisté sur le fait que « la langue française appartient aux Français » et que c'est « notre patrimoine le plus précieux». Nous serons tous d’accord pour dire que le Ministre de la Justice n’est pas non plus un bon juge sur ces questions et comme ce n’est pas mon propos, je n’expliquerais pas ici pourquoi les deux assertions sont nulles et non avenues. Parlons du discours de LFI, puisque l’inénarrable Sophia Chikirou a défendu son Mentor à l’Assemblée Nationale. Le fait que des français (locuteurs minoritaires et anciens) veulent s’emparer de ce sujet prouvent que LFI comme la droite pensent à tort qu’ils sont maîtres et possesseurs de cette langue. Et en cela leur discours est aussi vieux, rance et pourri (je n'ai pas peur des mots) que celui des gens du faux-centre, de la droite ou des extrêmes droite lorgnant vers le fascisme. Ceux qui veulent garder la langue pour l'enfermer sous la très réactionnaire Académie Française... (sans beaucoup de linguistes ou de grammairiens compétents) se réfèrent à un moment de notre histoire où celle-ci a été « fixée » sur le modèle de la manière de parler à la cour de Louis XIV. La langue française et ses absurdités ubuesques - notamment en matière d’orthographe - a d’abord été une arme d’oppression, et de guerre contre le peuple (qui parlait le patois). La République n’a pas été clémente à ce sujet. Et quand Mélenchon veut, de son point de vue nationale, proposer que le nom de la langue soit changé, il ferait mieux de se taire à nouveau, et laisser les locuteurs, les autres, causer (lui on l’a assez entendu) Aya Nakamura pendant la cérémonie des J.O. n'a pas chanté dans une langue créole ou créolisée... ELLE a chanté SA langue, qui devient la NÔTRE grâce au génie de son talent... et nos reprises. 
J’en pleure encore de sa démonstration magistrale. Les artistes réparent le Monde et ce jour-là avec la complicité de la Garde Républicaine, elle a, en une seule séquence, réconcilié la France et son histoire, celle d'aujourd'hui et d'hier, malgré les blessures non refermées de la colonisation... Vouloir parler à la place des autres, nommer les choses à la place de ceux qui pratiquent le français c'est jouer aux académiciens réactionnaires et conservateur. Mélenchon n'est pas roi de France. Ni Macron (qui en a pourtant le fantasme) Ce n'est pas en collant une nouvelle étiquette à un produit à réformer qu’on change quoi que ce soit. Au pire c’est une manœuvre pour tromper l’usager, le consommateur (on voit cela tous les jours en Marketing. Ce n'est pas le nom Maison Perrier qui change l'image de marque de Perrier, c'est la loi qui oblige Nestlé à changer la marque puisque cela fait belle lurette que son eau n'est pas naturelle. Il n'y a que les fachos pour croire à la pureté de la langue. Où est le projet politique nécessaire qui permettrait de faire du français une langue universelle dont les règles changeraient avec l'assentiment de tous? L'orthographe folklorique, les règles de grammaire répressive pourraient changer... Mais qui le dira? Les linguistes francophones de la Planète, certainement. Cette polémique est absurde, elle a été lancée par Jean-Luc Mélenchon pour de toutes autres raisons que ce qu'elle annonce. Il cherche à tort le clivage. Le créole est une langue de résistance liée à l'histoire de la colonisation dans des lieux précis et pas au Québec, par exemple. Le respect qu'on doit aux locuteurs de tous les pays, ce serait de les laisser gérer cela eux-mêmes. L'anglais a énormément emprunté au français - beaucoup plus que dans le sens inverse -! Comment doivent-ils nommer leur langues, triturées à Singapour, devenues quasiment la langue officielle aux Pays-bas, ou dans le quartier de l'Europe à Bruxelles où il faut réclamer sa bière en anglais? Le scandale c'est que le français n’a pas la première place au Parlement Européen alors que les Anglais n'y sont plus. Alors que les traités l’obligent. Mais Mélenchon ne défendra pas cette position, cela ne fait pas partie de son agenda. Le respect qu'on doit au créole implique de laisser ceux qui ont transformé notre langue de revendiquer eux-mêmes et de ne pas se substituer à eux. Je le répète, ce n'est certainement pas à un Français de France de faire cette proposition et même d’en parler. Il y a d’autres sujets prioritaires et d’autres sujets grave à préciser dans ce colloque notamment la baisse des crédits et l’affaiblissement des Alliances Françaises et donc ne notre « soft power » (pour causer le globish dont la racine n’est pas « nôtre » et pas neutre) Et pourquoi Mélenchon ne hurle-t-il pas plutôt sur un sujet bien plus important que tous les autres, pourquoi ne parle-t-il pas de "la sixième extinction" dont va s'emparer De Villepin... TOUT ce bruit pour rien est méprisable." Pierre Oscar Lévy (pol) réalisateur de documentaires, etc... https://blogs.mediapart.fr/pol/blog/290625/la-creolisation-de-la-langue-francaise #metaglossia_mundus
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with 235 crore rupees (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister institutions in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion-parameter model developed by the Kenyan foundation Jacaranda Health to give new and expectant mothers AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
Europe wants artificial intelligence to understand all its languages. Can it overcome English dominance to make AI truly multilingual?
"The race to make AI as multilingual as Europe Can Europe stop AI from becoming English by default?
June 30, 2025 - 6:00 am
The European Union has 24 official languages and dozens more unofficial ones spoken across the continent. If you add in the European countries outside the union, then that brings at least a dozen more into the mix. Add dialects, endangered languages, and languages brought by migrants to Europe, and you end up with hundreds of languages.
One thing many of us in technology could agree on is that the US dominates — and that extends to online languages. There are many reasons for this, mostly because American institutions, standards bodies, and companies defined how computers, their operating systems, and the software they run work in their nascent days. This is changing, but for the short term at least, it remains the norm. It has also led to the majority of the web being in English. An astounding 50% of websites are in English, despite it being the native tongue of only about 6% of the world’s population, with Spanish, German, and Japanese next, but a long way behind, each at only between 5% and 6% of the web.
As we delve deeper into the new wave of AI-powered applications and services, many are driven by data in large language models (LLMs). As much of the data in these LLMs is scraped (controversially in many cases) from the web, LLMs predominantly understand and respond in English. As we find ourselves at the start of or in the midst of a shift in technological paradigm caused by the rapid growth of AI tools, this is a problem, and we’re bringing that problem into a new age.
Europe already boasts several high-profile AI companies and projects, such as Mistral and Hugging Face. Google DeepMind also originated as a European company. The continent has research projects that develop language models to enhance how AI tools comprehend less commonly spoken languages.
This article explores some of these initiatives, questions their effectiveness, and asks whether their efforts are worthwhile or if many users default to using English versions of tools. As Europe seeks to build its independence in AI and ML, does the continent have the companies and skills necessary to achieve its goals?
Terminology and technology primer
To make sense of what follows, you don’t need to understand how models are created, trained, or function. But it’s helpful to understand a couple of basics about models and their human language support.
Unless model documentation explicitly mentions it is multilingual or cross-lingual, prompting it or requesting a response in an unsupported language may cause it to translate back and forth or respond in a language it does understand. Both strategies can produce unreliable and inconsistent results — especially in low-resource languages.
While high-resource languages, such as English, benefit from abundant training data, low-resource languages, such as Gaelic or Galician, have far less, which often leads to inferior performance.
The harder concept to explain regarding models is “open,” which is unusual, as software in general has had a fairly clear definition of “open source” for a while. I don’t want to delve too deeply into this topic as the exact definition is still in flux and controversial. The summary is that even when a model might call itself “open” and is referenced as “open,” the meaning of “open” isn’t always the same.
Here are two other useful terms to know:
Training teaches a model to make predictions or decisions based on input data.
Parameters are variables learned during model training that define how the model maps inputs to outputs. In other words, how it understands and responds to your questions. The larger the number of parameters, the more complex the model is.
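To make "parameters" concrete, here is a minimal sketch (in Python with PyTorch, an assumed library choice rather than one named in this article) that builds a toy language model and counts its learned weights; the "7B" or "1.7B" figures quoted later in this piece count exactly this kind of number, just at a vastly larger scale.

# Minimal sketch, assuming Python and PyTorch: count a toy model's parameters.
# The "7B" / "1.7B" labels attached to real models count values like these.
import torch.nn as nn

tiny_model = nn.Sequential(
    nn.Embedding(num_embeddings=32_000, embedding_dim=256),  # token vocabulary -> vectors
    nn.Linear(256, 256),                                      # one hidden layer
    nn.Linear(256, 32_000),                                   # project back to the vocabulary
)

total = sum(p.numel() for p in tiny_model.parameters())
print(f"{total:,} learned parameters")  # roughly 16.5 million for this toy model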
With that brief explanation done, how are European AI companies and projects working to enhance these processes to improve European language support?
Hugging Face
When someone wants to share code, they typically provide a link to their GitHub repository. When someone wants to share a model, they typically provide a Hugging Face link. Founded in 2016 by French entrepreneurs in New York City, the company is an active participant in creating communities and a strong proponent of open models. In 2024, it started an AI accelerator for European startups and partnered with Meta to develop translation tools based on Meta’s “No Language Left Behind” model. It is also one of the driving forces behind BLOOM, a groundbreaking multilingual model that set new standards for international collaboration, openness, and training methodologies.
Hugging Face is a useful tool for getting a rough idea of the language support in models. At the time of writing, Hugging Face lists 1,743,136 models and 298,927 datasets. Look at its leaderboard for monolingual models and datasets, and you see the following ranking for models and datasets that developers tag (add metadata) as supporting European languages at the time of writing:
Language            Code   Datasets   Models
English             en     27,702     205,459
English             eng    1,370      1,070
French              fra    1,933      850
Spanish (Español)   es     1,745      10,028
German (Deutsch)    de     1,442      9,714
You can already see some issues here. These aren’t tags set in stone. The community can add values freely. While you can see that they follow them for the most part, there is some duplication.
As you can see, the models are dominated by English. A similar issue applies to the datasets on Hugging Face, which lack non-English data.
What does this mean?
Lucie-Aimée Kaffee, EU Policy Lead at Hugging Face, said that the tags indicate that a model has been trained to understand and process that language, or that the dataset contains materials in that language. She added that confusion about language support often arises during training. “When training a large model, it’s common for other languages to accidentally get caught in training because there were some artefacts of it in that dataset,” she said. “The language a model is tagged with is usually what the developers intended the model to understand.”
As one of the main and busiest destinations for model developers and researchers, Hugging Face not only hosts much of their work, but also lets them create outward-facing communities to tell people how to use them.
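As an illustration of how developers use these language tags in practice, the sketch below (Python, assuming the huggingface_hub library and its list_models language filter; check the current API before relying on it) pulls a small sample of models tagged with a few of the codes from the table above.

# Hedged sketch: querying Hugging Face models by language tag.
# Assumes the huggingface_hub package and its list_models(language=...) filter.
from huggingface_hub import HfApi

api = HfApi()
for code in ["en", "fra", "es", "de"]:
    sample = list(api.list_models(language=code, limit=5))  # small sample only
    print(code, [m.id for m in sample])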
Thomas Wolf, co-founder of Hugging Face, described Bloom as “the world’s largest open multilingual language model.”
Mistral AI
Perhaps the best-known Europe-based AI company is France’s Mistral AI, which unfortunately declined an interview. Its multilingual challenges partly inspired this article. At the FOSDEM developer conference in February 2024, linguistics researcher Julie Hunter asked one of Mistral’s models for a recipe in French — but it responded in English. However, 16 months is an eternity in AI development, and neither the company’s “Le Chat” chat interface nor running its 7B model locally reproduced the same error in recent tests. But interestingly, 7B did produce a spelling error in the opening line: “boueef” — and more may follow.
While Mistral sells several commercial models, tools, and services, its free-to-use models are popular, and I personally tend to use Mistral 7B for running tasks through local models.
Until recently, the company wasn’t explicit about its models having multilingual support, but its announcement of the Magistral model at London Tech Week in June 2025 confirmed support for several European languages.
EuroLLM
EuroLLM was created as a partnership between the Portuguese AI platform Unbabel and several European universities to understand and generate text in all official European Union languages. The model also includes non-European languages widely spoken by immigrant communities and major trading partners, such as Hindi, Chinese, and Turkish.
Like some of the other open model projects in this article, its work was partly funded by the EU’s High Performance Computing Joint Undertaking program (EuroHPC JU). Many of them share similar names and aims, making it confusing to separate them all. EuroLLM was one of the first, and as Ricardo Rei, Senior Research Scientist at Unbabel, told me, the team has learned a lot from the projects that have come since.
As Unbabel’s prime business is language translation, and translation is a key task for many multilingual models, the work on EuroLLM made sense to the Portuguese platform. Before EuroLLM, Unbabel had already been refining existing models to make its own and found them all too English-centric.
One of the team’s biggest challenges was finding sufficient training data for low-resource languages. Ultimately, the availability of training material reflects the number of people who speak the language. One of the common data sources used to train European language models is Europarl, which contains transcripts of the European Parliament’s activities translated into all official EU languages. It’s also available as a Hugging Face dataset, thanks to ETH Zürich.
Currently, the project has a 1.7B parameter model and a 9B parameter model, and is working on a 22B parameter model. In all cases, the models can translate, but are also general-purpose, meaning you can chat with them in a similar way to ChatGPT, mixing and matching languages as you do.
OpenLLM Europe
OpenLLM Europe isn’t building anything directly, but it is fostering a Europe-wide community of LLM projects, specifically for medium and low-resource languages. Don’t let the one-page GitHub repository fool you: the Discord server is lively and active.
OpenEuroLLM, Lumi, and Silo
A joint project between several European universities and companies, OpenEuroLLM is one of the newer and larger entrants on the list of projects funded by EuroHPC. This means that it has no public models as of yet, but it involves many of the institutions and individuals behind the Lumi family of models, which focus on Scandinavian and Nordic languages. It aims to create a multilingual model, provide more datasets for other models, and conform to the EU AI Act.
I spoke with Peter Sarlin of AMD Silo AI, one of the companies involved in the project and a key figure in Finnish and European AI development, about the plans. He explained that Finland, in particular, has several institutes with significant AI research programs, as well as Lumi, one of the EuroHPC supercomputers. Silo, through its SiloGen product, offers open-source models to customers, with a strong focus on supporting European languages. Sarlin pointed out that while sovereignty is an important motivation for him and for Silo in creating and maintaining models that support European languages, the better reason is expanding the business and helping companies build solutions for small markets such as Estonia.
“Open models are great building blocks, but they aren’t as performant as closed ones, and many businesses in the Nordics and Scandinavia don’t have the resources to build tools based on open models,” he said. “So Silo and our models can step in to fill the gaps.”
Under Sarlin’s leadership, Silo AI built a Nordic LLM family to protect the region’s linguistic diversity.
The Lumi models use a “cross-lingual training” technique in which the model shares its parameters between high-resource and low-resource languages.
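The article does not describe how Lumi implements this, but the general idea of shared-parameter, cross-lingual training can be sketched as follows (a minimal illustration in Python/PyTorch, not Lumi's actual code): a single model, with one set of weights, is fed batches from both a high-resource and a low-resource language, so gradient updates from the larger corpus also shape the representations the smaller one relies on.

# Minimal sketch of cross-lingual training with shared parameters (illustrative only).
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64
model = nn.Sequential(                # one model shared by every language
    nn.Embedding(vocab_size, dim),
    nn.Linear(dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fake_batch(n):
    # Stand-in for tokenized text; real training would read language corpora here.
    return torch.randint(0, vocab_size, (n,)), torch.randint(0, vocab_size, (n,))

schedule = ["high"] * 9 + ["low"]     # many high-resource batches, a few low-resource ones
for step in range(100):
    lang = schedule[step % len(schedule)]
    tokens, targets = fake_batch(32 if lang == "high" else 8)
    loss = loss_fn(model(tokens), targets)   # the same weights are updated either way
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()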
All this prior work led to the OpenEuroLLM project, which Sarlin describes as “Europe’s largest open source AI initiative ever, including pretty much all AI developers in Europe apart from Mistral.”
While many efforts are underway and performing well, the training data issue for low-resource languages remains the biggest challenge, especially amid the move towards more nuanced reasoning models. Translations and cross-lingual training are options, but can create responses that sound unnatural to native speakers. As Sarlin said, “We don’t want a model that sounds like an American speaking Finnish.”
OpenLLM France
France is one of the more active countries in AI development, with Mistral and Hugging Face leading the way. From a community perspective, the country also has OpenLLM France. The project (unsurprisingly) focuses on French-language models, offering several models of different parameter sizes, along with datasets that help other projects train and improve their French-language support. The datasets include a mix of political discourse, meeting recordings, theatre shows, and casual conversations. The project also maintains a leaderboard of French models on Hugging Face, one of the few (active) European language model benchmark pages.
Do Europeans care about multilingual AI?
Europe is full of people and projects working on multilingual language models. But do consumers care? Unfortunately, getting language usage rates for proprietary tools such as ChatGPT or Mistral is almost impossible. I created a poll on LinkedIn asking if people use AI tools in their native language, English, or a mixture of both. The results were a 50/50 split between English and a mixture of languages. This could indicate that the number of people using AI tools in a non-English language is higher than you think.
Typically, people use AI tools in English for work and in their own language for personal tasks.
Kaffee, a German and English speaker, said: “I use them mostly in English because I speak English at work and with my partner at home. But then, for personal tasks…, I use German.”
Kaffee mentioned that Hugging Face was working on a soon-to-be-published research project that fully analysed the usage of multilingual models on the platform. She also noted anecdotally that their usage is on the rise.
“Users have a conception that models are now more multilingual. And with the accessibility through large models like Llama, for example, being multilingual, I think that made a big impact on the research world regarding multilingual models and the number of people wanting to now use them in their own language.”
The internet was always supposed to be global and for everyone, but the damning statistic that 50% of sites are in English shows it never really worked out that way. We’re entering a new phase in how we access information and who controls it. Maybe this time, the (AI) revolution will be international.
Story by Chris Chinchilla, technology writer, podcaster, and video maker by day; fiction, games, and music by night. chrischinchilla.com"
https://thenextweb.com/news/making-multilingual-ai-in-europe #metaglossia_mundus
Northwest Translators and Interpreters Society - A forum for language professionals in the Pacific Northwest
"The Northwest Translators & Interpreters Society (NOTIS) CONFERENCE GRANTS 2025
NOTIS's latest round of conference grants is now OPEN! Be sure to submit your application(s) before the deadline: Thursday, July 31, at 11:59 p.m. PDT. Please read the information on this page carefully before submitting your application (links below).
This round of grants will allow 6-10 *members* to attend EITHER the NOTIS 2025 Annual Conference OR a different T&I conference of their choosing with considerable financial support from NOTIS.
We will reserve up to 5 grants for selected applicants who wish to attend the NOTIS 2025 Annual Conference (September 13, 2025, in Lynnwood, Washington), covering the full cost of registration and a little extra for associated expenses.
The remaining grants, for up to $750* in registration fees and associated expenses, will be awarded to selected applicants who plan to attend a different, non-NOTIS T&I conference scheduled anytime between August 21, 2025, and September 1, 2026. *After reviewing all applications anonymously, the selection committee will determine the total amount of each grant depending on a variety of factors (e.g., conference cost, location, and availability of funds).
Again, please read the information on this page carefully before submitting your application(s). You will find both application links at the bottom of this page.
Not a NOTIS member? Click the green button below to learn more about the many benefits of NOTIS membership — and consider joining us!
WHY JOIN NOTIS?
GRANT ELIGIBILITY
To qualify for any conference grant from NOTIS in 2025, you must:
be a member of NOTIS in good standing
not be a NOTIS Board member
complete the application before the deadline, replying thoroughly and thoughtfully to the two short essay questions
not have received a NOTIS grant or scholarship in 2023 or 2024
agree to return the full grant amount to NOTIS if you are unable, for any reason, to attend the conference
If you are applying for a grant to attend a non-NOTIS conference, you must:
fulfill all of the eligibility requirements listed above
know the name, date(s), and registration cost (exact or, if not yet available, estimated) of the conference you wish to attend, and provide this information on your application
Please note that you are welcome to apply for both grants in this cycle (for the NOTIS conference and for another), but you will only be eligible to receive one grant. The decision as to which grant is awarded is at the discretion of the selection committee.
HOW IT WORKS
NOTE: You do not need to register for the conference before submitting your application; however, if selected, you will need to provide proof of registration before NOTIS issues payment.
Candidates will be evaluated anonymously by a committee of their peers using a predetermined set of objective criteria.
NOTIS will announce the results via email by Friday, August 15. Please mark this date on your calendar and keep an eye out for our email!
If you are selected to receive a grant from NOTIS but do not respond to our email by Tuesday, August 19, at 12:00 p.m. PDT, we will offer the grant to a runner-up.
If you are selected to receive a grant for a non-NOTIS conference, we will ask you to include a simple budget of anticipated expenses in your response (by the date mentioned above).
All grantees must register and pay for the conference mentioned on their application. Once we receive proof of payment, we will issue a reimbursement check.
If something comes up and you cannot attend the agreed-upon conference, you must return the full grant amount to NOTIS.
PURPOSE
Through its evolving scholarship and grant program, NOTIS aims to support members by facilitating access to essential training and networking opportunities. Another round of grants, to cover training or certification costs, will be announced in early 2026.
This year’s conference grants are intended to help NOTIS members attend career-changing conferences — including ours — where they can connect with colleagues, learn alongside experts, share insights, earn CEUs, meet employers, and more…
We understand that conference costs can be prohibitive, especially when travel and time off work are involved, and we want to help!
We are especially interested in helping early-career language professionals and those who might otherwise find it difficult to attend a conference due to financial constraints, distance, or other factors. Still, we encourage all interested members to apply!
APPLICATIONS
NOTE: While you are welcome to submit applications for both grants, you may only receive one. The final decision is at the discretion of the selection committee.
Click HERE to apply for a grant to attend the NOTIS 2025 Annual Conference
Click HERE to apply for a grant to attend a different (non-NOTIS) conference
QUESTIONS?
Email social@notisnet.org with any questions or comments. We look forward to hearing from you!
Yours sincerely,
The Member Care & Development Committee at NOTIS
https://notisnet.org/2025-Conference-Grants
#metaglossia_mundus
"CDT Research, Free Expression
Content Moderation in the Global South: A Comparative Study of Four Low-Resource Languages
June 28, 2025
Mona Elswah, Aliya Bhatia, Dhanaraj Thakur
Executive Summary: Insights from Four Case Studies
Over the past 18 months, the Center for Democracy & Technology (CDT) has been studying how content moderation systems operate across multiple regions in the Global South, with a focus on South Asia, North and East Africa, and South America. Our team studied four languages: the different Maghrebi Arabic Dialects (Elswah, 2024a), Kiswahili (Elswah, 2024b), Tamil (Bhatia & Elswah, 2025), and Quechua (Thakur, 2025). These languages and dialects are considered “low resource” due to the scarcity of training data available to develop equitable and accurate AI models for them. To study content moderation in these languages spoken predominantly in the Global South, we interviewed social media users, digital rights advocates, language activists, representatives from tech companies, content moderators, and creators. We distributed an online survey to over 560 frequent social media users across multiple regions in the Global South. We organized roundtables, focus group sessions, and talks to get to know these regions and the content moderation challenges they often face. We did this through essential collaborations with regional civil society organizations in the Global South to help us understand the local dynamics of their digital environments.
When we initially delved into this topic, we recognized that the culture of secrecy that surrounds content moderation would pose challenges in our investigation. Content moderation remains an area that technology companies keep largely inaccessible to public scrutiny, except for the information they choose to disclose. It is a field where the majority, if not all, participants are discouraged from engaging in external studies like this or revealing the specifics of their operations. Despite this, we gathered invaluable data and accessed communities that had previously not been reached. Our findings significantly contribute to the scientific and policy communities’ understanding of content moderation and its challenges in the Global South. The data we present in this report also contributes to our understanding of the information environment in the Global South, which is understudied in current scholarship.
Here, we compare and synthesize the insights we gained from studying the four regions and present our recommendations for improving content moderation in low-resource languages of the Global South.
While the insights from this project may be applicable to other non-Western contexts and low-resource or indigenous languages, we have learned that each language carries its own rich history and linguistic uniqueness, which must be acknowledged when discussing content moderation in general. By comparing these four case studies, we can identify some of the overall content moderation challenges that face languages in the Global South. Additionally, this comparison can help us identify the particular challenges inherent in moderating diverse linguistic and cultural contexts, enhancing our understanding of what could possibly be “effective” content moderation for these regions and beyond.
While we acknowledge the uniqueness of each language, when comparing the four languages we examined, we find that:
The content moderation policies currently employed by large tech companies have limitations. Currently, global tech companies use two main approaches to content moderation: global and local. The global approach involves applying a uniform set of policies to all users worldwide. While this approach helps prevent external interventions (e.g., by governments) and is in some ways easier, it ignores unique linguistic and cultural nuances. The local approach, exemplified by TikTok, involves tailoring policies, particularly those related to cultural matters, to specific regions. This approach, despite its promise of inclusivity, sometimes poses obstacles and limitations on users trying to challenge local norms that violate their rights. An exception to the two approaches was found in the Kiswahili case: JamiiForums, a Tanzanian platform, has developed its own unique methods for moderating local languages, introducing what is known as a "multi-country approach." This approach, which entails assigning moderators to content in their native language, shows more promise and broad user satisfaction, but leaves open the question of whether it can be applied at scale.
Users in the Global South are increasingly concerned about the spread of misinformation and hate speech on social media in their regions. All four case studies highlighted user concerns regarding the spread of hate speech and harassment, and inconsistent moderation of the same. Additionally, users are increasingly worried about the wrongful removal of their content, particularly in the Tamil and Quechua cases. Tamil and Quechua users linked the content restrictions to the companies’ desire to "silence their voices" more often than Kiswahili and Maghrebi Arabic-speaking users did.
We identified four major outsourcing service providers that dominate the content moderation market for the low-resource languages we examined: Teleperformance, Majorel, Sama, and Concentrix. Across the four cases, we found that content moderators for non-English languages are often exploited, overworked, and underpaid. They endure emotional turmoil from reviewing disturbing content for long hours, with minimal psychological support and few wellbeing breaks. Additionally, we found that the hiring process for moderators lacks diversity and cultural competencies. Moderators from a single country are often tasked with moderating content from across their region, despite dialectal and contextual variations. In general, moderators are required to review content in dialects other than their own, which leads to many moderation errors. In some cases, moderators are assigned English-language content from around the world, with no regard for their familiarity with specific regional contexts, as long as they possess a basic understanding of English.
Resistance is a common phenomenon among users in the Global South. Many users across the case studies employed various tactics to circumvent and even resist what they saw as undue moderation. Despite the constant marginalization of their content and their languages, users developed various tactics to evade the algorithms, commonly known as "algospeak." We found tactics that involved changing letters in a word, using emojis, uploading random content alongside material they believed would be restricted, and avoiding certain words. In examples from our Quechua case study, some simply posted in Quechua (instead of Spanish) because they found that it was often unmoderated.
Lastly, many NLP researchers and language technology experts in the Global South have developed tools and strategies to improve moderation in many low-resource languages. They have engaged with their local communities to collect datasets that represent specific dialects of the language. They enlisted students and friends to help annotate data and have published their work, creating networks to represent their language in global scholarship. However, these scholars and experts often feel underutilized or unheard by tech companies. If consulted and their knowledge utilized, these groups could significantly improve the current state of content moderation for low-resource languages" https://cdt.org/insights/content-moderation-in-the-global-south-a-comparative-study-of-four-low-resource-languages/ #metaglossia_mundus
Manitoba’s Minister for Accessibility is apologizing to the deaf and hard of hearing community for comments about interpreter
"By Sav Jonsa Jun 27, 2025
Manitoba’s Minister for Accessibility is apologizing to the deaf and hard of hearing community after comments she made about an American Sign Language (ASL) Interpreter were made public.
On Thursday, an ASL-English Interpreter provided her services at an event put on by Minister Nahanni Fontaine to celebrate Indigenous women graduating from high school, college and university.
Interpreter Sheryl LaVallee shared the stage alongside various speakers so members of the audience who communicate using ASL, a visual language comprised of hand movements and facial expressions, could be included in the conversation.
Soon after Fontaine made her speech to the crowd, she went to a media scrum off-stage to address reporters.
But not before sharing her grievances to her press secretary, Ryan Stelter.
In front of the media, he congratulated Fontaine on her speech.
“I was thrown off,” Fontaine replied. “It wasn’t great but, because the woman – she shouldn’t have been on the stage.”
Fontaine continued, saying she couldn’t see the left side of the stage due to the interpreter and that “all I could see was her…”
“Frantic hand movements?” he offered.
“Yes! I’m like, f*** why did I have her on the stage,” added Fontaine, “Jesus, I’m like ‘you need to leave’.”
APTN News heard the transgression on its video recording of the news conference Thursday and promptly contacted the Manitoba government for a response.
The media communications team initially denied the request to provide a statement or interview unless APTN handed over the footage to ‘verify’ the transcript. APTN did provide a transcript but did not comply with the request for the raw footage.
Soon after the footage aired Friday afternoon, Fontaine provided an emailed statement that said, in part:
“I sincerely apologize to the deaf and hard of hearing community, and to all Manitobans for my comments,” wrote Fontaine.
“Yesterday, during a private debrief with my staff, I was reflecting on my public speaking performance and remarked I had been distracted by the interpreter’s hand movements. I was expressing frustration on my own poor planning to ensure clear sight lines at the event.”
“My comments did not acknowledge signing is not simply “hand movements,” but a full and rich language used by thousands of Manitoban(s) every day.”
Fontaine says she spoke with LaVallee to apologize and receive feedback on how to improve the experience of deaf and hard of hearing Manitobans at events.
Fontaine continued, “As the Minister responsible for Accessibility I understand that ASL interpretation is integral to our public events, and we must continue to build understanding and respect for sign language and Manitobans who rely on it.”
In May, the NDP government provided funding to the modernized ASL-English Interpretation Advanced Diploma Program at Red River College Polytechnic in the amount of $225,000 for renovation and equipment costs, in addition to $190,000 in annual funding to support the program’s operations.
The use of ASL-English interpreters is part of the Manitoba government’s Public Service Commission Policy on Sign Language, where employees are responsible to “incorporate sign language interpreting services and other accessibility features as part of public engagements and communication.”" https://www.aptnnews.ca/national-news/manitoba-accessibility-minister-apologizes-for-comments-about-sign-language-interpreter/ #metaglossia_mundus
"‘You think in two languages when you translate’: Deepa BhasthiDeepa Bhasthi selected Banu Mushtaq’s 12 stories and translated them from Kannada to English
By Shubhangi Shah
Issue Date: July 06, 2025
Updated: June 29, 2025 06:58 IST
Interview/ Deepa Bhasthi, translator
Banu Mushtaq’s International Booker Prize-winning Heart Lamp is bold, impactful, and radical. Equally radical is the work of Deepa Bhasthi, who selected Mushtaq’s 12 stories which comprise the book and translated them from Kannada to English, but with an accent. “Translating with an accent is very important to me because I don’t believe in revering western centricity at all,” she tells The WEEK. So “roti” doesn’t become flatbread as “pizza, burger, or pasta was never explained to us.”
In her translator’s note ‘Against Italics’, Bhasthi further makes the case for not using italics, as they “serve to not only distract visually, but more importantly, they announce words as imported from another language, exoticising them and keeping them alien to English”. And regarding footnotes, “there are none”, she writes.
In an interview with THE WEEK, Bhasthi talks about the collaboration with Mushtaq, the challenges of translating Heart Lamp, and her views on pushing the boundaries of English. Excerpts:
Q. How did your collaboration with Banu Mushtaq happen?
A. I didn’t know Banu until three years ago. She got in touch with me through a mutual friend and asked if I’d be interested in translating her stories. Although I had read a few of her stories, it was then that I started reading everything that she had written. And her stories struck me: how universal their themes were, and how relevant they remain across communities, religions, castes and even nationalities.
Q. How did you narrow down the stories to the 12 that comprise Heart Lamp?
A. There were multiple factors that went into choosing the 12 stories—those that moved me personally, those that showcased well the kind of themes she has been writing about, and then those I thought would work well in English.
Q. What was challenging about translating Banu’s work?
A. Getting the cultural context was challenging as I don’t belong to her community or religion. There were several nuances of the Islamic society that were, initially, challenging to understand. For that, I read a lot of literature, watched Urdu TV shows and listened to music to get used to the mannerisms, voices and her community’s thought process.
And translating itself is a tough job as you’re trying to think in two languages simultaneously and getting under the skin of the other person’s thoughts.
Q. The International Booker jury called Heart Lamp “a radical translation”. You clearly pushed the contours of English by introducing Kannada tonality. How do you view this style of translation with an accent?
A. Translating with an accent is very important for me because I don’t believe in revering western centricity at all. For decades, we’ve been trying to translate in a way to make it easier for the western readers. Meanwhile, we have been made to understand their culture without italics or any footnotes. So pizza, burger, or pasta was never italicised or explained to us, so I don’t see why we’ve to explain roti as flatbread. Also, we shouldn’t underestimate the interest or the intelligence of readers and dumb down the text just because they might not be familiar with some words or phrases.
So I wanted to retain that, because it’s also important to be aware that I was not trying to translate into British or American English, and the way characters speak in English in the stories is the way we speak English in India. So I didn’t see why we needed to reject that and cater to a readership that wanted everything easy. I was trying to push the boundaries of English and the language is very capable of being bent and twisted. It’s not a rigid language by any sense.
Q. Despite having a rich literary culture, Kannada has remained among the lesser-represented languages in translation.
A. We have a glorious unbroken literary history of about 1,000 years and there are extraordinary works in Kannada. I hope this win sparks some interest in both reading in Kannada and translating from the Kannada language.
Q. In India, language can be a sensitive issue, with concerns over imposition of Hindi or English. How do you think translations fit into this conversation?
A. From the south Indian perspective, there’s a greater concern over the imposition of Hindi compared with English, for several reasons. And it’s very simple: what one doesn’t know, one fears. The more we read literature from each other’s languages, the more we realise that human beings are the same everywhere, whether they eat different foods or speak different languages. So, it is important for languages to be in conversation with each other, because then we get familiar with each other’s stories and that fosters a greater sense of understanding than anything else."
https://www.theweek.in/theweek/leisure/2025/06/28/you-think-in-two-languages-when-you-translate-deepa-bhasthi.html
#metaglossia_mundus
"Vocablos cubanos serán incluidos en el Diccionario de la Lengua Española de 2026 se consultó inicialmente un listado del Diccionario de Americanismos, que contenía las palabras más utilizadas en Cuba.
Con esta adición al diccionario del próximo año, se establece una conexión entre la lexicografía tradicional y las herramientas digitales modernas.
29 de junio de 2025 La inclusión de 100 cubanismos en la próxima edición del Diccionario de la Lengua Española (DLE) representa un reconocimiento significativo de las tradiciones léxicas nativas en Cuba, donde el idioma se presenta como un organismo en constante evolución.
Para dar a conocer esta iniciativa tanto al público nacional como al extranjero, se realizó un anuncio en la Biblioteca Nacional José Martí sobre la incorporación de estos términos en la edición del DLE que se lanzará en 2026.
El evento fue presidido por la Academia Cubana de la Lengua este viernes, donde se discutió el proceso de investigación llevado a cabo para integrar estas expresiones del argot popular. Se examinó una selección de los casos más típicos y recurrentes en el habla cotidiana.
Esta elección se fundamentó en una metodología rigurosa y criterios de fuentes que garantizaron la homogeneidad de las voces. Según Roberto Méndez Martínez, miembro de Número de la Real Academia Cubana de la Lengua, este ejercicio representa una sinergia entre la investigación y el tratamiento léxico contemporáneo, asegurando que cada una de las 100 palabras seleccionadas cumpla con los requisitos necesarios y refleje la autenticidad de Cuba.
Siguiendo esta metodología de fuentes, se consultó inicialmente un listado del Diccionario de Americanismos, que contenía las palabras más utilizadas en Cuba, según el pronunciamiento de Alexander Puente, profesor de la Facultad de Artes y Letras (FAYL) de la Universidad de La Habana y miembro de la Academia Cubana de la Lengua.
Además, se contrastó cada uno de estos vocablos mediante la documentación correspondiente, lo que llevó a descartar algunos términos y a buscar alternativas adicionales.
Con esta adición al diccionario del próximo año, se establece una conexión entre la lexicografía tradicional y las herramientas digitales modernas, realzando así el valor de la cultura cubana en su diversidad y promoviendo un enfoque inclusivo que aboga por la igualdad y la equidad." SURtv.net 30 de junio, 2025 https://www.telesurtv.net/vocablos-cubanos-seran-incluidos-en-el-diccionario-de-la-lengua-espanola-de-2026/ #metaglossia_mundus
"Ce passionné de Tintin va traduire “Les Bijoux de la Castafiore” en... patois de l’Auxois !
Tintin parle désormais le patois de l’Auxois ! L’association tintinophile La Confrérie aux pinces d’or relance ses activités avec comme projet la traduction de l’album Les Bijoux de la Castafiore dans la langue régionale. Une édition limitée paraîtra en 2025, portée par la passion de Nicolas Poussy.
Manon Tautou (manon.tautou@lebienpublic.fr)
Nicolas Poussy, passionné de Tintin depuis son plus âge, traduit Les Bijoux de la Castafiore en patois de l’Auxois. La BD avait déjà été traduite en patois dijonnais.
Après plusieurs années d’hibernation, l’association tintinophile La Confrérie aux pinces d’or, basée à Mont-Saint-Jean, renoue avec l’aventure. Et pas n’importe laquelle : Nicolas Poussy s’est lancé le défi de traduire Les Bijoux de la Castafiore en patois de l’Auxois. Une édition limitée et numérotée de l’album est prévue pour le second semestre 2025. Tintin, le capitaine Haddock et la Castafiore parleront avec… l’accent local.
« Le patois, c’est plus compliqué à transmettre à l’écrit »
L’idée n’est pas nouvelle pour l’association, puisqu’en 2008-2009, un album de Tintin avait déjà été adapté en patois bourguignon-dijonnais par le linguiste Gérard Taverdet, sous le titre “Lés ancorpions de lai Castafiore” (Les Bijoux de la Castafiore).
« Cette fois, j’ai voulu traduire dans la langue que j’entendais petit », explique Nicolas Poussy, à l’initiative du projet. Un travail de fourmi, car, précise-t-il : « Le patois est une langue orale, c’est plus compliqué à transmettre à l’écrit. Je m’inspire de ce que j’entendais enfant, d’écrits, … Et puis, rien qu’à quelques kilomètres, les mots utilisés peuvent être différents ! ». Cela fait quatre ans que ce passionné de Tintin s’attelle à cette tâche sur son temps libre. Il est épaulé par plusieurs relecteurs. « Nous avons choisi de traduire Les Bijoux de la Castafiore , un album dont les paysages parlent à tout le monde et peuvent résonner avec ceux de la Bourgogne et de l’Auxois », détaille-t-il.
Une première vague de 2 000 exemplaires
Pour concrétiser le projet, l’association a lancé une campagne de pré-souscription, ouverte jusqu’au 15 juillet. L’objectif est de financer une première édition limitée de 2 000 exemplaires numérotés, proposés au prix de 13,50 € l’unité, ou 38,50 € avec l’adhésion à l’association. « Ensuite, nous verrons en fonction du succès », ajoute Nicolas.
Les fonds collectés permettront de couvrir les coûts d’édition, d’impression et de livraison. Les bénéfices serviront à développer des activités culturelles et pédagogiques autour du patois : ateliers de patoisants, lectures publiques, pièces de théâtre… « L’objectif est de fédérer autour d’une langue, et pourquoi pas, organiser une vraie fête locale », confie-t-il encore.
L’association a été lancée en 2008
L’album est publié par les Éditions Casterman et Tintinimaginatio (anciennement Moulinsart) sont les ayants droit officiels de l’œuvre d’Hergé. Fondée en 2008, La Confrérie aux pinces d’or s’est donné pour mission de rassembler les passionnés de Tintin, à travers publications, expositions, conférences..."
https://www.bienpublic.com/culture-loisirs/2025/06/29/ce-passionne-de-tintin-va-traduire-les-bijoux-de-la-castafiore-en-patois-de-l-auxois
#metaglossia_mundus
"Par Frederic Becquemin 29 juin 2025
DeepL rejoint désormais le classement prestigieux du TIME Magazine, qui distingue les 100 entreprises les plus influentes de 2025. Cette reconnaissance internationale souligne l’impact révolutionnaire de sa technologie d’intelligence artificielle linguistique avancée sur les communications mondiales.
DeepL vient d’intégrer le célèbre classement TIME100 des entreprises qui façonnent l’avenir mondial. Cette reconnaissance internationale place la société allemande aux côtés de géants technologiques comme OpenAI et Anthropic. La plateforme spécialisée dans la traduction automatisée rejoint ainsi une liste sélective d’organisations qui transforment notre époque grâce à leurs innovations révolutionnaires.
Cette nomination reflète l’influence grandissante de DeepL sur le marché de l’IA florissant. L’entreprise se distingue par son approche unique de l’intelligence artificielle linguistique avancée, permettant des traductions d’une qualité exceptionnelle. Parmi les entreprises internationales renommées qui lui font confiance, on compte Harvard Business Publishing et Panasonic Connect, témoignant de la fiabilité de cette innovation technologique phare.
Intégration au prestigieux classement TIME100 Reconnaissance aux côtés des leaders technologiques mondiaux Plus de 200 000 entreprises utilisent la plateforme Partenariat avec des marques de renommée internationale L’impact direct de l’intelligence artificielle linguistique en entreprise
Les barrières linguistiques représentent un défi majeur pour les organisations modernes. Une étude récente révèle que 70 % des entreprises américaines rencontrent quotidiennement des obstacles linguistiques industriels qui freinent leur productivité. Ces difficultés se traduisent concrètement par des retards dans les projets et une communication internationale facilitée compromise.
The tool developed by DeepL is changing that reality by enabling smoother expansion into international markets. The data show that 61% of the companies surveyed acknowledge that language barriers have slowed their development abroad. With this technology, organisations benefit from accelerated growth and can now operate without geographical constraints.
Recognition welcomed by DeepL's leadership
Jarek Kutylowski, founder and CEO of DeepL, expressed his pride at this prestigious nomination. The distinction honours the work accomplished by his teams and validates the company's strategic vision. He stresses that this collective success is the result of years of investment in research and technological development.
"Being recognised by TIME is an extraordinary validation of our commitment to breaking down language barriers around the world. This distinction reflects the passion and dedication of our teams."
Jarek Kutylowski, CEO of DeepL. This award sits squarely within DeepL's strategic mission of democratising access to professional-quality multilingual communication.
A highly competitive language AI sector
DeepL keeps collecting distinctions in a fiercely competitive environment. The award-winning innovator features in the Forbes AI 50 ranking for the second year running, confirming its position as a technology leader. Fast Company has also named it one of the most innovative companies of 2025.
ZDNET ranked DeepL third among popular AI tools, attesting to the massive adoption of its solutions. This remarkable performance in a rapidly expanding sector demonstrates the company's ability to maintain its technological lead in the face of fierce international competition." https://www.mediavenir.fr/deepl-integre-elite-des-entreprises-influentes-du-classement-time/ #metaglossia_mundus
"La Journée mondiale de la langue kiswahili, célébrée chaque année le 7 juillet, rend hommage au kiswahili, l'une des langues les plus parlées en Afrique et dans le monde, avec plus de 200 millions de locuteurs. Le kiswahili est un outil essentiel de communication et d'intégration en Afrique orientale, centrale et australe. Il est la langue officielle de l'Union africaine, de la Communauté de développement de l'Afrique australe et de la Communauté économique des États de l'Afrique de l'Est. Le kiswahili est plus qu'une langue : c'est un vecteur de l'identité, de l'unité et de la culture africaines. Depuis son rôle dans les mouvements de libération, notamment ceux menés par Mwalimu Julius Nyerere, jusqu'à son utilisation moderne dans l'éducation, la diplomatie et les médias, le kiswahili continue de favoriser la cohésion régionale et la compréhension culturelle mondiale.
Reconnue par l'UNESCO comme la première langue africaine à bénéficier d'une journée internationale, la langue kiswahili incarne le pouvoir du multilinguisme pour promouvoir la diversité, la tolérance et le développement durable. En tant que pont entre les communautés et les civilisations, cette langue joue un rôle essentiel dans l'éducation, la sauvegarde de la culture et le progrès socio-économique. Plus qu'un simple moyen de communication, le kiswahili est porteur d'une identité, de valeurs et d'une vision du monde, représentant la riche mosaïque culturelle du continent africain.
Consciente de sa portée mondiale croissante, l'Assemblée générale des Nations Unies a adopté la résolution A/RES/78/312, qui affirme l'importance du kiswahili dans la promotion de la solidarité, de la paix et de l'unité panafricaine.
Contexte
Le swahili a une histoire riche et complexe, façonnée par diverses cultures et langues au fil des siècles. Ses origines font l'objet d'un débat, avec deux théories principales sur son développement. La première suggère que le swahili est avant tout une langue bantoue apparue le long de la côte de l'Afrique de l'Est entre 100 et 500 ans de notre ère. Il a évolué comme une langue véhiculaire (c'est-à-dire une langue de communication entre des communautés d'une même région ayant des langues maternelles différentes), aidant les communautés bantoues à communiquer avec les commerçants d'Arabie et d'Asie. Le swahili est progressivement devenu une langue essentielle pour le commerce, la diplomatie et les échanges culturels.
La seconde théorie met en avant l'influence de l'arabe sur le développement du swahili. Le terme « swahili » est dérivé du mot arabe sawāḥilī, qui signifie « de la côte », ce qui reflète les liens commerciaux et culturels étendus de la région avec les commerçants des pays arabes. Au fil du temps, le vocabulaire, la grammaire et les systèmes d'écriture arabes ont influencé le swahili, en particulier dans les régions côtières. Ce lien a permis à la langue de s'épanouir en tant que langue parlée et écrite en Afrique de l'Est, renforçant ainsi son rôle dans la communication locale et internationale.
« Lugha ni uti wa mgongo wa utamaduni, mshikamano na maendeleo »
Language is the backbone of culture, unity and development.
Did you know?
Kiswahili is the most widely spoken language in sub-Saharan Africa and serves as a lingua franca in more than 14 African countries.
The United Nations Department of Global Communications runs a news service in Kiswahili.
Words such as "Harambee" (coming together) and "Uhuru" (freedom) reflect Kiswahili's deep roots in community and unity.
Kiswahili was originally written in Arabic script, before the Latin alphabet became the standard.
Kiswahili has a logical, phonetic structure, making it one of the easiest African languages for beginners to learn.
Preserving linguistic diversity
UNESCO estimates that there are 8,324 languages, spoken or signed (page in English). Of these, around 7,000 are still in use. Only a few hundred languages are genuinely valued in education and the public sphere, and fewer than a hundred are used in the digital world. This means that every two weeks a language disappears forever, taking with it an entire cultural and intellectual heritage. By preserving and promoting the Kiswahili language, we ensure the survival of one of Africa's most widely spoken languages, one that also serves as a bridge between different communities."
https://www.un.org/fr/observances/kiswahili-day
The supreme chief of the "Dozos sans frontières" brotherhood called on all religious actors to preach forgiveness and living together. "We are against crude words being spoken about the revealed religions.
"...The Académie endogène des savoirs (ACADES) held the launch of its very first book, devoted to the celebration of 15 May. The book is entitled "Le culturel, le spirituel, le cultuel et l'immortel" (The cultural, the spiritual, the religious and the immortal). The launch ceremony took place this Saturday, 28 June 2025, in Ouagadougou.
The book is the fruit of nine contributions: eight scholarly studies and one poetic piece. It is divided into four sections: traditions and gender; traditions and the religions of the Book; traditions, languages and media; and traditions, culture and knowledge.
The "traditions and gender" section highlights the role of women in both traditional and modern societies. The second section, "traditions and the religions of the Book", deals with the relationships between traditions and religions, showing how the revealed religions incorporate certain African values into their practice. Section 3, "traditions, languages and media", analyses the capacity of national languages to promote customs; the authors also examine the place of traditional values in the media. The final section, "traditions, culture and knowledge", describes the knowledge and skills embedded in traditional ways of life.
The book highlights those elements of tradition that can help today's society better build peace and social cohesion. "If we do not truly know our tradition, how can we hope to build a solid society? Tradition is the foundation on which a good society is built," said Professor Yves Dakouo, a member of ACADES.
He added: "Endogenous culture, what we think can be called endogenous culture, is not static. It is necessarily dynamic because, over time, it comes into contact with new elements. That cultural contact may at first be conflictual. But gradually these new elements become integrated with the older ones. And once they are fully integrated and adapted, they even lose their foreign character and become endogenous values. So the endogenous is not something static and fixed. It is something dynamic. What is new today may be old tomorrow, and fully integrated."
For Professor Dramane Konaté, the celebration of 15 May should not be reduced to the sacrifice of an animal, because 15 May encompasses many aspects of traditions and customs and goes beyond the sacrifice of an animal...
The book is available at the Mercury bookshop for 5,000 CFA francs.
Rama Diallo
Lefaso.net"
https://lefaso.net/?page=impression&id_article=139133
#metaglossia_mundus
AI-assisted translation tools are being utilized in court systems to address language barriers, improve efficiency & maintain public trust.
"AI in court translation: Navigating opportunities, risks & the human factor
Natalie Runyon Director / ESG content / Thomson Reuters Institute
27 Jun 2025
AI-assisted translation tools are being utilized in court systems to address language barriers, improve efficiency, and maintain public trust through effective governance, human involvement, and ethical guardrails
Key insights:
AI-assisted translation tools are being utilized in court systems to address language barriers, improve efficiency, and maintain public trust through effective governance and human involvement.
Orange County Superior Court developed its CAT system to address translation issues, starting with Spanish and Vietnamese languages.
Establishing ethical guardrails is essential for the successful deployment of AI-assisted translation tools to build confidence and maintain public trust.
New ways of utilizing AI are showing up in the nation’s court system regularly. Recently, an AI-assisted victim impact statement made its way into the courtroom at a sentencing hearing. Still, less publicized AI innovations, such as improving internal workflows within courts and influencing evidence in trials, are arising across court systems around the world every day.
Yet one of the most debated areas of AI use in courts in the United States involves the fundamental challenge of providing timely and accurate language access for individuals with limited English proficiency. The National Center for State Courts (NCSC) and the Thomson Reuters Institute, through their partnership on AI Policy in the Law and Courts, hosted a recent webinar focused on how AI can assist in the translation of written documents from one language to another. (The webinar did not address in detail how AI can support interpretation, or spoken-language conversion from one language to another.) However, the webinar captured key insights from experts on how AI-assisted translation of documents can enhance court services, the pros and cons of using these tools, and the critical risks that must be managed.
Indeed, a core challenge facing courts today is a critical shortage of qualified human translators and interpreters. Not meeting these language demands creates substantial barriers to due process and potentially affects many individuals’ liberties, housing rights, access to justice, and other fundamental legal protections, while simultaneously undermining public trust in the judicial system.
“There’s a very high demand for translators and interpreters, and a shortage of both, particularly in less common language pairs and more rural areas,” said Florencia Russ, an American Translators Association certified translator and CEO of Transcend Translations. “If there isn’t a translator and interpreter available, that can mean that hearings have to be postponed… [and] people may spend more time in limbo.”
Using AI to pioneer translation
To address this long-standing translation issue, courts are turning to AI-powered tools that are specifically designed for use by court systems — something the Orange County Superior Court saw firsthand. After testing other tools to address language barriers in the justice system, the Superior Court took the initiative to develop its Court Application for Translation (CAT) system, powered by Microsoft Azure, according to Deputy COO Blanca Escobedo.
Orange County developed the system with the Spanish and Vietnamese languages first and trained the model using court-specific terms and words. The court system took a thoughtful approach with robust governance, oversight, and input from multidisciplinary teams, said Escobedo, adding that the project’s first phase focused on low-risk use cases, mainly translating educational materials and video scripts. Later phases will concentrate on collaborative court essays and juvenile reports that typically run longer than 100 pages.
Escobedo explained how rigorous quality control has been a key pillar of CAT outputs since its inception, which was essential to ensure trust and confidence in the output. Each output then is reviewed by certified translators and given a score to assess the performance of the AI-assisted translation. Results showed 80% of Spanish translations were usable as-is (with 17% requiring minor corrections, and 3% containing major errors); while Vietnamese translations achieved 57% accuracy (with 39% needing minor adjustments, and 4% with major errors). The difference reflects the greater availability of Spanish training materials.
Governance, human involvement & ethical guardrails
As the webinar pointed out, effective governance, human participation, and ethical guardrails are all key ingredients for the effective deployment of AI-assisted translation tools to build confidence and maintain the trust of the public. Grace Spulak, Principal Court Management Consultant at NCSC, outlined the consequences of using AI-assisted translation tools without deploying such safeguards. “If people do not feel the information they are getting from the courts is accurate and reliable, people will not trust the courts,” Spulak said. “They won’t look to the courts as a source of authority, and they won’t use information that they get from the courts.”
Spulak, Russ, and Escobedo outlined the necessary mechanisms to ensure public trust is maintained with translated documents using AI, including:
Effective governance — Spulak, who is leading NCSC’s efforts to provide guidance on AI translation, recommends that courts have policies in place “if they are going to use AI in any context for translation, so that people understand what it is the court is doing and what things are being translated.” These protocols should have “clear limits on how AI is used, how it will be reviewed, and how the court will ensure that it’s providing quality translations to folks,” she adds. In addition, there should be established guidelines for implementation, human review processes, and feedback mechanisms from internal and external stakeholders.
Humans in the loop for accuracy and quality control — Having individuals review outputs and oversee development, testing, and ongoing quality control are vital mechanisms for using AI to translate documents. “There has to be a human in the loop who has read and has done machine-translation, post-editing to ensure that that those translations are accurate” for effective use in a legal context, explained Russ, of Transcend Translations.
Transparency — The concept of transparency forms the cornerstone of ethical AI translation implementation in courts. Escobedo described how the Orange County Superior Court is purposeful in how it deploys AI assisted translation. In the court’s process, information is shared with executive and judicial leadership to ensure there is a level of comfort with the solutions the court is developing. This includes clearly informing users when AI translation has been used through disclaimers on documents and maintaining open communication with staff and the judiciary. In this way, transparency is enhanced when there are “clear guidelines for how a document translated with AI-assisted translation has to be disclosed, where it is disclosed, and whether it be a watermark or an oral declaration,” Russ said.
Privacy & security — Privacy and security considerations are equally critical. Orange County’s approach demonstrates best practices with an on-premises solution that’s protected by firewalls, ensuring that confidential information remains secure. Additionally, reviewers sign confidentiality agreements to provide another layer of protection.
Using AI’s powerful capability… with caveats
AI-assisted translation has improved language access in courts and addressed resource constraints while enhancing efficiency. Indeed, Orange County’s CAT tool resulted in “a reduction in translation expenditures” and significant improvement in turnaround times, noted Escobedo.
For other courts, the Orange County Superior Court’s approach to AI-assisted translation can serve as a guide for thoughtful implementation through transparency, phased rollout, and continuous quality assessment. Courts modelling their efforts based on the experience of Escobedo and the CAT application will need to balance innovation in order to address a lack of language access within court systems while still actively identifying and mitigating the risks associated with such innovation.
With careful planning and commitment to quality, courts can harness AI to enhance justice for all community members, but as former Minnesota State Supreme Court Chief Justice Bridget Mary McCormack said in concluding the webinar: “We are still at a stage where we want humans supervising this work, and we want humans who are certified and experts to be the ones supervising the work.”"
https://www.thomsonreuters.com/en-us/posts/ai-in-courts/navigating-language-translation/
#metaglossia_mundus
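A side note on the review figures cited in the Orange County piece above: the article reports only the percentage breakdown (for example, 80% of Spanish outputs usable as-is), not how it is produced. As a minimal sketch, assuming certified reviewers simply assign each AI-translated document one of three labels, the shares could be tallied as follows; the label names and the summarize_reviews function are illustrative assumptions, not part of the court's CAT system.

from collections import Counter

def summarize_reviews(reviews):
    # reviews: labels assigned by certified translators to each AI-translated
    # document, e.g. "usable_as_is", "minor_corrections", "major_errors".
    counts = Counter(reviews)
    total = len(reviews)
    # Return each label's share of the total, as a percentage.
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical sample mirroring the Spanish figures quoted in the article.
sample = ["usable_as_is"] * 80 + ["minor_corrections"] * 17 + ["major_errors"] * 3
print(summarize_reviews(sample))
# {'usable_as_is': 80.0, 'minor_corrections': 17.0, 'major_errors': 3.0}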
"LuminAfrica brings the Bible to unreached languages and people
Written by: Mergon Foundation
Article source: www.facebook.com
Over 6.1 billion people currently have access to the full Bible in their own language. But more than 1.5 billion still don’t – and for 220 million people, not even a single verse exists in their heart language.
With over 3,500 languages still lacking sufficient scripture, Bible translation remains one of the most urgent and complex missions of our time. Many of these communities come from oral cultures, meaning translation isn’t just about finding the right words. Translation is about honouring culture, traditions, and the culture’s unique ways of engaging with truth.
That’s why translation efforts must be locally led, culturally sensitive, and deeply collaborative. Translators work through each passage – drafting, checking, testing, and refining with their communities and advisors until a clear, accurate version takes shape.
Launched in 2020, LuminAfrica is a Mergon partner that’s making strides in this vital work. Through collaborative efforts between South African Bible translation organisations and local resource partners, they are helping to close the gap between those who have access to God’s word and those who don’t.
Imagine a day when every tribe and tongue has God’s word in their heart language. It’s a mission worth supporting.
Find out more about LuminAfrica here: "https://luminafrica.bible"
https://joynews.co.za/luminafrica-brings-the-bible-to-unreached-languages-and-people/
#metaglossia_mundus
"Risk Management in Translation, by Anthony Pym
Abstract: A recent addition to Cambridge University Press’s Elements in Translation and Interpreting series is Risk Management in Translation, authored by Anthony Pym, one of the world’s foremost translation studies scholars (Ioannidis, 2024). In this concise but insightful volume, Pym presents and analyses a topic of immense academic and practical relevance which synthesises and develops aspects touched upon in some of his previous research (e.g., see Pym, 2015). Risk Management in Translation is divided into six sections. The first, entitled ‘Why talk about risk?’ (pp. 1-8), highlights the inherent uncertainty underlying (multilingual) communication. Pym goes beyond traditional rule-based notions of equivalence, highlighting that in many areas of translation, the “problems of translating a text do not have clear, simple solutions” (Pym, 2025a, p. 1). In discussing the role of risk and probability in numerical economic and business-related contexts, Pym then delineates how risk management interplays with the translator’s craft, and how – consciously or not – it may affect a translator’s work and mindset. As such, the importance of cooperation in enhancing the probability of a successful outcome – as well as in the avoidance of failure – is underscored."
https://www.researchgate.net/publication/392812615_Risk_management_in_translation_-_Book_review
#metaglossia_mundus
"SAN DIEGO (FOX 5/KUSI) — A former Afghan interpreter for the U.S. military who is seeking asylum in the United States is now facing possible deportation after being detained by immigration officials earlier this month, sparking outrage among supporters who say he should be treated as an ally, not a criminal.
Sayed Naser was arrested by U.S. Immigration and Customs Enforcement (ICE) agents on June 12 following a mandatory immigration hearing at the federal courthouse in downtown San Diego. He has been held at the Otay Mesa Detention Center ever since.
Naser, who worked with American forces in Afghanistan, is among the growing number of asylum seekers caught up in the Trump administration’s crackdown on undocumented immigrants. Advocates warn that returning him to Afghanistan could be a death sentence, given his work with the U.S. military and the threat posed by the Taliban.
On Thursday, a federal immigration judge approved a motion to dismiss Naser’s case — a move VanDiver said may sound favorable but could actually strip Naser of his opportunity to formally claim asylum.
“You would think a motion to dismiss is a good thing, but it’s not in this case,” said Shawn VanDiver, a representative of AfghanEvac and Iraq and Afghanistan Veterans of America. “It means that his notice to appear was dismissed, along with his defense of asylum.”
VanDiver said recent changes in federal immigration policy, including a Supreme Court ruling allowing the administration to redirect asylum seekers to third countries regardless of conditions there, are compounding Naser’s legal peril.
“Now they’ve put him into expedited removal proceedings,” VanDiver said. “All that stands between him and deportation to anywhere in the world that President Trump decides, thanks to the Supreme Court ruling this week, is a credible fear interview.”
Naser must now convince an asylum officer that he faces a real threat of harm if deported — and he must do so without legal representation, as attorneys are not allowed in that phase of the process. If the officer finds his fear credible, he may be allowed to pursue asylum. If not, he could be deported within days.
“The best-case scenario is he passes his credible fear interview, and gets an expedited asylum interview, gets his asylum, and can request his green card in a year, and he’s safe — and then his family can come here too,” VanDiver said.
It remains unclear when Naser’s interview will take place. Until then, he remains in federal custody, awaiting a decision that could determine whether he finds asylum or deportation." by: Tony Shin Jun 27, 2025 / 07:37 PM PDT https://fox5sandiego.com/news/local-news/former-afghan-interpreters-deportation-feared-after-san-diego-ice-arrest/ #metaglossia_mundus
"This week’s conversation is about curiosity in translation and interpretation. Not just the literal, “how do you say this thing in that language?” but how do we use our curiosity to communicate our ideas effectively, to investigate what’s really being said when we’re quite literally not speaking one another’s language.
Luckily, there are people like Silvia Villacampa who have a few things to teach the rest of us…
Silvia Villacampa worked in medical interpretation and is now managing director of Liberty Language Services.
“In the moment, the curiosity must be around the culture of both parties, both speakers. Culture is language, language is culture. It’s all intertwined.” ~ Silvia Villacampa
Episode webpage and full show notes: https://lynnborton.com/2025/06/26/curiosity-in-translation-interpretation-with-silvia-villacampa/"
https://kpfa.org/episode/choose-to-be-curious-june-26-2025/
#metaglossia_mundus
A group of authors, including Pulitzer prize winner Kai Bird, accuse Microsoft of using copyrighted works to train its large language model.
by Suhasini Srinivasaragavan
26 JUN 2025
The latest complaint comes as Meta and Anthropic both receive legal relief in similar copyright lawsuits.
A group of authors has filed a lawsuit against Microsoft, accusing the tech giant of using copyrighted works to train its large language model (LLM).
The class action complaint filed by several authors and professors, including Pulitzer prize winner Kai Bird and Whiting award winner Victor LaVelle, claims that Microsoft ignored the law by downloading around 200,000 copyrighted works and feeding them to the company’s Megatron-Turing Natural Language Generation model.
The end result, the plaintiffs claim, is an AI model able to generate expressions that mimic the authors’ manner of writing and the themes in their work.
“Microsoft’s commercial gain has come at the expense of creators and rightsholders,” the lawsuit states. The complaint seeks to not just represent the plaintiffs, but other copyright holders under the US Copyright Act whose works were used by Microsoft for this training.
The aggrieved party seeks damages of up to $150,000 per infringed work, as well as an injunction prohibiting Microsoft from using any of their works without permission.
This latest lawsuit is yet another that seeks to challenge how AI models are trained. Visual artists, news publishers and authors are just some of the creators who claim that AI models infringe upon their rights.
However, yesterday (25 June), a US court ruled that Meta’s training of AI models on copyrighted books fell under the “fair use” doctrine of copyright law.
The lawsuit was brought by author Richard Kadrey and others back in 2023.
Earlier this year, the authors’ counsel claimed that Meta allowed its LLM Llama to commit copyright infringement on pirated data and upload it for commercial gain.
In the decision yesterday, the judge said that the ruling does not mean that Meta’s use of copyrighted materials to train its LLM is lawful but that the plaintiffs “made the wrong arguments”, ultimately failing to prove their case.
He added that Meta’s use of the copyrighted works was “transformative” and the authors failed to provide any meaningful evidence of “market dilution” resulting from Meta’s actions. As a result, he said Meta was entitled to a summary judgement on its fair use defence.
While in another blow to authors, a different US court earlier this week ruled that Anthropic’s use of purchased books to train Claude AI also qualifies as “fair use”.
This case was brought by Andrea Bartz, Charles Graeber and Kirk Wallace Johnson in 2024, who claimed that Anthropic used pirated versions of various copyrighted material to train Claude, its flagship AI model.
However, “Claude created no exact copy, nor any substantial knock-off. Nothing traceable to [the plaintiffs’] works,” the judge wrote in his summary judgement. The judge said that Anthropic must face a separate trial for the claim that it pirated works from the internet.
It appears, though, that Big Tech companies at times acknowledge the role copyright holders play in creating the primary data from which their AI models learn.
Last year, Bloomberg reported that Microsoft and publishing giant HarperCollins signed a content licensing deal where the tech giant could use some of HarperCollins’ books for AI training.
AI search engine Perplexity, which has repeatedly come under fire for allegedly scraping content from news publishers, has also launched a revenue-sharing platform with publishers after receiving backlash.
Meanwhile OpenAI has a content-sharing deal for ChatGPT with more than 160 outlets in several languages.
Earlier this year, Thomson Reuters CPO David Wong told SiliconRepublic.com that not only is it possible to create AI systems that respect copyright, but that respecting copyright will further those systems and improve accessibility to information.
Recent rulings seem to place Big Tech as the emerging winner in the AI fair use battle. Still, companies such as OpenAI and Microsoft continue to battle similar lawsuits." https://www.siliconrepublic.com/business/microsoft-lawsuit-ai-copyright-kai-bird-victor-lavelle #metaglossia_mundus
Nearly 3 in 4 Imperial County residents speak mostly Spanish at home.
"A new California bill could force local governments in Imperial County to start translating their agendas into Spanish. The lack of translation has kept many county residents from fully participating in the democratic process.
An immigrant worker, who was arrested during a raid outside a hardware store in Pomona in April, has been released from ICE custody. Now, immigrant rights advocates are pushing for the release of two other workers still in detention.
State Bill Would Require Imperial County To Translate Key Documents Into Spanish
Last September, dozens of public speakers gathered at the Imperial County Board of Supervisors meeting in El Centro. They were there to comment on the county’s proposed lithium spending plan — part of a major discussion taking place across the county about future tax revenue from the burgeoning industry. But some of the speakers also wanted to talk about something else.
“There’s no Spanish translation of the updated plan,” said Fernanda Vega, an organizer with the Imperial Valley Equity and Justice Coalition. “We cannot continue to push aside Spanish-speaking residents, especially when their health and livelihoods are at stake.” Nearly 3 in 4 Imperial County residents speak mostly Spanish at home, and more than a quarter don’t speak English fluently, according to the U.S. Census Bureau. But the county government and many cities often don’t publish translated versions of agendas and other key documents. Without consistent Spanish translation in local government, these residents are in essence locked out of the democratic process.
Now, a new California bill could force the county government and the region’s two largest cities to start offering Spanish translations of their meeting agendas, which are currently published only in English. Among other changes, SB 707 would require that certain counties and cities with large communities who speak languages other than English translate their agendas and also provide translated instructions for tuning into meetings remotely. In a speech on the California Senate floor earlier this month, the bill’s author, state Sen. María Elena Durazo (D-Los Angeles), said it would make it easier for non-English speakers to follow local government meetings and strengthen access to the democratic process.
Pomona Day Laborer Released From ICE Custody Faces Work Ban
A day laborer arrested during an immigration enforcement raid outside a hardware store in Pomona in April has been released from custody, but now faces release conditions that immigrant rights advocates call punitive. They’re also pushing for the release of two other workers still in detention.
Sponsored
Edvin Juarez Cobon and nine other day laborers, or jornaleros in Spanish, were arrested by Border Patrol at a Home Depot on April 22. Cobon was released on bond on June 13 under ICE’s Alternatives to Detention program, after being held for nearly two months at the Imperial Detention Facility. Cobon, who is Guatemalan, is now required to wear an ankle monitor and is prohibited from working.
“I’m worried because my family depends on me,” Cobon said in Spanish. “I’m not someone who stays home. I want to be able to work to make ends meet.” Alexis Teodoro, workers’ rights director at the Pomona Economic Opportunity Center (PEOC), said he’s never seen immigration officials restrict someone’s ability to work. “I think it’s part of the strategy of the administration doing everything it can, every step of the way,” said Teodoro, “to make the lives of immigrants impossible so they can self-deport.”"
Keith Mizuguchi
Jun 26
https://www.kqed.org/news/12046054/new-bill-would-require-imperial-county-to-offer-spanish-translations-of-agendas
#metaglossia_mundus
Since April 2024, the word slop has gained traction online, with online searches for the term AI slop increasing dramatically - EducationTimes.com
"Cambridge Dictionary Includes ‘Slop’ As The Low-Quality Content Generated By AI
Since April 2024, the word slop has gained traction online, with online searches for the term AI slop increasing dramatically
TNN | Posted June 24, 2025 05:19 PM
As a new Artificial Intelligence-related definition of the word slop enters the Cambridge Dictionary, language experts are tracking emerging Artificial Intelligence (AI) terms. Traditionally used to describe liquid or wet food waste, especially when it is fed to animals, slop has found new meaning due to the rapid rise of AI. As per the newly added definition, slop refers to content on the internet that is of very low quality, especially when it is created by artificial intelligence.
Since April 2024, slop has gained traction online, with online searches for the term AI slop increasing dramatically and continuing to grow.
Colin McIntosh, programme manager, Cambridge Dictionary, said, “The updated entry reflects growing concerns about increasing amounts of low-quality content created by AI. It is an important reminder that quality and integrity remain unmistakably human. In an era of machine-made content, those values are more crucial than ever.”
Wendalyn Nichols, publishing manager, Cambridge Dictionary, said, “Think of email in the 90s or hashtag in the 2000s. Now, AI-related words are becoming increasingly part of our everyday lives. It is our job to track terms used in popular culture and add the ones that are likely to have staying power to the Dictionary.”
Emerging new AI words
Other new words about AI identified by lexicographers at the Cambridge Dictionary reflect the evolving English language as technology continues to re-shape our world. Terms such as AI washing, the behaviour of a company or organisation that tries to make people believe that it is using AI to make its products or services better, when really it is not doing this or is only partly doing it; and decel, someone who believes that AI and other new technologies are developing so quickly that they are likely to cause very serious problems and that progress should be deliberately slowed down, are being monitored for possible inclusion in the Cambridge Dictionary.
Other AI terms which are being monitored by Cambridge Dictionary lexicographers include Neocloud, a noun, which is used to refer to a start-up that specialises in AI-based cloud computing. Meta face, noun, which refers to a trend where photos that have been enhanced using AI technology make everyone look similarly flawless and unrealistically beautiful.
The lexicographers are also monitoring abbreviations such as BYOAI, which is an abbreviation for “bring your own artificial intelligence”: the practice of companies saying that employees can use their own artificial intelligence tools when at work. E/acc is a noun and an abbreviation for “effective accelerationism”: a movement that believes AI and other new technologies should be allowed to develop as quickly as possible without any restrictions.
Some other terms include Artificial superintelligence, a noun, which is used to define a type of artificial intelligence that is much more intelligent than any human and can think, act, learn, etc., independently and beyond the abilities of people. Agentic AI, noun, a term used to describe a type of artificial intelligence that can make decisions and take actions without the need for human input. Intention economy, noun, which is used to refer to a system in which AI learns what people are likely to want to buy or do in the future, with companies using the information to create corresponding products and services."
https://www.educationtimes.com/article/newsroom/99738869/cambridge-dictionary-includes-slop-as-the-low-quality-content-generated-by-ai
#metaglossia_mundus
Shoogly, skooshy, beamer and bummer are among 13 new entries added by editors at the OED.
"Oxford English Dictionary is hoaching with new Scottish words
A total beamer: a football fan deals with Scotland's elimination from a tournament
The Oxford English Dictionary is hoaching with new Scottish words - with beamer, bummer and tattie scone among 13 new entries.
There is also a listing for Scotland's shoogly subway trains - not the kind of place where passengers would want to risk using skooshy cream.
Many of the new additions have a food theme, with Lorne sausage, morning rolls and playpiece also making the grade.
Oxford English Dictionary (OED) editors say they will consider a new word for inclusion when they have gathered enough independent examples of its usage "from a good variety of sources".
They said there also has to be evidence that a word has been in use for a "reasonable amount of time".
Some of the words date back to the 1700s and already feature in Scots language dictionaries.
They are among nearly 600 new words and phrases adopted into the OED.
What new Scottish words are in the OED?
The streets are hoaching during the Edinburgh Festival, if you're planning to chum someone along
Aye, right - A sarcastic phrase - used ironically to express contempt or incredulity. Similar to "yeah, right".
Beamer - A term for a flushed or blushing face, especially one resulting from embarrassment. Extended to mean a humiliating or shameful situation.
Bummer - A person in a position of authority. Normally used in the expression "heid (head) bummer". It sometimes has a humorous suggestion of pomposity or officiousness.
Chum - To join someone as a companion, as in "I'll chum you along".
Hoaching - Crowded, swarming or thronging. It is derived from the verb "hotch", meaning "to swarm", dating back to 1797.
Morton's rolls: a well-fired morning roll, perfect for a slice of square sausage
Lorne or Square sausage - Sausage meat formed into square slices that are grilled or fried.
Morning roll - A soft white bread roll, its first usage dating back to Farmer's Magazine in 1801.
Playpiece - A snack taken to school by children to eat during the morning break or playtime. Also used in Northern Ireland.
Shoogly - A word used to mean unstable or wobbly. The OED cites its use to describe Glasgow's unsteady subway carriages.
Skooshy - Applied to anything that can be squirted. Whipped cream squirted from an aerosol can is often called "skooshy cream" north of the border.
Tattie scone - A type of flat savoury cake made with flour and mashed cooked potatoes. Goes nicely with square sausage on a morning roll.
Well-fired - Refers to rolls baked until brown or black and crusty on top."
https://www.bbc.com/news/articles/crenw75rlr1o #metaglossia_mundus
Performance Interpreting Limited are on the Isle of Wight for the first time.
"The Isle of Wight Festival is for everyone - and British Sign Language (BSL) interpreters ensure that those who are deaf or hard of hearing can enjoy the music too.
Performance Interpreting Limited are at the festival for the first time in 2025, providing interpretations of performances.
The Isle of Wight Festival's adoption of BSL sees them join many major festivals in offering the language, which makes the event more accessible.
Lynn, co-ordinator of interpretations at the festival, said: "We’re here on the Isle of Wight for the first time as part of Performance Interpreting Limited.
"We’ll be covering around 10 to 15 acts overall, plus any additional requests that come in."
"Our interpreters are prepared for impromptu, last‑minute requests."
The team, five-strong at the Isle of Wight Festival, attend many events across the summer.
Lynn said: "It’s not just festivals, it’s also concerts and sporting events. We will be there."
Interpreters recently provided BSL at Download Festival, and will continue to sign at events throughout the summer.
BSL is a welcome addition to the Isle of Wight Festival, with many benefitting from the interpretations and able to enjoy the music too.
Follow the County Press's live coverage of the Isle of Wight Festival throughout the weekend"
https://www.countypress.co.uk/news/25256830.bsl-interpretation-isle-wight-festival-2025/
#metaglossia_mundus
The faculty incorporates artificial intelligence into its studies, while its leaders make clear that sound interpreting requires human intervention
"The Soria Faculty of Translation fights back: "AI cannot read between the lines"
The faculty incorporates artificial intelligence into its studies, while its leaders make clear that sound interpreting requires human intervention
The Faculty of Translation and Interpreting was created 29 years ago on the Soria campus.
Milagros Hervada
Soria
23.06.2025 | 12:00
Updated: 23.06.2025 | 19:01
Making yourself understood, and understanding other languages, is ever easier thanks to artificial intelligence (AI), but the results are not always what one would wish. At the Faculty of Translation and Interpreting on the Soria university campus, part of the Universidad de Valladolid, they acknowledge that AI is one more tool in their work, even though it is doing them "harm", but they also stress that "it is impossible to replace the human translator".
And that is because "AI cannot read between the lines", argues the dean of the faculty, Miguel Ibáñez, who adds that "it does not pick up connotations". So "for translating a tourist route it may do, but a publishable text, a book, something that matters for selling a product, needs revising", and only a human can do that, he insists. That is why some companies "are coming back to the human translator, and post-editing is generating more work". In short, "it is not simply a matter of replacing one word with another".
Ibáñez acknowledges that enrolment has fallen in recent years. Last academic year 27 students started the first year, and this summer 35 graduates will leave the faculty, along with the three students on the master's programme, which focuses precisely on multilingual digital environments. The faculty also offers doctoral training, run jointly with the Universidad de Alicante, and many students take that route.
The faculty moves with the times, and artificial intelligence is now part of how it teaches. In fact, the dean notes, AI means more is being translated, and more comfortably, which is why they see it "as an ally rather than an enemy". It is an excellent tool when the machine produces a pre-translation that is then revised, because "it only establishes correspondences, without knowing what it is saying". In other words, it translates words but "without reflection", Ibáñez adds, and without the human translator nuances are lost, "which is fundamental when interpreting, because a word has many senses".
The faculty brings almost 30 years of experience, during which technology has advanced at cruising speed and its classes have adapted to the times. The dean remembers "when students came in with their paper dictionaries". Now everything is electronic. "Translation memory programs appeared and we incorporated them. Now it is AI, and that makes things easier, but it requires revision," he insists. That is why the degree has included post-editing modules for several years now.
...
Enrolment at the Universidad de Valladolid is now under way following the entrance exams (pre-registration runs from 4 June to 4 July 2025, and enrolment for returning students from 24 June to 10 July), and Translation and Interpreting is defending its place: it incorporates the new technology and offers highly personalised training "and a close, approachable atmosphere". In fact, the faculty was designed not to be an overcrowded centre, with around 60 or 70 places.
https://www.heraldodiariodesoria.es/soria/250623/200944/facultad-traduccion-soria-defiende-frente-ia-leer-lineas.html
#metaglossia_mundus
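The workflow the Soria faculty describes (a machine pre-translation that a human then post-edits) can be pictured with a minimal sketch, assuming nothing about the tools actually used in their classes; machine_pretranslate below is a hypothetical placeholder for any MT engine, and the reviewer step stands in for the human post-editor the dean insists on.

def machine_pretranslate(source_text):
    # Hypothetical placeholder: in practice this would call an MT engine.
    return "[machine draft of: " + source_text + "]"

def post_edit(draft, reviewer):
    # The human post-editor restores nuance, connotation and terminology
    # that the machine, which only "establishes correspondences", misses.
    return reviewer(draft)

if __name__ == "__main__":
    draft = machine_pretranslate("Texto de ejemplo.")
    # Stand-in for a human edit: here it simply strips the draft wrapper.
    final = post_edit(draft, reviewer=lambda d: d.replace("[machine draft of: ", "").rstrip("]"))
    print(draft)
    print(final)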
"The future of simultaneous interpretation: AI and the human interpreter
The Future of Simultaneous Interpreting: Artificial Intelligence and the Irreplaceable Role of the Human Interpreter
The simultaneous interpreting sector is undergoing an unprecedented transformation. The emergence of artificial intelligence, remote interpreting platforms, and advances in real-time machine translation are reshaping the way multilingual events are organized. But do these changes represent an opportunity or a threat for professional interpreters?
At Código Lingua, with years of experience in simultaneous interpreting in Valencia and internationally, we analyze the most relevant technological innovations, their apparent advantages, and their real limitations. Because while technology is here to stay, the human factor remains essential to ensure precise, empathetic, and contextualized communication.
What is Changing in the Sector?
Digital Remote Interpreting Platforms
The rise of remote interpreting platforms has been one of the major changes, largely driven by the pandemic. These tools enable the provision of remote simultaneous interpreting services, connecting interpreters and listeners from anywhere in the world. For organizers, they offer a flexible and accessible alternative; for interpreters, an environment that requires new technical skills and constant adaptation.
Generative AI and Real-Time Automated Voice
Another significant innovation is the emergence of automated voice solutions using generative artificial intelligence to produce real-time translation. These tools, integrated into mobile devices or virtual assistants, promise fast translation in multiple languages without human intervention.
Although they are presented as an innovative solution, their use in formal and complex environments is still far from reliable. Simultaneous interpreting is not just about translating words, but about understanding and conveying tone, intent, and context. Using them for travel as a tourist may be somewhat useful, but we insist that, as of today, they are not a reliable tool to replace conference interpreters.
Machine Translation in Virtual Events
Machine translation systems have also proliferated in virtual events, especially webinars and e-learning platforms. Again, these tools can be useful to provide a general idea of the content, but they are far from delivering the necessary quality in situations where nuance, terminological precision, and communicative fluency are essential.
In this changing landscape, the challenge is clear: how to make the most of technology without losing the essence and value of the human interpreter.
Where Is AI Already Being Applied in Interpreting?
Artificial intelligence is already being used in some contexts where accuracy is not critical:
Tourism and basic multilingual customer service
Instant translation mobile apps
Internal corporate events, with low linguistic demands
Video tutorials or pre-recorded presentations with automatic translation
However, when it comes to international conferences, diplomatic negotiations, court proceedings, or technical presentations, the role of a professional interpreter remains irreplaceable. AI lacks the ability to interpret irony, manage non-verbal language, grasp cultural references, or adapt to unexpected situations in real time.
Advantages Promised by New Technological Solutions
New technologies applied to simultaneous interpreting undoubtedly offer several benefits:
Global accessibility: remote interpreting allows working with qualified interpreters regardless of their location.
Reduction of logistical costs: by eliminating travel or on-site equipment.
Speed of implementation: especially for virtual or hybrid events organized on short notice.
Compatibility with new platforms: integration with Zoom, Teams, Meet and other digital tools.
However, these advantages should not be confused with replacement. Technology can facilitate the interpreter’s work, but not replace them. As we know well at Código Lingua, the success of a multilingual event depends on the technical channel and the human and linguistic quality of the professional interpreter.
Risks of Relying Exclusively on AI in Multilingual Events
The advancement of artificial intelligence in the field of remote interpreting has generated great expectations, but also presents significant risks for the quality and reliability of communication in multilingual settings. At Código Lingua, specialists in simultaneous interpreting services in Valencia, we have observed how technology can be useful in very specific contexts, such as for interpreters’ prior preparation, but also how its indiscriminate use can negatively impact the development of a professional event.
When interpretation is entirely delegated to a machine, what is most valuable is lost: the ability to understand nuance, emotion, and context. Below, we analyze the main errors and risks of relying exclusively on AI during congresses, technical meetings, or international conferences.
Common mistakes in automatic interpretation
Ambiguity, irony, technical terms
Automatic interpretation often fails in aspects that are fundamental for a human interpreter. One of the most common mistakes is the poor handling of linguistic ambiguity: words with double meanings, colloquial expressions, or emotionally charged phrases can be misinterpreted or translated literally, leading to confusion or even serious misunderstandings.
Irony and humor are especially difficult for an automated system to grasp. At best, the communicative effect is lost; at worst, the wrong message is conveyed.
In technical or scientific events, specialized terminology requires in-depth knowledge of the subject. Machine translation often fails with technical terms, which compromises attendees’ understanding and the event’s reputation.
That is why relying on simultaneous interpreting services provided by professionals with specific training is essential to ensure the accuracy and coherence of the discourse.
Cultural or institutional context errors
An AI system does not understand the cultural or institutional codes surrounding a speech. In a political congress or an international summit, the human interpreter not only translates but interprets tone, cultural references, formal protocols, or diplomatic expressions.
Simultaneous interpreting in Valencia, when conducted by interpreters with local and international experience, allows the message to be adapted to the specific setting and helps avoid uncomfortable or inappropriate situations. AI, by contrast, lacks this adaptability and sensitivity.
Ethical and privacy implications
Data protection
When using automatic remote interpreting tools, it is important to consider where data is stored, who has access to recordings, and how information security is managed. Many automatic translation platforms use external servers without explicit privacy guarantees, putting the confidentiality of exchanged messages at risk.
In contrast, by hiring simultaneous interpreting services in Valencia with providers like Código Lingua, clients can be sure that all data protection regulations are met and that interpreters operate under strict professional confidentiality agreements.
Confidentiality in legal, political, or business settings
In legal, business, or diplomatic settings, confidentiality is non-negotiable. The involvement of an automated tool may constitute a serious breach of privacy rights, with potential legal or institutional consequences.
Professional interpreters are trained to ensure neutrality, professional secrecy, and respect for the content being interpreted — something that no AI can reliably guarantee. Therefore, in these settings, human presence remains essential.
What happens when technology fails?
Automation is never free from technical risks. Connection outages, synchronization issues, audio errors, or platform malfunctions can leave attendees without access to interpretation during a key presentation. In onsite or hybrid events, this can compromise the entire session.
By contrast, simultaneous interpreting services managed by experienced professionals always include technical support: soundproof booths, specialized technicians, prior testing, and a human team ready to respond to unforeseen events.
Technology can fail. Human judgment cannot. That’s one of the reasons why more and more congress organizers, international forums, and specialized seminars continue to rely on professional interpreters.
The added value of the human interpreter in an automated world
In a context where artificial intelligence and automated platforms are advancing rapidly, it’s worth remembering that simultaneous interpreting is not just a mechanical transfer of words between languages. It is, above all, a human act of understanding, adaptation, and connection.
That is why working with a professional conference interpreter offers a series of irreplaceable advantages, especially in complex multilingual events such as congresses, symposiums, institutional meetings, or international forums. At Código Lingua, with extensive experience in simultaneous interpreting services in Valencia, we witness it every day: the best results are achieved when technology serves human talent — not the other way around.
Real-time adaptability
Program changes
In any event, no matter how well planned, last-minute changes are common: a speaker modifies their presentation, a new order of speakers is introduced, or someone unexpectedly joins the panel. A human interpreter can quickly adapt, reorganize their documentation, and maintain message coherence.
In contrast, an automated tool cannot improvise, anticipate, or modify its behavior based on a new instruction. This real-time adaptability is one of the pillars of the professional value interpreters provide.
Accents, emotions, interruptions
Simultaneous interpreting takes place in very diverse contexts, where speakers may have different accents, intonations, or levels of clarity. Moreover, the emotional component of a speech is often just as important as its content.
A professional interpreter knows how to handle a difficult accent, interpret a contained emotion, or resolve a sentence interrupted by audience reaction. That level of emotional and adaptive understanding is, for now, far beyond the reach of artificial intelligence.
Cultural knowledge and empathy
A key element that distinguishes a human interpreter from any automated system is their cultural knowledge and capacity for empathy.
In multilingual events involving participants from different countries, intercultural sensitivity is essential. The conference interpreter doesn’t just translate words: they contextualize, soften expressions, respect cultural protocols, and prevent misunderstandings stemming from social or institutional differences.
Thanks to their training and experience, a professional interpreter can detect when a phrase may be inappropriate in another culture and adapt it in real time to preserve the speaker’s intent without causing friction. This deeply human role is essential for successful international communication.
The interpreter’s presence as a guarantee of successful communication
Beyond technique, the presence of the human interpreter at the event inspires confidence. For both the speaker and the audience, knowing that a real person is in charge of the interpretation conveys assurance, professionalism, and responsiveness.
Additionally, in simultaneous interpreting services in Valencia, working with local interpreters who understand the environment, the audience, and the specific sector of the event adds unquestionable value.
At Código Lingua, we advocate for a smart hybrid model, where technology is a useful tool but never a substitute for a trained professional. Because simultaneous interpreting is not just translation: it is understanding, conveying, and connecting.
Simultaneous interpreting in the digital age
Digitalization has opened up new possibilities in the field of simultaneous interpretation, especially with the rise of hybrid events. This format, which combines in-person and remote participation, requires an equally versatile linguistic solution that can maintain interpretation quality regardless of the channel.
In this context, simultaneous interpretation services for hybrid events must rely on technological tools, yes, but without losing the value of the human factor. The key to success lies in combining the support offered by new technologies with professional interpreters, who bring precision, empathy, and communication control.
How to choose the most suitable service depending on the type of event
When planning an international event, choosing the most suitable type of simultaneous interpretation service depends on several factors:
Is it in-person, virtual, or hybrid?
Which digital platforms will be used?
How many languages will be interpreted and in what format?
What is the profile of the audience and the speakers?
What is the level of technical complexity of the content?
Working with an experienced provider like Código Lingua allows for a complete and personalized evaluation, adapting the simultaneous interpretation services in Valencia to the specific needs of each client.
Technological evolution has transformed the way we communicate, but in the field of simultaneous interpretation, the human factor remains essential. Neither artificial intelligence nor automatic platforms can match the precision, empathy, and adaptability of a professional interpreter trained to act in real time, understand cultural nuances, and ensure smooth communication.
The value of professional interpretation
In hybrid events, technical conferences, or institutional meetings, relying on simultaneous interpretation services in Valencia provided by professionals like those at Código Lingua is a guarantee of quality, reliability, and the success of your multilingual event."
https://codigolingua.com/en/future-of-simultaneous-interpretation/
#metaglossia_mundus
"If language is power, why is Australia going quiet?
26 Jun 2025 | Francesca Ciuffetelli
In 1992, Paul Keating said, ‘Asia is where our future substantially lies’. Decades later, the rhetoric remains, but the follow-through is still lacking. Despite Australia’s pivot to the Indo-Pacific, our cultural competency is inhibiting our progress, and our next generation of leaders is even less prepared.
National security decisions are often shaped by assumptions grounded in one’s own cultural framework. Misinterpreting another country’s motives, communication styles, or strategic behaviours due to cultural blind spots can escalate tensions or lead to strategic miscalculations. The cause of such blind spots lies within education systems that fail to equip future leaders with relevant regional knowledge and language skills, leaving them ill-prepared to understand or engage effectively with key Indo-Pacific partners.
In the 1990s and early 2000s, Asia-focused engagement in Australia rose, with a particular interest in educating our youth. Under former prime minister Keating, programs such as the Asia Education Foundation were launched, Indonesian became one of the most taught Asian languages, and regional literacy was seen as a national strategic asset. At its peak in 2002, more than 1,000 Victorian Year 12 students studied Indonesian, while more than 300 did so in New South Wales. These numbers reflected a clear priority: building a generation of Australians who understood our region not just strategically, but linguistically and culturally.
Two decades later, this promising momentum has collapsed. By 2022, only 387 students in Victoria and just 90 in NSW studied Indonesian at Year 12 level, a decline of more than 60 percent. While the Asia Education Foundation and other programs still exist today, numbers have plummeted due to a shifting policy focus, low visibility and a lack of strong political advocacy.
Conversely, interest in studying Indonesian has grown in China, with at least 19 Chinese universities offering related modules and exchange programs in Indonesia. This shift reflects a growing recognition of Indonesia’s strategic importance to China. It’s a deliberate investment in future regional understanding, one that recognises language and education as essential tools of strategic influence.
For Australia, this trend presents a strategic challenge. As China equips a new generation of students with the linguistic and cultural tools to engage directly with Indonesian counterparts, it is also enhancing its ability to build trust, shape regional narratives and embed itself more deeply in key diplomatic, economic and security conversations. Without a comparable level of cultural and linguistic capability, Australian officials and institutions may find it increasingly difficult to engage with nuance, foster sustained partnerships, or counter competing narratives in the region. Over time, this capability gap could erode Australia’s relative influence in Indonesia and limit its ability to respond effectively to regional developments.
From a national security perspective, this matters. Language proficiency is not simply a communicative skill; it is a strategic enabler. Security outcomes improve when decision-makers possess cultural intelligence: the ability to interpret behaviour through the lens of another’s worldview. This intelligence is cultivated early, through education that prioritises understanding cultures and languages of Australia’s strategic region, rather than defaulting to traditional Eurocentric languages currently popular within our education system such as French and Italian.
National security training must begin long before entry into government. If Australia is serious about its place in the Indo-Pacific, we need to inspire the next generation to engage with the region. That means embedding cultural competency into the classroom. Expanding regional studies and prioritising languages such as Indonesian and Mandarin should be a national priority, not an afterthought. These languages reflect our geopolitical reality and are key to fostering culturally literate analysts, diplomats and policymakers.
To reverse declining enrolments, students need to see tangible value in choosing these pathways. That could mean higher university admission bonuses for strategic languages, scholarships for study abroad, or guaranteed internships in government and industry for high-achieving language students. These incentives work. A study found that over half of senior students said a university admission bonus had influenced their decision to continue language study into Year 12. The same paper also confirmed that bonus points, clearer university pathways and strategic messaging helped boost enrolments. When students see tangible academic and career value, they are more likely to commit to languages that reflect Australia’s regional future.
At the same time, we must support the teachers delivering this capability. A 2021 report commissioned by the Asia Education Foundation highlighted persistent challenges, including teacher shortages, limited training and a failure to properly integrate Indonesian studies into the curriculum.
Continuing to prioritise such European languages as French and Italian, while culturally enriching, risks reinforcing a Eurocentric bias that no longer aligns with Australia’s strategic future. If Australia wants to lead in the Indo-Pacific, we must invest in the cultural and linguistic capability of our youth. This is not just about language; it’s about building a generation that understands, respects, and can navigate the complexities of our region. By incentivising students and empowering teachers, we can turn cultural competency from a gap in our national security into one of our greatest strengths."
https://www.aspistrategist.org.au/if-language-is-power-why-is-australia-going-quiet/
#metaglossia_mundus
"Feature: Arabic-dubbed Chinese animation "Ne Zha 2" premiers in Riyadh
Source: Xinhua | Editor: huaxia | 2025-06-26 19:24:30
RIYADH, June 26 (Xinhua) -- A soft ripple of guzheng music floated through the foyer of Reel Cinema in northern Riyadh on Wednesday night as dozens of movie-goers posed beneath a towering poster of Ne Zha 2, the first time the Chinese animation blockbuster has reached Saudi screens in Arabic.
Among the early arrivals was Bushra al-Dawood, a journalist for the Saudi outlet Gorgeous. To celebrate the premiere, she wore a black abaya embroidered with crimson blossoms and matching red shoes. "A nod to the fiery spirit of Chinese culture and Ne Zha," she smiled. "The film's landscapes are so vivid that I can't wait to travel there and see those mountains and rivers for myself."
Inside the 200-seat auditorium, laughter, gasps and spontaneous applause punctuated the two-hour screening of the Arabic-dubbed edition, which blends standard Arabic with Saudi, Egyptian and other dialects. When the lights came up, clusters of children rushed back to the poster for selfies, while adults lingered in animated debate about the plot's twists and mythical creatures.
"The movie is visually stunning, the story is beautiful, and I had no trouble following it thanks to the Arabic dub," said Shahad, a fourth-year Chinese-language major at King Saud University. "I saw posters of Ne Zha 2 all over China during a summer camp but never caught a screening there. The moment I heard it would open in Riyadh, I signed up right away. I'll be back with my family."
The film's Saudi distributor, CineWaves Films, believes the combination of state-of-the-art animation and localised dialogue will broaden its appeal.
"'Ne Zha 2' is a high-quality, truly original work that speaks to audiences everywhere," said Faisal Baltyuor, CineWaves chairman. "By dubbing it into Saudi dialect we remove the language barrier and make the story even more inviting for local viewers."
Directed by Chinese filmmaker Jiaozi, Ne Zha 2 continues the coming-of-age saga of the rebellious boy-god first introduced in 2019's record-breaking Ne Zha. This time the stakes are higher, the universe larger and the visuals more ambitious, with richly textured dragons, fiery battles and sweeping panoramas rendered in full 3-D.
Saudi animation veteran Malik Nejer, who supervised the Arabic version, said selecting different dialects for rival clans helped newcomers navigate a world rooted in Chinese folklore.
"Many Arab viewers don't know Chinese mythology," Nejer explained. "So we matched each on-screen tribe with a distinct Arabic dialect. It guides the audience through the plot and mirrors the linguistic diversity of our own region."
He also mentioned that when concepts had no exact equivalent, the team searched for cultural parallels, "letting viewers feel an instant connection."
Backed by CineWaves and Dubai-based PBA Entertainment, Ne Zha 2 opens nationwide in Saudi Arabia on Thursday and will roll out to the UAE, Bahrain, Oman, Kuwait, and Qatar in early July.
"We've seen Chinese products expand abroad in waves. Now it's time for cultural exports," said PBA chairman Shi Kejun. "With Saudi-China ties deepening, I'm confident we'll soon see more Chinese films not only screened here but even shot on Saudi locations."
Chinese Ambassador to Saudi Arabia Chang Hua called the premiere "a highlight of the 2025 China-Saudi Cultural Year," which also marks 35 years of diplomatic relations.
"By hearing the story in their own language, Saudi audiences can better appreciate China's rich mythological heritage," Chang told Xinhua. "We hope the film sparks wider interest in Chinese culture and inspires further collaboration in creative industries."
As the last viewers drifted out into the warm Riyadh night, reporter Bushra al-Dawood adjusted her red-blossom abaya and waved goodbye: "Ne Zha's courage will stay with me, and my next stop would be those beautiful Chinese landscapes!" Enditem."
https://english.news.cn/20250626/01632ddf08e1406c83deb3e2d1aa3eb6/c.html
#metaglossia_mundus
"La créolisation de la langue française
Le mot "créole" provient du mot criollo, du portugais crioulo. Ce terme désigne ce qui est né dans la colonie par opposition à ce qui vient d'Europe. Jean-Luc Mélenchon l'utilise à mauvais escient
Notre langue est partagée par bien des pays, et vu la natalité de certains le français va devenir après le chinois, une des langues la plus parlée dans le Monde. En 2022 selon l’Observatoire de la langue française, le français serait toujours la cinquième langue la plus parlée après l’anglais, le chinois, l’hindi et l’espagnol, avec 321 millions de locuteurs, même si le rythme de croissance se ralentit… 29 pays ont en effet le français comme langue officielle, ce n’est pas rien
Emmanuel Macron énonçait même le 28 novembre 2018 lors de son discours au Burkina Faso que « le français sera la première langue de l’Afrique et peut-être du monde si nous savons y faire dans les prochaines décennies ». Poudre de Perlin Pinpin...
En vérité (comme il le dit au lieu d'user de dans les faits) il y a plus de locuteurs ailleurs qu'en France même.
Lors d’un colloque sur la francophonie, organisé à l’Assemblée Nationale, le 18 juin 2025, Jean-Luc Mélenchon a notamment évoqué l’opportunité qu’il y aurait à donner un autre nom à la langue française, langue créole selon lui : « Si quelqu'un pouvait trouver un autre nom pour qualifier notre langue, il serait le bienvenu »
S’il s’en était tenu à cette formule je n’aurais pas applaudi mais c'est audible, même si je préfère reconnaître toujours l’histoire de la langue et de conserver son nom d’origine… Même si la terminologie serait fausse à un moment donné (ce qui reste à démontrer) du fait du nombre de citoyens réellement français, garder toujours l’appellation contrôlée me paraît saine. J’aime bien raconter que Jean-Marie Chauvet, quand il a «découvert» la grotte, a estimé qu’une représentation sur un pendant rocheux, mi homme, mi animal, selon lui, devait être baptisé « panneau du sorcier ». La plupart des scientifiques estiment cette interprétation erronée. Mais comme dit Jean Clottes, comme le nom est donné on n'en change pas.
Jean-Luc Mélenchon is then reported to have said: "If we want French to be a common language, it must be a creole language." One has to wonder on the strength of what expertise he can say such a thing.
Perhaps he is simply trying to goad his opponents into saying something stupid?
These remarks drew a sharp reaction from the right, notably from Gérald Darmanin, who insisted that "the French language belongs to the French" and that it is "our most precious heritage."
We can all agree that the Minister of Justice is not a good judge of these questions either, and since that is not my point, I will not explain here why both assertions are null and void.
Let us talk about LFI's discourse, since the inimitable Sophia Chikirou defended her mentor at the Assemblée Nationale. The fact that French people from France (a minority of speakers, and the long-established ones) want to seize on this subject proves that LFI, like the right, wrongly believes itself master and owner of this language. And in that respect their discourse is as old, rancid and rotten (I am not afraid of words) as that of the false centre, the right, or the far right flirting with fascism.
Those who want to keep the language locked up under the deeply reactionary Académie Française... (which counts few competent linguists or grammarians among its members) refer to a moment in our history when the language was "fixed" on the model of how people spoke at the court of Louis XIV. The French language, with its grotesque absurdities (particularly in spelling), was first and foremost a weapon of oppression and of war against the people (who spoke patois). The Republic was not lenient on this point. And when Mélenchon, from his national vantage point, wants to propose changing the name of the language, he would do better to keep quiet for once and let the speakers, the others, do the talking (we have heard enough from him).
Aya Nakamura, during the Olympic opening ceremony, did not sing in a creole or creolized language... SHE sang HER language, which becomes OURS thanks to the genius of her talent... and to our covers of it. I still weep at her masterful demonstration.
Artists repair the world, and that day, with the complicity of the Garde Républicaine, she reconciled France with its history, today's and yesterday's, in a single sequence, despite the still-open wounds of colonization...
Wanting to speak in other people's place, to name things in the place of those who actually use French, is to play at being a reactionary, conservative academician. Mélenchon is not the king of France. Nor is Macron (though he fantasizes about it).
Sticking a new label on a product that needs reforming changes nothing. At worst it is a manoeuvre to deceive the user, the consumer (we see this every day in marketing). It is not the name Maison Perrier that changes Perrier's brand image; it is the law that forces Nestlé to change the brand, since its water has long since ceased to be natural. Only fascists believe in the purity of the language.
Where is the necessary political project that would make French a universal language whose rules could change with everyone's assent? Its folkloric spelling and repressive grammar rules could change... But who will say so? The French-speaking linguists of the planet, certainly.
This polemic is absurd; Jean-Luc Mélenchon launched it for reasons quite other than the ones it proclaims. He is wrong to seek division.
Creole is a language of resistance tied to the history of colonization in specific places, and not in Quebec, for example.
The respect owed to speakers in every country would be to let them manage this themselves.
English has borrowed enormously from French, far more than the other way round! What should they call their language, kneaded and reworked in Singapore, having become practically the official language in the Netherlands, or in Brussels' European quarter, where you have to order your beer in English? The scandal is that French does not hold first place in the European Parliament even though the English are no longer there, and even though the treaties require it. But Mélenchon will not defend that position; it is not part of his agenda.
The respect owed to Creole means letting those who have transformed our language make their own claims, and not putting ourselves in their place. I repeat: it is certainly not for a Frenchman from France to make this proposal, or even to talk about it.
There were other, more pressing and more serious subjects to address at this colloquium, notably the cuts in funding and the weakening of the Alliances Françaises, and therefore of our "soft power" (to speak the Globish whose roots are neither "ours" nor neutral).
And why does Mélenchon not shout instead about a subject far more important than all the others; why does he not talk about "the sixth extinction," which De Villepin is about to take up...
ALL this noise about nothing is contemptible."
Pierre Oscar Lévy (pol), documentary filmmaker, etc...
https://blogs.mediapart.fr/pol/blog/290625/la-creolisation-de-la-langue-francaise
#metaglossia_mundus