Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts, with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
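A rough way to see why a morphology-aware tokenizer matters (this is an illustrative sketch, not Fanar's actual tokenizer; both Hugging Face model names below are assumptions chosen for contrast) is to compare how an English-centric tokenizer and an Arabic-trained tokenizer segment the same Arabic sentence. The Arabic-aware vocabulary usually yields far fewer, more meaningful subword pieces.

```python
# Illustrative comparison only: token counts for one Arabic sentence under an
# English-centric tokenizer vs. an Arabic-trained one. Neither model is Fanar's.
from transformers import AutoTokenizer

generic = AutoTokenizer.from_pretrained("gpt2")                            # English-centric byte-level BPE
arabic = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")   # Arabic-aware WordPiece

sentence = "يذهب الطلاب إلى المكتبة كل صباح"  # "The students go to the library every morning"

print("English-centric tokenizer:", len(generic.tokenize(sentence)), "tokens")
print("Arabic-trained tokenizer: ", len(arabic.tokenize(sentence)), "tokens")
# Fewer, more meaningful pieces per word is the kind of gain a tokenizer
# engineered for Arabic morphology is aiming at.
```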
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with 235 crore rupees (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology in Bombay and also involves its sister organizations in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion-parameter model developed by the Kenyan foundation Jacaranda Health to provide new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
"Mateo Pierre, traductor editorial: "Traducir es enfrentarse siempre a un abismo"
Empezó estudiando Traducción e Interpretación en Salamanca y, tras un máster en Gestión del Patrimonio Cultural, llegó su primer encargo. Desde entonces, no ha dejado de traducir, corregir, enseñar ni participar en la cadena del libro
Mateo Pierre empezó estudiando Traducción e Interpretación en Salamanca y, tras un año de paréntesis para repensar su rumbo, se embarcó en un máster en Gestión del Patrimonio Cultural. Allí mismo, casi por casualidad, llegó su primer encargo profesional: dos libros, uno a propuesta de una profesora y otro después de una conversación con un editor. Desde entonces no ha dejado de traducir, corregir, enseñar ni participar en la cadena del libro, también desde una caseta de la Feria de Madrid.
"He sido librero, profesor, traductor, corrector. No me he podido dedicar solo a traducir, como me gustaría, pero todo eso me ha permitido entender mejor cómo funciona el sector". Aunque se considera ante todo traductor editorial, no cree que eso se limite solo a la literatura. Traduce también revistas, ensayos, cómic y recetarios. "Todos los géneros se enriquecen mutuamente", dice, y esa variedad le permite saltar de un tono a otro, de una voz a otra, sin agotarse en un único tema. "A veces uno se alivia cambiando de género", añade.
Traducir, para él, no es repetir, sino interpretar. Una obra creativa en sí misma: "Con las lenguas cercanas, como el francés, es muy fácil hacer un karaoke del original. Pero eso no sirve. Hay que andarse con mil ojos y desconfiar mucho de uno mismo, y eso también es una lección de vida".
Con experiencia en inglés, francés y alemán, cree que el traductor vive en la duda constante. No tanto por ignorancia como por una desconfianza fértil, que obliga a mantenerse alerta. "Traducir es enfrentarse siempre a un abismo. Nunca estás seguro al cien por cien".
Preocupado por la precariedad del sector, sobre todo con el impacto de la IA, defiende un mayor reconocimiento del oficio, no solo con laureles: "No podemos pagar el alquiler enseñando nuestro nombre en las portadas". Termina lamentando la fuga de talento dentro del sector y la importancia de seguir formándose y de asociarse: "Solo no se puede, pero con amigos y compañeros, sí"." Carmela García Prieto Madrid29 JUL 2025 4:50 https://www.elperiodico.com/es/ocio-y-cultura/libros/20250729/mateo-pierre-traductor-editorial-entrevista-119906152 #metaglossia_mundus
"MONTRÉAL — La Cour suprême amorce finalement la traduction partielle «de certaines des décisions les plus importantes rendues par la Cour avant l’entrée en vigueur de la Loi sur les langues officielles en 1970».
Dans un communiqué publié mardi, le Bureau du registraire de la Cour suprême affirme que «cette initiative est entreprise à l’occasion du 150e anniversaire de la Cour suprême du Canada en 2025, année durant laquelle la Cour commémore son histoire et son héritage en tant qu’institution qui défend la primauté du droit, inspire la confiance du public et sert notre communauté»...
Quelque 6000 décisions unilingues rendues avant 1970, certaines en français, mais la plupart en anglais, se trouvaient sur le site de la Cour suprême et, donc, représentaient une forme de communication administrative qui ne répondait pas aux exigences de la Loi sur les langues officielles, comme l’avait noté le commissaire Théberge.
Manoeuvre d’esquive
Après le dépôt de la procédure par DCQ, le plus haut tribunal avait décidé d’esquiver l’obligation légale en retirant simplement l’ensemble de ces décisions de son site web, faisant valoir, d’une part, que la traduction serait trop coûteuse et, d’autre part, que ces décisions étaient disponibles sur des sites web privés tels que la banque de données juridiques CanLII.
Le commissaire Théberge n’avait pas caché son irritation face à cette façon de se soustraire à la lettre et à l’esprit de la loi. DCQ avait pour sa part annoncé qu’il maintenait ses procédures dans le but d’obtenir de la Cour fédérale qu’elle confirme que la Cour suprême était bel et bien en infraction avant de retirer ces décisions.
24 décisions sur 6000
L’annonce de mardi explique que le comité indépendant chargé de «sélectionner les décisions les plus importantes d’un point de vue historique ou jurisprudentiel rendues par la Cour avant 1970» a présenté son rapport le 6 juin dernier et que celui-ci «énumère 24 décisions importantes de la Cour suprême qui, de l’avis du comité, devraient être traduites».
Cependant, avant même la création du comité, le Bureau du registraire avait commencé la traduction de l’affaire Roncarelli c. Duplessis, une des décisions unilingues anglaises invoquées par DCQ en raison de son importance en matière de liberté de religion et parce qu’elle représente toujours une référence en droit constitutionnel en ce qui a trait à la notion d’État de droit. Le Bureau du registraire affirme que cette première traduction «devrait être disponible tant en français qu’en anglais sur le site web de la Cour plus tard cet automne».
Le Bureau du registraire précise cependant, comme le juge en chef de la Cour suprême Richard Wagner l’a fait à quelques reprises, que «ces décisions n’auront toutefois pas un caractère officiel, étant donné qu’elles ne peuvent pas être approuvées par les juges qui les ont rendues, ceux-ci étant tous décédés»."
Pierre Saint-Arnaud, La Presse Canadienne 29 juillet 2025
https://lactualite.com/actualites/la-cour-supreme-ne-traduira-que-24-des-6000-decisions-unilingues-anterieures-a-1970/
#metaglossia_mundus
"Concours d’accès à l’Ecole Supérieure Roi Fahd de Traduction au titre de l’année 2025/2026.
Créée en 1983, l’École Supérieure Roi Fahd de Traduction de Tanger (ESRFT) accueille sa première promotion en septembre 1986.
La ville de Tanger, grâce à son emplacement stratégique au carrefour de l’Europe, de l’Afrique et du Monde Arabe, a été choisie comme site idéal pour l’établissement de l’École.
Tirant parti de cet avantage, l’École remplit efficacement sa mission de promotion de la communication culturelle et civilisationnelle, étroitement liée à la traduction.
Inscription Concours ESRFT Tanger 2025/2026
L’École Supérieure Roi Fahd de Traduction annonce l’ouverture du concours d’accès aux filières de traduction écrite et de traduction de conférence pour l’année universitaire 2025-2026. Ce concours se déroulera à Tanger les 10 et 11 septembre 2025 et s’adresse aux titulaires d’une licence ou diplôme équivalent, selon les filières et langues spécifiées...
Les candidats doivent être titulaires de l’une des licences suivantes, selon la filière visée:
Filière: Traduction écrite Arabe – Amazighe – Français
Licence en études amazighes.
Filière: Traduction de conférence Arabe – Anglais – Français
Licence en études anglaises.
Licence en études françaises.
Licence en études arabes.
Licence en études espagnoles.
Licence en études allemandes.
Licence en études italiennes.
Filière: Traduction écrite Arabe – Français – Anglais
Licence en études françaises.
Licence en droit privé ou public en langue française.
Licence en sciences politiques.
Licence en sciences économiques.
Les épreuves du concours se composent de trois tests écrits, chacun d’une durée de deux heures:
Traduction de l’arabe vers la langue de spécialité (français, anglais, espagnol ou allemand).
Traduction de la langue de spécialité (français, anglais, espagnol ou allemand) vers l’arabe.
Traduction du français ou de l’anglais vers l’arabe.
Inscription en ligne obligatoire sur la plateforme suivante: https://concours.esrft.ma
Les candidats devront y téléverser les documents suivants au format PDF:
Copie du diplôme de licence ou certificat d’inscription au 6ᵉ semestre.
Relevés de notes de tous les semestres de la licence.
Copie du baccalauréat.
Curriculum Vitae.
Copie de la carte nationale d’identité.
La période de candidature est ouverte du 28 juillet au 15 août 2025.
Seuls les candidats présélectionnés seront convoqués pour déposer leur dossier physique auprès du service de la scolarité de l’École.
Les résultats de la présélection seront publiés au plus tard le 3 septembre 2025 sur le site: https://esrft.uae.ac.ma
Déroulement du concours:
Les épreuves écrites auront lieu les 10 et 11 septembre 2025 à Tanger.
Les entretiens oraux pour les candidats admissibles se tiendront entre le 24 et le 25 septembre 2025.
Les résultats finaux seront publiés sur le site de l’école au cours de la dernière semaine de septembre 2025."
https://www.dreamjob.ma/orientation/inscription-concours-esrft-tanger-2025-2026/
#metaglossia_mundus
"AI models are neglecting African languages — scientists want to change that
Scientists record 9,000 hours of languages spoken in Kenya, Nigeria and South Africa as free-access training data for AI models.
By Sarah Wild
More than 2,000 languages spoken in Africa are being neglected in the artificial intelligence (AI) era. For example, ChatGPT recognizes only 10–20% of sentences written in Hausa, a language spoken by 94 million people in Nigeria. These languages are under-represented in large language models (LLMs) because of a lack of training data. But researchers across Africa are changing that.
Language specialists have recorded 9,000 hours of people speaking different African languages and transformed the recordings into digitized language data sets. The researchers, who are part of a research project called African Next Voices, released the first tranche of data this month from what is the largest AI-ready language-data-creation initiative for multiple African languages.
The data will be open access and available for developers to incorporate into LLMs, such as those that convert speech into text or provide automatic language translation.
“It’s really exciting to see the improvements this is going to bring to the modelling of these specific languages, and how it’s also going to help the entire community that is working across language technologies for Africa,” says Ife Adebara, chief technology officer at the non-profit organization Data Science Nigeria, based in Lagos, who is co-leading the Nigerian arm of the project. Languages in Nigeria being recorded include Hausa, Yoruba, Igbo and Naijá.
“Under-representation of local languages in AI models remains a key challenge in scaling the most-promising artificial intelligence tools,” says Sanjay Jain, director for digital public infrastructure at the Gates Foundation, based in Seattle, Washington, which has funded the project with a US$2.2-million grant.
18 languages
The African Next Voices project involves recording 18 languages spoken in 3 countries: South Africa, Kenya and Nigeria. The recordings are then transcribed and translated by people, reviewed and quality checked.
[Image caption: Researchers take part in a transcription workshop at Dedan Kimathi University of Technology in Nyeri, Kenya. Somali, Kikuyu and Maasai speakers were represented in the training. Credit: African Next Voices: Pilot Data Collection in Kenya]
The researchers showed individuals from diverse communities images and asked them to describe what they saw, explains Lilian Wanzare, a computational linguist at Maseno University in Kenya, and Next Voices project lead for Kenya, where the languages spoken include Dholuo, Kikuyu, Kalenjin, Maasai and Somali.
The focus has been to generate databases of everyday language, she says. “There’s a huge push towards localised data sets, because the impact is in capturing the people within their local settings.” For example, “if you build a model for farmers to help with decision-making, it relies on local data”, such as soil conditions and pesticides that work in the area, Wanzare explains.
Whereas the principal investigators in each country chose the subject areas for their data sets, the projects needed to focus on key development sectors, such as health, agriculture and education, says Jain.
Vukosi Marivate, a computer scientist at the University of Pretoria and the project lead for South Africa, says that his team is working with a consortium of organizations to create AI language models with the data. He hopes that technology businesses can then improve on those models. South Africa is collecting data for Setswana, isiZulu, isiXhosa, Sesotho, Sepedi, isiNdebele and Tshivenda."
https://www.nature.com/articles/d41586-025-02292-5
#metaglossia_mundus
Antidote 12 enhances its correction suite with AI: rephrasing, express correction, dictionaries and guides to polish your writing.
"Antidote 12: grammar correction gets stronger with AI
BYOTHE
28 JULY 2025
When you regularly publish content on a website, the quality of the language is undeniably a fundamental criterion. One misplaced mistake, a clumsy turn of phrase or a poorly chosen word, and your credibility takes a hit.
That is why, for several years now, the Antidote software has been part of my writing process. Since version 9, which I have been using since 2016, the tool has kept improving while remaining easy to pick up. Antidote 12, available since October 2024, brings some very welcome new features without upending old habits. Here is an overview...
A corrector as formidable as ever
The heart of the software is, of course, the grammar checker, one of the most comprehensive on the market. It integrates easily into your everyday tools (word processor, email client, browser...), automatically detects errors and flags them with a clear colour code:
Red for grammar and spelling mistakes
Orange for typographical errors
Blue for style issues
Each error comes with an explanation, so you can quickly understand why it is incorrect and how to fix it. A single click is enough to accept a suggestion, and your text is updated. The approach is pedagogical, smooth and, above all, very reassuring.
Rich, interconnected dictionaries
Antidote 12 does not stop at correcting: it also offers a set of 10 interconnected dictionaries covering all the needs of writers, professional or not. They include:
Definitions
Synonyms and antonyms
Word families
Co-occurrences
Semantic fields
Conjugations
Rhymes
Quotations
Etymology
You can browse from one word to another, listen to pronunciations, consult examples, or explore contextual usage. It is a valuable tool for enriching your vocabulary, avoiding repetition, or simply refining your style.
One of Antidote's great strengths is its commitment to explaining the rules rather than imposing corrections without context. To that end, it offers 11 language guides covering every aspect of the language:
Spelling
Grammar
Syntax
Style
Typography
Lexicon
Writing
Punctuation
Phonetics
History of the language
Language points (with columns published by Druide)
You can access them freely or via a tooltip that takes you straight to the relevant rule during a correction. An excellent way to learn while correcting your texts.
The Anti-Oups! filter: no more slip-ups in your emails
This filter integrates with your email client to analyse messages before they are sent. It checks for mistakes, but also for elements that are often forgotten, such as attachments. How many times have we promised a file in an email... without attaching it?
With Anti-Oups!, that kind of mistake becomes much rarer.
With version 12, Anti-Oups! goes a step further and analyses the tone of your messages to detect wording that might offend your correspondent... Antidote becomes a very good tool for avoiding conflict!
Of course, if you do not want to use this feature, it can be disabled.
Antidote 12: two new features that change the game
Version 12 of Antidote stands out above all for two new features that clearly modernize the user experience: AI-assisted rephrasing and express correction in browsers.
Smart, effective rephrasing
Sometimes you can feel that a sentence is not working, but you are not quite sure how to improve it. That is exactly where AI rephrasing comes in. Select a sentence, and Antidote offers to:
shorten it
rewrite it in a more fluid style
make it inclusive
or improve it overall
It all happens in a few seconds, and the result stays consistent with the overall tone of the text. It is a formidable tool for refining content without wasting time rewriting everything. You can rephrase word by word, sentence by sentence, or in whole blocks. It is fast, natural and very useful.
Express correction in the browser
Another particularly practical new feature: instant correction in browser text fields (such as those in forms or CMSs). A discreet icon tells you whether the field is being corrected automatically. And here again, Antidote directly offers:
correction or rephrasing suggestions
an option to continue the correction in Antidote if you need a more thorough check
At the bottom right, Antidote's flask-shaped icon indicates that express correction is active
This in-browser correction mode is particularly useful for people who work in web interfaces, write content online or fill in important forms. It saves precious time while safeguarding the quality of the text.
Still-reasonable pricing
Antidote is now offered as an annual subscription, with 4 plans available:
For one language (French or English):
Antidote+ individual: €59/year
Antidote+ family (up to 5 users): €99/year
For two languages (French and English):
Antidote+ individual: €89/year
Antidote+ family (up to 5 users): €199/year...
https://byothe.fr/antidote-12-la-correction-grammaticale-avec-ia/
#metaglossia_mundus
"Postgraduate Programme in Translation Technology: Translation Technology Modules
Postgraduate Programme in Translation Technology: Translation Technology Modules (Brussels) (30 sp.)
Location: Brussels
Level: Postacademic education
Credits: 30 ECTS credits
Language of instruction: English
Faculty: Faculty of Arts
The Postgraduate Programme in Translation Technology is a leading international programme at the Faculty of Arts of KU Leuven which provides students and professionals with the necessary technological knowledge and ICT competences to pursue a career in translation and localisation in the 21st century.
What can you find on this webpage?
Our (future) students can find information about the programme, admission requirements, objectives and evaluation.
All other information with regard to the study programme can be found at https://www.arts.kuleuven.be/english/education/PGTT."
https://onderwijsaanbod.kuleuven.be/opleidingen/e/SC_56892518/diploma_omschrijving
#metaglossia_mundus
"Eriksen Translations has once again earned a spot on CSA Research's annual list of top global and regional language services providers.
PRNewswire-PRWeb/ -- Eriksen Translations, a leading provider of language services, has been named by CSA Research as one of its 2025 Global and Regional Market Leaders. Based on independently verified 2024 revenue data, the recognition highlights companies making a significant impact on the global language services and technology market, as detailed in CSA Research's Listing of Global and Regional LSPs (2025).
Since 1986, Eriksen Translations has helped organizations engage multilingual audiences through tailored translation and localization solutions. Serving a broad range of sectors—including finance, insurance, healthcare, education, law, and the arts—Eriksen supports leading companies, government agencies, nonprofits, NGOs, and renowned museums and cultural institutions.
"We're honored to be recognized by CSA Research during a time of transformation in our industry," says Vigdis Eriksen, Founder and CEO. "None of this would be possible without the people behind it—our dedicated staff, our expert linguists, and the continued trust of our clients."
In the 2025 Listing of Global and Regional LSPs, Eriksen was ranked #64 globally and #19 out of 26 companies ranked in North America. Eriksen has been included in CSA Research's rankings eleven times, reflecting its longstanding role in the language services industry.
"We're honored to be recognized by CSA Research during a time of transformation in our industry," says Vigdis Eriksen, Founder and CEO. "None of this would be possible without the people behind it—our dedicated staff, our expert linguists, and the continued trust of our clients."
Dr. Arle Lommel, senior analyst at CSA Research, comments, "Organizations included in this year's language services and technology market study are helping define what it means to deliver scalable, AI-enhanced language solutions in today's global economy."
He adds, "We're seeing the industry evolve beyond traditional localization. Success now requires natural language processing insight, adaptive technologies, and broader integration with enterprise content operations, what we define as the post-localization era."
About the Research
CSA Research's listing of Global and Regional LSPs (2025) is based on independently verified revenue data and includes both global and regional market leaders across seven regions. The study is the most comprehensive of its kind, involving hundreds of companies of all sizes across the language services and translation technology sector...
Media Contact
Jennifer Murphy, Eriksen Translations Inc., 1 646-460-2428, jennifer.murphy@eriksen.com, https://eriksen.com "
Eriksen Translations Inc.
Jul 28, 2025, 05:00 ET
https://www.prweb.com/releases/eriksen-translations-recognized-in-csa-researchs-2025-listing-of-global-and-regional-language-services-providers-302514481.html
#metaglossia_mundus
"With 2 billion monthly users in 200 countries, Google’s AI Overviews can claim to be the most popular generative artificial-intelligence product yet released to the public. The short summaries generated by the company’s Gemini AI model have turned Google from search engine to answer engine, settling the nerves of investors who were worried that ChatGPT was going to smash Google’s business model to pieces.
Then again, to describe those billions as “users,” as parent company Alphabet Inc. did when announcing its quarterly earnings last week, is perhaps disingenuous. No one consciously uses AI Overviews — it’s just there when users perform a regular search on Google, something billions of them have done several times a day for two decades. That’s one key advantage Google has over its competitors: People already associate the service with finding things out. The company has every right to capitalize on that reputation, one it built off the back of genuine innovation and quality (though, admittedly, it was later solidified with illegal multibillion-dollar deals to prevent competition).
Google’s second advantage with AI Overviews, however, warrants further scrutiny. Like other generative AI tools, the feature draws heavily from content that Google does not own but is available on the open web. Summarized answers are synthesized from one or more sources into a rewritten piece of information.
That’s useful for users; it saves them a click. But it’s devastating for content creators, who lose a would-be visitor and the revenues that follow. Startling Pew Research data released last week suggested users were considerably less likely to click through to websites if presented with an AI Overview, as is increasingly the case. One in five searches in a March 2025 sampling contained an AI Overview, a frequency that rises to as high as 60% if the queries are longer or contain the bread-and-butter words of journalism: who, what, where, when or why. (Google has pushed back against the methodology of the Pew study, saying its dataset — 68,879 searches by 900 US adults — was too small to be representative.)
Other AI chatbots offer the same kind of functionality, of course. But in those cases, content publishers can block these companies’ “crawlers” if they wish to do so by adding a line of code that acts as a digital bouncer at the door. That approach doesn’t work with Google, however, because blocking its AI crawler means also blocking a site from Google’s search results as well — a death sentence for any website.
Google is leveraging its dominant position in one industry to force success in another. It’s monopolistic behavior and something that should be addressed immediately as part of the remedies being devised as part of the antitrust trial it lost last year.
This is about taking away Google’s cheat code. “Google still thinks they’re special and that they don’t have to play by the same rules that the rest of the industry does,” Matthew Prince, chief executive officer of Cloudflare, told Bloomberg News in an interview last week. His company recently launched a tool that would allow publishers to set up a “pay-per-crawl” model for AI use. It works on crawlers from OpenAI, Anthropic, Perplexity and most others — but blocking Google AI would, again, mean blocking a site from Google’s search engine.
In Google’s defense, the launch of AI Overviews was a move spurred not by a desire to crush the economics of web — which has driven its entire business — but to stop its users from deserting the company in favor of AI chatbots. “The consumer is forcing them,” Wells Fargo analyst Ken Gawrelski said. Google was more than satisfied with the status quo, Gawrelski told me, which is partly why the company was beaten to market by smaller AI firms that didn’t need to worry about protecting an existing revenue stream.
Now that the fight is on, Google is playing catch-up and doing rather well at it. It has protected its advertising revenue, which in the last quarter was up 12% to a record-high $54.2 billion compared with the period a year earlier. Its AI and cloud business faces supply constraints, warranting an additional $10 billion in capital expenditure, bringing it to $85 billion for the year. It recently added “AI Mode” to its search engine, which is like AI Overviews on steroids. The company has barely started to integrate AI across its varied products like Gmail and Maps — the Financial Times noted that 15 distinct Google products have more than 500 million users.
Executives say they will be able to monetize all of these innovations quickly. The company has less to say about what happens to the businesses that rely on Google traffic to stay alive, in turn providing the content that makes smart AI possible. The shift is profound: Google’s creation democratized the web, making it possible for an ecosystem of new sites and services to be found and supported. Now, the company’s strategy is to make it so users need to visit only Google. “We have to solve the business models for the varying players involved,” Sundar Pichai, Alphabet’s CEO, said in a call with analysts without elaborating.
Salvaging content creators from the coming AI wreckage begins by forcing Google to relinquish its unfair advantage. Only then will the company be compelled to enter into reasonable arrangements with content creators to utilize their content, as it has already done with the likes of Reddit. We can be mildly encouraged by the fact that Google is reportedly seeking out content deals for use within its other AI products. Perhaps this is in anticipation that the unfair advantage won’t last."
Dave Lee of Bloomberg News, 7/29/25
https://www.advisorperspectives.com/articles/2025/07/29/google-reaping-rewards-unfair-ai-advantage
#metaglossia_mundus
"Google, Microsoft unveil new AI search tools
Google AI Mode and Microsoft 365 Copilot Search now available
Google has announced a new AI mode for its search engine in the UK that generates results using the Gemini LLM, rather than the familiar list of search results.
This will enable the use of more conversational natural language search prompts rather than the keyword-heavy searches typically used with a traditional search engine.
AI mode, already available in India and the US, has been launched in the UK this week. It allows users to search with text or voice, or with photos taken using a phone camera, for example. AI Mode is available as a tab on the search results page and as apps for Android and iOS.
The feature was announced in a blog post on 28th July, but it would appear coverage is not available in the UK at time of writing. In trying to test the feature using a Windows laptop, Computing was greeted by a message: “AI Mode is not currently available on your device or account”.
Google’s announcement comes as searching via LLMs is becoming more popular, both via dedicated search engines such as Perplexity, which list sources, and also using chatbots which may not.
Google is not replacing its traditional search with AI Mode, at least not yet, but it is clearly mindful of this trend.
However, the move to AI-generated search results and summaries is already reducing traffic to sites including retailers and media websites.
Figures from the Pew Research Centre found that users are only half as likely to click on a link when presented with an AI-generated overview, and that 26% of pages that included a summary were closed immediately, compared with 16% of those that did not.
Microsoft 365 Copilot Search
Google was not the only tech giant to release a new AI search feature, with Microsoft last week unveiling Microsoft 365 Copilot Search. Unlike the consumer-facing Google AI Mode, Microsoft’s feature, integrated into the Copilot app, is intended for organisational use, designed to unearth information hidden in enterprise applications via a conversational interface.
“Because it's integrated with Microsoft 365 Copilot, users can find the results they need with search, then seamlessly transition to chat for deeper exploration or follow-up task completion,” Microsoft says in a blog post.
Copilot search works across Microsoft’s applications and also third-party solutions using connectors. Microsoft claims hundreds of such connectors are already available.
Copilot Search is available to users with an eligible Microsoft 365 Copilot licence at no additional cost."
John Leonard
29 July 2025
https://www.computing.co.uk/news/2025/ai/google-microsoft-unveil-new-ai-search-tools
#metaglossia_mundus
"Description: Western Translation Theory from Herodotus to Nietzsche offers the most comprehensive collection of translation theory readings available to date, from the Histories of Herodotus in the mid-fifth century to the end of the nineteenth century. This work provides a rich panoply of thinking about translation across the centuries, covering such topics as the best type of translator, problems of translating sacred texts, translation and language teaching, translation as rhetoric, translation and empire, and translation and gender. This pioneering anthology contains over 140 texts with 30 new ones included in this edition. 21 texts by 18 authors appear here for the first time in English translation. Every entry includes a bibliographical headnote and footnotes. Intended for classroom use in History of Translation Theory, History of Rhetoric or History of Western Thought courses, this anthology is also key reading for scholars of translation and those interested in the intellectual history of the West."
ISBN 9781032867113
448 pages, 1 B/W illustration
January 20, 2026 by Routledge
Format
Paperback
Available for pre-order on December 30, 2025. Item will ship after January 20, 2026
Original price: £35.99
Sale price: £28.79
https://www.routledge.com/Western-Translation-Theory-from-Herodotus-to-Nietzsche/Robinson/p/book/9781032867113
#metaglossia_mundus
"The annual Derek Walcott Prize for Poetry is awarded to a full-length book of poems by a living poet who is not a U.S citizen, published in the previous calendar year. The book must be in English or in English translation, and may have been published anywhere in the world. The prize includes a $2,000 cash award. In the case of translations, the prize money may be shared by the poet and the translator.
The award is powered by Arrowsmith Press, in partnership with The Derek Walcott Festival in Port-of-Spain, Trinidad. Previous winners include Antonella Anedda, Mosab Abu Toha, Saddiq Dzukogi and Canisia Lubrin.
Hussain Ahmed has been included in the list of Walcott Finalists for his book Blue Exodus (Orison Books), Theresa Lola makes the cut for her second poetry collection Ceremony for the Nameless (Penguin Press), and Ajibola Tolase has been shortlisted for the critically-acclaimed 2000 Blacks (University of Pittsburgh Press).
This year’s Walcott Prize judge is Ishion Hutchinson. He is the author of three poetry collections – School of Instructions: a Poem, House of Lords and Commons, and Far District – and the book of essays, Fugitive Tilts. Born in Port Antonio, Jamaica, he is the W.E.B. Du Bois Professor in the Humanities at Cornell University.
The 2025 Derek Walcott Prize winner will be announced in October."
https://thebritishblacklist.co.uk/out-of-africa-three-nigerian-writers-shortlisted-for-2025-derek-walcott-prize/
#metaglossia_mundus
"U.S. lawmakers urged the release of Afghan interpreter Ziaulhaq Shinwari, detained by immigration officers despite legally relocating after risking his life aiding American forces.
Two members of Congress, Senator Richard Blumenthal and Representative John Larson of Connecticut, are demanding the release of Ziaulhaq Shinwari, an Afghan interpreter who once risked his life helping U.S. forces.
Shinwari, who worked as a translator and cultural adviser for American contractors at Camp Mike Spann in Mazar‑e‑Sharif, legally relocated to the United States after aiding the U.S. mission in Afghanistan.
On July 16, he was unexpectedly detained by U.S. Immigration and Customs Enforcement (ICE) officers when he went to an immigration center for biometric processing tied to his green card application.
In a joint statement released July 27, Blumenthal’s office said Shinwari “bravely risked his life” for U.S. troops and “does not deserve such treatment.”
The lawmakers also condemned his detention as a “clear violation of due process,” stressing that allies who entered the country legally should not be subjected to such actions.
Blumenthal and Larson warned that the case raises troubling questions about how Afghan partners who supported U.S. forces are being treated after relocation.
Advocates argue that Shinwari’s detention could undermine trust among Afghans still waiting for safe passage, urging swift action to secure his release and restore confidence in U.S. commitments."
U.S. Lawmakers Demand Release of Afghan Interpreter Detained by Immigration Officials
By Fidel Rahmati, July 28, 2025
https://www.khaama.com/u-s-lawmakers-demand-release-of-afghan-interpreter-detained-by-immigration-officials/
#metaglossia_mundus
"... Acclaimed literary translator Anton Hur, known for bringing celebrated Korean works to a global audience, has made his debut as a novelist with the science-fiction titled "Toward Eternity."
At a press conference in Seoul on Monday marking the novel's Korean release, Hur said his career in translation was part of his strategic path to fulfilling his childhood dream of becoming an English-language fiction writer, a goal he has achieved with this book.
Originally written in English and published by HarperVia last July, "Toward Eternity" is a sci-fi novel set in the near future that explores immortality and what it means to be human.
At the event, Hur shared his creative philosophy, explaining he sees himself not as the one crafting the language, but as a vessel through which "the language materializes itself."
"I was once very touched when poet Lee Seong-bok told me the words came to him, not that he was writing them," Hur recalled. "While writing this book, I realized that I am a means for the language to materialize themselves and that I am a secretary to these words, who only prepares a pen and paper."
Much of his debut novel was written on the Seoul subway, an environment he called a "great creative destination" fueled by its unique rhythm and noise.
"As a full-time translator at the time, I had very little time for my own writing," he added.
A major name in translation, Hur rose to prominence with his work on Chung Bora's "Cursed Bunny," which was shortlisted for both the International Booker Prize and the National Book Award. His other notable translations include Hwang Sok-yong's "The Prisoner," Shin Kyung-sook's "I Went to See My Father," Park Sang-young's "Love in the Big City" and "Beyond the Story: 10-Year Record of BTS."
In a unique role reversal, his English novel was translated into Korean by novelist Chung.
"When someone offers to translate your work of literature, it is an immense honor," he said. "It is as if they are saying they will sacrifice some part of their life for your work. I was more than happy to enjoy the honor."
Born in Stockholm, Sweden, and now based in Korea, he released his first Korean essay "No One Told Me Not To" in 2023.
In a message to potential readers, Hur emphasized that fiction should be, above all, entertaining.
"I tried to ensure the joy I felt during the writing process was captured in the book, and since it is an easy read, I hope it reaches many people.""
From translator to translated: Anton Hur on debut novel 'Toward Eternity'
15:37 July 28, 2025
Woo Jae-yeon
SEOUL, July 28
#metaglossia_mundus
"Zhadan: War has changed Ukrainian language, filling it with pain
“in times of war, language breaks down”: "The usual structures that support its functionality and effectiveness collapse. War deprives us of our balance. Accordingly, it deprives us of our usual intonations. Looking into the darkness, you are forced, one way or another, to carefully evaluate the weight of what is said and heard."
27.07.2025 07:03
Ukrainian writer, poet, and soldier Serhii Zhadan is convinced that the Russian war has changed the Ukrainian language—its lightness has disappeared, replaced by pain.
According to a Ukrinform correspondent, Zhadan spoke about this at the Austrian State Prize for European Literature award ceremony during a solemn event at the Salzburg Festival.
"Talking about literature in times of war is a great luxury. It is now much more common to talk about war in Ukrainian. To see the war, you don't need to open a book — just look out the window," the writer said.
He spoke about one of the recent Russian attacks on Kharkiv and emphasized: "The Russians are destroying our cities and our fellow citizens. Russia is waging this aggressive and unjust war to destroy us."
According to Zhadan, Ukrainian books currently being published will almost certainly feature the war, or “even if it is not in the plot, it will fill the pauses and voids.”
Literature, he noted, does not always seem appropriate when it comes to contemplating death. But it is necessary to bear witness to the war “in order to continue fighting” — “to bear witness in order to love.”
The writer believes that war has deformed the current Ukrainian language.
"What happened to our language? How did war change it? Its lightness disappeared. Instead, pain appeared. A lot of pain. And it turned out that its excessive presence deforms the language, deprives it of balance. We now speak the language of people who particularly want to be heard, who are trying to explain themselves. There is no need to look for excessive egocentrism behind this. We are not shouting to draw attention to ourselves — we are shouting to draw attention to those who are worse off than us, who are particularly bad off, who are suffering, who are in pain. We are shouting for those who cannot speak now, who have been deprived of their voice, who have been deprived of their heartbeat," said Zhadan.
According to him, “in times of war, language breaks down”: "The usual structures that support its functionality and effectiveness collapse. War deprives us of our balance. Accordingly, it deprives us of our usual intonations. Looking into the darkness, you are forced, one way or another, to carefully evaluate the weight of what is said and heard."
According to Zhadan, Ukrainians today are trying not just to preserve the remnants of reality that broke down with the start of the war. "We are trying to reassemble this reality, to restart it, to reimagine it, to rename it. We are learning to control our language from scratch, we are testing words for functionality and effectiveness, we are like a person who is learning to walk again after a terrible catastrophe," he emphasized.
At the same time, the writer emphasized that it is language that gives Ukrainians the opportunity to “speak again after a long period of numbness, after deadly silence, after muteness, which comes, confirming your lack of strength and desire to explain anything.”
"It is language that gives us the opportunity to explain the world to ourselves and ourselves to the world. Today, language is our most accurate and effective tool in our attempts to understand the world, in our efforts to be convincing and understandable. We use a language that is only now growing and recovering, like a branch after a break. We use this language to talk about things that we have never talked about before, that were not in our vocabulary, that we never pronounced because they were simply not part of our experience," he said.
Today, Ukrainians have a completely different experience, Zhadan noted, “and, accordingly, a completely different language.” "This language will obviously be used to write completely different literature. Perhaps this literature will lack nuances and doubts, playfulness and frivolity. However, I want to believe that it will not lack the courage to talk about pain and joy, about light and darkness, about powerlessness and hope. It will not be afraid to bear witness to those who need love and understanding. In fact, I suppose that this will be literature of love and understanding. After all, this literature will be written by people who are currently being deprived of precisely that — love and understanding," said the artist.
He added that the language in which books are currently written in Ukraine “is the language of people who are trying to protect their lives and their dignity, their voice and their right to speak.”
As reported by Ukrinform, Serhii Zhadan became this year's winner of the Austrian State Prize for European Literature, awarded by the country's Ministry of Culture. The prize is worth €25,000.
The official award ceremony took place on July 25 with the participation of Federal Minister of Art and Culture, Vice-Chancellor Andreas Babler, as part of a festive event during the Salzburg Festival.
https://www.ukrinform.net/rubric-society/4019229-zhadan-war-has-changed-ukrainian-language-filling-it-with-pain.htm
"by Macquarie University edited by Lisa Lock, reviewed by Andrew Zinin
Education and training of Australian health practitioners should place greater emphasis on the importance of using professional interpreting services in clinical settings, according to a new book by Macquarie University researcher Dr. Jinhyun Cho.
In her new book based on analysis of interviews with 67 health care interpreters in Australia, Macquarie University linguistics researcher Dr. Jinhyun Cho suggests that many clinicians don't sufficiently understand or appreciate the skills and value of qualified interpreters.
"My research suggests a lack of awareness and understanding leads to interpreters being underutilized or used in ways that undermine their potential effectiveness," says Dr. Cho.
"As a result, health care access for a huge number of people without functional English is limited, and that can have a real cost in terms of health outcomes."
Australia was regarded as a world pioneer in the provision of interpreting services in the 1970s when multiculturalism was first enthusiastically embraced.
Dr. Cho says her research shows an English "monolingual mindset" still prevails in many areas of public life including health care institutions.
Assuming someone with basic conversational English can understand medical English in a stressful and unfamiliar setting may be unsafe, she says.
"There's also a very common misconception that anyone who knows two languages can be an interpreter," Dr. Cho says.
"Many health care interpreters told me stories of non-clinical staff like receptionists and cleaners being used as ad hoc interpreters simply because they are bilingual and are there on the spot."
Previous Australian research has shown that bilingual family members are frequently expected to act as interpreters, and sometimes this is neither culturally appropriate nor effective.
Tragic case study
In her book, Dr. Cho relates a case study of a 35-year-old Afghan refugee who presented to a general practice with a painful left leg, accompanied by her young daughter acting as "interpreter."
Suspecting deep vein thrombosis (DVT), the doctor told the patient she might have a serious blood clot in her leg and gave her a referral for an ultrasound examination at the local hospital.
However, the need for urgency was lost in translation and mother and daughter decided to wait until another family member more proficient in English could read and explain the letter and written information provided.
Tragically, two days later part of the clot dislodged and found its way to her lungs, and she died from pulmonary embolism.
While not all interpreting scenarios have life-or-death consequences, skilled interpreters need to have the ability to instantly comprehend and express culturally contextualized meaning from one language into another, Dr. Cho says.
Mental health conditions like depression, for example, may be stigmatized, referred to only euphemistically, or even dismissed as non-existent in some cultures.
Cancer may be so feared in a culture as to be "unmentionable" to the patient.
And cultural differences in understandings of diseases and syndromes can make some concepts and terminology essentially "untranslatable."
The clinic environment and time constraints can also place stress on clinicians, patients and interpreters.
In telephone interpreting in particular, background noise, poor audio quality and the absence of non-verbal language cues can all reduce communication effectiveness.
Yet when communication problems occur, the cause tends to be attributed to the interpreter rather than the broader situation, says Dr. Cho.
It turns out none of this is new. Dr. Cho says she was "shocked" when she looked back nearly 50 years to research conducted at the University of New South Wales in the early days of organized health care interpretation in Australia.
"Many of the issues and challenges raised by the health care interpreters I spoke to are exactly the same as those faced by interpreters in the late 1970s," Dr. Cho says.
"While Australia has achieved a lot in terms of establishing the interpreting network and system, we haven't progressed very far in recognizing the importance of interpreting and optimizing its value."
Now, as it was in the 1970s, the answer is to improve education for health professionals.
"Health care interpreters in Australia are trained to work with health professionals and need to regularly update their medical knowledge as part of their professional recertification requirements," Dr. Cho says.
"But health professionals are rarely trained in when and how to work with interpreters.
"Access to qualified interpreting services really is a basic human right and an essential tool for enabling health care equity and social inclusion."" https://medicalxpress.com/news/2025-07-lost-health-underused.html #metaglossia_mundus
The Eastern Cape’s magistrate’s courts are facing a shortage of court interpreters, which sometimes results in cases being delayed for hours or entirely postponed. The Dispatch team has had a front-row seat in some of East London’s courts where cases have been postponed due to the unavailability of interpreters.
"Advertising and recruitment process for 28 vacant posts is under way, says ministry spokesperson By ZIYANDA ZWENI - 28 July 2025 https://www.dailydispatch.co.za/news/2025-07-28-eastern-cape-courts-held-up-by-shortage-of-interpreters/ #metaglossia_mundus
"Abstract: Research on voice recognition for African languages is limited due to the scarcity of digital resources for training and adaptation, despite its broad usefulness. The Hausa language, spoken by almost fifty million inhabitants in West and Central Africa, is an example of a linguistic domain that has not been thoroughly studied. The Hausa language employs diacritics, which are symbols located above alphabetical characters to convey further information. By removing diacritics, the number of homographs increases, making it difficult to distinguish between similar words. This paper presents a study on speech recognition in the Hausa Language, specifically focusing on diacritized words. The study utilises the state-of-the-art wave2vec2.0 and Whisper deep learning architecture models, for transcribing audio signals into corresponding Hausa text. According to the results obtained in the study, the Whisper-large deep model emerged as the best, achieving a word error rate of 4.23% representing a considerable improvement of 43.9% when compared to the existing state-of-the-art model for Hausa language speech recognition. Additionally, the Whsiper-large model demonstrated a diacritic coverage of 92%, precision of 98.87%, with a diacritic error rate of 2.1%."
Abdulqahar Mukhtar Abubakar, Deepa Gupta & Susmitha Vekkot
2024
https://link.springer.com/article/10.1007/s10772-024-10111-x
#metaglossia_mundus
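The abstract above reports its results as word error rate (WER) and diacritic error rate. For readers unfamiliar with WER, here is a minimal Python sketch of how the metric is conventionally computed from a word-level Levenshtein alignment; the Hausa strings in the example are invented placeholders, not data from the study.

```python
# Minimal sketch: word error rate (WER), the metric cited in the abstract above.
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed here with a standard word-level Levenshtein alignment.
# The Hausa sentences below are illustrative placeholders, not data from the paper.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    reference = "yā tàfi gidā dà sāfe"   # placeholder diacritized Hausa
    hypothesis = "ya tàfi gida dà sāfe"  # ASR output missing some diacritics
    print(f"WER: {wer(reference, hypothesis):.2%}")  # 2 of 5 words differ -> 40.00%
```

Note how dropping diacritics alone produces word-level errors in the toy example, which is exactly the homograph problem the paper highlights.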
"How podcasts are powering indigenous language revival
by Mafumane Tlhapi
Mainstream radio combined with social media can do more to preserve South African indigenous languages than either can on their own.
A recent study by North-West University (NWU) master’s graduate Gofaone Motsamai explores how Motsweding FM radio is using Facebook to promote Setswana through podcasts and live streams.
“Motsweding FM is not just broadcasting, it’s preserving,” says Gofaone. “Through Facebook, the station connects Setswana speakers across borders, offering accessible and engaging content that supports linguistic and cultural continuity.”
The research, completed as part of a Communication master’s degree in the Faculty of Humanities, examined how the radio station uses Facebook to share Setswana language content, how audiences engage with it, and what digital challenges and opportunities arise. The study focused on how traditional broadcasters are adapting to social media to maintain cultural relevance.
Culturally relevant and connected
One listener who participated in a focus group said the flexibility of podcasting is what keeps them engaged: “I don’t always listen live, but I catch up through the podcast later. It helps me stay connected to my language.”
The study found that Facebook’s interactive tools allow for real-time feedback and dialogue, giving Setswana speakers a sense of community online. But there are obstacles. “Facebook’s algorithm tends to favour English-language content,” says Gofaone. “Some users also struggle with limited internet access or lack the digital skills to engage fully.”
The research recommends more investment by government entities and private sector companies in digital literacy to increase participation and urges broadcasters and policymakers to work together to make indigenous language content more discoverable on platforms like Facebook.
“There’s potential to expand into other digital technologies and form partnerships that can take this further,” says Gofaone. “What Motsweding FM is doing on Facebook is a start, but the long-term success of indigenous language preservation in the digital space will depend on continued innovation and support.”
The study highlights how digital media, when used intentionally, can play a growing role in keeping languages such as Setswana alive in a rapidly changing media landscape."
https://news.nwu.ac.za/how-podcasts-are-powering-indigenous-language-revival
#metaglossia_mundus
"GitHub Releases Spark Tool Which Can Build Full Apps From a Single Prompt
GitHub's new Spark platform builds full-stack apps from simple text prompts, escalating the 'vibe coding' race against rivals like Google and Amazon.
GitHub has launched Spark, a new AI tool that builds full-stack apps from simple text prompts. Spark is GitHub’s ambitious entry into the “vibe coding” trend, allowing users to go from an idea to a deployed application without writing code or configuring a server.
Available in a public preview for GitHub Copilot Pro+ subscribers, the platform aims to eliminate the friction between concept and implementation. It directly challenges a crowded field of competitors from Google, Amazon, and others, escalating the race to define the future of AI-native software development.
From Prompt to Production: How GitHub Spark Works
Spark operates as a complete application factory, translating a user’s vision into a functional, full-stack product with remarkable speed. The process begins with a simple prompt in natural language, where a user might ask it to “create a task-management app” or “build a weather dashboard.” From there, Spark takes over, orchestrating a complex series of automated tasks that would typically require a team of developers and system administrators.
The engine driving this transformation is Anthropic’s Claude Sonnet 4 model, which interprets the user’s intent and generates a coherent software architecture. This includes creating both the frontend user interface and the backend logic.
Simultaneously, Spark provisions all necessary infrastructure out-of-the-box, including a PostgreSQL database for data storage and a complete hosting environment on Microsoft Azure infrastructure. This seamless integration eliminates the traditional headaches of server setup, SSL certificate installation, and domain configuration, fulfilling the platform’s promise of a “no setup required” experience.
A standout feature is Spark’s ability to embed intelligence within the apps it creates. The platform allows users to integrate powerful Large Language Models from providers like OpenAI, Meta, DeepSeek, and xAI directly into their applications. Crucially, this is achieved without any need for the user to manage API keys, a significant technical hurdle for non-developers. This empowers creators to build sophisticated, AI-driven tools without needing deep expertise in backend authentication or API management.
Unlike many other app builders that trap projects in a proprietary sandbox, every application generated by Spark is backed by its own GitHub repository. This is a critical distinction, as it provides a professional-grade foundation from the outset. The repository comes pre-configured with GitHub Actions for continuous integration and deployment (CI/CD), automating the process of shipping updates.
It also includes Dependabot to monitor for security vulnerabilities and keep software dependencies up to date, ensuring the application remains secure and maintainable over time.
This robust foundation supports a highly flexible and multi-layered development workflow designed to accommodate users of all skill levels. A creator can begin with a simple prompt, then use a visual, drag-and-drop editor to refine the user interface. For more granular control, they can dive directly into the generated code.
For the most complex tasks, the entire project can be launched in a GitHub Codespace, allowing them to iterate with powerful Copilot agents to debug issues, add new features, or refactor the codebase. This tiered approach ensures that Spark is both accessible to beginners and powerful enough for seasoned developers.
The ‘Vibe Coding’ Gold Rush Heats Up
GitHub’s launch of Spark intensifies an already fierce competition to capitalize on the “vibe coding” phenomenon—a workflow where developers use natural language to generate code at high speed. While this approach accelerates development, it often bypasses critical quality checks.
The dangers of this high-speed, low-scrutiny approach are not merely theoretical. Recent, high-profile failures have served as stark warnings to the industry. In one unsettling incident, a product manager watched as Google’s Gemini CLI deleted his files after hallucinating commands, with the agent itself confessing its own “gross incompetence” and admitting, “I have lost your data. This is an unacceptable, irreversible failure.”
This came just a week after SaaStr founder Jason Lemkin reported that a Replit AI agent wiped his company’s production database, a catastrophic event that Replit’s CEO called “unacceptable and should never be possible.”
These back-to-back fiascos highlight a growing philosophical divide in the market for AI development tools. On one side, platforms like Spark and Google’s recently unveiled Opal are leaning into the speed and accessibility of vibe coding. Opal, for instance, uses a visual workflow editor to target a wider, less technical audience, allowing users to build apps without writing any code. This strategy prioritizes rapid prototyping and democratizing creation, accepting that the initial output may require further refinement.
On the other side of the spectrum, competitors are building tools specifically designed to impose order on the chaos. Amazon’s Kiro is the leading example of this cautious, structure-first approach. Instead of immediately generating code, Kiro employs a “specification-driven” model that first creates project plans, design documents, and task lists.
This ensures that the resulting software is well-documented and maintainable from the start. Emphasizing this focus on enterprise-grade reliability, Amazon CEO Andy Jassy claimed, “Kiro has a chance to transform how developers build software.”
A third strategy is also emerging: using AI to police other AIs. Anysphere, the company behind the popular Cursor editor, recently launched Bugbot, an automated tool that integrates with GitHub to review pull requests and find flaws before they reach production.
This represents a critical safety net, with one engineering manager at Discord noting, “we’ve had PRs approved by humans, and then Bugbot comes in and finds real bugs afterward. That builds a lot of trust.” This approach acknowledges that while AI will accelerate code creation, it also necessitates a new class of AI-powered quality control to manage the risks.
An All-in-One Platform for the AI Era
Spark is available exclusively to subscribers of GitHub Copilot Pro+, a premium tier costing $39 per month. This pricing strategy positions Spark as a powerful incentive to draw users deeper into GitHub’s AI ecosystem, rather than as a standalone product.
By making Spark an exclusive perk for its most expensive Copilot plan, GitHub is sending a clear signal. This isn’t just a tool; it’s the capstone of its AI subscription, designed to create a sticky ecosystem that is hard for developers to leave.
The platform represents a strategic bet on an integrated future for software development. By bundling hosting, databases, and deployment into a single, prompt-driven interface, GitHub is creating a powerful, self-contained world for creators.
This vision of democratization is a recurring theme. Speaking about a similar enterprise push with Replit, Microsoft Americas President Deb Cupp stated, “our collaboration with Replit democratizes application development, enabling business teams across enterprises to innovate and solve problems without traditional technical barriers.”
Yet, the need for human oversight remains critical. As Anthropic CEO Dario Amodei noted about agentic systems, “we’re heading to a world where a human developer can manage a fleet of agents, but I think continued human involvement is going to be important for the quality control…” Spark’s design, which keeps the developer in the loop, seems to embrace this philosophy.
The potential is significant, with some leaders like Anysphere CEO Michael Truell predicting, “I expect AI coding agents to handle at least 20% of a software engineer’s work by 2026.” With Spark, GitHub is not just launching another tool; it is making a bold play to own the entire development lifecycle, from the initial spark of an idea to the final, globally deployed product."
Markus Kasanmascheff
July 27, 2025
https://winbuzzer.com/2025/07/27/github-releases-spark-tool-which-can-build-full-apps-from-a-single-prompt-xcxwbn/
#metaglossia_mundus
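The prompt-to-app workflow described in the article above can be hard to picture concretely. Below is a minimal, hedged sketch of the generic "prompt-to-code" pattern, not Spark's actual pipeline: a natural-language request is sent to a code-capable LLM and the returned code is saved to disk. The OpenAI client usage is real, but the model name, prompt, and output handling are illustrative assumptions.

```python
# Minimal sketch of the generic prompt-to-code pattern the article describes.
# This is NOT Spark's implementation; model name, prompt, and file handling
# are assumptions for illustration, using the public OpenAI Python client.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Create a single-file Flask task-management app with add/complete/delete."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable code model would do
    messages=[
        {"role": "system", "content": "You generate complete, runnable Python web apps."},
        {"role": "user", "content": prompt},
    ],
)

generated_code = response.choices[0].message.content
# A platform like the one described would go further: provision a database,
# create a repository, and deploy. Here we simply save the generated file.
with open("generated_app.py", "w", encoding="utf-8") as f:
    f.write(generated_code)
print("Wrote generated_app.py")
```

What distinguishes a product like Spark from this bare pattern, per the article, is everything wrapped around the generation step: hosting, database provisioning, CI/CD, and key management.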
"Why do “mama” and “papa” sound alike in so many languages? Experts say baby talk may be the reason.
Speakers of certain languages, such as Spanish, Italian or Catalan, may be able to understand one another when conversing in their native tongue due to their shared linguistic roots. However, even in completely different languages from opposite corners of the world, some words bear more than passing resemblance.
The words for “mother” and “father” are perhaps the two best examples, especially when we also take into consideration their numerous shortened forms, or informal alternatives.
Combination of sounds key
According to Lane Greene, language correspondent for The Economist, there’s a fascinating, but fairly logical, reason people across the globe use similar words to refer to their parents. And it’s all about how easily different sounds combine together.
“A few consonants, such as /b/, /m/, /t/ and /k/ show up frequently in nearly every spoken language in the world,” Greene explains. “Almost certainly, that’s because they’re easy to make.”
The influence of babies on words for "mother" and "father"
And because they are easy to make, they tend to be part of the first words uttered by babies, whose parents assume, rightly or wrongly, that their offspring are learning to say their names before any other words.
“A baby vocalizing will at first make a vowel-like sound, usually something like ‘aaaah,’ which requires very little in the way of control over the mouth,” Greene continues. “If the baby briefly closes their mouth and continues vocalizing, air will come out the nose, making the /m/ sound that’s used around the world in words for ‘mother.’”
It’s thought to be a similar story for words for “father.”
“To say ‘papa,’ babies can easily stop their breath when they close their lips rather than going on breathing through the nose, producing a /b/ or /p/ sound,” Greene elaborates. “And that explains ‘papa’ in English, ‘baba’ in Arabic, and ‘bà ba’ in Mandarin.”
Babbling’s important role in language
So, then, it appears many of the words for “mother” and “father” in different languages may simply have come from babies babbling.
But, just like today, whether babies who helped ‘invent’ languages 100,000 years ago were actually referring to their parents when muttering “mama,” “papa” or other variations is something we’ll never know."
Roddy Constwitter
Update: Jul 27th, 2025 18:29 EDT
https://en.as.com/latest_news/this-is-the-reason-why-words-like-mother-and-father-are-similar-in-many-completely-different-languages-n/
#metaglossia_mundus
"In the late 1980s, two American researchers conducted what would become one of the most cited experiments in education. A group of Grade 7 and 8 students was told to read a passage about baseball and answer a set of comprehension questions. As expected, strong readers who knew a lot about the sport obtained the highest scores. Surprisingly, however, students with lower reading ability but with extensive knowledge of baseball outperformed those with stronger reading skills but limited knowledge of the sport.
The now-famous baseball study challenged the long-held belief that reading is a skill that must be taught in isolation. It revealed how our prior knowledge of a topic acts like a scaffold that helps us make sense of new concepts by connecting them to what we already understand. A 2019 study published in Psychological Science reinforced this idea. Researchers found that when students are unfamiliar with 59 percent or more of the terms in what they’re reading, their ability to comprehend the text significantly suffers. To develop a child’s comprehension skills, it’s not enough to teach them how to read. We must assess what they know, build on that knowledge, and guide them to find the connections between ideas.
Understanding the science behind teaching comprehension skills matters now more than ever. In 2022, the Philippines ranked among the bottom 10 countries in reading comprehension in the Program for International Student Assessment. According to the World Bank, the Philippines has a 90 percent learning poverty rate, which means nine out of 10 Filipino 10-year-olds are unable to read or understand a simple paragraph. Since assuming office, Department of Education (DepEd) Secretary Sonny Angara has prioritized addressing the learning crisis by launching various targeted interventions. Last week, DepEd announced a major improvement: the number of Grade 3 students who were unable to recognize letters dropped from 65,000 last year to just 2,000. This progress is certainly no small feat, and serves as a promising sign that urgent, focused efforts can move the needle.
But the real challenge lies ahead. How do we make sure students can critically understand what they read? When schools think about catch-up measures, especially for older students, a common but flawed response is to keep sacrificing instructional time for other subjects to focus solely on reading. However, as the baseball study shows, comprehension is deeply tied to background knowledge and vocabulary, and reading initiatives need to be integrated with, rather than separated from, the broader learning experience. The most effective interventions find the right balance between pulling out struggling readers for small-group remediation, when necessary, while also providing them with access to targeted, differentiated instruction that is embedded within their regular classes.
At the same time, literacy experts assert that remediation of older nonreaders should not be limited to giving them simplified reading-level appropriate texts. While this may seem supportive on paper, it deprives them of opportunities to be exposed to the vocabulary and ideas they need to catch up. Instead, teachers must use scaffolding techniques (e.g., pre-teaching difficult words they’ll encounter or using graphic organizers) to guide struggling students to comprehend age-appropriate material. When done well, this approach fosters the student’s cognitive growth and strengthens their confidence and self-belief.
Beyond classroom-level remediation, an integral part of the solution lies in designing a curriculum that does not just emphasize “core subjects” but one that embraces the value of an interdisciplinary education. It is easy to dismiss some fields as “minor subjects” or “nice-to-haves.” But when a student learns about science and math alongside history, literature, and the arts in a coherent and cumulative manner, they encounter key concepts and vocabulary repeatedly across subjects as opposed to learning them in fragmented units. This strengthens comprehension by training them to build meaningful connections across domains, and to flexibly apply what they know to different contexts.
In training sessions, I have often heard teachers say that although they agree that reading skills must be integrated in every subject and that “every teacher has to be a reading teacher,” they are not always explicitly taught how to do this. While there is no doubt that our public school teachers are some of the most resourceful people I have ever met, they need access to specialists, updated resources, and constant instructional coaching to help them track student progress and adapt their teaching accordingly.
DepEd has already taken encouraging steps in reducing the number of nonreaders in the country. Now comes the harder task: Ensuring every child doesn’t just learn how to read, but is also equipped with the thinking skills, sound judgment, and confidence to make sense of, and effectively navigate the world."
More than just reading
Eleanor Pinugu
@inquirerdotnet
Philippine Daily Inquirer / 05:20 AM July 28, 2025
https://opinion.inquirer.net/184979/more-than-just-reading
#metaglossia_mundus
"Multimodal world construals in English translations of Hongloumeng: a cognitive stylistic and systemic functional linguistic analysis
Abstract: Text world, a key concept in Text World Theory, refers to the mental representation discourse creates in the reader’s mind. The way readers conceive or interpret the text world is known as world construal. Hongloumeng, the classic Chinese novel, is well-known for its realistic representation of a text world. However, the novel in English (target text or TT) may offer a different construal of the world compared with that in Chinese (source text or ST). This study examines two English translations of the novel, exploring to what extent they offered different world construals, how the translators employed verbal elements to shape readers’ conceptualizations of the text world, and how the editors and publishers employed visual elements to facilitate these conceptualizations. The research proposes a multimodal framework for analyzing the texts and cover designs of the translations by David Hawkes, Xianyi Yang, and Gladys Yang. The analysis suggests that these translations offer different construals of the text world, potentially bringing different emotional experiences to target readers. This article offers insight into how verbal and visual elements in a translation work together to facilitate mental representation of the text world. Furthermore, the integration of Text World Theory with models of cognitive stylistics and Systemic Functional Linguistics provides an effective framework for analyzing multimodal world construals in translation."
Minru Zhao & Dechao Li
Humanities and Social Sciences Communications volume 12, Article number: 1147 (2025)
Published: 21 July 2025
https://www.nature.com/articles/s41599-025-05504-5
#metaglossia_mundus
"“Love” in different linguistic cultures A lecture on ‘Comparative Analysis of the Concept of “Love” in Different Linguistic Cultures’ will be delivered by Ven Dr Waskaduwe Siri Sarana Thero, on July 28 at the Council Room of the Royal Asiatic Society, 96, Ananda Coomaraswamy Mawatha, Colombo 7 (1st floor of the Mahaweli Centre Building) at 5. 30 pm).
Venerable Dr Waskaduwe Siri Sarana Thero is a former Researcher at the South Ural State University, Russia. His lecture explores love through psycholinguistics and cultural-linguistics, comparing how Sinhalese, Russian, German, and Kazakh speakers understand it.
It provides an introduction to psycholinguistic and cultural-linguistic research methodologies and key findings in these fields, focusing on how different cultures perceive the concept of “love”." https://www.sundaytimes.lk/250727/sunday-times-2/love-in-different-linguistic-cultures-606733.html #metaglossia_mundus
"Abstract: Translation technology has changed the translation industry in a big way. Translators are facing a transformation that impacts their work, income, and control over their translations. Despite the negative impacts, there have also been positives. To better understand the importance of machine-assisted tools in achieving translation quality, we will conduct a thorough exploration. Specifically, we will evaluate the quality of literary texts that have undergone machine-assisted translation to uncover the underlying themes. Utilizing machine-assisted tools for literary text translation often leads to noticeable differences in translation quality. Therefore, a systematic approach based on theoretical frameworks is needed instead of relying on random conventions or practices. Although advancements in translation technology have not significantly impacted translating literature, the state of publishing presents difficulties for translators and copy editors alike—smaller fees and tighter deadlines can lead to a lack of quality control." Translation Quality of Literary Texts by Machine(-Assisted) Translation July 2025 DOI: 10.1007/978-3-031-73899-9_5
Aladdin Tarawneh Mohammad Al-Badawi Wafa Abu hatab Al-Hareth Alhalalmeh https://www.researchgate.net/publication/393972844_Translation_Quality_of_Literary_Texts_by_Machine-Assisted_Translation #metaglossia_mundus
"...A British woman claims she heard three Marks & Spencer employees at London Heathrow airport talking to each other in Hindi and reported them to their employer. A British woman claims she heard three Marks & Spencer employees at London Heathrow airport talking to each other in Hindi and reported them to their employer. Her viral social media post has sparked mixed reactions, with many wondering what crime the staffers had committed and others criticising the woman as ‘racist’.
Hindi at Heathrow
Lucy White took to the social media platform X on July 25 to complain about the Marks & Spencer staffers. “Just landed in Heathrow Airport T3. Went into M&S. Three staff were speaking in another language,” she claimed.
White asked the employees what language they were speaking in, and was told they were conversing in Hindi – the most widely spoken language in India.
White further claimed she recorded their speech and had plans to report them to Marks & Spencer. “We must confront them every time,” she wrote.
Her post has gone viral with over 4.6 million views. It proved to be deeply controversial on social media, with some agreeing with White and her take on what language must be spoken in London.
"It's alienating to hear shop assistants speaking in a foreign language in a British store. I wouldn’t shop in stores that permit this,” wrote one X user. “Well, what are you waiting for? Report them,” another said, to which the British woman replied indicating she was in the process of reporting the incident to Marks & Spencer.
The majority of users, however, called out White for being racist and xenophobic, while expecting staffers at one of the world’s busiest international airports to speak only English.
“People speaking Hindi at work aren’t the problem—you policing languages in a multicultural country is. This isn’t the confrontation you think it is. It’s just xenophobia,” wrote X user Clare.
“Surely typing ‘morning all, I’m a little bit racist’ would have been much easier?” another asked sarcastically. “Why shouldn’t people speak a different language when they’re talking amongst themselves? And at an international airport too.”
“Urgent, Marks & Spencer. You have multilingual staff working at the world's busiest airport. Please advise,” a third person joked."
Woman outraged by London airport shop employees talking in Hindi, says she reported them
Sanya Jain
Updated on: Jul 27, 2025 09:10 am IST
https://www.hindustantimes.com/trending/woman-hears-hindi-at-london-airport-reports-staff-internet-asks-what-s-the-crime-101753582703218.html #metaglossia_mundus