Scooped by Charles Tiayon
September 4, 2012 4:45 PM
Specialists of the Robert dictionary and of grammar have validated their method. It has been adapted into several books, some of which have sold hundreds of thousands of copies.
Researchers across Africa, Asia and the Middle East are building their own language models designed for local tongues, cultural nuance and digital independence
"In a high-stakes artificial intelligence race between the United States and China, an equally transformative movement is taking shape elsewhere. From Cape Town to Bangalore, from Cairo to Riyadh, researchers, engineers and public institutions are building homegrown AI systems, models that speak not just in local languages, but with regional insight and cultural depth.
The dominant narrative in AI, particularly since the early 2020s, has focused on a handful of US-based companies: OpenAI with GPT, Google with Gemini, Meta with LLaMa, and Anthropic with Claude. They vie to build ever larger and more capable models. Earlier in 2025, China’s DeepSeek, a Hangzhou-based startup, added a new twist by releasing large language models (LLMs) that rival their American counterparts with a smaller computational demand. But increasingly, researchers across the Global South are challenging the notion that technological leadership in AI is the exclusive domain of these two superpowers.
Instead, scientists and institutions in countries like India, South Africa, Egypt and Saudi Arabia are rethinking the very premise of generative AI. Their focus is not on scaling up, but on scaling right, building models that work for local users, in their languages, and within their social and economic realities.
“How do we make sure that the entire planet benefits from AI?” asks Benjamin Rosman, a professor at the University of the Witwatersrand and a lead developer of InkubaLM, a generative model trained on five African languages. “I want more and more voices to be in the conversation”.
Beyond English, beyond Silicon Valley
Large language models work by training on massive troves of online text. While the latest versions of GPT, Gemini or LLaMa boast multilingual capabilities, the overwhelming presence of English-language material and Western cultural contexts in these datasets skews their outputs. For speakers of Hindi, Arabic, Swahili, Xhosa and countless other languages, that means AI systems may not only stumble over grammar and syntax, they can also miss the point entirely.
“In Indian languages, large models trained on English data just don’t perform well,” says Janki Nawale, a linguist at AI4Bharat, a lab at the Indian Institute of Technology Madras. “There are cultural nuances, dialectal variations, and even non-standard scripts that make translation and understanding difficult.” Nawale’s team builds supervised datasets and evaluation benchmarks for what specialists call “low resource” languages, those that lack robust digital corpora for machine learning.
It’s not just a question of grammar or vocabulary. “The meaning often lies in the implication,” says Vukosi Marivate, a professor of computer science at the University of Pretoria, in South Africa. “In isiXhosa, the words are one thing but what’s being implied is what really matters.” Marivate co-leads Masakhane NLP, a pan-African collective of AI researchers that recently developed AFROBENCH, a rigorous benchmark for evaluating how well large language models perform on 64 African languages across 15 tasks. The results, published in a preprint in March, revealed major gaps in performance between English and nearly all African languages, especially with open-source models.
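To make the gap measurement concrete, here is a minimal sketch of the kind of per-language, per-task evaluation loop a benchmark like AFROBENCH runs; the function names, task list, and exact-match scoring are illustrative assumptions, not the benchmark's actual harness.

```python
# Sketch of a multilingual benchmark loop (hypothetical API, not AFROBENCH's).
from statistics import mean

LANGUAGES = ["swa", "hau", "yor", "zul", "xho"]  # a subset of the 64 languages
TASKS = ["sentiment", "ner", "qa"]               # a subset of the 15 tasks

def query_model(model, prompt: str) -> str:
    """Placeholder: replace with a real model call (API request or local inference)."""
    raise NotImplementedError

def score(prediction: str, reference: str) -> float:
    """Exact match; real benchmarks use task-specific metrics (F1, chrF, etc.)."""
    return float(prediction.strip() == reference.strip())

def evaluate(model, dataset: dict) -> dict:
    """dataset maps (language, task) -> list of (prompt, reference) pairs."""
    results = {}
    for lang in LANGUAGES:
        task_scores = [
            mean(score(query_model(model, p), r) for p, r in dataset[(lang, task)])
            for task in TASKS
            if dataset.get((lang, task))
        ]
        results[lang] = mean(task_scores) if task_scores else None
    return results  # compare each language's average against English to expose the gap
```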
Similar concerns arise in the Arabic-speaking world. “If English dominates the training process, the answers will be filtered through a Western lens rather than an Arab one,” says Mekki Habib, a robotics professor at the American University in Cairo. A 2024 preprint from the Tunisian AI firm Clusterlab finds that many multilingual models fail to capture Arabic’s syntactic complexity or cultural frames of reference, particularly in dialect-rich contexts.
Governments step in
For many countries in the Global South, the stakes are geopolitical as well as linguistic. Dependence on Western or Chinese AI infrastructure could mean diminished sovereignty over information, technology, and even national narratives. In response, governments are pouring resources into creating their own models.
Saudi Arabia’s national AI authority, SDAIA, has built ‘ALLaM,’ an Arabic-first model based on Meta’s LLaMa-2, enriched with more than 540 billion Arabic tokens. The United Arab Emirates has backed several initiatives, including ‘Jais,’ an open-source Arabic-English model built by MBZUAI in collaboration with US chipmaker Cerebras Systems and the Abu Dhabi firm Inception. Another UAE-backed project, Noor, focuses on educational and Islamic applications.
In Qatar, researchers at Hamad Bin Khalifa University, and the Qatar Computing Research Institute, have developed the Fanar platform and its LLMs Fanar Star and Fanar Prime. Trained on a trillion tokens of Arabic, English, and code, Fanar’s tokenization approach is specifically engineered to reflect Arabic’s rich morphology and syntax.
India has emerged as a major hub for AI localization. In 2024, the government launched BharatGen, a public-private initiative funded with ₹235 crore (€26 million) and aimed at building foundation models attuned to India’s vast linguistic and cultural diversity. The project is led by the Indian Institute of Technology Bombay and also involves its sister institutes in Hyderabad, Mandi, Kanpur, Indore, and Madras. The programme’s first product, e-vikrAI, can generate product descriptions and pricing suggestions from images in various Indic languages. Startups like Ola-backed Krutrim and CoRover’s BharatGPT have jumped in, while Google’s Indian lab unveiled MuRIL, a language model trained exclusively on Indian languages. The Indian government’s AI Mission has received more than 180 proposals from local researchers and startups to build national-scale AI infrastructure and large language models, and the Bengaluru-based company Sarvam AI has been selected to build India’s first ‘sovereign’ LLM, expected to be fluent in various Indian languages.
In Africa, much of the energy comes from the ground up. Masakhane NLP and Deep Learning Indaba, a pan-African academic movement, have created a decentralized research culture across the continent. One notable offshoot, Johannesburg-based Lelapa AI, launched InkubaLM in September 2024. It’s a ‘small language model’ (SLM) focused on five African languages with broad reach: Swahili, Hausa, Yoruba, isiZulu and isiXhosa.
“With only 0.4 billion parameters, it performs comparably to much larger models,” says Rosman. The model’s compact size and efficiency are designed to meet Africa’s infrastructure constraints while serving real-world applications. Another African model is UlizaLlama, a 7-billion parameter model developed by the Kenyan foundation Jacaranda Health, to support new and expectant mothers with AI-driven support in Swahili, Hausa, Yoruba, Xhosa, and Zulu.
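A quick back-of-envelope calculation shows why 0.4 billion parameters suits constrained infrastructure: the weights alone fit on a modest phone or a single small GPU. The sketch below is illustrative arithmetic only; serving adds activation and runtime overhead.

```python
# Approximate weight-only memory footprint of a 0.4B-parameter model.
params = 0.4e9
bytes_per_param = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

for dtype, nbytes in bytes_per_param.items():
    print(f"{dtype}: ~{params * nbytes / 2**30:.2f} GiB")
# fp32: ~1.49 GiB, fp16: ~0.75 GiB, int8: ~0.37 GiB, int4: ~0.19 GiB
```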
India’s research scene is similarly vibrant. The AI4Bharat laboratory at IIT Madras has just released IndicTrans2, which supports translation across all 22 scheduled Indian languages. Sarvam AI, another startup, released its first LLM last year to support 10 major Indian languages. And KissanAI, co-founded by Pratik Desai, develops generative AI tools to deliver agricultural advice to farmers in their native languages.
The data dilemma
Yet building LLMs for underrepresented languages poses enormous challenges. Chief among them is data scarcity. “Even Hindi datasets are tiny compared to English,” says Tapas Kumar Mishra, a professor at the National Institute of Technology, Rourkela in eastern India. “So, training models from scratch is unlikely to match English-based models in performance.”
Rosman agrees. “The big-data paradigm doesn’t work for African languages. We simply don’t have the volume.” His team is pioneering alternative approaches like the Esethu Framework, a protocol for ethically collecting speech datasets from native speakers and redistributing revenue back to further development of AI tools for under-resourced languages. The project’s pilot used read speech from isiXhosa speakers, complete with metadata, to build voice-based applications.
In Arab nations, similar work is underway. Clusterlab’s 101 Billion Arabic Words Dataset is the largest of its kind, meticulously extracted and cleaned from the web to support Arabic-first model training.
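For a sense of what "meticulously extracted and cleaned" involves, here is a minimal sketch of common filtering heuristics for web-scraped Arabic text; the thresholds and rules are illustrative assumptions, not Clusterlab's actual pipeline.

```python
import re
import unicodedata

ARABIC_CHARS = re.compile(r"[\u0600-\u06FF]")

def clean_line(line: str) -> str | None:
    """Return a normalized line, or None if it should be dropped."""
    line = unicodedata.normalize("NFKC", line)      # unify Unicode forms
    line = re.sub(r"<[^>]+>", " ", line)            # strip residual HTML tags
    line = re.sub(r"\s+", " ", line).strip()        # collapse whitespace
    if len(line) < 20:                              # drop short boilerplate fragments
        return None
    if len(ARABIC_CHARS.findall(line)) / len(line) < 0.5:
        return None                                 # keep mostly-Arabic lines
    return line
```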
The cost of staying local
But for all the innovation, practical obstacles remain. “The return on investment is low,” says KissanAI’s Desai. “The market for regional language models is big, but those with purchasing power still work in English.” And while Western tech companies attract the best minds globally, including many Indian and African scientists, researchers at home often face limited funding, patchy computing infrastructure, and unclear legal frameworks around data and privacy.
“There’s still a lack of sustainable funding, a shortage of specialists, and insufficient integration with educational or public systems,” warns Habib, the Cairo-based professor. “All of this has to change.”
A different vision for AI
Despite the hurdles, what’s emerging is a distinct vision for AI in the Global South – one that favours practical impact over prestige, and community ownership over corporate secrecy.
“There’s more emphasis here on solving real problems for real people,” says Nawale of AI4Bharat. Rather than chasing benchmark scores, researchers are aiming for relevance: tools for farmers, students, and small business owners.
And openness matters. “Some companies claim to be open-source, but they only release the model weights, not the data,” Marivate says. “With InkubaLM, we release both. We want others to build on what we’ve done, to do it better.”
In a global contest often measured in teraflops and tokens, these efforts may seem modest. But for the billions who speak the world’s less-resourced languages, they represent a future in which AI doesn’t just speak to them, but with them."
Sibusiso Biyela, Amr Rageh and Shakoor Rather
20 May 2025
https://www.natureasia.com/en/nmiddleeast/article/10.1038/nmiddleeast.2025.65
#metaglossia_mundus
'You can't reproduce the language, but you can reproduce the effect it has on you when you read it'
"MAY 21, 2025
May 20, 2025 | Jaclyn Severance
Beautiful Choices: UConn Makes Its Mark on the World of Literary Translation
'You can't reproduce the language, but you can reproduce the effect it has on you when you read it'
On its face, the idea of translating a piece of literature from one language to another seems simple.
The English word “cat,” for instance, is chat when translated into French. In Spanish, it’s gato. In Turkish, it’s kedi. In Russian, it’s kot.
But with most forms of literature, the reality of translation is not so simple.
“There’s this equivalency assumption – that I can make an equivalent in the language that I am translating into,” says Catherine Keough, a literary translator and graduate student in UConn’s Department of English.
“But once someone starts engaging with the practice of translation, it becomes so clear that every single move that the translator is making to shift this text into the language they’re working in is a choice,” Keough says.
Choosing to put one word next to another can change that first word’s meaning.
Adding a third word into the mix can complicate things even further.
When it comes to a literary form like poetry, there’s also sometimes rhyme to contend with. And rhythm. And attitude.
A poem has tone. A poet instills a mood into the language they choose – it’s light, or it’s dark, or it’s somewhere in between. It could be humorous, or joyful, or sad, or none of those things, or all of those things, depending on choice.
A chosen phrase, the juxtaposition of words – it’s all done deliberately to convey something.
And when those phrases and words are crafted in Mandarin Chinese, or Arabic, or Hindi, the emotions they evoke and the cultural context they reflect typically don’t just translate word-for-word into another language, like English.
“Whether we’re focusing on the meaning, or the sound, or the rhythm, or the rhyme, or any of the formal features of the writing, every time we make one of those choices, we’re automatically making other choices impossible,” explains Christopher Clarke, a literary translator; visiting assistant professor in UConn’s Department of Literatures, Cultures, and Languages; and editor of World Poetry Review, UConn’s literary translation journal.
Because of this complexity, because of the myriad choices each translator must make when attempting to translate a text, translating poetry is as much of a skill and an art as writing original poetry itself.
And for the last nine years, UConn’s program in literary translation has been teaching hundreds of graduate and undergraduate students how to undertake translations – and how to do them well.
From Pond Hockey to Hockey East
Established in 2016, UConn’s program in literary translation has at times had as many as 125 undergraduate and 20 graduate students participating in its minor in literary translation and graduate certificate programs, respectively, or just taking the program’s course offerings as electives.
One year, Clarke noted, he had nearly 20 different languages in the undergraduate classroom at once – something that makes UConn’s program somewhat unique compared to others in the U.S.
“It is a multilingual workshop environment – everyone comes in with whatever other language they work with, and we build around that,” he says. “There are a few others like this in the country, but not many.”
Students in the program range from native bilingual speakers, to new learners of a foreign language, to creative writers looking for new techniques for expression, and they all share one common language to work toward: English.
They’re taught the tools and techniques for selecting, translating, and pitching translations, with many students publishing their work in literary journals or going on to pursue book-length translation projects.
“World Literature Today, one of the most respected international magazines in the field, has ranked us ‘among the finest translation programs in the world,’” notes Peter Constantine, a professor, literary translator, and editor and the director of UConn’s literary translation program. “This recognition reflects the impressive number of translations and peer-reviewed articles our undergrad and grad students have published, along with the prestigious awards and grants they’ve earned, including the NEA and PEN/Heim translation grants.”
World Poetry Review, the biannual literary journal founded in 2017 and based in UConn’s literary translation program, is just one of many outlets for literary translators seeking to have their work published.
And while it’s still a relative newcomer in a field that looks significantly different outside of the U.S. – only approximately 3% of all books in the United States are works in translation, compared to 45% in France and even greater numbers in other countries, according to Clarke – World Poetry Review is making its mark in the literary translation world.
Four translations included in the journal’s Issue 10 were longlisted this spring for inclusion in the “Best Literary Translations” anthology, published annually by Deep Vellum.
One translation – Kate Deimling’s translation of six poems by the French poet Gabriel Zimmerman – will be included in the anthology’s 2026 edition.
The four longlisted works – translations from Deimling, Samuel Martin, Heather Green, and recent UConn alumna Zeynep Özer ’24 MA – competed amongst 400 submissions for inclusion in the anthology, a competition Constantine described as “particularly intense, as the anthology chooses the best translations of poetry, short fiction, and essays, drawn from U.S. literary journals and magazines.”
The 2026 anthology will mark the second time that a translation from World Poetry Review has been included in “Best Literary Translations.” The 2025 edition included work by the contemporary poet Yordan Eftimov translated from Bulgarian by Jonathan Dunne. UConn graduate student Xin Xu’s ’23 Ph.D. translation of the Chinese poet Yuan Yongping was longlisted that year.
For UConn’s literary translation journal and program, it isn’t quite the equivalent of winning the World Series or the Stanley Cup.
But it’s recognition that the program has grown significantly from the humble beginnings of skates on a pond to a team of real players in a growing and dynamic international field.
“It’s like if our team was invited to join a popular conference – like if suddenly World Poetry Review got to play in Hockey East,” says Clarke, the journal’s editor. “The bonus for us is that we will have work published next to work from other better-known journals or long-established journals, and our name listed among these many important other publications.”
Is the Original Beautiful? Is Yours?
There’s no golden rule on the kinds of translations that get accepted to journals like World Poetry Review, explains Clarke.
Texts can be contemporary or historical. Translators can be new to the field or established.
Every issue is different, though Clarke tries to curate his issues around submissions that complement each other in some way.
“We just launched Issue 11, and we’d received a really great submission of contemporary Ukrainian poetry, written in Ukrainian,” Clarke says. “And then, as counterpoint, I had another submission of Ukrainian poetry written in Russian. And then, as a late submission that I also really liked, we had some poetry from Russia, in Russian, and I thought it was a really interesting mix of aesthetic and political commentary to run the three together at the same time.”
The journal also launched a bonus dossier featuring 14 different translations of the 1926 poem “J’ai tant rêvé de toi” by the French poet Robert Desnos – a striking example of how each translator’s individual choices can impact the way a reader experiences the original text.
“I tell our students: You can translate this, and it might mean the same thing, but ask yourself, is the poem in the original language beautiful? Is yours?” Clarke says. “And if they aren’t both, then you’re doing a disservice and it’s not a good translation, even if it’s very accurate.
“You have to translate the way you react to it, and really what you’re trying to reproduce is not the language – because you can’t reproduce the language, you’re using different tools. But you can reproduce the effect that it has on you when you read it.”
World Poetry Review will have an open call for submissions for its next issue in August 2025 – an opportunity for both established and upcoming translators, including UConn students, to compete for a space that’s quickly become notable in the field.
“Competition for publication in World Poetry Review is considerable,” says Constantine. “World Poetry Review is not a student publication, but it has included outstanding translations by both UConn undergraduate and graduate students, work that holds its own beside that of widely published literary translators.”
That includes work like alumnus Michal Ciebielski’s ’20 (ENG, CLAS) translation of Grzegorz Kwiatkowski, which set off a remarkable career for the contemporary Polish poet, according to Constantine.
“Thanks to Michal’s translations, Kwiatkowski’s work was discovered outside Poland, leading to versions in German, French, Greek, and Slovene,” Constantine says.
“It’s a reminder of how literary translators can open doors and shape careers for the writers they translate, and it’s especially rewarding to see one of our own undergraduates play such a part.”
Issue 12 of World Poetry Review will launch in October."
https://today.uconn.edu/2025/05/beautiful-choices-uconn-makes-its-mark-on-the-world-of-literary-translation/
#metaglossia_mundus
Google Meet will translate all your conversations in real time. As Google announced at Google I/O 2025, the videoconferencing service will use AI to dub the speech of participants who do not speak the same language as you.
"Published on 21 May 2025 at 08:51
Google Meet will translate all your conversations in real time. As Google announced at Google I/O 2025, the videoconferencing service will use AI to dub the speech of participants who do not speak the same language as you.
Google used Google I/O 2025 to announce the arrival of a new feature in Google Meet. The videoconferencing service can now translate its users' conversations in real time, with "low latency". To pull off this feat, Google Meet naturally relies on Google's artificial intelligence.
The Mountain View giant says the AI model behind the feature is none other than AudioLM, an artificial intelligence capable of generating audio, such as human voices or music, from a short sound sample. The model is developed by DeepMind, Google's AI subsidiary.
How does Google Meet's real-time translation work?
Concretely, the AI dubs what your interlocutors say. This is not a matter of subtitles, a feature already available on the service. Instead of hearing your colleague's voice in their native language, you will hear the AI's voice in the language of your choice; the AI substitutes itself for your interlocutor's speech. They will likewise get a real-time translation of your replies, letting you converse without having to learn a new language.
"Language differences are a fundamental barrier to connecting with other people, whether you're working across regions or just chatting with family abroad," Google explains in a blog post.
As Google explains, you will first hear the original voice, faintly, and the translation will follow just after, which "will cause delays in the conversation". Google adds that Google Meet is designed to preserve the characteristics of the speaker's voice, such as tone, intonation and emotion. Thanks to generative AI, Meet can understand and perceive the nuances in what people say.
For now, Google Meet's real-time translation is reserved for subscribers to Google AI Pro and Ultra, the new plan at more than 250 dollars per month that has just arrived in the United States.
Only two languages are supported: English and Spanish. Italian, German and Portuguese will be rolled out in the "coming weeks". Google plans to make real-time translation available to all businesses using Workspace, and promises "early testing" later this year."
https://www.01net.com/actualites/google-meet-traduction-temps-reel-conversations-arrive.html
#metaglossia_mundus
"Google’s new features in Search, Meet and more escalate the AI war
By Tim Biggs
Updated May 21, 2025 — 2.29pm, first published at 9.41am
Google has announced a new wave of AI features, expanding the technology’s reach to online shopping, video conferences and even its ubiquitous search engine, which is getting a mode that relies entirely on chatbots rather than web links.
The new “AI mode” for Google search is currently live in the US only, and is designed to engage users in conversation to answer their queries. It appears on browsers and in the Google app, and will automatically perform multiple web searches to speak confidently on any topic. It can even be given follow-up questions, or be prompted with images, videos or screenshots.
After decades of dominance, Google’s search empire is increasingly under threat from startups such as OpenAI and Perplexity. CREDIT: GETTY IMAGES
Meanwhile, shopping in AI Mode will allow bots to go through checkout on your behalf and can apply products to your own photos for a preview of how new clothes will look.
Other features, the majority of which are only available to Google’s paying subscribers, include live language translations in Meet calls, personalised smart replies in Gmail, and a Deep Think mode for the Gemini chatbot that can reason to break down complex tasks. In the future, Google plans to roll out expanded AI powers to its Chrome web browser, so the chatbot could gain a holistic understanding of the projects you’re working on.
“More intelligence is available, for everyone, everywhere. And the world is responding, adopting AI faster than ever before,” said chief executive Sundar Pichai in announcing the updates overnight at the Google I/O developer conference.
“What all this progress means is that we’re in a new phase of the AI platform shift where decades of research are now becoming reality for people, businesses and communities all over the world.”
The new products come at a time when the search giant is under unprecedented threat from AI start-ups as well as old rivals including Microsoft and Apple.
US-based OpenAI and Perplexity are fast moving into Google’s turf off the back of rapidly improving generative AI. And Apple, which said last week it is seeing Google searches on iPhones drop for the first time, is expected to make some major AI announcements of its own at its development conference next month.
On Wednesday, Bloomberg reported that Apple was planning to allow outsiders to build AI features based on the large language models that the company uses for Apple Intelligence, citing people with knowledge of the matter.
The move is part of a broader attempt to become a leader in generative AI; a field that has bedevilled Apple. The company launched the Apple Intelligence platform last year in a bid to catch up with rivals. But the initial features haven’t been widely used, and other AI platforms remain more powerful. The bet is that expanding the technology to developers will lead to more compelling uses for it.
Apple Intelligence already powers iOS and macOS features such as notification summaries, text editing and basic image creation. But the new approach would let developers integrate the underlying technology into specific features or across their full apps. The plan could echo the early success of the App Store, and turn Apple’s operating systems into the largest software platforms for AI.
A spokesperson for Cupertino, California-based Apple declined to comment.
The new plan for developers is expected to be one of the highlights of the developers conference, better known as WWDC. But the biggest announcement will likely be overhauled versions of the iPhone, iPad and Mac operating systems, part of a project dubbed “Solarium”. The idea is to make the interfaces more unified and cohesive. The new approach will be largely reminiscent of visionOS, the operating system on the Vision Pro headset.
Yet while Google and Apple go head-to-head on AI, the two giants also face extraordinary regulatory scrutiny.
A US federal judge has determined that Google has an illegal monopoly in search, and is mulling what penalties to impose, with one mooted option being the forced sale of the Chrome web browser. Yet with roughly 90 per cent of the search market, and the latest raft of AI features touching every corner of the company’s business, wresting away Google’s hold on the ecosystem could be next to impossible.
Meanwhile in a separate matter, a US judge ruled last month that Apple must allow developers to steer customers to the web to complete purchases, bypassing the company’s revenue sharing system. Which means that a surge in new apps, powered by expanded access to the iPhone’s on-device AI, may not result in as big a financial benefit as Apple hopes."
https://www.smh.com.au/technology/google-apple-turn-up-the-heat-in-the-ai-arms-race-20250521-p5m0xm.html
#metaglossia_mundus
"Google Meet adding real-time speech interpreter, consumer beta today
Abner Li
May 20 2025 - 11:00 am PT
At I/O 2025, Google announced that Meet is getting real-time Speech Translation capability that’s like having an actual human interpreter in the call.
Meet already offers text captions that can be translated in real-time. Speech Translation takes things a step further, with Google translating “your spoken words into your listener’s preferred language.”
Language differences are a fundamental barrier to connecting with other people — whether you’re working across regions or just chatting with family abroad.
For example, if you’re speaking Spanish, the other person will hear English that preserves your voice, including tone, intonation, and emotion. Meanwhile, you will hear what they’re saying in Spanish.
This happens in “near real time, with low-latency,” with the goal of allowing a free-flowing conversation. You will first hear the original language/speech, but it will be quite faint, before the translated version comes in. Officially, Google notes how “Translation will cause delays in the conversation.”
This is powered by a large language audio model from Google DeepMind. Trained and built on audio data, AudioLM performs direct audio-to-audio transformations. This allows it to preserve as much of the original audio as possible.
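Conceptually, the "faint original, then translation" behaviour can be pictured as a small streaming pipeline like the sketch below. The model call is a placeholder: DeepMind's audio model is not publicly exposed, and every name here is a hypothetical illustration, not Google's implementation.

```python
import queue

def translate_chunk(audio_chunk: bytes) -> bytes:
    """Placeholder for the direct audio-to-audio translation model."""
    raise NotImplementedError

def interpret(mic: queue.Queue, speaker: queue.Queue) -> None:
    """Forward each captured chunk twice: original ducked, then translated."""
    while True:
        chunk = mic.get()
        speaker.put((chunk, 0.1))                   # original speech, faint
        speaker.put((translate_chunk(chunk), 1.0))  # translation, full volume
```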
In the top-right corner of Google Meet on the web, you can access a “Speech translation with Gemini” panel to specify the “Language you speak in this call” and “Language you prefer to hear.” The entire video feed will get a Gemini glow and “Translating from [language]” pill in the corner.
Notably, Speech Translation will begin rolling out in beta for consumers starting today as part of the new Google AI Pro and Ultra plans. Available on the web, only one participant in the call needs to be subscribed for Speech Translation to work. English and Spanish are supported right now, with Italian, German, and Portuguese following in the “coming weeks.”
Google plans to bring this to businesses with “early testing coming to Workspace customers this year.”"
https://9to5google.com/2025/05/20/google-meet-speech-translation/
#metaglossia_mundus
"Google Meet traducirá en tiempo real diálogos en inglés y español con "la voz" del usuario por REDACCIÓN EFE MAYO 20, 2025
Captura de un video de Google de la presentación del traductor de voz simultáneo en Google Meet. EFE/ Google Mountain View (EE.UU.), 20 may (EFE).- Google Meet anunció este martes que desde hoy ya puede traducir conversaciones en inglés y español manteniendo "la voz" del interlocutor gracias a la inteligencia artificial (IA), un anuncio realizado por el gigante tecnológico en su conferencia de desarrolladores Google I/O.
De acuerdo Sundar Pichai, director ejecutivo de la compañía, esta nueva función ayuda a las personas a superar las barreras lingüísticas "casi en tiempo real", manteniendo la voz, tono y expresión de interlocutor.
El gigante tecnológico demostró esta nueva función en el Anfiteatro Shoreline de Mountain View, en la sede de la compañía en California, con un video pregrabado en el que se vio una conversación entre una estadounidense y una latina que tenían una conversación (cada una en su lengua nativa) sobre el alquiler de una casa.
"El resultado es una conversación auténtica y natural, incluso en diferentes idiomas, ya sean nietos angloparlantes charlando sin esfuerzo con sus abuelos hispanohablantes o colegas de todo el mundo conectados desde diferentes continentes", anota Google en un comunicado.
De momento, esta herramienta solo está disponible para suscriptores de los planes Google AI Pro y Ultra en versión beta, pero la empresa destacó que "pronto estará disponible para empresas".
Además, adelantó que añadirán más idiomas en las próximas semanas, pero Google no indicó cuáles serían." https://quepasamedia.com/noticias/vida-y-estilo/tecnologia/google-meet-traducira-en-tiempo-real-dialogos-en-ingles-y-espanol-con-la-voz-del-usuario/ #metaglossia_mundus
Imogen Nunn, 25, who was born deaf, died in Brighton on New Year’s Day 2023 after taking a poisonous substance.
"A nurse involved in the care of a deaf TikTok star who died after ingesting poison warned of a “huge shortage” of British Sign Language (BSL) interpreters during an inquest into the death.
Imogen Nunn, 25, died in Brighton, East Sussex, on New Year’s Day 2023 after taking a poisonous substance she ordered online.
Ms Nunn, who was born deaf, raised awareness of hearing and mental health issues on her social media accounts, which attracted more than 780,000 followers.
On Tuesday, the inquest at West Sussex Coroners’ Court in Horsham was told of a “huge shortage” of BSL interpreters by Carmen Jones, a nurse for the deaf adult community team (DACT) at South West London and St George’s NHS Trust.
Just days before Ms Nunn’s death, she received a check-in visit at her home from care professionals after sending a text message saying she had had an increase in suicidal thoughts.
No BSL interpreter was brought to the meeting as there was not enough time to arrange it, the court was told in March.
Communicating through a BSL interpreter on Tuesday, Ms Jones said: “There is a huge shortage of BSL interpreters.
“Even in my current job I still struggle to get interpreters for my role in my work and because I’ve seen deaf patients requiring access to mental health teams, I see that they are also struggling.”
She told senior coroner Penelope Schofield “it would be very difficult” for a deaf person to communicate the crisis they were in without an interpreter.
“It’s based around language, how can anyone understand another person if they don’t share a language?” Ms Jones added.
Consultant psychiatrist Simon Baker, who visited Ms Nunn on 29 December 2022 at her home, previously told the court he was “surprised” at how well the meeting had gone.
The inquest into Ms Nunn’s death was previously adjourned for two months because there were no BSL interpreters available to translate for two members of DACT.
This correlated with concerns noted in a prevention of future deaths report written by Ms Schofield regarding Ms Nunn’s care.
It reads: “During the course of the inquest (which has yet to be concluded), I heard evidence that there was a lack of availability of British Sign Language Interpreters able to help support deaf patients in the community who were being treated with mental health difficulties.
“This was particularly apparent when mental health staff were seeking an interpreter at short notice for a patient who was in crisis.
“The lack of interpreters available has meant that urgent assessments are being carried out with no interpreters present.”..."
https://www.wharfedaleobserver.co.uk/news/national/25177992.deaf-tiktok-star-had-no-translator-care-check-up-three-days-death/
#metaglossia_mundus
"Concepts en sciences humaines et sociales à l’épreuve de la traduction (Univ. Paris Cité) Le 12 Juin 2025 À : Université Paris Cité Publié le 20 Mai 2025 par Marc Escola (Source : Florence Zhang)
Programme
9h30 François Rastier, Mots, termes, concepts : quelles unités, que traduire ?
10h30 Thamy Ayouch, Traduire la race, lever les démentis : le diachronique, le dialogique et le diacritique
11h30 René Lemieux, La retraduction de La pensée sauvage de Claude Lévi-Strauss en anglais et l’enjeu de l’intertextualité
Pause
14h Bruno Poncharal, Sur la notion de faux-ami conceptuel
15h Stéphane Feuillas, La voie avec quelle voix ? (titre provisoire)
16h Kazuhiko Yatabe, Le concept de société et la sociologie japonaise : le périlleux périple d’une notion clé des sciences sociales
17h Discussions.
Responsable : Florence Zhang Adresse : Université Paris Cité - voir sur une carte Document(s) joint : https://www.fabula.org/actualites/documents/127788_534a19bf7e7ddddd80fedc3578f8d002.pdf";
https://www.fabula.org/actualites/127788/concepts-en-sciences-humaines-et-sociales-a-l-epreuve-de-la-traduction.html #metaglossia_mundus
For those who can inject credibility, provenance and structure into the system, the next era of instant answer discovery offers a chance to lead.
"Google Search is collapsing. Here is how to profit from AI For those who can inject credibility, provenance and structure into the system, the next era of instant answer discovery offers strategic upside and a chance to lead.
James Dore Strategist May 21, 2025 – 10.52am Google Search is tanking.
The web’s main gateway is about to vanish in a puff of generative smoke. According to Statcounter, Google’s global search market share dropped below 90 per cent for the first time in nearly a decade.
Apple confirmed in 2025 that Safari-based Google searches had declined for the first time in 22 years. Referral traffic to news publishers has long been falling. AI is now accelerating that decline.
The search engine is being replaced by the answer engine. CREDIT: Getty
Artificial intelligence assistants are displacing search. Platforms such as ChatGPT, Copilot and Google’s SGE are becoming the default for getting fast, confident answers. But these systems don’t cite sources in any meaningful way. They replace discovery with assertion, and increasingly, they’re learning from content that they or their peers already produced.
We’ve entered the search ouroboros. It feeds on its own outputs and erodes its utility. If we reach the tipping point where both search and content are AI-generated, the system becomes fully recursive. A closed loop. A Frankenstein’s monster where truth is untethered from fact.
The implications are existential: for publishers, for educators, for policymakers and for anyone who believes a functioning democracy relies on discoverable, verifiable information.
The end of search
Google’s dominance is no longer guaranteed. Its share of global search traffic is slipping. In 2024, Google’s SGE trial led to a 44 per cent drop in link clicks.
Gartner predicts global search volume will drop 25 per cent by 2026 as AI tools replace basic queries. In the US, 58.5 per cent of Google searches already end without a single click.
SGE is Google’s attempt to stay relevant, but it signals a shift away from discovery. For content creators, there are fewer ways in.
The rise of AI as truth
Search historically pointed outwards, ranking results by authority, relevance and freshness. AI flattened that model.
The problem? The input pool is already changing. Estimates suggest over 30 per cent of web content in 2024 was AI-generated, and rising fast. Content farms and SEO-optimised spam are increasingly built for generative engines, not humans.
This is not just a quality issue. It’s an epistemic one. When the inputs are synthetic, the outputs become untethered. Like a photocopy of a photocopy, clarity degrades. Verifiability disappears. Truth blurs into statistical probability.
The diminishing loop: AI feeding AI
LLMs, the foundation for tools like ChatGPT, were trained on a snapshot of the world. That world is now increasingly shaped by those same models’ outputs.
This is model pollution: the point where AI-generated content floods the web, drowning out verifiable information.
With no metadata, attribution or accuracy incentives, AIs are training on themselves. Credibility is eroding, and the system risks converging into homogenised consensus. Once LLMs dominate knowledge access, misinformation will not be fringe. It will be standard.
Why this is an opportunity for the brave
I’m arguably part of the problem. I’ve built programmatic tools that help clients game traditional search rankings and LLM visibility. In doing so, I’ve fed the very machine that I’m warning you about. That may work now, but it proves the system still rewards scale over substance and remains brittle.
But in that brittleness lies opportunity. The advantage now lies with those who inject trust and provenance.
Major publishers and other sources of high-quality content should be racing to sign licensing agreements with OpenAI, Gemini, Perplexity and their ilk. It’s a rare win-win: LLMs desperately need trusted, human-authored content to ground their outputs. Traditional media needs alternatives to the collapsing search funnel. Those who move first will shape how future answers are surfaced and attributed.
Reddit, the AP and Politico license content to OpenAI. A startup could do the same with verified sources, but Facebook’s failed fact-checking showed why authority must be decentralised and democratised.
It comes at a cost
The moment AI replaces links with answers, monetisation models break. Referral traffic drops. CPMs collapse. The ad-supported web cannot survive this shift. Google earned $175 billion from search ads in 2023.
Which is why monetising the query layer is next. Not just through API access or licensing, but through real, scaled incentives for publishers. If GPTs become the new homepage, then attention must be compensated upstream.
This demands a rethink of how content is structured, attributed and surfaced. Publishers should not fear AI. They should optimise for it, build schemas, enforce provenance, and create incentives for users to go beyond the summary.
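As one concrete way to "build schemas and enforce provenance", a publisher can emit schema.org markup alongside each article so answer engines can attribute it. The sketch below is illustrative: the field values are invented, and JSON-LD is one option among several.

```python
import json

# Hypothetical article metadata; in production this would come from the CMS.
article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Jane Reporter"},
    "publisher": {"@type": "Organization", "name": "Example Media"},
    "datePublished": "2025-05-21",
    "isBasedOn": "https://example.com/primary-source",  # provenance pointer
}

# Embed the result in a <script type="application/ld+json"> tag on the page.
print(json.dumps(article, indent=2))
```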
GPTs have not yet found the right balance of monetisation, sustainability and competitive equity. In generative SEO, success means becoming part of the model’s memory.
The endgame
We are watching the web rewire itself. The search engine is being replaced by the answer engine. If left unchecked, that engine will soon spin into machine-confirmation bias, where self-reinforcing predictions masquerade as truth.
But it can be reversed. Trusted content creators, those with authority, rigour and real-world grounding, have a rare moment to become the backbone of the next knowledge layer.
The internet may be eating itself. But feeding it wisely can break the cycle."
https://www.afr.com/technology/google-search-is-collapsing-here-is-how-to-profit-from-ai-20250521-p5m0yl
United Nations page on the World Day for Cultural Diversity for Dialogue and Development, observed each year on 21 May. Cultural diversity is a driving force of development, both in terms of economic growth and as a means of leading a more fulfilling intellectual, emotional, moral and spiritual life.
"
World Day for Cultural Diversity for Dialogue and Development
21 May
Protecting the diversity of cultural expressions is more important than ever
Every year on 21 May, UNESCO organizes the World Day for Cultural Diversity for Dialogue and Development to celebrate not only the richness of the world's cultures, but also the essential role of intercultural dialogue in achieving peace and sustainable development.
According to UNESCO, 89% of all current conflicts take place in countries where intercultural dialogue is weak. To forge effective cooperation and maintain peace, strengthening intercultural dialogue must be a priority.
The cultural and creative sector is one of the most powerful engines of development in the world. It accounts for more than 48 million jobs globally, nearly half of which are held by women, representing 6.2% of all existing jobs and 3.1% of global GDP. It is also the sector that employs and offers opportunities to the largest number of young people under the age of 30.
Yet the cultural and creative sector still does not have the place it deserves in public policy and international cooperation.
A landmark Declaration for Culture
The largest world conference devoted to culture in the past 40 years, MONDIACULT 2022 brought together nearly 2,600 participants over three days in Mexico City. 150 States responded to the invitation of UNESCO and Mexico by sending delegations; 135 of them were represented at the highest level, by their Minister of Culture.
In this Declaration, the fruit of ten months of multilateral negotiations facilitated by UNESCO, States affirm for the first time that culture is a "global public good". As such, they call for culture to be integrated "as a specific goal in its own right" among the next United Nations Sustainable Development Goals.
The text adopted by the States defines a set of cultural rights to be taken into account in public policies, ranging from the social and economic rights of artists and artistic freedom to the right of indigenous communities to safeguard and transmit their ancestral knowledge, and to the protection and promotion of cultural and natural heritage.
It also calls for substantial regulation of the digital sector, in particular the major platforms, for the benefit of online cultural diversity, artists' intellectual property, and fair access to content for all.
The MONDIACULT 2025 Conference is an opportunity to take stock of national, regional and international achievements following the adoption of the historic MONDIACULT Declaration, which defined a set of cultural rights that must be guaranteed. The momentum has been clear, with the inclusion of culture on the agendas of the G20, the G7, the G77+China and other regional and international forums.
It also follows the Declaration's call to make culture "a specific goal in its own right" in the post-2030 development agenda. MONDIACULT 2025 is a decisive and strategic moment to launch a global call for a standalone culture goal, in the presence of thousands of decision-makers and cultural influencers.
Culture and sustainable development
With the United Nations' adoption of the 2030 Agenda for Sustainable Development in September 2015, and of the General Assembly resolution on culture and sustainable development in December of the same year, the message of the World Day for Cultural Diversity for Dialogue and Development is more important than ever. The best way to achieve the 17 Sustainable Development Goals is to draw on the creative potential of the world's diverse cultures and to engage in continuous dialogue to ensure that all members of society benefit from sustainable development.
The UNESCO Thematic Indicators for Culture in the 2030 Agenda are a framework of thematic indicators intended to measure and assess the contribution of culture to the achievement of the Goals and Targets of the 2030 Agenda for Sustainable Development, at both national and local levels.
Why does cultural diversity matter?
The cultural dimension is present in three quarters of the world's major conflicts. Bridging the gap between cultures is urgent and necessary for peace, stability and development.
Cultural diversity is a driving force of development, both for economic growth and as a means of leading a more fulfilling intellectual, emotional, moral and spiritual life. Several cultural conventions promote cultural diversity, affirming it as an indispensable asset for eradicating poverty and achieving sustainable development.
These international treaties strive to protect and safeguard the world's cultural and natural heritage, including archaeological sites, underwater heritage, museum collections, and intangible heritage such as oral traditions and other forms of heritage, while supporting creativity, innovation and the emergence of dynamic cultural sectors.
At the same time, the acceptance and recognition of cultural diversity, notably through the innovative use of media and information and communication technologies (ICTs), are conducive to dialogue among civilizations and cultures, and to mutual respect and understanding.
Origins and objectives of the Day
In 2001, UNESCO adopted the Universal Declaration on Cultural Diversity, and in December 2002 the United Nations General Assembly, in its resolution 57/249, declared 21 May the World Day for Cultural Diversity for Dialogue and Development. In 2015, the Second Committee of the UN General Assembly adopted resolution A/C.2/70/L.59 on culture and sustainable development, affirming culture's contribution to the three dimensions of sustainable development, further acknowledging the world's natural and cultural diversity, and recognizing that cultures and civilizations can contribute to, and are crucial enablers of, sustainable development.
The Day is an opportunity to deepen our understanding of the values of cultural diversity and to advance the four goals of the Convention on the Protection and Promotion of the Diversity of Cultural Expressions, adopted on 20 October 2005:
Support sustainable systems of governance for culture;
Achieve a balanced flow of cultural goods and services and increase the mobility of artists and cultural professionals;
Integrate culture in sustainable development frameworks; and
Promote human rights and fundamental freedoms.
The tenth session of the Conference of Parties to the Convention on the Protection and Promotion of the Diversity of Cultural Expressions (Paris, 18-20 June 2025) commemorates the 20th anniversary of the Convention. The aim is to review progress made, assess the implementation of the International Fund for Cultural Diversity, and discuss various recommendations.
Key activities include setting strategic directions for the Intergovernmental Committee for 2026-2027, electing 12 new Committee members, and revising the rules of procedure. In addition, a Civil Society Forum will be held on 17 June 2025 to strengthen collaboration with civil society stakeholders."
https://www.un.org/fr/observances/cultural-diversity-day
#metaglossia_mundus
"The £50,000 International Booker Prize Winner: ‘Heart Lamp’. Published by the independent house And Other Stories, ‘Heart Lamp’ wins the United Kingdom’s 2025 International Booker Prize.
‘Heart Lamp’ by Banu Mushtaq — in its translation to English by Deepa Bhasthi — is the winner of the 2025 International Booker Prize.
This evening (May 20) at the Tate Modern’s Turbine Hall in London, it has been announced that Heart Lamp: Selected Stories by India’s Banu Mushtaq and translated by Deepa Bhasthi is the winner of the 2025 International Booker Prize. This is the first win for the independent publisher And Other Stories, although its work has been nominated six times.
This honor, the brother award to the better-known Booker Prize for Fiction, is focused on translated work and was established in 2005. The win pays £50,000 (US$66,901), which is divided between the winning author and translator (or translators where a book has more than one). Each of the shortlisted titles is also given £5,000 (US$6,690)..."
In Feature Articles by Porter Anderson
Porter Anderson, Editor-in-Chief | @Porter_Anderson
May 20, 2025
https://publishingperspectives.com/2025/05/the-50000-international-booker-prize-winner-heart-lamp/
#metaglossia_mundus
"Language isn't just for communication -- it also shapes how sensory experiences are stored in the brain
May 20, 2025 | Source: PLOS
Our ability to store information about familiar objects depends on the connection between visual and language processing regions in the brain, according to a study published May 20 in the open-access journal PLOS Biology by Bo Liu from Beijing Normal University, China, and colleagues.
Seeing an object and knowing visual information about it, like its usual color, activate the same parts of the brain.
Seeing a yellow banana, for example, and knowing that the object represented by the word "banana" is usually yellow, both excite the ventral occipitotemporal cortex (VOTC). However, there's evidence that parts of the brain involved in language, like the dorsal anterior temporal lobe (ATL), are also involved in this process -- dementia patients with ATL damage, for example, struggle with object color knowledge, despite having relatively normal visual processing areas.
To understand whether communication between the brain's language and sensory association systems is necessary for representing information about objects, the authors tested whether stroke-induced damage to the neural pathways connecting these two systems impacted patients' ability to match objects to their typical color.
They compared color-identification behavior in 33 stroke patients to 35 demographically-matched controls, using fMRI to record brain activity and diffusion imaging to map the white matter connections between language regions and the VOTC.
The researchers found that stronger connections between language and visual processing regions correlated with stronger object color representations in the VOTC, and supported better performance on object color knowledge tasks.
These effects couldn't be explained by variations in patients' stroke lesions, related cognitive processes (like simply recognizing a patch of color), or problems with earlier stages of visual processing.
The authors suggest that these results highlight the sophisticated connection between vision and language in the human brain.
The authors add, "Our findings reveal that the brain's ability to store and retrieve object perceptual knowledge -- like the color of a banana -- relies on critical connections between visual and language systems. Damage to these connections disrupts both brain activity and behavior, showing that language isn't just for communication -- it fundamentally shapes how sensory experiences are neurally structured into knowledge.""
https://www.sciencedaily.com/releases/2025/05/250520161846.htm
#metaglossia_mundus
"On 13-14 May 2025, the Council of Europe’s Committee of Experts on Intercultural Inclusion (ADI‑INT) met in Strasbourg and advanced in its work on preparing guidance, tools and policies to foster inclusive societies across member states.
The Committee approved a “Guidance document on strategies for inclusion in the fields under the responsibility of the Steering Committee on Anti-discrimination, Diversity and Inclusion (CDADI)”, which will now be submitted to the CDADI for adoption at its 11th meeting (1-3 July 2025).
The ADI-INT members and participants also discussed the next steps for the self‑assessment tool on multilevel governance for intercultural inclusion, with the consultants: Migration Policy Group. This tool aims to assist national, regional and local authorities in evaluating their coordination and cooperation on inclusion policies.
Karoline Fernandez de la Hoz Zeitler (Spain) was re-elected Chair, and Krzysztof Stanowski (Lublin, Poland) was re-elected Vice-Chair of the ADI-INT. Grégory Jaquet (Canton of Neuchâtel, Switzerland) was reappointed Gender Equality Rapporteur for the Committee.
On her re-election the Chair said “The challenges we face in advancing intercultural inclusion, as well as the implementation of Recommendation CM/Rec(2022)10 on multilevel policies and governance for intercultural integration, continue to grow in relevance within both national and global contexts. The work, recommendations, and tools developed by the ADI-INT Committee to date have made a valuable contribution to promoting intercultural integration and strengthening multilevel governance in our countries. These outputs support the design, implementation, and monitoring of effective policies in these fields, not only in Spain but also across all participating member states. I am honoured to continue serving in this role, to help finalise the initiatives currently underway and to collaboratively shape the next steps of our Committee’s important work.”
During this 7th meeting, the Committee heard a presentation and exchanged views on state initiatives and community‑level strategies supporting the return and reintegration of displaced Ukrainians, as well as on the "Feasibility study on desegregation and inclusion policies and practices in the field of education for Roma and Traveller children." This was followed by a roundtable during which members and participants discussed challenges in implementing Committee of Ministers' Recommendation (2022)10 on multilevel policies and governance for intercultural integration, sharing obstacles and best‑practice solutions.
The Committee members then exchanged views with Petra Roter, President of the Advisory Committee on the Framework Convention for the Protection of National Minorities, on the Advisory Committee’s recently revised Thematic Commentary on Education.
The meeting concluded with a study visit to Strasbourg’s “T’Rêve” social inclusion project and the new “Equality Space” where participants observed some of the innovative best practices concerning inclusion, access to services and anti-discrimination education at the local level."" https://www.coe.int/en/web/migration-and-refugees/-/7th-meeting-of-committee-of-experts-on-intercultural-inclusion-adi-int-held-in-strasbourg #metaglossia_mundus
"Call for Submissions: DSAC Publishing Hub 2025/26 – Empowering South African Authors, Publishers, and Language Experts
The Department of Sport, Arts, and Culture (DSAC) and the Academic and Non-Fiction Authors Association of South Africa (ANFASA) have released the third annual DSAC Publishing Hub open call. Building on two successful previous cycles, this established program continues its mission of preserving all of South Africa's official languages and amplifying marginalised voices.
The DSAC Publishing Hub is poised to amplify its reach and deepen its impact with the publication of further impactful literary works. An impressive collection of fifty-seven works, comprising forty-three physical books, eight audiobooks, and six books converted to braille, has been produced to date. Notably, four Khoi and San books written in Khwedam, !Xuhnthali, and Nama were published, further emphasising the importance of linguistic and cultural preservation.
As part of DSAC's broader mission to revitalise South Africa's publishing sector and cultivate a thriving literary culture, the initiative continues to prioritise inclusivity and representation. With a strong emphasis on elevating indigenous-language literature and bridging systemic gaps, the program enters its third year with the ambitious goal of producing further remarkable literary works, cementing its role as a catalyst for transformation in the national literary ecosystem.
The Honourable Minister Gayton McKenzie affirmed that South African literature has significantly strengthened through the first two cycles of this initiative. "As we enter the third year, we're not just continuing to publish books—we're building a sustainable ecosystem for preserving our languages, documenting our histories, and ensuring all South Africans see themselves reflected in our national literature," said Minister McKenzie.
The DSAC Publishing Hub's inaugural cycle generated over 314,337 digital impressions, establishing new pathways for accessibility in South African literature.
Key elements of the 2025/26 initiative include:
Substantial Financial Support: R25,000 towards content development for successful authors and R80,000 per manuscript for publishers, along with job opportunities for language experts, as investment in South African literature
Indigenous Language Focus: Dedicated funding for works in official South African languages, including indigenous languages and the Khoi and San languages.
Genre Expansion: Support for fiction, non-fiction, drama, children's literature, poetry, political narratives, and historically significant works
Digital Innovation: Increased production of audiobooks to reach tech-savvy audiences
Accessibility Commitment: Continued development of Braille and audio formats for visually impaired readers
"The Publishing Hub has demonstrated consistent growth and extraordinary impact over its first two cycles," said the Prof Sihawukele Ngubane – Chairman of ANFASA “With this third round of funding, we're strengthening the foundation we've built—creating both cultural preservation and sustainable livelihoods in our creative economy."
This initiative addresses critical challenges in South Africa's publishing landscape, including limited publishing opportunities for indigenous language works and accessibility barriers for readers with disabilities.
Key Dates:
Publisher and Selection Panel Applications: Now Open (Closes June 13, 2025)
Manuscript Submissions: Now Open (Closes June 27, 2025)
Selections Announcement: August 2025
Publication Schedule: October 2025 - March 2026
The initiative particularly encourages submissions from women, young people, and authors from historically underrepresented communities.
For comprehensive submission guidelines or to apply, visit https://www.anfasa.org.za/dasc-publishing-hub/.
For media inquiries and further information, contact media@jtcomms.co.za or call (011) 788 7632, or contact Ms. Zimasa Velaphi, Head of Communications and Marketing, Cell: 072 172 8925, Email: zimasav@dsac.gov.za
...
Date Published
20-May-2025"
https://www.dsac.gov.za/CallforSubmissionsDSACPublishingHub2025/26
For AI to be beneficial in accessibility tech, the people who use it must be involved in its development.
"AI is now used for audio description. But it should be accurate and actually useful for people with low vision
Since the recent explosion of widely available generative artificial intelligence (AI), it now seems that a new AI tool emerges every week.
With varying success, AI offers solutions for productivity, creativity, research, and also accessibility: making products, services and other content more usable for people with disability.
The award-winning 2024 Super Bowl ad for Google Pixel 8 is a poignant example of how the latest AI tech can intersect with disability.
Directed by blind director Adam Morse, it showcases an AI-powered feature that uses audio cues, haptic feedback (where vibrating sensations communicate information to the user) and animations to assist blind and low-vision users in capturing photos and videos.
Javier in Frame showcases an accessibility feature found on Pixel 8 phones.
The ad was applauded for being disability inclusive and representative. It also demonstrated a growing capacity for – and interest in – AI to generate more accessible technology.
AI is also poised to challenge how audio description is created and what it may sound like. This is the focus of our research team.
Audio description is a track of narration that describes important visual elements of visual media, including television shows, movies and live performances. Synthetic voices and quick, automated visual descriptions might result in more audio description on our screens. But will users lose out in other ways?
AI as people’s eyes
AI-powered accessibility tools are proliferating. Among them is Microsoft’s Seeing AI, an app that turns your smartphone into a talking camera by reading text and identifying objects. The app Be My AI uses virtual assistants to describe photos taken by blind users; it’s an AI version of the original app Be My Eyes, where the same task was done by human volunteers.
There is a growing range of AI software options for text-to-speech and document reading, as well as for producing audio description.
Audio description is an essential feature to make visual media accessible to blind or vision impaired audiences. But its benefits go beyond that.
Increasingly, research shows audio description benefits other disability groups and mainstream audiences without disability. Audio description can also be a creative way to further develop or enhance a visual text.
Traditionally, audio description has been created using human voices, script writers and production teams. However, in the last year several international streaming services including Netflix and Amazon Prime have begun offering audio description that’s at least partially generated with AI.
Yet there are a number of issues with the current AI technologies, including their ability to generate false information. These tools need to be critically appraised and improved.
Is AI coming for audio description jobs?
There are multiple ways in which AI might impact the creation – and end result – of audio description.
With AI tools, streaming services can get synthetic voices to “read” an audio description script. There’s potential for various levels of automation, while giving users the chance to customise audio description to suit their specific needs and preferences. Want your cooking show to be narrated in a British accent? With AI, you could change that with the press of a button.
However, in the audio description industry many are worried AI could undermine the quality, creativity and professionalism humans bring to the equation.
The language-learning app Duolingo, for example, recently announced it was moving forward with “AI first” development. As a result, many contractors lost jobs that can now purportedly be done by algorithms.
On the one hand, AI could help broaden the range of audio descriptions available for a range of media and live experiences.
But AI audio description may also cost jobs rather than create them. The worst outcome would be a huge amount of lower-quality audio description, which would undermine the value of creating it at all.
AI shouldn’t undermine the quality of assistive technologies, including audio description.
Can we trust AI to describe things well?
Industry impact and the technical details of how AI can be used in audio description are one thing.
What’s currently lacking is research that centres the perspectives of users and takes into consideration their experiences and needs for future audio description.
Accuracy – and trust in this accuracy – is vitally important for blind and low-vision audiences.
Cheap and often free, AI tools are now widely used to summarise, transcribe and translate. But it’s a well-known problem that generative AI struggles to stay factual. Known as “hallucinations”, these plausible fabrications proliferate even when the AI tools are not asked to create anything new – like doing a simple audio transcription.
If AI tools simply fabricate content rather than make existing material accessible, it would even further distance and disadvantage blind and low-vision consumers."
Published: May 21, 2025 2.56am SAST
Kathryn Locke, Tama Leaver, Curtin University
https://theconversation.com/ai-is-now-used-for-audio-description-but-it-should-be-accurate-and-actually-useful-for-people-with-low-vision-256808
#metaglossia_mundus
"Kyiv hosts event on 85th anniversary of translator of Ukrainian literature Abbas Abdulla
Kyiv has hosted a literary and artistic evening at the Hennadii Udovenko Diplomatic Academy of Ukraine at the Ministry of Foreign Affairs, dedicated to the 85th anniversary of the birth of Azerbaijani poet, translator of Ukrainian literature, and diplomat Abbas Abdulla (1940-2019), Report informs.
The event was organized by the Embassy of Azerbaijan in Ukraine and the Ukrainian-Turkic Center.
Abbas Abdulla was the most outstanding Azerbaijani translator and researcher of Ukrainian literature; he had a high artistic command of the Ukrainian language, translated Ukrainian literature from the original, and knew Ukrainian culture, history, mentality, and traditions.
At the evening, a letter from the Ministry of Culture and Strategic Communications of Ukraine to the organizers and participants of the event was read, which noted Abbas Abdulla's enormous contribution to the development of Ukrainian-Azerbaijani relations, as well as the value of his translation heritage and the importance of honoring his memory...
The anthem of Azerbaijan and Azerbaijani melodies were also played on the bandura (Ukrainian plucked-string folk-instrument) by Marina Vishnevskaya. Poems by Abbas Abdulla were recited by activist Polina Pyatnitsa, who studies the Azerbaijani language at the Ukrainian-Turkic Center's elective course. The curator of the event, Maryna Honcharuk, said that for his highly artistic translations of Ukrainian literature into Azerbaijani, Abbas became a laureate of the M.T. Rylskyi Literary Prize in 1984 (the highest state award of Ukraine given to translators).
"He fell in love with the Ukrainian people and their culture, and although Abbas Abdulla's subsequent life developed rapidly, he became not only one of the greatest poets, translators, and literary scholars of Azerbaijan, but also an influential public and political figure and diplomat. However, Ukraine became the love of his life, and Abbas presented its literature and culture to the Azerbaijani people at a highly artistic level," Honcharuk said.
According to her, Abdulla wrote more than 200 works and several monographs on Azerbaijani-Ukrainian literary ties over a period of 100 years (1840-1940), research on the Azerbaijani "Shevchenkoiana" and the activities of the Ukrainian society "Prosvita" in Baku at the beginning of the last century.
"Individual works by Abbas were translated into Ukrainian by Mykola Miroshnychenko, Pavlo Movchan, Dmytro Pavlychko, Petro Perebyinis and others," said Maryna Honcharuk...
By Nazrin Babayeva May 19, 2025 15:52 https://report.az/en/cultural-policy/kyiv-hosts-event-in-honor-of-85th-anniversary-of-outstanding-poet-translator-of-ukrainian-literature-abbas-abdulla/
Interpreters who provide services through Canadian Hearing Services are heading into their fourth week on strike in Sudbury and the rest of Ontario
"Unionized interpreters for the deaf and deaf-blind people in the Northeast say they're on strike because of workload issues as well as to back their demands for better pay and contract terms.
The members of CUPE Local 2073 held a public demonstration Friday outside the office of the Canadian Hearing Services (CHS) on Riverside Drive in Sudbury to raise awareness...
Mara Waern, president of CUPE Local 2073, said CHS management has not been willing to make a meaningful offer to the workers. She said the union is not willing to accept a one-year contract that has only a two per cent pay increase.
Waern said the union wants a second year added to the contract with an additional three per cent pay hike.
"We're actually not asking for a lot," said Waern who added that a two-year contract gives the workers a bit more security and stability.
She added that the workforce is being stretched too thin and there is a significant service area for the workers at the Sudbury office.
"Well, I'm an employment consultant. My area is from Parry Sound, North Bay, Sudbury, Manitoulin Island, and now I'm also covering Sault Ste Marie," said Waern.
"Okay, that's my job, but we are 206 employees across the province, and we cover Ontario from Windsor to Thunder Bay to Sudbury and and like every corner," she added.
Waern added that CHS seemed to be less-than-cooperative in the bargaining process.
"They wouldn't commit to a next bargaining date, and they wanted a media blackout. I had said to them, ‘Well, tell you what. I won't do any media tonight. If you put something on the table today, and if you put something good on the table, maybe I'd agree to a media blackout,’ to which they said ‘no’. So everything has been a no. And we're hoping that if we can generate some community interest in our cause, that the employer will come back to the table and just do something that's fair," said Waern.
MPP West said he was at the rally because he wanted to show support for the workers.
"We all know the affordability crunch is hitting people when you have workers who have only had a one per cent increase since their last contract," said West...
"When the employer, when the CEO, is making $300,000 of taxpayer dollars, and these people are making a lot less, and just wanting to keep up with the cost of inflation, it's important we support them, especially during these times," said West.
MPP Gélinas commented that people are going without the services of interpreters for everyday activities where they need help.
"Lots of people depend on them for interpretation. They have weekly medical treatments, and now there is no interpretation. It's an essential service. Those workers deserve respect, asking for a two-year agreement. It's a no-brainer to me,” she said.
Len Gillis https://www.sootoday.com/local-news/strike-continues-for-ontarios-interpreters-for-people-who-are-deaf-and-deaf-blind-10678262
#metaglossia_mundus
"Rédigé par Zakaria SABTI, Co-Founder Volund Ventures le Mardi 20 Mai 2025
L’Afrique est souvent présentée comme le continent de la prochaine grande révolution numérique – et à l’aube de l’ère de l’intelligence artificielle, cette prédiction prend un sens nouveau. Malgré un retard historique dans l’accès au numérique traditionnel, dû notamment à un faible taux d’alphabétisation et à des infrastructures limitées, le continent pourrait sauter une étape technologique et tirer profit de l’IA de manière spectaculaire. Plus de 60 % de la population d’Afrique subsaharienne est analphabète, ce qui l’a longtemps exclue d’un Web 2.0 fondé sur le texte. Or, l’émergence d’interfaces vocales, visuelles et intuitives remet les compteurs à zéro – offrant à des millions de personnes un accès inédit à l’information et aux services digitaux. Ce contexte socio-économique particulier, combiné à l’adoption rapide des technologies mobiles, place l’Afrique en position de plus grand bénéficiaire de la révolution de l’IA.
Comment et pourquoi ?
De l’illettrisme numérique à l’inclusion par l’IA
Pendant des années, l’écart s’est creusé entre un monde connecté reposant principalement sur la lecture/écriture et les populations africaines éloignées de l’éducation formelle. En Afrique subsaharienne, des dizaines de millions de personnes n’ont pas accès à l’information en ligne simplement parce qu’elles ne savent ni lire ni écrire. Conséquence : un retard dans l’adoption des services du Web 2.0 (e-gouvernement, e-commerce, éducation en ligne, etc.), majoritairement textuels.
Cependant, l’intelligence artificielle vocale change la donne. Désormais, un agriculteur du Sahel peut interroger un assistant vocal dans sa langue maternelle pour obtenir la météo ou des conseils agricoles, sans avoir à taper le moindre mot. Des entreprises africaines innovent en lançant des smartphones vocaux adaptés : en Côte d’Ivoire, par exemple, le “Superphone” intègre un assistant qui comprend 50 langues africaines et vise les 40 % d’Ivoiriens illettrés. De même, des assistants vocaux émergent en langues locales comme le twi au Ghana ou le kiswahili en Afrique de l’Est. La technologie s’adapte aux réalités linguistiques africaines : le continent compte près de 2 000 langues, et des modèles linguistiques entraînés ou finement ajustés sur ces langues commencent à voir le jour, souvent en open source. Des initiatives comme Masakhane (communauté open-source en NLP africain) témoignent de cette effervescence. L’IA permet ainsi de contourner le frein de l’illettrisme en rendant le numérique accessible par la voix, la vidéo et les langues locales, ouvrant la voie à une inclusion massive de populations jusqu’ici en marge du digital.
Parallèlement, l’essor des avatars vidéo conversationnels (CVA) – ces personnages virtuels capables d’interactions orales en temps réel – apporte une dimension humaine aux interfaces. Il est bien moins intimidant pour un nouvel utilisateur de parler à un avatar amical dans sa langue que de remplir un formulaire en ligne. Des entreprises africaines, comme la startup marocaine TwinLink, conçoivent déjà des agents numériques hyperréalistes capables d’écouter, de comprendre les spécificités culturelles locales et de répondre avec des expressions faciales et des gestes naturels dans plusieurs langues. TwinLink, pionnier dans cette approche en Afrique, démontre déjà son efficacité dans des secteurs clés comme l'éducation, le recrutement et l’accès à l’information financière, facilitant ainsi l’inclusion numérique des populations jusque-là marginalisées. Demain, grâce à cette innovation locale, un élève marocain pourra dialoguer avec un tuteur virtuel adapté culturellement pour apprendre à lire, ou un jeune chercheur d’emploi pourra être accompagné, conseillé et orienté par un avatar vidéo adapté à son contexte culturel et linguistique. Ces avancées marquent un saut technologique majeur : l’Afrique peut ainsi passer directement de l’analogie orale traditionnelle à l’ère de l’IA conversationnelle, sans jamais devoir passer par l’étape textuelle. À l'image de la téléphonie mobile, adoptée massivement sans passer par le téléphone fixe, le continent peut aujourd’hui contourner le PC/clavier au profit des interfaces vocales et visuelles propulsées par l’IA.
Technologies émergentes : l’atout d’un terrain vierge
Si l’Afrique risque de profiter à plein de la révolution de l’IA, c’est aussi parce qu’elle aborde cette ère sans les boulets du passé. Là où des économies matures doivent composer avec des systèmes hérités et des processus lourds, de nombreux pays africains peuvent adopter dès le départ les technologies les plus récentes – et concevoir des solutions ex nihilo alignées sur ces nouveautés. Voici quelques-unes des technologies émergentes qui pourraient transformer le paysage africain :
- IA agentique et modèles multi-agents : On assiste à l’essor d’IA capables d’autonomie, c’est-à-dire pouvant planifier et exécuter des tâches avec un minimum d’intervention humaine. Un système agentique peut, par exemple, naviguer sur Internet, comparer des offres et prendre des décisions simples pour l’utilisateur. Mieux, on développe maintenant des modèles multi-agents, où plusieurs IA collaborent entre elles pour résoudre un problème complexe en se répartissant les rôles. Ce paradigme réduit les erreurs (les agents se corrigent mutuellement) et améliore l’efficacité, chaque agent pouvant gérer une partie de la tâche en parallèle. Pour l’Afrique, cela signifie la possibilité d’automatiser des pans entiers de l’économie – de l’optimisation des récoltes agricoles par des agents intelligents coopératifs, jusqu’à la gestion urbaine avec des essaims d’agents surveillant trafic, énergie, sécurité, de concert. Les frameworks d’orchestration d’agents se multiplient, rendant ces solutions plus accessibles.
- Apprentissage multimodal : Les modèles d’IA de nouvelle génération peuvent comprendre et générer plusieurs types de données simultanément – texte, voix, images, vidéo. GPT-4o ou des systèmes équivalents sont ainsi capables de décrire une image, d’analyser un document écrit et de répondre à des questions orales en un seul système intégré. « The future belongs to multimodal AI »: ces modèles peuvent traiter texte, images, vidéo et audio de manière fluide. Concrètement, en Afrique, un agriculteur pourra prendre en photo une plante malade et interroger oralement l’IA qui, comprenant l’image et la question, lui répondra dans sa langue avec un diagnostic. L’apprentissage multimodal ouvre la voie à des applications AI plus concrètes sur le terrain, et réduit la barrière technologique (puisqu’une simple photo ou question orale peuvent déclencher une expertise pointue).
- Fine-tuning à grande échelle : Plutôt que de créer des IA généralistes pas toujours adaptées aux réalités locales, la tendance est à la personnalisation de grands modèles sur des données spécifiques. Grâce à l’abondance de données et à la puissance de calcul croissante, on peut affiner les modèles existants (ex. GPT, Llama) sur des corpus en langues africaines ou sur des cas d’usage particuliers (santé rurale, météo tropicale…). Il devient envisageable d’avoir des IA hautement spécialisées, entraînées sur des millions d’exemples pertinents. L’Afrique profite particulièrement de cette évolution car elle peut créer ses propres modèles dérivés répondant aux défis locaux (par exemple un modèle météo fine-tuned sur le climat sahélien, ou un modèle médical entraîné sur les symptômes de maladies endémiques en Afrique centrale). L’essor de l’IA open-source avancée facilite encore cette appropriation : des modèles de pointe sont mis à disposition de tous (Meta a ouvert LLaMA, Stability AI a libéré Stable Diffusion pour les images, etc.), ce qui permet aux chercheurs et startups africains d’expérimenter librement sans dépendre des géants technologiques. En 2024, on a ainsi vu exploser les projets open-source fournissant des alternatives locales aux grands modèles propriétaires, démocratisant l’IA au niveau mondial.
- Convergence IA/robotique : La fusion de l’intelligence logicielle et des machines physiques arrive à maturité, entraînant un changement de paradigme industriel. Les robots ne sont plus de simples bras automatisés et aveugles : ils intègrent désormais des cerveaux IA qui leur permettent de percevoir leur environnement, d’apprendre et de s’adapter. Pour l’Afrique, cette convergence arrive à point nommé. Dans des secteurs comme l’agriculture, la santé ou la logistique, où le manque d’infrastructures et de main-d’œuvre qualifiée est un défi, l’essor de robots autonomes intelligents pourrait apporter un saut d’efficacité. On peut imaginer des drones agricoles pilotés par IA survolant les champs pour cibler précisément les zones à irriguer ou à traiter, des robots maçons imprimant en 3D des logements à bas coût, ou encore des robots médicaux aidant au diagnostic dans des dispensaires isolés. Ces innovations étaient inimaginables sans l’IA moderne pour doter les machines de capacités cognitives. Désormais, l’IA embarquée dans les robots permet de les déployer dans des environnements non structurés (routes africaines, chantiers informels, etc.) en surmontant les imprévus – là où de simples automatismes échoueraient. En somme, la robotique intelligente offre à l’Afrique une occasion unique d’industrialiser et de mécaniser de façon agile et adaptée, sans reproduire nécessairement le modèle des usines du XXᵉ siècle.
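To make the large-scale fine-tuning idea above concrete, here is a minimal sketch in Python, assuming the Hugging Face transformers, peft and datasets libraries; the base model (bigscience/bloom-560m, a small open multilingual model), the corpus file swahili_corpus.txt and the hyperparameters are illustrative placeholders, not details taken from the article.

# Minimal sketch: adapt an open multilingual model to a low-resource
# language with LoRA adapters (parameter-efficient fine-tuning).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "bigscience/bloom-560m"   # placeholder: any open causal LM
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Freeze the base model; train only small low-rank adapter matrices.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

# Hypothetical monolingual corpus, one sentence per line.
data = load_dataset("text", data_files={"train": "swahili_corpus.txt"})
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

Trainer(model=model,
        args=TrainingArguments("bloom-sw-lora", num_train_epochs=1,
                               per_device_train_batch_size=4,
                               learning_rate=2e-4),
        train_dataset=data["train"],
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()

The design point is that only the small adapter matrices are trained: a local team can adapt a multilingual base model on a single GPU rather than pretraining from scratch.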
All these technological trends (agentic AI, multimodality, open source, AI robotics) interlock and reinforce one another. Combined with a blank slate, they give Africa an advantage: that of being able to build new systems optimised for AI rather than laboriously grafting AI onto outdated structures.
Reinvent rather than integrate: the leapfrog strategy
One trap nevertheless awaits African countries: simply adding an AI layer to existing processes without rethinking the organisation in depth. To reap the full rewards of artificial intelligence, adopting AI tools is not enough; the processes themselves must be redesigned for the AI era. That means moving from a logic of automating what already exists to a logic of radical transformation (disruption).
In the private sector, this paradigm shift is illustrated by the concept of the "AI full-stack" company. Rather than selling AI solutions to incumbents, entrepreneurs are building new firms in which AI is not a gadget but the heart of the business model. "Instead of selling AI to a law firm, why not launch an AI-driven law firm? Instead of helping coders, why not create a fully automated dev agency?" This recent wave of startups is reinventing entire industries with AI as the backbone, not as a mere additional layer.
Applied to Africa, where many formal sectors remain underdeveloped, this full-stack approach is a golden opportunity: not to digitise the existing bureaucracy, but to imagine public services, education or finance with AI at the centre from the start. For example, instead of laboriously computerising a paper civil registry, some countries are exploring biometric, AI-managed digital identity systems that are more secure and more efficient. Likewise, rather than opening traditional banks everywhere, African fintechs are designing 100% mobile, AI-boosted financial services (AI credit scoring, voice banking chatbots, and so on), bypassing the classic banking model. Africa has already proved with mobile money that it can invent new models (such as M-Pesa in Kenya, which revolutionised payments without going through the "traditional bank" stage). It can repeat the pattern with AI: inventing new ways of doing things in administration, agriculture and health, taking advantage of the fact that there are fewer "dinosaurs" to disturb.
Adopting a leapfrog strategy also means anticipating training and employment. Rather than fearing that AI will destroy jobs, African decision-makers can plan the reskilling of the workforce towards the new occupations AI creates. On a continent with the world's youngest population (median age around 20) and where 70% of Africans will be digital natives by 2030, the human potential is immense. By integrating AI into education (for example through personalised AI tutors from primary school onwards) and into vocational training, Africa can forge a generation of entrepreneurs and workers fluent in AI tools and able to innovate locally. A young population at ease with these technologies can then design novel processes suited to its own context instead of inheriting obsolete templates. In short: do not imitate the Western models of the 20th century, but invent those of the 21st with AI.
The case of Morocco: from industrial production to building intelligent robots
Take Morocco, which illustrates the continent's industrial potential in the AI era and has risen within a few years into the ranks of world leaders in advanced industries such as automotive and aeronautics. In 2024, Moroccan car production reached 500,000 vehicles, up 12% on the previous year, making the Kingdom one of Africa's largest car producers, rivalling even southern Europe. In aeronautics, Morocco has become an indispensable link in the global supply chain: "Every aircraft flying in the world carries at least one part made in Morocco," former Industry Minister Moulay Hafid Elalamy proudly declared. Factories near Casablanca produce critical components (composite structures, engine parts) for Airbus, Boeing and the F-16. This dual capacity to manufacture complex parts and assemble high-tech systems puts the country in an enviable position. And if Morocco can build cars and aircraft, it can build robots. After all, a robot is just a machine combining precision mechanics (know-how already mastered locally) with software intelligence. The missing link is the robot's "brain", the AI, which must now be developed on African soil. The strategic challenge for Morocco, and for Africa in general, is to move from manufacturing to the creation of technological added value. Rather than importing AI systems or foreign-designed software to animate these robots, the aim is to design and train AI models locally, with African realities in mind. An agricultural robot destined for Moroccan fields, for example, would be better served by computer vision trained on local crops and the country's light conditions than by a standard vision stack "Made in Silicon Valley".
Morocco is beginning to invest along these lines: robotics and AI research centres are emerging, and locally trained engineers are collaborating with the diaspora and international partners to develop African solutions. One example is the success of startups such as InstaDeep (founded in Tunisia, with offices in Morocco), which developed world-class AI algorithms acquired by a biotechnology leader, proof that African AI talent exists and can compete globally. By betting on the AI "brain" as much as on the machines' "body", countries like Morocco could not only mass-produce robots but also endow them with intelligence designed on the continent. This is crucial if Africa is to be not merely the workshop of the AI world, but also its laboratory of ideas and innovation.
Finally, the Moroccan momentum reflects a wider phenomenon: from Lagos to Nairobi, an ecosystem of "full-stack" AI startups is emerging, building finished products that integrate hardware and AI software to meet local needs (anti-poaching drones in southern Africa, oil-pipeline inspection robots in Nigeria, and so on). These young companies keep the whole value chain local, from design to assembly via the algorithms. If they are supported (by pro-innovation policies, African investment funds, training programmes), they could become tomorrow's technology champions, making Africa a net exporter of robotics and AI solutions.
An AI-first Africa: an ambitious and credible future
Far from the cliché of the "technological back of the class", Africa could well be the big winner of the artificial-intelligence revolution. The continent combines the conditions for a historic leap forward: a young population hungry for change, immense needs in every sector (and therefore as many opportunities for innovation), fewer legacy systems slowing the adoption of the new, and now easier access to cutting-edge technology (thanks to open source and falling costs). Above all, AI levels certain asymmetries: the literacy barrier fades with voice interfaces, the language barrier recedes with polyglot models, and the expertise barrier shrinks thanks to intelligent agents reachable from a simple mobile phone.
For Africa to become a leader of the AI era, however, several challenges must be met: investing massively in digital infrastructure (data centres, broadband, reliable electricity) to support these new applications; adapting regulatory frameworks to encourage innovation while protecting citizens; and developing skills locally, from user level up to AI designer. Positive signals exist, as witnessed by the proliferation of technology hubs, online AI training programmes, and the recent creation of an African AI Council to harmonise continental strategies.
By redefining its processes and fully embracing emerging technologies, Africa has the opportunity to "crack the code" of AI in its own way. That means finding novel solutions to local problems by drawing on AI, and exporting those solutions to the rest of the world. From a virtual doctor speaking Zulu to solar-powered robots maintained via an AI platform, innovations born in Africa could soon inspire other regions. The age of artificial intelligence is only beginning, and it could crown Africa as its greatest winner, provided the continent dares to adopt an ambitious vision, freed from outdated models and resolutely turned toward the future. #metaglossia_mundus
UNESCO adopted its Universal Declaration on Cultural Diversity
"Wednesday 21 May: World Day for Cultural Diversity for Dialogue and Development
20 May 2025, Jean-Paul Chambrillon
On 2 November 2001, UNESCO adopted its Universal Declaration on Cultural Diversity. For the first time, it recognised cultural diversity as the "common heritage of humanity" and held that safeguarding it is a concrete and ethical imperative, inseparable from respect for human dignity.
Diversely cultivated: Following this, the United Nations General Assembly proclaimed 21 May the "World Day for Cultural Diversity for Dialogue and Development", to deepen our reflection on the values of cultural diversity and help us learn to "live together" better. UNESCO therefore calls on member states and civil society to celebrate the day by involving as many actors and partners as possible!
Objectives: The day is an opportunity to know and appreciate better what we owe to other cultures, and to take the measure of the diversity of their contributions, their uniqueness, their complementarity and their solidarity.
To know and acknowledge our differences, and to respect them as the foundation of our own identity, is to give the coming centuries the chance to flourish at last beyond identity conflicts of every kind. Cultural diversity is a fundamental human right. To fight for its promotion is to fight against cultural stereotypes and fundamentalism.
Public authorities are increasingly alive to the need to develop intercultural dialogue in order to strengthen peace, security and stability worldwide.
Promoting mutual dialogue: By establishing 21 May as the "World Day for Cultural Diversity for Dialogue and Development", the UN set an important course, placing mutual dialogue, across sex, age, nationality, cultural affiliation and religion, at the centre of all efforts to achieve a peaceful world capable of facing the future. "Only dialogue can serve as the basis for a pluralist, multicultural society," says Elsbeth Müller, Secretary General of UNICEF Switzerland; integration, together with mutual understanding between cultures, plays an essential role here.
Site to visit: https://www.un.org/fr/observances/cultural-diversity-day
Source: Journée mondiale" https://www.petiterepublique.com/2025/05/20/mercredi-21-mai-journee-mondiale-de-la-diversite-culturelle-pour-le-dialogue-et-le-developpement/ #metaglossia_mundus
"Le professeur Rafael Schögler a rejoint l’équipe du Département des arts, langues et littératures de l’Université de Sherbrooke cet hiver.
Ses travaux explorent les dynamiques de la traduction et la circulation des savoirs, en mettant l’accent sur les politiques de traduction dans les sciences sociales et humaines. Actuellement, il se penche sur les politiques de traduction dans des contextes d’après-exil afin d’approfondir la compréhension de leurs effets sur les processus de démocratisation en Europa après la Seconde Guerre mondiale. Il participe également à des projets collaboratifs examinant l’impact de la traduction missionnaire sur les cosmovisions autochtones et, avec Christina Korak et Edson Krenak, prépare l’édition d’un numéro spécial sur la traduction avec, pour, et au sein de communautés autochtones. Récemment, il a aussi commencé une réflexion et des recherches empiriques sur les enjeux éthiques de la traduction dans des cadres multilingues et interdisciplinaires.
À la croisée de la traductologie et de la sociologie, Rafael Schögler est activement engagé dans plusieurs instances éditoriales. Il fait partie de l’équipe de rédaction de la revue interdisciplinaire Translation in Society et contribue aux comités scientifiques de Translation and Interpreting Studies (TIS) ainsi que de la revue Chronotopos – A Journal of Translation History."
https://www.usherbrooke.ca/flsh/actualites/nouvelles/details/55649
#metaglossia_mundus
"Presentan el primer diccionario veterinario Inglés-Español con más de 50.000 entradas
Aunque concebido para veterinarios, este diccionario resulta útil para traductores, periodistas científicos, técnicos de seguridad alimentaria, especialistas en medioambiente, bioquímicos, genetistas, farmacólogos, y otros profesionales de disciplinas afines
REDACCIÓN | Martes, 20 de mayo de 2025, 10:10 CET
Mercedes Jaime Sisó, profesora titular del Departamento de Filología Inglesa y Alemana de la Universidad de Zaragoza ha visto culminado estos días uno de los proyectos más ambiciosos de lexicografía aplicada al ámbito científico en lengua española: el Diccionario Inglés-Español de términos veterinarios, una obra única que recopila más de 50.000 entradas organizadas de forma alfabética y con una orientación eminentemente académica y profesional.
Fruto de una dedicación constante, diaria y casi artesanal durante más de treinta años, este diccionario reúne el vocabulario especializado necesario para el estudio, la práctica y la investigación en todas las ramas de la medicina veterinaria y disciplinas afines.
La autora, doctora en Filología Inglesa y docente recién jubilada, ha dedicado su carrera a la enseñanza del inglés científico aplicado a las ciencias veterinarias. Su experiencia directa con estudiantes de grado, máster y doctorado, así como su contacto continuo con profesionales e investigadores, ha sido clave en el desarrollo de este ambicioso proyecto que cubre todas las áreas de especialización veterinaria: desde anatomía, cirugía o parasitología hasta bromatología, acuicultura o producción animal.
Compaginando su labor docente con la investigación y el contacto continuo con especialistas nacionales e internacionales, ha construido una herramienta sin precedentes. Este diccionario no es un glosario, ni una recopilación parcial, sino una obra integral, cuidadosamente estructurada y pensada para resolver las dificultades reales que encuentran quienes necesitan leer, traducir o redactar textos técnicos en este ámbito.
A diferencia de otros glosarios o diccionarios, cada entrada se enmarca en su campo científico específico mediante abreviaturas normalizadas. Además, identifica si el término se aplica a una especie concreta. Incluye información gramatical, variantes geográficas, sinónimos, referencias cruzadas y definiciones cuando es necesario.
La obra se ha publicado en formato impreso en una edición limitada que ha sido repartida en los Colegios Veterinarios españoles, Facultades de Veterinaria y otras instituciones como Institutos Cervantes en países anglosajones o la Real Academia de la Lengua Española y la de las Ciencias Veterinarias. Se espera próximamente poder contar con este diccionario en formato online. No solo es una herramienta de consulta, sino también un legado lingüístico y académico."
https://lnkd.in/gqUQC2Xj
#metaglossia_mundus
Help, the verb "performer" is everywhere!
By Aliénor Vinçotte
OPINION COLUMN - Used in every context and in every field, this word says a lot about our times.
Here is a verb that says a great deal about our times. "Les jeunes veulent bouger plus pour mieux performer" ("Young people want to move more in order to perform better"), "je ne donne plus de coups, j'aide les gens à performer" ("I no longer throw punches, I help people perform"), "je veux aider l'équipe à performer" ("I want to help the team perform"), one reads left and right. Rather than "briller" (shine) or "se surpasser" (surpass oneself), people "perform" in every context. There is no longer any room for slowness or reflection; it is efficiency at all costs! Derived from the English verb "to perform", "performer" is regarded as an anglicism... even though in the 13th century there was a French verb, "parformer", meaning "to execute, accomplish, perfect", as Godefroy's Dictionnaire de l'ancienne langue française recalls.
A word that reveals our times
It came back to us via English, first through the noun "performance", used in sport. Today it applies to "good stock-market performance" or to an "artistic performance". Except that nowadays anyone and everyone "performs", from the ordinary employee to the footballer to the startup making its breakthrough. The word would be merely anecdotal were it not so revealing of the spirit of the age. For to "perform" is not simply to succeed. It is to succeed on every front, efficiently. It is no longer enough to do one's job; one must at all costs hit targets and exceed standards.
The word carries with it a whole imaginary of performance that is quantified, objectified and analysed through slide decks and pivot tables. Above all, the verb is spreading to every domain: we perform at the office, but also in our relationships, at school, at the gym, as new parents... and even on holiday. Everything becomes a pretext for making every movement, every decision pay off. Little room is left for spontaneity or the unexpected. Any excuse will do for a PowerPoint or an Excel sheet.
Behind this would-be modern, trendy verb lies an old obsession: the pursuit of efficiency at any price. Being good is no longer enough; one must be optimal. "Performer" by itself epitomises a society that constantly evaluates itself and seeks above all to produce results. By using the verb, we mean to show our drive or our motivation. But might it not be wearing us all out? Its use perhaps betrays our fear of not measuring up, our anxiety about not being visible, profitable or calibrated enough. In reality, far from spurring us on, the word weighs us down.
What if we stopped "performing" and made room for life, for mistakes, for learning? Let us claim the right to laziness (sometimes), to boredom, to the very existence of imperfection! For even a nap is now part of "productivity". Let us leave the last word to the Sages of the Académie française, for whom one may "accomplir une performance" but not "performer". They advise instead the verbs "exceller", "briller", "réussir", "se surpasser", which are no doubt more human.
By Aliénor Vinçotte
20 mai 2025
https://www.lefigaro.fr/langue-francaise/expressions-francaises/au-secours-le-verbe-performer-est-partout-20250520
#metaglossia_mundus
This paper contends that, in the digital era, the creation of art is no longer an endeavor pursued exclusively by humans to achieve a creative product. Over the past decade, computer-generated theater has emerged and progressed significantly through successive projects. This advancement has incited debate about whether these AI-generated works possess literary merit and originality comparable to human-authored texts. This interdisciplinary study therefore aims to compare an AI-generated play and a human-authored play in terms of originality, fluency, flexibility, and effectiveness. It uses computational methods and NLP tools to process the two plays, analyze both content and language, and derive quantitative measures that support the creativity assessment of the two plays. The results of the content and computational analysis indicate that the human-generated play scores higher on all indexes of creativity. However, the results also suggest that the AI-generated play shows significant creative potential, close to human proficiency on several indexes. Thus, AI is capable of creative literary products, though not as masterful as those produced by creative humans.
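The paper does not reproduce its code here, but a minimal sketch of the kind of quantitative proxies such studies derive with NLP tools might look as follows; the metric definitions and file names are illustrative assumptions, not the authors' actual method.

# Illustrative proxies for creativity indexes: "fluency" as sheer
# volume of output, "flexibility" and "originality" as lexical variety.
import re
from collections import Counter

def creativity_proxies(play_text: str) -> dict:
    tokens = re.findall(r"[a-z']+", play_text.lower())
    counts = Counter(tokens)
    return {
        "fluency": len(tokens),                    # total word count
        "flexibility": len(counts) / len(tokens),  # type-token ratio
        # share of hapax legomena: words the playwright uses exactly once
        "originality": sum(1 for c in counts.values() if c == 1) / len(counts),
    }

# Hypothetical input files holding the two scripts being compared.
human = open("human_play.txt", encoding="utf-8").read()
ai = open("ai_play.txt", encoding="utf-8").read()
print(creativity_proxies(human))
print(creativity_proxies(ai))

Real studies typically go further (embedding-based novelty, coherence scoring), but simple lexical ratios like these are a common first pass.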
By Silvia Elias, Bunder Sebail Alshammari, …Khaled Mostafa Karam
Humanities and Social Sciences Communications volume 12, Article number: 689 (2025)
Published: 20 May 2025
https://www.nature.com/articles/s41599-025-04999-2
#metaglossia_mundus
19 May 2025, 03:16 pm (PT) Jesse Peterson, Author: I have not taught English for a long time. But a few weeks ago, two older students came to me, begging to learn English for travel. I refused: "Well, I won't teach anymore, it's too tiring. Besides, you are too old, there is no way you can study. It's better to stay at home and travel by YouTube on the television!"
They still insisted, telling me to teach them because it would be worth my while. No thanks! I had to play my last card and say honestly: "Your brains are now like T-Rex fossils, harder than reinforced concrete. Learning English right now is mission impossible!"
I was not kidding. In linguistics, the so-called fossilization effect means that as your brain gets older, it hardens like limestone and can no longer be molded. If in the distant past you learned to pronounce a word wrong, it will stay wrong. Especially the final sounds in English, which mark the plural, possession, or the tense of a verb.
For example, they want to say: "Adam's English class is on Tuesdays and Thursdays," but I hear: "Adam Englit cat i o two day an Turday" which sounds like an English cat is doing something for two days.
Or as simple as: "Excuse me, did you get my message?" becomes "Ếch mê, đit du ghét mày massa?" which sounds like the confession of a drunk frog.
Finally, the two older students hugged their British Shorthair cat and left, not wanting to study with me anymore. Fine! It will be the thirteenth month of the New Year before I feel motivated enough to teach again, barring conditions such as the following:
- Toilet paper effect: A Westerner walks into a Vietnamese toilet, discovers there is no paper, and has to try a "bum gun." After a few panic attacks, he adapts and changes his hygiene habits. Learning English is the same: throw yourself into an environment full of native speakers (bum guns). If you can't speak, you're done. 100% effective; side effects... it depends!
Sun Tzu makes the same point in "The Art of War": on "death ground" there is no chance of escape, and an army must fight to survive (learn the language) or perish.
Students thrown into the meat grinder, for example. If the students can survive the flashcards, Super Simple Songs and forced high-fives, they might just have a chance.
- Micro listening effect: This means listening to each syllable clearly, instead of crushing everything like an elephant in a china shop. Some Vietnamese people would call it the Ignore Your Wife, Let's Go Drink effect. It is also about the importance of catching every sound, as in "bring the pets home, dear," where the terrible result is that the cat smiles evilly as the dog is left behind at the pet salon.
- Lonely old man effect: I once met a 60-year-old Canadian CBC journalist who was determined to learn Chinese, an extremely difficult language. The secret? He met a beautiful young Chinese woman, her eyes so sparkling that they could melt the stiffness of a corpse. He was motivated, excited, encouraged, virile, and he studied like crazy! Love! The "elixir" that breaks down fossils. He came back from the dead like a dire wolf.
- Crammed into a box effect: Particularly for children. After a full day of school, including English class and other classes, complicated social bonding, etc., they are drained of dopamine and of the ability to focus on or learn anything at all, and the teacher does not speak a second language (the monkey-see-monkey-no-do effect). But at least the young people stay out of the house, so mom and dad can have some quiet time and feel good about the money spent keeping them busy.
- Hot girl effect: I taught a few famous women known for their beauty to ‘fill their minds’ with a second language, a second culture, and a second way to increase... followers. One of my former students once participated in an international beauty contest. She flipped her silky black hair back, flashed a million-watt smile, and announced: "I've invented the best English learning method in history!... blah blah blah." I nodded vigorously, my eyes turned into hearts, my ears seemed to hear an angel singing, my feet were floating. What method? I don't remember anymore, but who cares when the students are this beautiful! This is an extremely good method of... keeping your teacher motivated.
This effect also relates to:
- Effect of friendship: A celebutante (famous for being famous) friend/student spoke in a rising intonation and handed all her homework back to me. "Teacher, can you please do it for me? I'm sooo busy," she gushed. This method is super effective at lowering a teacher's morale and self-worth.
- Effect of the crabby grump teacher: When I was studying Vietnamese, my teacher threatened me: "If you pronounce it wrong, I will tie your legs together and hang you upside down!" Thanks to that, I speak Vietnamese better than a CBC reporter (this also works with Sun Tzu's "Art of War": "learn or hang upside down").
Take your pick. English centers are springing up like mushrooms, advertising "super-fast learning", "100% success", "speak like a native in three days". But I just want to ask one question: Wha du chill dren du in whe dey go two scoon?
If you understand the sentence above, congratulations, you are ready to... travel by YouTube! Happy trails.
*Jesse Peterson is an author who has published some books in Vietnamese, including "Jesse Cười", "Funny Tragedy: adding color to life"." https://e.vnexpress.net/news/perspectives/handbook-of-language-effects-reasons-why-i-ve-stopped-teaching-english-4878367.html
#metaglossia_mundus
"9th International Conference on Public Service Interpreting and Translation - EXPERIENCE AND TRANSFORMATION IN PSIT, 11th to 13th March 2026 University of Alcalá (Madrid)
Public Service Interpreting and Translation (PSIT) faces unprecedented challenges in an increasingly globalised and interconnected world. The 9th International Conference on Public Service Interpreting and Translation (PSIT9) aims to delve into the complexities of contemporary society, which is marked by crises of all kinds (economic, military, migratory and environmental), technological advances, cultural diversity and ethical considerations, and by their impact on the changing demands of PSIT.
Numerous issues pose complex challenges and require deeper debate, such as using machine translation and generative AI; the need for inclusive language and a gender perspective; or ethical issues that arise when working in high-risk environments such as healthcare, legal or humanitarian settings. In these settings, accuracy, confidentiality, and impartiality are essential, and we must also keep in mind the well-being of the translator, interpreter or intercultural mediator, especially when they are faced with traumatic situations.
PSIT9 seeks to explore the multidimensional challenges that contemporary societies face. The main objective of the PSIT9 Conference is to foster debate on the dynamics, challenges, and advances in AI, machine translation and interpreting, and digital communication. It will also consider how these affect linguistic diversity and accessibility, as well as language communities and policies in our increasingly interconnected world.
Researchers/teachers/professionals/language service providers are invited to submit proposals in English or Spanish that contribute to fostering inter- and transdisciplinary debate on PSIT, language rights, and technology.
Important dates: Submission of proposals: by 15 September 2025 via the conference website.
Notification of the Scientific Committee's decision: 15 November 2025.
Registration: by 20 December 2025 (early registration); by 30 January 2026 (regular registration).
Chapter submissions for the e-book: by 30 October 2026.
Article submissions for the FITISPos-IJ volume (2027): by 30 November 2026.
Further information: tisp9@uah.es https://fitisposgrupo.web.uah.es/psit9-conference/ #metaglossia_mundus